Design Notes — Conversation Summary

RESM // Minimal Computation

Ownership, register-based execution, indirect addressing, and many-core scaling

Author: Dario Cangialosi
Interlocutor: Claude (Anthropic)
Language: Italian → English
Date: 2026

01 What is RESM?

RESM is a minimal computation model designed as a substrate for language research, compiler targets, and hardware experimentation. Its core is a transport-triggered OISC: a single COPY(from, to) instruction, combined with conditional branching via IP_CASE_0 / IP_CASE_1.

The circuit operates in three clock phases: fetch (IP → ROM), execute (ROM → RAM copy), and branch (read a 1-bit condition, select the next IP). The entire design fits in 256 B of ROM + 256 B of RAM with a handful of registers and a MUX on the instruction pointer: deliberately Turing-complete in a minimal footprint.
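As a rough illustration (not the actual circuit), the three-phase loop can be sketched in a few lines of Python. The instruction encoding and the source of the 1-bit condition are assumptions made for this sketch; the source text does not specify them:

```python
# Toy sketch of a RESM-style COPY machine. Each ROM word is a tuple
# (src, dst, ip_case_0, ip_case_1): copy RAM[src] -> RAM[dst], then
# branch on a 1-bit condition. Here the condition is assumed to be
# bit 0 of the value just written; the real condition source may differ.

def run(rom, ram, steps=100):
    ip = 0
    for _ in range(steps):
        if ip >= len(rom):                # halt when IP runs off the ROM
            break
        src, dst, ip0, ip1 = rom[ip]      # phase 1: fetch (IP -> ROM)
        ram[dst] = ram[src]               # phase 2: execute (the single COPY)
        cond = ram[dst] & 1               # phase 3: read 1-bit condition...
        ip = ip1 if cond else ip0         # ...and MUX the next IP
    return ram
```

A one-instruction program that copies cell 0 to cell 1 and then halts (by branching past the end of ROM) is enough to exercise all three phases.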

RESM connects to Kolmogorov complexity and the Hutter Prize: program length is always measured relative to a reference machine, and a minimal substrate keeps that machine's constant overhead as small as possible, so program length more directly reflects the complexity of the computation itself. RESM aims to be that substrate.

02 Ownership Types on RESM

Rust's ownership model is not a runtime mechanism — it is a static affine type system applied to the control-flow graph. Every value has exactly one owner; the compiler inserts the drop automatically when the owner goes out of scope (or is moved away). No GC, no reference counting.

RESM, being a flat explicit IR, is a natural target for the same analysis. Ownership on RESM means:

Concept    Rust                           RESM equivalent
Owner      variable binding               register / memory cell
Move       variable consumed              register invalidated after COPY
Drop       compiler-inserted destructor   FREE inserted at last-use point
Borrow     &T / &mut T                    ILOAD_BORROW mode (read-only indirection)
Lifetime   'a annotations                 static frame ranges, compile-time

The key insight: liveness analysis is ownership-lite. Knowing the last use of every value is sufficient to insert FREE automatically and reuse slots — without any runtime overhead whatsoever.
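The insight can be sketched directly. In this sketch, instructions are simplified to (dst, src) COPY pairs over named cells, and FREE is a hypothetical pseudo-op standing in for "this slot is reusable":

```python
# Hedged sketch of "liveness analysis is ownership-lite": find the last
# use of each cell as a source, and emit a FREE pseudo-op right after it.

def insert_frees(instrs):
    # last_use[cell] = index of the final instruction that reads it
    last_use = {src: i for i, (_, src) in enumerate(instrs)}
    out = []
    for i, (dst, src) in enumerate(instrs):
        out.append(("COPY", dst, src))
        if last_use[src] == i:
            out.append(("FREE", src))   # src is dead from here on; slot reusable
    return out
```

No runtime bookkeeping is involved: the FREEs are decided entirely at compile time from the instruction list.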

03 Register-Based, No Value Stack

Stack-based VMs (JVM, CPython) use an implicit operand stack. Register-based VMs (Lua 5, Dalvik, LuaJIT) use explicit register numbers — fewer instructions, better liveness visibility. RESM follows the register-based model.

A full call stack is not necessary. With statically allocated frames (each function owns a fixed range of cells, determined at compile time by ownership analysis), data never needs to be pushed or popped. The only remaining need is storing return addresses — and that is handled by a side-stack opaque to RESM:

; RESM sees only:
CALL  label    ; push PC to side-stack, JMP label
RET            ; pop side-stack, JMP

; data lives in statically-assigned cell ranges,
; never pushed or popped at runtime

The value stack disappears entirely. The call stack shrinks to a narrow return-address LIFO, external to the RESM data model.
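A minimal sketch of static frame assignment, assuming each function's slot count is already known from ownership analysis. This naive layout deliberately skips the obvious refinement of letting functions that are never simultaneously live share a range:

```python
# Hedged sketch: assign each function a fixed, compile-time cell range.
# Calls then move no data; only a return address goes on the side-stack.

def allocate_frames(funcs):
    """funcs: {name: slots_needed} -> {name: (base_cell, size)}"""
    frames, base = {}, 0
    for name, size in funcs.items():
        frames[name] = (base, size)
        base += size
    return frames
```

With such a layout, every operand address inside a function body is a compile-time constant, which is exactly what lets the core avoid indirect addressing.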

04 Indirect Addressing — When Is It Needed?

Indirect addressing is required only when an address is not known at compile time. With static frame allocation and ownership analysis, this covers a surprisingly large class of programs:

Situation                                 Indirect needed?
Static frames per function                ✗ no
Return-address side-stack                 ✗ no (opaque to RESM)
Tail-call-only recursion                  ✗ no (becomes JMP)
Dynamic data structures (lists, trees)    ✓ yes
Variable-depth recursion                  ✓ yes

The design decision: keep RESM core without indirect addressing, and add it as a clean orthogonal extension when genuinely needed — a MUX on the address bus, one extra bit in the instruction word, nothing more. The core remains untouched.
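The extension can be modeled as one extra mode bit per operand. The function names below are illustrative, not part of RESM:

```python
# Hedged sketch: indirect addressing as a MUX on the address bus.
# mode bit 0: the operand IS the address; mode bit 1: the operand
# names a cell that HOLDS the address.

def resolve(ram, operand, indirect):
    return ram[operand] if indirect else operand

def copy(ram, src, dst, src_ind=0, dst_ind=0):
    ram[resolve(ram, dst, dst_ind)] = ram[resolve(ram, src, src_ind)]
```

With both mode bits at 0 this degenerates to the plain core COPY, which is what makes the extension strictly additive.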

05 Extension Layers

RESM core             — COPY + branch, 3-phase, 256B ROM / 256B RAM invariant
+ side-stack          — LIFO for return addresses, opaque to the RESM data model (adds a MUX on SELECT_IP)
+ indirect addressing — ILOAD / ISTORE via a register-held address (adds a MUX on the address bus)
+ ownership analysis  — static liveness, zero-cost FREE, frame reuse (compile-time only)
+ many-core           — N instances, channel-based communication (adds FIFOs between cores)

Each layer is strictly additive — no layer modifies the semantics of those below it. This is the orthogonal extension principle: the core stays valid and minimal; complexity is opt-in and localized.

06 Many-Core Scaling

A minimal core occupies very little silicon area. A modern x86 core takes on the order of several mm² (with full dies in the 100–200 mm² range), while a RESM core could plausibly fit in a small fraction of 1 mm². The same die area could therefore host tens or hundreds of RESM cores.

This is the same reasoning behind the Transputer (INMOS, 1980s), XMOS xCore, and dataflow architectures: trade individual-core complexity for core count and explicit parallelism.

The key enabler: ownership as data-race freedom.
If the compiler statically proves that two values are never accessed by the same owner at the same time, those values can be assigned to different cores with no synchronization required — no locks, no mutexes, no race conditions. Correctness is guaranteed by construction at compile time.

The optimizing compiler (Pangea → RESM multicore) performs:

Pass                       Purpose
Dependency analysis        identify independent instruction groups
Partitioning               assign groups to separate cores
Scheduling                 minimize inter-core wait time
Communication insertion    automatically place COPY/channel ops at boundaries
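The first two passes can be sketched as grouping COPYs into connected components of a conflict graph. The conflict test here is a simplification invented for the sketch (two instructions conflict when one writes a cell the other reads or writes):

```python
# Hedged sketch of dependency analysis + partitioning: instructions in
# different connected components touch disjoint cells, so they can run
# on separate cores with no locks and no synchronization.

def conflicts(a, b):
    (dst_a, src_a), (dst_b, src_b) = a, b
    return dst_a in (dst_b, src_b) or dst_b in (dst_a, src_a)

def partition(instrs):
    groups = []
    for ins in instrs:
        hit = [g for g in groups if any(conflicts(ins, o) for o in g)]
        merged = []
        for g in hit:            # merge every group this instruction
            merged += g          # depends on, preserving program order
            groups.remove(g)
        merged.append(ins)
        groups.append(merged)
    return groups
```

Two dependent COPYs land in one group; an unrelated COPY gets its own group, i.e. its own core.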

07 Connection to Pangea / pang

Pangea (also called pang, for Polish-Notation Language) is a minimalist prefix-notation language targeting RESM as its backend. The compilation pipeline envisioned is:

Pangea / pang  (prefix notation, minimal syntax)
      ↓
  ownership analysis  (affine types, liveness)
      ↓
  RESM IR  (register-based, static frames)
      ↓
  partitioning + scheduling
      ↓
  RESM multicore binary  (correct-by-construction parallel)

The source language, the IR, and the target hardware are co-designed for minimalism: each layer's simplicity makes the next layer's analysis easier and more powerful.
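As a toy illustration of why prefix notation flattens naturally into a register-style IR (the operator set and IR shape below are invented for the sketch and are not Pangea's actual definitions):

```python
# Hedged sketch: a prefix expression compiles to linear register ops by a
# single recursive walk, with no operand stack at runtime. Each operator
# result gets a fresh register name.

def compile_prefix(tokens, ops={"+", "*"}):
    def expr(i):
        t = tokens[i]
        if t in ops:
            lhs, i = expr(i + 1)          # prefix: operator first,
            rhs, i = expr(i)              # then both operands in order
            reg = f"r{len(code)}"
            code.append((t, reg, lhs, rhs))
            return reg, i
        return t, i + 1                   # leaf: a named cell
    code = []
    expr(0)
    return code
```

For example, `+ a * b c` flattens to the multiply first, then the add, with registers standing in for statically assigned cells.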