Intro

The tinygrad framework has four pieces:

  • a PyTorch-like frontend.
  • a scheduler which breaks the compute into kernels.
  • a lowering engine which converts ASTs into code that can run on the accelerator.
  • an execution engine which can run that code.

There is a good set of tutorials by Di Zhu that go over tinygrad internals.
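
As a quick orientation, here is a minimal sketch that exercises all four pieces; building the expression uses the frontend, and realize() kicks off scheduling, lowering, and execution:

from tinygrad import Tensor

a = Tensor([1.0, 2.0, 3.0])
b = Tensor([4.0, 5.0, 6.0])
c = (a * b).sum()   # frontend: builds a lazy graph, nothing is computed yet
c.realize()         # scheduler, lowering engine, and execution engine all run here
print(c.numpy())    # 32.0

Running the same script with DEBUG=2 in the environment prints one line per kernel as it executes, which is a handy way to see what the scheduler produced.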

Frontend¤

Everything in Tensor is syntactic sugar around function.py, where the forward and backward passes are implemented for the different functions. There are about 25 of them, implemented using about 20 basic ops. Those basic ops go on to construct a graph of:

LazyBuffer ¤

LazyBuffer(
    device: str,
    st: ShapeTracker,
    dtype: DType,
    op: Optional[Ops] = None,
    arg: Any = None,
    srcs: Tuple[LazyBuffer, ...] = (),
    base: Optional[LazyBuffer] = None,
    metadata: Optional[Metadata] = None,
)

Bases: MathTrait

The LazyBuffer graph specifies the compute in terms of low-level tinygrad ops. Not all LazyBuffers will actually become realized. There are two types of LazyBuffers: base and view. A base holds compute that writes into a contiguous buffer, and a view is a view of a base (specified by a ShapeTracker). Inputs to a base can be either bases or views; the input to a view can only be a single base.
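
As an illustrative sketch of the base/view split using only public Tensor methods (nothing is computed until realization):

from tinygrad import Tensor

a = Tensor.rand(4, 4)
b = a + 1            # elementwise add: backed by a base LazyBuffer (compute into a contiguous buffer)
c = b.permute(1, 0)  # a movement op: backed by a view LazyBuffer whose ShapeTracker reindexes b's base
c.realize()          # only now does the scheduler see this graph and emit kernels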

Scheduling¤

The scheduler converts the graph of LazyBuffers into a list of ScheduleItems. One ScheduleItem is one kernel on the GPU, and the scheduler is responsible for breaking the large compute graph into subgraphs that can fit in a kernel. ast specifies what compute to run, and bufs specifies what buffers to run it on.

ScheduleItem dataclass ¤

ScheduleItem(
    ast: UOp,
    bufs: Tuple[Buffer, ...],
    metadata: Tuple[Metadata, ...],
    assign_preloads: FrozenSet[UOp],
)

Attributes:

  • inputs (Tuple[Buffer, ...]) –

    Read only buffers in the schedule.

  • outputs (Tuple[Buffer, ...]) –

    Read/write or write only buffers in the schedule.

inputs property ¤

inputs: Tuple[Buffer, ...]

Read only buffers in the schedule.

outputs property ¤

outputs: Tuple[Buffer, ...]

Read/write or write only buffers in the schedule.
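
As a sketch of what the scheduler produces, you can ask a Tensor for its schedule and inspect the items; this assumes the Tensor.schedule method, which returns the ScheduleItems needed to realize that Tensor:

from tinygrad import Tensor

a, b = Tensor.rand(16, 16), Tensor.rand(16, 16)
out = (a @ b).relu()
for si in out.schedule():      # one ScheduleItem per kernel
  print(si.ast.op, f"{len(si.bufs)} buffers")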

Lowering¤

The code in realize lowers each ScheduleItem to an ExecItem with

lower_schedule ¤

lower_schedule(
    schedule: List[ScheduleItem],
) -> Generator[ExecItem, None, None]
Source code in tinygrad/engine/realize.py
def lower_schedule(schedule:List[ScheduleItem]) -> Generator[ExecItem, None, None]:
  while len(schedule):
    si = schedule.pop(0)
    try: yield lower_schedule_item(si)
    except Exception as e:
      if DEBUG >= 2:
        print(f"error lowering {si.ast.op}")
        print("tensor operations:")
        pprint.pprint(si.metadata, indent=2)
      raise e

There's a ton of complexity hidden behind this; see the codegen/ directory.

First we lower the AST to UOps, a linear list of the compute to be run. This is where the BEAM search happens.

Then we render the UOps into code with a Renderer and compile that code to a binary with a Compiler.
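
A hedged sketch of driving this by hand, using lower_schedule from tinygrad/engine/realize.py and the (assumed) Tensor.schedule method from above:

from tinygrad import Tensor
from tinygrad.engine.realize import lower_schedule

out = (Tensor.rand(8, 8) + 1).sum()
schedule = out.schedule()            # list of ScheduleItems
for ei in lower_schedule(schedule):  # each ScheduleItem is lowered to an ExecItem
  print(ei.prg.display_name)         # the Runner that the Renderer/Compiler produced for this kernel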

Execution¤

Lowering produces an ExecItem, which has a run method.

ExecItem dataclass ¤

ExecItem(
    prg: Runner,
    bufs: List[Optional[Buffer]],
    metadata: Optional[Tuple[Metadata, ...]] = None,
)

Methods:

  • run

Attributes:

  • bufs (List[Optional[Buffer]])
  • metadata (Optional[Tuple[Metadata, ...]])
  • prg (Runner)

bufs instance-attribute ¤

bufs: List[Optional[Buffer]]

metadata class-attribute instance-attribute ¤

metadata: Optional[Tuple[Metadata, ...]] = None

prg instance-attribute ¤

prg: Runner

run ¤

run(
    _var_vals: Optional[Dict[Variable, int]] = None,
    wait=False,
    jit=False,
    do_update_stats=True,
) -> Optional[float]
Source code in tinygrad/engine/realize.py
def run(self, _var_vals:Optional[Dict[Variable, int]]=None, wait=False, jit=False, do_update_stats=True) -> Optional[float]:
  var_vals = {} if _var_vals is None else _var_vals
  bufs = [cast(Buffer, x) for x in self.bufs] if jit else [cast(Buffer, x).ensure_allocated() for x in self.bufs]
  et = self.prg(bufs, var_vals, wait=wait or DEBUG >= 2)
  if do_update_stats:
    GlobalCounters.kernel_count += 1
    GlobalCounters.global_ops += (op_est:=sym_infer(self.prg.op_estimate, var_vals))
    GlobalCounters.global_mem += (mem_est:=sym_infer(self.prg.mem_estimate, var_vals))
    if et is not None: GlobalCounters.time_sum_s += et
    if DEBUG >= 2:
      lds_est = sym_infer(self.prg.lds_estimate, var_vals)
      mem_est = min(mem_est, lds_est)   # there can't be more memory accessed than loads/stores. remove this when symbolic is fixed
      ptm = (colored(f"{et*1e3:9.2f}ms", "yellow") if et > 0.01 else f"{et*1e6:9.2f}us") if et is not None else ""
      print(f"{colored(f'*** {self.prg.device[:7]:7s} {GlobalCounters.kernel_count:4d}', 'magenta' if jit else ('green' if self.prg.first_run else None))} {self.prg.display_name+' '*(41-ansilen(self.prg.display_name))} arg {len(bufs):2d} mem {GlobalCounters.mem_used/1e9:5.2f} GB " +  # noqa: E501
            (str() if et is None else f"tm {ptm}/{GlobalCounters.time_sum_s*1e3:9.2f}ms ({op_est/((et or 1e-20)*1e9):9.2f} GFLOPS {mem_est/((et or 1e-20)*1e9):6.1f}|{lds_est/((et or 1e-20)*1e9):<7.1f} GB/s)" +  # noqa: E501
             f" {[repr(m) if TRACEMETA >= 2 else str(m) for m in self.metadata] if self.metadata else ''}"))
    self.prg.first_run = False
  return et
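
A small sketch of executing lowered items and using the returned timing; per the signature above, run returns the elapsed time in seconds when wait=True (and may return None if the runner doesn't report one):

from tinygrad import Tensor
from tinygrad.engine.realize import lower_schedule

out = (Tensor.rand(256, 256) @ Tensor.rand(256, 256)).sum()
total = 0.0
for ei in lower_schedule(out.schedule()):  # assumed Tensor.schedule, as above
  et = ei.run(wait=True)                   # run the kernel, waiting so we get a timing back
  if et is not None: total += et
print(f"kernels took {total*1e3:.2f} ms")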

Lists of ExecItems can be condensed into a single ExecItem with the Graph API (rename to Queue?).

Runtime¤

Runtimes are responsible for device-specific interactions. They handle tasks such as initializing devices, allocating memory, loading/launching programs, and more. You can find more information about the runtimes API on the runtime overview page.

All runtime implementations can be found in the runtime directory.
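
As a hedged sketch of what "allocating memory" looks like at this layer, here is direct use of Buffer, the device-memory handle that ScheduleItem and ExecItem carry; the exact names (Device.DEFAULT, the Buffer methods) are version-dependent:

from tinygrad import Device, dtypes
from tinygrad.device import Buffer

print(Device.DEFAULT)                                        # whichever backend tinygrad picked for this machine
buf = Buffer(Device.DEFAULT, 4, dtypes.float32).allocate()   # ask the runtime's allocator for 4 floats
buf.copyin(memoryview(bytearray(16)))                        # upload 16 bytes of zeros through the runtime
print(bytes(buf.as_buffer()))                                # read the raw bytes back (all zeros)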

HCQ Compatible Runtimes¤

The HCQ API is a lower-level API for defining runtimes. Interaction with HCQ-compatible devices happens at a lower level, with commands issued directly to hardware queues. Examples of such backends are NV and AMD, which are userspace drivers for NVIDIA and AMD devices respectively. You can find more information about the API on the HCQ overview page.