Properties
Basic
ndim (property)
ndim: int
Returns the number of dimensions in the tensor.
from tinygrad import Tensor
t = Tensor([[1, 2], [3, 4]])
print(t.ndim)
2
numel
numel() -> sint
Returns the total number of elements in the tensor.
t = Tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(t.numel())
8
Source code in tinygrad/mixin/movement.py, lines 40-49
element_size
element_size() -> int
Returns the size in bytes of an individual element in the tensor.
from tinygrad import dtypes
t = Tensor([5], dtype=dtypes.int16)
print(t.element_size())
2
Source code in tinygrad/tensor.py, lines 3815-3824
nbytes
nbytes() -> int
Returns the total number of bytes of all elements in the tensor.
t = Tensor([8, 9], dtype=dtypes.float)
print(t.nbytes())
8
Source code in tinygrad/tensor.py, lines 3826-3835
is_floating_point
is_floating_point() -> bool
Returns True if the tensor contains floating point types, i.e. is one of dtypes.float64, dtypes.float32,
dtypes.float16, dtypes.bfloat16.
t = Tensor([8, 9], dtype=dtypes.float32)
print(t.is_floating_point())
True
Source code in tinygrad/tensor.py, lines 3837-3847
size
Returns the size of the tensor. If dim is specified, return the length along dimension dim. Otherwise return the shape of the tensor.
t = Tensor([[4, 5, 6], [7, 8, 9]])
print(t.size())
(2, 3)
print(t.size(dim=1))
3
Source code in tinygrad/tensor.py, lines 3849-3861
Data Access
data
data() -> memoryview
Returns the data of this tensor as a memoryview.
t = Tensor([1, 2, 3, 4])
import numpy as np
print(np.frombuffer(t.data(), dtype=np.int32))
[1 2 3 4]
Source code in tinygrad/tensor.py, lines 343-356
item
item() -> PyConst
Returns the value of this tensor as a standard Python number.
t = Tensor(42)
print(t.item())
42
Source code in tinygrad/tensor.py, lines 358-368
tolist
Returns the value of this tensor as a nested list. Returns a single value for a scalar (0-dimensional) tensor.
t = Tensor([1, 2, 3, 4])
print(t.tolist())
[1, 2, 3, 4]
t = Tensor(5)
print(t.tolist())
5
Source code in tinygrad/tensor.py, lines 371-387
numpy
numpy() -> 'numpy.ndarray'
Returns the value of this tensor as a numpy.ndarray.
t = Tensor([1, 2, 3, 4])
print(repr(t.numpy()))
array([1, 2, 3, 4], dtype=int32)
Source code in tinygrad/tensor.py, lines 389-402
tinygrad ops
schedule_with_vars
Creates the schedule needed to realize these Tensor(s), with Variables.
Note: A Tensor can only be scheduled once.
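For example, one can inspect the schedule and the Variable bindings before running anything (a minimal sketch; no symbolic Variables are involved here, so the dict is empty):
a, b = Tensor([1.0, 2.0]), Tensor([3.0, 4.0])
sched, var_vals = (a + b).schedule_with_vars()
print(len(sched), var_vals)  # number of ScheduleItems, and the (empty) Variable map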
Source code in tinygrad/tensor.py, lines 253-264
schedule
Creates the schedule needed to realize these Tensor(s).
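For example (a small sketch), counting how many ScheduleItems a computation needs:
out = Tensor([1.0, 2.0]) + Tensor([3.0, 4.0])
print(len(out.schedule()))  # the items realize would execute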
Source code in tinygrad/tensor.py, lines 266-270
realize
Triggers the computation needed to create these Tensor(s).
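tinygrad is lazy, so nothing runs until realize (or a data access such as tolist) forces it. A minimal sketch:
t = Tensor([1.0, 2.0]) * 2  # nothing has been computed yet
t.realize()  # the multiply runs here
print(t.tolist())
[2.0, 4.0]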
Source code in tinygrad/tensor.py, lines 272-293
replace
Replaces the data of this tensor with the data of another tensor. Only the shapes of the two tensors must match.
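A minimal sketch, assuming matching shapes:
t = Tensor([1, 2, 3, 4])
t.replace(Tensor([5, 6, 7, 8]))
print(t.tolist())
[5, 6, 7, 8]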
Source code in tinygrad/tensor.py, lines 295-302
assign
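assign is commonly used to write new values into an already-realized tensor's buffer, e.g. for in-place weight updates in optimizers. A minimal sketch:
w = Tensor([1.0, 2.0, 3.0]).realize()
w.assign(w - 1.0)  # the result is written back into w's existing buffer
print(w.tolist())
[0.0, 1.0, 2.0]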
Source code in tinygrad/tensor.py, lines 304-325
detach
detach() -> Tensor
Returns a new tensor with the same data as this tensor, but detached from the autograd graph.
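A small sketch: the detached tensor acts as a constant, so no gradient flows through it.
t = Tensor([1.0, 2.0], requires_grad=True)
(t.detach() * t).sum().backward()  # only the undetached factor is differentiated
print(t.grad.tolist())
[1.0, 2.0]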
Source code in tinygrad/tensor.py, lines 327-331
clone
clone() -> Tensor
Creates a clone of this tensor allocating a separate buffer for the data.
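A minimal sketch: the clone holds the same values in a buffer of its own.
t = Tensor([1, 2, 3]).realize()
print(t.clone().tolist())
[1, 2, 3]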
Source code in tinygrad/tensor.py, lines 404-410
to
Moves the tensor to the given device.
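A minimal sketch, using "CPU" (any device name available in your build works):
t = Tensor([1, 2, 3])
print(t.to("CPU").device)
CPU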
Source code in tinygrad/tensor.py, lines 412-421
to_
Moves the tensor to the given device in place.
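Same as to, but mutating; a minimal sketch:
t = Tensor([1, 2, 3])
t.to_("CPU")
print(t.device)
CPU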
Source code in tinygrad/tensor.py, lines 423-429
shard
Shards the tensor across the given devices. Optionally specify which axis to shard on.
t = Tensor.empty(2, 4)
print(t.shard((t.device, t.device), axis=1).uop)
UOp(Ops.MULTI, dtypes.float, arg=1, src=(
  UOp(Ops.SHRINK, dtypes.float, arg=None, src=(
    UOp(Ops.COPY, dtypes.float, arg=None, src=(
      UOp(Ops.RESHAPE, dtypes.float, arg=None, src=(
        UOp(Ops.BUFFER, dtypes.float, arg=8, src=(
          UOp(Ops.UNIQUE, dtypes.void, arg=1516, src=()),
          UOp(Ops.DEVICE, dtypes.void, arg='CPU', src=()),)),
        UOp(Ops.VCONST, dtypes.index.vec(2), arg=(2, 4), src=()),)),
      UOp(Ops.DEVICE, dtypes.void, arg=('CPU', 'CPU'), src=()),)),
    UOp(Ops.VECTORIZE, dtypes.index.vec(2), arg=None, src=(
      UOp(Ops.CONST, dtypes.index, arg=0, src=()),
      x10:=UOp(Ops.MUL, dtypes.index, arg=None, src=(
        UOp(Ops.DEFINE_VAR, dtypes.index, arg=('_device_num', 0, 1), src=()),
        x12:=UOp(Ops.CONST, dtypes.index, arg=2, src=()),)),)),
    UOp(Ops.VECTORIZE, dtypes.index.vec(2), arg=None, src=(
      x12,
      UOp(Ops.ADD, dtypes.index, arg=None, src=(
        x10,
        x12,)),)),)),))
Source code in tinygrad/tensor.py, lines 431-444
shard_
Shards the tensor across the given devices in place.
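A minimal sketch, mirroring the shard example above:
t = Tensor.empty(2, 4)
t.shard_((t.device, t.device), axis=1)
print(t.device)  # a tuple of devices, e.g. ('CPU', 'CPU')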
Source code in tinygrad/tensor.py, lines 446-450
contiguous
contiguous(*args, **kwargs) -> Tensor
Returns a contiguous tensor.
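Movement ops such as permute only change the view; contiguous forces the data into a contiguous buffer. A small sketch:
t = Tensor([[1, 2], [3, 4]]).permute(1, 0).contiguous()
print(t.tolist())
[[1, 3], [2, 4]]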
Source code in tinygrad/tensor.py, lines 2836-2840
contiguous_backward
contiguous_backward() -> Tensor
Inserts a contiguous operation in the backward pass.
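A minimal sketch: the forward value is unchanged, but the incoming gradient is made contiguous during backprop.
t = Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
t.permute(1, 0).contiguous_backward().sum().backward()
print(t.grad.tolist())
[[1.0, 1.0], [1.0, 1.0]]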
Source code in tinygrad/tensor.py, lines 2842-2846
Gradient
gradient
gradient(*targets: Tensor, gradient: Tensor | None = None, materialize_grads=False) -> list[Tensor]
Computes the gradient of the targets with respect to self.
x = Tensor.eye(3)
y = Tensor([[2.0,0,-2.0]])
z = y.matmul(x).sum()
dx, dy = z.gradient(x, y)
print(dx.tolist()) # dz/dx
print(dy.tolist()) # dz/dy
[[2.0, 2.0, 2.0], [0.0, 0.0, 0.0], [-2.0, -2.0, -2.0]]
[[1.0, 1.0, 1.0]]
Source code in tinygrad/tensor.py, lines 1025-1051
backward
Propagates the gradient of a tensor backwards through the computation graph. If the 'gradient' argument is not provided, the tensor must be a scalar, and the gradient is implicitly set to 1.0.
t = Tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
t.sum().backward()
print(t.grad.numpy())
[1. 1. 1. 1.]
Source code in tinygrad/tensor.py, lines 1053-1071