Properties
Basic
ndim (property)
ndim: int
Returns the number of dimensions in the tensor.
t = Tensor([[1, 2], [3, 4]])
print(t.ndim)
2
numel
numel() -> sint
Returns the total number of elements in the tensor.
t = Tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(t.numel())
8
Source code in tinygrad/tensor.py, lines 3643-3652
element_size
element_size() -> int
Returns the size in bytes of an individual element in the tensor.
t = Tensor([5], dtype=dtypes.int16)
print(t.element_size())
2
Source code in tinygrad/tensor.py, lines 3654-3663
nbytes
nbytes() -> int
Returns the total number of bytes of all elements in the tensor.
t = Tensor([8, 9], dtype=dtypes.float)
print(t.nbytes())
8
Source code in tinygrad/tensor.py, lines 3665-3674
is_floating_point
is_floating_point() -> bool
Returns True if the tensor contains floating point types, i.e. is one of dtypes.float64, dtypes.float32, dtypes.float16, dtypes.bfloat16.
t = Tensor([8, 9], dtype=dtypes.float32)
print(t.is_floating_point())
True
Source code in tinygrad/tensor.py, lines 3676-3686
size
Return the size of the tensor. If dim is specified, return the length along dimension dim. Otherwise return the shape of the tensor.
t = Tensor([[4, 5, 6], [7, 8, 9]])
print(t.size())
(2, 3)
print(t.size(dim=1))
3
Source code in tinygrad/tensor.py, lines 3688-3700
Data Access
data
data() -> memoryview
Returns the data of this tensor as a memoryview.
t = Tensor([1, 2, 3, 4])
print(np.frombuffer(t.data(), dtype=np.int32))
[1 2 3 4]
Source code in tinygrad/tensor.py, lines 306-318
item
item() -> ConstType
Returns the value of this tensor as a standard Python number.
t = Tensor(42)
print(t.item())
42
Source code in tinygrad/tensor.py, lines 320-330
tolist
Returns the value of this tensor as a nested list.
t = Tensor([1, 2, 3, 4])
print(t.tolist())
[1, 2, 3, 4]
Source code in tinygrad/tensor.py, lines 334-343
numpy
numpy() -> 'np.ndarray'
Returns the value of this tensor as a numpy.ndarray.
t = Tensor([1, 2, 3, 4])
print(repr(t.numpy()))
array([1, 2, 3, 4], dtype=int32)
Source code in tinygrad/tensor.py, lines 345-358
tinygrad ops
schedule_with_vars
Creates the schedule needed to realize these Tensor(s), with Variables.
Note: A Tensor can only be scheduled once.
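A minimal sketch of calling it (assuming, from the name and the schedule method below, that it returns the schedule together with the bound Variable values):
a = Tensor([1.0, 2.0]) + Tensor([3.0, 4.0])
sched, var_vals = a.schedule_with_vars()  # ScheduleItems plus the Variable bindings needed to run them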
Source code in tinygrad/tensor.py, lines 233-252
schedule
schedule(*lst: Tensor) -> list[ScheduleItem]
Creates the schedule needed to realize these Tensor(s).
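For example (a sketch; the resulting items are normally consumed by the engine rather than inspected directly):
a = Tensor([1.0, 2.0]) + Tensor([3.0, 4.0])
sched = a.schedule()  # the ScheduleItems that must run before a can be realized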
Source code in tinygrad/tensor.py, lines 254-258
realize
Triggers the computation needed to create these Tensor(s).
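A small sketch showing that computation is deferred until realize is called:
a = Tensor([1.0, 2.0]) + Tensor([3.0, 4.0])  # nothing has run yet
a.realize()                                  # kernels are compiled and executed here
print(a.tolist())
[4.0, 6.0]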
Source code in tinygrad/tensor.py, lines 260-263
replace
Replaces the data of this tensor with the data of another tensor. Only the shape of the tensors must match.
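An illustrative sketch (the replacement has the same shape, so its values are simply swapped in):
t = Tensor([1, 2, 3])
t.replace(Tensor([4, 5, 6]))
print(t.tolist())
[4, 5, 6]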
Source code in tinygrad/tensor.py, lines 265-272
assign
assign(x) -> Tensor
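No docstring is given here. As a hedged sketch, assign is typically used to write a new value into an already-realized tensor's buffer, as in the optimizer-style update below (the exact in-place semantics depend on the device and whether the tensor is realized):
w = Tensor([1.0, 2.0, 3.0]).realize()
w.assign(w - 1.0)  # queue an update of w's existing buffer
w.realize()
print(w.tolist())
[0.0, 1.0, 2.0]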
Source code in tinygrad/tensor.py, lines 274-289
detach
detach() -> Tensor
Returns a new tensor with the same data as this tensor, but detached from the autograd graph.
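A short sketch: the detached tensor shares values but does not track gradients.
t = Tensor([2.0, 3.0], requires_grad=True)
d = t.detach()  # same data, but gradients will not flow back to t through d
print(d.requires_grad, d.tolist())
False [2.0, 3.0]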
Source code in tinygrad/tensor.py, lines 291-295
to
Moves the tensor to the given device.
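A sketch using Device.DEFAULT as the target (assuming "from tinygrad import Device" is in scope; concrete device names such as 'CLANG' vary by install):
from tinygrad import Device
t = Tensor([1, 2, 3])
t2 = t.to(Device.DEFAULT)  # a tensor on the requested device (a copy if the device differs)
print(t2.device == Device.DEFAULT)
True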
Source code in tinygrad/tensor.py, lines 368-377
to_
Moves the tensor to the given device in place.
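The in-place variant, as a sketch (again assuming Device is imported):
from tinygrad import Device
t = Tensor([1, 2, 3])
t.to_(Device.DEFAULT)  # moves t itself onto the target device
print(t.device == Device.DEFAULT)
True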
Source code in tinygrad/tensor.py, lines 379-385
shard
Shards the tensor across the given devices. Optionally specify which axis to shard on.
t = Tensor.empty(2, 4)
print(t.shard((t.device, t.device), axis=1).lazydata)
UOp(Ops.MULTI, dtypes.float, arg=(1, (True, True)), src=(
UOp(Ops.CONTIGUOUS, dtypes.float, arg=None, src=(
UOp(Ops.COPY, dtypes.float, arg=False, src=(
x2:=UOp(Ops.DEVICE, dtypes.void, arg='CLANG', src=()),
UOp(Ops.CONTIGUOUS, dtypes.float, arg=None, src=(
UOp(Ops.SHRINK, dtypes.float, arg=((0, 2), (0, 2)), src=(
x5:=UOp(Ops.VIEW, dtypes.float, arg=ShapeTracker(views=(View(shape=(2, 4), strides=(4, 1), offset=0, mask=None, contiguous=True),)), src=(
UOp(Ops.BUFFER, dtypes.float, arg=(4326, 8), src=(
x2,)),)),)),)),)),)),
UOp(Ops.CONTIGUOUS, dtypes.float, arg=None, src=(
UOp(Ops.COPY, dtypes.float, arg=False, src=(
x2,
UOp(Ops.CONTIGUOUS, dtypes.float, arg=None, src=(
UOp(Ops.SHRINK, dtypes.float, arg=((0, 2), (2, 4)), src=(
x5,)),)),)),)),))
Source code in tinygrad/tensor.py, lines 387-399
shard_
Shards the tensor across the given devices in place.
Source code in tinygrad/tensor.py, lines 401-405
contiguous
contiguous()
Returns a contiguous tensor.
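A sketch of where this matters: movement ops like permute produce views, and contiguous forces them into their own densely laid-out buffer.
t = Tensor([[1, 2], [3, 4]]).permute(1, 0)
c = t.contiguous()  # materialize the permuted view
print(c.tolist())
[[1, 3], [2, 4]]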
Source code in tinygrad/tensor.py, lines 2489-2493
contiguous_backward
contiguous_backward()
Inserts a contiguous operation in the backward pass.
Source code in tinygrad/tensor.py, lines 2494-2498
Gradient
gradient
gradient(*targets: Tensor, gradient: Optional[Tensor] = None, materialize_grads=False) -> list[Tensor]
Compute the gradient of the targets with respect to self.
x = Tensor.eye(3)
y = Tensor([[2.0,0,-2.0]])
z = y.matmul(x).sum()
dx, dy = z.gradient(x, y)
print(dx.tolist()) # dz/dx
print(dy.tolist()) # dz/dy
[[2.0, 2.0, 2.0], [0.0, 0.0, 0.0], [-2.0, -2.0, -2.0]]
[[1.0, 1.0, 1.0]]
Source code in tinygrad/tensor.py, lines 886-913
backward
Propagates the gradient of a tensor backwards through the computation graph. If the 'gradient' argument is not provided, the tensor must be a scalar, and the gradient is implicitly set to 1.0.
t = Tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
t.sum().backward()
print(t.grad.numpy())
[1. 1. 1. 1.]
Source code in tinygrad/tensor.py, lines 915-932