Properties
Basic
ndim
property
ndim: int
Returns the number of dimensions in the tensor.
t = Tensor([[1, 2], [3, 4]])
print(t.ndim)
2
numel
numel() -> sint
Returns the total number of elements in the tensor.
t = Tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(t.numel())
8
element_size
element_size() -> int
Returns the size in bytes of an individual element in the tensor.
t = Tensor([5], dtype=dtypes.int16)
print(t.element_size())
2
nbytes
nbytes() -> int
Returns the total number of bytes of all elements in the tensor.
t = Tensor([8, 9], dtype=dtypes.float)
print(t.nbytes())
8
is_floating_point
is_floating_point() -> bool
Returns True if the tensor contains floating point types, i.e. is one of dtypes.float64, dtypes.float32, dtypes.float16, dtypes.bfloat16.
t = Tensor([8, 9], dtype=dtypes.float32)
print(t.is_floating_point())
True
size
Return the size of the tensor. If dim is specified, return the length along dimension dim. Otherwise return the shape of the tensor.
t = Tensor([[4, 5, 6], [7, 8, 9]])
print(t.size())
(2, 3)
print(t.size(dim=1))
3
Data Access
data
data() -> memoryview
Returns the data of this tensor as a memoryview.
t = Tensor([1, 2, 3, 4])
print(np.frombuffer(t.data(), dtype=np.int32))
[1 2 3 4]
item
item() -> ConstType
Returns the value of this tensor as a standard Python number.
t = Tensor(42)
print(t.item())
42
tolist
Returns the value of this tensor as a nested list.
t = Tensor([1, 2, 3, 4])
print(t.tolist())
[1, 2, 3, 4]
numpy
numpy() -> 'np.ndarray'
Returns the value of this tensor as a numpy.ndarray.
t = Tensor([1, 2, 3, 4])
print(repr(t.numpy()))
array([1, 2, 3, 4], dtype=int32)
tinygrad ops
schedule_with_vars
Creates the schedule needed to realize these Tensor(s), with Variables.
Note
A Tensor can only be scheduled once.
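A sketch of unpacking the two return values (the tensors here are illustrative):
a, b = Tensor([1, 2]), Tensor([3, 4])
sched, var_vals = (a + b).schedule_with_vars()
# sched is the list of ScheduleItems; var_vals maps each Variable to its bound value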
schedule
schedule(*lst: Tensor) -> list[ScheduleItem]
Creates the schedule needed to realize these Tensor(s).
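A minimal usage sketch:
out = Tensor([1, 2]) + Tensor([3, 4])
sched = out.schedule()  # the ScheduleItems to run; nothing has been computed yet
print(type(sched))
<class 'list'>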
realize
Triggers the computation needed to create these Tensor(s).
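A minimal sketch:
t = Tensor([1, 2]) + Tensor([3, 4])  # lazy; no kernel has run yet
t.realize()  # executes the pending computation
print(t.tolist())
[4, 6]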
replace
Replaces the data of this tensor with the data of another tensor. Only the shapes of the tensors need to match.
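A minimal sketch:
t = Tensor([1, 2, 3])
t.replace(Tensor([4, 5, 6]))  # shapes match, so the data is swapped in
print(t.tolist())
[4, 5, 6]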
assign
assign(x) -> Tensor
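A common use is an in-place update of the tensor's buffer, e.g. an optimizer weight step; a minimal sketch:
w = Tensor([1.0, 2.0, 3.0]).realize()
w.assign(w * 2)  # queues an update that writes back into w's buffer
w.realize()
print(w.tolist())
[2.0, 4.0, 6.0]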
detach
detach() -> Tensor
Returns a new tensor with the same data as this tensor, but detached from the autograd graph.
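For example, the detached copy does not track gradients:
t = Tensor([1.0, 2.0], requires_grad=True)
d = t.detach()
print(d.requires_grad)
False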
to
Moves the tensor to the given device.
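A minimal sketch (the device string depends on the backends available; CLANG is illustrative):
t = Tensor([1, 2, 3])
t2 = t.to("CLANG")
print(t2.device)
CLANG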
to_
Moves the tensor to the given device in place.
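The in-place counterpart of to; a minimal sketch (device string again illustrative):
t = Tensor([1, 2, 3])
t.to_("CLANG")  # mutates t rather than returning a moved copy
print(t.device)
CLANG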
shard
shard(
devices: tuple[str, ...],
axis: Optional[int] = None,
splits: Optional[tuple[int, ...]] = None,
) -> Tensor
Shards the tensor across the given devices. Optionally specify which axis to shard on, and how to split it across devices.
t = Tensor.empty(2, 3)
print(t.shard((t.device, t.device), axis=1, splits=(2, 1)).lazydata)
<MLB self.axis=1 self.real=[True, True]
CLANG ShapeTracker(views=(View(shape=(2, 2), strides=(2, 1), offset=0, mask=None, contiguous=True),))
CLANG ShapeTracker(views=(View(shape=(2, 1), strides=(1, 0), offset=0, mask=None, contiguous=True),))>
shard_
shard_(
devices: tuple[str, ...],
axis: Optional[int] = None,
splits: Optional[tuple[int, ...]] = None,
)
Shards the tensor across the given devices in place.
contiguous
contiguous()
Returns a contiguous tensor.
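For example, forcing a permuted view to be materialized:
t = Tensor([[1, 2], [3, 4]]).permute(1, 0)  # just a view; no data movement yet
c = t.contiguous()  # materializes the data in the permuted layout
print(c.tolist())
[[1, 3], [2, 4]]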
contiguous_backward
contiguous_backward()
Inserts a contiguous operation in the backward pass.
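A sketch of where it sits in a graph: the forward value is unchanged, but the gradient flowing back through y is made contiguous:
t = Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
y = t.permute(1, 0).contiguous_backward()
y.sum().backward()  # the incoming gradient is made contiguous at y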
Gradient
gradient
Compute the gradient of the targets with respect to self.
x = Tensor.eye(3)
y = Tensor([[2.0, 0, -2.0]])
z = y.matmul(x).sum()
dx, dy = z.gradient(x, y)
print(dx.tolist()) # dz/dx
print(dy.tolist()) # dz/dy
[[2.0, 2.0, 2.0], [0.0, 0.0, 0.0], [-2.0, -2.0, -2.0]]
[[1.0, 1.0, 1.0]]
backward
Propagates the gradient of a tensor backwards through the computation graph. If the gradient argument is not provided, the tensor must be a scalar, and the gradient is implicitly set to 1.0. If retain_graph is False, the graph used to compute the gradients is freed; otherwise it is kept, which can increase memory usage.
t = Tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
t.sum().backward()
print(t.grad.numpy())
[1. 1. 1. 1.]