Properties
Basic
ndim
property
ndim: int
Returns the number of dimensions in the tensor.
t = Tensor([[1, 2], [3, 4]])
print(t.ndim)
2
numel
numel() -> sint
Returns the total number of elements in the tensor.
t = Tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(t.numel())
8
element_size
element_size() -> int
Returns the size in bytes of an individual element in the tensor.
t = Tensor([5], dtype=dtypes.int16)
print(t.element_size())
2
nbytes
nbytes() -> int
Returns the total number of bytes of all elements in the tensor.
t = Tensor([8, 9], dtype=dtypes.float)
print(t.nbytes())
8
is_floating_point
is_floating_point() -> bool
Returns True if the tensor contains floating point types, i.e. is one of dtypes.float64, dtypes.float32, dtypes.float16, dtypes.bfloat16.
t = Tensor([8, 9], dtype=dtypes.float32)
print(t.is_floating_point())
True
size
Return the size of the tensor. If dim is specified, return the length along dimension dim. Otherwise return the shape of the tensor.
t = Tensor([[4, 5, 6], [7, 8, 9]])
print(t.size())
(2, 3)
print(t.size(dim=1))
3
Data Access
data
data() -> memoryview
Returns the data of this tensor as a memoryview.
t = Tensor([1, 2, 3, 4])
print(np.frombuffer(t.data(), dtype=np.int32))
[1 2 3 4]
item
item() -> ConstType
Returns the value of this tensor as a standard Python number.
t = Tensor(42)
print(t.item())
42
tolist
Returns the value of this tensor as a nested list.
t = Tensor([1, 2, 3, 4])
print(t.tolist())
[1, 2, 3, 4]
numpy
numpy() -> 'np.ndarray'
Returns the value of this tensor as a numpy.ndarray.
t = Tensor([1, 2, 3, 4])
print(repr(t.numpy()))
array([1, 2, 3, 4], dtype=int32)
tinygrad ops
schedule_with_vars
Creates the schedule needed to realize these Tensor(s), with Variables.
Note
A Tensor can only be scheduled once.
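A minimal sketch of typical usage, assuming the return value pairs the schedule with a dict of Variable bindings:
a = Tensor([1.0, 2.0]) + Tensor([3.0, 4.0])
sched, var_vals = a.schedule_with_vars()
print(type(sched) is list, type(var_vals) is dict)
True True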
schedule
schedule(*lst: Tensor) -> list[ScheduleItem]
Creates the schedule needed to realize these Tensor(s).
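A minimal sketch; the exact number of ScheduleItems depends on the graph being realized:
sched = (Tensor([1.0, 2.0]) + 1).schedule()
print(len(sched) >= 1)
True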
realize
Triggers the computation needed to create these Tensor(s).
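For example, forcing the computation to run eagerly instead of at first data access:
t = Tensor([1.0, 2.0]) * 2
t.realize()  # the multiply is computed here
print(t.tolist())
[2.0, 4.0]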
replace
Replaces the data of this tensor with the data of another tensor. Only the shape of the tensors must match.
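A minimal example with two tensors of matching shape:
t = Tensor([1, 2, 3])
t.replace(Tensor([4, 5, 6]))
print(t.tolist())
[4, 5, 6]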
assign
assign(x) -> Tensor
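A minimal sketch of the common pattern, assuming assign writes x into this tensor's already-realized buffer (as in an optimizer's weight update):
w = Tensor([1.0, 2.0]).contiguous().realize()
w.assign(w + 1)  # reuses w's buffer rather than allocating a new one
print(w.tolist())
[2.0, 3.0]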
detach
detach() -> Tensor
Returns a new tensor with the same data as this tensor, but detached from the autograd graph.
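For example:
t = Tensor([1.0, 2.0], requires_grad=True)
print(t.detach().requires_grad)
False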
to
Moves the tensor to the given device.
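A minimal example; the device name is backend-dependent (CLANG is assumed here, matching the shard example below):
t = Tensor([1, 2, 3])
print(t.to("CLANG").device)
CLANG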
to_
Moves the tensor to the given device in place.
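The same move as to, but mutating; the CLANG device is again an assumption:
t = Tensor([1, 2, 3])
t.to_("CLANG")
print(t.device)
CLANG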
shard
Shards the tensor across the given devices. Optionally specify which axis to shard on.
t = Tensor.empty(2, 4)
print(t.shard((t.device, t.device), axis=1).lazydata)
<MLB self.axis=1 self.real=[True, True]
CLANG ShapeTracker(views=(View(shape=(2, 2), strides=(2, 1), offset=0, mask=None, contiguous=True),))
CLANG ShapeTracker(views=(View(shape=(2, 2), strides=(2, 1), offset=0, mask=None, contiguous=True),))>
shard_
Shards the tensor across the given devices in place.
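A minimal sketch mirroring the shard example above:
t = Tensor.empty(2, 4)
t.shard_((t.device, t.device), axis=1)  # t itself is now sharded across both devices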
contiguous
contiguous()
Returns a contiguous tensor.
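For example, materializing a permuted view into its own freshly laid-out buffer:
t = Tensor([[1, 2], [3, 4]]).permute(1, 0)
print(t.contiguous().tolist())
[[1, 3], [2, 4]]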
contiguous_backward
contiguous_backward()
Inserts a contiguous operation in the backward pass.
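A minimal sketch (the shapes are arbitrary); the contiguous is applied to the gradient flowing back through the permute:
t = Tensor.randn(2, 3, requires_grad=True)
t.permute(1, 0).contiguous_backward().sum().backward()
print(t.grad.shape)
(2, 3)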
Gradient
gradient
Compute the gradient of the targets with respect to self.
x = Tensor.eye(3)
y = Tensor([[2.0,0,-2.0]])
z = y.matmul(x).sum()
dx, dy = z.gradient(x, y)
print(dx.tolist()) # dz/dx
print(dy.tolist()) # dz/dy
[[2.0, 2.0, 2.0], [0.0, 0.0, 0.0], [-2.0, -2.0, -2.0]]
[[1.0, 1.0, 1.0]]
backward
Propagates the gradient of a tensor backwards through the computation graph. If the 'gradient' argument is not provided, the tensor must be a scalar, and the gradient is implicitly set to 1.0. If 'retain_graph' is false, the graph used to compute the grads will be freed. Otherwise, it will be kept. Keeping it can increase memory usage.
t = Tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
t.sum().backward()
print(t.grad.numpy())
[1. 1. 1. 1.]