Properties
Basic
ndim
property
ndim: int
Returns the number of dimensions in the tensor.
t = Tensor([[1, 2], [3, 4]])
print(t.ndim)
2
numel
numel() -> sint
Returns the total number of elements in the tensor.
t = Tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(t.numel())
8
Source code in tinygrad/tensor.py, lines 3515-3524
element_size
element_size() -> int
Returns the size in bytes of an individual element in the tensor.
t = Tensor([5], dtype=dtypes.int16)
print(t.element_size())
2
Source code in tinygrad/tensor.py, lines 3526-3535
nbytes
nbytes() -> int
Returns the total number of bytes of all elements in the tensor.
t = Tensor([8, 9], dtype=dtypes.float)
print(t.nbytes())
8
Source code in tinygrad/tensor.py, lines 3537-3546
is_floating_point
is_floating_point() -> bool
Returns True if the tensor contains floating point types, i.e. is one of dtype.float64, dtype.float32, dtype.float16, dtype.bfloat16.
t = Tensor([8, 9], dtype=dtypes.float32)
print(t.is_floating_point())
True
Source code in tinygrad/tensor.py, lines 3548-3558
size
Return the size of the tensor. If dim is specified, return the length along dimension dim. Otherwise return the shape of the tensor.
t = Tensor([[4, 5, 6], [7, 8, 9]])
print(t.size())
(2, 3)
print(t.size(dim=1))
3
Source code in tinygrad/tensor.py, lines 3560-3572
Data Access
data
data() -> memoryview
Returns the data of this tensor as a memoryview.
t = Tensor([1, 2, 3, 4])
print(np.frombuffer(t.data(), dtype=np.int32))
[1 2 3 4]
Source code in tinygrad/tensor.py, lines 272-284
item
item() -> ConstType
Returns the value of this tensor as a standard Python number.
t = Tensor(42)
print(t.item())
42
Source code in tinygrad/tensor.py, lines 286-296
tolist
Returns the value of this tensor as a nested list.
t = Tensor([1, 2, 3, 4])
print(t.tolist())
[1, 2, 3, 4]
Source code in tinygrad/tensor.py, lines 300-309
numpy
numpy() -> 'np.ndarray'
Returns the value of this tensor as a numpy.ndarray.
t = Tensor([1, 2, 3, 4])
print(repr(t.numpy()))
array([1, 2, 3, 4], dtype=int32)
Source code in tinygrad/tensor.py, lines 311-324
tinygrad ops
schedule_with_vars
Creates the schedule needed to realize these Tensor(s), with Variables.
Note: A Tensor can only be scheduled once.
Source code in tinygrad/tensor.py, lines 209-216
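As a rough sketch (the return shape is an assumption based on the method name, not stated above), the result can be unpacked into a schedule and a mapping of Variable values:
a = Tensor([1, 2, 3]) + 1
sched, var_vals = a.schedule_with_vars()
print(isinstance(sched, list), isinstance(var_vals, dict))
True True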
schedule
schedule(*lst: Tensor) -> List[ScheduleItem]
Creates the schedule needed to realize these Tensor(s).
Source code in tinygrad/tensor.py, lines 218-222
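A minimal sketch of scheduling a small computation; the exact number of schedule items depends on the graph, so only the container type is checked here:
a = Tensor([1.0, 2.0]) + 1
sched = a.schedule()
print(isinstance(sched, list))
True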
realize
Triggers the computation needed to create these Tensor(s).
Source code in tinygrad/tensor.py, lines 224-227
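A short example: realizing forces the lazy computation to run, after which the values can be read back.
t = (Tensor([1.0, 2.0]) + 1).realize()
print(t.numpy())
[2. 3.]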
replace
Replaces the data of this tensor with the data of another tensor. Only the shapes of the two tensors need to match.
Source code in tinygrad/tensor.py, lines 229-237
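A minimal sketch, replacing the contents of a tensor with another tensor of the same shape (and, in this sketch, the same dtype):
t = Tensor([1, 2, 3])
t.replace(Tensor([4, 5, 6]))
print(t.tolist())
[4, 5, 6]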
assign
assign(x) -> Tensor
Source code in tinygrad/tensor.py, lines 239-256
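The entry above gives no description, so the following is only a sketch of one common use (an assumption, not taken from the docs): writing new values into an already realized tensor, as an optimizer step would.
w = Tensor([1.0, 2.0, 3.0]).contiguous().realize()
w.assign(w - 0.5)
print(w.numpy())
[0.5 1.5 2.5]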
detach
detach() -> Tensor
Returns a new tensor with the same data as this tensor, but detached from the autograd graph.
Source code in tinygrad/tensor.py, lines 258-262
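A small example: the detached tensor keeps the same values but no longer participates in autograd.
a = Tensor([1.0, 2.0], requires_grad=True)
b = a.detach()
print(b.requires_grad)
False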
to
Moves the tensor to the given device.
Source code in tinygrad/tensor.py, lines 335-345
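A minimal sketch; moving a tensor to its own device is a no-op, but it shows the call. A concrete device string such as "CPU" or "CUDA" (availability depends on your setup) would normally go there.
t = Tensor([1, 2, 3])
t2 = t.to(t.device)
print(t2.device == t.device)
True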
to_
Moves the tensor to the given device in place.
Source code in tinygrad/tensor.py, lines 347-354
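The in-place variant mutates the tensor itself; a tiny sketch using the current device again so the example stays device-independent:
t = Tensor([1, 2, 3])
d = t.device
t.to_(d)
print(t.device == d)
True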
shard
shard(
devices: Tuple[str, ...],
axis: Optional[int] = None,
splits: Optional[Tuple[int, ...]] = None,
) -> Tensor
Shards the tensor across the given devices. Optionally specify which axis to shard on, and how to split it across devices.
t = Tensor.empty(2, 3)
print(t.shard((t.device, t.device), axis=1, splits=(2, 1)).lazydata)
<MLB self.axis=1 self.real=[True, True]
CLANG ShapeTracker(views=(View(shape=(2, 2), strides=(2, 1), offset=0, mask=None, contiguous=True),))
CLANG ShapeTracker(views=(View(shape=(2, 1), strides=(1, 0), offset=0, mask=None, contiguous=True),))>
Source code in tinygrad/tensor.py, lines 356-377
shard_
shard_(
devices: Tuple[str, ...],
axis: Optional[int] = None,
splits: Optional[Tuple[int, ...]] = None,
)
Shards the tensor across the given devices in place.
Source code in tinygrad/tensor.py, lines 379-384
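A brief sketch mirroring the shard example above; after sharding in place, the tensor's device becomes a tuple of devices:
t = Tensor.empty(2, 2)
t.shard_((t.device, t.device), axis=0)
print(isinstance(t.device, tuple))
True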
contiguous
contiguous()
Returns a contiguous tensor.
Source code in tinygrad/tensor.py, lines 2368-2372
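A small example: a permuted view is not laid out contiguously, and contiguous() materializes it into a fresh buffer with the expected values.
t = Tensor([[1, 2], [3, 4]]).permute(1, 0).contiguous()
print(t.numpy())
[[1 3]
 [2 4]]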
contiguous_backward
contiguous_backward()
Inserts a contiguous operation in the backward pass.
Source code in tinygrad/tensor.py, lines 2373-2377
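Forward values are unchanged; the operation only affects the backward graph, so this sketch can only show where the call goes. The gradients come out the same as without it.
x = Tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x * 2).contiguous_backward().sum()
y.backward()
print(x.grad.numpy())
[2. 2. 2.]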
backward
Propagates the gradient of a tensor backwards through the computation graph. If the 'gradient' argument is not provided, the tensor must be a scalar and the gradient is implicitly set to 1.0. If 'retain_graph' is False, the graph used to compute the gradients is freed; otherwise it is kept, which can increase memory usage.
t = Tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
t.sum().backward()
print(t.grad.numpy())
[1. 1. 1. 1.]
Source code in tinygrad/tensor.py, lines 881-913