Properties
Basic
ndim
property
ndim: int
Returns the number of dimensions in the tensor.
t = Tensor([[1, 2], [3, 4]])
print(t.ndim)
2
numel
numel() -> sint
Returns the total number of elements in the tensor.
t = Tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(t.numel())
8
Source code in tinygrad/tensor.py, lines 3433-3442
element_size
element_size() -> int
Returns the size in bytes of an individual element in the tensor.
t = Tensor([5], dtype=dtypes.int16)
print(t.element_size())
2
Source code in tinygrad/tensor.py, lines 3444-3453
nbytes
nbytes() -> int
Returns the total number of bytes of all elements in the tensor.
t = Tensor([8, 9], dtype=dtypes.float)
print(t.nbytes())
8
Source code in tinygrad/tensor.py, lines 3455-3464
is_floating_point
is_floating_point() -> bool
Returns True if the tensor contains floating point types, i.e. is one of dtype.float64, dtype.float32, dtype.float16, dtype.bfloat16.
t = Tensor([8, 9], dtype=dtypes.float32)
print(t.is_floating_point())
True
Source code in tinygrad/tensor.py, lines 3466-3476
size
Return the size of the tensor. If dim is specified, return the length along dimension dim. Otherwise return the shape of the tensor.
t = Tensor([[4, 5, 6], [7, 8, 9]])
print(t.size())
(2, 3)
print(t.size(dim=1))
3
Source code in tinygrad/tensor.py, lines 3478-3490
Data Access
data
data() -> memoryview
Returns the data of this tensor as a memoryview.
t = Tensor([1, 2, 3, 4])
print(np.frombuffer(t.data(), dtype=np.int32))
[1 2 3 4]
Source code in tinygrad/tensor.py, lines 265-276
item
item() -> ConstType
Returns the value of this tensor as a standard Python number.
t = Tensor(42)
print(t.item())
42
Source code in tinygrad/tensor.py, lines 278-289
tolist
Returns the value of this tensor as a nested list.
t = Tensor([1, 2, 3, 4])
print(t.tolist())
[1, 2, 3, 4]
Source code in tinygrad/tensor.py, lines 293-302
numpy
numpy() -> 'np.ndarray'
Returns the value of this tensor as a numpy.ndarray.
t = Tensor([1, 2, 3, 4])
print(repr(t.numpy()))
array([1, 2, 3, 4], dtype=int32)
Source code in tinygrad/tensor.py, lines 304-317
tinygrad ops
schedule_with_vars
Creates the schedule needed to realize these Tensor(s), with Variables.
Note: A Tensor can only be scheduled once.
Source code in tinygrad/tensor.py, lines 202-209
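No example is given in the docstring; here is a minimal sketch, assuming the return value is a (schedule, variable values) pair as the name suggests:
from tinygrad import Tensor
t = (Tensor([1.0, 2.0, 3.0]) + 1).sum()      # build a lazy computation; nothing runs yet
schedule, var_vals = t.schedule_with_vars()  # ScheduleItems plus values for any bound Variables
print(len(schedule) > 0)
True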
schedule
schedule(*lst: Tensor) -> List[ScheduleItem]
Creates the schedule needed to realize these Tensor(s).
Source code in tinygrad/tensor.py, lines 211-215
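For instance (a sketch; the exact number of ScheduleItems depends on how the operations get fused):
from tinygrad import Tensor
t = (Tensor([1.0, 2.0, 3.0]) * 2).sum()  # lazy expression
sched = t.schedule()                     # the list of kernels needed to realize t
print(len(sched) > 0)
True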
realize
Triggers the computation needed to create these Tensor(s).
Source code in tinygrad/tensor.py, lines 217-220
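For example:
from tinygrad import Tensor
t = Tensor([1.0, 2.0, 3.0]) * 2  # lazy: nothing has been computed yet
t.realize()                      # runs the scheduled kernels and materializes the buffer
print(t.numpy())
[2. 4. 6.]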
replace
Replaces the data of this tensor with the data of another tensor. Only the shape of the tensors must match.
Source code in tinygrad/tensor.py, lines 222-230
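A small sketch of the shape requirement:
from tinygrad import Tensor
t = Tensor([1, 2, 3])
t.replace(Tensor([4, 5, 6]))  # same shape, so the data is swapped in
print(t.tolist())
[4, 5, 6]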
assign
assign(x) -> Tensor
Source code in tinygrad/tensor.py, lines 232-249
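The docstring gives no summary here; in practice (for example in tinygrad's optimizers) assign is used to write new values into an already-realized buffer in place. A minimal sketch:
from tinygrad import Tensor
w = Tensor([1.0, 2.0, 3.0]).realize()  # e.g. a weight buffer
w.assign(w + 1)                        # write the new values back into the same buffer
print(w.numpy())
[2. 3. 4.]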
detach
detach() -> Tensor
Returns a new tensor with the same data as this tensor, but detached from the autograd graph.
Source code in tinygrad/tensor.py, lines 251-255
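For example:
from tinygrad import Tensor
t = Tensor([2.0, 3.0], requires_grad=True)
d = t.detach()          # same values, but gradients will not flow back to t through d
print(d.requires_grad)
False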
to
Moves the tensor to the given device.
Source code in tinygrad/tensor.py, lines 328-338
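A minimal sketch, using Device.DEFAULT so it runs anywhere (a concrete backend name such as "CUDA" works the same way):
from tinygrad import Tensor, Device
t = Tensor([1, 2, 3])
t2 = t.to(Device.DEFAULT)  # returns a copy of the tensor on the target device
print(t2.device)           # e.g. CLANG or METAL, depending on your machine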
to_
Moves the tensor to the given device in place.
Source code in tinygrad/tensor.py, lines 340-347
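The in-place variant mutates the tensor itself (a sketch under the same assumptions as the to example above):
from tinygrad import Tensor, Device
t = Tensor([1, 2, 3])
t.to_(Device.DEFAULT)  # t itself now lives on the target device
print(t.device)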
shard
shard(
devices: Tuple[str, ...],
axis: Optional[int] = None,
splits: Optional[Tuple[int, ...]] = None,
) -> Tensor
Shards the tensor across the given devices. Optionally specify which axis to shard on, and how to split it across devices.
t = Tensor.empty(2, 3)
print(t.shard((t.device, t.device), axis=1, splits=(2, 1)).lazydata)
<MLB self.axis=1 self.real=[True, True]
CLANG ShapeTracker(views=(View(shape=(2, 2), strides=(2, 1), offset=0, mask=None, contiguous=True),))
CLANG ShapeTracker(views=(View(shape=(2, 1), strides=(1, 0), offset=0, mask=None, contiguous=True),))>
Source code in tinygrad/tensor.py, lines 349-370
shard_
shard_(
devices: Tuple[str, ...],
axis: Optional[int] = None,
splits: Optional[Tuple[int, ...]] = None,
)
Shards the tensor across the given devices in place.
Source code in tinygrad/tensor.py, lines 372-377
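Like shard, but mutating the tensor in place. A sketch that reuses the same device twice so it runs on a single machine:
from tinygrad import Tensor
t = Tensor.empty(2, 4)
t.shard_((t.device, t.device), axis=1)  # t is now a multi-device tensor split along axis 1
print(t.device)                         # a tuple of the two devices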
contiguous
contiguous()
Returns a contiguous tensor.
Source code in tinygrad/tensor.py, lines 2332-2336
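For example, a permuted view can be forced into a contiguous buffer:
from tinygrad import Tensor
t = Tensor([[1, 2], [3, 4]]).permute(1, 0)  # a strided view of the data
c = t.contiguous()                          # forces a copy into contiguous memory
print(c.numpy())
[[1 3]
 [2 4]]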
contiguous_backward
contiguous_backward()
Inserts a contiguous operation in the backward pass.
Source code in tinygrad/tensor.py, lines 2337-2341
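A sketch of where it sits in a graph: the forward values are unchanged, and only the gradient flowing back through this point is forced to be contiguous:
from tinygrad import Tensor
t = Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
y = t.permute(1, 0).contiguous_backward().sum()  # the gradient arriving here will be made contiguous
y.backward()
print(t.grad.numpy())
[[1. 1.]
 [1. 1.]]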
backward
Propagates the gradient of a tensor backwards through the computation graph. If the 'gradient' argument is not provided, the tensor must be a scalar, and the gradient is implicitly set to 1.0. If 'retain_graph' is false, the graph used to compute the grads will be freed. Otherwise, it will be kept. Keeping it can increase memory usage.
t = Tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
t.sum().backward()
print(t.grad.numpy())
[1. 1. 1. 1.]
Source code in tinygrad/tensor.py, lines 874-906