Complex Ops
Reduce
sum
sum(
axis: int | Sequence[int] | None = None,
keepdim=False,
dtype: DTypeLike | None = None,
) -> Tensor
Returns the sum of the elements of the tensor along the specified axis or axes.
You can pass in axis and keepdim keyword arguments to control the axis along which the sum is computed and whether the reduced dimensions are retained.
You can pass in the dtype keyword argument to control the data type of the accumulation. If not specified, the accumulation data type is chosen based on the input tensor's data type.
t = Tensor.arange(6).reshape(2, 3)
print(t.numpy())
[[0 1 2]
[3 4 5]]
print(t.sum().numpy())
15
print(t.sum(axis=0).numpy())
[3 5 7]
print(t.sum(axis=1).numpy())
[ 3 12]
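A short sketch of the keepdim and dtype keywords described above, reusing t from the example (dtypes is imported from tinygrad):
from tinygrad import dtypes
print(t.sum(axis=0, keepdim=True).numpy())  # [[3 5 7]] -- the reduced axis is kept with size 1
print(t.sum(dtype=dtypes.float32).numpy())  # accumulates in float32, giving 15.0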
prod
prod(
axis: int | Sequence[int] | None = None,
keepdim=False,
dtype: DTypeLike | None = None,
) -> Tensor
Returns the product of the elements of the tensor along the specified axis or axes.
You can pass in axis and keepdim keyword arguments to control the axis along which the product is computed and whether the reduced dimensions are retained.
You can pass in the dtype keyword argument to control the data type of the accumulation. If not specified, the accumulation data type is chosen based on the input tensor's data type.
t = Tensor([-1, -2, -3, 1, 2, 3]).reshape(2, 3)
print(t.numpy())
[[-1 -2 -3]
[ 1 2 3]]
print(t.prod().numpy())
-36
print(t.prod(axis=0).numpy())
[-1 -4 -9]
print(t.prod(axis=1).numpy())
[-6 6]
max
Returns the maximum value of the tensor along the specified axis or axes.
You can pass in axis and keepdim keyword arguments to control the axis along which the maximum is computed and whether the reduced dimensions are retained.
t = Tensor([[1, 0, 2], [5, 4, 3]])
print(t.numpy())
[[1 0 2]
[5 4 3]]
print(t.max().numpy())
5
print(t.max(axis=0).numpy())
[5 4 3]
print(t.max(axis=1, keepdim=True).numpy())
[[2]
[5]]
min
Returns the minimum value of the tensor along the specified axis or axes.
You can pass in axis and keepdim keyword arguments to control the axis along which the minimum is computed and whether the reduced dimensions are retained.
t = Tensor([[1, 0, 2], [5, 4, 3]])
print(t.numpy())
[[1 0 2]
[5 4 3]]
print(t.min().numpy())
0
print(t.min(axis=0).numpy())
[1 0 2]
print(t.min(axis=1, keepdim=True).numpy())
[[0]
[3]]
any
Tests if any element evaluates to True along the specified axis or axes.
You can pass in axis and keepdim keyword arguments to control the reduce axis and whether the reduced dimensions are retained.
t = Tensor([[True, True], [True, False], [False, False]])
print(t.numpy())
[[ True True]
[ True False]
[False False]]
print(t.any().numpy())
True
print(t.any(axis=0).numpy())
[ True True]
print(t.any(axis=1, keepdim=True).numpy())
[[ True]
[ True]
[False]]
all
Tests if all elements evaluate to True along the specified axis or axes.
You can pass in axis and keepdim keyword arguments to control the reduce axis and whether the reduced dimensions are retained.
t = Tensor([[True, True], [True, False], [False, False]])
print(t.numpy())
[[ True True]
[ True False]
[False False]]
print(t.all().numpy())
False
print(t.all(axis=0).numpy())
[False False]
print(t.all(axis=1, keepdim=True).numpy())
[[ True]
[False]
[False]]
isclose
Returns a new tensor with element-wise comparison of closeness to other within a tolerance.
The rtol and atol keyword arguments control the relative and absolute tolerance of the comparison.
By default, two NaN values are not close to each other. If equal_nan is True, two NaN values are considered close.
print(Tensor([1e-7, 1e-8, 1e-9, float('nan')]).isclose(Tensor([0.0, 0.0, 0.0, float('nan')])).numpy())
[False True True False]
print(Tensor([float('nan')]).isclose(Tensor([float('nan')]), equal_nan=True).numpy())
[ True]
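A small sketch of the rtol and atol keywords (values chosen for illustration): loosening atol marks nearby values as close.
a, b = Tensor([1.0, 2.0]), Tensor([1.05, 2.0])
print(a.isclose(b).numpy())            # [False  True] with the default tolerances
print(a.isclose(b, atol=0.1).numpy())  # [ True  True] once atol covers the 0.05 gap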
mean
Returns the mean value of the tensor along the specified axis or axes.
You can pass in axis and keepdim keyword arguments to control the axis along which the mean is computed and whether the reduced dimensions are retained.
Tensor.manual_seed(42)
t = Tensor.normal(2, 3, mean=2.5, std=0.5)
print(t.numpy())
[[2.9889 2.7339 2.7763]
[2.3356 2.0722 2.6376]]
print(t.mean().numpy())
2.5907671
print(t.mean(axis=0).numpy())
[2.6623 2.4031 2.707 ]
print(t.mean(axis=1).numpy())
[2.833 2.3485]
var
Returns the variance of the tensor along the specified axis or axes.
You can pass in axis, keepdim, and correction keyword arguments to control the axis along which the variance is computed, whether the reduced dimensions are retained, and the Bessel's correction applied.
Tensor.manual_seed(42)
t = Tensor.normal(2, 3, mean=2.5, std=0.5)
print(t.numpy())
[[2.9889 2.7339 2.7763]
[2.3356 2.0722 2.6376]]
print(t.var().numpy())
0.10992539
print(t.var(axis=0).numpy())
[0.2134 0.2189 0.0096]
print(t.var(axis=1).numpy())
[0.0187 0.08 ]
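A minimal sketch of the correction keyword, reusing t from above: correction=0 disables Bessel's correction and gives the population (biased) variance.
print(t.var(axis=1, correction=0).numpy())  # biased variance, smaller than t.var(axis=1) by a factor of (n-1)/n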
var_mean
var_mean(
axis: int | Sequence[int] | None = None,
keepdim=False,
correction=1,
) -> tuple[Tensor, Tensor]
Calculates the variance and mean over the dimensions specified by axis.
Syntactic sugar around Tensor.var and Tensor.mean to match torch.var_mean.
Tensor.manual_seed(42)
t = Tensor.normal(2, 3, mean=2.5, std=0.5)
print(t.numpy())
[[2.9889 2.7339 2.7763]
[2.3356 2.0722 2.6376]]
var, mean = t.var_mean()
print(var.numpy(), mean.numpy())
0.10992539 2.5907671
std
Returns the standard deviation of the tensor along the specified axis or axes.
You can pass in axis, keepdim, and correction keyword arguments to control the axis along which the standard deviation is computed, whether the reduced dimensions are retained, and the Bessel's correction applied.
Tensor.manual_seed(42)
t = Tensor.normal(2, 3, mean=2.5, std=0.5)
print(t.numpy())
[[2.9889 2.7339 2.7763]
[2.3356 2.0722 2.6376]]
print(t.std().numpy())
0.33154997
print(t.std(axis=0).numpy())
[0.462 0.4679 0.0981]
print(t.std(axis=1).numpy())
[0.1367 0.2829]
std_mean
std_mean(
axis: int | Sequence[int] | None = None,
keepdim=False,
correction=1,
) -> tuple[Tensor, Tensor]
Calculates the standard deviation and mean over the dimensions specified by axis.
Syntactic sugar around Tensor.std and Tensor.mean to match torch.std_mean.
Tensor.manual_seed(42)
t = Tensor.normal(2, 3, mean=2.5, std=0.5)
print(t.numpy())
[[2.9889 2.7339 2.7763]
[2.3356 2.0722 2.6376]]
std, mean = t.std_mean()
print(std.numpy(), mean.numpy())
0.33154997 2.5907671
softmax
softmax(
axis=-1,
dtype: DTypeLike | None = None,
_single_kernel=getenv("SINGLE_KERNEL_SOFTMAX"),
) -> Tensor
Applies the softmax function to the tensor along the specified axis.
Rescales the elements of the tensor such that they lie in the range [0, 1] and sum to 1.
You can pass in the axis keyword argument to control the axis along which the softmax is computed.
Tensor.manual_seed(42)
t = Tensor.randn(2, 3)
print(t.numpy())
[[ 0.9779 0.4678 0.5526]
[-0.3288 -0.8555 0.2753]]
print(t.softmax().numpy())
[[0.4436 0.2664 0.29 ]
[0.2924 0.1727 0.5349]]
print(t.softmax(axis=0).numpy())
[[0.787 0.7897 0.5689]
[0.213 0.2103 0.4311]]
log_softmax
log_softmax(
axis=-1, dtype: DTypeLike | None = None
) -> Tensor
Applies the log-softmax function to the tensor along the specified axis.
The log-softmax function is a numerically stable alternative to the softmax function in log space.
You can pass in the axis keyword argument to control the axis along which the log-softmax is computed.
Tensor.manual_seed(42)
t = Tensor.randn(2, 3)
print(t.numpy())
[[ 0.9779 0.4678 0.5526]
[-0.3288 -0.8555 0.2753]]
print(t.log_softmax().numpy())
[[-0.8127 -1.3228 -1.238 ]
[-1.2297 -1.7564 -0.6256]]
print(t.log_softmax(axis=0).numpy())
[[-0.2396 -0.2361 -0.564 ]
[-1.5463 -1.5594 -0.8414]]
logsumexp
logsumexp(axis=None, keepdim=False) -> Tensor
Computes the log-sum-exp of the tensor along the specified axis or axes.
The log-sum-exp function is a numerically stable way to compute the logarithm of the sum of exponentials.
You can pass in axis and keepdim keyword arguments to control the axis along which the log-sum-exp is computed and whether the reduced dimensions are retained.
Tensor.manual_seed(42)
t = Tensor.randn(2, 3)
print(t.numpy())
[[ 0.9779 0.4678 0.5526]
[-0.3288 -0.8555 0.2753]]
print(t.logsumexp().numpy())
2.1347282
print(t.logsumexp(axis=0).numpy())
[1.2174 0.7039 1.1167]
print(t.logsumexp(axis=1).numpy())
[1.7906 0.9009]
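keepdim works the same way as for the other reductions; a minimal sketch reusing t from above:
print(t.logsumexp(axis=1, keepdim=True).shape)  # (2, 1) -- the reduced axis is kept with size 1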
logcumsumexp
logcumsumexp(axis=0) -> Tensor
Computes the log-cumsum-exp of the tensor along the specified axis.
The log-cumsum-exp function is a numerically stable way to compute the logarithm of the cumulative sum of exponentials.
You can pass in the axis keyword argument to control the axis along which the log-cumsum-exp is computed.
Tensor.manual_seed(42)
t = Tensor.randn(2, 3)
print(t.numpy())
[[ 0.9779 0.4678 0.5526]
[-0.3288 -0.8555 0.2753]]
print(t.logcumsumexp().numpy())
[[0.9779 0.4678 0.5526]
[1.2174 0.7039 1.1167]]
print(t.logcumsumexp(axis=0).numpy())
[[0.9779 0.4678 0.5526]
[1.2174 0.7039 1.1167]]
print(t.logcumsumexp(axis=1).numpy())
[[ 0.9779 1.4481 1.7906]
[-0.3288 0.1353 0.9009]]
argmax
argmax(axis=None, keepdim=False) -> Tensor
Returns the indices of the maximum value of the tensor along the specified axis.
You can pass in axis and keepdim keyword arguments to control the axis along which the maximum is computed and whether the reduced dimensions are retained.
t = Tensor([[1, 0, 2], [5, 4, 3]])
print(t.numpy())
[[1 0 2]
[5 4 3]]
print(t.argmax().numpy()) # Returns the index of the maximum value in the flattened tensor.
3
print(t.argmax(axis=0).numpy()) # Returns the indices of the maximum values along axis 0.
[1 1 1]
print(t.argmax(axis=1).numpy()) # Returns the indices of the maximum values along axis 1.
[2 0]
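A short sketch of keepdim, reusing t from above:
print(t.argmax(axis=1, keepdim=True).numpy())  # [[2], [0]] -- same indices, with the reduced axis kept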
argmin
argmin(axis=None, keepdim=False) -> Tensor
Returns the indices of the minimum value of the tensor along the specified axis.
You can pass in axis and keepdim keyword arguments to control the axis along which the minimum is computed and whether the reduced dimensions are retained.
t = Tensor([[1, 0, 2], [5, 4, 3]])
print(t.numpy())
[[1 0 2]
[5 4 3]]
print(t.argmin().numpy()) # Returns the index of the minimum value in the flattened tensor.
1
print(t.argmin(axis=0).numpy()) # Returns the indices of the minimum values along axis 0.
[0 0 0]
print(t.argmin(axis=1).numpy()) # Returns the indices of the minimum values along axis 1.
[1 2]
Processing
avg_pool2d
avg_pool2d(
kernel_size: tuple[int, ...] = (2, 2),
stride=None,
dilation=1,
padding: int | tuple[int, ...] = 0,
ceil_mode=False,
count_include_pad=True,
) -> Tensor
Applies average pooling over a tensor.
This function supports three different types of padding:
- int (single value): Applies the same padding value uniformly to all spatial dimensions.
- tuple[int, ...] (length = number of spatial dimensions): Specifies a distinct padding value for each spatial dimension in the form (padding_height, padding_width, ...).
- tuple[int, ...] (length = 2 * number of spatial dimensions): Specifies explicit padding for each side of each spatial dimension in the form (padding_left, padding_right, padding_top, padding_bottom, ...).
When ceil_mode is set to True, the output shape will be determined using ceil division.
When count_include_pad is set to False, zero padding will not be included in the averaging calculation.
Note: unlike PyTorch, this implementation is not limited to only 2d pooling and instead works for any number of dimensions.
t = Tensor.arange(25).reshape(1, 1, 5, 5)
print(t.avg_pool2d().numpy())
[[[[ 3. 5.]
[13. 15.]]]]
print(t.avg_pool2d(ceil_mode=True).numpy())
[[[[ 3. 5. 6.5]
[13. 15. 16.5]
[20.5 22.5 24. ]]]]
print(t.avg_pool2d(padding=1).numpy())
[[[[ 0. 0.75 1.75]
[ 3.75 9. 11. ]
[ 8.75 19. 21. ]]]]
print(t.avg_pool2d(padding=1, count_include_pad=False).numpy())
[[[[ 0. 1.5 3.5]
[ 7.5 9. 11. ]
[17.5 19. 21. ]]]]
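kernel_size and stride can also be set explicitly; a minimal sketch on the same 5x5 input:
print(t.avg_pool2d(kernel_size=(3, 3), stride=1).shape)  # (1, 1, 3, 3) -- overlapping 3x3 windows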
max_pool2d
max_pool2d(
kernel_size: tuple[int, ...] = (2, 2),
stride=None,
dilation=1,
padding: int | tuple[int, ...] = 0,
ceil_mode=False,
return_indices=False,
) -> Tensor | tuple[Tensor, Tensor]
Applies max pooling over a tensor.
This function supports three different types of padding:
- int (single value): Applies the same padding value uniformly to all spatial dimensions.
- tuple[int, ...] (length = number of spatial dimensions): Specifies a distinct padding value for each spatial dimension in the form (padding_height, padding_width, ...).
- tuple[int, ...] (length = 2 * number of spatial dimensions): Specifies explicit padding for each side of each spatial dimension in the form (padding_left, padding_right, padding_top, padding_bottom, ...).
When ceil_mode is set to True, the output shape will be determined using ceil division.
When return_indices is set to True, the argmax will be returned along with the max values.
Note: unlike PyTorch, this implementation is not limited to only 2d pooling and instead works for any number of dimensions.
t = Tensor.arange(25).reshape(1, 1, 5, 5)
print(t.max_pool2d().numpy())
[[[[ 6 8]
[16 18]]]]
print(t.max_pool2d(ceil_mode=True).numpy())
[[[[ 6 8 9]
[16 18 19]
[21 23 24]]]]
print(t.max_pool2d(padding=1).numpy())
[[[[ 0 2 4]
[10 12 14]
[20 22 24]]]]
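As with avg_pool2d, stride and dilation can be set explicitly; a minimal sketch on the same 5x5 input:
print(t.max_pool2d(kernel_size=(2, 2), stride=1).shape)    # (1, 1, 4, 4) -- overlapping windows
print(t.max_pool2d(kernel_size=(2, 2), dilation=2).shape)  # (1, 1, 2, 2) -- each window skips every other element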
max_unpool2d
max_unpool2d(
indices: Tensor,
kernel_size: tuple[int, ...] = (2, 2),
stride=None,
dilation=1,
padding: int | tuple[int, ...] = 0,
output_size=None,
)
Performs a partial inverse of max_pool2d using the indices from the argmax.
When output_size is provided, the output shape disambiguates to the provided shape.
Note: unlike PyTorch, this implementation is not limited to only 2d pooling and instead works for any number of dimensions.
t = Tensor.arange(1, 17).reshape(1, 1, 4, 4)
print(t.numpy())
[[[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]
[13 14 15 16]]]]
output, indices = Tensor.max_pool2d(t, return_indices=True)
print(output.numpy())
print(indices.numpy())
[[[[ 6 8]
[14 16]]]]
[[[[ 5 7]
[13 15]]]]
print(Tensor.max_unpool2d(output, indices).numpy())
[[[[ 0 0 0 0]
[ 0 6 0 8]
[ 0 0 0 0]
[ 0 14 0 16]]]]
conv2d
conv2d(
weight: Tensor,
bias: Tensor | None = None,
groups=1,
stride=1,
dilation=1,
padding: int | tuple[int, ...] = 0,
dtype: DTypeLike | None = None,
) -> Tensor
Applies a convolution over a tensor with a given weight and optional bias.
This function supports three different types of padding:
- int (single value): Applies the same padding value uniformly to all spatial dimensions.
- tuple[int, ...] (length = number of spatial dimensions): Specifies a distinct padding value for each spatial dimension in the form (padding_height, padding_width, ...).
- tuple[int, ...] (length = 2 * number of spatial dimensions): Specifies explicit padding for each side of each spatial dimension in the form (padding_left, padding_right, padding_top, padding_bottom, ...).
Note: unlike PyTorch, this implementation is not limited to only 2d convolutions and instead works for any number of dimensions.
See: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html
t = Tensor.arange(9).reshape(1, 1, 3, 3)
w = Tensor.ones(1, 1, 2, 2)
print(t.conv2d(w).numpy())
[[[[ 8. 12.]
[20. 24.]]]]
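A short sketch of the stride and padding keywords on the same 3x3 input (shapes follow from the definitions above):
print(t.conv2d(w, stride=2).shape)   # (1, 1, 1, 1) -- only one 2x2 window fits with stride 2
print(t.conv2d(w, padding=1).shape)  # (1, 1, 4, 4) -- zero padding of 1 on every side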
conv_transpose2d
conv_transpose2d(
weight: Tensor,
bias: Tensor | None = None,
groups=1,
stride=1,
dilation=1,
padding=0,
output_padding=0,
) -> Tensor
Applies a transposed convolution over a tensor with a given weight and optional bias.
This function supports three different types of padding:
- int (single value): Applies the same padding value uniformly to all spatial dimensions.
- tuple[int, ...] (length = number of spatial dimensions): Specifies a distinct padding value for each spatial dimension in the form (padding_height, padding_width, ...).
- tuple[int, ...] (length = 2 * number of spatial dimensions): Specifies explicit padding for each side of each spatial dimension in the form (padding_left, padding_right, padding_top, padding_bottom, ...).
Note: unlike PyTorch, this implementation is not limited to only 2d transposed convolutions and instead works for any number of dimensions.
See: https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html
t = Tensor.arange(9).reshape(1, 1, 3, 3)
w = Tensor.ones(1, 1, 2, 2)
print(t.conv_transpose2d(w).numpy())
[[[[ 0. 1. 3. 2.]
[ 3. 8. 12. 7.]
[ 9. 20. 24. 13.]
[ 6. 13. 15. 8.]]]]
dot
Performs dot product between two tensors.
If w is 1-D, it's a sum product over the last axis of self and w.
If w is N-D with N>=2, it's a sum product over the last axis of self and the second-to-last axis of w.
You can pass in the optional dtype keyword argument to control the data type of the accumulation.
a = Tensor([1, 2, 3])
b = Tensor([1, 1, 0])
print(a.dot(b).numpy())
3
a = Tensor([[1, 2], [3, 4]])
b = Tensor([[5, 6], [7, 8]])
print(a.dot(b).numpy())
[[19 22]
[43 50]]
matmul
Performs matrix multiplication between two tensors.
You can pass in the reverse keyword argument to control the order of the matrix multiplication.
You can pass in the optional dtype keyword argument to control the data type of the accumulation.
a = Tensor([[1, 2], [3, 4]])
b = Tensor([[5, 6], [7, 8]])
print(a.matmul(b).numpy())
[[19 22]
[43 50]]
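A sketch of the reverse keyword described above, reusing a and b: reverse=True swaps the operand order.
print(a.matmul(b, reverse=True).numpy())  # equivalent to b.matmul(a)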
einsum (staticmethod)
einsum(
formula: str,
*operands: Tensor | Sequence[Tensor],
dtype: DTypeLike | None = None
) -> Tensor
Sums the product of the elements of the input tensors according to a formula based on the Einstein summation convention.
See: https://pytorch.org/docs/stable/generated/torch.einsum.html
x = Tensor([[1, 2], [3, 4]])
y = Tensor([[5, 6], [7, 8]])
print(Tensor.einsum("ij,ij->", x, y).numpy())
70
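Another common formula is plain matrix multiplication; a minimal sketch reusing x and y:
print(Tensor.einsum("ij,jk->ik", x, y).numpy())  # same result as x.matmul(y)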
cumsum
Computes the cumulative sum of the tensor along the specified axis.
t = Tensor.ones(2, 3)
print(t.numpy())
[[1. 1. 1.]
[1. 1. 1.]]
print(t.cumsum(1).numpy())
[[1. 2. 3.]
[1. 2. 3.]]
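Along axis 0 the sum accumulates down the rows instead:
print(t.cumsum(0).numpy())  # [[1. 1. 1.], [2. 2. 2.]]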
cummax
Computes the cumulative max of the tensor along the specified axis.
t = Tensor([0, 1, -1, 2, -2, 3, -3])
print(t.numpy())
[ 0 1 -1 2 -2 3 -3]
print(t.cummax(0).numpy())
[0 1 1 2 2 3 3]
triu
Returns the upper triangular part of the tensor, the other elements are set to 0.
The argument diagonal determines which diagonal is on the boundary. diagonal = 0 means the main diagonal.
Positive diagonal means above the main diagonal, and negative diagonal means below the main diagonal.
t = Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(t.numpy())
[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]]
print(t.triu(diagonal=0).numpy())
[[ 1 2 3 4]
[ 0 6 7 8]
[ 0 0 11 12]]
print(t.triu(diagonal=1).numpy())
[[ 0 2 3 4]
[ 0 0 7 8]
[ 0 0 0 12]]
print(t.triu(diagonal=-1).numpy())
[[ 1 2 3 4]
[ 5 6 7 8]
[ 0 10 11 12]]
tril
Returns the lower triangular part of the tensor, the other elements are set to 0.
The argument diagonal determines which diagonal is on the boundary. diagonal = 0 means the main diagonal.
Positive diagonal means above the main diagonal, and negative diagonal means below the main diagonal.
t = Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(t.numpy())
[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]]
print(t.tril(diagonal=0).numpy())
[[ 1 0 0 0]
[ 5 6 0 0]
[ 9 10 11 0]]
print(t.tril(diagonal=1).numpy())
[[ 1 2 0 0]
[ 5 6 7 0]
[ 9 10 11 12]]
print(t.tril(diagonal=-1).numpy())
[[ 0 0 0 0]
[ 5 0 0 0]
[ 9 10 0 0]]
interpolate
Downsamples or upsamples the tensor to the given size, accepting 0 to N batch dimensions.
The interpolation algorithm is selected with mode, which currently only supports linear, nearest, and nearest-exact.
To run bilinear or trilinear, pass in a 2D or 3D size.
t = Tensor([[1, 2, 3, 4], [21, 22, 23, 24], [41, 42, 43, 44]])
print(t.numpy())
[[ 1 2 3 4]
[21 22 23 24]
[41 42 43 44]]
print(t.interpolate(size=(2,3), mode="linear").numpy())
[[ 6 7 8]
[36 37 38]]
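A sketch of the nearest mode on the same tensor; nearest simply repeats the closest input samples, so only the shape is shown here:
print(t.interpolate(size=(6, 8), mode="nearest").shape)  # (6, 8)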
scatter
scatter(
dim: int,
index: Tensor,
src: Tensor | ConstType,
reduce: Literal["multiply", "add"] | None = None,
) -> Tensor
Scatters src values along an axis specified by dim.
Apply add or multiply reduction operation with reduce.
Note: to use the reduce argument with a Tensor src, see Tensor.scatter_reduce.
src = Tensor.arange(1, 11).reshape(2, 5)
print(src.numpy())
[[ 1 2 3 4 5]
[ 6 7 8 9 10]]
index = Tensor([[0, 1, 2, 0]])
print(Tensor.zeros(3, 5, dtype=src.dtype).scatter(0, index, src).numpy())
[[1 0 0 4 0]
[0 2 0 0 0]
[0 0 3 0 0]]
index = Tensor([[0, 1, 2], [0, 1, 4]])
print(Tensor.zeros(3, 5, dtype=src.dtype).scatter(1, index, src).numpy())
[[1 2 3 0 0]
[6 7 0 0 8]
[0 0 0 0 0]]
print(Tensor.full((2, 4), 2.0).scatter(1, Tensor([[2], [3]]), 1.23, reduce='multiply').numpy())
[[2. 2. 2.46 2. ]
[2. 2. 2. 2.46]]
print(Tensor.full((2, 4), 2.0).scatter(1, Tensor([[2], [3]]), 1.23, reduce='add').numpy())
[[2. 2. 3.23 2. ]
[2. 2. 2. 3.23]]
scatter_reduce
scatter_reduce(
dim: int,
index: Tensor,
src: Tensor,
reduce: Literal["sum", "prod", "mean", "amax", "amin"],
include_self: bool = True,
) -> Tensor
Scatters src values along an axis specified by dim.
Apply "sum", "prod", "mean", "amax", or "amin" reduction operations with reduce.
Set include_self=False to exclude values in the self Tensor from the reduction.
src = Tensor.arange(1, 11).cast(dtypes.float).reshape(2, 5)
print(src.numpy())
index = Tensor([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]])
print(index.numpy())
[[ 1. 2. 3. 4. 5.]
[ 6. 7. 8. 9. 10.]]
[[0 0 0 0 0]
[0 0 0 0 0]]
print(Tensor.ones(1, 5, dtype=src.dtype).scatter_reduce(0, index, src, reduce='sum').numpy())
[[ 8. 10. 12. 14. 16.]]
print(Tensor.ones(1, 5, dtype=src.dtype).scatter_reduce(0, index, src, reduce='prod').numpy())
[[ 6. 14. 24. 36. 50.]]
print(Tensor.ones(1, 5, dtype=src.dtype).scatter_reduce(0, index, src, reduce='mean', include_self=False).numpy())
[[3.5 4.5 5.5 6.5 7.5]]
print(Tensor([[-10, 20, 0, 5, 10]], dtype=src.dtype).scatter_reduce(0, index, src, reduce='amax').numpy())
[[ 6. 20. 8. 9. 10.]]
print(Tensor([[-10, 20, 0, 5, 10]], dtype=src.dtype).scatter_reduce(0, index, src, reduce='amin').numpy())
[[-10. 2. 0. 4. 5.]]
masked_select
masked_select(mask)
Selects elements from self based on the boolean mask.
t = Tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
mask = Tensor([[True, False, True], [False, True, False], [False, False, True]])
print(t.numpy())
print(mask.numpy())
[[0 1 2]
[3 4 5]
[6 7 8]]
[[ True False True]
[False True False]
[False False True]]
print(t.masked_select(mask).numpy())
[0 2 4 8]
masked_fill
Replaces self with value wherever the elements of mask are True.
t = Tensor([1, 2, 3, 4, 5])
mask = Tensor([True, False, True, False, False])
print(t.masked_fill(mask, -12).numpy())
[-12 2 -12 4 5]
t = Tensor([1, 2, 3, 4, 5])
mask = Tensor([True, False, True, False, False])
value = Tensor([-1, -2, -3, -4, -5])
print(t.masked_fill(mask, value).numpy())
[-1 2 -3 4 5]
sort
Performs a bitonic sort on the tensor along the specified dimension.
Order of indices for equivalent elements is always preserved.
See: https://en.wikipedia.org/wiki/Bitonic_sorter
t = Tensor([[0.1, 0.5, 1.2, 3.4, 2.1], [2.2, 1.9, 0.3, 4.5, 0.8]])
print(t.numpy())
[[0.1 0.5 1.2 3.4 2.1]
[2.2 1.9 0.3 4.5 0.8]]
sorted_values, indices = t.sort(dim=1, descending=True)
print(sorted_values.numpy())
print(indices.numpy())
[[3.4 2.1 1.2 0.5 0.1]
[4.5 2.2 1.9 0.8 0.3]]
[[3 4 2 1 0]
[3 0 1 4 2]]
topk
Computes the top-k elements of the tensor along the specified dim.
Order of indices for equivalent elements is always preserved.
t = Tensor([[0.1, 0.5, 1.2, 3.4, 2.1], [2.2, 1.9, 0.3, 4.5, 0.8]])
print(t.numpy())
[[0.1 0.5 1.2 3.4 2.1]
[2.2 1.9 0.3 4.5 0.8]]
topk_values, topk_indices = t.topk(2, dim=1)
print(topk_values.numpy())
print(topk_indices.numpy())
[[3.4 2.1]
[4.5 2.2]]
[[3 4]
[3 0]]
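The reduction can also run along other dimensions; a short sketch along dim=0:
top_values, top_indices = t.topk(1, dim=0)
print(top_values.numpy())   # [[2.2 1.9 1.2 4.5 2.1]] -- column-wise maxima
print(top_indices.numpy())  # [[1 1 0 1 0]]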
multinomial
Returns a tensor with num_samples indices sampled from a multinomial distribution weighted by self.
Note: replacement=False for num_samples > 1 is not supported yet.
Tensor.manual_seed(42)
t = Tensor([1, 2, 3, 4])
print(t.multinomial(20, replacement=True).numpy())
[2 1 3 2 3 1 2 2 3 3 3 3 3 3 2 3 2 3 3 3]
Neural Network (functional)
linear
Applies a linear transformation to self using weight and bias.
See: https://pytorch.org/docs/stable/generated/torch.nn.Linear.html
t = Tensor([[1, 2], [3, 4]])
weight = Tensor([[1, 2], [3, 4]])
bias = Tensor([1, 2])
print(t.linear(weight, bias).numpy())
[[ 8 12]
[16 24]]
sequential
Applies a sequence of functions to self, chaining the output of each function to the input of the next.
t = Tensor([1, 2, 3])
print(t.sequential([lambda x: x * 2, lambda x: x + 1]).numpy())
[3 5 7]
layernorm
Applies Layer Normalization over a mini-batch of inputs.
t = Tensor.randn(8, 10, 16) * 2 + 8
print(t.mean().item(), t.std().item())
7.9793524742126465 2.074720621109009
t = t.layernorm()
print(t.mean().item(), t.std().item())
7.269673196752535e-10 1.0003894567489624
batchnorm
batchnorm(
weight: Tensor | None,
bias: Tensor | None,
mean: Tensor,
invstd: Tensor,
axis: int | tuple[int, ...] = 1,
) -> Tensor
Applies Batch Normalization over a mini-batch of inputs.
t = Tensor.randn(8, 4, 16, 16) * 2 + 8
print(t.mean().item(), t.std().item())
8.019729614257812 1.9927232265472412
t = t.batchnorm(None, None, t.mean(axis=(0,2,3)), t.var(axis=(0,2,3)).add(1e-5).rsqrt())
print(t.mean().item(), t.std().item())
6.119149134065083e-07 0.9998146891593933
dropout
dropout(p=0.5) -> Tensor
Applies dropout to self.
Note: dropout is only applied when Tensor.training is True.
Tensor.manual_seed(42)
t = Tensor.randn(2, 2)
with Tensor.train():
print(t.dropout().numpy())
[[-1.0287 2.17 ]
[ 1.8178 0. ]]
one_hot
Converts self to a one-hot tensor.
num_classes defaults to -1, which means num_classes will be inferred as max(self) + 1.
t = Tensor([0, 1, 3, 3, 4])
print(t.one_hot(5).numpy())
[[1 0 0 0 0]
[0 1 0 0 0]
[0 0 0 1 0]
[0 0 0 1 0]
[0 0 0 0 1]]
scaled_dot_product_attention
scaled_dot_product_attention(
key: Tensor,
value: Tensor,
attn_mask: Tensor | None = None,
dropout_p: float = 0.0,
is_causal: bool = False,
enable_gqa: bool = False,
) -> Tensor
Computes scaled dot-product attention.
self is the query tensor, key is the key tensor, and value is the value tensor.
q = Tensor.randn(2, 4, 8)
k = Tensor.randn(2, 4, 8)
v = Tensor.randn(2, 4, 8)
print(q.scaled_dot_product_attention(k, v).numpy())
[[[ 0.6408 0.3264 0.7317 -1.0943 0.5778 -0.0534 -0.0104 -0.0488]
[ 0.1243 -0.8259 1.6481 -0.8035 -0.3961 0.4269 0.1232 1.6462]
[ 0.9535 0.1068 0.8545 -0.5395 0.4692 -0.0548 -0.2274 0.6152]
[ 0.8891 -0.0411 0.7818 -0.3322 0.3931 -0.0202 -0.1101 0.8129]]
[[-0.4273 -0.6085 -0.0465 0.5246 0.3641 -0.0381 -0.0106 0.8349]
[ 0.6321 0.3654 0.4137 -0.2327 0.2558 0.1418 -1.27 -0.802 ]
[ 0.1794 0.4616 0.1847 -0.1988 0.2123 0.1837 -0.9583 -0.5364]
[ 0.4408 0.6125 0.0811 -0.3886 0.3602 0.4987 -1.4414 -0.9565]]]
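is_causal and attn_mask come from the signature above; a minimal sketch of causal masking, where each position attends only to itself and earlier positions:
print(q.scaled_dot_product_attention(k, v, is_causal=True).shape)  # (2, 4, 8)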
binary_crossentropy
Computes the binary cross-entropy loss between self and Y.
See: https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html
t = Tensor([0.1, 0.9, 0.2])
Y = Tensor([0, 1, 0])
print(t.binary_crossentropy(Y).item())
0.14462155103683472
binary_crossentropy_logits
binary_crossentropy_logits(
Y: Tensor,
reduction: ReductionStr = "mean",
pos_weight: Tensor | None = None,
) -> Tensor
Computes the binary cross-entropy loss between self and Y, where self is logits.
See: https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html
t = Tensor([-1, 2, -3])
Y = Tensor([0, 1, 0])
print(t.binary_crossentropy_logits(Y).item())
0.16292566061019897
sparse_categorical_crossentropy
sparse_categorical_crossentropy(
Y: Tensor,
ignore_index: int = -1,
label_smoothing=0.0,
reduction: ReductionStr = "mean",
) -> Tensor
Computes the sparse categorical cross-entropy loss between self and Y.
Note: self contains the logits and Y contains the target labels. Unlike PyTorch, this function expects the class axis to be -1.
See: https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html
t = Tensor([[-1, 2, -3], [1, -2, 3]])
Y = Tensor([1, 2])
print(t.sparse_categorical_crossentropy(Y).item())
0.09391524642705917
cross_entropy
cross_entropy(
Y: Tensor,
reduction: ReductionStr = "mean",
label_smoothing: float = 0.0,
) -> Tensor
Computes the cross entropy loss between input logits and target.
Note: self contains the logits and Y contains the target labels or class probabilities.
See: https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html
t = Tensor([[-1, 2, -3], [1, -2, 3]])
Y = Tensor([1, 2])
print(t.cross_entropy(Y).item())
0.09391524642705917
t = Tensor([[-1, 2, -3], [1, -2, 3]])
Y = Tensor([1, 2])
print(t.cross_entropy(Y, reduction='none').numpy())
[0.055 0.1328]
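label_smoothing spreads a fraction of the target probability mass uniformly over all classes; a short sketch (the smoothed loss is larger here because the logits favor the correct classes):
print(t.cross_entropy(Y, label_smoothing=0.1).item())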
nll_loss
nll_loss(
Y: Tensor,
weight: Tensor | None = None,
ignore_index: int | None = None,
reduction: ReductionStr = "mean",
) -> Tensor
Computes the negative log likelihood loss between log-probabilities and target labels.
Note: self contains the log-probabilities and Y contains the target labels or class probabilities.
See: https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html
t = Tensor([[-1, 2, -3], [1, -2, 3]])
Y = Tensor([1, 2])
print(t.log_softmax().nll_loss(Y).item())
0.09391524642705917
t = Tensor([[-1, 2, -3], [1, -2, 3]])
Y = Tensor([1, 2])
print(t.log_softmax().nll_loss(Y, reduction='none').numpy())
[0.055 0.1328]