Elementwise
Elementwise ops operate on a per-element basis; they don't change the shape of the tensor.
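For instance, a unary op applied to a 2x3 tensor returns a new 2x3 tensor. A minimal sketch (assuming the usual from tinygrad import Tensor, which all examples on this page rely on):
from tinygrad import Tensor
t = Tensor([[1., 2., 3.], [4., 5., 6.]])
print(t.shape, t.exp().shape)  # exp is elementwise, so the shape is unchanged
(2, 3) (2, 3)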
Unary Ops (math)
logical_not
logical_not()
Computes the logical NOT of the tensor element-wise.
print(Tensor([False, True]).logical_not().numpy())
[ True False]
Source code in tinygrad/tensor.py, lines 2234-2242
neg
neg()
Negates the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).neg().numpy())
[ 3. 2. 1. -0. -1. -2. -3.]
Source code in tinygrad/tensor.py, lines 2243-2251
log
log()
Computes the natural logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log().numpy())
[0. 0.6931 1.3863 2.0794]
Source code in tinygrad/tensor.py, lines 2262-2272
log2
log2()
Computes the base-2 logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log2().numpy())
[0. 1. 2. 3.]
Source code in tinygrad/tensor.py, lines 2273-2283
exp
exp()
Computes the exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp().numpy())
[ 1. 2.7183 7.3891 20.0855]
Source code in tinygrad/tensor.py, lines 2284-2294
exp2
exp2()
Computes the base-2 exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp2().numpy())
[1. 2. 4. 8.]
Source code in tinygrad/tensor.py, lines 2295-2305
sqrt
sqrt()
Computes the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).sqrt().numpy())
[1. 1.4142 1.7321 2. ]
Source code in tinygrad/tensor.py, lines 2328-2336
rsqrt
rsqrt()
Computes the reciprocal of the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).rsqrt().numpy())
[1. 0.7071 0.5774 0.5 ]
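rsqrt composes a square root with a reciprocal, so the call above should agree with chaining the two ops documented on this page (a sketch of the identity, not necessarily the internal implementation):
print(Tensor([1., 2., 3., 4.]).sqrt().reciprocal().numpy())  # 1 / sqrt(x)
[1. 0.7071 0.5774 0.5 ]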
Source code in tinygrad/tensor.py, lines 2337-2345
sin
sin()
Computes the sine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).sin().numpy())
[ 0. 1. -0. -1. 0.]
Source code in tinygrad/tensor.py, lines 2346-2354
cos
cos()
Computes the cosine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).cos().numpy())
[ 1.0000e+00 0.0000e+00 -1.0000e+00 -2.3842e-07 1.0000e+00]
Source code in tinygrad/tensor.py, lines 2355-2363
tan
tan()
Computes the tangent of the tensor element-wise.
print(Tensor([0., math.pi/4, math.pi/2, 3*math.pi/4, math.pi]).tan().numpy())
[ 0. 1. inf -1. 0.]
Source code in tinygrad/tensor.py, lines 2364-2372
trunc
trunc() -> Tensor
Truncates the tensor element-wise.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).trunc().numpy())
[-3. -2. -1. 0. 0. 1. 2. 3.]
Source code in tinygrad/tensor.py, lines 2376-2384
ceil
ceil() -> Tensor
Rounds the tensor element-wise towards positive infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).ceil().numpy())
[-3. -2. -1. 0. 1. 2. 3. 4.]
Source code in tinygrad/tensor.py, lines 2385-2393
floor
floor() -> Tensor
Rounds the tensor element-wise towards negative infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).floor().numpy())
[-4. -3. -2. -1. 0. 1. 2. 3.]
Source code in tinygrad/tensor.py, lines 2394-2402
round
round() -> Tensor
Rounds the tensor element-wise with rounding half to even.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).round().numpy())
[-4. -2. -2. 0. 0. 2. 2. 4.]
Source code in tinygrad/tensor.py, lines 2403-2411
lerp
lerp(end, weight)
Linearly interpolates between self and end by weight.
print(Tensor([1., 2., 3.]).lerp(Tensor([4., 5., 6.]), 0.5).numpy())
[2.5 3.5 4.5]
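lerp implements the standard linear-interpolation formula start + (end - start) * weight, so the result above can be reproduced by hand (a sketch of the formula; the internal kernel may differ):
start, end = Tensor([1., 2., 3.]), Tensor([4., 5., 6.])
print((start + (end - start) * 0.5).numpy())  # same values as start.lerp(end, 0.5)
[2.5 3.5 4.5]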
Source code in tinygrad/tensor.py, lines 2413-2424
square
square()
Squares the tensor element-wise.
Equivalent to self*self.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).square().numpy())
[9. 4. 1. 0. 1. 4. 9.]
Source code in tinygrad/tensor.py, lines 2426-2435
clamp
clamp(min_=None, max_=None)
Clips (clamps) the values in the tensor between min_ and max_ element-wise.
If min_ is None, there is no lower bound. If max_ is None, there is no upper bound.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).clamp(-1, 1).numpy())
[-1. -1. -1. 0. 1. 1. 1.]
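Clamping is the composition of an element-wise maximum with the lower bound and an element-wise minimum with the upper bound; a sketch of that equivalence using the ops documented below:
t = Tensor([-3., -2., -1., 0., 1., 2., 3.])
print(t.maximum(-1).minimum(1).numpy())  # same values as t.clamp(-1, 1)
[-1. -1. -1. 0. 1. 1. 1.]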
Source code in tinygrad/tensor.py, lines 2436-2447
clip
clip(min_=None, max_=None)
Alias for Tensor.clamp.
Source code in tinygrad/tensor.py, lines 2448-2452
sign
sign()
Returns the sign of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sign().numpy())
[-1. -1. -1. 0. 1. 1. 1.]
Source code in tinygrad/tensor.py, lines 2453-2461
abs
abs()
Computes the absolute value of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).abs().numpy())
[3. 2. 1. 0. 1. 2. 3.]
Source code in tinygrad/tensor.py, lines 2462-2470
reciprocal
reciprocal()
Computes 1/x element-wise.
print(Tensor([1., 2., 3., 4.]).reciprocal().numpy())
[1. 0.5 0.3333 0.25 ]
Source code in tinygrad/tensor.py, lines 2471-2479
Unary Ops (activation)
relu
relu()
Applies the Rectified Linear Unit (ReLU) function element-wise.
- Described: https://paperswithcode.com/method/relu
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).relu().numpy())
[0. 0. 0. 0. 1. 2. 3.]
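ReLU is an element-wise maximum with zero, so the result above follows from the definition relu(x) = max(x, 0) (a sketch, not the internal implementation):
t = Tensor([-3., -2., -1., 0., 1., 2., 3.])
print(t.maximum(0).numpy())  # relu(x) = max(x, 0)
[0. 0. 0. 0. 1. 2. 3.]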
Source code in tinygrad/tensor.py, lines 2306-2316
sigmoid
sigmoid()
Applies the Sigmoid function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sigmoid().numpy())
[0.0474 0.1192 0.2689 0.5 0.7311 0.8808 0.9526]
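Numerically this matches the textbook definition sigmoid(x) = 1 / (1 + e^-x); a sketch of that identity (tinygrad may use a more numerically stable formulation internally):
t = Tensor([-3., -2., -1., 0., 1., 2., 3.])
print((1 / (1 + (-t).exp())).numpy())  # 1 / (1 + e^-x)
[0.0474 0.1192 0.2689 0.5 0.7311 0.8808 0.9526]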
Source code in tinygrad/tensor.py, lines 2317-2327
elu
elu(alpha=1.0)
Applies the Exponential Linear Unit (ELU) function element-wise.
- Described: https://paperswithcode.com/method/elu
- Paper: https://arxiv.org/abs/1511.07289v5
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).elu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
Source code in tinygrad/tensor.py, lines 2483-2494
celu
celu(alpha=1.0)
Applies the Continuously differentiable Exponential Linear Unit (CELU) function element-wise.
- Described: https://paperswithcode.com/method/celu
- Paper: https://arxiv.org/abs/1704.07483
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).celu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
Source code in tinygrad/tensor.py, lines 2496-2507
swish
swish()
See .silu().
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).swish().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
Source code in tinygrad/tensor.py, lines 2509-2519
silu
silu()
Applies the Sigmoid Linear Unit (SiLU) function element-wise.
- Described: https://paperswithcode.com/method/silu
- Paper: https://arxiv.org/abs/1606.08415
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).silu().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
Source code in tinygrad/tensor.py, lines 2521-2532
relu6
relu6()
Applies the ReLU6 function element-wise.
- Described: https://paperswithcode.com/method/relu6
- Paper: https://arxiv.org/abs/1704.04861v1
print(Tensor([-9., -6., -3., 0., 3., 6., 9.]).relu6().numpy())
[0. 0. 0. 0. 3. 6. 6.]
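ReLU6 is ReLU capped at 6, i.e. min(max(x, 0), 6), which is the same as clipping to [0, 6]; a sketch of that equivalence:
t = Tensor([-9., -6., -3., 0., 3., 6., 9.])
print(t.clip(0, 6).numpy())  # min(max(x, 0), 6)
[0. 0. 0. 0. 3. 6. 6.]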
Source code in tinygrad/tensor.py, lines 2534-2545
hardswish
hardswish()
Applies the Hardswish function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardswish().numpy())
[-0. -0.3333 -0.3333 0. 0.6667 1.6667 3. ]
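Hardswish is conventionally defined as x * relu6(x + 3) / 6; the values above can be reproduced from that definition (a sketch, not necessarily the internal formulation):
t = Tensor([-3., -2., -1., 0., 1., 2., 3.])
print((t * (t + 3).relu6() / 6).numpy())  # x * relu6(x + 3) / 6
[-0. -0.3333 -0.3333 0. 0.6667 1.6667 3. ]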
Source code in tinygrad/tensor.py, lines 2547-2558
tanh
tanh()
Applies the Hyperbolic Tangent (tanh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).tanh().numpy())
[-0.9951 -0.964 -0.7616 0. 0.7616 0.964 0.9951]
Source code in tinygrad/tensor.py, lines 2560-2570
sinh
sinh()
Applies the Hyperbolic Sine (sinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sinh().numpy())
[-10.0179 -3.6269 -1.1752 0. 1.1752 3.6269 10.0179]
Source code in tinygrad/tensor.py, lines 2572-2582
cosh
cosh()
Applies the Hyperbolic Cosine (cosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).cosh().numpy())
[10.0677 3.7622 1.5431 1. 1.5431 3.7622 10.0677]
Source code in tinygrad/tensor.py, lines 2584-2594
atanh
atanh()
Applies the Inverse Hyperbolic Tangent (atanh) function element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).atanh().numpy())
[-1.4722 -0.6931 -0.3095 0. 0.3095 0.6931 1.4722]
Source code in tinygrad/tensor.py, lines 2596-2606
asinh
asinh()
Applies the Inverse Hyperbolic Sine (asinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).asinh().numpy())
[-1.8184 -1.4436 -0.8814 0. 0.8814 1.4436 1.8184]
Source code in tinygrad/tensor.py, lines 2608-2618
acosh
acosh()
Applies the Inverse Hyperbolic Cosine (acosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).acosh().numpy())
[ nan nan nan nan 0. 1.317 1.7627]
Source code in tinygrad/tensor.py, lines 2620-2630
hardtanh
hardtanh(min_val=-1, max_val=1)
Applies the Hardtanh function element-wise.
print(Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).hardtanh().numpy())
[-1. -1. -0.5 0. 0.5 1. 1. ]
Source code in tinygrad/tensor.py, lines 2632-2642
gelu
gelu()
Applies the Gaussian Error Linear Unit (GELU) function element-wise.
- Described: https://paperswithcode.com/method/gelu
- Paper: https://arxiv.org/abs/1606.08415v5
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).gelu().numpy())
[-0.0036 -0.0454 -0.1588 0. 0.8412 1.9546 2.9964]
Source code in tinygrad/tensor.py, lines 2644-2655
quick_gelu
quick_gelu()
Applies the Sigmoid GELU approximation element-wise.
- Described: https://paperswithcode.com/method/gelu
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).quick_gelu().numpy())
[-0.0181 -0.0643 -0.1542 0. 0.8458 1.9357 2.9819]
Source code in tinygrad/tensor.py, lines 2657-2667
leakyrelu
leakyrelu(neg_slope=0.01)
Applies the Leaky ReLU function element-wise.
- Described: https://paperswithcode.com/method/leaky-relu
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leakyrelu().numpy())
[-0.03 -0.02 -0.01 0. 1. 2. 3. ]
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leakyrelu(neg_slope=0.42).numpy())
[-1.26 -0.84 -0.42 0. 1. 2. 3. ]
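For neg_slope < 1, Leaky ReLU can be written as max(x, neg_slope * x); a sketch of that identity for the default slope of 0.01:
t = Tensor([-3., -2., -1., 0., 1., 2., 3.])
print(t.maximum(t * 0.01).numpy())  # max(x, 0.01 * x)
[-0.03 -0.02 -0.01 0. 1. 2. 3. ]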
Source code in tinygrad/tensor.py, lines 2669-2682
mish
mish()
Applies the Mish function element-wise.
- Described: https://paperswithcode.com/method/mish
- Paper: https://arxiv.org/abs/1908.08681v3
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).mish().numpy())
[-0.1456 -0.2525 -0.3034 0. 0.8651 1.944 2.9865]
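Mish is defined as x * tanh(softplus(x)); the values above follow from composing the two ops documented on this page (a sketch of the definition):
t = Tensor([-3., -2., -1., 0., 1., 2., 3.])
print((t * t.softplus().tanh()).numpy())  # x * tanh(softplus(x))
[-0.1456 -0.2525 -0.3034 0. 0.8651 1.944 2.9865]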
Source code in tinygrad/tensor.py, lines 2684-2695
softplus
softplus(beta=1)
Applies the Softplus function element-wise.
- Described: https://paperswithcode.com/method/softplus
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softplus().numpy())
[0.0486 0.1269 0.3133 0.6931 1.3133 2.1269 3.0486]
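Softplus follows the definition softplus(x) = (1/beta) * log(1 + e^(beta*x)); a sketch for the default beta=1 (the internal computation may differ for numerical stability):
t = Tensor([-3., -2., -1., 0., 1., 2., 3.])
print((1 + t.exp()).log().numpy())  # log(1 + e^x)
[0.0486 0.1269 0.3133 0.6931 1.3133 2.1269 3.0486]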
Source code in tinygrad/tensor.py, lines 2697-2707
softsign
softsign()
Applies the Softsign function element-wise.
- Described: https://paperswithcode.com/method/softsign
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softsign().numpy())
[-0.75 -0.6667 -0.5 0. 0.5 0.6667 0.75 ]
Source code in tinygrad/tensor.py, lines 2709-2719
Elementwise Ops (broadcasted)
add
Adds self and x.
Equivalent to self + x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.add(20).numpy())
[19.4856 21.085 20.9089 19.9159]
print(t.add(Tensor([[2.0], [3.5]])).numpy())
[[1.4856 3.085 2.9089 1.9159]
[2.9856 4.585 4.4089 3.4159]]
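Broadcasting here follows the usual NumPy-style rules: shapes are aligned from the right and size-1 axes are stretched to match. A quick shape-level sketch:
a, b = Tensor.ones(2, 1, 4), Tensor.ones(3, 1)
print((a + b).shape)  # (2, 1, 4) + (3, 1) broadcasts to (2, 3, 4)
(2, 3, 4)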
Source code in tinygrad/tensor.py, lines 2756-2774
sub
Subtracts x from self.
Equivalent to self - x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.sub(20).numpy())
[-20.5144 -18.915 -19.0911 -20.0841]
print(t.sub(Tensor([[2.0], [3.5]])).numpy())
[[-2.5144 -0.915 -1.0911 -2.0841]
[-4.0144 -2.415 -2.5911 -3.5841]]
Source code in tinygrad/tensor.py, lines 2776-2795
mul
Multiplies self and x.
Equivalent to self * x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.mul(3).numpy())
[-1.5431 3.2549 2.7267 -0.2523]
print(t.mul(Tensor([[-1.0], [2.0]])).numpy())
[[ 0.5144 -1.085 -0.9089 0.0841]
[-1.0287 2.17 1.8178 -0.1682]]
Source code in tinygrad/tensor.py, lines 2797-2815
div
Divides self by x.
Equivalent to self / x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
By default, div performs true division. Set upcast to False for integer division.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.div(3).numpy())
[-0.1715 0.3617 0.303 -0.028 ]
print(Tensor([1, 4, 10]).div(Tensor([2, 3, 4])).numpy())
[0.5 1.3333 2.5 ]
print(Tensor([1, 4, 10]).div(Tensor([2, 3, 4]), upcast=False).numpy())
[0 1 2]
Source code in tinygrad/tensor.py, lines 2817-2841
xor
Computes bitwise xor of self and x.
Equivalent to self ^ x.
Supports broadcasting to a common shape, type promotion, and integer, boolean inputs.
print(Tensor([-1, -2, 3]).xor(Tensor([1, 0, 3])).numpy())
[-2 -2 0]
print(Tensor([True, True, False, False]).xor(Tensor([True, False, True, False])).numpy())
[False True True False]
Source code in tinygrad/tensor.py, lines 2843-2856
lshift
lshift(x: int)
Computes left arithmetic shift of self by x bits. self must have an unsigned dtype.
Equivalent to self << x.
print(Tensor([1, 3, 31], dtype=dtypes.uint8).lshift(2).numpy())
[ 4 12 124]
Source code in tinygrad/tensor.py, lines 2888-2898
rshift
rshift(x: int)
Computes right arithmetic shift of self by x bits. self must have an unsigned dtype.
Equivalent to self >> x.
print(Tensor([4, 13, 125], dtype=dtypes.uint8).rshift(2).numpy())
[ 1 3 31]
Source code in tinygrad/tensor.py, lines 2900-2910
pow
Raises self to the power of x.
Equivalent to self ** x.
print(Tensor([-1, 2, 3]).pow(2).numpy())
[1 4 9]
print(Tensor([-1, 2, 3]).pow(Tensor([-1.5, 0.5, 1.5])).numpy())
[ nan 1.4142 5.1962]
print((2 ** Tensor([-1, 2, 3])).numpy())
[0.5 4. 8. ]
Source code in tinygrad/tensor.py, lines 2912-2948
maximum
Computes element-wise maximum of self and x.
print(Tensor([-1, 2, 3]).maximum(1).numpy())
[1 2 3]
print(Tensor([-1, 2, 3]).maximum(Tensor([-4, -2, 9])).numpy())
[-1 2 9]
Source code in tinygrad/tensor.py, lines 2950-2961
minimum
Computes element-wise minimum of self and x.
print(Tensor([-1, 2, 3]).minimum(1).numpy())
[-1 1 1]
print(Tensor([-1, 2, 3]).minimum(Tensor([-4, -2, 9])).numpy())
[-4 -2 3]
Source code in tinygrad/tensor.py, lines 2963-2974
where
Returns a tensor of elements selected from either x or y, depending on self.
output_i = x_i if self_i else y_i.
cond = Tensor([[True, True, False], [True, False, False]])
print(cond.where(1, 3).numpy())
[[1 1 3]
[1 3 3]]
Tensor.manual_seed(42)
cond = Tensor.randn(2, 3)
print(cond.numpy())
[[ 0.9779 0.4678 0.5526]
[-0.3288 -0.8555 0.2753]]
print((cond > 0).where(cond, -float("inf")).numpy())
[[0.9779 0.4678 0.5526]
[ -inf -inf 0.2753]]
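For a boolean condition, where is equivalent to an arithmetic blend of the two branches. A sketch of that equivalence (the blend below produces floats, whereas where keeps the branch dtype):
cond = Tensor([[True, True, False], [True, False, False]])
m = cond.float()  # 1.0 where True, 0.0 where False
print((m * 1 + (1 - m) * 3).numpy())  # same selection as cond.where(1, 3)
[[1. 1. 3.]
 [1. 3. 3.]]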
Source code in tinygrad/tensor.py, lines 2976-2998
Casting Ops
cast
cast(dtype: DTypeLike) -> Tensor
Casts self to the given dtype.
t = Tensor([-1, 2.5, 3], dtype=dtypes.float)
print(t.dtype, t.numpy())
dtypes.float [-1. 2.5 3. ]
t = t.cast(dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
Source code in tinygrad/tensor.py, lines 3340-3353
bitcast
bitcast(dtype: DTypeLike) -> Tensor
Bitcasts self to the given dtype of the same itemsize.
self must not require a gradient.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.bitcast(dtypes.uint32)
print(t.dtype, t.numpy())
dtypes.uint [4294967295 2 3]
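Because bitcast reinterprets raw bits rather than converting values, it can be used to inspect a float's IEEE-754 encoding; a small illustrative sketch:
from tinygrad import Tensor, dtypes
t = Tensor([1.0], dtype=dtypes.float32)
print(t.bitcast(dtypes.uint32).numpy())  # 1065353216 == 0x3f800000, the bit pattern of 1.0f
[1065353216]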
Source code in tinygrad/tensor.py, lines 3355-3378
float
float() -> Tensor
Convenience method to cast self to a float32 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.float()
print(t.dtype, t.numpy())
dtypes.float [-1. 2. 3.]
Source code in tinygrad/tensor.py, lines 3380-3393
half
half() -> Tensor
Convenience method to cast self to a float16 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.half()
print(t.dtype, t.numpy())
dtypes.half [-1. 2. 3.]
Source code in tinygrad/tensor.py, lines 3395-3408
int
int() -> Tensor
Convenience method to cast self to an int32 Tensor.
t = Tensor([-1.5, -0.5, 0.0, 0.5, 1.5])
print(t.dtype, t.numpy())
dtypes.float [-1.5 -0.5 0. 0.5 1.5]
t = t.int()
print(t.dtype, t.numpy())
dtypes.int [-1 0 0 0 1]
Source code in tinygrad/tensor.py, lines 3410-3423
bool
bool() -> Tensor
Convenience method to cast self to a bool Tensor.
t = Tensor([-1, 0, 1])
print(t.dtype, t.numpy())
dtypes.int [-1 0 1]
t = t.bool()
print(t.dtype, t.numpy())
dtypes.bool [ True False True]
Source code in tinygrad/tensor.py, lines 3425-3438