Elementwise
Elementwise ops operate on a per-element basis; they don't change the shape of the tensor.
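For instance (a minimal sketch, assuming the top-level tinygrad Tensor import):

from tinygrad import Tensor

t = Tensor([[1., -2.], [-3., 4.]])
print(t.shape, t.abs().shape)       # (2, 2) (2, 2) -- the shape is unchanged
print(t.abs().relu().sign().shape)  # (2, 2) -- chained elementwise ops keep it too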
Unary Ops (math)¤
logical_not¤
logical_not()
Computes the logical NOT of the tensor element-wise.
print(Tensor([False, True]).logical_not().numpy())
[ True False]
neg¤
neg()
Negates the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).neg().numpy())
[ 3. 2. 1. -0. -1. -2. -3.]
log¤
log()
Computes the natural logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log().numpy())
[0. 0.6931 1.3863 2.0794]
log2¤
log2()
Computes the base-2 logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log2().numpy())
[0. 1. 2. 3.]
exp¤
exp()
Computes the exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp().numpy())
[ 1. 2.7183 7.3891 20.0855]
exp2¤
exp2()
Computes the base-2 exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp2().numpy())
[1. 2. 4. 8.]
sqrt¤
sqrt()
Computes the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).sqrt().numpy())
[1. 1.4142 1.7321 2. ]
rsqrt¤
rsqrt()
Computes the reciprocal of the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).rsqrt().numpy())
[1. 0.7071 0.5774 0.5 ]
sin¤
sin()
Computes the sine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).sin().numpy())
[ 0. 1. -0. -1. 0.]
cos¤
cos()
Computes the cosine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).cos().numpy())
[ 1.0000e+00 0.0000e+00 -1.0000e+00 -2.3842e-07 1.0000e+00]
tan¤
tan()
Computes the tangent of the tensor element-wise.
print(Tensor([0., math.pi/4, math.pi/2, 3*math.pi/4, math.pi]).tan().numpy())
[ 0. 1. inf -1. 0.]
asin¤
asin()
Computes the inverse sine (arcsine) of the tensor element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).asin().numpy())
[-1.1198 -0.6435 -0.3047 0. 0.3047 0.6435 1.1198]
acos¤
acos()
Computes the inverse cosine (arccosine) of the tensor element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).acos().numpy())
[2.6906 2.2143 1.8755 1.5708 1.2661 0.9273 0.451 ]
atan¤
atan()
Computes the inverse tangent (arctan) of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).atan().numpy())
[-1.249 -1.1071 -0.7854 0. 0.7854 1.1071 1.249 ]
trunc¤
trunc() -> Tensor
Truncates the tensor element-wise.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).trunc().numpy())
[-3. -2. -1. 0. 0. 1. 2. 3.]
ceil¤
ceil() -> Tensor
Rounds the tensor element-wise towards positive infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).ceil().numpy())
[-3. -2. -1. 0. 1. 2. 3. 4.]
floor¤
floor() -> Tensor
Rounds the tensor element-wise towards negative infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).floor().numpy())
[-4. -3. -2. -1. 0. 1. 2. 3.]
round¤
round() -> Tensor
Rounds the tensor element-wise with rounding half to even.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).round().numpy())
[-4. -2. -2. 0. 0. 2. 2. 4.]
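A short sketch contrasting round (half to even) with trunc (drop the fraction):

from tinygrad import Tensor

t = Tensor([2.4, 2.5, 2.6, 3.5])
print(t.round().numpy())  # [2. 2. 3. 4.] -- the ties 2.5 and 3.5 go to the even neighbor
print(t.trunc().numpy())  # [2. 2. 2. 3.] -- truncation ignores rounding entirely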
isinf¤
isinf()
Checks the tensor element-wise, returning True where the element is infinity and False otherwise.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isinf().numpy())
[False True False True False]
isnan¤
isnan()
Checks the tensor element-wise, returning True where the element is NaN and False otherwise.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isnan().numpy())
[False False False False True]
lerp¤
lerp(end, weight)
Linearly interpolates between self and end by weight.
print(Tensor([1., 2., 3.]).lerp(Tensor([4., 5., 6.]), 0.5).numpy())
[2.5 3.5 4.5]
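This is the usual affine blend; a minimal sketch of the equivalent arithmetic:

from tinygrad import Tensor

start, end = Tensor([1., 2., 3.]), Tensor([4., 5., 6.])
print((start + (end - start) * 0.5).numpy())  # [2.5 3.5 4.5], same as start.lerp(end, 0.5)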
square¤
square()
Squares the tensor element-wise.
Equivalent to self*self.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).square().numpy())
[9. 4. 1. 0. 1. 4. 9.]
clamp¤
clamp(min_=None, max_=None)
Clips (clamps) the values in the tensor between min_ and max_ element-wise.
If min_ is None, there is no lower bound. If max_ is None, there is no upper bound.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).clip(-1, 1).numpy())
[-1. -1. -1. 0. 1. 1. 1.]
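Either bound may be omitted; a short sketch of one-sided clamping:

from tinygrad import Tensor

t = Tensor([-3., -2., -1., 0., 1., 2., 3.])
print(t.clamp(min_=0).numpy())  # [0. 0. 0. 0. 1. 2. 3.] -- no upper bound
print(t.clamp(max_=0).numpy())  # [-3. -2. -1. 0. 0. 0. 0.] -- no lower bound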
clip¤
clip(min_=None, max_=None)
Alias for Tensor.clamp.
sign¤
sign()
Returns the sign of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sign().numpy())
[-1. -1. -1. 0. 1. 1. 1.]
abs¤
abs()
Computes the absolute value of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).abs().numpy())
[3. 2. 1. 0. 1. 2. 3.]
reciprocal¤
reciprocal()
Computes 1/x element-wise.
print(Tensor([1., 2., 3., 4.]).reciprocal().numpy())
[1. 0.5 0.3333 0.25 ]
Unary Ops (activation)¤
relu¤
relu()
Applies the Rectified Linear Unit (ReLU) function element-wise.
- Described: https://paperswithcode.com/method/relu
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).relu().numpy())
[0. 0. 0. 0. 1. 2. 3.]
sigmoid¤
sigmoid()
Applies the Sigmoid function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sigmoid().numpy())
[0.0474 0.1192 0.2689 0.5 0.7311 0.8808 0.9526]
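A minimal sketch of the underlying formula, sigmoid(x) = 1 / (1 + exp(-x)):

from tinygrad import Tensor

x = Tensor([-1., 0., 1.])
print((1 / (1 + (-x).exp())).numpy())  # [0.2689 0.5 0.7311], matching x.sigmoid().numpy()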
hardsigmoid¤
Applies the Hardsigmoid function element-wise.
NOTE: the default alpha and beta values are taken from torch.
- Described: https://paperswithcode.com/method/hard-sigmoid
- See: https://pytorch.org/docs/stable/generated/torch.nn.functional.hardsigmoid.html
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardsigmoid().numpy())
[0. 0.1667 0.3333 0.5 0.6667 0.8333 1. ]
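A sketch of what those defaults mean, assuming torch's formulation clamp(x/6 + 1/2, 0, 1):

from tinygrad import Tensor

x = Tensor([-4., -1., 0., 1., 4.])
print((x / 6 + 0.5).clamp(0, 1).numpy())  # [0. 0.3333 0.5 0.6667 1.], matching x.hardsigmoid().numpy()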
elu¤
elu(alpha=1.0)
Applies the Exponential Linear Unit (ELU) function element-wise.
- Described: https://paperswithcode.com/method/elu
- Paper: https://arxiv.org/abs/1511.07289v5
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).elu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
celu¤
celu(alpha=1.0)
Applies the Continuously differentiable Exponential Linear Unit (CELU) function element-wise.
- Described: https://paperswithcode.com/method/celu
- Paper: https://arxiv.org/abs/1704.07483
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).celu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
selu¤
selu(alpha=1.67326, gamma=1.0507)
Applies the Scaled Exponential Linear Unit (SELU) function element-wise.
- Described: https://paperswithcode.com/method/selu
- Paper: https://arxiv.org/abs/1706.02515v5
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).selu().numpy())
[-1.6706 -1.5202 -1.1113 0. 1.0507 2.1014 3.1521]
swish¤
swish()
See .silu()
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).swish().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
silu¤
silu()
Applies the Sigmoid Linear Unit (SiLU) function element-wise.
- Described: https://paperswithcode.com/method/silu
- Paper: https://arxiv.org/abs/1606.08415
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).silu().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
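A minimal sketch of the formula, silu(x) = x * sigmoid(x):

from tinygrad import Tensor

x = Tensor([-1., 0., 1.])
print((x * x.sigmoid()).numpy())  # [-0.2689 0. 0.7311], matching x.silu().numpy()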
relu6¤
relu6()
Applies the ReLU6 function element-wise.
- Described: https://paperswithcode.com/method/relu6
- Paper: https://arxiv.org/abs/1704.04861v1
print(Tensor([-9., -6., -3., 0., 3., 6., 9.]).relu6().numpy())
[0. 0. 0. 0. 3. 6. 6.]
hardswish¤
hardswish()
Applies the Hardswish function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardswish().numpy())
[-0. -0.3333 -0.3333 0. 0.6667 1.6667 3. ]
tanh¤
tanh()
Applies the Hyperbolic Tangent (tanh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).tanh().numpy())
[-0.9951 -0.964 -0.7616 0. 0.7616 0.964 0.9951]
sinh¤
sinh()
Applies the Hyperbolic Sine (sinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sinh().numpy())
[-10.0179 -3.6269 -1.1752 0. 1.1752 3.6269 10.0179]
cosh¤
cosh()
Applies the Hyperbolic Cosine (cosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).cosh().numpy())
[10.0677 3.7622 1.5431 1. 1.5431 3.7622 10.0677]
atanh¤
atanh()
Applies the Inverse Hyperbolic Tangent (atanh) function element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).atanh().numpy())
[-1.4722 -0.6931 -0.3095 0. 0.3095 0.6931 1.4722]
asinh¤
asinh()
Applies the Inverse Hyperbolic Sine (asinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).asinh().numpy())
[-1.8184 -1.4436 -0.8814 0. 0.8814 1.4436 1.8184]
acosh¤
acosh()
Applies the Inverse Hyperbolic Cosine (acosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).acosh().numpy())
[ nan nan nan nan 0. 1.317 1.7627]
hardtanh¤
hardtanh(min_val=-1, max_val=1)
Applies the Hardtanh function element-wise.
print(Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).hardtanh().numpy())
[-1. -1. -0.5 0. 0.5 1. 1. ]
erf¤
erf()
Applies the error function element-wise.
- Described: https://en.wikipedia.org/wiki/Error_function
print(Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).erf().numpy())
[-0.9661 -0.8427 -0.5205 0. 0.5205 0.8427 0.9661]
gelu¤
gelu()
Applies the Gaussian Error Linear Unit (GELU) function element-wise.
- Described: https://paperswithcode.com/method/gelu
- Paper: https://arxiv.org/abs/1606.08415v5
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).gelu().numpy())
[-0.0036 -0.0454 -0.1588 0. 0.8412 1.9546 2.9964]
quick_gelu¤
quick_gelu()
Applies the Sigmoid GELU approximation element-wise.
- Described: https://paperswithcode.com/method/gelu
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).quick_gelu().numpy())
[-0.0181 -0.0643 -0.1542 0. 0.8458 1.9357 2.9819]
leakyrelu¤
leakyrelu(neg_slope=0.01)
Applies the Leaky ReLU function element-wise.
- Described: https://paperswithcode.com/method/leaky-relu
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leakyrelu().numpy())
[-0.03 -0.02 -0.01 0. 1. 2. 3. ]
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leakyrelu(neg_slope=0.42).numpy())
[-1.26 -0.84 -0.42 0. 1. 2. 3. ]
mish¤
mish()
Applies the Mish function element-wise.
- Described: https://paperswithcode.com/method/mish
- Paper: https://arxiv.org/abs/1908.08681v3
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).mish().numpy())
[-0.1456 -0.2525 -0.3034 0. 0.8651 1.944 2.9865]
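A minimal sketch of the formula, mish(x) = x * tanh(softplus(x)):

from tinygrad import Tensor

x = Tensor([-1., 0., 1.])
print((x * x.softplus().tanh()).numpy())  # [-0.3034 0. 0.8651], matching x.mish().numpy()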
softplus¤
softplus(beta=1)
Applies the Softplus function element-wise.
- Described: https://paperswithcode.com/method/softplus
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softplus().numpy())
[0.0486 0.1269 0.3133 0.6931 1.3133 2.1269 3.0486]
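A minimal sketch of the formula with the default beta=1, softplus(x) = log(1 + exp(x)):

from tinygrad import Tensor

x = Tensor([-1., 0., 1.])
print((1 + x.exp()).log().numpy())  # [0.3133 0.6931 1.3133], matching x.softplus().numpy()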
softsign¤
softsign()
Applies the Softsign function element-wise.
- Described: https://paperswithcode.com/method/softsign
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softsign().numpy())
[-0.75 -0.6667 -0.5 0. 0.5 0.6667 0.75 ]
Elementwise Ops (broadcasted)¤
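These ops broadcast their operands to a common shape, numpy-style; a minimal sketch:

from tinygrad import Tensor

a = Tensor([[1., 2., 3.]])   # shape (1, 3)
b = Tensor([[10.], [20.]])   # shape (2, 1)
print((a + b).shape)         # (2, 3)
print((a + b).numpy())       # [[11. 12. 13.] [21. 22. 23.]]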
add¤
Adds self and x. Equivalent to self + x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.add(20).numpy())
[19.4856 21.085 20.9089 19.9159]
print(t.add(Tensor([[2.0], [3.5]])).numpy())
[[1.4856 3.085 2.9089 1.9159]
[2.9856 4.585 4.4089 3.4159]]
sub¤
Subtracts x from self. Equivalent to self - x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.sub(20).numpy())
[-20.5144 -18.915 -19.0911 -20.0841]
print(t.sub(Tensor([[2.0], [3.5]])).numpy())
[[-2.5144 -0.915 -1.0911 -2.0841]
[-4.0144 -2.415 -2.5911 -3.5841]]
mul¤
Multiplies self and x. Equivalent to self * x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.mul(3).numpy())
[-1.5431 3.2549 2.7267 -0.2523]
print(t.mul(Tensor([[-1.0], [2.0]])).numpy())
[[ 0.5144 -1.085 -0.9089 0.0841]
[-1.0287 2.17 1.8178 -0.1682]]
div¤
Divides self by x. Equivalent to self / x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
div performs true division.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.div(3).numpy())
[-0.1715 0.3617 0.303 -0.028 ]
print(Tensor([1, 4, 10]).div(Tensor([2, 3, 4])).numpy())
[0.5 1.3333 2.5 ]
idiv¤
Divides self by x. Equivalent to self // x.
Supports broadcasting to a common shape, type promotion, and integer inputs.
idiv performs integer division (truncate towards zero).
print(Tensor([-4, 7, 5, 4, -7, 8]).idiv(Tensor([2, -3, 8, -2, 3, 5])).numpy())
[-2 -2 0 -2 -2 1]
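A quick contrast with Python's floor division (a short sketch):

from tinygrad import Tensor

print(Tensor([-7]).idiv(Tensor([2])).numpy())  # [-3] -- truncates toward zero
print(-7 // 2)                                 # -4 -- Python floors instead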
mod¤
Computes self modulo x. Equivalent to self % x.
Supports broadcasting to a common shape, type promotion, and integer inputs.
print(Tensor([-4, 7, 5, 4, -7, 8]).mod(Tensor([2, -3, 8, -2, 3, 5])).numpy())
[ 0 -2 5 0 2 3]
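Judging by the output above, the sign of the result follows the divisor, as with Python's % operator; a short sketch:

from tinygrad import Tensor

print(Tensor([7, -7]).mod(Tensor([-3, 3])).numpy())  # [-2  2]
print(7 % -3, -7 % 3)                                # -2 2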
xor¤
Computes bitwise xor of self and x. Equivalent to self ^ x.
Supports broadcasting to a common shape, type promotion, and integer, boolean inputs.
print(Tensor([-1, -2, 3]).xor(Tensor([1, 0, 3])).numpy())
[-2 -2 0]
print(Tensor([True, True, False, False]).xor(Tensor([True, False, True, False])).numpy())
[False True True False]
lshift¤
lshift(x: int)
Computes left arithmetic shift of self by x bits. self must have unsigned dtype.
Equivalent to self << x.
print(Tensor([1, 3, 31], dtype=dtypes.uint8).lshift(2).numpy())
[ 4 12 124]
rshift¤
rshift(x: int)
Computes right arithmetic shift of self by x bits. self must have unsigned dtype.
Equivalent to self >> x.
print(Tensor([4, 13, 125], dtype=dtypes.uint8).rshift(2).numpy())
[ 1 3 31]
pow¤
Computes self raised to the power of x. Equivalent to self ** x.
print(Tensor([-1, 2, 3]).pow(2).numpy())
[1 4 9]
print(Tensor([-1, 2, 3]).pow(Tensor([-1.5, 0.5, 1.5])).numpy())
[-2147483648 1 5]
(Here the fractional powers are computed in floating point and cast back to the integer dtype, so the NaN from (-1) ** -1.5 appears as the int32 minimum.)
print((2 ** Tensor([-1, 2, 3])).numpy())
[0.5 4. 8. ]
maximum¤
Computes element-wise maximum of self and x.
print(Tensor([-1, 2, 3]).maximum(1).numpy())
[1 2 3]
print(Tensor([-1, 2, 3]).maximum(Tensor([-4, -2, 9])).numpy())
[-1 2 9]
minimum¤
Computes element-wise minimum of self and x.
print(Tensor([-1, 2, 3]).minimum(1).numpy())
[-1 1 1]
print(Tensor([-1, 2, 3]).minimum(Tensor([-4, -2, 9])).numpy())
[-4 -2 3]
where¤
Returns a tensor of elements selected from either x or y, depending on self: output_i = x_i if self_i else y_i.
cond = Tensor([[True, True, False], [True, False, False]])
print(cond.where(1, 3).numpy())
[[1 1 3]
[1 3 3]]
Tensor.manual_seed(42)
cond = Tensor.randn(2, 3)
print(cond.numpy())
[[ 0.9779 0.4678 0.5526]
[-0.3288 -0.8555 0.2753]]
print((cond > 0).where(cond, -float("inf")).numpy())
[[0.9779 0.4678 0.5526]
[ -inf -inf 0.2753]]
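where composes naturally with comparison ops; a minimal sketch expressing relu with it:

from tinygrad import Tensor

t = Tensor([-2., -1., 0., 1., 2.])
print((t > 0).where(t, 0).numpy())  # [0. 0. 0. 1. 2.], same as t.relu().numpy()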
Casting Ops¤
cast¤
cast(dtype: DTypeLike) -> Tensor
Casts self to the given dtype.
t = Tensor([-1, 2.5, 3], dtype=dtypes.float)
print(t.dtype, t.numpy())
dtypes.float [-1. 2.5 3. ]
t = t.cast(dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.cast(dtypes.uint8)
print(t.dtype, t.numpy())
dtypes.uchar [255 2 3]
bitcast¤
bitcast(dtype: DTypeLike) -> Tensor
Bitcasts self to the given dtype of the same itemsize. self must not require a gradient.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.bitcast(dtypes.uint32)
print(t.dtype, t.numpy())
dtypes.uint [4294967295 2 3]
float¤
float() -> Tensor
Convenience method to cast self to a float32 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.float()
print(t.dtype, t.numpy())
dtypes.float [-1. 2. 3.]
half¤
half() -> Tensor
Convenience method to cast self to a float16 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.half()
print(t.dtype, t.numpy())
dtypes.half [-1. 2. 3.]
int¤
int() -> Tensor
Convenience method to cast self to an int32 Tensor.
t = Tensor([-1.5, -0.5, 0.0, 0.5, 1.5])
print(t.dtype, t.numpy())
dtypes.float [-1.5 -0.5 0. 0.5 1.5]
t = t.int()
print(t.dtype, t.numpy())
dtypes.int [-1 0 0 0 1]
bool¤
bool() -> Tensor
Convenience method to cast self to a bool Tensor.
t = Tensor([-1, 0, 1])
print(t.dtype, t.numpy())
dtypes.int [-1 0 1]
t = t.bool()
print(t.dtype, t.numpy())
dtypes.bool [ True False True]