Elementwise
Elementwise ops operate on a per-element basis; they don't change the shape of the tensor.
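As a quick sketch (assuming the usual from tinygrad import Tensor, as in the examples below), a unary op returns a tensor of the same shape:
t = Tensor([[1., -2.], [-3., 4.]])
print(t.shape, t.relu().shape)
(2, 2) (2, 2)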
Unary Ops (math)
logical_not
logical_not() -> Tensor
Computes the logical NOT of the tensor element-wise.
print(Tensor([False, True]).logical_not().numpy())
[ True False]
neg
neg() -> Tensor
Negates the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).neg().numpy())
[ 3. 2. 1. -0. -1. -2. -3.]
log
log() -> Tensor
Computes the natural logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log().numpy())
[0. 0.6931 1.3863 2.0794]
log2
log2() -> Tensor
Computes the base-2 logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log2().numpy())
[0. 1. 2. 3.]
exp
exp() -> Tensor
Computes the exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp().numpy())
[ 1. 2.7183 7.3891 20.0855]
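Since exp and log are inverses on positive inputs, a round trip recovers the input up to floating-point error; a small sketch:
t = Tensor([1., 2., 4., 8.])
print(t.log().exp().numpy())  # approximately [1. 2. 4. 8.]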
exp2
exp2() -> Tensor
Computes the base-2 exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp2().numpy())
[1. 2. 4. 8.]
sqrt
sqrt() -> Tensor
Computes the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).sqrt().numpy())
[1. 1.4142 1.7321 2. ]
rsqrt
rsqrt() -> Tensor
Computes the reciprocal of the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).rsqrt().numpy())
[1. 0.7071 0.5774 0.5 ]
sin
sin() -> Tensor
Computes the sine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).sin().numpy())
[ 0. 1. -0. -1. 0.]
cos
cos() -> Tensor
Computes the cosine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).cos().numpy())
[ 1.0000e+00 0.0000e+00 -1.0000e+00 -2.3842e-07 1.0000e+00]
tan
tan() -> Tensor
Computes the tangent of the tensor element-wise.
print(Tensor([0., math.pi/4, math.pi/2, 3*math.pi/4, math.pi]).tan().numpy())
[ 0. 1. inf -1. 0.]
asin
asin() -> Tensor
Computes the inverse sine (arcsine) of the tensor element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).asin().numpy())
[-1.1198 -0.6435 -0.3047 0. 0.3047 0.6435 1.1198]
acos
acos() -> Tensor
Computes the inverse cosine (arccosine) of the tensor element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).acos().numpy())
[2.6906 2.2143 1.8755 1.5708 1.2661 0.9273 0.451 ]
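As a sanity check, asin and acos are complements: asin(x) + acos(x) = pi/2 for x in [-1, 1]. A small sketch:
t = Tensor([-0.9, 0., 0.9])
print((t.asin() + t.acos()).numpy())  # approximately [1.5708 1.5708 1.5708]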
atan
atan() -> Tensor
Computes the inverse tangent (arctan) of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).atan().numpy())
[-1.249 -1.1071 -0.7854 0. 0.7854 1.1071 1.249 ]
trunc
trunc() -> Tensor
Truncates the tensor element-wise.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).trunc().numpy())
[-3. -2. -1. 0. 0. 1. 2. 3.]
ceil
ceil() -> Tensor
Rounds the tensor element-wise towards positive infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).ceil().numpy())
[-3. -2. -1. 0. 1. 2. 3. 4.]
floor
floor() -> Tensor
Rounds the tensor element-wise towards negative infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).floor().numpy())
[-4. -3. -2. -1. 0. 1. 2. 3.]
round
round() -> Tensor
Rounds the tensor element-wise with rounding half to even.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).round().numpy())
[-4. -2. -2. 0. 0. 2. 2. 4.]
isinf
Checks the tensor element-wise, returning True where the element is infinity and False otherwise.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isinf().numpy())
[False True False True False]
isnan
isnan() -> Tensor
Checks the tensor element-wise, returning True where the element is NaN and False otherwise.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isnan().numpy())
[False False False False True]
isfinite
isfinite() -> Tensor
Checks the tensor element-wise, returning True where the element is finite and False otherwise.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isfinite().numpy())
[ True False True False False]
lerp
Linearly interpolates between self and end by weight.
print(Tensor([1., 2., 3.]).lerp(Tensor([4., 5., 6.]), 0.5).numpy())
[2.5 3.5 4.5]
square
square() -> Tensor
Squares the tensor element-wise.
Equivalent to self*self.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).square().numpy())
[9. 4. 1. 0. 1. 4. 9.]
clamp
clamp(min_=None, max_=None) -> Tensor
Clips (clamps) the values in the tensor between min_ and max_ element-wise.
If min_ is None, there is no lower bound. If max_ is None, there is no upper bound.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).clip(-1, 1).numpy())
[-1. -1. -1. 0. 1. 1. 1.]
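Passing only one bound clamps on that side alone; a short sketch:
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).clamp(min_=0).numpy())  # [0. 0. 0. 0. 1. 2. 3.]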
clip
clip(min_=None, max_=None) -> Tensor
Alias for Tensor.clamp.
sign
sign() -> Tensor
Returns the sign of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sign().numpy())
[-1. -1. -1. 0. 1. 1. 1.]
abs
abs() -> Tensor
Computes the absolute value of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).abs().numpy())
[3. 2. 1. 0. 1. 2. 3.]
reciprocal
reciprocal() -> Tensor
Computes 1/x element-wise.
print(Tensor([1., 2., 3., 4.]).reciprocal().numpy())
[1. 0.5 0.3333 0.25 ]
Unary Ops (activation)
relu
relu() -> Tensor
Applies the Rectified Linear Unit (ReLU) function element-wise.
- Described: https://paperswithcode.com/method/relu
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).relu().numpy())
[0. 0. 0. 0. 1. 2. 3.]
sigmoid
sigmoid() -> Tensor
Applies the Sigmoid function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sigmoid().numpy())
[0.0474 0.1192 0.2689 0.5 0.7311 0.8808 0.9526]
hardsigmoid
Applies the Hardsigmoid function element-wise.
NOTE: the default alpha and beta values are taken from torch.
- Described: https://paperswithcode.com/method/hard-sigmoid
- See: https://pytorch.org/docs/stable/generated/torch.nn.functional.hardsigmoid.html
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardsigmoid().numpy())
[0. 0.1667 0.3333 0.5 0.6667 0.8333 1. ]
elu
elu(alpha=1.0) -> Tensor
Applies the Exponential Linear Unit (ELU) function element-wise.
- Described: https://paperswithcode.com/method/elu
- Paper: https://arxiv.org/abs/1511.07289v5
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).elu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
celu
celu(alpha=1.0) -> Tensor
Applies the Continuously differentiable Exponential Linear Unit (CELU) function element-wise.
- Described: https://paperswithcode.com/method/celu
- Paper: https://arxiv.org/abs/1704.07483
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).celu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
selu
selu(alpha=1.67326, gamma=1.0507) -> Tensor
Applies the Scaled Exponential Linear Unit (SELU) function element-wise.
- Described: https://paperswithcode.com/method/selu
- Paper: https://arxiv.org/abs/1706.02515v5
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).selu().numpy())
[-1.6706 -1.5202 -1.1113 0. 1.0507 2.1014 3.1521]
swish
swish() -> Tensor
See .silu().
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).swish().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
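Since swish is an alias for silu, the two match element for element; a quick check:
t = Tensor([-2., 0., 2.])
print((t.swish() == t.silu()).numpy())  # [ True True True]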
silu
silu() -> Tensor
Applies the Sigmoid Linear Unit (SiLU) function element-wise.
- Described: https://paperswithcode.com/method/silu
- Paper: https://arxiv.org/abs/1606.08415
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).silu().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
relu6
relu6() -> Tensor
Applies the ReLU6 function element-wise.
- Described: https://paperswithcode.com/method/relu6
- Paper: https://arxiv.org/abs/1704.04861v1
print(Tensor([-9., -6., -3., 0., 3., 6., 9.]).relu6().numpy())
[0. 0. 0. 0. 3. 6. 6.]
hardswish
hardswish() -> Tensor
Applies the Hardswish function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardswish().numpy())
[-0. -0.3333 -0.3333 0. 0.6667 1.6667 3. ]
tanh
tanh() -> Tensor
Applies the Hyperbolic Tangent (tanh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).tanh().numpy())
[-0.9951 -0.964 -0.7616 0. 0.7616 0.964 0.9951]
sinh
sinh() -> Tensor
Applies the Hyperbolic Sine (sinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sinh().numpy())
[-10.0179 -3.6269 -1.1752 0. 1.1752 3.6269 10.0179]
cosh
cosh() -> Tensor
Applies the Hyperbolic Cosine (cosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).cosh().numpy())
[10.0677 3.7622 1.5431 1. 1.5431 3.7622 10.0677]
atanh
atanh() -> Tensor
Applies the Inverse Hyperbolic Tangent (atanh) function element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).atanh().numpy())
[-1.4722 -0.6931 -0.3095 0. 0.3095 0.6931 1.4722]
asinh
asinh() -> Tensor
Applies the Inverse Hyperbolic Sine (asinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).asinh().numpy())
[-1.8184 -1.4436 -0.8814 0. 0.8814 1.4436 1.8184]
acosh
acosh() -> Tensor
Applies the Inverse Hyperbolic Cosine (acosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).acosh().numpy())
[ nan nan nan nan 0. 1.317 1.7627]
hardtanh
hardtanh(min_val=-1, max_val=1) -> Tensor
Applies the Hardtanh function element-wise.
print(Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).hardtanh().numpy())
[-1. -1. -0.5 0. 0.5 1. 1. ]
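The saturation points can be moved via min_val and max_val; a short sketch:
print(Tensor([-3., -1.5, 0., 1.5, 3.]).hardtanh(min_val=-2, max_val=2).numpy())  # [-2. -1.5 0. 1.5 2. ]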
erf
erf() -> Tensor
Applies the error function element-wise.
- Described: https://en.wikipedia.org/wiki/Error_function
print(Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).erf().numpy())
[-0.9661 -0.8427 -0.5205 0. 0.5205 0.8427 0.9661]
gelu
gelu() -> Tensor
Applies the Gaussian Error Linear Unit (GELU) function element-wise.
- Described: https://paperswithcode.com/method/gelu
- Paper: https://arxiv.org/abs/1606.08415v5
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).gelu().numpy())
[-0.0036 -0.0454 -0.1588 0. 0.8412 1.9546 2.9964]
quick_gelu
quick_gelu() -> Tensor
Applies the Sigmoid GELU approximation element-wise.
- Described: https://paperswithcode.com/method/gelu
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).quick_gelu().numpy())
[-0.0181 -0.0643 -0.1542 0. 0.8458 1.9357 2.9819]
leaky_relu
leaky_relu(neg_slope=0.01) -> Tensor
Applies the Leaky ReLU function element-wise.
- Described: https://paperswithcode.com/method/leaky-relu
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leaky_relu().numpy())
[-0.03 -0.02 -0.01 0. 1. 2. 3. ]
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leaky_relu(neg_slope=0.42).numpy())
[-1.26 -0.84 -0.42 0. 1. 2. 3. ]
mish
mish() -> Tensor
Applies the Mish function element-wise.
- Described: https://paperswithcode.com/method/mish
- Paper: https://arxiv.org/abs/1908.08681v3
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).mish().numpy())
[-0.1456 -0.2525 -0.3034 0. 0.8651 1.944 2.9865]
softplus
softplus(beta=1) -> Tensor
Applies the Softplus function element-wise.
- Described: https://paperswithcode.com/method/softplus
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softplus().numpy())
[0.0486 0.1269 0.3133 0.6931 1.3133 2.1269 3.0486]
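Assuming the standard softplus definition log(1 + exp(beta*x)) / beta, larger beta values push it towards relu; a sketch:
print(Tensor([-3., 0., 3.]).softplus(beta=10).numpy())  # approximately [0. 0.0693 3. ]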
softsign
softsign() -> Tensor
Applies the Softsign function element-wise.
- Described: https://paperswithcode.com/method/softsign
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softsign().numpy())
[-0.75 -0.6667 -0.5 0. 0.5 0.6667 0.75 ]
Elementwise Ops (broadcasted)
add
Adds self and x.
Equivalent to self + x.
Supports broadcasting to a common shape, type promotion, and integer, float, and boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.add(20).numpy())
[19.4856 21.085 20.9089 19.9159]
print(t.add(Tensor([[2.0], [3.5]])).numpy())
[[1.4856 3.085 2.9089 1.9159]
[2.9856 4.585 4.4089 3.4159]]
sub
Subtracts x from self.
Equivalent to self - x.
Supports broadcasting to a common shape, type promotion, and integer, float, and boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.sub(20).numpy())
[-20.5144 -18.915 -19.0911 -20.0841]
print(t.sub(Tensor([[2.0], [3.5]])).numpy())
[[-2.5144 -0.915 -1.0911 -2.0841]
[-4.0144 -2.415 -2.5911 -3.5841]]
mul
Multiplies self and x.
Equivalent to self * x.
Supports broadcasting to a common shape, type promotion, and integer, float, and boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.mul(3).numpy())
[-1.5431 3.2549 2.7267 -0.2523]
print(t.mul(Tensor([[-1.0], [2.0]])).numpy())
[[ 0.5144 -1.085 -0.9089 0.0841]
[-1.0287 2.17 1.8178 -0.1682]]
div
div(
x: Tensor | ConstType,
reverse=False,
rounding_mode: Literal["trunc", "floor"] | None = None,
) -> Tensor
Divides self by x.
Equivalent to self / x.
Supports broadcasting to a common shape, type promotion, and integer, float, and boolean inputs.
div performs true division.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.div(3).numpy())
[-0.1715 0.3617 0.303 -0.028 ]
print(Tensor([1, 4, 10]).div(Tensor([2, 3, 4])).numpy())
[0.5 1.3333 2.5 ]
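The rounding_mode parameter switches from true division to a truncated or floored quotient; a short sketch:
print(Tensor([7, -7]).div(Tensor([2, 2]), rounding_mode="trunc").numpy())  # [ 3 -3]
print(Tensor([7, -7]).div(Tensor([2, 2]), rounding_mode="floor").numpy())  # [ 3 -4]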
idiv
Divides self by x.
Equivalent to self // x.
Supports broadcasting to a common shape, type promotion, and integer inputs.
idiv performs integer division (truncating towards zero).
print(Tensor([-4, 7, 5, 4, -7, 8]).idiv(Tensor([2, -3, 8, -2, 3, 5])).numpy())
[-2 -2 0 -2 -2 1]
mod
Computes self modulo x.
Equivalent to self % x.
Supports broadcasting to a common shape, type promotion, and integer inputs.
print(Tensor([-4, 7, 5, 4, -7, 8]).mod(Tensor([2, -3, 8, -2, 3, 5])).numpy())
[ 0 -2 5 0 2 3]
bitwise_xor
Computes the bitwise XOR of self and x.
Equivalent to self ^ x.
Supports broadcasting to a common shape, type promotion, and integer and boolean inputs.
print(Tensor([-1, -2, 3]).bitwise_xor(Tensor([1, 0, 3])).numpy())
[-2 -2 0]
print(Tensor([True, True, False, False]).bitwise_xor(Tensor([True, False, True, False])).numpy())
[False True True False]
bitwise_and
Computes the bitwise AND of self and x.
Equivalent to self & x.
Supports broadcasting to a common shape, type promotion, and integer and boolean inputs.
print(Tensor([2, 5, 255]).bitwise_and(Tensor([3, 14, 16])).numpy())
[ 2 4 16]
print(Tensor([True, True, False, False]).bitwise_and(Tensor([True, False, True, False])).numpy())
[ True False False False]
bitwise_or
Computes the bitwise OR of self and x.
Equivalent to self | x.
Supports broadcasting to a common shape, type promotion, and integer and boolean inputs.
print(Tensor([2, 5, 255]).bitwise_or(Tensor([4, 4, 4])).numpy())
[ 6 5 255]
print(Tensor([True, True, False, False]).bitwise_or(Tensor([True, False, True, False])).numpy())
[ True True True False]
bitwise_not
bitwise_not() -> Tensor
Computes the bitwise NOT of self.
Equivalent to ~self.
print(Tensor([0, 2, 5, 255], dtype="int8").bitwise_not().numpy())
[-1 -3 -6 0]
print(Tensor([True, False]).bitwise_not().numpy())
[False True]
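For signed integers, bitwise NOT follows the two's-complement identity ~x == -x - 1; a quick check:
t = Tensor([0, 2, 5], dtype="int8")
print((t.bitwise_not() == -t - 1).numpy())  # [ True True True]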
lshift
Computes the left arithmetic shift of self by x bits. self must have an unsigned dtype.
Equivalent to self << x.
print(Tensor([1, 3, 31], dtype=dtypes.uint8).lshift(2).numpy())
[ 4 12 124]
rshift
Computes the right arithmetic shift of self by x bits. self must have an unsigned dtype.
Equivalent to self >> x.
print(Tensor([4, 13, 125], dtype=dtypes.uint8).rshift(2).numpy())
[ 1 3 31]
pow
Raises self to the power x.
Equivalent to self ** x.
print(Tensor([-1, 2, 3]).pow(2.0).numpy())
[1 4 9]
print(Tensor([-1, 2, 3]).pow(Tensor([-1.5, 0.5, 1.5])).numpy())
[-2147483648 1 5]
print((2.0 ** Tensor([-1, 2, 3])).numpy())
[0 4 8]
maximum
Computes the element-wise maximum of self and x.
print(Tensor([-1, 2, 3]).maximum(1).numpy())
[1 2 3]
print(Tensor([-1, 2, 3]).maximum(Tensor([-4, -2, 9])).numpy())
[-1 2 9]
minimum
Computes the element-wise minimum of self and x.
print(Tensor([-1, 2, 3]).minimum(1).numpy())
[-1 1 1]
print(Tensor([-1, 2, 3]).minimum(Tensor([-4, -2, 9])).numpy())
[-4 -2 3]
where
Returns a tensor of elements selected from either x or y, depending on self.
output_i = x_i if self_i else y_i.
cond = Tensor([[True, True, False], [True, False, False]])
print(cond.where(1, 3).numpy())
[[1 1 3]
[1 3 3]]
Tensor.manual_seed(42)
cond = Tensor.randn(2, 3)
print(cond.numpy())
[[ 0.9779 0.4678 0.5526]
[-0.3288 -0.8555 0.2753]]
print((cond > 0).where(cond, -float("inf")).numpy())
[[0.9779 0.4678 0.5526]
[ -inf -inf 0.2753]]
copysign
copysign(other) -> Tensor
Returns a tensor with the magnitude of self and the sign of other, element-wise.
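A minimal usage sketch:
print(Tensor([-3., 2., -1.]).copysign(Tensor([1., -1., 1.])).numpy())  # [ 3. -2. 1.]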
Casting Ops
cast
cast(dtype: DTypeLike) -> Tensor
Casts self to the given dtype.
t = Tensor([-1, 2.5, 3], dtype=dtypes.float)
print(t.dtype, t.numpy())
dtypes.float [-1. 2.5 3. ]
t = t.cast(dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.cast(dtypes.uint8)
print(t.dtype, t.numpy())
dtypes.uchar [255 2 3]
bitcast
bitcast(dtype: DTypeLike) -> Tensor
Bitcasts self to the given dtype of the same itemsize.
self must not require a gradient.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.bitcast(dtypes.uint32)
print(t.dtype, t.numpy())
dtypes.uint [4294967295 2 3]
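Bitcasting is also a handy way to inspect IEEE-754 float bits; a sketch:
t = Tensor([1.0], dtype=dtypes.float)
print(t.bitcast(dtypes.uint32).numpy())  # [1065353216], i.e. 0x3F800000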
float
float() -> Tensor
Convenience method to cast self to a float32 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.float()
print(t.dtype, t.numpy())
dtypes.float [-1. 2. 3.]
half
half() -> Tensor
Convenience method to cast self to a float16 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.half()
print(t.dtype, t.numpy())
dtypes.half [-1. 2. 3.]
int
int() -> Tensor
Convenience method to cast self to an int32 Tensor.
t = Tensor([-1.5, -0.5, 0.0, 0.5, 1.5])
print(t.dtype, t.numpy())
dtypes.float [-1.5 -0.5 0. 0.5 1.5]
t = t.int()
print(t.dtype, t.numpy())
dtypes.int [-1 0 0 0 1]
bool
bool() -> Tensor
Convenience method to cast self to a bool Tensor.
t = Tensor([-1, 0, 1])
print(t.dtype, t.numpy())
dtypes.int [-1 0 1]
t = t.bool()
print(t.dtype, t.numpy())
dtypes.bool [ True False True]