Elementwise
Elementwise ops operate on a per-element basis; they don't change the shape of the tensor.
Unary Ops (math)
logical_not
logical_not() -> Tensor
Computes the logical NOT of the tensor element-wise.
print(Tensor([False, True]).logical_not().numpy())
[ True False]
Source code in tinygrad/tensor.py
neg
neg() -> Tensor
Negates the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).neg().numpy())
[ 3. 2. 1. -0. -1. -2. -3.]
Source code in tinygrad/tensor.py
log
log() -> Tensor
Computes the natural logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log().numpy())
[0. 0.6931 1.3863 2.0794]
Source code in tinygrad/tensor.py
log2
log2() -> Tensor
Computes the base-2 logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log2().numpy())
[0. 1. 2. 3.]
Source code in tinygrad/tensor.py
exp
exp() -> Tensor
Computes the exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp().numpy())
[ 1. 2.7183 7.3891 20.0855]
Source code in tinygrad/tensor.py
exp2
exp2() -> Tensor
Computes the base-2 exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp2().numpy())
[1. 2. 4. 8.]
Source code in tinygrad/tensor.py
sqrt
sqrt() -> Tensor
Computes the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).sqrt().numpy())
[1. 1.4142 1.7321 2. ]
Source code in tinygrad/tensor.py
rsqrt
rsqrt()
Computes the reciprocal of the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).rsqrt().numpy())
[1. 0.7071 0.5774 0.5 ]
Source code in tinygrad/mixin/math.py
sin
sin() -> Tensor
Computes the sine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).sin().numpy())
[ 0. 1. -0. -1. 0.]
Source code in tinygrad/tensor.py
cos
cos() -> Tensor
Computes the cosine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).cos().numpy())
[ 1.0000e+00 0.0000e+00 -1.0000e+00 -2.3842e-07 1.0000e+00]
Source code in tinygrad/tensor.py
tan
tan() -> Tensor
Computes the tangent of the tensor element-wise.
print(Tensor([0., math.pi/4, math.pi/2, 3*math.pi/4, math.pi]).tan().numpy())
[ 0. 1. inf -1. 0.]
Source code in tinygrad/tensor.py
asin
asin() -> Tensor
Computes the inverse sine (arcsine) of the tensor element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).asin().numpy())
[-1.1198 -0.6435 -0.3047 0. 0.3047 0.6435 1.1198]
Source code in tinygrad/tensor.py
acos
acos() -> Tensor
Computes the inverse cosine (arccosine) of the tensor element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).acos().numpy())
[2.6906 2.2143 1.8755 1.5708 1.2661 0.9273 0.451 ]
Source code in tinygrad/tensor.py
atan
atan() -> Tensor
Computes the inverse tangent (arctan) of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).atan().numpy())
[-1.249 -1.1071 -0.7854 0. 0.7854 1.1071 1.249 ]
Source code in tinygrad/tensor.py
trunc
trunc()
Truncates the tensor element-wise towards zero.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).trunc().numpy())
[-3. -2. -1. -0. 0. 1. 2. 3.]
Source code in tinygrad/mixin/math.py
ceil
ceil()
Rounds the tensor element-wise towards positive infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).ceil().numpy())
[-3. -2. -1. -0. 1. 2. 3. 4.]
Source code in tinygrad/mixin/math.py
floor
floor()
Rounds the tensor element-wise towards negative infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).floor().numpy())
[-4. -3. -2. -1. 0. 1. 2. 3.]
Source code in tinygrad/mixin/math.py
round
round() -> Tensor
Rounds the tensor element-wise with rounding half to even.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).round().numpy())
[-4. -2. -2. 0. 0. 2. 2. 4.]
Source code in tinygrad/tensor.py
isinf
isinf()
Checks the tensor element-wise, returning True where the element is infinity and False otherwise.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isinf().numpy())
[False True False True False]
Source code in tinygrad/mixin/math.py
isnan
isnan()
Checks the tensor element-wise, returning True where the element is NaN and False otherwise.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isnan().numpy())
[False False False False True]
Source code in tinygrad/mixin/math.py
isfinite
isfinite()
Checks the tensor element-wise, returning True where the element is finite and False otherwise.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isfinite().numpy())
[ True False True False False]
Source code in tinygrad/mixin/math.py
lerp
Linearly interpolates between self and end by weight.
print(Tensor([1., 2., 3.]).lerp(Tensor([4., 5., 6.]), 0.5).numpy())
[2.5 3.5 4.5]
Source code in tinygrad/tensor.py
square
square()
Squares the tensor element-wise.
Equivalent to self*self.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).square().numpy())
[9. 4. 1. 0. 1. 4. 9.]
Source code in tinygrad/mixin/math.py
clamp
clamp(min_=None, max_=None)
Clips (clamps) the values in the tensor between min_ and max_ element-wise.
If min_ is None, there is no lower bound. If max_ is None, there is no upper bound.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).clip(-1, 1).numpy())
[-1. -1. -1. 0. 1. 1. 1.]
Source code in tinygrad/mixin/math.py
clip
clip(min_=None, max_=None)
Alias for Tensor.clamp.
Source code in tinygrad/mixin/math.py
sign
sign() -> Tensor
Returns the sign of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sign().numpy())
[-1. -1. -1. 0. 1. 1. 1.]
Source code in tinygrad/tensor.py
abs
abs() -> Tensor
Computes the absolute value of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).abs().numpy())
[3. 2. 1. 0. 1. 2. 3.]
Source code in tinygrad/tensor.py
reciprocal
reciprocal() -> Tensor
Computes 1/x element-wise.
print(Tensor([1., 2., 3., 4.]).reciprocal().numpy())
[1. 0.5 0.3333 0.25 ]
Source code in tinygrad/tensor.py
Unary Ops (activation)
relu
relu()
Applies the Rectified Linear Unit (ReLU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).relu().numpy())
[0. 0. 0. 0. 1. 2. 3.]
Source code in tinygrad/mixin/math.py
sigmoid
sigmoid()
Applies the Sigmoid function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sigmoid().numpy())
[0.0474 0.1192 0.2689 0.5 0.7311 0.8808 0.9526]
Source code in tinygrad/mixin/math.py
logsigmoid
logsigmoid() -> Tensor
Applies the LogSigmoid function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).logsigmoid().numpy())
[-3.0486 -2.1269 -1.3133 -0.6931 -0.3133 -0.1269 -0.0486]
Source code in tinygrad/tensor.py
hardsigmoid
Applies the Hardsigmoid function element-wise.
NOTE: default alpha and beta values are taken from torch
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardsigmoid().numpy())
[0. 0.1667 0.3333 0.5 0.6667 0.8333 1. ]
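With torch's default alpha and beta, this reduces to the closed form relu6(x + 3) / 6. A plain-Python check of that formula against the values above (an illustrative sketch, not tinygrad's implementation):

```python
def hardsigmoid(x: float) -> float:
    # relu6(x + 3) / 6, where relu6(v) = min(max(v, 0), 6)
    return min(max(x + 3.0, 0.0), 6.0) / 6.0

print([round(hardsigmoid(x), 4) for x in [-3., -2., -1., 0., 1., 2., 3.]])
# [0.0, 0.1667, 0.3333, 0.5, 0.6667, 0.8333, 1.0]
```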
Source code in tinygrad/mixin/math.py
elu
elu(alpha=1.0) -> Tensor
Applies the Exponential Linear Unit (ELU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).elu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
Source code in tinygrad/tensor.py
celu
celu(alpha=1.0) -> Tensor
Applies the Continuously differentiable Exponential Linear Unit (CELU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).celu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
Source code in tinygrad/tensor.py
selu
selu(alpha=1.67326, gamma=1.0507) -> Tensor
Applies the Scaled Exponential Linear Unit (SELU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).selu().numpy())
[-1.6706 -1.5202 -1.1113 0. 1.0507 2.1014 3.1521]
Source code in tinygrad/tensor.py
swish
swish()
See .silu()
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).swish().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
Source code in tinygrad/mixin/math.py
silu
silu()
Applies the Sigmoid Linear Unit (SiLU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).silu().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
Source code in tinygrad/mixin/math.py
relu6
relu6()
Applies the ReLU6 function element-wise.
print(Tensor([-9., -6., -3., 0., 3., 6., 9.]).relu6().numpy())
[0. 0. 0. 0. 3. 6. 6.]
Source code in tinygrad/mixin/math.py
hardswish
hardswish()
Applies the Hardswish function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardswish().numpy())
[-0. -0.3333 -0.3333 0. 0.6667 1.6667 3. ]
Source code in tinygrad/mixin/math.py
tanh
tanh()
Applies the Hyperbolic Tangent (tanh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).tanh().numpy())
[-0.9951 -0.964 -0.7616 0. 0.7616 0.964 0.9951]
Source code in tinygrad/mixin/math.py
sinh
sinh() -> Tensor
Applies the Hyperbolic Sine (sinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sinh().numpy())
[-10.0179 -3.6269 -1.1752 0. 1.1752 3.6269 10.0179]
Source code in tinygrad/tensor.py
cosh
cosh() -> Tensor
Applies the Hyperbolic Cosine (cosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).cosh().numpy())
[10.0677 3.7622 1.5431 1. 1.5431 3.7622 10.0677]
Source code in tinygrad/tensor.py
atanh
atanh() -> Tensor
Applies the Inverse Hyperbolic Tangent (atanh) function element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).atanh().numpy())
[-1.4722 -0.6931 -0.3095 0. 0.3095 0.6931 1.4722]
Source code in tinygrad/tensor.py
asinh
asinh() -> Tensor
Applies the Inverse Hyperbolic Sine (asinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).asinh().numpy())
[-1.8184 -1.4436 -0.8814 0. 0.8814 1.4436 1.8184]
Source code in tinygrad/tensor.py
acosh
acosh() -> Tensor
Applies the Inverse Hyperbolic Cosine (acosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).acosh().numpy())
[ nan nan nan nan 0. 1.317 1.7627]
Source code in tinygrad/tensor.py
hardtanh
hardtanh(min_val=-1, max_val=1)
Applies the Hardtanh function element-wise.
print(Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).hardtanh().numpy())
[-1. -1. -0.5 0. 0.5 1. 1. ]
Source code in tinygrad/mixin/math.py
erf
erf() -> Tensor
Applies error function element-wise.
- Described: https://en.wikipedia.org/wiki/Error_function
print(Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).erf().numpy())
[-0.9661 -0.8427 -0.5205 0. 0.5205 0.8427 0.9661]
Source code in tinygrad/tensor.py
gelu
gelu()
Applies the Gaussian Error Linear Unit (GELU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).gelu().numpy())
[-0.0036 -0.0454 -0.1588 0. 0.8412 1.9546 2.9964]
Source code in tinygrad/mixin/math.py
quick_gelu
quick_gelu()
Applies the Sigmoid GELU approximation element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).quick_gelu().numpy())
[-0.0181 -0.0643 -0.1542 0. 0.8458 1.9357 2.9819]
Source code in tinygrad/mixin/math.py
leaky_relu
leaky_relu(neg_slope=0.01)
Applies the Leaky ReLU function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leaky_relu().numpy())
[-0.03 -0.02 -0.01 0. 1. 2. 3. ]
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leaky_relu(neg_slope=0.42).numpy())
[-1.26 -0.84 -0.42 0. 1. 2. 3. ]
Source code in tinygrad/mixin/math.py
mish
mish() -> Tensor
Applies the Mish function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).mish().numpy())
[-0.1456 -0.2525 -0.3034 0. 0.8651 1.944 2.9865]
Source code in tinygrad/tensor.py
softplus
softplus(beta=1.0) -> Tensor
Applies the Softplus function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softplus().numpy())
[0.0486 0.1269 0.3133 0.6931 1.3133 2.1269 3.0486]
Source code in tinygrad/tensor.py
softsign
softsign() -> Tensor
Applies the Softsign function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softsign().numpy())
[-0.75 -0.6667 -0.5 0. 0.5 0.6667 0.75 ]
Source code in tinygrad/tensor.py
Elementwise Ops (broadcasted)
add
Adds self and x.
Equivalent to self + x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.add(20).numpy())
[19.4856 21.085 20.9089 19.9159]
print(t.add(Tensor([[2.0], [3.5]])).numpy())
[[1.4856 3.085 2.9089 1.9159]
[2.9856 4.585 4.4089 3.4159]]
Source code in tinygrad/mixin/math.py
sub
Subtracts x from self.
Equivalent to self - x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.sub(20).numpy())
[-20.5144 -18.915 -19.0911 -20.0841]
print(t.sub(Tensor([[2.0], [3.5]])).numpy())
[[-2.5144 -0.915 -1.0911 -2.0841]
[-4.0144 -2.415 -2.5911 -3.5841]]
Source code in tinygrad/tensor.py
mul
Multiplies self and x.
Equivalent to self * x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.mul(3).numpy())
[-1.5431 3.2549 2.7267 -0.2523]
print(t.mul(Tensor([[-1.0], [2.0]])).numpy())
[[ 0.5144 -1.085 -0.9089 0.0841]
[-1.0287 2.17 1.8178 -0.1682]]
Source code in tinygrad/mixin/math.py
div
div(
x: Tensor | ConstType,
reverse=False,
rounding_mode: Literal["trunc", "floor"] | None = None,
) -> Tensor
Divides self by x.
Equivalent to self / x.
Supports broadcasting to a common shape, type promotion, and integer, float, boolean inputs.
div performs true division.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.div(3).numpy())
[-0.1715 0.3617 0.303 -0.028 ]
print(Tensor([1, 4, 10]).div(Tensor([2, 3, 4])).numpy())
[0.5 1.3333 2.5 ]
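The rounding_mode parameter selects how the quotient is rounded. A plain-Python sketch of the two modes, assuming they follow the usual trunc/floor definitions:

```python
import math

a, b = -7, 2
print(a / b)              # true division (rounding_mode=None): -3.5
print(math.trunc(a / b))  # rounding_mode="trunc": round towards zero -> -3
print(math.floor(a / b))  # rounding_mode="floor": round towards -inf -> -4
```

The two modes only differ for negative quotients, where truncation rounds up and flooring rounds down.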
Source code in tinygrad/tensor.py
idiv
Divides self by x.
Equivalent to self // x.
Supports broadcasting to a common shape, type promotion, and integer inputs.
idiv performs integer division (truncate towards zero).
print(Tensor([-4, 7, 5, 4, -7, 8]).idiv(Tensor([2, -3, 8, -2, 3, 5])).numpy())
[-2 -2 0 -2 -2 1]
Source code in tinygrad/mixin/math.py
mod
Mod self by x.
Equivalent to self % x.
Supports broadcasting to a common shape, type promotion, and integer inputs.
print(Tensor([-4, 7, 5, 4, -7, 8]).mod(Tensor([2, -3, 8, -2, 3, 5])).numpy())
[ 0 -2 5 0 2 3]
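The outputs above follow floored-modulo semantics, where the result takes the sign of the divisor. This matches Python's % operator on the same inputs:

```python
# Python's % is floored modulo: the result has the divisor's sign
pairs = [(-4, 2), (7, -3), (5, 8), (4, -2), (-7, 3), (8, 5)]
print([a % b for a, b in pairs])  # [0, -2, 5, 0, 2, 3]
```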
Source code in tinygrad/tensor.py
bitwise_xor
Computes bitwise xor of self and x.
Equivalent to self ^ x.
Supports broadcasting to a common shape, type promotion, and integer, boolean inputs.
print(Tensor([-1, -2, 3]).bitwise_xor(Tensor([1, 0, 3])).numpy())
[-2 -2 0]
print(Tensor([True, True, False, False]).bitwise_xor(Tensor([True, False, True, False])).numpy())
[False True True False]
Source code in tinygrad/mixin/math.py
bitwise_and
Computes the bitwise AND of self and x.
Equivalent to self & x.
Supports broadcasting to a common shape, type promotion, and integer, boolean inputs.
print(Tensor([2, 5, 255]).bitwise_and(Tensor([3, 14, 16])).numpy())
[ 2 4 16]
print(Tensor([True, True, False, False]).bitwise_and(Tensor([True, False, True, False])).numpy())
[ True False False False]
Source code in tinygrad/mixin/math.py
bitwise_or
Computes the bitwise OR of self and x.
Equivalent to self | x.
Supports broadcasting to a common shape, type promotion, and integer, boolean inputs.
print(Tensor([2, 5, 255]).bitwise_or(Tensor([4, 4, 4])).numpy())
[ 6 5 255]
print(Tensor([True, True, False, False]).bitwise_or(Tensor([True, False, True, False])).numpy())
[ True True True False]
Source code in tinygrad/mixin/math.py
bitwise_not
bitwise_not() -> Tensor
Computes the bitwise NOT of self.
Equivalent to ~self.
print(Tensor([0, 2, 5, 255], dtype="int8").bitwise_not().numpy())
[-1 -3 -6 0]
print(Tensor([True, False]).bitwise_not().numpy())
[False True]
Source code in tinygrad/tensor.py
lshift
Computes left arithmetic shift of self by x bits. self must have unsigned dtype.
Equivalent to self << x.
print(Tensor([1, 3, 31], dtype=dtypes.uint8).lshift(2).numpy())
[ 4 12 124]
Source code in tinygrad/tensor.py
rshift
Computes right arithmetic shift of self by x bits. self must have unsigned dtype.
Equivalent to self >> x.
print(Tensor([4, 13, 125], dtype=dtypes.uint8).rshift(2).numpy())
[ 1 3 31]
Source code in tinygrad/tensor.py
pow
Computes power of self with x.
Equivalent to self ** x.
print(Tensor([-1, 2, 3]).pow(2.0).numpy())
[1 4 9]
print(Tensor([-1, 2, 3]).pow(Tensor([-1.5, 0.5, 1.5])).numpy())
[-2147483648 1 5]
print((2.0 ** Tensor([-1, 2, 3])).numpy())
[0.5 4. 8. ]
Source code in tinygrad/tensor.py
maximum
Computes element-wise maximum of self and x.
print(Tensor([-1, 2, 3]).maximum(1).numpy())
[1 2 3]
print(Tensor([-1, 2, 3]).maximum(Tensor([-4, -2, 9])).numpy())
[-1 2 9]
Source code in tinygrad/tensor.py
minimum
Computes element-wise minimum of self and x.
print(Tensor([-1, 2, 3]).minimum(1).numpy())
[-1 1 1]
print(Tensor([-1, 2, 3]).minimum(Tensor([-4, -2, 9])).numpy())
[-4 -2 3]
Source code in tinygrad/tensor.py
where
Returns a tensor of elements selected from either x or y, depending on self.
output_i = x_i if self_i else y_i.
cond = Tensor([[True, True, False], [True, False, False]])
print(cond.where(1, 3).numpy())
[[1 1 3]
[1 3 3]]
Tensor.manual_seed(42)
cond = Tensor.randn(2, 3)
print(cond.numpy())
[[ 0.9779 0.4678 0.5526]
[-0.3288 -0.8555 0.2753]]
print((cond > 0).where(cond, -float("inf")).numpy())
[[0.9779 0.4678 0.5526]
[ -inf -inf 0.2753]]
Source code in tinygrad/tensor.py
copysign
copysign(other) -> Tensor
Returns a tensor with the magnitude of self and the sign of other, element-wise.
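The source docstring has no example; for illustration, the same per-element semantics in plain Python with math.copysign (a sketch, not the tinygrad implementation):

```python
import math

# keep each element's magnitude, take the sign from `other`
magnitudes = [-1., 2., -3.]
signs = [1., -1., 1.]
print([math.copysign(m, s) for m, s in zip(magnitudes, signs)])
# [1.0, -2.0, 3.0]
```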
Source code in tinygrad/tensor.py
logaddexp
logaddexp(other) -> Tensor
Calculates (self.exp()+other.exp()).log(), elementwise.
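Computing this expression literally overflows for large inputs, since exp() saturates long before the final log(). A numerically stable plain-Python equivalent (a sketch of the standard max-shift identity, not tinygrad's code):

```python
import math

def logaddexp(a: float, b: float) -> float:
    # log(exp(a) + exp(b)) = max(a, b) + log1p(exp(-|a - b|))
    hi, lo = max(a, b), min(a, b)
    return hi + math.log1p(math.exp(lo - hi))

print(logaddexp(1000.0, 1000.0))  # 1000 + log(2) ~ 1000.6931, no overflow
```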
Source code in tinygrad/tensor.py
Casting Ops
cast
cast(dtype: DTypeLike) -> Tensor
Casts self to the given dtype.
t = Tensor([-1, 2.5, 3], dtype=dtypes.float)
print(t.dtype, t.numpy())
dtypes.float [-1. 2.5 3. ]
t = t.cast(dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.cast(dtypes.uint8)
print(t.dtype, t.numpy())
dtypes.uchar [255 2 3]
Source code in tinygrad/tensor.py
bitcast
bitcast(dtype: DTypeLike) -> Tensor
Bitcasts self to the given dtype of the same itemsize.
self must not require a gradient.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.bitcast(dtypes.uint32)
print(t.dtype, t.numpy())
dtypes.uint [4294967295 2 3]
Source code in tinygrad/tensor.py
float
float() -> Tensor
Convenience method to cast self to a float32 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.float()
print(t.dtype, t.numpy())
dtypes.float [-1. 2. 3.]
Source code in tinygrad/tensor.py
half
half() -> Tensor
Convenience method to cast self to a float16 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.half()
print(t.dtype, t.numpy())
dtypes.half [-1. 2. 3.]
Source code in tinygrad/tensor.py
int
int() -> Tensor
Convenience method to cast self to an int32 Tensor.
t = Tensor([-1.5, -0.5, 0.0, 0.5, 1.5])
print(t.dtype, t.numpy())
dtypes.float [-1.5 -0.5 0. 0.5 1.5]
t = t.int()
print(t.dtype, t.numpy())
dtypes.int [-1 0 0 0 1]
Source code in tinygrad/tensor.py
bool
bool() -> Tensor
Convenience method to cast self to a bool Tensor.
t = Tensor([-1, 0, 1])
print(t.dtype, t.numpy())
dtypes.int [-1 0 1]
t = t.bool()
print(t.dtype, t.numpy())
dtypes.bool [ True False True]
Source code in tinygrad/tensor.py