Lesser forklet to learn tinygrad. https://spacecruft.org/deepcrayon/tinygrab

README.md


[Unit Tests badge]

For something in between a pytorch and a karpathy/micrograd

pip3 install tinygrad

This may not be the best deep learning framework, but it is a deep learning framework.

The Tensor class is a wrapper around a numpy array, except it does Tensor things.
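
Since it is just a wrapper, you can pull the raw numpy array back out. A minimal sketch (assuming the underlying array is exposed as .data, which may differ between versions):

import numpy as np
from tinygrad.tensor import Tensor

# wrap an existing numpy array in a Tensor
t = Tensor(np.arange(9, dtype=np.float32).reshape(3, 3))

# the underlying numpy array is assumed to live in .data
print(type(t.data))   # <class 'numpy.ndarray'>
print(t.data.shape)   # (3, 3)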

Example

from tinygrad.tensor import Tensor

x = Tensor.eye(3)
y = Tensor([[2.0,0,-2.0]])
z = y.dot(x).sum()
z.backward()

print(x.grad)  # dz/dx
print(y.grad)  # dz/dy
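
These gradients can be checked by hand: z = sum_ij y[0,i] * x[i,j], so dz/dx[i,j] = y[0,i] (each row of dz/dx repeats the matching entry of y) and dz/dy[0,i] = sum_j x[i,j] (the row sums of x). With x = eye(3) and y = [[2, 0, -2]], that gives x.grad = [[2,2,2],[0,0,0],[-2,-2,-2]] and y.grad = [[1,1,1]].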

Same example in torch

import torch

x = torch.eye(3, requires_grad=True)
y = torch.tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad)  # dz/dx
print(y.grad)  # dz/dy
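
With both libraries installed, the two can be checked against each other. A quick sanity-check sketch (it assumes tinygrad's .grad here is a plain numpy array, which may differ between versions):

import numpy as np
import torch
from tinygrad.tensor import Tensor

# tinygrad
x = Tensor.eye(3)
y = Tensor([[2.0, 0, -2.0]])
y.dot(x).sum().backward()

# torch
xt = torch.eye(3, requires_grad=True)
yt = torch.tensor([[2.0, 0, -2.0]], requires_grad=True)
yt.matmul(xt).sum().backward()

# the gradients should agree
assert np.allclose(x.grad, xt.grad.numpy())
assert np.allclose(y.grad, yt.grad.numpy())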

Neural networks?

It turns out, a decent autograd tensor library is 90% of what you need for neural networks. Add an optimizer (SGD, RMSprop, and Adam are implemented) from tinygrad.optim, write some boilerplate minibatching code (sketched after the example below), and you have all you need.

Neural network example (from test/test_mnist.py)

from tinygrad.tensor import Tensor
import tinygrad.optim as optim
from tinygrad.utils import layer_init_uniform

class TinyBobNet:
  def __init__(self):
    # 784 inputs (28x28 MNIST images), 128 hidden units, 10 output classes
    self.l1 = Tensor(layer_init_uniform(784, 128))
    self.l2 = Tensor(layer_init_uniform(128, 10))

  def forward(self, x):
    return x.dot(self.l1).relu().dot(self.l2).logsoftmax()

model = TinyBobNet()
optimizer = optim.SGD([model.l1, model.l2], lr=0.001)

# ... and complete like pytorch, with (x,y) data

out = model.forward(x)
loss = out.mul(y).mean()
loss.backward()
optimizer.step()
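
The minibatching boilerplate mentioned above is plain numpy. A rough sketch of a training loop (X_train, Y_train, the batch size, and the -1.0 target encoding are illustrative assumptions; model and optimizer are the objects from the example above):

import numpy as np

# stand-in MNIST-shaped data, just for illustration
X_train = np.random.randn(60000, 784).astype(np.float32)
Y_train = np.random.randint(0, 10, size=60000)

BS = 128
for step in range(100):
  # sample a random minibatch
  samp = np.random.randint(0, X_train.shape[0], size=BS)
  x = Tensor(X_train[samp])

  # -1.0 at the target class makes mul().mean() against the
  # logsoftmax output act like a (scaled) negative log-likelihood loss
  y = np.zeros((BS, 10), dtype=np.float32)
  y[range(BS), Y_train[samp]] = -1.0
  y = Tensor(y)

  out = model.forward(x)
  loss = out.mul(y).mean()
  loss.backward()
  optimizer.step()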

The promise of small

tinygrad, with tests, will always be below 1000 lines. If it isn't, we will revert commits until tinygrad becomes smaller.
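
One way to hold the project to that number, as a sketch (counting every .py file under the tinygrad and test directories is an assumption about what "with tests" covers):

from pathlib import Path

# total lines of Python across the library and its tests
total = sum(len(p.read_text().splitlines())
            for d in ("tinygrad", "test")
            for p in Path(d).glob("*.py"))
print(total)  # should stay below 1000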

TODO

  • Reduce code
  • Increase speed
  • Add features
  • In that order