
tinyrocs image in various sizes

main
Jeff Moe 2024-02-11 11:27:59 -07:00
parent 865c0f20f9
commit e47e7290eb
11 changed files with 20 additions and 8 deletions


@@ -11,4 +11,5 @@ pip install librosa nltk phonemizer protobuf pyyaml \
   tf2onnx lm_eval onnxruntime pydot tensorflow_addons
 # If portaudio.h is available
 pip install pyaudio
-pip install -e .
+# pip install -e .
+pip install tinygrad
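The change above swaps the editable checkout (``pip install -e .``) for the PyPI package. A minimal sketch of how to confirm which distributions are actually installed after such a switch, using only the standard library (the placeholder distribution name is hypothetical, and tinygrad may be absent in a given environment):

```python
import importlib.metadata

def dist_version(name):
    """Return the installed distribution's version string, or None if absent."""
    try:
        return importlib.metadata.version(name)
    except importlib.metadata.PackageNotFoundError:
        return None

# A name that is certainly not installed resolves to None:
print(dist_version("not-a-real-distribution-xyz"))  # → None
# After `pip install tinygrad`, dist_version("tinygrad") would return its version.
print(dist_version("tinygrad"))
```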

Seven binary image files added (not shown): 1.2 MiB, 41 KiB, 5.1 KiB, 133 KiB, 7.2 KiB, 422 KiB, and 15 KiB.


@@ -9,7 +9,7 @@ msgid ""
 msgstr ""
 "Project-Id-Version: tinyrocs: Direct to Chip Liquid Cooled GPU AI Cluster 0\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2024-02-10 09:50-0700\n"
+"POT-Creation-Date: 2024-02-11 10:57-0700\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language: en\n"
@@ -43,30 +43,37 @@ msgid ""
 "See the Output section of this documentation for example tinygrad output."
 msgstr ""

-#: ../../../_source/tinygrad.rst:19
+#: ../../../_source/tinygrad.rst:18
+msgid ""
+"Note, installing via ``pip install -e .`` doesn't pickup the ``runtime`` dir "
+"(?). This will cause errors such as ``tinygrad.runtime.ops_gpu`` import "
+"errors."
+msgstr ""
+
+#: ../../../_source/tinygrad.rst:23
 msgid "llama"
 msgstr ""

-#: ../../../_source/tinygrad.rst:20
+#: ../../../_source/tinygrad.rst:24
 msgid "Running tinygrad ``llama.py`` using the Phind CodeLlama 34B model."
 msgstr ""

-#: ../../../_source/tinygrad.rst:28
+#: ../../../_source/tinygrad.rst:32
 msgid ""
 "When using ``--shard 5`` this gives an error in ``device.split``. It does "
 "start loading the model across all the GPUs though, so it half starts..."
 msgstr ""

-#: ../../../_source/tinygrad.rst:32
+#: ../../../_source/tinygrad.rst:36
 msgid ""
 "Running without sharding it gives a HIP out of memory error, since it only "
 "runs on one GPU."
 msgstr ""

-#: ../../../_source/tinygrad.rst:37
+#: ../../../_source/tinygrad.rst:41
 msgid "mixtral"
 msgstr ""

-#: ../../../_source/tinygrad.rst:38
+#: ../../../_source/tinygrad.rst:42
 msgid "MOE."
 msgstr ""


@@ -15,6 +15,10 @@ Then run examples such as ``python examples/coder.py``.
 See the Output section of this documentation for example tinygrad output.
+
+Note, installing via ``pip install -e .`` doesn't pickup the ``runtime`` dir (?).
+This will cause errors such as ``tinygrad.runtime.ops_gpu`` import errors.
 
 llama
 -----
 
 Running tinygrad ``llama.py`` using the Phind CodeLlama 34B model.
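The note in this hunk (an editable install missing the ``runtime`` directory) can be checked before running any examples. A small sketch using only the standard library; the ``tinygrad.runtime.ops_gpu`` module name is taken from the error described above, and the helper function is hypothetical:

```python
import importlib.util

def importable(module_name):
    """True if the module (and all of its parent packages) can be located."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:
        # A parent package is missing entirely, so the module cannot load.
        return False

print(importable("json"))  # → True (stdlib, always present)
# False when the editable install skipped the runtime dir (or tinygrad is absent):
print(importable("tinygrad.runtime.ops_gpu"))
```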

One binary image file changed (not shown); previous size 32 KiB.