tinyrocs image in various sizes
@@ -11,4 +11,5 @@ pip install librosa nltk phonemizer protobuf pyyaml \
     tf2onnx lm_eval onnxruntime pydot tensorflow_addons
 # If portaudio.h is available
 pip install pyaudio
-pip install -e .
+# pip install -e .
+pip install tinygrad
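The "# If portaudio.h is available" guard in the hunk above can be automated before attempting the ``pyaudio`` install. A minimal sketch using only the standard library; the candidate include directories are typical guesses, not paths the project specifies:

```python
from pathlib import Path

# Common include locations; adjust for your distro (these are guesses).
CANDIDATE_DIRS = [
    Path("/usr/include"),
    Path("/usr/local/include"),
    Path("/opt/homebrew/include"),
]


def portaudio_header_present(dirs=CANDIDATE_DIRS) -> bool:
    """True if portaudio.h exists in any candidate include directory."""
    return any((d / "portaudio.h").is_file() for d in dirs)


if __name__ == "__main__":
    if portaudio_header_present():
        print("portaudio.h found; `pip install pyaudio` should build")
    else:
        print("portaudio.h not found; install PortAudio first or skip pyaudio")
```

This only checks for the header; a successful ``pip install pyaudio`` build may still need the PortAudio library itself.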
Binary images added (sizes: 1.2 MiB, 41 KiB, 5.1 KiB, 133 KiB, 7.2 KiB, 422 KiB, 15 KiB).
@@ -9,7 +9,7 @@ msgid ""
 msgstr ""
 "Project-Id-Version: tinyrocs: Direct to Chip Liquid Cooled GPU AI Cluster 0\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2024-02-10 09:50-0700\n"
+"POT-Creation-Date: 2024-02-11 10:57-0700\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language: en\n"
@@ -43,30 +43,37 @@ msgid ""
 "See the Output section of this documentation for example tinygrad output."
 msgstr ""

-#: ../../../_source/tinygrad.rst:19
+#: ../../../_source/tinygrad.rst:18
 msgid ""
 "Note, installing via ``pip install -e .`` doesn't pick up the ``runtime`` "
 "dir (?). This will cause errors such as ``tinygrad.runtime.ops_gpu`` import "
 "errors."
 msgstr ""

+#: ../../../_source/tinygrad.rst:23
+msgid "llama"
+msgstr ""
+
-#: ../../../_source/tinygrad.rst:20
+#: ../../../_source/tinygrad.rst:24
 msgid "Running tinygrad ``llama.py`` using the Phind CodeLlama 34B model."
 msgstr ""

-#: ../../../_source/tinygrad.rst:28
+#: ../../../_source/tinygrad.rst:32
 msgid ""
 "When using ``--shard 5`` this gives an error in ``device.split``. It does "
 "start loading the model across all the GPUs, though, so it half starts..."
 msgstr ""
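The ``--shard`` behavior the entry above describes (all GPUs begin receiving weights before the error) follows from how layer sharding distributes a model. A generic illustrative sketch, not tinygrad's actual ``device.split`` logic; the function and names here are hypothetical:

```python
def shard_layers(n_layers: int, n_devices: int) -> list[int]:
    """Assign transformer layers to devices round-robin.

    Illustrative only: this just shows why, with 5 shards, every
    device gets layers and so starts loading weights.
    """
    if n_devices < 1:
        raise ValueError("need at least one device")
    return [i % n_devices for i in range(n_layers)]


# 8 layers over 5 GPUs: every device receives at least one layer,
# which is why loading visibly starts on all of them.
print(shard_layers(8, 5))  # -> [0, 1, 2, 3, 4, 0, 1, 2]
```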

-#: ../../../_source/tinygrad.rst:32
+#: ../../../_source/tinygrad.rst:36
 msgid ""
 "Running without sharding gives a HIP out-of-memory error, since it only "
 "runs on one GPU."
 msgstr ""

-#: ../../../_source/tinygrad.rst:37
+#: ../../../_source/tinygrad.rst:41
 msgid "mixtral"
 msgstr ""

-#: ../../../_source/tinygrad.rst:38
+#: ../../../_source/tinygrad.rst:42
 msgid "MoE."
 msgstr ""

@@ -15,6 +15,10 @@ Then run examples such as ``python examples/coder.py``.

 See the Output section of this documentation for example tinygrad output.

+Note, installing via ``pip install -e .`` doesn't pick up the ``runtime`` dir (?).
+This will cause errors such as ``tinygrad.runtime.ops_gpu`` import errors.
+
+
 llama
 -----
 Running tinygrad ``llama.py`` using the Phind CodeLlama 34B model.
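The import failure the note above warns about can be checked quickly after installing. A minimal sketch using only the standard library; the module path comes from the error named in the note:

```python
import importlib.util


def module_available(name: str) -> bool:
    """True if `name` can be found in the current environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A missing parent package (e.g. `tinygrad` itself) raises here.
        return False


if __name__ == "__main__":
    # False after an editable install that missed the runtime dir;
    # reinstalling with `pip install tinygrad` should flip it to True.
    print(module_available("tinygrad.runtime.ops_gpu"))
```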
BIN docs/logo.png | Before: 32 KiB