Commit graph

28 commits

Author SHA1 Message Date
Mitchell Goff ae4f0aeb5f
NumPy-like semantics for Tensor.__getitem__ (#506)
* Rewrote Tensor.__getitem__ to fix negative indices and add support for np.newaxis/None

* Fixed pad2d

* mypy doesn't know about mlops methods

* normal python behavior for out-of-bounds slicing

* type: ignore

* inlined idxfix

* added comment for __getitem__

* Better comments, better tests, and fixed bug in np.newaxis
2023-02-08 08:59:46 -06:00
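For context on the semantics this PR targets, here is a minimal illustration (using NumPy itself, not tinygrad) of the three behaviors named above: negative indices, None/np.newaxis, and Python-style out-of-bounds slicing.

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# Negative indices count from the end, like plain Python sequences.
assert a[-1, -1] == 11

# None (np.newaxis) inserts a new axis of size 1 at that position.
assert a[None].shape == (1, 3, 4)
assert a[:, np.newaxis].shape == (3, 1, 4)

# Out-of-bounds slices are clamped instead of raising, matching normal
# Python slicing: a[10:20] on a length-3 axis is simply empty.
assert a[10:20].shape == (0, 4)
```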
George Hotz 271446e3eb
set requires_grad to None (#387)
* set requires_grad to None

* some things need gradients

* hmm, why was get_parameters filtering
2022-09-21 11:16:02 -04:00
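A rough sketch of what the tri-state requires_grad buys (tinygrad API; names may differ across versions): the default None means "undecided", so only tensors explicitly marked as needing gradients opt into tracking.

```python
from tinygrad.tensor import Tensor

w = Tensor.uniform(3, 3, requires_grad=True)  # a parameter: "needs gradients"
x = Tensor.uniform(1, 3)                      # an input: requires_grad stays None

loss = x.dot(w).sum()
loss.backward()
assert w.grad is not None  # the gradient flows to the explicit parameter
```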
George Hotz 9a3c048724 skip broken tests, no float64 allowed 2022-06-11 17:12:04 -07:00
George Hotz 0225360191 fixed with one return x 2022-06-11 12:08:53 -07:00
George Hotz c0d1254003 don't run unneeded grads 2022-01-15 21:32:13 -08:00
George Hotz ce3d198bb7 less lines and fix default device 2021-11-27 11:18:49 -05:00
Jacky Lee 3a91d5434f
Add dropout test (#265)
* Add dropout test

* Remove condition where training is false

* Skip dropout test when on GPU

* Revert changes to tensor.py and fix test case

* Revert change on whitespace

* Convert Tensor to cpu for testing

* Fix whitespace in tensor.py
2021-06-19 08:49:13 -07:00
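A hypothetical version of such a dropout test (the exact API for reading values back, e.g. `.cpu().data` vs `.numpy()`, varies by tinygrad version): with p=0.5 and training enabled, roughly half the entries should be zeroed and the survivors rescaled.

```python
import numpy as np
from tinygrad.tensor import Tensor

Tensor.training = True  # dropout is only active while training
out = Tensor.ones(1000).dropout(0.5).numpy()

zeros = int((out == 0).sum())
assert 350 < zeros < 650                 # loose statistical bound around 500
assert np.allclose(out[out != 0], 2.0)   # survivors are rescaled by 1/(1-p)
```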
Liam ebd72ff437
Test split (#231)
* Split tests

Split tests into "Test CPU" and "Test GPU".

Add test flag "TEST_DEVICES", which is a comma-separated list of devices:
CPU,GPU,ANE

* Run tests based on provided TEST_DEVICES flag

By default, all of "CPU,GPU,ANE" will be run.

* fix bad quote

* Revert changes and use GPU=1

This is done by setting the default Tensor device to Device.GPU if GPU=1
is set.

Run GPU tests: GPU=1 pytest -s -v
2021-01-01 09:19:03 -05:00
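A sketch of that GPU=1 convention (the default-device attribute name here is an assumption; it has moved around between tinygrad versions):

```python
import os
from tinygrad.tensor import Tensor, Device

# conftest.py-style setup: if GPU=1, make every test tensor default to GPU.
if os.getenv("GPU", "0") == "1":
    Tensor.default_device = Device.GPU  # attribute name is an assumption
```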
George Hotz 2f1b2c0a3b add transpose, start on transformer 2020-12-27 16:59:12 -05:00
Liam bcf1518309
All devices are equal! (#196)
* Update all devices to be tested

ANE, CPU and OCL all now support all tests.

However, tests are not currently passing on GPU, and I cannot test on CPU.

Failing GPU tests are not an issue caused by this update; they have not
been passing because the required "six" package was not installed.

OpenCL Tests have not been run since commit: 1a1c63a08b

Devices have 3 types and are handled by a new DeviceTypes enum. (The goal
is to revert to Tensor.<type>, but the current setup allows for keyword-argument
defaults: `device=DeviceType.CPU`.)

All references to Tensor.GPU/CPU/ANE have been converted to the
corresponding `DeviceTypes` enum.

Refactored the conversion code to allow any-device-to-any-device
conversion.

* Add six dependency in requirements.txt

* Resolve failure to run tests

Move six into the GPU install requirements. Remove six from the standard
installation.

* Remove repeated data conversion

* Refactor method names

Also reduce code with .to and .to_

* Dynamic device handlers

* Refactor DeviceTypes -> Device

* Add mem copy profiling back

* test_backward_pass_diamond_model passing

* Resolve Sum issue on GPU

* Revert batchnorm2d tests

* Update README with updated API

* ANE testing with

* Last minute line gains
2020-12-15 23:44:08 -08:00
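A self-contained sketch of the shape this refactor describes: a device enum usable as a keyword-argument default, plus `.to`/`.to_` for out-of-place and in-place conversion (method names from the commit; the conversion body here is a stand-in).

```python
from enum import Enum

class Device(Enum):
    CPU = 0
    GPU = 1
    ANE = 2

def _convert(data, device):
    # stand-in for the real any-device-to-any-device buffer conversion
    return data

class Tensor:
    def __init__(self, data, device=Device.CPU):  # enum enables kwarg defaults
        self.data, self.device = data, device

    def to_(self, device):
        # in-place: rebind this tensor's buffer on the target device
        self.data, self.device = _convert(self.data, device), device

    def to(self, device):
        # out-of-place: return a copy living on the target device
        return Tensor(_convert(self.data, device), device=device)
```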
Daulet c7e95ddb21
Add diamond model test (#181)
* add backward pass test for diamond model

* fix train_efficientnet example
2020-12-11 09:21:36 -08:00
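The "diamond" in question is a graph where one node feeds two branches that later rejoin, so backward must accumulate gradients at the merge point. A rough tinygrad-flavored sketch (API details vary across versions):

```python
from tinygrad.tensor import Tensor

x = Tensor.ones(3, 3, requires_grad=True)
a = x.relu()             # shared node: top of the diamond
b, c = a * 2, a * 3      # two branches off the same node
out = (b + c).sum()      # branches rejoin: bottom of the diamond
out.backward()
# d(out)/dx must accumulate both paths: 2 + 3 = 5 wherever x > 0
```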
Liam 89d0ff6989
Consistent testing (#137)
* Consistent GPU classes

Convert the existing GPU classes into one standard format.

Remove duplicated functions in `test_mnist` and create a TestMNISTGPU
class. This reduces line count and ensures consistency.

Use `@unittest.skipUnless(GPU, "Requires GPU")` instead of `if GPU:` to
skip GPU testing. This will ensure that skipped tests are displayed
accordingly in the pytest output.

* Optim Testing now supports GPU

* Tensor testing now supports GPU

jacobian and gradcheck are auto-skipped until GPU float64 support is added.

* GPU support for custom constructor methods

* Remove GPU flag from Model constructors

It was requested that the `gpu` kwarg be removed from the model
constructor. GPU conversion is now handled in the train function.

This also required the conversion of Optimizer parameters, as they are
constructed prior to execution of the `train` function and are dependent
on the model GPU state.

* Fix typo: float32->float64

* Clean `get_parameters` utility

Just a quick refactor w/ the new support for optimizers.

* Remove GPU kwarg from TinyNet

Remove `gpu` kwarg from tiny net to match test_mnist `train` function.
2020-12-09 02:25:27 -08:00
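The skip pattern the commit quotes, in full (deriving the GPU flag from the environment is an assumption about the surrounding setup):

```python
import os
import unittest

GPU = os.getenv("GPU", "0") == "1"

class TestMNISTGPU(unittest.TestCase):
    @unittest.skipUnless(GPU, "Requires GPU")
    def test_train_gpu(self):
        ...  # the skip shows up explicitly in pytest output instead of vanishing
```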
George Hotz 13d34373d1 move gradcheck to extra, clean up unbroadcast 2020-11-16 08:03:31 -08:00
George Hotz 9ac1ad40d6
Add GPU Support! (do not merge yet) (#41)
* copy tensors to and from gpu

* add on GPU

* adding works

* we stick shapes in

* works on cpu and gpu

* test changes, not passing yet

* something else

* op tests pass

* add, mean, and sum have working forward/backward

* mul ops test

* no gpu support, no problem

* test pass, clean up later

* gpu cleanup

* cleanup test ops, don't let div fail

* revert more

* simpler dispatcher

* clean up grad

* GPU and

* grad is a Tensor now

* gate test on GPU

* cleanups

* late loading gpu

* GPU as input option

* last cleanups
2020-11-01 07:00:49 -08:00
Timothy Mc Alister 15e5988323 make default parameters work for functions 2020-10-26 12:43:36 +01:00
George Hotz b27bcbe4b4 avgpool and test refactor 2020-10-25 18:40:01 -07:00
George Hotz 567707a5f6 rename max_pool2d to match torch, remove more fast conv crap 2020-10-25 17:16:47 -07:00
George Hotz 96f9cdb8a0 woah, fastconv is wrong 2020-10-25 12:56:42 -07:00
George Hotz bb98cdfef7 improve conv testing 2020-10-25 12:46:04 -07:00
George Hotz 67506eb6ba fast im2col 2020-10-25 11:49:35 -07:00
George Hotz 935f5ddaaa always keep batch size out front 2020-10-25 08:14:07 -07:00
George Hotz b91fd3afad maxpool 2020-10-25 07:43:34 -07:00
George Hotz 5756115e57 anyone else let down by the fast conv? 2020-10-23 09:09:29 -07:00
0xNaN d95adbddb4 gradcheck now returns only a bool, refactoring of test_gradcheck 2020-10-22 01:28:52 +02:00
0xNaN adbfc67456 test jacobian and numerical_jacobian against torch.autograd.functional.jacobian 2020-10-22 01:28:52 +02:00
0xNaN 1561d3b9c0 extracting jacobian and test_jacobian 2020-10-22 01:28:52 +02:00
0xNaN 93bc3c22a0 tiny gradcheck 2020-10-22 01:28:52 +02:00
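A compact NumPy sketch of the two pieces these commits touch: a central-difference numerical Jacobian, and a gradcheck that (per the commit above) returns only a bool. The repo's own versions were later moved to extra/.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    # central differences: column j approximates d f / d x_j
    y = f(x)
    J = np.zeros((y.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx.flat[j] = eps
        J[:, j] = ((f(x + dx) - f(x - dx)) / (2 * eps)).ravel()
    return J

def gradcheck(f, x, analytic_jacobian, atol=1e-4):
    # returns only a bool, matching the refactor described above
    return np.allclose(analytic_jacobian, numerical_jacobian(f, x), atol=atol)
```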
Adrian Garcia Badaracco 5afe6b1f68
rename files 2020-10-21 11:28:03 -05:00
Renamed from test/test.py