This directory contains scripts for our continuous integration.

One important thing to keep in mind when reading the scripts here is that they are all based on Docker images, which we build for each of the various system configurations we want to run on Jenkins. This means it is very easy to run these tests yourself:

  1. Figure out what Docker image you want. The general template for our images looks like `registry.pytorch.org/pytorch/pytorch-$BUILD_ENVIRONMENT:$DOCKER_VERSION`, where `$BUILD_ENVIRONMENT` is one of the build environments enumerated in pytorch-dockerfiles. The Dockerfiles used by Jenkins can be found under the `.ci` directory.

  2. Run `docker run -it -u jenkins $DOCKER_IMAGE`, clone PyTorch, and run one of the scripts in this directory (see the sketch after this list).
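
Put together, a session might look like the following sketch. The image tag and `BUILD_ENVIRONMENT` value below are hypothetical stand-ins for whatever you chose in step 1; many of the scripts branch on `$BUILD_ENVIRONMENT`, so it should match the image:

```bash
# Pull an image for the configuration picked in step 1
# (hypothetical tag -- substitute your own).
docker pull registry.pytorch.org/pytorch/pytorch-linux-focal-py3-gcc9:latest

# Start an interactive shell in it as the jenkins user.
docker run -it -u jenkins registry.pytorch.org/pytorch/pytorch-linux-focal-py3-gcc9:latest bash

# Inside the container: clone PyTorch and run one of the CI scripts.
export BUILD_ENVIRONMENT=pytorch-linux-focal-py3-gcc9
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
.ci/pytorch/build.sh
```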

The Docker images are designed so that any "reasonable" build commands will work; if you look in `build.sh` you will see that it is a very simple script. This is intentional. Idiomatic build instructions should work inside all of our Docker images. You can tweak the commands however you need (e.g., if you want to rebuild with `DEBUG`, or rerun the build with higher verbosity).
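
For example, after the stock build you might rebuild in place like this (a sketch assuming the usual `DEBUG`/`VERBOSE` environment switches understood by PyTorch's `setup.py`):

```bash
# Example tweak: rebuild with debug symbols and a more verbose build log.
cd pytorch
DEBUG=1 VERBOSE=1 python setup.py develop
```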

We have to do some work to make this so. Here is a summary of the mechanisms we use; a short shell sketch illustrating them follows the list:

  - We install binaries to directories like `/usr/local/bin`, which are automatically part of your `PATH`.

  - We add entries to the `PATH` using Docker `ENV` variables (so they apply when you enter Docker) and `/etc/environment` (so they continue to apply even if you `sudo`), instead of modifying `PATH` in our build scripts.

  - We use `/etc/ld.so.conf.d` to register directories containing shared libraries, instead of modifying `LD_LIBRARY_PATH` in our build scripts.

  - We reroute well-known paths like `/usr/bin/gcc` to alternate implementations with `update-alternatives`, instead of setting `CC` and `CXX` in our build scripts.
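
Concretely, the image-build side of these mechanisms boils down to a few shell idioms. This is an illustrative sketch, not a copy of our Dockerfiles; the tool and library names (`my-tool`, `/opt/mylib`) and compiler versions are made up:

```bash
# Install into a directory that is already on every user's PATH.
install -m 755 my-tool /usr/local/bin/my-tool

# Persist a PATH entry so it survives `sudo`; a Docker ENV instruction
# covers the non-sudo case at image-build time.
echo 'PATH="/usr/local/bin:/usr/bin:/bin"' >> /etc/environment

# Register a shared-library directory system-wide instead of relying on
# LD_LIBRARY_PATH, then refresh the dynamic linker cache.
echo '/opt/mylib/lib' > /etc/ld.so.conf.d/mylib.conf
ldconfig

# Point /usr/bin/gcc at a specific toolchain instead of exporting CC/CXX.
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 50
update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 50
```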