# satnogs-wut

The goal of satnogs-wut is to have a script that will take an observation ID and return an answer whether the observation is "good", "bad", or "failed".

## Good Observation
![Good Observation](pics/waterfall-good.png)

## Bad Observation
![Bad Observation](pics/waterfall-bad.png)

## Failed Observation
![Failed Observation](pics/waterfall-failed.png)

# Machine Learning
The system at present is built upon the following:

* Debian
* TensorFlow
* Keras

Learning/testing results are inaccurate at present.

# wut?
The following scripts are in the repo:

* `wut` --- Feed it an observation ID and it returns whether it is a "good", "bad", or "failed" observation.
* `wut-compare` --- Compare an observation ID's (presumably human) vetting with a `wut` vetting.
* `wut-compare-all` --- Compare all the observations in `download/` with `wut` vettings.
* `wut-dl-sort` --- Populate the `data/` directory with waterfalls from `download/`.
* `wut-ml` --- Main machine learning Python script, using TensorFlow and Keras.
* `wut-obs` --- Download the JSON for an observation ID.
* `wut-review-staging` --- Review all images in `data/staging`.
* `wut-water` --- Download the waterfall for an observation ID to `download/[ID]`.
* `wut-water-range` --- Download waterfalls for a range of observation IDs to `download/[ID]`.

# Installation
Most of the scripts are simple shell scripts with few dependencies.

## Setup
The scripts use directories that are ignored in the git repo, so you need to create them first:

```
mkdir -p download
mkdir -p data/train/good
mkdir -p data/train/bad
mkdir -p data/train/failed
mkdir -p data/val/good
mkdir -p data/val/bad
mkdir -p data/val/failed
mkdir -p data/staging
mkdir -p data/test/unvetted
```

## Debian Packages
You'll need `curl` and `jq`, both in Debian's repos:

```
apt update
apt install curl jq
```

## Machine Learning
For the machine learning scripts, such as `wut-ml`, both TensorFlow and Keras need to be installed. The versions of those in Debian didn't work for me. IIRC, for TensorFlow I built a `pip` package of version 2.0.0 from git and installed that; I installed Keras from `pip`. Something like:

```
# XXX These aren't the exact commands, need to check...
# Install Bazel (TensorFlow's build system)
# Install TensorFlow
git clone tensorflow...
cd tensorflow
./configure
# run some bazel command
dpkg -i /tmp/pkg_foo/*.deb
apt update
apt -f install
# Install Keras
pip3 install --user keras
# A million other commands....
```

# Usage
The main purpose of the script is to evaluate an observation, but to do that it first needs a corpus of observations to learn from. So many of the scripts in this repo are just for downloading and managing observations.

The following steps need to be performed:

1. Download waterfalls and JSON descriptions with `wut-water-range`. These get put in the `download/[ID]/` directories (for a rough idea of the JSON step, see the fetch sketch under "Example: Fetching Observation JSON" below).
1. Organize downloaded waterfalls into categories (e.g. "good", "bad", "failed") using the `wut-dl-sort` script. The script sorts them into their respective directories under:
   * `data/train/good/`
   * `data/train/bad/`
   * `data/train/failed/`
   * `data/val/good/`
   * `data/val/bad/`
   * `data/val/failed/`
1. Use the machine learning script `wut-ml` to build a model based on the files in the `data/train` and `data/val` directories (a minimal training sketch appears under "Example: Minimal Training Script" below).
1. Rate an observation using the `wut` script.

# Caveats
This is the first machine learning script I've done; I know little about satellites and less about radio, and I'm not a programmer.
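# Example: Fetching Observation JSON

For reference, here is a minimal Python sketch of what `wut-obs` does: fetch the JSON description of one observation. The actual script is a shell script built on `curl` and `jq`; the API endpoint and the `vetted_status`/`waterfall` field names below are assumptions based on the public SatNOGS Network API, not taken from this repo, so check them before relying on this.

```python
#!/usr/bin/env python3
# Sketch only: fetch the JSON description of a SatNOGS observation.
# Endpoint and field names are assumptions; the real wut-obs uses curl/jq.
import json
import sys
import urllib.request

def fetch_observation(obs_id):
    # Assumed endpoint; the API is expected to return a list of matches.
    url = "https://network.satnogs.org/api/observations/?id={}".format(obs_id)
    with urllib.request.urlopen(url) as response:
        observations = json.load(response)
    return observations[0]

if __name__ == "__main__":
    obs = fetch_observation(sys.argv[1])
    # "vetted_status" and "waterfall" are assumed field names.
    print(obs.get("vetted_status"))
    print(obs.get("waterfall"))
```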
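# Example: Minimal Training Script

And here is a minimal sketch of the kind of TensorFlow/Keras model a script like `wut-ml` could build from the `data/train` and `data/val` directories described above. The directory layout comes from this README; the architecture, image size, and hyperparameters are arbitrary placeholder choices, not the actual `wut-ml` implementation. It assumes TensorFlow 2.x.

```python
#!/usr/bin/env python3
# Sketch only: a 3-class waterfall classifier (good/bad/failed).
# Architecture and hyperparameters are placeholders, not the real wut-ml.
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Load images from the directory layout created by wut-dl-sort;
# class labels come from the good/bad/failed subdirectory names.
train = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/train", target_size=(256, 256), class_mode="categorical")
val = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/val", target_size=(256, 256), class_mode="categorical")

# Small convolutional network with three output classes.
model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(256, 256, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

model.fit(train, validation_data=val, epochs=5)
model.save("wut.h5")  # placeholder filename
```

Rating a single observation would then amount to downloading its waterfall, loading the saved model, and calling `model.predict()` on the image.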
# Source License / Copying

The main repository is available here:

* https://spacecruft.org/spacecruft/satnogs-wut

License: CC BY-SA 4.0 International and/or GPLv3+, at your discretion.
Other code is licensed under its own respective license.

Copyright (C) 2019, 2020, Jeff Moe