# satnogs-wut
The goal of satnogs-wut is to have a script that takes an
observation ID and returns whether the observation is
"good", "bad", or "failed".
## Good Observation
![Good Observation](pics/waterfall-good.png)
## Bad Observation
![Bad Observation](pics/waterfall-bad.png)
## Failed Observation
![Failed Observation](pics/waterfall-failed.png)
# Machine Learning
The system at present is built upon the following:
* Debian
* Tensorflow
* Keras
The project is still in the learning/testing stage; results are currently inaccurate.
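
Below is a minimal sketch of the kind of Keras model such a classifier could use: a small convolutional network that takes resized waterfall images and outputs one of three classes. The input shape, layer sizes, and compile settings are illustrative assumptions, not the actual `wut-ml` architecture.

```python
# Minimal sketch of a 3-class waterfall classifier in Keras.
# Input shape and layer sizes are assumptions, not the real wut-ml model.
from tensorflow.keras import layers, models

def build_model(input_shape=(256, 256, 3), num_classes=3):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # good / bad / failed
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```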
# wut?
The following scripts are in the repo:
* `wut` --- Feed it an observation ID and it returns whether the observation is "good", "bad", or "failed".
* `wut-api-test` --- API Tests.
* `wut-get-obs` --- Download the JSON for an observation ID.
* `wut-get-staging` --- Download waterfalls to `data/staging` for review (deprecated).
* `wut-get-train-bad` --- Download waterfalls to `data/train/bad` for review (deprecated).
* `wut-get-train-good` --- Download waterfalls to `data/train/good` for review (deprecated).
* `wut-get-validation-bad` --- Download waterfalls to `data/validation/bad` for review (deprecated).
* `wut-get-validation-good` --- Download waterfalls to `data/validation/good` for review (deprecated).
* `wut-get-waterfall` --- Download waterfall for an observation ID to `download/[ID]`.
* `wut-get-waterfall-range` --- Download waterfalls for a range of observation IDs to `download/[ID]`.
* `wut-ml` --- Main machine learning Python script using Tensorflow and Keras.
* `wut-review-staging` --- Review all images in `data/staging`.
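
As an illustration of what the download scripts roughly do, here is a hedged Python sketch that fetches an observation's JSON and waterfall image. The endpoint, the `waterfall` field name, and the `get_observation` helper are assumptions about the public SatNOGS Network API, not code copied from the scripts above.

```python
# Sketch of what wut-get-obs / wut-get-waterfall roughly do: fetch observation
# metadata and its waterfall PNG into download/[ID]/.
# The endpoint and the "waterfall" field are assumptions about the SatNOGS
# Network API; verify against the real scripts before relying on this.
import json
import os
import requests

def get_observation(obs_id, out_dir="download"):
    url = "https://network.satnogs.org/api/observations/"
    resp = requests.get(url, params={"id": obs_id})
    resp.raise_for_status()
    obs = resp.json()[0]  # assumes the API returns a list of observation objects

    obs_dir = os.path.join(out_dir, str(obs_id))
    os.makedirs(obs_dir, exist_ok=True)

    # Save the JSON description alongside the waterfall.
    with open(os.path.join(obs_dir, "observation.json"), "w") as f:
        json.dump(obs, f, indent=2)

    # Download the waterfall image if the observation has one.
    if obs.get("waterfall"):
        img = requests.get(obs["waterfall"])
        img.raise_for_status()
        with open(os.path.join(obs_dir, "waterfall.png"), "wb") as f:
            f.write(img.content)
    return obs
```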
# Usage
The main purpose of the `wut` script is to evaluate an observation,
but to do that it needs a corpus of observations to learn from.
Many of the scripts in this repo are therefore just for
downloading and managing observations.
The following steps need to be performed:
1. Download waterfalls and JSON descriptions with `wut-get-waterfall-range`.
   These are stored in the `download/[ID]/` directories.
1. Organize the downloaded waterfalls into categories (e.g. "good", "bad", "failed").
   Note: this still needs a script written; a hypothetical sketch follows these steps.
   Put them into their respective directories under:
   * `data/train/good/`
   * `data/train/bad/`
   * `data/train/failed/`
   * `data/validation/good/`
   * `data/validation/bad/`
   * `data/validation/failed/`
1. Use the machine learning script `wut-ml` to build a model based on
   the files in the `data/train` and `data/validation` directories.
1. Rate an observation using the `wut` script.
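
Since step 2 has no script yet, here is a hypothetical sorting helper. It is only a sketch under assumptions: it presumes each `download/[ID]/` directory contains a `waterfall.png` plus an `observation.json` whose `vetted_status` field holds "good", "bad", or "failed" (the field name is an assumption about the SatNOGS JSON), and it sends roughly 20% of the images to the validation set.

```python
# Hypothetical helper for step 2 above (no such script exists in the repo yet).
# Assumes download/[ID]/ holds waterfall.png and observation.json, and that the
# JSON has a "vetted_status" of "good", "bad", or "failed" -- verify first.
import glob
import json
import os
import random
import shutil

def sort_downloads(download_dir="download", data_dir="data", validation_split=0.2):
    for obs_dir in glob.glob(os.path.join(download_dir, "*")):
        obs_id = os.path.basename(obs_dir)
        json_path = os.path.join(obs_dir, "observation.json")
        waterfall = os.path.join(obs_dir, "waterfall.png")
        if not (os.path.isfile(json_path) and os.path.isfile(waterfall)):
            continue
        with open(json_path) as f:
            status = json.load(f).get("vetted_status", "unknown")
        if status not in ("good", "bad", "failed"):
            continue
        # Send roughly validation_split of the observations to the validation set.
        split = "validation" if random.random() < validation_split else "train"
        dest = os.path.join(data_dir, split, status)
        os.makedirs(dest, exist_ok=True)
        shutil.copy(waterfall, os.path.join(dest, f"{obs_id}.png"))
```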
# Caveats
This is the first machine learning script I've done;
I know little about satellites and less about radio,
and I'm not a programmer.
# Source License / Copying
Main repository is available here:
* https://spacecruft.org/spacecruft/satnogs-wut
License: CC BY-SA 4.0 International and/or GPLv3+, at your discretion. Other code is licensed under its own respective license.
Copyright (C) 2019, 2020, Jeff Moe