Compare commits

...

31 Commits
1.0 ... master

Author SHA1 Message Date
Jeff Moe 7195382398 hostnames in worker 2022-08-17 10:24:45 -06:00
Jeff Moe 22fbe4073a Make python scripts exectuable 2022-08-17 10:23:33 -06:00
Jeff Moe ac2ae3f1cf New worker hosts, mas 2022-08-17 10:19:24 -06:00
Jeff Moe fb236fcf30 New worker hosts 2022-08-17 10:15:57 -06:00
Jeff Moe 8aee0ac624 cleanup, more modes 2022-08-17 10:06:51 -06:00
Jeff Moe 8dd443ab30 Save plot of training accuracy 2022-08-16 22:21:15 -06:00
Jeff Moe 33adccb2cb Fix paths, predict misc 2022-08-16 21:32:01 -06:00
Jeff Moe ec25b09b24 Merge branch 'master' of spacecruft.org:spacecruft/satnogs-wut 2022-08-16 21:03:11 -06:00
Jeff Moe 3068a39e3a Training cleanup, new mode 2022-08-16 21:03:02 -06:00
Jeff Moe 827b950a14 Merge branch 'master' of spacecruft.org:spacecruft/satnogs-wut 2022-08-16 20:40:29 -06:00
Jeff Moe 1fbad10405 Tensorboard Plugin python package 2022-08-16 20:40:20 -06:00
Jeff Moe 323ceda7eb Train FM 2022-08-16 20:22:48 -06:00
Jeff Moe b07a53458e Update to /srv/satnogs paths, open with latest jupyter 2022-08-16 19:41:49 -06:00
Jeff Moe 8feeb6c896 make install 2022-08-16 19:26:40 -06:00
Jeff Moe 6be11cb7aa Use tensorflow keras nowadays 2022-08-16 18:58:50 -06:00
Jeff Moe 3f6434f6dd Remove hard-coded path 2022-08-16 18:23:36 -06:00
Jeff Moe 73fbdf43da Makefiles for install/uninstall 2022-08-16 17:31:38 -06:00
Jeff Moe 20d96f0a61 Move code to src/ dir 2022-08-16 17:22:17 -06:00
Jeff Moe fa3d0b0284 New list of encoding modes 2022-08-16 17:20:45 -06:00
Jeff Moe 2c86557fcb Notes on new observation date ranges 2022-08-16 13:37:57 -06:00
Jeff Moe 35dc603832 Various download Internet Archive cruft 2022-06-12 15:06:11 -06:00
Jeff Moe 16956df5ca sleepy dl 2022-06-12 14:53:55 -06:00
Jeff Moe a094fea6bf Get _files.xml from Internet Archive too 2022-06-12 14:46:13 -06:00
Jeff Moe 8a5f8fe070 dt-10 torrents 2022-06-11 19:04:18 -06:00
Jeff Moe 5e987198bc download torrents from Internet Archive 2022-06-11 18:22:52 -06:00
Jeff Moe b6ac03590a wut-aria-* scripts for downloading Internet Archive torrents 2022-06-11 16:05:38 -06:00
Jeff Moe 270178d027 Add script to download Internet Archive torrents 2022-06-10 21:17:08 -06:00
Jeff Moe 1517670e7c wut-ia Internet Archive download stub 2022-06-10 19:41:20 -06:00
Jeff Moe 4d175ce254 Merge branch 'master' of spacecruft.org:spacecruft/satnogs-wut 2022-05-30 12:07:57 -06:00
Jeff Moe e63c52299b Observation ID by year 2022-05-30 12:07:43 -06:00
Jeff Moe 2d7f366ecc Sites re-built with new python/tf/keras/debian/etc 2022-05-30 00:47:12 -06:00
55 changed files with 684 additions and 295 deletions

1
.gitignore vendored

@@ -9,3 +9,4 @@ notebooks/logs/
.~lock.*#
log/
notebooks/model.png
bin/

60
Makefile 100644

@@ -0,0 +1,60 @@
# Makefile
prefix = /usr/local
bindir = $(prefix)/bin
all:
$(MAKE) -C src
clean:
rm -fr bin/
install:
@cp -vp bin/* $(bindir)/
uninstall:
@rm -vf \
$(bindir)/wut \
$(bindir)/wut-aria-active \
$(bindir)/wut-aria-add \
$(bindir)/wut-aria-daemon \
$(bindir)/wut-aria-info \
$(bindir)/wut-aria-methods \
$(bindir)/wut-aria-shutdown \
$(bindir)/wut-aria-stat \
$(bindir)/wut-aria-stopped \
$(bindir)/wut-aria-waiting \
$(bindir)/wut-audio-archive \
$(bindir)/wut-audio-sha1 \
$(bindir)/wut-compare \
$(bindir)/wut-compare-all \
$(bindir)/wut-compare-tx \
$(bindir)/wut-compare-txmode \
$(bindir)/wut-compare-txmode-csv \
$(bindir)/wut-dl-sort \
$(bindir)/wut-dl-sort-tx \
$(bindir)/wut-dl-sort-txmode \
$(bindir)/wut-dl-sort-txmode-all \
$(bindir)/wut-files \
$(bindir)/wut-files-data \
$(bindir)/wut-files-data-all \
$(bindir)/wut-ia-sha1 \
$(bindir)/wut-ia-torrents \
$(bindir)/wut-img-ck.py \
$(bindir)/wut-ml \
$(bindir)/wut-ml-auto \
$(bindir)/wut-ml-load \
$(bindir)/wut-ml-save \
$(bindir)/wut-obs \
$(bindir)/wut-ogg2wav \
$(bindir)/wut-review-staging \
$(bindir)/wut-rm-random \
$(bindir)/wut-tf \
$(bindir)/wut-tf.py \
$(bindir)/wut-water \
$(bindir)/wut-water-range \
$(bindir)/wut-worker \
$(bindir)/wut-worker-mas \
$(bindir)/wut-worker-mas.py \
$(bindir)/wut-worker.py \

README.md

@@ -23,10 +23,20 @@ observation ID and return an answer whether the observation is
![Image](pics/waterfall-failed.png)
## wut Web
Main site:
* https://wut.spacecruft.org/
Source code:
* https://spacecruft.org/spacecruft/satnogs-wut
Beta (test) site:
* https://wut-beta.spacecruft.org/
Alpha (development) site:
* https://wut-alpha.spacecruft.org/
## Observations
See also:
@@ -55,14 +65,17 @@ Jupyter notebooks into websites.
* `wut.ipynb` --- Machine learning Python script using Tensorflow and Keras in a Jupyter Notebook.
* `wut-predict.ipynb` --- Make prediction (rating) of observation from pre-existing model.
* `wut-train.ipynb` --- Train models to be used by the prediction engine.
* `wut-web.ipynb`
* `wut-web-beta.ipynb`
* `wut-web-alpha.ipynb`
* `wut-web.ipynb` --- Website: https://wut.spacecruft.org/
* `wut-web-beta.ipynb` --- Website: https://wut-beta.spacecruft.org/
* `wut-web-alpha.ipynb` --- Website: https://wut-alpha.spacecruft.org/
# wut scripts
The following scripts are in the repo.
* `wut` --- Feed it an observation ID and it returns whether the observation is "good", "bad", or "failed" (see the sketch after this list).
* `wut-aria-add` --- Add a torrent from the Internet Archive to the aria daemon for downloading.
* `wut-aria-daemon` --- Run an aria daemon for torrent downloads from the Internet Archive.
* `wut-audio-archive` --- Downloads audio files from archive.org.
* `wut-audio-sha1` --- Verifies sha1 checksums of files downloaded from archive.org.
* `wut-compare` --- Compare an observation's current (presumably human) vetting with a `wut` vetting.
@@ -76,6 +89,8 @@ The following scripts are in the repo.
* `wut-dl-sort-txmode-all` --- Populate `data/` dir with waterfalls from `download/` using all encodings.
* `wut-files` --- Tells you about what files you have in `downloads/` and `data/`.
* `wut-files-data` --- Tells you about what files you have in `data/`.
* `wut-ia` --- Download SatNOGS data from the Internet Archive at `archive.org`.
* `wut-ia-torrents` --- Download SatNOGS torrents from the Internet Archive at `archive.org`.
* `wut-img-ck.py` --- Validate image files are not corrupt with PIL.
* `wut-ml` --- Main machine learning Python script using Tensorflow and Keras.
* `wut-ml-auto` --- Machine learning Python script using Tensorflow and Keras, auto.
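As referenced above, a hypothetical way to drive `wut` from Python (the script itself is shell; its exact output format may vary):
```python
#!/usr/bin/env python3
# Hypothetical wrapper: ask wut to rate one observation ID.
# Assumes wut is installed on PATH (see `make install` below).
import subprocess

obs_id = "1467000"  # example observation ID used elsewhere in these docs
result = subprocess.run(["wut", obs_id], capture_output=True, text=True)
print(result.stdout.strip())  # e.g. a "good", "bad", or "failed" verdict
```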
@@ -157,6 +172,13 @@ Install Python packages:
pip install --user --upgrade -r requirements.txt
```
Make and install `satnogs-wut`:
```
make
sudo make install
```
### Tensorflow KVM Notes
Note: for KVM, pass `cpu=host` if the host has "avx" in `/proc/cpuinfo`.
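As a quick sanity check, a hypothetical helper (not part of satnogs-wut) that looks for the "avx" flag before enabling `cpu=host` on a guest:
```python
#!/usr/bin/env python3
# Hypothetical helper, not in this repo: check for the "avx" CPU flag
# before passing cpu=host to a KVM guest for TensorFlow.
def host_has_avx(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "avx" in line.split()
    return False

if __name__ == "__main__":
    if host_has_avx():
        print("AVX present; cpu=host should expose it to the guest.")
    else:
        print("No AVX; stock TensorFlow wheels may not run here.")
```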
@@ -223,6 +245,27 @@ Files in the `preprocess/` directory have been preprocessed to be used
further in the pipeline. This contains `.wav` files that have been
decoded from `.ogg` files.
## Internet Archive Downloads
The Internet Archive has a mirror of data from the SatNOGS network.
It is better to download from there to save on Libre Space Foundation
resources.
* https://archive.org/details/satnogs
To download, perhaps do something like the following.
Get an account at archive.org, then run this to set up your account locally:
```
ia configure
```
To download all the SatNOGS collections' `.torrent` files from the
Internet Archive, run:
```
wut-ia-torrents
```
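For reference, a condensed sketch of what `wut-ia-torrents` does with the `internetarchive` package (the full script appears later in this compare; `/srv/dl` is the download directory it uses):
```python
#!/usr/bin/env python3
# Condensed sketch of wut-ia-torrents; the full script is shown
# later in this compare. Fetches .torrent files per collection.
from internetarchive import download, search_items

obs_dl = '/srv/dl'  # download directory used by the wut-aria-* scripts
for item in search_items('identifier:satnogs-observations-*'):
    obs_id = item['identifier']
    print('Collection', obs_id)
    download(obs_id, verbose=True, glob_pattern='*.torrent',
             checksum=True, destdir=obs_dl,
             retries=4, ignore_errors=True)
```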
# Caveats
This is the first artificial intelligence script I've done,

notebooks/wut-predict.ipynb

@@ -10,38 +10,13 @@
"#\n",
"# https://spacecruft.org/spacecruft/satnogs-wut\n",
"# Based on data/train and data/val directories builds a wut.h5 file.\n",
"# Reads wut.h5 and tests files in data/test/unvetted/"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# GPLv3+"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reads wut.h5 and tests files in data/test/unvetted/\n",
"#\n",
"# GPLv3+\n",
"#\n",
"# Built using Jupyter, Tensorflow, Keras"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"print(\"Start\")"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -99,29 +74,12 @@
"from sklearn.decomposition import PCA\n",
"\n",
"# Seaborn pip dependency\n",
"import seaborn as sns"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Interact\n",
"# https://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html\n",
"import seaborn as sns\n",
"\n",
"from __future__ import print_function\n",
"from ipywidgets import interact, interactive, fixed, interact_manual\n",
"import ipywidgets as widgets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Display Images\n",
"import ipywidgets as widgets\n",
"\n",
"from IPython.display import display, Image"
]
},
@@ -131,7 +89,13 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"Python import done\")"
"#ENCODING='APT'\n",
"#ENCODING='CW'\n",
"#ENCODING='FM'\n",
"#ENCODING='FSK9k6'\n",
"ENCODING='GMSK2k4'\n",
"#ENCODING='GMSK4k8'\n",
"#ENCODING='USB'"
]
},
{
@@ -140,7 +104,9 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"Load HDF file\")"
"h5_file=(\"wut-\" + ENCODING + \".h5\")\n",
"model_path_h5 = os.path.join('/srv/satnogs/data/models/', ENCODING, h5_file)\n",
"print(model_path_h5)"
]
},
{
@@ -149,7 +115,7 @@
"metadata": {},
"outputs": [],
"source": [
"model = load_model('data/models/wut-DUV.tf')"
"model = load_model(model_path_h5)"
]
},
{
@@ -158,16 +124,9 @@
"metadata": {},
"outputs": [],
"source": [
"test_dir = os.path.join('data/', 'test')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"num_test = len(os.listdir(test_dir))"
"test_dir = os.path.join('/srv/satnogs/data/', 'test')\n",
"num_test = len(os.listdir(test_dir))\n",
"print(\"Will test\", num_test, \"waterfall PNG files under this driectory:\\n\", test_dir)"
]
},
{
@@ -177,11 +136,11 @@
"outputs": [],
"source": [
"# Good results\n",
"#batch_size = 128\n",
"#epochs = 6\n",
"batch_size = 128\n",
"epochs = 6\n",
"# Testing, faster more inaccurate results\n",
"batch_size = 32\n",
"epochs = 3"
"#batch_size = 32\n",
"#epochs = 3"
]
},
{
@@ -209,15 +168,6 @@
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(test_dir)"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -247,7 +197,15 @@
"metadata": {},
"outputs": [],
"source": [
"# This function will plot images in the form of a grid with 1 row and 3 columns where images are placed in each column.\n",
"print(\"Number of observations to test:\", num_test)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def plotImages(images_arr):\n",
" fig, axes = plt.subplots(1, 3, figsize=(20,20))\n",
" axes = axes.flatten()\n",
@@ -264,28 +222,7 @@
"metadata": {},
"outputs": [],
"source": [
"plotImages(sample_test_images[0:3])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# https://keras.io/models/sequential/\n",
"print(\"predict\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#pred=model.predict_generator(test_data_gen,\n",
"#steps=1,\n",
"#verbose=1)"
"plotImages(sample_test_images[0:2])"
]
},
{
@@ -297,8 +234,7 @@
"prediction = model.predict(\n",
" x=test_data_gen,\n",
" verbose=1\n",
")\n",
"print(\"end predict\")"
")"
]
},
{
@@ -316,7 +252,6 @@
"metadata": {},
"outputs": [],
"source": [
"# Show prediction score\n",
"print(prediction)"
]
},
@@ -360,9 +295,7 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The End"
]
"source": []
}
],
"metadata": {
@@ -381,7 +314,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.6"
}
},
"nbformat": 4,

notebooks/wut-train.ipynb

@@ -3,7 +3,9 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# wut-train --- What U Think? SatNOGS Observation AI, training application.\n",
@@ -67,18 +69,13 @@
"metadata": {},
"outputs": [],
"source": [
"# Visualization\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"from sklearn.decomposition import PCA\n",
"# Seaborn pip dependency\n",
"import seaborn as sns\n",
"# Interact\n",
"# https://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html\n",
"from ipywidgets import interact, interactive, fixed, interact_manual\n",
"import ipywidgets as widgets\n",
"# Display Images\n",
"from IPython.display import display, Image\n",
"from IPython.display import SVG"
]
@@ -89,14 +86,35 @@
"metadata": {},
"outputs": [],
"source": [
"ENCODING='GMSK'\n",
"batch_size = 64\n",
"epochs = 4\n",
"# Failing with this now:\n",
"#batch_size = 128\n",
"#epochs = 4\n",
"IMG_WIDTH = 416\n",
"IMG_HEIGHT = 803"
"#ENCODING='APT'\n",
"#ENCODING='BPSK1k2' # Fail\n",
"#ENCODING='FSK9k6'\n",
"#ENCODING='FM'\n",
"ENCODING='GMSK2k4'\n",
"#ENCODING='GMSK4k8'\n",
"#ENCODING='USB'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#batch_size = 8\n",
"#atch_size = 16\n",
"#atch_size = 32\n",
"batch_size = 64\n",
"#batch_size = 128\n",
"#batch_size = 256\n",
"#epochs = 4\n",
"epochs = 8\n",
"#IMG_WIDTH = 208\n",
"#IMG_HEIGHT = 402\n",
"IMG_WIDTH = 416\n",
"IMG_HEIGHT = 803\n",
"#IMG_WIDTH = 823\n",
"#IMG_HEIGHT = 1603"
]
},
{
@@ -105,7 +123,6 @@
"metadata": {},
"outputs": [],
"source": [
"train_dir = os.path.join('/srv/satnogs/data/txmodes', ENCODING )\n",
"train_dir = os.path.join('/srv/satnogs/data/txmodes', ENCODING, 'train')\n",
"val_dir = os.path.join('/srv/satnogs/data/txmodes', ENCODING, 'val')\n",
"train_good_dir = os.path.join(train_dir, 'good')\n",
@@ -126,21 +143,18 @@
"metadata": {},
"outputs": [],
"source": [
"print('total training good images:', num_train_good)\n",
"print('total training bad images:', num_train_bad)\n",
"#print(\"--\")\n",
"print(\"Total training images:\", total_train)\n",
"#print('total validation good images:', num_val_good)\n",
"#print('total validation bad images:', num_val_bad)\n",
"#print(\"--\")\n",
"#print(\"Total validation images:\", total_val)\n",
"#print(\"Reduce training and validation set when testing\")\n",
"total_train = 100\n",
"total_val = 100\n",
"#print(\"Train =\")\n",
"#print(total_train)\n",
"#print(\"Validation =\")\n",
"#print(total_val)"
"print('Training good images: ', num_train_good)\n",
"print('Training bad images: ', num_train_bad)\n",
"print('Training images: ', total_train)\n",
"print('Validation good images: ', num_val_good)\n",
"print('Validation bad images: ', num_val_bad)\n",
"print('Validation images: ', total_val)\n",
"print('')\n",
"#print('Reduce training and validation set')\n",
"#total_train = 1000\n",
"#total_val = 1000\n",
"print('Training reduced to: ', total_train)\n",
"print('Validation reduced to: ', total_val)"
]
},
{
@@ -210,7 +224,6 @@
"metadata": {},
"outputs": [],
"source": [
"# This function will plot images in the form of a grid with 1 row and 3 columns where images are placed in each column.\n",
"def plotImages(images_arr):\n",
" fig, axes = plt.subplots(1, 3, figsize=(20,20))\n",
" axes = axes.flatten()\n",
@@ -269,8 +282,6 @@
"metadata": {},
"outputs": [],
"source": [
"#tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)\n",
"#tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)\n",
"tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1, write_graph=True, write_images=True, embeddings_freq=1, update_freq='batch')"
]
},
@@ -299,11 +310,8 @@
"metadata": {},
"outputs": [],
"source": [
"#wutoptimizer = 'adam'\n",
"wutoptimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, amsgrad=True)\n",
"\n",
"wutloss = 'binary_crossentropy'\n",
"#wutmetrics = 'accuracy'\n",
"wutmetrics = ['accuracy']"
]
},
@@ -324,7 +332,7 @@
"metadata": {},
"outputs": [],
"source": [
"model.summary()"
"#model.summary()"
]
},
{
@@ -365,7 +373,6 @@
"metadata": {},
"outputs": [],
"source": [
"# Need ~64 gigs RAM+, 20 gig disk\n",
"history = model.fit(\n",
" train_data_gen,\n",
" steps_per_epoch=total_train // batch_size,\n",
@@ -400,7 +407,9 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"loss = history.history['loss']\n",
@@ -408,6 +417,12 @@
"\n",
"epochs_range = range(epochs)\n",
"\n",
"save_plot_dir = os.path.join('/srv/satnogs/data/models/', ENCODING)\n",
"os.makedirs(save_plot_dir, exist_ok=True)\n",
"plot_file=(\"wut-plot-\" + ENCODING + \".png\")\n",
"save_path_plot = os.path.join(save_plot_dir, plot_file)\n",
"print(save_path_plot)\n",
"\n",
"plt.figure(figsize=(8, 8))\n",
"plt.subplot(1, 2, 1)\n",
"plt.plot(epochs_range, acc, label='Training Accuracy')\n",
@@ -420,6 +435,7 @@
"plt.plot(epochs_range, val_loss, label='Validation Loss')\n",
"plt.legend(loc='upper right')\n",
"plt.title('Training and Validation Loss')\n",
"plt.savefig(save_path_plot)\n",
"plt.show()"
]
},
@@ -445,7 +461,9 @@
"metadata": {},
"outputs": [],
"source": [
"model.save('/srv/satnogs/data/models/GMSK/wut-GMSK-202205.h5')"
"h5_file=(\"wut-\" + ENCODING + \".h5\")\n",
"save_path_h5 = os.path.join('/srv/satnogs/data/models/', ENCODING, h5_file)\n",
"print(save_path_h5)"
]
},
{
@@ -454,7 +472,7 @@
"metadata": {},
"outputs": [],
"source": [
"model.save('/srv/satnogs/data/models/GMSK/wut-GMSK-202205.tf')"
"model.save(save_path_h5)"
]
},
{
@@ -463,7 +481,9 @@
"metadata": {},
"outputs": [],
"source": [
"plot_model(model, show_shapes=True, show_layer_names=True, expand_nested=True, dpi=72, to_file='/srv/satnogs/data/models/GMSK/plot_model.png')"
"tf_modeldir=(\"wut-\" + ENCODING + \".tf\")\n",
"save_path_tf = os.path.join('/srv/satnogs/data/models/', ENCODING, tf_modeldir)\n",
"print(save_path_tf)"
]
},
{
@@ -472,8 +492,33 @@
"metadata": {},
"outputs": [],
"source": [
"SVG(model_to_dot(model).create(prog='dot', format='svg'))"
"model.save(save_path_tf)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#plot_model(model, show_shapes=True, show_layer_names=True, expand_nested=True, dpi=72, to_file='/srv/satnogs/data/models/FM/plot_model.png')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#SVG(model_to_dot(model).create(prog='dot', format='svg'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -492,7 +537,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.6"
}
},
"nbformat": 4,

notebooks/wut-web-alpha.ipynb

@@ -380,7 +380,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.6"
}
},
"nbformat": 4,

notebooks/wut-web-beta.ipynb

@@ -336,7 +336,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.6"
}
},
"nbformat": 4,

notebooks/wut-web.ipynb

@@ -216,7 +216,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.6"
}
},
"nbformat": 4,

notebooks/wut.ipynb

@@ -167,9 +167,9 @@
"metadata": {},
"outputs": [],
"source": [
"train_dir = os.path.join('data/', 'train')\n",
"val_dir = os.path.join('data/', 'val')\n",
"test_dir = os.path.join('data/', 'test')"
"train_dir = os.path.join('/srv/satnogs/data/', 'train')\n",
"val_dir = os.path.join('/srv/satnogs/data/', 'val')\n",
"test_dir = os.path.join('/srv/satnogs/data/', 'test')"
]
},
{
@@ -755,7 +755,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.6"
}
},
"nbformat": 4,

requirements.txt

@@ -1,4 +1,5 @@
black[jupyter]
internetarchive
ipywidgets
jupyterlab
matplotlib
@@ -7,6 +8,7 @@ pydot
seaborn
sklearn
tensorboard
tensorboard-plugin-profile
tensorflow_cpu
#tensorflow_gpu

7
src/Makefile 100644

@@ -0,0 +1,7 @@
all:
mkdir -p ../bin
cp -p wut wut-aria-active wut-aria-add wut-aria-daemon wut-aria-info wut-aria-methods wut-aria-shutdown wut-aria-stat wut-aria-stopped wut-aria-waiting wut-audio-archive wut-audio-sha1 wut-compare wut-compare-all wut-compare-tx wut-compare-txmode wut-compare-txmode-csv wut-dl-sort wut-dl-sort-tx wut-dl-sort-txmode wut-dl-sort-txmode-all wut-files wut-files-data wut-files-data-all wut-ia-sha1 wut-ia-torrents wut-img-ck.py wut-ml wut-ml-auto wut-ml-load wut-ml-save wut-obs wut-ogg2wav wut-review-staging wut-rm-random wut-tf wut-tf.py wut-water wut-water-range wut-worker wut-worker-mas wut-worker-mas.py wut-worker.py ../bin/
clean:
rm -fr ../bin

src/wut

@@ -16,10 +16,10 @@ OBSID="$1"
rm -rf data/test
mkdir -p data/test/unvetted
./wut-water $OBSID
wut-water $OBSID
[ -f download/$OBSID/waterfall_$OBSID_*.png ] || echo "failed"
[ -f download/$OBSID/waterfall_$OBSID_*.png ] || exit
cp -p download/$OBSID/waterfall_$OBSID_*.png data/test/unvetted/
./wut-ml 2>/dev/null | grep -e ^Observation -e "^\[\[" | sed -e 's/\[\[//' -e 's/\]\]//' -e 's/Observation: //g'
wut-ml 2>/dev/null | grep -e ^Observation -e "^\[\[" | sed -e 's/\[\[//' -e 's/\]\]//' -e 's/Observation: //g'

src/wut-aria-active

@@ -0,0 +1,14 @@
#!/usr/bin/env python3
import time
import xmlrpc.client as xmlrpclib
from pathlib import Path
from pprint import pprint
s = xmlrpclib.ServerProxy('http://localhost:4800/rpc')
path=Path('/srv/dl')
active=s.aria2.tellActive("token:yajnuAdCemNathNojdi")
pprint(active)

31
src/wut-aria-add 100755

@@ -0,0 +1,31 @@
#!/usr/bin/env python3
import time
import xmlrpc.client as xmlrpclib
from pathlib import Path
s = xmlrpclib.ServerProxy('http://localhost:4800/rpc')
path=Path('/srv/dl')
# All torrents
#torrents=sorted(list(path.glob('**/satnogs-observations-*/satnogs-observations-*_archive.torrent')))
# Added torrents
# dt-10
torrents=sorted(list(path.glob('**/satnogs-observations-000000001-000010000/satnogs-observations-*_archive.torrent')))
#torrents=sorted(list(path.glob('**/satnogs-observations-0001?0001-000??0000/satnogs-observations-*_archive.torrent')))
#torrents=sorted(list(path.glob('**/satnogs-observations-0002?0001-000??0000/satnogs-observations-*_archive.torrent')))
#torrents=sorted(list(path.glob('**/satnogs-observations-0003?0001-000??0000/satnogs-observations-*_archive.torrent')))
#torrents=sorted(list(path.glob('**/satnogs-observations-0004?0001-000??0000/satnogs-observations-*_archive.torrent')))
#torrents=sorted(list(path.glob('**/satnogs-observations-0005?0001-000??0000/satnogs-observations-*_archive.torrent')))
#torrents=sorted(list(path.glob('**/satnogs-observations-0006?0001-000??0000/satnogs-observations-*_archive.torrent')))
#torrents=sorted(list(path.glob('**/satnogs-observations-0007?0001-000??0000/satnogs-observations-*_archive.torrent')))
#torrents=sorted(list(path.glob('**/satnogs-observations-0008?0001-000??0000/satnogs-observations-*_archive.torrent')))
#torrents=sorted(list(path.glob('**/satnogs-observations-0009?0001-000??0000/satnogs-observations-*_archive.torrent')))
for i in torrents:
    print(i.name)
    s.aria2.addTorrent("token:yajnuAdCemNathNojdi",
                       xmlrpclib.Binary(open(i, mode='rb').read()))
    time.sleep(10)

src/wut-aria-daemon

@@ -0,0 +1,36 @@
#!/bin/bash
set -x
mkdir -p ~/log /srv/dl
ulimit -n 8192
aria2c \
--daemon=true \
--enable-rpc=true \
--dir=/srv/dl \
--rpc-listen-port=4800 \
--rpc-listen-all=false \
--rpc-secret=`cat /home/jebba/.aria-secret` \
--disable-ipv6=true \
--disk-cache=128M \
--file-allocation=falloc \
--log-level=notice \
--log=/home/jebba/log/aria.log \
--bt-max-open-files=1000 \
--bt-max-peers=1000 \
--continue=true \
--follow-torrent=mem \
--rpc-save-upload-metadata=false \
--max-concurrent-downloads=100 \
--bt-max-open-files=50000 \
--bt-max-peers=0 \
--allow-overwrite=true \
--max-download-result=0 \
--enable-mmap=true
exit
--deferred-input=true \
--enable-mmap
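A minimal sketch to confirm the daemon is answering RPC, in the same style as the `wut-aria-*` helpers below (`aria2.getVersion` is a standard aria2 XML-RPC method; this assumes the secret in `~/.aria-secret` matches the token the sibling scripts hardcode):
```python
#!/usr/bin/env python3
# Sketch: verify the aria2 daemon above is answering XML-RPC.
# Assumes the same port and secret token as the wut-aria-* scripts.
import xmlrpc.client as xmlrpclib
from pprint import pprint

s = xmlrpclib.ServerProxy('http://localhost:4800/rpc')
version = s.aria2.getVersion("token:yajnuAdCemNathNojdi")
pprint(version)
```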

14
src/wut-aria-info 100755

@@ -0,0 +1,14 @@
#!/usr/bin/env python3
import time
import xmlrpc.client as xmlrpclib
from pathlib import Path
from pprint import pprint
s = xmlrpclib.ServerProxy('http://localhost:4800/rpc')
path=Path('/srv/dl')
info=s.aria2.getSessionInfo("token:yajnuAdCemNathNojdi")
pprint(info)

src/wut-aria-methods

@@ -0,0 +1,14 @@
#!/usr/bin/env python3
import time
import xmlrpc.client as xmlrpclib
from pathlib import Path
from pprint import pprint
s = xmlrpclib.ServerProxy('http://localhost:4800/rpc')
path=Path('/srv/dl')
methods=s.system.listMethods()
pprint(sorted(methods))

src/wut-aria-shutdown

@@ -0,0 +1,14 @@
#!/usr/bin/env python3
import time
import xmlrpc.client as xmlrpclib
from pathlib import Path
from pprint import pprint
s = xmlrpclib.ServerProxy('http://localhost:4800/rpc')
path=Path('/srv/dl')
shutdown=s.aria2.shutdown("token:yajnuAdCemNathNojdi")
pprint(shutdown)

14
src/wut-aria-stat 100755

@@ -0,0 +1,14 @@
#!/usr/bin/env python3
import time
import xmlrpc.client as xmlrpclib
from pathlib import Path
from pprint import pprint
s = xmlrpclib.ServerProxy('http://localhost:4800/rpc')
path=Path('/srv/dl')
stat=s.aria2.getGlobalStat("token:yajnuAdCemNathNojdi")
pprint(stat)

src/wut-aria-stopped

@@ -0,0 +1,14 @@
#!/usr/bin/env python3
import time
import xmlrpc.client as xmlrpclib
from pathlib import Path
from pprint import pprint
s = xmlrpclib.ServerProxy('http://localhost:4800/rpc')
path=Path('/srv/dl')
stopped=s.aria2.tellStopped("token:yajnuAdCemNathNojdi", 0, 9999)
pprint(stopped)

src/wut-aria-waiting

@@ -0,0 +1,14 @@
#!/usr/bin/env python3
import time
import xmlrpc.client as xmlrpclib
from pathlib import Path
from pprint import pprint
s = xmlrpclib.ServerProxy('http://localhost:4800/rpc')
path=Path('/srv/dl')
waiting=s.aria2.tellWaiting("token:yajnuAdCemNathNojdi", 0, 9999)
pprint(waiting)

src/wut-compare

@@ -12,12 +12,12 @@
OBSID="$1"
# Download observation
./wut-water $OBSID
wut-water $OBSID
# Get previous rating
VET=`cat download/$OBSID/$OBSID.json | jq --compact-output '.[0] | {vetted_status}' | cut -f 2 -d ":" | sed -e 's/}//g' -e 's/"//g'`
echo "Vetted Status: $VET"
# Get Machine Learning Result
./wut $OBSID
wut $OBSID

src/wut-compare-all

@@ -25,7 +25,7 @@ do
echo -n "Vet: $VET "
# Get Machine Learning Result
WUT_VET=`./wut $OBSID | cut -f 2 -d " "`
WUT_VET=`wut $OBSID | cut -f 2 -d " "`
echo -n "Wut: $WUT_VET "
if [ $VET = $WUT_VET ] ; then
let CORRECT=$CORRECT+1

src/wut-compare-tx

@@ -27,7 +27,7 @@ do
echo -n "$OBSID "
echo -n "Vet: $VET "
# Get Machine Learning Result
WUT_VETS=`./wut $OBSID`
WUT_VETS=`wut $OBSID`
WUT_VET=`echo $WUT_VETS | cut -f 2 -d " "`
WUT_RATE=`echo $WUT_VETS | cut -f 1 -d " "`
echo -n "$WUT_VET, "

src/wut-compare-txmode

@@ -32,7 +32,7 @@ do
echo -n "$OBSID "
echo -n "Vet: $VET "
# Get Machine Learning Result
WUT_VETS=`./wut $OBSID | cut -f 2 -d " "`
WUT_VETS=`wut $OBSID | cut -f 2 -d " "`
WUT_VET=`echo $WUT_VETS | tail -1 | cut -f 2 -d " "`
WUT_RATE=`echo $WUT_VETS | head -1`
echo -n "Wut: $WUT_VET "

src/wut-compare-txmode-csv

@@ -36,7 +36,7 @@ do
echo -n "$OBSID, "
echo -n "$VET, "
# Get Machine Learning Result
WUT_VETS=`./wut $OBSID`
WUT_VETS=`wut $OBSID`
WUT_VET=`echo $WUT_VETS | cut -f 2 -d " "`
WUT_RATE=`echo $WUT_VETS | cut -f 1 -d " "`
echo -n "$WUT_VET, "

src/wut-dl-sort

@@ -26,7 +26,7 @@ cd /srv/satnogs
# Enable the following if you want to download waterfalls in this range:
#echo "Downloading Waterfalls"
#./wut-water-range $OBSIDMIN $OBSIDMAX
#wut-water-range $OBSIDMIN $OBSIDMAX
# XXX remove data/train and data/val directories XXX
echo "Removing data/ subdirectories"

src/wut-dl-sort-tx

@@ -29,7 +29,7 @@ OBSID=$OBSIDMIN
# Enable the following if you want to download waterfalls in this range:
#echo "Downloading Waterfalls"
#./wut-water-range $OBSIDMIN $OBSIDMAX
#wut-water-range $OBSIDMIN $OBSIDMAX
# XXX remove data/train and data/val directories XXX
echo "Removing data/ subdirectories"

src/wut-dl-sort-txmode

@@ -1,15 +1,20 @@
#!/bin/bash
# wut-dl-sort-txmode
#
# XXX This script removes directories in data/ !!! XXX
#
# Populates the data/ directory from the download/dir.
# Does it just for a specific transmitter mode (encoding)
# Available encodings:
# AFSK AFSK1k2 AHRPT APT BPSK BPSK1k2 BPSK9k6 BPSK12k5 BPSK400 CERTO CW DUV
# FFSK1k2 FM FSK1k2 FSK4k8 FSK9k6 FSK19k2 GFSK1k2 GFSK2k4 GFSK4k8 GFSK9k6
# GFSK19k2 GFSK Rktr GMSK GMSK1k2 GMSK2k4 GMSK4k8 GMSK9k6 GMSK19k2 HRPT LRPT
# MSK1k2 MSK2k4 MSK4k8 PSK PSK31 SSTV USB WSJT
#
# XXX This script removes directories in data/ !!! XXX
# Available encodings:
# 4FSK AFSK_TUBiX10 AFSK AHRPT AM APT ASK BPSK_PMT-A3 BPSK CERTO CW DBPSK DOKA
# DPSK DQPSK DSTAR DUV DVB-S2 FFSK FMN FM FSK_AX.25_G3RUH FSK_AX.100_Mode_5
# FSK_AX.100_Mode_6 FSK GFSK_Rktr GFSK GFSK/BPSK GMSK_USP GMSK HRPT LRPT LSB
# LoRa MFSK MSK_AX.100_Mode_5 MSK_AX.100_Mode_6 MSK OFDM OQPSK PSK31 PSK63 PSK
# QPSK31 QPSK63 QPSK SSTV USB WSJT
#
# Encoding list generator:
# for i in `curl --silent https://db.satnogs.org/api/modes/ | jq '.[] | .name' | sort -V | sed -e 's/"//g' -e 's/ /_/g' -e 's/\//_/g'` ; do echo -n "$i " ; done ; echo
#
# Usage:
# wut-dl-sort-txmode [Encoding] [Minimum Observation ID] [Maximum Observation ID]
@@ -17,6 +22,8 @@
# wut-dl-sort-txmode CW 1467000 1470000
# For December, 2019 Example:
# wut-dl-sort-txmode CW 1292461 1470525
# For July, 2022 Example:
# wut-dl-sort-txmode BPSK1k2 6154228 6283338
#
# * Takes the files in the download/ dir.
# * Looks at the JSON files to see if it is "good", "bad", or "failed".
@@ -38,7 +45,7 @@ cd $DATADIR || exit
# Enable the following if you want to download waterfalls in this range:
#echo "Downloading Waterfalls"
#./wut-water-range $OBSIDMIN $OBSIDMAX
#wut-water-range $OBSIDMIN $OBSIDMAX
# XXX remove data/train and data/val directories XXX
echo "Removing subdirectories"

src/wut-dl-sort-txmode-all

@@ -1,17 +1,22 @@
#!/bin/bash
# wut-dl-sort-txmode-all
#
# XXX This script removes directories in data/ !!! XXX
#
# Training of all waterfalls. Used for modes that have few samples.
#
# Populates the data/ directory from the download/dir.
# Does it just for a specific transmitter mode (encoding)
# Available encodings:
# AFSK AFSK1k2 AHRPT APT BPSK BPSK1k2 BPSK9k6 BPSK12k5 BPSK400 CERTO CW DUV
# FFSK1k2 FM FSK1k2 FSK4k8 FSK9k6 FSK19k2 GFSK1k2 GFSK2k4 GFSK4k8 GFSK9k6
# GFSK19k2 GFSK Rktr GMSK GMSK1k2 GMSK2k4 GMSK4k8 GMSK9k6 GMSK19k2 HRPT LRPT
# MSK1k2 MSK2k4 MSK4k8 PSK PSK31 SSTV USB WSJT
#
# XXX This script removes directories in data/ !!! XXX
# Available encodings:
# 4FSK AFSK_TUBiX10 AFSK AHRPT AM APT ASK BPSK_PMT-A3 BPSK CERTO CW DBPSK DOKA
# DPSK DQPSK DSTAR DUV DVB-S2 FFSK FMN FM FSK_AX.25_G3RUH FSK_AX.100_Mode_5
# FSK_AX.100_Mode_6 FSK GFSK_Rktr GFSK GFSK/BPSK GMSK_USP GMSK HRPT LRPT LSB
# LoRa MFSK MSK_AX.100_Mode_5 MSK_AX.100_Mode_6 MSK OFDM OQPSK PSK31 PSK63 PSK
# QPSK31 QPSK63 QPSK SSTV USB WSJT
#
# Encoding list generator:
# for i in `curl --silent https://db.satnogs.org/api/modes/ | jq '.[] | .name' | sort -V | sed -e 's/"//g' -e 's/ /_/g' -e 's/\//_/g'` ; do echo -n "$i " ; done ; echo
#
# Usage:
# wut-dl-sort-txmode-all [Minimum Observation ID] [Maximum Observation ID]
@@ -28,7 +33,6 @@
#
# Possible vetted_status: bad, failed, good, null, unknown.
OBSENC="ALL"
OBSIDMIN="$1"
OBSIDMAX="$2"
@@ -41,7 +45,7 @@ cd $DATADIR || exit
# Enable the following if you want to download waterfalls in this range:
#echo "Downloading Waterfalls"
#./wut-water-range $OBSIDMIN $OBSIDMAX
#wut-water-range $OBSIDMIN $OBSIDMAX
# XXX remove data/train and data/val directories XXX
echo "Removing subdirectories"

74
src/wut-ia-sha1 100755

@@ -0,0 +1,74 @@
#!/usr/bin/env python3
#
# wut-ia-sha1 --- Verify downloaded files checksums
#
# XXX uses both ET and xml.parsers.expat
import argparse
import os
from xml.parsers.expat import ParserCreate, ExpatError, errors
from pathlib import Path
import hashlib
import xml.etree.ElementTree as ET
dl_dir=Path('/srv/dl')
def convertxml(xmlfile, xml_attribs=True):
    with open(xmlfile, "rb") as f:
        d = xmltodict.parse(f, xml_attribs=xml_attribs, process_namespaces=False)
        return d

def parse_args():
    parser = argparse.ArgumentParser(description='sha1 check Internet Archive downloads')
    parser.add_argument('observations',
                        type=str,
                        help='Observation set. Example: 006050001-006060000')
    args = parser.parse_args()
    obs_set = 'satnogs-observations-' + args.observations
    obs_dir = Path(dl_dir, obs_set)
    filename_xml = obs_set + '_files.xml'
    print('filename XML:', filename_xml)
    xmlfile = Path(obs_dir, filename_xml)
    p = ParserCreate()
    try:
        p.ParseFile(open(xmlfile, 'rb'))
    except:
        print('No XML file to process')
        exit()
    return(xmlfile, obs_dir)

def get_sha1(filename):
    sha1 = hashlib.sha1()
    try:
        with open(filename, 'rb') as f:
            while True:
                data = f.read(1048576)
                if not data:
                    break
                sha1.update(data)
        return sha1.hexdigest()
    except:
        status='EXCEPTION'

def process_set(xmlfile, obs_dir):
    root_node = ET.parse(xmlfile).getroot()
    for tag in root_node.findall('file'):
        name = tag.get('name')
        for file_sha1 in tag.iter('sha1'):
            filename = Path(obs_dir, name)
            sha1_hash=get_sha1(filename)
            if sha1_hash == file_sha1.text:
                print('OK ', end='')
            else:
                print('FAIL ', end='')
            print(name)

def main():
    xmlfile, obs_dir = parse_args()
    process_set(xmlfile, obs_dir)

if __name__ == "__main__":
    main()

src/wut-ia-torrents

@@ -0,0 +1,31 @@
#!/usr/bin/env python3
#
# wut-ia-torrents --- Download SatNOGS torrents from the Internet Archive.
#
# https://archive.org/details/satnogs
from internetarchive import get_item
from internetarchive import get_session
from internetarchive import download
from internetarchive import search_items
import time
# Download dir
obs_dl='/srv/dl'
s = get_session()
s.mount_http_adapter()
search_results = s.search_items('satnogs-observations')
for i in search_items('identifier:satnogs-observations-*'):
    obs_id=(i['identifier'])
    print('Collection', obs_id)
    download(obs_id, verbose=True, glob_pattern='*.torrent',
             checksum=True, destdir=obs_dl,
             retries=4, ignore_errors=True)
    download(obs_id, verbose=True, glob_pattern='*_files.xml',
             checksum=True, destdir=obs_dl,
             retries=4, ignore_errors=True)
    time.sleep(3)

src/wut-ml

@@ -16,25 +16,25 @@
import os
import numpy as np
import tensorflow.python.keras
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.python.keras import optimizers
from tensorflow.python.keras.preprocessing import image
from tensorflow.python.keras.models import load_model
from tensorflow.python.keras.preprocessing.image import load_img
from tensorflow.python.keras.preprocessing.image import img_to_array
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
# XXX
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Input, concatenate
#from tensorflow.python.keras.optimizers import Adam
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, concatenate
#from tensorflow.keras.optimizers import Adam
# XXX Plot
from tensorflow.python.keras.utils import plot_model
from tensorflow.python.keras.callbacks import ModelCheckpoint
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import ModelCheckpoint
## for visualizing
import matplotlib.pyplot as plt, numpy as np
from sklearn.decomposition import PCA

src/wut-ml-auto

@@ -16,25 +16,25 @@
import os
import numpy as np
import tensorflow.python.keras
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.python.keras import optimizers
from tensorflow.python.keras.preprocessing import image
from tensorflow.python.keras.models import load_model
from tensorflow.python.keras.preprocessing.image import load_img
from tensorflow.python.keras.preprocessing.image import img_to_array
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
# XXX
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Input, concatenate
#from tensorflow.python.keras.optimizers import Adam
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, concatenate
#from tensorflow.keras.optimizers import Adam
# XXX Plot
from tensorflow.python.keras.utils import plot_model
from tensorflow.python.keras.callbacks import ModelCheckpoint
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import ModelCheckpoint
## for visualizing
import matplotlib.pyplot as plt, numpy as np
from sklearn.decomposition import PCA

src/wut-ml-load

@@ -20,15 +20,15 @@
import os
import numpy as np
import tensorflow.python.keras
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.python.keras import optimizers
from tensorflow.python.keras.preprocessing import image
from tensorflow.python.keras.models import load_model
from tensorflow.python.keras.preprocessing.image import load_img
from tensorflow.python.keras.preprocessing.image import img_to_array
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
model = load_model('/srv/satnogs/data/wut.h5')
img_width=256

src/wut-ml-save

@@ -19,15 +19,15 @@
import os
import numpy as np
import tensorflow.python.keras
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.python.keras import optimizers
from tensorflow.python.keras.preprocessing import image
from tensorflow.python.keras.models import load_model
from tensorflow.python.keras.preprocessing.image import load_img
from tensorflow.python.keras.preprocessing.image import img_to_array
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
datagen = ImageDataGenerator()
train_it = datagen.flow_from_directory('/srv/satnogs/data/train/', class_mode='binary')


22
wut-tf.py → src/wut-tf.py 100644 → 100755

@@ -15,17 +15,17 @@ import datetime
import tensorflow as tf
import tensorflow.python.keras
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.python.keras import optimizers
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.python.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.python.keras.layers import Input, concatenate
from tensorflow.python.keras.models import load_model
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.preprocessing import image
from tensorflow.python.keras.preprocessing.image import img_to_array
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.preprocessing.image import load_img
from tensorflow.keras import optimizers
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.keras.layers import Input, concatenate
from tensorflow.keras.models import load_model
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import load_img
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
"worker": [ "ml1-int:2222", "ml2-int:2222", "ml3-int:2222", "ml4-int:2222", "ml5-int:2222" ]

src/wut-water-range

@@ -9,9 +9,25 @@
#
# The last observation to start in 2019 was 1470525
# The last observation to start in 2019-11 was 1292461
#
# APPROXIMATE:
# Observations 2015: 1-86
# Observations 2016: 87-613. Many in 15,000 range too
# Observations 2017: 614-55551
# Observations 2018: 55551-388962
# Observations 2019: 388963-1470939
# Observations 2020: 1470940-3394851
# Observations 2021: 3394852-5231193
# Observations 2022-01 2022-04: 5231194-5712616
# Observations 2022-05 5712617-6021303
# Observations 2022-06 6021304-6154227
# Observations 2022-07 6154228-6283338
#
# NOTE! Observations are not in numerical order by chronology.
# It looks like it is ordered by scheduling, so an older observation can have
# a higher observation ID.
# a higher observation ID. So the above list is rough, not exact.
# Also, there are exceptions, such as observations with IDs far higher than
# others that year.
#
# So to get mostly all of the observations in December, 2019, run:
# wut-water-range 1292461 1470525
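Those approximate ranges could be captured in a small lookup table; a hypothetical sketch (the ranges are rough, per the notes above):
```python
#!/usr/bin/env python3
# Hypothetical table of the APPROXIMATE observation ID ranges listed
# above; observation IDs are not strictly chronological.
OBS_RANGES = {
    '2015': (1, 86),
    '2016': (87, 613),              # many in the 15,000 range too
    '2017': (614, 55551),
    '2018': (55551, 388962),
    '2019': (388963, 1470939),
    '2020': (1470940, 3394851),
    '2021': (3394852, 5231193),
    '2022-01..04': (5231194, 5712616),
    '2022-05': (5712617, 6021303),
    '2022-06': (6021304, 6154227),
    '2022-07': (6154228, 6283338),
}

lo, hi = OBS_RANGES['2022-07']
print('wut-water-range', lo, hi)
```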

26
src/wut-worker 100755

@@ -0,0 +1,26 @@
#!/bin/bash
# wut-worker
#
# Starts worker client.
#
# Usage:
# wut-worker
# Example:
# wut-worker
#
# Note:
# Each node needs a unique index number.
#
# NOTE!
# This generates the node number based off the hostname.
# The hosts are rs-ml1 through rs-ml10. The index starts at zero,
# so the index is hostname minus one (without alpha).
HOSTNUM=`hostname | sed -e 's/rs-ml//g'`
let HOSTNUM=$HOSTNUM-1
export TF_CONFIG='{"cluster": {"worker": [ "rs-ml1:23009", "rs-ml2:23009", "rs-ml3:23009", "rs-ml4:23009", "rs-ml5:23009", "rs-ml6:23009", "rs-ml7:23009", "rs-ml8:23009", "rs-ml9:23009", "rs-ml10:23009"]}, "task": {"index": '$HOSTNUM', "type": "worker"}}'
echo $TF_CONFIG
wut-worker.py

src/wut-worker-mas

@@ -13,14 +13,15 @@
#
# NOTE!
# This generates the node number based off the hostname.
# The hosts are ml0 through ml5.
# The hosts are rs-ml0 through rs-ml10.
HOSTNUM=`hostname | sed -e 's/ml//g'`
HOSTNUM=`hostname | sed -e 's/rs-ml//g'`
#export TF_CONFIG='{"cluster": {"worker": [ "ml0-int:2222", "ml1-int:2222", "ml2-int:2222", "ml3-int:2222", "ml4-int:2222", "ml5-int:2222"]}, "task": {"index": '$HOSTNUM', "type": "worker"}}'
export TF_CONFIG='{"cluster": {"worker": [ "ml1-int:2222", "ml2-int:2222", "ml3-int:2222", "ml4-int:2222", "ml5-int:2222"]}}'
#export TF_CONFIG='{"cluster": {"worker": [ "ml1-int:2222", "ml2-int:2222", "ml3-int:2222", "ml4-int:2222", "ml5-int:2222"]}}'
export TF_CONFIG='{"cluster": {"worker": [ "rs-ml1:23009", "rs-ml2:23009", "rs-ml3:23009", "rs-ml4:23009", "rs-ml5:23009", "rs-ml6:23009", "rs-ml7:23009", "rs-ml8:23009", "rs-ml9:23009", "rs-ml10:23009"]}}'
#export TF_CONFIG='{"cluster": {"chief": [ "ml0-int:2222" ], "worker": [ "ml1-int:2222", "ml2-int:2222", "ml3-int:2222", "ml4-int:2222", "ml5-int:2222"]}, "task": {"index": '$HOSTNUM', "type": "worker"}}'
echo $TF_CONFIG
python3 wut-worker-mas.py
wut-worker-mas.py

src/wut-worker-mas.py

@@ -15,17 +15,17 @@ import datetime
import tensorflow as tf
import tensorflow.python.keras
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.python.keras import optimizers
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.python.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.python.keras.layers import Input, concatenate
from tensorflow.python.keras.models import load_model
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.preprocessing import image
from tensorflow.python.keras.preprocessing.image import img_to_array
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.preprocessing.image import load_img
from tensorflow.keras import optimizers
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.keras.layers import Input, concatenate
from tensorflow.keras.models import load_model
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import load_img
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(True)

src/wut-worker.py

@@ -19,17 +19,17 @@ import datetime
import tensorflow as tf
import tensorflow.python.keras
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.python.keras import optimizers
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.python.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.python.keras.layers import Input, concatenate
from tensorflow.python.keras.models import load_model
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.preprocessing import image
from tensorflow.python.keras.preprocessing.image import img_to_array
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.preprocessing.image import load_img
from tensorflow.keras import optimizers
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from tensorflow.keras.layers import Input, concatenate
from tensorflow.keras.models import load_model
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.python.data.experimental.ops.distribute_options import AutoShardPolicy
get_ipython().run_line_magic('matplotlib', 'inline')
import matplotlib.pyplot as plt

wut-worker (deleted)

@@ -1,26 +0,0 @@
#!/bin/bash
# wut-worker
#
# Starts worker client.
#
# Usage:
# wut-worker
# Example:
# wut-worker
#
# Note:
# Each node needs a unique index number.
#
# NOTE!
# This generates the node number based off the hostname.
# The hosts are ml1 through ml5. The index starts at zero,
# so the index is hostname minus one (without alpha).
HOSTNUM=`hostname | sed -e 's/ml//g'`
let HOSTNUM=$HOSTNUM-1
export TF_CONFIG='{"cluster": {"worker": [ "ml1-int:2222", "ml2-int:2222", "ml3-int:2222", "ml4-int:2222", "ml5-int:2222"]}, "task": {"index": '$HOSTNUM', "type": "worker"}}'
echo $TF_CONFIG
python3 wut-worker.py