5993 Commits (tools)

Joost VandeVondele 5640ad48ae
Merge pull request #3707 from Sopel97/max_nodes
Prevent search explosion in data generation when search is bounded by node count.
2021-09-18 09:14:56 +02:00
Tomasz Sobczyk 5956efafdd Hard-kill search in generate_training_data when the node count is 3x over the limit. 2021-09-17 21:16:49 +02:00
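
A minimal sketch of such a hard kill, assuming a shared node counter polled by the generator's search; the names are illustrative, not the tool's actual identifiers:

    #include <atomic>
    #include <cstdint>

    std::atomic<std::uint64_t> nodesVisited{0};
    std::atomic<bool>          abortSearch{false};

    // Called for every node entered during data-generation search.
    void on_node_entered(std::uint64_t nodeLimit) {
        // The search normally stops near nodeLimit on its own; if a position
        // explodes and the count passes 3x the limit, kill the search outright.
        if (nodeLimit && nodesVisited.fetch_add(1, std::memory_order_relaxed) + 1 > 3 * nodeLimit)
            abortSearch.store(true, std::memory_order_relaxed);
    }
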
Joost VandeVondele cfee179152
Merge pull request #3705 from Sopel97/fix_tsan_warn
Fix a usage of sync_endl where plain endl was intended, which caused an undefined-behavior mutex unlock.
2021-09-17 12:03:55 +02:00
Tomasz Sobczyk b165fa0e96 Fix a usage of sync_endl where plain endl was intended, which caused an undefined-behavior mutex unlock. 2021-09-17 09:36:27 +02:00
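
For context, the sync_cout / sync_endl idiom looks roughly like the sketch below (close to Stockfish's misc.h, but illustrative here): sync_cout locks a global I/O mutex and sync_endl unlocks it, so writing sync_endl without a matching sync_cout unlocks a mutex the thread never locked, which is undefined behavior.

    #include <iostream>
    #include <mutex>

    enum SyncCout { IO_LOCK, IO_UNLOCK };

    inline std::ostream& operator<<(std::ostream& os, SyncCout sc) {
        static std::mutex m;
        if (sc == IO_LOCK)   m.lock();    // acquired by sync_cout
        if (sc == IO_UNLOCK) m.unlock();  // released by sync_endl
        return os;
    }

    #define sync_cout std::cout << IO_LOCK
    #define sync_endl std::endl << IO_UNLOCK

    // usage: sync_cout << "info string hello" << sync_endl;
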
Joost VandeVondele f3c921c854
Merge pull request #3704 from Sopel97/nodes_multipv
Add a nodes bound for the multiPV search in "generate_training_data".
2021-09-15 23:42:17 +02:00
Tomasz Sobczyk 474b63754d Add a nodes bound for the multiPV search in "generate_training_data". 2021-09-15 23:31:35 +02:00
Joost VandeVondele f2dbb3f6c8
Merge pull request #3701 from Sopel97/time_limit
Add "max_time_*" options to "generate_training_data" tool.
2021-09-15 06:55:53 +02:00
Tomasz Sobczyk 79abe1e662 Add "max_time_*" options to "generate_training_data" tool that allow limiting the runtime by time instead of count. 2021-09-14 14:47:24 +02:00
Joost VandeVondele a02a6bf13e
Merge pull request #3687 from Sopel97/fix_ply_init
Fix uninitialized ss->ply in data generator
2021-09-02 22:24:15 +02:00
Tomasz Sobczyk f8d1315d90 Fix uninitialized ss->ply in data generator 2021-09-02 21:31:28 +02:00
Joost VandeVondele 8fc7d9a4d4
Merge pull request #3645 from Sopel97/tools_merge
Update tools to 2021-08-09
2021-08-20 08:57:17 +02:00
Tomasz Sobczyk 2922bcc1a7 Merge remote-tracking branch 'upstream/master' into merge_tmp 2021-08-15 21:53:46 +02:00
Tomasz Sobczyk 5d99239e95 Remove old travis CI file 2021-08-15 21:50:28 +02:00
Tomasz Sobczyk 1deb64f0a7 Fix instrumentation 2021-08-15 21:50:21 +02:00
Tomasz Sobczyk d61d38586e New NNUE architecture and net
Introduces a new NNUE network architecture and associated network parameters

The summary of the changes:

* The position for each perspective is mirrored such that the king is on files e..h. This cuts the feature transformer size in half while preserving enough knowledge to stay strong (a minimal sketch of the mirroring follows this list). See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer is doubled, to 1024x2. This is made affordable mostly by the now heavily optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8 to limit the speed impact. Perhaps surprisingly, this doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq
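
A minimal sketch of the mirroring in the first bullet, using hypothetical helper names rather than the actual HalfKAv2_hm code: when the king stands on files a..d, every square is reflected across the vertical axis, so the king always ends up on files e..h and the feature transformer only has to represent half as many king placements.

    // Illustrative C++, not the Stockfish implementation.
    using Square = int;                               // 0..63, a1 = 0, b1 = 1, ..., h8 = 63

    inline int file_of(Square s) { return s & 7; }    // 0 = a-file ... 7 = h-file
    inline Square mirror(Square s) { return s ^ 7; }  // reflect a<->h, b<->g, c<->f, d<->e

    // Orient a square from one perspective, given that perspective's king square.
    inline Square orient(Square ksq, Square s) {
        return file_of(ksq) < 4 ? mirror(s) : s;      // force the king onto files e..h
    }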

The AffineTransform code did not work out of the box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Additional zero padding is also added to the output for some architectures that cannot process inputs in chunks of 8 or fewer (SSE2, NEON). VNNI uses an implementation that can keep all outputs in registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (which is perhaps why the current way of doing the affine transform even passed SPRT; see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details), and more work will be done on this in the following days. I expect the current VNNI implementation to be improved and extended to other architectures.

The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143

The training utilized 2 datasets.

    dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
    dataset B - as described in ba01f4b954

The training process was as follows:

    train on dataset A for 350 epochs, take the best net in terms of Elo at 20k nodes per move (it's fine to take anything from the later stages of training).
    convert the .ckpt to .pt
    --resume-from-model from the .pt file, train on dataset B for <600 epochs, take the best net. Lambda=0.8, applied before the loss function.

The first training command:

python3 train.py \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

The second training command:

python3 serialize.py \
    --features=HalfKAv2_hm^ \
    ../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
    ../nnue-pytorch-training/experiment_$1/base/base.pt

python3 train.py \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=0.8 \
    --max_epochs=600 \
    --resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4

LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128

LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea

LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6

closes https://github.com/official-stockfish/Stockfish/pull/3646

bench: 5189338
2021-08-15 12:05:43 +02:00
Tomasz Sobczyk 7586e49548 bump macos version to 10.15 2021-08-09 13:24:35 +02:00
Tomasz Sobczyk 2b42d3a55a remove werror 2021-08-09 13:17:38 +02:00
Tomasz Sobczyk cd26704ae0 fix mcts init 2021-08-09 13:09:14 +02:00
Tomasz Sobczyk d76be2f428
Merge branch 'tools' into tools_merge 2021-08-09 13:06:54 +02:00
Tomasz Sobczyk 368bd2e4f9 post-merge fixes 2021-08-09 13:01:52 +02:00
Tomasz Sobczyk 51b4e7bd6e Merge branch 'tools' into tools_merge 2021-08-09 11:39:42 +02:00
Joost VandeVondele dabaf2220f Revert futility pruning patches
reverts 09b6d28391 and
dbd7f602d3, which significantly impact mate-finding
capabilities. For example, on ChestUCI_23102018.epd at 1M nodes,
the number of mates found is reduced by nearly 2x without these depth conditions:

       sf6  2091
       sf7  2093
       sf8  2107
       sf9  2062
      sf10  2208
      sf11  2552
      sf12  2563
      sf13  2509
      sf14  2427
    master  1246
   patched  2467

(script for testing at https://github.com/official-stockfish/Stockfish/files/6936412/matecheck.zip)

closes https://github.com/official-stockfish/Stockfish/pull/3641

fixes https://github.com/official-stockfish/Stockfish/issues/3627

Bench: 5467570
2021-08-05 16:41:07 +02:00
VoyagerOne a1a83f3869 SEE simplification
Simplified the SEE formula by removing std::min. It should also be easier to tune.
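
A hedged sketch of the shape of this simplification; the constants and exact terms below are placeholders, not the real margins from this patch. The point is that a margin clamped with std::min becomes a single polynomial in depth, leaving plain coefficients to tune.

    #include <algorithm>

    // before (illustrative): a clamped, piecewise margin
    inline int see_margin_old(int depth) { return -std::min(30 * depth, 20 * depth * depth); }

    // after (illustrative): one smooth term, easier to tune
    inline int see_margin_new(int depth) { return -20 * depth * depth; }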

STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 22656 W: 1836 L: 1729 D: 19091
Ptnml(0-2): 54, 1426, 8267, 1521, 60
https://tests.stockfishchess.org/tests/view/610ae62f2a8a49ac5be79449

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 26248 W: 806 L: 744 D: 24698
Ptnml(0-2): 6, 668, 11715, 728, 7
https://tests.stockfishchess.org/tests/view/610b17ad2a8a49ac5be79466

closes https://github.com/official-stockfish/Stockfish/pull/3643

bench:  4915145
2021-08-05 16:32:07 +02:00
SFisGOD 73ef5b8c4a Update default net to nn-46832cfbead3.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6100e7f096b86d98abf6a832
Parameters: A total of 256 net weights and 8 net biases were tuned (output layer)
Base net: nn-56a5f1c4173a.nnue
New net: nn-ec3c8e029926.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/610733caafad2da4f4ae3da7
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-ec3c8e029926.nnue
New net: nn-46832cfbead3.nnue

STC:
LLR: 2.98 (-2.94,2.94) <-0.50,2.50>
Total: 50520 W: 3953 L: 3765 D: 42802
Ptnml(0-2): 138, 3063, 18678, 3235, 146
https://tests.stockfishchess.org/tests/view/610a79692a8a49ac5be793f4

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 57256 W: 1723 L: 1566 D: 53967
Ptnml(0-2): 12, 1442, 25568, 1589, 17
https://tests.stockfishchess.org/tests/view/610ac5bb2a8a49ac5be79434

Closes https://github.com/official-stockfish/Stockfish/pull/3642

Bench: 5359314
2021-08-05 08:52:07 +02:00
Stefan Geschwentner 5cd42f6b0b Simplify the new CMH pruning thresholds by directly using a quadratic formula.
This also decouples the stat bonus updates from the threshold, which creates fewer dependencies for tuning the stat bonus parameters.
Perhaps further fine-tuning of the now separated coefficients for constHist[0] and constHist[1] could give additional gains.
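
A hedged sketch of what a depth-dependent quadratic threshold can look like; the coefficients are placeholders for illustration, not the tuned values from this patch.

    // Depth-dependent quadratic pruning threshold (illustrative coefficients).
    inline int cmh_prune_threshold(int depth) {
        return -3000 * depth * depth + 500 * depth - 2000;  // hypothetical values
    }

    // A quiet move would then be skipped when both continuation-history scores
    // fall below the threshold, independently of the stat bonus formula:
    //     if (contHist0 < cmh_prune_threshold(depth) && contHist1 < cmh_prune_threshold(depth))
    //         skip the move;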

STC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 78384 W: 6134 L: 6090 D: 66160
Ptnml(0-2): 207, 5013, 28705, 5063, 204
https://tests.stockfishchess.org/tests/view/6106d235afad2da4f4ae3d4b

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 38176 W: 1149 L: 1095 D: 35932
Ptnml(0-2): 6, 1000, 17030, 1038, 14
https://tests.stockfishchess.org/tests/view/6107a080afad2da4f4ae3def

closes https://github.com/official-stockfish/Stockfish/pull/3639

Bench: 5098146
2021-08-05 08:47:33 +02:00
VoyagerOne 31ebd918ea Futility pruning simplification
Remove the CMH conditions from futility pruning.

STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 93520 W: 7165 L: 7138 D: 79217
Ptnml(0-2): 222, 5923, 34427, 5982, 206
https://tests.stockfishchess.org/tests/view/61083104e50a153c346ef8df

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 59072 W: 1746 L: 1706 D: 55620
Ptnml(0-2): 13, 1562, 26353, 1588, 20
https://tests.stockfishchess.org/tests/view/610894f2e50a153c346ef913

closes https://github.com/official-stockfish/Stockfish/pull/3638

Bench: 5229673
2021-08-05 08:44:38 +02:00
VoyagerOne a0fca67da4 CMH Pruning Tweak
Replace CounterMovePruneThreshold with a depth-dependent threshold.

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 35512 W: 2718 L: 2552 D: 30242
Ptnml(0-2): 66, 2138, 13194, 2280, 78
https://tests.stockfishchess.org/tests/view/6104442fafad2da4f4ae3b94

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36536 W: 1150 L: 1019 D: 34367
Ptnml(0-2): 10, 920, 16278, 1049, 11
https://tests.stockfishchess.org/tests/view/6104b033afad2da4f4ae3bbc

closes https://github.com/official-stockfish/Stockfish/pull/3636

Bench: 5848718
2021-07-31 15:29:19 +02:00
Tomasz Sobczyk 26edf9534a Avoid unnecessary stores in the affine transform
This patch improves the codegen in the AffineTransform::forward function for architectures >= SSSE3. The current code works directly on memory, and the compiler cannot see that the stores through outptr do not alias the loads through weights and input32. The solution implemented here is to perform the affine transform with local variables as accumulators and only store the result to memory at the end. The number of accumulators required is OutputDimensions / OutputSimdWidth, which means that the 1024->16 affine transform requires 4 registers with SSSE3, 2 with AVX2, and 1 with AVX512. It also cuts the number of stores required by NumRegs * 256 for each node evaluated. The local accumulators are expected to be assigned to registers, but even if this cannot be done in some cases due to register pressure, it still helps the compiler see that there is no aliasing between the loads and stores, and may still result in better codegen.

See https://godbolt.org/z/59aTKbbYc for codegen comparison.
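
A scalar illustration of the accumulator idea (assumed dimensions, no SIMD): the sums live in a local array the compiler can keep in registers, and memory is written exactly once at the end, so the stores provably don't alias the weight and input loads.

    #include <cstdint>

    constexpr int InputDimensions  = 1024;
    constexpr int OutputDimensions = 16;

    void affine_forward(const std::int8_t*  weights,   // [OutputDimensions][InputDimensions]
                        const std::uint8_t* input,
                        const std::int32_t* biases,
                        std::int32_t*       output) {
        std::int32_t acc[OutputDimensions];            // local accumulators
        for (int o = 0; o < OutputDimensions; ++o)
            acc[o] = biases[o];

        for (int i = 0; i < InputDimensions; ++i)      // accumulate without touching output
            for (int o = 0; o < OutputDimensions; ++o)
                acc[o] += std::int32_t(weights[o * InputDimensions + i]) * input[i];

        for (int o = 0; o < OutputDimensions; ++o)     // single store pass at the end
            output[o] = acc[o];
    }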

passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140328 W: 10635 L: 10358 D: 119335
Ptnml(0-2): 302, 8339, 52636, 8554, 333

closes https://github.com/official-stockfish/Stockfish/pull/3634

No functional change
2021-07-30 17:15:52 +02:00
SFisGOD e973eee919 Update default net to nn-56a5f1c4173a.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/60fd24efd8a6b65b2f3a796e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
New best values: Half of the changes from the tuning run
New net: nn-5992d3ba79f3.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/60fec7d6d8a6b65b2f3a7aa2
Parameters: A total of 128 net biases were tuned (hidden layer 1)
New best values: Half of the changes from the tuning run
New net: nn-56a5f1c4173a.nnue

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140392 W: 10863 L: 10578 D: 118951
Ptnml(0-2): 347, 8754, 51718, 9021, 356
https://tests.stockfishchess.org/tests/view/610037e396b86d98abf6a79e

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 14216 W: 454 L: 355 D: 13407
Ptnml(0-2): 4, 323, 6356, 420, 5
https://tests.stockfishchess.org/tests/view/61019995afad2da4f4ae3a3c

Closes #3633

Bench: 4801359
2021-07-29 07:35:13 +02:00
SFisGOD 237ed1ef8f Update default net to nn-26abeed38351.nnue
SPSA: https://tests.stockfishchess.org/tests/view/60fba335d8a6b65b2f3a7891

New best values: Half of the changes from the tuning run.
Setting: nodestime=300 with 10+0.1 (approximate real TC is 2.5 seconds)
The rest is the same as described in #3593

The change from nodestime=600 to 300 was suggested by gekkehenker to prevent time losses for some slow workers
SFisGOD@94cd757#commitcomment-53324840

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 67448 W: 5241 L: 5036 D: 57171
Ptnml(0-2): 151, 4198, 24827, 4391, 157
https://tests.stockfishchess.org/tests/view/60fd50f2d8a6b65b2f3a798e

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 48752 W: 1504 L: 1358 D: 45890
Ptnml(0-2): 13, 1226, 21754, 1368, 15
https://tests.stockfishchess.org/tests/view/60fd7bb2d8a6b65b2f3a79a9

Closes https://github.com/official-stockfish/Stockfish/pull/3630

Bench:  5124774
2021-07-26 07:52:59 +02:00
Giacomo Lorenzetti 910d26b5c3 Simplification in LMR
This commit removes the `!captureOrPromotion` condition from ttCapture reduction and from good/bad history reduction (similar to #3619).

passed STC:
https://tests.stockfishchess.org/tests/view/60fc734ad8a6b65b2f3a7922
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 48680 W: 3855 L: 3776 D: 41049
Ptnml(0-2): 118, 3145, 17744, 3206, 127

passed LTC:
https://tests.stockfishchess.org/tests/view/60fce7d5d8a6b65b2f3a794c
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 86528 W: 2471 L: 2450 D: 81607
Ptnml(0-2): 28, 2203, 38777, 2232, 24

closes https://github.com/official-stockfish/Stockfish/pull/3629

Bench: 4951406
2021-07-26 07:48:58 +02:00
MichaelB7 b939c80513 Update the default net to nn-76a8a7ffb820.nnue.
Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGOD), built on top of previous developments by restarting from good nets.

Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:

The initial net nn-d8609abe8caf.nnue was trained by generating around 16B positions of training data from the last master net nn-9e3c6298299a.nnue, then training, continuing from the master net, with lambda=0.2 and a sampling ratio of 1. Training starts with LR=2e-3 and drops the LR by a factor of 0.5 until it reaches LR=5e-4. in_scaling is set to 361. No other significant changes were made to the pytorch trainer.

Training data gen command (generates in chunks of 200k positions):

generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack

PyTorch trainer command (note that this trains for only 20 epochs per invocation; train repeatedly until convergence):

python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val

See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.

Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:

The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt

This run was thus started from Sergio Vieri's net nn-d8609abe8caf.nnue.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack, so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack close to equal weighting with the Training_Data binpack.

model.py modifications:

    loss = torch.pow(torch.abs(p - q), 2.6).mean()
    optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)

LR = 8.0e-5, calculated as 1.5e-3*(.992^360) - the idea here was to take a highly trained net and just use all.binpack as a finishing micro-refinement touch for the last 2 Elo or so. This net was discovered at the 59th epoch. For this micro-optimization, I set the checkpoint period to 5 in train.py, so every 5th epoch produces a checkpoint file.

The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.

passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17

passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100.

closes https://github.com/official-stockfish/Stockfish/pull/3626

Bench: 5169957
2021-07-24 18:04:59 +02:00
Giacomo Lorenzetti a85928e7ec Apply good/bad history reduction also when inCheck
The main idea is that, in some cases, 'in check' situations are not so different from 'not in check' ones.
Trying to use the piece count to select only a few 'in check' situations failed LTC testing.
It could be interesting to apply one of these ideas in other parts of the search function.

passed STC:
https://tests.stockfishchess.org/tests/view/60f1b68dd1189bed71812d40
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 53472 W: 4078 L: 4008 D: 45386
Ptnml(0-2): 127, 3297, 19795, 3413, 104

passed LTC:
https://tests.stockfishchess.org/tests/view/60f291e6d1189bed71812de3
LLR: 2.92 (-2.94,2.94) <-2.50,0.50>
Total: 89712 W: 2651 L: 2632 D: 84429
Ptnml(0-2): 60, 2261, 40188, 2294, 53

closes https://github.com/official-stockfish/Stockfish/pull/3619

Bench: 5185789
2021-07-23 19:02:58 +02:00
pb00067 760b7462bc Simplify lowply-history scoring logic
STC:
https://tests.stockfishchess.org/tests/view/60eee559d1189bed71812b16
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 33976 W: 2523 L: 2431 D: 29022
Ptnml(0-2): 66, 2030, 12730, 2070, 92

LTC:
https://tests.stockfishchess.org/tests/view/60eefa12d1189bed71812b24
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 107240 W: 3053 L: 3046 D: 101141
Ptnml(0-2): 56, 2668, 48154, 2697, 45

closes https://github.com/official-stockfish/Stockfish/pull/3616

bench: 5199177
2021-07-23 18:53:03 +02:00
Vizvezdenec d957179df7 Prune illegal moves in qsearch earlier
The main idea is that illegal moves influencing search or
qsearch obviously can't be any sort of good. The only reason
the legality checks for search and qsearch were originally
done after the moves could already influence some heuristics
is that the legality check is computationally expensive.
In search, the check was eventually moved to a place that
ensures illegal moves can't influence the search.

This patch shows that the same can be done for qsearch. It
passed STC with Elo-gaining bounds, and it removes three lines
of code because the move count no longer needs to be
incremented and decremented for illegal moves.
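
A schematic of the reordering with stand-in types (only the position of the legality check relative to the bookkeeping matters here):

    struct Position { bool legal(int move) const { return true; } };  // stand-in

    int search_moves(const Position& pos, const int* moves, int n) {
        int moveCount = 0;
        for (int i = 0; i < n; ++i) {
            if (!pos.legal(moves[i]))
                continue;            // rejected before any heuristic bookkeeping
            ++moveCount;             // counters and pruning now see legal moves only
            // ... prune / recurse on moves[i] ...
        }
        return moveCount;
    }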

passed STC with Elo-gaining bounds
https://tests.stockfishchess.org/tests/view/60f20aefd1189bed71812da0
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 61512 W: 4688 L: 4492 D: 52332
Ptnml(0-2): 139, 3730, 22848, 3874, 165

A functionally identical version, with the condition moved even
earlier, passed LTC with simplification bounds.
https://tests.stockfishchess.org/tests/view/60f292cad1189bed71812de9
LLR: 2.98 (-2.94,2.94) <-2.50,0.50>
Total: 60944 W: 1724 L: 1685 D: 57535
Ptnml(0-2): 11, 1556, 27298, 1597, 10

closes https://github.com/official-stockfish/Stockfish/pull/3618

bench 4709569
2021-07-23 18:47:30 +02:00
Liam Keegan bc654257e7 Add macOS and windows to CI
- macOS
  - system clang
  - gcc
- windows / msys2
  - mingw 64-bit gcc
  - mingw 32-bit gcc
- minor code fixes to get new CI jobs to pass
  - code: suppress unused-parameter warning on 32-bit windows
  - Makefile: if arch=any on macos, don't specify arch at all

fixes https://github.com/official-stockfish/Stockfish/issues/2958

closes https://github.com/official-stockfish/Stockfish/pull/3623

No functional change
2021-07-23 18:16:05 +02:00
VoyagerOne 36f8d3806b Don't save excluded move eval in TT
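
A hedged, self-contained sketch of the idea (illustrative structure, not the actual search code): a singular-extension search runs with one move excluded, so its result describes a crippled position and is no longer written back to the transposition table.

    #include <cstdint>

    struct TTEntry {
        void save(std::uint64_t key, int value, int depth) { /* store the fields */ }
    };

    void store_search_result(TTEntry& tte, std::uint64_t key, int value, int depth,
                             int excludedMove) {  // 0 when nothing is excluded
        if (excludedMove)
            return;                               // don't pollute the TT
        tte.save(key, value, depth);
    }
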
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17544 W: 1384 L: 1236 D: 14924
Ptnml(0-2): 37, 1031, 6499, 1157, 48
https://tests.stockfishchess.org/tests/view/60ec8d9bd1189bed71812999

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 26136 W: 823 L: 707 D: 24606
Ptnml(0-2): 6, 643, 11656, 755, 8
https://tests.stockfishchess.org/tests/view/60ecb11ed1189bed718129ba

closes https://github.com/official-stockfish/Stockfish/pull/3614

Bench: 5505251
2021-07-13 17:35:20 +02:00
Vizvezdenec dbd7f602d3 Remove second futility pruning depth limit
This patch removes the lmrDepth limit for futility pruning at parent nodes.
Since the pruning is already capped by a margin that is a function of lmrDepth, there is no need for an additional lmrDepth cap.
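
A minimal sketch of why the explicit cap is redundant; the constants are illustrative placeholders, not the tuned master values. Because the margin grows with lmrDepth, the inequality almost never holds at larger depths, so the pruning switches itself off.

    inline int futility_margin(int lmrDepth) { return 150 + 150 * lmrDepth; }  // hypothetical

    inline bool futility_prune_parent(int lmrDepth, int staticEval, int alpha) {
        // previously also guarded by an explicit cap such as: lmrDepth < 7
        return staticEval + futility_margin(lmrDepth) <= alpha;
    }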

passed STC
https://tests.stockfishchess.org/tests/view/60e9b5dfd1189bed71812777
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 14872 W: 1264 L: 1145 D: 12463
Ptnml(0-2): 37, 942, 5369, 1041, 47

passed LTC
https://tests.stockfishchess.org/tests/view/60e9c635d1189bed71812790
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 40336 W: 1280 L: 1225 D: 37831
Ptnml(0-2): 24, 1057, 17960, 1094, 33

closes https://github.com/official-stockfish/Stockfish/pull/3612

bench: 5064969
2021-07-13 17:33:20 +02:00
pb00067 f4986f4596 SEE: simplify stm variable initialization
Pull request #3458 removed the only usage of pos.see_ge() with moves of
pieces that don't belong to the side to move, so we can simplify the
initialization and add an assert.

closes https://github.com/official-stockfish/Stockfish/pull/3607

No functional change
2021-07-13 17:31:15 +02:00
Vizvezdenec 09b6d28391 Remove futility pruning depth limit
This patch removes the futility pruning depth limit for child-node futility pruning.
In current master the pruning was capped twice, by depth and by the futility margin, which is itself a function of depth, so the extra cap didn't make much sense.

passed STC
https://tests.stockfishchess.org/tests/view/60e2418f9ea99d7c2d693e64
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 116168 W: 9100 L: 9097 D: 97971
Ptnml(0-2): 319, 7496, 42476, 7449, 344

passed LTC
https://tests.stockfishchess.org/tests/view/60e3374f9ea99d7c2d693f20
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 43304 W: 1282 L: 1231 D: 40791
Ptnml(0-2): 8, 1126, 19335, 1173, 10

closes https://github.com/official-stockfish/Stockfish/pull/3606

bench 4965493
2021-07-13 17:23:30 +02:00
SFisGOD 8fc297c506 Update default net to nn-9e3c6298299a.nnue
Optimization of nn-956480d8378f.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60da2bf63beab81350ac9fe7

Same method as described in PR #3593

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17792 W: 1525 L: 1372 D: 14895
Ptnml(0-2): 28, 1156, 6401, 1257, 54
https://tests.stockfishchess.org/tests/view/60deffc59ea99d7c2d693c19

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36544 W: 1245 L: 1109 D: 34190
Ptnml(0-2): 12, 988, 16139, 1118, 15
https://tests.stockfishchess.org/tests/view/60df11339ea99d7c2d693c22

closes https://github.com/official-stockfish/Stockfish/pull/3601

Bench: 4687476
2021-07-03 10:03:32 +02:00
Paul Mulders 516ad1c9bf Allow passing RTLIB=compiler-rt to make
Not all linux users will have libatomic installed.
When using clang as the system compiler with compiler-rt as the default
runtime library instead of libgcc, atomic builtins may be provided by compiler-rt.
This change allows such users to pass RTLIB=compiler-rt to make sure
the build doesn't error out on the missing (unnecessary) libatomic.
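
For background, a small example of the kind of code that needs a runtime atomics library at all (illustrative, not from the Stockfish sources): lock-free atomics compile to inline instructions, while oversized ones lower to __atomic_* runtime calls, satisfied by libatomic under libgcc or by compiler-rt's builtins.

    #include <atomic>
    #include <cstdint>

    struct Wide { std::uint64_t a, b; };    // 16 bytes: often not lock-free

    std::atomic<std::uint64_t> counter{0};  // lock-free: inline instructions
    std::atomic<Wide> wide{};               // may lower to runtime library calls

    bool needs_runtime_library() { return !wide.is_lock_free(); }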

closes https://github.com/official-stockfish/Stockfish/pull/3597

No functional change
2021-07-03 09:51:03 +02:00
candirufish ec8dfe7315 No cut-node reduction for killer moves.
stc:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 44344 W: 3474 L: 3294 D: 37576
Ptnml(0-2): 117, 2710, 16338, 2890, 117
https://tests.stockfishchess.org/tests/view/60d8ea673beab81350ac9eb8

ltc:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 82600 W: 2638 L: 2441 D: 77521
Ptnml(0-2): 38, 2147, 36749, 2312, 54
https://tests.stockfishchess.org/tests/view/60d9048f3beab81350ac9eed

closes https://github.com/official-stockfish/Stockfish/pull/3600

Bench: 5160239
2021-07-03 09:44:05 +02:00
xoto10 d297d1d8a7 Simplify lazy_skip.
Small speedup by removing operations in lazy_skip.

STC 10+0.1 :
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 55088 W: 4553 L: 4482 D: 46053
Ptnml(0-2): 163, 3546, 20045, 3637, 153
https://tests.stockfishchess.org/tests/view/60daa2cb3beab81350aca04d

LTC 60+0.6 :
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 46136 W: 1457 L: 1407 D: 43272
Ptnml(0-2): 10, 1282, 20442, 1316, 18
https://tests.stockfishchess.org/tests/view/60db0e753beab81350aca08e

closes https://github.com/official-stockfish/Stockfish/pull/3599

Bench 5122403
2021-07-03 09:26:58 +02:00
Stéphane Nicolet b51b094419 Simplify format_cp_aligned_dot()
closes https://github.com/official-stockfish/Stockfish/pull/3583

No functional change
2021-07-03 09:25:16 +02:00
Joost VandeVondele 7cfc1f9b15 Restore development version
No functional change
2021-07-03 09:20:06 +02:00
Joost VandeVondele 773dff0209 Stockfish 14
Official release version of Stockfish 14

Bench: 4770936

---

Today, we have the pleasure to announce Stockfish 14.

As usual, downloads will be freely available at https://stockfishchess.org

The engine is now significantly stronger than just a few months ago,
and wins four times more game pairs than it loses against the previous
release version [0]. Stockfish 14 is now at least 400 Elo ahead of
Stockfish 7, a top engine in 2016 [1]. During the last five years,
Stockfish has thus gained about 80 Elo per year.

Stockfish 14 evaluates positions more accurately than Stockfish 13 as
a result of two major steps forward in defining and training the
efficiently updatable neural network (NNUE) that provides the evaluation
for positions.

First, the collaboration with the Leela Chess Zero team - announced
previously [2] - has come to fruition. The LCZero team has provided a
collection of billions of positions evaluated by Leela that we have
combined with billions of positions evaluated by Stockfish to train the
NNUE net that powers Stockfish 14. The fact that we could use and combine
these datasets freely was essential for the progress made and demonstrates
the power of open source and open data [3].

Second, the architecture of the NNUE network was significantly updated:
the new network is not only larger, but more importantly, it deals better
with large material imbalances and can specialize for multiple phases of
the game [4]. A new project, kick-started by Gary Linscott and
Tomasz Sobczyk, led to a GPU accelerated net trainer written in
pytorch.[5] This tool allows for training high-quality nets in a couple
of hours.

Finally, this release features some search refinements, minor bug
fixes and additional improvements. For example, Stockfish is now about
90 Elo stronger for chess960 (Fischer random chess) at short time control.

The Stockfish project builds on a thriving community of enthusiasts
(thanks everybody!) that contribute their expertise, time, and resources
to build a free and open-source chess engine that is robust, widely
available, and very strong. We invite our chess fans to join the fishtest
testing framework and programmers to contribute to the project on
github [6].

Stay safe and enjoy chess!

The Stockfish team

[0] https://tests.stockfishchess.org/tests/view/60dae5363beab81350aca077
[1] https://nextchessmove.com/dev-builds
[2] https://stockfishchess.org/blog/2021/stockfish-13/
[3] https://lczero.org/blog/2021/06/the-importance-of-open-data/
[4] https://github.com/official-stockfish/Stockfish/commit/e8d64af1
[5] https://github.com/glinscott/nnue-pytorch/
[6] https://stockfishchess.org/get-involved/
2021-07-02 14:53:30 +02:00
Brad Knox 2275923d3c Update Top CPU Contributors
closes https://github.com/official-stockfish/Stockfish/pull/3595

No functional change
2021-06-29 10:24:54 +02:00
SFisGOD 49283d3a66 Update default net to nn-3475407dc199.nnue
Optimization of eight subnetwork output layers of Michael's nn-190f102a22c3.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60d5510642a522cc50282ef3

Parameters: A total of 256 net weights and 8 net biases were tuned
New best values: The raw values at the end of the tuning run were used (800k games, 5 seconds TC)
Settings: default ck value and SPSA A is 30,000 (3.75% of the total number of games)

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 29064 W: 2435 L: 2269 D: 24360
Ptnml(0-2): 72, 1857, 10505, 2029, 69
https://tests.stockfishchess.org/tests/view/60d8ea123beab81350ac9eb6

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 61848 W: 2055 L: 1884 D: 57909
Ptnml(0-2): 18, 1708, 27310, 1861, 27
https://tests.stockfishchess.org/tests/view/60d8f0393beab81350ac9ec6

closes https://github.com/official-stockfish/Stockfish/pull/3593

Bench: 4770936
2021-06-28 21:31:58 +02:00
MichaelB7 b94a651878 Make net nn-956480d8378f.nnue the default
Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_18 --resume-from-model ./pt/nn-75980ca503c6.pt

This run is thus started from a previous master net.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack, so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack close to equal weighting with the Training_Data binpack.

passed STC:
https://tests.stockfishchess.org/tests/view/60d0c0a7a8ec07dc34c072b2
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 18440 W: 1693 L: 1531 D: 15216
Ptnml(0-2): 67, 1225, 6464, 1407, 57

passed LTC:
https://tests.stockfishchess.org/tests/view/60d762793beab81350ac9d72
LLR: 2.98 (-2.94,2.94) <0.50,3.50>
Total: 93120 W: 3152 L: 2933 D: 87035
Ptnml(0-2): 48, 2581, 41076, 2814, 41

passed LTC (rebased branch to current master):
https://tests.stockfishchess.org/tests/view/60d85eeb3beab81350ac9e2b
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 42688 W: 1347 L: 1206 D: 40135
Ptnml(0-2): 14, 1097, 18981, 1238, 14.

closes https://github.com/official-stockfish/Stockfish/pull/3592

Bench: 4906727
2021-06-28 21:20:05 +02:00