Commit Graph

5384 Commits (riscv64-cartesi)

xoto10 f21a66f70d Small clean-up, Sept 2021
Closes https://github.com/official-stockfish/Stockfish/pull/3485

No functional change
2021-10-07 09:41:57 +02:00
Stéphane Nicolet 54a989930e Capping stat bonus at 2000
This patch updates the stat_bonus() function (used in the history tables to
help move ordering), keeping the same quadratic for small depths but changing
the values for depth >= 9:

The old bonus formula was increasing from zero at depth 1 to 4100 at depth 14,
then used the strange, small value of 73 for all depths >= 15.

The new bonus formula increases from 0 at depth 1 to 2000 at depth 8, then
keeps 2000 for all depths >= 8.
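
For illustration, a capped bonus of this shape could look like the sketch below (the coefficients are assumptions for illustration, not necessarily the exact quadratic used by stat_bonus() in search.cpp):

```
#include <algorithm>

// A quadratic bonus in depth d, capped at 2000 from depth 8 onwards.
// Coefficients are illustrative; the exact ones in stat_bonus() may differ.
int stat_bonus_sketch(int d) {
    return std::min((6 * d + 229) * d - 215, 2000);
}
```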

passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 169624 W: 42875 L: 42454 D: 84295
Ptnml(0-2): 585, 19340, 44557, 19729, 601
https://tests.stockfishchess.org/tests/view/615bd69e9d256038a969b97c

passed LTC:
LLR: 3.07 (-2.94,2.94) <0.50,3.50>
Total: 37336 W: 9456 L: 9191 D: 18689
Ptnml(0-2): 20, 3810, 10747, 4067, 24
https://tests.stockfishchess.org/tests/view/615c75d99d256038a969b9b2

closes https://github.com/official-stockfish/Stockfish/pull/3731

Bench: 6261865
2021-10-06 12:04:35 +02:00
Joost VandeVondele 329bdbd9cf Improve the Chess960 correction for cornered bishops
As Chess960 patches can not be tested on fishtest, this was locally tuned
and tested:

Elo: 2.36 +- 1.07
LOS: 0.999992

closes https://github.com/official-stockfish/Stockfish/pull/3730

Bench: 5714575
2021-10-06 11:57:34 +02:00
J. Oster 371b522e9e Time-management fix in MultiPV mode.
When playing games in MultiPV mode we must take care to only track the
best move changing for the first PV line. Otherwise, SF will spend most
of its time on the initial moves after the book exit.

This has been observed and reported on Discord, but can also be seen in
games played in Stefan Pohl's MultiPV experiment.

Tested with MultiPV=4.
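
A minimal sketch of the kind of guard this implies is shown below; the struct and field names (pvIdx, bestMoveChanges) follow Stockfish conventions, but the code is illustrative, not the actual diff:

```
// Only count best-move changes for the first PV line (pvIdx == 0), so that
// MultiPV search does not inflate the instability measure used by time management.
struct ThreadState {
    int    pvIdx           = 0;   // index of the PV line currently searched
    double bestMoveChanges = 0;   // instability measure read by time management
};

void on_best_move_change(ThreadState& th, int moveCount) {
    if (moveCount > 1 && th.pvIdx == 0)
        th.bestMoveChanges += 1;
}
```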

STC:
https://tests.stockfishchess.org/tests/view/615c24b59d256038a969b990
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 1744 W: 694 L: 447 D: 603
Ptnml(0-2): 32, 125, 358, 278, 79

LTC:
https://tests.stockfishchess.org/tests/view/615c31769d256038a969b993
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 2048 W: 723 L: 525 D: 800
Ptnml(0-2): 10, 158, 511, 314, 31

closes https://github.com/official-stockfish/Stockfish/pull/3729

Bench: 5714575
2021-10-06 11:53:33 +02:00
Michael Chaly 135caee606 Increase reductions with thread count
Respin of the multi-thread idea that was simplified away recently: basically, do
more reductions with higher thread counts, since Lazy SMP naturally widens the
search. With a drawish book this idea got simplified away, but with a less drawish
book it gains Elo again; maybe trying to reinstate other ideas that were previously
simplified away can be beneficial.
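
A rough sketch of how the reduction table could be scaled with the thread count at initialization (function name and constants are illustrative, not the tuned values):

```
#include <cmath>

// Scale the base LMR reduction table with the number of search threads,
// since Lazy SMP already widens the search.  Constants are illustrative.
void init_reductions(int* reductions, int maxMoves, int threadCount) {
    reductions[0] = 0;
    for (int i = 1; i < maxMoves; ++i)
        reductions[i] = int((22.0 + std::log(threadCount) / 2) * std::log(i));
}
```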

passed STC
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 39736 W: 10205 L: 9986 D: 19545
Ptnml(0-2): 45, 4254, 11064, 4447, 58
https://tests.stockfishchess.org/tests/view/615750702d02f48db3961b00

passed LTC
LLR: 2.97 (-2.94,2.94) <0.50,3.50>
Total: 60352 W: 15530 L: 15218 D: 29604
Ptnml(0-2): 24, 5900, 18016, 6212, 24
https://tests.stockfishchess.org/tests/view/6157d8935488e26ea5eace7f

closes https://github.com/official-stockfish/Stockfish/pull/3724

Bench 5714575
2021-10-03 11:28:19 +02:00
Michael Chaly 21ad356c09 Extend quiet tt moves at PvNodes
The idea is to extend some quiet ttMoves if several things indicate that
the transposition table move is going to be a good move:

1) the move is a killer - i.e. it was the best move in a nearby node;
2) its reply continuation history is really good.

This basically says that the move is good "in general" in this position,
that it is a good reply to the opponent's move, and that it was the best in
this position somewhere in the search - so extending it makes a lot of sense.
In the past year we have added a lot of extensions of different types;
maybe there is more to be found here :)
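
A minimal sketch of such a condition (the history threshold is an assumption for illustration; the real code reads Stockfish's continuation-history tables):

```
// Extend a quiet ttMove at a PV node when it is also a killer and its reply
// continuation history is very good.  Threshold is illustrative.
bool extend_quiet_tt_move(bool pvNode, bool isTtMove, bool isQuiet,
                          bool isKiller, int replyContHist) {
    const int goodHistoryThreshold = 20000;   // illustrative
    return pvNode && isTtMove && isQuiet && isKiller
        && replyContHist > goodHistoryThreshold;
}
```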

passed STC
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 42944 W: 10932 L: 10695 D: 21317
Ptnml(0-2): 141, 4869, 11210, 5116, 136
https://tests.stockfishchess.org/tests/view/614cca8e7bdc23e77ceb89f0

passed LTC
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 156848 W: 39473 L: 38893 D: 78482
Ptnml(0-2): 125, 16327, 44913, 16961, 98
https://tests.stockfishchess.org/tests/view/614cf93d7bdc23e77ceb8a13

closes https://github.com/official-stockfish/Stockfish/pull/3719

Bench: 5714575
2021-09-26 06:58:14 +02:00
Stéphane Nicolet 919da65d70 Reduction instead of cutoff
In master, during singular move analysis, when both the transposition value
and a reduced search for the other moves seem to indicate a fail high, we
heuristically prune the whole subtree and return a fail-high score.

This patch is a little bit more cautious in this case, and instead of the
risky cutoff, we now search the ttMove with a reduced depth (by two plies).
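
A sketch of the changed decision, assuming the usual singularBeta structure of the singular-extension code (illustrative, not the literal diff):

```
// When the reduced search of the other moves does not prove the ttMove
// singular and ttValue also suggests a fail high, reduce the ttMove search
// by two plies instead of returning a fail-high score.
void singular_decision(int ttValue, int value, int singularBeta, int beta, int& extension) {
    if (value < singularBeta)
        extension = 1;      // ttMove is singular: extend as before
    else if (ttValue >= beta)
        extension = -2;     // new: search the ttMove two plies shallower
}
```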

STC:
https://tests.stockfishchess.org/tests/view/614dafe07bdc23e77ceb8a89
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 46728 W: 11909 L: 11666 D: 23153
Ptnml(0-2): 181, 5288, 12168, 5561, 166

LTC:
https://tests.stockfishchess.org/tests/view/614dc84abe4c07e0ecac3c95
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 74520 W: 18809 L: 18450 D: 37261
Ptnml(0-2): 45, 7735, 21346, 8084, 50

closes https://github.com/official-stockfish/Stockfish/pull/3718

Bench: 5499262
2021-09-25 22:12:17 +02:00
OfekShochat 00e34a758f Range reductions
adding reductions for when the delta between the static eval and the child's eval is consistently low.

passed STC
https://tests.stockfishchess.org/html/live_elo.html?614d7b3c7bdc23e77ceb8a5d
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 88872 W: 22672 L: 22366 D: 43834
Ptnml(0-2): 343, 10150, 23117, 10510, 316

passed LTC
https://tests.stockfishchess.org/html/live_elo.html?614daf3e7bdc23e77ceb8a82
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 24368 W: 6153 L: 5928 D: 12287
Ptnml(0-2): 13, 2503, 6937, 2708, 23

closes https://github.com/official-stockfish/Stockfish/pull/3717

Bench: 5443950
2021-09-24 23:17:48 +02:00
Stéphane Nicolet ff3fa0c664 Tweak doubly singular condition (Topo's patch)
This patch relaxes a little bit the condition for doubly singular moves
(i.e. moves that are so forced that we think they deserve a local
double extension of the search). We lower the margin and allow up to
six such double extensions in the path between the root and the critical
node.
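
A minimal sketch of the relaxed condition (the margin value is an assumption for illustration):

```
// Allow a double extension when the reduced search falls well below
// singularBeta and at most six double extensions lie on the current path.
bool allow_double_extension(int value, int singularBeta, int doubleExtensionsOnPath) {
    const int margin = 75;    // lowered margin (illustrative)
    return value < singularBeta - margin && doubleExtensionsOnPath <= 6;
}
```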

Original idea by Siad Daboul (@TopoIogist) in PR #3709

Tested with the previous commit:

passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 33048 W: 8458 L: 8236 D: 16354
Ptnml(0-2): 120, 3701, 8660, 3923, 120
https://tests.stockfishchess.org/tests/view/614b24347bdc23e77ceb88fe

passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 54176 W: 13712 L: 13406 D: 27058
Ptnml(0-2): 36, 5653, 15399, 5969, 31
https://tests.stockfishchess.org/tests/view/614b3b727bdc23e77ceb8911

closes https://github.com/official-stockfish/Stockfish/pull/3714

Bench: 5792377
2021-09-23 23:24:28 +02:00
Stéphane Nicolet 73018a0337 Detect search explosions
This patch detects some search explosions (due to double extensions in
search.cpp) which can happen in some pathological positions, and takes
measures to ensure progress in search even for these pathological situations.

While a small number of double extensions can be useful during search
(for example to resolve a tactical sequence), a sustained regime of
double extensions leads to search explosion and a non-finishing search.
See the discussion in https://github.com/official-stockfish/Stockfish/pull/3544
and the issue https://github.com/official-stockfish/Stockfish/issues/3532 .

The implemented algorithm is the following:

a) at each node during search, store the current depth in the stack.
   Double extensions are by definition levels of the stack where the
   depth at ply N is strictly higher than depth at ply N-1.

b) during search, calculate for each thread a running average of the
   number of double extensions in the last 4096 visited nodes.

c) if one thread has more than 2% of double extensions for a sustained
   period of time (6 million consecutive nodes, or about 4 seconds on
   my iMac), we decide that this thread is in an explosion state and
   we calm down this thread by preventing it from doing any double extension
   for the next 6 million nodes.

To calculate the running averages, we also introduced an auxiliary class
generalizing the computations of ttHitAverage variable we already had in
code. The implementation uses an exponential moving average of period 4096
and resolution 1/1024, and all computations are done with integers for
efficiency.
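
A self-contained sketch of such an integer moving average is shown below (naming and details of the actual class in search.cpp may differ):

```
#include <cstdint>

// Integer exponential moving average with period 4096 and resolution 1/1024.
// The stored value is average * PERIOD * RESOLUTION, so everything stays integer.
class RunningAverageSketch {
public:
    // Initialize the average to the rational value p / q.
    void set(std::int64_t p, std::int64_t q) { average = p * PERIOD * RESOLUTION / q; }

    // Fold in a new sample v (e.g. 1 if this node was a double extension, else 0).
    void update(std::int64_t v) {
        average = RESOLUTION * v + (PERIOD - 1) * average / PERIOD;
    }

    // True if the stored average exceeds the rational threshold a / b.
    bool is_greater(std::int64_t a, std::int64_t b) const {
        return b * average > a * PERIOD * RESOLUTION;
    }

private:
    static constexpr std::int64_t PERIOD     = 4096;
    static constexpr std::int64_t RESOLUTION = 1024;
    std::int64_t average = 0;
};
```

With such a class, the 2% check in step (c) becomes a call like is_greater(2, 100) on the per-thread average.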

-----------

Example where the patch solves a search explosion:

```
   ./stockfish
   ucinewgame
   position fen 8/Pk6/8/1p6/8/P1K5/8/6B1 w - - 37 130
   go infinite
```

This algorithm does not affect search in normal, non-pathological positions.
We verified, for instance, that the usual bench is unchanged up to depth 20
at least, and that the node numbers are unchanged for a search of the starting
position at depth 32.

-------------

See https://github.com/official-stockfish/Stockfish/pull/3714

Bench: 5575265
2021-09-23 23:19:06 +02:00
Michael Chaly e8788d1b32 Combo of various parameter tweaks
Combination of parameter tweaks in search, evaluation and time management.
Original patches by snicolet xoto10 lonfom169 and Vizvezdenec.

Includes:

* Use bigger grain of positional evaluation more frequently (up to 1 exchange difference in non-pawn-material);
* More extra time according to increment;
* Increase margin for singular extensions;
* Do more aggressive parent node futility pruning.

Passed STC
https://tests.stockfishchess.org/tests/view/6147deab3733d0e0dd9f313d
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 45488 W: 11691 L: 11450 D: 22347
Ptnml(0-2): 145, 5208, 11824, 5395, 172

Passed LTC
https://tests.stockfishchess.org/tests/view/6147f1d53733d0e0dd9f3141
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 62520 W: 15808 L: 15482 D: 31230
Ptnml(0-2): 43, 6439, 17960, 6785, 33

closes https://github.com/official-stockfish/Stockfish/pull/3710

bench 5575265
2021-09-21 19:48:40 +02:00
xoto10 5b47b4e6c0 Increase optimumTime by 10%
STC 10+0.1 :
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 47032 W: 12078 L: 11841 D: 23113
Ptnml(0-2): 159, 5098, 12746, 5373, 140
https://tests.stockfishchess.org/tests/view/613f9df1f29dda16fcca8731

LTC 60+0.6 :
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 66248 W: 16631 L: 16301 D: 33316
Ptnml(0-2): 44, 6560, 19578, 6906, 36
https://tests.stockfishchess.org/tests/view/6140603d7315e7c73204a4c1

Non-regression tests with other time control styles:

Moves/Time 40/10+0 :
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 51640 W: 13350 L: 13254 D: 25036
Ptnml(0-2): 183, 5770, 13797, 5908, 162
https://tests.stockfishchess.org/tests/view/6141592b7315e7c73204a599

TCEC Style 10+0.01 :
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 20592 W: 5300 L: 5157 D: 10135
Ptnml(0-2): 81, 2240, 5544, 2317, 114
https://tests.stockfishchess.org/tests/view/61425bb27315e7c73204a6a2

Sudden death 15+0 :
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 127104 W: 32728 L: 32741 D: 61635
Ptnml(0-2): 735, 13973, 34149, 13960, 735
https://tests.stockfishchess.org/tests/view/614256a77315e7c73204a699

The first 3 tests were run with an initial version of the code, which was then modified to make the amount of extra time dependent on the size of the increment. No increment gives no extra time, and the extra time given increases until an increment of 1% or more of the remaining time gives 10% extra thinking time.
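
A minimal sketch of this scaling (function name and constants are illustrative):

```
#include <algorithm>

// No increment gives no extra time; the bonus ramps up linearly and caps at
// +10% once the increment reaches 1% of the remaining time.
double optimum_time_scale(double incrementMs, double remainingMs) {
    return std::clamp(1.0 + 10.0 * incrementMs / remainingMs, 1.0, 1.10);
}
```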

closes https://github.com/official-stockfish/Stockfish/pull/3702

Bench 6658747
2021-09-17 08:14:36 +02:00
SFisGOD 723f48dec0 Update default net to nn-13406b1dcbe0.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6134abc425b9b35584838572
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-6762d36ad265.nnue
New net: nn-c9fdeea14cb2.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/61355b7e25b9b3558483860e
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-c9fdeea14cb2.nnue
New net: nn-0ddc28184f4c.nnue

SPSA 3: https://tests.stockfishchess.org/tests/view/613737be0cd98ab40c0c9e4e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-0ddc28184f4c.nnue
New net: nn-2419828bb394.nnue

SPSA 4: https://tests.stockfishchess.org/tests/view/613966ff689039fce12e0fe7
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-2419828bb394.nnue
New net: nn-05d9b1ee3037.nnue

SPSA 5: https://tests.stockfishchess.org/tests/view/613b4a38689039fce12e1209
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-05d9b1ee3037.nnue
New net: nn-98c6ce0fc15f.nnue

SPSA 6: https://tests.stockfishchess.org/tests/view/613e331515591e7c9ebc3fe9
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-98c6ce0fc15f.nnue
New net: nn-13406b1dcbe0.nnue

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 82008 W: 21044 L: 20752 D: 40212
Ptnml(0-2): 264, 9341, 21525, 9587, 287
https://tests.stockfishchess.org/tests/view/613f7c6cf29dda16fcca870c

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 182928 W: 46258 L: 45602 D: 91068
Ptnml(0-2): 107, 19448, 51712, 20076, 121
https://tests.stockfishchess.org/tests/view/613fccb97315e7c73204a48c

Closes #3703

Bench: 6658747
2021-09-15 17:50:20 +02:00
xoto10 fd5e77950e Update 2 search parameters after tune.
A tuning run on 3 search parameters was done with 200k games, narrow ranges (50-150%) and a small value for A (3% of total games) :
https://tests.stockfishchess.org/tests/view/613b5f4b689039fce12e1220

STC 10+0.1 :
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 73112 W: 18800 L: 18520 D: 35792
Ptnml(0-2): 205, 8395, 19115, 8597, 244
https://tests.stockfishchess.org/tests/view/613cb8d2689039fce12e1308

LTC 60+0.6 :
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 45616 W: 11604 L: 11321 D: 22691
Ptnml(0-2): 24, 4769, 12946, 5038, 31
https://tests.stockfishchess.org/tests/view/613d07048253e53e97b55b32

closes https://github.com/official-stockfish/Stockfish/pull/3698

Bench 6504816
2021-09-12 18:03:56 +02:00
Michael Chaly 30fdbf4328 Decrease depth for cutnodes with no tt move
By analogy with the existing logic of decreasing depth for PvNodes without a tt move,
do the same for cutNodes.
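
A rough sketch of the combined logic (depth threshold and reduction amount are assumptions for illustration):

```
// Reduce the search depth when there is no ttMove at a PV node (existing logic)
// and, with this patch, also at a cut node.  Constants are illustrative.
int adjust_depth_without_tt_move(int depth, bool pvNode, bool cutNode, bool hasTtMove) {
    if (!hasTtMove && (pvNode || cutNode) && depth >= 6)
        depth -= 2;
    return depth;
}
```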

Passed STC
https://tests.stockfishchess.org/tests/view/613abf5a689039fce12e1155
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 90336 W: 23108 L: 22804 D: 44424
Ptnml(0-2): 286, 10316, 23642, 10656, 268

Passed LTC
https://tests.stockfishchess.org/tests/view/613ae330689039fce12e1172
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 37736 W: 9607 L: 9346 D: 18783
Ptnml(0-2): 21, 3917, 10730, 4180, 20

closes https://github.com/official-stockfish/Stockfish/pull/3697

bench 5891181
2021-09-10 11:50:43 +02:00
Stefan Geschwentner b7b6b4ba18 Further improve history updates
Now even double the history updates if a search failed low at an expected PV or CUT node.
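
A minimal sketch of the idea (the bonus formula and multiplier are assumptions for illustration):

```
#include <algorithm>

// Give the prior countermove a larger history bonus after a fail low at a
// node that was expected to be a PV or CUT node.  Values are illustrative.
int fail_low_bonus(int depth, bool expectedPvOrCutNode) {
    int bonus = std::min(32 * depth * depth, 2000);   // illustrative stat bonus
    return expectedPvOrCutNode ? 2 * bonus : bonus;
}
```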

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 30736 W: 7891 L: 7674 D: 15171
Ptnml(0-2): 90, 3477, 8017, 3694, 90
https://tests.stockfishchess.org/tests/view/61364ae30cd98ab40c0c9da5

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 73600 W: 18684 L: 18326 D: 36590
Ptnml(0-2): 41, 7734, 20899, 8078, 48
https://tests.stockfishchess.org/tests/view/6136940f0cd98ab40c0c9df3

closes https://github.com/official-stockfish/Stockfish/pull/3694

Bench: 6030657
2021-09-07 19:59:14 +02:00
Stefan Geschwentner c31fc8d163 Improve history updates
If a search failed low at an expected PV or CUT node, do greater history updates.

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 95112 W: 24293 L: 23982 D: 46837
Ptnml(0-2): 285, 10893, 24906, 11170, 302
https://tests.stockfishchess.org/tests/view/6132aa1a2ffb3c36aceb926f

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 116352 W: 29450 L: 28975 D: 57927
Ptnml(0-2): 93, 12263, 32984, 12748, 88
https://tests.stockfishchess.org/tests/view/613394d12ffb3c36aceb92f4

closes https://github.com/official-stockfish/Stockfish/pull/3693

Bench: 6130736
2021-09-06 14:19:47 +02:00
SFisGOD be63ce1bb5 Update default net to nn-6762d36ad265.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/612cdb1fbb4956d8b78eb5ab
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-fe433fd8c7f6.nnue
New net: nn-5f134823db04.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/612fcde645091e810014af19
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-5f134823db04.nnue
New net: nn-8eca5dd4e3f7.nnue

SPSA 3: https://tests.stockfishchess.org/tests/view/6130822345091e810014af61
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-8eca5dd4e3f7.nnue
New net: nn-4556108e4f00.nnue

SPSA 4: https://tests.stockfishchess.org/tests/view/613287652ffb3c36aceb923c
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-4556108e4f00.nnue
New net: nn-6762d36ad265.nnue

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 162776 W: 41220 L: 40807 D: 80749
Ptnml(0-2): 517, 18800, 42359, 19177, 535
https://tests.stockfishchess.org/tests/view/6134107125b9b35584838559

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 41056 W: 10428 L: 10156 D: 20472
Ptnml(0-2): 30, 4288, 11618, 4564, 28
https://tests.stockfishchess.org/tests/view/6134ad6525b9b3558483857a

closes https://github.com/official-stockfish/Stockfish/pull/3691

Bench: 5812158
2021-09-06 14:08:22 +02:00
Michael Chaly e404a7d97c Extend captures and promotions
This patch introduces an extension for captures and promotions. Every capture or
promotion that is not the first move in the list gets extended at PvNodes and
cutNodes. Special thanks to @locutus2 - all my previous attempts that failed
on this idea were done only for PvNodes - the idea to also include cutNodes was
based on his latest passed patch.
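
A minimal sketch of this condition (illustrative, not the literal diff):

```
// At PV and cut nodes, extend captures and promotions that are not the
// first move searched.
int capture_extension(bool pvNode, bool cutNode, bool captureOrPromotion, int moveCount) {
    return (pvNode || cutNode) && captureOrPromotion && moveCount > 1 ? 1 : 0;
}
```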

STC
https://tests.stockfishchess.org/tests/view/6134abf325b9b35584838574
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 188920 W: 47754 L: 47304 D: 93862
Ptnml(0-2): 595, 21754, 49344, 22140, 627

LTC
https://tests.stockfishchess.org/tests/view/613521de25b9b355848385d7
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 8768 W: 2283 L: 2098 D: 4387
Ptnml(0-2): 7, 866, 2452, 1053, 6

closes https://github.com/official-stockfish/Stockfish/pull/3692

bench: 5564555
2021-09-06 13:59:17 +02:00
SFisGOD 2807dcfab6 Update default net to nn-735bba95dec0.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/61286d8b62d20cf82b5ad1bd
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-33495fe25081.nnue
New net: nn-83e3cf2af92b.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/6129cf2162d20cf82b5ad25f
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-83e3cf2af92b.nnue
New net: nn-69a528eaef35.nnue

SPSA 3: https://tests.stockfishchess.org/tests/view/612a0dcb62d20cf82b5ad2a0
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-69a528eaef35.nnue
New net: nn-735bba95dec0.nnue

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 95144 W: 24310 L: 23999 D: 46835
Ptnml(0-2): 232, 11059, 24748, 11232, 301
https://tests.stockfishchess.org/tests/view/612bb3be0fdf40644b4b9996

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 33632 W: 8522 L: 8271 D: 16839
Ptnml(0-2): 18, 3511, 9516, 3744, 27
https://tests.stockfishchess.org/tests/view/612ce5b9bb4956d8b78eb5b3

Closes https://github.com/official-stockfish/Stockfish/pull/3685

Bench: 5600615
2021-08-31 12:56:19 +02:00
VoyagerOne ad357e147a CMH Pruning Tweak
Tweak pruning formula by adding up CMH values.

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 14608 W: 3837 L: 3641 D: 7130
Ptnml(0-2): 27, 1681, 3723, 1815, 58
https://tests.stockfishchess.org/tests/view/612792f362d20cf82b5ad156

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53520 W: 13580 L: 13276 D: 26664
Ptnml(0-2): 28, 5610, 15183, 5908, 31
https://tests.stockfishchess.org/tests/view/6127d27062d20cf82b5ad191

closes https://github.com/official-stockfish/Stockfish/pull/3682

Bench: 5186641
2021-08-27 21:41:32 +02:00
SFisGOD 69eede7d08 Update default net to nn-33495fe25081.nnue
STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 37368 W: 9621 L: 9391 D: 18356
Ptnml(0-2): 117, 4287, 9664, 4481, 135
https://tests.stockfishchess.org/tests/view/612768165318138ee1204977

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 13328 W: 3446 L: 3246 D: 6636
Ptnml(0-2): 11, 1383, 3682, 1571, 17
https://tests.stockfishchess.org/tests/view/6127dc8d62d20cf82b5ad196

Closes https://github.com/official-stockfish/Stockfish/pull/3679

Bench: 5179347
2021-08-27 07:51:26 +02:00
ppigazzini f30f231cbf Use "pedantic" flag also for mingw
This avoids running a test on fishtest where the Linux machines exit from
the build process and only the Windows machines run the test.

See:
https://tests.stockfishchess.org/tests/view/61122d732a8a49ac5be79996
4e422577d6 (comments)

closes https://github.com/official-stockfish/Stockfish/pull/3671

No functional change.
2021-08-27 07:49:26 +02:00
Joost VandeVondele af0d82792e Fix empty EvalFile option
Some GUIs send an empty string for EvalFile; in that case, explicitly try the default name.

fixes https://github.com/official-stockfish/Stockfish/issues/3675

closes https://github.com/official-stockfish/Stockfish/pull/3678

No functional change.
2021-08-27 07:48:18 +02:00
bmc4 d754ea50a8 Simplify Declaration on Pawn Move Generation
Removes possible micro-optimization in favor of readability.

STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 75432 W: 5824 L: 5777 D: 63831
Ptnml(0-2): 178, 4648, 28036, 4657, 197
https://tests.stockfishchess.org/tests/view/611fa7f84977aa1525c9cb75

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 41200 W: 1156 L: 1106 D: 38938
Ptnml(0-2): 13, 981, 18562, 1031, 13
https://tests.stockfishchess.org/tests/view/611fcc694977aa1525c9cb9b

Closes https://github.com/official-stockfish/Stockfish/pull/3669

No functional change
2021-08-22 09:15:19 +02:00
SFisGOD 590447d7a1 Update default net to nn-517c4f68b5df.nnue
SPSA: https://tests.stockfishchess.org/tests/view/611cf0da4977aa1525c9ca03
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-ac5605a608d6.nnue
New net: nn-517c4f68b5df.nnue

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11600 W: 998 L: 851 D: 9751
Ptnml(0-2): 30, 705, 4186, 846, 33
https://tests.stockfishchess.org/tests/view/611f84524977aa1525c9cb5b

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 9360 W: 338 L: 243 D: 8779
Ptnml(0-2): 0, 220, 4151, 303, 6
https://tests.stockfishchess.org/tests/view/611f8c5b4977aa1525c9cb64

closes https://github.com/official-stockfish/Stockfish/pull/3667

Bench: 4844618
2021-08-22 09:09:58 +02:00
candirufish 939ffe454d do more LMR extensions for PV nodes
LMR Pv and depth 6 Extension tweak:

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 52488 W: 1542 L: 1394 D: 49552
Ptnml(0-2): 18, 1253, 23552, 1405, 16
https://tests.stockfishchess.org/tests/view/611e49c34977aa1525c9caa7

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 76216 W: 6000 L: 5784 D: 64432
Ptnml(0-2): 204, 4745, 28006, 4937, 216
https://tests.stockfishchess.org/tests/view/611e0e254977aa1525c9ca89

closes https://github.com/official-stockfish/Stockfish/pull/3666

Bench: 5046381
2021-08-22 09:05:53 +02:00
bmc4 e57d2d9d47 Simplify Null Move Search Reduction
slightly simpler formula for reduction computation.

first round of tests:
STC:
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 15632 W: 1319 L: 1204 D: 13109
Ptnml(0-2): 33, 956, 5733, 1051, 43
https://tests.stockfishchess.org/tests/view/60bd03c7457376eb8bcaa600

LTC:
LLR: 3.37 (-2.94,2.94) <-2.50,0.50>
Total: 86296 W: 2814 L: 2779 D: 80703
Ptnml(0-2): 33, 2500, 38039, 2551, 25
https://tests.stockfishchess.org/tests/view/60bd1ff0457376eb8bcaa653

recent tests:
STC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 23936 W: 1895 L: 1793 D: 20248
Ptnml(0-2): 40, 1470, 8869, 1526, 63
https://tests.stockfishchess.org/tests/view/611f9b7d4977aa1525c9cb6b

LTC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 62568 W: 1750 L: 1713 D: 59105
Ptnml(0-2): 19, 1560, 28085, 1605, 15
https://tests.stockfishchess.org/tests/view/611fa4814977aa1525c9cb71

functional on high depth

closes https://github.com/official-stockfish/Stockfish/pull/3535

Bench: 5375286
2021-08-22 09:00:15 +02:00
Tomasz Sobczyk 18dcf1f097 Optimize and tidy up affine transform code.
The new network caused some issues initially due to the very narrow neuron set between the first two FC layers. Necessary changes were hacked together to make it work. This patch is a mature approach to make the affine transform code faster, more readable, and easier to maintain should the layer sizes change again.

The following changes were made:

* ClippedReLU always produces a multiple of 32 outputs. This is about as good a solution for AffineTransform's SIMD requirements as it can get without a bigger rewrite.

* All self-contained simd helpers are moved to a separate file (simd.h). Inline asm is utilized to work around GCC's issues with code generation and register assignment. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101693, https://godbolt.org/z/da76fY1n7

* AffineTransform has 2 specializations. While it's more lines of code due to the boilerplate, the logic in both is significantly reduced, as these two are impossible to nicely combine into one.
 1) The first specialization is for cases when there are >=128 inputs. It uses a different approach to perform the affine transform and can make full use of AVX512 without any edge cases. Furthermore, it has higher theoretical throughput because fewer loads are needed in the hot path, requiring only a fixed number of instructions for horizontal additions at the end, which are amortized by the large number of inputs.
 2) The second specialization is made to handle smaller layers where performance is still necessary but edge cases need to be handled. The AVX512 implementation for this was omitted by mistake, a remnant from the temporary implementation for the new... This could be easily reintroduced if needed. A slightly more detailed description of both implementations is in the code.

Overall it should be a minor speedup, as shown on fishtest:

passed STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 51520 W: 4074 L: 3888 D: 43558
Ptnml(0-2): 111, 3136, 19097, 3288, 128

and various tests shown in the pull request

closes https://github.com/official-stockfish/Stockfish/pull/3663

No functional change
2021-08-20 08:50:25 +02:00
Tomasz Sobczyk ccf0239bc4 Improve handling of the debug log file.
Fix handling of empty strings in uci options and reassigning of the log file

Fixes https://github.com/official-stockfish/Stockfish/issues/3650

Closes https://github.com/official-stockfish/Stockfish/pull/3655

No functional change
2021-08-20 07:57:09 +02:00
Torsten Hellwig 1946a67567 Update default net to nn-ac5605a608d6.nnue
This net was created with the nnue-pytorch trainer, it used the previous master net as a starting point.

The training data includes all T60 data (https://drive.google.com/drive/folders/1rzZkgIgw7G5vQMLr2hZNiUXOp7z80613), all T74 data (https://drive.google.com/drive/folders/1aFUv3Ih3-A8Vxw9064Kw_FU4sNhMHZU-) and the wrongNNUE_02_d9.binpack (https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq). The Leela data were randomly named and then concatenated. All data was merged into one binpack using interleave_binpacks.py.

python3 train.py \
    ../data/t60_t74_wrong.binpack \
    ../data/t60_t74_wrong.binpack \
    --resume-from-model ../data/nn-e8321e467bf6.pt \
    --gpus 1 \
    --threads 4 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 300 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --seed $RANDOM \
    --default_root_dir ../output/exp_24

STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 15320 W: 1415 L: 1257 D: 12648
Ptnml(0-2): 50, 1002, 5402, 1152, 54
https://tests.stockfishchess.org/tests/view/611c404a4977aa1525c9c97f

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 9440 W: 345 L: 248 D: 8847
Ptnml(0-2): 3, 222, 4175, 315, 5
https://tests.stockfishchess.org/tests/view/611c6c7d4977aa1525c9c996

LTC with UHO_XXL_+0.90_+1.19.epd:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 6232 W: 1638 L: 1459 D: 3135
Ptnml(0-2): 5, 592, 1744, 769, 6
https://tests.stockfishchess.org/tests/view/611c9b214977aa1525c9c9cb

closes https://github.com/official-stockfish/Stockfish/pull/3664

Bench: 5375286
2021-08-18 09:17:22 +02:00
Joost VandeVondele f10ebc2bdf Regenerate dependencies on code change
fixes https://github.com/official-stockfish/Stockfish/issues/3658

dependencies are now regenerated for each code change, this adds some 1s overhead in compile time, but avoids potential miscompilations or build problems.

closes https://github.com/official-stockfish/Stockfish/pull/3659

No functional change
2021-08-17 21:08:34 +02:00
Tomasz Sobczyk d61d38586e New NNUE architecture and net
Introduces a new NNUE network architecture and associated network parameters

The summary of the changes:

* Position for each perspective mirrored such that the king is on e..h files. Cuts the feature transformer size in half, while preserving enough knowledge to be good. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer increased two-fold, to 1024x2. This is possibly mostly due to the now very optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8, to reduce the speed impact. This, perhaps surprisingly, doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq

The AffineTransform code did not work out of the box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Also additional 0 padding is added to the output for some archs that cannot process inputs by <=8 (SSE2, NEON). VNNI uses an implementation that can keep all outputs in the registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (and perhaps why the current way the affine transform is done even passed sprt) (see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details) and more work will be done on this in the following days. I expect the current VNNI implementation to be improved and extended to other architectures.

The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143

The training utilized 2 datasets.

    dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
    dataset B - as described in ba01f4b954

The training process was as follows:

    train on dataset A for 350 epochs, take the best net in terms of elo at 20k nodes per move (it's fine to take anything from later stages of training).
    convert the .ckpt to .pt
    --resume-from-model from the .pt file, train on dataset B for <600 epochs, take the best net. Lambda=0.8, applied before the loss function.

The first training command:

python3 train.py \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

The second training command:

python3 serialize.py \
    --features=HalfKAv2_hm^ \
    ../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
    ../nnue-pytorch-training/experiment_$1/base/base.pt

python3 train.py \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=0.8 \
    --max_epochs=600 \
    --resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4

LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128

LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea

LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6

closes https://github.com/official-stockfish/Stockfish/pull/3646

bench: 5189338
2021-08-15 12:05:43 +02:00
Joost VandeVondele dabaf2220f Revert futility pruning patches
reverts 09b6d28391 and
dbd7f602d3 that significantly impact mate
finding capabilities. For example on ChestUCI_23102018.epd, at 1M nodes,
the number of mates found is nearly reduced 2x without these depth conditions:

       sf6  2091
       sf7  2093
       sf8  2107
       sf9  2062
      sf10  2208
      sf11  2552
      sf12  2563
      sf13  2509
      sf14  2427
    master  1246
   patched  2467

(script for testing at https://github.com/official-stockfish/Stockfish/files/6936412/matecheck.zip)

closes https://github.com/official-stockfish/Stockfish/pull/3641

fixes https://github.com/official-stockfish/Stockfish/issues/3627

Bench: 5467570
2021-08-05 16:41:07 +02:00
VoyagerOne a1a83f3869 SEE simplification
Simplified SEE formula by removing std::min. Should also be easier to tune.

STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 22656 W: 1836 L: 1729 D: 19091
Ptnml(0-2): 54, 1426, 8267, 1521, 60
https://tests.stockfishchess.org/tests/view/610ae62f2a8a49ac5be79449

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 26248 W: 806 L: 744 D: 24698
Ptnml(0-2): 6, 668, 11715, 728, 7
https://tests.stockfishchess.org/tests/view/610b17ad2a8a49ac5be79466

closes https://github.com/official-stockfish/Stockfish/pull/3643

bench:  4915145
2021-08-05 16:32:07 +02:00
SFisGOD 73ef5b8c4a Update default net to nn-46832cfbead3.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6100e7f096b86d98abf6a832
Parameters: A total of 256 net weights and 8 net biases were tuned (output layer)
Base net: nn-56a5f1c4173a.nnue
New net: nn-ec3c8e029926.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/610733caafad2da4f4ae3da7
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-ec3c8e029926.nnue
New net: nn-46832cfbead3.nnue

STC:
LLR: 2.98 (-2.94,2.94) <-0.50,2.50>
Total: 50520 W: 3953 L: 3765 D: 42802
Ptnml(0-2): 138, 3063, 18678, 3235, 146
https://tests.stockfishchess.org/tests/view/610a79692a8a49ac5be793f4

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 57256 W: 1723 L: 1566 D: 53967
Ptnml(0-2): 12, 1442, 25568, 1589, 17
https://tests.stockfishchess.org/tests/view/610ac5bb2a8a49ac5be79434

Closes https://github.com/official-stockfish/Stockfish/pull/3642

Bench: 5359314
2021-08-05 08:52:07 +02:00
Stefan Geschwentner 5cd42f6b0b Simplify new cmh pruning thresholds by using directly a quadratic formula.
This also decouples the stat bonus updates from the threshold, which creates fewer dependencies for tuning the stat bonus parameters.
Perhaps a further fine tuning of the now separated coefficients for constHist[0] and constHist[1] could give further gains.

STC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 78384 W: 6134 L: 6090 D: 66160
Ptnml(0-2): 207, 5013, 28705, 5063, 204
https://tests.stockfishchess.org/tests/view/6106d235afad2da4f4ae3d4b

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 38176 W: 1149 L: 1095 D: 35932
Ptnml(0-2): 6, 1000, 17030, 1038, 14
https://tests.stockfishchess.org/tests/view/6107a080afad2da4f4ae3def

closes https://github.com/official-stockfish/Stockfish/pull/3639

Bench: 5098146
2021-08-05 08:47:33 +02:00
VoyagerOne 31ebd918ea Futile pruning simplification
Remove CMH conditions in futile pruning.

STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 93520 W: 7165 L: 7138 D: 79217
Ptnml(0-2): 222, 5923, 34427, 5982, 206
https://tests.stockfishchess.org/tests/view/61083104e50a153c346ef8df

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 59072 W: 1746 L: 1706 D: 55620
Ptnml(0-2): 13, 1562, 26353, 1588, 20
https://tests.stockfishchess.org/tests/view/610894f2e50a153c346ef913

closes https://github.com/official-stockfish/Stockfish/pull/3638

Bench: 5229673
2021-08-05 08:44:38 +02:00
VoyagerOne a0fca67da4 CMH Pruning Tweak
replace CounterMovePruneThreshold by a depth dependent threshold

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 35512 W: 2718 L: 2552 D: 30242
Ptnml(0-2): 66, 2138, 13194, 2280, 78
https://tests.stockfishchess.org/tests/view/6104442fafad2da4f4ae3b94

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36536 W: 1150 L: 1019 D: 34367
Ptnml(0-2): 10, 920, 16278, 1049, 11
https://tests.stockfishchess.org/tests/view/6104b033afad2da4f4ae3bbc

closes https://github.com/official-stockfish/Stockfish/pull/3636

Bench: 5848718
2021-07-31 15:29:19 +02:00
Tomasz Sobczyk 26edf9534a Avoid unnecessary stores in the affine transform
This patch improves the codegen in the AffineTransform::forward function for architectures >=SSSE3. Current code works directly on memory and the compiler cannot see that the stores through outptr do not alias the loads through weights and input32. The solution implemented is to perform the affine transform with local variables as accumulators and only store the result to memory at the end. The number of accumulators required is OutputDimensions / OutputSimdWidth, which means that for the 1024->16 affine transform it requires 4 registers with SSSE3, 2 with AVX2, 1 with AVX512. It also cuts the number of stores required by NumRegs * 256 for each node evaluated. The local accumulators are expected to be assigned to registers, but even if this cannot be done in some case due to register pressure it will help the compiler to see that there is no aliasing between the loads and stores and may still result in better codegen.
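
A scalar illustration of the pattern (the real code uses SIMD accumulators; names and dimensions here are placeholders):

```
#include <cstdint>

// Accumulate into local variables (which the compiler can keep in registers)
// and store each output once at the end, instead of repeatedly reading and
// writing the output buffer.
void affine_forward_sketch(const std::int8_t* input, const std::int8_t* weights,
                           const std::int32_t* biases, std::int32_t* output,
                           int inputDims, int outputDims) {
    for (int o = 0; o < outputDims; ++o) {
        std::int32_t acc = biases[o];                    // local accumulator
        for (int i = 0; i < inputDims; ++i)
            acc += std::int32_t(weights[o * inputDims + i]) * input[i];
        output[o] = acc;                                 // single store per output
    }
}
```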

See https://godbolt.org/z/59aTKbbYc for codegen comparison.

passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140328 W: 10635 L: 10358 D: 119335
Ptnml(0-2): 302, 8339, 52636, 8554, 333

closes https://github.com/official-stockfish/Stockfish/pull/3634

No functional change
2021-07-30 17:15:52 +02:00
SFisGOD e973eee919 Update default net to nn-56a5f1c4173a.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/60fd24efd8a6b65b2f3a796e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
New best values: Half of the changes from the tuning run
New net: nn-5992d3ba79f3.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/60fec7d6d8a6b65b2f3a7aa2
Parameters: A total of 128 net biases were tuned (hidden layer 1)
New best values: Half of the changes from the tuning run
New net: nn-56a5f1c4173a.nnue

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140392 W: 10863 L: 10578 D: 118951
Ptnml(0-2): 347, 8754, 51718, 9021, 356
https://tests.stockfishchess.org/tests/view/610037e396b86d98abf6a79e

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 14216 W: 454 L: 355 D: 13407
Ptnml(0-2): 4, 323, 6356, 420, 5
https://tests.stockfishchess.org/tests/view/61019995afad2da4f4ae3a3c

Closes #3633

Bench: 4801359
2021-07-29 07:35:13 +02:00
SFisGOD 237ed1ef8f Update default net to nn-26abeed38351.nnue
SPSA: https://tests.stockfishchess.org/tests/view/60fba335d8a6b65b2f3a7891

New best values: Half of the changes from the tuning run.
Setting: nodestime=300 with 10+0.1 (approximate real TC is 2.5 seconds)
The rest is the same as described in #3593

The change from nodestime=600 to 300 was suggested by gekkehenker to prevent time losses for some slow workers
SFisGOD@94cd757#commitcomment-53324840

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 67448 W: 5241 L: 5036 D: 57171
Ptnml(0-2): 151, 4198, 24827, 4391, 157
https://tests.stockfishchess.org/tests/view/60fd50f2d8a6b65b2f3a798e

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 48752 W: 1504 L: 1358 D: 45890
Ptnml(0-2): 13, 1226, 21754, 1368, 15
https://tests.stockfishchess.org/tests/view/60fd7bb2d8a6b65b2f3a79a9

Closes https://github.com/official-stockfish/Stockfish/pull/3630

Bench:  5124774
2021-07-26 07:52:59 +02:00
Giacomo Lorenzetti 910d26b5c3 Simplification in LMR
This commit removes the `!captureOrPromotion` condition from ttCapture reduction and from good/bad history reduction (similar to #3619).

passed STC:
https://tests.stockfishchess.org/tests/view/60fc734ad8a6b65b2f3a7922
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 48680 W: 3855 L: 3776 D: 41049
Ptnml(0-2): 118, 3145, 17744, 3206, 127

passed LTC:
https://tests.stockfishchess.org/tests/view/60fce7d5d8a6b65b2f3a794c
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 86528 W: 2471 L: 2450 D: 81607
Ptnml(0-2): 28, 2203, 38777, 2232, 24

closes https://github.com/official-stockfish/Stockfish/pull/3629

Bench: 4951406
2021-07-26 07:48:58 +02:00
MichaelB7 b939c80513 Update the default net to nn-76a8a7ffb820.nnue.
Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGOD), based on top of previous developments, restarting from good nets.

Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:

The initial net nn-d8609abe8caf.nnue is trained by generating around 16B of training data from the last master net nn-9e3c6298299a.nnue, then trained, continuing from the master net, with lambda=0.2 and sampling ratio of 1. Starting with LR=2e-3, dropping LR with a factor of 0.5 until it reaches LR=5e-4. in_scaling is set to 361. No other significant changes made to the pytorch trainer.

Training data gen command (generates in chunks of 200k positions):

generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack

PyTorch trainer command (Note that this only trains for 20 epochs, repeatedly train until convergence):

python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val

See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.

Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:

The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt

This run is thus started from Sergio Vieri's net nn-d8609abe8caf.nnue

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

model.py modifications:
loss = torch.pow(torch.abs(p - q), 2.6).mean()
LR = 8.0e-5 calculated as follows: 1.5e-3*(.992^360) - the idea here was to take a highly trained net and just use all.binpack as a finishing micro refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)
For this micro optimization, I had set the period to "5" in train.py. This changes the checkpoint output so that every 5th checkpoint file is created

The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.

passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17

passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100.

closes https://github.com/official-stockfish/Stockfish/pull/3626

Bench: 5169957
2021-07-24 18:04:59 +02:00
Giacomo Lorenzetti a85928e7ec Apply good/bad history reduction also when inCheck
Main idea is that, in some cases, 'in check' situations are not so different from 'not in check' ones.
Trying to use piece count in order to select only a few 'in check' situations has failed LTC testing.
It could be interesting to apply one of those ideas in other parts of the search function.

passed STC:
https://tests.stockfishchess.org/tests/view/60f1b68dd1189bed71812d40
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 53472 W: 4078 L: 4008 D: 45386
Ptnml(0-2): 127, 3297, 19795, 3413, 104

passed LTC:
https://tests.stockfishchess.org/tests/view/60f291e6d1189bed71812de3
LLR: 2.92 (-2.94,2.94) <-2.50,0.50>
Total: 89712 W: 2651 L: 2632 D: 84429
Ptnml(0-2): 60, 2261, 40188, 2294, 53

closes https://github.com/official-stockfish/Stockfish/pull/3619

Bench: 5185789
2021-07-23 19:02:58 +02:00
pb00067 760b7462bc Simplify lowply-history scoring logic
STC:
https://tests.stockfishchess.org/tests/view/60eee559d1189bed71812b16
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 33976 W: 2523 L: 2431 D: 29022
Ptnml(0-2): 66, 2030, 12730, 2070, 92

LTC:
https://tests.stockfishchess.org/tests/view/60eefa12d1189bed71812b24
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 107240 W: 3053 L: 3046 D: 101141
Ptnml(0-2): 56, 2668, 48154, 2697, 45

closes https://github.com/official-stockfish/Stockfish/pull/3616

bench: 5199177
2021-07-23 18:53:03 +02:00
Vizvezdenec d957179df7 Prune illegal moves in qsearch earlier
The main idea is that illegal moves influencing search or
qsearch obviously can't be any sort of good. The only reason
the legality checks for search and qsearch were initially done
after they could already influence some heuristics is that the
legality check is computationally expensive. In search, the check
was eventually moved to a place that ensures illegal moves can't
influence the search.

This patch shows that the same can be done for qsearch + it
passed STC with elo-gaining bounds + it removes 3 lines of code
because one no longer needs to increment/decrement movecount
on illegal moves.
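
A self-contained sketch of the reordering (types and helper names are placeholders, not Stockfish's actual move loop):

```
#include <vector>

struct Move { int from, to; };

// In the qsearch move loop, check legality before any bookkeeping, so illegal
// moves never influence the move count or other heuristics.  is_legal stands
// in for Position::legal().
template <typename IsLegal>
int searched_move_count(const std::vector<Move>& moves, IsLegal is_legal) {
    int moveCount = 0;
    for (const Move& m : moves) {
        if (!is_legal(m))     // moved up: skip illegal moves immediately
            continue;
        ++moveCount;          // only legal moves are counted from here on
        // ... pruning heuristics, make/unmake and recursive qsearch go here
    }
    return moveCount;
}
```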

passed STC with elo-gaining bounds
https://tests.stockfishchess.org/tests/view/60f20aefd1189bed71812da0
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 61512 W: 4688 L: 4492 D: 52332
Ptnml(0-2): 139, 3730, 22848, 3874, 165

A functionally identical version, but with the condition moved even earlier,
passed LTC with simplification bounds.
https://tests.stockfishchess.org/tests/view/60f292cad1189bed71812de9
LLR: 2.98 (-2.94,2.94) <-2.50,0.50>
Total: 60944 W: 1724 L: 1685 D: 57535
Ptnml(0-2): 11, 1556, 27298, 1597, 10

closes https://github.com/official-stockfish/Stockfish/pull/3618

bench 4709569
2021-07-23 18:47:30 +02:00
Liam Keegan bc654257e7 Add macOS and windows to CI
- macOS
  - system clang
  - gcc
- windows / msys2
  - mingw 64-bit gcc
  - mingw 32-bit gcc
- minor code fixes to get new CI jobs to pass
  - code: suppress unused-parameter warning on 32-bit windows
  - Makefile: if arch=any on macos, don't specify arch at all

fixes https://github.com/official-stockfish/Stockfish/issues/2958

closes https://github.com/official-stockfish/Stockfish/pull/3623

No functional change
2021-07-23 18:16:05 +02:00
VoyagerOne 36f8d3806b Don't save excluded move eval in TT
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17544 W: 1384 L: 1236 D: 14924
Ptnml(0-2): 37, 1031, 6499, 1157, 48
https://tests.stockfishchess.org/tests/view/60ec8d9bd1189bed71812999

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 26136 W: 823 L: 707 D: 24606
Ptnml(0-2): 6, 643, 11656, 755, 8
https://tests.stockfishchess.org/tests/view/60ecb11ed1189bed718129ba

closes https://github.com/official-stockfish/Stockfish/pull/3614

Bench: 5505251
2021-07-13 17:35:20 +02:00
Vizvezdenec dbd7f602d3 Remove second futility pruning depth limit
This patch removes the lmrDepth limit for futility pruning at parent nodes.
Since it's already capped by a margin that is a function of lmrDepth, there is no need to additionally cap it with lmrDepth.

passed STC
https://tests.stockfishchess.org/tests/view/60e9b5dfd1189bed71812777
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 14872 W: 1264 L: 1145 D: 12463
Ptnml(0-2): 37, 942, 5369, 1041, 47

passed LTC
https://tests.stockfishchess.org/tests/view/60e9c635d1189bed71812790
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 40336 W: 1280 L: 1225 D: 37831
Ptnml(0-2): 24, 1057, 17960, 1094, 33

closes https://github.com/official-stockfish/Stockfish/pull/3612

bench: 5064969
2021-07-13 17:33:20 +02:00