/*
  Stockfish, a UCI chess playing engine derived from Glaurung 2.1
  Copyright (C) 2004-2021 The Stockfish developers (see AUTHORS file)

  Stockfish is free software: you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation, either version 3 of the License, or
  (at your option) any later version.

  Stockfish is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef TT_H_INCLUDED
#define TT_H_INCLUDED

#include "misc.h"
#include "types.h"

namespace Stockfish {

/// TTEntry struct is the 10 bytes transposition table entry, defined as below:
///
/// key        16 bit
/// depth       8 bit
/// generation  5 bit
/// pv node     1 bit
/// bound type  2 bit
/// move       16 bit
/// value      16 bit
/// eval value 16 bit

struct TTEntry {

  Move  move()  const { return (Move )move16; }
  Value value() const { return (Value)value16; }
  Value eval()  const { return (Value)eval16; }
  Depth depth() const { return (Depth)depth8 + DEPTH_OFFSET; }
  bool is_pv()  const { return (bool)(genBound8 & 0x4); }
  Bound bound() const { return (Bound)(genBound8 & 0x3); }
  void save(Key k, Value v, bool pv, Bound b, Depth d, Move m, Value ev);

private:
  friend class TranspositionTable;

  uint16_t key16;
  uint8_t  depth8;
  uint8_t  genBound8;
  uint16_t move16;
  int16_t  value16;
  int16_t  eval16;
};

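// A minimal sketch (an assumption; the actual implementation is TTEntry::save()
// in tt.cpp) of how the fields documented above can be packed, consistent with
// the is_pv()/bound()/depth() accessors: the generation occupies the upper
// 5 bits of genBound8, the PV flag bit 2 and the bound type bits 0-1, while
// the depth is stored shifted by DEPTH_OFFSET so that depth8 == 0 can mark an
// unoccupied entry.
//
//   genBound8 = uint8_t(generation | (uint8_t(pv) << 2) | b);
//   depth8    = uint8_t(d - DEPTH_OFFSET);
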
/// A TranspositionTable is an array of Cluster, of size clusterCount. Each
/// cluster consists of ClusterSize number of TTEntry. Each non-empty TTEntry
/// contains information on exactly one position. The size of a Cluster should
/// divide the size of a cache line for best performance, as the cacheline is
/// prefetched when possible.
class TranspositionTable {

  static constexpr int ClusterSize = 3;

  struct Cluster {
    TTEntry entry[ClusterSize];
    char padding[2]; // Pad to 32 bytes
  };

  static_assert(sizeof(Cluster) == 32, "Unexpected Cluster size");
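
  // Sanity arithmetic for the assert above: ClusterSize (3) entries of the
  // 10-byte TTEntry plus 2 bytes of padding give 3 * 10 + 2 = 32 bytes, so a
  // Cluster evenly divides a typical 64-byte cache line, as the class comment
  // above requires.
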
  // Constants used to refresh the hash table periodically
  static constexpr unsigned GENERATION_BITS  = 3;                                // nb of bits reserved for other things
  static constexpr int      GENERATION_DELTA = (1 << GENERATION_BITS);           // increment for generation field
  static constexpr int      GENERATION_CYCLE = 255 + (1 << GENERATION_BITS);     // cycle length
  static constexpr int      GENERATION_MASK  = (0xFF << GENERATION_BITS) & 0xFF; // mask to pull out generation number
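
  // A sketch (an assumption; the actual replacement logic lives in tt.cpp) of
  // how these constants can be combined: adding GENERATION_CYCLE before the
  // subtraction keeps the result non-negative across generation wrap-around,
  // and GENERATION_MASK discards the low pv/bound bits of genBound8.
  //
  //   relative_age = (GENERATION_CYCLE + generation8 - genBound8) & GENERATION_MASK;
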
public:
  ~TranspositionTable() { aligned_large_pages_free(table); }
  void new_search() { generation8 += GENERATION_DELTA; } // Lower bits are used for other things
  TTEntry* probe(const Key key, bool& found) const;
  int hashfull() const;
  void resize(size_t mbSize);
  void clear();
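
  // first_entry() below maps a position key to its cluster without a modulo:
  // mul_hi64(key, clusterCount) returns the high 64 bits of the 128-bit
  // product key * clusterCount, i.e. roughly (key / 2^64) * clusterCount,
  // which falls uniformly in [0, clusterCount) for a well-distributed key.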
  TTEntry* first_entry(const Key key) const {
    return &table[mul_hi64(key, clusterCount)].entry[0];
  }

  static bool enable_transposition_table;

private:
  friend struct TTEntry;

  size_t clusterCount;
  Cluster* table;
  uint8_t generation8; // Size must not be bigger than TTEntry::genBound8
};
extern TranspositionTable TT;
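
// A minimal usage sketch (an assumption; the real call sites are in search.cpp
// and the variable names here are illustrative only) of the typical
// probe-then-save pattern around the global table TT:
//
//   bool found;
//   TTEntry* tte = TT.probe(pos.key(), found);
//   Value ttValue = found ? tte->value() : VALUE_NONE;
//   ...
//   tte->save(pos.key(), bestValue, isPvNode, BOUND_EXACT, depth, bestMove, staticEval);
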
} // namespace Stockfish
#endif // #ifndef TT_H_INCLUDED