After four years of development, Gillespie is a next-generation chess engine that bridges the gap
between classical search algorithms and modern neural network evaluation.
HYBRID APPROACH: Unlike traditional engines, which rely
purely on brute-force computation, or pure neural-network engines, which depend entirely on pattern
recognition, Gillespie combines both approaches in a unified architecture.
At its core, Gillespie employs a Mixture of Experts (MoE) NNUE architecture with five
specialized neural networks, each trained on billions of positions from specific game phases.
EXPERT SYSTEM: A lightweight gating network dynamically
routes positions to the most appropriate expert, feeding contextual evaluations into an alpha-beta search with advanced pruning.
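The routing step above can be sketched as a tiny softmax gating layer over cheap phase features. Everything here is illustrative, not Gillespie's actual implementation: the feature choice, the gate matrix shape, and the toy expert callables are all assumptions (a real NNUE expert would use incremental accumulators rather than a plain function).

```python
import math

def gate_weights(phase_features, gate_matrix):
    """Score each expert from cheap phase features (e.g. material count,
    move number) with a linear layer, then softmax into routing weights."""
    scores = [sum(w * f for w, f in zip(row, phase_features))
              for row in gate_matrix]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def evaluate(position, phase_features, experts, gate_matrix):
    """Blend the experts' evaluations by gate weight; experts here are
    stand-in callables, one per game phase."""
    weights = gate_weights(phase_features, gate_matrix)
    return sum(w * expert(position) for w, expert in zip(weights, experts))
```

In practice a gate like this is nearly free compared to the expert networks, which is why routing can run on every evaluated node.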
4 years · 847K games · 2,451 epochs · 247M parameters
Development Timeline
All metrics captured on standardized TCEC hardware: 64-core AMD EPYC 7742 @ 2.25GHz, 128GB DDR4-3200
RAM, and NVMe Gen4 storage. The training corpus consists of 847,293 games across 2,451 distinct training epochs,
each representing a complete cycle of self-play generation, neural network training, and validation testing.
ELO PROGRESSION: TRAINING TIMELINE
Self-play reinforcement learning epochs (n=2,451) with rolling 400-game
validation windows. Each epoch represents approximately 350,000 training positions generated through self-play
at varying skill levels. An exponential moving average (α=0.12) is applied to smooth variance. Statistical
significance verified via a Sequential Probability Ratio Test (SPRT) with error bounds α=0.05, β=0.05.
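The α=0.12 smoothing referred to above is a plain exponential moving average. A minimal sketch (the seeding choice is an assumption, since the page does not specify one):

```python
def ema(series, alpha=0.12):
    """Exponential moving average: s_t = alpha * x_t + (1 - alpha) * s_{t-1},
    seeded with the first observation."""
    smoothed = []
    s = series[0]
    for x in series:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed
```

Low α means heavy smoothing: at 0.12, each new Elo measurement shifts the curve by only 12% of its deviation from the running average.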
INITIAL ELO: 1847 ± 52
CURRENT ELO: 3520 ± 18
GROWTH RATE: +0.68 Elo/epoch
TOTAL GAIN: +1673 Elo
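The headline numbers are internally consistent, as a quick check shows:

```python
initial_elo, current_elo, epochs = 1847, 3520, 2451

total_gain = current_elo - initial_elo  # matches the stated +1673 Elo
per_epoch = total_gain / epochs         # matches the stated +0.68 Elo/epoch

assert total_gain == 1673
assert round(per_epoch, 2) == 0.68
```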
The progression curve demonstrates characteristic sigmoid growth with multiple
plateau regions corresponding to local minima in the training landscape. Notable acceleration phases occur
around epochs 800-1200 and 1800-2100, correlating with architectural improvements to the neural network
evaluation function and refinements to the tree search algorithm.
TECHNICAL SPECIFICATIONS
Training positions: 3.7 × 10⁹
Network parameters: 247.3M
Hash table size: 16,384 MB
Tablebase size: 7-piece (5.1 TB)
Opening book: 24 plies avg
Contempt factor: -18 cp
Move overhead: 47 ms
Multi-PV lines: 5 variations
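For a sense of scale, the 16,384 MB hash converts to slot counts as follows. The 16-byte entry size is purely a hypothetical assumption for illustration; Gillespie's actual transposition-table entry layout is not stated here.

```python
hash_mb = 16_384
entry_bytes = 16  # assumed entry size, not specified by the engine docs

entries = hash_mb * 1024 * 1024 // entry_bytes
assert entries == 2**30  # ~1.07 billion transposition-table slots
```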
The architecture combines a Mixture of Experts NNUE system for position evaluation with alpha-beta pruning
enhanced by neural network move ordering. Five specialized expert models dynamically adapt to different game phases, achieving the strategic sophistication of pure neural
engines while maintaining the tactical sharpness of classical search algorithms.
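The search side of that combination can be illustrated with a bare-bones negamax alpha-beta in which an `order_score` callable stands in for the neural move-ordering network. The tree representation and all function names are assumptions for the sketch, not Gillespie's internals.

```python
def alpha_beta(node, depth, alpha, beta, evaluate, children, order_score):
    """Negamax alpha-beta. Sorting moves by order_score (the stand-in for
    a neural move-ordering net) produces cutoffs sooner on well-ordered
    trees, which is where the search speedup comes from."""
    moves = children(node)
    if depth == 0 or not moves:
        return evaluate(node)
    for child in sorted(moves, key=order_score, reverse=True):
        score = -alpha_beta(child, depth - 1, -beta, -alpha,
                            evaluate, children, order_score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff: the opponent will avoid this line
    return alpha
```

With perfect ordering the best move is searched first and every sibling is cut immediately; that is the payoff of spending network inference on move ordering.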
GAMES VERSUS OTHERS
Each dataset represents a distinct training era, so comparing clips side by side shows how the
evaluation matured: earlier games lean on brute-force tactics, while recent builds showcase calmer, strategically
balanced plans. Use the selectors to jump between eras and see which styles resonate with your prep.
Interested in running Gillespie locally or contributing training time? The early-access program unlocks
container images, curated tuning tasks, and weekly telemetry summaries.
DIY Trainer Program
Plug into the compute pool, follow the playbook, and stream fresh evals straight into mainline testing.