I'm developing a 9-ball AI assistant to improve strategic pattern play. There are often multiple ways to pocket the next ball—sometimes into more than one visible pocket. The ideal choice depends not only on the likelihood of successfully making the shot, but also on whether it sets up a strong position for the following shot—and ideally, one that continues to enable good positioning beyond that. To tackle this challenge, I've built several key components:
**The simulation backbone: pooltool**

Accurate pool physics is computationally intensive. I rely on pooltool, a robust open-source billiards simulator that precisely models interactions between balls, cushions, pockets, and felt. Simulating a single shot takes about 5–15 milliseconds on one CPU core for typical layouts with 1–3 object balls. For full racks (9 object balls), this rises to 20–50 ms because of the larger number of potential ball-to-ball collisions.

That might sound fast, until you consider the scale of the search space. For each table layout, I evaluate shots into 6 possible pockets, and each pocket involves a 5D parameter space: speed, aim angle, cue stick elevation, side spin, and follow/draw. A basic grid search with just 10 steps per dimension yields 10^5 = 100,000 combinations per pocket. At 10 ms per simulation, that's roughly 17 minutes per pocket, or over an hour per decision. Even advanced optimizers like CMA-ES still need 500–1000 simulations per pocket, totaling 30–60 seconds per layout. Training a value network across millions of such decisions would take months.

**Speeding up candidate evaluation**

Instead of simulating every possible shot, I needed a way to quickly assess whether a shot could succeed without computing the full post-shot table state. My solution breaks each shot into two parts: (1) what the object ball must do to go in, and (2) how to strike the cue ball to achieve that.

First, I precompute an Acceptance Window lookup table offline for every combination of object ball position, target pocket, and shot speed. It defines the range of departure angles at which the object ball will actually drop into the pocket, accounting for pocket geometry, rail effects, and other real-world factors. This captures the “what the ball needs to do” requirement.

Next, I built a Shot-Index lookup table.
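The acceptance-window precomputation above can be sketched as a simple sweep over departure angles. Note the physics call here is a toy stand-in (a fixed angular window), not the pooltool API; in the real table each entry comes from a full simulation:

```python
import numpy as np

def object_ball_drops(departure_angle_deg, pocket_angle_deg, half_window_deg=3.0):
    """Toy stand-in for one pooltool simulation of the object ball.

    Here the ball drops iff its departure angle lies within a fixed window
    around the pocket line; the real lookup table replaces this with
    simulated pocket geometry, rail effects, and speed-dependent behavior.
    """
    return abs(departure_angle_deg - pocket_angle_deg) <= half_window_deg

def acceptance_window(pocket_angle_deg, lo=-90.0, hi=90.0, step=0.1):
    """Sweep departure angles; return the (min, max) angle that still pockets."""
    angles = np.arange(lo, hi, step)
    made = [a for a in angles if object_ball_drops(a, pocket_angle_deg)]
    return (min(made), max(made)) if made else None
```

The offline table stores one such (min, max) pair per (object ball position, pocket, speed) bin, so the online check is a single range comparison.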
Given a desired object ball departure angle (relative to the cue-to-object-ball line) and the distance between cue and object ball, this index retrieves viable shot parameters (speed, aim offset, spin, and draw) from a precomputed database. The entries are generated with pooltool simulations across a discrete grid of (distance, speed, aim-offset, spin, draw), indexed by the resulting object ball angle.

This approach helped, but gaps remained due to discretization. To fill them, I developed a throw model: a small MLP that predicts deviations in the object ball’s departure angle from continuous inputs (cue-to-object distance, speed, aim angle, spin, draw, and elevation). The model has 4 hidden layers of 128 neurons each, ReLU activations, and ~50k parameters. Trained on 5 million simulated shots (generated in ~6 hours), it achieves a mean angle error of ~0.2° on a 1.1M-shot validation set. I also exploit left/right symmetry to double the effective training data, which removes the need for manual mirroring during gameplay.

The payoff is huge: I use the shot index to get solid initial parameter guesses, then apply small perturbations and evaluate hundreds of variants in bulk with the throw model on a GPU. On my setup, this delivers a ~10,000× speedup over full physics simulation: a batch of 1,000 candidate shots evaluates in about 1 ms, versus 10 ms per simulation × 1,000 = 10 seconds with the simulator.

After generating candidates, I cluster those predicted to fall within the pocket’s acceptance window, grouping by speed, spin, and draw. From each cluster, I pick representative shots and run noisy physics simulations (adding realistic execution error) to test reliability; we’re not interested in once-in-a-million shots that can’t be repeated consistently. Finally, I select the shot that maximizes the expected p(win) of the resulting table state. Because shortlisted candidates still go through final physics checks, the overall end-to-end speedup is about 50–100×.
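A NumPy sketch of the throw model's forward pass, matching the stated architecture (6 inputs, 4 hidden layers of 128 ReLU units, 1 output). The random weights are placeholders for the trained ones, and the batched matrix products are what makes bulk GPU evaluation cheap:

```python
import numpy as np

# Architecture from the post: 6 continuous inputs (distance, speed, aim
# angle, spin, draw, elevation) -> 4 hidden layers of 128 ReLU units ->
# 1 output (predicted departure-angle deviation).
LAYER_SIZES = [6, 128, 128, 128, 128, 1]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)   # He init
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def predict(x):
    """Batched forward pass; x has shape (batch, 6)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)       # ReLU hidden layers
    return h @ weights[-1] + biases[-1]      # linear output head

n_params = sum(W.size for W in weights) + sum(b.size for b in biases)
# n_params is ~50k, consistent with the parameter count in the post
```

A batch of 1,000 candidate shots is then a single `predict` call on a (1000, 6) array rather than 1,000 physics simulations.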
**Visualizing shot selection**

To illustrate the system, I set up a scenario with the 8-ball and 9-ball remaining: cue ball near center, 8-ball toward the top-left, 9-ball on the bottom rail. The heatmap shows p(win) as a function of where the cue ball ends up after shooting the 9-ball (assuming the 9-ball stays put). For this demo, I simulated 10 selected shots 20 times each: 6 shots succeeded all 20 times, 3 succeeded 19/20, and 1 succeeded 15/20. Cue ball path colors reflect these make rates. Only one noisy simulation per shot is plotted; the others would land nearby. The black zone around the 9-ball marks positions less than one ball-width away, invalid spots where the cue ball would overlap the 9-ball.

This post focuses only on direct shots, but the full system also includes templated bank, kick, carom, and combination shots, all integrated into the p(win) heatmap. (Caroms and combinations aren’t relevant here, since only the 9-ball remains.)

**Next steps: Curriculum learning**

I’m now implementing curriculum learning. The p(win) model for a single 9-ball is simple: pocket it and win (unless you scratch). Scratching means losing, since a skilled opponent will easily convert ball-in-hand. A missed shot is valued as 1 − p(win) of the resulting state, evaluated from the opponent’s perspective. I’ve simulated ~100,000 full shot-selection scenarios, leveraging 4-way table symmetry to enrich the training data. Any shot that isn’t a guaranteed make gets re-evaluated as the model improves, potentially changing the optimal choice or safety strategy. Once the single-ball case is mastered, I’ll progress to two-ball scenarios, where pocketing the on-ball leads to a known solvable state (valued by the model) while misses are reassessed iteratively. I’ll continue advancing the curriculum until the AI handles all configurations up to 9 balls.

Along the way, I tried many approaches that failed.
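The 4-way symmetry augmentation mentioned above can be sketched as reflections across the table's two axes. The centred coordinate convention is an assumption of this sketch, not taken from the post:

```python
import numpy as np

def four_way_symmetries(ball_xy):
    """Return the 4 equivalent layouts of a table state under reflection.

    ball_xy: (n_balls, 2) array of positions in coordinates centred on the
    table middle (assumed convention). Shot parameters must be mirrored
    consistently with the layout, e.g. side spin flips sign under a
    left/right reflection.
    """
    x, y = ball_xy[:, 0], ball_xy[:, 1]
    return np.stack([
        ball_xy,                        # identity
        np.stack([-x, y], axis=1),      # mirror across the long axis
        np.stack([x, -y], axis=1),      # mirror across the short axis
        np.stack([-x, -y], axis=1),     # both reflections (180° rotation)
    ])
```

Each simulated scenario thus yields four training examples for the p(win) model at the cost of one.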
One notable success: adding the “ghost pocket” angle (via mirror reflection) as a feature significantly improved the bank shot model, an example of physics-informed machine learning. I’m happy to share more details if there’s interest!

Submitted by /u/ArithmosDev
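For reference, the ghost pocket is simply the real pocket mirrored across the bank rail; a minimal sketch for a horizontal rail, with an illustrative coordinate setup:

```python
def ghost_pocket(pocket_xy, rail_y):
    """Reflect a pocket across a horizontal rail at height rail_y.

    For a one-rail bank, aiming the object ball at this mirrored pocket
    approximates the required departure angle; spin-induced throw and
    rail compression are left to the learned model to correct.
    """
    px, py = pocket_xy
    return (px, 2.0 * rail_y - py)
```

Feeding the angle toward this mirrored point to the bank model gives it the idealized geometry directly, so the network only has to learn the residual physics.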
![Building a 9-ball AI player: Candidate generation for direct cut shots](https://technologiesdigest.com/wp-content/uploads/2026/05/Building-a-9-ball-AI-player-Candidate-generation-for-direct-cut.png)