GraSS: Combining Graph Neural Networks with Expert Knowledge for SAT Solver Selection
BibTeX
@misc{hao2024grasscombininggraph,
title={GraSS: Combining Graph Neural Networks with Expert Knowledge for SAT Solver Selection},
author={Jianye Hao and Wenyi Xiao and Hui-Ling Zhen and Mingxuan Yuan and Yingxue Zhang and Mark Coates and Zhanguang Zhang and Joseph Cotnareanu and Didier Chetelat and Amur Ghose},
year={2024},
eprint={2405.11024},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2405.11024},
}
AI Audio Lecture + Q&A
Transcript
John: In our course on Advanced Topics in Combinatorial Optimization, we've seen a trend of applying deep learning directly to problem structures. Today's lecture is on GraSS, or the Graph Neural Network SAT Solver Selector. This fits into the recent surge of work like 'Neural Approaches to SAT Solving', which tries to leverage graph networks. However, instead of solving the problem directly, GraSS focuses on selecting the best tool for the job. It's an interesting shift from learning heuristics to learning which existing heuristic is best.
John: Yes, Noah?
Noah: Hi Professor. So, is the core idea that no single SAT solver is universally the best, and we need a way to pick one for each specific problem instance?
John: Precisely. The performance variation can be extreme—seconds versus hours for different solvers on the same instance. The challenge has always been how to make that choice automatically. The classic approach is SATzilla, which uses hand-crafted features to describe the problem and feeds them into a machine learning model.
Noah: But hand-crafted features have limitations, right? Like you might lose important structural information.
John: Exactly. That's the main issue GraSS addresses. Feature engineering is slow and an information bottleneck. You're compressing a complex logical formula into a fixed-size vector. The contribution here is to sidestep that by representing the entire SAT instance as a graph—specifically, a literal-clause graph. This preserves the complete structure. Then, they use a Graph Neural Network, or GNN, to learn directly from that graph which solver in a portfolio is likely to perform best.
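To make that selection pipeline concrete, here is a minimal PyTorch sketch of one message-passing round over a literal-clause graph, followed by a pooled readout that scores every solver in the portfolio. The single round, the mean aggregation, and all of the names (LCGSolverSelector, edge_index, and so on) are illustrative assumptions, not the architecture described in the paper.

import torch
import torch.nn as nn

class LCGSolverSelector(nn.Module):
    """Illustrative sketch: one message-passing round on a literal-clause graph."""
    def __init__(self, feat_dim, hidden_dim, num_solvers):
        super().__init__()
        self.cls_update = nn.Linear(2 * feat_dim, hidden_dim)
        self.lit_update = nn.Linear(feat_dim + hidden_dim, hidden_dim)
        self.readout = nn.Linear(2 * hidden_dim, num_solvers)

    def forward(self, lit_x, cls_x, edge_index):
        # lit_x: [n_lit, feat_dim] literal-node features
        # cls_x: [n_cls, feat_dim] clause-node features
        # edge_index: [2, n_edges], each column is (clause id, literal id)
        cls_id, lit_id = edge_index[0], edge_index[1]
        ones = torch.ones(cls_id.numel(), 1)
        # Clauses average the features of the literals they contain.
        cls_agg = torch.zeros_like(cls_x).index_add_(0, cls_id, lit_x[lit_id])
        cls_deg = torch.zeros(cls_x.size(0), 1).index_add_(0, cls_id, ones).clamp(min=1)
        cls_h = torch.relu(self.cls_update(torch.cat([cls_x, cls_agg / cls_deg], dim=-1)))
        # Literals average the updated states of the clauses they appear in.
        lit_agg = torch.zeros(lit_x.size(0), cls_h.size(1)).index_add_(0, lit_id, cls_h[cls_id])
        lit_deg = torch.zeros(lit_x.size(0), 1).index_add_(0, lit_id, ones).clamp(min=1)
        lit_h = torch.relu(self.lit_update(torch.cat([lit_x, lit_agg / lit_deg], dim=-1)))
        # Graph-level readout: mean-pool both node types, one score per solver.
        graph_h = torch.cat([lit_h.mean(dim=0), cls_h.mean(dim=0)], dim=-1)
        return self.readout(graph_h)  # the portfolio picks the best-scoring solver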
Noah: So it's not just a pure GNN approach; they're still using some domain knowledge?
John: Correct. And that's a critical point. They don't throw away decades of research. They incorporate features inspired by SATzilla—things like variable degrees or clause lengths—as initial node features in the graph. They're trying to get the best of both worlds: the rich, complete structural representation from the graph, and the distilled wisdom from traditional, hand-designed features.
John: Let's look closer at the technical details. The graph itself is a tripartite structure: you have nodes for positive literals, negative literals, and clauses. Edges connect a clause to the literals it contains. This captures the full logical relationship. On top of this base graph, they add those expert-designed features I mentioned as attributes to the nodes.
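As a rough illustration of that construction, the snippet below turns a CNF given as DIMACS-style integer clauses (where -2 means NOT x2) into the node groups and clause-literal edges, and attaches two SATzilla-flavoured statistics, literal occurrence counts and clause lengths, as initial node features. The paper's feature set is richer; the function name and feature choice here are assumptions made purely for illustration.

def build_literal_clause_graph(clauses, num_vars):
    """Illustrative literal-clause graph builder (not the paper's exact code).

    clauses: list of clauses, each a list of non-zero ints, e.g. [[1, -2], [2, 3]]
    Positive literal i maps to node i-1, its negation to node num_vars + i - 1;
    each clause gets its own node, connected to the literals it contains.
    """
    def lit_node(lit):
        var = abs(lit) - 1
        return var if lit > 0 else num_vars + var

    edges = []                          # (clause node id, literal node id)
    lit_degree = [0] * (2 * num_vars)   # number of clauses each literal appears in
    clause_len = []
    for c_id, clause in enumerate(clauses):
        clause_len.append(len(clause))
        for lit in clause:
            l_id = lit_node(lit)
            edges.append((c_id, l_id))
            lit_degree[l_id] += 1

    # Expert-inspired initial node features (a tiny illustrative subset).
    lit_features = [[deg] for deg in lit_degree]
    clause_features = [[length] for length in clause_len]
    return edges, lit_features, clause_features

# Example: (x1 OR NOT x2) AND (x2 OR x3)
edges, lit_feats, cls_feats = build_literal_clause_graph([[1, -2], [2, 3]], num_vars=3)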
Noah: Wait, I'm confused about something. The paper also mentions positional encoding for clauses. Why would position matter in a graph, which is typically order-invariant?
John: That's an excellent question, and it highlights a key insight of the paper. While a graph is mathematically unordered, the input files for SAT solvers are ordered lists of clauses. Many solvers are sensitive to this ordering; it can affect their branching heuristics. By adding a positional encoding to the clause nodes, the GNN can learn patterns related to this ordering, which would otherwise be lost in a pure graph representation.
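One standard way to inject that ordering, shown here purely as an illustration (the paper's exact encoding scheme may differ), is a sinusoidal encoding of each clause's index in the input file, appended to the clause node's feature vector:

import math

def clause_positional_encoding(position, dim=8):
    """Sinusoidal encoding of a clause's index in the CNF file (illustrative)."""
    enc = []
    for i in range(dim // 2):
        freq = 1.0 / (10000 ** (2 * i / dim))
        enc.append(math.sin(position * freq))
        enc.append(math.cos(position * freq))
    return enc

# Append the encoding to each clause node's features, e.g.:
# for j in range(len(cls_feats)):
#     cls_feats[j] = cls_feats[j] + clause_positional_encoding(j)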
Noah: So it's capturing an artifact of the solver's implementation, not just the abstract problem structure. What about the model's objective? Is it just trying to predict the right solver?
John: Not quite, and this is another important detail. Instead of a standard classification loss that just cares about accuracy, they use a runtime-sensitive loss function. The goal isn't just to be correct; it's to minimize the total solving time. A plain classifier treats every wrong pick the same, but here the penalty scales with the time the choice actually costs: picking a catastrophically slow solver hurts a lot, while picking one that is only slightly slower than the best hurts very little. It directly optimizes the metric we actually care about in the real world.
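To show what a runtime-sensitive objective can look like, here is one illustrative formulation, not necessarily the loss used in the paper: minimize the expected runtime under the model's softmax distribution over solvers, so a confident choice of a very slow solver costs roughly its full runtime, while near-ties between fast solvers cost almost nothing.

import torch

def expected_runtime_loss(solver_logits, runtimes):
    """Illustrative runtime-weighted loss (an assumption, not the paper's exact form).

    solver_logits: [batch, num_solvers] raw scores from the selector
    runtimes:      [batch, num_solvers] measured runtime of each solver on each instance
    """
    probs = torch.softmax(solver_logits, dim=-1)
    # Expected solving time if a solver is drawn from the predicted distribution:
    # catastrophically slow picks are penalized in proportion to their runtime.
    return (probs * runtimes).sum(dim=-1).mean()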
John: The implications of this work are twofold. First, the practical impact is significant. On an industrial benchmark, they showed a 40% reduction in average solving time compared to strong baselines. In fields like circuit design, where SAT solvers can run for hours or days, that's a substantial improvement. It makes previously intractable verification tasks more feasible.
Noah: Does this approach seem generalizable? For example, the paper on 'Graph Neural Networks for Binary Programming' tries to use GNNs for a different class of optimization problems. Could this solver-selection framework be applied there too?
John: I think so. The core principle—representing a combinatorial problem as a graph and using a GNN to select from an algorithm portfolio—is quite general. The specific graph representation and expert features would need to be adapted, but the template is there. This work reinforces the idea that GNNs can serve as powerful policy networks for algorithm selection, not just for end-to-end solving. It provides a blueprint for combining learned representations with domain expertise in other areas of automated reasoning and optimization.
John: To wrap up, GraSS demonstrates a powerful synthesis. It doesn't treat the GNN as a black box that replaces everything. Instead, it carefully integrates deep learning with decades of domain knowledge about what makes SAT problems hard. The key takeaways are the value of preserving problem structure through graph representations and optimizing the model for the true objective, which in this case is minimizing runtime, not just maximizing accuracy.
John: Thanks for listening. If you have any further questions, ask our AI assistant or drop a comment.