Recently, large language models (LLMs) have shown promising abilities to
generate novel research ideas in science, a direction which coincides with many
foundational principles in computational creativity (CC). In light of these
developments, we present an idea generation system named Spark that couples
retrieval-augmented idea generation using LLMs with a reviewer model named
Judge trained on 600K scientific reviews from OpenReview. Our work serves both
as a system demonstration and as an invitation to other CC researchers to
explore grounding the generation and evaluation of scientific ideas in
foundational CC principles. To this end, we release the annotated dataset used to train
Judge, inviting other researchers to explore the use of LLMs for idea
generation and creative evaluation.