The Premise
In the summer of 2025, I participated in an intensive transnational research workshop held in Paris, a collaboration between RIT’s Center for Engaged Storycraft (directed by Dr. Laura Shackelford) and Université Paris 8’s Paragraphe lab (Dr. Samuel Szoniecky). The brief was open-ended: create generative multimedia narratives using AI, knowledge systems, and whatever tools and methods we chose, exploring how computational creativity can be oriented toward storytelling.
Our small team was drawn to mythology. The rich cast of characters, the deep corpus of source texts, the archetypal narrative structures. It felt like fertile ground for generative storytelling. We explored several divination concepts, including the I Ching, before becoming inspired by a team member’s connection to Naples, Italy, formerly a Greek colonial outpost near the ancient city of Cumae.
The Cumaean Sibyl
The Sibyl of Cumae was a prophetic priestess of Apollo, an oracle believed to dwell in a cave near the city. According to myth, she delivered her prophecies in ecstatic trance: riddles etched into laurel leaves and scattered in the wind, or carved into volcanic rock. Her most famous portrait comes from Virgil's Aeneid, and her cave (a sacred space of mystery, an echoing tunnel where divine messages were said to emerge from the dark) was identified by archaeologists in the early 20th century.
We kept the prophecy. We kept the divination. And we built a system that could channel her.
The System
The oracle is not a single prompt to a language model. It’s an engineered pipeline where structured knowledge, narrative logic, and generative AI work together:
A RAG-augmented LLM persona. Using AnythingLLM with Ollama as the underlying model runtime, we created a custom AI agent embodying the Cumaean Sibyl. A document corpus of epic poems, prophecies, and mythological texts (including The Iliad, The Aeneid, and others) was embedded into a vector database, allowing the model to draw on specific literary and mythological material when generating responses. The agent was prompted to deliver cryptic, poetic wisdom in hexameter verses: metaphorical, somewhat ominous, never cheerful, never chatty.
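Downstream components talk to the agent over AnythingLLM's REST API. A minimal sketch of that call, in Python: the base URL, workspace slug, key, and response field here are assumptions based on a typical local AnythingLLM install, not the project's exact configuration.

```python
import json
import urllib.request

# Hypothetical values -- the real slug and key come from your AnythingLLM instance.
BASE_URL = "http://localhost:3001"
WORKSPACE_SLUG = "cumaean-sibyl"
API_KEY = "YOUR-ANYTHINGLLM-API-KEY"


def build_chat_request(base_url: str, slug: str, api_key: str, message: str):
    """Assemble the URL, headers, and JSON body for a workspace chat call."""
    url = f"{base_url}/api/v1/workspace/{slug}/chat"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"message": message, "mode": "chat"}  # "chat" mode includes RAG context
    return url, headers, body


def ask_sibyl(question: str) -> str:
    """Send a question to the Sibyl workspace and return her reply text."""
    url, headers, body = build_chat_request(BASE_URL, WORKSPACE_SLUG, API_KEY, question)
    req = urllib.request.Request(url, data=json.dumps(body).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["textResponse"]
```

Because every message goes through the workspace, each reply is grounded in the embedded corpus rather than the model's general knowledge alone.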
A semantic prophecy ontology. I built a formal ontology in Protégé (OWL/RDF) that modeled the components of a prophecy: events (battles, treaties, assassinations, divine interventions, natural disasters), actors (deities, mortals, armies, monsters, nation-states), sentiments (positive, negative, bittersweet, neutral), and the causal relationships between them. The initial model captured the basic taxonomy; a substantially expanded version added dozens of event subtypes, actor roles, and narrative structures.
SPARQL-driven narrative arcs. The ontology was hosted on an Apache Jena Fuseki server and queried with SPARQL to generate structured prophetic cause-and-effect chains. Rather than asking an LLM to invent a prophecy from scratch (which tends to produce vague, generic output), the system queries the ontology in a loop to construct coherent multi-act narratives: events in act one lead to inevitable conclusions, actors have appropriate roles and relationships, and the sentiment arc follows a meaningful trajectory.
A web front-end. A Vue.js/TypeScript application provided the interactive interface, communicating with the AnythingLLM REST API via Axios. The vision included RPG-like question-and-response mechanics and a maze mini-game that would weight the positivity or negativity of the prophecy.
Text-to-speech and lipsync. To bring the Sibyl to life, I experimented with multiple TTS systems before settling on OpenAI’s, using detailed custom prompts to achieve a suitably foreboding and dramatic vocal delivery. For visual embodiment, I explored several lipsync approaches: Rhubarb (a lightweight phoneme-to-viseme mapper that generates JSON timing data from audio), Wav2Lip (deep learning-based photorealistic video lipsync), and Nvidia’s Audio2Face 3D API. MidJourney was used to generate mouth viseme frames from a Sibyl portrait for the Rhubarb approach.
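Rhubarb's output is a JSON file whose "mouthCues" array maps time spans in the audio to viseme codes (A through H, plus X for a closed mouth). A small sketch of how those cues can drive frame selection during playback; the JSON sample and the frame-file mapping are invented for illustration.

```python
import json

# Illustrative Rhubarb-style output: time spans mapped to viseme codes.
rhubarb_json = """
{
  "metadata": { "soundFile": "sibyl.wav", "duration": 1.0 },
  "mouthCues": [
    { "start": 0.0, "end": 0.3, "value": "X" },
    { "start": 0.3, "end": 0.6, "value": "B" },
    { "start": 0.6, "end": 1.0, "value": "E" }
  ]
}
"""

# Hypothetical mapping from viseme codes to MidJourney-generated mouth frames.
VISEME_FRAMES = {"X": "mouth_rest.png", "B": "mouth_b.png", "E": "mouth_e.png"}


def frame_at(cues: list, t: float) -> str:
    """Return the mouth frame to display at time t (seconds) during playback."""
    for cue in cues:
        if cue["start"] <= t < cue["end"]:
            return VISEME_FRAMES[cue["value"]]
    return VISEME_FRAMES["X"]  # closed mouth outside any cue


cues = json.loads(rhubarb_json)["mouthCues"]
```

Compared to Wav2Lip or Audio2Face, this approach trades realism for simplicity: a handful of still frames and a timing file are enough to animate a static portrait.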
Infrastructure
The full system ran locally as a set of Docker containers: AnythingLLM as the AI platform, Ollama for the LLM, and Apache Jena Fuseki hosting the ontology and serving SPARQL queries. This worked well on a MacBook Pro but would be expensive to host in the cloud, a consideration that led to exploring migration to OpenAI’s platform (metered API calls rather than always-on GPU instances) using their File Search and Vector Store APIs as equivalents to AnythingLLM’s RAG features.
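A Compose file for a stack like this might look roughly as follows; the image names and port mappings here are common defaults for these projects, not the exact configuration we used.

```yaml
# Sketch of the local three-container stack (illustrative images and ports).
services:
  anythingllm:
    image: mintplexlabs/anythingllm
    ports: ["3001:3001"]     # web UI and REST API
    depends_on: [ollama]
  ollama:
    image: ollama/ollama
    ports: ["11434:11434"]   # model inference endpoint
  fuseki:
    image: stain/jena-fuseki
    ports: ["3030:3030"]     # SPARQL endpoint for the prophecy ontology
```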
My Role
I was the primary developer and technical architect. Specifically, I designed and built the semantic prophecy ontology in Protégé, wrote the SPARQL queries for narrative arc generation, configured the AnythingLLM agent with its RAG corpus and custom persona, built the Vue.js web front-end, implemented the Docker infrastructure, engineered the TTS prompts, and explored multiple lipsync pipelines. My teammates contributed to concept development, research into mythological source material, and presentation design.
Reflections
This project sits at an unusual intersection: knowledge representation, NLP/LLM engineering, classical literature, and creative narrative design. The semantic modeling work (decomposing what makes a prophecy feel like a prophecy and encoding that into a queryable graph) was the most intellectually rewarding part. It’s a concrete demonstration of how structured knowledge and generative AI can complement each other: the ontology provides narrative logic and coherence that the LLM alone cannot reliably produce, while the LLM provides the poetic language and creative variation that a rule-based system cannot.
The workshop context (three weeks of intensive, interdisciplinary collaboration in Paris) shaped the work in important ways. Working alongside researchers in digital humanities pushed the project beyond pure engineering toward questions about how AI intersects with cultural heritage, storytelling traditions, and symbolic meaning-making.
International Research Workshop: RIT Center for Engaged Storycraft × Université Paris 8 Paragraphe Lab · Summer 2025