Topics of particular interest for the workshop include, but are not limited to:
DB-inspired techniques to optimize caching, indexing, or inference in generative ML architectures.
Multi-modal embeddings and semantic question-answering over multiple modalities.
Declarative systems to compose AI agents and multi-agent systems for data processing.
Scheduling and sharing of large workloads for LLMs.
LLM-informed database design, configuration, and tuning.
Strategies to deploy LLM architectures for data processing, e.g., RAG, chain-of-thought reasoning.
Vector databases for embeddings in RAG systems.
Benchmarks for data processing tasks using LLMs.
Integration of LLMs with transactional/real-time analytics databases.
Techniques for efficiently serving instances of transformer models.
Submission website: https://openreview.net/group?id=ACM.org/SIGMOD/2025/Workshop/NOVAS
Postdoctoral Associate, MIT
Postdoctoral Associate, MIT
Assistant Professor, University of Arizona
Associate Professor, OSU
Associate Professor, EURECOM
Submissions will be single anonymous: authors cannot see reviewer names, but reviewers can see author names. We use OpenReview to host papers, and the reviewing process will be public. This means that reviewers' comments can be seen by all, although the reviewers' identities will remain anonymous.
Authors may revise their paper as many times as needed up to the final paper submission deadline; changes are not allowed while the paper is under review.
Conflicts of Interest (COIs) are handled using the same rules as SIGMOD 2025.
The use of LLMs as general-purpose assistive tools is allowed. Authors and reviewers should understand that they take full responsibility for the content written under their name, including content generated by LLMs that could be construed as plagiarism or scientific misconduct (e.g., fabrication of facts). LLMs are not eligible for authorship.