Building Event-Driven Content Automation: Auto-Summaries with Sanity Agent
Last updated: 22.12.2025

This post discusses the implementation of an automatic summary generation system using Sanity's event-driven architecture. The system leverages Sanity Agent, Sanity Functions, and Blueprints to autonomously generate summaries when content is published. It consists of three main components: a Sanity Blueprint that triggers on post publication, a Sanity Function that orchestrates summary generation, and a Sanity Agent that processes content through a language model to create summaries. The event-driven architecture allows the system to react immediately to content changes, eliminating manual intervention and scaling efficiently with content volume. This approach ensures consistent and timely summary generation, enhancing content workflows. The architecture can be extended to other automated tasks like image optimization and SEO metadata generation, treating content as an active event source rather than passive data.
Introduction
Content management systems have traditionally required manual intervention at nearly every step. Authors write content, editors review it, and someone manually crafts metadata like summaries and descriptions. This approach works at small scale but becomes a bottleneck as content volume grows. What if your CMS could automatically generate summaries the moment content is published, without manual triggers or scheduled jobs?
This post explores the implementation of an automatic summary generation system built on Sanity's event-driven architecture. By combining Sanity Agent, Sanity Functions, and Blueprints, we created a system that reacts to content changes in real time and generates summaries autonomously. The result is a fully reactive content pipeline that operates without human intervention.
System Overview
The auto-summary system consists of three primary components working in concert. First, a Sanity Blueprint defines the trigger conditions—specifically, when a post document transitions from draft to published state. Second, a Sanity Function serves as the serverless compute layer, receiving webhook payloads and orchestrating the summary generation. Third, a Sanity Agent processes the content through a large language model and writes the generated summary back to the document.
The flow begins when an editor publishes a post. The Blueprint detects this state change and fires a webhook to the Sanity Function. The Function extracts the document ID, fetches the full post content, and passes it to the Agent with instructions to generate a concise summary. The Agent returns the summary text, which the Function then patches back into the post document, updating the autoSummary field. The entire process completes within seconds, transparent to the end user.
Event-Driven Content Flow
Event-driven architecture fundamentally changes how content systems operate. Rather than polling for changes or running periodic batch jobs, the system reacts immediately to document lifecycle events. Sanity's mutation events provide granular hooks into document creation, updates, and state transitions. This enables precise targeting of automation logic to specific scenarios.
In our implementation, the Blueprint configuration specifies a filter that matches post documents and a trigger condition that fires on publish events. When a document matches these criteria, Sanity's event system constructs a webhook payload containing the document ID, mutation type, and relevant metadata. This payload is delivered via HTTP POST to the configured Function endpoint.
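A minimal Blueprint for this trigger might look like the sketch below. The helper names (`defineBlueprint`, `defineDocumentFunction`) and the shape of the `event` options are assumptions about the `@sanity/blueprints` package rather than this project's exact configuration.

```typescript
// sanity.blueprint.ts -- a sketch, not this project's exact configuration.
// Helper names and option shapes are assumptions about @sanity/blueprints.
import {defineBlueprint, defineDocumentFunction} from '@sanity/blueprints'

export default defineBlueprint({
  resources: [
    defineDocumentFunction({
      name: 'auto-summary',          // points at the function in functions/auto-summary
      event: {
        on: ['publish'],             // fire when a matching document is published
        filter: "_type == 'post'",   // only post documents trigger the webhook
        projection: '_id',           // deliver just the document id in the payload
      },
    }),
  ],
})
```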
The Function receives the webhook as a standard HTTP request. It validates the payload signature to ensure authenticity, extracts the document identifier, and initiates the content processing pipeline. Because Functions are serverless and stateless, each invocation operates independently. This design eliminates concerns about server management, scaling, or concurrent request handling—the platform handles these automatically.
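A skeleton of such a handler is sketched below; the rest of this post fills in the three processing steps. The `documentEventHandler` wrapper, the shape of `event.data`, and `context.clientOptions` are assumptions about the `@sanity/functions` runtime contract.

```typescript
// functions/auto-summary/index.ts -- a skeleton, not the exact handler.
// documentEventHandler, event.data, and context.clientOptions are assumptions
// about the @sanity/functions runtime contract.
import {createClient} from '@sanity/client'
import {documentEventHandler} from '@sanity/functions'

export const handler = documentEventHandler(async ({context, event}) => {
  // The Blueprint's projection controls what arrives in event.data;
  // here only the id of the just-published post is needed.
  const documentId = event.data._id

  // Client configuration pre-authenticated for this project by the runtime.
  const client = createClient({
    ...context.clientOptions,
    apiVersion: '2025-05-01',
    useCdn: false,
  })

  // 1. fetch the post, 2. generate the summary, 3. patch it back (see below).
  console.log(`Auto-summary triggered for ${documentId}`)
})
```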
Reading and Acting on Content Changes
Once triggered, the system must read the document content to generate a meaningful summary. The Function uses the Sanity client to fetch the full document, including the body field containing the post's structured content. Sanity's Content Lake API returns documents in their native format—typically Portable Text for rich content.
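Concretely, this is a single GROQ query through the client. The field names (`title`, `body`, `autoSummary`) follow the post schema described here and are illustrative.

```typescript
import type {SanityClient} from '@sanity/client'

// Fetch the published post, including its Portable Text body.
// Field names are illustrative and follow the schema described in this post.
async function fetchPost(client: SanityClient, documentId: string) {
  return client.fetch(
    `*[_id == $id][0]{_id, title, body, autoSummary}`,
    {id: documentId},
  )
}
```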
The Agent receives this structured content along with a prompt that defines the summarization task. The prompt instructs the Agent to extract key points, maintain the original tone, and constrain the output to a specified length. The Agent processes the content through its language model, generating a summary that captures the essence of the post without excessive detail.
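A sketch of that hand-off through the client's Agent Actions Generate endpoint follows. The parameter names (`schemaId`, `documentId`, `instruction`, `instructionParams`, `target`), the `noWrite` flag, and the response shape are assumptions about `client.agent.action.generate`; the schema id and instruction text are placeholders rather than the prompt used in this project.

```typescript
import type {SanityClient} from '@sanity/client'

// Sketch of the summarization call. Parameter names, the noWrite flag, and the
// response shape are assumptions about client.agent.action.generate; the
// schema id and instruction text are placeholders.
async function generateSummary(client: SanityClient, documentId: string): Promise<string> {
  const result = await client.agent.action.generate({
    schemaId: '_.schemas.default',   // placeholder: id of the deployed workspace schema
    documentId,                      // the just-published post
    instruction:
      'Summarize $content in two to three sentences. Keep the original tone ' +
      'and cover only points that appear in the text.',
    instructionParams: {
      content: {type: 'field', path: 'body'}, // hand the Portable Text body to the model
    },
    target: {path: 'autoSummary'},   // the field the summary is destined for
    noWrite: true,                   // assumed option: return the value instead of writing it
  })

  // Assumed response shape: a document-like object carrying the generated field.
  const generated = result as unknown as {autoSummary?: string}
  return generated.autoSummary ?? ''
}
```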
After generation, the Function must write the summary back to Sanity. This requires a write-enabled API token and a patch mutation targeting the specific document. The mutation updates only the autoSummary field, leaving other content untouched. This surgical approach ensures the automation doesn't interfere with manual edits or create unwanted side effects. The document's revision history records the change, maintaining full auditability.
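The write-back itself is a standard patch through a client configured with a write token; only the `autoSummary` field name is specific to this schema.

```typescript
import type {SanityClient} from '@sanity/client'

// Write the generated summary into the published document, touching only
// the autoSummary field so manual edits elsewhere are left alone.
async function writeSummary(client: SanityClient, documentId: string, summary: string) {
  return client
    .patch(documentId)            // target the specific post
    .set({autoSummary: summary})  // update only this field
    .commit()
}
```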
The Role of Sanity Functions
Sanity Functions serve as the bridge between Sanity's content layer and external compute resources. They execute as serverless functions, deployed to edge infrastructure and automatically scaled based on demand. Each Function is essentially an HTTP endpoint with access to Sanity's APIs and the ability to execute arbitrary code.
Functions operate within Sanity's security context, inheriting the project's authentication and permission model. This eliminates the need to manage separate API keys or implement custom authentication flows. The Function receives a pre-authenticated client that can query and mutate content according to the configured permissions.
The stateless nature of Functions enforces clean architectural patterns. Each invocation must be self-contained, fetching any required data and completing its work without relying on persistent state. This constraint actually simplifies implementation—there's no need to manage caches, sessions, or background workers. The Function executes, performs its task, and terminates. Subsequent invocations start fresh.
Error handling becomes critical in this model. If the Function fails—due to network issues, API limits, or processing errors—the webhook delivery may be retried according to the configured retry policy. The Function must implement idempotency checks to avoid duplicate processing. In our case, checking for an existing autoSummary value prevents regeneration on retries.
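A minimal version of that guard, assuming the post shape fetched earlier:

```typescript
// Idempotency guard: skip regeneration when a summary already exists, so
// webhook retries do not overwrite a value or trigger duplicate Agent calls.
function needsSummary(post: {autoSummary?: string | null}): boolean {
  return !post.autoSummary
}

// Inside the handler, after fetching the post:
// if (!needsSummary(post)) return
```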
Why Event-Driven Content Matters
Event-driven content operations represent a paradigm shift from traditional CMS architectures. Rather than content being passive data waiting for external processes, it becomes an active participant in workflows. Documents emit events, trigger computations, and coordinate their own metadata generation. This inverts the control flow—content drives automation rather than automation polling content.
This architecture scales naturally. As content volume increases, the system doesn't need tighter polling intervals or more powerful batch processors. Each document triggers its own processing in isolation. Functions scale horizontally, handling concurrent requests without coordination. The Content Lake handles the read and write load through its distributed infrastructure.
Consistency improves because automation happens immediately and deterministically. Every published post receives a summary. There's no risk of forgetting to run a batch job or missing documents during manual processing. The system enforces policies through code rather than relying on editorial discipline. This guarantees a baseline quality standard across all content.
Developer experience benefits from the clear separation of concerns. Content schema, business logic, and automation rules live in distinct layers. The Blueprint declares when to act, the Function implements how to act, and the Agent provides the intelligence to act meaningfully. Teams can iterate on each component independently without coupling.
Conclusion
Building an automatic summary system with Sanity Agent, Functions, and Blueprints demonstrates the power of event-driven content operations. The combination of real-time triggers, serverless compute, and AI-powered content generation creates a fully autonomous pipeline that operates without manual intervention. Content becomes self-maintaining, automatically enriching itself as it moves through publication workflows.
This architecture extends beyond summarization. The same patterns apply to image optimization, SEO metadata generation, content translation, or quality validation. Any operation that should happen automatically in response to content changes fits this model. The key insight is treating content as an event source that drives computation rather than static data waiting for external processing.
The tools—Blueprints for declarative event configuration, Functions for serverless execution, and Agents for intelligent processing—combine to enable sophisticated automation with minimal infrastructure overhead. Teams can focus on defining what should happen rather than managing how to make it happen. This is the promise of event-driven content operations: systems that react intelligently to change, scaling effortlessly as content grows.