AI Agents
AI-powered content enhancement system for automated content workflows, SEO optimization, and intelligent content assistance.
Overview
Agents are file-based definitions that can:
Run manually from the post editor dropdown
Trigger automatically on specific events (publish, review, etc.)
Process and enhance content automatically using AI
Use MCP tools to interact with the CMS
Agent Types
Agents (AI-Powered)
In-process AI agents that use AI providers directly (OpenAI, Anthropic, Google, Nano Banana) without external webhooks. These agents:
Run directly in your application - No external service required
Model-agnostic - Switch between OpenAI, Anthropic, and Google easily
MCP integration - Can use MCP tools for CMS operations
Reaction system - Execute webhooks, Slack notifications, and more after completion
Faster execution - No network latency to external services
Agents are ideal for:
Quick content improvements
SEO optimization
Content enhancement
Automated workflows that need CMS context
Creating an Agent
1. Generate Agent Scaffold
node ace make:agent seo-optimizer
This creates app/agents/seo_optimizer.ts.
2. Define Agent Configuration
import type { AgentDefinition } from '#types/agent_types'

const SeoOptimizerAgent: AgentDefinition = {
  id: 'seo-optimizer',
  name: 'SEO Optimizer',
  description: 'Automatically generates and optimizes SEO metadata',
  enabled: true,
  llmConfig: {
    provider: 'openai',
    model: 'gpt-4',
    systemPrompt: 'You are an SEO expert. Optimize metadata for better search rankings.',
    options: {
      temperature: 0.7,
      maxTokens: 1000,
    },
    // Enable MCP tool usage (allows agent to use CMS tools)
    useMCP: true,
    // MCP Tool Access Control (RBAC)
    // - If empty array []: Agent has access to ALL MCP tools (default)
    // - If specified: Agent can ONLY use the tools listed in the array
    // Example: Restrict to only SEO-related tools
    allowedMCPTools: ['get_post_context', 'save_post_ai_review'],
  },
  scopes: [{ scope: 'dropdown', order: 20, enabled: true }],
}

export default SeoOptimizerAgent
3. Configure Environment
# .env
# Set API key for your chosen provider
AI_PROVIDER_OPENAI_API_KEY=sk-...
# OR
AI_PROVIDER_ANTHROPIC_API_KEY=sk-ant-...
# OR
AI_PROVIDER_GOOGLE_API_KEY=...
# Optional: Slack webhook for reactions
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/...
Agent Example
Here's a complete example of an agent:
import type { AgentDefinition } from '#types/agent_types'
const AiAssistantAgent: AgentDefinition = {
id: 'ai-assistant',
name: 'AI Assistant',
description: 'Built-in AI assistant powered by OpenAI',
enabled: true,
llmConfig: {
// Text provider: 'openai' | 'anthropic' | 'google' | 'nanobanana'
providerText: 'openai',
modelText: 'gpt-4o',
// Media provider: 'openai' | 'nanobanana'
providerMedia: 'openai',
modelMedia: 'dall-e-3',
// Fallback
provider: 'openai',
model: 'gpt-4o',
// API key (optional - uses AI_PROVIDER_OPENAI_API_KEY env var if not set)
// apiKey: process.env.AI_PROVIDER_OPENAI_API_KEY,
// System prompt template
systemPrompt: `You are a helpful content assistant.
Help improve content while maintaining the original intent.`,
// Model options
options: {
temperature: 0.7,
maxTokens: 2000,
},
// Enable MCP tool usage
useMCP: false,
},
scopes: [{ scope: 'dropdown', order: 5, enabled: true }],
// Optional: Reactions (execute after completion)
reactions: [
{
type: 'slack',
trigger: 'on_success',
config: {
webhookUrl: process.env.SLACK_WEBHOOK_URL || '',
channel: '#content-alerts',
template: 'AI Assistant completed: {{agent}} processed post {{data.postId}}',
},
},
],
userAccount: { enabled: true },
}
export default AiAssistantAgent
Environment Configuration
Set your API keys in .env:
# OpenAI
AI_PROVIDER_OPENAI_API_KEY=sk-...
# Anthropic
AI_PROVIDER_ANTHROPIC_API_KEY=sk-ant-...
# Google (Gemini & Imagen)
AI_PROVIDER_GOOGLE_API_KEY=...
Supported Providers and Models
OpenAI
Text: gpt-4o, gpt-4-turbo, gpt-3.5-turbo
Media: dall-e-3, dall-e-2
API Key: AI_PROVIDER_OPENAI_API_KEY
Anthropic (Claude)
Text: claude-3-5-sonnet-latest, claude-3-opus-20240229, claude-3-haiku-20240307
API Key: AI_PROVIDER_ANTHROPIC_API_KEY
Google (Gemini, Nano Banana & Imagen)
Text (Reasoning): gemini-2.0-flash, gemini-1.5-pro
Media (Generation): nano-banana-pro-preview, imagen-4.0-generate-001
API Key: AI_PROVIDER_GOOGLE_API_KEY
Description: Access to Google's suite of Gemini reasoning models and Nano Banana/Imagen generation models.
Dual-Provider Configuration (Text + Media)
Agents can now use different providers for text reasoning and media generation. This is useful for combining powerful text models (like Claude or GPT-4o) with specialized image generators (like DALL-E or Imagen).
Reasoning vs. Generation: Why both?
Even specialized media agents like the Graphic Designer require two distinct models:
Text Model (Reasoning): This is the “brain” of the agent. It is used to process user instructions (e.g., “Make the hero image more professional”), analyze current post context, and decide which tools to call (like search_media or generate_image).
Media Model (Generation): This is the “artist.” It is only invoked when the agent calls a media-specific tool like generate_image. It takes a text prompt generated by the Reasoning model and turns it into pixels.
A media model cannot "think" or interact with the CMS; it requires a text-based "brain" to coordinate the workflow.
llmConfig: {
// Reasoning model (The Brain)
// Gemini Flash is a fast text/reasoning model
providerText: 'google',
modelText: 'gemini-2.0-flash',
// Generation model (The Artist)
// Imagen 4 is the specialized image generation model
providerMedia: 'google',
modelMedia: 'imagen-4.0-generate-001'
}
If providerText/modelText or providerMedia/modelMedia are missing, the system falls back to the provider/model fields.
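A minimal sketch of that resolution order (ignoring the global Admin UI defaults described later; the type and helper names here are illustrative, not part of the actual codebase):
// Illustrative sketch of the documented fallback order
interface LlmConfigLike {
  provider?: string
  model?: string
  providerText?: string
  modelText?: string
  providerMedia?: string
  modelMedia?: string
}

// Text resolution: providerText/modelText first, then the generic fields
function resolveText(config: LlmConfigLike) {
  return {
    provider: config.providerText ?? config.provider,
    model: config.modelText ?? config.model,
  }
}

// Media resolution: providerMedia/modelMedia first, then the generic fields
function resolveMedia(config: LlmConfigLike) {
  return {
    provider: config.providerMedia ?? config.provider,
    model: config.modelMedia ?? config.model,
  }
}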
MCP Integration
Agents can use MCP tools to interact with the CMS:
llmConfig: {
provider: 'openai',
model: 'gpt-4',
useMCP: true,
// Optional: restrict to specific tools
allowedMCPTools: ['list_posts', 'get_post_context', 'create_post_ai_review'],
}
When useMCP: true, the agent can:
List and query posts
Get post context
Create and edit posts
Add/update modules
Use layout planning tools
MCP Tool RBAC (Role-Based Access Control)
Security Feature: Agents can be restricted to specific MCP tools using the allowedMCPTools configuration. This ensures agents only have access to the tools they need for their specific purpose.
Where to Configure:
In your agent file (app/agents/your_agent.ts), add allowedMCPTools inside the internal configuration block:
const YourAgent: AgentDefinition = {
id: 'your-agent',
name: 'Your Agent',
enabled: true,
llmConfig: {
provider: 'openai',
model: 'gpt-4',
systemPrompt: '...',
options: { ... },
// Enable MCP tool usage
useMCP: true,
// Configure tool access here:
// Empty array [] = all tools available
// Specify array = only these tools allowed
allowedMCPTools: ['list_posts', 'get_post_context'], // Example: restricted access
// OR
// allowedMCPTools: [], // Full access to all tools
},
// ... rest of agent config
}
Real Examples from Codebase:
Graphic Designer (app/agents/graphic_designer.ts) - Restricted to media tools only:

llmConfig: {
  useMCP: true,
  allowedMCPTools: ['list_media', 'get_media', 'generate_image'],
}

General Assistant (app/agents/general_assistant.ts) - Full access:

llmConfig: {
  useMCP: true,
  allowedMCPTools: [], // Empty = all tools
}
How It Works:
If allowedMCPTools is empty or undefined: Agent has access to ALL MCP tools (default behavior)
If allowedMCPTools is specified: Agent can ONLY use the tools listed in the array
Enforcement: The system enforces these restrictions at two levels:
System Prompt: Only allowed tools are shown to the AI in the prompt
Execution: Any attempt to call a non-allowed tool is rejected with an error
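Conceptually, the execution-level check reduces to something like this (a minimal sketch, not the actual implementation):
// Hypothetical sketch of the documented execution-level check
function assertToolAllowed(allowedMCPTools: string[] | undefined, toolName: string): void {
  // Empty or undefined list = all tools allowed (default behavior)
  if (!allowedMCPTools || allowedMCPTools.length === 0) return
  if (!allowedMCPTools.includes(toolName)) {
    // Unauthorized calls are rejected with an error in the tool results
    throw new Error(`MCP tool "${toolName}" is not allowed for this agent`)
  }
}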
Example: Restricted Agent (Graphic Designer)
The Graphic Designer agent is restricted to only media-related tools:
llmConfig: {
provider: 'nanobanana',
model: 'gemini-1.5-flash',
useMCP: true,
// Only allow media-related tools - cannot create posts or modify content
allowedMCPTools: ['list_media', 'get_media', 'generate_image'],
}
This agent can:
✅ List media items
✅ Get media details
✅ Generate images via DALL-E
This agent cannot:
❌ Create new posts (create_post_ai_review)
❌ Modify existing posts (save_post_ai_review, update_post_module_ai_review)
❌ Access post data (list_posts, get_post_context)
Example: Full Access Agent (General Assistant)
The General Assistant has full access to all MCP tools:
llmConfig: {
provider: 'openai',
model: 'gpt-4',
useMCP: true,
// Empty array = all tools available
allowedMCPTools: [],
}
Available MCP Tools:
Post Management: list_posts, get_post_context, create_post_ai_review, save_post_ai_review
Module Management: add_module_to_post_ai_review, update_post_module_ai_review, remove_post_module_ai_review
Media Management: list_media, get_media, search_media, generate_image
search_media: Search existing media by alt text, description, filename, or category. Use this to find existing images before generating new ones.
generate_image: Generate new images via DALL-E. Only use when explicitly requested or when no suitable existing image is found.
Configuration: list_post_types, get_post_type_config, list_modules, get_module_schema
Layout Planning: suggest_modules_for_layout
Best Practices:
Principle of Least Privilege: Only grant agents the minimum tools they need
Document Restrictions: Comment why certain tools are restricted
Test Restrictions: Verify agents cannot access unauthorized tools
Review Regularly: As new tools are added, review agent permissions
Security Notes:
Tool restrictions are enforced server-side and cannot be bypassed
Unauthorized tool calls return an error in the tool results
The AI is only informed about tools it has access to, reducing the chance of attempting unauthorized calls
Reactions
Reactions execute after agent completion. Supported types:
Webhook Reaction
reactions: [
{
type: 'webhook',
trigger: 'on_success',
config: {
url: 'https://example.com/webhook',
method: 'POST',
headers: { 'X-Custom-Header': 'value' },
bodyTemplate: '{"agent": "{{agent}}", "result": {{result}}}',
},
},
]
Slack Reaction
reactions: [
{
type: 'slack',
trigger: 'on_success',
config: {
webhookUrl: process.env.SLACK_WEBHOOK_URL || '',
channel: '#content-alerts',
template: 'Agent {{agent}} completed successfully!',
},
},
]
MCP Tool Reaction
reactions: [
{
type: 'mcp_tool',
trigger: 'on_condition',
condition: {
field: 'result.status',
operator: 'equals',
value: 'published',
},
config: {
toolName: 'create_post_ai_review',
toolParams: {
type: 'blog',
locale: 'en',
slug: '{{result.slug}}',
title: '{{result.title}}',
},
},
},
]
Reaction Triggers
always - Always execute
on_success - Only on successful completion
on_error - Only on errors
on_condition - Based on condition evaluation
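For on_condition, the condition object compares a dotted field path in the payload against a value. A minimal sketch of the idea, assuming equals as the operator (the full operator set may differ):
// Hypothetical sketch of condition evaluation for 'on_condition' triggers
type Condition = { field: string; operator: string; value: unknown }

function evaluateCondition(root: Record<string, unknown>, condition: Condition): boolean {
  // Resolve a dotted path like 'result.status' against the payload
  const actual = condition.field
    .split('.')
    .reduce<unknown>((obj, key) => (obj as Record<string, unknown> | undefined)?.[key], root)
  switch (condition.operator) {
    case 'equals':
      return actual === condition.value
    default:
      return false // unknown operators treated as non-matching in this sketch
  }
}

// Example:
// evaluateCondition({ result: { status: 'published' } },
//   { field: 'result.status', operator: 'equals', value: 'published' }) // => true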
System Prompt Templates
System prompts support variable interpolation:
systemPrompt: `You are helping with {{postType}} content.
Current scope: {{scope}}
User context: {{context}}`
Available variables:
{{agent}} - Agent name
{{scope}} - Execution scope
{{postType}} - Post type (if available)
{{context}} - Additional context data
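Interpolation is a straightforward template substitution; a sketch of the idea (the real implementation may differ):
// Hypothetical sketch of {{variable}} interpolation in system prompts
function interpolate(template: string, vars: Record<string, string>): string {
  // Replace each {{name}} with its value; unknown names are left intact
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) => vars[name] ?? match)
}

interpolate('You are helping with {{postType}} content.', { postType: 'blog' })
// => 'You are helping with blog content.'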
Per-agent user accounts (recommended)
Adonis EOS can automatically create dedicated user accounts per agent at boot time. This enables:
Attribution: posts created via MCP can have author_id/user_id set to the specific agent (e.g. “Translator”).
Auditing: activity is tied to a distinct user row per agent.
Least privilege: all agent users should use the ai_agent role (cannot publish/approve/admin).
Why emails are “optional”
In this project, users.email is required and unique in the database schema.
So “optional email” means:
you typically don’t provide a real email, and
the system generates an internal email address for the agent user automatically.
Enabling per-agent accounts
Add userAccount to your agent definition:
userAccount: {
enabled: true,
// email?: optional (generated if omitted)
// username?: optional (defaults to agent:<agentId>)
// createAtBoot?: default true
}
Boot-time creation happens automatically during app start (via start/agents.ts).
Disabling boot provisioning (rare)
Set:
AGENT_USERS_BOOTSTRAP_DISABLED=1
This is mainly for special CI/testing workflows.
Agent Scopes
Agents can be triggered in different contexts:
dropdown - Manual execution from post editor dropdown
global - Global agent accessible via floating brain icon button (lower right of viewport)
field - Per-field AI button (e.g. translate a single field, generate image suggestions for a specific module prop)
  Can be filtered by fieldTypes (e.g., ['media']) to only appear for specific field types
  Can be filtered by fieldKeys to only appear for specific field paths
posts.bulk - Manual execution from the "Bulk actions" dropdown on the posts index page
post.publish - Auto-trigger when publishing
post.approve - Trigger when approving changes (Source mode)
post.review.save - Trigger when saving for human review
post.review.approve - Trigger when approving human review draft
post.ai-review.save - Trigger when an agent saves suggestions to AI review
post.ai-review.approve - Trigger when an AI review draft is approved
post.create-translation - Trigger when a new translation post is created
form.submit - Trigger on form submission
Global Scope
Global agents are accessible via a floating brain icon button in the lower right of the viewport. They don't require a post context and can be used for:
Creating new posts
General content assistance
System-wide operations
Example:
scopes: [{ scope: 'global', order: 5, enabled: true }]
Field Scope with Field Types
Field-scoped agents can be restricted to specific field types (e.g., media fields):
scopes: [
{
scope: 'field',
order: 10,
enabled: true,
fieldTypes: ['media'], // Only available for media field types
},
]
This is useful for specialized agents like the Graphic Designer that should only appear when editing media fields.
Example: Graphic Designer (Image Generation)
We recommend defining image-generation agents as field-scoped agents so they can be used from per-field AI buttons.
Example (app/agents/graphic_designer.ts):
scope: field
fieldTypes: ['media'] - Only appears for media field types
Uses MCP tools: generate_image (DALL-E) and search_media for finding existing images
Example: Translator (bulk translations)
We provide:
An agent definition: app/agents/translator.ts
MCP helpers: create_translation_ai_review and create_translations_ai_review_bulk
Recommended flow:
Call create_translations_ai_review_bulk to create translation posts (one per locale) and clone module structure into AI Review.
Use run_field_agent (with the Translator agent) to translate individual fields/modules and stage results.
Call submit_ai_review_to_review so a human can approve (see the sketch below).
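A hedged sketch of that sequence as MCP tool calls. The mcp client wrapper and every parameter name below are assumptions for illustration; consult the actual tool schemas for the real argument shapes:
// Hypothetical orchestration of the documented translation flow
declare const mcp: { callTool(name: string, args: Record<string, unknown>): Promise<any> }
declare const sourcePostId: string // illustrative stand-in

const bulk = await mcp.callTool('create_translations_ai_review_bulk', {
  postId: sourcePostId, // assumed argument name
  locales: ['es', 'de'], // assumed argument name
})

for (const translation of bulk.posts ?? []) {
  // Translate fields one by one and stage results into AI Review
  await mcp.callTool('run_field_agent', {
    agentId: 'translator',
    postId: translation.id,
    fieldKey: 'post.title', // repeat per field/module as needed
    applyToAiReview: true, // documented option for staging into AI Review
  })
}

// Hand the staged draft to a human reviewer (target post is assumed here)
await mcp.callTool('submit_ai_review_to_review', { postId: sourcePostId })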
Field scope filtering (recommended)
Use fieldKeys to restrict an agent to specific fields:
scopes: [
{
scope: 'field',
enabled: true,
order: 10,
fieldKeys: ['post.title', 'post.metaTitle', 'module.hero.title', 'module.prose.content'],
},
]
If fieldKeys is omitted/empty, the agent is considered available for all fields.
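The rule reduces to a simple membership check; a sketch of the documented behavior:
// Sketch of the fieldKeys availability rule
function isAvailableForField(fieldKeys: string[] | undefined, fieldKey: string): boolean {
  // Omitted or empty fieldKeys = available for all fields
  if (!fieldKeys || fieldKeys.length === 0) return true
  return fieldKeys.includes(fieldKey)
}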
Field-scope execution via MCP
For per-field AI buttons, use the MCP tool:
run_field_agent
Open-Ended Context (explicit prompt injection surface)
Some agents benefit from a freeform user prompt (e.g. “make this more concise”, “use a formal tone”, etc). In Adonis EOS, this is an explicit, opt-in capability called Open-Ended Context.
Enable it in an agent config
In app/agents/<agent>.ts:
openEndedContext: {
enabled: true,
label: 'Instructions',
placeholder: 'Example: “Keep it under 400 words, preserve the CTA.”',
maxChars: 1200,
}
How it is delivered to agents
Admin UI webhook agents (POST /api/posts/:id/agents/:agentId/run): The UI sends openEndedContext and the backend includes it in the webhook payload as payload.context.openEndedContext
MCP (run_field_agent): Pass openEndedContext as an argument; MCP forwards it in the webhook payload under context.openEndedContext
Server-side enforcement
The backend will reject openEndedContext unless agent.openEndedContext.enabled === true.
If maxChars is set, the backend will reject prompts longer than maxChars.
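A minimal sketch of these server-side checks (types simplified; the real validation lives in the backend):
// Sketch of the documented openEndedContext enforcement
interface OpenEndedContextConfig {
  enabled: boolean
  label?: string
  placeholder?: string
  maxChars?: number
}

function validateOpenEndedContext(config: OpenEndedContextConfig | undefined, input?: string): void {
  if (input === undefined) return
  // Rejected unless the agent explicitly opts in
  if (!config?.enabled) {
    throw new Error('openEndedContext is not enabled for this agent')
  }
  // Rejected when longer than the configured maxChars
  if (config.maxChars !== undefined && input.length > config.maxChars) {
    throw new Error(`openEndedContext exceeds ${config.maxChars} characters`)
  }
}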
Security guidance
Treat openEndedContext as untrusted input.
Your agent implementation should:
ignore attempts to change system constraints (publishing, permissions, secrets)
only return structured edits (field/module patches) that the CMS stages in review modes
Request payload (for field-scoped agents)
Field-scoped agents receive context about the field being edited:
{
"scope": "field",
"post": { "id": "uuid", "type": "page", "locale": "en", "status": "draft" },
"field": { "key": "post.title", "currentValue": "..." },
"draftBase": { "title": "...", "metaTitle": "...", "...": "..." },
"module": {
"postModuleId": "uuid",
"moduleInstanceId": "uuid",
"type": "hero",
"scope": "local",
"props": {},
"reviewProps": null,
"aiReviewProps": null,
"overrides": null,
"reviewOverrides": null,
"aiReviewOverrides": null,
"schema": { "type": "hero", "propsSchema": {}, "defaultProps": {} }
},
"context": {}
}
Response expectations (recommended)
For the best UX, have the agent respond using one of these patterns:
{ "value": <newValue> }(recommended for true per-field edits){ "post": { ...partialPostPatch } }(for core post fields){ "module": { "props": { ... } } }or{ "module": { "overrides": { ... } } }(for module edits)
If applyToAiReview=true was passed to run_field_agent, MCP will best-effort stage these responses into AI Review.
Example with Form Filtering
scopes: [
{
scope: 'form.submit',
order: 10,
enabled: true,
formSlugs: ['contact-form', 'inquiry-form'], // Only these forms
},
]
Agent Payload
Agents receive the canonical post JSON format:
{
"post": {
"type": "blog",
"locale": "en",
"slug": "my-post",
"title": "My Post",
"excerpt": "Post summary",
"status": "draft",
"metaTitle": null,
"metaDescription": null,
"canonicalUrl": null,
"robotsJson": null,
"jsonldOverrides": null
},
"modules": [
{
"type": "prose",
"scope": "local",
"orderIndex": 0,
"locked": false,
"props": {
"content": {
/* Lexical JSON */
}
},
"overrides": null,
"globalSlug": null
}
],
"translations": [
{ "id": "uuid", "locale": "en" },
{ "id": "uuid", "locale": "es" }
],
"context": {
"triggeredBy": "dropdown",
"userId": "uuid"
}
}
Agent Response
Agents return suggested changes:
{
"post": {
"title": "Improved SEO Title - My Post | Brand Name",
"metaDescription": "Optimized description with keywords and call to action.",
"metaTitle": "SEO-Optimized Title"
}
}
Important: Changes are applied to review_draft only, not live content. Users review before publishing.
Using Agents
Manual Execution (Dropdown)
Open post editor
Find "Agents" dropdown in Actions panel
Select an agent
Click "Run Agent"
Agent suggestions appear in Review mode
Review and approve changes
Automatic Execution
Agents configured with event scopes run automatically:
scopes: [{ scope: 'post.publish', order: 10, enabled: true }]
When a post is published, this agent runs automatically.
Execution Order
Multiple agents on the same scope execute in order:
// Agent 1: order 10 (runs first)
// Agent 2: order 20 (runs second)
// Agent 3: order 30 (runs third)
Lower numbers run first.
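Equivalently, agents sharing a scope are sorted ascending by order before execution:
// Sketch: agents on the same scope run in ascending `order`
declare const scopedAgents: { id: string; order: number }[]
const ordered = [...scopedAgents].sort((a, b) => a.order - b.order)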
Security
RBAC Permissions
Agents require specific permissions based on their scope:
agents.global - Permission to use global-scoped agents (floating brain icon)
agents.dropdown - Permission to use dropdown-scoped agents (post editor)
agents.field - Permission to use field-scoped agents (per-field AI buttons)
The agents.edit permission is also checked:
// Admin role with all agent permissions
admin: {
permissions: ['agents.edit', 'agents.global', 'agents.dropdown', 'agents.field']
}
// Editor role with limited agent access
editor: {
permissions: ['agents.dropdown', 'agents.field']
}
Webhook Signatures
Outgoing webhooks include HMAC-SHA256 signatures:
X-Hub-Signature-256: sha256=<signature>
Verify in your webhook handler:
import crypto from 'node:crypto'

// Recompute the signature from the payload and the shared secret.
// If your framework re-parses JSON, verify against the raw request body instead.
const signature = request.headers['x-hub-signature-256']
const payload = JSON.stringify(request.body)
const expected =
  'sha256=' + crypto.createHmac('sha256', process.env.AGENT_SECRET).update(payload).digest('hex')
if (signature !== expected) {
  throw new Error('Invalid signature')
}
API Endpoints
List Available Agents
GET /api/agents
Returns agents with dropdown scope.
Run Agent
POST /api/posts/:id/agents/:agentId/run
Content-Type: application/json
{
"context": {
"note": "Custom context data"
}
}
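For illustration, a client call to this endpoint might look like the following (base URL, auth headers, and postId are placeholders):
// Hypothetical client invocation of the run endpoint; auth omitted
const postId = 'post-uuid' // placeholder
const response = await fetch(`/api/posts/${postId}/agents/seo-optimizer/run`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ context: { note: 'Custom context data' } }),
})
const result = await response.json()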
Run Bulk Agent
POST /api/posts/bulk-agents/:agentId/run
Content-Type: application/json
{
"ids": ["post-uuid-1", "post-uuid-2"],
"openEndedContext": "Optimize all selected posts for SEO"
}
Example: SEO Optimizer Agent
1. Create Agent Definition
import type { AgentDefinition } from '#types/agent_types'
const SeoAgent: AgentDefinition = {
id: 'seo-optimizer',
name: 'SEO Optimizer',
description: 'Optimizes SEO metadata using AI',
enabled: true,
llmConfig: {
provider: 'openai',
model: 'gpt-4',
systemPrompt: `You are an SEO expert. Analyze the post content and suggest optimized metaTitle and metaDescription that improve search rankings while accurately representing the content.`,
options: {
temperature: 0.7,
maxTokens: 500,
},
useMCP: true,
allowedMCPTools: ['get_post_context', 'save_post_ai_review'],
},
scopes: [{ scope: 'dropdown', order: 10, enabled: true }],
}
export default SeoAgent
2. Use in Editor
Open a blog post
Select "SEO Optimizer" from Agents dropdown
Click "Run Agent"
Review AI suggestions in Review mode
Approve or edit before publishing
Note: For n8n-based SEO optimization workflows, use the Workflows system instead.
Best Practices
General
Always use Review mode: Never modify live content directly
Add timeouts: Prevent hanging on slow webhooks (external) or long AI completions (internal)
Handle errors gracefully: Return helpful error messages
Log agent runs: Track successes and failures
Order execution: Use the order field for dependent agents
Scope appropriately: Don't auto-run destructive agents
Agents
Choose the right provider: OpenAI for general tasks, Anthropic for complex reasoning, Google for multimodal
Optimize prompts: Clear system prompts improve results
Set appropriate limits: Use maxTokens to control costs
Use MCP wisely: Enable MCP only when agents need CMS operations
Prose Module Convention: The system automatically detects modules with 'Prose' in their name (e.g., prose, prose-with-media). When these modules are present, the agent is explicitly instructed to provide substantial, high-quality copy (multiple paragraphs, headings, etc.) instead of brief summaries. Use this naming convention when building modules that require significant text content.
Intelligent Layout Planning: When creating or modifying pages from a brief, agents should use the suggest_modules_for_layout tool to identify the most appropriate modules (e.g., features-list, faq, hero) instead of defaulting to a single prose module. Splitting content into logical modules provides a much better user experience and better leverages the CMS's modular design.
Monitor usage: Track API costs and usage
Test reactions: Ensure webhooks/Slack notifications work correctly
Global AI Settings
You can configure global fallback providers and models in the Admin UI:
Go to Settings > AI Settings.
Select the Default Reasoning (Text) Provider and Default Model.
Select the Default Media (Generation) Provider and Default Model.
These defaults will be used for any agent that doesn't explicitly define providerText/modelText or providerMedia/modelMedia in its config file.
The model dropdowns in the settings page are populated dynamically by querying the respective AI provider APIs (where supported), ensuring you always have access to the latest models.
Troubleshooting
Agent not appearing in dropdown?
Check enabled: true and scopes includes dropdown
Verify user has agents.edit permission
Agent execution failing?
Check API keys are set correctly in .env
Verify the AI provider is accessible
Check model name is correct for the provider
Google Gemini & Imagen Billing Setup
To use Google's advanced models (like gemini-2.5-pro or imagen-4.0) via the Nano Banana/Google provider, you must enable billing on your Google Cloud project. Even if you stay within free tier limits for text, image generation (Imagen) usually requires a billed account.
1. Enable Billing in Google Cloud
Go to the Google Cloud Console Billing page.
Ensure you have a valid Billing Account linked to the project you are using for AI Studio.
If you don't have a project, create one at console.cloud.google.com.
2. Enable the Gemini API (Generative Language API)
In the Google Cloud Console, go to APIs & Services > Library.
Search for "Gemini API" or "Generative Language API".
Ensure the API is Enabled for your billed project. (The technical name in Google's backend is often Generative Language API, but it is frequently listed as Gemini API in the marketplace.)
3. Link Google AI Studio to your Billed Project
Go to Google AI Studio.
Click on the Settings (gear icon) or check your API Keys.
When creating or viewing an API key, ensure it is associated with the Google Cloud project where you just enabled billing.
If you see "Free of charge" in AI Studio, you might still need to click "Set up billing" in the AI Studio sidebar to transition to the "Pay-as-you-go" tier, which unblocks Imagen.
4. Verify Rate Limits
Once billing is enabled, your quotas will increase. You can monitor your usage and limits at the Google AI Studio Usage page.
Note: Google uses the standard AI_PROVIDER_GOOGLE_API_KEY environment variable. Once billing is active, the "Imagen API is only accessible to billed users" error will disappear.
Changes not applying?
Agents update review_draft, not live content
Switch to the Review tab to see changes
Check agent response format matches expected schema
Related: Workflows | MCP (Model Context Protocol) | API Reference