AI Builder

The AI Builder is a shared module that provides AI-powered generation and refinement of instructions. It supports multiple entity types (voice agents, text agents, plan templates, analyzers) through a unified API, component system, and prompt template structure.

Writing effective agent instructions is difficult:

  • Users struggle to articulate complex behavioral rules in natural language
  • Instructions need to be comprehensive, covering multiple scenarios and edge cases
  • Different agent types require different instruction formats (plain text vs. structured fields)
  • Iterative refinement requires domain expertise

The AI Builder solves these problems by:

  1. Generating instructions from natural language — users describe what they want in plain English, and the AI produces structured, production-ready instructions
  2. Providing targeted editing actions — predefined transformations (improve tone, add examples, strengthen safety) applied via AI
  3. Supporting extensibility — new entity types can be added by following a clear pattern

The AI Builder uses OpenRouter models to transform user input into agent instructions. It supports two primary flows for each entity type: generation from a natural language description, and iterative editing of existing instructions.

Voice agents use a single instructions field with markdown formatting.

Generation flow:

  1. User provides a natural language description (“A friendly receptionist that schedules appointments…”)
  2. ai_builder_generate_api sends the description + area (VoiceAgent) to the AI
  3. AI uses the voice_agent_generate.md prompt template to produce structured markdown
  4. Instructions are returned as plain text

Editing flow:

  1. User selects an action (e.g., “Improve tone”) or provides custom edit instruction
  2. ai_builder_edit_api or ai_builder_custom_edit_api sends current instructions + action/custom instruction
  3. AI uses the corresponding prompt template (e.g., voice_agent_improve_tone.md)
  4. Refined instructions are returned
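
The two flows above boil down to two request shapes. A minimal, self-contained sketch — the types mirror AiBuilderGenerateRequest and AiBuilderEditRequest from the type reference later in this page, redefined locally for illustration:

```rust
// Local mirrors of the documented request types (illustrative only).
#[derive(Debug, Clone, Copy, PartialEq)]
enum AiBuilderArea { VoiceAgent }

#[derive(Debug, Clone, Copy, PartialEq)]
enum AiBuilderAction { ImproveTone }

struct AiBuilderGenerateRequest {
    area: AiBuilderArea,
    user_description: String,
}

struct AiBuilderEditRequest {
    area: AiBuilderArea,
    action: AiBuilderAction,
    current_instructions: String,
}

fn main() {
    // Generation: a plain-English description in, markdown instructions out.
    let generate = AiBuilderGenerateRequest {
        area: AiBuilderArea::VoiceAgent,
        user_description: "A friendly receptionist that schedules appointments".into(),
    };
    // Editing: the current instructions plus a predefined action.
    let edit = AiBuilderEditRequest {
        area: AiBuilderArea::VoiceAgent,
        action: AiBuilderAction::ImproveTone,
        current_instructions: "## Role\nYou schedule appointments.".into(),
    };
    assert_eq!(generate.area, edit.area);
}
```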

Analyzers use a single prompt field that defines how an LLM should analyze call transcriptions. The AI Builder generates structured analysis prompts with categories, output formats, and edge case handling.

Generation flow:

  1. User describes the analysis they need (“Analyze calls for sentiment, key issues, and resolution status”)
  2. ai_builder_generate_api sends the description + area (Analyzer) to the AI
  3. AI uses the analyzer/analyzer_generate.md prompt template to produce a structured analysis prompt
  4. The prompt includes: Role & Objective, Analysis Categories (with extraction targets, formats, fallbacks), Output Format, Instructions & Rules, and Edge Cases

Key differences from agent generation:

  • No personality or tone section — analyzers are purely objective
  • No conversation flow — analyzers process completed transcriptions
  • Focus on structured data extraction with explicit fallback behaviors
  • Edge cases cover transcription quality issues (short calls, poor audio, multiple speakers)

Editing flow: Same as voice agents — uses the shared toolbar with all 8 predefined actions plus custom edits, each backed by analyzer-specific prompt templates in src/shared/ai_builder/prompts/analyzer/.

Text agents use four distinct fields: purpose, custom_instructions, escalation_instructions, restricted_topics.

Generation flow:

  1. User provides a natural language description
  2. text_agent_ai_generate_api sends the description to the AI
  3. AI uses .schema::<TextAgentAiResponse>() to produce structured JSON matching the 4-field schema
  4. All four fields are populated atomically

Editing flow:

  1. User selects an action or provides custom instruction
  2. text_agent_ai_edit_api or text_agent_ai_custom_edit_api sends all 4 current fields + action
  3. AI refines all fields together, maintaining consistency
  4. Updated 4-field object is returned
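
Because all four fields travel together, the AI can keep them mutually consistent. A dependency-free sketch of the shape — this mirrors TextAgentAiResponse from the type reference below, with the serde/schemars derives omitted:

```rust
// Local mirror of the documented TextAgentAiResponse (illustrative only).
#[derive(Debug, Clone, PartialEq)]
struct TextAgentAiResponse {
    purpose: String,
    custom_instructions: String,
    escalation_instructions: String,
    restricted_topics: String,
}

fn main() {
    // Generation populates all four fields atomically.
    let generated = TextAgentAiResponse {
        purpose: "Answer billing questions".into(),
        custom_instructions: "Be concise and reference invoice numbers.".into(),
        escalation_instructions: "Escalate refund disputes to a human agent.".into(),
        restricted_topics: "Legal advice; medical advice".into(),
    };
    // An edit sends this whole object back, so no field drifts out of sync.
    assert!(!generated.purpose.is_empty());
    assert!(!generated.restricted_topics.is_empty());
}
```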

Adding a new entity type to the AI Builder follows this step-by-step process:

1. Add the Area Variant

Edit src/shared/ai_builder/types/ai_builder_area_type.rs:

#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq)]
pub enum AiBuilderArea {
    VoiceAgent,
    TextAgent,
    PlanTemplate,
    Analyzer,
    NewEntityType, // ← Add your variant
}

2. Define a Structured Response Type (if needed)

If your entity uses multiple fields (like text agents), create a response type with schemars::JsonSchema:

src/shared/ai_builder/types/new_entity_ai_response_type.rs
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, schemars::JsonSchema)]
#[schemars(deny_unknown_fields)]
pub struct NewEntityAiResponse {
    pub field_one: String,
    pub field_two: String,
}

Register it in src/shared/ai_builder/types/mod.rs:

mod new_entity_ai_response_type;
pub use new_entity_ai_response_type::*;

When to use structured output:

  • Entity has multiple distinct fields (like text agents: purpose, custom_instructions, etc.)
  • Fields need to be populated atomically
  • You want type-safe, validated output

When to use plain text:

  • Single instruction field (like voice agents)
  • Flexibility in formatting
  • Markdown or free-form content

3. Create Prompt Templates

Create markdown files in src/shared/ai_builder/prompts/ following the naming convention:

{area}_{action}.md

Required templates:

  • new_entity_type_generate.md — generation from scratch
  • new_entity_type_custom_edit.md — custom user instruction
  • One file per predefined action:
    • new_entity_type_improve.md
    • new_entity_type_make_concise.md
    • new_entity_type_improve_tone.md
    • new_entity_type_add_examples.md
    • new_entity_type_fix.md
    • new_entity_type_strengthen_safety.md
    • new_entity_type_add_personality.md
    • new_entity_type_enhance_with_org_data.md
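
The {area}_{action}.md convention is simple string composition. A small sketch — template_filename and the suffix strings are illustrative; in the real code the suffixes come from the area and AiBuilderAction::template_suffix():

```rust
// Compose a prompt-template filename per the {area}_{action}.md convention.
// (Illustrative helper; suffix values are assumptions, not the real API.)
fn template_filename(area_suffix: &str, action_suffix: &str) -> String {
    format!("{}_{}.md", area_suffix, action_suffix)
}

fn main() {
    assert_eq!(
        template_filename("voice_agent", "improve_tone"),
        "voice_agent_improve_tone.md"
    );
    assert_eq!(
        template_filename("new_entity_type", "strengthen_safety"),
        "new_entity_type_strengthen_safety.md"
    );
}
```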

Prompt template conventions:

  1. Start with role definition — “You are an expert prompt engineer specializing in…”
  2. Define output format — “Output ONLY the raw instructions — no explanations, no meta-commentary”
  3. Include structural requirements — headings, bullet points, formatting rules
  4. Provide concrete examples — show the expected structure
  5. Specify domain-specific constraints — safety rules, turn-taking, escalation thresholds

For structured output (like text agents), include JSON schema field definitions and constraints.

4. Register Prompts

Update src/shared/ai_builder/prompts/mod.rs:

pub fn generation_prompt(area: &AiBuilderArea) -> &'static str {
    match area {
        AiBuilderArea::VoiceAgent => include_str!("voice_agent_generate.md"),
        AiBuilderArea::TextAgent => include_str!("text_agent_generate.md"),
        // ... other existing areas
        AiBuilderArea::NewEntityType => include_str!("new_entity_type_generate.md"),
    }
}

pub fn editing_prompt(area: &AiBuilderArea, action: &AiBuilderAction) -> &'static str {
    match (area, action) {
        // ... existing matches
        (AiBuilderArea::NewEntityType, AiBuilderAction::Improve) => {
            include_str!("new_entity_type_improve.md")
        }
        // ... add all actions
    }
}

5. Create or Reuse API Endpoints

For plain text output, the existing ai_builder_generate_api, ai_builder_edit_api, and ai_builder_custom_edit_api handle the new area automatically once prompts are registered.

For structured output, create dedicated endpoints:

src/shared/ai_builder/api/new_entity_generate_api.rs
use dioxus::prelude::*;

use crate::shared::ai_builder::{NewEntityAiResponse, NewEntityGenerateRequest};

#[post("/api/ai-builder/new-entity/generate", session: crate::auth::Session)]
pub async fn new_entity_ai_generate_api(
    request: NewEntityGenerateRequest,
) -> Result<NewEntityAiResponse, HttpError> {
    _handler(session, request).await
}

#[cfg(not(target_family = "wasm"))]
pub(crate) async fn _handler(
    session: crate::auth::Session,
    request: NewEntityGenerateRequest,
) -> Result<NewEntityAiResponse, HttpError> {
    use aisdk::{
        core::{DynamicModel, LanguageModelRequest},
        providers::openrouter::Openrouter,
    };

    use crate::shared::ai_builder::prompts;

    super::check_area_permission(&session, &request.area)
        .or_forbidden("Not allowed")?;

    let openrouter_conf: crate::mods::ai::OpenRouterCoreConf =
        crate::bases::conf::get_core_conf()
            .await
            .or_internal_server_error("Failed to load configuration")?;

    let model = Openrouter::<DynamicModel>::builder()
        .api_key(openrouter_conf.openrouter_api_key)
        .model_name(crate::mods::ai::AiModels::GENERATE_INSTRUCTIONS)
        .build()
        .or_internal_server_error("Failed to initialize AI model")?;

    let system_prompt = prompts::new_entity_generation_prompt();

    let org_context = crate::mods::agent::build_organization_context(session.organization.id)
        .await
        .or_internal_server_error("Failed to load organization context")?;

    let user_message = if org_context.is_empty() {
        request.user_description.clone()
    } else {
        format!(
            "{}\n\n## Organization Data\n\n{}",
            request.user_description, org_context
        )
    };

    let response = LanguageModelRequest::builder()
        .model(model)
        .system(system_prompt)
        .prompt(&user_message)
        .schema::<NewEntityAiResponse>() // ← Structured output
        .build()
        .generate_text()
        .await
        .or_internal_server_error("AI generation failed")?;

    let result: NewEntityAiResponse = response
        .into_schema()
        .or_internal_server_error("Failed to parse structured output")?;

    Ok(result)
}

Add edit and custom edit endpoints following the same pattern (see text_agent_edit_api.rs and text_agent_custom_edit_api.rs for examples).

Register in src/shared/ai_builder/api/mod.rs.

6. Update Permission Checks

Update check_area_permission to handle the new area:

pub(crate) fn check_area_permission(
    session: &crate::auth::Session,
    area: &super::AiBuilderArea,
) -> bool {
    use crate::bases::auth::Resource;

    match area {
        super::AiBuilderArea::VoiceAgent => { /* ... */ }
        super::AiBuilderArea::TextAgent => { /* ... */ }
        super::AiBuilderArea::NewEntityType => {
            crate::bases::auth::NewEntityResource::has_collection_permission(
                session,
                crate::bases::auth::NewEntityCollectionPermission::Create,
            )
        }
    }
}

7. Build the Form Component

Create a component that wires the AI Builder into your entity’s form.

For plain text (voice agent pattern):

use crate::shared::ai_builder::{
    AiBuilderArea, AiGenerateInstructions, AiInstructionToolbar,
    InstructionPreview, InstructionsSkeleton,
};

#[component]
pub fn NewEntityForm(
    instructions: Signal<String>,
) -> Element {
    let ai_loading = use_signal(|| false);

    rsx! {
        div { class: "space-y-4",
            // AI generation section
            AiGenerateInstructions {
                area: AiBuilderArea::NewEntityType,
                on_generated: move |generated: String| {
                    instructions.set(generated);
                },
                is_loading: ai_loading,
            }

            // Preview + editing toolbar
            if !instructions().is_empty() {
                if ai_loading() {
                    InstructionsSkeleton {}
                } else {
                    InstructionPreview {
                        instructions: instructions(),
                    }
                    AiInstructionToolbar {
                        area: AiBuilderArea::NewEntityType,
                        current_instructions: instructions(),
                        on_edited: move |edited: String| {
                            instructions.set(edited);
                        },
                    }
                }
            }
        }
    }
}

For structured output (text agent pattern):

Build a custom component that handles multiple fields. See TextAgentInstructionBuilder in src/mods/text_agent/components/text_agent_instruction_builder_component.rs for a complete example.

Key elements:

  • Separate signals for each field
  • Generate button that calls new_entity_ai_generate_api
  • Action toolbar that calls new_entity_ai_edit_api for each action
  • Custom edit input + button
  • Undo stack for reverting changes
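
The undo stack from the list above can be sketched in a few lines. This is a dependency-free illustration — the real component would store the full 4-field response in a Dioxus signal; here a String stands in for the state:

```rust
// Minimal undo stack: push the current state before each AI edit,
// pop to revert. (Illustrative; not the project's actual implementation.)
struct UndoStack {
    states: Vec<String>,
}

impl UndoStack {
    fn new() -> Self {
        Self { states: Vec::new() }
    }
    // Call before applying an AI edit.
    fn push(&mut self, current: String) {
        self.states.push(current);
    }
    // Revert to the previous state, if any.
    fn undo(&mut self) -> Option<String> {
        self.states.pop()
    }
}

fn main() {
    let mut undo = UndoStack::new();
    undo.push("original instructions".into());
    // ...an AI edit replaces the field here...
    assert_eq!(undo.undo(), Some("original instructions".to_string()));
    assert_eq!(undo.undo(), None);
}
```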

Enum identifying which entity type is being built.

pub enum AiBuilderArea {
    VoiceAgent,
    TextAgent,
    PlanTemplate,
    Analyzer,
}

Location: src/shared/ai_builder/types/ai_builder_area_type.rs

Predefined editing transformations available for all areas.

pub enum AiBuilderAction {
    Improve,
    MakeConcise,
    ImproveTone,
    AddExamples,
    Fix,
    StrengthenSafety,
    AddPersonality,
    EnhanceWithOrgData,
}

Location: src/shared/ai_builder/types/ai_builder_action_type.rs

Methods:

  • .label() — UI button text
  • .template_suffix() — filename suffix for prompt template lookup

Constant: ALL_AI_BUILDER_ACTIONS — array of all actions for toolbar iteration
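
The toolbar renders one button per entry in ALL_AI_BUILDER_ACTIONS. A self-contained sketch of that pattern — the enum, constant, and label() mirror the documented API, but only two variants are shown and the label strings are assumptions:

```rust
// Illustrative mirror of AiBuilderAction with a two-variant subset.
#[derive(Debug, Clone, Copy, PartialEq)]
enum AiBuilderAction { Improve, MakeConcise }

const ALL_AI_BUILDER_ACTIONS: [AiBuilderAction; 2] =
    [AiBuilderAction::Improve, AiBuilderAction::MakeConcise];

impl AiBuilderAction {
    // label() supplies the UI button text (strings assumed here).
    fn label(&self) -> &'static str {
        match self {
            AiBuilderAction::Improve => "Improve",
            AiBuilderAction::MakeConcise => "Make concise",
        }
    }
}

fn main() {
    // A toolbar iterates the constant to render one button per action.
    for action in ALL_AI_BUILDER_ACTIONS {
        println!("{}", action.label());
    }
}
```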

Plain text generation from natural language description.

pub struct AiBuilderGenerateRequest {
    pub area: AiBuilderArea,
    pub user_description: String,
}

Location: src/shared/ai_builder/types/generate_request_type.rs

Plain text editing with predefined action.

pub struct AiBuilderEditRequest {
    pub area: AiBuilderArea,
    pub action: AiBuilderAction,
    pub current_instructions: String,
}

Location: src/shared/ai_builder/types/edit_request_type.rs

Plain text editing with custom user instruction.

pub struct AiBuilderCustomEditRequest {
    pub area: AiBuilderArea,
    pub edit_instruction: String,
    pub current_instructions: String,
}

Location: src/shared/ai_builder/types/custom_edit_request_type.rs

Structured generation for text agents.

pub struct TextAgentGenerateRequest {
    pub user_description: String,
}

Location: src/shared/ai_builder/types/text_agent_generate_request_type.rs

Structured editing with predefined action.

pub struct TextAgentEditRequest {
    pub action: AiBuilderAction,
    pub current: TextAgentAiResponse,
}

Location: src/shared/ai_builder/types/text_agent_edit_request_type.rs

Structured editing with custom instruction.

pub struct TextAgentCustomEditRequest {
    pub edit_instruction: String,
    pub current: TextAgentAiResponse,
}

Location: src/shared/ai_builder/types/text_agent_custom_edit_request_type.rs

Plain text response.

pub struct AiBuilderResponse {
    pub instructions: String,
}

Location: src/shared/ai_builder/types/ai_response_type.rs

Structured 4-field response for text agents.

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, schemars::JsonSchema)]
#[schemars(deny_unknown_fields)]
pub struct TextAgentAiResponse {
    pub purpose: String,
    pub custom_instructions: String,
    pub escalation_instructions: String,
    pub restricted_topics: String,
}

Location: src/shared/ai_builder/types/text_agent_ai_response_type.rs

Note: schemars::JsonSchema derive is required for .schema::<T>() in aisdk.

NLP input section for generating instructions from natural language.

Props:

  • area: AiBuilderArea — which entity type
  • on_generated: EventHandler<String> — callback when AI completes
  • is_loading: Signal<bool> — loading state signal

Location: src/shared/ai_builder/components/ai_generate_instructions_component.rs

Usage:

AiGenerateInstructions {
    area: AiBuilderArea::VoiceAgent,
    on_generated: move |generated: String| {
        instructions.set(generated);
    },
    is_loading: ai_loading,
}

Action button toolbar for editing existing instructions.

Props:

  • area: AiBuilderArea
  • current_instructions: String
  • on_edited: EventHandler<String>

Location: src/shared/ai_builder/components/ai_instruction_toolbar_component.rs

Usage:

AiInstructionToolbar {
    area: AiBuilderArea::VoiceAgent,
    current_instructions: instructions(),
    on_edited: move |edited: String| {
        instructions.set(edited);
    },
}

Markdown preview panel for instructions.

Props:

  • instructions: String

Location: src/shared/ai_builder/components/instruction_preview_component.rs

Shimmer loading skeleton shown during AI generation.

Location: src/shared/ai_builder/components/ai_generate_instructions_component.rs

All prompt templates follow these conventions:

Start with the AI’s specialized role:

You are an expert prompt engineer specializing in production-ready voice AI agents...

Explicitly state what the AI should return:

IMPORTANT: Output ONLY the raw system prompt — no explanations, no meta-commentary, no wrapping.

For structured output:

IMPORTANT: You MUST respond with a valid JSON object matching the schema provided.

Define headings, sections, and formatting:

## Required output structure
Produce the system prompt with ALL of the following labeled sections:
## 1. Role & Objective
## 2. Personality & Tone
## 3. Conversation Flow
...

Include behavioral rules tailored to the entity type:

- ASK AT MOST ONE QUESTION PER TURN.
- DO NOT ASK MULTI-PART OR COMPOUND QUESTIONS.
- Keep each response to 2-3 sentences.

If the entity uses tools, include usage rules:

## 5. Tools
- **query_knowledge** — when to use, preamble, failure behavior
- **transfer_call** — when to use, preamble

Define escalation thresholds and sensitive data rules:

- 2 failed tool attempts → escalate
- Caller requests human → IMMEDIATELY transfer
- NEVER solicit passwords or SSNs

Provide concrete examples of expected output:

Sample phrases:
- "Let me look that up for you."
- "One moment while I pull up your information."

The AI Builder automatically enriches prompts with organization-specific context when available:

  • Voice agents: Appended to user description before generation
  • Text agents: Appended to user description in both generation and EnhanceWithOrgData editing
  • Analyzers: Appended to user description, enabling domain-specific analysis categories
  • Source: crate::mods::agent::build_organization_context(org_id)

Organization context may include:

  • Business name, address, hours
  • Services offered
  • FAQs and knowledge base snippets

This allows the AI to generate domain-specific, contextually relevant instructions without requiring users to manually include this information.
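
The enrichment step itself is a simple conditional append, as seen in the generation handler earlier. A dependency-free sketch — enrich_with_org_context is an illustrative name; in the real code the context string comes from build_organization_context:

```rust
// Append organization context to the user description when available;
// pass the description through unchanged when there is no context.
fn enrich_with_org_context(user_description: &str, org_context: &str) -> String {
    if org_context.is_empty() {
        user_description.to_string()
    } else {
        format!("{}\n\n## Organization Data\n\n{}", user_description, org_context)
    }
}

fn main() {
    let enriched = enrich_with_org_context(
        "A receptionist for a dental clinic",
        "Business: Smile Dental\nHours: 9am-5pm",
    );
    assert!(enriched.contains("## Organization Data"));
    // With no context, the description is unchanged.
    assert_eq!(enrich_with_org_context("desc", ""), "desc");
}
```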

Endpoint                       Method   Purpose
/api/ai-builder/generate       POST     Generate instructions from description
/api/ai-builder/edit           POST     Edit instructions with predefined action
/api/ai-builder/custom-edit    POST     Edit instructions with custom instruction

The area field in the request body (VoiceAgent or Analyzer) determines which prompt templates are used.

Endpoint                                  Method   Purpose
/api/ai-builder/text-agent/generate       POST     Generate all 4 fields from description
/api/ai-builder/text-agent/edit           POST     Edit all 4 fields with predefined action
/api/ai-builder/text-agent/custom-edit    POST     Edit all 4 fields with custom instruction

All endpoints require authentication and check area-specific permissions via check_area_permission.