
In Deploy.AI, each agent can use both system and user prompts.

Use system prompts to enforce logic, role, tone, and safety. User prompts should focus on the task at hand or inject user-submitted values.

Prompt Syntax Guide

📌 Prompt Variables

Variables allow you to insert dynamic user input into prompts.
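The exact variable syntax depends on your configuration; assuming the double-brace style used elsewhere in this guide (for example, `{{module.additional_instructions}}`), a variable reference might look like this, where CustomerName and TicketBody are hypothetical variable names:

```
Summarize the following support ticket for {{CustomerName}}:

{{TicketBody}}
```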

📌 Prompt Differentiators

You can help the model interpret your prompts more reliably by wrapping sections with semantic tags. This improves structure and reduces ambiguity.

Example:

<EXECUTION_FLOW>
Step 1: Gather user data
Step 2: Search knowledge base
Step 3: Generate summary
</EXECUTION_FLOW>

Use tags like <INSTRUCTIONS>, <DATA>, and <CONTEXT>, or custom tags relevant to your workflow.

📌 Connect Code Canvas

Code Canvas allows the agent to output structured code files to the UI. It supports multiple editable files that users can inspect, modify, or upload to platforms like GitHub.

To enable Code Canvas, add this to the agent config:

"canvas": true

Then, in the system prompt, instruct the model to wrap its output in canvas tags:

The provided code should be enclosed within the <canvasData></canvasData> tags.
Represent files like so:

<canvasData>
[{
      "type": "file",
      "mime": Mime type of first_filename.extension,
      "fileName": first_filename.extension,
      "data": the content of first_filename.extension
},
{
      "type": "file",
      "mime": Mime type of second_filename.extension,
      "fileName": second_filename.extension,
      "data": the content of second_filename.extension
}]
</canvasData>

Example representation of a file:

{
      "type": "file",
      "mime": "text/x-python",
      "fileName": "hello_world.py",
      "data": "print(\"Hello World\")"
}

Use it to generate backend scripts, configuration files, frontend components, or markdown docs.
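Deploy.AI renders the canvas in the UI, but downstream tooling may also want to read these files programmatically. As a minimal sketch (the `parse_canvas` helper is illustrative, not part of the platform), the block can be extracted with a regular expression and parsed as JSON:

```python
import json
import re

def parse_canvas(output: str) -> list[dict]:
    """Extract the list of file objects from a <canvasData> block."""
    match = re.search(r"<canvasData>(.*?)</canvasData>", output, re.DOTALL)
    if not match:
        return []
    return json.loads(match.group(1))

# A well-formed model response containing one canvas file.
sample = """Here is your script.
<canvasData>
[{"type": "file",
  "mime": "text/x-python",
  "fileName": "hello_world.py",
  "data": "print(\\"Hello World\\")"}]
</canvasData>"""

for f in parse_canvas(sample):
    print(f["fileName"])  # prints each extracted file name
```

Each parsed entry can then be written to disk or pushed to a platform like GitHub.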

📌 Conditional Statements

Use conditionals to control output based on input variables, ideal for policy rules or complex flows.

Syntax Example

{% if PremiumFixtures == true %}
  This will result in higher renovation cost. Recommend full accidental damage coverage.
{% endif %}

{% if HighValueItems == true %}
  Include a note on higher single-item limits.
{% endif %}

Best for: policy rules, tiered coverage recommendations, and other branching flows.
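These template conditionals behave like ordinary boolean branches. For intuition, the example above is equivalent to this plain Python logic (`render_notes` is an illustrative helper, not a platform API):

```python
def render_notes(premium_fixtures: bool, high_value_items: bool) -> list[str]:
    """Mirror the {% if %} blocks above as ordinary branches."""
    notes = []
    if premium_fixtures:
        notes.append(
            "This will result in higher renovation cost. "
            "Recommend full accidental damage coverage."
        )
    if high_value_items:
        notes.append("Include a note on higher single-item limits.")
    return notes

print(render_notes(premium_fixtures=True, high_value_items=False))
```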

📌 Predefined output options

This feature suggests multiple response formats the user can choose from. It's helpful when transforming an AI response into a structured format.

Add to the system prompt:

When outputting the response to the user, also output likely options in this format:
<responseOption>Output as JSON</responseOption>
<responseOption>Output as XML</responseOption>
<responseOption>Output as a table</responseOption>
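Each <responseOption> arrives inline in the model output, so a client that wants to render the options as buttons needs to pull them out. A minimal sketch (the `extract_options` helper is hypothetical, not a platform function):

```python
import re

def extract_options(output: str) -> list[str]:
    """Collect the text of every <responseOption> tag in a response."""
    return re.findall(r"<responseOption>(.*?)</responseOption>", output)

reply = (
    "Here is your data.\n"
    "<responseOption>Output as JSON</responseOption>\n"
    "<responseOption>Output as XML</responseOption>"
)
print(extract_options(reply))
```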

📌 Output Formatting

If your use case requires data to be returned in a specific format (e.g., JSON or XML), you can define a schema for the AI to follow.

System prompt example:

<OUTPUT_OPTIONS>
<responseOption>Output as JSON</responseOption>
<responseOption>Output as XML</responseOption>

<JSON_schema>
{
  "form_id": "form_123456",
  "fields": [
    {
      "section": "Policy",
      "question": "Policy Number",
      "response": "POL123456789"
    }
  ]
}
</JSON_schema>
</OUTPUT_OPTIONS>
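Pinning the agent to a fixed JSON shape makes responses machine-checkable. As a sketch, a consumer could verify the shape defined above before processing (the `is_valid_form` helper is hypothetical):

```python
import json

def is_valid_form(raw: str) -> bool:
    """Check that a response matches the form shape defined in the prompt."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(doc, dict) or not isinstance(doc.get("form_id"), str):
        return False
    fields = doc.get("fields")
    if not isinstance(fields, list):
        return False
    # Every field entry must carry the three expected keys.
    return all(
        isinstance(f, dict) and {"section", "question", "response"} <= f.keys()
        for f in fields
    )
```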

Prompt Modules

Prompt modules help you reuse blocks of instructions across multiple agents.

In agent config:

"templates": {
  "init": {
    "user": "Hello",
    "system": ""
  },
  "modules": {
    "additional_instructions": "Here is the alternative prompt"
  }
}

In the main prompt:

{{module.additional_instructions}}

Use modules to keep prompts clean, especially when managing multiple roles, functions, or fallback behaviors.
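For intuition, a {{module.<name>}} reference expands by simple text substitution. A hypothetical sketch of that behavior (`expand_modules` is not a platform function):

```python
import re

def expand_modules(prompt: str, modules: dict) -> str:
    """Replace each {{module.<name>}} with its configured text.

    Unknown module names are left untouched.
    """
    return re.sub(
        r"\{\{module\.(\w+)\}\}",
        lambda m: modules.get(m.group(1), m.group(0)),
        prompt,
    )

modules = {"additional_instructions": "Here is the alternative prompt"}
prompt = "Follow the base rules.\n{{module.additional_instructions}}"
print(expand_modules(prompt, modules))
```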

RAG Feature

RAG (retrieval-augmented generation) connects your agent to external data sources such as uploaded files or knowledge bases.

Agent config snippet:

"models": ["GPT_4O"],
"orgFiles": {
  "file": "KD_American_Airlines Help_Center.docx",
  "path": "KB_American_Airlines"
}

System prompt snippet:

{% tool { "name": "documents.search", "args": { "input": question } } %}

Guardrails Configuration

Add built-in safety mechanisms to filter or restrict model behavior.

Agent config snippet:

"guardrails": {
  "preset": "allow_and_deny"
}

This enforces moderation and ensures sensitive or harmful outputs are blocked based on predefined logic.