In Deploy.AI, each agent can use both system and user prompts.
User Prompt: The user-facing instruction — what task the model should complete, often using dynamic variables (e.g., {{question}}).
Example:
“Answer the following: {{question}}”
System Prompt: The internal instruction that defines how the model should behave, structure its response, and interact with tools or data.
Example:
“You are an assistant trained to summarize internal documents clearly. Use the provided content to answer concisely.”
Use system prompts to enforce logic, role, tone, and safety. User prompts should focus on the task at hand or inject user-submitted values.
Variables allow you to insert dynamic user input into prompts.
To reference a form field, use double curly braces:
Answer the following: {{question}}
Variables match the name field from your agent config and can be used in both user and system prompts.
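As a sketch (assuming a form field named question is defined in your agent config), the init prompts could reference it like this:
"templates": {
  "init": {
    "user": "Answer the following: {{question}}",
    "system": "You are an assistant trained to summarize internal documents clearly. Use the provided content to answer concisely."
  }
}
At runtime, the value the user submits for question replaces the placeholder before the prompt is sent to the model.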
You can help the model interpret your prompts more reliably by wrapping sections with semantic tags. This improves structure and reduces ambiguity.
Example:
<EXECUTION_FLOW>
Step 1: Gather user data
Step 2: Search knowledge base
Step 3: Generate summary
</EXECUTION_FLOW>
Use tags like <INSTRUCTIONS>, <DATA>, <CONTEXT>, or custom ones relevant to your workflow.
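For instance, a system prompt for a document assistant could combine several tags (the tag names and the document_text variable here are illustrative):
<INSTRUCTIONS>
You are an assistant trained to summarize internal documents clearly.
Answer only from the material inside the <DATA> tags.
</INSTRUCTIONS>
<DATA>
{{document_text}}
</DATA>
<EXECUTION_FLOW>
Step 1: Read the user question
Step 2: Search the provided data
Step 3: Generate a concise summary
</EXECUTION_FLOW>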
Code Canvas allows the agent to output structured code files to the UI. It supports multiple editable files that users can inspect, modify, or upload to platforms like GitHub.
To enable Code Canvas, add this to your agent config:
"canvas": true
Then, in the system prompt, instruct the model to wrap its output in canvas tags:
The provided code should be enclosed within the <canvasData></canvasData> tags.
Represent files like so:
<canvasData>
[{
  "type": "file",
  "mime": Mime type of first_filename.extension,
  "fileName": first_filename.extension,
  "data": the content of the first_filename.extension
},
{
  "type": "file",
  "mime": Mime type of second_filename.extension,
  "fileName": second_filename.extension,
  "data": the content of the second_filename.extension
}]
</canvasData>
Example representation of a file:
{
  "type": "file",
  "mime": "text/x-python",
  "fileName": "hello_world.py",
  "data": "print(\"Hello World\")"
}
Use it to generate backend scripts, configuration files, frontend components, or markdown docs.
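Putting it together, a model response that follows the instructions above might look like this (file names, MIME types, and contents are illustrative):
<canvasData>
[{
  "type": "file",
  "mime": "text/x-python",
  "fileName": "hello_world.py",
  "data": "print(\"Hello World\")"
},
{
  "type": "file",
  "mime": "text/markdown",
  "fileName": "README.md",
  "data": "# Hello World\nA minimal example script."
}]
</canvasData>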
Use conditionals to control output based on input variables, ideal for policy rules or complex flows.
Syntax example:
{% if PremiumFixtures == true %}
This will result in higher renovation cost. Recommend full accidental damage coverage.
{% endif %}
{% if HighValueItems == true %}
Include a note on higher single-item limits.
{% endif %}
Best for policy rules, eligibility logic, and other flows that branch on user-submitted values.
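These flags are ordinary prompt variables, so assuming PremiumFixtures and HighValueItems are boolean form fields in your agent config (names taken from the example above), a combined user prompt could read:
Assess the renovation request: {{question}}
{% if PremiumFixtures == true %}
This will result in higher renovation cost. Recommend full accidental damage coverage.
{% endif %}
{% if HighValueItems == true %}
Include a note on higher single-item limits.
{% endif %}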
Response options suggest multiple response formats the user can choose from. This is helpful when transforming an AI response into a structured format.
Add to the system prompt:
When outputting the response to the user, also output likely options in this format:
<responseOption>Output as JSON</responseOption>
<responseOption>Output as XML</responseOption>
<responseOption>Output as a table</responseOption>
If your use case requires data to be returned in a specific format (e.g., JSON or XML), you can define a schema for the AI to follow.
System prompt example:
<OUTPUT_OPTIONS>
<responseOption>Output as JSON</responseOption>
<responseOption>Output as XML</responseOption>
<JSON_schema>
{
"form_id": "form_123456",
"fields": [
{
"section": "Policy",
"question": "Policy Number",
"response": "POL123456789"
}
]
}
</JSON_schema>
</OUTPUT_OPTIONS>
Prompt Modules
Prompt modules help you reuse blocks of instructions across multiple agents.
In agent config:
"templates": {
"init": {
"user": "Hello",
"system": ""
},
"modules": {
"additional_instructions": "Here is the alternative prompt"
}
}
In the main prompt:
{{module.additional_instructions}}
Use modules to keep prompts clean, especially when managing multiple roles, functions, or fallback behaviors.
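For example, an agent's system prompt could mix its own instructions with the shared module (module name from the config above; the surrounding wording is illustrative):
You are a claims assistant. Answer using the provided policy documents.
{{module.additional_instructions}}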
RAG connects your agent to external data sources such as uploaded files or knowledge bases.
Agent config snippet:
"models": ["GPT_4O"],
"orgFiles": {
"file": "KD_American_Airlines Help_Center.docx",
"path": "KB_American_Airlines"
}
System prompt snippet:
{% tool { "name": "documents.search", "args": { "input": question } } %}
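A fuller system prompt might combine the tool call with guidance on how to use the results (the tool syntax is from the snippet above; the wording and tags are illustrative):
<INSTRUCTIONS>
Answer the user's question using only the content returned by the document search below.
If nothing relevant is found, say so instead of guessing.
</INSTRUCTIONS>
{% tool { "name": "documents.search", "args": { "input": question } } %}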
Add built-in safety mechanisms to filter or restrict model behavior.
Agent config snippet:
"guardrails": {
"preset": "allow_and_deny"
}
This enforces moderation and ensures sensitive or harmful outputs are blocked based on predefined logic.