Prompt Templates & Chains

WorldFlow AI provides a prompt management system with version control, variable extraction, folder organization, and direct execution against LLMs. Chains extend this by wiring multiple models into a sequential pipeline where each step's output feeds the next.

Prompt Templates

List Prompts

GET /api/v1/prompts

Query parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| status | string | Filter by DRAFT, PUBLISHED, or ARCHIVED |
| folderId | string (uuid) | Filter by folder |
| tag | string | Filter by tag |
| search | string | Free-text search in name and description |
| page | integer | Page number (default 1) |
| pageSize | integer | Items per page (default 20) |

curl -H "Authorization: Bearer $API_KEY" \
  "https://gateway.example.com/api/v1/prompts?status=PUBLISHED&tag=support&pageSize=10"

Response:

{
  "prompts": [
    {
      "id": "p-550e8400-...",
      "name": "Customer Support",
      "slug": "customer-support",
      "description": "Standard support reply template",
      "template": "You are a helpful {{role}} assistant. The customer asks about {{topic}}.",
      "variables": [
        { "name": "role", "varType": "string", "defaultValue": "support" },
        { "name": "topic", "varType": "string", "defaultValue": "" }
      ],
      "model": "gpt-4o",
      "temperature": 0.7,
      "maxTokens": 500,
      "tags": ["support", "customer"],
      "folderId": "f-123...",
      "status": "PUBLISHED",
      "currentVersion": 3,
      "createdBy": "user-001",
      "createdAt": "2025-02-01T12:00:00Z",
      "updatedAt": "2025-02-15T09:30:00Z"
    }
  ],
  "total": 1,
  "page": 1,
  "pageSize": 10,
  "totalPages": 1
}
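
List responses are paginated; a client can walk every page by incrementing page until it reaches totalPages. A minimal sketch, where fetch_page is a hypothetical stand-in for the HTTP call to GET /api/v1/prompts:

```python
def iter_prompts(fetch_page, page_size=20):
    """Yield every prompt across all pages of a paginated list response."""
    page = 1
    while True:
        body = fetch_page(page=page, page_size=page_size)
        yield from body["prompts"]
        if page >= body["totalPages"]:
            break  # last page reached
        page += 1
```

In practice fetch_page would issue the authenticated GET shown above and return the parsed JSON body.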

Get a Prompt

GET /api/v1/prompts/{id}

Returns the prompt with its current template, extracted variables, and metadata.

Create a Prompt

POST /api/v1/prompts

Request body

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | Yes | Display name |
| template | string | Yes | Template text with {{variable}} placeholders |
| description | string | No | Human-readable description |
| model | string | No | Default model for execution |
| temperature | float | No | Sampling temperature |
| maxTokens | integer | No | Maximum completion tokens |
| tags | string[] | No | Tags for filtering |
| folderId | string (uuid) | No | Parent folder |
| status | string | No | DRAFT, PUBLISHED, or ARCHIVED (default DRAFT) |

Variables are automatically extracted from {{placeholder}} syntax in the template.
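
That extraction can be approximated client-side with a regex scan; a sketch assuming placeholder names are word characters between double braces (the server's exact grammar may be broader):

```python
import re

def extract_variables(template: str) -> list:
    """Return placeholder names in order of first appearance, deduplicated."""
    seen = []
    # Assumed grammar: \w+ between double braces, e.g. {{topic}}
    for name in re.findall(r"\{\{(\w+)\}\}", template):
        if name not in seen:
            seen.append(name)
    return seen
```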

curl -X POST -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  "https://gateway.example.com/api/v1/prompts" \
  -d '{
    "name": "Customer Support",
    "template": "You are a helpful {{role}} assistant.\n\nThe customer asks: {{question}}\n\nRespond professionally about {{topic}}.",
    "model": "gpt-4o",
    "temperature": 0.7,
    "maxTokens": 500,
    "tags": ["support"],
    "status": "DRAFT"
  }'

Update a Prompt

PUT /api/v1/prompts/{id}

Partial updates are supported. When the template field changes, a new version is created automatically. Include a message field to annotate the new version.

curl -X PUT -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  "https://gateway.example.com/api/v1/prompts/p-550e8400-..." \
  -d '{
    "template": "You are a {{role}} assistant specializing in {{department}}.\n\nQuestion: {{question}}",
    "message": "Added department variable for team routing"
  }'

Delete a Prompt

DELETE /api/v1/prompts/{id}

Duplicate a Prompt

POST /api/v1/prompts/{id}/duplicate

Creates a copy of an existing prompt with an optional new name and folder.

curl -X POST -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  "https://gateway.example.com/api/v1/prompts/p-550e8400-.../duplicate" \
  -d '{
    "name": "Customer Support (Copy)",
    "folderId": "f-789..."
  }'

Version Control

Every template change creates a new version. You can inspect the history, compare versions, and roll back.

List Versions

GET /api/v1/prompts/{id}/versions

| Parameter | Type | Description |
|-----------|------|-------------|
| page | integer | Page number (default 1) |
| pageSize | integer | Items per page (default 20) |

curl -H "Authorization: Bearer $API_KEY" \
  "https://gateway.example.com/api/v1/prompts/p-550e8400-.../versions"

Response:

{
  "versions": [
    {
      "id": "v-aaa...",
      "promptId": "p-550e8400-...",
      "version": 3,
      "template": "You are a {{role}} assistant specializing in {{department}}...",
      "variables": [
        { "name": "role" },
        { "name": "department" },
        { "name": "question" }
      ],
      "model": "gpt-4o",
      "temperature": 0.7,
      "message": "Added department variable for team routing",
      "createdBy": "user-001",
      "createdAt": "2025-02-15T09:30:00Z"
    }
  ],
  "total": 3,
  "page": 1,
  "pageSize": 20,
  "totalPages": 1
}

Get a Specific Version

GET /api/v1/prompts/{id}/versions/{version}

Create a Version Directly

POST /api/v1/prompts/{id}/versions

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| template | string | Yes | New template text |
| model | string | No | Model override |
| temperature | float | No | Temperature override |
| maxTokens | integer | No | Token limit override |
| message | string | No | Version commit message |

Compare Versions (Diff)

GET /api/v1/prompts/{id}/diff?fromVersion=1&toVersion=3

Returns a unified diff along with lists of added, removed, and changed variables.

{
  "fromVersion": 1,
  "toVersion": 3,
  "templateDiff": "--- v1\n+++ v3\n@@ -1 +1 @@\n-You are a helpful {{role}} assistant.\n+You are a {{role}} assistant specializing in {{department}}.",
  "variablesAdded": ["department"],
  "variablesRemoved": [],
  "variablesChanged": [],
  "stats": {
    "additions": 2,
    "deletions": 1,
    "totalChanges": 3
  }
}
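
The variable deltas can be reproduced client-side from any two template strings; a sketch using Python's difflib (the server's diff formatting may differ in detail):

```python
import difflib
import re

def diff_templates(old: str, new: str) -> dict:
    """Compute a unified diff and variable add/remove sets for two templates."""
    old_vars = set(re.findall(r"\{\{(\w+)\}\}", old))
    new_vars = set(re.findall(r"\{\{(\w+)\}\}", new))
    diff = "\n".join(
        difflib.unified_diff(old.splitlines(), new.splitlines(),
                             fromfile="v1", tofile="v3", lineterm="")
    )
    return {
        "templateDiff": diff,
        "variablesAdded": sorted(new_vars - old_vars),
        "variablesRemoved": sorted(old_vars - new_vars),
    }
```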

Rollback

POST /api/v1/prompts/{id}/rollback

Creates a new version with the content from a previous version.

curl -X POST -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  "https://gateway.example.com/api/v1/prompts/p-550e8400-.../rollback" \
  -d '{
    "version": 1,
    "message": "Reverting to v1 after regression"
  }'
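
Rollback is non-destructive: it appends a new version copying the target's content rather than rewriting history. A sketch of that semantics over an in-memory version list:

```python
def rollback(versions: list, target: int, message: str) -> dict:
    """Append a new version whose template copies the target version's."""
    source = next(v for v in versions if v["version"] == target)
    new_version = {
        "version": versions[-1]["version"] + 1,  # history is never rewritten
        "template": source["template"],
        "message": message,
    }
    versions.append(new_version)
    return new_version
```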

Execution

Execute a Prompt

POST /api/v1/prompts/{id}/execute

Substitutes variables into the template and sends the rendered prompt to the configured model. Responses are cached through the WorldFlow AI semantic cache.

Request body

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| variables | object | No | Key-value pairs for template substitution |
| version | integer | No | Run a specific version (default: current) |
| model | string | No | Override the default model |
| temperature | float | No | Override temperature |
| maxTokens | integer | No | Override max tokens |
| dryRun | boolean | No | If true, render the template but do not call the LLM |

curl -X POST -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  "https://gateway.example.com/api/v1/prompts/p-550e8400-.../execute" \
  -d '{
    "variables": {
      "role": "billing specialist",
      "department": "Finance",
      "question": "Why was I charged twice?"
    }
  }'

Response:

{
  "runId": "run-98765...",
  "renderedPrompt": "You are a billing specialist assistant specializing in Finance.\n\nQuestion: Why was I charged twice?",
  "estimatedTokens": 42,
  "response": "I understand your concern about the duplicate charge...",
  "usage": {
    "promptTokens": 42,
    "completionTokens": 85,
    "totalTokens": 127
  },
  "cacheHit": false,
  "durationMs": 1200,
  "dryRun": false
}
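
The renderedPrompt field is the template with each {{name}} placeholder replaced by the supplied value. A local sketch of that substitution; the fallback to a variable's defaultValue and the pass-through of unknown placeholders are assumptions, not confirmed server behavior:

```python
import re

def render(template: str, variables: dict, defaults=None) -> str:
    """Substitute {{name}} placeholders from variables, then defaults."""
    defaults = defaults or {}

    def substitute(match):
        name = match.group(1)
        if name in variables:
            return str(variables[name])
        if name in defaults:
            return str(defaults[name])
        return match.group(0)  # assumed: unknown placeholders left intact

    return re.sub(r"\{\{(\w+)\}\}", substitute, template)
```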

Test a Prompt (Dry Run)

POST /api/v1/prompts/{id}/test

Accepts the same body as /execute but always performs a dry run. The rendered prompt is returned without making an LLM call.

curl -X POST -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  "https://gateway.example.com/api/v1/prompts/p-550e8400-.../test" \
  -d '{
    "variables": { "role": "sales", "department": "Revenue", "question": "Pricing options?" }
  }'

Execution History

List runs:

GET /api/v1/prompts/{id}/runs

| Parameter | Type | Description |
|-----------|------|-------------|
| status | string | Filter by run status |
| version | integer | Filter by prompt version |
| page | integer | Page number (default 1) |
| pageSize | integer | Items per page (default 20) |

Get a specific run:

GET /api/v1/prompts/{id}/runs/{runId}

Folders

Organize prompts into a hierarchical folder structure.

List Folders

GET /api/v1/folders

| Parameter | Type | Description |
|-----------|------|-------------|
| parentId | string (uuid) | Filter by parent folder (omit for root folders) |
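
Because each folder carries a parentId, a client can rebuild the hierarchy from the flat list returned by this endpoint; a sketch:

```python
from collections import defaultdict

def build_tree(folders: list) -> list:
    """Turn a flat folder list (id, name, parentId) into a nested tree."""
    by_parent = defaultdict(list)
    for folder in folders:
        by_parent[folder.get("parentId")].append(folder)

    def node(folder):
        return {
            "name": folder["name"],
            "children": [node(child) for child in by_parent[folder["id"]]],
        }

    # Root folders are those with no parentId
    return [node(folder) for folder in by_parent[None]]
```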

Create a Folder

POST /api/v1/folders

curl -X POST -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  "https://gateway.example.com/api/v1/folders" \
  -d '{
    "name": "Marketing",
    "parentId": null
  }'

Get, Update, Delete

GET    /api/v1/folders/{id}
PUT    /api/v1/folders/{id}
DELETE /api/v1/folders/{id}

Deleting a folder fails with a 400 error if the folder still contains prompts or subfolders. Move or delete its contents first.


Chains

Chains are sequential multi-model pipelines. Each chain defines an ordered list of steps. When executed, the output of each step becomes the input for the next. This allows you to use one model for drafting, another for summarization, and a third for formatting, all in a single API call.
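
This execution model amounts to a left-to-right fold over the steps. A sketch where call_model is a hypothetical stand-in for the gateway's routed model call, and only the passThrough transform is modeled:

```python
def run_chain(steps: list, user_input: str, call_model) -> dict:
    """Feed each step's output into the next step as its input."""
    current = user_input
    trace = []
    for index, step in enumerate(steps):
        # call_model(modelId, systemPrompt, input) -> output text (stand-in)
        current = call_model(step["modelId"], step.get("systemPrompt", ""), current)
        trace.append({"index": index, "model": step["modelId"], "output": current})
    return {"output": current, "steps": trace}  # final step's output on top
```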

List Chains

GET /api/v1/chains

Response:

{
  "chains": [
    {
      "id": "c-123...",
      "name": "Draft-Review-Polish",
      "steps": [
        { "modelId": "gpt-4o", "systemPrompt": "Draft a response.", "transform": "passThrough", "temperature": 0.9 },
        { "modelId": "claude-sonnet-4-20250514", "systemPrompt": "Review for accuracy.", "transform": "passThrough", "temperature": 0.3 },
        { "modelId": "gpt-4o-mini", "systemPrompt": "Polish the language.", "transform": "passThrough", "temperature": 0.5 }
      ],
      "description": "Three-stage draft, review, and polish pipeline",
      "workspaceId": "ws-001",
      "createdAt": "2025-03-01T08:00:00Z",
      "updatedAt": "2025-03-01T08:00:00Z"
    }
  ],
  "total": 1
}

Create a Chain

POST /api/v1/chains

Request body

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | Yes | Display name |
| steps | array | Yes | Ordered list of chain steps |
| description | string | No | Human-readable description |

Chain step fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| modelId | string | Yes | Model to use for this step |
| systemPrompt | string | No | System prompt for the step |
| transform | string | No | How to pass output to the next step: passThrough, extractField, or template (default passThrough) |
| maxTokens | integer | No | Token limit for this step |
| temperature | float | No | Sampling temperature |

curl -X POST -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  "https://gateway.example.com/api/v1/chains" \
  -d '{
    "name": "Summarize-then-Translate",
    "description": "Summarize English text, then translate to Spanish",
    "steps": [
      {
        "modelId": "gpt-4o",
        "systemPrompt": "Summarize the following text in 2-3 sentences.",
        "transform": "passThrough",
        "maxTokens": 200
      },
      {
        "modelId": "gpt-4o-mini",
        "systemPrompt": "Translate the following English text to Spanish.",
        "transform": "passThrough",
        "maxTokens": 300
      }
    ]
  }'
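
The transform field controls how a step's raw output becomes the next step's input. A speculative sketch of the three modes; the extractField and template semantics shown here are assumptions, not documented behavior:

```python
import json

def apply_transform(mode: str, output: str, field=None, template=None) -> str:
    """Assumed semantics for passThrough, extractField, and template modes."""
    if mode == "extractField":
        # Assumption: the step output is JSON and `field` names a top-level key.
        return str(json.loads(output)[field])
    if mode == "template":
        # Assumption: a template string interpolates the raw output as {{output}}.
        return (template or "{{output}}").replace("{{output}}", output)
    return output  # passThrough (default): forward the output unchanged
```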

Get, Update, Delete

GET    /api/v1/chains/{id}
PUT    /api/v1/chains/{id}
DELETE /api/v1/chains/{id}

The PUT body accepts the same fields as POST. The full steps array is replaced on update.

Execute a Chain

POST /api/v1/chains/{id}/execute

Request body

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| messages | array | Yes | Chat messages to start the chain (OpenAI message format) |

curl -X POST -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  "https://gateway.example.com/api/v1/chains/c-123.../execute" \
  -d '{
    "messages": [
      { "role": "user", "content": "WorldFlow AI is a semantic caching gateway that reduces LLM costs by matching similar queries..." }
    ]
  }'

Response:

{
  "output": "WorldFlow AI es un gateway de almacenamiento sem\u00e1ntico...",
  "steps": [
    {
      "index": 0,
      "model": "gpt-4o",
      "output": "WorldFlow AI is a semantic caching gateway that reduces LLM costs by matching similar queries to cached responses.",
      "promptTokens": 85,
      "completionTokens": 42
    },
    {
      "index": 1,
      "model": "gpt-4o-mini",
      "output": "WorldFlow AI es un gateway de almacenamiento sem\u00e1ntico...",
      "promptTokens": 55,
      "completionTokens": 48
    }
  ],
  "totalPromptTokens": 140,
  "totalCompletionTokens": 90
}

Each step in the response includes its model, output text, and token counts. The top-level output is the final step's output.