Configuration Types for Custom Workflows

Custom workflows let you define the configuration for your AI application, then iterate on it in the playground and version it.

This page documents the types you use to define configuration in custom workflows. Each type is a Pydantic model. The playground renders each type as a UI control (for instance, numeric fields appear as sliders). You must provide a default value for every field when defining your configuration.
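Because every configuration is a plain Pydantic model, the "default on every field" rule means the config must be constructible with no arguments. A minimal sketch (the field names here are illustrative, not part of the SDK):

```python
from pydantic import BaseModel, Field

class MyConfig(BaseModel):
    # Every field needs a default so the playground can render an initial state
    system_prompt: str = Field(default="You are a helpful assistant.")
    temperature: float = Field(default=0.7, ge=0.0, le=2.0)

# With defaults on every field, the config instantiates with no arguments
config = MyConfig()
```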

PromptTemplate

PromptTemplate defines a prompt with messages, model settings, and variable substitution.

```python
from agenta.sdk.types import PromptTemplate, Message, ModelConfig

prompt = PromptTemplate(
    messages=[
        Message(role="system", content="You are a helpful assistant."),
        Message(role="user", content="Summarize this: {{text}}"),
    ],
    template_format="curly",
    input_keys=["text"],
    llm_config=ModelConfig(
        model="gpt-4o-mini",
        temperature=0.7,
        max_tokens=500,
    ),
)
```

Fields

| Field | Type | Description |
| --- | --- | --- |
| `messages` | `List[Message]` | List of messages (system, user, assistant) |
| `template_format` | `str` | Variable syntax: `"curly"` for `{{var}}`, `"fstring"` for `{var}` |
| `input_keys` | `List[str]` | Variables the template expects (for validation) |
| `llm_config` | `ModelConfig` | Model and parameters |

Methods

format(**kwargs) substitutes variables and returns a new PromptTemplate:

```python
formatted = prompt.format(text="Hello world")
# formatted.messages[1].content == "Summarize this: Hello world"
```

to_openai_kwargs() converts to arguments for the OpenAI client:

```python
kwargs = formatted.to_openai_kwargs()
response = client.chat.completions.create(**kwargs)
```

Message

Message represents a single message in a conversation.

```python
from agenta.sdk.types import Message

message = Message(
    role="system",
    content="You are a helpful assistant.",
)
```

Fields

| Field | Type | Description |
| --- | --- | --- |
| `role` | `str` | One of `"system"`, `"user"`, `"assistant"` |
| `content` | `str` | The message text |

ModelConfig

ModelConfig defines model selection and parameters.

```python
from agenta.sdk.types import ModelConfig

config = ModelConfig(
    model="gpt-4o-mini",
    temperature=0.7,
    max_tokens=500,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    response_format={"type": "json_object"},
)
```

Fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `str` | `"gpt-3.5-turbo"` | Model name |
| `temperature` | `float` | `1.0` | Sampling temperature (0 to 2) |
| `max_tokens` | `int` | `None` | Maximum tokens in the response |
| `top_p` | `float` | `None` | Nucleus sampling |
| `frequency_penalty` | `float` | `None` | Frequency penalty (-2 to 2) |
| `presence_penalty` | `float` | `None` | Presence penalty (-2 to 2) |
| `response_format` | `dict` | `None` | Response format (e.g., `{"type": "json_object"}`) |
| `stop` | `List[str]` | `None` | Stop sequences |

Model selection dropdown

Use `MultipleChoice` to create a dropdown in the UI.

```python
from typing import Annotated
from pydantic import BaseModel, Field
import agenta as ag

class Config(BaseModel):
    model: Annotated[str, ag.MultipleChoice(choices=["gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo"])] = Field(
        default="gpt-4o-mini"
    )
```

To offer all supported models, use `supported_llm_models`:

```python
from agenta.sdk.assets import supported_llm_models

class Config(BaseModel):
    model: Annotated[str, ag.MultipleChoice(choices=supported_llm_models)] = Field(
        default="gpt-4o-mini"
    )
```

Numeric parameters

Use `Field` with `ge` (greater than or equal) and `le` (less than or equal) for validation. The UI shows these as sliders.

```python
class Config(BaseModel):
    temperature: float = Field(default=0.7, ge=0.0, le=2.0)
    max_tokens: int = Field(default=500, ge=1, le=4000)
    threshold: float = Field(default=0.8, ge=0.0, le=1.0)
```
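These are standard Pydantic constraints, so out-of-range values are rejected at construction time, not only in the slider UI. A quick sketch of that behavior:

```python
from pydantic import BaseModel, Field, ValidationError

class Config(BaseModel):
    temperature: float = Field(default=0.7, ge=0.0, le=2.0)

Config(temperature=1.5)  # within bounds: accepted

try:
    Config(temperature=3.0)  # above le=2.0: rejected
except ValidationError as err:
    print(len(err.errors()), "validation error")
```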

Boolean parameters

Booleans show as checkboxes.

```python
class Config(BaseModel):
    use_cache: bool = Field(default=True)
    verbose: bool = Field(default=False)
```

Text parameters

Plain strings show as text areas.

```python
class Config(BaseModel):
    system_prompt: str = Field(default="You are a helpful assistant.")
```

For long text, use `PromptTemplate` instead. It gives you the prompt editor UI with model selection.

Complete example

```python
from typing import Annotated
from pydantic import BaseModel, Field

import agenta as ag
from agenta.sdk.types import PromptTemplate, Message, ModelConfig
from agenta.sdk.assets import supported_llm_models


class ClassifierConfig(BaseModel):
    # Prompt with full editor UI
    prompt: PromptTemplate = Field(
        default=PromptTemplate(
            messages=[
                Message(role="system", content="Classify the sentiment of the text."),
                Message(role="user", content="Text: {{text}}\n\nSentiment:"),
            ],
            template_format="curly",
            input_keys=["text"],
            llm_config=ModelConfig(model="gpt-4o-mini", temperature=0.0),
        )
    )

    # Model dropdown
    model: Annotated[str, ag.MultipleChoice(choices=supported_llm_models)] = Field(
        default="gpt-4o-mini"
    )

    # Numeric with slider
    confidence_threshold: float = Field(default=0.8, ge=0.0, le=1.0)

    # Boolean checkbox
    include_reasoning: bool = Field(default=False)
```