OpenAI Tools
The OpenAI adapter enables GPT models to interact with Asset Core through function calling, using the same tool surface as other adapters.
Prerequisites
- Asset Core daemons running
- OpenAI API access
- Python or Rust for integration
Step 1 - Get tool definitions
The adapter provides OpenAI-compatible tool definitions:
use assetcore_adapters::openai::tools::OpenAiToolDefinition;
let tools = OpenAiToolDefinition::all();
In Python, you would format these as function specifications for the OpenAI API.
Step 2 - Define the tools for OpenAI
Format tools for the OpenAI Chat Completions API:
tools = [
    {
        "type": "function",
        "function": {
            "name": "assetcore_commit",
            "description": "Submit a transaction to Asset Core",
            "parameters": {
                "type": "object",
                "properties": {
                    "operations": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "op": {"type": "string"},
                                "args": {"type": "object"}
                            },
                            "required": ["op", "args"]
                        },
                        "description": "List of operations to execute"
                    },
                    "idempotency_key": {
                        "type": "string",
                        "description": "Optional deduplication key"
                    }
                },
                "required": ["operations"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "assetcore_write_health",
            "description": "Check Asset Core write daemon health",
            "parameters": {"type": "object", "properties": {}}
        }
    }
]
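Before sending the tool list to the API, it can be worth sanity-checking it for duplicate names or missing fields, since a malformed spec fails only at request time. A minimal sketch (the `check_tools` helper is hypothetical, not part of the adapter):

```python
# Hypothetical helper (not part of the adapter): sanity-check a tool
# list before sending it to the Chat Completions API.
def check_tools(tools: list) -> list:
    """Return the tool names, raising on duplicate or malformed entries."""
    names = []
    for tool in tools:
        assert tool.get("type") == "function", "each entry must be a function tool"
        fn = tool["function"]
        assert "name" in fn and "parameters" in fn, "name and parameters are required"
        assert fn["name"] not in names, f"duplicate tool name: {fn['name']}"
        names.append(fn["name"])
    return names

# Abbreviated copy of the tool list above, for illustration
tools = [
    {"type": "function",
     "function": {"name": "assetcore_commit", "parameters": {"type": "object"}}},
    {"type": "function",
     "function": {"name": "assetcore_write_health", "parameters": {"type": "object"}}},
]
print(check_tools(tools))  # → ['assetcore_commit', 'assetcore_write_health']
```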
Step 3 - Execute tool calls
When GPT returns a tool call, execute it against Asset Core:
import httpx
import json

async def execute_tool(tool_name: str, arguments: dict) -> str:
    if tool_name == "assetcore_commit":
        async with httpx.AsyncClient() as client:
            response = await client.post(
                "http://localhost:8080/v1/commit",
                json=arguments
            )
            return response.text
    elif tool_name == "assetcore_write_health":
        async with httpx.AsyncClient() as client:
            response = await client.get(
                "http://localhost:8080/v1/health"
            )
            return response.text
    else:
        return json.dumps({"error": f"Unknown tool: {tool_name}"})
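As the tool surface grows, the if/elif chain can be replaced by a dispatch table mapping each tool name to its HTTP method and daemon path. A sketch, assuming the same two endpoints used above (the `ROUTES` table and `build_request` helper are illustrative, not adapter API):

```python
# Hypothetical routing table: maps each tool name to the HTTP method and
# daemon path used by execute_tool above.
ROUTES = {
    "assetcore_commit": ("POST", "/v1/commit"),
    "assetcore_write_health": ("GET", "/v1/health"),
}

def build_request(tool_name: str, base_url: str = "http://localhost:8080"):
    """Resolve a tool call to a (method, url) pair; raise for unknown tools."""
    if tool_name not in ROUTES:
        raise KeyError(f"Unknown tool: {tool_name}")
    method, path = ROUTES[tool_name]
    return method, base_url + path

print(build_request("assetcore_commit"))
# → ('POST', 'http://localhost:8080/v1/commit')
```

Unknown tool names then fail in one place, and adding a tool is a one-line change.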
Step 4 - Complete conversation loop
Integrate with the OpenAI API:
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Create a container with ID 1001"}
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)
# Check for tool calls (run this inside an async function,
# since execute_tool is a coroutine)
if response.choices[0].message.tool_calls:
    # The assistant message that requested the tool calls must be
    # appended to the history before the tool results
    messages.append(response.choices[0].message)
    for tool_call in response.choices[0].message.tool_calls:
        result = await execute_tool(
            tool_call.function.name,
            json.loads(tool_call.function.arguments)
        )
        # Add each tool result to the history
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result
        })

    # Get the final response now that the tool results are in the history
    final_response = client.chat.completions.create(
        model="gpt-4",
        messages=messages
    )
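The Chat Completions API requires a specific ordering: the assistant message carrying `tool_calls` must precede the `tool` messages, and each tool result must reference a `tool_call_id` from that assistant message. A well-formed history can be sketched as plain data (the call ID and the daemon response body here are hypothetical):

```python
import json

# Hypothetical transcript illustrating the ordering the API expects:
# user -> assistant (tool_calls) -> tool (one per call) -> final assistant turn.
messages = [
    {"role": "user", "content": "Create a container with ID 1001"},
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "assetcore_commit",
                      "arguments": json.dumps({"operations": [
                          {"op": "CreateContainer",
                           "args": {"container_id": 1001, "kind": "Standard"}}]})}},
    ]},
    # content is whatever the daemon returned; this body is made up
    {"role": "tool", "tool_call_id": "call_1", "content": '{"status": "ok"}'},
]

# Every tool result must reference an id from a preceding assistant message
ids = {c["id"] for m in messages if m["role"] == "assistant"
       for c in (m.get("tool_calls") or [])}
assert all(m["tool_call_id"] in ids for m in messages if m["role"] == "tool")
```

Omitting the assistant `tool_calls` message is the most common cause of a 400 error on the follow-up request.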
Step 5 - Use the Rust executor
For Rust applications, use the built-in executor:
use assetcore_adapters::openai::executor::OpenAiToolExecutor;
use assetcore_adapters::http_client::DaemonClient;
let client = DaemonClient::new(
    "http://localhost:8080",
    "http://localhost:8081",
);
let executor = OpenAiToolExecutor::new(client);

// Execute a tool call
let result = executor.execute(
    "assetcore_commit",
    serde_json::json!({
        "operations": [
            {
                "op": "CreateContainer",
                "args": {
                    "container_id": 1001,
                    "kind": "Standard"
                }
            }
        ]
    })
).await?;
Troubleshooting
Invalid function arguments
GPT may generate arguments that don’t match the schema. Validate before execution and return clear error messages.
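A lightweight check against the Step 2 schema can run before any HTTP call. This is a hand-rolled sketch; a full JSON Schema validator (for example the `jsonschema` package) would be more thorough:

```python
# Minimal argument validation for assetcore_commit, mirroring the JSON
# schema from Step 2. Returns human-readable errors instead of raising,
# so they can be fed back to the model as the tool result.
def validate_commit_args(arguments: dict) -> list:
    """Return a list of error strings; an empty list means valid."""
    errors = []
    ops = arguments.get("operations")
    if not isinstance(ops, list):
        errors.append("operations must be a list")
    else:
        for i, op in enumerate(ops):
            if not isinstance(op, dict):
                errors.append(f"operations[{i}] must be an object")
            elif not isinstance(op.get("op"), str) or not isinstance(op.get("args"), dict):
                errors.append(f"operations[{i}] needs a string 'op' and an object 'args'")
    key = arguments.get("idempotency_key")
    if key is not None and not isinstance(key, str):
        errors.append("idempotency_key must be a string")
    return errors

print(validate_commit_args({"operations": "oops"}))
# → ['operations must be a list']
```

Returning the error list as the tool result usually lets the model correct its arguments on the next turn.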
Rate limiting
OpenAI has rate limits. Implement backoff and retries for production use.
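One common pattern is exponential backoff with full jitter around the API call. A sketch (the `with_retries` wrapper is illustrative; in production you would catch the SDK's rate-limit exception specifically rather than bare `Exception`):

```python
import random
import time

# Retry a callable with full-jitter exponential backoff: each delay is
# drawn uniformly from [0, min(cap, base * 2**attempt)] seconds.
def with_retries(fn, max_attempts: int = 5, base: float = 1.0, cap: float = 30.0):
    """Call fn(), retrying on exception; re-raise after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

Jitter matters when several agents share one API key: it spreads the retries out instead of having them all hit the limit again in lockstep.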
Tool call ordering
GPT may request multiple tool calls in parallel. Asset Core commits are sequential; execute them in order.
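The key point is to `await` each commit before starting the next, rather than dispatching them concurrently with `asyncio.gather`. A runnable sketch with a stubbed `execute_tool` that records the order it sees:

```python
import asyncio

# Sequential execution: even if GPT returns several tool calls in one
# response, process them one at a time so Asset Core sees them in order.
order = []

async def execute_tool(name: str, arguments: dict) -> str:
    await asyncio.sleep(0)  # stand-in for the HTTP round trip
    order.append(arguments["n"])
    return "ok"

async def run_calls(tool_calls):
    results = []
    for name, args in tool_calls:  # one at a time, not asyncio.gather
        results.append(await execute_tool(name, args))
    return results

asyncio.run(run_calls([
    ("assetcore_commit", {"n": 1}),
    ("assetcore_commit", {"n": 2}),
]))
assert order == [1, 2]
```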
Next steps
- Agents Overview - Architecture and design
- MCP Integration - Alternative protocol
- Gemini - Google’s model integration