Building Your First AI Agent with Mastra
Introduction
Mastra is an open-source TypeScript framework for building AI agents that can automate complex workflows. In this guide, I'll walk you through creating your first AI agent from scratch.
Whether you're a developer looking to add AI capabilities to your application or a business owner wanting to automate processes, this guide will get you started quickly.
What You'll Need
- Node.js 18+
- npm or pnpm
- Basic JavaScript/TypeScript knowledge
- An OpenAI API key (or other AI provider)
Step 1: Project Setup
First, add Mastra to a Node.js project by installing the core package and the OpenAI provider:
npm install @mastra/core @ai-sdk/openai
This installs the core Mastra framework and OpenAI integration.
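The @ai-sdk/openai provider reads your API key from the OPENAI_API_KEY environment variable, so export it in your shell before running any of the examples below (the value shown is a placeholder):

```shell
# Make your OpenAI key available to the current shell session
export OPENAI_API_KEY="sk-..."
```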
Step 2: Understanding the Architecture
Mastra agents consist of several key components:
- Agent: The main orchestrator that coordinates tasks
- Tools: Functions that the agent can use to perform actions
- Memory: Storage for conversation history and context
- Instructions: Define the agent's behavior and capabilities
Step 3: Creating Your First Agent
Here's the basic structure of a Mastra agent:
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
export const myFirstAgent = new Agent({
name: "My First Agent",
instructions: "You are a helpful assistant that provides clear, concise answers.",
model: openai("gpt-4o-mini")
});
Step 4: Adding Tools
Tools extend your agent's capabilities. Let's create a simple weather tool:
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
export const weatherTool = createTool({
id: "getWeather",
description: "Get current weather for a location",
inputSchema: z.object({
location: z.string().describe("City name"),
}),
execute: async ({ context }) => {
// `context` holds the validated input defined by inputSchema
// Simulate an API call with static data
return {
temperature: 22,
conditions: "Sunny",
location: context.location
};
},
});
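Because execute is just an async function, you can exercise a tool's logic directly before wiring it to an agent. Here the handler body is mirrored as a standalone function (a sketch for illustration, not part of Mastra's API) so it runs without any framework setup:

```typescript
// Mirror of the weather tool's execute logic, pulled out as a plain
// function so it can be called (and unit-tested) in isolation.
type WeatherInput = { location: string };

async function getWeather(context: WeatherInput) {
  // Simulated API call, matching the tutorial's stub data
  return {
    temperature: 22,
    conditions: "Sunny",
    location: context.location,
  };
}

const report = await getWeather({ location: "Paris" });
console.log(report); // { temperature: 22, conditions: 'Sunny', location: 'Paris' }
```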
Now add the tool to your agent:
export const weatherAgent = new Agent({
name: "Weather Agent",
instructions: "You are a weather assistant. Use the weather tool when asked about weather conditions.",
model: openai("gpt-4o-mini"),
tools: {
weatherTool,
},
});
Step 5: Adding Memory
Memory allows agents to maintain context across conversations:
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
const memory = new Memory({
storage: new LibSQLStore({
url: "file:memory.db",
}),
});
export const memoryAgent = new Agent({
name: "Memory Agent",
instructions: "You are a helpful assistant with conversation memory.",
model: openai("gpt-4o-mini"),
memory,
});
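Under the hood, memory boils down to persisting messages keyed by a conversation (thread) id and replaying them as context. A toy in-memory version of that idea (a conceptual sketch, not Mastra's actual implementation) looks like this:

```typescript
// Toy conversation store: maps a thread id to its message history.
// Mastra's Memory does this (plus persistence and recall strategies)
// through a storage backend like LibSQLStore.
type Message = { role: "user" | "assistant"; content: string };

class ToyMemory {
  private threads = new Map<string, Message[]>();

  append(threadId: string, message: Message): void {
    const history = this.threads.get(threadId) ?? [];
    history.push(message);
    this.threads.set(threadId, history);
  }

  // History for one thread; other threads stay isolated.
  history(threadId: string): Message[] {
    return this.threads.get(threadId) ?? [];
  }
}

const memoryStore = new ToyMemory();
memoryStore.append("thread-1", { role: "user", content: "Hi, I'm Ada." });
memoryStore.append("thread-1", { role: "assistant", content: "Hello Ada!" });
console.log(memoryStore.history("thread-1").length); // 2
```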
Step 6: Generating Responses
There are several ways to interact with your agent:
Text Generation
const response = await agent.generate("What's the weather like in Paris?");
console.log(response.text);
Streaming Responses
const stream = await agent.stream("Tell me a story");
for await (const chunk of stream.textStream) {
process.stdout.write(chunk);
}
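The for await loop above works on any async iterable of text chunks. To see the consumption pattern without calling a model, you can run it against a stub stream (the generator here stands in for stream.textStream):

```typescript
// Stub text stream standing in for stream.textStream
async function* fakeTextStream(): AsyncGenerator<string> {
  yield "Once ";
  yield "upon ";
  yield "a time...";
}

// Same consumption pattern as with a real agent stream:
// chunks arrive incrementally and are concatenated as they come in.
let story = "";
for await (const chunk of fakeTextStream()) {
  story += chunk;
}
console.log(story); // "Once upon a time..."
```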
Structured Output
import { z } from "zod";
const response = await agent.generate("Analyze this text: 'Hello world'", {
output: z.object({
sentiment: z.enum(['positive', 'negative', 'neutral']),
keywords: z.array(z.string())
})
});
console.log(response.object); // Typed structured data
Step 7: Advanced Features
Dynamic Agents
Create agents that adapt based on runtime context:
const dynamicAgent = new Agent({
name: "Dynamic Agent",
instructions: ({ runtimeContext }) => {
const userType = runtimeContext.get("userType");
return `You are assisting a ${userType} user.`;
},
model: ({ runtimeContext }) => {
const userType = runtimeContext.get("userType");
return userType === "premium" ? openai("gpt-4o") : openai("gpt-4o-mini");
},
});
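The model selection above is just a branch on runtime state. Pulled out as a plain helper (pickModelId is a hypothetical name for illustration, not a Mastra API), the tier logic is easy to unit-test on its own:

```typescript
// Hypothetical helper isolating the tier-to-model decision from the
// agent. Returns the model id string you would pass to openai(...).
function pickModelId(userType: string | undefined): string {
  return userType === "premium" ? "gpt-4o" : "gpt-4o-mini";
}

console.log(pickModelId("premium")); // gpt-4o
console.log(pickModelId("free"));    // gpt-4o-mini
console.log(pickModelId(undefined)); // gpt-4o-mini
```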
Input and Output Processors
Add processors for content moderation and validation:
import { ModerationProcessor } from "@mastra/core/processors";
const safeAgent = new Agent({
name: "Safe Agent",
instructions: "You are a helpful assistant.",
model: openai("gpt-4o-mini"),
inputProcessors: [
new ModerationProcessor({
model: openai("gpt-4o-mini"),
threshold: 0.7,
}),
],
});
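Conceptually, an input processor inspects the message before the model sees it and can block or rewrite it. Here is a toy threshold gate with a hard-coded word-list scorer standing in for a real moderation model; this illustrates the idea only and is not Mastra's processor interface:

```typescript
// Toy moderation: a stub scorer plus a threshold gate. A real
// ModerationProcessor would call a model to score the input.
const FLAGGED_WORDS = ["attack", "exploit"];

function toxicityScore(text: string): number {
  const hits = FLAGGED_WORDS.filter((w) => text.toLowerCase().includes(w));
  return Math.min(1, hits.length * 0.5);
}

function moderateInput(
  text: string,
  threshold = 0.7
): { allowed: boolean; score: number } {
  const score = toxicityScore(text);
  return { allowed: score < threshold, score };
}

console.log(moderateInput("How do I bake bread?"));      // allowed: true
console.log(moderateInput("how to attack and exploit")); // allowed: false
```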
Step 8: Integration with MCP
Mastra supports the Model Context Protocol (MCP), which lets agents connect to external tool servers:
import { MCPClient } from "@mastra/mcp";
const mcpClient = new MCPClient({
servers: {
filesystem: {
command: "npx",
args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
},
},
});
const mcpAgent = new Agent({
name: "MCP Agent",
instructions: "You can use external tools via MCP.",
model: openai("gpt-4o-mini"),
tools: await mcpClient.getTools(),
});
Step 9: Voice Capabilities
Add voice interaction capabilities:
import { OpenAIVoice } from "@mastra/voice-openai";
const voiceAgent = new Agent({
name: "Voice Agent",
instructions: "You are a voice-enabled assistant.",
model: openai("gpt-4o-mini"),
voice: new OpenAIVoice(),
});
// Generate speech; speak() returns an audio stream you can play or save
const audioStream = await voiceAgent.voice.speak("Hello, I'm your AI assistant!");
Step 10: Deployment
Once your agent is ready, you can deploy it in various ways:
API Endpoint
// In your API route
const response = await agent.generate(req.body.message);
res.json({ reply: response.text });
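Fleshed out with Node's built-in http module and a stub agent (so the example runs without an API key), the endpoint pattern looks like this; in production you'd swap stubAgent for a real Mastra agent and likely use your web framework's routing instead:

```typescript
import { createServer } from "node:http";

// Stub with the same generate(message) => { text } shape as a Mastra
// agent, so the endpoint can run and be tested without an API key.
const stubAgent = {
  async generate(message: string) {
    return { text: `You said: ${message}` };
  },
};

const server = createServer(async (req, res) => {
  let body = "";
  for await (const chunk of req) body += chunk; // collect the request body
  const { message } = JSON.parse(body);
  const response = await stubAgent.generate(message);
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ reply: response.text }));
});

// Port 0 asks the OS for any free port; real deployments pin a port.
await new Promise<void>((resolve) => server.listen(0, () => resolve()));
const { port } = server.address() as { port: number };
console.log(`Agent endpoint listening on port ${port}`);
```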
As MCP Server
import { MCPServer } from "@mastra/mcp";
const mcpServer = new MCPServer({
name: "My Agent Server",
agents: { myAgent },
});
Common Patterns and Best Practices
1. Error Handling
Always implement proper error handling:
execute: async ({ context }) => {
try {
const result = await externalAPI.call(context.input);
return result;
} catch (error) {
const details = error instanceof Error ? error.message : String(error);
return { error: "Failed to process request", details };
}
}
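The try/catch pattern above generalizes to a small wrapper you can reuse across tools (withFallback is a hypothetical helper for illustration, not a Mastra API):

```typescript
// Hypothetical reusable wrapper: runs an async operation and converts
// thrown errors into a structured { error, details } result.
async function withFallback<T>(
  operation: () => Promise<T>
): Promise<T | { error: string; details: string }> {
  try {
    return await operation();
  } catch (err) {
    const details = err instanceof Error ? err.message : String(err);
    return { error: "Failed to process request", details };
  }
}

const ok = await withFallback(async () => ({ status: "done" }));
const failed = await withFallback(async () => {
  throw new Error("upstream timeout");
});
console.log(ok);     // { status: 'done' }
console.log(failed); // { error: 'Failed to process request', details: 'upstream timeout' }
```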
2. Input Validation
Use Zod schemas for robust input validation:
inputSchema: z.object({
email: z.string().email(),
age: z.number().min(18).max(120),
preferences: z.array(z.string()).min(1),
})
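The schema above rejects bad input at the boundary, before execute ever runs. To make the checks concrete, here is roughly what they amount to, hand-rolled in plain TypeScript for illustration only (in practice zod does this for you, with richer error reporting):

```typescript
// Hand-rolled equivalent of the zod checks above, for illustration;
// the inputSchema enforces these before the tool handler runs.
type Profile = { email: string; age: number; preferences: string[] };

function validateProfile(input: Profile): string[] {
  const issues: string[] = [];
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.email)) issues.push("invalid email");
  if (input.age < 18 || input.age > 120) issues.push("age out of range");
  if (input.preferences.length < 1) issues.push("at least one preference required");
  return issues;
}

console.log(validateProfile({ email: "ada@example.com", age: 30, preferences: ["dark mode"] })); // []
console.log(validateProfile({ email: "not-an-email", age: 12, preferences: [] }));
// [ 'invalid email', 'age out of range', 'at least one preference required' ]
```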
3. Logging and Monitoring
Add logging for debugging and monitoring:
execute: async ({ context }) => {
console.log(`Processing request for: ${context.input}`);
const result = await processRequest(context.input);
console.log(`Request completed successfully`);
return result;
}
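Rather than scattering console.log calls through every handler, you can wrap the handler once (withLogging is a hypothetical helper for illustration; injecting the logger keeps it easy to test):

```typescript
// Hypothetical higher-order wrapper that logs entry and exit around
// any async handler. The logger is injected so tests can capture it.
function withLogging<I, O>(
  name: string,
  handler: (input: I) => Promise<O>,
  log: (line: string) => void = console.log
) {
  return async (input: I): Promise<O> => {
    log(`[${name}] start: ${JSON.stringify(input)}`);
    const result = await handler(input);
    log(`[${name}] done`);
    return result;
  };
}

const logs: string[] = [];
const loggedDouble = withLogging("double", async (n: number) => n * 2, (l) => logs.push(l));
const doubled = await loggedDouble(21);
console.log(doubled, logs); // 42 [ '[double] start: 21', '[double] done' ]
```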
Real-World Example: Customer Support Agent
Let's build a complete customer support agent:
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import { openai } from "@ai-sdk/openai";
// Support ticket tool
const createTicketTool = createTool({
id: "createSupportTicket",
description: "Create a new support ticket",
inputSchema: z.object({
issue: z.string(),
priority: z.enum(['low', 'medium', 'high']),
customerEmail: z.string().email(),
}),
execute: async ({ context }) => {
// Create ticket in your system
const ticketId = await createSupportTicket(context);
return { ticketId, status: 'created' };
},
});
// Knowledge base search tool
const searchKBTool = createTool({
id: "searchKnowledgeBase",
description: "Search the knowledge base for solutions",
inputSchema: z.object({
query: z.string(),
}),
execute: async ({ context }) => {
const results = await searchKnowledgeBase(context.query);
return { results };
},
});
// Memory for conversation context
const memory = new Memory({
storage: new LibSQLStore({ url: "file:support.db" }),
});
// The support agent
export const supportAgent = new Agent({
name: "Customer Support Agent",
instructions: `
You are a customer support agent for our SaaS platform.
Always be helpful, professional, and empathetic.
When a customer has an issue:
1. First try to solve it using the knowledge base
2. If you can't solve it, create a support ticket
3. Always follow up and confirm the customer is satisfied
Remember conversation history and customer preferences.
`,
model: openai("gpt-4o"),
tools: {
createTicketTool,
searchKBTool,
},
memory,
});
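The escalation policy in the instructions (search the knowledge base first, open a ticket only when nothing matches) is worth testing as plain logic before trusting the model to follow it. A sketch with stub functions standing in for the real tools; the stubs and their return values are invented for illustration:

```typescript
// Stubbed tool results so the routing decision can be tested offline.
type KBResult = { title: string };
type Resolution =
  | { action: "answered"; source: string }
  | { action: "ticket_created"; ticketId: string };

async function searchKB(query: string): Promise<KBResult[]> {
  // Stub: only password questions have a KB article.
  return query.toLowerCase().includes("password")
    ? [{ title: "How to reset your password" }]
    : [];
}

async function createTicket(issue: string): Promise<string> {
  return `TICKET-${issue.length}`; // stub id
}

// Policy from the agent's instructions: KB first, ticket as fallback.
async function resolveIssue(issue: string): Promise<Resolution> {
  const results = await searchKB(issue);
  if (results.length > 0) {
    return { action: "answered", source: results[0].title };
  }
  return { action: "ticket_created", ticketId: await createTicket(issue) };
}

console.log(await resolveIssue("I forgot my password"));
// { action: 'answered', source: 'How to reset your password' }
console.log(await resolveIssue("Billing is wrong"));
// { action: 'ticket_created', ticketId: 'TICKET-16' }
```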
Conclusion
You've just built your first AI agent with Mastra! This is just the beginning. Mastra's flexibility allows you to create increasingly complex and capable agents.
Key Takeaways:
- Start simple with basic instructions and a single tool
- Add memory for better conversation continuity
- Use processors for safety and validation
- Leverage MCP for external integrations
- Always test thoroughly before deployment
Next Steps:
- Experiment with different AI models
- Add more sophisticated tools and integrations
- Implement multi-agent systems
- Set up monitoring and analytics
- Deploy to production environments
The possibilities are endless. What kind of agent will you build next?
Ready to build something amazing? Check out the Mastra documentation for more advanced features and examples.