Laravel and Prism PHP: The Modern Way to Work with AI Models
Every Laravel project that needs AI ends up with a different implementation. One project uses the OpenAI PHP client directly. Another one uses a wrapper someone wrote three years ago that is no longer maintained. A third one is tightly coupled to a specific model, so switching from GPT-4o to Claude requires rewriting half the service layer.
Prism PHP solves this properly. It is a Laravel package that gives you a single, consistent API for working with multiple AI providers. OpenAI, Anthropic Claude, Ollama for local models, Mistral, Gemini, and more, all through the same fluent interface. You switch providers by changing one line. Your application code does not care which model is behind it.
This post covers the full picture. All the supported providers and when to use each one, text generation with structured output, tool calling so your AI can actually interact with your application, and embeddings for semantic search. I will tie all three together with a real-world example at the end so you can see how they work as a system rather than isolated features.
What you need:
- Laravel 10 or 11
- PHP 8.1+
- Composer
- API keys for whichever providers you plan to use.
- Ollama requires a local install but is free to run.
Why Prism Instead of the OpenAI Client Directly
The openai-php/laravel client is solid and I have used it in several projects on this blog. But it locks you into OpenAI. If you want to try Claude for a specific task, or use a local Ollama model for development to avoid API costs, you are writing separate integration code for each one.
Prism is inspired by the Vercel AI SDK, which solved this same problem in the JavaScript world. The idea is simple: define a unified interface, write drivers for each provider, and let application code stay completely provider-agnostic. The practical benefits are real.
You can use GPT-4o for general generation, Claude for tasks where it performs better (long document analysis, nuanced writing), and a local Ollama model during development so you are not burning API credits on every test run. All through the same application code. That flexibility is genuinely useful once you start building production AI features.
Supported Providers
Prism currently ships with first-party support for these providers. Each has its own strengths and the right choice depends on the task.
| Provider | Best For | Key Models | Cost |
|---|---|---|---|
| OpenAI | General generation, embeddings, function calling | GPT-4o, GPT-4o-mini, text-embedding-3-small | Pay per token |
| Anthropic | Long documents, reasoning, nuanced analysis | Claude 3.7 Sonnet, Claude 3.5 Haiku | Pay per token |
| Ollama | Local development, privacy-sensitive data, zero API cost | Llama 3, Mistral, Phi-3, any Ollama model | Free (runs locally) |
| Mistral | Efficient generation, European data residency | Mistral Large, Mistral Small | Pay per token |
| Google Gemini | Multimodal tasks, audio and video input | Gemini 1.5 Flash, Gemini 1.5 Pro | Pay per token |
| xAI (Grok) | Real-time data awareness, alternative to GPT-4 | Grok-2 | Pay per token |
My general approach: OpenAI for embeddings and general tasks, Claude for anything involving long content or nuanced judgment, Ollama for local development. That combination covers most application needs while keeping costs reasonable.
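That split maps cleanly to code. Here is a hypothetical helper — the task names, defaults, and model choices are my own, not anything Prism ships — that routes a task type to a provider/model pair:

```php
<?php

// Hypothetical task-to-provider router. Adjust the task names and
// model choices to your own application; nothing here is Prism API.
function pickProvider(string $task, bool $localDev = false): array
{
    if ($localDev) {
        return ['ollama', 'llama3']; // free, runs locally
    }

    return match ($task) {
        'embeddings'    => ['openai', 'text-embedding-3-small'],
        'long-document' => ['anthropic', 'claude-3-7-sonnet-latest'],
        default         => ['openai', 'gpt-4o'],
    };
}
```

A helper like this keeps the routing decision in one place, so changing your provider mix later is a one-function edit.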
Installation and Configuration
```bash
composer require prism-php/prism
```
Publish the config file:
```bash
php artisan vendor:publish --tag=prism-config
```
This generates config/prism.php. Add your provider credentials to .env:
```env
# OpenAI
OPENAI_API_KEY=sk-your-openai-key

# Anthropic
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key

# Mistral
MISTRAL_API_KEY=your-mistral-key

# Google Gemini
GEMINI_API_KEY=your-gemini-key

# xAI
XAI_API_KEY=your-xai-key

# Ollama runs locally, no API key needed
# Default URL is http://localhost:11434
```
The config/prism.php file maps these to the relevant providers. You can also set default models per provider here, which saves repeating the model name in every call.
Part 1: Text Generation and Structured Output
Text generation is the core feature and the one you will use most. Prism's API is chainable and reads naturally, which is one of the things that makes it feel like a proper Laravel package rather than a thin wrapper.
Basic Text Generation
Here is the same prompt sent to three different providers, with zero application code changes between them:
```php
<?php

use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

// OpenAI
$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withSystemPrompt('You are a helpful PHP development assistant.')
    ->withPrompt('Explain what a service container is in Laravel.')
    ->asText();

echo $response->text;

// Swap to Claude, same code
$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-7-sonnet-latest')
    ->withSystemPrompt('You are a helpful PHP development assistant.')
    ->withPrompt('Explain what a service container is in Laravel.')
    ->asText();

echo $response->text;

// Or run it locally with Ollama during development
$response = Prism::text()
    ->using(Provider::Ollama, 'llama3')
    ->withSystemPrompt('You are a helpful PHP development assistant.')
    ->withPrompt('Explain what a service container is in Laravel.')
    ->asText();

echo $response->text;
```
That is the core value proposition right there. Same interface, different provider. If OpenAI has an outage or you want to A/B test Claude versus GPT-4o on a specific prompt, you change one line.
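Because every provider shares one call shape, failover becomes a plain loop over candidates. Here is a minimal sketch of that idea — `firstSuccessful` is my own helper, not part of Prism — where each element would in practice be a closure wrapping a Prism call like the ones above:

```php
<?php

// Try each provider-specific callable in order and return the first
// result that does not throw; rethrow the last failure if all fail.
function firstSuccessful(array $calls): mixed
{
    $lastError = null;

    foreach ($calls as $call) {
        try {
            return $call();
        } catch (\Throwable $e) {
            $lastError = $e; // e.g. provider outage or rate limit
        }
    }

    throw $lastError ?? new \RuntimeException('No providers configured');
}
```

In real code each element would be something like `fn () => Prism::text()->using(Provider::OpenAI, 'gpt-4o')->withPrompt($prompt)->asText()`, with a second closure targeting Anthropic as the fallback.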
Structured Output with Schema Validation
Getting raw text back is fine for simple tasks. For anything that feeds into application logic, you want structured output. Prism handles this through schema definitions that map directly to PHP objects.
```php
<?php

use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\Schema\ObjectSchema;
use Prism\Prism\Schema\StringSchema;
use Prism\Prism\Schema\IntegerSchema;
use Prism\Prism\Schema\ArraySchema;

$schema = new ObjectSchema(
    name: 'article_analysis',
    description: 'Analysis of a PHP tutorial article',
    properties: [
        new StringSchema('summary', 'One sentence summary of the article'),
        new IntegerSchema('difficulty_level', 'Difficulty from 1 (beginner) to 5 (expert)'),
        new StringSchema('primary_topic', 'The main topic of the article'),
        new ArraySchema(
            'key_concepts',
            'Key technical concepts covered',
            new StringSchema('concept', 'A technical concept mentioned in the article')
        ),
        new ArraySchema(
            'prerequisite_knowledge',
            'What the reader should know before reading this',
            new StringSchema('prerequisite', 'A prerequisite concept or skill')
        ),
    ],
    requiredFields: ['summary', 'difficulty_level', 'primary_topic', 'key_concepts']
);

$articleContent = "Your article text goes here...";

$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withSystemPrompt('You analyse PHP and Laravel tutorial articles.')
    ->withPrompt("Analyse this article:\n\n{$articleContent}")
    ->withSchema($schema)
    ->asStructured();

// $response->structured is a fully typed PHP array matching your schema
$analysis = $response->structured;

echo $analysis['summary'];
echo $analysis['difficulty_level'];
echo implode(', ', $analysis['key_concepts']);
```
No more parsing freeform text. No more stripping markdown fences from JSON responses. Prism handles the structured output negotiation with the model and gives you a validated PHP array. If the model returns something that does not match the schema, Prism throws an exception rather than silently returning garbage data.
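Even so, I like a cheap guard between the structured response and the code that consumes it, since the `requiredFields` list and the consuming code can drift apart over time. A tiny helper of my own (not part of Prism):

```php
<?php

// Verify a structured response contains every key the calling code
// is about to rely on; fail early with a useful message if not.
function requireKeys(array $data, array $keys): array
{
    $missing = array_diff($keys, array_keys($data));

    if ($missing !== []) {
        throw new \UnexpectedValueException(
            'Structured response missing: ' . implode(', ', $missing)
        );
    }

    return $data;
}
```

Wrapping `$response->structured` in `requireKeys($response->structured, ['summary', 'difficulty_level'])` turns a vague "undefined array key" notice deep in your code into an immediate, descriptive failure at the boundary.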
Multi-Turn Conversations
```php
<?php

use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\ValueObjects\Messages\UserMessage;
use Prism\Prism\ValueObjects\Messages\AssistantMessage;

$history = [
    new UserMessage('What is the difference between Laravel jobs and events?'),
    new AssistantMessage('Jobs are queued tasks for deferred work. Events are for broadcasting that something happened in your application...'),
    new UserMessage('Can you show me a code example of a job?'),
];

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-7-sonnet-latest')
    ->withSystemPrompt('You are a Laravel expert.')
    ->withMessages($history)
    ->asText();

echo $response->text;
```
Part 2: Tool Calling
Tool calling is where things get genuinely interesting. Instead of the AI just generating text, you give it tools it can call, functions that interact with your actual application. The model decides when to use a tool based on the user's request, calls it, gets the result, and incorporates it into its response.
Without tool calling, an AI assistant can only work with what it was trained on. With tool calling, it can check live database records, call external APIs, perform calculations, and do anything else you give it a tool for.
Defining a Tool
```php
<?php

use Prism\Prism\Tool;

// A tool that looks up an order from your database
$orderLookupTool = Tool::as('get_order_status')
    ->for('Look up the status and details of a customer order by order number')
    ->withStringParameter('order_number', 'The order number to look up, e.g. ORD-2025-1234')
    ->using(function (string $order_number): string {
        $order = \App\Models\Order::where('order_number', $order_number)
            ->with('items')
            ->first();

        if (!$order) {
            return json_encode(['error' => 'Order not found']);
        }

        return json_encode([
            'order_number' => $order->order_number,
            'status' => $order->status,
            'placed_at' => $order->created_at->format('d M Y'),
            'total' => '$' . number_format($order->total, 2),
            'items' => $order->items->count(),
            'tracking' => $order->tracking_number ?? 'Not yet assigned',
        ]);
    });
```
The model sees the tool name, description, and parameter definitions. When a user asks something like "what is the status of my order ORD-2025-4821", the model recognises it needs the order lookup tool, calls it with the order number extracted from the message, gets the JSON result back, and uses it to form a natural language response.
Using Multiple Tools Together
```php
<?php

use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\Tool;

$orderLookupTool = Tool::as('get_order_status')
    ->for('Look up order status by order number')
    ->withStringParameter('order_number', 'The order number')
    ->using(function (string $order_number): string {
        // Your order lookup logic here
        return json_encode(['status' => 'shipped', 'tracking' => 'TRK123456']);
    });

$productSearchTool = Tool::as('search_products')
    ->for('Search the product catalogue by keyword')
    ->withStringParameter('keyword', 'Search term to find products')
    ->withIntegerParameter('limit', 'Maximum number of results to return')
    ->using(function (string $keyword, int $limit = 5): string {
        $products = \App\Models\Product::search($keyword)
            ->take($limit)
            ->get(['name', 'price', 'in_stock']);

        return json_encode($products->toArray());
    });

$refundPolicyTool = Tool::as('get_refund_policy')
    ->for('Retrieve the current refund and returns policy')
    ->using(function (): string {
        return "Orders can be returned within 30 days of delivery. " .
            "Refunds process within 3 to 5 business days. " .
            "Items must be unused and in original packaging.";
    });

$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withSystemPrompt(
        'You are a helpful customer support assistant for an online store. ' .
        'Use the available tools to look up order details, products, and policies. ' .
        'Always check actual data before making claims about orders or policies.'
    )
    ->withPrompt(
        "I ordered something last week, order number ORD-2025-4821. " .
        "Has it shipped yet? Also, what's your return policy?"
    )
    ->withTools([$orderLookupTool, $productSearchTool, $refundPolicyTool])
    ->withMaxSteps(5)
    ->asText();

echo $response->text;
```
The withMaxSteps(5) call is important. It limits how many tool calls the model can make in a single request, preventing runaway chains where the model keeps calling tools indefinitely. Five steps is plenty for most support interactions.
What happens behind the scenes: the model reads the user message, decides it needs to call get_order_status with order number ORD-2025-4821, Prism runs your PHP function, returns the result to the model, the model sees the shipping status, then calls get_refund_policy for the second question, gets that result, and writes a complete response covering both questions using real data from your application.
Part 3: Embeddings for Semantic Search
Embeddings convert text into a vector, a list of floating point numbers that represent the semantic meaning of the text. Two pieces of text that mean similar things will have vectors that are close together in that high-dimensional space, even if the actual words used are completely different.
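"Close together" is typically measured with cosine similarity, the same metric the search code later in this post uses. A toy illustration with three-dimensional vectors — real embeddings have hundreds or thousands of dimensions, but the arithmetic is identical:

```php
<?php

// Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
function cosine(array $a, array $b): float
{
    $dot  = array_sum(array_map(fn ($x, $y) => $x * $y, $a, $b));
    $magA = sqrt(array_sum(array_map(fn ($x) => $x ** 2, $a)));
    $magB = sqrt(array_sum(array_map(fn ($x) => $x ** 2, $b)));

    return ($magA * $magB) > 0 ? $dot / ($magA * $magB) : 0.0;
}

// Made-up toy vectors standing in for real embeddings:
$password = [0.9, 0.1, 0.2]; // "reset my password"
$recovery = [0.8, 0.2, 0.1]; // "account recovery" (similar meaning)
$shipping = [0.1, 0.9, 0.8]; // "track my parcel" (different topic)

// The semantically similar pair scores higher than the unrelated one.
var_dump(cosine($password, $recovery) > cosine($password, $shipping)); // bool(true)
```

That ordering property is all semantic search needs: embed the query, embed the documents, and rank by similarity.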
Prism handles embeddings through the same clean interface as text generation.
```php
<?php

use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

// Generate an embedding for a piece of text
$response = Prism::embeddings()
    ->using(Provider::OpenAI, 'text-embedding-3-small')
    ->fromInput('How do I reset my account password?')
    ->create();

$vector = $response->embeddings[0]->embedding;
// $vector is an array of 1536 floats representing the semantic meaning
```
You can also embed multiple texts in a single API call, which is more efficient when indexing content:
```php
<?php

use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

$response = Prism::embeddings()
    ->using(Provider::OpenAI, 'text-embedding-3-small')
    ->fromInput([
        'How do I reset my password?',
        'Account recovery steps for locked accounts',
        'Changing your email address in account settings',
        'Two-factor authentication setup guide',
    ])
    ->create();

foreach ($response->embeddings as $index => $embedding) {
    echo "Text {$index}: " . count($embedding->embedding) . " dimensions\n";
}
```
Real-World Example: Tying All Three Together
Here is where it gets practical. I will build a customer support assistant that uses all three Prism features together: embeddings to find relevant knowledge base articles, tool calling to look up live order data, and structured text generation to produce consistent responses.
The scenario is a support bot for a Laravel e-commerce application. The bot needs to answer questions about orders using real database data, find relevant help articles using semantic search, and produce responses that follow a consistent format.
Database Setup for Knowledge Base
```bash
php artisan make:migration create_knowledge_base_table
```

```php
<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::create('knowledge_base_articles', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('content');
    $table->json('embedding')->nullable();
    $table->string('category');
    $table->timestamps();
});
```
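One piece the code in this post does not show: the `KnowledgeBaseArticle` model needs an `array` cast on the JSON `embedding` column so the indexer gets a PHP array back rather than a JSON string. A minimal sketch of such a model (the exact fillable list is my assumption):

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class KnowledgeBaseArticle extends Model
{
    protected $fillable = ['title', 'content', 'embedding', 'category'];

    // Without this cast the JSON column comes back as a string,
    // which would break the cosine similarity maths later on.
    protected $casts = [
        'embedding' => 'array',
    ];
}
```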
The Knowledge Base Indexer
```php
<?php

namespace App\Services;

use App\Models\KnowledgeBaseArticle;
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

class KnowledgeBaseIndexer
{
    public function indexAll(): void
    {
        $articles = KnowledgeBaseArticle::whereNull('embedding')->get();

        foreach ($articles->chunk(20) as $batch) {
            // chunk() preserves the original collection keys, so reindex
            // the batch to keep it aligned with the embeddings response
            $batch = $batch->values();

            $texts = $batch->map(fn ($a) => $a->title . "\n\n" . $a->content)
                ->toArray();

            $response = Prism::embeddings()
                ->using(Provider::OpenAI, 'text-embedding-3-small')
                ->fromInput($texts)
                ->create();

            foreach ($batch as $index => $article) {
                $article->update([
                    'embedding' => $response->embeddings[$index]->embedding,
                ]);
            }

            usleep(200000); // brief pause between batches to respect rate limits
        }
    }

    public function findRelevant(string $query, int $topK = 3): array
    {
        $queryResponse = Prism::embeddings()
            ->using(Provider::OpenAI, 'text-embedding-3-small')
            ->fromInput($query)
            ->create();

        $queryVector = $queryResponse->embeddings[0]->embedding;

        // Load articles and compute cosine similarity in PHP.
        // For production with large knowledge bases, use pgvector instead.
        $articles = KnowledgeBaseArticle::whereNotNull('embedding')->get();

        $scored = $articles->map(function ($article) use ($queryVector) {
                return [
                    'article' => $article,
                    'similarity' => $this->cosineSimilarity($queryVector, $article->embedding),
                ];
            })
            ->filter(fn ($item) => $item['similarity'] > 0.75)
            ->sortByDesc('similarity')
            ->take($topK);

        // all() keeps the Model instances; toArray() would recursively
        // convert them to plain arrays and break callers using ->title
        return $scored->pluck('article')->all();
    }

    private function cosineSimilarity(array $a, array $b): float
    {
        $dot = array_sum(array_map(fn ($x, $y) => $x * $y, $a, $b));
        $magA = sqrt(array_sum(array_map(fn ($x) => $x ** 2, $a)));
        $magB = sqrt(array_sum(array_map(fn ($x) => $x ** 2, $b)));

        return ($magA * $magB) > 0 ? $dot / ($magA * $magB) : 0.0;
    }
}
```
The Support Assistant Service
```php
<?php

namespace App\Services;

use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\Tool;
use Prism\Prism\Schema\ObjectSchema;
use Prism\Prism\Schema\StringSchema;
use Prism\Prism\Schema\ArraySchema;

class CustomerSupportAssistant
{
    public function __construct(
        private KnowledgeBaseIndexer $knowledgeBase
    ) {}

    public function respond(string $customerMessage, int|string $customerId): array
    {
        // Step 1: Find relevant knowledge base articles using embeddings
        $relevantArticles = $this->knowledgeBase->findRelevant($customerMessage);

        $knowledgeContext = collect($relevantArticles)
            ->map(fn ($a) => "### {$a->title}\n{$a->content}")
            ->join("\n\n---\n\n");

        // Step 2: Define tools for live data access
        $orderTool = Tool::as('get_order')
            ->for('Look up order details and status for a specific order number')
            ->withStringParameter('order_number', 'The order number, e.g. ORD-2025-1234')
            ->using(function (string $order_number) use ($customerId): string {
                $order = \App\Models\Order::where('order_number', $order_number)
                    ->where('customer_id', $customerId)
                    ->with('items', 'shipment')
                    ->first();

                if (!$order) {
                    return json_encode([
                        'error' => 'Order not found or does not belong to this customer',
                    ]);
                }

                return json_encode([
                    'order_number' => $order->order_number,
                    'status' => $order->status,
                    'placed_at' => $order->created_at->format('d M Y'),
                    'total' => '$' . number_format($order->total, 2),
                    'item_count' => $order->items->count(),
                    'tracking' => $order->shipment->tracking_number ?? 'Not yet assigned',
                    'carrier' => $order->shipment->carrier ?? null,
                    'estimated_delivery' => $order->shipment->estimated_delivery ?? null,
                ]);
            });

        $accountTool = Tool::as('get_account_info')
            ->for('Retrieve customer account information like email and membership status')
            ->using(function () use ($customerId): string {
                $customer = \App\Models\Customer::find($customerId);

                if (!$customer) {
                    return json_encode(['error' => 'Customer not found']);
                }

                return json_encode([
                    'name' => $customer->name,
                    'email' => $customer->email,
                    'member_since' => $customer->created_at->format('M Y'),
                    'membership_tier' => $customer->tier,
                    'total_orders' => $customer->orders()->count(),
                ]);
            });

        // Step 3: Define schema for structured response
        $responseSchema = new ObjectSchema(
            name: 'support_response',
            description: 'A structured customer support response',
            properties: [
                new StringSchema('message', 'The response message to send to the customer'),
                new StringSchema(
                    'escalation_level',
                    'Whether to escalate: "none", "agent", or "manager"'
                ),
                new StringSchema('sentiment', 'Customer sentiment detected: "positive", "neutral", "frustrated"'),
                new ArraySchema(
                    'follow_up_actions',
                    'Actions the support team should take after this interaction',
                    new StringSchema('action', 'A follow-up action item')
                ),
            ],
            requiredFields: ['message', 'escalation_level', 'sentiment']
        );

        // Step 4: Generate the response using text generation, tools, and schema
        $systemPrompt = "You are a helpful and empathetic customer support assistant.

Relevant knowledge base articles for this conversation:

{$knowledgeContext}

Guidelines:
- Use the knowledge base content above when it is relevant to the customer's question.
- Use the available tools to look up live order and account data when needed.
- Keep responses concise and clear, two to three sentences where possible.
- If the customer is frustrated, acknowledge their feelings first before solving the problem.
- Escalate to 'agent' if the issue requires human judgment or account changes.
- Escalate to 'manager' only if the customer is threatening to leave or requests a manager specifically.
- Never guess at order details, always use the get_order tool for specific order questions.";

        $response = Prism::text()
            ->using(Provider::Anthropic, 'claude-3-7-sonnet-latest')
            ->withSystemPrompt($systemPrompt)
            ->withPrompt($customerMessage)
            ->withTools([$orderTool, $accountTool])
            ->withMaxSteps(4)
            ->withSchema($responseSchema)
            ->asStructured();

        return $response->structured;
    }
}
```
The Controller
```php
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Services\CustomerSupportAssistant;

class SupportController extends Controller
{
    public function __construct(
        private CustomerSupportAssistant $assistant
    ) {}

    public function chat(Request $request)
    {
        $request->validate([
            'message' => 'required|string|max:1000',
        ]);

        $customerId = auth()->id();
        $message = $request->input('message');

        try {
            $response = $this->assistant->respond($message, $customerId);

            return response()->json([
                'reply' => $response['message'],
                'escalation_level' => $response['escalation_level'],
                'sentiment' => $response['sentiment'],
                'follow_up' => $response['follow_up_actions'] ?? [],
            ]);
        } catch (\Exception $e) {
            report($e);

            return response()->json([
                'reply' => 'Something went wrong. Please try again in a moment.',
            ], 500);
        }
    }
}
```
What This System Produces
Here is a realistic example of what the full pipeline returns for a frustrated customer asking about a delayed order:
Customer message:

```text
My order ORD-2025-4821 was supposed to arrive three days ago
and I still haven't received it. This is really frustrating.
```

System flow:

1. Embeddings search finds the "Shipping delays FAQ" and "How to track your order" articles
2. Claude reads the relevant knowledge base content
3. Claude calls the get_order tool with order number ORD-2025-4821
4. The tool returns `{ "status": "shipped", "tracking": "TRK987654", "carrier": "FedEx", "estimated_delivery": "2 days ago" }`
5. Claude generates the structured response

Response:

```json
{
  "message": "I completely understand your frustration, and I am sorry your order is running late. I can see ORD-2025-4821 shipped with FedEx and the tracking number is TRK987654. FedEx is showing a delay on their end, but the package is still in transit. You can track it directly at fedex.com using that number for the most current status.",
  "escalation_level": "none",
  "sentiment": "frustrated",
  "follow_up_actions": [
    "Monitor tracking number TRK987654 for delivery confirmation",
    "If not delivered within 48 hours, initiate trace request with FedEx",
    "Flag customer account for priority handling on next contact"
  ]
}
```
The sentiment flag lets your frontend show a different UI for frustrated customers. The escalation level drives routing logic. The follow-up actions can be stored and assigned to your support team automatically. This is not just a chatbot; it is a complete support workflow powered by three Prism features working together.
Testing Prism Code
One of the things that makes Prism genuinely production-ready is its testing utilities. You do not want real API calls firing during unit tests. Prism ships with response faking so you can test your application logic without hitting any external APIs.
```php
<?php

use Prism\Prism\Facades\Prism;
use Prism\Prism\ValueObjects\TextResult;

it('generates a support response for order queries', function () {
    $fakeResponse = new TextResult(
        text: '{"message": "Your order has shipped.", "escalation_level": "none", "sentiment": "neutral"}',
        finishReason: 'stop',
        usage: ['prompt_tokens' => 100, 'completion_tokens' => 50]
    );

    Prism::fake([$fakeResponse]);

    $assistant = app(CustomerSupportAssistant::class);
    $result = $assistant->respond('Where is my order?', customerId: 1);

    expect($result['escalation_level'])->toBe('none');
    expect($result['sentiment'])->toBe('neutral');

    Prism::assertCallCount(1);
    Prism::assertLastCallUsedProvider('anthropic');
});
```
Prism::fake() intercepts all Prism calls and returns your predefined responses. Prism::assertCallCount() and Prism::assertLastCallUsedProvider() let you verify your code is making the right calls. Clean, straightforward, and no real API usage during tests.
Switching Providers Without Touching Application Code
One last thing worth showing explicitly, because it is the whole point of Prism. You can make your provider configurable through your .env file so you can switch without a code change:
```env
# .env
AI_PROVIDER=anthropic
AI_MODEL=claude-3-7-sonnet-latest
```

```php
<?php

// In your service or controller
$response = Prism::text()
    ->using(
        config('ai.provider', 'openai'),
        config('ai.model', 'gpt-4o')
    )
    ->withPrompt($prompt)
    ->asText();
```
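One wiring detail this snippet assumes: `config('ai.provider')` only resolves if a `config/ai.php` file maps those `.env` values, because Laravel does not read `env()` outside config files once config caching is enabled. A minimal version might look like this:

```php
<?php

// config/ai.php — maps the .env values into Laravel's config system
return [
    'provider' => env('AI_PROVIDER', 'openai'),
    'model'    => env('AI_MODEL', 'gpt-4o'),
];
```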
Change the provider in .env, restart the queue workers, done. No code changes, no redeployment of application logic. That is the practical benefit of a unified interface. You build once against Prism's API and gain the flexibility to move between providers as your needs evolve, pricing changes, or a new model comes along that performs better on your specific tasks.
Prism is still relatively young but it is actively maintained and the API has stabilised enough to build production features on. For any new Laravel project that involves AI, it is the first package I reach for now. The alternative, direct API clients for each provider, creates the kind of fragmented codebase that becomes a maintenance problem fast.