Build a WhatsApp AI Assistant Using Laravel, Twilio and OpenAI

PHP CMS Frameworks · March 25, 2026

A few months ago a client came to us with a pretty common problem. Their support team was spending most of the day answering the same twenty questions over and over. Shipping times, return policies, order status, payment methods. The questions were predictable. The answers were documented. But every single one still needed a human to respond.

They were already using WhatsApp for customer communication, so the ask was simple: can we put something intelligent on that channel so the team can focus on the cases that actually need them? That is how we ended up building a WhatsApp AI assistant using Laravel, Twilio, and OpenAI, and it is exactly what this post covers.

By the end you will have a working bot that receives WhatsApp messages through a Twilio webhook, maintains conversation memory per customer so context carries across messages, and uses OpenAI to generate replies that sound like a real support agent. The whole thing runs on standard Laravel, no exotic packages.

What you need:

  • Laravel 10 or 11
  • Twilio account with WhatsApp sandbox access
  • OpenAI API key
  • Publicly accessible URL for your webhook

If you are working locally, ngrok handles that last part cleanly.

How the system works before we write any code

It is worth spending a minute on the architecture before jumping in. When a customer sends a WhatsApp message, Twilio receives it and forwards it to your webhook URL as an HTTP POST request. Laravel handles that request, pulls the customer's conversation history from cache, appends the new message, sends the full context to OpenAI, gets a reply, stores the updated history back in cache, and sends the response back to Twilio which delivers it to WhatsApp.

Customer sends WhatsApp message
        ↓
Twilio receives it and POSTs to your Laravel webhook
        ↓
Laravel pulls conversation history from Cache
        ↓
Appends new message to history
        ↓
Sends full conversation context to OpenAI
        ↓
OpenAI returns a support reply
        ↓
Laravel stores updated history in Cache
        ↓
Laravel responds with TwiML so Twilio delivers the message
        ↓
Customer receives the reply on WhatsApp

The conversation memory is the part most tutorials skip. Without it, every message the customer sends is treated as a brand new conversation. The bot has no idea what was just discussed. That makes for a frustrating experience, especially in support scenarios where context matters a lot.

Step 1: Install Laravel and required packages

composer create-project laravel/laravel whatsapp-ai-assistant
cd whatsapp-ai-assistant
composer require openai-php/laravel twilio/sdk

Publish the OpenAI config:

php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"

Add your credentials to .env:

OPENAI_API_KEY=sk-your-openai-key-here

TWILIO_SID=ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TWILIO_AUTH_TOKEN=your-auth-token-here
TWILIO_WHATSAPP_FROM=whatsapp:+14155238886

The number in TWILIO_WHATSAPP_FROM is Twilio's shared WhatsApp sandbox number. Once you go to production and get a dedicated number approved by WhatsApp, you update it there.

Add the Twilio values to config/services.php so you can access them cleanly throughout the app:

'twilio' => [
    'sid'        => env('TWILIO_SID'),
    'auth_token' => env('TWILIO_AUTH_TOKEN'),
    'from'       => env('TWILIO_WHATSAPP_FROM'),
],

Step 2: The Conversation Memory Service

This is the part that makes the bot actually useful in a support context. Each customer gets their own conversation history stored in Laravel Cache, keyed by their WhatsApp number. Every time they send a message, we load their history, add the new message, send the whole thing to OpenAI, then save the updated history back.

Create app/Services/ConversationMemoryService.php:

<?php

namespace App\Services;

use Illuminate\Support\Facades\Cache;

class ConversationMemoryService
{
    private int $maxMessages = 20;
    private int $ttlMinutes  = 60;

    /**
     * Get conversation history for a given WhatsApp number.
     */
    public function getHistory(string $phone): array
    {
        return Cache::get($this->key($phone), []);
    }

    /**
     * Append a new message to the conversation history.
     */
    public function addMessage(string $phone, string $role, string $content): void
    {
        $history = $this->getHistory($phone);

        $history[] = [
            'role'    => $role,
            'content' => $content,
        ];

        // Keep history trimmed so we do not blow the context window
        if (count($history) > $this->maxMessages) {
            $history = array_slice($history, -$this->maxMessages);
        }

        Cache::put($this->key($phone), $history, now()->addMinutes($this->ttlMinutes));
    }

    /**
     * Clear conversation history, useful for reset commands.
     */
    public function clearHistory(string $phone): void
    {
        Cache::forget($this->key($phone));
    }

    private function key(string $phone): string
    {
        return 'whatsapp_conversation_' . md5($phone);
    }
}

The maxMessages limit of 20 is deliberate. OpenAI models have a finite context window, and sending an entire day's worth of messages with every request gets expensive fast. Keeping the last 20 messages gives the bot enough context to be helpful without unnecessary API cost.

The TTL of 60 minutes means if a customer goes quiet for an hour and comes back, the conversation starts fresh. You can adjust both of these to fit your support workflow.
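Before wiring anything else up, you can sanity-check the service from php artisan tinker. The phone number below is a made-up example:

```php
// In `php artisan tinker` — exercise the memory service directly.
$memory = app(\App\Services\ConversationMemoryService::class);

$memory->addMessage('whatsapp:+15551234567', 'user', 'Where is my order?');
$memory->addMessage('whatsapp:+15551234567', 'assistant', 'Could you share your order number?');

// Returns both messages as role/content arrays, oldest first.
$memory->getHistory('whatsapp:+15551234567');

$memory->clearHistory('whatsapp:+15551234567');
$memory->getHistory('whatsapp:+15551234567'); // Back to an empty array.
```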

Step 3: The WhatsApp AI Service

This service handles the OpenAI side. It takes the customer's phone number and their latest message, builds the full conversation context including a system prompt that defines the bot's behaviour, and returns a reply.

Create app/Services/WhatsAppAIService.php:

<?php

namespace App\Services;

use OpenAI\Laravel\Facades\OpenAI;

class WhatsAppAIService
{
    public function __construct(
        private ConversationMemoryService $memory
    ) {}

    public function respond(string $phone, string $userMessage): string
    {
        // Save the customer's message to history first
        $this->memory->addMessage($phone, 'user', $userMessage);

        // Build messages array with system prompt at the top
        $messages = array_merge(
            [$this->systemPrompt()],
            $this->memory->getHistory($phone)
        );

        $response = OpenAI::chat()->create([
            'model'       => 'gpt-4o',
            'temperature' => 0.5,
            'max_tokens'  => 300,
            'messages'    => $messages,
        ]);

        $reply = trim($response->choices[0]->message->content);

        // Save the assistant reply to history so context carries forward
        $this->memory->addMessage($phone, 'assistant', $reply);

        return $reply;
    }

    private function systemPrompt(): array
    {
        return [
            'role'    => 'system',
            // implode() keeps the indentation whitespace of a multi-line
            // string literal out of the prompt that is sent to the API.
            'content' => implode(' ', [
                'You are a friendly and professional customer support assistant',
                'for an e-commerce store. You help customers with questions about',
                'orders, shipping, returns, and payments. Keep replies concise and',
                'clear, ideally under 3 sentences, since this is a WhatsApp conversation.',
                'If you do not know something specific about an order, ask the customer',
                'for their order number and let them know a human agent will follow up.',
                'Never make up order details or policies you are not sure about.',
            ]),
        ];
    }
}

A few things worth pointing out here. Setting max_tokens to 300 keeps replies short, which is exactly what you want for WhatsApp; nobody wants to read a five-paragraph response on their phone. The system prompt explicitly tells the bot not to make up order details, which matters in a support context where hallucinated information would cause real problems.

The temperature is 0.5, slightly higher than what I used in the code review bot from the last post. Support responses need to feel natural and conversational, so a bit more variation is fine here.

Step 4: The Webhook Controller

Generate the controller:

php artisan make:controller WhatsAppWebhookController

Then replace the contents of app/Http/Controllers/WhatsAppWebhookController.php with:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Http\Response;
use App\Services\WhatsAppAIService;
use App\Services\ConversationMemoryService;

class WhatsAppWebhookController extends Controller
{
    public function __construct(
        private WhatsAppAIService $aiService,
        private ConversationMemoryService $memory
    ) {}

    public function handle(Request $request): Response
    {
        $from    = $request->input('From', '');
        $message = trim($request->input('Body', ''));

        if (empty($from) || empty($message)) {
            return $this->twiml('');
        }

        // Allow customers to reset their conversation
        if (strtolower($message) === 'reset') {
            $this->memory->clearHistory($from);
            return $this->twiml('Conversation reset. How can I help you today?');
        }

        // Handle media messages gracefully
        if ($request->has('MediaUrl0')) {
            return $this->twiml('Thanks for the image. A human agent will review it and get back to you shortly.');
        }

        $reply = $this->aiService->respond($from, $message);

        return $this->twiml($reply);
    }

    /**
     * Build a TwiML response that Twilio uses to send the WhatsApp message.
     */
    private function twiml(string $message): Response
    {
        $xml  = '<?xml version="1.0" encoding="UTF-8"?>';
        $xml .= '<Response>';
        $xml .= '<Message>' . htmlspecialchars($message) . '</Message>';
        $xml .= '</Response>';

        return response($xml, 200)->header('Content-Type', 'text/xml');
    }
}

The reset command is a small touch but worth having. If a customer gets into a confusing exchange and wants to start over, they just send "reset" and the history clears. Useful for testing too.

Step 5: Route and CSRF Exception

Add the webhook route in routes/web.php:

use App\Http\Controllers\WhatsAppWebhookController;

Route::post('/webhook/whatsapp', [WhatsAppWebhookController::class, 'handle'])
    ->name('webhook.whatsapp');

Twilio sends POST requests to your webhook, and Laravel's CSRF middleware will block them by default because Twilio does not send a CSRF token. You need to exclude this route from CSRF protection.

In Laravel 10, open app/Http/Middleware/VerifyCsrfToken.php and add the route to the exceptions array:

<?php

namespace App\Http\Middleware;

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware;

class VerifyCsrfToken extends Middleware
{
    protected $except = [
        'webhook/whatsapp',
    ];
}

In Laravel 11, open bootstrap/app.php and update it there:

->withMiddleware(function (Middleware $middleware) {
    $middleware->validateCsrfTokens(except: [
        'webhook/whatsapp',
    ]);
})

This is one of those things that trips people up the first time they set up a Twilio webhook on Laravel. The request just silently fails and you get no clear error message. If your webhook is not responding, check this before anything else.
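A quick feature test helps catch route wiring problems early. Note that Laravel skips CSRF verification inside tests, so this exercises routing and the reset branch rather than the CSRF exemption itself; the "reset" path never calls OpenAI, so no mocking is needed. The test class and method names below are our own choice:

```php
<?php
// tests/Feature/WhatsAppWebhookTest.php

namespace Tests\Feature;

use Tests\TestCase;

class WhatsAppWebhookTest extends TestCase
{
    public function test_reset_command_returns_twiml(): void
    {
        // If you add the Twilio signature middleware from a later step,
        // skip it here so the test does not need a signed request.
        $this->withoutMiddleware(\App\Http\Middleware\ValidateTwilioRequest::class);

        // Field names mirror what Twilio actually POSTs.
        $response = $this->post('/webhook/whatsapp', [
            'From' => 'whatsapp:+15551234567',
            'Body' => 'reset',
        ]);

        $response->assertOk();
        $response->assertSee('Conversation reset', false);
    }
}
```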

Step 6: Validating That Requests Actually Come From Twilio

Since this webhook is publicly accessible, you should verify that incoming requests actually came from Twilio and not from someone who found your endpoint. Twilio signs every request with your auth token and sends the signature in the X-Twilio-Signature header.

Create a middleware to handle this:

php artisan make:middleware ValidateTwilioRequest

Then replace the contents of app/Http/Middleware/ValidateTwilioRequest.php with:

<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Twilio\Security\RequestValidator;

class ValidateTwilioRequest
{
    public function handle(Request $request, Closure $next): mixed
    {
        $validator = new RequestValidator(config('services.twilio.auth_token'));

        $signature = $request->header('X-Twilio-Signature', '');
        $url       = $request->fullUrl();
        $params    = $request->post();

        if (!$validator->validate($signature, $url, $params)) {
            abort(403, 'Invalid Twilio signature.');
        }

        return $next($request);
    }
}

Apply it to the webhook route:

Route::post('/webhook/whatsapp', [WhatsAppWebhookController::class, 'handle'])
    ->middleware(\App\Http\Middleware\ValidateTwilioRequest::class)
    ->name('webhook.whatsapp');

Skip this during local development if it causes issues. Twilio signature validation depends on the exact URL matching, which can get complicated with ngrok. Enable it in staging and production.
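If you prefer to keep the middleware attached in every environment rather than removing it from the route, one option is an early return for local and testing environments. This is a sketch of the same handle() method with that guard added; adjust the environment names to your setup:

```php
public function handle(Request $request, Closure $next): mixed
{
    // Skip signature checks outside staging/production, where ngrok's
    // URL rewriting tends to break validation.
    if (app()->environment(['local', 'testing'])) {
        return $next($request);
    }

    $validator = new RequestValidator(config('services.twilio.auth_token'));

    if (!$validator->validate(
        $request->header('X-Twilio-Signature', ''),
        $request->fullUrl(),
        $request->post()
    )) {
        abort(403, 'Invalid Twilio signature.');
    }

    return $next($request);
}
```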

Step 7: Exposing Your Local Server With ngrok

Twilio needs a publicly accessible URL to send webhook requests to. During development, ngrok creates a temporary public tunnel to your local machine. Install it from ngrok.com, then run both of these in separate terminals:

php artisan serve
ngrok http 8000

ngrok will give you a forwarding URL that looks something like https://a1b2c3d4.ngrok-free.app (older versions used the ngrok.io domain). Your full webhook URL will be:

https://a1b2c3d4.ngrok-free.app/webhook/whatsapp

Copy that URL; you will need it in the next step.

Step 8: Configure the Twilio WhatsApp Sandbox

Log into your Twilio console and go to Messaging, then Try it out, then Send a WhatsApp message. You will land on the sandbox configuration page.

Find the field labelled "When a message comes in" and paste your ngrok webhook URL there. Make sure the request method is set to HTTP POST, not GET.

To connect your personal WhatsApp number to the sandbox for testing, send the join code shown on that page (something like "join bright-example") to the Twilio sandbox number on WhatsApp. You will get a confirmation message once you are connected.

Send any message from your WhatsApp now and watch it flow through the full stack.

What a Real Customer Support Conversation Looks Like

Here is an example of the kind of exchange this bot handles well. A customer asking about a return:

Customer : Hi, I want to return a jacket I bought last week

Bot      : Hi there! Happy to help with your return. Could you share your
           order number so I can check the details?

Customer : It is ORDER-48291

Bot      : Thanks! Our standard return window is 30 days from delivery,
           so you are well within that period. Would you like me to walk
           you through the return steps or connect you with a human agent?

Customer : Walk me through it please

Bot      : Sure. Repack the item securely, then visit our returns portal
           and enter your order number to generate a prepaid label. Drop
           it at any courier location and your refund will process within
           3 to 5 business days once we receive it.

Customer : What if I lost the original packaging?

Bot      : No problem at all. Any sturdy box or bag works fine.
           The original packaging is not required for returns.

Across four exchanges, the bot carried the order number without the customer repeating it, gave accurate policy information, and offered a clear escalation path. That is exactly what a good first-line support interaction should look like.

Rate Limiting Per Customer

If one customer sends fifty messages in a minute, you do not want to fire fifty OpenAI API calls. Add rate limiting per phone number using Laravel's built-in rate limiter, right at the top of the handle method in your controller:

use Illuminate\Support\Facades\RateLimiter;

$key = 'whatsapp_' . md5($from);

if (RateLimiter::tooManyAttempts($key, 10)) {
    return $this->twiml('You are sending messages too quickly. Please wait a moment and try again.');
}

RateLimiter::hit($key, 60);

This allows 10 messages per minute per customer before the rate limit kicks in. Adjust the numbers based on how your support volume actually looks.

Moving From Sandbox to Production

The sandbox works well for testing but has real limitations. Every customer has to send a join code before the bot can message them, and the sandbox number is shared across all Twilio accounts. For an actual deployment you need a dedicated WhatsApp Business number approved through Meta.

The approval process goes through Twilio's WhatsApp sender registration. You submit your business details, Meta reviews and approves the number, and once that is done you update TWILIO_WHATSAPP_FROM in your production environment and point the webhook to your live URL. The rest of the code does not change.

On the infrastructure side, switch the cache to Redis in production (CACHE_DRIVER=redis in Laravel 10, CACHE_STORE=redis in Laravel 11). The file cache works locally, but Redis handles concurrent requests from multiple customers properly and survives server restarts without losing conversation history mid-session.
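A minimal sketch of that .env change, with placeholder connection values — adjust the host and port to wherever your Redis instance actually runs:

```
# Laravel 11 (on Laravel 10 the key is CACHE_DRIVER)
CACHE_STORE=redis
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
```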

Three things to add before handing this to a client

The core works well but a production support bot needs a bit more to be truly reliable.

First, a database log of every conversation. Both for debugging and for reviewing what the bot is actually saying to customers. A simple whatsapp_messages table with columns for phone, role, content, and created_at is enough to start. You will thank yourself for having this the first time the bot says something unexpected.
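A minimal sketch of that migration, assuming only the column list above:

```php
<?php
// database/migrations/xxxx_create_whatsapp_messages_table.php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('whatsapp_messages', function (Blueprint $table) {
            $table->id();
            $table->string('phone')->index(); // e.g. whatsapp:+15551234567
            $table->string('role');           // 'user' or 'assistant'
            $table->text('content');
            $table->timestamp('created_at')->useCurrent();
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('whatsapp_messages');
    }
};
```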

Second, a human handoff trigger. If the customer says something like "I want to speak to a real person" or the bot detects repeated frustration in the conversation, it should stop trying to resolve things automatically and flag the conversation for the support team. A keyword check handles the obvious cases, and you can ask OpenAI to classify sentiment alongside the reply for the subtler ones.
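The keyword side of that can be as simple as a check in the controller before calling the AI service. The trigger phrases below are examples; tune them to how your customers actually write:

```php
// In WhatsAppWebhookController::handle(), before calling respond().
// Example trigger phrases — extend these for your audience.
$handoffPhrases = ['human', 'real person', 'agent', 'speak to someone'];

foreach ($handoffPhrases as $phrase) {
    if (str_contains(strtolower($message), $phrase)) {
        // Flag the conversation for the support team here
        // (database row, Slack notification, etc.)
        return $this->twiml(
            'No problem, I am connecting you with a member of our support team. '
            . 'Someone will reply here shortly.'
        );
    }
}
```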

Third, a basic admin view showing active conversations, the most common questions coming in, and average response times. That data is useful for improving the system prompt and for giving the support team visibility into what the bot is handling versus what it is escalating.

Those three additions turn a working prototype into something you can confidently hand over and actually maintain.
