Laravel Queue Patterns You're Missing: Batching, Chains, and Atomic Locks at Scale

If you've been using Laravel queues, you know the basics: dispatch a job, it runs in the background, done. But real applications have more complex requirements. You need to process a hundred files and know when all of them finish. You need jobs that depend on other jobs. You need to handle failures without losing track of what went wrong. You need some jobs to never run in parallel, and others to be rate-limited against an external API. This article covers the patterns that bridge the gap between a working queue and a production-grade queue system.

📦 Job Batching - Group, Track, React

Job batching lets you dispatch a group of jobs and react when all of them complete, when any of them fails, or in either case.

Basic batch:

// app/Actions/ImportProductsAction.php

declare(strict_types=1);

namespace App\Actions;

use App\Jobs\ImportProductChunkJob;
use App\Models\ImportLog;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

class ImportProductsAction
{
    public function handle(array $productChunks): string
    {
        $batch = Bus::batch(
            collect($productChunks)->map(
                fn (array $chunk) => new ImportProductChunkJob($chunk)
            )->toArray()
        )
        ->then(function (Batch $batch) {
            // All jobs completed successfully
            logger()->info("Import finished: {$batch->totalJobs} chunks processed.");
        })
        ->catch(function (Batch $batch, Throwable $e) {
            // Runs on the first failed job; allowFailures() below keeps the batch going
            logger()->error("Import chunk failed: {$e->getMessage()}");
        })
        ->finally(function (Batch $batch) {
            // Always runs - success or failure
            ImportLog::updateStatus($batch->id, $batch->failedJobs > 0 ? 'partial' : 'complete');
        })
        ->name('Product Import')
        ->allowFailures() // Don't cancel the whole batch on one failure
        ->dispatch();

        return $batch->id;
    }
}

The job itself must use the Batchable trait to integrate with the batch lifecycle:

// app/Jobs/ImportProductChunkJob.php

declare(strict_types=1);

namespace App\Jobs;

use App\Models\Product;
use Illuminate\Bus\Batchable;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class ImportProductChunkJob implements ShouldQueue
{
    use Batchable, Dispatchable, InteractsWithQueue, Queueable;

    public function __construct(public readonly array $chunk) {}

    public function handle(): void
    {
        // Check if the batch was cancelled before doing heavy work
        if ($this->batch()?->cancelled()) {
            return;
        }

        foreach ($this->chunk as $product) {
            Product::updateOrCreate(['sku' => $product['sku']], $product);
        }
    }
}
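
The cancelled() check above only does something if the batch actually gets cancelled somewhere. A minimal sketch of a cancel endpoint - the controller action and route are hypothetical, but Batch::cancel() is the real API:

// app/Http/Controllers/Api/ImportController.php (hypothetical cancel action)

use Illuminate\Support\Facades\Bus;

public function cancel(string $batchId)
{
    // Marks the batch as cancelled; pending jobs then bail out
    // early via $this->batch()?->cancelled() before doing work
    Bus::findBatch($batchId)?->cancel();

    return response()->noContent();
}

Jobs already running when you cancel will finish; only jobs that check cancelled() before their heavy work actually skip it.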

Check batch progress in a controller:

// app/Http/Controllers/Api/ImportController.php

declare(strict_types=1);

namespace App\Http\Controllers\Api;

use Illuminate\Http\JsonResponse;
use Illuminate\Support\Facades\Bus;

class ImportController extends Controller
{
    public function status(string $batchId): JsonResponse
    {
        $batch = Bus::findBatch($batchId);

        abort_if($batch === null, 404); // findBatch() returns null for unknown IDs

        return response()->json([
            'total'      => $batch->totalJobs,
            'processed'  => $batch->processedJobs(),
            'failed'     => $batch->failedJobs,
            'progress'   => $batch->progress(), // 0-100
            'finished'   => $batch->finished(),
        ]);
    }
}

This gives you a progress endpoint for polling from the frontend.

Setup - batches require the job_batches table:

php artisan make:queue-batches-table # "queue:batches-table" on Laravel 10 and earlier
php artisan migrate

🔗 Job Chaining - Sequential Pipelines

Chaining runs jobs in sequence - the next job starts only when the previous one completes successfully. If any job fails, the rest of the chain is abandoned.

// app/Actions/ProcessOrderAction.php

declare(strict_types=1);

namespace App\Actions;

use App\Jobs\ChargePaymentJob;
use App\Jobs\GenerateInvoiceJob;
use App\Jobs\SendConfirmationEmailJob;
use App\Jobs\UpdateInventoryJob;
use App\Models\Order;
use Illuminate\Support\Facades\Bus;
use Throwable;

class ProcessOrderAction
{
    public function handle(int $orderId): void
    {
        Bus::chain([
            new ChargePaymentJob($orderId),
            new UpdateInventoryJob($orderId),
            new GenerateInvoiceJob($orderId),
            new SendConfirmationEmailJob($orderId),
        ])
        ->catch(function (Throwable $e) use ($orderId) {
            logger()->error("Order {$orderId} pipeline failed: {$e->getMessage()}");
            Order::find($orderId)?->markAsFailed();
        })
        ->dispatch();
    }
}

Conditional chain steps - PendingChain::dispatchIf() only controls whether the entire chain is dispatched, not individual steps. To include a step conditionally, build the job array before chaining:

$jobs = [
    new ChargePaymentJob($orderId),
    new UpdateInventoryJob($orderId),
];

if ($order->requiresShipping) {
    $jobs[] = new CreateShipmentJob($orderId);
}

Bus::chain($jobs)->dispatch();

Mix batches and chains - a batch inside a chain:

Bus::chain([
    new ValidateOrderJob($orderId),
    Bus::batch([
        new ProcessPaymentJob($orderId),
        new ReserveInventoryJob($orderId),
    ])->allowFailures(),
    new SendConfirmationJob($orderId),
])->dispatch();

The chain waits for the entire batch to complete before moving to SendConfirmationJob.

🛡️ Job Middleware - Rules Per Job

Job middleware lets you attach reusable behaviour directly to a job - rate limiting, deduplication, throttling - without duplicating the logic in handle().

WithoutOverlapping - prevent concurrent execution:

// app/Jobs/GenerateReportJob.php

declare(strict_types=1);

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class GenerateReportJob implements ShouldQueue
{
    public function __construct(public readonly int $userId) {}

    public function middleware(): array
    {
        return [
            // Only one GenerateReportJob per user can run at a time
            new WithoutOverlapping($this->userId),
        ];
    }

    public function handle(): void
    {
        // Generate report for $this->userId
    }
}

WithoutOverlapping acquires an atomic lock through your cache driver (Redis, Memcached, database - any store that supports locks), keyed on the job class plus the key you pass. Overlapping jobs are released back onto the queue and retried.

Configure expiry and release behaviour:

public function middleware(): array
{
    return [
        (new WithoutOverlapping($this->userId))
            ->expireAfter(300)   // Lock expires after 5 minutes (prevents deadlock if a worker dies)
            ->releaseAfter(30),  // Overlapping jobs wait 30 seconds before retrying

        // Alternatively, discard overlapping jobs instead of releasing them:
        // (new WithoutOverlapping($this->userId))->dontRelease(),
    ];
}
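
By default the lock key includes the job class, so two different job classes passing the same key can still run concurrently. When any job touching the same resource should block the others, shared() drops the class from the key - a sketch, where the "report:" key prefix is just an illustrative convention:

public function middleware(): array
{
    return [
        // One lock per user, honoured by every job class that uses this key
        (new WithoutOverlapping("report:{$this->userId}"))->shared(),
    ];
}

Put the same shared() middleware on each job class that must not overlap with the others.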

ThrottlesExceptions - back off on repeated failures:

// app/Jobs/SyncExternalApiJob.php

use Illuminate\Queue\Middleware\ThrottlesExceptions;

public function middleware(): array
{
    return [
        // After 3 exceptions, back off for 10 minutes
        // (the second argument is seconds in recent Laravel versions; older releases took minutes)
        new ThrottlesExceptions(3, 10 * 60),
    ];
}

This is different from $tries. ThrottlesExceptions pauses the job when it starts failing, giving the external service time to recover, and resumes automatically.
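
Two refinements worth knowing: by() shares one throttle across every job class hitting the same service, and pairing the middleware with retryUntil() keeps the job retryable through the back-off window instead of burning a fixed $tries count. A sketch - the 'payments-api' key and the two-hour window are illustrative choices:

use Illuminate\Queue\Middleware\ThrottlesExceptions;

public function middleware(): array
{
    return [
        // One shared circuit for every job that talks to this API
        (new ThrottlesExceptions(3, 10 * 60))->by('payments-api'),
    ];
}

public function retryUntil(): \DateTime
{
    // Keep the job alive long enough to survive several back-off windows
    return now()->addHours(2);
}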

RateLimited - respect external API limits:

// app/Providers/AppServiceProvider.php

use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Support\Facades\RateLimiter;

public function boot(): void
{
    RateLimiter::for('external-api', function () {
        return Limit::perMinute(60); // Max 60 API calls per minute across all workers
    });
}

// app/Jobs/SendToExternalApiJob.php

use Illuminate\Queue\Middleware\RateLimited;

public function middleware(): array
{
    return [new RateLimited('external-api')];
}

All workers share the same rate limiter through the cache store - if you have 5 workers, they collectively respect the 60/minute limit, not 300/minute.
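
The limiter closure also receives the job instance being rate limited, so the limit can be scoped per tenant rather than globally - a sketch, where $job->tenantId is an assumed property on the job:

use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Support\Facades\RateLimiter;

RateLimiter::for('external-api', function (object $job) {
    // 60 calls per minute per tenant instead of one global bucket
    return Limit::perMinute(60)->by($job->tenantId);
});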

🔁 ShouldBeUnique and ShouldBeEncrypted

ShouldBeUnique - prevents dispatching a duplicate job while one is already in the queue or running:

// app/Jobs/RecalculateUserStatsJob.php

declare(strict_types=1);

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;

class RecalculateUserStatsJob implements ShouldQueue, ShouldBeUnique
{
    public int $uniqueFor = 3600; // Unique window: 1 hour

    public function __construct(public readonly int $userId) {}

    public function uniqueId(): string
    {
        return (string) $this->userId; // One job per user at a time
    }

    public function handle(): void
    {
        // Recalculate stats for this user
    }
}

If you dispatch RecalculateUserStatsJob(42) while one is already queued for user 42, the second dispatch is silently dropped.

ShouldBeUniqueUntilProcessing - releases the uniqueness lock as soon as the job starts (not when it finishes), allowing a re-queue while the current one runs:

use Illuminate\Contracts\Queue\ShouldBeUniqueUntilProcessing;

class RecalculateUserStatsJob implements ShouldQueue, ShouldBeUniqueUntilProcessing

ShouldBeEncrypted - encrypts job payload in the queue:

// app/Jobs/SendPasswordResetJob.php

use Illuminate\Contracts\Queue\ShouldBeEncrypted;
use Illuminate\Contracts\Queue\ShouldQueue;

class SendPasswordResetJob implements ShouldQueue, ShouldBeEncrypted
{
    public function __construct(
        public readonly string $email,
        public readonly string $token, // Not visible in plain text in Redis
    ) {}
}

Use this for any job that carries sensitive data - tokens, PII, payment references - that would otherwise be readable in Redis or your database queue.

💥 Failure Strategies - Retries, Backoff, Dead Letters

Basic retry configuration:

// app/Jobs/ProcessWebhookJob.php

declare(strict_types=1);

namespace App\Jobs;

use App\Models\WebhookLog;
use Illuminate\Contracts\Queue\ShouldQueue;

class ProcessWebhookJob implements ShouldQueue
{
    public int $tries = 5;           // Total attempts before the job is marked as failed
    public int $maxExceptions = 3;   // Fail after 3 unhandled exceptions, even if attempts remain

    // Array backoff - wait 60s, then 180s, then 600s between retries
    public array $backoff = [60, 180, 600];

    public function __construct(public readonly int $webhookId) {}

    public function handle(): void
    {
        // Process webhook payload
    }

    public function failed(\Throwable $e): void
    {
        // Called when the job has exhausted all attempts
        logger()->error("Webhook failed permanently: {$e->getMessage()}", [
            'job' => $this::class,
        ]);

        WebhookLog::markFailed($this->webhookId, $e->getMessage());
    }
}

retryUntil() - time-based retry window:

public function retryUntil(): \DateTime
{
    // Keep retrying for up to 24 hours, then give up
    return now()->addHours(24);
}

This is better than a fixed $tries count for jobs that depend on external services - you retry for a meaningful time window rather than an arbitrary attempt count.
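
When a fixed array isn't flexible enough, a backoff() method can compute the delay per attempt. A sketch of exponential backoff with a small random jitter to avoid thundering-herd retries (the base delay and cap are arbitrary choices):

public function backoff(): int
{
    // 30s, 60s, 120s, ... doubling each attempt, capped at 1 hour
    $delay = 30 * 2 ** (max(1, $this->attempts()) - 1);

    // Up to 10s of jitter spreads out retries from many failed jobs
    return min(3600, $delay) + random_int(0, 10);
}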

Custom Dead Letter Queue - log all failed jobs to a dedicated table:

// database/migrations/2026_04_01_create_failed_jobs_detail_table.php

Schema::create('failed_jobs_detail', function (Blueprint $table) {
    $table->id();
    $table->string('job_class');
    $table->json('payload');
    $table->text('exception');
    $table->timestamp('failed_at');
});

// app/Jobs/Concerns/LogsFailure.php

declare(strict_types=1);

namespace App\Jobs\Concerns;

use App\Models\FailedJobDetail;

trait LogsFailure
{
    public function failed(\Throwable $e): void
    {
        FailedJobDetail::create([
            'job_class' => static::class,
            'payload'   => json_encode($this),
            'exception' => $e->getMessage() . "\n" . $e->getTraceAsString(),
            'failed_at' => now(),
        ]);
    }
}

Add the trait to any job that needs custom failure tracking, then build a simple dashboard on top of failed_jobs_detail.
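
The custom table complements, rather than replaces, Laravel's built-in failed-job tooling, which already covers inspection and replay from the CLI:

php artisan queue:failed        # List failed jobs with their UUIDs
php artisan queue:retry <uuid>  # Push one failed job back onto its queue
php artisan queue:retry all     # Retry every failed job
php artisan queue:flush         # Delete all failed job records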

🧪 Testing with Bus::fake() and Queue::fake()

Queue::fake() - assert jobs were pushed:

// tests/Feature/OrderControllerTest.php

declare(strict_types=1);

use App\Jobs\ProcessOrderJob;
use App\Jobs\SendConfirmationEmailJob;
use Illuminate\Support\Facades\Queue;

it('dispatches processing job on order creation', function () {
    Queue::fake();

    $this->postJson('/api/v1/orders', [...])
        ->assertCreated();

    Queue::assertPushed(ProcessOrderJob::class);
    Queue::assertNotPushed(SendConfirmationEmailJob::class); // Not dispatched directly
});

it('dispatches to the correct queue', function () {
    Queue::fake();

    ProcessOrderJob::dispatch($order);

    Queue::assertPushedOn('orders', ProcessOrderJob::class);
});
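
Queue::fake() also accepts a list of specific jobs to fake, leaving every other job to dispatch normally - useful when a test needs the real side effects of some jobs but not others:

it('fakes only the job under test', function () {
    // Only ProcessOrderJob is captured; all other jobs run for real
    Queue::fake([ProcessOrderJob::class]);

    ProcessOrderJob::dispatch($order);

    Queue::assertPushed(ProcessOrderJob::class);
});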

Bus::fake() - assert chains and batches:

// tests/Feature/ProcessOrderActionTest.php

declare(strict_types=1);

use App\Actions\ProcessOrderAction;
use App\Jobs\ChargePaymentJob;
use App\Jobs\GenerateInvoiceJob;
use App\Jobs\SendConfirmationEmailJob;
use App\Jobs\UpdateInventoryJob;
use Illuminate\Support\Facades\Bus;

it('dispatches the correct job chain for an order', function () {
    Bus::fake();

    app(ProcessOrderAction::class)->handle($order->id);

    Bus::assertChained([
        ChargePaymentJob::class,
        UpdateInventoryJob::class,
        GenerateInvoiceJob::class,
        SendConfirmationEmailJob::class,
    ]);
});

it('dispatches a batch for bulk import', function () {
    Bus::fake();

    app(ImportProductsAction::class)->handle($chunks);

    Bus::assertBatched(function ($batch) {
        return $batch->jobs->count() === count($chunks)
            && $batch->name === 'Product Import';
    });
});

Test the failed() hook directly:

it('logs failure when webhook job exhausts retries', function () {
    $job = new ProcessWebhookJob($webhookId);
    $exception = new \RuntimeException('External service timeout');

    $job->failed($exception);

    expect(WebhookLog::where('webhook_id', $webhookId)->first()->status)
        ->toBe('failed');
});

🚀 Priority Queues in Redis with Supervisor

Laravel workers can listen to multiple queues with an ordered priority. Jobs on higher-priority queues are processed first.

Dispatch to a specific queue:

// High priority - payment processing
ProcessPaymentJob::dispatch($order)->onQueue('high');

// Normal priority - notifications
SendEmailJob::dispatch($user)->onQueue('default');

// Low priority - analytics, reporting
UpdateAnalyticsJob::dispatch($event)->onQueue('low');

Supervisor configuration - worker processes with priority order:

; /etc/supervisor/conf.d/laravel-worker.conf

[program:laravel-worker-high]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/home/artisan queue:work redis --queue=high,default,low --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=4
redirect_stderr=true
stdout_logfile=/var/log/laravel-worker-high.log

[program:laravel-worker-low]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/home/artisan queue:work redis --queue=low --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=1
redirect_stderr=true
stdout_logfile=/var/log/laravel-worker-low.log

The --queue=high,default,low ordering means the worker checks the high queue first, then default, then low. Four workers handle high-priority jobs; one handles low-priority background work.

Key principle: separate workers for different priorities prevent low-priority jobs from starving high-priority ones. Even if low is backed up with 10k analytics jobs, the payment workers are unaffected.

✅ Conclusion

  • Use batching when you need to process a group of jobs and react to their collective completion - imports, bulk operations, parallel pipelines
  • Use chaining when steps must happen in sequence and each depends on the previous one - order processing, multi-step workflows
  • Add job middleware (WithoutOverlapping, ThrottlesExceptions, RateLimited) to express per-job rules without polluting handle()
  • Use ShouldBeUnique to prevent duplicate jobs for the same entity during high-frequency dispatches
  • Use ShouldBeEncrypted for any job carrying PII, tokens, or sensitive references
  • Build a Dead Letter Queue on top of the failed() hook to get context about permanent failures
  • Write tests with Bus::fake() to assert chains and batches - they're the hardest bugs to catch without them
  • Use priority queues with separate Supervisor processes to ensure critical jobs never wait behind background work

Follow me on LinkedIn for more Laravel tips! Would you like a deep dive into Laravel Pulse for monitoring queue health in production? Let me know in the comments below!
