Every Laravel application eventually runs into a slow endpoint. The query is there - somewhere - pulling too much data, missing an index, or firing fifty times in a loop. Telescope and Debugbar are great for local development, but they won't help you when you need to capture query behaviour programmatically, log it to a file, or build a middleware that activates on demand in a staging environment. This article shows you exactly how to do that.
📋 Table of Contents
- 🔍 DB::listen() - Watching Queries as They Run
- 📋 DB::enableQueryLog() - Capture and Inspect
- 🛠️ Building a QueryLogger Middleware
- 🔬 Reading EXPLAIN - What MySQL Actually Does
- ⚠️ Common Patterns That Kill Performance
- 🚀 Dev vs Production - What to Enable and When
- 🧱 MySQL Slow Query Log - The Full Picture
- ✅ Conclusion
🔍 DB::listen() - Watching Queries as They Run
DB::listen() registers a callback that fires after every SQL statement executed in the current request lifecycle. It receives a QueryExecuted event with the SQL, bindings, and execution time.
The simplest use case is dumping queries during local debugging:
// app/Providers/AppServiceProvider.php
declare(strict_types=1);
namespace App\Providers;
use Illuminate\Database\Events\QueryExecuted;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\ServiceProvider;
class AppServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        if (config('app.debug')) {
            DB::listen(function (QueryExecuted $query) {
                logger()->debug('Query executed', [
                    'sql' => $query->sql,
                    'bindings' => $query->bindings,
                    'time_ms' => $query->time,
                ]);
            });
        }
    }
}
You can also threshold-filter - log only the slow ones:
// app/Providers/AppServiceProvider.php
DB::listen(function (QueryExecuted $query) {
    if ($query->time > 100) { // ms
        logger()->warning('Slow query detected', [
            'sql' => $query->sql,
            'time_ms' => $query->time,
            'caller' => collect(debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 8))
                ->map(fn ($t) => ($t['class'] ?? '') . '@' . ($t['function'] ?? ''))
                ->filter(fn ($s) => str_contains($s, 'App\\'))
                ->values()
                ->first(),
        ]);
    }
});
The debug_backtrace trick narrows down which class actually triggered the query - useful when you have layered services and can't immediately tell where the call comes from.
When to use DB::listen():
- Threshold alerts during development (`> 100ms` → log warning)
- Collecting metrics in a custom service (query count per request)
- Real-time debugging without touching Telescope
When not to use it: the callback fires synchronously on every query, so anything heavy inside it - file I/O, HTTP calls, deep backtraces - slows down every request.
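One way to keep the hot path cheap is to do nothing but accumulate counters in the listener and defer the single log write to a terminating callback. A minimal sketch (the variable names and structure are illustrative, not part of Laravel's API - `DB::listen()` and `app()->terminating()` are the real calls):

```php
// Sketch: register in a service provider's boot() method.
// The listener does O(1) in-memory work per query; the one
// log write happens after the response has been sent.
use Illuminate\Database\Events\QueryExecuted;
use Illuminate\Support\Facades\DB;

$stats = ['count' => 0, 'time_ms' => 0.0];

DB::listen(function (QueryExecuted $query) use (&$stats): void {
    $stats['count']++;                  // no I/O in the hot path
    $stats['time_ms'] += $query->time;
});

app()->terminating(function () use (&$stats): void {
    // One log line per request instead of one per query
    logger()->debug('Request query stats', $stats);
});
```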
📋 DB::enableQueryLog() - Capture and Inspect
DB::enableQueryLog() buffers every query executed after it's called. DB::getQueryLog() returns the full list at any point. Unlike DB::listen(), it captures everything into memory for later inspection.
// routes/web.php (quick local debug - never commit this)
Route::get('/debug-orders', function () {
    DB::enableQueryLog();

    $orders = \App\Models\Order::with('items.product')
        ->where('status', 'pending')
        ->get();

    $log = DB::getQueryLog();

    return response()->json([
        'query_count' => count($log),
        'total_ms' => collect($log)->sum('time'),
        'queries' => $log,
    ]);
});
The returned array has this shape per entry:
[
    'query' => 'select * from `orders` where `status` = ?',
    'bindings' => ['pending'],
    'time' => 4.21, // milliseconds
]
Practical pattern - count queries in a test:
// tests/Feature/OrderIndexTest.php
declare(strict_types=1);
use App\Models\Order;
use Illuminate\Support\Facades\DB;
it('loads orders without N+1 queries', function () {
    Order::factory()->count(20)->hasItems(3)->create();

    DB::enableQueryLog();

    $this->getJson('/api/v1/orders')->assertOk();

    expect(DB::getQueryLog())->toHaveCount(2); // orders + eager-loaded items
});
This is one of the cleanest ways to enforce N+1 prevention at the test level - without any third-party package.
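If a single test exercises several endpoints, `DB::flushQueryLog()` empties the buffer between requests so each assertion counts only its own queries. A sketch - the second endpoint and its expected count are hypothetical, adjust them to your routes:

```php
use Illuminate\Support\Facades\DB;

DB::enableQueryLog();

$this->getJson('/api/v1/orders')->assertOk();
expect(DB::getQueryLog())->toHaveCount(2);

DB::flushQueryLog(); // empty the buffer, keep logging enabled

// Hypothetical second endpoint - the count depends on your schema
$this->getJson('/api/v1/orders/1')->assertOk();
expect(DB::getQueryLog())->toHaveCount(3);
```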
🛠️ Building a QueryLogger Middleware
The real power comes from combining both tools in a middleware that activates on demand - without a deploy, without touching code, and without enabling debug mode globally.
// app/Http/Middleware/QueryLogger.php
declare(strict_types=1);
namespace App\Http\Middleware;
use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;
use Symfony\Component\HttpFoundation\Response;
class QueryLogger
{
    public function handle(Request $request, Closure $next): Response
    {
        if ($this->shouldLog($request)) {
            DB::enableQueryLog();
        }

        return $next($request);
    }

    public function terminate(Request $request, Response $response): void
    {
        if (! $this->shouldLog($request)) {
            return;
        }

        try {
            $queries = DB::getQueryLog();
            $total = count($queries);
            $timeMs = array_sum(array_column($queries, 'time'));

            $timestamp = now()->format('Ymd_His_u');
            $path = storage_path("logs/query_debug_{$timestamp}.json");

            file_put_contents($path, json_encode([
                'request' => $request->method() . ' ' . $request->fullUrl(),
                'summary' => [
                    'total_queries' => $total,
                    'total_time_ms' => round($timeMs, 2),
                ],
                'queries' => $queries,
            ], JSON_PRETTY_PRINT));

            Log::info('QueryLogger saved', ['path' => $path, 'total' => $total]);
        } catch (\Throwable $e) {
            Log::error('QueryLogger failed: ' . $e->getMessage());
        }
    }

    private function shouldLog(Request $request): bool
    {
        return config('app.debug')
            && ($request->hasHeader('X-Debug-Queries') || $request->has('debug_queries'));
    }
}
Register it in your middleware stack:
// bootstrap/app.php
declare(strict_types=1);

use App\Http\Middleware\QueryLogger;
use Illuminate\Foundation\Application;

return Application::configure(basePath: dirname(__DIR__))
    ->withMiddleware(function ($middleware) {
        $middleware->append(QueryLogger::class);
    })
    ->create();
Trigger it with a header:
curl -H "X-Debug-Queries: 1" https://staging.example.com/api/v1/orders
The output file looks like this:
{
    "request": "GET https://staging.example.com/api/v1/orders",
    "summary": {
        "total_queries": 47,
        "total_time_ms": 312.45
    },
    "queries": [
        {
            "query": "select * from `orders` where `user_id` = ?",
            "bindings": [42],
            "time": 6.21
        }
    ]
}
47 queries for a single endpoint is a very obvious N+1 to fix. The file gives you the full picture without any tooling overhead.
Security note: the shouldLog() guard requires app.debug to be true, so the middleware is inert in any environment running with APP_DEBUG=false. Never run production with APP_DEBUG=true. In staging, keep the flag set and send the header only when you actually need a capture.
🔬 Reading EXPLAIN - What MySQL Actually Does
Once you have the slow query from the log, the next step is running EXPLAIN to understand what MySQL does with it.
EXPLAIN SELECT * FROM orders WHERE user_id = 42 AND status = 'pending';
The most important columns:
| Column | What to look for |
|---|---|
| `type` | `ALL` = full table scan (bad). `ref`, `eq_ref`, `const` = index used (good) |
| `key` | Which index was used. `NULL` = no index |
| `rows` | Estimated rows scanned. Should be close to the actual result count |
| `Extra` | `Using filesort` or `Using temporary` = usually fixable with an index |
Example - before adding an index:
EXPLAIN SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at DESC;
-- type: ALL | key: NULL | rows: 94823 | Extra: Using filesort
Scanning 94k rows with a filesort - this will hurt at scale. Fix:
// database/migrations/2026_03_01_add_index_to_orders_table.php
declare(strict_types=1);
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
    public function up(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->index(['status', 'created_at']);
        });
    }

    public function down(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->dropIndex(['status', 'created_at']);
        });
    }
};
After the migration:
EXPLAIN SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at DESC;
-- type: ref | key: orders_status_created_at_index | rows: 312 | Extra: Using index condition
From 94k rows to 312. A composite index on (status, created_at) covers both the WHERE filter and the ORDER BY in one scan.
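It is worth confirming that the migration produced the index you expect, because column order in a composite index decides which queries can use it. Assuming Laravel's default index naming convention:

```sql
-- Laravel names this index orders_status_created_at_index by default.
SHOW INDEX FROM orders WHERE Key_name = 'orders_status_created_at_index';
-- Seq_in_index: 1 = status, 2 = created_at. The order matters:
-- a query filtering only on created_at cannot use this index.
```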
EXPLAIN ANALYZE (MySQL 8.0+) goes further - it actually executes the query and shows real timings:
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at DESC LIMIT 20;
-- -> Limit: 20 row(s) (actual time=0.312..0.318 rows=20 loops=1)
-- -> Index lookup on orders using orders_status_created_at_index (status='pending')
-- (actual time=0.298..0.312 rows=20 loops=1)
Real time: 0.318ms. Before the index it was 340ms. Add indexes before you cache.
⚠️ Common Patterns That Kill Performance
These are the patterns you will find in the query log most often:
1. Missing select() - loading everything
// Bad - loads all 40 columns, serializes everything
$users = User::all();
// Good - only what the endpoint needs
$users = User::select('id', 'name', 'email')->get();
2. Wildcard LIKE with leading %
// Bad - cannot use an index, always a full scan
User::where('email', 'LIKE', '%@example.com')->get();
// Better - trailing wildcard can use an index
User::where('email', 'LIKE', 'john%')->get();
// Best for full-text search - use Laravel Scout
3. Aggregate query inside a loop
// Bad - fires one query per order (N+1 in disguise)
foreach ($orders as $order) {
    $order->total = $order->items()->sum('price');
}
// Good - aggregate in a single query
$orders = Order::withSum('items', 'price')->get();
// Access: $order->items_sum_price
4. orderBy() on a non-indexed column
// Bad - filesort on created_at with no index
Product::where('category_id', 5)->orderBy('created_at')->get();
// Fix: add index(['category_id', 'created_at'])
5. Repeated count() on the same relationship
// Bad - one COUNT query per post in the loop
foreach ($posts as $post) {
    echo $post->comments()->count();
}
// Good - one query for all counts
$posts = Post::withCount('comments')->get();
// Access: $post->comments_count
🚀 Dev vs Production - What to Enable and When
| Tool | Dev | Staging | Production |
|---|---|---|---|
| `DB::listen()` in ServiceProvider | ✅ threshold log | ✅ warning only | ❌ remove |
| `QueryLogger` middleware | ✅ always on | ✅ header-gated | ❌ disabled |
| `DB::enableQueryLog()` in tests | ✅ assertion tool | ✅ CI checks | N/A |
| MySQL slow query log | ✅ optional | ✅ recommended | ✅ always |
The golden rule: never leave DB::enableQueryLog() running globally in production. It buffers every query in memory for every request, and in long-running workers (queue workers, Octane) that memory is never released between jobs - effectively a memory leak.
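If you do enable the log on demand - as the middleware above does - pair it with explicit cleanup so the buffer cannot outlive the code you were inspecting. A sketch using the real facade methods:

```php
use Illuminate\Support\Facades\DB;

DB::enableQueryLog();

try {
    // ... run the code under inspection ...
    $queries = DB::getQueryLog();
} finally {
    DB::disableQueryLog(); // stop buffering new queries
    DB::flushQueryLog();   // release what was already collected
}
```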
For production, the MySQL slow query log (next section) is the right tool.
🧱 MySQL Slow Query Log - The Full Picture
When Laravel-level logging is off, MySQL can still catch slow queries at the database level. This runs regardless of your application code.
Enable it in my.cnf (or my.ini on Windows):
# /etc/mysql/mysql.conf.d/mysqld.cnf
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = 1
- `long_query_time = 1` - log queries taking more than 1 second
- `log_queries_not_using_indexes = 1` - catches full scans even if they're fast now
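On a server you would rather not restart, the same settings can be toggled at runtime (with sufficient privileges). Values set with SET GLOBAL are lost on restart; MySQL 8.0+ adds SET PERSIST, which writes the value to mysqld-auto.cnf so it survives:

```sql
-- Runtime equivalent of the my.cnf settings above - no restart needed.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL log_queries_not_using_indexes = 'ON';
-- Note: the global long_query_time applies to connections
-- opened after the change, not to existing sessions.

-- MySQL 8.0+ only: keep the setting across restarts
SET PERSIST long_query_time = 1;
```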
Restart MySQL and watch the log:
sudo tail -f /var/log/mysql/slow.log
The output includes the query, user, host, and exact timing:
# Time: 2026-03-13T08:42:11
# Query_time: 2.341 Lock_time: 0.000 Rows_sent: 1 Rows_examined: 94823
SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at DESC LIMIT 1;
Correlate this with your QueryLogger output - if the same query appears in both, you have confirmed the bottleneck. Use mysqldumpslow to aggregate and rank:
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log
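Two other sort modes of the bundled mysqldumpslow tool are worth knowing when total time alone doesn't tell the story:

```shell
# -s c  sorts by occurrence count - finds the noisiest queries
mysqldumpslow -s c -t 10 /var/log/mysql/slow.log

# -s at sorts by average query time - finds the consistently slow ones
mysqldumpslow -s at -t 10 /var/log/mysql/slow.log
```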
This shows the 10 slowest queries by total time. In Docker, add the slow log config to your MySQL service:
# docker-compose.yml
services:
  mysql:
    image: mysql:8.0
    command: >
      --slow_query_log=1
      --slow_query_log_file=/var/log/mysql/slow.log
      --long_query_time=1
      --log_queries_not_using_indexes=1
    volumes:
      - mysql_logs:/var/log/mysql
✅ Conclusion
- Use `DB::listen()` for threshold-based logging during development - filter to `> 100ms` and let it write to `laravel.log`
- Use `DB::enableQueryLog()` in tests to enforce maximum query counts and catch N+1 regressions before they reach production
- The `QueryLogger` middleware is your on-demand tool for staging - activate with a header, get a JSON file, disable without a deploy
- `EXPLAIN` tells you what MySQL plans to do; `EXPLAIN ANALYZE` tells you what it actually did - always run both before and after adding an index
- Fix the query first, add an index second, cache last - in that order
- MySQL slow query log catches what your application-level tooling misses, especially in high-traffic windows
Follow me on LinkedIn for more Laravel tips! Would you like to see a deeper dive into query optimization with composite indexes and covering indexes? Let me know in the comments below!