Handling Large CSV Imports Efficiently in Laravel

Importing large CSV files in Laravel can quickly become a bottleneck if not handled properly. Memory exhaustion, timeouts, and slow database operations are common issues developers face, and they tend to surface in production at the worst possible moment.

This guide covers nine practical strategies to handle large CSV imports efficiently, keeping your application fast and stable no matter how much data you’re throwing at it.



Common Problems with Large CSV Imports

When dealing with large files (10MB or more, or tens of thousands of rows), the default approach of loading the file and processing it in one go simply doesn’t hold up. Here’s what you’re likely to run into:

  • Memory limit errors
  • Script execution timeouts
  • Slow database inserts
  • Server crashes due to heavy processing

Each of these has a specific fix. The strategies below address them one by one, and the final section shows how to combine them into a solid, production-ready stack.


1. Use Chunking Instead of Loading the Entire File

The most common mistake is reading the entire CSV into memory at once. For small files this is fine; for anything large, it’s a fast route to a memory limit error. The fix is to stream the file and process it one row at a time using PHP’s native fgetcsv() function:

if (($handle = fopen($filePath, 'r')) !== false) {
    while (($row = fgetcsv($handle, 1000, ',')) !== false) {
        // Process each row
    }
    fclose($handle);
}


Why this works:

  • Reads one row at a time
  • Keeps memory usage low
  • Suitable for very large files

This should be your baseline for any CSV import, regardless of file size. It costs you nothing and protects you from memory issues as data volumes grow.
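In practice you’ll usually also want to skip the header row and map each data row onto column names. A minimal sketch building on the snippet above (the generator wrapper and the readCsvRecords name are illustrative, not part of any library):

```php
// Stream a CSV file row by row, keyed by the header row's column names.
function readCsvRecords(string $filePath): iterable
{
    if (($handle = fopen($filePath, 'r')) !== false) {
        $header = fgetcsv($handle);                 // first row: column names
        while (($row = fgetcsv($handle)) !== false) {
            yield array_combine($header, $row);     // e.g. $record['email']
        }
        fclose($handle);
    }
}
```

Because it yields one record at a time, memory stays flat no matter how large the file is.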



2. Use Laravel Queues for Background Processing

Large imports should never run inside a web request. A request has a fixed execution time limit, and blocking the user while thousands of rows are processed makes for a poor experience, assuming the request doesn’t time out first.

The solution is to accept the file upload, dispatch a background job, and return immediately. The import happens asynchronously while the user gets on with something else.

Dispatch a Job:

ImportCsvJob::dispatch($filePath);

Job Example:

class ImportCsvJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function __construct(protected string $filePath)
    {
    }

    public function handle()
    {
        // Process the CSV at $this->filePath here
    }
}


Benefits:

  • No request timeout issues
  • Better user experience
  • Can retry failed jobs

The retry behaviour is particularly useful: if a job fails partway through due to a database hiccup or a malformed row, Laravel can automatically retry it without any manual intervention.
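Retry behaviour is configured directly on the job class. A sketch (the values are illustrative, but $tries, $backoff, and the failed() hook are standard Laravel queue options):

```php
class ImportCsvJob implements ShouldQueue
{
    public $tries = 3;      // attempt the job up to three times
    public $backoff = 60;   // wait 60 seconds between attempts

    public function handle()
    {
        // Process CSV here
    }

    public function failed(\Throwable $exception)
    {
        // Runs after all retries are exhausted: log or notify here
    }
}
```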



3. Use Chunked Database Inserts

Once you’re reading the file row by row, the next bottleneck is usually the database. Inserting records one at a time means one query per row; for a 50,000-row file, that’s 50,000 round trips to the database. Batching inserts collapses those into a handful of queries.

$batchSize = 1000;
$data = [];
foreach ($rows as $row) {
    $data[] = [
        'name' => $row[0],
        'email' => $row[1],
    ];
    if (count($data) === $batchSize) {
        DB::table('users')->insert($data);
        $data = [];
    }
}
// Insert remaining
if (!empty($data)) {
    DB::table('users')->insert($data);
}

Benefits:

  • Reduces database queries
  • Improves performance significantly

A batch size of 1,000 rows is a sensible default for most setups, but it’s worth tuning based on your row width and database configuration.



4. Use Lazy Collections (Laravel Feature)

Laravel’s LazyCollection is one of the more underused features in the framework. It uses PHP generators under the hood to stream data through a pipeline without loading it all into memory, and it pairs naturally with chunked processing.

use Illuminate\Support\LazyCollection;
LazyCollection::make(function () use ($filePath) {
    $handle = fopen($filePath, 'r');
    while (($row = fgetcsv($handle)) !== false) {
        yield $row;
    }
    fclose($handle);
})
->chunk(1000)
->each(function ($rows) {
    // Insert chunk into DB
});

Why it’s powerful:

  • Streams data instead of loading everything
  • Combines well with chunking

If you’re already comfortable with Laravel’s collection API, LazyCollection will feel immediately familiar — it just works without pulling everything into memory first.



5. Validate Data in Batches

Validation is necessary, but validating each row individually with a full Laravel validator call is expensive at scale. Instead, collect a batch of rows and validate them together in a single pass.

Validator::make($batchData, [
    '*.email' => 'required|email',
    '*.name' => 'required|string',
])->validate();

A couple of tips that make a real difference in production:

  • Validate before inserting the batch — don’t let bad data reach the database
  • Log invalid rows instead of stopping the entire import — valid rows should still be processed even if some are malformed

Logging failures rather than throwing exceptions keeps the import resilient and gives you a clear audit trail to follow up on problem records later.
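One way to implement that, building on the batch validator above: wildcard rules produce error keys like "3.email", so the first segment of each key gives you the failing row index. A sketch, assuming the standard Laravel Validator and Log facades:

```php
$validator = Validator::make($batchData, [
    '*.email' => 'required|email',
    '*.name'  => 'required|string',
]);

// Wildcard error keys look like "3.email"; the first segment is the row index
$badIndexes = collect($validator->errors()->keys())
    ->map(fn ($key) => (int) explode('.', $key, 2)[0])
    ->unique();

foreach ($badIndexes as $index) {
    Log::warning('Skipping invalid CSV row', ['row' => $batchData[$index]]);
}

// Keep only the rows that passed validation, then insert them
$validRows = collect($batchData)
    ->except($badIndexes->all())
    ->values()
    ->all();
```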



6. Use Database Transactions Carefully

Transactions are important for data integrity, but wrapping an entire large import in a single transaction is a mistake. If something fails at row 40,000, you lose everything. Transactions also hold locks; a long-running transaction can cause contention across the rest of your application.

Instead, use a transaction per batch:

  • Keeps each transaction short, so locks are released quickly
  • Prevents a single failure from rolling back the entire dataset

DB::transaction(function () use ($data) {
    DB::table('users')->insert($data);
});

This way, each batch either completes cleanly or rolls back on its own — without affecting the rows that have already been successfully imported.



7. Consider Using the Laravel Excel Package

If you’d rather not wire all of this up manually, the Maatwebsite Laravel Excel package handles chunk reading, queued imports, and model binding out of the box. It’s a well-maintained package that abstracts most of the complexity covered in this guide.

Install:

composer require maatwebsite/excel

Chunk Reading Example:

Excel::import(new UsersImport, $file);

class UsersImport implements ToModel, WithChunkReading
{
    public function model(array $row)
    {
        return new User([
            'name' => $row[0],
            'email' => $row[1],
        ]);
    }
    public function chunkSize(): int
    {
        return 1000;
    }
}

It also supports queued imports, failure handling, and progress events — so if your import requirements are complex, it’s worth evaluating before building everything from scratch.



8. Use Progress Tracking (Optional but Useful)

For imports that take more than a few seconds, users need to know something is happening. A background job with no feedback looks like a broken feature. Progress tracking doesn’t need to be complex — even a simple status indicator makes a significant difference to the experience.

For better UX, track import progress:

  • Store progress in DB or cache
  • Show progress bar on frontend
  • Update after each chunk

A common approach is to store a progress record in the cache at the start of the import, update it after each batch, and poll it from the frontend every few seconds to drive a progress bar.
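A minimal sketch of that pattern using Laravel’s cache (the csv_import:{id} key format, the one-hour TTL, and the progress route are illustrative choices, not a framework convention):

```php
use Illuminate\Support\Facades\Cache;

// In the job: update progress after each batch
$processed += count($data);
Cache::put("csv_import:{$importId}", [
    'processed' => $processed,
    'total'     => $totalRows,
], now()->addHour());

// In a controller the frontend polls every few seconds
public function progress(string $importId)
{
    return response()->json(
        Cache::get("csv_import:{$importId}", ['processed' => 0, 'total' => 0])
    );
}
```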



9. Optimise Server Configuration

Sometimes the bottleneck isn’t your code — it’s the environment it’s running in. If you’ve applied the strategies above and are still hitting walls, check your server config:

  • Increase memory_limit (if needed)
  • Increase max_execution_time (for CLI jobs)
  • Use queue workers (php artisan queue:work)

Running imports through queue workers on the CLI sidesteps the PHP web server limits entirely — CLI processes have their own (typically more generous) configuration, and they’re not subject to web request timeouts.
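Concretely, that usually means something like the following (the queue name, timeout, and memory values are illustrative starting points, not recommendations for every workload):

```shell
# Run a dedicated worker for imports, with a generous per-job timeout
php artisan queue:work --queue=imports --timeout=600 --tries=3

# Or raise the CLI memory limit for a single worker process
php -d memory_limit=512M artisan queue:work
```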



Final Thoughts

Handling large CSV imports in Laravel is all about streaming, batching, and offloading work. No single strategy solves every problem, but combined, they give you a robust pipeline that can handle millions of rows without putting your application at risk.

Here’s the recommended stack at a glance:

  • File reading → LazyCollection / fgetcsv
  • Processing → Queue Jobs
  • Database → Batch inserts
  • Validation → Chunk-based
  • Optional → Laravel Excel package

By combining these techniques, you can process even millions of rows efficiently without crashing your application. Start with the chunked file reading and queued jobs; those two alone will resolve the most common issues — then layer in the remaining optimisations as your import volumes grow.
