Say you have a job that syncs a user's data from an external API. You dispatch it every time the user updates their profile. Now imagine the user clicks "Save" three times in a row.
Without any safeguards, three identical jobs hit the queue and all three try to sync the same user at the same time. They overwrite each other, make redundant API calls, and in the worst case, corrupt your data.
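The failure mode looks something like this (SyncUserData is a hypothetical job standing in for your sync logic):

```php
// Three rapid "Save" clicks -- the controller dispatches three identical jobs.
// Without safeguards, three workers may sync the same user concurrently.
SyncUserData::dispatch($user);
SyncUserData::dispatch($user); // duplicate
SyncUserData::dispatch($user); // duplicate
```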
Laravel gives you four tools to handle situations like this: the ShouldBeUnique interface, and three job middleware -- WithoutOverlapping, RateLimited, and ThrottlesExceptions. They solve different problems, and understanding when to use each one will save you from some painful debugging sessions.
This post assumes you are comfortable with Laravel Jobs and Queues. If you want a deeper look at structuring Job classes, handling failures, and designing for idempotency, check out the Jobs chapter in Clean Code in Laravel.
ShouldBeUnique
Before diving into middleware, there is a built-in interface that handles the most common case: preventing duplicate jobs from being dispatched in the first place.
When a job implements ShouldBeUnique, Laravel checks whether an identical job is already on the queue or currently being processed. If it is, the new dispatch is silently ignored. The job never even makes it onto the queue.
Think of it as a "do not even queue it twice" rule.
When to use it
Use ShouldBeUnique when dispatching the same job multiple times is wasteful and the result would be identical. If the job already exists on the queue, there is no point adding another one.
Real-world scenario
You have a job that rebuilds a product's search index after it is updated. The product page has an "Update" button, and the user clicks it five times. You only need one reindex, not five:
// App\Jobs\UpdateSearchIndex
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
class UpdateSearchIndex implements ShouldQueue, ShouldBeUnique
{
use Queueable;
public int $uniqueFor = 3600;
public function __construct(private Product $product)
{
}
public function uniqueId(): string
{
return 'search-index-update-product-'.$this->product->id;
}
public function handle(): void
{
// Rebuild the search index for this product...
}
}
The uniqueId method scopes the uniqueness to a specific product. Two jobs for different products are both dispatched. Two jobs for the same product -- only the first one goes through.
The $uniqueFor property sets how long the lock lasts (in seconds). After one hour, even if the first job has not finished, a new dispatch is allowed. This prevents stale locks from blocking your queue permanently.
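With that in place, repeated dispatches for the same product collapse into one. A sketch of the behavior described above:

```php
UpdateSearchIndex::dispatch($product);      // queued
UpdateSearchIndex::dispatch($product);      // silently ignored -- same uniqueId is locked
UpdateSearchIndex::dispatch($otherProduct); // queued -- different uniqueId
```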
What if the lock should release sooner?
By default, the lock is held until the job finishes processing or fails all retry attempts. If you want the lock to release the moment a worker picks up the job (allowing a new dispatch to queue while the current one is still running), use ShouldBeUniqueUntilProcessing instead:
use Illuminate\Contracts\Queue\ShouldBeUniqueUntilProcessing;
use Illuminate\Contracts\Queue\ShouldQueue;
class UpdateSearchIndex implements ShouldQueue, ShouldBeUniqueUntilProcessing
{
// ...
}
This is useful when you want to guarantee at most one copy in the queue, but you are fine with overlapping execution.
ShouldBeUnique requires a cache driver that supports atomic locks. The memcached, redis, dynamodb, database, file, and array drivers all support them.
WithoutOverlapping
This middleware prevents two instances of the same job from running at the same time. It uses an atomic lock behind the scenes -- if a lock already exists for the given key, the second job is released back onto the queue.
Think of it as a "one at a time" rule.
When to use it
Use WithoutOverlapping when the job modifies a shared resource and running two instances simultaneously would cause conflicts.
Real-world scenario
You have a job that recalculates a user's credit score. Two of these jobs running at the same time for the same user would produce inconsistent results:
// App\Jobs\RecalculateCreditScore
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Queue\Middleware\WithoutOverlapping;
class RecalculateCreditScore implements ShouldQueue
{
use Queueable;
public int $tries = 5;
public function __construct(private User $user)
{
}
public function middleware(): array
{
return [new WithoutOverlapping($this->user->id)];
}
public function handle(): void
{
$this->user->recalculateScore();
}
}
The key you pass to WithoutOverlapping determines the scope of the lock. In this case, it is the user's ID. Two jobs for different users run in parallel without any issue. Two jobs for the same user do not.
What happens to the blocked job?
By default, the blocked job is released back onto the queue with zero delay and retried on its next attempt. This consumes an attempt, so make sure you set $tries high enough to account for that.
You can control the release delay:
public function middleware(): array
{
return [
(new WithoutOverlapping($this->user->id))->releaseAfter(60),
];
}
If the duplicate job is genuinely unnecessary and you want to discard it entirely, use dontRelease:
public function middleware(): array
{
return [
(new WithoutOverlapping($this->user->id))->dontRelease(),
];
}
Lock expiration
What if your job crashes and never releases the lock? By default, the lock's expiresAfter is 0, meaning it has no automatic expiration. If the process is killed before the finally block runs, the lock stays forever. Always set an explicit expiration:
public function middleware(): array
{
return [
(new WithoutOverlapping($this->user->id))->expireAfter(180),
];
}
This tells Laravel to release the lock after 3 minutes regardless of whether the job finished.
Like ShouldBeUnique, the WithoutOverlapping middleware requires a cache driver that supports atomic locks.
ShouldBeUnique vs WithoutOverlapping
Now that you understand both, here is how they differ:
| | ShouldBeUnique | WithoutOverlapping |
|---|---|---|
| When it acts | At dispatch time (before the job enters the queue) | At execution time (when the worker picks up the job) |
| What it prevents | Duplicate jobs from being queued | Two jobs from running at the same time |
| Blocked job | Silently discarded -- never queued | Released back onto the queue and retried later |
ShouldBeUnique says: "if this job is already waiting or running, do not bother adding another one." WithoutOverlapping says: "if this job is currently running, wait your turn."
Use ShouldBeUnique alone when the duplicate dispatch is truly unnecessary -- like reindexing a product. But if the second job carries different intent (a second payment attempt, a second status check), you want it to run, just not at the same time. That is where WithoutOverlapping comes in.
You can also combine them. Use ShouldBeUnique to prevent duplicates from piling up in the queue, and WithoutOverlapping to prevent concurrent execution of any that do get through:
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Queue\Middleware\WithoutOverlapping;
class RecalculateCreditScore implements ShouldQueue, ShouldBeUnique
{
use Queueable;
public int $tries = 5;
public function __construct(private User $user)
{
}
public function uniqueId(): string
{
return (string) $this->user->id;
}
public function middleware(): array
{
return [
(new WithoutOverlapping($this->user->id))->expireAfter(180),
];
}
public function handle(): void
{
$this->user->recalculateScore();
}
}
The duplicate never queues, and even if a race condition slips one through, it is released back instead of running in parallel.
RateLimited
This middleware limits how many times a job can run within a time window. Unlike WithoutOverlapping, it does not care whether the job is currently running. It cares about how often it runs.
Think of it as a "not too many per hour" rule.
When to use it
Use RateLimited when you interact with an external API that enforces rate limits, or when you want to control resource consumption across many users.
Real-world scenario
Your application lets users export their data as a PDF. Generating a PDF is expensive, and you do not want a single user to trigger 50 exports in an hour.
First, define the rate limiter in your AppServiceProvider:
// App\Providers\AppServiceProvider
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Support\Facades\RateLimiter;
public function boot(): void
{
RateLimiter::for('exports', function (object $job) {
return Limit::perHour(5)->by($job->user->id);
});
}
Then attach it to the job:
// App\Jobs\GeneratePdfExport
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Queue\Middleware\RateLimited;
class GeneratePdfExport implements ShouldQueue
{
use Queueable;
public int $tries = 10;
public function __construct(public User $user) // public so the rate limiter closure can read it
{
}
public function middleware(): array
{
return [new RateLimited('exports')];
}
public function handle(): void
{
// Generate the PDF...
}
}
The limiter allows 5 exports per hour per user. The sixth job gets released back onto the queue with an appropriate delay. Once the window resets, the released job executes normally.
Differentiate between user types
The rate limiter receives the job instance, so you can adjust limits based on your business logic:
RateLimiter::for('exports', function (object $job) {
return $job->user->hasSubscription('premium')
? Limit::none()
: Limit::perHour(5)->by($job->user->id);
});
Premium users get unlimited exports. Free users get five per hour.
Discarding rate-limited jobs
If a rate-limited job should not be retried at all, use dontRelease:
public function middleware(): array
{
return [(new RateLimited('exports'))->dontRelease()];
}
ThrottlesExceptions
This middleware is different from WithoutOverlapping and RateLimited. It does not prevent jobs from running. It reacts to failures.
When a job keeps throwing exceptions, ThrottlesExceptions pauses all further attempts for a set duration. Instead of hammering a failing service with retries every few seconds, the job backs off and waits.
Think of it as a "cool down after repeated failures" rule.
When to use it
Use ThrottlesExceptions when your job depends on a third-party service that might go down temporarily. Instead of burning through all your retries in seconds, you give the service time to recover.
Real-world scenario
You have a job that sends an SMS via Twilio. If Twilio is having an outage, retrying every second is pointless and wastes your resources:
// App\Jobs\SendSmsNotification
use DateTime;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Queue\Middleware\ThrottlesExceptions;
class SendSmsNotification implements ShouldQueue
{
use Queueable;
public function __construct(
private User $user,
private string $message,
) {
}
public function middleware(): array
{
return [new ThrottlesExceptions(maxAttempts: 5, decaySeconds: 10 * 60)];
}
public function retryUntil(): DateTime
{
return now()->addHours(6);
}
public function handle(): void
{
// Call Twilio API to send the SMS...
}
}
Here is what happens:
- The job runs and throws an exception. It is released back onto the queue immediately.
- This repeats until 5 consecutive exceptions occur.
- After the 5th exception, the job pauses for approximately 10 minutes.
- After the pause, it tries again. If the service is back, it succeeds and the exception counter resets. If not, the cycle repeats.
- The whole process stops after 6 hours (retryUntil).
Use retryUntil instead of $tries when pairing with ThrottlesExceptions. A fixed number of tries gets consumed quickly, and your job fails before the service recovers.
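If the immediate pre-throttle releases are too aggressive, ThrottlesExceptions also exposes a backoff method (the argument is in minutes) that delays each retry even before the throttle kicks in:

```php
public function middleware(): array
{
    return [
        // Wait 1 minute between attempts instead of releasing immediately
        (new ThrottlesExceptions(5, 10 * 60))->backoff(1),
    ];
}
```

This smooths out the first few retries without changing the overall throttle behavior.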
Sharing the throttle across jobs
If multiple jobs call the same API, you can share a throttle bucket so they all back off together:
public function middleware(): array
{
return [
(new ThrottlesExceptions(5, 10 * 60))->by('twilio-api'),
];
}
Now if your SendSmsNotification and VerifyPhoneNumber jobs both use this key, a Twilio outage throttles them all at once instead of each job discovering the outage independently.
Throttling specific exceptions
By default, every exception triggers the throttle. You can narrow it down:
use Illuminate\Http\Client\HttpClientException;
use Throwable;
public function middleware(): array
{
return [
(new ThrottlesExceptions(5, 10 * 60))->when(
fn (Throwable $e) => $e instanceof HttpClientException
),
];
}
Only HTTP client exceptions count toward the throttle. A TypeError in your code still fails the job immediately without affecting the throttle counter.
When to use which?
Here is a quick way to decide:
| Situation | Tool |
|---|---|
| The same job should not be queued more than once | ShouldBeUnique |
| Two instances of the same job must not run simultaneously | WithoutOverlapping |
| A job should only run N times per time window | RateLimited |
| A job keeps failing and you want it to back off | ThrottlesExceptions |
They are not mutually exclusive. You can combine them:
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Queue\Middleware\RateLimited;
use Illuminate\Queue\Middleware\ThrottlesExceptions;
use Illuminate\Queue\Middleware\WithoutOverlapping;
class GenerateReport implements ShouldQueue, ShouldBeUnique
{
use Queueable;
public function __construct(private User $user)
{
}
public function uniqueId(): string
{
return (string) $this->user->id;
}
public function middleware(): array
{
return [
(new WithoutOverlapping($this->user->id))->expireAfter(180),
new RateLimited('reports'),
new ThrottlesExceptions(5, 10 * 60),
];
}
public function handle(): void
{
// Generate the report...
}
}
This job is not queued twice for the same user, does not overlap during execution, respects rate limits, and backs off when exceptions pile up.
Summary
- ShouldBeUnique -- prevents duplicate jobs from entering the queue. The second dispatch is silently discarded. Use it when the duplicate would produce the exact same result.
- WithoutOverlapping -- prevents concurrent execution by using an atomic lock. The blocked job is released back onto the queue. Always set expireAfter to avoid permanent locks after a crash.
- RateLimited -- caps how many times a job executes within a time window. Define the limiter in AppServiceProvider using RateLimiter::for(), then attach it to the job.
- ThrottlesExceptions -- backs off after repeated failures instead of burning through retries. Pair it with retryUntil() for time-based retry windows.
- Tune your attempts -- WithoutOverlapping and RateLimited consume attempts when they release a job. Set $tries high enough or use retryUntil() to avoid premature failure.
- Combine them freely -- these tools solve different problems and work well together on the same job.