Cron vs Durable Scheduling: When Cron Stops Being Enough

Cron works for recurring system tasks. When scheduling becomes per-user, per-event, and needs retries, cancellation, and observability, durable scheduling is a better fit.

Cron is fine for recurring system tasks — cache warming, report generation, log rotation. It runs on a fixed cadence and does its job well.

It stops being enough when the work is tied to specific events. A reminder that fires 48 hours after signup. An invitation that expires in 7 days. A payment retry that needs backoff. These are not recurring cadences — they are per-event timers, and each one needs its own schedule, its own cancellation rules, and its own delivery guarantees.

Durable scheduling treats each event as its own timer: scheduled once, persisted, retried on failure, cancellable, and observable. No scanning. No coordination plumbing. No silent failures.

At a glance

| Dimension | Cron | Durable scheduling |
| --- | --- | --- |
| Time model | Fixed recurring cadence | Per-event schedule (exact time, relative delay, or local timezone) |
| State model | Stateless trigger — job checks what is due | Event-specific — each timer carries its own state |
| Scheduling precision | Minute-level; serverless cron can drift significantly under high load | Second-level with exact timestamps |
| Timezone and DST | Manual offset management; DST bugs cause double-runs or skipped jobs | Automatic DST handling with local-time scheduling |
| Cancellation | Add a cancelled flag, check at execution time | Cancel the specific timer by ID |
| Retries | Build your own retry logic | Built-in retries with configurable backoff, jitter, and per-hook overrides |
| Observability | None unless you build it | Per-delivery status tracking, attempt history, anomaly detection |
| Alerting | None — silent failures by default | Automatic failure alerts via email, Slack, or webhook |
| Infrastructure | Built into every Unix system; zero dependencies | Managed service — no infrastructure to operate |
| Best fit | System maintenance, cache warming, report generation | Reminders, expirations, retries, follow-ups, check-later workflows |

How cron works

Cron is built into every Unix system. You define a schedule expression, and the system runs the job at that cadence. No dependencies, no external services, no cost.

For fixed-cadence system tasks — vacuuming a database, rotating logs, generating a daily report — this is the right tool. If you manage fewer than a dozen scheduled tasks and they all run on fixed cadences, cron is simple and sufficient.

Platform cron options extend this to managed environments. Vercel, Cloudflare Workers, Railway, and Render all offer cron triggers. GitHub Actions supports scheduled workflows. For database-local work, pg_cron runs SQL directly in PostgreSQL without an external scheduler.

Cron’s strength is simplicity: a schedule expression and a command. When the problem fits that shape, nothing else is needed.

Where cron starts to break

The problems start when scheduling becomes event-specific.

The database-scan pattern

The most common workaround is a cron job that polls for due work: SELECT * FROM jobs WHERE run_at <= NOW(). This works at low volume. At tens of thousands of pending rows, it becomes a performance bottleneck. The polling interval creates a precision floor — poll every minute, and jobs can run up to 59 seconds late.

This is not cron anymore. It is a scheduling system built on top of cron, and it comes with all the maintenance that implies.
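The precision floor is easy to quantify. The sketch below simulates a poller that wakes on a fixed interval and measures how late a job can fire relative to its run_at time; the 60-second interval is illustrative, not a recommendation.

```python
# Illustration of the precision floor in the database-scan pattern:
# a poller that wakes every 60 seconds can run a job up to 59 seconds
# after its run_at time. Pure simulation -- no database involved.

POLL_INTERVAL = 60  # seconds between cron-driven polls


def worst_case_lateness(run_at: int, poll_interval: int = POLL_INTERVAL) -> int:
    """Seconds between run_at and the first poll tick at or after it,
    assuming polls fire at t = 0, poll_interval, 2 * poll_interval, ..."""
    first_poll_after = -(-run_at // poll_interval) * poll_interval  # ceiling division
    return first_poll_after - run_at


# A job due one second after a poll tick waits almost a full interval.
print(worst_case_lateness(61))   # 59
print(worst_case_lateness(120))  # 0
```

Shrinking the interval tightens the floor but multiplies query load, which is exactly the trade-off that makes this a scheduling system rather than a cron job.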

DST incidents

Timezone handling is manual with cron. A payment processing job scheduled at 1:30am Eastern runs twice when clocks fall back — once before the transition and once after, because the 1am hour repeats. Jobs scheduled between 2:00am and 2:59am during spring-forward never run at all, because that hour never occurs. These are well-known failure modes, not edge cases. Durable scheduling with local-time support handles the transitions automatically, but your handlers still need to be idempotent where it matters — no scheduling layer eliminates that responsibility.
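Both failure modes can be demonstrated with nothing but Python's standard library; the dates below are the 2025 US transitions for America/New_York.

```python
# The two cron DST failure modes, shown with Python's stdlib zoneinfo.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

NY = ZoneInfo("America/New_York")

# Spring forward (2025-03-09): 2:00-2:59am never occurs on the wall clock.
# A naive 2:30am schedule does not survive a round trip through UTC.
gap = datetime(2025, 3, 9, 2, 30, tzinfo=NY)
print(gap.astimezone(timezone.utc).astimezone(NY).hour)  # 3 -- the 2am slot is gone

# Fall back (2025-11-02): 1:30am occurs twice, one hour apart on the UTC line.
first = datetime(2025, 11, 2, 1, 30, tzinfo=NY)            # fold=0: the EDT occurrence
second = datetime(2025, 11, 2, 1, 30, fold=1, tzinfo=NY)   # fold=1: the EST occurrence
delta = second.astimezone(timezone.utc) - first.astimezone(timezone.utc)
print(delta)  # 1:00:00 -- a 1:30am cron job fires at both instants
```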

Lost state on deploy

In-memory schedulers — setTimeout, node-schedule, delayed queue jobs — lose their state on restart. If the timer is not persisted externally, it does not survive deploys, scaling events, or infrastructure changes.

Silent failures

Cron has no built-in alerting. A job fails, and nothing happens unless you have built monitoring around it. You find out about failures when a customer complains, not when the failure happens.

Platform cron limitations

Serverless cron options trade simplicity for constraints. GitHub Actions cron jobs can drift significantly under high load and are disabled on inactive repositories. Vercel Hobby cron timing is imprecise within the hour. And platform cron is built around recurring triggers: of the mainstream options, only AWS EventBridge Scheduler supports per-event scheduling via API, and it requires buying into full AWS infrastructure.

Multi-instance overlap

Without advisory locks or Kubernetes concurrencyPolicy, cron jobs running on multiple instances execute simultaneously. The result is duplicate work, race conditions, and data inconsistencies that are difficult to debug.
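The usual guard is a non-blocking exclusive lock taken at the top of the job. The sketch below uses a file lock, which only protects overlapping runs on a single host; across hosts you would need a shared lock such as PostgreSQL's pg_advisory_lock. It is an illustration of the pattern, not a complete solution.

```python
# One way to stop overlapping cron runs on a single host: an exclusive,
# non-blocking file lock. Across machines a shared lock (for example
# PostgreSQL's pg_advisory_lock) plays the same role.
import fcntl


def try_acquire(lockfile: str):
    """Return the open handle if we won the lock, else None (skip this run)."""
    f = open(lockfile, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f  # hold the handle open for the duration of the job
    except BlockingIOError:
        f.close()
        return None


# First instance wins; a second attempt while the lock is held is skipped.
```

Note that this is still coordination code you write and maintain yourself — exactly the plumbing a managed scheduler absorbs.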

Cancellation is an afterthought

“Cancel this specific user’s pending reminder” with cron means adding a cancelled flag in the database and checking it at execution time. You cannot cancel a specific future cron invocation — only skip it when it fires. The cancelled state is coordination code you maintain alongside the scheduling logic.

How durable scheduling changes the model

With durable scheduling, each event gets its own timer. You schedule it once at event time and the system handles the rest.

Schedule once, no scanning. When a user signs up, schedule a reminder for 48 hours later. When an invitation is created, schedule its expiry for 7 days out. No polling loop, no “what’s due now” query. Each timer fires at its own time.

Cancel or update the specific timer. If the user completes onboarding before the reminder fires, cancel that specific timer by ID. No flag-checking, no database cleanup.
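As plain HTTP, the schedule-once / cancel-by-ID model is two small requests. The endpoint path and field names below are assumptions for illustration, modeled on Posthook's API style rather than quoted from its reference; the requests are built but never sent.

```python
# Sketch of the schedule-once / cancel-by-id model as plain HTTP requests.
# Endpoint path and field names are illustrative assumptions, not a
# verbatim API reference; nothing is actually sent over the network.
import json
from datetime import datetime, timedelta, timezone

API = "https://api.posthook.io/v1/hooks"  # assumed base endpoint


def schedule_reminder(user_id: str, signup_time: datetime) -> dict:
    """Build one timer per signup: fire a reminder 48 hours later."""
    return {
        "method": "POST",
        "url": API,
        "body": json.dumps({
            "path": "/webhooks/signup-reminder",  # your receiving route
            "postAt": (signup_time + timedelta(hours=48)).isoformat(),
            "data": {"userId": user_id},
        }),
    }


def cancel_reminder(hook_id: str) -> dict:
    """Cancel the specific pending timer -- no flags, no cleanup query."""
    return {"method": "DELETE", "url": f"{API}/{hook_id}"}


req = schedule_reminder("u_123", datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc))
print(json.loads(req["body"])["postAt"])  # 2025-06-03T12:00:00+00:00
```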

Built-in retries with backoff. If a delivery fails, retries happen automatically with configurable backoff, jitter, and delay. Each hook can override project-level retry settings — a payment retry might use a different strategy than a notification retry.
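To make "configurable backoff and jitter" concrete, here is a typical exponential schedule; the exact policy a given scheduler applies may differ, and the base, factor, and jitter values are illustrative.

```python
# A typical exponential-backoff-with-jitter retry schedule. The parameter
# values are illustrative, not any particular scheduler's defaults.
import random


def retry_delays(base=30.0, factor=2.0, attempts=5, jitter=0.2, rng=None):
    """Delay before each retry: base * factor**n, spread by +/- jitter fraction
    so that simultaneous failures do not all retry at the same instant."""
    rng = rng or random.Random()
    return [
        base * factor**n * (1 + rng.uniform(-jitter, jitter))
        for n in range(attempts)
    ]


# Roughly 30s, 60s, 120s, 240s, 480s, each nudged by up to 20%.
print([round(d) for d in retry_delays(rng=random.Random(0))])
```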

Per-delivery observability. Every delivery is tracked: pending, retry, completed, or failed. Attempt history shows what happened and when. No building a separate monitoring system.

Anomaly detection and alerting. When failure rates for an endpoint spike above its historical baseline, alerts fire automatically via email, Slack, or webhook. Recovery notifications follow when the endpoint returns to normal.

Timezone-aware scheduling. postAtLocal with a timezone schedules delivery in the user’s local time and handles DST transitions automatically. No manual UTC conversion, no spring-forward bugs.
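The reason local-time scheduling needs dedicated support: the same wall-clock time maps to different UTC instants depending on the season. A postAtLocal-style API does this conversion and the DST bookkeeping for you; the stdlib sketch below just shows the underlying shift.

```python
# "9am in New York" is a different UTC instant in winter than in summer,
# which is why a fixed UTC cron entry cannot express it correctly year-round.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo


def local_9am_in_utc(year: int, month: int, day: int,
                     tz: str = "America/New_York") -> datetime:
    """Convert 9:00am local wall-clock time on a given date to UTC."""
    local = datetime(year, month, day, 9, 0, tzinfo=ZoneInfo(tz))
    return local.astimezone(timezone.utc)


print(local_9am_in_utc(2025, 1, 15).hour)  # 14 -- EST is UTC-5
print(local_9am_in_utc(2025, 7, 15).hour)  # 13 -- EDT is UTC-4
```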

Survives deploys and restarts. Timers are persisted in PostgreSQL with multi-AZ redundancy. Deploys, restarts, and scaling events do not lose scheduled work.

What about recurring work?

Cron’s core strength is recurring cadences. Posthook handles these too — with better tooling.

Sequences support recurring workflows through config-as-code. Define schedules in posthook.toml, version-control them, and deploy with posthook apply:

  • Calendar scheduling with timezone, DST handling, onDays, onDates, negative dates (like -1 for last day of month), and everyN intervals
  • Interval scheduling with fixed cadences — minutes, hours, days, months
  • Multi-step workflows where downstream steps wait for prerequisites to complete
  • Per-step retry overrides so each step can have its own failure handling
  • Full observability — every run is tracked with delivery status, not silent like cron
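A sequence definition in posthook.toml might look like the following. This is a hypothetical sketch: apart from the features named above (timezone, onDays, everyN, per-step retries), the key names and structure are illustrative assumptions, not documented syntax.

```toml
# Hypothetical posthook.toml sketch. Key names beyond the features named
# in the text (timezone, onDays, everyN, per-step retries) are assumptions.
[sequences.weekly-report]

  [sequences.weekly-report.schedule.calendar]
  timezone = "America/New_York"   # DST transitions handled automatically
  onDays = ["monday"]
  at = "07:00"

  [[sequences.weekly-report.steps]]
  path = "/jobs/build-report"

  [[sequences.weekly-report.steps]]
  path = "/jobs/email-report"     # waits for the previous step to complete
  retries = { maxAttempts = 5 }   # per-step retry override
```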

This means you can use Posthook for both per-event timing and recurring operational work without maintaining separate cron infrastructure.

When to choose each

Cron is the right choice when:

  • The task is recurring and system-level — vacuuming, log rotation, cache warming
  • Exact per-event timing does not matter
  • You have a small number of scheduled tasks
  • The work is pure SQL and pg_cron handles it
  • You are not building coordination logic on top of the schedule

Durable scheduling is the right choice when:

  • Each event creates its own timer — signup triggers a reminder, order creates an expiry
  • The work needs retries, observability, or cancellation
  • Deploys, restarts, or scaling would otherwise lose scheduled state
  • Per-event timing precision matters — “at this exact time,” not “sometime in the next minute”
  • Timezone-aware delivery with DST handling is needed
  • You want alerting on failures rather than silent drops

You can use both. Keep cron for system maintenance. Use durable scheduling for event-specific timing. They solve different problems and work fine side by side.


Ready to get started?

Create your free account and start scheduling hooks in minutes. No credit card required.