Goal
Enqueue jobs to a specific function by name with configurable retries, concurrency limits, FIFO ordering, and dead-letter support. All named queues are defined centrally in iii-config.yaml. For help deciding between named and topic-based queues, see When to use which.
Named queues use the Enqueue trigger action. Refer to Trigger Actions to learn more.

Enable the Queue module
iii-config.yaml
For complete configuration options please refer to Queue module reference.
Steps
1. Define named queues in config
Declare one or more named queues under queue_configs. Each queue has independent retry, concurrency, and ordering settings.
iii-config.yaml
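A hedged sketch of what the declaration might look like. Only the field names shown in this guide (queue_configs, type, message_group_field, backoff_ms, concurrency) are grounded; the queue names are made up, and the exact schema lives in the Queue module reference.

```yaml
# Sketch only — see the Queue module reference for the exact schema.
queue_configs:
  reports:            # standard queue: concurrent, unordered
    concurrency: 5
    backoff_ms: 1000
  ledger:             # FIFO queue: ordered per message group
    type: fifo
    message_group_field: account_id
    backoff_ms: 2000
```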
FIFO queues enforce strict ordering within a queue and require a message_group_field to order on. Queues can also set backoff_ms for exponential retry delays; both are covered in the steps below. For full configuration options, refer to the Queue module reference.

2. Enqueue work via trigger action
From any function, enqueue a job by calling trigger() with TriggerAction.Enqueue and the target queue name. The caller receives an acknowledgement (messageReceiptId) once the engine accepts the job — it does not wait for processing.
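A minimal TypeScript sketch of the call shape described above. The TriggerAction values, the trigger() signature, and the "reports" queue name are assumptions; a local stub stands in for the iii runtime so the snippet is self-contained.

```typescript
// Sketch only: a local stub stands in for the iii runtime's trigger().
// The real signature may differ — see Trigger Actions.
type EnqueueResult = { messageReceiptId: string };

enum TriggerAction {
  Enqueue = "enqueue",
}

// Stub: the real engine validates the queue name, persists the job,
// and returns a receipt once it has accepted (not processed) the job.
async function trigger(
  queue: string,
  action: TriggerAction,
  payload: unknown,
): Promise<EnqueueResult> {
  return { messageReceiptId: `rcpt-${Math.random().toString(36).slice(2)}` };
}

// Enqueue a report job; resolves as soon as the engine accepts it.
async function submitReport(userId: string): Promise<string> {
  const { messageReceiptId } = await trigger("reports", TriggerAction.Enqueue, {
    userId,
  });
  return messageReceiptId;
}
```

Note that the promise resolves on acceptance, not on completion of the job.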
The target function receives the enqueued payload as its input — it does not need to know it was invoked via a queue.
3. Handle the enqueue result
The enqueue call can fail synchronously if the queue name is unknown or FIFO validation fails. Always handle the result.

The call fails when:

- The queue name does not exist in queue_configs
- A FIFO queue's message_group_field is missing or null in the payload
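The two failure modes can be sketched as a synchronous validation step. The in-memory config shape below is hypothetical and exists only to mirror the two rules listed above.

```typescript
// Hypothetical in-memory mirror of queue_configs, for illustration only.
type QueueConfig = { type?: "fifo"; messageGroupField?: string };

const queueConfigs: Record<string, QueueConfig> = {
  reports: {},
  ledger: { type: "fifo", messageGroupField: "account_id" },
};

// Returns an error message for the two synchronous failure modes, or null.
function validateEnqueue(
  queue: string,
  payload: Record<string, unknown>,
): string | null {
  const cfg = queueConfigs[queue];
  if (!cfg) return `queue "${queue}" does not exist in queue_configs`;
  const field = cfg.messageGroupField;
  if (cfg.type === "fifo" && field != null && payload[field] == null) {
    return `FIFO queue "${queue}" requires a non-null "${field}" in the payload`;
  }
  return null;
}
```

For example, `validateEnqueue("ledger", {})` reports the missing `account_id`, while the same payload is fine for a standard queue.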
4. Use FIFO queues for ordered processing
When processing order matters — for example, financial transactions for the same account — set type: fifo and specify message_group_field. Jobs sharing the same group value are processed strictly in order.
iii-config.yaml (excerpt)
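A hedged sketch of the excerpt; the ledger queue name is illustrative, and the exact schema is in the Queue module reference.

```yaml
# Sketch — exact schema in the Queue module reference.
queue_configs:
  ledger:
    type: fifo
    message_group_field: account_id
```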
Every payload sent to a FIFO queue must include the queue's message_group_field, and its value must be non-null.
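A TypeScript sketch of enqueueing two ordered jobs, assuming a ledger queue with message_group_field: account_id. The enqueue helper is a local stand-in for the runtime's trigger() call.

```typescript
// Local stand-in for trigger("ledger", TriggerAction.Enqueue, payload).
// Payloads carry the message_group_field (account_id) the queue orders on.
async function enqueueLedger(payload: {
  account_id: string;
  op: "deposit" | "withdraw";
  amount: number;
}): Promise<string> {
  return `rcpt-${payload.account_id}-${payload.op}`;
}

async function transferExample(): Promise<string[]> {
  // Same group value ("acct_A"), so these run strictly in this order.
  const receipts: string[] = [];
  receipts.push(await enqueueLedger({ account_id: "acct_A", op: "deposit", amount: 100 }));
  receipts.push(await enqueueLedger({ account_id: "acct_A", op: "withdraw", amount: 40 }));
  return receipts;
}
```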
5. Configure retries and backoff
Every named queue retries failed jobs automatically. Backoff is exponential:

| Attempt | backoff_ms: 1000 | backoff_ms: 2000 |
|---|---|---|
| 1 | 1 000 ms | 2 000 ms |
| 2 | 2 000 ms | 4 000 ms |
| 3 | 4 000 ms | 8 000 ms |
| 4 | 8 000 ms | 16 000 ms |
| 5 | 16 000 ms | 32 000 ms |
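The table follows a simple doubling rule, delay = backoff_ms × 2^(attempt − 1), which can be written as:

```typescript
// Exponential backoff as in the table above: the first retry waits
// backoff_ms, and each subsequent attempt doubles the delay.
function backoffDelayMs(backoffMs: number, attempt: number): number {
  return backoffMs * 2 ** (attempt - 1); // attempt is 1-based
}
// backoffDelayMs(1000, 3) → 4000, matching row 3 of the table
```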
iii-config.yaml (excerpt)
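A hedged sketch of the excerpt; only backoff_ms is taken from this guide, and the exact schema is in the Queue module reference.

```yaml
# Sketch — exact schema in the Queue module reference.
queue_configs:
  reports:
    backoff_ms: 1000   # first retry after 1s, then 2s, 4s, 8s, ...
```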
See Manage Failed Triggers for DLQ inspection and redrive.
6. Control concurrency
The concurrency field sets the maximum number of jobs the engine processes simultaneously from a single queue (per engine instance).
iii-config.yaml (excerpt)
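A hedged sketch of the excerpt, using only field names from this guide; the exact schema is in the Queue module reference.

```yaml
# Sketch — exact schema in the Queue module reference.
queue_configs:
  email:
    concurrency: 3   # at most 3 jobs in-flight per engine instance
```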
- Standard queues: the engine pulls up to concurrency jobs simultaneously.
- FIFO queues: the engine processes one job at a time (prefetch=1) to preserve ordering, regardless of the concurrency value.
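The behavior can be illustrated with a small worker-pool sketch (not the engine's actual implementation): at most limit tasks run at once, and limit = 1 reproduces the FIFO prefetch=1 behavior.

```typescript
// Illustration of a concurrency cap, not the engine's implementation.
// `limit` workers pull tasks from a shared cursor; at most `limit`
// tasks are ever in flight, and limit = 1 degenerates to strict FIFO.
async function runWithConcurrency<T>(
  tasks: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  const worker = async () => {
    while (next < tasks.length) {
      const i = next++; // claim the next task index
      results[i] = await tasks[i]();
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
}
```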
Result
Jobs are enqueued and acknowledged immediately — the caller receives a messageReceiptId without waiting for processing. The engine delivers each job to the target function, retries failures with exponential backoff, and routes jobs that exhaust their retries to the dead-letter queue. Standard queues process jobs concurrently; FIFO queues guarantee per-group ordering.
For a detailed comparison of standard and FIFO queue behavior — including processing model, ordering guarantees, and flow diagrams — see the Queue module reference. For retry and dead-letter flow, see Retry and dead-letter flow.
Real-World Scenarios
HTTP API to Queue Pipeline
The most common pattern — an HTTP endpoint accepts a request, responds immediately, and offloads the actual work to a queue. This keeps API response times fast regardless of how long downstream processing takes.

iii-config.yaml
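A sketch of the handler side. The endpoint shape and the "video-processing" queue name are made up for illustration, and a local stub stands in for the runtime's trigger(queue, TriggerAction.Enqueue, payload) call.

```typescript
// Hypothetical handler: accept the request, enqueue, respond 202 at once.
type HttpResponse = { status: number; body: Record<string, unknown> };

// Local stand-in for trigger(queue, TriggerAction.Enqueue, payload).
async function enqueue(queue: string, payload: unknown): Promise<string> {
  return `rcpt-${queue}-${Date.now()}`;
}

async function handleSubmit(body: { videoUrl: string }): Promise<HttpResponse> {
  const messageReceiptId = await enqueue("video-processing", body);
  // The heavy work happens later, in the queue's target function.
  return { status: 202, body: { accepted: true, messageReceiptId } };
}
```

Responding 202 Accepted with the receipt lets the client poll or correlate later without blocking on processing.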
Financial Transaction Ledger (FIFO)
Transactions for the same account must be applied in order to prevent balance inconsistencies; different accounts can process in parallel.

iii-config.yaml (excerpt)

The ledger queue groups messages on account_id:
The worker processes acct_A jobs strictly in order, while acct_B proceeds independently:
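The ordering guarantee can be simulated in plain TypeScript (an illustration, not the engine): jobs are partitioned by account_id, each partition runs strictly in submission order, and partitions run concurrently.

```typescript
type LedgerJob = { account_id: string; op: string };

// Simulation of FIFO-per-group delivery: within a group, jobs run in
// submission order; different groups run concurrently.
async function processFifo(
  jobs: LedgerJob[],
  handle: (job: LedgerJob) => Promise<void>,
): Promise<void> {
  const groups = new Map<string, LedgerJob[]>();
  for (const job of jobs) {
    const list = groups.get(job.account_id);
    if (list) list.push(job);
    else groups.set(job.account_id, [job]);
  }
  await Promise.all(
    [...groups.values()].map(async (groupJobs) => {
      for (const job of groupJobs) await handle(job); // strict order
    }),
  );
}
```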
Because the ledger queue is FIFO with message_group_field: account_id, the deposit for acct_A always completes before the withdrawal. Without FIFO ordering, the withdrawal could execute first and fail with “Insufficient funds” even though the deposit was submitted first.
Bulk Email with Rate Limiting
A marketing system sends thousands of emails. The SMTP provider has a rate limit. A standard queue with low concurrency prevents overloading the provider while retrying transient failures.

iii-config.yaml (excerpt)
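A hedged sketch of the excerpt; the values match the behavior described in this scenario (three in-flight jobs, retries starting at 5s), and the exact schema is in the Queue module reference.

```yaml
# Sketch — exact schema in the Queue module reference.
queue_configs:
  email:
    concurrency: 3     # at most 3 sends in flight
    backoff_ms: 5000   # retries after 5s, 10s, 20s, 40s, 80s
```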
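The retry delays quoted in this scenario follow the doubling rule; a quick sketch, assuming backoff_ms: 5000 and five attempts:

```typescript
// Retry schedule for backoff_ms: 5000, doubling each attempt.
function retrySchedule(backoffMs: number, attempts: number): number[] {
  return Array.from({ length: attempts }, (_, i) => backoffMs * 2 ** i);
}
// retrySchedule(5000, 5) → [5000, 10000, 20000, 40000, 80000]
```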
With concurrency: 3, at most three emails are in-flight at any time. Failed sends retry with exponential backoff (5s, 10s, 20s, 40s, 80s), protecting the SMTP provider from overload.
For adapter options (builtin, RabbitMQ, Redis), scenario-based recommendations, and the full queue configuration reference, see the Queue module reference.
Remember
Jobs are enqueued and acknowledged immediately — the caller receives a messageReceiptId without waiting for processing. The engine delivers each job to the target function, retries failures with exponential backoff, and routes permanently failed jobs to a dead-letter queue. Standard queues process jobs concurrently; FIFO queues guarantee per-group ordering.
Next Steps
Topic-Based Queues
Fan out messages to multiple subscribers with durable pub-sub
Trigger Actions
Understand synchronous, Void, and Enqueue invocation modes
Dead Letter Queues
Handle and redrive failed queue messages
Queue Module Reference
Full configuration reference for queues and adapters