Goal
Use queues to run work asynchronously with built-in retries, dead-letter support, and concurrency control. iii offers two queue modes — topic-based (pub/sub fan-out) and named queues (direct function targeting) — so you can pick the delivery model that fits your use case.
What Are Queues
A queue sits between the code that produces work and the code that processes it. Instead of calling a function and waiting for it to finish, you hand the work to a queue. The queue stores the message, delivers it to a consumer, retries on failure, and routes permanently failed messages to a dead-letter queue (DLQ) for later inspection. This separation solves three problems:
- Speed — the producer responds immediately instead of blocking on slow downstream work.
- Reliability — transient failures (network blips, service restarts) are retried automatically instead of being lost.
- Load control — concurrency limits prevent consumers from overwhelming downstream systems.
When to Use Queues
| Scenario | Use a queue? | Why |
|---|---|---|
| HTTP handler must respond fast, but downstream work is slow | Yes | Enqueue the work and return 202 Accepted immediately |
| Multiple functions must react to the same event | Yes | Topic-based queues fan out to every subscriber |
| Work must survive process restarts | Yes | Queues persist messages and retry on failure |
| External API has rate limits | Yes | Concurrency control throttles parallel requests |
| Transactions for the same entity must be ordered | Yes | FIFO queues guarantee per-group ordering |
| You need the function’s return value right now | No | Use a synchronous trigger instead |
| The work is non-critical and losing it is acceptable | Maybe | TriggerAction.Void() is simpler if you don’t need retries |
Two Queue Modes
iii supports two ways to use queues. Both share the same adapter, retry engine, and DLQ infrastructure — they differ in how producers address consumers.
| | Topic-based | Named queues |
|---|---|---|
| Producer | trigger({ function_id: 'enqueue', payload: { topic, data } }) | trigger({ function_id, payload, action: TriggerAction.Enqueue({ queue }) }) |
| Consumer | Registers registerTrigger({ type: 'queue', config: { topic } }) | No registration — function is the target |
| Delivery | Fan-out: each subscribed function gets every message; replicas compete | Single target function per enqueue call |
| Config | Optional queue_config on trigger | queue_configs in iii-config.yaml |
| Best for | Durable pub/sub with retries and fan-out | Direct function invocation with retries, FIFO, DLQ |
Named queues use the Enqueue trigger action. If you are new to trigger actions, read Trigger Actions first.
Topic-Based Queues
Topic-based queues work like durable pub/sub: you publish a message to a topic, and every function subscribed to that topic receives a copy. If a function has multiple replicas, they compete on a shared per-function queue — only one replica processes each message.
Register consumers for a topic
Subscribe one or more functions to the same topic. Each function gets its own internal queue.
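A hedged TypeScript sketch of consumer registration. The `registerTrigger` stub below is a self-contained stand-in for the iii client library (its exact signature may differ); the handler bodies are illustrative:

```typescript
// Minimal in-memory stand-in for the iii SDK's trigger registration, so
// this sketch runs on its own. The real client wires handlers into the
// engine; here we only record which function subscribes to which topic.
type QueueTrigger = { type: 'queue'; config: { topic: string } };
type Handler = (data: unknown) => void;

const subscribers = new Map<string, Map<string, Handler>>();

function registerTrigger(functionId: string, trigger: QueueTrigger, handler: Handler): void {
  const byFunction = subscribers.get(trigger.config.topic) ?? new Map<string, Handler>();
  byFunction.set(functionId, handler); // each function gets its own internal queue
  subscribers.set(trigger.config.topic, byFunction);
}

// Subscribe two independent functions to the same topic.
registerTrigger('notify::email', { type: 'queue', config: { topic: 'order.created' } }, (data) => {
  // send a confirmation email for the new order
});
registerTrigger('audit::log', { type: 'queue', config: { topic: 'order.created' } }, (data) => {
  // append the event to the audit trail
});
```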
notify::email and audit::log are now subscribed to order.created. Every message published to that topic reaches both functions.
Publish events to the topic
From any function, publish a message using the builtin enqueue function. The engine fans it out to every subscribed function. The producer does not need to know which functions are subscribed — it only knows the topic name.
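A hedged sketch of the fan-out semantics. The in-memory `subscribe` and `enqueue` functions below stand in for the engine and the builtin enqueue; the payload shape is illustrative:

```typescript
// Stand-in for the engine: a registry of topic subscribers and a builtin
// `enqueue` that fans each message out to every subscribed function.
type Handler = (data: unknown) => void;
const subscribers = new Map<string, Handler[]>();

function subscribe(topic: string, handler: Handler): void {
  subscribers.set(topic, [...(subscribers.get(topic) ?? []), handler]);
}

// Builtin enqueue: every function subscribed to the topic gets a copy.
function enqueue(payload: { topic: string; data: unknown }): void {
  for (const handler of subscribers.get(payload.topic) ?? []) {
    handler(payload.data);
  }
}

const received: string[] = [];
subscribe('order.created', (data) => received.push(`email:${(data as { orderId: string }).orderId}`));
subscribe('order.created', (data) => received.push(`audit:${(data as { orderId: string }).orderId}`));

// The producer only knows the topic name, not the subscribers.
enqueue({ topic: 'order.created', data: { orderId: 'ord_42' } });
```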
Understand fan-out delivery
Topic-based queues use fan-out per function:
- Each distinct function subscribed to a topic receives a copy of every message.
- If a function has multiple replicas running, they compete on a shared per-function queue — only one replica processes each message.
Filter messages with conditions (optional)
Attach a condition function to a queue trigger to filter which messages reach the handler. The condition receives the message data and returns true or false. If it returns false, the handler is not called — no error is surfaced.
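A hedged sketch of the filtering behavior. The `deliver` function below is a stand-in for the engine's delivery step; the message shape and threshold are illustrative:

```typescript
// A condition function filters which messages reach the handler.
type Message = { amount: number };

const condition = (data: Message): boolean => data.amount >= 100; // only large orders
const handled: Message[] = [];
const handler = (data: Message): void => { handled.push(data); };

// Stand-in for the engine's delivery step: if the condition returns
// false, the handler is simply skipped — no error is surfaced.
function deliver(data: Message): void {
  if (condition(data)) handler(data);
}

deliver({ amount: 250 }); // passes the condition, handler runs
deliver({ amount: 30 });  // filtered out silently
```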
See Conditions for the full pattern including HTTP and state trigger conditions.
Named Queues
Named queues target a specific function directly. You define queue settings in iii-config.yaml and reference the queue name when enqueuing work.
Define named queues in config
Declare one or more named queues under queue_configs. Each queue has independent retry, concurrency, and ordering settings.
iii-config.yaml
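A hedged example assembled from the fields in the Queue Config Reference at the end of this page; the queue names and values are illustrative, not prescribed:

```yaml
queue_configs:
  email:
    type: standard
    max_retries: 5
    concurrency: 10
    backoff_ms: 1000
  ledger:
    type: fifo
    message_group_field: account_id
    max_retries: 3
    backoff_ms: 2000
```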
See the Queue module reference for every field, type, and default value.
Enqueue work via trigger action
From any function, enqueue a job by calling trigger() with TriggerAction.Enqueue and the target queue name. The caller receives an acknowledgement (messageReceiptId) once the engine accepts the job — it does not wait for processing.
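A hedged sketch of the enqueue call. `trigger` and `TriggerAction` below are self-contained stand-ins mirroring the shapes named in this page (`function_id`, `payload`, `TriggerAction.Enqueue({ queue })`, `messageReceiptId`); the real SDK signatures may differ:

```typescript
// Stand-in for the iii client's trigger() with TriggerAction.Enqueue —
// the real call hands the job to the engine and returns an acknowledgement.
type EnqueueAction = { kind: 'enqueue'; queue: string };
const TriggerAction = {
  Enqueue: (opts: { queue: string }): EnqueueAction => ({ kind: 'enqueue', queue: opts.queue }),
};

function trigger(req: { function_id: string; payload: unknown; action: EnqueueAction }): { messageReceiptId: string } {
  // The engine accepts the job immediately; processing happens later.
  return { messageReceiptId: `rcpt_${req.action.queue}_1` };
}

const ack = trigger({
  function_id: 'email::send',
  payload: { to: 'user@example.com', template: 'welcome' },
  action: TriggerAction.Enqueue({ queue: 'email' }),
});
// ack.messageReceiptId confirms the engine accepted the job — it does
// not mean the work has been processed yet.
```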
The target function receives the payload as its input — it does not need to know it was invoked via a queue.
Handle the enqueue result
The enqueue call can fail synchronously if the queue name is unknown or FIFO validation fails. Always handle the result.
Common rejection reasons:
- The queue name does not exist in queue_configs
- A FIFO queue’s message_group_field is missing or null in the payload
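A hedged sketch of result handling. The `enqueue` stub below reproduces the two synchronous failure modes listed above so the pattern is self-contained; the real call goes through `trigger()` and its error shape may differ:

```typescript
// Stand-in enqueue that rejects unknown queues or FIFO payloads missing
// the message group field, mirroring the synchronous failure modes.
const queues: Record<string, { type: 'standard' | 'fifo'; message_group_field?: string }> = {
  ledger: { type: 'fifo', message_group_field: 'account_id' },
};

type EnqueueResult = { ok: true; messageReceiptId: string } | { ok: false; error: string };

function enqueue(queue: string, payload: Record<string, unknown>): EnqueueResult {
  const cfg = queues[queue];
  if (!cfg) return { ok: false, error: `unknown queue: ${queue}` };
  if (cfg.type === 'fifo') {
    const group = payload[cfg.message_group_field!];
    if (group === undefined || group === null) {
      return { ok: false, error: `missing ${cfg.message_group_field} in payload` };
    }
  }
  return { ok: true, messageReceiptId: 'rcpt_1' };
}

// Always branch on the result instead of assuming success.
const bad = enqueue('ledger', { amount: 100 });           // no account_id → rejected
const good = enqueue('ledger', { account_id: 'acct_A' }); // accepted
```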
Use FIFO queues for ordered processing
When processing order matters — for example, financial transactions for the same account — set The payload must contain the field named by
type: fifo and specify message_group_field. Jobs sharing the same group value are processed strictly in order.iii-config.yaml (excerpt)
message_group_field, and its value must be non-null.- Node / TypeScript
- Python
- Rust
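A hedged sketch of the grouping rule: jobs sharing a message group value stay in submission order, while different groups are independent. The dispatcher below is a stand-in, not the engine's implementation:

```typescript
// Stand-in FIFO grouping: jobs with the same message group value are
// kept in submission order; different groups are independent.
type Job = { account_id: string; op: string };

function groupJobs(jobs: Job[]): Map<string, Job[]> {
  const groups = new Map<string, Job[]>();
  for (const job of jobs) {
    groups.set(job.account_id, [...(groups.get(job.account_id) ?? []), job]);
  }
  return groups; // each group is processed strictly in submission order
}

const groups = groupJobs([
  { account_id: 'acct_A', op: 'deposit' },
  { account_id: 'acct_B', op: 'deposit' },
  { account_id: 'acct_A', op: 'withdraw' }, // must run after acct_A's deposit
]);
```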
Configure retries and backoff
Every named queue retries failed jobs automatically. Backoff is exponential:
After all retries are exhausted, the job moves to a dead-letter queue (DLQ).
| Attempt | backoff_ms: 1000 | backoff_ms: 2000 |
|---|---|---|
| 1 | 1 000 ms | 2 000 ms |
| 2 | 2 000 ms | 4 000 ms |
| 3 | 4 000 ms | 8 000 ms |
| 4 | 8 000 ms | 16 000 ms |
| 5 | 16 000 ms | 32 000 ms |
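The schedule follows backoff_ms × 2^(attempt − 1); a one-liner reproduces the table above:

```typescript
// Delay before retry `attempt` (1-based), per the exponential backoff rule.
const retryDelayMs = (backoffMs: number, attempt: number): number =>
  backoffMs * 2 ** (attempt - 1);

// With backoff_ms: 1000 — attempts 1..5 wait 1000, 2000, 4000, 8000, 16000 ms.
const delays = [1, 2, 3, 4, 5].map((a) => retryDelayMs(1000, a));
```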
iii-config.yaml (excerpt)
See Manage Failed Triggers for DLQ inspection and redrive.
Control concurrency
The concurrency field sets the maximum number of jobs the engine processes simultaneously from a single queue (per engine instance).
iii-config.yaml (excerpt)
- Standard queues: the engine pulls up to concurrency jobs simultaneously.
- FIFO queues: the engine processes one job at a time (prefetch=1) to preserve ordering, regardless of the concurrency value.
Standard vs FIFO Queues
| Dimension | Standard | FIFO |
|---|---|---|
| Processing model | Up to concurrency jobs in parallel | One job at a time (prefetch=1) |
| Ordering | No guarantees — jobs may complete in any order | Strictly ordered within a message group |
| message_group_field | Not required | Required — must be present and non-null in every payload |
| Throughput | High — scales with concurrency | Lower — trades throughput for ordering |
| Use cases | Email sends, image processing, notifications | Payments, ledger entries, state machines |
| Retries | Retried independently, other jobs continue | Retried inline — blocks the queue until success or DLQ |
Standard queue flow
Jobs are dequeued and processed concurrently. Each job is independent.
FIFO queue flow
Jobs within the same message group are processed one at a time, strictly in order.
Retry and dead-letter flow
When a job fails, the engine retries it with exponential backoff. After all retries are exhausted, the job moves to the DLQ.
Real-World Scenarios
Scenario 1: HTTP API to Queue Pipeline
The most common pattern — an HTTP endpoint accepts a request, responds immediately, and offloads the actual work to a queue. This keeps API response times fast regardless of how long downstream processing takes.
iii-config.yaml
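A hedged sketch of the pattern: validate, enqueue, return 202 Accepted before the slow work runs. The `enqueue` stub and queue name `image-processing` are illustrative stand-ins for the iii trigger call:

```typescript
// The HTTP handler validates, enqueues, and responds immediately.
type Response = { status: number; body: unknown };

const queued: unknown[] = [];
function enqueue(queue: string, payload: unknown): { messageReceiptId: string } {
  queued.push(payload); // real engine: persist + deliver asynchronously
  return { messageReceiptId: `rcpt_${queued.length}` };
}

function handleUpload(body: { fileUrl?: string }): Response {
  if (!body.fileUrl) return { status: 400, body: { error: 'fileUrl required' } };
  const ack = enqueue('image-processing', { fileUrl: body.fileUrl });
  // Respond before the slow processing happens.
  return { status: 202, body: { accepted: true, receipt: ack.messageReceiptId } };
}

const res = handleUpload({ fileUrl: 'https://example.com/cat.png' });
```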
Scenario 2: Event Fan-Out with Topic Queues
An order system publishes order.created events. Multiple independent services — email notifications, inventory updates, and analytics — each need to process every order. Topic-based queues fan out each message to all subscribers with independent retries per function.
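A hedged sketch of failure isolation in fan-out: each subscriber gets its own copy and its own retry budget. The dispatch loop below is a stand-in; the real engine retries with backoff and eventually routes to the DLQ:

```typescript
// Stand-in fan-out: one failing consumer does not block the others.
type Handler = (data: unknown) => void;
const subscribers: Record<string, Handler> = {};
const processed: string[] = [];

subscribers['notify::email'] = () => processed.push('email');
subscribers['inventory::reserve'] = () => { throw new Error('stock service down'); };
subscribers['analytics::track'] = () => processed.push('analytics');

function publish(data: unknown): void {
  for (const [id, handler] of Object.entries(subscribers)) {
    try {
      handler(data); // real engine: retry with backoff, then DLQ
    } catch {
      processed.push(`retrying:${id}`);
    }
  }
}

publish({ orderId: 'ord_7' });
```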
Each subscriber processes every order.created event independently. If inventory::reserve fails and retries, it does not affect notify::email or analytics::track.
Scenario 3: Financial Transaction Ledger (FIFO)
Transactions for the same account must be applied in order to prevent balance inconsistencies. Different accounts can process in parallel.
iii-config.yaml (excerpt)
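A hedged sketch of why ordering matters here. The in-memory ledger below stands in for the target function; with FIFO grouping on account_id, the deposit is applied before the withdrawal:

```typescript
// Stand-in ledger function: applies transactions for one account in the
// order the FIFO queue delivers them.
type Txn = { account_id: string; amount: number };

const balances: Record<string, number> = {};
const log: string[] = [];

function apply(txn: Txn): void {
  const next = (balances[txn.account_id] ?? 0) + txn.amount;
  if (next < 0) { log.push(`rejected:${txn.account_id}`); return; } // insufficient funds
  balances[txn.account_id] = next;
  log.push(`applied:${txn.account_id}:${next}`);
}

// FIFO guarantees the deposit is applied before the withdrawal.
[{ account_id: 'acct_A', amount: 100 }, { account_id: 'acct_A', amount: -70 }].forEach(apply);
```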
Because the ledger queue is FIFO with message_group_field: account_id, the deposit for acct_A always completes before the withdrawal. Without FIFO ordering, the withdrawal could execute first and fail with “Insufficient funds” even though the deposit was submitted first.
Scenario 4: Bulk Email with Rate Limiting
A marketing system sends thousands of emails. The SMTP provider has a rate limit. A standard queue with low concurrency prevents overloading the provider while retrying transient failures.
iii-config.yaml (excerpt)
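A hedged sketch of the throttling effect: with a concurrency cap, no more than that many jobs are in flight at once. The wave-based scheduler below is a simplified stand-in for the engine's worker pool:

```typescript
// Stand-in scheduler: process jobs in waves of at most `concurrency`,
// tracking the peak number of jobs in flight.
function processInWaves<T>(jobs: T[], concurrency: number, worker: (job: T) => void): number {
  let peak = 0;
  for (let start = 0; start < jobs.length; start += concurrency) {
    const wave = jobs.slice(start, start + concurrency); // the in-flight set
    peak = Math.max(peak, wave.length);
    wave.forEach(worker); // real engine: these run concurrently
  }
  return peak;
}

const sent: string[] = [];
const emails = Array.from({ length: 10 }, (_, i) => `user${i}@example.com`);
const peak = processInWaves(emails, 3, (to) => { sent.push(to); });
```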
With concurrency: 3, at most three emails are in-flight at any time. Failed sends retry with exponential backoff (5s, 10s, 20s, 40s, 80s), protecting the SMTP provider from overload.
Choosing an Adapter
The queue adapter determines where messages are stored and how they are distributed.
| Scenario | Recommended Adapter | Why |
|---|---|---|
| Local development | BuiltinQueueAdapter (in_memory) | Zero dependencies, fast iteration |
| Single-instance production | BuiltinQueueAdapter (file_based) | Durable across restarts, no external infra |
| Multi-instance production | RabbitMQAdapter | Distributes messages across engine instances |
See the Queue module reference for adapter configuration and the adapter comparison table for a feature matrix.
When using the RabbitMQ adapter, iii creates exchanges and queues using a predictable naming convention. For a queue named payment, the main queue is iii.__fn_queue::payment, the retry queue is iii.__fn_queue::payment::retry.queue, and the DLQ is iii.__fn_queue::payment::dlq.queue. See Dead Letter Queues for the full resource map.
Queue Config Reference
| Field | Type | Default | Description |
|---|---|---|---|
| max_retries | u32 | 3 | Maximum delivery attempts before routing to DLQ |
| concurrency | u32 | 10 | Maximum concurrent workers for this queue (standard only) |
| type | string | "standard" | "standard" for concurrent processing; "fifo" for ordered processing |
| message_group_field | string | — | Required for FIFO — the JSON field in the payload used for ordering groups (must be non-null) |
| backoff_ms | u64 | 1000 | Base retry backoff in milliseconds. Applied exponentially: backoff_ms × 2^(attempt - 1) |
| poll_interval_ms | u64 | 100 | Worker poll interval in milliseconds |
Next Steps
Trigger Actions
Understand synchronous, Void, and Enqueue invocation modes
Dead Letter Queues
Handle and redrive failed queue messages
Queue Module Reference
Full configuration reference for queues and adapters
Conditions
Filter queue messages with condition functions