Goal
Set up a queue consumer that retries on transient failures (e.g. an external endpoint being down) and automatically routes jobs to a dead letter queue (DLQ) when all retries are exhausted.
Steps
1. Register the external endpoint as an HTTP-invoked function
Register the payment API as an HTTP-invoked function. The engine makes the HTTP call itself: when the endpoint is down or returns a non-2xx status, the engine marks the invocation as failed. When this function is invoked through a named queue, the queue worker retries it according to that queue's configuration.
Node / TypeScript:

```typescript
import { registerWorker } from 'iii-sdk'

const iii = registerWorker(process.env.III_URL ?? 'ws://localhost:49134')

iii.registerFunction(
  { id: 'payments::charge' },
  {
    url: 'https://api.payments.example.com/charge',
    method: 'POST',
    timeout_ms: 5000,
  },
)
```

Python:

```python
import os

from iii import HttpInvocationConfig, register_worker

iii = register_worker(os.environ.get("III_URL", "ws://localhost:49134"))

iii.register_function(
    "payments::charge",
    HttpInvocationConfig(
        url="https://api.payments.example.com/charge",
        method="POST",
        timeout_ms=5000,
    ),
)
```

Rust:

```rust
use iii_sdk::{
    register_worker, HttpInvocationConfig, HttpMethod, InitOptions, RegisterFunctionMessage,
};
use std::collections::HashMap;

let iii = register_worker(
    &std::env::var("III_URL").unwrap_or_else(|_| "ws://127.0.0.1:49134".to_string()),
    InitOptions::default(),
);

iii.register_function((
    RegisterFunctionMessage::with_id("payments::charge".into()),
    HttpInvocationConfig {
        url: "https://api.payments.example.com/charge".to_string(),
        method: HttpMethod::Post,
        timeout_ms: Some(5000),
        headers: HashMap::new(),
        auth: None,
    },
));
```
2. Define a named queue with retry configuration
Declare the queue in iii-config.yaml with retry and backoff settings. When the payment endpoint fails, the engine retries with exponential backoff until max_retries is exhausted, then moves the job to the DLQ. Enqueue work to this function by calling trigger() with TriggerAction.Enqueue from wherever the order is created.
```yaml
modules:
  - class: modules::queue::QueueModule
    config:
      queue_configs:
        payment_dlq:
          max_retries: 5
          backoff_ms: 2000
          concurrency: 2
          type: standard
      adapter:
        class: modules::queue::BuiltinQueueAdapter
```
Node / TypeScript:

```typescript
import { TriggerAction } from 'iii-sdk'

await iii.trigger({
  function_id: 'payments::charge',
  payload: { orderId: order.id, amount: order.total },
  action: TriggerAction.Enqueue({ queue: 'payment_dlq' }),
})
```

Python:

```python
from iii import TriggerAction

iii.trigger({
    "function_id": "payments::charge",
    "payload": {"orderId": order["id"], "amount": order["total"]},
    "action": TriggerAction.Enqueue(queue="payment_dlq"),
})
```

Rust:

```rust
use iii_sdk::{TriggerAction, TriggerRequest};
use serde_json::json;

iii.trigger(TriggerRequest {
    function_id: "payments::charge".into(),
    payload: json!({
        "orderId": order["id"],
        "amount": order["total"],
    }),
    action: Some(TriggerAction::Enqueue {
        queue: "payment_dlq".into(),
    }),
    timeout_ms: None,
})
.await?;
```
With this configuration, a failing job follows this timeline:
| Attempt | Delay before retry |
|---|---|
| 1 | 2 s |
| 2 | 4 s |
| 3 | 8 s |
| 4 | 16 s |
| 5 | — moved to DLQ |
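The timeline above follows directly from the backoff formula given in the configuration reference (backoff_ms × 2^(attempt − 1)). A minimal sketch of that calculation; the helper function is illustrative and not part of the SDK:

```python
def retry_delay_ms(backoff_ms: int, attempt: int) -> int:
    """Exponential backoff: the base delay doubles with each failed attempt."""
    return backoff_ms * 2 ** (attempt - 1)

# With backoff_ms: 2000 and max_retries: 5, the delays after attempts 1-4 are:
delays = [retry_delay_ms(2000, attempt) for attempt in range(1, 5)]
print(delays)  # [2000, 4000, 8000, 16000]
```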
3. What happens when a job lands in the DLQ
When the payment endpoint stays down and all 5 retries are exhausted, the engine:
- Removes the job from the active queue
- Stores it in the DLQ with the original payload, the last error, and a
failed_at timestamp
- Logs a warning:

```
WARN queue="payment_dlq" job_id="..." attempts=5 "Job exhausted, moved to DLQ"
```
The job stays in the DLQ until the engine redrives it. No other jobs in the queue are blocked; processing continues normally for new messages.
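A DLQ record can be pictured as a small structure carrying exactly the fields listed above. The field names and shape below are assumptions for illustration, not the engine's actual storage schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass
class DlqEntry:
    # Illustrative shape only; the engine's real record format may differ.
    job_id: str
    payload: dict[str, Any]  # original payload, preserved verbatim
    last_error: str          # error from the final failed attempt
    attempts: int            # total delivery attempts before giving up
    failed_at: datetime      # when the job was moved to the DLQ

entry = DlqEntry(
    job_id="job-123",
    payload={"orderId": "ord-1", "amount": 4999},
    last_error="connect ECONNREFUSED api.payments.example.com:443",
    attempts=5,
    failed_at=datetime.now(timezone.utc),
)
print(entry.attempts)  # 5
```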
4. Queue configuration reference
| Field | Type | Default | Description |
|---|---|---|---|
| max_retries | u32 | 3 | Maximum delivery attempts before moving to DLQ |
| backoff_ms | u64 | 1000 | Base delay in ms between retries (exponential: backoff_ms × 2^(attempt − 1)) |
| concurrency | u32 | 10 | Max concurrent jobs for this queue |
| type | string | "standard" | Queue mode: "standard" (concurrent) or "fifo" (ordered) |
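As a second example of these fields in use, a queue that needs strict ordering could opt into fifo mode. The queue name below is illustrative, and omitted fields fall back to the defaults in the table:

```yaml
modules:
  - class: modules::queue::QueueModule
    config:
      queue_configs:
        order_events:   # illustrative queue name
          type: fifo    # ordered delivery instead of concurrent
          max_retries: 3
      adapter:
        class: modules::queue::BuiltinQueueAdapter
```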
Result
Failed jobs retry automatically with exponential backoff. Once all retries are exhausted, the job moves to the DLQ, where it is preserved with its full payload and error context. The engine continues processing new messages in the queue without interruption.
DLQ is fully supported by the Builtin and RabbitMQ queue adapters. The Redis adapter does not support DLQ operations.