How to Forward Webhooks to Multiple Destinations
You set up a Stripe webhook endpoint. It works. Then someone asks: "Can we also send those to our logging service?" And then: "What about the staging environment?" And then: "Analytics needs a copy too."
Suddenly you're writing a webhook dispatcher inside your payment handler. That's the wrong place for it.
The Fan-Out Problem
Most webhook providers deliver each event to a single URL. A Stripe webhook endpoint is one URL. GitHub sends to one URL per repository webhook. Shopify — same deal.
But your team needs that data in multiple places:
- Your production application (obviously)
- A logging or observability service like Datadog or Loki
- A staging environment for testing
- A data warehouse for analytics
- A backup system for audit trails
The naive fix is to do all of this inside your webhook handler:

```go
import (
	"bytes"
	"io"
	"net/http"
)

func handleStripeWebhook(w http.ResponseWriter, r *http.Request) {
	payload, _ := io.ReadAll(r.Body) // error ignored — part of the problem

	// Handle the webhook
	processPayment(payload)

	// Forward to logging (response and error silently dropped)
	http.Post("https://logs.example.com/webhooks", "application/json", bytes.NewReader(payload))

	// Forward to staging
	http.Post("https://staging.example.com/webhooks/stripe", "application/json", bytes.NewReader(payload))

	// Forward to analytics
	http.Post("https://analytics.example.com/ingest", "application/json", bytes.NewReader(payload))

	w.WriteHeader(http.StatusOK)
}
```
This has problems. If the logging service is slow, your response to Stripe takes longer. If analytics is down, you might time out, and Stripe marks the delivery as failed. The forwarding code grows and grows, tangled up with your business logic, and error handling gets messy fast.
Option 1: Message Queue
If you already run infrastructure like RabbitMQ, Kafka, or AWS SQS, you can publish incoming webhooks to a topic and let consumers handle distribution.
```python
# Receive webhook, publish to queue
@app.route('/webhooks/stripe', methods=['POST'])
def stripe_webhook():
    payload = request.get_data()
    verify_signature(payload, request.headers)

    # Publish once; multiple consumers pick it up
    channel.basic_publish(
        exchange='webhooks',
        routing_key='stripe',
        body=payload,
    )
    return '', 200
```
This works well, but now you're running a message broker. For a team of five building a SaaS app, that's a lot of infrastructure just to forward HTTP requests.
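The publish-once, consume-many pattern the broker provides can be sketched in plain Python. This is an in-memory stand-in for an exchange, not broker code — the class and names are illustrative only:

```python
from collections import defaultdict

class Exchange:
    """In-memory stand-in for a broker exchange: one publish, many consumers."""
    def __init__(self):
        self.consumers = defaultdict(list)  # routing_key -> list of callbacks

    def subscribe(self, routing_key, callback):
        self.consumers[routing_key].append(callback)

    def publish(self, routing_key, body):
        # Every subscribed consumer gets its own copy of the payload
        for callback in self.consumers[routing_key]:
            callback(body)

exchange = Exchange()
received = []
exchange.subscribe("stripe", lambda body: received.append(("app", body)))
exchange.subscribe("stripe", lambda body: received.append(("logging", body)))
exchange.subscribe("stripe", lambda body: received.append(("analytics", body)))

# One publish reaches all three consumers
exchange.publish("stripe", b'{"type": "payment_intent.succeeded"}')
```

The key property: the webhook handler publishes once and returns; adding a fourth destination means adding a consumer, not touching the handler.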
Option 2: Reverse Proxy with Mirroring
Nginx can mirror requests to additional backends:
```nginx
location /webhooks/stripe {
    mirror /mirror-logging;
    mirror /mirror-staging;
    proxy_pass http://your-app:3000;
}

location = /mirror-logging {
    internal;
    proxy_pass https://logs.example.com/webhooks;
}

location = /mirror-staging {
    internal;
    proxy_pass https://staging.example.com/webhooks/stripe;
}
```
This keeps forwarding out of your application code. Downside: no retries if a destination fails, no filtering by request content, and you need to redeploy nginx config every time you add a destination.
Option 3: Webhook Relay Service
A relay service sits between the webhook provider and your destinations. It receives the webhook once, then forwards copies to every configured destination.
The flow:
```
Stripe → Relay endpoint → Your app
                        → Logging service
                        → Staging environment
                        → Analytics pipeline
```
This is what ThunderHooks relay rules do. You configure a relay rule per destination, with optional filters, and the service handles forwarding with retries.
Filtering
Not every destination needs every webhook. Your analytics pipeline probably doesn't care about charge.dispute.funds_withdrawn. Your staging environment only needs events related to the feature you're testing.
Useful filters:
- HTTP method — forward only POST requests (ignore health checks)
- URL path — forward `/webhooks/stripe` but not `/webhooks/github`
- Content-Type — only forward `application/json`, skip form-encoded callbacks
With ThunderHooks, filters are configured per relay rule. A rule for your logging service might forward everything. A rule for staging might only forward payment_intent.* events.
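A rule's filter check reduces to a few predicates: each configured filter must match, and a missing filter means "forward everything" for that dimension. A minimal sketch — the dict keys here are illustrative, not ThunderHooks' actual schema:

```python
def matches(rule, method, path, content_type):
    """Return True if a request passes every filter set on a relay rule.
    An absent filter imposes no restriction."""
    if rule.get("method") and method != rule["method"]:
        return False
    if rule.get("path") and path != rule["path"]:
        return False
    if rule.get("content_type") and not content_type.startswith(rule["content_type"]):
        return False
    return True

logging_rule = {}  # no filters: forwards everything
staging_rule = {"method": "POST", "path": "/webhooks/stripe",
                "content_type": "application/json"}
```

A rule with no filters matches any request; the staging rule only matches JSON POSTs to the Stripe path.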
Retries
Destinations go down. Networks blip. Your staging server restarts during a deploy. A good relay handles this with retries.
The standard approach is exponential backoff:
| Attempt | Delay |
|---|---|
| 1st retry | 5 seconds |
| 2nd retry | 25 seconds |
| 3rd retry | ~2 minutes |
| 4th retry | ~10 minutes |
| 5th retry | ~1 hour |
This gives transient failures time to resolve without hammering a server that's already struggling. After the final attempt, the relay marks the delivery as failed so you can investigate.
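The schedule in the table is roughly a 5-second base delay multiplied by 5 per attempt. A sketch of that calculation — the multiplier is an assumption chosen to fit the table, not a published constant:

```python
def retry_delay(attempt, base=5, factor=5):
    """Seconds to wait before the nth retry (attempt 1 = first retry)."""
    return base * factor ** (attempt - 1)

# 5 s, 25 s, ~2 min, ~10 min, ~1 h — matching the table above
schedule = [retry_delay(n) for n in range(1, 6)]
```

Real implementations usually add random jitter on top, so thousands of failed deliveries don't all retry in lockstep against a recovering server.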
What About Headers?
Forwarded webhooks should include the original headers. Signature verification headers matter — if your app verifies Stripe's Stripe-Signature header, the relay needs to pass it through unchanged.
Watch out for headers that change meaning when relayed:
- `Host` — should reflect the destination, not the relay
- `Content-Length` — should match the body being sent
- `X-Forwarded-For` — the relay should add its own IP to the chain
ThunderHooks passes through all original headers and adds X-Forwarded-By: thunderhooks so your application can tell the difference between a direct webhook and a relayed one if needed.
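The header handling above can be sketched as a pure function over a header dict. This is a simplified model (real relays operate on the raw HTTP message), and the example values are made up:

```python
def relay_headers(original, dest_host, body, relay_ip):
    """Copy original headers, fixing the ones that change meaning in transit."""
    headers = dict(original)                    # pass-through, incl. Stripe-Signature
    headers["Host"] = dest_host                 # must name the destination
    headers["Content-Length"] = str(len(body))  # must match the forwarded body
    chain = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{chain}, {relay_ip}" if chain else relay_ip
    headers["X-Forwarded-By"] = "thunderhooks"  # marks the delivery as relayed
    return headers

incoming = {"Host": "relay.example.com", "Stripe-Signature": "t=123,v1=abc",
            "Content-Length": "18", "X-Forwarded-For": "54.1.2.3"}
out = relay_headers(incoming, "logs.example.com", b'{"id": "evt_1"}', "10.0.0.7")
```

Note the signature header is copied byte-for-byte: any change to it (or to the body) would break downstream verification.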
Idempotency Matters More with Fan-Out
When you relay to four destinations, a retry on one destination means that webhook gets delivered twice to that service. Your downstream consumers need to handle duplicates.
The pattern is straightforward: use the webhook event ID as an idempotency key.
```javascript
app.post('/webhooks/stripe', async (req, res) => {
  const event = JSON.parse(req.body);

  // Check if we already processed this event
  // (assumes db.query resolves to an array of matching rows)
  const rows = await db.query(
    'SELECT 1 FROM processed_events WHERE event_id = ?',
    [event.id]
  );
  if (rows.length > 0) {
    return res.status(200).json({ status: 'already_processed' });
  }

  await processEvent(event);
  await db.query(
    'INSERT INTO processed_events (event_id, processed_at) VALUES (?, NOW())',
    [event.id]
  );
  res.status(200).json({ status: 'ok' });
});
```
This isn't specific to relaying — you should do this regardless. But with fan-out, duplicates become more likely, so it matters more.
Setting Up Relay Rules in ThunderHooks
If you want to try this without building the plumbing yourself:
- Create an endpoint in ThunderHooks (you get a permanent URL)
- Point your webhook provider at that URL
- Add relay rules for each destination
Each relay rule needs:
- Destination URL — where to forward
- Filters (optional) — method, path, or content type restrictions
- Max retries — how many times to retry on failure (default: 5)
Webhooks are captured first (so you get full history and inspection), then relayed to all matching rules. If a destination is down, retries happen in the background. Your webhook provider sees a quick 200 response regardless.
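The capture-first flow described above can be sketched as: store the webhook, attempt every matching rule, and queue a retry for any destination that fails — returning 200 regardless. An in-memory model with illustrative names, not ThunderHooks internals:

```python
def receive(webhook, rules, store, retry_queue, deliver):
    """Capture first, then fan out; failed deliveries go to a retry queue."""
    store.append(webhook)  # capture before any forwarding: history survives outages
    for rule in rules:
        try:
            deliver(rule["url"], webhook)
        except ConnectionError:
            retry_queue.append((rule["url"], webhook, 1))  # retry attempt #1 later
    return 200  # the provider always gets a quick success response

store, retry_queue, delivered = [], [], []

def deliver(url, webhook):
    if "staging" in url:  # simulate the staging box being down
        raise ConnectionError(url)
    delivered.append((url, webhook))

rules = [{"url": "https://app.example.com/webhooks"},
         {"url": "https://staging.example.com/webhooks"}]
status = receive({"id": "evt_1"}, rules, store, retry_queue, deliver)
```

One destination being down neither blocks the others nor changes what the webhook provider sees.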
When Relaying Isn't the Right Call
Relaying adds a hop. That means:
- Latency — an extra network round-trip per destination. For most webhook use cases this doesn't matter (webhooks are async by nature) but if you need sub-50ms forwarding, a relay adds overhead.
- Single point of failure — if the relay service is down, nothing gets forwarded. ThunderHooks mitigates this by still capturing the webhook (so you can replay it later) but real-time forwarding stops.
- Cost — each relay consumes 1 credit. If you're forwarding 10,000 webhooks/month to four destinations, that's 40,000 credits on relaying alone.
For high-volume, latency-sensitive workloads, a message queue running in your own infrastructure gives you more control. For most teams doing webhook development and testing, a relay service is simpler.