Scheduled API Testing with Assertions: Monitor Beyond Status Codes

By ThunderHooks Team · 7 min read
Your payment processing endpoint returns 200. Everything looks fine. Except the response body is {"data": null} because the upstream provider changed their contract and your serialization silently swallowed the error. Customers hit "Pay Now" and get a spinner of death for six hours before anyone notices.

Status code 200 is a lie. Or at least it's incomplete.

The Problem with Basic Uptime Checks

Traditional uptime monitoring asks one question: "Did the server respond with a non-error status code?" That's table stakes. It catches crashes, DNS failures, expired certificates. It does not catch:

  • An API that returns 200 with an empty or malformed body
  • A database connection pool that's exhausted, so queries return cached stale data
  • Response times that crept from 200ms to 4 seconds after a bad deploy
  • A JSON response where $.status changed from "active" to "degraded" because a third-party dependency is down
  • An authentication endpoint that always returns 200 but stopped including the Authorization header

You need to test what the response contains, not just that one arrived.

Synthetic Monitoring vs. Uptime Monitoring

Quick distinction. Uptime monitoring pings a URL and checks if it's alive. Synthetic monitoring simulates real API calls and validates the responses.

Think of it like this: uptime monitoring is checking that the restaurant is open. Synthetic monitoring is ordering food and checking that it actually tastes right.

With synthetic monitoring, you define a full HTTP request (method, headers, body) and a set of assertions about the response. The test runs on a schedule. When assertions fail, you get alerted. That's it.

Tools like Checkly, Pingdom, and Datadog Synthetic Monitoring all do this. So does ThunderHooks.

Six Types of Assertions You Should Be Running

1. Status Code

The obvious one, but worth being explicit about. Don't just check "is it 2xx." Check the exact code your API should return.

{
  "type": "status_code",
  "operator": "equals",
  "value": "200"
}

Why exact codes? Because a 201 when you expected 200 might mean your API is creating duplicate resources. A 204 instead of 200 means something changed in how your framework handles empty responses.

2. Response Time Threshold

An API that takes 8 seconds to respond is functionally down for most users. Set a ceiling.

{
  "type": "response_time",
  "operator": "less_than",
  "value": "500"
}

That's 500 milliseconds. Aggressive? Maybe. But if your payments API normally responds in 120ms, a jump to 500ms means something is wrong and you want to know before it hits 5000ms.

Pick your threshold based on actual P95 latency, not wishful thinking. Run the test for a week, look at the numbers, then set the bar 2-3x above normal.
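
The "measure first, then set the bar" step can be sketched in a few lines. This is a hedged example with made-up sample data; the nearest-rank method used here is one of several common percentile definitions.

```python
import math

# Sketch: derive a response-time threshold from a week of latency samples.
# The samples below are hypothetical; substitute your own measurements (ms).
samples_ms = [118, 120, 122, 125, 119, 130, 128, 124, 121, 126,
              123, 127, 129, 131, 117, 133, 135, 140, 160, 500]

def p95(values):
    # Nearest-rank percentile: smallest value with >= 95% of samples at or below it.
    ordered = sorted(values)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

baseline = p95(samples_ms)
threshold = baseline * 2  # set the bar 2x above normal, per the guideline above
print(f"p95={baseline}ms, alert threshold={threshold}ms")
```

Note how the single 500ms outlier doesn't drag the threshold up the way a mean would; that's why percentiles beat averages for latency budgets.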

3. Body Contains String

The simplest content check. Does the response body include a string you expect?

{
  "type": "body_contains",
  "operator": "contains",
  "value": "\"status\":\"active\""
}

Good for health check endpoints that return a known string, or for verifying that an API isn't returning error messages wrapped in a 200 response (which is disturbingly common).
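
A substring check is a one-liner, but it's worth knowing its failure mode. A minimal sketch, using a hypothetical health-check body:

```python
import json

# Sketch: a raw body-contains check on a hypothetical health-check response.
body = '{"status":"active","checks":{"db":"ok"}}'

print('"status":"active"' in body)  # raw substring check

# Substring checks break if the server adds whitespace ('"status": "active"'),
# so parsing the body and comparing the field is the more robust variant:
print(json.loads(body)["status"] == "active")
```

If your serializer's whitespace is stable, the substring check is fine; otherwise reach for the JSONPath assertion described next.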

4. JSONPath Value Matching

This is where it gets interesting. JSONPath lets you reach into a JSON response and check specific fields.

{
  "type": "jsonpath",
  "operator": "equals",
  "target": "$.data.status",
  "value": "active"
}

That checks whether the status field inside the data object equals "active". Real-world examples:

  • $.meta.version equals "v2" -- catch API version regressions
  • $.data.items.length greater_than "0" -- make sure a list endpoint isn't returning empty
  • $.error equals "" -- verify no error field is populated

JSONPath is the single most useful assertion type for API testing. A 200 with $.data.available set to false is a production incident that no status code check will ever catch.
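
Under the hood, evaluating a simple JSONPath assertion is just walking the parsed JSON. Here's a minimal sketch that supports only dotted "$.a.b" paths, not the full JSONPath spec; the assertion structure mirrors the examples above:

```python
import json

# Minimal sketch: evaluate a dotted-path assertion against a JSON body.
# Handles only simple "$.a.b" paths, not filters, wildcards, or indexing.
def get_path(body, path):
    value = body
    for key in path.lstrip("$.").split("."):
        value = value[key]
    return value

response_body = json.loads('{"data": {"status": "active", "available": false}}')

assertion = {"target": "$.data.status", "operator": "equals", "value": "active"}
actual = get_path(response_body, assertion["target"])
passed = actual == assertion["value"]
print(f"{assertion['target']} -> {actual!r}: {'PASS' if passed else 'FAIL'}")
```

A real JSONPath engine adds filters, wildcards, and array slicing on top of this, but the core idea is the same: resolve a path, then apply an operator.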

5. Header Presence

Some APIs communicate state through headers. Rate limit remaining, API version, cache status.

{
  "type": "header",
  "operator": "equals",
  "target": "Content-Type",
  "value": "application/json"
}

Useful checks: X-RateLimit-Remaining is greater_than some threshold. Content-Type is what you expect (not text/html from an error page). Cache-Control hasn't changed unexpectedly.
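
Two details matter when checking headers: names are case-insensitive, and Content-Type often carries parameters like charset. A sketch with a hypothetical captured response:

```python
# Sketch: header assertions against a hypothetical captured response.
headers = {
    "Content-Type": "application/json; charset=utf-8",
    "X-RateLimit-Remaining": "847",
}

def header_equals(headers, name, expected):
    # Header names are case-insensitive, so normalize before lookup.
    lowered = {k.lower(): v for k, v in headers.items()}
    actual = lowered.get(name.lower(), "")
    # Compare the media type only, ignoring parameters like charset.
    return actual.split(";")[0].strip() == expected

print(header_equals(headers, "content-type", "application/json"))

# Rate-limit headroom check: alert before you hit the wall, not after.
ratelimit_ok = int(headers["X-RateLimit-Remaining"]) > 100
print(ratelimit_ok)
```

Stripping the charset parameter is what keeps the check from flapping when a framework upgrade starts appending "; charset=utf-8".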

6. TLS Certificate Expiry

Your cert expires, your API goes down, and every webhook provider that sends to your HTTPS endpoint starts getting rejected. Let's Encrypt certs expire every 90 days and auto-renewal sometimes breaks silently.

{
  "type": "cert_expiry",
  "operator": "greater_than",
  "value": "14"
}

That asserts the TLS certificate has more than 14 days left. Gives you two weeks to fix auto-renewal before it actually expires. Could save you from a 3am incident.
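
You can run the same check yourself with the standard library. A sketch: the date parsing uses the notAfter format that Python's ssl.getpeercert() returns, and the hostname is a placeholder for your own API.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_remaining(not_after):
    # notAfter format used by ssl.getpeercert(), e.g. 'Jun  1 12:00:00 2030 GMT'
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

def cert_days_remaining(host, port=443):
    # Fetch the peer certificate over a real TLS handshake.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_remaining(cert["notAfter"])

# e.g. days = cert_days_remaining("api.yourapp.com")
print("PASS" if days_remaining("Jun  1 12:00:00 2030 GMT") > 14 else "FAIL")
```

Wired into cron, this is a poor man's cert-expiry monitor; the hosted version just adds scheduling, history, and alert routing.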

Real Example: Testing a Payment API

Say you run an e-commerce platform. Your /api/v2/payments/status endpoint needs to return current payment processing status. Here's what a complete test configuration looks like:

Request:

  • Method: GET
  • URL: https://api.yourapp.com/api/v2/payments/status
  • Headers: {"Authorization": "Bearer sk_monitor_token_xxxx", "Accept": "application/json"}

Assertions:

[
  {"type": "status_code", "operator": "equals", "value": "200"},
  {"type": "response_time", "operator": "less_than", "value": "800"},
  {"type": "jsonpath", "operator": "equals", "target": "$.processor.status", "value": "operational"},
  {"type": "jsonpath", "operator": "greater_than", "target": "$.processor.success_rate", "value": "0.95"},
  {"type": "header", "operator": "equals", "target": "Content-Type", "value": "application/json"},
  {"type": "cert_expiry", "operator": "greater_than", "value": "14"}
]

Six assertions. If any single one fails, the test fails. You'd know within minutes if:

  • The endpoint went down (status code)
  • The database got slow (response time)
  • The payment processor flagged your account (processor status)
  • Transaction success rate dropped (success rate check)
  • Something weird happened to content negotiation (content type)
  • Your cert is about to expire (TLS check)

That's way more signal than "200 OK, looks fine."
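
The all-or-nothing evaluation above is easy to sketch. This is an illustrative simplification, not ThunderHooks' actual implementation: the JSONPath and header targets are flattened into plain response fields for brevity, and the values are hypothetical.

```python
# Sketch: apply a list of assertions (as in the config above) to a captured
# response. Operator names mirror the examples in this post.
OPERATORS = {
    "equals": lambda a, b: str(a) == str(b),
    "less_than": lambda a, b: float(a) < float(b),
    "greater_than": lambda a, b: float(a) > float(b),
}

response = {
    "status_code": 200,
    "response_time": 412,  # ms
    "processor_status": "operational",
    "success_rate": 0.991,
}

assertions = [
    {"type": "status_code", "operator": "equals", "value": "200"},
    {"type": "response_time", "operator": "less_than", "value": "800"},
    {"type": "processor_status", "operator": "equals", "value": "operational"},
    {"type": "success_rate", "operator": "greater_than", "value": "0.95"},
]

# The test passes only if every assertion passes.
results = [OPERATORS[a["operator"]](response[a["type"]], a["value"]) for a in assertions]
print("PASS" if all(results) else "FAIL")
```

Keeping per-assertion results (rather than just the final boolean) is what lets an alert say "success_rate dropped" instead of just "test failed."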

How ThunderHooks API Testing Works

ThunderHooks runs a background scheduler that ticks every 30 seconds. On each tick, it checks which API tests are due based on their configured interval. Due tests execute concurrently (up to 10 at once) with SSRF protection built into the HTTP client so nobody can point a test at localhost or internal IPs.

Each test execution:

  1. Sends the HTTP request with your configured method, headers, and body
  2. Captures the response status, headers, body (up to 4KB), and TLS certificate info
  3. Evaluates every assertion against the response
  4. Stores the result with per-assertion pass/fail details
  5. Updates consecutive failure count
  6. Sends alerts (email and/or webhook) when status transitions from passing to failing
  7. Sends recovery alerts when it starts passing again

API test runs are free — they don't consume credits. Run as many tests as your plan allows at whatever interval you need without worrying about credit usage.

Alert configuration is per-test. You set an email, an alert webhook URL (point it at Slack or PagerDuty via their incoming webhook), or both. You can also set alert_after_failures to require consecutive failures before alerting, which cuts down on noise from transient blips.
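
The failure-counting and recovery behavior described above amounts to a small state machine. A sketch with illustrative names (the real implementation may differ):

```python
# Sketch of the alerting state machine: fire a failure alert only once
# consecutive failures reach alert_after_failures, and fire a recovery
# alert on the first pass after an alerting state.
class TestState:
    def __init__(self, alert_after_failures=3):
        self.alert_after_failures = alert_after_failures
        self.consecutive_failures = 0
        self.alerting = False

    def record(self, passed):
        """Record one test run; return 'failure', 'recovery', or None."""
        if passed:
            self.consecutive_failures = 0
            if self.alerting:
                self.alerting = False
                return "recovery"
            return None
        self.consecutive_failures += 1
        if not self.alerting and self.consecutive_failures >= self.alert_after_failures:
            self.alerting = True
            return "failure"
        return None

state = TestState(alert_after_failures=2)
events = [state.record(p) for p in [True, False, False, False, True]]
print(events)
```

With alert_after_failures=2, the single blip never alerts, the sustained failure alerts exactly once, and recovery is announced once: that's the noise reduction the setting buys you.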

Cost Comparison

Here's the honest comparison if API testing is your primary need:

Checkly starts at $30/month for their Starter plan. You get browser checks and API checks, Playwright scripting, a nice dashboard. If you need browser-based synthetic monitoring too, Checkly is hard to beat. But if you only need API endpoint testing alongside webhook infrastructure, it's a separate tool and a separate bill.

Pingdom transaction checks (which are their API testing equivalent) start at $15/month for 10 uptime checks. But their "transaction" monitoring that does assertion-style checks is in the higher tiers. And again, it doesn't know anything about webhooks.

ThunderHooks Pro is $19/month. You get 5 API tests (5-minute minimum interval), plus 25 webhook endpoints, relay rules, uptime monitors, and heartbeat monitoring. If you're already using ThunderHooks for webhook development, API testing is bundled in. No extra tool, no extra login.

The Team plan at $49/month bumps you to 25 API tests with 60-second intervals and 20,000 monthly credits.

Pick based on what else you need. Checkly wins on browser checks. Pingdom wins on global check locations. ThunderHooks wins if you already use it for webhook infrastructure.

API Tests vs. Uptime Monitors vs. Heartbeats

These three features overlap, and picking the wrong one creates blind spots.

Uptime monitors send an HTTP request and check the status code. Use these for "is it up?" checks on webhook endpoints, landing pages, or any URL where reachability is all you care about. Free and fast.

API tests send a full HTTP request and validate the response against multiple assertions. Use these when you need to verify response content, not just availability. Payment APIs, auth endpoints, anything where a 200 with bad data is worse than a 500.

Heartbeats work backwards. Instead of you checking a URL, your service pings ThunderHooks on a schedule. If ThunderHooks stops receiving pings, it alerts you. Use these for cron jobs, background workers, batch processors, anything that runs on a schedule and should phone home when it finishes.

Quick decision guide:

  • Is this URL responding? → Uptime monitor
  • Is this API returning correct data? → API test
  • Is my background job still running? → Heartbeat
  • Did my nightly ETL complete? → Heartbeat
  • Is my payment flow working end-to-end? → API test
  • Is my webhook endpoint reachable? → Uptime monitor
  • Is the response body what I expect? → API test

When in doubt, start with an uptime monitor. Upgrade to an API test when you get your first "but it was returning 200!" incident.

Getting Started

If you're not ready for a tool, start with a script. Here's a quick one using curl and jq:

#!/bin/bash
# Fetch the body, then append the status code and total time on their own lines.
RESPONSE=$(curl -s -w "\n%{http_code}\n%{time_total}" \
  -H "Authorization: Bearer $API_TOKEN" \
  https://api.yourapp.com/health)

# Split the combined output. Note: head -n -2 requires GNU coreutils.
BODY=$(echo "$RESPONSE" | head -n -2)
STATUS=$(echo "$RESPONSE" | tail -n 2 | head -n 1)
TIME=$(echo "$RESPONSE" | tail -n 1)

# Check status
if [ "$STATUS" != "200" ]; then
  echo "FAIL: status $STATUS"
  exit 1
fi

# Check response time (in seconds)
if (( $(echo "$TIME > 2.0" | bc -l) )); then
  echo "FAIL: response time ${TIME}s"
  exit 1
fi

# Check JSON field
DB_STATUS=$(echo "$BODY" | jq -r '.database')
if [ "$DB_STATUS" != "ok" ]; then
  echo "FAIL: database status is $DB_STATUS"
  exit 1
fi

echo "PASS"

Run it with cron, pipe failures to Slack, and you've got basic synthetic monitoring. It'll work until you need history, dashboards, or a way for the whole team to see what's being monitored.

Ready to simplify webhook testing?

Try ThunderHooks free. No credit card required.
