How I Built a SaaS With Go, HTMX, and Zero JS Frameworks
Most SaaS products ship with React or Next.js, a Node backend, PostgreSQL, Redis, and a deployment pipeline that takes a week to set up. ThunderHooks ships with none of that.
The entire stack is Go, HTMX, Alpine.js, and SQLite. One binary. One database file. Deployed on a single free-tier VM. It handles webhook capture, real-time streaming, uptime monitoring, heartbeat checks, API testing, and public status pages.
Here is how it works and why this stack was chosen over the conventional approach.
The Stack
| Layer | Technology | Why |
|---|---|---|
| Backend | Go 1.24 + Echo | Single binary, fast compilation, no runtime dependencies |
| Templating | Templ | Type-safe HTML components compiled to Go |
| Frontend interactivity | HTMX + Alpine.js | Declarative AJAX + lightweight reactivity, ~22KB total |
| Styling | Tailwind CSS | Utility-first, purged in production |
| Database | SQLite via Turso | Single file, zero ops, multi-tenant via user_id |
| Real-time | Server-Sent Events | Native browser API, no WebSocket complexity |
| Deployment | Docker + Makefile | One command deploys to a GCP e2-micro |
Total frontend JavaScript written by hand: 212 lines. That includes CSRF token injection, toast notifications, and SSE connection management.
Why Not React
React solves problems ThunderHooks does not have. There is no complex client-side state. There is no offline-first requirement. There are no deeply nested interactive forms. The dashboard shows a list of webhooks, lets you click into them, and replay them. That is it.
HTMX handles all of this with HTML attributes:
```html
<button hx-post="/api/webhooks/replay"
        hx-target="#result"
        hx-swap="innerHTML">
  Replay
</button>
```
No bundler. No virtual DOM. No hydration. The server renders HTML, HTMX swaps it into the page, and Alpine.js handles the small bits of client-side state like toggle menus and modals.
The result is a dashboard that loads in under 200ms and weighs less than the JavaScript bundle of a typical React app.
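On the server side, the handler behind that button returns an HTML fragment rather than JSON. Here is a minimal sketch of what such an endpoint could look like, shown with the standard library's net/http for brevity (the real project uses Echo and renders fragments with Templ components; the route, form field, and markup below are illustrative):

```go
package main

import (
	"fmt"
	"net/http"
)

// ReplayFragment renders the HTML fragment HTMX swaps into #result.
// In the real project this would be a Templ component, not Sprintf.
func ReplayFragment(id string) string {
	return fmt.Sprintf(`<span class="ok">Replayed webhook %s</span>`, id)
}

// replayHandler handles the hx-post: do the work, return HTML, not JSON.
func replayHandler(w http.ResponseWriter, r *http.Request) {
	id := r.FormValue("id")
	// ... look up and re-deliver the stored webhook here ...
	w.Header().Set("Content-Type", "text/html")
	fmt.Fprint(w, ReplayFragment(id))
}
```

HTMX takes whatever HTML comes back and swaps it into the `hx-target` element, so the handler and the template are the whole story.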
Templ: Type-Safe HTML in Go
The biggest win in this stack is Templ. It compiles HTML templates to Go functions with full type safety. If you pass the wrong type to a component, it fails at compile time — not at runtime in production.
A simplified example of the layout pattern:
```
templ AppLayout(nonce string, email string, baseURL string, contents templ.Component) {
	<!DOCTYPE html>
	<html>
		<head>
			<script nonce={ nonce } src="/static/js/vendor/htmx.min.js"></script>
			<script nonce={ nonce } src="/static/js/vendor/alpine.min.js"></script>
		</head>
		<body>
			@Navigation(email)
			@contents
		</body>
	</html>
}
```
Every page is a function call. The nonce is generated per-request for Content Security Policy compliance, then threaded through the layout to every <script> tag. Try doing that cleanly with html/template.
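The nonce side of this can be sketched as ordinary middleware: mint a random value per request, put it in the CSP header, and hand it to the handler to thread into the layout. The function names and header value below are hypothetical, not ThunderHooks' actual code:

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"net/http"
)

// newNonce returns a fresh base64-encoded nonce for one request.
func newNonce() string {
	b := make([]byte, 16)
	rand.Read(b) // crypto/rand; never fails in practice
	return base64.StdEncoding.EncodeToString(b)
}

// withCSP mints a nonce, declares it in the Content-Security-Policy
// header, and passes it to the handler, which threads it into the
// Templ layout so every <script> tag carries the matching nonce.
func withCSP(next func(w http.ResponseWriter, r *http.Request, nonce string)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		nonce := newNonce()
		w.Header().Set("Content-Security-Policy",
			fmt.Sprintf("script-src 'nonce-%s'", nonce))
		next(w, r, nonce)
	}
}
```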
Components compose naturally:
```
templ EndpointCard(endpoint models.Endpoint) {
	<div class="card">
		<h3>{ endpoint.Name }</h3>
		<span class={ statusBadge(endpoint.Status) }>{ endpoint.Status }</span>
	</div>
}
```
The templ generate command runs in under 500ms for the entire project. It outputs plain Go files that compile with everything else. No separate template engine at runtime.
Real-Time With SSE, Not WebSockets
When a webhook hits a ThunderHooks endpoint, every connected dashboard user sees it appear instantly. This uses Server-Sent Events, not WebSockets.
SSE is simpler for this use case because the data flows one direction — server to browser. The browser opens a connection, the server pushes events. No handshake negotiation, no ping/pong frames, no reconnection logic on the client side (the browser handles it natively).
The architecture uses a broker pattern:
- Dashboard opens an SSE connection to /sse/endpoints/:id
- Broker registers the connection, keyed by endpoint ID
- When a webhook arrives, the handler publishes an event to the broker
- Broker fans out the event to all connections watching that endpoint
Connection limits prevent resource exhaustion — 5 connections per user, 10,000 total. A 30-second heartbeat detects dead connections and cleans them up.
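A minimal version of that broker fits in one type. This sketch omits the per-user and global connection limits and the heartbeat described above; the type names and channel buffer size are illustrative:

```go
package main

import "sync"

// Event is one webhook notification destined for dashboards.
type Event struct {
	EndpointID string
	HTML       string // pre-rendered fragment for HTMX to swap in
}

// Broker fans events out to every SSE connection watching an endpoint.
type Broker struct {
	mu   sync.RWMutex
	subs map[string]map[chan Event]struct{} // endpoint ID -> subscriber channels
}

func NewBroker() *Broker {
	return &Broker{subs: make(map[string]map[chan Event]struct{})}
}

// Subscribe registers one connection for an endpoint and returns its channel.
func (b *Broker) Subscribe(endpointID string) chan Event {
	ch := make(chan Event, 8) // small buffer so a slow client can't block publishers
	b.mu.Lock()
	if b.subs[endpointID] == nil {
		b.subs[endpointID] = make(map[chan Event]struct{})
	}
	b.subs[endpointID][ch] = struct{}{}
	b.mu.Unlock()
	return ch
}

// Unsubscribe removes a connection when its SSE request ends.
func (b *Broker) Unsubscribe(endpointID string, ch chan Event) {
	b.mu.Lock()
	delete(b.subs[endpointID], ch)
	b.mu.Unlock()
	close(ch)
}

// Publish fans an event out to every subscriber of its endpoint.
func (b *Broker) Publish(ev Event) {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for ch := range b.subs[ev.EndpointID] {
		select {
		case ch <- ev:
		default: // drop rather than block if a subscriber is saturated
		}
	}
}
```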
On the client side, HTMX has native SSE support:
```html
<div hx-ext="sse"
     sse-connect="/sse/endpoints/abc123"
     sse-swap="new-request">
</div>
```
When the new-request event arrives, HTMX swaps the HTML fragment into the page. No JavaScript event listeners, no manual DOM manipulation.
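On the wire, each event is just a named text/event-stream frame. A sketch of the server side, assuming the broker hands the handler a channel of pre-rendered, single-line HTML fragments (the event name matches the sse-swap attribute above; the rest of the names are illustrative):

```go
package main

import (
	"fmt"
	"net/http"
)

// formatSSE encodes one event in the text/event-stream wire format that
// the HTMX sse extension listens for. Fragments are assumed single-line;
// multi-line payloads would need one "data:" prefix per line.
func formatSSE(event, data string) string {
	return fmt.Sprintf("event: %s\ndata: %s\n\n", event, data)
}

// sseHandler streams fragments from a channel to one dashboard connection.
func sseHandler(events <-chan string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/event-stream")
		w.Header().Set("Cache-Control", "no-cache")
		flusher, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}
		for {
			select {
			case <-r.Context().Done():
				return // browser closed the tab; stop streaming
			case html := <-events:
				fmt.Fprint(w, formatSSE("new-request", html))
				flusher.Flush() // push immediately instead of buffering
			}
		}
	}
}
```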
SQLite as the Production Database
ThunderHooks uses a single SQLite database for everything. Not PostgreSQL. Not MySQL. SQLite.
This is possible because the multi-tenancy model is simple: every table has a user_id column with an index. Queries always filter by user. There is no cross-tenant data access.
The atomic credit system is a good example of SQLite's strengths:
```sql
UPDATE users
SET credits_balance = credits_balance - ?
WHERE id = ? AND credits_balance >= ?
```
This single statement atomically checks the balance and deducts credits. If the balance is too low, the WHERE clause matches zero rows and the application treats the deduction as failed. No advisory locks, no SELECT-then-UPDATE race conditions. SQLite's single-writer model makes this inherently safe.
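In Go, detecting the zero-rows case is one RowsAffected call. A sketch, assuming the users table described above (the function and error names are mine, not the project's):

```go
package main

import (
	"database/sql"
	"errors"
)

var ErrInsufficientCredits = errors.New("insufficient credits")

// rowsToErr maps the affected-row count from the conditional UPDATE to a
// result: zero rows means the WHERE clause rejected the deduction.
func rowsToErr(n int64) error {
	if n == 0 {
		return ErrInsufficientCredits
	}
	return nil
}

// DeductCredits atomically checks and deducts in a single statement.
func DeductCredits(db *sql.DB, userID string, amount int64) error {
	res, err := db.Exec(
		`UPDATE users
		 SET credits_balance = credits_balance - ?
		 WHERE id = ? AND credits_balance >= ?`,
		amount, userID, amount)
	if err != nil {
		return err
	}
	n, err := res.RowsAffected()
	if err != nil {
		return err
	}
	return rowsToErr(n)
}
```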
For production, the database runs on Turso — SQLite over HTTP with edge replication. For local development, it uses modernc.org/sqlite, a pure Go SQLite implementation that requires no CGO. Same schema, same queries, zero configuration changes between environments.
SSRF Protection at the Network Layer
The webhook relay feature forwards incoming webhooks to user-specified URLs. This creates a classic SSRF (Server-Side Request Forgery) risk — a user could point the relay at 169.254.169.254 and read cloud metadata, or at localhost to probe internal services.
The defense is a custom HTTP transport that validates IP addresses at dial time, before the TCP connection is established:
```go
transport := &http.Transport{
	DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
		host, port, err := net.SplitHostPort(addr)
		if err != nil {
			return nil, err
		}
		ips, err := net.LookupIP(host)
		if err != nil || len(ips) == 0 {
			return nil, errors.New("could not resolve target")
		}
		for _, ip := range ips {
			if ip.IsLoopback() || ip.IsPrivate() || ip.IsLinkLocalUnicast() {
				return nil, errors.New("target not allowed")
			}
		}
		// Dial the already-validated IP directly so a second DNS lookup
		// cannot return a different (internal) address.
		dialer := &net.Dialer{}
		return dialer.DialContext(ctx, network, net.JoinHostPort(ips[0].String(), port))
	},
}
```
This catches DNS rebinding attacks because the IP is resolved and validated in the same step as dialing. The connection goes directly to the validated IP, not back through DNS. Additional checks block cloud metadata endpoints and the error messages are sanitized to avoid leaking internal network information.
Rate Limiting With a Token Bucket
The rate limiter is a from-scratch token bucket implementation with LRU eviction. Each client (identified by IP or user ID) gets a bucket that refills at a fixed rate. When the bucket is empty, requests are rejected.
Different routes get different limits:
- Authentication endpoints: 5 requests/minute (brute-force protection)
- Webhook capture: 200 requests/minute per IP (high throughput)
- API calls: 120 requests/minute per user
- Heartbeat pings: 200 requests/minute (cron jobs ping frequently)
The LRU eviction keeps memory bounded. A cleanup goroutine runs every 5 minutes to evict stale buckets. In practice, 100,000 buckets use a few MB of memory.
The Build and Deploy Pipeline
The entire build is a single make deploy command:
- Docker builds a multi-stage image — Go compilation, Templ generation, Tailwind purge, and binary extraction into a minimal Alpine container
- The image is compressed and transferred to the production VM via SCP
- SSH loads the image, stops the old container, starts the new one, and prunes old images
No CI/CD service. No Kubernetes. No container registry. The production VM is a GCP e2-micro (always-free tier) with Caddy handling TLS termination. Total monthly hosting cost: $0.
The Docker container runs as a non-root user with a health check that pings the app every 30 seconds. If it fails 3 times, Docker restarts it automatically.
What I Would Change
The libsql driver's datetime handling is painful. It cannot scan SQL TEXT columns into Go time.Time values. Every datetime field requires scanning into sql.NullString and then parsing with time.Parse(time.RFC3339, ...). This adds boilerplate to every query that touches timestamps.
Alpine.js conflicts with a strict Content Security Policy. The standard Alpine.js distribution evaluates expressions with the Function constructor internally, which requires unsafe-eval. There is a CSP-compatible build (@alpinejs/csp), but it requires a different import and has some feature limitations. For now, the CSP header includes unsafe-eval, which is not ideal.
Tailwind's purging misses Templ files by default. The content configuration must explicitly include .templ files, not just .go files. This cost a few hours of debugging invisible styles.
The Numbers
The final artifact is a ~20MB Docker container. The Go binary is ~15MB with symbols stripped. First-page load is under 200ms. The SSE connection adds zero polling overhead. The entire project is about 12,000 lines of Go and Templ, plus 212 lines of JavaScript.
For an indie developer building a SaaS in 2026, this stack eliminates the most common time sinks: frontend build tooling, database ops, and deployment complexity. It will not win any awards for being trendy, but it ships fast and runs cheap.
Resources
- Templ documentation
- HTMX documentation
- Echo framework
- Turso (SQLite edge database)
- Alpine.js
- ThunderHooks — the product this stack powers