How this runs
A small township, an edge CDN, serverless functions at the edge, a GPU classifier, and a bucket of photos. Below is the whole stack — what each piece does, how the bytes flow, and why this architecture works for a community of about 1,200 people the same way it would work for a few hundred million.
The one-paragraph version
Photos are uploaded from a phone walking the parks, straight to Linode Object Storage via a Spin-based edge function running on Akamai Cloud Functions. A GPU classifier on a Linode Kubernetes cluster (Qwen2.5-VL-7B + CLIP) identifies each feature and rates its condition. A HEIC conversion sidecar in the same cluster handles iPhone photo formats. The resulting catalog — assets, parks, condition, metadata — is materialized as a single JSON snapshot in object storage and served to the committee and the public through a second Spin function, globally, via Akamai's CDN. Every request is logged through Akamai DataStream 2 to a ClickHouse cluster and visualized in Grafana.
What you're looking at
The same binary is deployed as two separate Akamai Cloud Functions apps. One
(helena-parks-public) is started with public_mode=true and
rejects any non-GET verb and any /v1/internal/* path outright — the
edit routes simply are not reachable. The other (helena-parks) is the
committee-facing app with the full write surface, gated by a shared token at the
CDN layer.
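The guard described above can be sketched as a single predicate. This is an illustrative reduction, not the actual handler (which uses the Spin SDK); the function name and shape here are assumptions:

```rust
// Hypothetical sketch of the public-mode gate: the public app is read-only
// and the internal surface simply does not exist on it.
fn is_allowed(public_mode: bool, method: &str, path: &str) -> bool {
    if public_mode {
        // Public app: GET only, and no /v1/internal/* routes.
        method == "GET" && !path.starts_with("/v1/internal/")
    } else {
        // Committee app: full write surface (auth enforced at the CDN).
        true
    }
}

fn main() {
    assert!(is_allowed(true, "GET", "/v1/catalog"));
    assert!(!is_allowed(true, "POST", "/v1/catalog"));
    assert!(!is_allowed(true, "GET", "/v1/internal/reindex"));
    assert!(is_allowed(false, "POST", "/v1/internal/reindex"));
    println!("gate ok");
}
```

Because the check sits at the top of the handler, a misconfigured route further down can never widen the public surface.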
Requests land on the Akamai CDN first. If they carry a valid token (query string or
cookie), the CDN routes to the committee function. If not, the CDN routes to the
public function. The browser never knows which origin it hit — both respond under
www.helenatownshipparks.com, same certificate, same everything.
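The routing decision is expressed in Akamai Property Manager rules, not application code, but its logic reduces to roughly this (a sketch; the function and a plain string comparison stand in for the real rule, which should use a constant-time check):

```rust
// Illustrative origin selection: a valid token (query string or cookie)
// routes to the committee app; everything else gets the public app.
fn select_origin(
    query_token: Option<&str>,
    cookie_token: Option<&str>,
    expected: &str,
) -> &'static str {
    match query_token.or(cookie_token) {
        Some(t) if t == expected => "helena-parks",        // committee function
        _ => "helena-parks-public",                        // public function
    }
}

fn main() {
    assert_eq!(select_origin(Some("s3cret"), None, "s3cret"), "helena-parks");
    assert_eq!(select_origin(None, Some("wrong"), "s3cret"), "helena-parks-public");
    assert_eq!(select_origin(None, None, "s3cret"), "helena-parks-public");
    println!("routing ok");
}
```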
Akamai CDN (Ion)
TLS termination, global caching, token gate, path-based origin override, and Let's Encrypt DV cert reissue. Single property, both apps, one cert.
Akamai Cloud Functions
Fermyon Spin 3.x on WebAssembly (WASI-p1), deployed with spin aka deploy.
One Rust binary, two apps, same code — public-mode flag short-circuits edit routes.
Linode Object Storage
S3-compatible bucket on E3 cluster (us-ord-10). Holds the original
photos, derivatives (thumb + web + original), and the single-file catalog snapshots.
Bucket versioning enabled.
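One plausible key layout for the derivatives and snapshots might look like the sketch below. The prefixes and naming scheme are illustrative assumptions, not the bucket's documented layout:

```rust
// Hypothetical object-key scheme: per-asset derivative objects plus a
// single-file catalog snapshot (bucket versioning keeps history as well).
fn derivative_key(asset_id: &str, variant: &str) -> String {
    format!("assets/{asset_id}/{variant}.jpg")
}

fn snapshot_key(version: u64) -> String {
    format!("catalog/snapshot-{version:08}.json")
}

fn main() {
    assert_eq!(derivative_key("a1b2", "thumb"), "assets/a1b2/thumb.jpg");
    assert_eq!(snapshot_key(42), "catalog/snapshot-00000042.json");
    println!("keys ok");
}
```

Keeping the catalog as one snapshot object means the public site needs exactly one cacheable GET to render everything.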
LKE GPU classifier
Qwen2.5-VL-7B-Instruct + OpenCLIP ViT-L/14 on an RTX 4000 Ada node in LKE. vLLM runtime, typed-prompted VLM for classification + condition, CLIP for nearest-neighbor dedup across visits.
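The nearest-neighbor dedup reduces to cosine similarity over CLIP embeddings: two photos of the same bench taken on different visits should sit above some similarity cutoff. A minimal sketch, where the 0.92 threshold is illustrative rather than the production value:

```rust
// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

// Two embeddings above the cutoff are treated as the same physical asset.
fn is_duplicate(a: &[f32], b: &[f32]) -> bool {
    cosine(a, b) > 0.92
}

fn main() {
    // Toy 3-dim embeddings; real CLIP ViT-L/14 embeddings are 768-dim.
    let bench_monday = [0.6_f32, 0.8, 0.0];
    let bench_friday = [0.58, 0.81, 0.02];
    let swing_set = [0.0_f32, 0.1, 0.99];
    assert!(is_duplicate(&bench_monday, &bench_friday));
    assert!(!is_duplicate(&bench_monday, &swing_set));
    println!("dedup ok");
}
```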
HEIC sidecar
A small Python sidecar in the same LKE cluster that converts iPhone HEIC uploads to JPEG before ingest. A deliberate trade: edge WASM can't link libheif, so the heavy lifting runs in LKE behind a private endpoint.
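The ingest-time routing decision can be sketched by sniffing the ISO-BMFF `ftyp` box that HEIC/HEIF files begin with. The function name is an assumption and the brand list is a common subset, not exhaustive:

```rust
// Detect HEIC/HEIF uploads by magic bytes: offset 4..8 is "ftyp" and
// offset 8..12 is the major brand. Matches are sent to the LKE sidecar;
// everything else (e.g. JPEG) goes straight to object storage.
fn needs_heic_sidecar(bytes: &[u8]) -> bool {
    if bytes.len() < 12 || &bytes[4..8] != b"ftyp" {
        return false;
    }
    let brand = &bytes[8..12];
    brand == b"heic" || brand == b"heix" || brand == b"hevc"
        || brand == b"mif1" || brand == b"msf1"
}

fn main() {
    let heic = [0, 0, 0, 24, b'f', b't', b'y', b'p', b'h', b'e', b'i', b'c'];
    let jpeg = [0xFF, 0xD8, 0xFF, 0xE0, 0, 0, 0, 0, 0, 0, 0, 0];
    assert!(needs_heic_sidecar(&heic));
    assert!(!needs_heic_sidecar(&jpeg));
    println!("sniff ok");
}
```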
DataStream 2 → ClickHouse → Grafana
Every edge request is logged through Akamai DS2 with 17 standard fields and delivered to a shared ClickHouse cluster, then rendered in a Grafana dashboard alongside every other demo. Real observability from day one.
Why the architecture matters
- Latency on a phone in the woods. A volunteer walking Coy Mountain Nature Preserve with LTE signal is closer to the nearest Akamai edge node than to any central cloud region — photos enter the CDN at the edge and are streaming toward Linode Object Storage before a round trip to a central region could even complete.
- Reads are cattle, writes are pets. The committee's daily experience (viewing the map, browsing the catalog) is small-payload, globally distributed, perfectly suited for an edge function. GPU inference stays on dedicated LKE infrastructure where it belongs.
- The public version literally can't do harm. public_mode=true rejects every non-GET verb at the top of the Rust handler. There is no edit capability to leak — it's not on the wire.
- One cert, one property, two audiences. A single Akamai property handles both the public site and the committee site, with origin selection driven by token presence. One place to update rules, one place to read DS2 logs.
- Same code scales up. This reference architecture works for a township of about 1,200 the same way it would work for a continent — the edge functions are stateless, the object storage scales independently, and the GPU tier is the only thing you'd size up as traffic grows.
Code and contact
This is an open-source reference architecture. The full Rust source, Spin manifests, Kubernetes manifests, inference service, and this UI are all on GitHub. Issues and pull requests welcome.
View on GitHub →