Platform

How it works

A walk through what actually happens when you message Basal — told without jargon.

When you send a message to Basal in Slack, it travels through a short chain of services that each do one specific thing. Here’s the flow.

1 · Slack receives your message

Slack sends the event to a Luna-owned endpoint, a Cloudflare Worker called luna-slack-dm. Before doing anything else, the Worker verifies that the message actually came from Slack (a cryptographic signature check) and that the sender’s email ends in @lunadiabetes.com. If either check fails, the message is dropped.
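The two checks can be sketched in a few lines. This is an illustrative sketch, not the actual Worker code: Slack really does sign each request as `v0=HMAC-SHA256(secret, "v0:<timestamp>:<body>")`, but the function names and the domain constant here are assumptions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const ALLOWED_DOMAIN = "@lunadiabetes.com";

// Slack signs each request over the string "v0:<timestamp>:<body>".
function verifySlackSignature(
  signingSecret: string,
  timestamp: string,
  body: string,
  signature: string,
): boolean {
  const base = `v0:${timestamp}:${body}`;
  const expected =
    "v0=" + createHmac("sha256", signingSecret).update(base).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Constant-time comparison so an attacker learns nothing from timing.
  return a.length === b.length && timingSafeEqual(a, b);
}

function isAllowedSender(email: string): boolean {
  return email.toLowerCase().endsWith(ALLOWED_DOMAIN);
}
```

A forged request fails the first check; a valid Slack request from an outside email fails the second. Either way, the message never reaches the router.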

2 · The router decides who should handle it

luna-slack-dm hands the verified message to luna-router. The router is a dispatcher — today it always routes to Basal, but as more agents come online (like the Data Agent) the router will pick based on the channel, a slash command, or the question itself.
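The dispatch rule described above could look something like this. The agent names, the channel, and the slash command are hypothetical placeholders, since today only Basal exists.

```typescript
// Hypothetical routing rule: slash command wins, then channel,
// then the default. Names here are illustrative, not Luna's real config.
type AgentName = "basal" | "data";

interface InboundMessage {
  channel: string;
  text: string;
}

function route(msg: InboundMessage): AgentName {
  // An explicit slash command is the strongest signal...
  if (msg.text.startsWith("/data")) return "data";
  // ...then a dedicated channel...
  if (msg.channel === "#data-questions") return "data";
  // ...and everything else falls through to Basal, today's default.
  return "basal";
}
```

Keeping the rule in one small function is what lets new agents come online without touching luna-slack-dm or the agents themselves.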

3 · The agent handles your conversation

luna-agent-basal receives the message along with your conversation history. Each Lunite has their own dedicated Durable Object instance — essentially a small, stateful mini-server in Cloudflare’s edge network — with their own SQLite table of messages. Your conversations stay separate from everyone else’s, automatically.
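The isolation guarantee comes from the lookup itself: a user ID always resolves to the same Durable Object, and that object only ever queries its own SQLite storage. As a plain-TypeScript stand-in (a Map keyed by user models the same per-user separation; the types and names are illustrative):

```typescript
// In-memory stand-in for the per-Lunite message store. In production each
// user maps to their own Durable Object with its own SQLite table of
// messages; a Map keyed by user ID models the same isolation guarantee.
interface StoredMessage {
  role: "user" | "assistant";
  text: string;
}

class ConversationStore {
  private byUser = new Map<string, StoredMessage[]>();

  append(userId: string, msg: StoredMessage): void {
    const history = this.byUser.get(userId) ?? [];
    history.push(msg);
    this.byUser.set(userId, history);
  }

  history(userId: string): StoredMessage[] {
    // A user only ever reads their own rows; there is no cross-user query.
    return this.byUser.get(userId) ?? [];
  }
}
```

There is simply no code path that joins one user's history to another's, which is what "separate automatically" means in practice.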

4 · The model call goes through a proxy

Instead of calling the AI model directly, the agent goes through luna-ai-proxy. This is Luna’s single point of egress for all model API calls. It:

  • Holds the API keys so individual agents never touch credentials.
  • Strips any identifying headers before forwarding.
  • Adds hashed metadata so we can measure usage without logging conversations.
  • Forwards to Cloudflare’s AI Gateway — a layer that handles rate limiting, cost caps, and audit logging for every model request.
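The first three bullets can be sketched together. The specific header names, the hash truncation, and the function name are assumptions for illustration; the real proxy may differ.

```typescript
import { createHash } from "node:crypto";

// Illustrative list of headers that could identify the sender.
const IDENTIFYING_HEADERS = ["authorization", "x-slack-user", "cookie"];

function prepareOutbound(
  headers: Record<string, string>,
  userId: string,
  apiKey: string,
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(headers)) {
    // Strip anything that could identify the sender.
    if (!IDENTIFYING_HEADERS.includes(key.toLowerCase())) out[key] = value;
  }
  // Only the proxy holds the model API key; agents never see it.
  out["authorization"] = `Bearer ${apiKey}`;
  // Hashed metadata: usage can be aggregated per user without the
  // gateway ever learning who the user is.
  out["x-usage-id"] = createHash("sha256")
    .update(userId)
    .digest("hex")
    .slice(0, 16);
  return out;
}
```

The hash is one-way and deterministic: the same user always produces the same usage ID, so per-user cost tracking works, but the ID cannot be reversed back to an identity.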

5 · The model responds

The gateway forwards the request to the actual model (see Tech stack for which model). The response streams back through the same chain in reverse — gateway → proxy → agent → router → Slack — and lands in your DM.
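"Streams back" means no hop buffers the whole reply; each one relays chunks as they arrive, so the first words reach your DM before the last are generated. A minimal sketch of one relay hop over web streams (the function name is illustrative):

```typescript
// Identity relay: chunks from upstream pass straight through to the next
// hop as they arrive, without being collected into one big response first.
function relay(upstream: ReadableStream<Uint8Array>): ReadableStream<Uint8Array> {
  return upstream.pipeThrough(new TransformStream<Uint8Array, Uint8Array>());
}
```

Each service in the chain (gateway, proxy, agent, router) can do exactly this, which is why the whole path feels like one continuous stream.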

Why so many pieces?

Each Worker does one thing. That makes any one of them easy to replace, upgrade, or monitor in isolation. It also means that when something breaks, it’s obvious where to look.

The chain is also what makes switching models trivial: a new model provider is a one-line change in luna-ai-proxy, not a rewrite of every agent. See Tech stack for more on that.
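Why the swap is one line: the proxy owns the upstream address, so every agent inherits a provider change at once. A sketch under assumed names (the providers and URLs below are placeholders, not Luna's actual configuration):

```typescript
// Placeholder provider table; real entries would point at Cloudflare's
// AI Gateway routes for each provider.
const PROVIDERS = {
  "provider-a": "https://provider-a.example/v1/chat",
  "provider-b": "https://provider-b.example/v1/chat",
} as const;

// The one line that picks the model provider for every agent at once.
const ACTIVE_PROVIDER: keyof typeof PROVIDERS = "provider-a";

function upstreamUrl(): string {
  return PROVIDERS[ACTIVE_PROVIDER];
}
```

Because no agent ever sees a provider URL or key, none of them need to change when this constant does.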

Next: privacy & security covers how this architecture keeps your conversations safe.