Edge Config

Shadow-canary stores all routing state in a single Edge Config key. The middleware reads this key on every request (with a 60-second in-process cache). The three workflows write to it on every deploy or canary tick.

The key is derived deterministically from the repo slug as shadow-<repo-slug>-canary — for example, the repo owner/my-app uses the key shadow-my-app-canary. It is not configurable: the middleware reads VERCEL_GIT_REPO_SLUG (auto-injected by Vercel on every deploy) and the workflows read github.event.repository.name / $GITHUB_REPOSITORY. By construction, this rules out the silent-mismatch bug where one side writes to one key while the other reads a different one.
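
The derivation above can be sketched as follows, assuming the slug arrives either as owner/name (from $GITHUB_REPOSITORY) or as a bare repo name (from VERCEL_GIT_REPO_SLUG); the function name is illustrative, not part of shadow-canary:

```typescript
function edgeConfigKey(repoSlug: string): string {
  // "owner/my-app" (GitHub) and "my-app" (Vercel) both normalize to the bare repo name
  const name = repoSlug.includes("/") ? repoSlug.split("/")[1] : repoSlug;
  return `shadow-${name}-canary`;
}
```

Both sides running this same derivation is what guarantees they converge on one key.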

Full schema

{
  "deploymentDomainProd": "https://my-app-abc123.vercel.app",
  "deploymentDomainProdPrevious": "https://my-app-def456.vercel.app",
  "deploymentDomainShadow": "https://my-app-xyz789.vercel.app",
  "trafficShadowPercent": 1,
  "trafficProdCanaryPercent": 42,
  "shadowForceIPs": ["203.0.113.42"],
  "canaryPaused": false,
  "canaryStartedAt": "2024-11-15T08:00:00Z"
}
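
For reference, the same item expressed as a TypeScript type. The name ShadowConfig matches the one used later on this page; the optionality shown here is inferred from the field reference below and should be treated as a sketch:

```typescript
interface ShadowConfig {
  deploymentDomainProd: string;          // current prod deploy (per-deployment URL)
  deploymentDomainProdPrevious?: string; // absent or empty when no canary is in effect
  deploymentDomainShadow: string;        // current master deploy
  trafficShadowPercent: number;          // 0-100, typically 1; 0 is the kill-switch
  trafficProdCanaryPercent: number;      // 0-100; 100 means no active canary
  shadowForceIPs: string[];              // IPv4 addresses always routed to shadow
  canaryPaused: boolean;
  canaryStartedAt: string | null;        // ISO 8601, null when no active canary
}
```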

Field reference

deploymentDomainProd

Type: string (URL)
Written by: deploy-prod.yml
Read by: Middleware (for internal bookkeeping), admin UI
Example: https://my-app-abc123.vercel.app

The per-deployment URL of the current production deploy — the deploy that received vercel promote. This is the deploy where the middleware runs. It is stored here so the admin UI can display it without a Vercel API call.

deploymentDomainProdPrevious

Type: string (URL) or absent
Written by: deploy-prod.yml (sets on canary start, clears on [skip-canary])
Read by: Middleware (rewrite target for prod-previous bucket)
Example: https://my-app-def456.vercel.app

The per-deployment URL of the previous production deploy. Present during a canary and kept after it completes at 100%, so in-flight sessions on prod-previous can finish their journey. Overwritten by the next deploy-prod.yml run.

If this field is absent or empty, the middleware treats all prod traffic as prod-new (no canary in effect).

deploymentDomainShadow

Type: string (URL)
Written by: deploy-shadow.yml
Read by: Middleware (rewrite target for shadow bucket)
Example: https://my-app-xyz789.vercel.app

The per-deployment URL of the current master deploy. Updated on every push to master. If this field is absent, shadow routing silently degrades — the middleware returns NextResponse.next().

trafficShadowPercent

Type: number (0–100)
Default: 1
Written by: deploy-shadow.yml (initializes to 1 if absent)
Read by: Middleware

Percentage of requests routed to the shadow deploy. Typically 1. Set to 0 as a kill-switch — the shadow deploy continues to exist but receives no traffic. Propagates within 60 seconds, no redeploy needed.

trafficProdCanaryPercent

Type: number (0–100)
Default: 100 (no canary)
Written by: deploy-prod.yml (resets on each deploy), canary-ramp.yml (increments)
Read by: Middleware

Percentage of prod-bucket requests that go to the new deploy. The remainder goes to deploymentDomainProdPrevious.

Value  Meaning
100    No active canary — all prod traffic on new deploy
0      Full rollback — all prod traffic on previous deploy
1–99   Canary in progress
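
The prod-bucket decision can be sketched as below, assuming the middleware draws a uniform integer roll in [0, 100) per request; the helper name is illustrative:

```typescript
type ProdBucket = "prod-new" | "prod-previous";

function prodBucket(
  cfg: { trafficProdCanaryPercent?: number; deploymentDomainProdPrevious?: string },
  roll: number, // assumed uniform integer in [0, 100)
): ProdBucket {
  // Absent or empty previous domain: no canary in effect, everything is prod-new
  if (!cfg.deploymentDomainProdPrevious) return "prod-new";
  const pct = cfg.trafficProdCanaryPercent ?? 100;
  return roll < pct ? "prod-new" : "prod-previous";
}
```

Note how 0 and 100 fall out naturally: at 0 no roll is below the threshold (full rollback), at 100 every roll is (no canary).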

shadowForceIPs

Type: string[] (IPv4 addresses)
Default: []
Written by: Manual (Vercel dashboard or API)
Read by: Middleware

List of IPv4 addresses that are always routed to the shadow deploy, bypassing random roll and cookie stickiness. Used to route office or dev-team traffic to shadow permanently. No cookie is set for IP-forced requests.
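
Combining this with trafficShadowPercent, the shadow decision can be sketched as follows (again assuming a uniform roll in [0, 100); names are illustrative):

```typescript
function routeToShadow(
  cfg: { trafficShadowPercent?: number; shadowForceIPs?: string[] },
  clientIP: string,
  roll: number, // assumed uniform integer in [0, 100)
): boolean {
  // Forced IPs always go to shadow, bypassing the roll (and cookie stickiness)
  if (cfg.shadowForceIPs?.includes(clientIP)) return true;
  // A percent of 0 is the kill-switch: nothing else routes to shadow
  return roll < (cfg.trafficShadowPercent ?? 1);
}
```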

canaryPaused

Type: boolean
Default: false
Written by: Admin UI (Pause/Resume buttons), canary-ramp.yml (sets true on SLO failure)
Read by: canary-ramp.yml

When true, the canary cron skips the SLO check and bump, so the traffic split is frozen at the current trafficProdCanaryPercent. Resume via the admin UI or by setting the field to false directly in Edge Config.
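
The cron's skip logic might look like this; it is an assumed sketch, and canary-ramp.yml's real implementation may differ:

```typescript
function cronShouldBump(cfg: { canaryPaused?: boolean; trafficProdCanaryPercent?: number }): boolean {
  if (cfg.canaryPaused) return false;                  // frozen at the current split
  return (cfg.trafficProdCanaryPercent ?? 100) < 100;  // 100 means no active canary
}
```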

canaryStartedAt

Type: string (ISO 8601) or null
Written by: deploy-prod.yml (on canary start), canary-ramp.yml (clears at 100%)
Read by: Admin UI

Timestamp of when the current canary started. Used for display purposes in the admin UI. null means no active canary.

Cache TTL

The middleware caches the Edge Config response in module memory for 60 seconds. Config changes take up to 60 seconds to propagate to warm middleware instances. Cold starts (new instances) always fetch from Edge Config directly.

Manual edits

You can edit the config directly in the Vercel dashboard (Storage → your store → Items). Common manual operations:

  • Set trafficShadowPercent: 0 to kill the shadow (shadow still exists, gets no traffic)
  • Set trafficProdCanaryPercent: 0 to roll back a canary manually
  • Add or remove IPs in shadowForceIPs
  • Set canaryPaused: false to resume a paused canary
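
These edits can also be scripted against Vercel's Edge Config items endpoint (PATCH https://api.vercel.com/v1/edge-config/<id>/items). The endpoint path and the "upsert" operation follow Vercel's public API docs, but treat them as assumptions and verify against your account; the helper below only builds the request body:

```typescript
function upsertItemPayload<T>(key: string, value: T) {
  // Caution: upsert replaces the whole item, so read-modify-write the full config
  return { items: [{ operation: "upsert" as const, key, value }] };
}

// e.g. kill the shadow for owner/my-app without touching the other fields:
const current = { trafficShadowPercent: 1, trafficProdCanaryPercent: 100, shadowForceIPs: [] as string[] };
const body = upsertItemPayload("shadow-my-app-canary", { ...current, trafficShadowPercent: 0 });
```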

Sharing a store across projects

Vercel’s Pro plan limits you to 3 Edge Config stores per team. If you want to run shadow-canary on 4+ projects, host all their configs as separate keys inside one shared store — the per-repo namespacing of the key (shadow-<repo-slug>-canary) makes this safe by construction.

Setup

  1. Link the same Edge Config store to every project (Vercel Storage → your store → Projects → Connect).

  2. In the Edge Config store, create one item per project using its derived key:

    // Repo: owner/stargaze → key: shadow-stargaze-canary
    { "trafficShadowPercent": 0, "trafficProdCanaryPercent": 100, "shadowForceIPs": [] }
    // Repo: owner/checkout → key: shadow-checkout-canary
    { "trafficShadowPercent": 1, "trafficProdCanaryPercent": 100, "shadowForceIPs": [] }

That’s it — no per-project env var or GitHub secret to wire up. Middleware reads VERCEL_GIT_REPO_SLUG (auto-injected) and workflows read github.event.repository.name / $GITHUB_REPOSITORY. Both sides converge on the same derived key, so there is nothing to forget or desync.

Renaming the repo

The Edge Config key is derived from the repo slug, so if you rename the GitHub repo, the key changes and the old entry becomes orphaned. To migrate without a gap in routing state:

  1. Rename the repo on GitHub.
  2. Re-link the Vercel project to the renamed repo (Vercel Project Settings → Git). This updates VERCEL_GIT_REPO_SLUG on the next deploy.
  3. In the Edge Config store’s Items tab, copy the value from shadow-<old-slug>-canary into a new item at shadow-<new-slug>-canary.
  4. Trigger a deploy (push to master or production). The workflows will write to the new key from then on.
  5. Delete the old shadow-<old-slug>-canary item once you’ve confirmed routing still works.

Step 3 can be skipped: the next deploy-shadow.yml run repopulates deploymentDomainShadow, trafficShadowPercent, etc. from defaults — but any non-default values (like shadowForceIPs entries) will be lost.
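
Steps 3 and 5 can also be expressed as Edge Config item operations. The operation names follow Vercel's items API and the helper is illustrative, not part of shadow-canary:

```typescript
function renameMigrationOps(oldSlug: string, newSlug: string, oldValue: unknown) {
  return [
    { operation: "upsert" as const, key: `shadow-${newSlug}-canary`, value: oldValue }, // step 3: copy
    { operation: "delete" as const, key: `shadow-${oldSlug}-canary` },                  // step 5: cleanup
  ];
}
```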

Limits inside one store

Edge Config has a 64 KB total payload limit per store. A typical ShadowConfig item is ~500 bytes, so a shared store can host ~100 projects safely (up to ~120 in theory). The item size is dominated by shadowForceIPs — each IPv4 entry adds ~18 bytes. A large IP allowlist (20+ IPs) pushes an item over 1 KB and shrinks the per-store project ceiling fast.

The runtime enforces an 8 KB per-item cap in patchShadowConfig to prevent one project from pushing a shared store over the 64 KB limit and breaking every tenant.
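
The guard could be as simple as the sketch below. patchShadowConfig is the name used above, but this byte-count check is an assumed implementation, not the actual code:

```typescript
const MAX_ITEM_BYTES = 8 * 1024;

function assertItemSize(value: unknown): number {
  // Measure the serialized item in UTF-8 bytes, as Edge Config would store it
  const bytes = new TextEncoder().encode(JSON.stringify(value)).length;
  if (bytes > MAX_ITEM_BYTES) {
    throw new Error(`config item is ${bytes} bytes, over the 8 KB per-item cap`);
  }
  return bytes;
}
```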


Related:

  • Routing — how the middleware uses this config
  • Workflows — which fields each workflow writes
  • Dashboard — admin UI reading and writing this config