Edge Config
Shadow-canary stores all routing state in a single Edge Config key. The middleware reads this key on every request (with a 60-second in-process cache). The three workflows write to it on every deploy or canary tick.
The key is derived deterministically from the repo slug as shadow-<repo-slug>-canary — for example, the repo owner/my-app uses the key shadow-my-app-canary. It is not configurable: the middleware reads VERCEL_GIT_REPO_SLUG (auto-injected by Vercel on every deploy) and the workflows read github.event.repository.name / $GITHUB_REPOSITORY. This removes by construction the silent-mismatch bug where one side writes to one key while the other reads a different one.
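The derivation is simple enough to sketch as a pure function (the function name is illustrative, not taken from the codebase):

```typescript
// Derive the per-repo Edge Config key from the repo slug. The middleware
// gets the slug from VERCEL_GIT_REPO_SLUG; the workflows get it from the
// GitHub repo name. Both converge on the same string.
function deriveConfigKey(repoSlug: string): string {
  return `shadow-${repoSlug}-canary`;
}
```

For example, `deriveConfigKey("my-app")` returns `"shadow-my-app-canary"`.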
Full schema

```json
{
  "deploymentDomainProd": "https://my-app-abc123.vercel.app",
  "deploymentDomainProdPrevious": "https://my-app-def456.vercel.app",
  "deploymentDomainShadow": "https://my-app-xyz789.vercel.app",
  "trafficShadowPercent": 1,
  "trafficProdCanaryPercent": 42,
  "shadowForceIPs": ["203.0.113.42"],
  "canaryPaused": false,
  "canaryStartedAt": "2024-11-15T08:00:00Z"
}
```

Field reference
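The same schema expressed as a TypeScript interface (a sketch; the name ShadowConfig appears later on this page, but the exact declaration in the codebase may differ):

```typescript
// Shape of one per-repo Edge Config item. Optional/nullable fields follow
// the field reference below ("or absent" / "or null").
interface ShadowConfig {
  deploymentDomainProd: string;
  deploymentDomainProdPrevious?: string; // absent => no canary in effect
  deploymentDomainShadow?: string;       // absent => shadow routing degrades
  trafficShadowPercent: number;          // 0-100, typically 1
  trafficProdCanaryPercent: number;      // 0-100; 100 => no active canary
  shadowForceIPs: string[];              // IPv4 addresses, always sent to shadow
  canaryPaused: boolean;
  canaryStartedAt: string | null;        // ISO 8601; null => no active canary
}
```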
deploymentDomainProd
|  |  |
|---|---|
| Type | string (URL) |
| Written by | deploy-prod.yml |
| Read by | Middleware (for internal bookkeeping), admin UI |
| Example | https://my-app-abc123.vercel.app |
The per-deployment URL of the current production deploy — the deploy that received vercel promote. This is the deploy where the middleware runs. It is stored here so the admin UI can display it without a Vercel API call.
deploymentDomainProdPrevious
|  |  |
|---|---|
| Type | string (URL) or absent |
| Written by | deploy-prod.yml (sets on canary start, clears on [skip-canary]) |
| Read by | Middleware (rewrite target for prod-previous bucket) |
| Example | https://my-app-def456.vercel.app |
The per-deployment URL of the previous production deploy. Present during a canary and kept after it completes at 100%, so in-flight sessions on prod-previous can finish their journey. Overwritten by the next deploy-prod.yml run.
If this field is absent or empty, the middleware treats all prod traffic as prod-new (no canary in effect).
deploymentDomainShadow
|  |  |
|---|---|
| Type | string (URL) |
| Written by | deploy-shadow.yml |
| Read by | Middleware (rewrite target for shadow bucket) |
| Example | https://my-app-xyz789.vercel.app |
The per-deployment URL of the current master deploy. Updated on every push to master. If this field is absent, shadow routing silently degrades — the middleware returns NextResponse.next().
trafficShadowPercent
|  |  |
|---|---|
| Type | number (0–100) |
| Default | 1 |
| Written by | deploy-shadow.yml (initializes to 1 if absent) |
| Read by | Middleware |
Percentage of requests routed to the shadow deploy. Typically 1. Set to 0 as a kill-switch — the shadow deploy continues to exist but receives no traffic. Propagates within 60 seconds, no redeploy needed.
trafficProdCanaryPercent
|  |  |
|---|---|
| Type | number (0–100) |
| Default | 100 (no canary) |
| Written by | deploy-prod.yml (resets on each deploy), canary-ramp.yml (increments) |
| Read by | Middleware |
Percentage of prod-bucket requests that go to the new deploy. The remainder goes to deploymentDomainProdPrevious.
| Value | Meaning |
|---|---|
| 100 | No active canary — all prod traffic on new deploy |
| 0 | Full rollback — all prod traffic on previous deploy |
| 1–99 | Canary in progress |
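The split described above can be sketched as a pure function (names are illustrative; the real middleware also layers cookie stickiness on top, which is omitted here):

```typescript
type ProdTarget = "prod-new" | "prod-previous";

// Decide which prod deploy serves a request. `roll` is a uniform random
// number in [0, 100) drawn per request (or pinned by a sticky cookie).
function pickProdTarget(
  canaryPercent: number,
  prodPreviousDomain: string | undefined,
  roll: number,
): ProdTarget {
  // No previous deploy recorded: no canary in effect, everything is prod-new.
  if (!prodPreviousDomain) return "prod-new";
  // canaryPercent of traffic goes to the new deploy, the rest to previous.
  return roll < canaryPercent ? "prod-new" : "prod-previous";
}
```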
shadowForceIPs
|  |  |
|---|---|
| Type | string[] (IPv4 addresses) |
| Default | [] |
| Written by | Manual (Vercel dashboard or API) |
| Read by | Middleware |
List of IPv4 addresses that are always routed to the shadow deploy, bypassing random roll and cookie stickiness. Used to route office or dev-team traffic to shadow permanently. No cookie is set for IP-forced requests.
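Combining the force list with the percentage roll, the shadow decision might look like this (a sketch; names are illustrative and cookie stickiness is omitted — and whether the real middleware lets the force list win even when the percentage is 0 is an assumption):

```typescript
// Should this request be rewritten to the shadow deploy? Force-listed IPs
// always go to shadow and bypass the roll; everyone else is subject to
// trafficShadowPercent. `roll` is uniform in [0, 100).
function shouldGoToShadow(
  clientIP: string,
  forceIPs: string[],
  shadowPercent: number,
  roll: number,
): boolean {
  if (forceIPs.includes(clientIP)) return true; // bypasses the roll entirely
  return roll < shadowPercent;
}
```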
canaryPaused
|  |  |
|---|---|
| Type | boolean |
| Default | false |
| Written by | Admin UI (Pause/Resume buttons), canary-ramp.yml (sets true on SLO failure) |
| Read by | canary-ramp.yml |
When true, the canary cron skips the SLO check and bump; the traffic split stays frozen at the current trafficProdCanaryPercent. Resume via the admin UI or by setting the field back to false in Edge Config directly.
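The cron's gate can be sketched as a pure decision (names are illustrative; treating 100 as "nothing left to ramp" is an assumption based on the value table for trafficProdCanaryPercent):

```typescript
type TickAction = "skip" | "run-slo-check-and-bump";

// What the canary cron does on a tick: nothing while paused, nothing when
// there is no active canary, otherwise check SLOs and bump the percentage.
function canaryTickAction(paused: boolean, canaryPercent: number): TickAction {
  if (paused) return "skip";
  if (canaryPercent >= 100) return "skip"; // no active canary
  return "run-slo-check-and-bump";
}
```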
canaryStartedAt
|  |  |
|---|---|
| Type | string (ISO 8601) or null |
| Written by | deploy-prod.yml (on canary start), canary-ramp.yml (clears at 100%) |
| Read by | Admin UI |
Timestamp of when the current canary started. Used for display purposes in the admin UI. null means no active canary.
Cache TTL
The middleware caches the Edge Config response in module memory for 60 seconds. Config changes take up to 60 seconds to propagate to warm middleware instances. Cold starts (new instances) always fetch from Edge Config directly.
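A minimal sketch of such a cache, with the fetcher and clock injected so the TTL behavior is explicit (the real middleware presumably wraps its Edge Config read roughly this way; names are illustrative):

```typescript
const TTL_MS = 60_000;

// Wrap a fetch function with a 60-second module-memory cache. Warm
// instances reuse the cached value; a cold start has an empty cache
// and always fetches fresh.
function makeCachedFetcher<T>(
  fetchFresh: () => T,
  now: () => number = Date.now,
): () => T {
  let cached: { value: T; at: number } | undefined;
  return () => {
    if (cached && now() - cached.at < TTL_MS) return cached.value;
    cached = { value: fetchFresh(), at: now() };
    return cached.value;
  };
}
```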
Manual edits
You can edit the config directly in the Vercel dashboard (Storage > your store > Items). Common manual operations:
- Set `trafficShadowPercent: 0` to kill the shadow (shadow still exists, gets no traffic)
- Set `trafficProdCanaryPercent: 0` to roll back a canary manually
- Add or remove IPs in `shadowForceIPs`
- Set `canaryPaused: false` to resume a paused canary
Sharing a store across projects
Vercel’s Pro plan limits you to 3 Edge Config stores per team. If you want to run shadow-canary on 4+ projects, host all their configs as separate keys inside one shared store — the per-repo namespacing of the key (shadow-<repo-slug>-canary) makes this safe by construction.
Setup
1. Link the same Edge Config store to every project (Vercel Storage → your store → Projects → Connect).
2. In the Edge Config store, create one item per project using its derived key:

```jsonc
// Repo: owner/stargaze → key: shadow-stargaze-canary
{ "trafficShadowPercent": 0, "trafficProdCanaryPercent": 100, "shadowForceIPs": [] }
```

```jsonc
// Repo: owner/checkout → key: shadow-checkout-canary
{ "trafficShadowPercent": 1, "trafficProdCanaryPercent": 100, "shadowForceIPs": [] }
```
That’s it — no per-project env var or GitHub secret to wire up. Middleware reads VERCEL_GIT_REPO_SLUG (auto-injected) and workflows read github.event.repository.name / $GITHUB_REPOSITORY. Both sides converge on the same derived key, so there is nothing to forget or desync.
Renaming the repo
The Edge Config key is derived from the repo slug, so if you rename the GitHub repo, the key changes and the old entry becomes orphaned. To migrate without a gap in routing state:
1. Rename the repo on GitHub.
2. Re-link the Vercel project to the renamed repo (Vercel Project Settings → Git). This updates `VERCEL_GIT_REPO_SLUG` on the next deploy.
3. In the Edge Config store’s Items tab, copy the value from `shadow-<old-slug>-canary` into a new item at `shadow-<new-slug>-canary`.
4. Trigger a deploy (push to `master` or `production`). The workflows will write to the new key from then on.
5. Delete the old `shadow-<old-slug>-canary` item once you’ve confirmed routing still works.
Step 3 can be skipped: the next deploy-shadow.yml run repopulates deploymentDomainShadow, trafficShadowPercent, etc. from defaults — but any non-default values (like shadowForceIPs entries) will be lost.
Limits inside one store
Edge Config has a 64 KB total payload limit per store. A typical ShadowConfig item is ~500 bytes, so a shared store can host ~100 projects safely (up to ~120 in theory). The item size is dominated by shadowForceIPs — each IPv4 entry adds ~18 bytes. A large IP allowlist (20+ IPs) pushes an item over 1 KB and shrinks the per-store project ceiling fast.
The runtime enforces an 8 KB per-item cap in patchShadowConfig to prevent one project from pushing a shared store over the 64 KB limit and breaking every tenant.
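A sketch of that guard (`patchShadowConfig` is named above, but the exact check it performs is an assumption):

```typescript
const MAX_ITEM_BYTES = 8 * 1024;

// Reject a write whose serialized item exceeds the per-item cap, so one
// project cannot push a shared store toward the 64 KB store limit.
function assertItemSize(key: string, value: unknown): void {
  const bytes = new TextEncoder().encode(JSON.stringify(value)).length;
  if (bytes > MAX_ITEM_BYTES) {
    throw new Error(`${key}: item is ${bytes} bytes, over the ${MAX_ITEM_BYTES}-byte cap`);
  }
}
```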