When static is clearly greener and when it is not
Static pages reduce origin compute by serving prebuilt files from a CDN. That often leads to lower server energy use and fewer bytes transferred from origin. Static approaches are typically greener when most requests are identical for many users and when cache hit ratios at the edge are high.
Static is not automatically the lowest carbon choice when pages require per user personalization, when content changes frequently enough to force constant rebuilds, or when build times and artifact storage create significant recurring overhead for a high volume site. The goal is to compare total work across network, edge, origin, and client for the real traffic mix you have, rather than assuming static always wins.
A compact measurement model you can use today
Use a small set of measured quantities to compare architectures. Express emissions per visit as the sum of network emissions, edge compute emissions, origin compute emissions, and client device emissions. Work with measured bytes and measured compute rather than vendor claims.
Key variables
- B is bytes transferred per visit from CDN edge to client measured with RUM or synthetic tests.
- Nb is bytes transferred between edge and origin per cache miss, and M is the miss rate, the fraction of requests that miss the edge cache.
- Ce is edge compute energy per request when the edge runs functions or server side rendering.
- Co is origin compute energy per cache miss when origin must render or query a database.
- Cd is device energy per visit attributable to rendering and active network usage on the client.
- EFnet, EFgrid, and EFdevice are emission factors for network transfer, server compute in your hosting region, and client device energy, expressed in grams CO2e per unit energy or per byte when available.
Simple formula
Emissions per visit E can be approximated as:
E = B × EFnet + M × Nb × EFnet + EdgeWork + M × Co × EFgrid + Cd × EFdevice
EdgeWork is the emissions from any edge compute executed per request and can be expressed as Ce × EFgrid for the edge region. The important point is that the cache miss rate M multiplies both the origin compute term and the extra edge to origin transfer, so reducing misses cuts two terms at once.
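The formula translates directly into a small helper; a minimal sketch, assuming you supply all inputs in one consistent unit system (the function and its names are illustrative, not from any library):

```python
def emissions_per_visit(B, M, Nb, Ce, Co, Cd, EF_net, EF_grid, EF_device):
    """Approximate emissions per visit from the compact model.

    B  : bytes from edge to client per visit
    M  : edge cache miss rate (0..1)
    Nb : bytes from origin to edge per miss
    Ce : edge compute energy per request
    Co : origin compute energy per miss
    Cd : client device energy per visit
    EF_* : emission factors in whatever consistent units you measured
    """
    edge_work = Ce * EF_grid          # edge compute executed per request
    return (B * EF_net                # edge-to-client transfer
            + M * Nb * EF_net         # extra transfer on cache misses
            + edge_work
            + M * Co * EF_grid        # origin render/query work on misses
            + Cd * EF_device)         # client-side rendering and radio use
```

Because M multiplies two of the terms, the function makes the leverage of the cache miss rate explicit when you vary inputs.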
How to measure the inputs in practice
Measure bytes per visit with real user monitoring and group pages by template or by user intent. Measure cache hit ratio at the CDN and at any intermediate layer for the same URL groups. Collect server metrics for average CPU time per miss and average memory used during request handling. If you use managed edge functions, measure billed duration and memory as a proxy for compute energy. When you cannot measure energy directly, work with CPU seconds or billed compute as a proxy and apply a consistent conversion to emissions so comparisons stay fair.
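When only billed duration or CPU seconds are available, a consistent proxy conversion is enough for side by side comparisons. A minimal sketch; the 10 watt per core default is an illustrative assumption, not a measured figure:

```python
def compute_proxy_emissions(cpu_seconds, ef_grid_g_per_kwh, watts_per_core=10.0):
    """Convert measured CPU seconds into grams CO2e via an assumed
    per-core power draw. Only meaningful for relative comparisons
    made with the same conversion on both sides."""
    kwh = cpu_seconds * watts_per_core / 3600 / 1000
    return kwh * ef_grid_g_per_kwh
```

The absolute numbers this produces are not publishable facts; the point is that architecture A and architecture B get identical treatment.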
Decision rules that map product needs to architecture choices
Rule 1 Personalization fraction matters more than technology
If fewer than 10 percent of page views need per user personalization and the rest are identical across users, then static pages with client side or edge side personalization can achieve high cache hit ratios and low emissions. If more than 50 percent of requests require unique per user content, static builds lose their advantage because cache miss transfer and origin render work dominate.
Rule 2 Cache hit ratio is your single most influential variable
A high cache hit ratio reduces both network and origin compute emissions. Investing engineering time to raise the edge cache hit ratio often yields larger emissions reductions than switching templating engines or compressing assets. Focus on correct cache headers, cache key design that avoids unnecessary variation, and use of surrogate keys to invalidate only the items that change.
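To see why M dominates, sweep the miss rate while holding the payload fixed. A sketch with clearly hypothetical per unit costs; only the relative values matter:

```python
def visit_emissions(miss_rate, transfer=1.0, miss_transfer=0.5,
                    origin_per_miss=2.0):
    """Relative emissions per visit as a function of edge miss rate.
    All per-unit costs are hypothetical placeholders."""
    return transfer + miss_rate * (miss_transfer + origin_per_miss)

# Under these placeholder costs, halving the miss rate removes more
# work than most payload optimizations would.
for m in (0.30, 0.15, 0.05):
    print(f"miss rate {m:.2f}: relative emissions {visit_emissions(m):.2f}")
```

The slope of that line is exactly the per miss cost, which is why cache key design and surrogate key invalidation pay off so well.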
Rule 3 Use hybrid patterns when update frequency or build cost is high
If content updates frequently for a small portion of the site consider incremental approaches that regenerate only the changed pages or that use a cache with short time to live combined with stale while revalidate strategies. These hybrid patterns can avoid full site rebuilds that are costly in compute and time while keeping most traffic on cached static assets.
Rule 4 Edge compute is not free but it can reduce origin work
Edge functions can move rendering from origin to the edge, which reduces cross region network transfer and origin CPU per miss. Account for the cost and emissions of edge compute per request. If edge compute replaces many origin round trips and substantially shortens transfer distances, it can reduce emissions for geographically spread traffic. If the edge function runs heavy CPU tasks for every request and does not produce cacheable output, emissions will increase.
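Rule 4 reduces to a breakeven check: edge compute is paid on every request, while origin compute and the long haul transfer it implies are paid only on misses. A hedged sketch with all costs in an arbitrary shared proxy unit:

```python
def edge_beats_origin(miss_rate, edge_cost_per_request,
                      origin_cost_per_miss, saved_transfer_per_miss):
    """True if moving uncacheable render work to the edge lowers
    total work versus origin render behind an edge cache.

    Edge compute runs on every request; origin compute and the
    cross-region transfer it implies only happen on cache misses."""
    edge_total = edge_cost_per_request
    origin_total = miss_rate * (origin_cost_per_miss + saved_transfer_per_miss)
    return edge_total < origin_total
```

With a low miss rate the cached origin path usually wins, which matches the warning above about edge functions that run on every request without producing cacheable output.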
Rule 5 Build pipeline and artifact storage are part of the picture
For very large sites frequent full rebuilds and large artifact stores introduce recurring compute and storage costs. Measure build minutes and storage bytes per deploy and include those in your comparison when releases are frequent. In many cases an architecture that avoids repeated full builds by using server render on update or by selectively invalidating cache will lower total emissions.
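Build minutes are easiest to include when amortized per visit. A minimal sketch, assuming monthly deploy and traffic counts and whatever grams per CPU minute proxy you use elsewhere:

```python
def build_overhead_per_visit(deploys_per_month, build_minutes_per_deploy,
                             visits_per_month, g_per_cpu_minute):
    """Build-pipeline compute emissions amortized per visit, using the
    same proxy conversion as the rest of the comparison."""
    total_g = deploys_per_month * build_minutes_per_deploy * g_per_cpu_minute
    return total_g / visits_per_month
```

If this per visit overhead rivals the origin compute term in the formula, the build pipeline belongs in the architecture comparison rather than being treated as free.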
Work through three common scenarios
Scenario A High traffic public pages with little personalization
Static site generation with CDN caching is usually the lowest emissions choice. Focus on reducing payload per page, enabling long cache times on assets and HTML, and using stale while revalidate to keep hits high during background updates.
Scenario B Content heavy site with frequent updates and many pages
Full site rebuilds per update can be expensive. Consider incremental static regeneration or an architecture that serves cached HTML from the edge and regenerates it only when content changes. If real time freshness matters, choose a short time to live with background refresh or server rendering of a small subset of pages. Compare rebuild compute minutes against origin render work multiplied by the expected miss rate to pick the lowest carbon choice.
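That comparison can be written down directly; a sketch in which every input is a hypothetical placeholder you would replace with measured values:

```python
def monthly_build_cost(updates, build_minutes_per_update, g_per_cpu_minute):
    """Proxy emissions of rebuilding on every content update."""
    return updates * build_minutes_per_update * g_per_cpu_minute

def monthly_render_cost(visits, miss_rate, render_minutes_per_miss,
                        g_per_cpu_minute):
    """Proxy emissions of rendering on cache misses instead."""
    return visits * miss_rate * render_minutes_per_miss * g_per_cpu_minute
```

Whichever number is smaller for your measured update frequency, traffic, and miss rate points at the lower carbon option, under the stated proxy conversion.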
Scenario C High personalization and authenticated flows
When most visitors see unique content, cacheability drops and origin or authenticated edge compute will be required. In that case optimize the backend render path for efficiency, cache parts of the response that are common across users, and offload non critical personalization to the client when possible to reduce server work per visit.
Example calculation using labeled hypothetical numbers
Use the variables from the measurement model. Suppose you measure average B equal to 1.2 megabytes per page, a cache miss rate M at the edge of 15 percent, Nb of 600 kilobytes per miss, edge compute Ce that is small for static pages, and origin compute Co that is significant for dynamic render. Rather than provide a national emission factor here, compute a relative comparison by converting compute to CPU seconds and bytes to network units, then compare architecture A and architecture B using the same conversion. The result will show whether cache misses or origin renders dominate and where to invest effort first.
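Carrying the labeled example through with placeholder per unit conversions (1 gram per megabyte and 1 gram per CPU second here, purely illustrative) might look like this:

```python
def relative_emissions(mb_to_client, miss_rate, origin_mb_per_miss,
                       edge_cpu_s, origin_cpu_s_per_miss,
                       g_per_mb=1.0, g_per_cpu_s=1.0):
    """Relative emissions per visit; only ratios between scenarios
    computed with the same conversions are meaningful."""
    transfer = mb_to_client * g_per_mb
    miss_transfer = miss_rate * origin_mb_per_miss * g_per_mb
    compute = (edge_cpu_s + miss_rate * origin_cpu_s_per_miss) * g_per_cpu_s
    return transfer + miss_transfer + compute

# Architecture A: static, 15% miss rate, cheap misses.
a = relative_emissions(1.2, 0.15, 0.6, edge_cpu_s=0.0,
                       origin_cpu_s_per_miss=0.02)
# Architecture B: rendered per request, so every request is a "miss".
b = relative_emissions(1.2, 1.0, 0.6, edge_cpu_s=0.0,
                       origin_cpu_s_per_miss=0.2)
```

Under these placeholder conversions the per request render architecture carries noticeably more work per visit; only the ratio between a and b, not the absolute numbers, carries meaning.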
Label any worked numbers clearly as examples and avoid publishing hypothetical numbers as site specific facts.
Practical checklist to run a quick compare in one day
- Collect RUM bytes per page for representative templates.
- Get CDN logs to measure cache hit ratio and bytes per miss.
- Measure origin request CPU time and database query time for a miss.
- Estimate client rendering energy using device metrics if available or treat it as constant across architectures for the same payload.
- Run the simple formula to locate the largest contributors and prioritize fixes.
What to measure after you change architecture
Track bytes per visit, cache hit ratio, origin CPU minutes, build minutes per deploy, and real user engagement metrics. Report emissions per visit and total monthly emissions using the same method you used for the baseline to ensure comparability. Show uncertainty ranges if emission factors are approximated.
Practical tradeoff examples that teams face
Teams often must choose between speed, freshness, personalization, cost, and emissions. Use the measurement model to make transparent choices. For example, investing in a small client side personalization library can keep HTML cacheability high and dramatically cut origin work for a modest increase in client CPU. Conversely, moving heavy business logic from the client to the edge may improve control and security but will increase edge compute per visit. The right decision depends on the cacheable fraction of traffic and the comparative compute cost.
Making data driven choices avoids dogma and lets teams balance product needs with lower carbon outcomes.