Static site versus dynamic site emissions and real world tradeoffs
What we mean by static and dynamic in this context
Static site means pages delivered as pre rendered assets where a CDN serves the same file to many visitors without invoking origin compute on each request. Static assets can be generated at build time or produced on demand and cached. Dynamic site means page HTML or API responses that are generated on the server at request time or require per visit compute because of personalization, authentication, or frequently changing data.
Where emissions are created for web experiences
Three technical layers create most of the emissions you will care about. First, hosting compute and storage used by your build pipeline, application servers, serverless functions, and databases. Second, network transfer that moves bytes from origin through CDNs to end devices. Third, device energy used by the client to parse, render, and execute code. Each layer sits on an electricity grid with its own carbon intensity, which determines actual greenhouse gas outcomes.
Any architecture can move work between these layers. Moving work off repeated server side compute to a CDN or to a client device reduces origin electricity use but can increase bytes served or device CPU. The net emissions effect depends on traffic shape, caching efficiency, and local grid intensity for involved hosts and clients.
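The tradeoff above can be sketched as a simple model: per visit emissions are the sum of energy used at each layer, weighted by the carbon intensity of the grid that layer runs on. All energy and intensity figures below are hypothetical placeholders, not measured values:

```python
# Rough per-visit emissions model across the three layers.
# Every number here is an illustrative assumption; substitute measured
# energy figures and real grid factors for your own hosts and users.

def per_visit_gco2e(origin_kwh, network_kwh, device_kwh,
                    origin_intensity, network_intensity, device_intensity):
    """Sum layer energy (kWh) weighted by grid carbon intensity (gCO2e/kWh)."""
    return (origin_kwh * origin_intensity
            + network_kwh * network_intensity
            + device_kwh * device_intensity)

# Shifting work from origin to the client changes which grid's intensity
# applies, so the net effect depends on both grids, not just total energy.
server_heavy = per_visit_gco2e(0.0005, 0.0002, 0.0001, 400, 300, 250)
client_heavy = per_visit_gco2e(0.0001, 0.0003, 0.0004, 400, 300, 250)
```

With these particular placeholder numbers the client heavy variant comes out lower, but a dirtier client grid or a less efficient device could flip the result, which is the article's point.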
Common tradeoffs that change the emissions picture
Traffic volume and cache hit rate matter more than the raw architecture name. A static page served from a CDN to millions of visitors keeps origin compute near zero for cached requests and will usually reduce per visit emissions compared with generating HTML on the origin for each request. If cache-control headers are weak or query parameters frequently bust caches, more requests fall through to origin compute and the advantage shrinks.
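The effect of hit rate is easy to see in arithmetic, since only cache misses reach origin. A minimal sketch with illustrative traffic and per render energy figures:

```python
# Only cache misses invoke origin compute, so hit rate scales origin
# energy directly. The visit counts and kWh-per-render figure below are
# hypothetical; plug in your own logs and measurements.

def origin_render_kwh(visits, cache_hit_rate, kwh_per_render):
    misses = visits * (1.0 - cache_hit_rate)
    return misses * kwh_per_render

# Dropping the hit rate from 0.99 to 0.80 multiplies origin work by 20x
# for the same traffic, which is why cache busting erodes the advantage.
strong = origin_render_kwh(1_000_000, 0.99, 0.0005)  # 10,000 misses
weak = origin_render_kwh(1_000_000, 0.80, 0.0005)    # 200,000 misses
```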
Personalization and authentication tend to force dynamic work. Pages that show per user data often require server calls that cannot be globally cached. Teams can sometimes move personalization to client side API calls that are small and cached selectively. That trade moves compute to the client and increases network requests. Whether that is lower carbon depends on call size, client device efficiency, and whether the client is on a low or high carbon grid at that time.
Build frequency versus runtime compute is another axis. Static sites with frequent builds shift electricity use to the CI pipeline. If you rebuild thousands of pages many times per day, build servers will consume energy. For stable content a single build serves many visits and amortizes that cost across traffic. For highly churned content the repeated build cost can exceed on demand rendering unless builds are incremental and optimized.
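The amortization argument can be made concrete: build energy per visit is total daily build energy divided by daily visits, to compare against the energy of one on demand render. The figures below are illustrative assumptions only:

```python
# Amortized build energy per visit versus per-request rendering.
# Hypothetical numbers: one build costs far more than one render, but a
# cached build is shared across every visit it serves before the next build.

def amortized_build_kwh_per_visit(kwh_per_build, builds_per_day, visits_per_day):
    return kwh_per_build * builds_per_day / visits_per_day

static_cost = amortized_build_kwh_per_visit(0.5, 4, 100_000)
dynamic_cost = 0.0005  # assumed kWh for a single on-demand render

# With stable content and high traffic the amortized build wins easily;
# frequent full rebuilds over low traffic can flip the comparison.
```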
Edge functions and serverless compute can blur the lines. Small functions executed at a CDN edge can replace heavyweight origin rendering, reduce network hops, and reduce transfer size by tailoring HTML to the request. But edge compute has its own energy cost per invocation and cold start behaviors that can affect total compute. The balance depends on the size and frequency of those invocations.
How to compare emissions in a real world decision
Pick clear boundaries. Decide whether you will account for only production request time emissions or include build pipeline and CI costs. For product decisions you usually need both. Define a visit or session shape that matches your users. A blog reader path differs from an authenticated dashboard path.
Measure rather than guess. Use real user monitoring to capture bytes transferred, server response times, and client CPU where possible. Complement RUM with synthetic tests that exercise cached and uncached requests from a representative set of locations. Log origin invocation counts and function durations for serverless or edge code.
Convert electricity to emissions using locally relevant carbon intensity data for each execution location. Grid carbon intensity varies by region and hour. If you want simple comparability you can use a standard annual average for a region, but if you aim for reporting accuracy consider time and location aware factors provided by carbon intensity services.
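The choice between an annual average and a time aware factor can be expressed as a simple lookup with a fallback. The regions, hours, and intensity values below are entirely hypothetical; real factors come from grid data or carbon intensity services:

```python
# Annual-average versus time-and-location-aware carbon intensity.
# All table values are made-up placeholders for illustration.

ANNUAL_AVERAGE = {"eu-west": 300, "us-east": 420}  # gCO2e/kWh

HOURLY = {
    ("eu-west", 3): 180,   # e.g. windy night, cleaner mix
    ("eu-west", 19): 410,  # e.g. evening peak, dirtier mix
}

def intensity(region, hour=None):
    """Prefer an hourly factor when one exists, else fall back to the average."""
    if hour is not None and (region, hour) in HOURLY:
        return HOURLY[(region, hour)]
    return ANNUAL_AVERAGE[region]
```

The same kWh figure can convert to quite different gCO2e depending on which factor you apply, so state which one a report uses.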
Isolate variables by running A/B tests or short experiments. Deploy a static version and a dynamic version behind the same domain and route a sample of traffic to each. Keep content, assets, and third party calls identical except for the rendering method, so differences trace back to architecture. Measure origin compute, network transfer, and client CPU per visit for both variants.
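A deterministic split keeps each visitor on one variant across requests, which keeps the measurement clean. A minimal sketch, assuming a stable visitor identifier is available; the 50/50 split and variant names are illustrative:

```python
# Deterministic A/B assignment for the rendering experiment: hashing a
# stable visitor id yields a uniform bucket in [0, 1), so the same
# visitor always lands on the same variant.

import hashlib

def assign_variant(visitor_id, static_share=0.5):
    digest = hashlib.sha256(visitor_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "static" if bucket < static_share else "dynamic"
```

In practice this logic would sit in the router or CDN configuration; the point is that assignment is reproducible, not session dependent.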
Practical measurement steps
- Define a representative user journey for the site segment you are comparing.
- Collect RUM metrics for bytes, load events, and CPU where available.
- Capture server metrics for request counts, compute time, and storage operations.
- Run synthetic tests for cold and warm cache conditions from representative locations.
- Apply regional carbon intensity factors to host and client energy estimates.
- Compare per visit and aggregate emissions across expected traffic volumes and update frequencies.
Decision rules to help choose
If most pages are cacheable and you have medium to high traffic, then static delivery from a CDN will usually minimize emissions per visit because it avoids repeated origin compute. If pages are personalized and cannot be cached with a high hit rate, you will need dynamic rendering for correctness. In that case, optimize the dynamic path for minimal compute time, efficient serialization, and strong short term caching for subresources.
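The rule above can be sketched as a small decision function. The thresholds are illustrative judgment calls, not measured cutoffs, and any real decision should rest on the measurements described earlier:

```python
# The decision rule as code. Thresholds (0.8, 0.9, 10,000) are
# hypothetical examples of "mostly cacheable", "high hit rate", and
# "medium to high traffic", not empirically derived values.

def recommend(cacheable_fraction, achievable_hit_rate, daily_visits):
    if cacheable_fraction > 0.8 and achievable_hit_rate > 0.9 and daily_visits > 10_000:
        return "static CDN delivery"
    if cacheable_fraction < 0.2:
        return "optimized dynamic rendering"
    return "hybrid: static shell plus dynamic APIs"
```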
When content updates are extremely frequent and you cannot rely on incremental builds, prefer dynamic rendering or incremental regeneration workflows that only update changed pages. When build pipelines are heavy and run often, consider moving non-critical tasks to scheduled batch windows, which can be timed for lower grid carbon intensity.
For mixed sites prefer hybrid patterns that combine the best of both approaches. Examples include static pages for public content and dynamic APIs for user data, static shell with client side data hydration, or static pages with small edge functions for tailored elements. Hybrid choices let you keep the high traffic footprint low while still supporting personalization where it matters.
Architecture patterns and when they make sense
Static site delivered by CDN. Best when most content is public and cacheable. Low origin compute for cached hits. Consider this when traffic is large and content churn is low.
Static site with client side hydration. Use when you need interactive components but the initial HTML can be static. This pushes some work to the client. Measure client CPU impact on phones because device energy can dominate for mobile heavy audiences.
Incremental regeneration or on demand revalidation. Good when most pages are static but some require periodic updates. This reduces full rebuild frequency and keeps build compute focused on changed content.
Server side rendering for each request. Required when content must be fresh per request or when SEO requires server rendered HTML that cannot be cached. Optimize by using efficient templates, caching fragments, and keeping payloads minimal.
Edge functions for personalization at the CDN. Use when you need low latency personalization and can keep function execution small and fast. Measure function duration and invocation counts carefully because high frequency small functions can still add up.
Example scenarios without numbers
Scenario one: A technical blog with mostly public posts and occasional updates. Serving static pages from a CDN will reduce repeated origin compute and is low maintenance. A build pipeline that runs on content changes is easy to amortize because each build covers many future visits.
Scenario two: An ecommerce store with product pages that are public but cart and checkout require per user state. Use static pages for product listings and images. Keep cart and checkout dynamic and optimized for minimal server side work. Cache product pages aggressively and use small authenticated APIs for cart operations.
Scenario three: A SaaS dashboard that is personalized for each user with frequent updates. Pure static delivery will not work. Focus on reducing server side compute per request by caching query results where possible, reducing data transferred, and considering client side rendering for visualization elements if that does not harm user device battery.
How teams can reduce emissions regardless of architecture
Make bytes smaller by optimizing images and video, use modern compressed formats, and avoid shipping large JavaScript bundles that run unnecessary code. Tune caching headers and use ETags to avoid unnecessary origin work. Measure third party scripts and remove or defer ones that add client CPU or network transfer without clear value.
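The ETag mechanism works by letting the server answer a revalidation request with 304 Not Modified instead of re-sending the body. A minimal sketch of the server-side check, not a full HTTP handler; the hash truncation and response shape are illustrative:

```python
# Conditional responses with ETags: when the client's cached ETag still
# matches, return 304 with an empty body and skip re-sending bytes.

import hashlib

def respond(body, if_none_match):
    # Derive a strong ETag from the content (truncated here for brevity).
    etag = '"' + hashlib.sha256(body).hexdigest()[:16] + '"'
    if if_none_match == etag:
        return 304, etag, b""   # client cache is still valid; no body sent
    return 200, etag, body      # send the body and the current ETag

status, tag, _ = respond(b"<html>...</html>", None)       # first request
status2, _, payload = respond(b"<html>...</html>", tag)   # revalidation
```

The saving is in both network transfer and whatever origin work would have regenerated the body.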
Consider scheduling heavy build tasks and batch processes to times when the electricity grid is cleaner if you report or optimize for time aware emissions. Use carbon aware scheduling tools or APIs if you want automated alignment with lower carbon windows.
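At its simplest, carbon aware scheduling means picking the forecast window with the lowest grid intensity. The forecast values below are hypothetical; in practice they would come from a carbon intensity forecast API:

```python
# Carbon-aware batch scheduling sketch: choose the hour with the lowest
# forecast grid intensity. Forecast values are made-up placeholders.

forecast = {  # hour of day -> forecast gCO2e/kWh
    0: 320, 3: 180, 6: 240, 12: 380, 18: 450, 21: 300,
}

def cleanest_hour(forecast):
    """Return the hour whose forecast carbon intensity is lowest."""
    return min(forecast, key=forecast.get)

build_hour = cleanest_hour(forecast)  # schedule the heavy build then
```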
How to make credible sustainability claims
Report the scope and boundaries of your measurement. State whether you include build pipeline emissions, third party hosting, and client device energy. Share the methods used to convert energy to emissions, including the carbon intensity sources and whether those are average or time aware. Show uncertainty bounds, and avoid claims of absolute zero unless emissions are fully covered by documented, clearly scoped offsets.
When publishing comparisons, provide the representative user journey, traffic assumptions, and cache hit rates so readers can assess whether the result applies to their context.
Next steps for teams evaluating their site
Start with measurement. Run a short A/B experiment that isolates rendering from other differences. Use the decision rules in this article to choose an architecture that minimizes repeated origin compute and network transfer for your highest volume paths. Iterate and measure again after making optimizations. Treat emissions as another performance metric that benefits from continuous measurement and small incremental improvements.