How to benchmark Hugo vs Astro build speeds

A diagnostic framework for measuring and comparing Hugo and Astro compilation times under identical content loads. This guide establishes reproducible metrics, cache isolation protocols, and configuration-level bottleneck resolution.

Standardize content volume, asset types, and routing depth across both generators. Isolate plugin overhead by disabling non-essential features during baseline tests. Track cold versus warm cache metrics using identical CI runner specifications. Reference established methodologies, such as those in Choosing the Right Static Site Generator for Production, to contextualize performance thresholds.

Environment Standardization & Baseline Configuration

Hardware and runtime variables must be eliminated to ensure deterministic benchmark results. Pin Node.js and Go versions to exact releases across all test environments. Disable OS-level background services and thermal throttling mechanisms.

Use Docker containers with identical resource limits to prevent host-level contention. Cross-reference Hugo Build Times for Large Repositories for baseline memory allocation thresholds.

docker run --cpus=2 --memory=4g -it node:20-alpine /bin/bash
docker run --cpus=2 --memory=4g -it golang:1.22-alpine /bin/bash
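To confirm the pins took effect, record the exact toolchain versions inside each container before any run. The commands below assume Hugo and the Astro CLI are already installed in their respective containers.

# Inside the Node container (Astro side)
node --version && npx astro --version

# Inside the Go container (Hugo side)
hugo version && go version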

Content Volume & Asset Injection Protocol

Generate identical synthetic datasets to stress-test markdown parsing, image processing, and routing generation. Create 10k, 50k, and 100k markdown files with randomized frontmatter. Inject identical image sets at 1MB, 5MB, and 10MB across WebP, AVIF, and PNG formats.

Disable external API calls and remote data fetching during all tests. Verify directory structure parity between content/ for Hugo and src/content/ for Astro.

python3 -c "import os; os.makedirs('test_repo/content', exist_ok=True); [open(f'test_repo/content/post-{i}.md', 'w').write(f'---\ntitle: Post {i}\n---\n\nBody text {i}\n') for i in range(10000)]"
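For the full protocol, a generator that writes randomized YAML frontmatter into both directory layouts keeps the inputs identical. A minimal bash sketch; the test_repo/ paths, field names, and post count are illustrative assumptions:

#!/bin/bash
# Illustrative dataset generator: identical markdown bodies with randomized
# YAML frontmatter, written into both generators' content layouts
N=10000
for dir in test_repo/hugo/content/posts test_repo/astro/src/content/posts; do
  mkdir -p "$dir"
  for i in $(seq 1 "$N"); do
    cat > "$dir/post-$i.md" <<EOF
---
title: "Post $i"
date: 2024-01-$(printf '%02d' $(( i % 28 + 1 )))
tags: ["tag$(( RANDOM % 10 ))", "tag$(( RANDOM % 10 ))"]
draft: false
---

Synthetic body text for post $i.
EOF
  done
done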

Benchmark Execution & Metric Collection

Capture precise compilation durations, memory peaks, and I/O wait times. Use the time command or perf for wall-clock and CPU cycle tracking. Log verbose outputs to isolate pipeline bottlenecks.

Record peak RSS memory and disk I/O operations per run. Execute five iterations per generator, discard the highest and lowest values, and average the remaining three to damp run-to-run outliers.

for i in {1..5}; do /usr/bin/time -v hugo --gc --minify 2>> hugo_metrics.log; done
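To apply the discard-high-and-low rule, a short pipeline over the CSV emitted by the timing wrapper below computes the trimmed mean; the column layout matches that script's header:

# Trimmed mean for one generator: sort the five wall times numerically,
# drop the fastest and slowest, average the remaining three
awk -F, '$1 == "hugo" { print $3 }' benchmark_results.csv \
  | sort -n | sed '1d;$d' \
  | awk '{ sum += $1 } END { printf "hugo trimmed mean: %.2f s\n", sum / NR }'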

CI/CD Pipeline Integration & Caching Tests

Simulate production deployment workflows to evaluate incremental build performance. Configure GitHub Actions with identical runner types. Test with empty cache, partial cache, and full cache states.

Measure hugo server --disableFastRender versus astro dev cold start times as a separate developer-experience metric; production benchmarks still use the build commands (see Common Pitfalls). Validate artifact upload and download overhead impact on total pipeline duration.
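One rough way to time a dev-server cold start is to launch the server and poll until it responds. The port and readiness check below are assumptions, not fixed values:

# Cold-start probe for the Hugo dev server; repeat with `npx astro dev`
# and Astro's default port 4321 for the other side
START=$(date +%s%N)
hugo server --disableFastRender --port 1313 &> /dev/null &
SERVER_PID=$!
until curl -sf http://localhost:1313/ > /dev/null; do sleep 0.1; done
END=$(date +%s%N)
kill "$SERVER_PID"
awk "BEGIN { printf \"cold start: %.2f s\n\", ($END - $START) / 1e9 }"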

# GitHub Actions cache configuration
- uses: actions/cache@v3
  with:
    path: |
      .hugo_cache/
      node_modules/.astro/
    key: ${{ runner.os }}-ssg-benchmark-${{ hashFiles('**/package-lock.json', 'go.sum') }}
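To reproduce the three cache states locally before a measured run, clear the generator cache directories listed under Common Pitfalls. Which caches count as partial is a judgment call, so treat this split as an assumption:

# Reset to the desired cache state before a measured build
case "$CACHE_STATE" in
  cold)    rm -rf .hugo_cache/ resources/_gen/ .astro/ node_modules/.cache/vite/ ;;
  partial) rm -rf .astro/ node_modules/.cache/vite/ ;;  # keep Hugo caches only
  full)    ;;                                           # leave every cache intact
esac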

Configuration Tuning for Isolated Testing

Strip non-essential features to isolate pure markdown-to-HTML compilation speed. Disable RSS generation, sitemap creation, and comment systems. Turn off Vite minification and sourcemap generation.

Hugo Configuration (config.toml)

baseURL = "http://localhost/"
languageCode = "en-us"
title = "Benchmark Test"

# disableKinds removes RSS and sitemap output at the generator level
disableKinds = ["RSS", "sitemap"]

[params]
  disableComments = true

[build]
  writeStats = false
  noJSConfigInAssets = true

[markup]
  [markup.goldmark]
    [markup.goldmark.renderer]
      unsafe = false

Astro Configuration (astro.config.mjs)

import { defineConfig } from 'astro/config';

export default defineConfig({
  site: 'http://localhost',
  output: 'static',
  build: {
    inlineStylesheets: 'auto',
    format: 'directory',
    concurrency: 4
  },
  vite: {
    build: {
      minify: false,
      sourcemap: false,
      rollupOptions: {
        output: { manualChunks: () => null }
      }
    }
  }
});

Automated Timing Wrapper

Use the following script to capture wall-clock time and peak RSS memory across iterations; peak RSS is read from GNU time's verbose output. The script writes structured CSV data for statistical analysis.

#!/bin/bash
set -e
ITERATIONS=5
LOG_FILE="benchmark_results.csv"
echo "Generator,Iteration,WallTime_s,PeakRSS_MB" > "$LOG_FILE"

for gen in hugo astro; do
  for i in $(seq 1 "$ITERATIONS"); do
    if [ "$gen" = "hugo" ]; then
      CMD="hugo --gc --minify"
    else
      CMD="npx astro build"
    fi
    START=$(date +%s%N)
    # GNU time -v writes peak RSS (in KB) to the -o file
    /usr/bin/time -v -o time_output.txt $CMD > /dev/null
    END=$(date +%s%N)
    WALL=$(awk "BEGIN { printf \"%.2f\", ($END - $START) / 1e9 }")
    PEAK_RSS=$(awk '/Maximum resident set size/ { printf "%.1f", $NF / 1024 }' time_output.txt)
    echo "$gen,$i,$WALL,$PEAK_RSS" >> "$LOG_FILE"
  done
done

Common Pitfalls

Inconsistent cache states between test runs: Hugo caches processed images in resources/_gen; Astro uses .astro/ and node_modules/.cache/vite. Failing to clear these directories before cold-start tests skews cold-build numbers toward warm-cache performance.

Dev server versus production build confusion: running hugo server or astro dev triggers HMR, file watchers, and unminified asset bundling. Benchmarks must exclusively use hugo and astro build to reflect actual deployment compilation times.

Uncontrolled plugin and integration overhead: Astro integrations and Hugo module pipelines execute during builds. Leaving them enabled without identical test datasets adds disproportionate processing time that masks core generator performance.

Thermal throttling and CPU governor variance: laptops and unmanaged CI runners dynamically scale CPU frequency under sustained load. Pin the CPU governor to performance mode and enforce Docker resource limits for deterministic execution, as sketched below.
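On Linux hosts, the governor can be pinned before a run; cpupower availability depends on the distribution, so a direct sysfs write is included as a fallback.

# Pin every core to the performance governor (requires root; Linux only)
sudo cpupower frequency-set -g performance
# Equivalent sysfs write if cpupower is not installed
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor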

FAQ

Should I benchmark cold or warm builds for production relevance? Benchmark both. Cold builds reflect initial CI/CD deployment times and cache misses. Warm builds represent incremental content updates. Production relevance depends on deployment frequency and cache retention policies.

How do I normalize Astro's Vite cache versus Hugo's Go cache? Delete .astro/, node_modules/.cache/, and resources/_gen/ before each cold test. For warm tests, preserve only the generator-specific cache directories. Run identical incremental content additions.

What CI runner specs guarantee reproducible results? Use fixed-spec runners. Pin Docker base images. Enforce --cpus and --memory limits to prevent host-level resource contention. Avoid shared runners with unpredictable background workloads.

Does markdown frontmatter parsing affect comparative speeds? Yes. Hugo uses Go's native TOML/YAML parsers. Astro relies on Node.js libraries. Standardize frontmatter format to YAML only. Disable schema validation during tests to isolate parsing overhead.
