
Speeding Up Linux Kernel Builds with ccache: 6.8x Faster Compilation

·6 mins

ccache, the compiler cache, is probably one of the most underrated tools for C/C++ development. If you’re compiling the Linux kernel regularly (and honestly, who isn’t these days?), this tool can be a game changer. I’ve been using it for years, first discovered it while optimizing our CI pipelines at a previous startup, and now it’s an essential part of my workflow at Semrush.

Recently though, I hit a wall trying to get ccache to work with kernel builds. The cache just wouldn’t hit, and my build times stayed frustratingly long. Turns out there’s a non-obvious trick to it.

The Problem with Kernel Builds and ccache #

The Linux kernel, being the beautiful beast it is, includes timestamps in its build output. You can see this if you check:

$ cat /proc/version
Linux version 4.15.0-13-generic (buildd@lgw01-amd64-028)
(gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9))
#14~16.04.1-Ubuntu SMP
Sat Mar 17 03:04:59 UTC 2018

That timestamp at the end? It changes with every build, making your builds non-deterministic. And ccache hates non-deterministic builds - it can’t cache what keeps changing.
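You can see the same effect outside the kernel with a toy experiment (file names here are made up, and this isn't kernel code): GCC's `__TIMESTAMP__` macro expands to the source file's modification time, so an otherwise identical file compiles to a different object after a `touch`:

```shell
# Toy demo: an embedded timestamp makes compiler output non-deterministic.
# __TIMESTAMP__ expands to the source file's mtime, so identical source text
# still produces different object files once the mtime changes.
cat > ts_demo.c <<'EOF'
const char *built = __TIMESTAMP__;
EOF
gcc -c ts_demo.c -o first.o
sleep 1 && touch ts_demo.c   # bump the mtime by at least one second
gcc -c ts_demo.c -o second.o
cmp -s first.o second.o || echo "objects differ"
```

Same source text, different object files - exactly the situation where a content-keyed cache like ccache can never hit.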

After some digging (okay, a lot of digging), I found that someone hit this exact issue back in 2014. The solution builds on commit 87c94bfb8ad35, which introduced KBUILD_BUILD_TIMESTAMP - a way to override that pesky timestamp.

Setting Up ccache for Kernel Builds #

First, let’s check ccache stats and clear everything for a clean test:

$ ccache -Cz  # Clear cache and stats
$ ccache -s   # Show stats

The key insight: we need to set KBUILD_BUILD_TIMESTAMP to an empty string (or any constant value) to make builds deterministic.
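If you'd rather not type `CC="ccache gcc"` on every invocation, ccache's masquerading mode is an alternative: place a compiler-named symlink to ccache ahead of the real compiler in your PATH. A sketch (the directory is purely illustrative - distros often ship a ready-made one like /usr/lib/ccache):

```shell
# Optional: ccache "masquerading" -- a gcc-named symlink to ccache early in
# PATH means a plain `gcc` invocation is transparently cached.
mkdir -p "$HOME/bin/ccache-links"
if command -v ccache >/dev/null 2>&1; then
    ln -sf "$(command -v ccache)" "$HOME/bin/ccache-links/gcc"
fi
export PATH="$HOME/bin/ccache-links:$PATH"
# With that in place, a deterministic cached kernel build is just:
# KBUILD_BUILD_TIMESTAMP='' make -j"$(nproc)"
```

The rest of this post sticks to the explicit `CC="ccache gcc"` form, since it makes the transcripts self-explanatory.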

Real-World Performance Comparison #

Let me show you actual numbers from my development machine (8-core Ryzen with NVMe SSD). Your mileage may vary, but the relative improvements should be similar.

Baseline: No Cache #

$ make clean
$ time make -j4
...
make -j4  2019.47s user 234.82s system 348% cpu 10:52.33 total

About 10 minutes and 52 seconds. Pretty standard for a full kernel build.

Cold Cache Build #

$ ccache -Cz
Cleared cache
Statistics cleared
$ ccache -s
cache directory                     /home/eorlov/.ccache
primary config                      /home/eorlov/.ccache/ccache.conf
secondary config      (readonly)    /etc/ccache.conf
cache hit (direct)                     0
cache hit (preprocessed)               0
cache miss                             0
cache hit rate                      0.00 %
cleanups performed                     0
files in cache                         0
cache size                           0.0 kB
max cache size                       5.0 GB

$ make clean
$ time KBUILD_BUILD_TIMESTAMP='' make CC="ccache gcc" -j4
...
KBUILD_BUILD_TIMESTAMP='' make CC="ccache gcc" -j4  2438.92s user 315.73s system 371% cpu 12:22.18 total
$ ccache -s
cache directory                     /home/eorlov/.ccache
primary config                      /home/eorlov/.ccache/ccache.conf
secondary config      (readonly)    /etc/ccache.conf
cache hit (direct)                     0
cache hit (preprocessed)               0
cache miss                          3247
called for link                        6
called for preprocessing             538
unsupported source language           66
no input file                        108
files in cache                      9728
cache size                         428.3 MB
max cache size                       5.0 GB

Cold cache is slower (about 12 minutes 22 seconds) because ccache needs to populate its cache. That’s expected - you’re paying the one-time cost upfront.

Hot Cache Build - Where Magic Happens #

$ ccache -z
Statistics cleared
$ make clean
$ time KBUILD_BUILD_TIMESTAMP='' make CC="ccache gcc" -j4
...
KBUILD_BUILD_TIMESTAMP='' make CC="ccache gcc" -j4  149.73s user 131.44s system 291% cpu 1:36.52 total
$ ccache -s
cache directory                     /home/eorlov/.ccache
primary config                      /home/eorlov/.ccache/ccache.conf
secondary config      (readonly)    /etc/ccache.conf
cache hit (direct)                  3236
cache hit (preprocessed)               8
cache miss                             3
called for link                        6
called for preprocessing             538
unsupported source language           66
no input file                        108
files in cache                      9742
cache size                         428.7 MB
max cache size                       5.0 GB

96 seconds! That’s roughly 6.8x faster than without ccache. From almost 11 minutes down to under 2 minutes.

The Numbers That Matter #

Let’s break this down:

  • No caching: ~652 seconds
  • Cold cache: ~742 seconds (13.8% slower - one-time cost)
  • Hot cache: ~96 seconds (6.8x faster!)

For CI/CD pipelines or local development where you’re doing frequent rebuilds, this is massive. We’ve implemented this in our build infrastructure and it’s saved us probably hundreds of hours of compute time monthly.

Now, if you absolutely need that timestamp in your kernel (maybe for compliance reasons?), there’s a hacky workaround. You could theoretically use a placeholder during compilation and patch the binary afterward:

$ export KBUILD_BUILD_TIMESTAMP=$(printf 'y%.0s' $(seq 1 $(date | tr -d '\n' | wc -c)))
$ make CC="ccache gcc" vmlinux
$ sed -i "s/${KBUILD_BUILD_TIMESTAMP}/$(date)/g" vmlinux

But honestly? Don’t do this. It’s fragile, breaks reproducible builds, and defeats half the purpose. I’m only mentioning it because someone will ask, and yes, it’s technically possible.

Practical Tips from Production Use #

After using ccache in various environments, here are some tips:

  1. Increase cache size for kernel work: Default 5GB is okay, but I recommend 10-20GB if you work with multiple kernel versions:

    
    ccache --max-size=20G
    
  2. Use ccache for cross-compilation too: Works great with CROSS_COMPILE:

    
KBUILD_BUILD_TIMESTAMP='' make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- CC="ccache aarch64-linux-gnu-gcc"
    
  3. Share ccache between containers: In CI/CD, mount ccache directory as a volume to persist cache between builds.

  4. Monitor hit rates: If your hit rate drops below 80% for repeated builds, something’s wrong with determinism.
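That last tip is easy to automate. Here's a small sketch that parses the pre-4.x `ccache -s` text output (the format shown in the transcripts above; newer ccache versions print a different summary) and fails when the combined hit rate drops below 80%:

```shell
# Sketch: fail a build step when the ccache hit rate is suspiciously low.
# Assumes the pre-4.x `ccache -s` text format; the 80% threshold is arbitrary.
check_hit_rate() {
    awk '
        /cache hit \(direct\)/       { hits += $NF }
        /cache hit \(preprocessed\)/ { hits += $NF }
        /cache miss/                 { miss  = $NF }
        END {
            total = hits + miss
            rate  = (total > 0) ? 100 * hits / total : 0
            printf "hit rate: %.1f%%\n", rate
            exit rate < 80 ? 1 : 0   # nonzero exit flags a determinism problem
        }'
}
# Usage: ccache -s | check_hit_rate || echo "determinism problem?"
```

On the hot-cache stats above (3236 + 8 hits, 3 misses) this reports a 99.9% hit rate and exits 0.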

Why This Matters #

Look, clean builds aren’t the most common workflow for typical kernel development - you usually do incremental builds. But there are scenarios where this is crucial:

  • CI/CD pipelines: Where every build starts fresh
  • Bisecting bugs: When you’re jumping between commits frequently
  • Testing patches: Applying/reverting patches and rebuilding
  • Multiple configurations: Building different kernel configs

At my previous job, we were building Android kernels for different devices, and ccache cut our CI pipeline time from 3 hours to about 35 minutes. That’s the difference between getting feedback before lunch versus end of day.

Conclusion #

ccache with KBUILD_BUILD_TIMESTAMP='' is a simple trick that yields massive performance gains for kernel builds. Yes, you lose the build timestamp, but for development and CI/CD, that’s usually a worthwhile tradeoff.

The broader lesson here? Build determinism isn’t just about reproducibility - it enables powerful optimizations like caching. Sometimes the best performance improvements come not from faster hardware or better algorithms, but from not doing work you’ve already done.

Give it a try on your next kernel build. Your CPU (and your time) will thank you.


Have you used ccache for kernel builds? What other build optimization tricks do you use? Let me know - I’m always looking for ways to shave off those precious seconds from build times.