<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Guides on Install Nix | Reproducible Package Manager for Linux &amp; macOS</title><link>https://getnix.io/guides/</link><description>Recent content in Guides on Install Nix | Reproducible Package Manager for Linux &amp; macOS</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Sun, 12 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://getnix.io/guides/index.xml" rel="self" type="application/rss+xml"/><item><title>What is Nix?</title><link>https://getnix.io/guides/what-is-nix/</link><pubDate>Fri, 27 Mar 2026 00:00:00 +0000</pubDate><guid>https://getnix.io/guides/what-is-nix/</guid><description>The problem Nix solves Software breaks in predictable ways. A project builds on your machine but not on a colleague&rsquo;s. A CI pipeline produces a different binary than your laptop. An upgrade pulls in a new library version and silently breaks something unrelated. A server that &ldquo;hasn&rsquo;t changed&rdquo; starts behaving differently after an OS update.
These are all symptoms of the same root cause: implicit dependencies. Traditional package managers and build tools leave gaps — they don&rsquo;t track every input that affects an output. Nix closes those gaps.</description><content:encoded><![CDATA[<h2 id="the-problem-nix-solves">The problem Nix solves</h2>
<p>Software breaks in predictable ways. A project builds on your machine but not on a colleague&rsquo;s. A CI pipeline produces a different binary than your laptop. An upgrade pulls in a new library version and silently breaks something unrelated. A server that &ldquo;hasn&rsquo;t changed&rdquo; starts behaving differently after an OS update.</p>
<p>These are all symptoms of the same root cause: <strong>implicit dependencies</strong>. Traditional package managers and build tools leave gaps — they don&rsquo;t track every input that affects an output. Nix closes those gaps.</p>
<h2 id="what-nix-actually-is">What Nix actually is</h2>
<p>Nix is three things that share a name:</p>
<ul>
<li><strong>A package manager</strong> that installs software in isolation, never overwriting system libraries</li>
<li><strong>A build system</strong> that produces identical outputs from identical inputs, every time</li>
<li><strong>A language</strong> (also called Nix) used to describe packages, environments, and system configurations</li>
</ul>
<p>When people say &ldquo;Nix,&rdquo; they usually mean the package manager and build system. When they say &ldquo;NixOS,&rdquo; they mean a full Linux distribution built entirely on Nix. You don&rsquo;t need NixOS to use Nix: it runs on any Linux distribution, on macOS, and on Windows via WSL.</p>
<h2 id="how-nix-differs-from-other-tools">How Nix differs from other tools</h2>
<p>If you&rsquo;ve used package managers or containerization before, you might wonder where Nix fits in:</p>
<table>
  <thead>
      <tr>
          <th>Tool</th>
          <th>What it does</th>
          <th>How Nix compares</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>apt / dnf / Homebrew</strong></td>
          <td>Install packages globally, one version at a time</td>
          <td>Nix installs packages in isolation — multiple versions coexist, nothing is overwritten</td>
      </tr>
      <tr>
          <td><strong>Docker</strong></td>
          <td>Bundles an app with its OS into an image</td>
          <td>Nix builds reproducible artifacts <em>without</em> a container runtime — you can still produce Docker images from Nix, but you don&rsquo;t need Docker to get reproducibility</td>
      </tr>
      <tr>
          <td><strong>asdf / mise / nvm</strong></td>
          <td>Switch between tool versions per project</td>
          <td>Nix does this <em>and</em> manages native libraries, system dependencies, and non-language tooling in the same lockfile</td>
      </tr>
      <tr>
          <td><strong>Ansible / Chef</strong></td>
          <td>Converge a machine toward a desired state</td>
          <td>Nix declares the <em>exact</em> state — it doesn&rsquo;t patch in place, it builds a new immutable result and switches to it atomically</td>
      </tr>
  </tbody>
</table>
<p>Nix isn&rsquo;t a replacement for all of these — many teams use Docker <em>on top of</em> Nix, for instance. The key difference is that Nix tracks <strong>every</strong> input, so results are reproducible by construction rather than by convention.</p>
<h2 id="core-concepts">Core concepts</h2>
<h3 id="everything-is-a-derivation">Everything is a derivation</h3>
<p>In Nix, a <strong>derivation</strong> is a recipe for building something — a package, a configuration file, a Docker image, an entire operating system. Every derivation declares its exact inputs: source code, dependencies, build commands. Nix hashes all inputs together to produce a unique store path:</p>
<pre><code class="language-text">/nix/store/a1b2c3d4…-go-1.25.4/</code></pre>
<p>If any input changes — a different source commit, a different compiler version, a different build flag — the hash changes and Nix builds a new, separate output. Nothing is overwritten. This is what makes Nix <strong>reproducible</strong> and <strong>safe to roll back</strong>.</p>
<h3 id="the-nix-store">The Nix store</h3>
<p>All packages live in <code>/nix/store/</code>, each in its own directory named by its content hash. Multiple versions of the same package coexist without conflict. There is no global <code>bin/</code> or <code>lib/</code> directory where packages fight over filenames.</p>
<pre><code class="language-bash">/nix/store/a1b2c3d4…-go-1.25.4/
/nix/store/e5f6a7b8…-go-1.26.1/
/nix/store/c9d0e1f2…-nodejs-24.14.0/</code></pre>
<p>This means installing a new version of Go doesn&rsquo;t affect any project that depends on the old one, and uninstalling a package never breaks anything else: dependents reference exact store paths, not shared global files.</p>
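<p>You can inspect these relationships directly. For example, to list the full runtime closure of a flake&rsquo;s default package (an illustrative command; run it inside a flake project that defines a default package):</p>
<pre><code class="language-bash"># Every store path the package needs at runtime, direct and transitive
$ nix path-info --recursive .#default</code></pre>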
<h3 id="flakes">Flakes</h3>
<p>A <strong>flake</strong> is a standardized way to define a Nix project. It&rsquo;s a directory with a <code>flake.nix</code> file that declares:</p>
<ul>
<li><strong>Inputs</strong> — where to get dependencies (usually <code>nixpkgs</code>, the main Nix package repository with 120,000+ packages)</li>
<li><strong>Outputs</strong> — what the project produces (dev shells, packages, system configurations, Docker images)</li>
</ul>
<p>And a <code>flake.lock</code> file that pins every input to an exact revision. This lock file is what guarantees reproducibility — anyone who builds the flake gets the same inputs, therefore the same outputs.</p>
<pre><code class="language-nix">{
  inputs.nixpkgs.url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;;

  outputs = { nixpkgs, ... }:
  let
    pkgs = nixpkgs.legacyPackages.aarch64-darwin;  # or x86_64-linux, etc.
  in {
    # A development shell with Python and Node
    devShells.aarch64-darwin.default = pkgs.mkShell {
      packages = [ pkgs.python3 pkgs.nodejs ];
    };
  };
}</code></pre>
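<p>Once a build has written a <code>flake.lock</code>, you can inspect the pinned inputs without reading the lock file by hand:</p>
<pre><code class="language-bash"># Shows the resolved flake URL and the locked revision of each input
$ nix flake metadata</code></pre>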
<h3 id="dev-shells">Dev shells</h3>
<p>The most common entry point to Nix is <code>nix develop</code>. It drops you into a shell with exactly the tools your project declares — no more, no less. Nothing is installed globally. When you exit the shell, the tools are gone from your <code>PATH</code> (but cached in the store for instant reuse).</p>
<pre><code class="language-bash">$ nix develop
(nix) $ python3 --version
Python 3.13.12
(nix) $ node --version
v24.14.0
(nix) $ exit

$ python3 --version
python3: command not found</code></pre>
<p>Combined with <a href="https://direnv.net/">direnv</a>, the shell activates automatically when you <code>cd</code> into the project directory.</p>
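<p>The direnv side is a single file. A minimal <code>.envrc</code> (assuming <a href="https://github.com/nix-community/nix-direnv">nix-direnv</a> or a recent direnv with flake support):</p>
<pre><code class="language-bash"># .envrc
use flake</code></pre>
<p>Run <code>direnv allow</code> once to approve it; the dev shell then loads and unloads automatically as you enter and leave the directory.</p>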
<h2 id="what-you-can-do-with-nix">What you can do with Nix</h2>
<p>Nix scales from a single dev shell to an entire infrastructure:</p>
<table>
  <thead>
      <tr>
          <th>Use case</th>
          <th>What Nix provides</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>Development environments</strong></td>
          <td>Per-project toolchains via <code>nix develop</code> — no version managers, no conflicts</td>
      </tr>
      <tr>
          <td><strong>CI/CD pipelines</strong></td>
          <td><code>nix build</code> and <code>nix flake check</code> for hermetic, cacheable builds</td>
      </tr>
      <tr>
          <td><strong>Docker images</strong></td>
          <td>Minimal OCI images built from Nix expressions — no Dockerfile needed</td>
      </tr>
      <tr>
          <td><strong>Dotfile management</strong></td>
          <td><a href="https://github.com/nix-community/home-manager">Home Manager</a> declares shell, editor, and tool configs across machines</td>
      </tr>
      <tr>
          <td><strong>macOS system management</strong></td>
          <td><a href="https://github.com/nix-darwin/nix-darwin">nix-darwin</a> manages system preferences, Homebrew, and services declaratively</td>
      </tr>
      <tr>
          <td><strong>Server infrastructure</strong></td>
          <td><a href="https://nixos.org/">NixOS</a> defines entire servers as code with atomic upgrades and rollbacks</td>
      </tr>
  </tbody>
</table>
<h2 id="keeping-the-store-clean">Keeping the store clean</h2>
<p>Because Nix never overwrites — it adds new store paths alongside old ones — the <code>/nix/store/</code> directory grows over time. Nix provides <strong>garbage collection</strong> to reclaim space:</p>
<pre><code class="language-bash">$ nix store gc
1284 store paths deleted, 2.47 GiB freed</code></pre>
<p>This removes any store path that is no longer referenced by a profile, dev shell, or running system. It is always safe to run — Nix will never delete a path that something still depends on.</p>
<p>For more aggressive cleanup you can delete old profile generations first, then garbage-collect:</p>
<pre><code class="language-bash">$ nix profile wipe-history --older-than 14d   # drop generations older than 2 weeks
$ nix store gc                                # now collect the unreferenced paths</code></pre>
<p>On NixOS and nix-darwin you can automate this with a scheduled garbage collection option so the store stays tidy without manual intervention.</p>
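<p>A sketch of what that configuration looks like (the <code>nix.gc</code> options below exist on both NixOS and nix-darwin; the scheduling details differ slightly between the two):</p>
<pre><code class="language-nix">{
  nix.gc = {
    automatic = true;                     # run garbage collection on a schedule
    options = &#34;--delete-older-than 14d&#34;;  # also drop generations older than 2 weeks
  };
}</code></pre>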
<h2 id="getting-started">Getting started</h2>
<h3 id="install-nix">Install Nix</h3>
<pre><code class="language-bash">curl -sSf -L https://getnix.io/install | sh -s -- install</code></pre>
<p>This installs the <a href="https://determinate.systems/nix-installer/">Determinate Nix Installer</a> with flakes enabled by default.</p>
<h3 id="try-a-dev-shell">Try a dev shell</h3>
<p>Without cloning anything, run a one-off shell with any package:</p>
<pre><code class="language-bash">$ nix shell nixpkgs#ripgrep nixpkgs#jq
$ rg --version
ripgrep 15.1.0
$ exit  # tools gone from PATH, cached in store</code></pre>
<h3 id="create-your-first-flake">Create your first flake</h3>
<pre><code class="language-bash">mkdir my-project &amp;&amp; cd my-project</code></pre>
<p>Create a <code>flake.nix</code>:</p>
<pre><code class="language-nix">{
  description = &#34;My first Nix project&#34;;

  inputs.nixpkgs.url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;;

  outputs = { nixpkgs, ... }:
  let
    systems = [ &#34;x86_64-linux&#34; &#34;aarch64-linux&#34; &#34;aarch64-darwin&#34; ];
    forAllSystems = nixpkgs.lib.genAttrs systems;
  in {
    devShells = forAllSystems (system:
    let
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      default = pkgs.mkShell {
        packages = with pkgs; [ go gopls ];
      };
    });
  };
}</code></pre>
<p>Then:</p>
<pre><code class="language-bash">$ git init &amp;&amp; git add flake.nix  # in a git repo, flakes only see tracked files
$ nix develop
(nix) $ go version
go version go1.26.1 linux/amd64</code></pre>
<p>The first run downloads dependencies and may take a minute. Subsequent runs are instant — everything is cached in the Nix store.</p>
<h2 id="quick-reference">Quick reference</h2>
<table>
  <thead>
      <tr>
          <th>Command</th>
          <th>What it does</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><code>nix develop</code></td>
          <td>Enter the project&rsquo;s dev shell</td>
      </tr>
      <tr>
          <td><code>nix shell nixpkgs#&lt;pkg&gt;</code></td>
          <td>Temporary shell with a package</td>
      </tr>
      <tr>
          <td><code>nix build</code></td>
          <td>Build the default package</td>
      </tr>
      <tr>
          <td><code>nix flake check</code></td>
          <td>Run checks (linters, tests, formatting)</td>
      </tr>
      <tr>
          <td><code>nix flake update</code></td>
          <td>Update all inputs to latest</td>
      </tr>
      <tr>
          <td><code>nix store gc</code></td>
          <td>Remove unused packages from the store</td>
      </tr>
  </tbody>
</table>
<h2 id="next-steps">Next steps</h2>
<ul>
<li><a href="/guides/ai-nix-adhoc/">AI + Nix: Run Anything, Install Nothing</a> — let AI agents pull in tools on the fly with <code>nix run</code> and <code>nix shell</code></li>
<li><a href="/guides/go-nix-docker/">Reproducible Go with Nix &amp; Docker</a> — build a Go binary and Docker image that are byte-for-byte identical across environments</li>
<li><a href="/guides/cross-platform-dotfiles/">End Environment Drift</a> — manage your desktop environment across macOS and Linux from a single repository</li>
<li><a href="/guides/nixos-auto-upgrades/">Automatic NixOS Upgrades</a> — keep NixOS servers up-to-date automatically with CI-driven flake updates and self-upgrading hosts</li>
<li><a href="https://nix.dev">nix.dev</a> — official Nix documentation and tutorials</li>
<li><a href="https://search.nixos.org/packages">Nixpkgs search</a> — browse 120,000+ available packages</li>
</ul>
]]></content:encoded><media:content url="https://getnix.io/og-what-is-nix.png"/></item><item><title>Reproducible Go with Nix &amp; Docker</title><link>https://getnix.io/guides/go-nix-docker/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://getnix.io/guides/go-nix-docker/</guid><description>Why reproducible builds? Most Go projects rely on a patchwork of version managers, Dockerfiles, and CI scripts to keep environments consistent. Inevitably, the binary you test locally drifts from what runs in production — different compiler versions, different C libraries, different flags.
Nix solves this at the root. A single flake.nix declares every dependency with a cryptographic hash. The same inputs always produce the same outputs, bit-for-bit. Your development binary is your production binary.</description><content:encoded><![CDATA[<h2 id="why-reproducible-builds">Why reproducible builds?</h2>
<p>Most Go projects rely on a patchwork of version managers, Dockerfiles, and CI scripts to keep environments consistent. Inevitably, the binary you test locally drifts from what runs in production — different compiler versions, different C libraries, different flags.</p>
<p>Nix solves this at the root. A single <code>flake.nix</code> declares every dependency with a cryptographic hash. The same inputs always produce the same outputs, bit-for-bit. Your development binary <strong>is</strong> your production binary.</p>
<h2 id="step-1-development-environment">Step 1: Development environment</h2>
<p>Start with a <code>flake.nix</code> that gives every developer the same Go toolchain:</p>
<pre><code class="language-nix">{
  description = &#34;Go project&#34;;

  inputs.nixpkgs.url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;;

  outputs = { nixpkgs, ... }:
  let
    systems = [ &#34;x86_64-linux&#34; &#34;aarch64-linux&#34; &#34;aarch64-darwin&#34; ];
    forAllSystems = nixpkgs.lib.genAttrs systems;
  in {
    devShells = forAllSystems (system:
    let
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      default = pkgs.mkShell {
        packages = with pkgs; [
          go
          gopls
          golangci-lint
          delve
          go-task
        ];
      };
    });
  };
}</code></pre>
<p>Run <code>nix develop</code> and you get a shell with the exact same Go version, language server, linter, debugger, and <a href="https://taskfile.dev">Task</a> runner on every machine. No <code>goenv</code>, no <code>asdf</code>, no &ldquo;works on my machine.&rdquo;</p>
<p>With <a href="https://direnv.net/">direnv</a> and a one-line <code>.envrc</code> (<code>use flake</code>), the dev shell loads automatically when you <code>cd</code> into the project — no need to run <code>nix develop</code> manually.</p>
<blockquote>
  <p>Run <code>nix flake update</code> to generate or refresh <code>flake.lock</code>, then commit it. The lock file pins every dependency to a specific revision.</p>

</blockquote>
<h2 id="step-2-build-the-go-binary-with-nix">Step 2: Build the Go binary with Nix</h2>
<p>Add a <code>packages</code> output using <code>buildGoModule</code>. This compiles your Go project inside the Nix sandbox — no ambient state and fully reproducible:</p>
<pre><code class="language-nix">packages = forAllSystems (system:
let
  pkgs = nixpkgs.legacyPackages.${system};
in {
  default = pkgs.buildGoModule {
    pname = &#34;myapp&#34;;
    version = &#34;0.1.0&#34;;
    src = ./.;
    vendorHash = null; # use `go mod vendor` or set the hash

    env.CGO_ENABLED = 0;

    ldflags = [
      &#34;-s&#34; &#34;-w&#34;
      &#34;-extldflags &#39;-static&#39;&#34;
    ];
  };
});</code></pre>
<p>Key points:</p>
<ul>
<li><strong><code>env.CGO_ENABLED = 0</code></strong> produces a fully static binary — no glibc dependency, runs anywhere.</li>
<li><strong><code>vendorHash</code></strong> locks your Go module dependencies. Set it to <code>null</code> if you use <code>go mod vendor</code>, or let Nix tell you the correct hash on first build.</li>
<li><strong><code>ldflags</code></strong> with <code>-s -w</code> strips debug info for a smaller binary.</li>
</ul>
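<p>If you don&rsquo;t vendor dependencies, a common way to discover the right <code>vendorHash</code> is to start with a deliberately wrong hash and copy the value Nix reports:</p>
<pre><code class="language-nix">vendorHash = pkgs.lib.fakeHash;  # first build fails with a hash mismatch that prints the real hash; paste it here</code></pre>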
<p>Build it:</p>
<pre><code class="language-bash">$ nix build
$ ./result/bin/myapp</code></pre>
<p>Or use the included <code>Taskfile.yaml</code> (available via <code>go-task</code> in the dev shell):</p>
<pre><code class="language-bash">$ task build</code></pre>
<p>The binary in <code>./result</code> is the Nix store path. Build it again tomorrow, on another machine, in CI — you get the same bytes.</p>
<h2 id="step-3-docker-image-with-nix">Step 3: Docker image with Nix</h2>
<p>Instead of writing a <code>Dockerfile</code>, use Nix&rsquo;s <code>dockerTools</code> to build a minimal OCI image that contains only your binary. Docker containers run Linux, so the flake builds a separate Linux binary for the image — on Linux this is identical to the native binary, on macOS it cross-compiles automatically:</p>
<pre><code class="language-nix">packages = forAllSystems (system:
let
  pkgs = nixpkgs.legacyPackages.${system};

  goArch = {
    &#34;x86_64-linux&#34;   = &#34;amd64&#34;;
    &#34;aarch64-linux&#34;  = &#34;arm64&#34;;
    &#34;aarch64-darwin&#34; = &#34;arm64&#34;;
  }.${system};

  isLinux = builtins.match &#34;.*-linux&#34; system != null;

  myapp = pkgs.buildGoModule {
    pname = &#34;myapp&#34;;
    version = &#34;0.1.0&#34;;
    src = ./.;
    vendorHash = null;

    env.CGO_ENABLED = 0;

    ldflags = [
      &#34;-s&#34; &#34;-w&#34;
      &#34;-extldflags &#39;-static&#39;&#34;
    ];
  };

  # On Linux, reuse the native binary. On macOS, cross-compile for Linux.
  myapp-linux = if isLinux then myapp else pkgs.buildGoModule {
    pname = &#34;myapp&#34;;
    version = &#34;0.1.0&#34;;
    src = ./.;
    vendorHash = null;

    env.CGO_ENABLED = 0;

    preBuild = &#39;&#39;
      export GOOS=linux
      export GOARCH=${goArch}
    &#39;&#39;;

    # Go puts cross-compiled binaries in bin/GOOS_GOARCH/ — flatten it
    postInstall = &#39;&#39;
      if [ -d &#34;$out/bin/linux_${goArch}&#34; ]; then
        mv &#34;$out/bin/linux_${goArch}/&#34;* &#34;$out/bin/&#34;
        rmdir &#34;$out/bin/linux_${goArch}&#34;
      fi
    &#39;&#39;;

    ldflags = [
      &#34;-s&#34; &#34;-w&#34;
      &#34;-extldflags &#39;-static&#39;&#34;
    ];
  };
in {
  default = myapp;

  docker = pkgs.dockerTools.buildLayeredImage {
    name = &#34;myapp&#34;;
    tag = &#34;latest&#34;;
    contents = [ myapp-linux ];
    config = {
      Cmd = [ &#34;/bin/myapp&#34; ];
      ExposedPorts.&#34;8080/tcp&#34; = {};
    };
  };
});</code></pre>
<p>Build and load it:</p>
<pre><code class="language-bash">$ nix build .#docker
$ docker load &lt; result
Loaded image: myapp:latest

$ docker run -p 8080:8080 myapp:latest</code></pre>
<p>Or in one step:</p>
<pre><code class="language-bash">$ task docker:run</code></pre>
<h3 id="why-buildlayeredimage">Why <code>buildLayeredImage</code>?</h3>
<ul>
<li><strong>Minimal</strong> — No base image, no shell, no package manager. Only your binary and what you explicitly include.</li>
<li><strong>Layered</strong> — Nix store paths become individual Docker layers. Unchanged dependencies are cached between builds.</li>
<li><strong>Deterministic</strong> — The image is built from the Nix store, not from <code>apt-get</code> or <code>apk</code>. Same inputs, same image.</li>
</ul>
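<p>After loading the image you can see this for yourself:</p>
<pre><code class="language-bash">$ docker images myapp:latest    # size is roughly the binary plus its runtime closure
$ docker history myapp:latest   # layers correspond to Nix store paths, not shell commands</code></pre>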
<h2 id="the-result-byte-by-byte-identical">The result: byte-for-byte identical</h2>
<p>On Linux, <code>myapp-linux</code> <strong>is</strong> <code>myapp</code> — Nix reuses the same derivation. The binary you test locally with <code>nix build</code> is the exact same binary inside the Docker image. Since the image is minimal (no shell, no coreutils), copy the binary out to compare:</p>
<pre><code class="language-bash">$ nix build
$ sha256sum ./result/bin/myapp
a1b2c3d4...  ./result/bin/myapp

$ nix build .#docker
$ docker load &lt; result
$ docker create --name tmp myapp:latest
$ docker cp tmp:/bin/myapp /tmp/myapp-from-docker
$ docker rm tmp
$ sha256sum /tmp/myapp-from-docker
a1b2c3d4...  /tmp/myapp-from-docker</code></pre>
<p>Or run <code>task verify</code> to do this automatically.</p>
<p>Same hash. The binary in your dev environment, your CI pipeline, and your production container are identical — not &ldquo;close enough,&rdquo; not &ldquo;built from the same source,&rdquo; but the same bytes.</p>
<p>On macOS, the Docker image contains a cross-compiled Linux binary while your local <code>nix build</code> produces a native macOS binary. The Linux binary inside Docker is still fully reproducible — build it on any machine with the same <code>flake.lock</code> and you get the same bytes.</p>
<p>This eliminates an entire class of bugs: &ldquo;it worked in dev but not in prod.&rdquo;</p>
<h2 id="complete-example">Complete example</h2>
<p>The full working example is at <a href="https://codeberg.org/getnix/go-nix-docker">codeberg.org/getnix/go-nix-docker</a> with all files needed to build and run:</p>
<ul>
<li><code>flake.nix</code> — Dev shell + Go build + Docker image</li>
<li><code>.envrc</code> — direnv config for automatic shell loading</li>
<li><code>main.go</code> — Simple HTTP server</li>
<li><code>go.mod</code> — Go module definition</li>
<li><code>Taskfile.yaml</code> — Task runner shortcuts for common commands</li>
</ul>
<p>Try it instantly — no clone needed:</p>
<pre><code class="language-bash">$ nix run git&#43;https://codeberg.org/getnix/go-nix-docker</code></pre>
<p>Nix fetches the repo, builds the binary, and starts the HTTP server. In another terminal:</p>
<pre><code class="language-bash">$ curl -i http://127.0.0.1:8080
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8

Hello from myapp

Go go1.26.1 linux/amd64</code></pre>
<p>To explore the source and develop locally, clone it:</p>
<pre><code class="language-bash">$ git clone https://codeberg.org/getnix/go-nix-docker.git
$ cd go-nix-docker
$ direnv allow       # auto-load dev shell (or: nix develop)
$ go run .           # run locally

$ nix build          # build reproducible binary
$ nix build .#docker # build Docker image</code></pre>
<h3 id="quick-reference">Quick reference</h3>
<table>
  <thead>
      <tr>
          <th>Command</th>
          <th>What it does</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><code>nix run git+https://codeberg.org/getnix/go-nix-docker</code></td>
          <td>Run the app without cloning</td>
      </tr>
      <tr>
          <td><code>direnv allow</code></td>
          <td>Auto-load dev shell on <code>cd</code></td>
      </tr>
      <tr>
          <td><code>nix develop</code></td>
          <td>Enter dev shell manually</td>
      </tr>
      <tr>
          <td><code>task run</code></td>
          <td>Run the app locally</td>
      </tr>
      <tr>
          <td><code>task build</code></td>
          <td>Build the Go binary reproducibly</td>
      </tr>
      <tr>
          <td><code>task docker:build</code></td>
          <td>Build the Docker image</td>
      </tr>
      <tr>
          <td><code>task docker:load</code></td>
          <td>Build and load image into Docker</td>
      </tr>
      <tr>
          <td><code>task docker:run</code></td>
          <td>Build, load, and run the container</td>
      </tr>
      <tr>
          <td><code>task verify</code></td>
          <td>Compare sha256 of local and Docker binary</td>
      </tr>
      <tr>
          <td><code>task lint</code></td>
          <td>Run golangci-lint</td>
      </tr>
      <tr>
          <td><code>task clean</code></td>
          <td>Remove build artifacts, containers, and images</td>
      </tr>
      <tr>
          <td><code>task update</code></td>
          <td>Update flake inputs and regenerate <code>flake.lock</code></td>
      </tr>
  </tbody>
</table>
]]></content:encoded><media:content url="https://getnix.io/og-go-nix-docker.png"/></item><item><title>End Environment Drift: Manage macOS &amp; Linux from a Single Nix Repo</title><link>https://getnix.io/guides/cross-platform-dotfiles/</link><pubDate>Thu, 19 Mar 2026 00:00:00 +0000</pubDate><guid>https://getnix.io/guides/cross-platform-dotfiles/</guid><description>The problem: environment drift Anyone who uses more than one computer knows the pain: you spend hours setting up your shell, editor, applications, and system preferences on one machine, only to start from scratch on the next. Multiply that across macOS and Linux, personal and work machines, and the gap widens. Developers feel this acutely — different tool versions cause bugs that appear on one machine but not another — but the problem applies to any desktop user who wants a consistent environment.</description><content:encoded><![CDATA[<h2 id="the-problem-environment-drift">The problem: environment drift</h2>
<p>Anyone who uses more than one computer knows the pain: you spend hours setting up your shell, editor, applications, and system preferences on one machine, only to start from scratch on the next. Multiply that across macOS and Linux, personal and work machines, and the gap widens. Developers feel this acutely — different tool versions cause bugs that appear on one machine but not another — but the problem applies to any desktop user who wants a consistent environment.</p>
<p>This is <strong>environment drift</strong>, and it compounds over time. The more devices and platforms you use, the more things slip through the cracks.</p>
<p>Nix eliminates drift at the root. With <a href="https://github.com/nix-community/home-manager">Home Manager</a> and <a href="https://github.com/nix-darwin/nix-darwin">nix-darwin</a>, you declare your entire desktop environment — applications, shell, editor, terminal, git, system preferences — in Nix. The same <code>flake.lock</code> pins every dependency to the same version on every machine. One repository, one source of truth, across macOS and Linux.</p>
<blockquote>
  <p><strong>Note:</strong> This guide presents one opinionated way to structure a Nix-based dotfiles setup. There are many valid approaches — the goal here is to outline the overall idea and give you a concrete starting point you can adapt to your own workflow.</p>

</blockquote>
<h2 id="repository-structure">Repository structure</h2>
<p>A well-organized dotfiles repo separates concerns into modules. Each tool gets its own file, and platform-specific logic lives inside the modules themselves:</p>
<pre><code class="language-text">dotfiles/
├── flake.nix              # inputs, outputs, platform targets
├── flake.lock             # pinned dependency versions
├── home.nix               # root Home Manager config (imports modules)
├── darwin.nix             # macOS system-level settings (nix-darwin)
├── modules/
│   ├── packages.nix       # shared package declarations
│   ├── sh.nix             # shell configuration
│   ├── git.nix            # Git settings, aliases, signing
│   ├── direnv.nix         # automatic environment loading
│   └── ...                # editor, terminal, prompt, etc.
├── dotfiles/              # platform-specific static files
│   ├── common/            # shared across all platforms
│   ├── darwin/            # macOS-only
│   └── linux/             # Linux-only
└── secrets/               # encrypted secrets (sops)</code></pre>
<blockquote>
  <p><strong>One module per concern.</strong> Every module is imported on both platforms and uses <code>lib.mkIf</code> internally to handle differences. No platform-specific import lists to maintain.</p>

</blockquote>
<h2 id="step-1-the-flake">Step 1: The flake</h2>
<p>The <code>flake.nix</code> defines inputs and outputs for each platform target. A helper function keeps things DRY:</p>
<pre><code class="language-nix">{
  description = &#34;Dotfiles — cross-platform desktop environment&#34;;

  inputs = {
    nixpkgs.url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;;
    home-manager = {
      url = &#34;github:nix-community/home-manager&#34;;
      inputs.nixpkgs.follows = &#34;nixpkgs&#34;;
    };
    nix-darwin = {
      url = &#34;github:nix-darwin/nix-darwin/master&#34;;
      inputs.nixpkgs.follows = &#34;nixpkgs&#34;;
    };
  };

  outputs = { self, nixpkgs, home-manager, nix-darwin, ... }:
  let
    username = &#34;developer&#34;;

    mkPkgs = system: import nixpkgs {
      inherit system;
      config.allowUnfree = true;
    };

    mkHome = system: home-manager.lib.homeManagerConfiguration {
      pkgs = mkPkgs system;
      modules = [ ./home.nix ];
      extraSpecialArgs = { inherit self username; };
    };
  in {
    # Linux: standalone Home Manager
    homeConfigurations.&#34;${username}@linux&#34; = mkHome &#34;x86_64-linux&#34;;

    # macOS: nix-darwin wraps Home Manager
    darwinConfigurations.${username} = nix-darwin.lib.darwinSystem {
      pkgs = mkPkgs &#34;aarch64-darwin&#34;;
      specialArgs = { inherit username; };
      modules = [
        ./darwin.nix
        home-manager.darwinModules.home-manager
        {
          home-manager.useGlobalPkgs = true;
          home-manager.useUserPackages = true;
          home-manager.users.${username} = import ./home.nix;
          home-manager.extraSpecialArgs = { inherit self username; };
          users.users.${username}.home = &#34;/Users/${username}&#34;;
        }
      ];
    };

    # Validate both platforms in CI
    checks = {
      &#34;x86_64-linux&#34;.home = (mkHome &#34;x86_64-linux&#34;).activationPackage;
      &#34;aarch64-darwin&#34;.darwin =
        self.darwinConfigurations.${username}.system;
    };
  };
}</code></pre>
<p><strong>Linux</strong> uses standalone Home Manager (<code>home-manager switch</code>). <strong>macOS</strong> uses nix-darwin, which wraps Home Manager so you get both user-level and system-level configuration in one <code>darwin-rebuild switch</code>. The <code>checks</code> output validates both platform configurations — run <code>nix flake check</code> in CI to catch breakage before it hits anyone&rsquo;s machine.</p>
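<p>Activation then looks like this (a sketch, assuming the flake is in the current directory and the <code>developer</code> username from above):</p>
<pre><code class="language-bash"># Linux: standalone Home Manager
$ home-manager switch --flake .#developer@linux

# macOS: nix-darwin applies system settings and the embedded Home Manager config
$ sudo darwin-rebuild switch --flake .#developer</code></pre>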
<h2 id="step-2-home-manager-root-config">Step 2: Home Manager root config</h2>
<p><code>home.nix</code> imports every module and sets platform-aware defaults:</p>
<pre><code class="language-nix">{ config, pkgs, username, ... }:
{
  imports = [
    ./modules/packages.nix
    ./modules/sh.nix
    ./modules/git.nix
    ./modules/direnv.nix
    # add more modules as needed: editor, terminal, prompt, etc.
  ];

  home.username = username;
  home.homeDirectory =
    if pkgs.stdenv.isDarwin
    then &#34;/Users/${config.home.username}&#34;
    else &#34;/home/${config.home.username}&#34;;

  xdg.enable = true;
  programs.home-manager.enable = true;
  home.stateVersion = &#34;26.05&#34;;
}</code></pre>
<p>Every module is imported on both platforms. Platform differences are handled <em>inside</em> each module with <code>pkgs.stdenv.isDarwin</code> and <code>pkgs.stdenv.isLinux</code>.</p>
<h2 id="step-3-shared-packages">Step 3: Shared packages</h2>
<p>Declare your packages once. Every machine gets the same versions:</p>
<pre><code class="language-nix">{ pkgs, ... }:
let
  isLinux = pkgs.stdenv.isLinux;
in {
  home.packages = with pkgs; [
    bat eza fd ripgrep fzf wget  # core CLI
    go gopls golangci-lint       # development (example: Go)
    trivy opentofu               # infrastructure
  ]
  &#43;&#43; pkgs.lib.optionals isLinux [
    glibcLocales  # locale data for Nix programs on non-NixOS
  ];
}</code></pre>
<p>The <code>lib.optionals isLinux</code> pattern adds packages conditionally. On non-NixOS distributions (Ubuntu, Fedora, Arch), Nix-installed programs need <code>glibcLocales</code> to find locale data — on NixOS or macOS it is already available.</p>
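<p>Under the hood, <code>lib.optionals</code> is a one-line helper — this sketch mirrors its definition in nixpkgs:</p>
<pre><code class="language-nix"># lib.optionals : bool -> [a] -> [a]
optionals = cond: elems: if cond then elems else [ ];

# so on macOS the Linux-only tail simply collapses:
#   [ bat eza ] ++ optionals false [ glibcLocales ]  ==  [ bat eza ]</code></pre>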
<h2 id="step-4-cross-platform-shell">Step 4: Cross-platform shell</h2>
<p>Both platforms get the same aliases, history, and completions — only platform-specific integrations differ:</p>
<pre><code class="language-nix">{ config, pkgs, lib, ... }:
let
  isLinux = pkgs.stdenv.isLinux;
  isDarwin = pkgs.stdenv.isDarwin;
in {
  programs.zsh = {
    enable = true;
    autosuggestion.enable = true;
    syntaxHighlighting.enable = true;
    enableCompletion = true;

    history = {
      size = 50000;
      ignoreDups = true;
      ignoreAllDups = true;
      share = true;
    };

    shellAliases = {
      ls = &#34;eza --group-directories-first&#34;;
      ll = &#34;eza -l --group-directories-first --git --icons=always&#34;;
      la = &#34;eza -la --group-directories-first&#34;;
      lt = &#34;eza --tree --group-directories-first&#34;;
    };

    initContent = &#39;&#39;
      # Word navigation with Ctrl&#43;arrow; Ctrl&#43;Backspace deletes the previous word
      bindkey &#39;^[[1;5D&#39; backward-word
      bindkey &#39;^[[1;5C&#39; forward-word
      bindkey &#39;^H&#39;      backward-kill-word
    &#39;&#39;
    &#43; lib.optionalString isDarwin &#39;&#39;
      eval &#34;$(/opt/homebrew/bin/brew shellenv)&#34;
    &#39;&#39;
    &#43; lib.optionalString isLinux &#39;&#39;
      [ -f &#34;$HOME/.nix-profile/etc/profile.d/hm-session-vars.sh&#34; ] \
        &amp;&amp; . &#34;$HOME/.nix-profile/etc/profile.d/hm-session-vars.sh&#34;
    &#39;&#39;;
  };

  # Expose Nix binaries to the systemd session (desktop entries, etc.)
  xdg.configFile.&#34;environment.d/10-nix-path.conf&#34; = lib.mkIf isLinux {
    text = &#34;PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:$PATH&#34;;
  };

  # Locale data for Nix programs on non-NixOS Linux
  home.sessionVariables = lib.mkIf isLinux {
    LOCALE_ARCHIVE = &#34;${pkgs.glibcLocales}/lib/locale/locale-archive&#34;;
  };
}</code></pre>
<p>The <code>lib.optionalString</code> and <code>lib.mkIf</code> patterns are the building blocks. Every module follows the same approach: shared config at the top, platform-specific blocks gated by <code>isDarwin</code> / <code>isLinux</code>.</p>
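<p>A new module following this convention starts from the same skeleton — the <code>example</code> program and variable names below are placeholders, not real Home Manager options:</p>
<pre><code class="language-nix">{ pkgs, lib, ... }:
let
  inherit (pkgs.stdenv) isDarwin isLinux;
in {
  # shared configuration first
  programs.example.enable = true;

  # platform-specific strings, appended only where they apply
  programs.example.extraConfig = lib.optionalString isDarwin ''
    # darwin-only settings
  '';

  # platform-specific attribute sets, gated as a whole
  home.sessionVariables = lib.mkIf isLinux {
    EXAMPLE_VAR = "linux-only";
  };
}</code></pre>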
<h2 id="step-5-git-with-conditional-identities">Step 5: Git with conditional identities</h2>
<p>Declarative git config with conditional includes is especially valuable when switching between work and personal repositories:</p>
<pre><code class="language-nix">{ config, pkgs, ... }:
let
  isDarwin = pkgs.stdenv.isDarwin;
  signingKey =
    if isDarwin
    then &#34;${config.home.homeDirectory}/.ssh/id_ed25519_sk.pub&#34;
    else &#34;${config.home.homeDirectory}/.ssh/id_ed25519.pub&#34;;
in {
  programs.git = {
    enable = true;
    signing = { key = signingKey; signByDefault = true; format = &#34;ssh&#34;; };
    lfs.enable = true;

    # Switch identity based on repository location
    includes = [
      { path = &#34;~/.gitconfig-work&#34;;     condition = &#34;gitdir:~/src/gitlab.com/acme-corp/&#34;; }
      { path = &#34;~/.gitconfig-personal&#34;; condition = &#34;gitdir:~/src/github.com/&#34;; }
    ];

    settings = {
      user.useConfigOnly = true;
      push.default = &#34;simple&#34;;
      pull.rebase = true;
      fetch.prune = true;
      rebase.autoStash = true;
      init.defaultBranch = &#34;main&#34;;
      pager.show = &#34;bat&#34;;
      alias = {
        s = &#34;status -s&#34;;
        lg = &#34;log --graph --abbrev-commit --decorate --date=relative --format=format:&#39;%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(bold yellow)%d%C(reset)&#39; --all&#34;;
      };
    };
  };
}</code></pre>
<p>The referenced config files contain the identity for each context:</p>
<pre><code class="language-ini"># ~/.gitconfig-work
[user]
  name = Jane Doe
  email = jane.doe@acme-corp.com</code></pre>
<pre><code class="language-ini"># ~/.gitconfig-personal
[user]
  name = Jane Doe
  email = janedoe@example.com</code></pre>
<p>If you use different signing keys for work and personal projects, you can move the signing key out of the global git config and into each conditional file instead.</p>
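<p>In that case each conditional file carries its own <code>signingkey</code> entry — the key path shown here is an example:</p>
<pre><code class="language-ini"># ~/.gitconfig-work
[user]
  name = Jane Doe
  email = jane.doe@acme-corp.com
  signingkey = ~/.ssh/id_ed25519_work.pub</code></pre>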
<p>The <code>gitdir:</code> conditions mean your work email and signing identity are automatically active inside work repos, while personal settings apply everywhere else.</p>
<h2 id="step-6-macos-system-preferences">Step 6: macOS system preferences</h2>
<p>On macOS, nix-darwin declaratively manages system preferences that normally require clicking through System Settings:</p>
<pre><code class="language-nix">{ username, ... }:
{
  system.primaryUser = username;

  homebrew = {
    enable = true;
    onActivation = { autoUpdate = true; upgrade = true; cleanup = &#34;zap&#34;; };
    casks = [ &#34;firefox&#34; &#34;rectangle&#34; ];
    brews = [ &#34;ffmpeg&#34; &#34;wireguard-tools&#34; ];
  };

  system.defaults = {
    dock = {
      autohide = true;
      mru-spaces = false;
      show-recents = false;
    };
    finder = {
      FXEnableExtensionChangeWarning = false;
      _FXShowPosixPathInTitle = true;
    };
    NSGlobalDomain = {
      AppleShowAllExtensions = true;
      AppleInterfaceStyleSwitchesAutomatically = true;
      NSDocumentSaveNewDocumentsToCloud = false;
    };
    CustomUserPreferences.&#34;com.apple.desktopservices&#34; = {
      DSDontWriteNetworkStores = true;
      DSDontWriteUSBStores = true;
    };
  };

  system.stateVersion = 6;
}</code></pre>
<p>The <code>homebrew.onActivation.cleanup = &quot;zap&quot;</code> setting removes any cask or formula not listed in the config, preventing drift from ad-hoc installs.</p>
<h2 id="applying-the-configuration">Applying the configuration</h2>
<p>On <strong>macOS</strong>:</p>
<pre><code class="language-bash">$ sudo darwin-rebuild switch --flake .</code></pre>
<p>On <strong>Linux</strong>:</p>
<pre><code class="language-bash">$ home-manager switch --flake .#developer@linux</code></pre>
<p>Both commands evaluate the configuration for the current platform and atomically activate it. If something goes wrong, roll back:</p>
<pre><code class="language-bash">$ home-manager generations          # list previous generations
$ sudo darwin-rebuild --list-generations  # on macOS</code></pre>
<p>Validate both platforms in CI without applying:</p>
<pre><code class="language-bash">$ nix flake check</code></pre>
<h2 id="why-this-works">Why this works</h2>
<p>Whether you are managing a single personal setup or rolling this out across a team, the approach provides concrete benefits:</p>
<ol>
<li>
<p><strong>Instant setup.</strong> Clone the repo, run one <code>switch</code> command, and get a fully configured desktop. For teams, this turns onboarding from days into minutes.</p>
</li>
<li>
<p><strong>Pinned versions everywhere.</strong> The <code>flake.lock</code> ensures every machine uses the same nixpkgs revision. Update it in a PR, review, merge — every machine gets the update on the next <code>switch</code>.</p>
</li>
<li>
<p><strong>Multi-platform parity.</strong> macOS laptops and Linux workstations share the same shell, applications, and configuration. The <code>mkIf</code> guards handle platform differences transparently.</p>
</li>
<li>
<p><strong>Audit trail.</strong> Every change to the environment is a git commit. You can see exactly when a package version changed, who changed it, and why.</p>
</li>
<li>
<p><strong>No ambient state.</strong> Applications come from the Nix store, not from <code>brew install</code> or <code>apt-get</code> or <code>curl | sh</code>. There is no hidden state that differs between machines.</p>
</li>
</ol>
<h2 id="quick-reference">Quick reference</h2>
<table>
  <thead>
      <tr>
          <th>Command</th>
          <th>What it does</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><code>sudo darwin-rebuild switch --flake .</code></td>
          <td>Apply config on macOS</td>
      </tr>
      <tr>
          <td><code>home-manager switch --flake .#user@linux</code></td>
          <td>Apply config on Linux</td>
      </tr>
      <tr>
          <td><code>nix flake check</code></td>
          <td>Validate both platforms without applying</td>
      </tr>
      <tr>
          <td><code>nix flake update</code></td>
          <td>Update all inputs to latest</td>
      </tr>
      <tr>
          <td><code>nix flake update nixpkgs</code></td>
          <td>Update only nixpkgs</td>
      </tr>
      <tr>
          <td><code>home-manager generations</code></td>
          <td>List previous generations</td>
      </tr>
      <tr>
          <td><code>sudo darwin-rebuild --list-generations</code></td>
          <td>List generations on macOS</td>
      </tr>
  </tbody>
</table>
]]></content:encoded><media:content url="https://getnix.io/og-cross-platform-dotfiles.png"/></item><item><title>AI + Nix: Run Anything, Install Nothing</title><link>https://getnix.io/guides/ai-nix-adhoc/</link><pubDate>Thu, 19 Mar 2026 00:00:00 +0000</pubDate><guid>https://getnix.io/guides/ai-nix-adhoc/</guid><description> Warning: The techniques in this guide apply with or without AI — AI is just the most common entry point today. Nix does not sandbox execution: ad-hoc programs run with your user permissions. -&gt; AI is a tool, not an authority — you are responsible for every action on your system and the resulting consequences. -&gt; Review commands before running them.
The problem: AI needs software, hosts need hygiene AI coding agents are increasingly capable. They can write code, run commands, debug issues, and orchestrate complex workflows. But they hit a wall the moment they need a tool that is not installed: a linter for an unfamiliar language, a database client, an image editor, a network diagnostic tool.</description><content:encoded><![CDATA[<blockquote>
  <p><strong>Warning:</strong> The techniques in this guide apply with or without AI — AI is just the most common entry point today. Nix does not sandbox execution: ad-hoc programs run with your user permissions.<br/>
-&gt; <strong>AI is a tool, not an authority — you are responsible for every action on your system and the resulting consequences.</strong><br/>
-&gt; <strong>Review commands before running them.</strong></p>

</blockquote>
<h2 id="the-problem-ai-needs-software-hosts-need-hygiene">The problem: AI needs software, hosts need hygiene</h2>
<p>AI coding agents are increasingly capable. They can write code, run commands, debug issues, and orchestrate complex workflows. But they hit a wall the moment they need a tool that is not installed: a linter for an unfamiliar language, a database client, an image editor, a network diagnostic tool.</p>
<p>The traditional options are bad:</p>
<ul>
<li><strong>Install it permanently</strong> — pollutes the host with packages the user never asked for and may never need again.</li>
<li><strong>Run it in a container</strong> — isolates the tool from the host, breaking access to local files, GPU, display server, and other resources the AI often needs.</li>
<li><strong>Ask the user to install it</strong> — breaks the flow and defeats the purpose of autonomous agents.</li>
</ul>
<p>Nix offers a fourth option: <strong>run anything ad-hoc, directly on the host, and garbage collect it later.</strong> The AI gets full host access. The host stays clean.</p>
<h2 id="how-ad-hoc-execution-works">How ad-hoc execution works</h2>
<p>Nix can run any of its 120,000+ packages without installing them in the traditional sense. The binaries land in <code>/nix/store</code> — an immutable, content-addressed directory — and are available immediately. No <code>apt-get</code>, no <code>brew</code>, no <code>sudo</code>, no permanent changes to <code>PATH</code>.</p>
<h3 id="nix-run-execute-and-exit"><code>nix run</code>: execute and exit</h3>
<p><code>nix run</code> fetches a package, builds or downloads it, and runs its default binary in a single command:</p>
<pre><code class="language-bash">$ nix run nixpkgs#cowsay -- &#34;Hello from Nix&#34;
 ________________
&lt; Hello from Nix &gt;
 ----------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||</code></pre>
<p>The binary is cached in <code>/nix/store</code>. Nothing is added to your profile or <code>PATH</code>. Run it again and it starts instantly from cache.</p>
<h3 id="nix-shell-get-a-temporary-environment"><code>nix shell</code>: get a temporary environment</h3>
<p><code>nix shell</code> drops you into a shell (or runs a command) with one or more packages available:</p>
<pre><code class="language-bash">$ nix shell nixpkgs#ffmpeg nixpkgs#imagemagick
$ ffmpeg -version
ffmpeg version 7.1 ...
$ convert --version
Version: ImageMagick 7.1.1 ...
$ exit
$ ffmpeg -version
zsh: command not found: ffmpeg</code></pre>
<p>The packages exist only for the duration of the shell session. After <code>exit</code>, they are gone from <code>PATH</code> — though still cached in the store for instant reuse.</p>
<h3 id="graphical-applications-work-too">Graphical applications work too</h3>
<p>Nix packages include desktop applications with full access to the host display server, GPU, audio, and filesystem:</p>
<pre><code class="language-bash">$ nix run nixpkgs#firefox
$ nix run nixpkgs#gimp
$ nix run nixpkgs#blender
$ nix run nixpkgs#libreoffice</code></pre>
<p>These are not sandboxed or containerized. They run as native processes on the host with the same permissions as the user. They can open local files, access the clipboard, use hardware acceleration — everything a normally installed application can do.</p>
<h2 id="garbage-collection-reclaim-disk-space-on-demand">Garbage collection: reclaim disk space on demand</h2>
<p>Every <code>nix run</code> and <code>nix shell</code> invocation caches its packages in <code>/nix/store</code>. Over time, this cache grows. Nix provides precise garbage collection to reclaim space:</p>
<pre><code class="language-bash"># Remove packages not referenced by any profile or GC root
$ nix-collect-garbage

# Remove everything older than 30 days
$ nix-collect-garbage --delete-older-than 30d

# Remove all old generations, then collect
$ nix-collect-garbage -d</code></pre>
<p>Garbage collection is safe. Nix traces references from your active profiles and GC roots. Anything not reachable is removed. Anything still needed stays. There is no risk of breaking anything your profiles or roots still reference.</p>
<p>Check how much space the store uses:</p>
<pre><code class="language-bash">$ nix store info
$ du -sh /nix/store</code></pre>
<h2 id="why-this-matters-for-ai">Why this matters for AI</h2>
<p>AI agents that can invoke shell commands gain a practical superpower with Nix: <strong>they can use any software without asking permission to install it and without leaving a mess behind.</strong></p>
<h3 id="what-this-looks-like-in-practice">What this looks like in practice</h3>
<p>An AI coding agent working on a project might need to:</p>
<ol>
<li><strong>Lint unfamiliar code</strong> — The project has a Haskell file. The agent does not need Haskell installed:</li>
</ol>
<pre><code class="language-bash">$ nix run nixpkgs#hlint -- src/Parser.hs</code></pre>
<ol start="2">
<li><strong>Process images</strong> — A task requires resizing screenshots. No need to permanently install ImageMagick:</li>
</ol>
<pre><code class="language-bash">$ nix shell nixpkgs#imagemagick -c convert input.png -resize 50% output.png</code></pre>
<ol start="3">
<li><strong>Inspect a database</strong> — The agent needs to check a PostgreSQL schema:</li>
</ol>
<pre><code class="language-bash">$ nix shell nixpkgs#postgresql -c psql -h localhost -U dev -d myapp -c &#39;\dt&#39;</code></pre>
<ol start="4">
<li><strong>Generate diagrams</strong> — Documentation needs architecture diagrams from Graphviz DOT files:</li>
</ol>
<pre><code class="language-bash">$ nix shell nixpkgs#graphviz -c dot -Tpng architecture.dot -o architecture.png</code></pre>
<ol start="5">
<li><strong>Open a GUI application</strong> — The agent needs to verify a PDF renders correctly:</li>
</ol>
<pre><code class="language-bash">$ nix run nixpkgs#evince -- report.pdf</code></pre>
<ol start="6">
<li><strong>Run a language runtime</strong> — A quick Python script is the fastest way to solve a data transformation:</li>
</ol>
<pre><code class="language-bash">$ nix shell nixpkgs#python3 -c python3 transform.py</code></pre>
<p>In every case, the tool appears instantly (or after a one-time download and build), runs with full host access, and leaves nothing behind once garbage collected.</p>
<h3 id="the-container-trap">The container trap</h3>
<p>Containers are the default answer to &ldquo;run something without installing it,&rdquo; but they introduce friction that undermines AI agents:</p>
<table>
  <thead>
      <tr>
          <th>Concern</th>
          <th>Container</th>
          <th>Nix ad-hoc</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Host filesystem access</td>
          <td>Requires explicit mounts</td>
          <td>Full access by default</td>
      </tr>
      <tr>
          <td>Display server (GUI)</td>
          <td>Complex X11/Wayland forwarding</td>
          <td>Native, zero config</td>
      </tr>
      <tr>
          <td>GPU acceleration</td>
          <td>Needs runtime flags and drivers</td>
          <td>Works transparently</td>
      </tr>
      <tr>
          <td>Startup overhead</td>
          <td>Image pull + container creation</td>
          <td>Instant from cache</td>
      </tr>
      <tr>
          <td>Compose with other tools</td>
          <td>Isolated environment</td>
          <td>Same shell, same <code>PATH</code></td>
      </tr>
      <tr>
          <td>Cleanup</td>
          <td><code>docker system prune</code></td>
          <td><code>nix-collect-garbage</code></td>
      </tr>
  </tbody>
</table>
<p>Containers are excellent for deployment isolation. But for an AI agent that needs to <em>use</em> software on a developer&rsquo;s workstation, Nix&rsquo;s ad-hoc execution is a better fit. The agent operates as an extension of the user, not inside a sealed box.</p>
<h3 id="security-model">Security model</h3>
<blockquote>
  <p><strong>Warning:</strong> Nix ad-hoc execution runs programs with the current user&rsquo;s permissions — the same as any other program the user runs. This may have serious security implications: AI agents can read project files, write output, and access local services.</p>

</blockquote>
<p>The Nix store itself is read-only and content-addressed. Packages cannot be tampered with after they are built. Every store path includes a cryptographic hash of all inputs. If you fetch the same package twice, you get the same bytes.</p>
<h2 id="setting-up-nix-for-ad-hoc-use">Setting up Nix for ad-hoc use</h2>
<p>If Nix is not yet installed, a single command sets it up. This installs the Determinate Nix package manager with reasonable defaults.</p>
<pre><code class="language-bash">$ curl -sSf -L https://getnix.io/install | sh -s -- install</code></pre>
<p>Verify it works:</p>
<pre><code class="language-bash">$ nix run nixpkgs#hello
Hello, world!</code></pre>
<p>That is all the setup needed. No package lists to maintain, no profiles to manage. Any of the 120,000+ packages in nixpkgs is now one command away.</p>
<h2 id="combining-ad-hoc-packages-in-a-pipeline">Combining ad-hoc packages in a pipeline</h2>
<p>Nix shell sessions can compose multiple packages for complex operations:</p>
<pre><code class="language-bash">$ nix shell nixpkgs#ffmpeg nixpkgs#whisper-ctranslate2 -c bash -c &#39;
  ffmpeg -i meeting.mp4 -vn -ar 16000 -ac 1 audio.wav
  whisper-ctranslate2 audio.wav --model medium --output_format txt
&#39;</code></pre>
<p>This extracts audio from a video and transcribes it using Whisper — neither tool needs to be installed. An AI agent can construct pipelines like this dynamically, pulling in whatever tools the task requires.</p>
<h2 id="example-claude-code-with-nix">Example: Claude Code with Nix</h2>
<p><a href="https://docs.anthropic.com/en/docs/claude-code">Claude Code</a> is Anthropic&rsquo;s CLI agent that runs shell commands on your machine. By default it can only use tools already installed on the host. With Nix available, Claude gains access to the entire nixpkgs package set — without installing anything permanently.</p>
<h3 id="instructing-claude-to-use-nix">Instructing Claude to use Nix</h3>
<p>Add a <code>CLAUDE.md</code> file to your project (or your home directory for global instructions) telling Claude that Nix is available and how to use it:</p>
<pre><code class="language-text"># Tools

Nix is installed on this system. When you need a tool that is not
available, use `nix run nixpkgs#&lt;package&gt;` or `nix shell nixpkgs#&lt;package&gt;`
to run it ad-hoc. Do not install packages permanently.

Search for packages with `nix search nixpkgs &lt;term&gt;`.</code></pre>
<p>That is all it takes. Claude reads <code>CLAUDE.md</code> at the start of every session and will use <code>nix run</code> / <code>nix shell</code> whenever it needs software that is not on the <code>PATH</code>.</p>
<h3 id="what-a-session-looks-like">What a session looks like</h3>
<p>You ask Claude to generate a diagram from a Graphviz DOT file in your project. Claude does not have Graphviz installed, but Nix is available:</p>
<pre><code class="language-text">You: Generate a PNG from docs/architecture.dot

Claude: I&#39;ll render the DOT file to PNG using Graphviz via Nix.

  $ nix shell nixpkgs#graphviz -c dot -Tpng docs/architecture.dot -o docs/architecture.png

  Created docs/architecture.png (2048x1536, 184 KB).</code></pre>
<p>Or you ask Claude to optimize images. It reaches for multiple tools at once:</p>
<pre><code class="language-text">You: Compress all PNGs in static/ losslessly

Claude: I&#39;ll use OptiPNG via Nix to compress them in place.

  $ nix shell nixpkgs#optipng -c bash -c &#39;for f in static/*.png; do optipng -o5 &#34;$f&#34;; done&#39;

  static/og.png: 142 KB → 118 KB (17% reduction)
  static/hero.png: 89 KB → 74 KB (17% reduction)
  static/logo.png: 12 KB → 11 KB (8% reduction)</code></pre>
<p>Claude can also open graphical applications when it needs visual verification:</p>
<pre><code class="language-text">You: Check if the PDF report renders correctly

Claude: I&#39;ll open the PDF with Evince via Nix so you can inspect it.

  $ nix run nixpkgs#evince -- output/report.pdf</code></pre>
<p>In each case, Claude finds and uses the right tool without any prior installation. The tools are cached in <code>/nix/store</code> for instant reuse and cleaned up by garbage collection when no longer needed.</p>
<h2 id="automatic-cleanup-with-scheduled-garbage-collection">Automatic cleanup with scheduled garbage collection</h2>
<p>For hosts where AI agents run frequently, automate garbage collection so the store does not grow unbounded:</p>
<pre><code class="language-bash"># systemd timer (Linux) — collect garbage weekly
$ cat &gt; ~/.config/systemd/user/nix-gc.service &lt;&lt; &#39;EOF&#39;
[Unit]
Description=Nix garbage collection

[Service]
Type=oneshot
ExecStart=/nix/var/nix/profiles/default/bin/nix-collect-garbage --delete-older-than 7d
EOF

$ cat &gt; ~/.config/systemd/user/nix-gc.timer &lt;&lt; &#39;EOF&#39;
[Unit]
Description=Weekly Nix garbage collection

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
EOF

$ systemctl --user enable --now nix-gc.timer</code></pre>
<p>On NixOS, this is a one-liner in your system configuration:</p>
<pre><code class="language-nix">nix.gc = {
  automatic = true;
  dates = &#34;weekly&#34;;
  options = &#34;--delete-older-than 7d&#34;;
};</code></pre>
<h2 id="quick-reference">Quick reference</h2>
<table>
  <thead>
      <tr>
          <th>Command</th>
          <th>What it does</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><code>nix run nixpkgs#&lt;pkg&gt;</code></td>
          <td>Run a package&rsquo;s default binary</td>
      </tr>
      <tr>
          <td><code>nix run nixpkgs#&lt;pkg&gt; -- &lt;args&gt;</code></td>
          <td>Run with arguments</td>
      </tr>
      <tr>
          <td><code>nix shell nixpkgs#&lt;pkg1&gt; nixpkgs#&lt;pkg2&gt;</code></td>
          <td>Open a shell with multiple packages</td>
      </tr>
      <tr>
          <td><code>nix shell nixpkgs#&lt;pkg&gt; -c &lt;cmd&gt;</code></td>
          <td>Run a single command with the package available</td>
      </tr>
      <tr>
          <td><code>nix-collect-garbage</code></td>
          <td>Remove unreferenced store paths</td>
      </tr>
      <tr>
          <td><code>nix-collect-garbage -d</code></td>
          <td>Delete all old generations, then collect</td>
      </tr>
      <tr>
          <td><code>nix-collect-garbage --delete-older-than 30d</code></td>
          <td>Remove store paths older than 30 days</td>
      </tr>
      <tr>
          <td><code>nix store info</code></td>
          <td>Show store statistics</td>
      </tr>
      <tr>
          <td><code>nix search nixpkgs &lt;term&gt;</code></td>
          <td>Search available packages</td>
      </tr>
      <tr>
          <td><code>nix path-info -shr nixpkgs#&lt;pkg&gt;</code></td>
          <td>Show package size including dependencies</td>
      </tr>
  </tbody>
</table>
]]></content:encoded><media:content url="https://getnix.io/og-ai-nix-adhoc.png"/></item><item><title>Automatic NixOS Upgrades with Forgejo Actions</title><link>https://getnix.io/guides/nixos-auto-upgrades/</link><pubDate>Sun, 12 Apr 2026 00:00:00 +0000</pubDate><guid>https://getnix.io/guides/nixos-auto-upgrades/</guid><description>The problem: update fatigue Every NixOS machine you manage needs its flake inputs updated, its configuration rebuilt, and the new generation activated. Do it manually and you either fall behind on security patches or spend your weekends SSH-ing into servers. Script it naively and you ship untested updates straight to production.
The ideal workflow:
CI updates flake.lock on a schedule and shows you exactly what changed. You review and merge a pull request with per-host package diffs. Hosts self-upgrade from the merged commit — no manual intervention, no surprises. This guide builds exactly that with a Forgejo Actions workflow and a small NixOS module.</description><content:encoded><![CDATA[<h2 id="the-problem-update-fatigue">The problem: update fatigue</h2>
<p>Every NixOS machine you manage needs its flake inputs updated, its configuration rebuilt, and the new generation activated. Do it manually and you either fall behind on security patches or spend your weekends SSH-ing into servers. Script it naively and you ship untested updates straight to production.</p>
<p>The ideal workflow:</p>
<ol>
<li><strong>CI updates <code>flake.lock</code></strong> on a schedule and shows you exactly what changed.</li>
<li><strong>You review and merge</strong> a pull request with per-host package diffs.</li>
<li><strong>Hosts self-upgrade</strong> from the merged commit — no manual intervention, no surprises.</li>
</ol>
<p>This guide builds exactly that with a Forgejo Actions workflow and a small NixOS module.</p>
<h2 id="architecture">Architecture</h2>
<p>The system has two independent timers, staggered three hours apart:</p>
<table>
  <thead>
      <tr>
          <th>Component</th>
          <th>Runs at</th>
          <th>What it does</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Forgejo Actions workflow</td>
          <td>02:00 UTC</td>
          <td>Runs the diff script, updates <code>flake.lock</code>, opens a PR with per-host diffs</td>
      </tr>
      <tr>
          <td>NixOS auto-upgrade timer</td>
          <td>05:00 (host-local)</td>
          <td>Fetches <code>main</code>, builds its own configuration, runs <code>nixos-rebuild switch</code></td>
      </tr>
  </tbody>
</table>
<figure class="mermaid-figure">
  
  <figcaption class="mermaid-caption">Interaction flow</figcaption>
  
  <div class="mermaid">

sequenceDiagram
participant CI as Forgejo CI
participant Repo as 🗂 Git Repo
participant Host as NixOS Host
participant You

    rect rgba(126, 186, 228, 0.08)
    Note over CI,Repo: Daily — 02:00 UTC
    CI->>CI: scripts/flake-update-diff.sh
    CI->>CI: Build all hosts before & after
    CI->>CI: nvd diff per host
    CI->>Repo: Open PR with diff report
    end

    rect rgba(126, 186, 228, 0.04)
    Note over Repo,You: Whenever ready (hours, days, …)
    You->>Repo: Review diffs & merge PR
    end

    rect rgba(126, 186, 228, 0.08)
    Note over Host,Repo: Daily — 05:00 host-local
    Host->>Repo: Fetch main
    Host->>Host: nix build (--no-update-lock-file)
    Host->>Host: nixos-rebuild switch
    Note over Host,Repo: No-op if main is unchanged
    end


  </div>
</figure>

<figure class="mermaid-figure">
  
  <figcaption class="mermaid-caption">Branch lifecycle</figcaption>
  
  <div class="mermaid">

gitGraph
commit id: " "
commit id: " "
branch auto/flake-update
checkout auto/flake-update
commit id: "chore: update flake inputs"
checkout main
merge auto/flake-update id: "merge PR"
commit id: "hosts rebuild" type: HIGHLIGHT
commit id: " "
commit id: " "

  </div>
</figure>

<p>Hosts never modify <code>flake.lock</code> themselves. They always use <code>--no-update-lock-file</code> and build whatever version of nixpkgs (and other inputs) the CI committed to <code>main</code>. This keeps every machine on the same, reviewed set of inputs.</p>
<h2 id="step-1-the-diff-script">Step 1: The diff script</h2>
<p>The core logic lives in a standalone shell script that can run both locally and in CI. It builds every NixOS host configuration before and after a flake update, then produces a per-host package diff via <a href="https://khumba.net/projects/nvd/"><code>nvd</code></a>.</p>
<p>Create <code>scripts/flake-update-diff.sh</code> in your NixOS flake repository:</p>
<pre><code class="language-bash">#!/usr/bin/env bash
#
# Build all NixOS host configurations before and after a flake update,
# then report per-host package diffs via nvd.
#
# Exit codes:
#   0 — at least one host has closure changes (diff report on stdout)
#   1 — unexpected error
#   2 — flake.lock did not change after update
#   3 — flake.lock changed but no host closures differ
#
# Options:
#   --skip-update   Skip &#39;nix flake update&#39; (useful when flake.lock is
#                   already updated and you just want to diff)
#
set -euo pipefail

SKIP_UPDATE=false
for arg in &#34;$@&#34;; do
  case &#34;$arg&#34; in
    --skip-update) SKIP_UPDATE=true ;;
    *) echo &#34;Unknown option: $arg&#34; &gt;&amp;2; exit 1 ;;
  esac
done

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
log()   { echo &#34;  $*&#34; &gt;&amp;2; }
ok()    { echo &#34;  ✓ $*&#34; &gt;&amp;2; }
fail()  { echo &#34;  ✗ $*&#34; &gt;&amp;2; }
# In CI: use ::group::/::endgroup:: for collapsible sections.
# Locally: print readable section headers instead.
if [ -n &#34;${CI:-}&#34; ]; then
  step()     { echo &#34;::group::$1&#34; &gt;&amp;2; }
  endstep()  { echo &#34;::endgroup::&#34; &gt;&amp;2; }
else
  step()     { echo &gt;&amp;2; echo &#34;── $1 ──&#34; &gt;&amp;2; }
  endstep()  { :; }
fi

# ---------------------------------------------------------------------------
# 1. Discover full (non-minimal) host configurations
# ---------------------------------------------------------------------------
step &#34;Discovering hosts&#34;
HOSTS=$(nix eval .#nixosConfigurations \
  --apply &#39;cs: builtins.filter (n: builtins.match &#34;.*-minimal&#34; n == null) (builtins.attrNames cs)&#39; \
  --json | nix run nixpkgs#jq -- -r &#39;.[]&#39;)

if [ -z &#34;$HOSTS&#34; ]; then
  fail &#34;No host configurations found.&#34;
  exit 1
fi
HOST_COUNT=$(echo &#34;$HOSTS&#34; | wc -w | tr -d &#39; &#39;)
log &#34;Found $HOST_COUNT host(s): $(echo &#34;$HOSTS&#34; | tr &#39;\n&#39; &#39; &#39;)&#34;
endstep

# ---------------------------------------------------------------------------
# 2. Build current (before-update) configurations
# ---------------------------------------------------------------------------
for host in $HOSTS; do
  step &#34;Building current $host&#34;
  nix build &#34;.#nixosConfigurations.$host.config.system.build.toplevel&#34; \
    -o &#34;result-before-$host&#34; || true
  endstep
done

# ---------------------------------------------------------------------------
# 3. Update flake inputs
# ---------------------------------------------------------------------------
if [ &#34;$SKIP_UPDATE&#34; = false ]; then
  step &#34;Updating flake inputs&#34;
  nix flake update
  endstep
fi

# ---------------------------------------------------------------------------
# 4. Check whether flake.lock actually changed
# ---------------------------------------------------------------------------
step &#34;Checking for flake.lock changes&#34;
if git diff --quiet flake.lock 2&gt;/dev/null; then
  ok &#34;No changes — nothing to do.&#34;
  endstep
  exit 2
fi
ok &#34;flake.lock has changed, rebuilding hosts.&#34;
endstep

# ---------------------------------------------------------------------------
# 5. Build updated configurations and generate per-host diffs
# ---------------------------------------------------------------------------
DIFF_REPORT=&#34;&#34;
HAS_CHANGES=false
CHANGED_HOSTS=&#34;&#34;
UNCHANGED_HOSTS=&#34;&#34;

for host in $HOSTS; do
  step &#34;Building updated $host&#34;
  nix build &#34;.#nixosConfigurations.$host.config.system.build.toplevel&#34; \
    -o &#34;result-after-$host&#34; || true
  endstep

  if [ -e &#34;result-before-$host&#34; ] &amp;&amp; [ -e &#34;result-after-$host&#34; ]; then
    step &#34;Package diff for $host&#34;
    HOST_DIFF=$(nix run nixpkgs#nvd -- diff &#34;result-before-$host&#34; &#34;result-after-$host&#34; 2&gt;&amp;1 || true)
    echo &#34;$HOST_DIFF&#34; &gt;&amp;2
    endstep

    if [ &#34;$(readlink &#34;result-before-$host&#34;)&#34; != &#34;$(readlink &#34;result-after-$host&#34;)&#34; ]; then
      HAS_CHANGES=true
      CHANGED_HOSTS=&#34;$CHANGED_HOSTS $host&#34;
      DIFF_REPORT=&#34;${DIFF_REPORT}### ${host}&#34;$&#39;\n&#39;&#34;\`\`\`&#34;$&#39;\n&#39;&#34;${HOST_DIFF}&#34;$&#39;\n&#39;&#34;\`\`\`&#34;$&#39;\n\n&#39;
    else
      UNCHANGED_HOSTS=&#34;$UNCHANGED_HOSTS $host&#34;
    fi
  fi
done

# ---------------------------------------------------------------------------
# 6. Report results
# ---------------------------------------------------------------------------
step &#34;Summary&#34;
if [ -n &#34;$CHANGED_HOSTS&#34; ]; then
  ok &#34;Changed:  $CHANGED_HOSTS&#34;
fi
if [ -n &#34;$UNCHANGED_HOSTS&#34; ]; then
  log &#34;Unchanged:$UNCHANGED_HOSTS&#34;
fi

if [ &#34;$HAS_CHANGES&#34; = false ]; then
  fail &#34;flake.lock changed but no host closures differ.&#34;
  endstep
  exit 3
fi

ok &#34;Done — diff report ready.&#34;
endstep

# Print the diff report to stdout for consumers (CI, local review, etc.)
printf &#39;%s&#39; &#34;$DIFF_REPORT&#34;</code></pre>
<h3 id="how-the-script-works">How the script works</h3>
<ol>
<li><strong>Discover hosts</strong> — evaluates the flake to list all <code>nixosConfigurations</code>, filtering out any ending in <code>-minimal</code> (installer images, etc.).</li>
<li><strong>Build before</strong> — every host configuration is built from the current <code>flake.lock</code> and stored as a symlink <code>result-before-&lt;host&gt;</code>.</li>
<li><strong>Update</strong> — <code>nix flake update</code> pulls the latest nixpkgs, home-manager, and any other inputs (skipped with <code>--skip-update</code>).</li>
<li><strong>Check</strong> — if <code>flake.lock</code> is unchanged, the script exits early with code <code>2</code>.</li>
<li><strong>Build after &amp; diff</strong> — hosts are rebuilt with the updated lock file. For each host, <code>nvd</code> compares the before/after store paths and reports added, removed, and version-changed packages. The script distinguishes hosts whose closures actually changed from those that are identical despite the lock file update.</li>
<li><strong>Report</strong> — progress and a summary go to stderr; the machine-readable diff report goes to stdout for consumers (the CI workflow, a local terminal, etc.). If <code>flake.lock</code> changed but no host closures differ, the script exits with code <code>3</code>.</li>
</ol>
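<p>The change detection in step 4 hinges on the exit status of <code>git diff --quiet</code>: <code>0</code> when the file matches the index, <code>1</code> when it differs. A throwaway demonstration of those semantics (temporary repo, hypothetical lock contents — your real repository is untouched):</p>

```shell
# Demonstrate the 'git diff --quiet' exit semantics used in step 4.
# Everything happens in a temporary directory.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo 'lock v1' > flake.lock
git add flake.lock
git -c user.email=ci@example.com -c user.name=ci commit -q -m 'add lock'

git diff --quiet flake.lock && echo "unchanged"   # exit 0: worktree matches index

echo 'lock v2' > flake.lock                       # simulate 'nix flake update'
git diff --quiet flake.lock || echo "changed"     # exit 1: flake.lock differs
```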
<h3 id="exit-codes">Exit codes</h3>
<table>
  <thead>
      <tr>
          <th>Code</th>
          <th>Meaning</th>
          <th>CI action</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><code>0</code></td>
          <td>At least one host has closure changes</td>
          <td>Open PR with diff report</td>
      </tr>
      <tr>
          <td><code>1</code></td>
          <td>Unexpected error</td>
          <td>Fail the job</td>
      </tr>
      <tr>
          <td><code>2</code></td>
          <td><code>flake.lock</code> did not change — nothing to update</td>
          <td>Skip PR, job succeeds</td>
      </tr>
      <tr>
          <td><code>3</code></td>
          <td><code>flake.lock</code> changed but no host closures differ</td>
          <td>Skip PR, job succeeds</td>
      </tr>
  </tbody>
</table>
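<p>A consumer of the script can branch on these codes directly. A minimal sketch of that dispatch — with a stub function standing in for <code>./scripts/flake-update-diff.sh</code> so it runs anywhere, and hypothetical action names:</p>

```shell
# Map the diff script's exit codes to CI decisions.
# 'flake_update_diff' is a stub standing in for ./scripts/flake-update-diff.sh.
flake_update_diff() { return 3; }   # pretend: lock changed, closures identical

flake_update_diff && status=0 || status=$?

case "$status" in
  0) action="open-pr" ;;                  # real changes: open PR with the diff
  2) action="skip-lock-unchanged" ;;      # nothing to update
  3) action="skip-closures-unchanged" ;;  # lock churn only
  *) action="fail" ;;                     # unexpected error: fail the job
esac
echo "$action"
```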
<h3 id="running-locally">Running locally</h3>
<p>Because the script is independent of CI, you can run it on your workstation to preview what an update would change before committing anything:</p>
<pre><code class="language-bash"># Full run — update flake.lock and diff all hosts
$ ./scripts/flake-update-diff.sh

# Diff only — you already ran nix flake update manually
$ ./scripts/flake-update-diff.sh --skip-update</code></pre>
<p>Progress is printed to stderr, so the diff report on stdout can be piped or redirected:</p>
<pre><code class="language-bash">$ ./scripts/flake-update-diff.sh &gt; /tmp/diff-report.md</code></pre>
<h2 id="step-2-the-ci-workflow">Step 2: The CI workflow</h2>
<p>The workflow is a thin wrapper around the script. It handles SSH setup, git identity, and opening a pull request — all build and diff logic lives in the script above.</p>
<p>Create <code>.forgejo/workflows/update.yaml</code>:</p>
<pre><code class="language-yaml">name: Update Flake Inputs

on:
  schedule:
    - cron: &#34;0 2 * * *&#34; # Daily at 02:00 UTC — well before hosts auto-upgrade.
  workflow_dispatch: # Allow manual trigger from the Forgejo UI.

env:
  GIT_USER_NAME: ci # ⚠️ Replace with your CI bot name.
  GIT_USER_EMAIL: ci@example.com # ⚠️ Replace with your CI bot email.

jobs:
  update:
    name: Update flake.lock and diff all hosts
    runs-on: nixos-builder # A native NixOS runner — leverages the host Nix store.
    steps:
      - name: Checkout repository
        uses: https://data.forgejo.org/actions/checkout@v6

      # Write the deploy key so git can push branches and sign commits.
      - name: Configure SSH key for git push and commit signing
        run: |
          SSH_DIR=&#34;$RUNNER_TEMP/.ssh&#34;
          mkdir -p &#34;$SSH_DIR&#34;
          echo &#34;${{ secrets.GIT_PRIVATE_KEY }}&#34; &gt; &#34;$SSH_DIR/forgejo_key&#34;
          chmod 600 &#34;$SSH_DIR/forgejo_key&#34;
          SSH_BIN=&#34;$(command -v ssh)&#34;
          export GIT_SSH_COMMAND=&#34;$SSH_BIN -i $SSH_DIR/forgejo_key -o StrictHostKeyChecking=no&#34;
          echo &#34;GIT_SSH_COMMAND=$GIT_SSH_COMMAND&#34; &gt;&gt; &#34;$FORGEJO_ENV&#34;
          echo &#34;SSH_KEY_PATH=$SSH_DIR/forgejo_key&#34; &gt;&gt; &#34;$FORGEJO_ENV&#34;
          FORGEJO_DOMAIN=&#34;${FORGEJO_SERVER_URL#https://}&#34;
          FORGEJO_DOMAIN=&#34;${FORGEJO_DOMAIN#http://}&#34;
          echo &#34;FORGEJO_DOMAIN=$FORGEJO_DOMAIN&#34; &gt;&gt; &#34;$FORGEJO_ENV&#34;

      # Configure the CI bot identity and SSH commit signing.
      - name: Configure git identity, signing, and SSH remote
        run: |
          git config user.name &#34;${{ env.GIT_USER_NAME }}&#34;
          git config user.email &#34;${{ env.GIT_USER_EMAIL }}&#34;
          git config gpg.format ssh
          git config user.signingkey &#34;$SSH_KEY_PATH&#34;
          git config commit.gpgsign true
          git remote set-url origin git@${{ env.FORGEJO_DOMAIN }}:${{ forgejo.repository }}.git

      # Run the diff script. Exit codes 2 and 3 are expected
      # &#34;no-op&#34; conditions — only code 0 means a PR is needed.
      - name: Update flake inputs and diff all hosts
        id: diff
        run: |
          DIFF_REPORT=$(bash ./scripts/flake-update-diff.sh) &amp;&amp; STATUS=0 || STATUS=$?

          case &#34;$STATUS&#34; in
            0)
              echo &#34;has_changes=true&#34; &gt;&gt; &#34;$FORGEJO_OUTPUT&#34;
              {
                echo &#34;report&lt;&lt;DIFF_EOF&#34;
                printf &#39;%s\n&#39; &#34;$DIFF_REPORT&#34;
                echo &#34;DIFF_EOF&#34;
              } &gt;&gt; &#34;$FORGEJO_OUTPUT&#34;
              ;;
            2) echo &#34;No flake.lock changes, skipping PR.&#34; ;;
            3) echo &#34;No host closure changes, skipping PR.&#34; ;;
            *) exit &#34;$STATUS&#34; ;;
          esac

      # Commit the updated flake.lock and open a PR with the diff report.
      - name: Create branch, commit, and open pull request
        if: steps.diff.outputs.has_changes == &#39;true&#39;
        env:
          DIFF_REPORT: ${{ steps.diff.outputs.report }}
          FORGEJO_TOKEN: ${{ forgejo.token }}
        run: |
          DATE=$(date &#43;%Y-%m-%d)
          BRANCH=&#34;auto/flake-update-$DATE&#34;

          git push origin --delete &#34;$BRANCH&#34; || true
          git checkout -b &#34;$BRANCH&#34;
          git add flake.lock
          git commit -m &#34;chore: update flake inputs $DATE&#34;
          git push origin &#34;$BRANCH&#34;

          # jq --arg safely handles all escaping (backticks, newlines, quotes)
          PAYLOAD=$(nix run nixpkgs#jq -- -n \
            --arg title &#34;chore: update flake inputs $DATE&#34; \
            --arg head  &#34;$BRANCH&#34; \
            --arg base  &#34;main&#34; \
            --arg diff  &#34;$DIFF_REPORT&#34; \
            &#39;{
              title: $title, head: $head, base: $base,
              body: &#34;## Automated flake.lock update\n\nPackage changes per host (via nvd):\n\n\($diff)\n---\n*Auto-generated by the update workflow.*&#34;
            }&#39;)

          nix run nixpkgs#curl -- -sf -X POST \
            -H &#34;Authorization: token $FORGEJO_TOKEN&#34; \
            -H &#34;Content-Type: application/json&#34; \
            &#34;${{ forgejo.server_url }}/api/v1/repos/${{ forgejo.repository }}/pulls&#34; \
            -d &#34;$PAYLOAD&#34;

      - name: Cleanup
        if: always()
        run: |
          rm -f &#34;$SSH_KEY_PATH&#34;
          rm -f result-before-* result-after-*</code></pre>
<p>Compared to inlining all the build and diff logic in the workflow, this split has two advantages: the script can be run locally to preview updates before committing, and the workflow YAML stays focused on CI plumbing (SSH, git, PR creation) rather than build logic.</p>
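<p>One mechanism in the diff step worth calling out: the <code>report&lt;&lt;DIFF_EOF</code> lines use the Actions heredoc syntax for multiline step outputs — everything between the delimiter lines becomes the value of <code>report</code>. The same write can be exercised against a plain file standing in for <code>$FORGEJO_OUTPUT</code> (report content below is hypothetical):</p>

```shell
# Simulate writing a multiline step output the way the workflow does,
# with a temp file in place of $FORGEJO_OUTPUT.
FORGEJO_OUTPUT=$(mktemp)
DIFF_REPORT='### webserver
firefox: 124.0 -> 125.0'   # hypothetical nvd diff

{
  echo "report<<DIFF_EOF"
  printf '%s\n' "$DIFF_REPORT"
  echo "DIFF_EOF"
} >> "$FORGEJO_OUTPUT"

cat "$FORGEJO_OUTPUT"
```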
<h3 id="where-it-runs">Where it runs</h3>
<p>The workflow runs on a <strong>NixOS host</strong> (runner label <code>nixos-builder</code>), not inside a container. This is important for two reasons:</p>
<ul>
<li>It needs a working Nix installation with direct access to <code>/nix/store</code>. The host&rsquo;s Nix store acts as a persistent build cache — derivations that haven&rsquo;t changed since the last run are already in the store and don&rsquo;t need to be rebuilt or downloaded again.</li>
<li>It builds full NixOS system configurations (<code>system.build.toplevel</code>), which are large closures that benefit greatly from an existing store.</li>
</ul>
<p>You <em>can</em> run this workflow in a container (e.g. a Docker-based Forgejo runner with Nix installed), but each run would start with a cold Nix store. That means every derivation is fetched or built from scratch, turning a job that takes minutes on a NixOS host into one that can take significantly longer. If you go the container route, mounting a persistent volume for <code>/nix/store</code> and <code>/nix/var/nix/db</code> helps, but a native NixOS runner remains the most efficient option.</p>
<p>If you already run a Forgejo runner on a NixOS machine, point this workflow at it. Otherwise, register a new runner on any NixOS host with the label <code>nixos-builder</code>.</p>
<h3 id="secrets-and-configuration">Secrets and configuration</h3>
<p>The workflow needs one secret. Most configuration is derived automatically from Forgejo context variables.</p>
<h4 id="required-secrets">Required secrets</h4>
<table>
  <thead>
      <tr>
          <th>Secret</th>
          <th>Purpose</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><code>GIT_PRIVATE_KEY</code></td>
          <td>SSH private key used for two things: pushing the update branch (<code>git push</code>) and signing the commit (<code>gpg.format ssh</code>). The key must have write access to the repository. Store it in <strong>Forgejo → Repository Settings → Secrets</strong>.</td>
      </tr>
  </tbody>
</table>
<p>The <code>FORGEJO_TOKEN</code> used to create the pull request via the API is provided automatically by Forgejo (<code>${{ forgejo.token }}</code>). No manual setup is required.</p>
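<p>If you don't have a deploy key yet, one way to create a dedicated keypair is sketched below (paths and the key comment are examples — adjust to taste). Add the public half as a deploy key with write access, and paste the private half into the <code>GIT_PRIVATE_KEY</code> secret:</p>

```shell
# Generate a dedicated ed25519 deploy key for the CI bot (example paths).
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -C "ci@example.com" -f "$keydir/forgejo_ci_key"

# Public half -> Forgejo deploy key (grant write access):
cat "$keydir/forgejo_ci_key.pub"
# Private half -> repository secret GIT_PRIVATE_KEY:
cat "$keydir/forgejo_ci_key"
```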
<h4 id="values-to-customize">Values to customize</h4>
<p>Only two values are hardcoded in the YAML — everything else (remote URL, API endpoint) is derived from Forgejo context variables (<code>forgejo.server_url</code>, <code>forgejo.repository</code>):</p>
<table>
  <thead>
      <tr>
          <th>Value</th>
          <th>Where in the YAML</th>
          <th>What to set</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Git identity</td>
          <td><code>env.GIT_USER_NAME</code> / <code>env.GIT_USER_EMAIL</code></td>
          <td>The name and email for CI commits</td>
      </tr>
      <tr>
          <td>Runner label</td>
          <td><code>runs-on: nixos-builder</code></td>
          <td>Must match your registered NixOS runner</td>
      </tr>
  </tbody>
</table>
<h4 id="optional--tunable">Optional / tunable</h4>
<table>
  <thead>
      <tr>
          <th>Field</th>
          <th>Default</th>
          <th>Notes</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><code>cron:</code> schedule</td>
          <td><code>0 2 * * *</code> (02:00 UTC)</td>
          <td>Adjust to any cron expression that suits your timezone or review habits</td>
      </tr>
      <tr>
          <td><code>workflow_dispatch</code></td>
          <td>enabled</td>
          <td>Allows manual triggering from the Forgejo UI — remove the key if not wanted</td>
      </tr>
      <tr>
          <td>Host filter</td>
          <td>excludes <code>*-minimal</code> hosts</td>
          <td>The <code>builtins.filter</code> in the script skips hosts matching <code>.*-minimal</code> — adjust the regex in <code>scripts/flake-update-diff.sh</code> to match your naming convention</td>
      </tr>
      <tr>
          <td>PR base branch</td>
          <td><code>main</code></td>
          <td>Change if your default branch has a different name</td>
      </tr>
  </tbody>
</table>
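<p>To preview which hosts the filter keeps before touching the script, the Nix-side <code>builtins.match &quot;.*-minimal&quot;</code> exclusion can be approximated in plain shell (host names below are hypothetical):</p>

```shell
# Shell approximation of the script's host filter: drop names ending in
# '-minimal', as builtins.match ".*-minimal" does on the Nix side.
hosts='webserver
devbox
installer-minimal'   # hypothetical nixosConfigurations names

kept=$(printf '%s\n' "$hosts" | grep -v -e '-minimal$')
echo "$kept"
```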
<h3 id="what-the-pr-looks-like">What the PR looks like</h3>
<p>The resulting pull request body contains a per-host section like this:</p>
<pre><code class="language-text">## Automated flake.lock update

Package changes per host (via nvd):

### webserver

  [nvd output showing package upgrades, additions, and removals]

### devbox

  [nvd output for this host]</code></pre>
<p>Hosts whose closures did not change (despite the <code>flake.lock</code> update) are omitted from the report. You review the PR, see exactly which packages changed on which host, and merge when ready. Nothing reaches your machines until you merge to <code>main</code>.</p>
<blockquote>
  <p><strong>Tip: fully hands-off with auto-merge.</strong> If you trust the CI builds and don&rsquo;t want to review every update manually, most Git forges (Forgejo, GitHub, GitLab) support auto-merging PRs once all required checks pass. Enable branch protection with a required status check for the build job, then configure auto-merge on the PR. The PR will merge itself as soon as CI is green — turning the entire pipeline into a zero-touch flow where hosts upgrade daily without any human interaction. You can still review the merged diff after the fact and roll back if needed.</p>

</blockquote>
<h2 id="step-3-the-auto-upgrade-nixos-module">Step 3: The auto-upgrade NixOS module</h2>
<p>Create a module that wraps <code>system.autoUpgrade</code> so hosts pull from your Git repository on a schedule:</p>
<pre><code class="language-nix"># modules/auto-upgrade.nix
{
  config,
  lib,
  ...
}:
let
  hostname = config.networking.hostName;
in
{
  options.services.auto-upgrade.enable =
    lib.mkEnableOption &#34;automatic daily flake-based NixOS upgrade&#34;;

  config = lib.mkIf config.services.auto-upgrade.enable {
    system.autoUpgrade = {
      enable = true;

      # Fetch the latest main branch from your Git server.
      # Each host selects its own nixosConfiguration by hostname.
      flake = &#34;git&#43;https://your-forgejo/nix/nixos.git#${hostname}&#34;;

      # Never update flake.lock on the host — CI handles that.
      flags = [
        &#34;--no-update-lock-file&#34;
      ];

      # Run daily at 05:00 (host-local time), three hours after the 02:00 UTC CI run.
      dates = &#34;05:00&#34;;

      # Do not reboot automatically.
      # Most services restart on nixos-rebuild switch.
      allowReboot = false;

      # &#34;switch&#34; activates immediately.
      # Use &#34;boot&#34; to defer activation until next reboot.
      operation = &#34;switch&#34;;
    };
  };
}</code></pre>
<p>Key design decisions:</p>
<ul>
<li><strong><code>--no-update-lock-file</code></strong> — the most important flag. Hosts consume whatever <code>flake.lock</code> is on <code>main</code>. They never resolve inputs independently, so every machine converges on the same package set.</li>
<li><strong><code>flake = &quot;git+https://...#${hostname}&quot;</code></strong> — each host selects its own <code>nixosConfiguration</code> output by hostname. One repository, many machines.</li>
<li><strong><code>dates = &quot;05:00&quot;</code></strong> — staggered three hours after the CI runs at 02:00 UTC. This gives you a window to review the PR. If it is not merged yet, hosts simply rebuild from the current <code>main</code> (a no-op if nothing changed).</li>
<li><strong><code>allowReboot = false</code></strong> — most NixOS services restart on <code>switch</code>. Set to <code>true</code> if you need kernel or initrd updates to take effect immediately.</li>
</ul>
<h2 id="step-4-enable-on-each-host">Step 4: Enable on each host</h2>
<p>Include the module in your flake and enable it per host:</p>
<pre><code class="language-nix"># flake.nix (simplified)
{
  outputs = { nixpkgs, ... }: {
    nixosConfigurations = {
      webserver = nixpkgs.lib.nixosSystem {
        modules = [
          ./modules/auto-upgrade.nix
          ./hosts/webserver
        ];
      };

      devbox = nixpkgs.lib.nixosSystem {
        modules = [
          ./modules/auto-upgrade.nix
          ./hosts/devbox
        ];
      };
    };
  };
}</code></pre>
<p>Then in each host&rsquo;s configuration:</p>
<pre><code class="language-nix"># hosts/webserver/default.nix
{
  services.auto-upgrade.enable = true;
}</code></pre>
<p>For machines you deploy manually (like the CI builder itself, or test machines), simply omit the option or set it to <code>false</code>.</p>
<h2 id="step-5-verify-the-setup">Step 5: Verify the setup</h2>
<p>After deploying the configuration to your hosts, check that the systemd timer and service are in place:</p>
<pre><code class="language-bash"># Check the timer schedule and when it last fired
$ systemctl status nixos-upgrade.timer
● nixos-upgrade.timer
     Loaded: loaded
     Active: active (waiting)
    Trigger: tomorrow at 05:00

# Check the last upgrade run
$ journalctl -u nixos-upgrade.service -n 30</code></pre>
<p>The <code>nixos-upgrade.service</code> logs show the full <code>nixos-rebuild switch</code> output — which generation was activated, which services restarted, and any errors.</p>
<h2 id="the-full-flow">The full flow</h2>
<p>Here is what happens every day without any manual intervention:</p>
<pre><code class="language-text">02:00 UTC  Forgejo Actions runs on CI builder
           ├─ scripts/flake-update-diff.sh
           │   ├─ Discover hosts, build before
           │   ├─ nix flake update
           │   ├─ Build after, nvd diff per host
           │   └─ Report (stdout → workflow)
           └─ Open PR with diff report

   You     Review PR, check package changes, merge to main

05:00      Each NixOS host (systemd timer)
           ├─ git fetch main (via flake URL)
           ├─ nix build own configuration
           └─ nixos-rebuild switch</code></pre>
<p>If you don&rsquo;t merge the PR before 05:00, hosts simply rebuild from the current <code>main</code> — effectively a no-op. The update waits until you merge.</p>
<h2 id="rollback">Rollback</h2>
<p>If a bad update slips through, NixOS makes rollback trivial:</p>
<pre><code class="language-bash"># Roll back to the previous generation
$ sudo nixos-rebuild switch --rollback

# Or boot into a previous generation from the bootloader</code></pre>
<p>Every generation is kept in the Nix store until garbage-collected, so you can always go back.</p>
<h2 id="adapting-the-schedule">Adapting the schedule</h2>
<table>
  <thead>
      <tr>
          <th>What to change</th>
          <th>Where</th>
          <th>Default</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>CI update time</td>
          <td><code>.forgejo/workflows/update.yaml</code> → <code>cron:</code></td>
          <td><code>0 2 * * *</code> (02:00 UTC)</td>
      </tr>
      <tr>
          <td>Host upgrade time</td>
          <td><code>modules/auto-upgrade.nix</code> → <code>dates</code></td>
          <td><code>05:00</code></td>
      </tr>
      <tr>
          <td>Auto-reboot</td>
          <td><code>modules/auto-upgrade.nix</code> → <code>allowReboot</code></td>
          <td><code>false</code></td>
      </tr>
      <tr>
          <td>Upgrade operation</td>
          <td><code>modules/auto-upgrade.nix</code> → <code>operation</code></td>
          <td><code>switch</code></td>
      </tr>
  </tbody>
</table>
<p>For desktops you might prefer <code>operation = &quot;boot&quot;</code> so the new configuration only activates on the next reboot, avoiding disruption during work hours. For servers, <code>switch</code> is usually the right choice since services restart gracefully.</p>
<h2 id="why-this-works-well">Why this works well</h2>
<ul>
<li><strong>No unreviewed changes reach production.</strong> The PR gate means you always see what changed before it deploys.</li>
<li><strong>Hosts never drift from each other.</strong> Every machine builds from the same <code>flake.lock</code> on <code>main</code>.</li>
<li><strong>Zero manual SSH sessions.</strong> Once the module is enabled, upgrades are fully automatic.</li>
<li><strong>Safe by default.</strong> If CI fails to build a host, the failure is visible in the workflow logs and that host is left out of the diff. If a host fails to build during its own upgrade, it stays on its current generation. If you merge something bad, <code>nixos-rebuild switch --rollback</code> fixes it in seconds.</li>
<li><strong>Works for any fleet size.</strong> Whether you run two machines or twenty, the same workflow scales — one PR, one merge, all hosts converge.</li>
</ul>
]]></content:encoded><media:content url="https://getnix.io/og-nixos-auto-upgrades.png"/></item></channel></rss>