Installation Guide
This guide covers various installation scenarios for the nix-config repository.
Windows Subsystem for Linux (WSL)
Prerequisites
- Windows 10 version 2004 and higher (Build 19041 and higher) or Windows 11
- WSL 2 enabled
- Administrator access to Windows
Installation Steps
1. Install WSL 2
# Run in PowerShell as Administrator
wsl --install
# If WSL is already installed, ensure you're using WSL 2
wsl --set-default-version 2
2. Setup NixOS for WSL
Download the latest NixOS-WSL release and import it into WSL:
# Download the latest NixOS-WSL tarball
# Import the NixOS-WSL distribution
wsl --import NixOS .\NixOS\ nixos-wsl.tar.gz --version 2
# Start the NixOS instance
wsl -d NixOS
3. Configure NixOS-WSL
After starting your NixOS-WSL instance:
# Clone this repository
sudo git clone https://github.com/DaRacci/nix-config.git /etc/nixos
# Apply the WSL configuration
sudo nixos-rebuild switch --flake /etc/nixos#winix
4. WSL-Specific Features
The WSL configuration includes:
- SSH agent relay between Windows and WSL
- Hardware acceleration support for development
- Remote desktop capabilities
- Optimized for headless operation
Native NixOS Installation
Prerequisites
- NixOS installation media
- Target hardware
- Network connectivity
- Backup of important data
Installation Process
1. Boot from NixOS Installation Media
- Download NixOS ISO from nixos.org
- Create bootable USB/DVD
- Boot from installation media
2. Network Configuration
# For WiFi connections
sudo systemctl start wpa_supplicant
wpa_cli
> add_network
> set_network 0 ssid "YourSSID"
> set_network 0 psk "YourPassword"
> enable_network 0
> quit
# Verify connectivity
ping nixos.org
3. Disk Setup
Follow standard NixOS installation procedures for disk partitioning and filesystem setup as described in the NixOS manual.
4. Generate Hardware Configuration
# Clone this repository first (git refuses to clone into a non-empty directory)
git clone https://github.com/DaRacci/nix-config.git /mnt/etc/nixos
# Create your host directory
mkdir -p /mnt/etc/nixos/hosts/{device-type}/{hostname}
# Generate the hardware configuration directly into your host directory
nixos-generate-config --show-hardware-config --root /mnt > /mnt/etc/nixos/hosts/{device-type}/{hostname}/hardware.nix
cd /mnt/etc/nixos
5. Customize Host Configuration
Edit hosts/{device-type}/{hostname}/default.nix and hardware.nix according to your needs.
6. Install NixOS
# Install with your specific host configuration
nixos-install --flake .#{hostname}
# Set root password when prompted
7. Post-Installation
# Reboot into new system
reboot
# After reboot, ensure configuration is applied
sudo nixos-rebuild switch --flake /etc/nixos#{hostname}
Existing NixOS System Migration
From Traditional NixOS Configuration
1. Backup Current Configuration
# Backup current configuration (adjust path if using flakes)
sudo cp -r /etc/nixos /etc/nixos.backup
2. Clone This Repository
# Clone to a working directory
git clone https://github.com/DaRacci/nix-config.git /tmp/nix-config
# Copy the contents (the trailing /. includes hidden files such as .git)
sudo cp -r /tmp/nix-config/. /etc/nixos/
3. Create Host Configuration
# Create your host directory
sudo mkdir -p /etc/nixos/hosts/{device-type}/{hostname}
# Migrate your hardware configuration
sudo cp /etc/nixos.backup/hardware-configuration.nix /etc/nixos/hosts/{device-type}/{hostname}/hardware.nix
# Create default.nix based on your old configuration
# Edit to follow the new structure
4. Test and Apply
# Test the new configuration
sudo nixos-rebuild build --flake .#{hostname}
# Apply if build succeeds
sudo nixos-rebuild switch --flake .#{hostname}
IO Guardian - Database Availability System
The IO Guardian system ensures that services across the infrastructure are aware
of the availability of centralized databases (PostgreSQL and Redis) hosted on config.server.ioPrimaryHost.
It provides graceful startup and shutdown coordination between the database host and dependent services on other servers.
Overview
The system consists of two components:
- Guardian Server (runs on client servers)
  - WebSocket server that listens for commands from the coordinator
  - Executes drain/undrain commands by controlling io-databases.target
- Guardian Client (runs on the IO Host)
  - WebSocket client that connects to all guardian servers
  - Sends the undrain command after databases are online (starting dependent services)
  - Sends the drain command before database shutdown (stopping dependent services)
How It Works
System Startup
- Client servers boot and run wait-for-io-databases.service
- This service waits (with retries) until PostgreSQL and Redis on the IO Host are reachable
- Once databases are confirmed available, the service completes
- The io-databases.target is now ready to be activated
- When the IO Host's io-database-coordinator.service starts, it sends undrain to all clients
- Clients start io-databases.target, which starts all dependent services
Database Shutdown (Graceful Drain)
- When io-database-coordinator.service stops (before databases stop)
- It connects to all guardian servers via WebSocket
- Sends the drain command to each server
- Guardian servers stop io-databases.target
- Dependent services stop gracefully before databases go down
Database Startup (Undrain)
- When databases come online on the IO Host, io-database-coordinator.service starts
- It sends the undrain command to all guardian servers
- Guardian servers start io-databases.target
- All dependent services start
Security
Communication is secured using a Pre-Shared Key (PSK) that must be at least 32 characters. All WebSocket connections must authenticate with this key before commands are accepted.
Generating the PSK
Generate a new PSK using OpenSSL:
openssl rand -base64 32
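Since the guardian rejects keys shorter than 32 characters, it can be worth sanity-checking the value before deploying it. A minimal sketch (base64-encoding 32 random bytes always yields 44 characters, so openssl rand -base64 32 comfortably clears the minimum):

```shell
# Generate 32 random bytes and base64-encode them, equivalent to
# `openssl rand -base64 32` but using only coreutils.
PSK="$(head -c 32 /dev/urandom | base64 | tr -d '\n')"

# Enforce the 32-character minimum required by the guardian.
if [ "${#PSK}" -ge 32 ]; then
  echo "PSK length OK: ${#PSK} characters"
else
  echo "PSK too short: ${#PSK} characters" >&2
  exit 1
fi
```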
Adding the Secret
Add the generated PSK to hosts/server/secrets.yaml:
IO_GUARDIAN_PSK: <your-generated-key>
Then encrypt the file:
sops --encrypt --in-place hosts/server/secrets.yaml
Configuration
Port
The guardian WebSocket server listens on port 9876 by default. This port is automatically opened to local subnets on servers with database dependencies.
Dependent Services
The dependent services list is automatically populated with every name from server.database.postgres or server.database.redis for which a systemd.services.<name> unit is defined.
To manually bind a service to the database availability target, add it to the server.database.dependentServices option:
{
server.database.dependentServices = [
"my-service"
"another-service"
];
}
Services listed here will:
- Start only when io-databases.target is active
- Stop when io-databases.target stops
- Restart when the target restarts
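The start/stop/restart coupling described above corresponds to standard systemd dependency directives; a rendered unit for a dependent service would plausibly contain something like this (a sketch, not the module's literal output):

```ini
# my-service.service (illustrative drop-in)
[Unit]
# Start only when the target is active
Requires=io-databases.target
After=io-databases.target
# Propagate the target's stop and restart to this service
PartOf=io-databases.target
```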
Systemd Units
On Client Servers
| Unit | Type | Description |
|---|---|---|
| io-guardian.service | simple | WebSocket server for receiving commands |
| io-databases.target | target | Represents “databases are online” |
| wait-for-io-databases.service | oneshot | Waits for databases at boot (runs once) |
On nixio
| Unit | Type | Description |
|---|---|---|
| io-database-coordinator.service | oneshot | Sends undrain on start, drain on stop |
Troubleshooting
Checking Guardian Status
On client servers:
systemctl status io-guardian.service
systemctl status io-databases.target
systemctl status wait-for-io-databases.service
journalctl -u io-guardian.service -f
On the IO Host:
systemctl status io-database-coordinator.service
journalctl -u io-database-coordinator.service
Manual Commands
To manually start dependent services on a client:
systemctl start io-databases.target
To manually stop dependent services:
systemctl stop io-databases.target
Common Issues
Guardian server won’t start:
- Check that the IO_GUARDIAN_PSK secret is properly configured
- Verify that sops decryption is working: cat /run/secrets/IO_GUARDIAN_PSK
Services not starting after boot:
- Check the wait service logs: journalctl -u wait-for-io-databases.service
- Verify network connectivity to the IO Host on ports 5432 (Postgres) and 6379 (Redis)
- Ensure the IO Host's coordinator has sent the undrain command
Authentication failures in logs:
- Ensure the same PSK is deployed to all servers
- Re-encrypt secrets if the key was changed
Protocol Reference
The guardian uses a simple JSON-based WebSocket protocol:
Authentication
// Client sends:
{"type": "auth", "key": "<psk>"}
// Server responds:
{"type": "auth", "status": "ok", "message": "Authentication successful"}
// or
{"type": "auth", "status": "error", "message": "Invalid key"}
Commands
// Coordinator sends:
{"type": "command", "action": "drain"}
// or
{"type": "command", "action": "undrain"}
// or
{"type": "command", "action": "ping"}
// Server responds:
{"type": "response", "action": "<action>", "status": "ok", "message": "..."}
// or
{"type": "response", "action": "<action>", "status": "error", "message": "..."}
Server Cluster Monitoring
The monitoring module provides a comprehensive observability stack for the server cluster using Prometheus (metrics), Loki (logs), and Grafana (visualization). All components are configured as reusable NixOS modules with automatic cross-host discovery.
Overview
The system consists of three layers:
- Exporters (run on all servers)
  - node_exporter for system-level metrics (CPU, memory, disk, network)
  - Promtail for shipping journald logs to Loki
  - Application-specific exporters (Caddy, PostgreSQL, Redis) enabled automatically
- Collectors (run on the monitoring primary host)
  - Prometheus for metrics aggregation with 90-day retention
  - Loki for log aggregation with 90-day retention
  - Alertmanager for alert routing and notifications
- Visualization (runs on the monitoring primary host)
  - Grafana with provisioned datasources and dashboards
  - Native Kanidm OAuth2 authentication
Architecture
┌─────────────────────────────────────────────────────┐
│ nixmon (Monitoring Primary) │
│ ┌──────────┐ ┌──────┐ ┌─────────┐ ┌──────────┐ │
│ │Prometheus │ │ Loki │ │ Grafana │ │Alertmgr │ │
│ │ :9090 │ │:3100 │ │ :3000 │ │ :9093 │ │
│ └────┬──┬──┘ └──┬───┘ └─────────┘ └────┬─────┘ │
│ │ │ │ │ │
│ ┌────┘ │ ┌────┘ ┌────────────────┘ │
│ │ scrape│ │ push │ webhooks │
├──┼───────┼───┼─────────────┼────────────────────────┤
│ ▼ ▼ ▼ ▼ │
│ All servers: Home Assistant / Nextcloud │
│ - node_exporter :9100 │
│ - promtail → Loki │
│ - caddy metrics :2019 (if proxy configured) │
│ - postgres_exporter :9187 (if postgres configured) │
│ - redis_exporter :9121 (if redis configured) │
│ - pve_exporter :9221 (nixmon only, Proxmox API) │
└─────────────────────────────────────────────────────┘
Configuration
Enabling Monitoring
Monitoring is enabled by default on all servers (server.monitoring.enable = true).
The monitoring primary host is configured via the allocations.server.monitoringPrimaryHost
option, currently set to nixmon.
Options Reference
All options live under server.monitoring:
| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | true | Enable monitoring for this server |
| retention.metrics | string | "90d" | Prometheus TSDB retention period |
| retention.logs | string | "90d" | Loki log retention period |
| exporters.node.enable | bool | true | Enable node_exporter |
| exporters.caddy.enable | bool | auto | Enable Caddy metrics (auto if proxy configured) |
| exporters.postgres.enable | bool | auto | Enable PostgreSQL exporter (auto on IO host) |
| exporters.redis.enable | bool | auto | Enable Redis exporter (auto on IO host) |
| logs.enable | bool | true | Enable Promtail log shipping |
| collector.enable | bool | auto | Enable collectors (auto on monitoring host) |
| collector.grafana.kanidm.enable | bool | true | Enable Kanidm OAuth2 for Grafana |
| collector.alerting.enable | bool | true | Enable Alertmanager |
| collector.alerting.homeAssistant.enable | bool | false | Enable Home Assistant webhook alerting |
| collector.alerting.nextcloudTalk.enable | bool | false | Enable Nextcloud Talk webhook alerting |
| collector.proxmox.enable | bool | true | Enable Proxmox VE metrics collection |
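As a sketch, overriding a few of these options in a host configuration might look like the following (values are illustrative, not recommendations):

```nix
{
  server.monitoring = {
    # Shorter metrics retention than the 90-day default
    retention.metrics = "30d";
    # Force-enable the Caddy exporter instead of relying on auto-detection
    exporters.caddy.enable = true;
    # Route critical and warning alerts to Home Assistant
    collector.alerting.homeAssistant.enable = true;
  };
}
```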
Auto-Detection
The module automatically detects and enables exporters based on host role:
- Caddy exporter: Enabled when server.proxy.virtualHosts is non-empty
- PostgreSQL exporter: Enabled on the IO primary host when postgres databases are configured
- Redis exporter: Enabled on the IO primary host when redis instances are configured
- Collector services: Enabled only on the monitoring primary host
Secrets
The monitoring module requires the following secrets in hosts/server/nixmon/secrets.yaml:
MONITORING:
GRAFANA:
SECRET_KEY: <random-secret-key>
OAUTH_SECRET: <kanidm-oauth2-secret>
HOME_ASSISTANT:
WEBHOOK_URL: <ha-webhook-url>
NEXTCLOUD_TALK:
WEBHOOK_URL: <nc-talk-webhook-url>
PROXMOX:
USER: <proxmox-user-at-realm>
TOKEN_ID: <proxmox-token-name>
TOKEN_SECRET: <proxmox-token-secret>
Generating Secrets
Generate the Grafana secret key:
cat /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 48
The MONITORING/GRAFANA/OAUTH_SECRET must match the value in hosts/server/nixcloud/secrets.yaml
under KANIDM/OAUTH2/GRAFANA_SECRET (the Kanidm provisioning side).
Caddy Virtual Hosts
The module configures three virtual hosts on nixmon:
| Service | Subdomain | Access |
|---|---|---|
| Grafana | grafana.<domain> | Public |
| Prometheus | prometheus.<domain> | LAN |
| Loki | loki.<domain> | LAN |
These are defined in hosts/server/nixmon/default.nix and collected by the IO
primary host’s Caddy configuration.
Alert Rules
The following alerts are configured by default:
| Alert | Condition | Severity |
|---|---|---|
| HostDown | up{job="node"} == 0 for 2 minutes | Critical |
| DiskSpaceCritical | Root filesystem < 10% free for 5 minutes | Critical |
| HighCPUUsage | CPU usage > 90% for 5 minutes | Warning |
| HighMemoryUsage | Memory usage > 90% for 5 minutes | Warning |
| ServiceDown | up{job!="node"} == 0 for 2 minutes | Critical |
Alerts are routed to:
- Home Assistant: All critical and warning alerts via webhook (requires collector.alerting.homeAssistant.enable = true)
- Nextcloud Talk: Critical alerts only via webhook (requires collector.alerting.nextcloudTalk.enable = true)
Module Structure
modules/nixos/server/monitoring/
├── default.nix # Entry point, imports sub-modules
├── options.nix # All server.monitoring.* options
├── collector/
│ ├── default.nix # Imports collector sub-modules
│ ├── prometheus.nix # Prometheus server + scrape targets
│ ├── loki.nix # Loki server + storage config
│ ├── grafana.nix # Grafana + Kanidm OAuth2
│ ├── alerting.nix # Alertmanager + alert rules
│ └── dashboards.nix # Dashboard provisioning
├── exporters/
│ ├── default.nix # Imports exporter sub-modules
│ ├── node.nix # node_exporter
│ ├── caddy.nix # Caddy metrics
│ ├── postgres.nix # PostgreSQL exporter
│ └── redis.nix # Redis exporter
├── logs/
│ └── promtail.nix # Promtail log shipping
└── integrations/
└── proxmox.nix # PVE exporter for Proxmox API
Troubleshooting
Checking Service Status
On the monitoring host (nixmon):
systemctl status prometheus.service
systemctl status loki.service
systemctl status grafana.service
systemctl status prometheus-alertmanager.service
systemctl status prometheus-pve-exporter.service
On any server:
systemctl status prometheus-node-exporter.service
systemctl status promtail.service
Verifying Metrics Collection
Check Prometheus targets are up:
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {instance: .labels.instance, health: .health}'
Verifying Log Collection
Check Promtail is shipping logs:
journalctl -u promtail.service -f
Query Loki directly:
curl -s 'http://localhost:3100/loki/api/v1/labels' | jq
Common Issues
Grafana OAuth login fails:
- Verify MONITORING/GRAFANA/OAUTH_SECRET in nixmon matches KANIDM/OAUTH2/GRAFANA_SECRET in nixcloud
- Check that Kanidm provisioning has the grafana OAuth2 client configured
- Verify DNS resolves auth.<domain> correctly
Prometheus targets showing as down:
- Check firewall rules allow traffic on exporter ports from the monitoring host
- Verify the exporter service is running on the target host
- Check network connectivity between nixmon and the target host
Proxmox metrics missing:
- Verify proxmox/token_id and proxmox/token_secret are valid
- Check the PVE API is accessible from nixmon: curl -k https://pve.<domain>/api2/json
- Review PVE exporter logs: journalctl -u prometheus-pve-exporter.service
Creating New Users
To add a new user configuration:
1. Create User Directory
mkdir -p home/newuser
2. Create User Configuration Files
Create host-specific configurations in home/newuser/{hostname}.nix:
{ pkgs, lib, ... }:
{
imports = [
# Import shared configurations
./features/cli # Common CLI tools
./features/desktop/common # Desktop environment basics
];
# User-specific configuration
home = {
username = "newuser";
homeDirectory = "/home/newuser";
stateVersion = "25.05";
};
# Add user-specific packages and configuration
programs = {
git = {
userName = "Your Name";
userEmail = "your.email@domain.com";
};
};
}
Create feature modules in home/newuser/features/:
mkdir -p home/newuser/features/{cli,desktop,development}
3. Link User to Hosts
The auto-discovery system will automatically link users to hosts if:
- A file home/{username}/{hostname}.nix exists
- The hostname matches an existing host configuration
4. Test User Configuration
# Build home-manager configuration
home-manager build --flake .#newuser@hostname
# Switch to new configuration
home-manager switch --flake .#newuser@hostname
Creating New Hosts
To add a new host to your configuration:
1. Create Host Directory Structure
# For a new desktop host named "mydesktop"
mkdir -p hosts/desktop/mydesktop
# For a new server host named "myserver"
mkdir -p hosts/server/myserver
# For a new laptop host named "mylaptop"
mkdir -p hosts/laptop/mylaptop
2. Create Required Configuration Files
Create hosts/{device-type}/{hostname}/default.nix:
{ self, pkgs, ... }:
{
imports = [
# Hardware configuration (required)
./hardware.nix
# Optional: device-specific modules
# "${self}/hosts/shared/optional/containers.nix"
# "${self}/modules/nixos/custom-module.nix"
];
# Host-specific configuration
host = {
device.isHeadless = false; # Set to true for servers
};
# Add your system configuration here
# networking.hostName is automatically set from directory name
}
Create hosts/{device-type}/{hostname}/hardware.nix:
{ inputs, ... }:
{
imports = [
# Include relevant hardware modules
inputs.nixos-hardware.nixosModules.common-cpu-amd
inputs.nixos-hardware.nixosModules.common-pc-ssd
# For laptops, also include:
# inputs.nixos-hardware.nixosModules.common-pc-laptop
];
# Boot configuration
boot.loader = {
systemd-boot.enable = true;
efi.canTouchEfiVariables = true;
};
# Filesystem configuration (use disko for declarative disk setup)
fileSystems."/" = {
device = "/dev/disk/by-label/nixos";
fsType = "ext4";
};
# Add hardware-specific configuration
}
3. Add Hardware Acceleration (Optional)
If your host supports hardware acceleration, add it to the acceleration lists in flake.nix:
accelerationHosts = {
cuda = [
"your-new-host" # Add here for CUDA support
];
rocm = [
"your-amd-host" # Add here for ROCm support
];
};
4. Build and Test
# Build the configuration (don't switch yet)
sudo nixos-rebuild build --flake .#your-new-host
# Test the configuration
sudo nixos-rebuild test --flake .#your-new-host
# Switch to the new configuration
sudo nixos-rebuild switch --flake .#your-new-host
Using a Nix Package or NixOS Module from a Separate Fork of Nixpkgs
This guide will show you how to use a Nix package or NixOS module from a separate fork of nixpkgs.
Step 1: Define the Forked Repository
In your Nix file, define the forked repository using the fetchzip function:
nixpkgs.overlays = [
(self: super: {
<your-package> = (import
(super.fetchzip (
let owner = "<owner>"; branch = "<branch>"; in {
url = "https://github.com/${owner}/nixpkgs/archive/${branch}.tar.gz";
# Change to 52 zeros when archive needs to be redownloaded.
sha256 = "<sha256>";
}
))
{ overlays = [ ]; config = super.config; }).<your-package>;
})
];
In this example, replace <your-package>, <owner>, <branch>, and <sha256> with the actual values from the forked repository.
Step 2: Use Packages or Modules from the Forked Repository
Now you can use packages or modules from the forked repository in your Nix expressions. For example, if you want to use a package from the forked repository, you can refer to it using the <your-package> attribute. Here’s an example:
{
environment.systemPackages = with pkgs; [
<your-package>
];
}
In this example, replace <your-package> with the actual name of the package you want to use.
Declarative Gnome Dconf
Description
When changing GNOME or GNOME extension settings, it is recommended to use dconf2nix and cherry pick its output. This allows for easy configuration using the GUI, but requires copying the settings back into the respective dconf settings in home-manager to save them.
DConf Locations
The locations for saving DConf settings are:
- Base.nix for standard GNOME DConf settings
- Extensions.nix for extension DConf settings
- Per-user settings, which should be saved as home/${username}/desktop/gnome.nix
Getting the Output
dconf2nix is installed as part of this flake's dev shell.
Running the following will dump the current dconf settings into a file so you can cherry-pick your changes:
dconf dump / | dconf2nix > dconf.nix
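A cherry-picked fragment of that output, pasted into the appropriate home-manager file, might look like this (the gsettings path and keys are illustrative):

```nix
{
  dconf.settings = {
    "org/gnome/desktop/interface" = {
      color-scheme = "prefer-dark";
      enable-hot-corners = false;
    };
  };
}
```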
Modules Overview
Purpose
This section provides an overview of the custom NixOS and Home-Manager modules defined in this repository. These modules allow for modular and reusable configurations across different hosts and users.
Entry Points
- modules/nixos/: Contains NixOS-specific modules.
- modules/flake/: Flake-level modules for cross-host configuration.
- modules/home-manager/: Contains Home-Manager-specific modules.
Key Options/Knobs
Modules in this repository often expose configuration options under the device or custom service namespaces. Refer to the specific module documentation for detailed options.
Common Workflows
- Enabling a Module: Set services.<name>.enable = true; or the relevant enable option in your host or home configuration.
- Configuring a Module: Use the options defined by the module to customize its behavior.
NixOS Services
This section documents the custom NixOS service modules available in this configuration. These modules provide specialized integrations and monitoring capabilities.
Huntress
Managed EDR (Endpoint Detection and Response) platform that protects systems by detecting malicious footholds used by attackers.
- Entry point: modules/nixos/services/huntress.nix
- Upstream: Huntress Managed EDR
Special Options
- services.huntress.accountKeyFile: Path to a file containing the Huntress account key.
- services.huntress.organisationKeyFile: Path to a file containing the Huntress organisation key.
Usage Example
{ config, ... }: {
services.huntress = {
enable = true;
accountKeyFile = config.sops.secrets.huntress_account_key.path;
organisationKeyFile = config.sops.secrets.huntress_org_key.path;
};
}
Operational Notes
The agent configuration is generated at /etc/huntress/agent_config.yaml during the service’s preStart phase. It merges the provided account and organisation keys using yaml-merge. The keys are securely loaded into the service using systemd LoadCredential.
MCPO (Model Context Protocol Orchestrator)
Orchestrates Model Context Protocol (MCP) servers, providing a centralized way to manage and expose multiple MCP servers.
- Entry point: modules/nixos/services/mcpo.nix
- Upstream: MCPO GitHub Repository
Special Options
- services.mcpo.configuration: An attribute set defining the MCP servers to orchestrate.
- services.mcpo.apiTokenFile: Optional path to a file containing an API token for the service.
- services.mcpo.extraPackages: Additional packages to include in the service's PATH.
- services.mcpo.helpers: Read-only attribute set of helper functions for common server types (e.g., npxServer, uvxServer).
Usage Example
{ config, ... }: {
services.mcpo = {
enable = true;
configuration = {
everything = config.services.mcpo.helpers.npxServer "@modelcontextprotocol/server-everything";
};
};
}
Operational Notes
MCPO runs as a DynamicUser with a state directory at /var/lib/mcpo. The configuration is rendered via sops.templates and loaded into the service via systemd credentials. The service’s PATH includes bash, nodejs, and uv by default to support various MCP server types.
Metrics & Hacompanion
Comprehensive metrics collection and integration with Home Assistant via hacompanion.
- Entry point: modules/nixos/services/metrics.nix
- Upstream: Hacompanion GitHub Repository
Special Options
- services.metrics.hacompanion.enable: Enable the Home Assistant Companion service.
- services.metrics.hacompanion.sensor.<name>.enable: Enable specific built-in sensors (e.g., cpu_temp, memory, uptime).
- services.metrics.hacompanion.script: Define custom scripts to expose as sensors or switches in Home Assistant.
- services.metrics.hacompanion.storage: Configure monitoring for storage devices and ZFS pools.
- services.metrics.upgradeStatus.enable: Enable a specialized sensor for tracking NixOS upgrade status.
Usage Example
{ ... }: {
services.metrics.hacompanion = {
enable = true;
sensor.cpu_temp.enable = true;
sensor.memory.enable = true;
storage.main = {
name = "Main OS Drive";
sensors.used = true;
};
};
}
Operational Notes
Hacompanion uses a generated TOML configuration file and securely loads the Home Assistant API token from sops.secrets.HACOMPANION_ENV. The upgradeStatus feature can also integrate with Uptime Kuma to provide heartbeat notifications for successful system upgrades.
Tailscale
Extensions to the standard NixOS Tailscale module, providing easier tag management.
- Entry point: modules/nixos/services/tailscale.nix
- Upstream: Tailscale Tags Documentation
Special Options
- services.tailscale.tags: A list of tags to advertise for this device. These tags are automatically prefixed with tag: when passed to tailscale up.
Usage Example
{ ... }: {
services.tailscale = {
enable = true;
tags = [ "server" "internal" ];
};
}
Operational Notes
This module simplifies the application of Tailscale tags by automatically constructing the --advertise-tags flag. Ensure that the device has the necessary permissions in your Tailscale ACLs to apply the requested tags.
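The tag-to-flag transformation can be illustrated in shell (a sketch of what the module does in Nix; the actual implementation differs):

```shell
# Prefix each tag with `tag:` and join with commas, yielding the value
# passed to `tailscale up --advertise-tags=...`.
advertise_tags() {
  printf 'tag:%s,' "$@" | sed 's/,$//'
}

echo "--advertise-tags=$(advertise_tags server internal)"
```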
Desktop Module
The Desktop module provides a base configuration for desktop environments in the flake. It is a small aggregator typically imported by desktop hosts to ensure a common baseline for graphical environments.
Purpose
The primary purpose of this module is to bundle common desktop-related services and configurations that should be present on all workstations, such as display managers and remote access tools.
Entry Point
modules/nixos/desktop/default.nix
Special Options and Behaviors
This module does not expose its own options. Instead, it serves as a central point for importing other shared desktop components:
- Display Manager: Configured via ../shared/display-manager.nix.
- Remote Access: Configured via ../shared/remote.nix.
Example Usage
This module is a base component for desktop hosts. It must be manually imported in the host’s configuration.
# hosts/desktop/my-workstation/default.nix
{
imports = [
../../../modules/nixos/desktop/default.nix
];
}
Operational Notes
- This module ensures that all desktop hosts have a consistent baseline for graphical interfaces and remote management.
- If you need to disable a specific component imported by this module, you may need to use lib.mkForce or target the specific component's enable option if available.
Server Module
The Server module provides a cluster-aware configuration for server hosts in the flake. It must be explicitly enabled using the server.enable option.
Purpose
The primary purpose of this module is to establish a shared environment for servers in the cluster, defining a coordinator node (ioPrimaryHost) and providing helper functions for inter-server communication and attribute collection.
Entry Point
modules/nixos/server/default.nix
Special Options and Behaviors
The main configuration entry point is server.enable. Once enabled, it sets up the server-specific baseline:
- Journald Persistence: Configured with a 14-day retention period and storage limits.
- Pre-Switch Checks: Runs dix on system activation to report changes between generations.
- server.ioPrimaryHost: Specifies the hostname of the coordinator host for the cluster. This host runs primary database instances, the reverse proxy, and storage master nodes. This option is typically set on the coordinator host and used by other servers in the cluster for synchronization.
Example Usage
To use the server module, it must be explicitly enabled in the host configuration.
# hosts/server/nixmon/default.nix
{
server = {
enable = true;
# Set to the hostname of the cluster's coordinator node
ioPrimaryHost = "nixio";
};
}
Operational Notes
- This module provides many helper functions (like getAllAttrsFunc, collectAllAttrs, etc.) that are used by submodules to gather configuration data from other servers in the cluster.
- These helpers allow for dynamic configuration based on the state of other cluster nodes, such as building a global dashboard or a reverse proxy configuration.
- The ioPrimaryHost is a critical component of the cluster, as many services (like Dashy or MinIO) rely on it as the central point of coordination.
Server Dashboard Module
The Server Dashboard module provides an integrated dashboard for monitoring and accessing services within the server cluster.
Purpose
The dashboard module integrates with Dashy and collects dashboard sections from all servers in the cluster to display on the ioPrimaryHost.
Entry Point
modules/nixos/server/dashboard.nix
Special Options and Behaviors
The module provides options under server.dashboard to define the section for each server:
- server.dashboard.name: The name of the section in the dashboard, defaulting to the capitalized hostname.
- server.dashboard.icon: An optional icon for the section.
- server.dashboard.items: A set of dashboard items (with title, icon, and url) to be displayed.
- server.dashboard.displayData: Arbitrary JSON data to be passed to the dashboard configuration.
Example Usage
Configure the dashboard section for a server:
# hosts/server/nixserv/default.nix
{
server.dashboard = {
name = "Services";
icon = "fas fa-server";
items = {
"Grafana" = {
title = "Grafana Dashboard";
icon = "fas fa-chart-line";
url = "https://grafana.example.com";
};
};
};
}
Operational Notes
- This module uses getAllAttrsFunc to gather server.dashboard configurations from all servers in the cluster.
- The aggregated configuration is only applied to the ioPrimaryHost, which runs the primary Dashy instance.
- This allows each server to define its own dashboard items, which are then automatically collected and displayed on a single unified dashboard.
Server Network Module
The Server Network module provides a declarative way to manage network configurations and firewall rules across the server cluster.
Purpose
The network module coordinates network subnet definitions and firewall rules, allowing for centralized configuration of subnets and automatic propagation of these settings to other servers in the cluster.
Entry Point
modules/nixos/server/network.nix
Special Options and Behaviors
The main configuration options are under server.network:
- server.network.subnets: A list of subnet definitions, each with:
  - dns: The DNS server for the subnet.
  - domain: The domain name for the subnet.
  - ipv4, ipv6: Configuration options (like CIDR and ARPA) for the subnet's IP range.
- server.network.openPortsForSubnet: Defines TCP and UDP ports to be opened on the firewall for each defined subnet.
Example Usage
Configure a subnet and open ports on a server:
# hosts/server/nixio/default.nix
{
server.network = {
subnets = [
{
dns = "192.168.1.1";
domain = "lan.example.com";
ipv4.cidr = "192.168.1.0/24";
}
];
openPortsForSubnet = {
tcp = [ 80 443 ];
};
};
}
Operational Notes
- This module uses `getIOPrimaryHostAttr` to fetch the `server.network.subnets` configuration from the `ioPrimaryHost`. This ensures that all servers in the cluster are aware of the network structure defined on the coordinator host.
- The module automatically generates `iptables` and `ip6tables` rules for the specified ports, allowing traffic only from the defined subnets.
- These rules are added to the `nixos-fw` chain and are managed through the `networking.firewall.extraCommands` and `networking.firewall.extraStopCommands` options.
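For the subnet example above, the generated rules take roughly the following shape (illustrative only; the exact commands are emitted by the module into the firewall options):

```shell
iptables -A nixos-fw -p tcp -s 192.168.1.0/24 --dport 80 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -s 192.168.1.0/24 --dport 443 -j nixos-fw-accept
```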
Server Distributed Builds Module
The Server Distributed Builds module provides a declarative way to manage distributed builds across the server cluster.
Purpose
The distributed builds module allows for distributed building of Nix derivations using remote build machines, providing a coordinator host and several build machines to distribute the build load.
Entry Point
modules/nixos/server/distributed-builds.nix
Special Options and Behaviors
The module provides options under server.distributedBuilder:
- `server.distributedBuilder.builderUser`: The user to use when connecting to remote build daemons (default: `builder`).
- `server.distributedBuilder.builders`: A list of hostnames of remote build daemons to connect to for distributed builds.
Example Usage
Configure a build server and a host to use it for distributed builds:
# hosts/server/nixserv/default.nix (build server)
{
server.distributedBuilder = {
builders = [ "nixserv" ];
};
}
# hosts/server/nixdev/default.nix (host using build server)
{
server.distributedBuilder = {
builders = [ "nixserv" ];
};
}
Operational Notes
- This module coordinates the creation of a system user (`builder`) on the build server and adds the necessary SSH keys to allow other hosts to connect.
- On the hosts using the build server, the module automatically configures `nix.distributedBuilds` and sets up the build machines using `nix.buildMachines`.
- The `builder` user is automatically added to `nix.settings.trusted-users` on the build server.
- The module uses `self.nixosConfigurations` to dynamically discover the system architecture of the build machines.
- For more information on distributed builds in Nix, see the NixOS Manual.
Database Submodule
The database submodule provides a managed interface for PostgreSQL and Redis across the server infrastructure. It centralizes database configuration on the primary database host (config.server.ioPrimaryHost) while allowing client services to declaratively request databases.
The module is implemented across several files in modules/nixos/server/database/:
- `default.nix`: Core options and connection management.
- `postgres.nix`: PostgreSQL-specific provisioning and secrets.
- `redis.nix`: Redis-specific ID mappings and security.
- `guardian.nix`: Lifecycle synchronization and the IO Guardian.
Purpose
This submodule automates:
- Provisioning of PostgreSQL databases and roles.
- Management of Redis database IDs via static mappings.
- Synchronization of service lifecycle with database availability using the IO Guardian.
- Automated password handling via SOPS secrets.
Entry Points
- `server.database.postgres`: Manage PostgreSQL databases and users (in `postgres.nix`).
- `server.database.redis`: Manage Redis database instances (in `redis.nix`).
- `server.database.host`: Centralized host address for database connections (in `default.nix`).
- `server.database.dependentServices`: Lifecycle coordination for dependent services (in `guardian.nix`).
Key Options and Behaviors
Connection Management
The server.database.host option determines how services connect to databases. On the primary database host (ioPrimaryHost), it defaults to localhost. On all other hosts, it defaults to the value of config.server.ioPrimaryHost.
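That default could be expressed as the following sketch (assumed shape for illustration, not the module's literal source):

```nix
{ config, lib, ... }: {
  server.database.host = lib.mkDefault (
    if config.networking.hostName == config.server.ioPrimaryHost
    then "localhost"
    else config.server.ioPrimaryHost
  );
}
```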
PostgreSQL Management
When a service defines a database in server.database.postgres:
- Automatic Provisioning: The IO Host automatically creates the database and a role with the same name.
- Password Management: A SOPS secret is expected at `POSTGRES/<DB_NAME_UPPER>_PASSWORD`. Hyphens (`-`) in database names are replaced with underscores (`_`) when constructing the secret path. The system automatically sets this password for the role during the `postgresql-setup` service.
- Aggregated Configuration: The IO Host collects all PostgreSQL requirements from across the entire flake to ensure all necessary extensions and initial scripts are loaded.
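The naming rule can be sketched as a small function (a hypothetical helper for illustration, not code from the repository):

```python
def postgres_secret_path(db_name: str) -> str:
    """Map a database name to its expected SOPS secret path:
    hyphens become underscores, then the name is upper-cased."""
    return f"POSTGRES/{db_name.replace('-', '_').upper()}_PASSWORD"

print(postgres_secret_path("my-app"))  # POSTGRES/MY_APP_PASSWORD
```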
Redis Management
Redis management uses a similar aggregation pattern:
- Database IDs: Because Redis uses numeric IDs (0–15), the system uses a static mapping file (`redis-mappings.json`) on the IO Host to ensure consistent ID assignment across the fleet.
- Password Management: A shared password for the primary Redis instance is managed via `REDIS/PASSWORD` in SOPS.
- Tooling: Use the `update-redis-mappings` command on the IO Host to update the mapping file when adding new Redis clients.
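A mapping file of this kind might look like the following (the actual schema of `redis-mappings.json` is not documented here, so this shape is an assumption):

```json
{
  "myapp": 0,
  "another-service": 1
}
```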
Per-Module Examples
Connection Configuration (default.nix)
You can override the default database host (e.g., if using a custom tunnel or local proxy):
{
server.database.host = "10.0.0.50";
}
PostgreSQL Example (postgres.nix)
Requesting a PostgreSQL database for a service:
{
server.database.postgres."my-app" = {
# database and user will be 'my-app'
# Password expected at sops secret: POSTGRES/MY_APP_PASSWORD
};
}
Redis Example (redis.nix)
Requesting a Redis database:
{
server.database.redis.myapp = {
# prefix will be 'myapp'
# database_id is assigned from redis-mappings.json
};
}
Guardian Dependency Example (guardian.nix)
Manually adding services to the database lifecycle coordination:
{
server.database.dependentServices = [
"custom-backend.service"
"worker-node" # .service suffix is added automatically
];
}
Operational Notes
IO Guardian Coordination
Lifecycle management is handled by the IO Guardian.
- On Clients: Services that use these database modules are automatically bound to `io-databases.target`. This ensures they only start when the remote databases are reachable and stop before the databases go offline.
- On the IO Primary Host: The `io-database-coordinator` service manages the `drain` and `undrain` signals sent to clients during system startup and shutdown.
IO Primary Host Behavior
The host designated as the IO Primary Host (config.server.ioPrimaryHost) is responsible for running the actual database engines. It aggregates all database requirements from every host in the flake and applies them locally.
Storage
The storage module manages persistent storage abstractions, specifically for mounting S3-compatible buckets from MinIO as local filesystems.
Purpose
This submodule provides a declarative way to mount remote storage buckets. It handles the underlying FUSE configuration and credential mapping automatically.
Key Options and Behaviors
Bucket Mounts
The bucketMounts option uses s3fs-fuse to mount buckets from https://minio.racci.dev.
- Credential Management: It automatically looks for sops secrets with the pattern `S3FS_AUTH/<NAME_IN_UPPERCASE>`. These secrets should contain the credentials in the `ACCESS_KEY_ID:SECRET_ACCESS_KEY` format.
- Mount Points: Buckets are mounted at `/mnt/buckets/<bucket-name>` unless a different `mountLocation` is specified.
- Ownership and Permissions: You can control the mount ownership using `uid` and `gid`. The `umask` option (defaulting to `022`) controls the default file and directory permissions.
Example
The following example mounts a “media” bucket and sets specific ownership.
{
server.storage.bucketMounts.media = {
uid = 1000;
gid = 1000;
umask = 007;
};
}
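Conceptually, this is similar to running s3fs by hand (illustrative command; the secret path and exact option set are assumptions, and the module manages all of this for you):

```shell
s3fs media /mnt/buckets/media \
  -o url=https://minio.racci.dev \
  -o passwd_file=/run/secrets/s3fs-media \
  -o uid=1000,gid=1000,umask=007,_netdev,allow_other
```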
Operational Notes
- s3fs-fuse: This module uses the `s3fs` package. It relies on FUSE, so it requires `programs.fuse.userAllowOther = true`, which the module enables automatically when mounts are defined.
- Network Dependency: Mounts use the `_netdev` option to ensure they are only attempted after the network is up.
- Credential Format: Ensure that your sops secrets provide the exact string format required by s3fs.
- MinIO Endpoint: The module is currently configured to use `https://minio.racci.dev`.
Proxy Submodule
The Proxy submodule provides a unified interface for exposing internal services through Caddy. It handles virtual host configuration, automatic SSL via ACME, OAuth2 authentication with Kanidm, and public exposure through Cloudflared tunnels.
Purpose
This module abstracts the complexity of reverse proxying by allowing services to define their proxy requirements within their own module configuration. It automatically coordinates between backend hosts and the primary IO host to ensure ports are open and traffic is correctly routed.
Entry Points
The primary configuration is managed through:
- `server.proxy.domain`: The root domain for all services (e.g., `example.com`).
- `server.proxy.virtualHosts`: An attribute set of service configurations.
Key Options and Behaviors
Global Options
- `server.proxy.domain`: Defines the base domain. All virtual hosts default to `<name>.<domain>` unless overridden.
- `server.proxy.kanidmContexts`: Defines shared OAuth2 configurations that can be reused across multiple virtual hosts.
  - `scopes`: Default scopes are `["openid" "email" "profile" "groups"]`.
  - `tokenLifetime`: Default lifetime is `3600` seconds.
Virtual Host Options
- `aliases`: A list of additional hostnames (relative to `server.proxy.domain`) that route to this service.
- `public`: If true, the service is added to the Cloudflared tunnel ingress for public access.
- `ports`: List of ports to open on the backend host to allow traffic from the IO primary host.
- `kanidm`: Configures OAuth2 protection. If enabled, the module generates Caddy security blocks and handles Kanidm client provisioning.
  - `bypassPaths`: List of path patterns (e.g., `/api/*`) that should bypass authentication.
  - `allowGroups`: List of groups allowed access. Note: this cannot be empty if Kanidm is enabled.
- `l4`: Configures Layer 4 forwarding using the Caddy L4 plugin. This opens both TCP and UDP ports on the IO primary host.
- `extraConfig`: Injected directly into the Caddy `handle` block. `localhost` references are automatically replaced with the backend host’s address.
Per-Module Examples
default.nix - Logic and Helpers
This file contains the internal logic for resolving OAuth contexts and mapping local addresses to backend hostnames.
# Example: How contextToEnvPrefix transforms names for environment variables
contextToEnvPrefix "my-service" # Returns "MY_SERVICE"
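The same transformation, re-implemented in Python for illustration (hypothetical, not the repository's code):

```python
def context_to_env_prefix(name: str) -> str:
    """Upper-case a context name and replace hyphens so it is
    usable as an environment-variable prefix."""
    return name.replace("-", "_").upper()

print(context_to_env_prefix("my-service"))  # MY_SERVICE
```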
options.nix - Option Definitions
Defines the structure of virtual hosts and shared contexts.
server.proxy.kanidmContexts.admin-apps = {
authDomain = "auth.internal.example.com";
allowGroups = [ "admins@auth.example.com" ];
};
server.proxy.virtualHosts.grafana = {
public = true;
kanidm = {
context = "admin-apps";
allowGroups = [ "grafana-users@auth.example.com" ];
bypassPaths = [ "/health" ];
};
extraConfig = "reverse_proxy localhost:3000";
};
config.nix - Caddy Integration
Handles the generation of services.caddy.virtualHosts and ACME certificate requests.
# Generated Caddy block for a vhost with Kanidm
grafana.example.com {
import default
import public
@bypass_auth_grafana path /health
handle @bypass_auth_grafana {
reverse_proxy 10.0.0.5:3000
}
route /auth/* {
authenticate with grafana_portal
}
handle {
authorize with grafana_policy
reverse_proxy 10.0.0.5:3000
}
}
kanidm.nix - Authentication Security
Generates the Caddy security block, including identity providers, portals, and authorization policies.
security {
oauth identity provider admin-apps {
realm admin-apps
client_id "admin-apps"
client_secret {env.OAUTH_ADMIN_APPS_CLIENT_SECRET}
metadata_url https://auth.internal.example.com/oauth2/openid/admin-apps/.well-known/openid-configuration
}
# ... portals and policies
}
extensions.nix - System Integration
Connects the proxy to the dashboard, Cloudflared tunnels, and automates Kanidm client provisioning.
# Automatic Kanidm provisioning based on proxy config
services.kanidm.provision.systems.oauth2.admin-apps = {
displayName = "Admin Apps";
originUrl = [ "https://grafana.example.com/auth/oauth2/admin-apps/authorization-code-callback" ];
# ...
};
Operational Notes
Caddy Integration
The module assumes the existence of a default Caddy snippet for common headers and security settings. When public is enabled, it also expects a public snippet.
Dashboard Integration
Services defined in server.proxy.virtualHosts are automatically added to the server dashboard with default titles and icons derived from the host name.
Kanidm OAuth2 Context
Authentication requires specific secrets per context, managed via sops-nix:
- `KANIDM/OAUTH2/<UPPER_CONTEXT>_SECRET`: Provisioning secret for Kanidm systems.
- `OAUTH_<PREFIX>_CLIENT_SECRET`: The OAuth2 client secret for Caddy.
- `<PREFIX>_SHARED_KEY`: A shared key used by Caddy to sign and verify authentication tokens.
These are automatically managed if Kanidm provisioning is enabled on the same host.
Layer 4 Forwarding
L4 forwarding uses the caddy.layer4 plugin. It is primarily used for non-HTTP traffic like database connections or SSH.
Public services are routed through the Cloudflared tunnel with ID 8d42e9b2-3814-45ea-bbb5-9056c8f017e2. Ensure this tunnel is correctly configured on the IO host.
Server SSH Module
The Server SSH module provides a rich interactive environment for root users upon login. It automatically transitions interactive root sessions into a dedicated development shell, ensuring consistent tooling and a powerful shell experience across server environments.
Purpose
The SSH submodule enhances administrative access by providing a session-only environment tailored for server management. It removes the need for manual setup of common tools and aliases by automatically entering a pre-configured nix-shell when a root user logs in interactively over SSH.
Key Options and Behaviors
Auto-entry Logic (ssh/default.nix)
The module modifies /etc/bashrc to detect interactive root logins via SSH. It evaluates several conditions before launching the session shell:
- User must be root (`EUID = 0`).
- Session must be via SSH (`SSH_CONNECTION` present).
- Session must be interactive (`stdin` is a TTY).
- No active session shell detected (`SSH_NIX_SHELL` unset).
- User has not opted out via `NIX_SKIP_SHELL`.
The module also configures OpenSSH to accept the NIX_SKIP_SHELL environment variable from clients, allowing remote users to bypass the auto-shell entry when necessary.
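The guard logic can be sketched as a Bash function (assumed shape; the real script in `/etc/bashrc` also execs the nix-shell session rather than just reporting):

```shell
# Returns 0 when the dedicated session shell should be entered.
should_enter_session_shell() {
  [ "$(id -u)" -eq 0 ] || return 1           # root only
  [ -n "${SSH_CONNECTION:-}" ] || return 1   # must be an SSH session
  [ -t 0 ] || return 1                       # stdin must be a TTY
  [ -z "${SSH_NIX_SHELL:-}" ] || return 1    # not already inside the shell
  [ -z "${NIX_SKIP_SHELL:-}" ] || return 1   # honour the opt-out variable
  return 0
}

if should_enter_session_shell; then
  echo "would enter session nix-shell"
else
  echo "staying in plain bash"
fi
```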
Session Environment (ssh/shell.nix)
The default session shell is a nix-shell environment containing:
- Modern Shells: Fish shell with Starship prompt, Zoxide navigation, and Carapace completions.
- Enhanced Tooling: Replacements for standard utilities such as `bat` (cat), `fd` (find), `ripgrep` (grep), and `procs` (ps).
- System Diagnostics: Tools like `btop`, `doggo`, `gping`, `inxi`, and `hyfetch`.
The shellHook in shell.nix starts an interactive Fish session and immediately exits the nix-shell wrapper once the Fish session concludes.
Per-Module Examples
Enabling the SSH Shell
Enable the auto-shell behavior in your host configuration:
{
server.sshShell.enable = true;
}
Customizing the Shell File
Override the shell definition file if you require a different set of tools:
{
server.sshShell.shellFile = ./my-custom-shell.nix;
}
Operational Notes
Opt-Out Behavior
If you need to log in as root without entering the specialized shell, set the NIX_SKIP_SHELL environment variable on your local machine before connecting:
NIX_SKIP_SHELL=1 ssh root@your-server
This is particularly useful for automated scripts or troubleshooting scenarios where the standard Bash environment is preferred.
Guard Mechanism
The auto-entry script uses the SSH_NIX_SHELL environment variable to prevent recursive shell entries. If nix-shell fails to start, the system falls back to the default shell and provides a warning message.
Flake Allocations
The flake allocations module defines cross-host configuration options at the flake level. Rather than configuring each NixOS system independently, allocations let you declare cluster-wide concerns — like which machines have GPUs, which server coordinates I/O, and which servers act as distributed builders — in a single place.
How It Works
The allocation system has three layers:
- Option Definitions (`modules/flake/allocations.nix`) — Declares the available allocation options.
- Configuration (`flake/nixos/flake-module.nix`) — Sets the actual values for those options.
- Apply Modules (`modules/flake/apply/`) — Propagates allocation values into each NixOS or Home-Manager configuration via `specialArgs`.
Data Flow
allocations.nix flake-module.nix apply/system.nix
┌──────────────┐ ┌──────────────────────┐ ┌───────────────────────┐
│ Define opts │──▶│ Set values │──▶│ Map to NixOS options │
│ (types, │ │ (which host has what)│ │ per system via │
│ defaults) │ │ │ │ specialArgs │
└──────────────┘ └──────────────────────┘ └───────────────────────┘
When mkSystem builds a NixOS configuration, it receives the allocations attribute set and passes it as a specialArgs argument. The apply module then conditionally maps those allocations to NixOS module options based on the host’s device type.
Options
allocations.accelerators
Maps hostnames to their available hardware accelerators (cuda, rocm). Used by the builder system to configure nixpkgs with the correct cudaSupport / rocmSupport flags per host.
allocations.accelerators = {
nixmi = [ "cuda" ];
nixai = [ ];
};
Hosts not listed default to no accelerators. The builder (lib/builders/default.nix) reads allocations.accelerators.${hostname} and sets the corresponding nixpkgs config flags.
allocations.hostTypes
Read-only attribute set mapping device types to their hostnames. Auto-populated from getHostsByType, which scans hosts/ directory structure.
# Automatically resolves to something like:
allocations.hostTypes = {
server = [ "nixio" "nixserv" "nixmon" ];
desktop = [ "nixmi" ];
};
allocations.server.ioPrimaryCoordinator
Designates a server as the primary I/O coordinator for the cluster. This is the host that runs primary database instances, the reverse proxy, and storage master nodes.
The type is constrained to an enum of server hostnames (automatically derived from hostTypes.server).
allocations.server.ioPrimaryCoordinator = "nixio";
This value flows through apply/system.nix into server.ioPrimaryHost on each server configuration.
allocations.server.distributedBuilders
List of servers that act as remote builders for distributed builds.
allocations.server.distributedBuilders = [ "nixserv" ];
Flows into server.distributedBuilder.builders on each server configuration.
Apply Modules
The apply modules (modules/flake/apply/) bridge flake-level allocations to per-system NixOS options.
apply/system.nix
Imported by mkSystem during system construction. Receives allocations and deviceType via specialArgs. For server-type hosts, it maps:
- `allocations.server.ioPrimaryCoordinator` → `server.ioPrimaryHost`
- `allocations.server.distributedBuilders` → `server.distributedBuilder.builders`
Uses optionalAttrs to only apply server-specific options when deviceType == "server", preventing errors on non-server systems.
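The server branch of the apply module can be sketched as follows (assumed shape, not the file's literal contents):

```nix
{ allocations, deviceType, lib, ... }: {
  config = lib.optionalAttrs (deviceType == "server") {
    server.ioPrimaryHost = allocations.server.ioPrimaryCoordinator;
    server.distributedBuilder.builders = allocations.server.distributedBuilders;
  };
}
```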
apply/home-manager.nix
Imported by the Home-Manager builder. Currently a no-op (mkMerge []) — exists as a placeholder for future home-manager-level allocations.
Source Files
| File | Role |
|---|---|
| `modules/flake/allocations.nix` | Option definitions |
| `modules/flake/apply/system.nix` | NixOS system apply |
| `modules/flake/apply/home-manager.nix` | Home-Manager apply (placeholder) |
| `flake/nixos/flake-module.nix` | Actual configuration values |
| `lib/builders/default.nix` | Builder that consumes allocations |
DIY & Making
This section documents the Home-Manager modules under purpose.diy, which provide tooling and configuration for hardware tinkering, 3D printing, and related maker activities.
Printing
The printing module installs 3D-printing software and wires up persistent storage so that settings survive reboots on impermanence-based systems.
- Entry point: `modules/home-manager/purpose/diy/printing.nix`
Options
purpose.diy.printing.enable
| Type | bool |
| Default | false |
Enables 3D-printing support. Installs OrcaSlicer and LycheeSlicer and registers their configuration directories for persistence.
Git Sync
The gitSync sub-module adds a long-running systemd user service that watches the OrcaSlicer profile directory and automatically creates a git commit every time a profile file is added, changed, or removed. This gives you a full revision history of your slicer settings with zero manual effort.
purpose.diy.printing.gitSync.enable
| Type | bool |
| Default | false |
Enable the OrcaSlicer git auto-commit watcher. Requires purpose.diy.printing.enable = true.
purpose.diy.printing.gitSync.repoPath
| Type | string |
| Default | "${config.home.homeDirectory}/.config/OrcaSlicer/user/default" |
Absolute path to the directory that will be managed as a git repository. The directory is initialised automatically the first time the watcher service starts, so it does not need to exist at activation time.
The default points at the standard OrcaSlicer per-user profile directory, which contains the filament/, process/, and machine/ sub-directories, so all profile types are tracked without any additional configuration.
Commit Message Convention
Commit messages are generated automatically based on the type of filesystem event and the location of the file within the repository:
| Event | Commit message format |
|---|---|
| File added / created | feat(<type>): added <name> |
| File modified | refactor(<type>): updated <name> |
| File deleted | chore(<type>): removed <name> |
Where:
- `<type>` is the name of the first directory component under the repo root (e.g. `filament`, `process`, `machine`). Files placed directly at the root level use the fallback type `config`.
- `<name>` is the filename stripped of its extension (e.g. a file named `Prusament_PLA.json` yields the name `Prusament_PLA`).
Examples:
feat(filament): added Prusament_PLA
refactor(process): updated Standard_0.2mm_Quality
chore(machine): removed Prusa_MK4S
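The convention can be sketched as a small function (a hypothetical re-implementation; the event names used as keys are illustrative, not the watcher's actual identifiers):

```python
import os

def commit_message(event: str, rel_path: str) -> str:
    """Build a conventional-commit message from a filesystem event and a
    path relative to the repository root."""
    parts = rel_path.split("/")
    kind = parts[0] if len(parts) > 1 else "config"  # fallback for root-level files
    name = os.path.splitext(os.path.basename(rel_path))[0]
    prefix, verb = {
        "created": ("feat", "added"),
        "modified": ("refactor", "updated"),
        "deleted": ("chore", "removed"),
    }[event]
    return f"{prefix}({kind}): {verb} {name}"

print(commit_message("created", "filament/Prusament_PLA.json"))
# feat(filament): added Prusament_PLA
```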
How It Works
- A systemd user service (`orca-slicer-git-sync.service`) is started at login and kept alive by systemd.
- The service uses `inotifywait` (from `inotify-tools`) in one-shot mode inside a loop to detect any filesystem event under the repo path (excluding the `.git` directory).
- After an event is received the watcher sleeps for 2 seconds to debounce rapid bursts of writes (e.g. when OrcaSlicer rewrites multiple files at once).
- All pending changes are then committed one file at a time, each with an individually crafted commit message.
- If the watched directory does not yet exist (e.g. OrcaSlicer has never been run), the service polls every 10 seconds until it appears, then initialises the repository and starts watching.
Usage Example
{ ... }: {
purpose.diy.enable = true;
purpose.diy.printing = {
enable = true;
gitSync = {
enable = true;
# Optional: use a custom path outside the OrcaSlicer config directory
# repoPath = "/home/alice/slicer-profiles";
};
};
}
Operational Notes
- The git repository is initialised with `git init` and an initial commit (`chore: initial commit`) the first time the service starts if no `.git` directory exists.
- The service is set to restart on failure (`Restart=on-failure`, `RestartSec=10`) so transient errors do not leave settings un-tracked.
- Because the watcher operates on the live OrcaSlicer profile directory, no separate mirroring or rsync step is needed.
Home-Manager: AI Editors & Assistants
This page documents the Home-Manager module at:
modules/home-manager/purpose/development/editors/ai/default.nix
It configures editor/agent tooling for AI-assisted development, centered around OpenCode and shared skill directories.
What this module sets up
When enabled, the module:
- Ensures `~/Projects/AIFS` exists at activation time.
- Adds useful global git ignores: `.workspace`, `.sisyphus`.
- Configures Zed to expose an OpenCode agent server (`opencode acp`).
- Enables and configures `programs.opencode` with:
  - plugins
  - Nix formatter integration
  - Nix LSP integrations (`nixd`, `nil`)
  - command permissions policy
  - local MCP server (`mcp-nixos` via `uvx`)
- Writes:
  - `~/.config/opencode/oh-my-opencode.json`
  - `~/.config/opencode/opencode-notifier.json`
- Registers AI skills under `~/.agents/skills/<name>` via `home.file`.
- Persists OpenCode state directories:
  - `.local/share/opencode`
  - `.local/state/opencode`
Options
purpose.development.editors.ai.enable
| Type | bool |
| Default | false |
Enable AI tools and assistant/editor integrations for the user profile.
purpose.development.editors.ai.includeDefaults
| Type | bool |
| Default | true |
Whether to include the module’s built-in skills and agents from:
- `modules/home-manager/purpose/development/editors/ai/skills`
- `modules/home-manager/purpose/development/editors/ai/agents`
Set to false for a minimal setup with only base OpenCode configuration.
purpose.development.editors.ai.skills
| Type | list of string |
| Default | [] |
Additional skill source paths to register globally under ~/.agents/skills.
Each entry should point to a skill directory (for example from a flake input or from this repository). The basename of each source path is used as the destination directory name.
Example:
- `"${inputs.my-skill-repo}/skills/my-skill"`
- `"${self}/skills/another-skill"`
Usage example
{ self, inputs, ... }: {
purpose.development.editors.ai = {
enable = true;
includeDefaults = true;
skills = [
"${inputs.my-skill-repo}/skills/my-skill"
"${self}/skills/another-skill"
];
};
}
Notes
- Skill links are generated under `~/.agents/skills/<basename>`.
- Default skills are discovered automatically from the module’s local `skills/` directory when `includeDefaults = true`.
- The module currently defines default agent discovery as well, but only skill link materialization is active in the `home.file` output.
Packages Overview
Purpose
This section documents the custom packages defined in this repository. These are packages that are either not available in nixpkgs or require custom builds.
Entry Points
- `pkgs/`: Contains the package definitions, typically organized by package name.
- `alvr-bin`: ALVR binaries packaged from the AppImage, enabling NVIDIA-accelerated streaming.
- `drive-stats`: Tool for monitoring and reporting drive statistics.
- `helpers`: Collection of helper scripts for configuration management.
- `huntress`: Integration for the Huntress security agent.
- `hypr-gamemode`: Script to optimize Hyprland performance for gaming.
- `io-guardian`: Database lifecycle management across hosts.
- `lidarr-plugins`: Lidarr built from the plugins branch.
- `list-ephemeral`: Utility to identify and list ephemeral filesystem entries.
- `lix-woodpecker`: Woodpecker CI runner.
- `mcp-sequential-thinking`: MCP server for step-by-step reasoning.
- `mcp-server-amazon`: MCP server for Amazon services interaction.
- `proton-mcp`: MCP server for ProtonMail.
- `monocoque`: Sim-racing dashboard and telemetry tool.
- `orca-slicer-zink`: OrcaSlicer configured to use the Zink Vulkan driver to work around NVIDIA rendering issues.
- `python`: Packages for Home Assistant Python components.
- `take-control-viewer`: Remote support viewer for N-able Take Control via Wine.
Key Options/Knobs
Custom packages may expose different build options depending on their derivation definition.
Common Workflows
- Adding a Package: Create a new directory in `pkgs/` with a `default.nix` file.
- Using a Package: Reference the package via `pkgs.<name>` if the `pkgs` overlay is active.
Overlays Overview
Purpose
Overlays allow us to extend or modify the standard nixpkgs collection. We use them to add our custom packages, apply patches, or override package versions.
Entry Points
- `overlays/`: Directory containing individual overlay definitions.
- `overlays/default.nix`: The main entry point for the overlays. It composes additions (from `pkgs/` and external inputs) and modifications (overrides for upstream packages).
Key Options/Knobs
Overlays themselves don’t typically have “knobs,” but they affect the available packages and their versions in the pkgs set.
Common Workflows
- Adding an Overlay: Create a new `.nix` file in the `overlays/` directory.
- Applying an Overlay: Overlays are typically applied in the `flake.nix` configuration for NixOS or Home-Manager.
Hosts Overview
Purpose
This section covers the configuration of individual host machines. This repository uses an automatic discovery system to manage hosts based on their device type.
Entry Points
- `hosts/`: Root directory for all host configurations.
- `hosts/desktop/`: Configurations for desktop systems.
- `hosts/laptop/`: Configurations for laptop systems.
- `hosts/server/`: Configurations for server systems.
- `hosts/shared/`: Shared configuration modules applied across multiple hosts.
- `hosts/secrets.yaml`: Root-level encrypted secrets for host configurations.
Key Options/Knobs
Host-specific configurations are found in hosts/{device-type}/{hostname}/default.nix. Global options shared across all hosts are in hosts/shared/global/.
Common Workflows
- Adding a New Host: Create a directory for the host in the appropriate device type category and add a `default.nix`.
- Modifying a Host: Update the `default.nix` or associated files in the host’s directory.
Decky Loader Lifecycle
When jovian.decky-loader.enable = true is set on any host that imports hosts/shared/optional/gaming.nix, Decky Loader is not started automatically at boot. Instead it is managed in lock-step with the Steam desktop application:
- `hosts/shared/optional/gaming.nix` — overrides the Jovian-provided `decky-loader.service` to remove it from `multi-user.target`, suppresses noisy CSS_Loader health-check log spam via `LogFilterPatterns`, and adds a polkit rule that permits any active local user session to start/stop the system service without a password prompt. All of this is behind a `lib.mkIf (config.jovian.decky-loader.enable or false)` guard so it is a no-op on machines without Jovian.
- `home/shared/features/games/decky-loader.nix` — defines a `decky-loader-steam-watch` systemd user service (active for the duration of the graphical session) that polls `~/.steam/steam.pid` every 3 seconds to detect Steam starting, then starts `decky-loader.service`, and uses `tail --pid` to block until Steam exits before stopping it again. The service is only enabled when `osConfig.jovian.decky-loader.enable` is true.
Log filtering
The CSS_Loader plugin health-checks Steam’s internal web interface (port 8080) every few seconds. When Steam is not running these produce continuous journal noise of the form:
[CSS_Loader] [FAIL] [css_browserhook.py:437] [Health Check] Cannot connect to host 127.0.0.1:8080 …
This is suppressed with the following LogFilterPatterns entry on the service (requires systemd ≥ 255):
LogFilterPatterns = "~\[CSS_Loader\].*\[Health Check\].*Cannot connect";
Lib Overview
Purpose
The lib directory contains custom Nix functions and builders used throughout the repository to simplify configuration and reduce duplication.
Entry Points
- `lib/`: Root directory for lib functions.
  - `attrsets.nix`: Functions for manipulating and merging attribute sets.
  - `default.nix`: Main entry point providing the `mine` and `builders` namespaces.
  - `files.nix`: Utilities for filesystem operations and path handling.
  - `hardware.nix`: Detection and configuration helpers for hardware acceleration and drivers.
  - `hypr.nix`: Specialized helpers for Hyprland window manager configurations.
  - `keys.nix`: Management of SSH, GPG, and other cryptographic keys.
  - `package.nix`: Custom package definitions and derivation helpers.
  - `persistence.nix`: Helpers for managing path persistence in ephemeral (tmpfs) environments.
  - `strings.nix`: String manipulation and formatting utilities.
- `lib/builders/`: Contains specialized builders for system and home configurations.
Key Options/Knobs
The functions in lib take various arguments depending on their purpose. Builders typically take parameters for hostnames, user names, and modules.
Common Workflows
- Using a Lib Function: Access functions via `outputs.lib.<functionName>` or by importing the relevant file.
- Creating a Builder: Add new builder logic to `lib/builders/`.