Twelve-Factor App

Twelve-Factor is a concise, language-agnostic manifesto for building cloud-native SaaS apps: a single codebase tracked in VCS, clear separation of config from code, stateless processes, backing services treated as attached resources, logs as event streams, and so on.

Below I’ll give a compact refresher of the twelve factors, then dive deep into how teams typically implement them in Go and Python — including concrete libraries, idiomatic patterns, short code examples, and case-study pointers.

The Twelve Factors in Brief

Official Twelve-Factor site: 12factor.net

  1. Codebase — One repo per app, many deploys. Use VCS (git). Best practice: keep infra/ops scripts in the same repo or an adjacent infra repo with clear build/release docs.

  2. Dependencies — Explicitly declare & isolate (lock files, vendoring). Use language-native package management + CI to reproduce builds.

  3. Config — Store config in environment variables (don’t bake secrets/config into code). Best practice: treat DATABASE_URL, API keys, feature flags as env vars or secrets; never commit .env to VCS.

  4. Backing services — Treat DBs, caches, message brokers as attached resources (config points to them). Use connection URLs and allow swapping services without code change.

  5. Build, release, run — Separate build (artifact) from release (combine artifact + config) from run (execute processes). Automate with CI/CD.

  6. Processes — Execute as one or more stateless processes; persist state in backing services.

  7. Port binding — Export services by binding to a port (HTTP servers bind to $PORT), making the app self-contained.

  8. Concurrency — Scale by running multiple process types; processes are first-class for horizontal scaling.

  9. Disposability — Fast startup and graceful shutdown (SIGTERM handling, readiness/liveness probes in K8s).

  10. Dev/prod parity — Keep dev, staging, prod similar: data, time, personnel, and dependencies.

  11. Logs — Treat logs as event streams; delegate capture/aggregation to the execution environment (e.g., structured logs → stdout/stderr → aggregator).

  12. Admin processes — Run one-off admin/maintenance tasks in the same environment as the app (migrations, REPLs).

Twelve-Factor is intentionally simple and prescriptive at a conceptual level; many teams adopt it as a baseline and extend with service meshes, secrets managers, telemetry/observability, and platform-specific patterns (Kubernetes, serverless, etc.).

Twelve-Factor was created from Heroku’s experience and remains the canonical guidance. In 2024/2025 the Twelve-Factor project was open-sourced by Heroku to let the community evolve it. Use the canonical site for the authoritative list and rationale. (Heroku blog on November 12, 2024)

When is Twelve-Factor not a perfect fit? — State-heavy apps (rich client state, low-latency on-device logic), some legacy monoliths, and certain edge/IoT apps may need adaptations. Also Twelve-Factor predates some patterns (service meshes, sidecars, complex mesh networking), so treat it as a baseline and adapt.

Core idea in Go

Go apps are often compiled binaries distributed as artifacts — this fits the Twelve-Factor build → release → run model well. The most common gap to fill is how to map environment variables to typed Go config cleanly and how to keep startup/shutdown safe.

Common libraries

Many community examples document building 12-factor Go apps with envconfig or viper; there are tutorials & blog posts showing patterns for config injection and using godotenv for local dev. (blog.gopheracademy.com)

  • os / os.Getenv — standard low-level reads; fine for tiny apps.
  • kelseyhightower/envconfig — struct-tag driven mapping of env vars into typed Go structs; small and idiomatic. (GitHub)
  • spf13/viper — flexible config library: supports env vars, files (JSON/TOML/YAML), remote sources (Consul), and integrates with Cobra CLI. Good for apps needing layered sources. (GitHub)
  • joho/godotenv — .env file loader for local development (don’t use .env in production; use platform secrets/config maps). Useful for onboarding developers. (GitHub)
  • spf13/cobra — CLI scaffolding (helps for admin processes / one-off commands). (GitHub)

Practical patterns (Go)

  • Single source of truth for config — Build a typed Config struct; populate it from env once at startup and pass it to components (avoid global mutable state).
  • Use a single connection URL — e.g. DATABASE_URL rather than multiple DB fields when possible.
  • Graceful shutdown — use context.Context, listen for SIGTERM/SIGINT, and allow ongoing requests to finish (with timeouts).
  • 12-factor dev convenience — use godotenv for dev only; CI and production ingest env from CI secrets managers and platform config maps.
  • Logs — write structured logs to stdout (e.g., log/slog, zerolog, or logrus) so the platform can collect them.

For example, Viper can read a config.yaml, allow env overrides via AutomaticEnv(), and bind flags using Cobra. It’s heavier but powerful when you need multiple sources of layered config or feature toggles.

Example: Go config using envconfig

config.go
package config

import "github.com/kelseyhightower/envconfig"

type Config struct {
    Port        string `envconfig:"PORT" default:"8080"`
    DatabaseURL string `envconfig:"DATABASE_URL" required:"true"`
    LogLevel    string `envconfig:"LOG_LEVEL" default:"info"`
}

func Load() (*Config, error) {
    var c Config
    if err := envconfig.Process("", &c); err != nil {
        return nil, err
    }
    return &c, nil
}

main.go calls config.Load() once at startup and passes the resulting *Config to the server. This approach is compact, explicit, and fits the Twelve-Factor config principle.

Core idea in Python

Python teams often use frameworks (Django, Flask, FastAPI). The 12-factor principle “config in env” maps to multiple popular libs that make reading/parsing env vars, validating types, and supporting .env files simple.

Key Python libraries

  • python-dotenv — load .env into environment for local dev (don’t commit .env). Straightforward helper. (PyPI)
  • pydantic / pydantic-settings (BaseSettings) — type-safe settings classes that load from env (and .env), with validation and defaults — excellent for FastAPI/modern codebases. Pydantic explicitly mentions compatibility with 12-factor patterns. (Pydantic)
  • dynaconf — layered configuration system inspired by the Twelve-Factor guide; supports multiple formats and environment layering, plus secrets/remote backends. Good when you need more flexibility than simple env mapping. (dynaconf.com)
  • environs, python-decouple, django-environ — lightweight helpers for env parsing and Django integration. (PyPI, django-environ.readthedocs.io)

Framework-specific integrations: django-environ for Django, pydantic for FastAPI, dynaconf for projects that need multi-format support. These are widely used across the community. See library docs for usage and patterns.

Practical patterns (Python)

  • Settings class — define a Settings (Pydantic BaseSettings) object and instantiate it at startup. This centralizes defaults, docs, and validation.
  • Use single-value connection strings — e.g. DATABASE_URL parsed by SQLAlchemy or dj-database-url to avoid many scattered env keys.
  • Local dev — use python-dotenv or pydantic’s built-in .env support to load dev env. CI/prod must pull secrets from a vault, not a committed file.
  • Logs — use structured logging to stdout/stderr; have centralized aggregators (ELK, SaaS).

Dynaconf supports [default], [development], [production] layers and will read env vars to override values — good for apps that want a single library to offer both file and env layering.
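For illustration, a Dynaconf settings.toml with those layers might look like the following (keys and values are illustrative; with environments enabled, Dynaconf selects the active layer and lets DYNACONF_-prefixed env vars override file values):

```toml
[default]
port = 8000
log_level = "info"

[development]
log_level = "debug"

[production]
log_level = "warning"
```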

Example: Pydantic BaseSettings

settings.py
from pydantic import AnyUrl
from pydantic_settings import BaseSettings, SettingsConfigDict  # Pydantic v2: BaseSettings lives in pydantic-settings

class Settings(BaseSettings):
    database_url: AnyUrl
    port: int = 8000
    log_level: str = "info"

    model_config = SettingsConfigDict(env_file=".env")  # convenience for dev only

settings = Settings()

settings.database_url is validated at startup; in production the .env file is typically absent, and environment variables supplied by the platform take precedence over .env values anyway. Pydantic explicitly supports this pattern.

Modern adaptations

  1. Secrets management — Twelve-Factor says “env vars” but production practices usually place secrets in a secret manager (AWS Secrets Manager, HashiCorp Vault, platform secrets) and inject them into env at runtime or mount them as files. The principle is the same: config external to code.
  2. Kubernetes — map the Twelve-Factor model onto K8s primitives: ConfigMaps/Secrets for config, Deployments for processes, readiness/liveness probes for disposability.
  3. .env files — community consensus: use .env for local development convenience, but do not treat .env as production secrets. Libraries like python-dotenv and godotenv explicitly support this dev use.
  4. Structured logs & telemetry — Twelve-Factor’s “logs as event streams” aligns with emitting structured JSON to stdout and relying on a platform for aggregation/processing.
  5. Validation + schema — Use typed settings (Go structs, Pydantic) so config problems show up at startup rather than in production.

📚 Key Cloud-Provider Guides

Here is a curated list of reference guides for 12-Factor / cloud-native apps from the major cloud providers (Amazon Web Services, Microsoft Azure, Google Cloud) — particularly those that help teams implement cloud-native or 12-Factor (or similar) applications. For each guide, I highlight what it offers and why it matters.

Amazon Web Services

“AWS アーキテクチャで学ぶ The Twelve Factors App 本格入門” (AWS builders.flash) — roughly, “An In-Depth Introduction to the Twelve-Factor App through AWS Architecture”

  • What it’s about: A Japanese-language AWS article that goes through each of the 12 factors, explains why they matter, and shows how to implement them on AWS.
  • Why it’s useful: Great for implementing 12-Factor on AWS — walks through each factor in AWS context (build → deploy → runtime) and maps them to AWS services (ECS/Fargate, ECR, CodeBuild/CodePipeline, CloudWatch).

“Developing Twelve-Factor Apps using Amazon ECS and AWS Fargate” (AWS Blogs)

  • What it’s about: A tutorial-style AWS blog post describing a sample solution using containers on ECS/Fargate, with backing services, CI/CD, and log management — showing how 12-Factor–style architecture maps into AWS container workloads.
  • Why it’s useful: Useful if you’re containerizing applications and want a “reference architecture” that follows 12-Factor principles on AWS.

Microsoft Azure

“Cloud-native architecture & 12-Factor App guidance” (Microsoft Learn)

  • What it’s about: The Azure documentation page on cloud-native architecture explicitly references 12-Factor as a “solid foundation” for cloud-native apps. It describes how cloud-native designs (stateless processes, elasticity, configuration separation, etc.) map onto Azure infrastructure.

  • Why it’s useful: Helpful to align 12-Factor thinking with Azure’s architecture patterns, especially if using Azure Kubernetes Service (AKS), container services, or serverless.

Azure App Configuration docs / guide (Microsoft Learn)

  • What it’s about: The Azure App Configuration service is presented as a tool to implement the 12-Factor “Config” principle: externalizing configuration from code, enabling dynamic configuration management especially in microservices / container-based deployments.
  • Why it’s useful: Practical for teams that run containerized or distributed apps on Azure, need external config management, and want to keep config out of code — especially useful for microservices or multi-environment deployments.

Google Cloud

Google Cloud Architecture Center / “Application development” section (Google Cloud Documentation)

  • What it’s about: A general landing area in Google Cloud docs that gathers resources on application development (compute, hosting, containers, data, observability, etc.) — good starting point for designing cloud-native apps on Google Cloud.
  • Why it’s useful: Helps explore cloud-native building blocks (compute, storage, managed services) that align with 12-Factor ideas (backing services, stateless containers, config externalization, etc.).

“From the Twelve to Sixteen-Factor App” (Google Cloud)

  • What it’s about: A recent (2025) Google Cloud blog post arguing that in the AI era the original 12-Factor model should be extended — offering a modern evolution of 12-Factor thinking that includes considerations for AI apps.
  • Why it’s useful: Useful if your application involves AI / ML workloads — shows how to adapt 12-Factor (or enhance it) for modern use-cases beyond traditional CRUD or service-backed apps.

How They Complement Each Other

  • The AWS guides are very concrete — they map each 12-Factor principle to specific AWS services (ECS/Fargate, ECR, CloudWatch, CodeBuild, etc.), making it easier to adopt 12-Factor on AWS without reinventing the wheel.
  • Azure’s cloud-native architecture guidance + App Configuration service offers a more managed/config-driven approach, especially for containerized or microservice-based apps; good if you’re already using Azure ecosystem (AKS, Functions, etc.).
  • On Google Cloud, the documentation is more general (Application Development center), but there are community and official evolutions of 12-Factor thinking. The “Sixteen-Factor” post is especially interesting for AI/ML workloads — suggesting 12-Factor isn’t obsolete but needs adaptation for modern app patterns.
  • Across all three providers, there’s a recurring pattern: configuration externalization, containerization (stateless processes), backing services as managed cloud services — exactly what 12-Factor advocates. These docs help ground 12-Factor theory into provider-specific best practices.

Example app

Below is an example app that follows Twelve-Factor patterns. It includes:

  • Go webapp (binds to $PORT, reads env, connects to backing services like Postgres and Redis)
  • PostgreSQL (main DB)
  • Redis (cache)
  • pgAdmin (DB admin UI)
  • Prometheus + Grafana (observability)
  • Alloy + Loki (log aggregation — optional but common)
  • Networks and volumes for clean isolation

We can set it up with Docker Compose using the following files. You need a .env file containing POSTGRES_PASSWORD and PGADMIN_DEFAULT_PASSWORD, which Docker Compose reads in the top-level secrets section.
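For local development, the .env might contain only the two passwords (placeholder values shown; never commit real secrets to VCS):

```text
POSTGRES_PASSWORD=local-dev-only
PGADMIN_DEFAULT_PASSWORD=local-dev-only
```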

  • docker-compose.yml
  • .env
  • webapp/Dockerfile
  • webapp/main.go
  • webapp/go.mod
  • webapp/go.sum
  • observability/grafana-datasources.yaml
  • observability/prometheus.yml
  • observability/loki-local-config.yaml
  • observability/alloy-local-config.alloy

Below is a developer-friendly docker-compose.yml example. It is suitable as a local development stack, using the top-level configs and secrets elements.

    docker-compose.yml:
    ---
    services:
      webapp:
        build:
          context: ./webapp
          dockerfile: Dockerfile
        depends_on:
          - pgdb
          - redis
        networks:
          - appnet
        ports:
          - '8080:8080'
        environment:
          PORT: '8080'
          DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@pgdb:5432/appdb?sslmode=disable
          REDIS_URL: redis://redis:6379/0
    
      pgdb:
        image: postgres:18
        volumes:
          - pgdata:/var/lib/postgresql
        networks:
          - appnet
        secrets:
          - postgres-passwd
        environment:
          POSTGRES_USER: postgres
          POSTGRES_DB: appdb
          POSTGRES_PASSWORD_FILE: /run/secrets/postgres-passwd
    
      pgadmin:
        image: dpage/pgadmin4:9.10
        depends_on:
          - pgdb
        networks:
          - appnet
        ports:
          - '5050:80'
        secrets:
          - pdadmin-passwd
        environment:
          PGADMIN_DEFAULT_EMAIL: admin@example.com
          PGADMIN_DEFAULT_PASSWORD_FILE: /run/secrets/pdadmin-passwd
          PGADMIN_DISABLE_POSTFIX: 'true'
        configs:
          - source: pgadmin-servers
            target: /pgadmin4/servers.json
    
      redis:
        image: redis:8.4
        command: ['redis-server', '--appendonly', 'yes']
        volumes:
          - redisdata:/data
        networks:
          - appnet
    
      # --------------------------
      # Observability stack
      # --------------------------
    
      prometheus:
        image: prom/prometheus
        depends_on:
          - webapp
        networks:
          - appnet
        configs:
          - source: prometheus-config
            target: /etc/prometheus/prometheus.yml
    
      grafana:
        image: grafana/grafana
        depends_on:
          - prometheus
        volumes:
          - grafana:/var/lib/grafana
        networks:
          - appnet
        ports:
          - '3000:3000'
        configs:
          - source: grafana-datasources-config
            target: /etc/grafana/provisioning/datasources/ds.yaml
    
      loki:
        image: grafana/loki:3.6
        command: -config.file=/etc/loki/loki-config.yaml
        volumes:
          - lokidata:/loki
        networks:
          - appnet
        configs:
          - source: loki-config
            target: /etc/loki/loki-config.yaml
    
      alloy:
        image: grafana/alloy:v1.12.0
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        command: run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy
        depends_on:
          - loki
        ports:
          - 12345:12345
        networks:
          - appnet
        configs:
          - source: alloy-config
            target: /etc/alloy/config.alloy
    
    networks:
      appnet:
    
    volumes:
      pgdata:
      redisdata:
      grafana:
      lokidata:
    
    configs:
      prometheus-config:
        file: ./observability/prometheus.yml
      loki-config:
        file: ./observability/loki-local-config.yaml
      alloy-config:
        file: ./observability/alloy-local-config.alloy
      grafana-datasources-config:
        file: ./observability/grafana-datasources.yaml
      pgadmin-servers:
        content: |-
          {
            "Servers": {
              "1": {
                "Name": "Backend Database",
                "Group": "Twelve-Factor Example",
                "Port": 5432,
                "Username": "postgres",
                "Host": "pgdb",
                "MaintenanceDB": "postgres",
                "ConnectionParameters": {
                    "sslmode": "prefer",
                    "connect_timeout": 10
                }
              }
            }
          }
    
    secrets:
      postgres-passwd:
        environment: 'POSTGRES_PASSWORD'
      pdadmin-passwd:
        environment: 'PGADMIN_DEFAULT_PASSWORD'
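
    The compose file builds ./webapp from a Dockerfile that is not reproduced here; a typical multi-stage build for a Go service looks roughly like this (the Go version, distroless base image, and paths are assumptions; adjust them to your go.mod and module layout):

```dockerfile
# --- build stage: compile a static binary (factor 5: build)
FROM golang:1.23 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /webapp .

# --- run stage: minimal image carrying only the release artifact
FROM gcr.io/distroless/static-debian12
COPY --from=build /webapp /webapp
EXPOSE 8080
ENTRYPOINT ["/webapp"]
```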

    The example Go app provides basic handlers for “/healthz”, “/readyz”, and “/metrics” to support observability.

    webapp/main.go:
    package main
    
    import (
    	"context"
    	"fmt"
    	"log"
    	"net/http"
    	"os"
    	"os/signal"
    	"syscall"
    	"time"
    
    	"github.com/go-redis/redis/v8"
    	"github.com/jackc/pgx/v5"
    	"github.com/kelseyhightower/envconfig"
    	"github.com/prometheus/client_golang/prometheus/promhttp"
    )
    
    type App struct {
    	DB    *pgx.Conn
    	Redis *redis.Client
    }
    
    type Config struct {
    	Port        string `envconfig:"PORT" default:"8080"`
    	DatabaseURL string `envconfig:"DATABASE_URL" required:"true"`
    	RedisURL    string `envconfig:"REDIS_URL" required:"true"`
    }
    
    func main() {
    	// --- Load config from env (12-factor friendly)
    	var cfg Config
    	if err := envconfig.Process("", &cfg); err != nil {
    		log.Fatalf("failed to load config: %v", err)
    	}
    	port := cfg.Port
    	dbURL := cfg.DatabaseURL
    	redisURL := cfg.RedisURL
    
    	// --- Connect to PostgreSQL with pgx
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	db, err := pgx.Connect(ctx, dbURL)
    	cancel()
    	if err != nil {
    		log.Fatalf("failed to connect to DB: %v", err)
    	}
    	defer db.Close(context.Background())
    
    	// --- Connect to Redis
    	opt, err := redis.ParseURL(redisURL)
    	if err != nil {
    		log.Fatalf("invalid Redis URL: %v", err)
    	}
    	rdb := redis.NewClient(opt)
    
    	app := &App{DB: db, Redis: rdb}
    
    	// --- Routes
    	mux := http.NewServeMux()
    	mux.HandleFunc("/healthz", app.handleHealthz)
    	mux.HandleFunc("/readyz", app.handleReadyz)
    	mux.Handle("/metrics", promhttp.Handler())
    
    	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    		fmt.Fprintln(w, "Hello from Go webapp!")
    	})
    
    	srv := &http.Server{
    		Addr:    ":" + port,
    		Handler: mux,
    	}
    
    	// --- Start server
    	go func() {
    		log.Printf("Starting server on %s", srv.Addr)
    		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
    			log.Fatalf("listen error: %v", err)
    		}
    	}()
    
    	// --- Graceful shutdown
    	stop := make(chan os.Signal, 1)
    	signal.Notify(stop, syscall.SIGTERM, syscall.SIGINT)
    
    	<-stop
    	log.Println("Shutting down gracefully...")
    
    	ctx, cancel = context.WithTimeout(context.Background(), 8*time.Second)
    	defer cancel()
    
    	if err := srv.Shutdown(ctx); err != nil {
    		log.Fatalf("Server Shutdown: %v", err)
    	}
    
    	log.Println("Goodbye!")
    }
    
    func (a *App) handleHealthz(w http.ResponseWriter, _ *http.Request) {
    	// Simple: if server is running, it's alive
    	w.WriteHeader(http.StatusOK)
    	w.Write([]byte("ok"))
    }
    
    func (a *App) handleReadyz(w http.ResponseWriter, _ *http.Request) {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    	defer cancel()
    
    	// Check DB
    	if err := a.DB.Ping(ctx); err != nil {
    		http.Error(w, "DB not ready", http.StatusServiceUnavailable)
    		return
    	}
    
    	// Check Redis
    	if _, err := a.Redis.Ping(ctx).Result(); err != nil {
    		http.Error(w, "Redis not ready", http.StatusServiceUnavailable)
    		return
    	}
    
    	w.WriteHeader(http.StatusOK)
    	w.Write([]byte("ready"))
    }
    • GET /healthz always returns 200 as long as the process is alive.
    • GET /readyz checks DB connectivity (DB.Ping) and Redis connectivity (Redis.Ping). It returns 503 if a dependency is not ready.
    • Prometheus scrapes the “/metrics” endpoint.

    For liveness and readiness probes, see the Kubernetes documentation. For metrics, Prometheus’s default metrics_path, which specifies the HTTP resource path on which to fetch metrics from targets, is “/metrics”. (Prometheus)
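    The compose file mounts ./observability/prometheus.yml, which is not reproduced above; a minimal scrape configuration for this stack might look like the following (job name and interval are illustrative):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: webapp
    # metrics_path defaults to /metrics; shown here for clarity
    metrics_path: /metrics
    static_configs:
      - targets: ['webapp:8080']
```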
