The Twelve-Factor App Methodology

Some might argue that the twelve-factor principles are dusty and outdated. But they are far from that. The principles still apply to almost any (cloud-native) application and contain generally good advice.

The twelve-factor app methodology goes back to developers at Heroku, one of the first cloud platforms that ever existed. The methodology consists of best practices for building portable, resilient, and scalable web applications and was presented by Adam Wiggins in 2011. Some argue that these twelve guidelines are outdated and only apply to applications on Heroku. Admittedly, eleven years in the software industry are a lifetime. But we do not share this opinion and claim that this methodology is still relevant. The official website emphasizes that the methodology is aimed at developers and operations engineers who develop or manage apps or Software-as-a-Service (SaaS). In our opinion, it is also relevant to software architects.

Twelve-Factor App Methodology Overview

I. Codebase: One codebase tracked in revision control, many deploys
II. Dependencies: Explicitly declare and isolate dependencies
III. Config: Store config in the environment
IV. Backing Services: Treat backing services as attached resources
V. Build, Release, Run: Strictly separate build and run stages
VI. Processes: Execute the app as one or more stateless processes
VII. Port Binding: Export services via port binding
VIII. Concurrency: Scale out via the process model
IX. Disposability: Maximize robustness with fast startup and graceful shutdown
X. Dev/Prod Parity: Keep development, staging, and production as similar as possible
XI. Logs: Treat logs as event streams
XII. Admin Processes: Run admin/management tasks as one-off processes

Source: There is also a free e-book available on this topic by Adam Wiggins.

I. Codebase

Code is usually tracked in a version control system (VCS); the tracked state for a specific application is often referred to as a repository. The relation between applications and codebases should be one-to-one. This has multiple implications:

Each microservice is an app and should fulfil the twelve factors on its own. Apps may not share code unless the shared code is a dependency. Each service/app should have its own CI/CD pipeline.

Furthermore, a single codebase may have multiple deployments: regardless of the environment or system, your app always points to the same codebase. However, different versions of the same app may be deployed across environments. For example, you may deploy v1.1.0 to your development environment while your production environment still runs v1.0.0.

This greatly reduces the risk of desynchronization and eases collaboration.

II. Dependencies

Applications should always explicitly declare dependencies and never rely on them implicitly. Dependencies should also be isolated from the surrounding system to prevent leakage. Version mismatches and missing dependencies are among the most annoying issues when setting up applications.

In the virtualization era, most people instinctively understand that explicitly declaring dependencies is crucial. Most languages have very mature package managers that rely on a dependency declaration file. Dependencies are not only language-specific libraries but also system tools. Docker images, for example, fail to build if you rely on dependencies that are not contained in your base image.

Tip: If you are using Python in your projects and still use plain pip, we would like to encourage you to have a look at Poetry.
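As an illustration, a Poetry-managed project declares all of its dependencies explicitly in a pyproject.toml file; the package names and version constraints below are only examples:

```toml
# Illustrative pyproject.toml fragment — packages and versions are examples.
[tool.poetry.dependencies]
python = "^3.11"
requests = "^2.31"

[tool.poetry.group.dev.dependencies]
pytest = "^7.4"
```

Combined with the lock file Poetry generates, every environment resolves exactly the same dependency tree.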

III. Config

Configuration must be strictly separated from the code. Environment-specific information such as addresses, credentials, and settings should be managed centrally and injected into the application via environment variables.

import os
DATABASE_PORT = int(os.environ.get('DATABASE_PORT', '5432'))

This has a major advantage: wherever your application is deployed, all you have to adjust is the configuration. Many platforms, frameworks, and solutions offer external configuration management. For example, Kubernetes has ConfigMaps, Docker offers environment variables via -e flags, and AWS EC2 instances can be fed environment variables using the AWS CLI.
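As a sketch of this idea (variable names and defaults are purely illustrative), the whole configuration can be assembled from environment variables in a single place:

```python
import os

def load_config(env=os.environ):
    """Build the app configuration purely from environment variables.

    Defaults are only a convenience for local development; deployed
    environments are expected to set every variable explicitly.
    """
    return {
        "database_host": env.get("DATABASE_HOST", "localhost"),
        "database_port": int(env.get("DATABASE_PORT", "5432")),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }

# The same code yields a different config per environment:
config = load_config({"DATABASE_HOST": "db.prod.internal", "DATABASE_PORT": "5433"})
```

Because the function takes the environment as a parameter, it is also trivial to test with a plain dictionary.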

IV. Backing Services

Backing services are dependencies of the application that are not code but consumable operations. These may be databases, messaging or queuing services, SMTP servers, third-party APIs, or caches like Redis, to name a few. Whether the backing service is a local or a remote dependency, the application should interact with it in the same way. These dependencies are resources to the application and should be handled as such.

Backing Services Example

This enables loose coupling by default, which grants almost unlimited scalability and reliability of both apps and services, and it also highlights the importance of configuration separation.

To define these backing services even more explicitly, invert their control, maximize decoupling, and greatly ease testing, we suggest you have a look at a dependency injection framework for your language of choice.
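A minimal sketch of this idea in Python, assuming a cache as the backing service: the app depends only on a get/set interface, so a local stand-in and a remote Redis client are interchangeable. All class and method names here are illustrative, not from any particular framework:

```python
class InMemoryCache:
    """Local stand-in that satisfies the same interface as a remote cache."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

class Greeter:
    """The app depends on the cache *interface*, not a concrete backing service.

    Whether `cache` points at an in-process dict or a remote Redis instance
    is decided by whoever wires the application together (e.g. from config).
    """
    def __init__(self, cache):
        self.cache = cache
    def greet(self, name):
        cached = self.cache.get(name)
        if cached is not None:
            return cached
        greeting = f"Hello, {name}!"
        self.cache.set(name, greeting)
        return greeting

greeter = Greeter(InMemoryCache())  # tests inject a fake; production injects Redis
```

Swapping the backing service then only requires changing the wiring, never the `Greeter` code.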

V. Build, Release, Run

DevOps is all around us, so there is not much to add for this point. It is recommended to target fully automated and strictly separated build, release, and run stages. The more of your process is automated, the less can go wrong and the more time you save.

VI. Processes

One of the major advantages of cloud applications is the ease of scaling. But horizontal scaling doesn't come for free: you have to build your applications to be stateless.

Stateless essentially means that no context information is saved in the process and each request is an isolated operation. Concepts like JSON Web Tokens (JWT) and external caches help us implement apps and services in a stateless manner without losing functionality or performance.

Running an app as one or more stateless processes essentially enables elasticity for that component.
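To sketch why signed tokens make stateless processes possible, here is a stripped-down, JWT-like token built only with the standard library. This is an illustration, not a production-grade JWT implementation; real applications should use a vetted library and load the secret from configuration:

```python
import base64, hashlib, hmac, json

SECRET = b"demo-secret"  # illustrative only; real apps load this from config

def sign(claims: dict) -> str:
    """Create a minimal signed token (a JWT-like sketch)."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token: str) -> dict:
    """Any process replica can verify the token — no server-side session needed."""
    body, sig = token.split(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid token")
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"user": "alice"})
```

Because the request itself carries the verified context, it does not matter which replica handles it.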

VII. Port Binding

This one has aged badly. The initial proposal was that the app should be self-contained and exposed by being bound to a port. Most of today's applications run on some kind of platform, service, or framework, and many of these require port binding by default to access the component at all.

One important note, though: this also implies that apps may act as backing services for other apps and should then be handled as such.
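A minimal self-contained example using only Python's standard library: the app exports HTTP by binding to a port taken from the environment (the PORT variable is a common platform convention, e.g. on Heroku):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello, world\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def main():
    # The app itself binds the port and serves HTTP —
    # no external web server needs to be injected at runtime.
    port = int(os.environ.get("PORT", "8000"))
    HTTPServer(("", port), Handler).serve_forever()

if __name__ == "__main__":
    main()
```

Anything that can reach the port can consume the service, which is exactly what makes such an app usable as a backing service for another app.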

VIII. Concurrency

There is a broad variety of concurrency approaches available, not only across languages but also within them. Most languages offer a way to scale either by spawning multiple (child) processes or by creating threads. Applications should be able to scale out via processes, because workloads can become massive and simply increasing the resources of a single instance stops working well at a certain point.

In general, technologies that follow an uber-process approach with threads tend to be more resource-intensive than technologies that utilize multiple processes. An example of this is Java, which is known for its massive resource consumption. However, this is often negligible, as resources are cheap; only at massive scale would you notice an impact on your overall costs.

Rough difference between multithreading and multiprocessing
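A small sketch of scaling out via the process model using Python's multiprocessing module (the worker function and pool size are arbitrary; in Python, processes also sidestep the GIL for CPU-bound work, which threads would not):

```python
from multiprocessing import Pool

def work(n):
    """A CPU-bound unit of work, executed independently by each process."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Four worker processes share the workload; adding more replicas
    # (locally or across machines) is the twelve-factor way to scale.
    with Pool(processes=4) as pool:
        results = pool.map(work, [10_000] * 8)
```

The same share-nothing principle extends from local worker processes to replicas of the whole app running on different machines.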

IX. Disposability

Disposability means that twelve-factor apps can be started and stopped at a moment's notice. This enables fast, elastic scaling as well as rapid deployment of code and configuration changes. Apps should also shut down gracefully to handle crashes and scale-in scenarios. Containers and functions almost always fulfil this requirement by default.

Serverless functions are a great example of how this concept may be used to drastically reduce development effort and costs. Many cloud offerings include serverless functions, such as AWS Lambda, Azure Functions, Cloud Functions on Google Cloud Platform (GCP) or even Vercel Serverless Functions.

X. Dev/Prod Parity

Environments should be as similar as possible. In fact, the only thing that should differ across your development, staging, and production environments is configuration. This concept is crucial for implementing continuous delivery and continuous deployment without encountering massive problems.

XI. Logs

Logging is an often underrated task, yet besides writing the actual code, it is the most important one in distributed systems. Not implementing logging (correctly) means you are flying blind: nobody can maintain and debug such complex systems without proper logs. Not being able to identify bugs in a production environment is frustrating, especially since you should never debug a production app directly but only evaluate its logs.

These logs should be handled as streams and never be the concern of the app itself. Instead, applications should write to stdout, and the output should be handled by a terminal (locally) or a log-streaming/aggregation solution (deployed). In other words, the logging aspect should be decoupled from the application and handled by the wrapping context.

To increase the readability and the capabilities to search and filter your logs, you might want to have a look at a structured logging solution.
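A small sketch of structured logging to stdout with Python's standard logging module (the JSON field names are an arbitrary choice; log aggregators generally accept any consistent JSON shape):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line — trivial for aggregators to parse."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)  # write to stdout, never to files
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user logged in")
```

The app stays ignorant of where the stream ends up; routing it to a file, a terminal, or an aggregation service is the environment's job.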

XII. Admin Processes

There are often tasks that a developer or operator would like to see performed, such as database migrations, synchronization jobs, or event-related tasks. These should be executable as short-lived, one-off processes, for example via images started with flags, cron jobs, or serverless functions.
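A sketch of such a one-off admin process in Python: the task ships with the app's codebase and runs as a short-lived process (the task name and behaviour here are illustrative; a real migration task would talk to the database):

```python
import argparse

def migrate_database():
    """Illustrative one-off task; a real implementation would run migrations."""
    return "migrations applied"

def main(argv=None):
    # The same codebase ships its own admin tasks; each invocation is a
    # short-lived process, e.g. `python manage.py migrate`.
    parser = argparse.ArgumentParser(description="One-off admin tasks")
    parser.add_argument("task", choices=["migrate"])
    args = parser.parse_args(argv)
    if args.task == "migrate":
        print(migrate_database())

if __name__ == "__main__":
    main()
```

Because the task runs against the same codebase and configuration as the app itself, it cannot drift out of sync with the code it administers.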


Even though this list isn't exhaustive and some principles seem a bit outdated, the twelve-factor principles are far from irrelevant. Most of the points are still highly applicable and should be understood and memorized by any professional working in the software industry.

If you have additions or disagree with our take, please let us know in the comments to improve this wrap-up of the twelve-factor methodology.