How to Install and Run n8n Locally for Free
Introduction
n8n is a powerful automation tool that lets you create workflows for data processing and integration. While n8n offers a cloud-based service with paid plans, you can also self-host it for free on your own hardware. This guide walks you through installing and running n8n on your local machine.
Why Self-Host n8n?
Avoid Subscription Fees: The cloud version requires a paid subscription after a 14-day free trial.
More Control: You manage all workflows on your own hardware.
Privacy: Your data stays on your local machine instead of cloud servers.
Installation Methods
You can install n8n using:
npm (Node.js Package Manager)
Docker (Containerized deployment)
For simplicity, we will use the npm method.
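If you prefer Docker instead, a typical containerized run looks like the following sketch (the image name and volume mount follow n8n's Docker documentation; adjust them to your setup):
docker volume create n8n_data # persistent storage for workflows and credentials
docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n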
Prerequisites
Before installing n8n, ensure your system has:
Node.js (v18, v20, or v22 recommended)
npm (Installed with Node.js)
A package manager (Homebrew for Mac/Linux, Chocolatey for Windows)
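If you do not have a package manager yet, Homebrew can be installed with its official one-line script (copied from brew.sh); for Chocolatey on Windows, follow the instructions at chocolatey.org:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"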
Step 1: Install Node.js and npm
If you haven’t installed Node.js and npm yet, do so by following these steps:
On Mac/Linux (using Homebrew):
brew install node
On Windows (using Chocolatey):
choco install nodejs
Verify installation:
node -v # Check Node.js version
npm -v # Check npm version
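Both commands should print a version number. The exact values will differ on your machine; example output only:
v20.11.0
10.2.4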
Step 2: Install and Run n8n
One-Time Execution (without installing globally)
npx n8n
This command downloads and runs n8n once, without installing it permanently on your system.
Permanent Installation
To install n8n globally, use:
npm install -g n8n
Then, start n8n with:
n8n
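n8n serves its editor on port 5678 by default. If that port is already in use, you can usually override it with the N8N_PORT environment variable (a standard n8n configuration option; the port number below is just an example):
N8N_PORT=5679 n8n # start the editor on port 5679 instead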
Step 3: Access n8n
Once n8n is running, open your browser and go to:
http://localhost:5678
This is where you will configure workflows.
Running n8n with AI Models Locally
You can integrate n8n with locally hosted AI models using Ollama, an open-source framework for running AI models on your computer.
Step 4: Install Ollama
Download and install Ollama from: https://ollama.ai
Once installed, list the models already downloaded to your machine:
ollama list
To download a model (e.g., DeepSeek R1), use:
ollama pull deepseek-r1
To run a model:
ollama run deepseek-r1
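As a quick sanity check, you can also pass a prompt directly on the command line and get a single response back:
ollama run deepseek-r1 "Explain what a workflow automation tool does in one sentence."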
Step 5: Connect n8n to Ollama
In n8n, configure the Ollama model credentials used by the AI Agent node with:
Base URL: http://localhost:11434
This connects n8n to Ollama, allowing you to process AI workflows locally.
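Before building a workflow, you can confirm that Ollama is reachable at that address by querying its local HTTP API; the /api/tags endpoint returns the models you have downloaded:
curl http://localhost:11434/api/tags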
Conclusion
Congratulations! You have installed and run n8n locally and connected it to a locally hosted AI model through Ollama. This setup gives you a fully private, cost-free alternative to cloud-based automation tools.
If you found this guide useful, stay tuned for more tutorials like it!