Automating with Local AI: n8n Self-Hosted Setup Guide
n8n is a powerful workflow automation tool that can integrate with local AI models to create sophisticated, privacy-preserving automation systems. This guide will walk you through setting up n8n’s Self-Hosted AI Starter Kit and building your first AI-powered workflows without relying on cloud services.
What You’ll Learn
- Installing the n8n Self-Hosted AI Starter Kit
- Connecting local LLMs to n8n
- Building basic AI workflows
- Automating tasks with AI assistance
- Creating AI agents for specific purposes
Requirements
- Computer capable of running Docker
- Basic understanding of automation concepts
- Local AI models (through Ollama or similar)
- 16GB+ RAM recommended for smooth operation
1. Setting Up Docker Environment
Before we can install n8n, we need to ensure Docker is properly set up on your system:
- Install Docker and Docker Compose:
# For Ubuntu/Debian (first add Docker's official apt repository;
# see the Docker docs for the repository setup steps)
sudo apt-get update
sudo apt-get install docker-ce docker-compose-plugin
# For macOS (Docker Desktop via Homebrew)
brew install --cask docker
# For Windows
# Download and install Docker Desktop from the official website
- Verify Docker is working correctly:
docker --version
docker compose version
- Ensure Docker service is running:
# Linux
sudo systemctl start docker
# macOS/Windows
# Docker Desktop should be running
2. Installing n8n Self-Hosted AI Starter Kit
The n8n Self-Hosted AI Starter Kit provides a preconfigured environment with n8n and various AI tools:
- Clone the repository:
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
- Review the docker-compose.yml file to understand the included components:
cat docker-compose.yml
The starter kit includes:
- n8n: The workflow automation platform
- Ollama: Local large language model server
- PostgreSQL: Database for n8n
- Additional AI components (depending on version)
- Start the services:
docker compose up -d
This will download and start all the necessary containers. The initial download may take some time depending on your internet connection.
- Verify the services are running:
docker compose ps
You should see all services in the “running” state.
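If you prefer to check this from a script, the same status can be read programmatically. A minimal sketch, assuming Compose v2's "docker compose ps --format json" output shape (one JSON object per line with a "State" field):

```python
# Sketch: confirm all starter-kit services are running by parsing
# `docker compose ps --format json` (Compose v2 emits one JSON object per
# line; the "State" field name is assumed from that format).
import json
import subprocess

def all_running(json_lines):
    """Return True if every service line reports State == 'running'."""
    services = [json.loads(line) for line in json_lines if line.strip()]
    return bool(services) and all(s.get("State") == "running" for s in services)

def check_compose_services():
    """Run docker compose ps and check every service's state."""
    out = subprocess.run(
        ["docker", "compose", "ps", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return all_running(out.splitlines())
```

If any service shows a state other than "running", inspect its logs with docker compose logs before continuing.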
3. Configuring Local AI Connections
The starter kit includes Ollama for running local AI models. Let’s set up some models:
- Pull a language model with Ollama:
docker exec -it n8n-ollama ollama pull mistral
This will download the Mistral model for general-purpose text generation. You can replace “mistral” with other models like “llama2” or “phi”.
- Pull an embedding model:
docker exec -it n8n-ollama ollama pull nomic-embed-text
This model will be used for text embeddings and document processing.
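To verify a pulled model works before wiring it into a workflow, you can call Ollama's REST API directly; this is the same HTTP endpoint the n8n Ollama node talks to. A sketch, assuming Ollama is reachable on localhost:11434 (from inside the starter kit's Docker network the host would be n8n-ollama instead):

```python
# Sketch: call Ollama's /api/generate endpoint directly to smoke-test a model.
import json
import urllib.request

def build_generate_payload(model, prompt):
    """Request body for Ollama's /api/generate endpoint."""
    # stream=False returns one complete JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="mistral", host="http://localhost:11434"):
    """Send a prompt to Ollama and return the generated text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

If this script returns text, the n8n Ollama node will be able to use the same model.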
4. Building Your First AI Workflow
Now that n8n and Ollama are set up, let’s access n8n and create your first workflow:
- Open your browser and navigate to
http://localhost:5678
- If this is your first time, follow the setup wizard to create an account
- Once logged in, click “Workflows” in the left sidebar, then “+ New” to create a new workflow
- Click on the canvas and search for “Ollama” in the nodes panel
Basic Text Generation Workflow
Let’s create a simple workflow that generates text based on a prompt:
- Add a “Manual” trigger node to start the workflow
- Add an “Ollama” node and connect it to the trigger
- Configure the Ollama node:
- Set Operation to “Generate Text”
- Set Model to the model you pulled (e.g., “mistral”)
- For Prompt, enter a test prompt like “Explain how local AI protects privacy in 3 bullet points”
- Optionally adjust temperature and other parameters
- Click “Execute Node” to test the Ollama node
- You should see the generated text in the output panel
- Save your workflow with a descriptive name like “Basic Text Generation”
5. Creating Triggers and Actions
To make your workflow more useful, let’s add dynamic triggers and actions:
Scheduled AI Report Generator
This workflow will generate an AI-based report on a scheduled basis:
- Create a new workflow
- Add a “Schedule” trigger node
- Configure the Schedule node to run at your desired frequency (e.g., daily at 9am)
- Add an “Ollama” node connected to the Schedule node
- Configure the Ollama node with a prompt template, such as:
Generate a daily productivity tip for working with local AI tools. Include:
- A practical technique
- Why it's beneficial
- How to implement it
- An example use case
- Add a “Send Email” node (or other notification node) connected to the Ollama node
- Configure the email settings to send the generated content to yourself
- Activate the workflow using the toggle in the top-right corner
Your workflow will now automatically generate and email you AI-created productivity tips on schedule.
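The prompt above is static; a scheduled workflow often benefits from per-run templating, such as including the current date so each report is distinct. In n8n itself you would do this with an expression in the prompt field; the idea is sketched here in plain Python (the template wording is illustrative):

```python
# Sketch: render a dated prompt for a scheduled report. This mirrors what an
# n8n expression in the Ollama node's prompt field would produce at run time.
from datetime import date

PROMPT_TEMPLATE = (
    "Generate a productivity tip for {day} about working with local AI tools. "
    "Include: a practical technique, why it's beneficial, how to implement it, "
    "and an example use case."
)

def build_daily_prompt(day=None):
    """Fill the template with the given ISO date, defaulting to today."""
    day = day or date.today().isoformat()
    return PROMPT_TEMPLATE.format(day=day)
```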
6. Designing AI Agents for Specific Tasks
n8n workflows can function as AI agents that perform complex tasks. Let’s build a document processing agent:
Document Summarization Agent
- Create a new workflow
- Add a “Webhook” trigger node to create an API endpoint for your agent
- Add a “Code” node (called “Function” in older n8n versions) to process incoming text
- Add an “Ollama” node with a prompt like:
Summarize the following text in a concise way. Highlight 3-5 key points. TEXT: {{$json["text"]}}
- Add a “Respond to Webhook” node to return the summary
- Connect the nodes in sequence
- Activate the workflow
You now have an API endpoint that accepts text and returns an AI-generated summary.
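A client for that endpoint can be as small as the sketch below. The webhook path "summarize" is an assumption; use the URL shown on your Webhook node (by default n8n serves activated webhooks at http://localhost:5678/webhook/<path>):

```python
# Hypothetical client for the summarization agent's webhook endpoint.
import json
import urllib.request

def build_body(text):
    """JSON body matching the {{$json["text"]}} reference in the prompt."""
    return json.dumps({"text": text}).encode()

def summarize(text, url="http://localhost:5678/webhook/summarize"):
    """POST text to the workflow and return the raw response body."""
    req = urllib.request.Request(
        url,
        data=build_body(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```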
Multi-Step AI Agent
For more complex tasks, you can create multi-step agents that combine multiple AI operations:
- Create a workflow with a webhook trigger
- Add an Ollama node to analyze the input and determine required tasks
- Add a Switch node to route to different processing paths based on AI analysis
- Create separate branches for different types of processing (summarization, translation, etc.)
- Add a final Ollama node to combine results into a coherent response
- Return the final result via webhook response
This approach allows for sophisticated AI agents that can handle complex, multi-stage tasks while running completely locally.
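The routing step is the heart of this pattern: the first Ollama node is prompted to answer with a single task label, and the Switch node branches on that answer. A sketch of the same logic in plain Python (the label set is an illustrative assumption; your agent's branches will differ):

```python
# Sketch: map an LLM classifier's one-word answer to a processing branch,
# as the Switch node does inside the workflow.
def route(ai_label):
    """Return the branch name for the classifier's answer."""
    branches = {
        "summarize": "summarization branch",
        "translate": "translation branch",
    }
    # Normalize first: LLM output often carries whitespace or casing noise
    return branches.get(ai_label.strip().lower(), "fallback branch")
```

Keeping a fallback branch is important in practice, since a local model will occasionally produce a label outside the expected set.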
7. Monitoring and Troubleshooting
Workflow Execution History
n8n provides tools to monitor your workflows:
- Access execution history by clicking on “Executions” in the left sidebar
- View detailed logs for each execution to identify issues
- Filter executions by status (successful, error, etc.)
Common Issues and Solutions
Ollama Connection Errors:
- Problem: “Could not connect to Ollama server”
- Solution: Verify Ollama is running with
docker ps
and check that the URL is correct (usually http://n8n-ollama:11434 in the starter kit)
Model Not Found:
- Problem: “Model [name] not found”
- Solution: Pull the model using
docker exec -it n8n-ollama ollama pull [model-name]
Out of Memory Errors:
- Problem: Workflows fail with memory-related errors
- Solution: Use smaller models, increase Docker resource limits, or use a machine with more RAM
8. Advanced Configurations
Persistent Data Storage
Ensure your data persists between restarts by configuring volumes:
# In docker-compose.yml (service names may differ; match the ones in your file)
services:
  n8n:
    volumes:
      - ./n8n_data:/home/node/.n8n
  ollama:
    volumes:
      - ./ollama_data:/root/.ollama
Securing Your n8n Instance
For added security:
- Set environment variables under the n8n service’s environment section in docker-compose.yml:
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=your_username
N8N_BASIC_AUTH_PASSWORD=your_secure_password
- Consider setting up a reverse proxy with HTTPS
- Restrict network access if deployed on a server
Example Workflows
Here are some practical AI workflow ideas you can implement:
Automated Document Summarization
- Monitor a folder for new documents
- Extract text from various file formats
- Generate summaries using local LLM
- Store summaries in a database or send via email
Intelligent Email Processing
- Connect to email via IMAP
- Use LLM to categorize and prioritize emails
- Generate response drafts for common inquiries
- Flag important emails for immediate attention
Social Media Content Analysis
- Pull content from social media platforms
- Analyze sentiment and key topics
- Generate insights and reports
- Track trends over time
Scheduled Content Generation
- Generate blog post ideas or drafts
- Create social media content
- Schedule posts through integration with publishing platforms
- Maintain consistent content calendar
Conclusion
n8n, combined with local AI models, provides a powerful platform for creating privacy-preserving automation workflows. By running everything locally, you maintain complete control over your data while leveraging the capabilities of modern AI models.
As you become more comfortable with n8n, you can create increasingly sophisticated workflows that combine multiple AI operations with other integrations like databases, email services, and file systems to build comprehensive automation solutions tailored to your specific needs.