
Monday, September 1, 2025

n8n

Install Node.js
$ wget -qO- https://deb.nodesource.com/setup_24.x | sudo bash
$ sudo apt install nodejs
$ sudo npm install -g npm@11.5.2
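Verify the installed versions before continuing:
$ node -v
$ npm -v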
Install n8n globally.
$ sudo npm install n8n -g
Tighten permissions on the settings file to silence n8n's startup warning.
$ chmod 600 ~/.n8n/config

Create service file
$ mkdir -p ~/.config/systemd/user
$ cat ~/.config/systemd/user/n8n.service
[Unit]
Description=n8n service
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/n8n
Restart=on-failure
Environment="DB_SQLITE_POOL_SIZE=15"
Environment="N8N_RUNNERS_ENABLED=true"
Environment="N8N_SECURE_COOKIE=false"  # allow access over http

[Install]
WantedBy=default.target
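
User units only run while the user has an open session; for n8n to start at boot without anyone logged in, enable lingering for the account:
$ sudo loginctl enable-linger $USER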

Reload the configuration
$ systemctl --user daemon-reload
Enable the Service
$ systemctl --user enable n8n
Start the Service
$ systemctl --user start n8n
Check status:
$ systemctl --user status n8n
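Follow the logs if anything fails:
$ journalctl --user -u n8n -f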

The editor is now accessible at:
http://IP_Address:5678

Display IP Addresses
$ ip addr
List the process using the port.
$ lsof -i :5678


Sunday, January 12, 2025

Open WebUI

SQLite
---------
Install SQLite
$ wget https://www.sqlite.org/2025/sqlite-src-3500400.zip
$ unzip sqlite-src-3500400.zip
$ cd sqlite-src-3500400
$ ./configure --prefix=/usr/local
$ make
$ sudo make install
$ sqlite3 --version
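If other programs still pick up the old library, refresh the dynamic linker cache so /usr/local/lib is re-scanned:
$ sudo ldconfig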

Chroma
----------
Install Chroma
$ python -m pip install chromadb
In ~/.local/lib/python3.12/site-packages/chromadb/__init__.py, modify line 74 to read:
if IN_COLAB or not is_client:
This fixes the RuntimeError: Chroma requires sqlite3 >= 3.35.0 raised during open-webui startup.
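An alternative that avoids patching the version check, assuming the pysqlite3-binary wheel is available for your platform: install it, then alias it over the stdlib sqlite3 before chromadb loads (e.g. at the very top of the same __init__.py):
$ python -m pip install pysqlite3-binary
__import__("pysqlite3")
import sys
sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")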

Ollama
-----------
Download and Install Ollama
$ wget -qO- https://ollama.com/install.sh | sudo bash
The Ollama API is now available at http://127.0.0.1:11434
$ ollama run llama3.2:3b
$ sudo systemctl status ollama
$ sudo systemctl start ollama
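A quick direct check of the API; the second call assumes llama3.2:3b has already been pulled:
$ curl http://127.0.0.1:11434/api/tags
$ curl http://127.0.0.1:11434/api/generate -d '{"model": "llama3.2:3b", "prompt": "Hello", "stream": false}'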

Persisted Chat Memory for n8n
$ sudo apt install postgresql
$ sudo -i -u postgres psql
postgres=# \l
postgres=# CREATE DATABASE ollama;
postgres=# CREATE USER ollama WITH PASSWORD 'ollama';
postgres=# GRANT ALL PRIVILEGES ON DATABASE ollama TO ollama;
postgres=# \c ollama
ollama=# GRANT ALL PRIVILEGES ON SCHEMA public TO ollama;
ollama=# \dt
ollama=# \q
$ psql -h localhost -p 5432 -d ollama -U ollama
ollama=# \d n8n_chat_histories
                                      Table "public.n8n_chat_histories"
   Column   |          Type          | Collation | Nullable |                    Default
------------+------------------------+-----------+----------+------------------------------------------------
 id             | integer                 |                   | not null | nextval('n8n_chat_histories_id_seq'::regclass)
 session_id | character varying(255) |       | not null |
 message    | jsonb                  |                  | not null |
Indexes:
    "n8n_chat_histories_pkey" PRIMARY KEY, btree (id)

$ lsof -i :5432

$ sudo systemctl status postgresql



FFmpeg
-----------
Unzip ffmpeg into /usr/local/bin
$ wget https://github.com/ffbinaries/ffbinaries-prebuilt/releases/download/v6.1/ffmpeg-6.1-linux-64.zip
$ sudo unzip ffmpeg-6.1-linux-64.zip -d /usr/local/bin
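Confirm the binary is on the PATH:
$ ffmpeg -version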

Open WebUI
-----------------
Install Open WebUI using the uv runtime manager. It includes ChromaDB.
$ wget -qO- https://astral.sh/uv/install.sh | sh
$ DATA_DIR=~/.open-webui ~/.local/bin/uvx --python 3.11 open-webui@latest serve

Launch the Server
$ open-webui serve &

To see the Web UI go to: http://127.0.0.1:8080
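
To keep Open WebUI running in the background like the n8n service above, a minimal user unit sketch (assuming uv placed uvx in ~/.local/bin, as its installer does, and the DATA_DIR from earlier):

[Unit]
Description=Open WebUI service
After=network-online.target

[Service]
Environment="DATA_DIR=%h/.open-webui"
ExecStart=%h/.local/bin/uvx --python 3.11 open-webui@latest serve
Restart=on-failure

[Install]
WantedBy=default.target

Save it as ~/.config/systemd/user/open-webui.service, then daemon-reload, enable, and start it as with n8n.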

Upgrade locally installed packages
$ python -m pip install -U open-webui

Visit https://openwebui.com/u/haervwe to access the collection of tools.

Locate the desired tool or function on the hub page.
Click the "Get" button.
Enter the Open WebUI URL: http://127.0.0.1:8080
Import to WebUI.
Save the tool.

Admin Panel > Settings > Connections > Manage Ollama API Connections > Click on Configure icon
Edit Connection URL: http://localhost:11434
Save

Click on the Manage icon.
Follow the link to browse model names available for download on Ollama.com.
Filter the models, then copy a name and paste it to pull the model, e.g.:
deepseek-r1:1.5b
llama3.2:1b
gemma2:2b

Click on Models to see the downloaded models.

Stop the system service before running the server manually with custom environment variables.
$ sudo systemctl stop ollama

Interactive Chat Commands
$ export OLLAMA_FLASH_ATTENTION=1
$ export OLLAMA_KV_CACHE_TYPE=q8_0   # one of: f16, q8_0, q4_0
$ ollama serve
$ ollama pull llama3.2:1b
$ ollama run llama3.2:1b
$ ollama stop llama3.2:1b
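Inside an ollama run session, slash commands are also available (list them with /?), for example:
>>> /set parameter temperature 0.7
>>> /show info
>>> /bye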

Download a GGUF model from Hugging Face. Create a Modelfile containing:
FROM /path/to/model.gguf
Import the model into Ollama.
$ ollama create model_name -f Modelfile
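
A slightly fuller Modelfile sketch (hypothetical GGUF filename and model name; adjust both to your download):
FROM ./llama-3.2-1b.Q4_K_M.gguf
PARAMETER temperature 0.7
SYSTEM You are a helpful assistant.

$ ollama create my-llama -f Modelfile
$ ollama run my-llama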