Setting up Perplexica on Arch Linux locally

Perplexica is an open source replica of the (quite famous) Perplexity AI-based search engine, which I use from time to time when I've had enough of parsing articles myself. The AI just does it faster (though it can be wrong at times). I find it a great tool for quickly establishing some understanding of a topic. Like most AI tools, Perplexity is only really usable with a paid subscription, and since it's hosted, it's a black box, which I don't like. Perplexica came to my attention while I was messing around on GitHub a while back. The official documentation on setting up Perplexica is quite lacking, especially if, like me, you hate Docker too. So here's my version, the way I got it working.

I won't get into serving LLMs locally as that's an entirely different beast. I'll just set up Perplexica and let it use public LLM providers.

Background

Personally, in my honest opinion, we are at the end of the LLM hype cycle. People are starting to realize what generative AI can and cannot do. No one sane thinks it's a magic bullet anymore. LLMs are quite good at comprehension but not so much at reasoning. That makes them a great tool for condensing and summarizing information, especially when you don't have time to understand the full material, or the source material is too difficult to get started with. Yes, AI will make mistakes, but getting started is the hardest part. Perplexica processes your question into a set of search queries, retrieves search results from a search engine, and then uses an LLM to summarize the results according to the question. Arguably it's not as good as a human just reading the material, but it's faster, and much value can be gained if you don't care about being right 100% of the time.
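
For illustration, here is a toy sketch of that pipeline using curl and jq. This is not how Perplexica is implemented, just the same idea in shell; it assumes a local SearXNG instance with JSON output (which we set up below), jq installed, and an OPENAI_API_KEY in the environment. The model name and prompt are stand-ins:

#!/bin/sh
# Toy version of the Perplexica pipeline: question -> search -> summarize.
QUESTION="why is the sky blue"
ENCODED=$(jq -rn --arg q "$QUESTION" '$q|@uri')

# Grab the top 5 search results as "title: snippet" lines
RESULTS=$(curl -s "http://localhost:8888/search?q=$ENCODED&format=json" \
  | jq -r '.results[:5][] | "\(.title): \(.content)"')

# Ask the LLM to answer the question from those results
jq -n --arg q "$QUESTION" --arg r "$RESULTS" \
  '{model: "gpt-4o-mini",
    messages: [{role: "user",
      content: ("Answer the question: \($q)\n\nUsing only these search results:\n" + $r)}]}' \
  | curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d @- \
  | jq -r '.choices[0].message.content'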

Install SearXNG

Perplexica depends on SearXNG, the open source meta search engine, to provide search results. So SearXNG must be installed first. Luckily there is an AUR package for it. However, there are some packaging issues that need to be worked around:

yay -S searxng-git
# Add missing dependencies
sudo pacman -S python-msgspec python-typer python-isodate

# Start the service
sudo systemctl start emperor.uwsgi
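
If you want SearXNG to come back after a reboot, enable the unit as well:

sudo systemctl enable emperor.uwsgi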

Test your SearXNG instance by visiting http://localhost:8888 in your browser. Type something into the search bar and it should be able to return results.

Image: SearXNG running on localhost, should look like this

If it works, proceed to modify the configuration so Perplexica can use it.

# /etc/searxng/settings.yml
# Change the following line

search:
  formats:
    - html
    - json # Add this line

server:
  limiter: false # Disable rate limiting, otherwise for some reason it rejects Perplexica's requests

Then restart the service:

sudo systemctl restart emperor.uwsgi
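
With the json format enabled and the limiter off, you can sanity-check the API Perplexica will talk to. SearXNG returns JSON when you pass format=json (jq here is just for pretty-printing):

curl -s 'http://localhost:8888/search?q=arch+linux&format=json' | jq '.results[0] | {title, url}'

If you get a result object back instead of a 403, the configuration took.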

Install Perplexica

Perplexica is written in TypeScript, so building from source manually is required, with some modifications, as Arch ships a much newer version of NodeJS than what Perplexica is tested with.

sudo pacman -S git npm nodejs
git clone https://github.com/ItzCrazyKns/Perplexica
cd Perplexica

Edit package.json in the project root and upgrade @types/better-sqlite3 and better-sqlite3 to ^7.6.11 and ^11.6.0 respectively.

"@types/better-sqlite3": "^7.6.11",
"better-sqlite3": "^11.6.0",

Then we build the backend and frontend separately. We'll do the backend first. The Perplexica frontend expects the backend to be running on a different URL than the frontend server. The URL to the backend is controlled by the NEXT_PUBLIC_API_URL and NEXT_PUBLIC_WS_URL environment variables. We'll set them to http://localhost:3001 and ws://localhost:3001 respectively, as that's where the backend will be running by default. Note that Next.js bakes NEXT_PUBLIC_* variables into the bundle at build time, so changing them later means rebuilding the frontend. For public facing deployments (even if you are locking it behind authentication), you should set these to the actual URL of your endpoint.

# In the Perplexica directory
# Build the backend
npm install
npm run build

# Build the frontend
cd ui
npm install
NEXT_PUBLIC_API_URL=http://localhost:3001/api/ NEXT_PUBLIC_WS_URL=ws://localhost:3001/ npm run build

Now copy the sample.config.toml to config.toml under the project root and modify the configuration. This is where you set up API tokens for LLM inference services and point Perplexica at the SearXNG instance. Perplexica supports multiple popular LLM providers and even self hosted ones. My configuration looks like this:

[GENERAL]
PORT = 3001 # Port to run the server on
SIMILARITY_MEASURE = "cosine" # "cosine" or "dot"
KEEP_ALIVE = "5m" # How long to keep Ollama models loaded into memory. (Instead of using -1 use "-1m")

[API_KEYS]
OPENAI = "" # OpenAI API key - sk-1234567890abcdef1234567890abcdef
GROQ = "" # Groq API key - gsk_1234567890abcdef1234567890abcdef
ANTHROPIC = "" # Anthropic API key - sk-ant-1234567890abcdef1234567890abcdef
GEMINI = "" # Gemini API key - sk-1234567890abcdef1234567890abcdef

[API_ENDPOINTS]
SEARXNG = "http://localhost:8888/" # SearxNG API URL
OLLAMA = "" # Ollama API URL - http://host.docker.internal:11434

Then start both the backend and frontend servers:

npm run start

# In another terminal
cd ui
npm run start
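
Before opening the browser, you can quickly confirm both processes are listening; any HTTP status code in the output means the port is up:

curl -s -o /dev/null -w 'backend:  %{http_code}\n' http://localhost:3001/
curl -s -o /dev/null -w 'frontend: %{http_code}\n' http://localhost:3000/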

Navigate to http://localhost:3000 in your browser and you should see the Perplexica interface. Type any question and it should answer it. It can feel slow at times since it doesn't support streaming responses, but results should come back in less than a minute as long as your LLM API isn't slow.

Image: Perplexica serving answers to a question