Every document, conversation, and decision you feed it becomes a structured wiki page. Cross-referenced. Contradiction-checked. Synthesized across sources. Versioned in git.
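As a rough mental model, here is a minimal sketch of what one such page could look like as data, assuming a Markdown-plus-frontmatter layout in a git repo. Field names are illustrative, not WikiLLM's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class WikiPage:
    """One structured page as it might live in the git-backed wiki (illustrative schema)."""
    title: str
    source_ids: list[str]                              # sources this page was synthesized from
    assertions: list[str]                               # individual claims, checkable one by one
    links: list[str] = field(default_factory=list)      # [[wikilink]] targets to related pages
    superseded_by: str | None = None                     # set when a newer page replaces this one

    def to_markdown(self) -> str:
        """Render as Markdown with a frontmatter header, ready to commit."""
        front = "\n".join(
            f"{k}: {v}"
            for k, v in [
                ("title", self.title),
                ("sources", ", ".join(self.source_ids)),
                ("links", ", ".join(self.links)),
            ]
        )
        body = "\n".join(f"- {a}" for a in self.assertions)
        return f"---\n{front}\n---\n\n{body}\n"
```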
Read the source. Identify entities, concepts, and assertions. Produce 15–20 structured wiki pages per source.
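A minimal sketch of this ingestion step, assuming a generic `call_llm(prompt)` helper and a JSON response format. Both are hypothetical stand-ins; the real prompts and model calls are not shown here.

```python
import json

def ingest_source(text: str, call_llm) -> list[dict]:
    """Ask the model to read one source and emit structured pages.

    `call_llm` is a hypothetical helper that sends a prompt to whatever
    model backend is configured and returns its text completion.
    """
    prompt = (
        "Read the following source. Identify entities, concepts, and assertions, "
        "then return 15-20 wiki pages as a JSON list of objects with "
        "'title', 'assertions', and 'links' fields.\n\n" + text
    )
    raw = call_llm(prompt)
    return json.loads(raw)   # one structured page per entity or concept
```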
Compare new pages against up to 20 existing related pages. Add [[wikilinks]] to weave the graph.
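Sketched below, this step might select the most related existing pages and ask the model to insert links. Related-page selection is shown as a naive title-word overlap; a real system would more likely use embeddings or search, which is an assumption rather than the shipped method.

```python
import json

def cross_reference(new_page: dict, existing_pages: list[dict], call_llm, top_k: int = 20) -> dict:
    """Link a new page into the graph against its most related existing pages."""

    def overlap(a: str, b: str) -> int:
        # Naive relatedness score: shared words between titles.
        return len(set(a.lower().split()) & set(b.lower().split()))

    related = sorted(
        existing_pages,
        key=lambda p: overlap(p["title"], new_page["title"]),
        reverse=True,
    )[:top_k]

    prompt = (
        "Rewrite the new page's assertions, inserting [[wikilinks]] wherever they "
        "mention one of the related pages. Return a JSON list of strings.\n\n"
        f"NEW PAGE: {json.dumps(new_page)}\n\nRELATED PAGES: {json.dumps(related)}"
    )
    new_page["assertions"] = json.loads(call_llm(prompt))
    return new_page
```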
Run micro-calls that test whether new claims conflict with existing ones. Flag contradictions and supersede outdated claims as needed.
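One way these micro-calls could work, sketched under the same assumptions as above (a hypothetical `call_llm` helper, a simple yes/no prompt, and no pruning of the claim pairs, which a real system would need for cost):

```python
def check_contradictions(new_page: dict, existing_pages: list[dict], call_llm) -> list[tuple[str, str]]:
    """Run one small model call per (new claim, existing claim) pair.

    Returns the pairs flagged as contradictory so the older claim can be
    marked superseded by the caller.
    """
    conflicts = []
    for claim in new_page["assertions"]:
        for page in existing_pages:
            for old_claim in page["assertions"]:
                prompt = (
                    "Do these two statements contradict each other? Answer YES or NO.\n"
                    f"A: {claim}\nB: {old_claim}"
                )
                if call_llm(prompt).strip().upper().startswith("YES"):
                    conflicts.append((claim, old_claim))   # flag; B is a candidate for superseding
    return conflicts
```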
"The problem with RAG over chat logs is that it doesn't maintain knowledge — it just retrieves the most similar fragment. A real knowledge system deduplicates, reconciles, and synthesizes across every source. It's a wiki, curated by the LLM." — inspired by Andrej Karpathy's notes on LLM-maintained wikis
WikiLLM's web layer runs on Azure so your wiki is always available. Heavy LLM inference can stay wherever your hardware lives — a GPU at home, a workstation at the office, or any cloud you prefer. The two halves talk over an outbound-only secure tunnel. No VPN, no inbound firewall rules, no exposed ports.
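One plausible shape for that outbound-only tunnel is a worker on the GPU machine that long-polls the web layer for inference jobs and posts results back. The URL, endpoints, and token below are hypothetical placeholders, not WikiLLM's actual API; the point is that every connection originates from your hardware.

```python
import time
import requests

API = "https://example-wikillm-web.azurewebsites.net"   # hypothetical web-layer URL
TOKEN = "worker-auth-token"                              # hypothetical credential

def run_worker(run_inference):
    """Poll the web layer for jobs and push results back.

    Both requests originate from the GPU box, so only outbound HTTPS is needed:
    no VPN, no inbound firewall rules, no exposed ports. `run_inference` is
    whatever local function wraps your model.
    """
    headers = {"Authorization": f"Bearer {TOKEN}"}
    while True:
        resp = requests.get(f"{API}/jobs/next", headers=headers, timeout=60)
        if resp.status_code == 204:          # nothing queued yet
            time.sleep(2)
            continue
        job = resp.json()
        result = run_inference(job["prompt"])
        requests.post(
            f"{API}/jobs/{job['id']}/result",
            json={"output": result},
            headers=headers,
            timeout=60,
        )
```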
Your data stays yours.