Developed by Prathamesh Deshmukh
Your personal AI research lab. Bring your own keys for the web apps, or run completely offline with the desktop suite.
Using the Desktop App? Skip this step. No keys required.
Best general-purpose reasoning. A large 2M-token context window makes it perfect for long PDFs.
Lightning-fast inference for Llama 3 models. Ideal for chat.
Access thousands of open models. Great for specialized tasks and uncensored variants.
Agentic search engine. Breaks down queries into sub-tasks and searches Wikipedia, Papers, and the Web in parallel.
Secure document analyst. Loads PDFs directly into browser RAM (no uploads) for private Q&A and summarization.
Lightweight, multi-provider chat interface. Perfect for quick code snippets, translation, or drafting.
AI research scholar. A dedicated engine for finding academic papers, citations, and scholarly sources.
Visualize token probabilities and the LLM's thinking process in real time.
// 1. Clone & Install Dependencies
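A minimal sketch of this step, assuming a git checkout and a Node-based setup; the repository URL is a placeholder and the install command may differ for your stack:

git clone <repo-url> Kusanagi-AI   # <repo-url> is a placeholder; use the project's actual repository
cd Kusanagi-AI
npm install                        # assumption: Node toolchain; substitute your stack's install command if it differs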
// 2. Pull Models (Ollama)
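A hedged example of pulling the local models referenced on this page (Llama3/Qwen for the portable build, Mistral for the RAG agent); exact model tags on the Ollama registry may differ:

ollama pull llama3     # Meta Llama 3
ollama pull qwen2      # Qwen-family model; tag is an assumption
ollama pull mistral    # used by the desktop RAG agent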
Contains App + Dependencies + Models (Llama3/Qwen)
* Extract to C:/Kusanagi-AI on the target machine. No internet connection required.
NO API KEYS REQUIRED
Full desktop RAG agent. Uses local Ollama models (Llama3/Mistral). Perfect for sensitive documents on air-gapped networks.
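For illustration, a minimal sketch of the kind of local call such an agent relies on, using the standard Ollama HTTP API on localhost; the model tag and prompt are examples, not the app's internal interface. Nothing leaves the machine, which is what makes it suitable for air-gapped networks:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Summarize the key obligations in the loaded document.",
  "stream": false
}'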
NO API KEYS REQUIRED
System-tray chat assistant. Runs completely offline. Ideal for coding help or quick drafting without leaking data.