Average IDE (LocalDev) is an offline-capable, privacy-first Integrated Development Environment (IDE). It brings the power of Large Language Models (LLMs) directly to your local machine, ensuring that not a single byte of your code leaves your computer.
Built for developers who value security, speed, and autonomy, LocalDev indexes your project locally using a vector database to provide context-aware AI assistance—completely offline.
"A good writer possesses not only his own spirit but also the spirit of his friends." — Friedrich Nietzsche
- 🔒 Privacy-First: Zero telemetry. No cloud API calls. Your code stays yours.
- 🧠 Local RAG Pipeline: Intelligent code indexing with LanceDB for context-aware answers.
- 💬 AI Chat Interface: Integrated chat with Markdown rendering and syntax highlighting.
- ⚡ Offline-Capable: Runs entirely on your machine using Ollama.
- 📝 Monaco Editor: Professional-grade code editing experience.
- 🔄 Batch Scaffolding: Generate entire project structures with the "Architect" persona.
- 🛠️ MCP Integration: Extensible tool execution via Model Context Protocol.
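The Local RAG pipeline above can be sketched in miniature: chunks of code are embedded as vectors and a query is matched against them by cosine similarity. The toy 3-dimensional vectors and helper names below are purely illustrative stand-ins for real `nomic-embed-text` embeddings stored in LanceDB, not the project's actual API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, top_k=1):
    """Return the top_k (score, chunk) pairs most similar to the query."""
    scored = [(cosine_similarity(query_vec, vec), chunk) for chunk, vec in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]

# Toy index of (code chunk, embedding) pairs. In the real pipeline these
# embeddings would come from nomic-embed-text and live in LanceDB.
index = [
    ("def add(a, b): return a + b", [0.9, 0.1, 0.0]),
    ("def read_file(path): ...",    [0.1, 0.9, 0.2]),
]

query = [0.85, 0.15, 0.05]  # pretend embedding of "how do I add two numbers?"
best_score, best_chunk = retrieve(query, index)[0]
print(best_chunk)
```

The retrieved chunk is what gets stuffed into the LLM prompt as context, which is what makes the answers project-aware.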
Follow these steps to set up your environment.
Ensure you have the following installed:
- Node.js (v18 or higher)
- Python (v3.8 or higher)
- Ollama (for running local LLMs)
If you prefer using Docker to run Ollama, use the following command:
```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
1. Clone the Repository

   ```bash
   git clone https://github.com/UnsettledAverage73/Locally-AI-Integrated-IDE.git
   cd Locally-AI-Integrated-IDE
   ```

2. Install Root Dependencies

   ```bash
   npm install
   ```

3. Install Frontend Dependencies

   ```bash
   cd frontend
   npm install
   ```

4. Install Backend Dependencies

   ```bash
   cd ../backend
   python -m venv venv
   source venv/bin/activate  # On Windows use: venv\Scripts\activate
   pip install -r requirements.txt
   ```

5. Pull Required Models

   Ensure Ollama is running (`ollama serve` or via Docker), then pull the models:

   ```bash
   ollama pull qwen2.5:0.5b
   ollama pull nomic-embed-text
   ```
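Once the models are pulled, the backend can request embeddings from Ollama's local HTTP API (default `http://localhost:11434`). A minimal stdlib-only sketch, following Ollama's `/api/embeddings` request shape; the `embed()` call itself of course only works while Ollama is actually running:

```python
import json
import urllib.request

OLLAMA_EMBEDDINGS = "http://localhost:11434/api/embeddings"

def build_embedding_request(text, model="nomic-embed-text"):
    """Build the JSON payload Ollama expects for an embedding request."""
    return {"model": model, "prompt": text}

def embed(text, model="nomic-embed-text"):
    """POST the payload to a running Ollama instance; returns the vector."""
    data = json.dumps(build_embedding_request(text, model)).encode()
    req = urllib.request.Request(
        OLLAMA_EMBEDDINGS, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

# Build (but don't send) a payload, so this runs even without Ollama up.
payload = build_embedding_request("def add(a, b): return a + b")
print(json.dumps(payload))
```

With Ollama up, `embed("some code")` returns the vector that gets written into the local index.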
To start the application (Electron + Frontend + Backend) in one go:
```bash
# From the project root
npm start
```

This will launch the desktop application window.
- Change Models: Click the ⚙️ Settings icon to switch between installed Ollama models.
- Keybindings: Standard VS Code-like keybindings are supported.
We are actively developing LocalDev! Here's what's new and what's coming:
- Core Editor with Syntax Highlighting
- Local RAG (Retrieval-Augmented Generation)
- Chat Interface with History
- Model Switching Mechanism
- "Architect" Persona for Scaffolding
- Git Integration (Commits, Diffs)
- Multi-tab Editing
- Plugin System
- Integrated Debugger
To ensure everything is working correctly, you can run the test suite:
```bash
# Run backend tests
cd backend
python -m pytest
```

We welcome contributions! If you'd like to help improve LocalDev:
- Fork the repository.
- Create a feature branch (`git checkout -b feature/AmazingFeature`).
- Commit your changes (`git commit -m 'Add some AmazingFeature'`).
- Push to the branch (`git push origin feature/AmazingFeature`).
- Open a Pull Request.
Please read our CONTRIBUTING.md (coming soon) for details on our code of conduct.
For the local, privacy-first IDE experience, we have chosen Ollama as our primary inference engine.
- Privacy: Runs entirely offline on consumer hardware. No data leaves your machine.
- Simplicity: Zero-config setup for the end-user. Handles model quantization automatically.
- Efficiency: Optimized for single-user local development workflows.
Note: For future high-throughput Enterprise or Private Cloud deployments, we plan to support vLLM as an alternative high-performance backend.
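The inference path itself is just an HTTP call to the local Ollama server. A minimal stdlib-only sketch of a non-streaming request, following Ollama's `/api/generate` shape and using the model pulled above; as with embeddings, issuing the request requires Ollama to be running:

```python
import json
import urllib.request

OLLAMA_GENERATE = "http://localhost:11434/api/generate"

def build_generate_request(prompt, model="qwen2.5:0.5b"):
    """Payload for a single, non-streaming completion from Ollama."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="qwen2.5:0.5b"):
    """Send the prompt to a running Ollama instance; returns the response text."""
    data = json.dumps(build_generate_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_GENERATE, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Build (but don't send) a payload, so this runs even without Ollama up.
payload = build_generate_request("Explain: def add(a, b): return a + b")
print(payload["model"], payload["stream"])
```

Because everything goes through this local port, swapping in a different backend (such as the planned vLLM support) mainly means pointing the same request shape at a different server.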
- Author: @unsettledaverage73
- Support & FAQ: atharvabodade@gmail.com
- LinkedIn: https://www.linkedin.com/in/unsettledaverage73/
Special thanks to the open-source community, specifically the teams behind Ollama, Electron, and React.
Do share your opinion; honest feedback is always appreciated!
Enjoy Coding ❤
