AI Interview Copilot is a next-generation, real-time AI-powered assistant designed to streamline and enhance the technical interview process. Built for interviewers, hiring managers, and technical recruiters, this tool leverages state-of-the-art speech-to-text, large language models (LLMs), and retrieval-augmented generation (RAG) to provide instant, context-aware support during live interviews. The Copilot transcribes interviewer audio, analyzes candidate responses, and delivers intelligent, actionable suggestions, summaries, and follow-up questions—all in a clean, professional, and minimal UI.
AI Interview Copilot - Real-time transcription and AI-powered assistance
- Deepgram Integration: Captures and transcribes interviewer/system audio in real time using advanced speech recognition (a capture sketch follows this list).
- Screen Sharing Preview: Visual feedback for active screen/audio capture, with robust error handling and fallback logic.
- Speaker Labeling: Only "Interviewer" audio is transcribed and displayed, ensuring clarity and focus.
- Groq LLM Integration: Utilizes Groq's ultra-fast, high-accuracy LLMs (Llama 3) for instant reasoning, explanations, and follow-up questions.
- Intelligent Routing: The system first checks if the LLM can answer from its own knowledge. If not, it automatically triggers RAG agents to fetch relevant context from documents, web, or company knowledge bases.
- Progressive Enhancement: Users receive immediate AI responses, with additional sources and citations appended as soon as RAG completes, minimizing wait time.
- Contextual Search: When the LLM needs more information, RAG agents search internal documents, PDFs, and web sources to provide accurate, context-rich answers.
- Citations & Sources: All RAG-based responses include clear, structured citations for transparency and auditability.
- Clean Design: No gradients, cards, or unnecessary UI elements—just a focused, distraction-free workspace.
- Chat-Style Transcription: All interviewer speech is displayed in a chat interface, with clear timestamps and speaker labels.
- Screen Sharing: Prominent, resizable preview for screen/audio capture, with live status indicators.
- Error Handling: Graceful fallbacks for audio/video issues, API errors, and permission denials.
- Environment Variables: All API keys and sensitive configs are managed via .env files.
- No Candidate Audio: Only interviewer/system audio is captured, ensuring privacy and compliance.
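Below is a minimal sketch of how the capture and live-transcription pieces could fit together in the browser, assuming the `@deepgram/sdk` v3 live client; the function name, options, and wiring are illustrative, not the repo's actual code.

```typescript
// Illustrative sketch only -- not the repo's actual implementation.
import { createClient, LiveTranscriptionEvents } from "@deepgram/sdk";

export async function startInterviewerCapture(onTranscript: (text: string) => void) {
  // Ask the browser for the shared screen plus its audio (the interviewer/system side).
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });

  // Open a live transcription connection to Deepgram.
  const deepgram = createClient(process.env.NEXT_PUBLIC_DEEPGRAM_API_KEY!);
  const connection = deepgram.listen.live({
    model: "nova-2",        // assumed model; use whatever the project configures
    smart_format: true,
    interim_results: false,
  });

  connection.on(LiveTranscriptionEvents.Transcript, (event) => {
    const text = event.channel?.alternatives?.[0]?.transcript;
    if (text) onTranscript(text); // rendered as an "Interviewer" message in the chat UI
  });

  // Forward captured audio chunks to Deepgram as the recorder emits them.
  const recorder = new MediaRecorder(new MediaStream(stream.getAudioTracks()), {
    mimeType: "audio/webm",
  });
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) connection.send(e.data);
  };
  connection.on(LiveTranscriptionEvents.Open, () => recorder.start(250));

  return { stream, recorder, connection };
}
```

Note that capturing system audio via `getDisplayMedia` is browser-dependent (Chrome requires the user to tick "Share audio"), which is where the fallback logic mentioned above comes in.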
Building on this open-source prototype, I created BlitzQ - a comprehensive SaaS platform that takes interview preparation to the next level.
Your intelligent AI-powered interview copilot that provides real-time answers, helps you practice, and boosts your confidence. Ace every interview and land your dream job.
BlitzQ - Production-ready AI interview assistance platform
BlitzQ in action during a live technical interview - real-time AI suggestions and contextual help
BlitzQ's practice mode with curated interview questions and AI-generated example responses
BlitzQ Features:
- 🎯 Real-time AI Assistance: Get instant, contextual answers during live interviews
- 📚 Smart Knowledge Base: Access curated technical questions and best practices
- 🎓 Practice Mode: Simulate interviews with AI feedback and improvement suggestions
- 📊 Performance Analytics: Track your progress and identify areas for improvement
- 🔐 Enterprise Security: SOC2 compliant with end-to-end encryption
- 🌍 Multi-language Support: Available in 15+ languages for global candidates
```mermaid
flowchart TD
  A[Start Interview Session] --> B{Screen/Audio Capture}
  B -->|Grant Permission| C[Deepgram Transcription]
  C --> D[Transcript to Copilot UI]
  D --> E{AI Knowledge Check}
  E -- Known --> F[Groq LLM Immediate Response]
  E -- Need Context --> G[RAG Agents: Docs/Web Search]
  G --> H[Contextual Prompt to Groq LLM]
  F & H --> I[Streamed Response to User]
  H --> J[Show Citations/Sources]
  I --> K[End/Export Transcript]
```
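To make the "AI Knowledge Check" branch above concrete, here is a hedged sketch of the routing decision, assuming the `groq-sdk` client; `ragSearch` and the model id are placeholders, not the repo's actual exports.

```typescript
// Illustrative routing sketch; ragSearch and the model id are assumptions.
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

export async function answerQuestion(question: string) {
  // 1. Knowledge check: ask the LLM whether it can answer without extra context.
  const check = await groq.chat.completions.create({
    model: "llama3-70b-8192", // assumed model id
    messages: [
      {
        role: "system",
        content:
          "Answer the interview question if you are confident. " +
          "If you need external documents or web context, reply with exactly NEED_CONTEXT.",
      },
      { role: "user", content: question },
    ],
  });

  const draft = check.choices[0]?.message?.content ?? "";
  if (!draft.includes("NEED_CONTEXT")) {
    return { answer: draft, citations: [] }; // fast path: LLM-only answer
  }

  // 2. RAG fallback: fetch context from docs/web, then re-ask with that context.
  const context = await ragSearch(question); // hypothetical RAG orchestrator call
  const grounded = await groq.chat.completions.create({
    model: "llama3-70b-8192",
    messages: [
      { role: "system", content: "Answer using only the provided context. Cite sources." },
      { role: "user", content: `Context:\n${context.text}\n\nQuestion: ${question}` },
    ],
  });

  return {
    answer: grounded.choices[0]?.message?.content ?? "",
    citations: context.sources, // surfaced in the UI after the initial response
  };
}

// Placeholder type for the hypothetical RAG call above.
declare function ragSearch(q: string): Promise<{ text: string; sources: string[] }>;
```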
- Frontend: Next.js, React, Tailwind CSS
- Audio/Video: Web APIs for screen/audio capture, Deepgram SDK for transcription
- AI/LLM: Groq SDK for Llama 3 models, with intelligent routing between LLM and RAG
- RAG: Modular agents for document, PDF, and web search (Pinecone, Tavily, etc.)
- Backend: API routes for streaming LLM responses, RAG orchestration, and context management (a streaming route sketch follows this list)
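As a rough illustration of the streaming backend, the sketch below shows a Next.js route handler that re-exposes a streamed Groq completion as a plain-text HTTP stream so tokens reach the UI as they are generated; the route path and model id are assumptions, not necessarily what the repo uses.

```typescript
// app/api/chat/route.ts -- illustrative sketch of a streaming route handler.
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

export async function POST(req: Request) {
  const { question } = await req.json();

  // Ask Groq for a streamed completion so tokens arrive incrementally.
  const completion = await groq.chat.completions.create({
    model: "llama3-70b-8192", // assumed model id
    messages: [{ role: "user", content: question }],
    stream: true,
  });

  // Re-expose the token stream as a plain-text HTTP stream for the client.
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of completion) {
        const token = chunk.choices[0]?.delta?.content ?? "";
        if (token) controller.enqueue(encoder.encode(token));
      }
      controller.close();
    },
  });

  return new Response(body, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```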
```bash
git clone https://github.com/Vijaysingh1621/AI-powererd-interview-Assistant.git
cd ai-interview-copilot
```
```bash
npm install
# or
yarn install
```
Copy .env.example to .env and fill in your API keys (an example .env follows the variable list below):

```bash
cp .env.example .env
```
- GROQ_API_KEY (Groq LLM)
- NEXT_PUBLIC_DEEPGRAM_API_KEY (Deepgram)
- PINECONE_API_KEY, TAVILY_API_KEY, etc. (for RAG)
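A minimal example .env, assuming these variable names match your configuration (values are placeholders only):

```env
GROQ_API_KEY=your_groq_key
NEXT_PUBLIC_DEEPGRAM_API_KEY=your_deepgram_key
PINECONE_API_KEY=your_pinecone_key
TAVILY_API_KEY=your_tavily_key
```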
```bash
npm run dev
# or
yarn dev
```
Visit http://localhost:3000 to use the Copilot.
- Start Screen Sharing: Click "Connect" to begin capturing interviewer audio and screen.
- Live Transcription: The Copilot will transcribe all interviewer speech in real time.
- Ask Questions: Type or speak interview questions; the Copilot will provide instant AI-powered suggestions and follow-ups.
- Review Sources: If the AI needs more context, sources and citations will appear after the initial response.
- Export/Save: Download or copy chat transcripts for record-keeping or feedback.
- Intelligent LLM Routing: The Copilot uses a knowledge-check prompt to decide if the LLM can answer directly. If not, it triggers RAG for deeper context.
- Streaming Responses: All AI and RAG responses are streamed for minimal latency.
- Customizable Models: Easily switch between Groq, Gemini, or OpenAI by updating environment variables and config files.
- Extensible RAG Agents: Add new document or web search agents by extending the RAG orchestrator (see the agent sketch after this list).
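As a sketch of what a pluggable agent could look like, the interface and Tavily call below are illustrative assumptions, not the repo's actual API; verify the request shape against Tavily's current documentation.

```typescript
// rag-agent.ts -- hypothetical shape of a pluggable RAG agent.
export interface RagSource {
  title: string;
  url?: string;
  snippet: string;
}

export interface RagAgent {
  name: string;
  /** Return relevant passages for a question, or an empty array if none found. */
  search(question: string): Promise<RagSource[]>;
}

// Example: a web-search agent built on Tavily's REST search endpoint.
export const tavilyAgent: RagAgent = {
  name: "web-search",
  async search(question) {
    const res = await fetch("https://api.tavily.com/search", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ api_key: process.env.TAVILY_API_KEY, query: question }),
    });
    const data = await res.json();
    return (data.results ?? []).map((r: any) => ({
      title: r.title,
      url: r.url,
      snippet: r.content,
    }));
  },
};
```

New agents of this shape (PDF search, internal docs, vector search via Pinecone, and so on) could then be registered with the orchestrator and queried in parallel when the LLM requests more context.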
- /app - Next.js app, API routes, and main pages
- /components - UI components (recorder, chat, PDF manager, etc.)
- /lib - Core logic (transcription manager, LLM/RAG clients, utils)
- /public - Static assets
- /scripts - Setup and utility scripts
- /docs - Technical and integration documentation
- Fork the repo and create a feature branch.
- Make your changes with clear, well-documented code.
- Add/modify tests as needed.
- Submit a pull request with a detailed description.
- Audio/Screen Not Captured: Ensure browser permissions are granted for screen and audio.
- API Errors: Check your .env file for correct API keys and quotas.
- Slow Responses: RAG-based answers may take longer; LLM-only answers are near-instant.
- UI Issues: Clear browser cache or try a different browser.
MIT License. See LICENSE for details.
- Deepgram for real-time transcription
- Groq for ultra-fast LLMs
- Pinecone and Tavily for RAG/search
- Next.js, React, and Tailwind CSS for the frontend
Author: Vijay Singh 📧 Email: [email protected]
🔗 LinkedIn: https://www.linkedin.com/in/vijay-singh-b25483288/
🔗 Repo: https://github.com/Vijaysingh1621/AI-powererd-interview-Assistant.git
For support, feature requests, or enterprise solutions, please contact the maintainers or open an issue on GitHub.