Mitra is an advanced AI assistant that combines deep reasoning capabilities with emotional intelligence to provide thoughtful, supportive interactions through Telegram. Built with a local AI model - no external API calls required!
- Local AI Model: Runs entirely on your infrastructure using Microsoft Phi-3-mini (3.8B parameters)
- Deep Reasoning: Multi-step problem solving with chain-of-thought processing
- Emotional Intelligence: Sentiment analysis and emotion detection to adapt responses
- Crisis Detection: Identifies crisis situations and provides appropriate resources
- Safety Features: Content moderation and safety boundaries
- Conversation Memory: Maintains context across conversation history
- Rate Limiting: Built-in abuse prevention and rate limiting
- Self-Contained: No external API dependencies or costs
- Modular Architecture: Clean separation of concerns for maintainability
- Structured Logging: Comprehensive logging with correlation IDs
- Error Handling: Robust error handling with user-friendly messages
- Azure Optimized: Designed for Azure Container Apps deployment
- CI/CD Pipeline: Automated testing, building, and deployment via GitHub Actions
- Type Safety: Full type hints with mypy checking
- Containerized: Docker support with GPU acceleration
- Customizable: Can be fine-tuned on Indian language data
mitra/
├── core/ # Core AI intelligence engine
│ ├── engine.py # Main AI orchestration
│ ├── emotion_analyzer.py # Emotion detection
│ ├── safety_filter.py # Content safety
│ └── prompts.py # System prompts and personality
├── bot/ # Telegram bot interface
│ └── telegram_bot.py
├── models/ # Data models
│ ├── conversation.py
│ └── user.py
├── utils/ # Utilities
│ ├── logger.py # Structured logging
│ ├── error_handler.py # Error handling
│ └── rate_limiter.py # Rate limiting
└── config.py # Configuration management
- Python 3.11 or higher
- Telegram Bot Token (from @BotFather)
- 4GB+ RAM (8GB recommended)
- GPU optional but recommended for faster inference
- Clone the repository

```bash
git clone https://github.com/DenxVil/MitraAI.git
cd MitraAI
```
- Install dependencies

```bash
pip install -r requirements.txt
```
- Configure environment variables

```bash
cp .env.example .env
# Edit .env with your credentials
```

Required environment variables:
- `TELEGRAM_BOT_TOKEN`: Your Telegram bot token
- `LOCAL_MODEL_NAME`: Model name (default: `microsoft/Phi-3-mini-4k-instruct`)
- `LOCAL_MODEL_DEVICE`: Device to use (`cpu`, `cuda`, or `auto`)
- `LOCAL_MODEL_QUANTIZE`: Enable 4-bit quantization (`true`/`false`)
- Run the bot

```bash
python main.py
```

Note: On first run, the model will be downloaded (~4GB). This may take a few minutes.
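To show how the `LOCAL_MODEL_*` variables above fit together, here is a hypothetical sketch of how the engine might resolve its model-loading options from the environment. The function name and returned keys are illustrative, not the actual API of `mitra/core/engine.py`; in the real engine the resulting options would be passed to HuggingFace `transformers` (with 4-bit quantization handled via `BitsAndBytesConfig`).

```python
import os

def resolve_model_kwargs() -> dict:
    """Illustrative only: map LOCAL_MODEL_* env vars to loading options."""
    device = os.environ.get("LOCAL_MODEL_DEVICE", "auto")
    quantize = os.environ.get("LOCAL_MODEL_QUANTIZE", "false").lower() == "true"
    kwargs = {
        "model_name": os.environ.get(
            "LOCAL_MODEL_NAME", "microsoft/Phi-3-mini-4k-instruct"
        ),
        "device_map": device,  # "cpu", "cuda", or "auto"
    }
    if quantize:
        # In the real engine this would become a transformers
        # BitsAndBytesConfig(load_in_4bit=True, ...) object.
        kwargs["load_in_4bit"] = True
    return kwargs
```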
All configuration is managed through environment variables. See .env.example for available options:
| Variable | Description | Default |
|---|---|---|
| `ENVIRONMENT` | Environment (development/staging/production) | `development` |
| `LOG_LEVEL` | Logging level (DEBUG/INFO/WARNING/ERROR) | `INFO` |
| `MAX_CONVERSATION_HISTORY` | Max messages to keep in context | `10` |
| `RATE_LIMIT_MESSAGES_PER_MINUTE` | Rate limit per user | `20` |
| `ENABLE_CONTENT_MODERATION` | Enable safety filtering | `true` |
Run the test suite:
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=mitra --cov-report=html

# Run specific test file
pytest tests/unit/test_emotion_analyzer.py -v
```

Build the Docker image:

```bash
docker build -t mitra-ai .
```

Run the container:

```bash
docker run -d \
  --name mitra-ai \
  --env-file .env \
  mitra-ai
```

Create a docker-compose.yml:
```yaml
version: '3.8'
services:
  mitra:
    build: .
    env_file: .env
    restart: unless-stopped
```

Run with: `docker-compose up -d`
Mitra AI is optimized for deployment on Azure Container Apps. We provide a comprehensive step-by-step guide with screenshots for deploying via the Azure Portal (web interface).
See AZURE_DEPLOYMENT_GUIDE.md for:
- Complete Azure Portal (web interface) setup walkthrough
- Model training on Indian language data
- GPU acceleration configuration
- Cost optimization strategies
- Monitoring and troubleshooting
- Production best practices
- Create Azure resources

```bash
# Create resource group
az group create --name mitra-ai-rg --location eastus

# Create Container Apps environment
az containerapp env create \
  --name mitra-env \
  --resource-group mitra-ai-rg \
  --location eastus
```
- Configure GitHub Secrets
Add these secrets to your GitHub repository:
- `AZURE_CREDENTIALS`: Azure service principal credentials
- `AZURE_RESOURCE_GROUP`: Your resource group name
- `TELEGRAM_BOT_TOKEN`: Your Telegram bot token
- Deploy via GitHub Actions
Push to the main branch to trigger automatic deployment:
```bash
git push origin main
```

The GitHub Actions workflow will:
- Run tests and linting
- Build Docker image
- Push to GitHub Container Registry
- Deploy to Azure Container Apps
- `/start` - Start conversation with Mitra
- `/help` - Show help and available commands
- `/clear` - Clear conversation history
- `/status` - Show usage statistics
Problem Solving:
You: I'm struggling to decide between two job offers. Can you help me think through this?
Mitra: I'd be happy to help you think through this decision...
Emotional Support:
You: I'm feeling really stressed about my exams.
Mitra: I hear that you're feeling stressed about your exams. That's completely understandable...
Learning:
You: Can you explain how machine learning works?
Mitra: I'd be glad to explain machine learning! Let me break it down step by step...
All operations are logged with structured data including:
- Correlation IDs for request tracking
- User IDs (anonymized)
- Performance metrics
- Error details with stack traces
Logs are output in JSON format (production) or pretty-printed (development) for easy analysis.
The Docker container includes health checks for monitoring.
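The correlation-ID pattern described above can be sketched with stdlib `logging` and `contextvars`. The project itself uses `structlog`, so this formatter is only an illustrative stand-in: each log line is emitted as JSON carrying the event name, level, and the correlation ID bound to the current request context.

```python
import contextvars
import json
import logging
import uuid

# One correlation ID per request context; "-" when none is bound.
correlation_id = contextvars.ContextVar("correlation_id", default="-")

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON object (production-style output)."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "event": record.getMessage(),
            "level": record.levelname.lower(),
            "correlation_id": correlation_id.get(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("mitra")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Bind a fresh ID at the start of a request, then every log line carries it.
correlation_id.set(str(uuid.uuid4()))
logger.info("message_received")
```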
- Content Moderation: Filters harmful content
- Crisis Detection: Identifies crisis situations and provides resources
- Rate Limiting: Prevents abuse
- Data Privacy: Minimal data collection, no storage of sensitive information
- Secrets managed via environment variables
- No credentials in code or logs
- Regular security scanning via Trivy
- Non-root Docker user
```bash
# Format code
black mitra/ main.py

# Lint
flake8 mitra/ main.py --max-line-length=100

# Type check
mypy mitra/ main.py
```
- Create feature branch: `git checkout -b feature/my-feature`
- Implement changes with tests
- Run tests: `pytest`
- Format and lint: `black . && flake8`
- Create pull request
MitraAI/
├── .github/
│ └── workflows/ # CI/CD workflows
├── mitra/ # Main application package
│ ├── core/ # Core intelligence engine
│ ├── bot/ # Telegram bot interface
│ ├── models/ # Data models
│ ├── utils/ # Utilities
│ └── config.py # Configuration
├── tests/ # Test suite
│ ├── unit/ # Unit tests
│ └── integration/ # Integration tests
├── main.py # Application entry point
├── requirements.txt # Python dependencies
├── Dockerfile # Docker configuration
├── pyproject.toml # Project metadata
└── README.md # This file
- Modular Design: Separates concerns for easier testing and maintenance
- Emotion-First: Analyzes emotions before generating responses for empathetic interactions
- Safety-First: Multiple layers of safety checks and crisis detection
- Async by Default: Uses async/await for better performance
- Cloud-Native: Designed for containerized deployment on Azure
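The emotion-first principle can be illustrated with a deliberately simplified sketch: detect an emotion, then steer the response tone before generation. The lexicon, tone map, and function names here are invented for illustration; the real `emotion_analyzer.py` uses model-based sentiment analysis rather than keyword matching.

```python
# Hypothetical, simplified emotion-first pipeline (illustration only).
EMOTION_LEXICON = {
    "stressed": "stress", "anxious": "anxiety", "worried": "anxiety",
    "sad": "sadness", "happy": "joy", "excited": "joy", "angry": "anger",
}

TONE_BY_EMOTION = {
    "stress": "calming", "anxiety": "reassuring", "sadness": "empathetic",
    "anger": "de-escalating", "joy": "enthusiastic",
}

def detect_emotion(text: str) -> str:
    """Return the first emotion keyword found, or 'neutral'."""
    for word in text.lower().split():
        token = word.strip(".,!?")
        if token in EMOTION_LEXICON:
            return EMOTION_LEXICON[token]
    return "neutral"

def build_prompt(user_message: str) -> str:
    """Prefix the message with a tone hint derived from the detected emotion."""
    tone = TONE_BY_EMOTION.get(detect_emotion(user_message), "neutral")
    return f"[tone: {tone}] {user_message}"
```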
- Python 3.11+: Modern Python with excellent AI/ML ecosystem
- Microsoft Phi-3-mini: 3.8B parameter model, efficient and capable
- HuggingFace Transformers: Industry-standard model inference
- 4-bit Quantization: Reduces memory usage by 75% with minimal quality loss
- python-telegram-bot: Robust, well-maintained Telegram library
- structlog: Structured logging for better observability
- Pydantic: Data validation and settings management
- Docker: Consistent deployment across environments
- No API Costs: Run indefinitely without per-request charges
- Data Privacy: All processing happens on your infrastructure
- No Provider Rate Limits: Throughput is bounded only by your hardware, not by a third-party API quota
- Customizable: Fine-tune on your specific data (e.g., Indian languages)
- Offline Capable: Works without internet connectivity
- Predictable Performance: No external service dependencies
- Core AI engine with emotion detection
- Telegram bot interface
- Safety and moderation
- Basic deployment pipeline
- Persistent storage (PostgreSQL/MongoDB)
- Advanced conversation memory
- Multi-language support
- Voice message support
- Web dashboard
- Fine-tuned models for emotion detection
- Advanced analytics and insights
- User personalization
- Integration with other platforms
Bot not responding:
- Check that `TELEGRAM_BOT_TOKEN` is correct
- Verify required environment variables are set in `.env`
- Check logs for errors: `docker logs mitra-ai`
Rate limit errors:
- Adjust `RATE_LIMIT_MESSAGES_PER_MINUTE` in `.env`
- Check if user is being rate limited in logs
AI generation fails:
- Verify the model downloaded successfully and `LOCAL_MODEL_NAME` is correct
- Check available RAM/VRAM; enable `LOCAL_MODEL_QUANTIZE=true` to reduce memory usage
- Review error logs for specific issues
This project is available for educational and personal use.
- Built with Microsoft Phi-3-mini and HuggingFace Transformers
- Telegram integration via python-telegram-bot
- Inspired by the vision of emotionally intelligent AI assistants
For issues, questions, or contributions:
- Open an issue on GitHub
- Check existing documentation
- Review logs for error details
Note: Mitra is an AI assistant and should not replace professional mental health support, medical advice, or other professional services. For emergencies, always contact appropriate professionals.