AI/ML Engineer • Open Source Contributor
Building production-grade multi-agent systems and AI infrastructure. Developing scalable ML pipelines and autonomous reasoning systems for real-world problems.
I architect and ship multi-agent systems, distributed AI infrastructure, and production ML pipelines. I work with standardized protocols like MCP to enable agent interoperability, optimize inference at scale, and solve real-world problems through autonomous reasoning.
- Multi-Agent Systems & MCP: Build Agent-to-Agent (A2A) communication patterns, implement Model Context Protocol (MCP) for standardized tool interoperability, design agentic loop architectures for autonomous reasoning at scale.
- AI Infrastructure: Develop distributed inference systems, retrieval pipelines, and real-time data systems with latency optimization and production robustness.
- Production ML: Ship end-to-end ML systems with evaluation frameworks, automated deployment, and monitoring strategies.
- Open Source: Contribute to production AI systems, build reusable frameworks, and maintain code for real-world usage.
Python • LangChain • SerpAPI • Gradio • OpenAI/NVIDIA LLMs
What I Built: Hybrid RAG (Retrieval-Augmented Generation) pipeline that integrates real-time web search (SerpAPI) with LLMs to reduce hallucinations by grounding answers in live results. Built an interactive Gradio UI for A/B testing RAG vs. baseline LLM outputs.
Technical Implementation:
- Architected LangChain orchestration layer for multi-step workflows
- Integrated SerpAPI for live web search with robust parsing and error handling
- Implemented functools-based caching strategy for query optimization
- Built Gradio comparative UI for side-by-side response analysis
- Designed environment-based secret management (no hardcoded keys)
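The functools-based caching strategy above can be sketched with `functools.lru_cache`; `cached_search` and the query normalization step are hypothetical stand-ins for the project's actual SerpAPI wrapper.

```python
import functools

def normalize(query: str) -> str:
    # Collapse whitespace and lowercase so equivalent queries share a cache entry.
    return " ".join(query.lower().split())

@functools.lru_cache(maxsize=256)
def cached_search(query: str) -> str:
    # Placeholder for the real SerpAPI call; returns a canned snippet here.
    return f"results for: {query}"

def search(query: str) -> str:
    # Normalize before hitting the cache so "AI  News" and "ai news" dedupe.
    return cached_search(normalize(query))
```

Normalizing before the cache lookup keeps the hit rate high without persisting anything to disk, which fits a stateless Lambda-style deployment.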
Results: Production-ready system with comprehensive documentation (README, WALKTHROUGH.md), deployment guides for AWS Lambda and Hugging Face Spaces, and security best practices.
Python 3.12+ • Pydantic • Async/Await • Google ADK • MCP • Agent-to-Agent Protocol
What I Built: Enterprise-scale multi-agent system for GitHub repository analysis. Implemented primary orchestrator agent that dynamically spawns specialized Debug Agents via A2A protocol. Integrated Model Context Protocol (MCP) for standardized tool communication across agent ecosystem.
Technical Implementation:
- Implemented MCP Specification: Built MCP server for standardized tool/resource sharing between orchestrator and specialist agents, enabling interoperable multi-agent workflows
- Agent Delegation System: Designed A2A protocol for agent spawning, task assignment, and result aggregation with proper error handling and retry logic
- Multi-LLM Support: Built abstraction layer supporting OpenAI, Google Gemini, local Ollama/vLLM with unified interface
- Production Architecture: Strict typing with Pydantic, async/await concurrency, CLI interface, environment configuration, structured logging
- Tool Integration: Integrated GitHub API with MCP-compliant tool protocol for code analysis, bug detection, security scanning
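The delegation pattern above (spawning, task assignment, aggregation, retries) can be sketched with stdlib async/await; plain dataclasses stand in for the project's Pydantic models, and `debug_agent` is a hypothetical placeholder for a real MCP-backed specialist agent.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Task:
    # In the project these would be Pydantic models with validation.
    repo: str
    kind: str  # e.g. "bug-scan", "security-scan"

@dataclass
class Result:
    task: Task
    output: str

async def debug_agent(task: Task) -> Result:
    # Stand-in for a spawned Debug Agent; the real one calls an LLM via MCP tools.
    await asyncio.sleep(0)
    return Result(task=task, output=f"{task.kind} complete for {task.repo}")

async def delegate(task: Task, retries: int = 2) -> Result:
    # Retry loop with linear backoff around the agent call.
    for attempt in range(retries + 1):
        try:
            return await debug_agent(task)
        except Exception:
            if attempt == retries:
                raise
            await asyncio.sleep(0.1 * (attempt + 1))
    raise RuntimeError("unreachable")

async def orchestrate(repo: str) -> list[Result]:
    # Fan out specialist tasks concurrently and aggregate their results.
    tasks = [Task(repo, k) for k in ("bug-scan", "security-scan")]
    return await asyncio.gather(*(delegate(t) for t in tasks))
```

`asyncio.gather` preserves task order, so the orchestrator can aggregate results positionally without extra bookkeeping.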
Results: Deployable agent framework with extensible architecture. Demonstrated sophisticated coordination patterns and standardized protocol implementation.
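The multi-LLM abstraction layer mentioned above can be sketched with a `typing.Protocol` as the unified interface; `EchoBackend` and `summarize` are hypothetical examples, with real OpenAI/Gemini/Ollama/vLLM clients slotting in behind the same protocol.

```python
from typing import Protocol

class LLMClient(Protocol):
    # Unified interface; concrete backends (OpenAI, Gemini, Ollama/vLLM) implement it.
    def complete(self, prompt: str) -> str: ...

class EchoBackend:
    # Trivial stand-in backend used here instead of a real API client.
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize(client: LLMClient, text: str) -> str:
    # Callers depend only on the protocol, so backends are swappable at runtime.
    return client.complete(f"Summarize: {text}")
```

Because `Protocol` uses structural typing, backends need no shared base class, which keeps each provider integration independent.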
- Agent Protocols & Frameworks: Model Context Protocol (MCP), LangChain, Pydantic, Google ADK, Ollama, vLLM
- Tools & Libraries: NumPy, Apache Spark, functools, async/await
- Vector Databases: Pinecone, ChromaDB, FAISS
- Evaluation & Monitoring: RAGAS, MLflow, structured logging, error tracking
- Credly Certifications – AI credentials
- Google Skills Profile – Google training
- OpenAI Developer Community – Active member
- Google Developer Group – Community contributor
Seeking internship or junior engineer roles in AI/ML systems engineering. Open to roles focused on multi-agent systems, inference infrastructure, or production ML platforms.
