
👋 Hi, I'm Vishal Choudhari

🎓 Ph.D. Candidate, Electrical Engineering @ Columbia University

🎙️ Speech • 🎧 Audio • 🤖 AI/ML/LLMs • 🧠 Brain–Computer Interfaces • 🧬 Health Sensing


⚡ About Me

I’m a 6th-year Ph.D. candidate in Electrical Engineering at Columbia University, advised by Prof. Nima Mesgarani. My research builds brain-controlled hearing systems that decode neural signals in real time to identify which talker a listener is focusing on — and selectively enhance that voice in noisy environments.

I bring end-to-end experience across experiment design, neural & audio data processing, and ML model development for real-time inference. My work bridges signal processing, auditory neuroscience, and machine learning, and I’m now exploring how foundation models and LLMs can augment human perception.
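The attention-decoding idea described above is often implemented with a stimulus-reconstruction ("backward model") approach: a linear decoder maps EEG to a speech envelope, and the talker whose envelope best correlates with the reconstruction is taken to be the attended one. Below is a minimal toy sketch of that idea with fully synthetic data; all variable names are illustrative and this is not the actual research pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: two talkers' speech envelopes
n_samples = 2000
env_a = np.abs(rng.standard_normal(n_samples))   # attended talker's envelope
env_b = np.abs(rng.standard_normal(n_samples))   # ignored talker's envelope

# Simulate EEG as a noisy linear mixture driven by the attended envelope
n_channels = 8
mixing = rng.standard_normal(n_channels)
eeg = np.outer(mixing, env_a) + 0.5 * rng.standard_normal((n_channels, n_samples))

# Backward model: least-squares decoder mapping EEG -> speech envelope.
# In practice the decoder is trained on separate data; here it is fit on
# the same segment purely for illustration.
X = eeg.T                                        # (time, channels)
w, *_ = np.linalg.lstsq(X, env_a, rcond=None)
reconstructed = X @ w

def corr(x, y):
    """Pearson correlation between two 1-D signals."""
    return np.corrcoef(x, y)[0, 1]

# Attention decision: pick the talker whose envelope correlates best
# with the reconstructed envelope.
attended = "A" if corr(reconstructed, env_a) > corr(reconstructed, env_b) else "B"
print(attended)  # prints "A"
```

A real system would add temporal lags to the decoder, train and evaluate on separate segments, and run the correlation over a sliding window for real-time tracking.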


💡 Skills

  • ⚙️ Signal Processing: time-series analysis, multimodal sensor fusion
  • 🔊 Speech & Audio ML: enhancement, extraction, noise-cancellation
  • 🤖 AI & Data Science: PyTorch, Python, Transformers, LLMs, RAG
  • 🧬 Health Sensing: physiological signals (EEG, iEEG, EOG)
  • 🧠 Brain–Computer Interfaces (BCI): neural decoding, auditory attention tracking

🧩 Experience

  • Meta Reality Labs · Research Intern (2025)
    → Built multimodal (audio + IMU + video) sensing pipelines for Ray-Ban AI Display Glasses

  • Bose Corporation · Research Intern (2024)
    → Prototyped feed-forward noise-cancellation algorithms

  • Columbia University · Ph.D. Researcher
    → Designed, trained, and deployed real-time brain–audio ML models enabling intelligent, adaptive hearing systems


🧰 Tech Stack

Python · PyTorch · MATLAB · NumPy · SciPy · Transformers · Librosa · Torchaudio · ONNX


🌐 Connect

🌎 Website • 💼 LinkedIn • 🧑‍💻 Google Scholar • ✉️ Email

📌 Pinned Repositories

  1. Google-Resonance-Audio-Spatializer

    Spatializes sound sources using HRTFs, reverberation, and shoebox acoustic models. Built with Google’s Resonance Audio SDK for the Web and the Web Audio API.

    JavaScript

  2. Beamforming-LLM-GenAI

    Semantic recall system for multi-speaker environments using beamforming, Whisper ASR, and retrieval-augmented generation with LLMs.

    Jupyter Notebook

  3. Brain-Controlled-Selective-Hearing

    Forked from naplab/AAD-MovingSpeakers

    End-to-end system that leverages brain signals to control a binaural speech separation model, selectively amplifying the talker a listener focuses on. Advances real-world auditory attention decodin…

    MATLAB

  4. Fine-Tuning-LLM-Transformer-FLAN-T5

    Fine-tuning and LoRA (PEFT) adaptation of Flan-T5 for dialogue summarization on the DialogSum dataset. A fully reproducible Hugging Face Trainer pipeline with ROUGE evaluation.

    Python

  5. Real-Time-Neural-Net-Inference-Demo

    Real-time neural network inference and visualization on streaming time-series data, featuring STFT plots and custom ring buffers.

    MATLAB

  6. Seq2Seq-Convolutional-Recurrent-Network

    End-to-end PyTorch implementation of a Convolutional Recurrent Network (CRN) for seq-to-seq modeling in the time–frequency domain. Incorporates causal convolutions, recurrent temporal modeling, and…

    Python