Badri-Narayanan-Ramesh/Deep-Learning-for-American-Sign-Language-Detection-ASL

EE 541 – A Computational Introduction to Deep Learning: Fall 2024

CV for American Sign Language Detection

This repository contains the code for a web application built with TensorFlow and Gradio for interactive American Sign Language (ASL) gesture recognition. The application allows users to upload images or use real-time webcam input to detect ASL gestures and provides predictions with confidence scores.

In addition to the web application, the repository includes a Jupyter notebook with a comparative study of benchmark computer vision (CV) models for ASL recognition, aimed at identifying the best model for real-time hand gesture classification.
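The upload-and-predict flow described above can be sketched as follows. This is an illustrative sketch, not the repository's code: `model_predict` is a stub standing in for the trained TensorFlow model, and the A–Z label set is an assumption (the actual class list may also include classes such as "space" or "nothing").

```python
import numpy as np

# Assumed label set: the static ASL alphabet A-Z (the repository's actual
# class list may differ).
ASL_LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale pixels to [0, 1]; the real app would also resize to the model's input shape."""
    return image.astype(np.float32) / 255.0

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw scores into a probability distribution."""
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def model_predict(image: np.ndarray) -> np.ndarray:
    """Stub standing in for the trained TensorFlow model's forward pass."""
    rng = np.random.default_rng(0)  # fixed seed so the sketch is deterministic
    return rng.normal(size=len(ASL_LABELS))

def predict_gesture(image: np.ndarray, top_k: int = 3) -> dict:
    """Return the top-k labels with confidence scores, as displayed in the Gradio UI."""
    probs = softmax(model_predict(preprocess(image)))
    top = np.argsort(probs)[::-1][:top_k]
    return {ASL_LABELS[i]: float(probs[i]) for i in top}
```

In the real application a function like `predict_gesture` would be registered with a `gradio.Interface` (image input, label output), which handles both uploaded images and webcam frames.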

Predicted Outputs from Gradio

A short video was recorded in the Gradio web application, and the same video was processed for prediction.

Video 1

Video 2

License

This project is licensed under the MIT License - see the LICENSE file for details.

About

This repository detects American Sign Language (ASL) gestures using deep learning models in PyTorch, including a custom CNN, VGG16, GoogLeNet, ResNet50, MobileNetV2, and DinoV2. It performs a comparative study across these models and additionally trains an SVM on DinoV2 features. The project also features a web application built with TensorFlow and Gradio for interactive ASL recognition.
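The SVM-on-DinoV2 approach mentioned above is a two-stage pipeline: a frozen backbone turns each image into a feature vector, and a lightweight classifier is fitted on those vectors. The sketch below is illustrative only: `extract_features` is a stand-in for DinoV2 (whose real embeddings are hundreds of dimensions), and a toy nearest-centroid classifier replaces the SVM so the sketch stays dependency-free.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen DinoV2 backbone: map an image to a small
    feature vector (here, simple pixel statistics for illustration)."""
    img = image.astype(np.float32) / 255.0
    return np.array([img.mean(), img.std(), img.max(), img.min()])

class NearestCentroid:
    """Toy classifier replacing the SVM: assign each sample to the class
    whose mean feature vector is closest."""

    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {
            c: np.mean([x for x, label in zip(X, y) if label == c], axis=0)
            for c in self.classes_
        }
        return self

    def predict(self, x):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))
```

In the actual study, the fitting step would use an SVM (e.g. a linear kernel) on DinoV2 embeddings extracted once per training image, since the backbone stays frozen.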
