A real-time sign language translation system built with computer vision and machine learning. It combines hand tracking and gesture recognition to translate American Sign Language (ASL) letters into text.
Demo video: ASLTranslator.3.mp4
- Real-time hand tracking and detection
- Sign language gesture recognition
- Support for ASL letters (currently A, B, C)
- Live video feed with visual feedback
- Real-time predictions with confidence scores
- Python 3.x
- OpenCV (cv2)
- cvzone
- TensorFlow/Keras
- NumPy
- Clone this repository:
  ```bash
  git clone [this-repository-url]
  cd sign_language_translator
  ```
- Install the required dependencies:
  ```bash
  pip install opencv-python cvzone tensorflow numpy
  ```
- Make sure you have a webcam connected to your computer.
- Run the main application:
  ```bash
  python dataCollection.py
  ```
- Position your hand in front of the camera.
- The system will detect your hand gestures and display the corresponding ASL letter prediction.
The application uses the following components (a minimal sketch of the full pipeline follows this list):
- Hand Detection: Uses cvzone's HandTrackingModule to detect and track hands in real time.
- Image Processing:
- Crops and processes the hand region
- Normalizes the image to a standard size
- Maintains aspect ratio for consistent recognition
- Classification: Uses a trained Keras model to classify hand gestures into corresponding ASL letters.
- Visual Feedback: Displays the recognized letter and bounding box around the detected hand.
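The sketch below wires these components together using cvzone's HandTrackingModule. The constants `IMG_SIZE` and `OFFSET`, the window names, and the quit key are illustrative choices, not values taken from this project's source:

```python
import cv2
import numpy as np
from cvzone.HandTrackingModule import HandDetector

IMG_SIZE = 300  # assumed side length of the normalized square image
OFFSET = 20     # assumed padding around the detected hand, in pixels

cap = cv2.VideoCapture(0)
detector = HandDetector(maxHands=1)

while True:
    success, img = cap.read()
    if not success:
        break
    hands, img = detector.findHands(img)  # also draws landmarks and a bounding box

    if hands:
        x, y, w, h = hands[0]["bbox"]
        # Crop the hand region with some padding, clamped to the frame borders
        crop = img[max(0, y - OFFSET):y + h + OFFSET,
                   max(0, x - OFFSET):x + w + OFFSET]

        if crop.size:
            # Paste the crop onto a white square so the aspect ratio is preserved
            white = np.full((IMG_SIZE, IMG_SIZE, 3), 255, np.uint8)
            ch, cw = crop.shape[:2]
            if ch > cw:   # tall crop: fit the height, center horizontally
                new_w = int(cw * IMG_SIZE / ch)
                resized = cv2.resize(crop, (new_w, IMG_SIZE))
                x0 = (IMG_SIZE - new_w) // 2
                white[:, x0:x0 + new_w] = resized
            else:         # wide crop: fit the width, center vertically
                new_h = int(ch * IMG_SIZE / cw)
                resized = cv2.resize(crop, (IMG_SIZE, new_h))
                y0 = (IMG_SIZE - new_h) // 2
                white[y0:y0 + new_h, :] = resized
            cv2.imshow("Normalized hand", white)

    cv2.imshow("Camera", img)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Normalizing onto a fixed white square keeps the gesture's proportions stable, so the classifier sees the same shape regardless of how the bounding box is framed.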
The system uses a pre-trained model stored in Model/keras_model.h5. The model was trained on ASL letter gestures and can recognize letters A, B, and C.
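For the classification step, a common approach with this stack is cvzone's Classifier wrapper around the Keras model. The labels file Model/labels.txt and the sample image path below are assumptions for illustration, not files confirmed by this repository:

```python
import cv2
from cvzone.ClassificationModule import Classifier

LABELS = ["A", "B", "C"]  # letters the bundled model was trained on

# Model/labels.txt is assumed to sit next to keras_model.h5, as in a
# typical Teachable Machine export; adjust the path if yours differs.
classifier = Classifier("Model/keras_model.h5", "Model/labels.txt")

img = cv2.imread("hand_sample.jpg")  # hypothetical normalized hand image
prediction, index = classifier.getPrediction(img, draw=False)
print(f"Predicted letter: {LABELS[index]} (confidence {prediction[index]:.2f})")
```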