Case Study

Signify

Web app that captures sign-language gestures through a webcam, renders the recognized text instantly, and speaks the translation aloud to bridge communication gaps.

TensorFlow, PyTorch, Flask, React, Tailwind CSS, Web Speech API, WebSockets

Overview

Signify helps people who do not know sign language understand conversations by translating gestures into spoken words in real time.

Problem

  • Non-signers struggle to understand sign language in daily interactions.
  • Many translation tools require specialized devices.
  • The Deaf community needs inclusive, real-time communication aids.

Solution

  • Webcam pipeline captures gestures without extra hardware.
  • TensorFlow models classify signs and display text immediately.
  • Web Speech API vocalizes translations for hearing participants; a browser-side sketch of the full pipeline follows this list.
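
A minimal browser-side sketch of that pipeline, assuming the Flask backend exposes a WebSocket endpoint (e.g. via an extension such as flask-sock); the `/translate` path, message shape, and element IDs here are hypothetical, not Signify's actual API:

```ts
// Hypothetical client pipeline: webcam frame -> WebSocket -> predicted text -> speech.
const video = document.querySelector<HTMLVideoElement>("#webcam")!;
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d")!;
const socket = new WebSocket("ws://localhost:5000/translate"); // assumed endpoint

async function start(): Promise<void> {
  // Plain webcam access; no specialized hardware required.
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  sendFrame();
}

function sendFrame(): void {
  // Encode the current frame as JPEG and stream it to the classifier.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  canvas.toBlob((blob) => {
    if (blob && socket.readyState === WebSocket.OPEN) socket.send(blob);
    requestAnimationFrame(sendFrame); // keep the capture loop running
  }, "image/jpeg", 0.7);
}

socket.onmessage = (event) => {
  // Assumed reply shape: { "text": "recognized phrase" }.
  const { text } = JSON.parse(event.data) as { text: string };
  document.querySelector("#caption")!.textContent = text; // render text instantly
  speechSynthesis.speak(new SpeechSynthesisUtterance(text)); // Web Speech API
};

start();
```

In practice the send rate would be throttled rather than tied to requestAnimationFrame, but the shape of the loop is the same.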

Research & Techniques

  • Curated and augmented sign-language datasets.
  • Used MediaPipe to stabilize hand and pose tracking; a smoothing sketch follows this list.
  • Optimized end-to-end inference latency to below 300 milliseconds.
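
One common way to stabilize noisy per-frame landmarks is exponential moving-average smoothing. A minimal sketch, assuming MediaPipe-style landmarks as arrays of normalized `{x, y, z}` points; the `alpha` value is illustrative, not Signify's tuned constant:

```ts
// EMA smoothing for MediaPipe-style landmarks (the landmark shape is an assumption).
interface Landmark { x: number; y: number; z: number; }

function smoothLandmarks(
  previous: Landmark[] | null,
  current: Landmark[],
  alpha = 0.6, // higher = more responsive, lower = more stable (illustrative)
): Landmark[] {
  if (!previous || previous.length !== current.length) return current;
  return current.map((point, i) => ({
    x: alpha * point.x + (1 - alpha) * previous[i].x,
    y: alpha * point.y + (1 - alpha) * previous[i].y,
    z: alpha * point.z + (1 - alpha) * previous[i].z,
  }));
}

// Usage: run each frame's raw landmarks through the filter before classification.
let smoothed: Landmark[] | null = null;
function onFrame(raw: Landmark[]): Landmark[] {
  smoothed = smoothLandmarks(smoothed, raw);
  return smoothed;
}
```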

Results

  • Live demos translated everyday phrases accurately.
  • Mentors endorsed the accessibility impact of the prototype.
  • Documented plan to expand vocabulary with community input.

Key Features

  • Real-time webcam gesture capture with pose estimation
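
A sketch of how such a capture loop can be wired with the legacy `@mediapipe/hands` JavaScript solution; the option values and the `classifyGesture` hook are illustrative assumptions, not Signify's exact setup:

```ts
// Real-time hand tracking driving a downstream gesture classifier.
import { Hands, Results } from "@mediapipe/hands";
import { Camera } from "@mediapipe/camera_utils";

const video = document.querySelector<HTMLVideoElement>("#webcam")!;

const hands = new Hands({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
});
hands.setOptions({
  maxNumHands: 2,
  modelComplexity: 1,          // accuracy/latency trade-off
  minDetectionConfidence: 0.7, // illustrative thresholds
  minTrackingConfidence: 0.7,
});
hands.onResults((results: Results) => {
  // 21 landmarks per detected hand, in normalized image coordinates.
  for (const landmarks of results.multiHandLandmarks ?? []) {
    classifyGesture(landmarks);
  }
});

// Camera utility drives per-frame inference against the video element.
const camera = new Camera(video, {
  onFrame: async () => { await hands.send({ image: video }); },
  width: 640,
  height: 480,
});
camera.start();

// Placeholder: the real app would feed these landmarks to the sign classifier.
function classifyGesture(landmarks: Array<{ x: number; y: number; z: number }>): void {}
```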

Tech Stack

TensorFlow, PyTorch, Flask, React, Tailwind CSS, Web Speech API, WebSockets