Sign language is a crucial mode of communication for the Deaf and Hard-of-Hearing (DHH) community. However, sign language content in video archives is rarely indexed in a structured, accessible way, making it difficult to search, analyze, or use for linguistic research and education. This proposal aims to develop a computer vision-based system that automatically detects, tracks, and analyzes sign language motion in archived video, enabling efficient retrieval and study of signed content.
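As a rough illustration of the detection stage, the sketch below scans a clip frame by frame and flags frames with visible hand activity as candidate signing segments. It is a minimal sketch, not the proposed method: it assumes MediaPipe Holistic hand landmarks are a usable proxy for signing activity, and the file name archive_clip.mp4 is a placeholder.

```python
import cv2
import mediapipe as mp

# Hypothetical input file; any archived clip works here.
VIDEO_PATH = "archive_clip.mp4"

mp_holistic = mp.solutions.holistic

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if metadata is missing
signing_times = []

with mp_holistic.Holistic(static_image_mode=False, model_complexity=1) as holistic:
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # Treat any frame with a detected hand as candidate signing activity.
        if results.left_hand_landmarks or results.right_hand_landmarks:
            signing_times.append(frame_idx / fps)
        frame_idx += 1

cap.release()
print(f"{len(signing_times)} candidate signing frames detected")
```

In a full system, these per-frame detections would be smoothed into continuous segments and passed to a recognition model rather than reported directly.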
Building on this, the proposed system combines computer vision and deep learning to provide automatic recognition, annotation, and search of signed content in video archives, offering valuable tools for researchers, educators, and the DHH community.
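To make recognized signs searchable, annotations could be stored as time-stamped segments and grouped into an inverted index keyed by sign label. The sketch below shows one such annotation index; the SigningSegment schema and the gloss labels are illustrative assumptions, not a committed design.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SigningSegment:
    video_id: str    # identifier of the archived clip
    start_sec: float # segment start time within the clip
    end_sec: float   # segment end time within the clip
    gloss: str       # hypothetical recognized sign label

def build_index(segments):
    """Group segments by gloss so archive queries can look up clips by sign."""
    index = {}
    for seg in segments:
        index.setdefault(seg.gloss, []).append(asdict(seg))
    return index

# Example annotations (fabricated for illustration only).
segments = [
    SigningSegment("clip_001", 12.4, 13.1, "THANK-YOU"),
    SigningSegment("clip_007", 3.0, 3.8, "THANK-YOU"),
]
print(json.dumps(build_index(segments), indent=2))
```

A query for a gloss then returns every archived clip and time range where that sign was recognized, which is the retrieval behavior the proposal targets.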