Introduction
Technology has revolutionized the way we communicate, breaking down barriers across languages and enabling greater connection and understanding, ultimately leading to more inclusion. Together with the American Society for Deaf Children (ASDC), we launched GiveAHand.ai with the aim of building the world’s largest open-source library of fully tagged hand images to help build better hand models.
Live audio transcription and translation tools have limitations for the deaf and hard-of-hearing because sign language combines fast-paced hand gestures, facial expressions, and full-body movements. While machine learning models can handle facial expressions and body movements, detecting hand and finger movements remains a challenge. AI is democratizing access to data, opening it up to countless new uses. But because most available tools are trained on pre-existing data and images, it is difficult to build useful machine learning models when suitable training material isn’t readily available.
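To make the hand-detection challenge concrete, here is a minimal sketch that runs MediaPipe Hands, a widely used open-source hand-landmark model, on a single image. It is one example of the kind of model a larger, more diverse hand dataset could help improve; the image path is a placeholder, and the snippet assumes the mediapipe and opencv-python packages are installed.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# static_image_mode=True tells the model to treat each image independently
# rather than tracking hands across video frames.
with mp_hands.Hands(static_image_mode=True,
                    max_num_hands=2,
                    min_detection_confidence=0.5) as hands:
    image = cv2.imread("hand.jpg")  # placeholder path
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            # Each detected hand yields 21 landmarks with normalized
            # x, y coordinates and a relative depth z.
            for i, lm in enumerate(hand_landmarks.landmark):
                print(f"landmark {i}: x={lm.x:.3f} y={lm.y:.3f} z={lm.z:.3f}")
    else:
        print("No hands detected")
```

Models like this tend to struggle with unusual hand shapes, skin tones, lighting, and occlusions, which is exactly the gap a crowdsourced, diverse image library aims to close.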
Launched to celebrate American Sign Language Day (April 15), GiveAHand.ai is using tech for good. One hundred percent crowdsourced, the data collected on the platform will form a diverse dataset of hands: varied shapes, colors, backgrounds, and gestures. Now anyone can put their hands to good use by contributing and uploading images, helping to build an image library that will help unlock sign language. Researchers can then download and use these fully tagged images to improve their machine learning models, ultimately enabling detection and translation of the full spectrum of sign language.
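The source does not specify the download format, but a typical tagged-image dataset pairs each image with a metadata file. The sketch below assumes a hypothetical layout of JPEG images with JSON sidecars; the directory name and tag fields are invented for illustration, and a researcher would adapt them to whatever format the actual download uses.

```python
import json
from pathlib import Path

from PIL import Image


def load_tagged_hands(root: str):
    """Load (image, tags) pairs from a hypothetical download layout:
    each IMG.jpg sits next to an IMG.json sidecar holding its tags,
    e.g. {"gesture": "open_palm", "background": "outdoor"}."""
    samples = []
    for tag_file in Path(root).glob("*.json"):
        tags = json.loads(tag_file.read_text())
        image = Image.open(tag_file.with_suffix(".jpg")).convert("RGB")
        samples.append((image, tags))
    return samples


# Placeholder directory name for a downloaded dataset.
dataset = load_tagged_hands("giveahand_download/")
print(f"Loaded {len(dataset)} tagged hand images")
```

From here the pairs can be fed into any standard training pipeline, with the tags serving as labels or as stratification keys to keep the training set balanced across hand shapes, colors, backgrounds, and gestures.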