Student Builds an AI Model to Translate Sign Language

AI has been leveraged to create a wide range of translation models that facilitate communication and break down language barriers. Tech giants such as Google and Facebook have developed sophisticated AI-powered translation models. Recently, a third-year engineering student from India built an AI model that recognizes American Sign Language (ASL) and translates it into English in real time.

Priyanjali Gupta, a student at Vellore Institute of Technology (VIT), recently demonstrated her AI-based ASL Detector in a video shared on her LinkedIn profile. Although the model can currently recognize only a limited set of ASL words and phrases and render them in English in real time, it is a significant step towards improving communication for the hearing-impaired community.

Gupta built her ASL Detector with the TensorFlow Object Detection API, using transfer learning from the pre-trained ssd_mobilenet model, and adapted existing code to suit her use case. Strictly speaking, the model does not translate ASL to English; rather, it detects individual signs in the video frame and matches them against the fixed set of labels it was trained on.
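To make the "detect and match against labels" idea concrete: the TensorFlow Object Detection API typically returns parallel arrays of class indices, confidence scores, and bounding boxes for each frame, and a label map turns class indices into human-readable words. The sketch below shows that post-processing step in plain Python; all names, labels, and thresholds are illustrative assumptions, not taken from Gupta's actual repository, and the detector output here is mocked rather than produced by a real model.

```python
# Sketch of mapping raw object-detection output to ASL word labels.
# LABEL_MAP and SCORE_THRESHOLD are hypothetical values for illustration.

LABEL_MAP = {1: "hello", 2: "thanks", 3: "yes", 4: "no", 5: "i love you"}
SCORE_THRESHOLD = 0.5  # discard low-confidence detections


def detections_to_words(classes, scores, threshold=SCORE_THRESHOLD):
    """Return the word label for each detection above the threshold."""
    return [
        LABEL_MAP[c]
        for c, s in zip(classes, scores)
        if s >= threshold and c in LABEL_MAP
    ]


# Mock detector output for one video frame: three candidate detections.
frame_classes = [2, 1, 5]
frame_scores = [0.91, 0.34, 0.77]
print(detections_to_words(frame_classes, frame_scores))
# prints ['thanks', 'i love you']
```

This also illustrates why such a model is closer to keyword spotting than translation: it can only ever emit words that exist in its label map, which is why extending the vocabulary requires retraining on new labeled signs.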

In an interview with Interesting Engineering, Gupta revealed that her mother’s constant nagging to “do something” after she joined VIT’s engineering program was the primary motivation behind the project. Gupta said her mother’s taunts made her contemplate what she could do with her knowledge and skill set. The idea of inclusive technology occurred to her during a conversation with Alexa, which set her plans in motion.

Gupta also acknowledged the influence of YouTuber and data scientist Nicholas Renotte, whose 2020 video on building an AI-based ASL Detector served as her main reference. Renotte’s tutorial gave her both the inspiration and a concrete direction for putting her engineering skills to meaningful use.

Priyanjali Gupta’s AI-based ASL Detector has garnered considerable attention and positive feedback from the community. However, some experts have criticized her use of transfer learning, arguing that it is one of the simplest approaches in AI. Gupta has responded by acknowledging that building a deep learning model from scratch solely for sign detection is a complex problem, but not an impossible one.

Still a student, Gupta recognizes that there is much left to learn about developing deep learning models for sign languages. She believes the open-source community, with its wealth of knowledge and experience, will eventually solve this challenge. Such models would undoubtedly be a significant breakthrough in communication for the hearing-impaired community.

If you’re interested in learning more about Priyanjali Gupta’s ASL Detector, you can visit her GitHub page to access the relevant resources and get a better understanding of the project. Share your thoughts on Gupta’s innovative AI model in the comments section below.

You might also be interested in reading: Revolutionary Chemical-Creation Platform Unveiled: Cubic Molecules Could Rejuvenate Drugs and Agrochemicals