With technology advancing rapidly, voice control is beginning to supplement, and in some contexts replace, the keyboard as a way of interacting with computer systems. This is demonstrated by devices and assistants such as Amazon's Alexa, Google Home and Apple's Siri. While this is advantageous for the majority, it fails to consider people with communication difficulties, particularly those who are deaf. As a result, there have been increasing efforts to teach computers to recognise sign language.
What are the challenges this presents?
How could software like this be used in medicine?
Following the COVID-19 pandemic, there has been a large shift to online appointments, whether by video or audio call. For deaf individuals, this can make communication even more difficult. Therefore, software that can translate sign language into text or speech would make communication easier and remove the need for an interpreter. So far, efforts have focused on this direction of translation, because converting speech into sign is more difficult. In a healthcare context, however, that reverse direction would be essential for deaf patients to receive responses from their doctors. Do you think implementing this software would be a good idea? How do you think it would make deaf patients feel? Do you think it would affect the flow of the conversation, and if so, how? Think about the challenges outlined above and how they would affect the doctor-patient dynamic.