Bias in AI: A Simple Call to Action
Although artificial intelligence (AI) is still in its early stages of development, it will soon become a leading technology in every industry we interact with. Yet there have already been too many cases of AI directly harming people with disabilities. Take the case of Aramitsu Kitazono, a visually impaired Paralympian who was injured by a driverless car in the Tokyo Paralympic Village. Kitazono was crossing at a crosswalk when the vehicle failed to stop and struck him. The accident left him with a two-week recovery, nearly costing him his men’s judo event. This raises a number of questions about the safety of people with disabilities: Will this tragedy recur? Will AI be able to properly identify people with disabilities in the near future? How can we better design AI technology to prevent this from ever happening again?
The issue I see right now is that “disability” is not a binary classification, unlike most of the decisions AI is asked to make. Disability is heavily nuanced, and today’s AI systems can’t distinguish between its many forms because their training datasets are flawed. There simply aren’t enough examples of the full range of disabilities in these datasets for a system to learn reliable patterns. This is why, as I argued in my previous article, there needs to be some form of ethics testing for AI technology and its datasets. Our products are outdated, and the frameworks we use do not account for people with disabilities. We must promote inclusion and keep disability in mind when revising existing frameworks and creating new ones, so that accidents like Aramitsu’s never happen again.