Can AI Decisions be Biased?


Artificial Intelligence (AI) has evolved into more than just a technology that exists to help humans: it is steadily entering people's daily lives and changing their routines. As it continues to evolve, concerns are growing about bias in its decisions. AI may look like a futuristic technology that can only do good for humans, but it has proven risky precisely because of this decisional bias.

AI is, at its core, a mechanism that abstracts and reacts to content the way a human does, because it is designed and developed by humans. Can AI adopt everything humans do, including bias toward an ideology? The answer is a definite yes. Human-generated data is the essential ingredient that makes AI function, and that is exactly where the problem of AI bias lies. AI bias is the prejudice embedded in the data used to build AI algorithms, and it can ultimately lead to racial discrimination and other harmful societal consequences.

We have already witnessed cases where AI behaved like a biased mechanism. For example, Microsoft's first AI chatbot, 'Tay', tweeted over 95,000 times within sixteen hours of launch, mostly with abusive content, because Tay had been trained on a basic dataset of anonymized public data and some pre-written material.

The root cause of AI bias is that the input data for any AI system is collected from human actions, and we cannot expect humans to have a mechanical, impartial mindset. The problem may look simple at first glance, but it is dangerous because the applications of AI have grown tremendously. For example, if a company screens job applications with an AI classifier and it rejects a candidate based on their race, that practice creates social injustice. Every human holds biased views about something, and even well-intentioned professionals can be influenced by their biases.
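To make the hiring example concrete, here is a minimal sketch using entirely hypothetical data. A toy "model" simply learns each group's historical hiring rate from past decisions; because those past decisions were biased, the learned rule reproduces the bias and ignores the candidates' actual qualifications.

```python
from collections import defaultdict

# Hypothetical historical records: (group, qualified, hired).
# Group "A" was historically favored over group "B".
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": record past hiring outcomes per group.
past_outcomes = defaultdict(list)
for group, _qualified, hired in history:
    past_outcomes[group].append(hired)

def predict(group, qualified):
    # The learned rule mirrors the majority outcome for the
    # applicant's group -- qualifications play no role at all.
    outcomes = past_outcomes[group]
    return sum(outcomes) > len(outcomes) / 2

# Two equally qualified candidates receive different outcomes.
print(predict("A", qualified=True))  # True  (hired)
print(predict("B", qualified=True))  # False (rejected)
```

The point of the sketch is that nothing in the code mentions race or group preference explicitly; the discrimination emerges purely from the prejudiced historical data the model was fitted to.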

Computers don't become biased on their own; they learn bias from humans or from human-generated datasets. The real problem arises as algorithms learn and adapt beyond their original coding, becoming more opaque and less predictable. The people behind AI and its mechanisms must therefore strive to be unbiased, because it can quickly become difficult to understand exactly how a complex interaction of algorithms produced a problematic result. Compounding this, many private organizations are reluctant to reveal the commercially sensitive inner workings of their algorithms.
