AI Biases and Our Future

AI technology is not a futuristic tool; it is already woven into our everyday lives


Think outside the Box!

The saying, often attributed to Mark Twain, that the best predictor of future behaviour is past behaviour could equally describe how data science is used to build our world to match our needs and likes. But there is a question we should ask: do we want our needs matched all the time? If everything is carefully curated to our needs, how will we stretch our curiosity and knowledge, and fuel innovation? We know that on those occasions when we are pushed far outside our comfort zone we feel discomfort, but we can also experience the best learning. This is something leaders may need to consider as AI grows in depth and range. Our world is now heavily reliant on digital technology, and yes, it is making things smoother, but it may also keep us in neat compartments that prevent creativity and even inhibit our ability to challenge.

In 2021, we may all be surprised to learn just how much AI is part of our everyday lives. The number of smartphone users in Germany has grown steadily in recent years, reaching 60.74 million in 2020, up from 6.3 million in 2009. We are also using our smartphones far more, with data suggesting many of us look at our phones over 100 times a day, making several thousand decisions based on what we see. Look at your smartphone and you will see it is packed with AI technology: from Siri and Google assistants using AI-enabled voice recognition to answer questions about, say, the weather, through to newer computer-vision AI that examines your photos and, using pattern recognition, offers clever ways of identifying, sorting and adapting them. The apps on your phone are also using AI to track your usage and offer you more closely matched suggestions. What is happening inside your phone is that the AI, and specifically the machine-learning technology, uses algorithms to learn from the data it is given: all your data and your habits, sometimes collated even when you do not use the app. You might feel comfortable with Spotify's AI offering you playlists based on your listening habits. You might be pleased that Amazon's AI suggests products that fit your interests and shopping. However, we should pause to ask: are the decisions behind these curated lists as transparent as we would want, and do we only want to be presented with what we have been socialised into preferring? Wouldn't it be better to be pushed out of our comfort zone in order to discover some new music, for example?

Let’s look at dating apps as an example. As we casually swipe left or right on potential love matches, the algorithms start to learn our preferences. They also learn how we ‘rate’, based on how others swipe on us. What the dating data shows is that, because most humans carry innate biases, the app responds to prejudices such as those around race, and as a result some users are shown to fewer people, limiting their chances of being matched. Dating apps are only responding to human prejudices and practices; they are not creating them. But there is an uneasy feeling that AI is reinforcing our existing discriminatory practices, because that is what it is trained to do. And the uneasy question is: should it actually challenge stereotypes? It might be one thing in dating, but what about recruitment? If the AI feeds off historically inequitable race and gender practices, a company’s growth will be restricted. If we allow AI to give us only what we want, how will we see and experience things beyond our comfort zone?
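The feedback loop described above can be sketched as a toy simulation. The numbers here are purely hypothetical (two groups of profiles, a mild 55/45 human swiping preference, exposure allocated in proportion to accumulated right-swipes), not real dating-app data, but they illustrate how a small initial prejudice can compound once an algorithm learns from it:

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

def simulate(rounds=20, views_per_round=1000, pref_a=0.55):
    """Toy 'rich get richer' loop: the app allocates profile views in
    proportion to accumulated right-swipes, so a small human preference
    for group A (55% vs 45%) compounds into a growing exposure gap."""
    likes = {"A": 1, "B": 1}  # small prior so neither group starts at zero
    for _ in range(rounds):
        share_a = likes["A"] / (likes["A"] + likes["B"])
        views_a = round(views_per_round * share_a)
        views_b = views_per_round - views_a
        # users swipe right on group A slightly more often than on B
        likes["A"] += sum(random.random() < pref_a for _ in range(views_a))
        likes["B"] += sum(random.random() < 1 - pref_a for _ in range(views_b))
    return likes["A"] / (likes["A"] + likes["B"])

print(f"Group A's share of exposure after 20 rounds: {simulate():.2f}")
```

Nothing in this sketch tells the algorithm about race or any other attribute; the disparity emerges purely from optimising on biased behavioural data, which is the mechanism the paragraph above describes.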

If we take this idea wider, to the big challenges of our time, the most critical is of course the climate crisis. As organisations and countries move towards a net-zero economy, AI is being harnessed at a greater level. Indeed, for such a multi-dimensional challenge as Climate-AI, we must ask: how can the data behind it be used fairly, ethically and transparently, and shared across nations? We would not want poorer nations to fare worse than richer countries when, say, data on fossil fuel usage is applied, but we cannot know unless the data is shared openly. Companies such as Fero Labs in Germany use AI and Machine Learning technology within factories to reduce waste and improve energy efficiency. As they point out, explainable and actionable AI is what is needed to progress, as it is critical that such systems show that internal biases are addressed. Climate-AI needs to be deployed to make even faster and more accurate predictions, given the scale of the task ahead, for example in speeding up climate simulations. As we build complex Climate-AI models, we will also need to ensure that the AI is as explainable and bias-free as possible, or we can end up with complexities we were not expecting. To plan ahead, we can look far back to the wisdom of Marcus Aurelius, the Roman emperor and Stoic philosopher, for guidance, as he said: “Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present.”

Dr Naeema Pasha

Naeema Pasha is Director of EDI, Careers Professional Development and Future of Work at Henley Business School, where she also established World of Work (WOW) to explore future-of-work readiness. Her doctoral research sheds light on key factors that enable people to succeed against the backdrop of technological changes in the future of work, such as adaptability in times of huge uncertainty. Naeema’s recent research on careers in the future of work provided the foundation for building WOW, which looks at a range of influences on work such as AI and Automation, Diversity & Inclusion and Quad-Generations.