Every day, more and more sophisticated technology is being created, and artificial intelligence (AI) is no exception. AI has been a saving grace for many businesses over the past few years, allowing specific processes to be automated, fraud to be detected, research to be conducted automatically and much more. But where is the line?
Due to the rapid rise of increasingly intelligent AI, many concerns have surfaced about its seemingly boundless capabilities. Will there be limits to what AI can and cannot do, or will we be entering a Terminator-esque era?
Here are just three major concerns that have been brought up about AI:
Disinformation
Disinformation and fake news have run rampant, especially over the last five years. With AI becoming adept at creating fake images, videos, conversations and other content, will it fuel the disinformation epidemic we have been experiencing? Will politicians begin to use AI to create fake images and videos of their opponents to ensure they lose favour in the eyes of the public? Unless laws and regulations are put in place, it is a real possibility. AI can also be used to spread propaganda that suits the message of specific groups, from political parties and organisations to activist and even terrorist groups. This disinformation can be extremely dangerous, so governments need to think about how they plan to police AI-enabled content creation and ensure it is not fabricated information. It is not only governments that need to think about these implications; companies need to start treating fake content as a threat, and take it as seriously as they do cybercrime.
Pervasive surveillance
In recent years, AI has become part of surveillance. With its help, companies and governments can keep constant tabs on what everyone is doing, and this has already begun in certain areas of some countries. With this highly intelligent surveillance, facial recognition is used to identify people in crowds, parks, stadiums, on the street and in other public places without their permission. While this can be a useful tool when it comes to matters of security, for example preventing terror attacks, where does it leave our right to privacy? And how will this technology be used in countries with serious human rights violations?
AI bias
While AI can process information at speeds and scales that are unfathomable to us humans, it cannot always be trusted to be fair and impartial. AI used to identify people and objects has already been shown to miss the mark on racial sensitivity: when used to scan crowds for criminals, for example, it has shown a bias against people of colour. This happens because AI systems are created by humans, who carry biases and prejudices of their own, so a system can absorb, even unintentionally, the biases of its creators. If this AI is used ethically and responsibly, however, it can be a force for positive change. It all depends on where the line is drawn for AI technology.
Ensure ethical use of AI technology in your business. Contact us to learn more about our AI ethics consulting services.