Should we trust AI?


Author: Maria Janiszewska

In today’s world, we increasingly delegate repetitive tasks to ChatGPT and its derivatives. We translate texts into other languages, verify information, draft messages, and try to shift simple tasks to technology. Every day, new tools built on advanced technology, including AI, are created to make our lives easier and accelerate our work. But is this approach truly safe and trustworthy?

AI one step ahead of the law

Looking at the legal aspect, the picture is unfortunately not very optimistic. The first regulations on this topic are only now being developed. For several years, the European Union has been working on provisions to ensure that artificial-intelligence solutions are safe, non-discriminatory, transparent, and environmentally friendly. As of today, however, no official legal acts regulate this issue, which means that any legal abuses involving artificial intelligence will be adjudicated under laws that were not designed for the current world of technology.

It is worth noting that the European Union’s legislative efforts are the first of their kind globally. The Artificial Intelligence Act, a set of comprehensive regulations governing artificial intelligence, aims not only to ensure the safe use of such solutions but also to provide a technologically neutral, common definition of AI; currently, no single, generally applicable definition exists. The act is also intended to define various levels of risk associated with AI systems. Importantly, Members of the European Parliament assure that negotiations on the Act should conclude by the end of 2023.

The (im)perfect solution

This topic is very broad, because the European Union is also working on sector-specific uses of AI, including in education, culture, and criminal law. Moreover, Members of the European Parliament are drafting provisions to regulate copyright in content wholly or partially generated by AI. If we delegate some of our work to ChatGPT, should we officially disclose this in our work? And who, then, is the author of such a publication?

If you are wondering what harm AI can actually do to us and why we should be concerned at all, there are already plenty of examples. To begin with, the AI algorithms currently in use are not transparent. This poses the risk that decisions we base on what an AI-powered program shows us may be manipulated: we do not know how unbiased the answers presented to us are, nor whether the data fed into such a system properly reflects reality. There have already been cases where a voice recognition algorithm, trained only on male voice samples, could not recognize words spoken by a woman. A widely known example comes from Amazon, which tried to streamline recruitment for warehouse workers by training a system on the CVs of its current employees; because Amazon had historically employed predominantly men, the algorithm learned to treat that gender as preferred.

Does this mean that we should stop using solutions based on artificial intelligence altogether? In my personal opinion, of course not. However, let us use our own intelligence to assess whether the data we enter into external systems is sensitive or confidential. There are already corporations that have banned the use of such tools for fear of revealing trade secrets. And remember: we still do not know exactly where the data we enter into such systems ends up.