
The dark side of artificial intelligence


  • Created on: 10 February 2023

Did artificial intelligence come to save us or to sink us? How can we trust an autonomous tool? Who makes ChatGPT possible?

Let's start from the beginning: GPT stands for Generative Pre-trained Transformer and was created by OpenAI, a company co-founded by Sam Altman, Elon Musk, and others. OpenAI's valuation is estimated at $29 billion, boosted by a reported investment of around $10 billion from Microsoft.

ChatGPT is a dialogue bot built on the GPT-3.5 model. Given a prompt in natural language, it predicts the most likely next words or sentences, which lets it produce intelligent-sounding answers to complex questions and generate content automatically.
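As a rough sketch of that next-word prediction, the snippet below uses the Hugging Face transformers library with the small, openly available GPT-2 model. GPT-3.5 itself is not publicly downloadable, so GPT-2 here is only a stand-in for illustration, not the model that actually powers ChatGPT.

    # Minimal sketch of next-token prediction, the mechanism behind GPT-style models.
    # GPT-2 is used only because it is small and openly available; it is not the
    # model behind ChatGPT.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Artificial intelligence was created to"
    # The model repeatedly predicts the most likely next tokens after the prompt.
    result = generator(prompt, max_new_tokens=20, num_return_sequences=1)

    print(result[0]["generated_text"])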

The system incorporates 175 billion parameters and was trained on the largest repository of human language available: the internet, where there is both good and bad. As a result, early versions of GPT, like many other artificial intelligences, tended to generate biased and discriminatory content.

There is no simple way to scrub the (large) portions of the web filled with racism, sexism, discrimination, and hate, so OpenAI had to add a security mechanism on top: the artificial intelligence needed to learn to detect all that toxic language from real examples of violence, hate speech, sexual abuse, and all kinds of crimes. The problem is that, at least for now, labeling those examples is a task only human intelligence can perform.
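To make that concrete, here is a deliberately tiny sketch of such a safety filter: a classifier that learns to flag toxic text only because humans labeled the examples first. The invented sentences and the scikit-learn pipeline are assumptions for illustration; the real moderation models are trained on far larger human-annotated datasets, which is exactly the labor discussed below.

    # Toy sketch of a learned safety filter: it can only tell toxic from acceptable
    # text because human reviewers supplied the labels in the first place.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented examples; labels come from human annotators (1 = toxic, 0 = acceptable).
    texts = [
        "I hope you have a wonderful day",
        "Thanks so much for your help",
        "You are worthless and everyone hates you",
        "People like you should be hurt",
    ]
    labels = [0, 0, 1, 1]

    # Bag-of-words features plus logistic regression, a simple stand-in for the
    # classifier that screens prompts and model outputs.
    toxicity_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
    toxicity_filter.fit(texts, labels)

    # New text is flagged (or not) based entirely on patterns in the human labels.
    print(toxicity_filter.predict(["Have a great weekend", "You deserve to suffer"]))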

"This database (the internet) is the cause of GPT-3's impressive language capabilities, but it is also, perhaps, its greatest curse," says TIME magazine in a publication that describes the work of people who must dive into the worst of the internet.

For how much money would we accept the task of watching, reading, and listening to the most horrifying crimes of humanity and sorting them into categories? And by what criteria do we decide whether something is appropriate or not?

TIME's investigation found that OpenAI outsourced this labeling work to a company whose workers in Kenya earn less than $2 per hour and who report that they did not receive the psychological support they were promised to cope with such exposure.

Far from being an isolated problem or a footnote to what is advertised as the beginning of the future, the case of ChatGPT illustrates that these technological innovations do not appear by magic; on the contrary, they rest on vast supply chains of human labor and data of dubious origin.