

8M: Challenges faced by women in technology and the lack of recognition

Science, research, and technology are increasingly relevant activities in the knowledge-based economy, and countries with a critical mass in these areas can better specialize in more dynamic sectors and become more competitive.

Despite its great relevance, technology is one of the sectors of the economy with the lowest female participation in the world, particularly in Latin America: in Argentina, Brazil, and Mexico, women account for only one-third of those employed in the sector.

Why are there few women in science and technology?

The situation of women in science and technology can be explained as a vicious circle, according to 2021 research by CIPPEC and Salesforce.

In broad terms, few women enter the scientific and technological field, and those few then face barriers that truncate their careers and leadership, reinforcing glass ceilings.

In the educational stage, there are social and cultural norms that affect women's confidence and interest, adding to the misinformation about these types of careers and the lack of role models:

From childhood and adolescence, gender biases about women and hard sciences are reinforced.

There is no vocational guidance in childhood and youth.

In higher education, young women face a hostile environment in careers dominated by men.

In addition to being a minority, female university students rarely find mentors among their teachers or formal mentorship programs to support them in the transition to the labor market.

In the professional development stage, the few women who enter science and technology work environments have difficulty accessing jobs, staying in them, and advancing in their careers:

As in all sectors of the economy, there is an imbalanced burden of care responsibilities.

Scientific promotion is based on evaluation criteria and norms built around male career patterns; for example, women's childbearing years coincide with the period of specialization and postgraduate study.

The climate and culture remain hostile in masculinized environments, and women suffer greater discrimination, demands, and sometimes even harassment.

The lack of visibility of women in science and technology reinforces stereotypes about what women can or cannot do, or in which areas they excel:

There is a lack of easily accessible information and disaggregated statistics by gender, geographic region, and ethnicity.

The cultural and symbolic representation of women in STEM is limited.

Public and private sector awards and recognitions have limited, though growing, visibility.

A more equal technological sector

To reduce the gender gap in science and technology, it is necessary to implement public policies and private initiatives at each stage of the vicious circle. Among them, perhaps the most accessible is the development of inspiring narratives that address girls, young women, adults, and society as a whole.

Women, as a diverse group, bring a rich variety of experiences and perspectives that have incalculable value and contribute to improving the quality of science and innovation.

Achieving gender equality is not only relevant for women: gender refers to the cultural practices and expectations that govern the expected, approved, and actual behavior of men, women, and gender-nonconforming people.

With the fourth industrial revolution and the demographic transition as context, science and technology must become an opportunity for gender equality and greater economic freedom for women.

Apple Vision Pro arrives in the right place at the right time

Apple welcomes the era of spatial computing (at least, that's the intention) with its new augmented reality headset, the Vision Pro.

“You navigate simply by using your eyes, hands, and voice so you can do the things you love in ways never before possible”, says the product description. “It blends digital content with your physical space”.

It's true: while cinema had already imagined this invention, and Google Glass was the first step, it seems that we are facing a 'before and after' moment in technology.

Undoubtedly, this is a more tangible approach to virtual reality than Meta's proposal. Or, at least, we know the key to "access the Metaverse."

Waiting for ideas to mature before launching them into the market is the greatest achievement of the company founded by Steve Jobs. The iPhone changed the digital paradigm, transforming a phone into a small computer. Are we facing a similar revolution, or did it arrive a pandemic too late?

The Vision Pro will be released in 2024 for $3,499, giving the technology market time to work on adaptations and new proposals. The most relevant details are:

  • The level of "immersion" can be dialed up and down.
  • Currently, it connects to the battery via a cable.
  • It is made of metal and glass, although it is claimed to be lightweight.
  • It tracks eye movements, perceives hand commands, and allows voice typing.
  • It has a 3D camera and display.
  • It creates a semi-realistic user image that functions as an avatar in calls and meetings.
  • The user's eyes are not visible from the outside; instead, the avatar's eyes are what can be seen.

While it is too early to see all its implications, it will undoubtedly be a great addition to the entertainment industry.

Argentina advances in state regulation of AI

In early September, the Agency for Access to Public Information (AAIP) created the "Transparency and Personal Data Protection Program in the Use of Artificial Intelligence," a first step in regulating AI in accordance with international standards.

The goal is to promote processes of analysis, regulation, and the strengthening of state capacities to support the development and use of this technology in both the public and private sectors. The planned actions, according to iProUP, include:

  • Strengthening the government's knowledge of both artificial intelligence and personal data protection.
  • Generating public policies and regulations that allow for its safe, ethical, and transparent development.
  • Creating the Artificial Intelligence Observatory, which will map key actors, track progress, and compile statistics and reports.
  • Establishing a Multidisciplinary Advisory Council aimed at generating consensus and coordinating sectoral policies.
  • Implementing participatory processes to address rights violations and prevent negative impacts of AI use.
  • Formulating Good Practices Guides, providing training, and conducting awareness campaigns.

This advancement comes in an international context marked by significant developments, with bureaucracy often lagging behind technological possibilities. It's worth noting that at the end of March, a letter was published, signed by a large number of experts, requesting a six-month pause in the training of more powerful systems like GPT-4.

The European Union was the first to present a regional law proposal, just three months ago, with the aim of "promoting the adoption of reliable and human-centered artificial intelligence, as well as ensuring a high level of protection for health, safety, fundamental rights, democracy, the rule of law, and the environment against its harmful effects."

On the other hand, the U.S. Senate is divided, with the Democratic Party arguing that all regulations related to privacy and data protection in digital environments need to be revised because they no longer guarantee the protection that citizens demand, while the Republicans want to focus solely on regulating AI.

ChatGPT was deployed in November 2022, followed by more than 700 generative artificial intelligence systems. Their fields of application are so broad that data can be extracted or used without users' knowledge or consent. It is essential for the government to get involved and translate that involvement into concrete public policies that do not hinder innovation.

Can the development of Artificial Intelligence be stopped?

A few weeks ago, the Future of Life Institute, a non-profit organization, published an open letter calling on artificial intelligence labs to halt the training of the most powerful AI systems for at least six months. The fear is that AI poses "profound risks to society and humanity". While the range of AI applications is astonishing, so are the risks. The best risks are those that can be anticipated, tested, and prevented; the worst are those we don't see coming.

The letter was signed by more than 20,000 prominent figures from science, technology, and the social sciences. The most surprising signatories were Elon Musk (who was part of OpenAI and left in 2018) and Steve Wozniak (of Apple). A notable case is that of writer and historian Yuval Harari, who reflected: "Artificial intelligence systems with the power of GPT-4 or greater should not get entangled in the lives of billions of people at a faster rate than cultures can safely absorb them." "A curtain of illusions could descend over all humanity, and we may never again be able to tear that curtain away, or even realize that it is there," he predicted.

At the same time, linguist and philosopher Noam Chomsky said, "This is part of what it means to think. To be right, it must be possible to be wrong. Intelligence consists not only of making creative guesses but also of making creative criticisms." "If humans are limited to the kind of explanations we can rationally conjecture, machine learning can learn at the same time that the earth is flat and that it is round," he warned.

Recently, Sundar Pichai, the CEO of Google, also raised alarms, although he did not sign the letter. "How can we develop AI systems that align with human values, including morality?" he asked. "I think it should include not only engineers but also social scientists, ethics experts, philosophers, and more."

Finally, the European Data Protection Board (EDPB) also decided to intervene in the discussion, since it doubts that ChatGPT and other AI tools comply with current legislation, especially regarding data protection.

Putting the development of AI on hold is impossible. Sam Altman, CEO of OpenAI, responded that the letter calling for a pause lacked "technical nuances regarding what should be stopped," and assured that OpenAI was not working on GPT-5. "Without government involvement, it is impractical and almost impossible," said Bill Gates. Musk, who signed the letter, has just created a new AI company and said that all technology companies are buying GPUs (processors that, unlike CPUs, can run many computations in parallel), which are key to these intelligent systems. Putting a brake on technological development is like trying to cover the sun with your hands.

Immersive internet, a present future

What is Web 3.0? If we consider that in the early days of the internet the web was content produced by specialists, and that in Web 2.0 users can create content but do not own it, then Web 3.0 will be based on the idea of ownership, facilitated by blockchain technology: a universal code that doesn't belong to any company.

So we're not talking about a new smartphone or a better console, but about the total transformation of the internet. Blockchain technology structures data into blocks that decentralize information and enable secure cryptocurrency transactions.

Blockchain assigns ownership of each token by registering it with a code across multiple blocks, making it almost impossible to appropriate illegally.
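The ownership-registration idea can be sketched in a few lines of Python. This is a toy illustration, not a real blockchain: the block fields (`data`, `prev_hash`) and the transfer records are invented for the example, and real chains add signatures, consensus, and distribution across many nodes.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that records the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Tampering with any block breaks every later hash link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, "token #1 -> alice")   # hypothetical ownership record
add_block(chain, "token #1 -> bob")
print(is_valid(chain))                  # True
chain[0]["data"] = "token #1 -> eve"    # try to rewrite history...
print(is_valid(chain))                  # ...and the chain no longer verifies: False
```

Because each block embeds the hash of the previous one, silently rewriting an old ownership record would require recomputing every later block, which is what makes illegitimate appropriation so hard in practice.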

For now, the internet is still confined to a two-dimensional screen where we are bombarded with ads; what Web 3.0 promises is an immersive, three-dimensional digital experience across different metaverses.

Large companies such as Microsoft, Vodafone, Meta, Disney, and Apple have already begun their race to lead the great leap to the new internet. Art, fashion, video games, work, and interpersonal relationships will take place in this new virtual life that is about to begin, in what seems to be a loop of digital transformations.

#ThinkDigital

Polarization and fake news in Meta

In order to understand the effect of social media on democratic electoral processes, Meta undertook a research project in collaboration with external partners back in 2020. Based on this study of 23,000 users from different platforms, several articles were published in two of the most renowned international scientific journals, Science and Nature. What were their conclusions?

Facebook and Fake News

According to iProUP, the first paper characterizes Facebook as "a social and informational environment that shows significant ideological segregation." It reveals a higher prevalence of unreliable content in right-leaning media compared to left-leaning ones. Therefore, individuals with more conservative political inclinations are more exposed to false information than those with more progressive opinions.

How do algorithms influence us?

The research also analyzed the differences between content displayed in chronological order and content driven by algorithms, both on Instagram and Facebook.

For both platforms, they found that content from unreliable sources is more prominent in the chronological feed than in the algorithm-driven feed (by more than two-thirds in the case of Facebook, and slightly less for Instagram). Furthermore, the chronological feed significantly reduced the time users spent on both platforms and their level of interaction with them.

Dissemination and Viralization

Finally, the research concluded that sharing content already posted by others contributes to the dissemination of certain publications, although this action does not always make a post go viral. When users see content from like-minded sources in their feed, their level of engagement is higher.

Democracy at stake

Since the 2016 elections, Facebook (now Meta) has been accused of promoting misinformation and political polarization. The Spanish newspaper El País recalls four key moments:

  • After Donald Trump's victory, it was evident that the platform could be used to share content without restrictions and organize groups of people around certain positions, ideologies, or ideas, not necessarily democratic or respectful.
  • The "Mueller Report" ruled out that Trump had colluded with Russia to win the presidential elections, but it concluded that Moscow had indeed interfered in the elections and that Facebook had been one of its channels.
  • The data of more than 50 million Facebook users, collected since 2014 without their consent, were illegally sold to third parties to profit from information such as age, gender, preferences, and habits; this is the well-known Cambridge Analytica case.
  • Finally, an investigation by the U.S. justice system determined that the four tech giants - Facebook, Google, Amazon, and Apple - had "exploited their market power in an anticompetitive way," but no measures have been taken yet.

It is within this context that the published reports become more relevant. In response, Meta's President of Global Affairs stated that "there is little evidence that the fundamental features of Meta's platforms, by themselves, cause harmful affective polarization or have significant effects on important political attitudes, beliefs, or behaviors."

According to Meta, when participants see less content that reinforces their views, they tend to interact more with the like-minded content that is shown to them. However, one thing is clear: while there are progressives and conservatives on social media, they are not symmetrical groups: audiences consuming political news on Facebook generally lean right.

The dark side of artificial intelligence

Did artificial intelligence come to save us or to sink us? How can we trust an autonomous tool? Who makes ChatGPT possible?

Let's start from the beginning: GPT stands for Generative Pre-trained Transformer and was created by OpenAI, the company co-founded by Elon Musk and Sam Altman. Its value is estimated at $29 billion, including a possible investment of $10 billion from Microsoft.

ChatGPT is a dialogue bot based on the GPT-3.5 model. It predicts the next words or sentences from a given natural-language prompt, providing intelligent answers to complex questions and generating content automatically.
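The next-word prediction described above can be illustrated with a toy bigram model in Python. This shows only the principle: the tiny corpus is invented, and real GPT models use deep neural networks over subword tokens rather than word-count tables.

```python
import random
from collections import Counter, defaultdict

# A tiny invented training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it (a bigram table).
follows: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to observed frequency."""
    options = follows.get(word)
    if not options:                    # dead end: restart from the first word
        return corpus[0]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

def generate(start: str, n: int) -> list:
    """Repeatedly predict the next word, as a chatbot does token by token."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return out

print(" ".join(generate("the", 6)))
```

Scaled up from word counts to billions of learned parameters, this predict-one-token-at-a-time loop is what produces ChatGPT's fluent answers.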

The system incorporates 175 billion parameters and was trained on the largest repository of human language available: the internet, where there is both good and bad. As a result, early versions of GPT, like many other artificial intelligences, tended to generate biased and discriminatory content.

There is no simple method to scrub the (large) portions of the web filled with racism, sexism, discrimination, and hate, so OpenAI had to incorporate an additional safety mechanism: the artificial intelligence had to learn to detect toxic language from real examples of violence, hate speech, sexual abuse, and all kinds of crimes. The problem is that, at least for now, labeling those examples is a task only human intelligence can perform.
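That human labeling is what makes supervised toxicity filters possible. The principle can be sketched with a toy word-overlap classifier in Python; the examples and labels below are invented, and OpenAI's actual moderation models are far more sophisticated:

```python
from collections import Counter

# A tiny hand-labeled training set (invented examples) -- in reality,
# humans must label vast amounts of far more disturbing material.
examples = [
    ("have a wonderful day friend", "safe"),
    ("thanks for the kind help", "safe"),
    ("i hate you and everyone like you", "toxic"),
    ("you are worthless and stupid", "toxic"),
]

# Count how often each word appears under each label.
word_counts = {"safe": Counter(), "toxic": Counter()}
for text, label in examples:
    word_counts[label].update(text.split())

def classify(text: str) -> str:
    """Pick the label whose training vocabulary overlaps the input most."""
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("you are stupid"))    # toxic
print(classify("have a kind day"))   # safe
```

Every signal the classifier has comes from the human-labeled rows, which is exactly why the labeling work, however unpleasant, cannot yet be removed from the pipeline.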

"This database (the internet) is the cause of GPT-3's impressive language capabilities, but it is also, perhaps, its greatest curse," says TIME magazine in a publication that describes the work of people who must dive into the worst of the internet.

For how much money would we accept the task of seeing, reading, and listening to the most horrifying crimes of humanity and sorting them into categories? And by what criteria do we determine whether something is appropriate or not?

TIME's investigation found that OpenAI outsourced this task to a company that employs workers from Kenya who earn less than $2 per hour and who reported that they do not have the promised psychological support to face such exposure.

Far from being an isolated problem or a secondary anecdote to what is advertised as the beginning of the future, the case of ChatGPT serves to illustrate that these technological innovations do not arise by magic; on the contrary, they rely on massive supply chains of human labor and data of dubious origin.

War 4.0? The battle fought on the web

A few hours before missile launches or tank movements, Microsoft had already detected the beginning of the Russian attack, aimed, no less, at Ukraine's digital infrastructure. In just three hours, they managed to alert the authorities and update server defenses, avoiding a situation similar to the nearly uncontrollable spread of the NotPetya malware in 2017. "Looking to the future, it is clear that digital technology will play a vital role in both war and peace," said the president of Microsoft in a statement, noting that the company is neither a government nor a country.

But the truth is that there is already talk of "War 4.0", where social networks are the main channel through which images of the conflict are disseminated globally and individuals are forced to take sides. Although misinformation and social ostracism play a very relevant role, digital warfare is not a cosmetic issue: destroying Ukraine's internet network would affect critical infrastructure such as electricity and water, communications, banking, and health services.

Such is the magnitude of the conflict that the hacker group Anonymous decided to intervene and, according to them, is working on the Ukrainian side. Thanks @iproup for helping us reflect as events continue to unfold.

#ThinkDigital

WhatsApp launches a feature for infinite storage: what it's like

The messaging app WhatsApp has already recorded more than 2.5 billion downloads worldwide.

However, this platform can take up a lot of space on a cellphone and, in some cases, can completely fill up the storage, leaving the phone nearly unusable.

In this context, it is possible to activate an "infinite space" function in the app by following the steps below:

  1. Access WhatsApp.
  2. Go to the "Settings" tab or "Configuration".
  3. Next, select "Chats".
  4. Deactivate the "Media visibility" option. This will prevent videos and images from automatically downloading to the cellphone's camera roll, freeing up storage.

WhatsApp: what is the safest way to save all chats without taking up space on the phone?

Cybersecurity expert Zak Doffman has noted that, to protect the privacy of the application, you should stop saving chats to the Google Drive cloud.

"If you use the WhatsApp option to back up your chat history in the Apple or Google cloud, those copies are not protected by end-to-end encryption," explains the specialist.

For this reason, it is recommended to back up messages via email. To do this, open the app and tap "Settings".

Then, tap "Chats" and "Chat history". Next, select "Export chat".

Now, a menu will appear that allows you to choose the specific contact.

Once the chat is selected, WhatsApp will ask whether you want to include multimedia content or just plain text and emojis. The latter option, since it excludes files such as videos, images, and documents, is lighter and faster.

Finally, WhatsApp allows users to choose which email address to send the chat record to. A few minutes later, the entire conversation will arrive in the email inbox.