
Will AI Contribute to A Distorted Reality?

June 1, 2023
Photo credit: Image of Pope Francis in Balenciaga generated by Midjourney

Earlier this year, photos emerged of the head of the Catholic Church, Pope Francis, sporting a white Balenciaga puffer coat; of former United States President Donald Trump being arrested by policemen; and of President Joe Biden dressed in Afghan garb, surrounded by citizens of Afghanistan.

While exceptionally realistic, none of these scenes took place in reality; the images were generated by artificial intelligence (AI) software.

The conversation surrounding AI, particularly whether its advancement is outpacing humanity’s ability to manage it, has been a point of contention for the past year.

In March, an open letter calling on tech players to pause AI development was released and signed by leaders in the field, including SpaceX CEO Elon Musk and Apple co-founder Steve Wozniak.

The intent was to voice their concern over the accelerated development of artificial intelligence.

With the rapid development of these AI tools, the line between truth and lie, reality and fiction, becomes blurred.

A recent programme that has been creating a buzz online is Midjourney. The software creates images based on the prompts fed to it by users.

Designed by an independent research lab based in San Francisco, Midjourney went into open beta — a phase in which software is released to the public with the intention of receiving feedback — in 2022. The launch of Midjourney V5.1, the most recent version, was recently announced on the company’s official Twitter account.

Prior to the announcement, Midjourney caused a stir online with hyper-realistic images of public figures generated by the software, raising questions about AI’s contribution to misinformation (the spread of false information, regardless of intent).

 

[Instagram embed: a post shared by The Afghan (@theafghan)]

Following the discourse on disinformation sparked by the images created with Midjourney, the company’s CEO, David Holz, announced in March that free trials would be halted. The reason given was seemingly unrelated to the fake images that had circulated the internet.

Holz clarified that the pause on free trials was due to “massive amounts of people making throwaway accounts to get free images.”

This particular AI, much like other similar tools available online, is still in its early stages of development. This raises the question: what will it achieve once it reaches its full potential?

The Rapid Rise of Artificial Intelligence

Deepfakes are AI-generated media that depict a convincing likeness of a person through digital manipulation. They can be used to fabricate events that never took place.

While the recent fabricated images of public figures have raised ethical, social, and political questions, they are not the first of their kind. They are merely the most realistic.

“The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” Wasim Khaled, chief executive of Blackbird.AI, an organisation that battles disinformation and promotes information integrity, told the New York Times in April.

Experts have voiced their worry over the rapid development of these tools, which are becoming increasingly accurate in their depictions and, as such, increasingly difficult to detect. They are spreading faster than tech companies can keep up.

Today, tutorials on how to use Midjourney and other AI programmes are easily accessible to online users.

It merely takes a curious mind and a quick Google search to explore the controversial world of artificial intelligence.

Signing up to these services is also affordable: the basic Midjourney subscription starts at USD 10 (EGP 304) a month.

Along with their accessibility and affordability, these AI tools have also significantly improved on earlier versions.

While at first glance images created by earlier AI tools seem realistic, upon closer inspection, hints alluding to their fictitious nature can be detected.

These include nonsensical text in the image’s background, along with warped physical attributes (for example, fingers that are too long or hands that stretch unnaturally), among other discrepancies that allow experts to distinguish genuine photos from fabricated ones.

With the rapid development of AI, the misleading content produced by these tools is becoming increasingly convincing.

The glitches that would otherwise allow a discerning eye to ascertain the veracity of an image are steadily being fixed.

Moreover, by the time an individual identifies a fabricated image, it has already spread across social media and caused damage.

That is not to say that it is impossible to detect a deepfake, but rather that it takes longer and may be more difficult.

The Age of Misinformation

Deepfakes in and of themselves may not necessarily be harmful; the concern lies more with how human beings tend to use them. Coupled with the natural tendency of people to believe what they see online, their potential to spread misinformation increases.

In 2018, fewer than 10,000 deepfakes were spotted circulating online, according to an article published by the Wall Street Journal in February.

Misinformation is damaging on both a small and a large scale. Earlier in 2023, a video was posted of a middle school principal in New York shouting racist slurs at students.

Photo credit: Bing Image Creator

The video, however, was not real. It was made, along with a few others, by a group of students using AI. While the videos were taken down, they revealed an underlying problem: how accessible and widespread these programmes have become.

The worry over AI-generated deepfakes extends to politics as well, as they grant users the ability to fabricate what a person says or does.

In a 2017 interview with Nature magazine, Hany Farid, a computer scientist at Dartmouth College in the U.S., compared this rapid development to an arms race in which human beings need to stay ahead of these technologies.

According to him, the potential risks that come with the speed at which AI is developing are cause for concern.

He said: “At some point, we will reach a stage where we can generate realistic video, with audio, of a world leader, and that’s going to be very disconcerting.”

This has already happened with the AI-generated images depicting Donald Trump being arrested and breaking down in court. At the time of their creation, they flooded the internet, with users questioning their authenticity.

Visual media, more so than written media, has the potential to spread like wildfire and promote disinformation, because it lends credibility to the narrative it weaves.

Photos, in essence, are used as a way to legitimise whatever story is being told.

Controlling the accessibility and spread of AI-generated photos is imperative to curbing the proliferation of misinformation.

Media Literacy in Egypt

Media literacy is the ability to critically assess and evaluate different forms of media, as well as to create them.

Included under that umbrella is the ability to distinguish between fake news and real news.

In Egypt, the level of media literacy is hindered by a number of factors, one of which is inadequate media awareness. Moreover, proper media practices are not in place.

That is, with the advent of social media, citizens as well as journalists source information online without fact-checking it.

The COVID-19 pandemic served as a moment in history that shed light on the rampancy of misinformation. Both online and offline, discussions of the virus’ spread, its mutations, and remedies for it were taking place. However, the details surrounding these conversations were not fact-checked; they were merely taken from social media or unreliable sources and reiterated from one individual to the next.

This resulted in cases of mass panic, inaccurate representations of what was happening on the ground, and rumours. In Egypt particularly, within family chat groups and gatherings, information was shared without any evidential basis.

While smartphones are widespread in the country and the internet is easily accessible, the proper means of using these tools responsibly are not. Without adequate fact-checking techniques, the spread of misinformation becomes more likely.
