Fear the lie, not AI

Art by Roxy Anne Perlas/ THE FLAME

CALL IT damage control or diversion if you will, but it ought to be called what it is: a plain and simple palusot (excuse).

When Sen. Ronald “Bato” dela Rosa and Davao City Mayor Sebastian “Baste” Duterte shared on social media a video of what appeared to be students criticizing the impeachment of Vice President Sara Duterte, their motive was clear.

They wanted to make it appear that the embattled and evasive eagle from Davao City has a quiet but solid support base among the youth. They tried to portray the anger over the delays in the impeachment proceedings as a phony outrage manufactured by, to use Bato’s labels, “yellows” and “communists.”

Conjectures like those of Bato and Baste are tolerable in a democracy, where even flawed arguments may be expressed. One may even contend that the same environment enabled the rise of populists who have the uncanny ability to package the most asinine arguments into something that draws thunderous applause. Did this remind you of someone dear to Bato and Baste? It should.

But their sharing of the video was not just about the sweeping and baseless claims that have been normalized and have become the currency of politicians.

The students in the video who claimed that Sara’s impeachment was “politically motivated” and smacked of “selective justice” are not real people. They were created by artificial intelligence (AI), and yet Bato and Baste wanted us to believe that those students understand the issues surrounding the impeachment better than the rest of us.

When social media users pointed out that the video was AI-generated, Bato, who can claim to be brave but not brilliant, said the one behind it had a point and that he was “not after the messenger, but the message.”

Sara also defended Sebastian, her younger brother, against critics who had chided him for reposting an AI-generated video. According to her, there is no problem with sharing an AI video supportive of her as long as it is not used to generate profit.

The palusot came from a senator and former police chief who once urged guidance counselors to profile students to determine who were likely to be recruited by communists.

Never mind if their criticisms of institutions are valid. In the mind of ballistic Bato, these students are angry with the world, are likely to join the armed struggle and may pose a threat to national security.

Sara also seemed to have glossed over the reality that political influencers can monetize their online content and that it is naive to assume that none of her supporters have benefited from this money-making mechanism.

The gaffe of Bato and Baste would have been forgivable had they admitted that they inadvertently fell for fake content. However, they did not offer any apology for it and even reinforced the propaganda lines born of AI prompts.

Their reposting of the AI-generated content was therefore not an honest mistake. It was a desperate effort to depict the vice president as someone backed by the youth, a brazen attempt to deceive us and to insult our intelligence.

It was a pathetic palusot by problematic politicians.

This episode is a reminder of what could happen if deception becomes the driving force behind content generation. It is a warning of what may take place if artificial intelligence is exploited in an environment that has been tolerant of moral decadence.

It has proven that AI doesn’t lie. People do.

Unless another Supreme Being emerges and creates an AI in His image and likeness, the technology will not have free will and will continue to be subject to the agenda, interests and whims of whoever controls it.

In his book “The Reality Game: How the Next Wave of Technology Will Break the Truth,” media studies scholar Samuel Woolley argued that in order to address the problem of what he called “computational propaganda,” we need to zero in on the people behind the tools.

Woolley, whose research team was verbally attacked by Sara’s father, former president Rodrigo Duterte, noted that “computational propaganda is more about propaganda than about computers.” According to him, the suite of tools as a mode of political communication is “ultimately focused on achieving the human desire of control.”

“Propaganda is a human invention, and it’s as old as society… As an expert on robotics once told me, we should not fear machines that are smart like humans so much as humans who are not smart about how they build machines,” he wrote.

Efforts to address the problems linked to the misuse of AI should not be limited to rectifying the mistakes of the developers of new technologies. Obviously, the issue is not just technical; it is also ethical.

The key to countering disinformation, whether it involves AI or not, lies in basics that are already known but have yet to be put into practice.

It’s about promoting honesty and accountability. It is about ensuring that there are disincentives for any form of deception or falsehood, especially when it comes from people in power. It is about creating a mindset that places a premium on empowerment rather than on partisanship and parochial concerns.

It is about creating an environment that encourages a thorough examination of claims made by influential personalities. It is about educating people about the proper use and possible effects of new technologies.

While some are disturbed by AI’s growing capability to mimic reality, we have to learn about it rather than shun it. Greater awareness of how the tool is being weaponized by truth twisters will make us less vulnerable to manipulation.

Be afraid not because AI can produce very realistic images and scenarios. Be afraid because nefarious individuals have access to it and many believe them.

The spread of any misleading AI-generated content is rooted in the dishonesty of people with ill intentions.

We must therefore fear the lie, not AI. F
