Artificial intelligence: the positives and negatives, ethics, privacy, and geopolitics

At the 2023 Ubud Writers & Readers Festival (UWRF), renowned artificial intelligence expert Toby Walsh and independent investigative journalist Antony Loewenstein delved into the societal, economic, and personal impacts of AI.

Their wide-ranging discussion explored the technology’s positives and negatives, its ethics, and its implications for privacy.

Walsh has won the prestigious Humboldt Research Award. He has written numerous books, including Machines Behaving Badly: The Morality of AI, published in 2022, and his latest book, Faking It: Artificial Intelligence in a Human World.

One study estimates that AI touches our lives twenty times a day without our realising it, Walsh told the UWRF audience. It happens, for instance, every time we get directions from Google Maps or ask Siri a question.

Walsh notes, however, that AI that had previously been subtle and not so obvious to us, such as the algorithms on social media, has now, with programmes like ChatGPT, become much more overt.

In his new book, Walsh uses the term ‘ethics washing’ and cites Facebook’s oversight board as an example of a big-tech company trying to fool people into thinking that it takes their concerns seriously.

“We’ve seen no end of discussion from the tech companies about their AI and ethics, but, when the rubber hits the road, you see what they do: they fire their AI and ethics people,” Walsh said.

“It’s hard not to think that it’s a lot of window dressing, it’s a lot of talk, and not much in terms of actually constraining their behaviour.

“The fundamental problem is that there’s huge amounts of money at stake.”

OpenAI, the company behind ChatGPT, has gone in the past year from no income to a billion dollars a year, and from a valuation in the millions to 90 billion dollars, Walsh points out. It’s the fastest generation of wealth ever, he says.

Broaching one of the central and most challenging questions about AI, Loewenstein asked whether a machine could have a form of consciousness that we recognise. “We have no idea,” Walsh told the UWRF audience.

It’s possible, Walsh says, that machines might become conscious in some way, but it’s equally possible that consciousness is limited to biology and is something that won’t ever be made in silicon.

Walsh has written about three ways in which machines could perhaps acquire a form of consciousness: it could spontaneously emerge out of the complexity of the machine, it could be something that machines learn, or it could be something that is programmed.

He doesn’t see the last of these options as viable: it is hard, he says, to imagine how one could programme something when we don’t know what it is.

Loewenstein points to the fact that many nations, including the United States and Israel, are already using AI in war.

“Australia, amongst others, is embracing autonomous weapons with no oversight, alongside most Western nations and other non-Western states,” Loewenstein said.

“Now, already, there’s no transparency for victims of so-called traditional weapons in the last decades … So what’s the way to ensure a potential AI war future has any kind of regulation or guard rails?”

Walsh cites Ukraine, which has started building home-made autonomous drones “that are set off without any human oversight, to go off and kill”.

He says he has spoken at the United Nations half a dozen times, warning about the road that we are going down.

It’s the way that AI can be used for assassination that will make politicians sit up and take notice, Walsh says. He cites the use of a drone three years ago in an attempt to assassinate the president of Venezuela.

Loewenstein asked Walsh about the apocalyptic narrative that AI has the potential to destroy humanity entirely.

“We face the real existential risk,” Walsh told the UWRF audience. “But the most immediate existential risk, of course, is the climate emergency. The world is burning in front of our eyes … The modest risk that AI poses is much less than that.

“Machines are only allowed to do the things that we allow them to do, so if a machine goes off and causes nuclear war, what were we doing giving machines the nuclear trigger? That’s human stupidity.”

Walsh notes that Large Language Models (LLMs) like the one behind ChatGPT are being trained on copyrighted material, including his own books.

The solution, he says, isn’t filtering out copyrighted answers. “You’ll just have to do what should have been done in the first place. License and pay for the training data,” he tweeted recently.

Walsh was responding to a tweet about the lawsuit The New York Times has filed against OpenAI and Microsoft, which contends that millions of articles published by the newspaper were used to train automated chatbots that now compete with it.

Social media should have been a wake-up call about how technologies can be misused, Walsh says. The reach of social media and the ability to personalise content are, he says, now being combined with the ability to mislead people.

“We’re going to be in a world very soon where anything you see, anything you hear, you have to consider is synthetic. It’s made to fool you; it’s made to persuade you; it’s made to amuse you,” he told the UWRF audience.

“That’s the world that we’re going to be in. And that’s going to be driven through social media, which is a very effective delivery mechanism.

“But on top of that, we’re going to put in really persuasive content that’s personalised to you at speed and scale, at no cost at all.

“And I’m not sure we’re prepared for that world, a world in which everything you see is trying to manipulate what you think, what you buy, who you vote for.”

Walsh says he is a humanist at heart and has aimed for his new book to be a celebration of humanity.

“I believe that there are many unique characteristics that humans have that AI never will have that will become increasingly important in our lives as the machines take over the duller stuff that machines can do and we get to focus on our human values … our emotions, our empathy, our social intelligence,” he said.

Machines don’t have emotions or empathy, Walsh says. “They’re cold and clinical. They can do all the calculations and do all the routine stuff, but we’re never going to relate to them like we relate to each other,” he told the UWRF audience.

Humans can relate to each other because they share human experiences, Walsh says. “We all fall in love. We all have to contemplate our mortality. We all lose loved ones. Those are uniquely human experiences,” he said.

Machines are never going to fall in love, have to contemplate their mortality, or have to face the profound issues that humans have to face that make life so rich and rewarding, he says.

Loewenstein raised the issues of human bias, sexism, racism, homophobia, and other human failings that exist in real life and have inevitably seeped into AI models. How, he asked, can one address these issues when so many politicians and so much of the media are uncritically promoting AI?

Walsh doesn’t think there’s a perfect answer, because our choice of language is always political. Large Language Models are political choices, he says, and we choose our language models as we choose our newspapers: because they reflect our values.

Loewenstein also wonders whether privacy is possible in the 21st century. Walsh thinks that AI will eventually make it easier to keep information private: our data will increasingly be stored on our own devices rather than on Google’s servers, and we will be less obliged to share it. This, however, is still five to ten years away, he says.

Until then, Walsh says, we need to be careful: don’t tick the terms-and-conditions box, and don’t agree to cookies.

Loewenstein also raised the issue of AI in the context of geopolitics.

“China made its ambitions very clear a decade ago when they announced a national plan where they were going to seek economic and military dominance through the development of AI and quantum technology,” Walsh said. “And they’ve gone about that in short order.”

Today, Walsh says, China is pretty much neck and neck with the United States in terms of the number of cited papers, the number of AI patents, and the number of startups. The biggest Large Language Model in the world is Chinese, not American, Walsh points out, and the largest supercomputer in the world today is also Chinese.

China is also starting to develop really cheap AI-powered military platforms, Walsh says. A new world order is unfolding, he says, with the axis of power rapidly shifting east.
