Issue #35 - The Word of God

October 28, 2023
Free Edition
Generated with Stable Diffusion XL and ComfyUI
In This Issue

  • Intro
    • A new episode of Tech From the Future and pictures from my press tour in the US to talk about AI.
  • What Caught My Attention This Week
    • New York’s mayor, Eric Adams, cloned his voice to reach citizens who don’t speak English as their first language.
    • UK officials are using AI across eight government departments, for a wide range of decisions impacting the lives of millions of people.
    • A US lawyer decided to use ChatGPT to prepare the defense of his client. They lost the case.
  • The Way We Work Now
    • Generative AI is infiltrating religious practice more and more. As expected.
  • How Do You Feel?
    • Amazon wants to study how workers and the general public feel about the use of robots inside its warehouses. What robots, tho?
  • Putting Lipstick on a Pig
    • Finally, I can be pretty, too, in corporate meetings, thanks to Google Meet.
Intro

The team published a new video on my new YouTube channel, Tech From the Future, where I answer David’s questions about some groundbreaking research I’ve tested recently and what will come next.

If you are a C-Level executive or a business leader in any sort of organization, please know this: generative AI models can become your personal, synthetic advisors. Your AI council, as I call it. A group of experts, always within reach, ready to debate your problem and give you their best advice.

If you want to know more about this topic, I did a deep dive in Issue #31 – The AI Council.


In other news, I’m happy to share some of the pictures taken during the interviews with the press and on TV after my long trip to the US and the activities here in the UK.

(no orange glasses in these ones – forgot them at home)

Actually…

I didn’t travel to the US, and I didn’t give any interviews.

Earlier this week, I released an automation workflow that can generate a deepfake image like these in […checks notes…] 11.6 seconds on consumer hardware. It doesn’t require any extra work in Photoshop or other design tools.

If misused, this technology can generate thousands of these per day, and disseminate them across social media at the push of *one* button.

Anybody capable of reading, with no programming skills, can do the same thing I did and, in fact, many are doing it for a variety of purposes.

We have known about deepfakes for years now, and certain countries are moving to regulate them when used in specific contexts. But very few of us realize that this technology has become readily available to anybody and works on ordinary computers.

Just a few days ago, the US SEC Chair, Gary Gensler, told the Financial Times that without swift intervention it was “nearly unavoidable” that AI would trigger a financial crisis within a decade.

This is what he meant.

Anybody determined enough could use this technology to generate hundreds of credible news screenshots featuring well-known, respected figures, and spread those screenshots, all at once, across social media, claiming whatever they want about a public company, world events, or entire industries, triggering mass selling and a consequent market crash.

This technology is way more powerful than most of us realize. And jobs can be impacted in many ways.

Alessandro

What Caught My Attention This Week

New York’s mayor, Eric Adams, cloned his voice to reach citizens who don’t speak English as their first language.

Emma G. Fitzsimmons and Jeffery C. Mays, reporting for the New York Times:

The calls to New Yorkers have a familiar ring to them. They all sound like Mayor Eric Adams — only in Spanish. Or Yiddish. Or Mandarin.

The mayor is using artificial intelligence to reach New Yorkers through robocalls in a number of languages. The calls encourage people to apply for jobs in city government or to attend community events like concerts.

“I walk around sometimes and people turn around and say, ‘I just know that voice. That voice is so comforting. I enjoy hearing your voice,’” the mayor said at a recent news conference. “Now they’re able to hear my voice in their language.”

Privacy advocates still criticized the robocalls, arguing that it was “deeply Orwellian” to try to trick New Yorkers into thinking that Mr. Adams speaks languages that he does not. The group has previously criticized the mayor’s embrace of facial recognition technology and his dispatch of a police robot to patrol the Times Square subway station.

The robocall effort has cost about $32,000, city officials said, and the chat bot cost about $600,000 to develop. The city used the Voice Lab program by a company called Eleven Labs to generate the phone messages.

Do you remember all the past issues of Synthetic Work where I kept going on and on about the emotional impact of voice interactions and how critical a role synthetic voices will play in the future?

Well, I meant this.

ElevenLabs’ technology is the same one I used to clone my voice in last week’s Splendid Edition of Synthetic Work: Issue #34 – Talk to me

Let’s continue the article:

New York City’s embrace of the technology came this week as Mr. Adams announced a 50-page “action plan” for artificial intelligence — an effort to “strike a critical balance in the global A.I. conversation,” he said, by embracing its benefits while protecting New Yorkers from its pitfalls.

Mr. Adams also introduced a new chat bot that he said could eventually be used to field basic questions received on the city’s 311 help line.

Mr. Adams said that 70 percent of 311 calls were simple questions about things like alternate side parking and that those could be handled by a chat bot, allowing city workers to focus on more complex questions and reducing wait times for callers.

By the way, as a reminder, no human called those citizens, and no human will answer the phone to field their questions.

You know when you see an animated gif online with a gradient of colors that changes oh so very slowly? It seems like the color never changes and then, all of a sudden, as if snapping out of hypnosis, you realize that the color is completely different.


UK officials are using AI across eight government departments, for a wide range of decisions impacting the lives of millions of people.

Kiran Stacey, reporting for The Guardian:

Government officials are using artificial intelligence (AI) and complex algorithms to help decide everything from who gets benefits to who should have their marriage licence approved, according to a Guardian investigation.

Civil servants in at least eight Whitehall departments and a handful of police forces are using AI in a range of areas, but especially when it comes to helping them make decisions over welfare, immigration and criminal justice, the investigation shows.

The Guardian has uncovered evidence that some of the tools being used have the potential to produce discriminatory results, such as:

  • An algorithm used by the Department for Work and Pensions (DWP) which an MP believes mistakenly led to dozens of people having their benefits removed.
  • A facial recognition tool used by the Metropolitan police that has been found to make more mistakes recognising black faces than white ones under certain settings.
  • An algorithm used by the Home Office to flag up sham marriages which has been disproportionately selecting people of certain nationalities.

The NHS has used AI in a number of contexts, including during the Covid pandemic, when officials used it to help identify at-risk patients who should be advised to shield.

The Home Office said it used AI for e-gates to read passports at airports, to help with the submission of passport applications and in the department’s “sham marriage triage tool”, which flags potential fake marriages for further investigation.

An internal Home Office evaluation seen by the Guardian shows the tool disproportionately flags up people from Albania, Greece, Romania and Bulgaria.

There’s an argument to be made that the humans AI replaced in these tasks were not necessarily less biased. But without appropriate research and systematic auditing, we’ll never know for sure.

This is one of the risks when we talk about the potential impact of AI on jobs: employers might eliminate jobs even when there’s no evidence that AI is doing a better job than people.

Hopefully, this will be discussed by our Prime Minister (I’m a British citizen) during the global AI Safety Summit he arranged for next week, instead of nonsensical talk about far-fetched global extinction risks.


A US lawyer decided to use ChatGPT to prepare the defense of his client. They lost the case.

Mike Ives, reporting for the New York Times:

A founding member of the hip-hop group the Fugees has requested a new trial for a foreign influence scheme after arguing in part that his lawyer used artificial intelligence software to craft a “frivolous and ineffectual” closing argument.

In April, the rapper Prakazrel Michel was found guilty in federal court of orchestrating an illegal international conspiracy, in which he took millions of dollars from Jho Low, a Malaysian financier who was seeking political influence in the United States. Mr. Michel, known as Pras, was convicted on 10 criminal counts that included money laundering and witness tampering. He faces up to 20 years in prison.

In a motion for a new trial this week, Mr. Michel’s new legal team said the lawyers who defended him during the trial in U.S. District Court in Washington had been “deficient throughout.” They singled out the lead lawyer, David E. Kenner, saying that he had misunderstood the facts of the case and ignored “critical weaknesses” in federal prosecutors’ arguments, and that he used an experimental A.I. program to create a closing argument that made “frivolous” claims.

Mr. Michel’s lawyers also wrote that Mr. Kenner and another lawyer, Alon Israely, “appear to have had an undisclosed financial interest” in the program, EyeLevel.AI. The motion cited a news release from EyeLevel that mentioned a partner company, CaseFile Connect, the website of which lists the same Los Angeles address as Mr. Kenner’s law firm.

Neil Katz, the founder and chief operating officer of EyeLevel.AI, said on Thursday that it was “categorically untrue” that the trial lawyers had had an undisclosed financial interest in the company. He added that neither CaseFile Connect nor the lawyers at Mr. Kenner’s firm had a financial stake in his company.

Regarding the role his company’s software played in the case, Mr. Katz said that it merely allowed the lawyers to conduct research and analysis in real time based on trial transcripts.

The Legal industry has been one of the quickest to adopt generative AI, as we have amply documented in the Splendid Edition of Synthetic Work. Over there, we often mention AI companies and early adopters doing exactly what the lawyers in this story did, according to that last quote.

The Legal industry has also been the one that ridiculed itself the most in its use of generative AI.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, and the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

Generative AI is infiltrating religious practice more and more. As expected.

Najmeh Bozorgmehr, reporting for the Financial Times:

“Robots can’t replace senior clerics, but they can be a trusted assistant that can help them issue a fatwa in five hours instead of 50 days,” said Mohammad Ghotbi, who heads a state-linked organisation in Qom that encourages the growth of technology businesses.

The clerical AI push is still very much in its infancy. Ghotbi said a few dozen projects such as his were under way in Qom and elsewhere.

Iran’s religious establishment has been looking at ways to harness the technology since Qom’s first AI conference was held in 2020. The head of Qom Seminary, the largest such institution in the Shia world, recently opened up to how AI could accelerate the Islamic studies of senior clergy and speed up their communication to the public.

“The seminary must get involved in using modern, progressive technology and artificial intelligence,” Ayatollah Alireza Arafi said in July. “We have to enter into this field to promote Islamic civilisation.”

The city’s leading AI research centre, the Noor Computer Centre for Islamic Sciences Research, is affiliated with the seminary and has access to its centuries-old scrolls and other ancient data sources that could be fed into algorithms.

Ayatollah Ali Khamenei, Iran’s supreme leader, has also urged clergy to pay more attention to the possibilities of AI, saying in June that he wanted the country to be “at least among the top-10 countries in the world in terms of artificial intelligence”.

But he also said that while “the tools change . . . what doesn’t change are the goals” of the Islamic republic.

Meanwhile, further away, South Korean Christian churches and pastors have started using generative AI for sermons, too.

Song Jung-a and Christian Davies, reporting for the Financial Times:

Online church services using artificial intelligence are rapidly becoming an essential part of worship in Korea, where Christianity is the biggest religion, as tens of thousands turn to chatbots and audio bibles for spiritual sustenance.

This year, local start-ups have launched generative AI bible study and prayer service apps, which in particular pull in young Protestants.

Pastors have welcomed the time the technology frees up for them to take care of their flock, who account for about a fifth of South Korea’s 52mn population.

Awake Corp, the developer of ChatGPT-based bible chatbot service Ask Jesus — now rebranded as Meadow — has since its launch in March attracted about 50,000 users, including 10,000 from outside Korea. The app has drawn Christians in Muslim countries such as Pakistan as well as in the US and other western countries.

The app has generated interest from churches and pastors, who use Awake’s AI-driven WeBible web service to write sermons. When a pastor asks about a certain section of the Bible, the service can offer detailed explanations, identify main messages and points of reflection, and suggest a title for the sermon.

“We faced strong resistance from churches initially with their suspicion that we are trying to replace God and pastors,” said Kim Min-joon, Awake’s chief executive. “But pastors began to appreciate our service as it helps them save time in preparing for sermons, and find more time to take care of lonely, troubled followers.”

Awake changed the name of its app from Ask Jesus to Meadow after it realised some users regarded the chatbot’s answers as the word of God. “AI is just a technology,” said Kim. “I just hope our service will be used as a digital missionary tool.”

Meadow is based on Open AI’s ChatGPT technology but Awake has trained its chatbot with its own vast theological database and used prompt engineering — optimising textual input to communicate effectively with large language models — to prevent AI “hallucinations”, which is a phenomenon wherein a large language model creates inaccurate output. A committee composed of pastors continuously reviews the accuracy of the chatbot’s answers.

About 20 per cent of 650 Protestant ministers in Korea recently surveyed by the Ministry Data Institute said they have used ChatGPT to create sermons and about 60 per cent of them found ChatGPT useful in coming up with ideas for sermons.

Korean churches are also relying on an AI-backed audio bible platform, Biblely, developed by start-up Voiselah, for their missionary work. Biblely has created audio bibles recorded with pastors from about 50 churches, using generative AI technology trained on each pastor’s voice.

Choo Hun-yup, chief executive of Voiselah, said demand for Biblely surged during the Covid pandemic when the government suspended large-scale religious services.

Choo added that many churchgoers are inspired by the AI-powered audio bibles, unaware that the recordings are generated with the latest technology.
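As an aside, the “prompt engineering” Kim describes above — confining a model to a curated theological database so it doesn’t hallucinate — is conceptually simple. Here is a minimal, hypothetical sketch of the idea; the function name, instruction wording, and sample passage are my own illustration, not Awake’s actual implementation:

```python
# Hypothetical sketch of hallucination-limiting prompt engineering:
# the model is told to answer only from the supplied passages and
# to decline when they don't contain the answer.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that confines the model to the given passages."""
    # Number each passage so the model can cite its source.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the numbered passages below. "
        "Cite the passage number you relied on. If the passages do not "
        "contain the answer, reply exactly: 'I don't know.'\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "Who wept at the tomb of Lazarus?",
    ["John 11:35 — Jesus wept."],
)
```

In a real service, this prompt would be sent to the model through an API call, and — as the article notes — a committee of pastors would still review the answers, because prompting reduces hallucinations but does not eliminate them.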

We talked about the implications of AI on religion in the second issue of this newsletter, Issue #2 – 61% of the office workers admit to having an affair with the AI inside Excel. Such is the importance of this topic. I recommend you read that issue.

Why do we talk about this and why is it so important?

Well, first, because even being a religious leader is a job. AI doesn’t care about the job title, and if your job is dealing with words, you are a target. Generative AI can be more persuasive, tireless, ubiquitous, and cheaper than you.

Second, if established religions can use AI, so can aspiring new religions. Job displacement doesn’t happen just by the hand of technological advancement. It also happens by the hand of competitors.

And given that other forms of AI, like deepfakes, can make religion more evocative than any human ever could, aspiring new religions have as much chance as established religions to gain followers.

Religion is not just about compassion and hope. It’s also about control. This will become a huge topic in the years to come.

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful without taking this into account.

Amazon wants to study how workers and the general public feel about the use of robots inside its warehouses. What robots, tho?

Brian Heater, reporting for TechCrunch:

The study seems less concerned with actual job numbers, and more with how human employees and the public feel about the inevitable increase of robotics and AI in warehouses, manufacturing facilities and other industrial settings.

Amazon Robotics’ Chief Technologist Tye Brady did, however, address the question of job numbers ahead of today’s event, noting:

“We have more than 750,000 mobile robots in our operations and thousands of other robotic systems that help move, sort, identify and package customer orders. It’s taken us more than 10 years to reach this scale. During that time, Amazon has hired hundreds of thousands of employees to work in our operations. We take a purpose-driven approach to how we design and deploy technology at our facilities and we consistently prioritize using robots to support safety and ease everyday tasks for our employees.”

The study will be applied to key facets of robotic developments, including the discipline of human-robot interaction (HRI), a field that pretty much does what it says on the tin.

That’s disingenuous.

These are the robots that Amazon has used so far in its warehouses:

How do you feel about the warehouse worker job when you see them?

These, instead, are the new robots that Amazon just started testing:

How about now? How do you feel about those warehouse workers now?

This robot, called Digit, is made by Agility Robotics, and if you spend a bit of time reviewing their published material, you’ll notice that their vision is for these robots to replace humans in many other functions.

How do you think people will feel when they are given the chance to stop doing these drudgery jobs, but are not told what the path toward a better life looks like?

Putting Lipstick on a Pig

This section of the newsletter is dedicated to AI tools and services that help people and things pretend to be who/what they are not. Another title for this section could be Cutting Corners.

Finally, I can be pretty, too, in corporate meetings, thanks to Google Meet.

Jess Weatherbed, reporting for The Verge:

A highly requested feature, according to Google, is finally being introduced in Google Meet that allows users to apply ‘beauty’ effects during video calls.

There are two portrait modes available that provide different levels of complexion smoothing, under-eye lightening, and teeth whitening. “Subtle” mode, as the name suggests, provides very light cosmetic adjustments, while “Smoothing” mode is a touch heavier with the enhancements.

Portrait touch-up will be switched off by default and can be enabled in the Google Meet settings. The feature is only available to users with premium Google accounts, including Business Standard, Business Plus, Enterprise Essentials, Education Plus, Google One, and Google Workspace Individual accounts. Portrait touch-up isn’t available to users with a personal Google account.

Just to be clear, Google Meet Subtle and Smoothing modes are Snapchat’s beauty filters for adults. And, of course, we’ll have to pay to pretend to be who we are not.

Breaking AI News

Want More? Read the Splendid Edition

This week’s Splendid Edition is titled X-Rays for Vans.

In it:

  • What’s AI Doing for Companies Like Mine?
    • Learn what Deliveroo, Moody’s, and Amazon are doing with AI.
  • A Chart to Look Smart
    • A creative application of generative AI produces accurate simulations of disease progression with limited data.
  • The Tools of the Trade
    • A local ChatGPT that, finally, can be used to chat with multiple documents (yes, Excel spreadsheets, too).