Issue #20 - £600 and your voice is mine forever

July 9, 2023
Free Edition
Generated with Stable Diffusion XL and ComfyUI
In This Issue

  • The Japanese bank Mizuho has decided to give 45,000 workers access to generative AI tools.
  • The members of the Directors Guild of America (DGA) have agreed on a new contract, and there’s a provision about AI.
  • Voice actors are discovering that their voices, and their jobs, are being replaced by AI.
  • GroupM estimates that AI is likely to be involved with at least half of all advertising revenue by the end of 2023.
  • Top UK universities are changing their minds (and their codes of conduct) about the use of generative AI.
  • The two Levidow, Levidow & Oberman lawyers who cited fake ChatGPT-generated legal research in a personal-injury case have been fined.

P.S.: This week’s Splendid Edition of Synthetic Work is titled When you are uncertain about triggering WWIII or not, ask ChatGPT.

In it, you’ll read what Bridgewater Associates, McCann Worldgroup, Insilico Medicine, and the US Air Force are doing with AI.

Intro

The plot thickens.

Things are starting to happen.

I don’t know about you, but after researching the impact of AI on jobs, the way we work, and our society since February, this month I feel, for reasons I can’t quite pin down, that the world is starting to change in a material way because of generative AI.

It’s a feeling more tangible than in any previous month.

Perhaps it’s just because the number of data points we are collecting here is no longer insignificant, and some patterns are starting to emerge.

Whatever the reason, I think I have enough data to show up on stage (or perhaps on YouTube?) and tell the story of how AI is impacting most of the industries of today’s economy.

This week I selected six canaries in the coal mine that, I think, convey the magnitude of the coming change better than most other stories.

What are you doing to seize this opportunity?

Alessandro

What Caught My Attention This Week

The first thing you might find interesting this week is that the Japanese bank Mizuho has decided to give 45,000 workers access to generative AI tools.

Taiga Uranaka, reporting for Bloomberg:

Mizuho Financial Group Inc. is giving all its Japan bank employees access to Microsoft Corp.’s Azure OpenAI service this week, making it one of the country’s first financial firms to adopt the potentially transformative generative artificial intelligence technology.

The banking giant will allow 45,000 workers at its core lending units in the country to test out the service, according to Toshitake Ushiwatari, general manager of Japan’s third largest bank’s digital planning department. Already, managers and rank-and-file employees are submitting dozens of pitches for ways to harness the technology even before the software is installed.

“There are many staff who are embracing ChatGPT in their private lives,” Ushiwatari said in an interview. “It’s like poking a beehive,” he said, referring to the enthusiastic response the firm’s move has sparked. “They think it will completely re-set the world, triggering disruptive innovation.”

This is an adoption pattern we have already seen in multiple companies in the Splendid Edition of Synthetic Work: “We are not quite sure what to do with this technology, but it’s very cool and we have massive FOMO. So, here. You figure it out.”

Jokes aside, this is the way. Large language models like GPT-4 are possibly the most general-purpose technology ever invented. OK, perhaps a piece of paper comes close, but it’s not quite as capable.

These AI models can be shaped to the specific needs of each team or individual. And the most qualified person to shape them is the person who will use them.

That’s why the tutorials on how to use GPT-4 for specific business use cases that I share in the Splendid Editions are so important. Even if they are not immediately useful to you, they can serve as a starting point, or training wheels, to understand how to reshape the AI the way you want and need.
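To make “reshaping” concrete, here is a minimal sketch of what it can look like in practice: nothing more than a system prompt written by the person who will actually use the model. I’m assuming access to OpenAI’s API here; the model name, the team, and the prompt are illustrative assumptions, not a prescription.

```python
# A minimal sketch, assuming the OpenAI Python library and an API key.
# The team, the prompt, and the task below are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # better: read it from an environment variable

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The "shaping" happens here: a system prompt written by the
        # person who will actually use the assistant, for their own work.
        {"role": "system",
         "content": "You are an assistant for a corporate credit team. "
                    "Answer concisely and flag any figure you are not certain about."},
        {"role": "user",
         "content": "Summarize the key risks in this loan application: ..."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Swap the system prompt and the same general-purpose model becomes a marketing copywriter, a contract reviewer, or a customer support triager. That is the point: the shaping costs one paragraph of text.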

Is your organization letting its workforce experiment with AI models and contribute ideas to increase productivity? Or is it locking down the corporate environment, forcing employees to use AI in secret?

More than 15 years ago, when AWS came to be and pioneered cloud computing, the companies that let their employees experiment with that new technology accumulated a massive advantage over their competitors that lasted more than a decade.

Think about it.

But Alessandro, think about security! Think about compliance! Think about the reputational risk!

Mizuho, like the many other financial organizations, law and accounting firms, and healthcare organizations we talk about in the Splendid Edition, thinks about these things, too. These are heavily regulated industries. And yet, they are letting their employees experiment.

What does it tell you?

Last month, Japan’s minister of education, culture, sports, science, and technology, Keiko Nagaoka, announced the decision not to enforce traditional copyright laws on AI-generated works.

Since then, the country seems driven by a new-found resolve to become a world leader in AI, accepting risks in a way that is uncharacteristic of its risk-averse culture.

As soon as Mizuho moves from the ideation phase to the implementation phase, and we get to know more about the use cases, I’ll add them to the AI Adoption Tracker.


The second thing that is worth your attention this week is that the members of the Directors Guild of America (DGA) have agreed on a new contract, and there’s a provision about AI.

Gene Maddaus, reporting for Variety:

The DGA announced Friday that 87% of the membership had voted in favor of the agreement, with 41% turnout. The guild said the turnout was the highest ever for a ratification vote, with 6,728 members voting out of 16,321 eligible.

In interviews, DGA members generally expressed support for the agreement, though some had reservations about the AI language.

The AI provision — the first in any guild contract — stipulates that generative AI does not constitute a “person,” and states that it will not replace the duties traditionally performed by guild members. But it does not prohibit AI, and mandates only “consultation” on how AI will be used in the creative process. It also does not include provisions governing how AI programs can be trained — which are key priorities for the WGA and SAG-AFTRA.

Many writer-directors, who are members of both the WGA and DGA, had publicly announced they would be voting no in solidarity with the WGA strike.

Some writers also criticized the DGA publicly for reaching the agreement, saying it would have been better to hold off on ratifying until the writers have a contract.

If you don’t remember what this is all about, go check Issue #12 – ChatGPT Sucks at Making Signs.

The Writers Guild of America (WGA) has been on strike for almost two months, impacting the production and broadcast of many films and TV series. By contrast, the Directors Guild of America (DGA), which just reached this agreement, has gone on strike only once in its history, in 1987, and only for minutes.

Minutes.

Why does this matter so much? Because, even if the Alliance of Motion Picture and Television Producers (AMPTP) doesn’t offer any additional concessions to the WGA, this is the first time a union contract mentions AI in an attempt to protect workers.

Other unions, across industries, will follow. This is just the beginning.

The next time you read a pundit telling you that AI’s impact on jobs will play out just as it always has, showing you newspaper articles and charts from 200 years ago, you can politely explain to this person that today’s conditions are not the same as they were 200 years ago.

You could also remind them that they cannot invoke the expression “past performance is not indicative of future results” only when it’s convenient for them.

I wrote a long essay on why AI is different from every other technology that has impacted productivity in the past in the Intro of Issue #15 – Well, worst case, I’ll take a job as cowboy.


The third thing worth your time this week is the story of voice actors discovering that their voices, and their jobs, are being replaced by AI.

We talked about this risk in Issue #2 – 61% of the office workers admit to having an affair with the AI inside Excel, when we discovered that Apple had started using synthetic voices for its audiobooks at the beginning of 2023.

And now we have this story, reported by Madhumita Murgia for Financial Times:

Greg Marston, a British voice actor with more than 20 years’ experience, recently stumbled across his own voice being used for a demo online.

Marston’s was one of several voices on the website Revoicer, which offers an AI tool that converts text into speech in 40 languages, with different intonations, moods and styles.

Since he had no memory of agreeing to his voice being cloned using AI, he got in touch with the company. Revoicer told him they had purchased his voice from IBM.

In 2005, Marston had signed a contract with IBM for a job he had recorded for a satnav system. In the 18-year-old contract, an industry standard, Marston had signed his voice rights away in perpetuity, at a time before generative AI even existed. Now, IBM is licensed to sell his voice to third parties who could clone it using AI and sell it for any commercial purpose. IBM said it was “aware of the concern raised by Mr Marston” and were “discussing it with him directly”.

Revoicer, the AI voice company, said Marston’s voice came from IBM’s cloud text-to-speech service. The start-up bought it from IBM, “like thousands of other developers”, at a rate of $20 for 1mn characters’ worth of spoken audio, or roughly 16 hours.

“[Marston] is working in the same marketplace, he is still selling his voice for a living, and he is now competing with himself,” said Mathilde Pavis, the artist’s lawyer who specialises in digital cloning technologies. “He had signed a document but there was no agreement for him to be cloned by an unforeseen technology 20 years later.”

Pavis said she has had at least 45 AI-related queries since January, including cases of actors who hear their voices on phone scams such as fake insurance calls or AI-generated ads. Equity, the trade union for the performing arts and entertainment industry in the UK, is working with Pavis and says it too has received several complaints over AI scams and exploitation in the past six months.

“We are seeing more and more members having their voice, image and likeness used to create entirely new performances using AI technology, either with or without consent,” said Liam Budd, an industrial official for new media at Equity. “There’s no protection if you’re part of a data set of thousands or millions of people whose voices or likenesses have been scraped by AI developers.”

Laurence Bouvard, a London-based voice actor for audio books, advertisements and radio dramas, has also come across several instances of exploitative behaviour. She recently received Facebook alerts about fake castings, where AI websites ask actors to read out recipes or lines of gibberish that are really only vehicles to scrape their voice data for AI models.

Some advertise regular voice jobs but slip in AI synthesisation clauses to the contracts, while others are upfront but offer a pittance in return for permanent rights to the actor’s voice. A recent job advertisement on the creative jobs marketplace Mandy.com, for instance, described a half-day gig recording a five-minute script on video to create AI presenters by tech company D-ID.

In return for the actor’s image and likeness, the company was offering individuals a £600 flat fee. D-ID said it paid “fair market prices”. It added that the particular advertisement was withdrawn and “does not reflect the final payment”.

“There is a danger every time a performer steps up to a mic or in front of a camera that they could be contracted out of their AI rights.”

£600 and your voice is mine forever.

And it seems it will only get worse, as John Gapper tells us in his report for Financial Times:

SiriusXM, the US radio broadcaster, plans to use AI to produce ads for smaller companies, offering them choices of AI-generated pitches, and then getting their pick read by an AI voice, rather than by expensive “voice talent”. The result is unlikely to be as persuasive as a human production but it will be cheaper and faster.

On the last point, I can guarantee you that the result will be as persuasive as a human production. You just have to listen to my Fake Show, a podcast I am building with synthetic voices, to hear how powerful they are.

You can bet that the Equity union is already monitoring very closely the new DGA contract with the AI provision that we talked about earlier.

In fact, the article closes:

Equity, which counts Hutton and Bouvard as members, has been calling for new rights to be encoded into the law, explicitly on time-limited contracts, rather than the industry standard of signing rights away in perpetuity. It also demands that the law include the need for explicit consent if an artist’s voice or body is going to be cloned by AI. Two weeks ago, the union put out a “toolkit” providing model clauses and contracts on the use of AI that artists and their agents can refer to.


The fourth thing that is interesting this week: GroupM, a media agency belonging to the WPP group, estimates that AI is likely to be involved with at least half of all advertising revenue by the end of 2023.

Daniel Thomas and Hannah Murphy, reporting for Financial Times:

AI is likely to be involved with at least half of all advertising revenue by the end of 2023, said GroupM. But while it has long been used extensively across media buying, the impact of generative AI technology in creating advertising has only started in practice.

Google plans to introduce generative AI into its advertising business over the coming months to help generate creative campaigns, while Meta is exploring similar tools.

“Computers can create things that look like they come from humans, it’s a pretty fundamental shift,” said one advertising boss, who predicted that this could hit jobs that were in effect the “plumbing” of the industry doing basic creative work. But he added: “The computer is not going to come up with that killer idea — they are going to tell you what’s been used before.”

We’ll see about that. What this advertising boss forgets is that no killer idea comes out of nothing. Humans mix and remix existing ideas to create new ones. That is creativity.

As I have recommended to all of you multiple times, this boss should really study the documentary Everything is a Remix.

More from the same article:

Multiple executives raised concerns about how AI would change how ad agencies charge for their work, with the concept of being able to bill according to the hours of work incurred likely to be under threat as campaigns may now take hours to produce rather than weeks. This could put more value on truly original creative work, said one ad boss.

Yannick Bolloré, chair of Vivendi’s supervisory board and boss of French agency Havas, compared the impact of AI on the industry to the invention of photography on painters.

“This did not kill the painters, but it killed the average painters. AI will never kill the great creative directors. But it could kill the average creative director.”

The problem here is that Yannick Bolloré doesn’t consider the possibility that, by transforming the economics of the advertising industry, AI might make it impossible for a great creative director to emerge in the first place.

Just as in the past, when almost only the children of affluent families could afford the luxury of becoming scientists, truly creative work that can beat AI output might become an activity reserved for the rich.

And who’s not rich these days?


The fifth thing worth noting this week is that top UK universities are changing their minds (and their codes of conduct) about the use of generative AI.

Sally Weale, reporting for The Guardian:

UK universities have drawn up a set of guiding principles to ensure that students and staff are AI literate, as the sector struggles to adapt teaching and assessment methods to deal with the growing use of generative artificial intelligence.

While once there was talk of banning software like ChatGPT within education to prevent cheating, the guidance says students should be taught to use AI appropriately in their studies, while also making them aware of the risks of plagiarism, bias and inaccuracy in generative AI.

Staff will also have to be trained so they are equipped to help students, many of whom are already using ChatGPT in their assignments.

All 24 Russell Group universities have reviewed their academic conduct policies and guidance to reflect the emergence of generative AI.

Developed in partnership with experts in AI and education, the principles represent a first step in what promises to be a challenging period of change in higher education as the world is increasingly transformed by AI.

The five guiding principles state that universities will support both students and staff to become AI literate; staff should be equipped to help students to use generative AI tools appropriately; the sector will adapt teaching and assessment to incorporate the “ethical” use of AI and ensure equal access to it; universities will ensure academic integrity is upheld; and share best practice as the technology evolves.

In case you are wondering, the list of the 24 universities that belong to the Russell Group is here; it includes prestigious schools like the London School of Economics (LSE), King’s College London, Imperial College London, the University of Oxford, the University of Cambridge, and the University of Edinburgh, which, according to Sequoia Capital, has trained more AI experts than any other university in Europe.

Generative AI is the tide that lifts all boats. And, as you have read over five months of Synthetic Work, it’s transforming the world under our noses. It’s good to see UK universities recognizing this and working to equip students to navigate this sea of change.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, and the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

By now, you should have heard the story of the US lawyer who decided to use ChatGPT to fabricate research supporting his case.

Why am I talking about this now? Because the judge finally ruled on the matter, giving us an insight into how the US legal industry is absorbing the generative AI tsunami.

Erin Mulvaney, reporting for The Wall Street Journal:

A Manhattan federal judge issued sanctions Thursday against two lawyers who cited fake ChatGPT-generated legal research in a personal-injury case, penalizing a blunder that made a New York firm an emblem of artificial intelligence gone wrong.

The judge, addressing a matter he described as unprecedented, imposed a $5,000 fine against the firm, Levidow, Levidow & Oberman, and two of its lawyers for using false AI-generated material in a legal submission on behalf of a man who alleged he was injured on an airline flight.

U.S. District Judge Kevin Castel said in his sanctions order that submitting fake research wastes the time of the court and opposing counsel, deprives clients of authentic legal arguments and “promotes cynicism about the legal profession and the American judicial system.”

The judge separately dismissed the lawsuit, on the grounds that it was untimely.

“There is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Castel said in his ruling. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Schwartz used OpenAI’s ChatGPT as he was doing research and preparing the submission.

Because Schwartz wasn’t admitted to practice in the New York court, LoDuca was the attorney of record for the case. Schwartz said he wasn’t aware the tool could make up cases, and LoDuca said he had no reason not to trust the research of his colleague.

Castel said the attorneys for a time “doubled down” and stood by the legal research after the court and opposing counsel pointed out that the cases cited didn’t exist. During a hearing this month, he said that several of the fake cases were, when read completely, “legal gibberish.”

The sanctions come after a Texas federal judge recently ordered lawyers in his court not to use artificial intelligence-reliant legal briefs.

The irony of all of this is that if you raise the point about AI transforming the way we work with professionals in various industries, you may be met with skepticism or even hostility.

There’s nothing to discuss.

It’s toy technology that nobody is using.

Be careful: by the time you are ready to admit that the world has changed, you might be the last person to realize it.


Want More? Read the Splendid Edition

Greg Jensen, co-CIO of Bridgewater Associates, gets into the details of how the firm is reinventing itself around machine learning:

specifically what we’ve done on the AI ML side is we’ve set up this venture. Essentially there’s 17 of us with me leading it. You know, I’m still very much involved in the core of Bridgewater, but the 16 others are a hundred percent dedicated to kind of reinventing Bridgewater in a way with machine learning.

We’re going to have a fund specifically run by machine learning techniques

on Bridgewater’s internal tests, you suddenly got to the point where it was able to answer our investment associate tests at the level of first year IA, right around with ChatGPT-3.5 and Anthropic’s most recent Claude. And then GPT-4 was able to do significantly better.

And yet it’s still 80th percentile kind of thing on a lot of those things

so if somebody’s going to use large language models to pick stocks, I think that’s hopeless. That is just a hopeless path. But if you use large language models to create some theories – it can theorize about things – and you use other techniques to judge those theories and you iterate between them to create a sort of an artificial reasoner where language models are good at certainly generating theories, any theories that already exist in human knowledge, and putting those things connected together.
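To picture the loop Jensen is describing, here is a minimal, entirely hypothetical sketch of a generate-and-judge iteration: a language model proposes theories, a separate non-LLM evaluator scores them, and the best candidate seeds the next round. Bridgewater has not published its implementation; every name and function below is an illustrative stub.

```python
# Hypothetical sketch of an LLM generate-and-judge loop.
# Nothing here reflects Bridgewater's actual system; both helpers are stubs.
import random

def generate_theories(seed: str, n: int = 5) -> list[str]:
    """Stand-in for an LLM call that proposes candidate market theories
    building on the seed idea (in practice: GPT-4, Claude, etc.)."""
    return [f"{seed} (refinement {i})" for i in range(n)]

def judge_theory(theory: str) -> float:
    """Stand-in for a separate, non-LLM evaluator, e.g. a statistical
    backtest that scores how well the theory fits historical data."""
    return random.random()  # placeholder score in [0, 1]

def artificial_reasoner(seed: str, rounds: int = 3) -> str:
    """Iterate between generation and judgment, feeding the best-scoring
    theory of each round back in as the next seed."""
    best_theory, best_score = seed, 0.0
    for _ in range(rounds):
        for theory in generate_theories(best_theory):
            score = judge_theory(theory)
            if score > best_score:
                best_theory, best_score = theory, score
    return best_theory

print(artificial_reasoner("Rising rates compress equity multiples"))
```

The design point is the division of labor: the language model is only trusted to generate hypotheses, never to grade its own work.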

But