Free Edition Sample

Issue #2 - 61% of the office workers admit to having an affair with the AI inside Excel

March 3, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

The feedback I got about the first issue, sent out last week, is better than I expected. Readers have said things like:

I really like the newsletter layout. It is chocked full of useful information, insights and humor. Hats off to you on the first edition!

or

I’m pretty disappointed. Didn’t you promise I was probably going to be disappointed by the newsletter? Turns out I’m not. It’s awesome!

Just to be clear: I did not pay these people. That’s why I’m putting their quotes on a dedicated Testimonials page. You know, to increase peer pressure.

One thing that some people asked is if Synthetic Work is going to be a weekly newsletter. The answer is: yes, that’s the goal.

And to be sure it’ll be 10x harder for me to achieve that goal, this week I also announced a second project: a video podcast about Artificial Intelligence in Italian, hosted on the famous website 01net and done in collaboration with the Italian media company Tecniche Nuove.

It’s a weekly 10-minute pill for people that didn’t find the subscribe button on the Synthetic Work website. The first episode is out and you can tell why I detest being in front of a camera.

Anyway.
Alessandro

In This Issue

  • GPT-4 might have enough memory to remember all your shameful secrets and use them against you in future conversations
  • Religious leaders are freaking out
  • OpenAI is planting the seeds to guarantee the ubiquity of its AI
  • Wealthy businessmen used to record their notes with a machine bigger than a horse head
  • When your mother was saying that the audiobook narrator is not a real job, she was onto something
  • People fall in love with AI more easily and faster than in Argentinean telenovelas

P.s.: This week’s Splendid Edition of Synthetic Work is titled Law firms’ morale at an all-time high now that they can use AI to generate evil plans to charge customers more money and it’s about AI infiltrating the Legal industry.

What Caught My Attention This Week

My problem with this newsletter is that every day at least ten exceptionally interesting things happen around AI technologies, and I can only pick 1-2 out of an entire week to discuss here. So, throughout the day, if you could observe me the way you observe pandas at the zoo, you’d see me screaming in front of the screen “Arghhhh this is important” every hour or so.

Hence, with much frustration, here are a couple of things worth paying attention to this week.

The first one.

Travis Fisher is the guy that built a Twitter bot for ChatGPT. You tweet your question to @ChatGPTBot and you get the answer. Except that Travis offered his bot for free, so it became instantaneously too popular and hit the maximum number of requests that OpenAI accepts. Now the bot has a limit of 2,000 tweets per day and if you send it your question you’ll enter a queue so long that your answer will arrive in 7 years. Super useful.

Now, Travis is on good terms with OpenAI (or, at least, he was), and recently received a confidential document from them. The document details the pricing of a new service that the startup is about to launch: Foundry.

We don’t care about Travis’ chatbot or OpenAI Foundry.

What we care about is that, in the Foundry document, there’s a mention of an upcoming AI model called DV that apparently can do something extraordinary (don’t worry about the gibberish, I’ll explain below):

I hereby explain the gibberish: when you interact with an AI like ChatGPT, the large language model (LLM) underneath it must memorize what you are saying and what it’s answering, so that the conversation can remain coherent. This memory is technically called the “context window”.

Today’s LLMs have small context windows. They can remember only a few phrases that have been said in a conversation. So, when you talk to an AI assistant/chatbot for a long period, it eventually starts to repeat itself or contradict itself. The AI no longer remembers what was said before, and generates a new version of its answers.

Now, this new AI model in the OpenAI document, possibly the much-awaited GPT-4, sports a huge context window compared to today’s standards. It fits 32,000 tokens (a token is the equivalent of a word or a portion of a word – very confusing, but it doesn’t matter).

Why is this so interesting?

Because a context window of 32,000 tokens can store roughly half of a book of average length. That means that OpenAI’s upcoming AI will be able to remember a lot of what is being said in a conversation (imagine multiple chapters of a book) and therefore sustain long interactions with people.
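If you want a concrete feel for what 32,000 tokens means, here is a minimal sketch in Python that counts the tokens in a piece of text with tiktoken, OpenAI’s open-source tokenizer, and compares the count against a 32,000-token window. The sample conversation and the choice of encoding are my assumptions, made purely for illustration.

```python
# A rough sketch: estimate whether some text fits in a 32,000-token context window.
# Assumes the tiktoken library; the sample conversation is invented.
import tiktoken

CONTEXT_WINDOW = 32_000  # tokens, the figure reported for the upcoming model

encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent OpenAI models

conversation = (
    "Patient: I have been feeling a bit low lately and I can't say why.\n"
    "AI: I'm sorry to hear that. Can you tell me when you first noticed it?"
)

token_count = len(encoding.encode(conversation))
print(f"{token_count} tokens used, {CONTEXT_WINDOW - token_count} tokens of memory left")
```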

And your point being?

The point is that it becomes possible to think about business scenarios that were impossible before. The first industry that comes to mind is Health Care.

Think, for example, about an AI designed to be a companion for elderly people, to improve their morale or recover part of their productivity. Or, an AI designed to be a therapist for patients that are affected by minor depression.

None of this is possible if the AI cannot remain coherent for, say, a one-hour conversation.

 

The second thing that caught my attention: the top religious leaders of the world are paying attention to AI.

Madhumita Murgia, a European Technology Correspondent, writes for the Financial Times:

The summit was called to discuss the broad umbrella of artificial intelligence, including decision-making systems, facial recognition and deepfakes.

Before meeting with the Pope, three of the leaders — Archbishop Vincenzo Paglia, Chief Rabbi Eliezer Simha Weisz of the Council of the Chief Rabbinate of Israel, and Sheikh Abdallah bin Bayyah of the UAE, recognised as one of the greatest living scholars on Islamic jurisprudence — articulated their worries. The sheikh feared societal division due to misinformation, and threats to human dignity because of Big Data’s problems with privacy. The archbishop spoke of AI being used to curtail the freedom of refugees, through automated borders; Rabbi Weisz worried that we would forget that intelligence alone is not what makes us human.

Towards the end of the morning, the three figureheads — the elder Sheikh bin Bayyah represented in person by his son — signed a joint covenant alongside the technology companies, known as the Rome Call. The charter proposes six ethical principles that all AI designers should live by, including making AI systems explainable, inclusive, unbiased, reproducible and requiring a human to always take responsibility for an AI-facilitated decision.

I’ll not comment on this. Instead, I’ll say that this event reminds me of the last chapter of the book Homo Deus, by the Israeli historian Yuval Noah Harari. A book that I can’t recommend enough.

The chapter, titled The Data Religion, reads:

Dataism declares that the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing. This may strike you as some eccentric fringe notion, but in fact it has already conquered most of the scientific establishment. Dataism was born from the explosive confluence of two scientific tidal waves. In the 150 years since Charles Darwin published On the Origin of Species, the life sciences have come to see organisms as biochemical algorithms.

Not only individual organisms are seen today as data-processing systems, but also entire societies such as beehives, bacteria colonies, forests and human cities. Economists increasingly interpret the economy too as a data-processing system. Laypeople believe that the economy consists of peasants growing wheat, workers manufacturing clothes, and customers buying bread and underpants. Yet experts see the economy as a mechanism for gathering data about desires and abilities, and turning this data into decisions.

As both the volume and speed of data increase, venerable institutions like elections, political parties and parliaments might become obsolete – not because they are unethical, but because they can’t process data efficiently enough.

Yet power vacuums seldom last long. If in the twenty-first century traditional political structures can no longer process the data fast enough to produce meaningful visions, then new and more efficient structures will evolve to take their place. These new structures may be very different from any previous political institutions, whether democratic or authoritarian. The only question is who will build and control these structures. If humankind is no longer up to the task, perhaps it might give somebody else a try.

Like capitalism, Dataism too began as a neutral scientific theory, but is now mutating into a religion that claims to determine right and wrong. The supreme value of this new religion is ‘information flow’. If life is the movement of information, and if we think that life is good, it follows that we should deepen and broaden the flow of information in the universe. According to Dataism, human experiences are not sacred and Homo sapiens isn’t the apex of creation or a precursor of some future Homo deus. Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to pervade the whole galaxy and even the whole universe. This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.

Today’s religious people have a job, too.

A Chart to Look Smart

The easiest way to look smart on social media and gain a ton of followers? Post a lot of charts. And I mean, a lot. It doesn’t matter about what. It doesn’t even matter if they are accurate or completely made up.
You won’t believe that people would fall for it, but they do. Boy, they do.

So this is a section dedicated to making me popular.

The wonderful research firm CB Insights recently published this chart mapping all the investments that OpenAI has made since October 2022 with its Startup Fund.

This is how it works: OpenAI has put together an investment fund from a number of limited partners (LPs), including Microsoft, and it’s giving that money to a number of AI-first startups.

By the way, AI-first means that the company’s core business depends on AI models and systems. Not like the typical deceitful technology vendor, where one guy, somewhere in a cubicle in a remote office, uses AI once to optimize one value on one dashboard, and the PR department sends out a press release announcing that they are now an AI vendor and the party can get started.

These companies get something else on top of the money. They get access to the latest, most powerful AI model developed by OpenAI, before anybody else on the market. For example, these lucky startups got access to GPT-4, testing and tweaking it with their users. OpenAI helps these companies gain a huge competitive advantage and, in return, secures a future revenue stream on its path to profitability.

This week, the paid edition of Synthetic Work, called The Splendid Edition, is fully dedicated to the Legal industry and I’ve discussed one of the startups in the chart, Harvey AI, and how it’s impacting the legal profession.

OK. Why do we care about this chart in a place like Synthetic Work?

We care because the investment dynamic captured in this chart is propelling the adoption of AI across a wide range of products and industries, changing the way we do work.

I’ll give you one example. The video editing and content production tool that I used to create the first video of my new video podcast (the one I mentioned at the beginning of this newsletter) is called Descript. It’s right there in the chart.

Descript is using AI to transform the way people do video editing. Their AI transcribed my video in Italian in real time and then allowed me to remove some of my most atrocious sentences by simply selecting the words in the document and hitting delete. Just like you’d edit a Word document.

Without the right expertise (which I don’t have), it would have taken hours to achieve the same with a professional tool like DaVinci Resolve or Adobe Premiere.

So my productivity as an information worker (or better, a guy that doesn’t know what he’s doing in front of a camera) has gone up massively thanks to AI, and it has allowed me to squeeze in this additional weekly project.
On the other hand, this week, somebody who works as a professional transcriber won’t be able to buy that nose and ear trimmer he really wanted.

The sort of injection of capital captured in our chart is typically led by venture capital firms. That’s still the case, but OpenAI is doing something on top of that, which leads to a much faster rollout of AI across our daily productivity tools.

The Way We Used to Work

A section dedicated to archive photos and videos of how people used to do things compared to now.

When we think about how artificial intelligence is changing the nature of our jobs, these memories are useful to put things in perspective. It means: stop whining.

Given that we talked about Descript and professional transcribers, Benedict Evans reminds us how people would record memos in 1955:

Benedict also reminds us that the cost of this thing was $172.47 / month when adjusted for inflation.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it's creating, or the old jobs it's making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

Do you listen to a lot of audiobooks? It’s a big business. In 2022, Spotify’s CEO believed that it was a $70 billion opportunity.

To make an audiobook, you need a professional narrator. This person charges between $250 and $500+ per finished hour (PFH), meaning the 60 minutes of actual content narrated in the final product.

For each of those finished hours, narrators work at least 3x as long to record the content, do retakes, polish the audio, etc. So audiobook narrators don’t really make a $250 minimum per hour. They make a third of that or less.

You can imagine where I’m going here.

At the beginning of the year, Apple introduced AI voice narration in its Books app. These synthetic voices are trained on millions of hours of recorded human voices narrating the content.

All Apple has to do after that is tweak a few parameters and TADA! now they have an infinite number of people that don’t exist narrating books for their users.

Now. If you are thinking that this is a gimmick because synthetic voices suck, I have bad news for you. Modern AI has revolutionized the text-to-speech field, as we used to call it, and today’s voices are nothing like the robotic voices you heard until last year.

I have a third project to announce. Next week. At that time, I hope you’ll realize how incredible voice synthesis technology has become. Until then, you should really try this Language of Love audiobook. First of all, it sounds like a masterpiece of literature that should be mandatory in schools. Second, the voice is really impressive.

Of course, audiobook narrators are not happy at all.

Shubham Agarwal, reporting for Wired, writes:

Gary Furlong, a Texas-based audiobook narrator, had worried for a while that synthetic voices created by algorithms could steal work from artists like himself. Early this month, he felt his worst fears had been realized.

Furlong was among the narrators and authors who became outraged after learning of a clause in contracts between authors and leading audiobook distributor Findaway Voices, which gave Apple the right to “use audiobooks files for machine learning training and models.”

Surprise!

More from the same article:

“It feels like a violation to have our voices being used to train something for which the purpose is to take our place,” says Andy Garcia-Ruse, a narrator from Kansas City.

Does this mean that, in the future, the job of audiobook narrator will disappear? Perhaps. Or, perhaps, it will disappear in its current form.

Perhaps, only the professional voice actors gifted with the most alluring voice will continue to work, selling the rights to synthesize their voices for royalties, and allowing companies like Apple, Amazon, Spotify, etc. to use those voices to read millions of books instead of a handful.
Or the celebrity of the day. Because who doesn’t want to be knocked out by the voice of David Attenborough narrating a torrid novel titled Language of Love?

The good news for the rest of us is that, at the expense of the audiobook narrators of the world, we can now create our own audiobook at a tiny fraction of the cost of the past. Now, we just need to learn how to write something that is not garbage. Actually, nah. There’s ChatGPT for that.

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, how it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

In Issue #1 of Synthetic Work, titled When I grow up, I want to be a Prompt Engineer and Librarian, we wondered about the psychological implications for humans interacting with large language models at work, when the companies that adopt AI (both the technology providers and our employers) don’t implement enough safeguards.

We used Evil Bing threatening Microsoft users as an example, and I promised that, for Issue #2, it would get weirder.

Read this, carefully, and then I’ll explain:

So. Replika is an AI service available online, on smartphones, or in VR. Its pitch is: The AI companion who cares. Always here to listen and talk. Always on your side.

The story of how Replika came to be is a fascinating one, straight out of an episode of Black Mirror, and I highly encourage you to watch it.

Replika features several LLMs, more or less coherent and credible in their interactions with people, depending on their complexity.
People that use the free version of Replika get to talk with the simpler and less capable model, while paid users can talk with the more sophisticated model, capable of long and rich conversations.

For a period, they used OpenAI GPT-3 (the AI model before ChatGPT), but eventually, they developed their own LLMs.

These proprietary LLMs were so good that people started to fall in love with the AI.
These users didn’t understand that a large language model doesn’t understand what it’s saying and is simply exceptionally good at predicting the best next word to say to a human. Or maybe they understood it and didn’t care.

I know you don’t believe me, so please go ahead and read the hundreds of posts on the subject on the Reddit forum dedicated to Replika. I’ll wait.

However, this is not the problem. The problem with Replika is that, as things started to get out of hand, the company severely constrained its AI companions and removed the possibility of engaging in so-called erotic roleplay (ERP).

In other words, people fell in love after a good bit of cyber sex.

You take cyber sex away, and people revolt and end up heartbroken, like in the message we saw at the beginning of this section.

And, this happened even if the AI companion looked nothing like a real human. Imagine what happens when these chatbots start to look real. An Israeli company called D-ID is famous for using generative AI to produce more realistic-looking avatars, and this week they launched a new service that allows any company to create one that interacts with users thanks to an LLM.

Imagine the one in the video below, but capable of the interactions that Replika offers:

Why are we talking about all of this on Synthetic Work?

Because the use of AI for certain business applications, without enough safeguards, might have a profound impact on human well-being.

For example: what happens if users can develop a very close relationship with the AI that is now being used in the second-biggest law firm in the UK (something I’ve discussed in the Splendid Edition of Synthetic Work this week) and then you take that capability away?

Or: what happens in the Health Care scenarios I described in the What Caught My Attention This Week section if patients develop feelings for that AI?
Some people already started exploring what can be done in that area. If you are keen to read boring academic papers on the subject, go for it: Seniors’ acceptance of virtual humanoid agents.

The example of Replika is proof that these are no longer hypothetical scenarios that we can scoff at.

Also, what happens to all the things we tell AI when we fall in love with it? This week Snapchat launched its own AI companion (powered by ChatGPT, of course). How long before TikTok and Facebook do the same?

The answers to these questions are in the movie Her (they are not, but you should really watch/rewatch the movie).

Given that we’ve talked about audiobooks:

Me: Tell me a joke about books
ChatGPT: Why did the book join Facebook? To find its long lost cover!

I’m going to cancel this section of the newsletter…

Want More? Read the Splendid Edition

To understand how AI is infiltrating and changing the Legal industry we need to start from an app called DoNotPay.

The company behind it launched in 2015 with a unique mission: automatically sort out minor annoyances in the daily life of its users, like contesting parking tickets or utility bills, cancelling free trials, etc.
You pay a subscription fee ($36 every three months in the US, £36 every two months in the UK) and you can sue as many people as you want, as many times as you want. Really.

I tried it many years ago, when it became available in the UK and, at the time, it was just £2 / month. It wasn’t very impressive either: the typical, frustrating chatbot that is supposed to help you with a problem but instead makes you want to smash the computer against the wall. I can’t say it was helpful in my particular case.

Issue #20 - £600 and your voice is mine forever

July 9, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

The plot thickens.

Things are starting to happen.

I don’t know about you, but after researching the impact of AI on jobs, the way we work, and our society since February, this month, for inexplicable reasons, I feel like the world is starting to change because of generative AI in a material way.

It’s a feeling that is more tangible than any previous month.

Perhaps, it’s just because the number of data points we are collecting here is becoming non-insignificant and some patterns are starting to emerge.

Whatever the reason, I think I have enough data to show up on stage, or perhaps on YouTube, to tell the story of how AI is impacting most of the industries of today’s economy.

This week I selected six canaries in the coal mine that, I think, more than many other stories give a sense of the magnitude of the change that is coming.

What are you doing to seize this opportunity?

Alessandro

In This Issue

  • The Japanese bank Mizuho has decided to give access to generative AI tools to 45,000 workers.
  • The members of the Directors Guild of America (DGA) have agreed on a new contract, and there’s a provision about AI.
  • Voice actors are discovering that their voices, and their jobs, are being replaced by AI.
  • GroupM estimates that AI is likely to be involved with at least half of all advertising revenue by the end of 2023.
  • Top UK universities are changing their mind (and their code of conduct) about the use of generative AI.
  • The two Levidow, Levidow & Oberman lawyers who cited fake ChatGPT-generated legal research in a personal-injury case get fined.

P.s.: This week’s Splendid Edition of Synthetic Work is titled When you are uncertain about triggering WWIII or not, ask ChatGPT.

In it, you’ll read what Bridgewater Associates, McCann Worldgroup, Insilico Medicine, and the US Air Force are doing with AI.

What Caught My Attention This Week

The first thing that you might find interesting this week is that the Japanese bank Mizuho has decided to give access to generative AI tools to 45,000 workers.

Taiga Uranaka, reporting for Bloomberg:

Mizuho Financial Group Inc. is giving all its Japan bank employees access to Microsoft Corp.’s Azure OpenAI service this week, making it one of the country’s first financial firms to adopt the potentially transformative generative artificial intelligence technology.

The banking giant will allow 45,000 workers at its core lending units in the country to test out the service, according to Toshitake Ushiwatari, general manager of Japan’s third largest bank’s digital planning department. Already, managers and rank-and-file employees are submitting dozens of pitches for ways to harness the technology even before the software is installed.

There are many staff who are embracing ChatGPT in their private lives, Ushiwatari said in an interview. “It’s like poking a beehive,” he said, referring to the enthusiastic response the firm’s move has sparked. “They think it will completely re-set the world, triggering disruptive innovation.”

This is an adoption pattern we have already seen in multiple companies in the Splendid Edition of Synthetic Work: “We are not quite sure what to do with this technology, but it’s very cool and we have massive FOMO. So, here. You figure it out.”

Jokes aside, this is the way. Large language models like GPT-4 are possibly the most general-purpose technology ever invented. OK, perhaps a piece of paper comes close, but not quite as capable.
These AI models can be shaped to the specific needs of each team or individual. And the most qualified person to shape them is the person who will use them.

That’s why the tutorials on how to use GPT-4 for specific business use cases that I share in the Splendid Editions are so important. Even if they are not immediately useful to you, they can be used as a starting point or training wheels to understand how to reshape the AI the way you want and need.

Is your organization letting the workforce experiment with AI models and contribute ideas to increase productivity? Or are you locking down the corporate environment, forcing the employees to use AI in secret?

More than 15 years ago, when AWS came to be, pioneering cloud computing, the companies that let their employees experiment with that new technology accumulated a massive advantage over their competitors that lasted for over a decade.

Think about it.

But Alessandro, think about security! Think about compliance! Think about the reputational risk!

Mizuho, like many other financial organizations, law and accounting firms, and healthcare organizations we talk about in the Splendid Edition, thinks about these things, too. Heavily regulated industries. And yet, they are letting their employees experiment.

What does it tell you?

Last month, Japan’s minister of education, culture, sports, science, and technology, Keiko Nagaoka, announced the decision to not enforce traditional copyright laws on AI-generated works.

Since then, it seems that the country is driven by a new-found resolve to become a world leader in AI, accepting risks in a way that is uncharacteristic for its risk-averse culture.

As soon as Mizuho moves from the ideation phase to the implementation phase, and we get to know more about the use cases, I’ll add them to the AI Adoption Tracker.


The second thing that is worth your attention this week is that the members of the Directors Guild of America (DGA) have agreed on a new contract, and there’s a provision about AI.

Gene Maddaus, reporting for Variety:

The DGA announced Friday that 87% of the membership had voted in favor of the agreement, with 41% turnout. The guild said the turnout was the highest ever for a ratification vote, with 6,728 members voting out of 16,321 eligible.

In interviews, DGA members generally expressed support for the agreement, though some had reservations about the AI language.

The AI provision — the first in any guild contract — stipulates that generative AI does not constitute a “person,” and states that it will not replace the duties traditionally performed by guild members. But it does not prohibit AI, and mandates only “consultation” on how AI will be used in the creative process. It also does not include provisions governing how AI programs can be trained — which are key priorities for the WGA and SAG-AFTRA.

Many writer-directors, who are members of both the WGA and DGA, had publicly announced they would be voting no in solidarity with the WGA strike.

Some writers also criticized the DGA publicly for reaching the agreement, saying it would have been better to hold off on ratifying until the writers have a contract.

If you don’t remember what this is all about, go check Issue #12 – ChatGPT Sucks at Making Signs.

The Writers Guild of America (WGA) has been on strike for almost two months, impacting the production and broadcast of many films and TV series. By contrast, the Directors Guild of America (DGA), which just reached this agreement, has gone on strike only once in its history, in 1987, for minutes.

Minutes.

Why does this matter so much? Because, even if the Alliance of Motion Picture and Television Producers (AMPTP) doesn’t offer any additional concession to the WGA, this is the first time a union contract mentions AI, trying to protect the workers.

Other unions, across industries, will follow. This is just the beginning.

The next time you read a pundit telling you how the impact of AI on jobs will be the same as it has always been, showing you newspaper articles and charts from 200 years ago, you can politely explain to this person that today’s conditions are not the same as 200 years ago.

You could also remind them that they cannot invoke the expression “past performance is not indicative of future results” only when it’s convenient for them.

I wrote a long essay on why AI is different from every other technology that has impacted productivity in the past in the Intro of Issue #15 – Well, worst case, I’ll take a job as cowboy.


The third thing worth your time this week is the story of voice actors discovering that their voices, and their jobs, are being replaced by AI.

We talked about this risk in Issue #2 – 61% of the office workers admit to having an affair with the AI inside Excel, when we discovered how Apple started using synthetic voices for its audiobooks at the beginning of 2023.

And now we have this story, reported by Madhumita Murgia for Financial Times:

Greg Marston, a British voice actor with more than 20 years’ experience, recently stumbled across his own voice being used for a demo online.

Marston’s was one of several voices on the website Revoicer, which offers an AI tool that converts text into speech in 40 languages, with different intonations, moods and styles.

Since he had no memory of agreeing to his voice being cloned using AI, he got in touch with the company. Revoicer told him they had purchased his voice from IBM.

In 2005, Marston had signed a contract with IBM for a job he had recorded for a satnav system. In the 18-year-old contract, an industry standard, Marston had signed his voice rights away in perpetuity, at a time before generative AI even existed. Now, IBM is licensed to sell his voice to third parties who could clone it using AI and sell it for any commercial purpose. IBM said it was “aware of the concern raised by Mr Marston” and were “discussing it with him directly”.

Revoicer, the AI voice company, said Marston’s voice came from IBM’s cloud text-to-speech service. The start-up bought it from IBM, “like thousands of other developers”, at a rate of $20 for 1mn characters’ worth of spoken audio, or roughly 16 hours.

“[Marston] is working in the same marketplace, he is still selling his voice for a living, and he is now competing with himself,” said Mathilde Pavis, the artist’s lawyer who specialises in digital cloning technologies. “He had signed a document but there was no agreement for him to be cloned by an unforeseen technology 20 years later.”

Pavis said she has had at least 45 AI-related queries since January, including cases of actors who hear their voices on phone scams such as fake insurance calls or AI-generated ads. Equity, the trade union for the performing arts and entertainment industry in the UK, is working with Pavis and says it too has received several complaints over AI scams and exploitation in the past six months.

“We are seeing more and more members having their voice, image and likeness used to create entirely new performances using AI technology, either with or without consent,” said Liam Budd, an industrial official for new media at Equity. “There’s no protection if you’re part of a data set of thousands or millions of people whose voices or likenesses have been scraped by AI developers.”

Laurence Bouvard, a London-based voice actor for audio books, advertisements and radio dramas, has also come across several instances of exploitative behaviour. She recently received Facebook alerts about fake castings, where AI websites ask actors to read out recipes or lines of gibberish that are really only vehicles to scrape their voice data for AI models.

Some advertise regular voice jobs but slip in AI synthesisation clauses to the contracts, while others are upfront but offer a pittance in return for permanent rights to the actor’s voice. A recent job advertisement on the creative jobs marketplace Mandy.com, for instance, described a half-day gig recording a five-minute script on video to create AI presenters by tech company D-ID.

In return for the actor’s image and likeness, the company was offering individuals a £600 flat fee. D-ID said it paid “fair market prices”. It added that the particular advertisement was withdrawn and “does not reflect the final payment”.

“There is a danger every time a performer steps up to a mic or in front of a camera that they could be contracted out of their AI rights.”

£600 and your voice is mine forever.

And it seems it will only get worse, as John Gapper tells us in his report for the Financial Times:

SiriusXM, the US radio broadcaster. It plans to use AI to produce ads for smaller companies, offering them choices of AI-generated pitches, and then getting their pick read by an AI voice, rather than by expensive “voice talent”. The result is unlikely to be as persuasive as a human production but it will be cheaper and faster.

On the last point, I can guarantee you that the result will be as persuasive as a human production. You just have to listen to my Fake Show, a podcast I am building with synthetic voices, to prove their power.

You can bet that the Equity union is already monitoring very closely the new DGA contract with the AI provision that we talked about earlier.

In fact, the article closes:

Equity, which counts Hutton and Bouvard as members, has been calling for new rights to be encoded into the law, explicitly on time-limited contracts, rather than the industry standard of signing rights away in perpetuity. It also demands that the law include the need for explicit consent if an artist’s voice or body is going to be cloned by AI. Two weeks ago, the union put out a “toolkit” providing model clauses and contracts on the use of AI that artists and their agents can refer to.


The fourth thing that is interesting this week: GroupM, a media agency belonging to the WPP group, estimates that AI is likely to be involved with at least half of all advertising revenue by the end of 2023.

Daniel Thomas and Hannah Murphy, reporting for Financial Times:

AI is likely to be involved with at least half of all advertising revenue by the end of 2023, said GroupM. But while it has long been used extensively across media buying, the impact of generative AI technology in creating advertising has only started in practice.

Google plans to introduce generative AI into its advertising business over the coming months to help generate creative campaigns, while Meta is exploring similar tools.

“Computers can create things that look like they come from humans, it’s a pretty fundamental shift,” said one advertising boss, who predicted that this could hit jobs that were in effect the “plumbing” of the industry doing basic creative work. But he added: “The computer is not going to come up with that killer idea — they are going to tell you what’s been used before.”

We’ll see about that. What this advertising boss doesn’t remember is that no killer idea comes out of nothing. Humans mix and remix ideas, creating new ones. That is creativity.

As I recommended multiple times to all of you, this boss should really study the documentary Everything is a Remix:

More from the same article:

Multiple executives raised concerns about how AI would change how ad agencies charge for their work, with the concept of being able to bill according to the hours of work incurred likely to be under threat as campaigns may now take hours to produce rather than weeks. This could put more value on truly original creative work, said one ad boss.

Yannick Bolloré, chair of Vivendi’s supervisory board and boss of French agency Havas, compared the impact of AI on the industry to the invention of photography on painters.

“This did not kill the painters, but it killed the average painters. AI will never kill the great creative directors. But it could kill the average creative director.”

The problem here is that Yannick Bolloré doesn’t consider the possibility that AI transforming the economics of the advertising industry might make it impossible for a great creative director to emerge in the first place.

Just as in the past, when almost only the children of affluent families could afford the luxury of becoming scientists, truly creative work that can beat AI output might become an activity reserved for the rich.

And who’s not rich these days?


The fifth thing that is notable to read this week is that top UK universities are changing their mind (and their code of conduct) about the use of generative AI.

Sally Weale, reporting for The Guardian:

UK universities have drawn up a set of guiding principles to ensure that students and staff are AI literate, as the sector struggles to adapt teaching and assessment methods to deal with the growing use of generative artificial intelligence.

While once there was talk of banning software like ChatGPT within education to prevent cheating, the guidance says students should be taught to use AI appropriately in their studies, while also making them aware of the risks of plagiarism, bias and inaccuracy in generative AI.

Staff will also have to be trained so they are equipped to help students, many of whom are already using ChatGPT in their assignments.

All 24 Russell Group universities have reviewed their academic conduct policies and guidance to reflect the emergence of generative AI.

Developed in partnership with experts in AI and education, the principles represent a first step in what promises to be a challenging period of change in higher education as the world is increasingly transformed by AI.

The five guiding principles state that universities will support both students and staff to become AI literate; staff should be equipped to help students to use generative AI tools appropriately; the sector will adapt teaching and assessment to incorporate the “ethical” use of AI and ensure equal access to it; universities will ensure academic integrity is upheld; and share best practice as the technology evolves.

In case you are wondering, the list of 24 universities that belong to the Russell Group is here, and it includes prestigious schools like the London School of Economics (LSE), King’s College and Imperial College, the University of Oxford and the University of Cambridge, and the University of Edinburgh, which has trained more AI experts than any other university on the European continent according to Sequoia Capital.

Generative AI is the tide that lifts all boats. And, as you have read in five months of Synthetic Work, it’s transforming the world under our noses. It’s good to see UK universities recognizing it and working to equip the students to navigate this sea of change.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it's creating, or the old jobs it's making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

By now, you should have heard the story of a US lawyer who decided to use ChatGPT to fabricate research supporting his case.

Why am I talking about this now? Because the judge finally ruled on the matter, giving us an insight into how the US legal industry is absorbing the generative AI tsunami.

Erin Mulvaney, reporting for The Wall Street Journal:

A Manhattan federal judge issued sanctions Thursday against two lawyers who cited fake ChatGPT-generated legal research in a personal-injury case, penalizing a blunder that made a New York firm an emblem of artificial intelligence gone wrong.

The judge, addressing a matter he described as unprecedented, imposed a $5,000 fine against the firm, Levidow, Levidow & Oberman, and two of its lawyers for using false AI-generated material in a legal submission on behalf of a man who alleged he was injured on an airline flight.

U.S. District Judge Kevin Castel said in his sanctions order that submitting fake research wastes the time of the court and opposing counsel, deprives clients of authentic legal arguments and “promotes cynicism about the legal profession and the American judicial system.”

The judge separately dismissed the lawsuit, on the grounds that it was untimely.

“There is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Castel said in his ruling. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Schwartz used OpenAI’s ChatGPT as he was doing research and preparing the submission.

Because Schwartz wasn’t admitted to practice in the New York court, LoDuca was the attorney of record for the case. Schwartz said he wasn’t aware the tool could make up cases, and LoDuca said he had no reason not to trust the research of his colleague.

Castel said the attorneys for a time “doubled down” and stood by the legal research after the court and opposing counsel pointed out that the cases cited didn’t exist. During a hearing this month, he said that several of the fake cases were, when read completely, “legal gibberish.”

The sanctions come after a Texas federal judge recently ordered lawyers in his court not to use artificial intelligence-reliant legal briefs.

The irony of all of this is that if you raise the point about AI transforming the way we work with professionals in various industries, you can be met with skepticism or even hostility.

There’s nothing to discuss.

It’s toy technology that nobody is using.

Be careful: by the time you are ready to admit that the world has changed, you might be the last person to realize it.

Want More? Read the Splendid Edition

Jensen gets into the details of how Bridgewater Associates is reinventing itself around machine learning:

specifically what we’ve done on the AI ML side is we’ve set up this venture. Essentially there’s 17 of us with me leading it. You know, I’m still very much involved in the core of Bridgewater, but the 16 others are a hundred percent dedicated to kind of reinventing Bridgewater in a way with machine learning.

We’re going to have a fund specifically run by machine learning techniques

on Bridgewater’s internal tests, you suddenly got to the point where it was able to answer our investment associate tests at the level of first year IA, right around with ChatGPT-3.5 and Anthropic’s most recent Claude. And then GPT-4 was able to do significantly better.

And yet it’s still 80th percentile kind of thing on a lot of those things

so if somebody’s going to use large language models to pick stocks, I think that’s hopeless. That is just a hopeless path. But if you use large language models to create some theories – it can theorize about things – and you use other techniques to judge those theories and you iterate between them to create a sort of an artificial reasoner where language models are good at certainly generating theories, any theories that already exist in human knowledge, and putting those things connected together.

But

Issue #21 - Ready to compete against yourself?

July 16, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

You have now read five months of Synthetic Work, which is more than double the pile of paper you see in this picture:

You are now more knowledgeable about AI than most of the people out there.

If you read every issue, and both Editions, cover to cover, in these five months, you could confidently go to your boss and say “I know what’s happening. Let me help you with our AI adoption project.”

And that might alter forever the trajectory of your career. Who knows?

Now.

I’m asking you only one thing:

How can Synthetic Work help you even more? What problem would you want it to help you solve?

Reply to this email and let me know. I read every reply.

Alessandro

In This Issue

  • The new AI models of the week, GPT-4 with Code Interpreter and Claude 2, have the potential to transform the way we work. Let’s see why.
  • New York City is now enforcing a law to regulate the use of AI in hiring processes.
  • AI is infiltrating the agenda of the Labour Party in the UK.
  • Some artists are not very happy with Adobe and its new generative AI system Firefly.
  • The Boston Consulting Group surveyed nearly 13,000 people in 18 countries on what they feel about AI.

P.s.: This week’s Splendid Edition is titled Investigating absurd hypotheses with GPT-4.

In it, we discover the ELI5 technique, comparing how well it works with both OpenAI GPT-4 with Code Interpreter and Anthropic Claude 2.

We also use the GPT-4 with Code Interpreter capabilities to analyze two unrelated datasets, overlay one on top of the other in a single chart, and investigate correlation hypotheses.

What Caught My Attention This Week

The first thing that is worth your attention this week is a prime example of machismo in the tech industry wherein OpenAI released a new variant of its most powerful model called GPT-4 with Code Interpreter, Anthropic released Claude 2, and Google enabled Bard to speak its answers.

Normally, Synthetic Work doesn’t concern itself with the release of new AI models, but the new capabilities unlocked by these particular releases will impact the way we work in a substantial way.

So, briefly, let’s discuss these new models.

GPT-4 Code Interpreter

First, you should know that “Code Interpreter” is a misnomer. This is not a model that is designed just to generate programming code, or interpret programming code, or more broadly help software developers, like the version of GPT-4 that powers GitHub Copilot.

The way to think about the name “Code Interpreter” is: “A version of GPT-4 that, to answer my questions to the best of its abilities, will resort to writing and running small software programs if it has to. I don’t have to know or review or touch any of these small programs. They are there to my benefit but I don’t have to worry about them.”

Which is not entirely different from what the brain does when we are asked to perform a particular task like solving math problems, cooking, or drawing with charcoal on paper. The brain resorts to specific routines that it has acquired through training over your life experiences, and that you don’t have to worry about. You just do the task.

That said, it’s an unfortunate name, but you have to imagine that this Code Interpreter capability, just like the capability to run 3rd party plug-ins, will probably end up as part of a tangle of interconnected AI models that OpenAI will simply call GPT-5. Or, due to popular demand, “ChatGPT 5”.

Why does this GPT-4 with Code Interpreter matter so much?

The model can finally do advanced things like looking into certain types of files, like PDFs, extracting and manipulating the data trapped in them, saving you the enormous trouble of copying and pasting that information manually inside the prompt box.
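To make the idea less abstract, here is a minimal sketch of the kind of small, disposable program such a model might write behind the scenes to pull the text out of a PDF before answering. It is purely illustrative: it assumes the pypdf library and a hypothetical report.pdf, and the real programs generated by the model stay invisible to the user and may look nothing like this.

```python
# A sketch of a throwaway extraction step: read a PDF and collect its text.
# Assumes the pypdf library; "report.pdf" is a hypothetical file name.
from pypdf import PdfReader

reader = PdfReader("report.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

print(f"Extracted {len(text)} characters from {len(reader.pages)} pages")
```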

It cannot yet “see” a picture, within a document or as a stand-alone file, but that’s a matter of OpenAI not having enough computing power to serve the entire planet, not a problem of capabilities. So, this will come, too.

This week’s Splendid Edition is dedicated to exploring how this new capability can be used to do invaluable work like simplifying the language in legal agreements so that more people can understand what they are agreeing to with partners, suppliers, donors, landlords, ex-husbands/wives, service providers, city councils, and so on.

Many people have only tried GPT-3 (if nothing else because it’s free) and have no idea of the gigantic difference that exists in terms of capabilities with GPT-4.

GPT-3 was impressive in terms of technological progress, but still meh in terms of matching the expectations of non-technical users. GPT-4 is a quantum leap from that standpoint and GPT-4 with Code Interpreter, if properly directed, can do even more amazing things.

So, if you haven’t tried the GPT-4 family of models, you really should, or you will not understand how all the hype can be justified.

Claude 2

Like GPT-4 with Code Interpreter, Claude 2 lets you upload and inspect files. Unlike GPT-4 with Code Interpreter, Claude 2 features an enormous context window of 100,000 tokens.

If you have read Synthetic Work since the beginning, you know that this is a huge deal. For the many new readers that have joined in the last few weeks, let’s repeat what the context window is one more time:

In a very loose analogy, the context window of an AI like ChatGPT or GPT-4 is like the short-term memory for us humans. It’s a place where we store information necessary to sustain a conversation (or perform an action) over a prolonged amount of time (for example a few minutes or one hour).

Without it, we’d forget what we were talking about at the beginning of the conversation or what we were supposed to accomplish when we decided to go to the kitchen.

The longer this short-term memory, this context window, the easier it is for an AI to interact with people without repeating or contradicting itself after, say, ten messages.

The context window of GPT-4 is big enough to fit 8,000 tokens (approximately 6,000 words, as 100 tokens ~= 75 words).
An imminent variant of GPT-4 is big enough to fit 32,000 tokens (approximately 24,000 words).

So, the fact that Claude 2 sports a context window of 100,000 tokens (approximately 75,000 words) should compel you to try the model. If you are in the US or UK, you can do so for free.
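If you want to translate these figures yourself, here is a back-of-the-envelope sketch using the 100 tokens ≈ 75 words rule of thumb quoted above. Real token counts depend on the tokenizer and the language, so treat it as a rough estimate only.

```python
# Rough conversion based on the 100 tokens ~= 75 words rule of thumb.
def tokens_to_words(tokens: int) -> int:
    return round(tokens * 75 / 100)

def words_to_tokens(words: int) -> int:
    return round(words * 100 / 75)

for model, window in [("GPT-4", 8_000), ("GPT-4 32k", 32_000), ("Claude 2", 100_000)]:
    print(f"{model}: {window:,} tokens ~= {tokens_to_words(window):,} words")
```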

In this week’s Splendid Edition, I used the same prompts with both GPT-4 with Code Interpreter and Claude 2 to see how they would compare and I was very impressed by the output generated by the latter.

In the past, I tested Claude 1.3 and the difference in quality is tangible.

On top of what you’ll read in the Splendid Edition, I did additional tests and I can tell you Claude 2’s extended context window makes a huge difference in terms of quality when you ask it to extract information from a document.

If you try that task on a long document with the new GPT-4 with Code Interpreter, it will generate a very rigid program that will try to find the information you want to extract via complicated techniques like regular expressions, rather than using the incredible language manipulation capabilities of the default GPT-4 model. And it will fail miserably.
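To see why such a rigid, pattern-matching program fails so easily, here is a tiny illustrative sketch: a regular expression that expects one exact phrasing and misses the information when the document says the same thing differently. Both the document text and the pattern are invented.

```python
# A rigid extraction attempt: the pattern only matches one exact phrasing.
import re

document = "The agreement may be terminated by either party with ninety (90) days' written notice."
pattern = re.compile(r"notice period of (\d+) days")  # expects phrasing the document doesn't use

match = pattern.search(document)
print(match.group(1) if match else "No match: the document phrases it differently")
```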

Claude 2, on the other hand, will use its extended context window to understand the document and extract the information you want in a much more flexible way.

The problem is that, at the moment, Claude 2 is much more prone to hallucinations than GPT-4 so you have to double-check everything that is extracted.

Claude 2 seems also capable of browsing the web, like the GPT-4 Web Browsing variant that has been temporarily disabled by OpenAI.

A quick test done to answer a question somebody asked me on Twitter made me discover that this new model is the best PR agent in the world for Synthetic Work:

It’s always flattering when an AI that understands absolutely nothing about the world and is less aware than an ant sees value in your work.


The second thing worth your attention this week is that New York City is now enforcing a law to regulate the use of AI in hiring processes.

Kyle Wiggers, reporting for TechCrunch:

After months of delays, New York City today began enforcing a law that requires employers using algorithms to recruit, hire or promote employees to submit those algorithms for an independent audit — and make the results public. The first of its kind in the country, the legislation — New York City Local Law 144 — also mandates that companies using these types of algorithms make disclosures to employees or job candidates.

At a minimum, the reports companies must make public have to list the algorithms they’re using as well as an “average score” candidates of different races, ethnicities and genders are likely to receive from the said algorithms — in the form of a score, classification or recommendation. It must also list the algorithms’ “impact ratios,” which the law defines as the average algorithm-given score of all people in a specific category (e.g. Black male candidates) divided by the average score of people in the highest-scoring category.

Companies found not to be in compliance will face penalties of $375 for a first violation, $1,350 for a second violation and $1,500 for a third and any subsequent violations. Each day a company uses an algorithm in noncompliance with the law, it’ll constitute a separate violation — as will failure to provide sufficient disclosure.

Importantly, the scope of Local Law 144, which was approved by the City Council and will be enforced by the NYC Department of Consumer and Worker Protection, extends beyond NYC-based workers.

Nearly one in four organizations already leverage AI to support their hiring processes, according to a February 2022 survey from the Society for Human Resource Management. The percentage is even higher — 42% — among employers with 5,000 or more employees.

More information from Lauren Weber, reporting for the Wall Street Journal:

Under the law, workers and job applicants can’t sue companies based on the impact ratios alone, but they can use the information as potential evidence in discrimination cases filed under local and federal statutes. A ratio—a number between 0 and 1—that’s closer to 1 indicates little or no bias, while a ratio of 0.3 shows, for example, that three female candidates are making it through a screening process for every 10 male candidates getting through.

A low ratio doesn’t automatically mean that an employer is discriminating against candidates, Lipnic said. According to longstanding law, disparate impact can be lawful if a company can show that its hiring criteria are job-related and consistent with business necessity.

For example, Blacks and Hispanics have lower college graduation rates than whites and Asian-Americans, and if an employer can show that a college degree is a necessary requirement for a job and therefore its system screens out a higher share of Hispanic candidates because fewer of those applicants have degrees, the employer can defend its process.

A 2021 study by Harvard Business School professor Joseph Fuller found that automated decision software excludes more than 10 million workers from hiring discussions.

NYC 144 passed the New York City Council in 2021 and was delayed for nearly two years while the council considered public comments, including opposition from many employers and technology vendors. BSA, an organization representing large software companies including Microsoft, Workday and Oracle, lobbied to reduce the reporting requirements and narrow the scope of what kinds of uses would be subject to an audit.
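To make the impact-ratio arithmetic concrete, here is a minimal sketch of how such a number could be computed from screening scores (categories and numbers are made up, and the audits required by the law are far more involved):

```python
# Sketch of the "impact ratio" defined by Local Law 144:
# average score of a category divided by the average score of the highest-scoring category.
from statistics import mean

# Hypothetical algorithm-assigned scores per demographic category
scores = {
    "category_a": [0.82, 0.77, 0.91, 0.68],
    "category_b": [0.55, 0.61, 0.48, 0.70],
}

averages = {group: mean(vals) for group, vals in scores.items()}
top = max(averages.values())
impact_ratios = {group: round(avg / top, 2) for group, avg in averages.items()}

print(impact_ratios)  # a ratio close to 1 suggests little disparity; 0.3 would be a red flag
```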

Just like algorithmic trading (the practice of computers trading on the stock market without human intervention), job hunting is becoming a perverse game, akin to the Search Engine Optimization (SEO) techniques used to game Google’s search results.

The employer uses an AI to screen the ocean of incoming resumes, looking for certain keywords and patterns in each candidate’s employment history.

In retaliation, job seekers have started using GPT-4 to optimize their resumes for each job application and satisfy the AI on the opposite side.
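What that looks like in practice is something like the sketch below, assuming the 2023-era openai Python library and made-up file names (I’m illustrating the practice, not endorsing it):

```python
# Sketch: tailor a resume to a job description so it survives keyword-based screening.
# Assumes openai<1.0 (the mid-2023 ChatCompletion interface) and an API key in the OPENAI_API_KEY env var.
import openai

resume = open("resume.txt").read()                 # hypothetical files
job_description = open("job_posting.txt").read()

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Rewrite this resume so it matches the keywords and phrasing of the job posting, "
                   f"without inventing experience.\n\nRESUME:\n{resume}\n\nJOB POSTING:\n{job_description}",
    }],
)
print(response["choices"][0]["message"]["content"])
```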

At the end of the day, two people will sit face-to-face to discuss a job opportunity because their respective AIs have decided so.

Which reminds me, in a completely different context, of the episode of Black Mirror titled “Hang the DJ” from season 4.

Isn’t hiring a form of dating?

The relationship between an employee and an employer can be equally short or long-term, equally rewarding or devastating, and it can equally erase your individual identity if you don’t set boundaries.

This is the moment where I would equate children to products, but I won’t do it.

I won’t take it that far.


The third thing that is interesting this week is how AI is infiltrating the agenda of the Labour Party in the UK.

Use the previous story as context for this one.

Kiran Stacey, reporting for The Guardian:

Labour would use artificial intelligence to help those looking for work prepare their CVs, find jobs and receive payments faster, according to the party’s shadow work and pensions secretary.

Jonathan Ashworth told the Guardian he thought the Department for Work and Pensions was wasting millions of pounds by not using cutting-edge technology, even as the party also says AI could cause massive disruption to the jobs market.

“DWP broadly gets 60% of unemployed people back to work within nine months. I think by better embracing modern tech and AI we can transform its services and raise that figure.”

Labour would use AI in three particular areas. Firstly, it would make more use of job-matching software, which can use the data the DWP already has on people looking for work to pair them up more quickly with prospective employers. Secondly, the party would use algorithms to process claims more quickly. And thirdly, it would use AI to a greater extent to help identify fraud and error in the system. DWP already has a pilot scheme to use AI to find organised benefits fraud, such as cloning other people’s identities.

Ashworth said, however, humans would always be required to make the final decisions over jobs and benefit decisions, not least to avoid accidental bias and discrimination.

Lucy Powell, the shadow digital secretary, will say the technology could trigger a second deindustrialisation, causing major economic damage to entire parts of the UK. She will highlight the risk of “robo-firing”. There was a recent case in the Netherlands where drivers successfully sued Uber after claiming they were fired by an algorithm.

The story Powell is referring to focuses on the adoption of facial recognition technology by Uber to verify the identity of its drivers. They introduced it in April 2020 under the name of Real-Time ID Check.

It’s a form of driver surveillance that guarantees that the person behind the wheel is the same person that has been approved by Uber to drive its customers around.

Another form of driver surveillance challenged in court involved JustEat drivers, automatically tracked, evaluated, and fired by the company for taking too long to collect food for customers. We discussed this story in Issue #10 – The Memory of an Elephant.


A fourth story makes the cut this week and it’s about a backlash from artists against Adobe and its new generative AI system Firefly.

Sharon Goldman, reporting for VentureBeat:

Adobe’s stock soared after a strong earnings report last week — where executives touted the success of its “commercially safe” generative AI image generation platform Adobe Firefly. They say Firefly was trained on hundreds of millions of licensed images in the company’s royalty-free Adobe Stock offering, as well as on “openly licensed content and other public domain content without copyright restrictions.” On the Firefly website, Adobe says it is “committed to developing creative generative AI responsibly, with creators at the center.”

But a vocal group of contributors to Adobe Stock, which includes 300 million images, illustrations and other content that trained the Firefly model, say they are not happy. According to some creators, several of whom VentureBeat spoke to on the record, Adobe trained Firefly on their stock images without express notification or consent.

Dean Samed is a UK-based creator who works in Photoshop image editing and digital art. He told VentureBeat over Zoom that he has been using Adobe products since he was 14 years old, and has contributed over 2,000 images to Adobe Stock.

“They’re using our IP to create content that will compete with us in the marketplace,” he said. “Even though they may legally be able to do that, because we all signed the terms of service, I don’t think it is either ethical or fair.”

He said he didn’t receive any notice that Adobe was training an AI model. “I don’t recall receiving an email or notification that said things are changing, and that they would be updating the terms of service,” he said.

What matters here is not whether Adobe is being honest about its ability to gather permission from the artists who have contributed to Adobe Stock.

There’s something else: a theme that is emerging from this and other stories we have mentioned so far in Synthetic Work, like last week’s story about the British voice actor in Issue #20 – £600 and your voice is mine forever.

And the theme is:

People are starting to think that they have to compete against themselves.

Material for a future intro of a future Free Edition.

Let’s continue with the article:

According to Eric Urquhart, a Connecticut-based artist who has a day job as a matte artist in a major animation studio, artists who joined Adobe Stock years ago could never have anticipated the rise of generative AI.

“Back then, no one was thinking about AI,” said Urquhart, who joined Adobe Stock in 2012 and has several thousand images on the platform. “You just keep uploading your images and you get your residuals every month and life goes on — then all of a sudden, you find out that they trained their AI on your images and on everybody’s images that they don’t own. And they’re calling it ‘ethical’ AI.”

Adobe Stock creators also say Adobe has not been transparent. “I’m probably not adding anything new because they will probably still try to train their AI off my new stuff,” said Rob Dobi, a Connecticut-based photographer. “But is there a point in removing my old stuff, because [the model] has already been trained? I don’t know. Will my stuff remain in an algorithm if I remove it? I don’t know. Adobe doesn’t answer any questions.”

The artists say that even if Adobe did not do anything illegal and this was indeed within their rights, the ethical thing to do would have been to pre-notify their Adobe Stock artists about the Firefly AI training, and offer them an opt-out option right from the beginning.

Which is what Stability AI has been doing for a while now.

But there’s one final subtlety that is worth mentioning:

Adobe, in response to the artists’ claims, told VentureBeat by email that its goal is to build generative AI in a way that enables creators to monetize their talents, much as Adobe has done with platforms like Behance. It is important to note, a spokesperson says, that Firefly is still in beta.

“During this phase, we are actively engaging the community at large through direct conversations, online platforms like Discord and other channels, to ensure what we are building is informed and driven by the community,” the Adobe spokesperson said, adding that Adobe remains “committed” to compensating creators. As Firefly is in beta, “we will provide more specifics on creator compensation once these offerings are generally available.”

To me, this sounds like: “We are happy to compensate you for the images we trained on, but there’s no chance in hell we’ll let you remove them from our training dataset. If everybody does that, our model is crap.”

More on the competition with ourselves:

Samed said that Adobe Stock is “not a feasible platform for us to operate in anymore,” adding that the marketplace is “completely flooded and inundated with AI content.”

Adobe should “stop using the Adobe Stock contributors as their own personal IP, it is just not fair,” he said, “and then the derivative that was created from that data scrape is then used to compete against the contributors that [built and supported] that platform from the beginning.”

Dobi said he has noticed his stock photos have not been selling as well. “Someone can just type in a prompt now and recreate the images based off your hard work,” he said. “And Adobe, which is supposed to be, I mean, I guess they thought they were looking out for creators, apparently aren’t because they’re stabbing all their creators that helped create their stock library in the back.”

Urquhart said that as an artist in his mid-50s who also does analog fine art, he feels he can “ride this out,” but he wonders about the next generation of artists who have only worked with digital tools. “You have very talented Gen Z artists, they have the most to worry about,” he said. “Like if all of a sudden AI takes over and iPad digital art is no longer relevant because somebody just typed in a prompt and got five versions of the same thing, then I can always just pick up my paintbrush.”

“The damage that’s going to be done is going to be unlike anything we’ve ever seen before,” he said. “I’m in the process of selling my company, I’ve got out — I don’t want to participate or compete in this marketplace anymore.”

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

The Boston Consulting Group surveyed nearly 13,000 people—from executive suite leaders to middle managers and frontline employees—in 18 countries on what they feel about AI.

The results indicate a growing optimism, especially among business leaders, and a decreasing concern, especially among people that use AI:

The discrepancy between the managers and the frontline workers is probably due to the fact that managers feel safe in their jobs.

So, perhaps, it’s time to remember my (in)famous post on various social media in May, which was untitled but, if it had a title, would be titled: “AI is coming for people in the top management positions of large organizations, too.”

https://www.linkedin.com/posts/alessandroperilli_one-thing-that-i-want-to-make-clear-ai-is-activity-7062527119142051840-QdYF

Speaking of managers, the survey also suggests that the more senior the manager, the more likely they are to be optimistic about AI:

So, the narrative that comes across is that the more you use AI the more optimistic you become, and business leaders are champions in this regard because they have embraced AI more than anybody else.

True heroes of the digital transformation.

Of course, all of this sounds fantastic, except that:

  • The Boston Consulting Group is a consulting firm that makes money if its customers want to change things. The more galvanized they are about a new technology, the more there is to transform. So this survey is valuable but not exactly unbiased.
  • If you go read the Splendid Edition of Synthetic Work, there’s a growing number of professionals that are getting quite upset about the introduction of AI in the workplace to do their jobs.
  • My first-hand experience with a sample of 20,000 people working in the tech industry (so, not exactly buddies) is that, until three months ago, nobody (all the way to the top) knew what AI was, let alone used it.

So BCG has performed a miracle here, finding 13,000 enlightened people who not only had an opinion already back in 2018, but have also managed to build significant experience with generative AI in the workplace.

And you can tell it’s a miracle because BCG reports that 44% of the business leaders they have surveyed already went through upskilling:

I’m curious what these business leaders upskilled on, considering that the AI models (and the systems that depend on them) change almost every week.

Again, a miracle.

Now.

It would be more interesting to see a survey conducted by a reputable organization that doesn’t profit from an enthusiastic adoption of emerging technologies.

It would also be more interesting to see a survey conducted among organizations where AI has been rolled out for a while and where it has impacted jobs. And notice that “rolled out” doesn’t mean that you have logged into ChatGPT or Bing to generate a few paragraphs of text. That is not an indicator of enterprise AI adoption.

I will go out on a limb and say that the perception of AI would be less favorable.

Let’s wait until people realize what they are gaining and what they are losing before drawing conclusions.

Want More? Read the Splendid Edition

If you have not checked it recently, the Discord server of Synthetic Work has a new section:

You can go there to submit a request for a new feature or vote on requests submitted by your fellow Synthetic Work members.

Another thing: I’ve started building a database of technology providers that have a relevant connection with AI and Synthetic Work.

It’s not right to call them AI vendors because, except for the handful of companies that train foundational models, like OpenAI or Anthropic, most technology providers will use or are using AI in one way or another. And it’s not very useful if I create a database that contains every company in the world.

So, if the company has been mentioned in past Issues of Synthetic Work, it will have a profile in this new database. And this profile will try to explain in a very clear way why the company is relevant.

Try with Anthropic.

Issue #22 - The Dawn of the Virtual YouTuber

July 22, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

Synthetic Work takes a week off. This newsletter is written by a real human, not an AI, and that human needs a little break.

Expect the next issue on August 6th.

Alessandro

In This Issue

  • YouTube’s first Culture & Trends Report reveals some numbers about the interest in virtual creators among the YouTube audience.
  • The Screen Actors Guild—American Federation of Television and Radio Artists (SAG-AFTRA) joins the Writers Guild of America (WGA) in an unprecedented strike focused on generative AI.
  • 8,000 authors have signed a letter asking the leaders of companies including Microsoft, Meta Platforms and Alphabet to not use their work to train AI systems without permission or compensation.
  • The cost of producing well-written material has fallen 10 thousand fold over the course of the past year and for the first time in almost 125 years.
  • Several large news and magazine publishers are discussing the formation of a new coalition to address the impact of artificial intelligence on the industry
  • The startup Air shows how salespeople all around the world won’t have to be on the phone anymore going forward. Next step: do not show up at the office either.

P.s.: This week’s Splendid Edition is titled What AI should I wear today?. In it, we’ll see what New York City’s Metropolitan Transportation Authority (MTA), Eurostar, and G/O Media are doing with AI.

In the What Can AI Do for Me? section, we’ll see how to use GPT-4 Code Interpreter to ask questions about our website performance that Google Analytics can’t answer without attending a 72-day class.

In the Prompting section, I’ll recommend which use case is most suitable for each of seven AI systems that can be used today.

What Caught My Attention This Week

The first thing that caught my attention this week is YouTube’s first Culture & Trends Report.

If you are a YouTuber or you plan to become one, there are a lot of interesting insights in the report, and a page dedicated to AI:

Virtual creators and K-pop bands being debuted by Korean companies are a build on VTubers and hint about the places that AI-driven creations may go in the future, providing an early look at viewer interests in creativity that challenges norms of authenticity.

RuiCovery, a real person with an AI-generated face, was made by Seoul-based IT start-up Dob Studio. The studio produces song and dance covers in both long-form videos and Shorts.

Two statistics in particular interest us for Synthetic Work:

  • 60% of people surveyed agree they’re open to watching content from creators who use AI to generate their content.
  • 52% of people surveyed say they watched a VTuber (virtual YouTuber or influencer) over the past 12 months.

This is not a small study: 25,892 online adults, aged 18-44, were surveyed in May 2023.

If people are open to watching synthetic humans delivering synthetic content on YouTube, with time, this might pave the way for a broader acceptance of the concept in other areas. The first application that comes to mind is synthetic anchors for news and sports.

We will not think about the long-term impact on the job market. From a utility standpoint, if all we want is to be entertained or informed, it doesn’t matter if the anchor is synthetic or not.

In fact, the possibility of creating synthetic anchors will probably trigger a gold rush to create the most engaging personality that resonates with this or that section of the addressable audience. Something that is much harder and more expensive to do with real humans.


Speaking of synthetic entertainers, the second thing that caught my attention this week is, of course, the unprecedented strike of the Screen Actors Guild—American Federation of Television and Radio Artists (SAG-AFTRA) that joins the Writers Guild of America (WGA).

We already covered this strike in Issue #12 – ChatGPT Sucks at Making Signs and Issue #20 – £600 and your voice is mine forever, but this fight is so critical for the future of synthetic work (at least in the entertainment industry) that we need to keep looking.

Christopher Grimes, reporting for Financial Times:

Hollywood has not seen anything like it in more than 60 years: thousands of striking actors and writers picketing together outside movie and TV studios, where production has ground to a halt.

Demetri Belardinelli, who has acted in TV shows such as Silicon Valley, was among hundreds of picketers outside Walt Disney’s Burbank studios in sweltering heat on Friday. He and 160,000 other members of the SAG-AFTRA union had voted to strike a day before, after talks with the studios collapsed.

The Screen Actors Guild has not gone on strike in 43 years, and it has been even longer since the actors and writers have picketed at the same time. Their last joint industrial action was in 1960, when Ronald Reagan was the head of the Screen Actors Guild.

Key sticking points for both the writers and actors include royalties — which have declined significantly in the streaming era — and establishing rules over the use of artificial intelligence. Writers fear being paid far less to adapt basic scripts generated by AI programmes, while actors are concerned that their digital likenesses will be used without compensation.

Bob Iger, Disney’s chief executive, told CNBC on Thursday that it was the “worst time in the world” for work stoppages, given the industry’s nascent recovery from the Covid-19 pandemic. “There’s a level of expectation that they have that is just not realistic.”

Of course, this is the best (and possibly, only) time in the world for human artists to attempt this fight. Tech startups are already running away with AI technologies, creating synthetic content that people start to watch and might enjoy.

More information comes from Angela Watercutter, reporting for Wired:

You know it’s bad when the cocreator of The Matrix thinks your artificial intelligence plan stinks. In June, as the Directors Guild of America was about to sign its union contract with Hollywood studios, Lilly Wachowski sent out a series of tweets explaining why she was voting no. The contract’s AI clause, which stipulates that generative AI can’t be considered a “person” or perform duties normally done by DGA members, didn’t go far enough. “We need to change the language to imply that we won’t use AI in any department, on any show we work on,” Wachowski wrote. “I strongly believe the fight we [are] in right now in our industry is a microcosm of a much larger and critical crisis.”

Leading up to the strike, one SAG member told Deadline that actors were beginning to see Black Mirror’s “Joan Is Awful” episode as a “documentary of the future” and another told the outlet that the streamers and studios—which include Warner Bros., Netflix, Disney, Apple, Paramount, and others—“can’t pretend we won’t be used digitally or become the source of new, cheap, AI-created content.”

While Season 6 of Black Mirror is, overall, highly disappointing, the “Joan Is Awful” episode is a must-watch:

Let’s continue the Wired article:

Will any of this stop the rise of the bots? No. It doesn’t even negate that AI could be useful in a lot of fields. But what it does do is demonstrate that people are paying attention—especially now that bold-faced names like Meryl Streep and Jennifer Lawrence are talking about artificial intelligence. On Tuesday, Deadline reported that the Alliance of Motion Picture and Television Producers, which represents the studios, was prepared for the WGA to strike for a long time, with one exec telling the publication “the end game is to allow things to drag on until union members start losing their apartments and losing their houses.” Soon, Hollywood will find out if actors are willing to go that far, too.

While all of this happens, in a self-fulfilling prophecy fashion, somebody thought it was a good idea to launch a synthetic show.

Devin Coldewey, reporting for TechCrunch:

little savvy is required to see that this may be the worst possible moment to soft-launch an AI that can “write, animate, direct, voice, edit” a whole TV show — and demonstrate it with a whole fake “South Park” episode.

The company behind it, Fable Studios, announced via tweet that it had made public a paper on “Generative TV & Showrunner Agents.” They embedded a full, fake “South Park” episode where Cartman tries to apply deepfake technology to the media industry.

The technology, it should be said, is fairly impressive: Although I wouldn’t say the episode is funny, it does have a beginning, a middle and an end, and distinct characters (including lots of fake celebrity cameos, including fake Meryl Streep).

Fable started in 2018 as a spinoff from Facebook’s Oculus (how times have changed since then), working on VR films — a medium that never really took off. Now it has seemingly pivoted to AI, with the stated goal of “getting to AGI — with simulated characters living real daily lives in simulations, and creators training and growing those AIs over time,” Saatchi said.

Simulation is the name of the product they intend to release later this year, which uses an agent-based approach to creating and documenting events for media, inspired by Stanford’s wholesome AI town.

If you want to see the software Fable Studios used to create this episode, you just need to look here:

One month ago, Lisa Joy, one of the creators of Westworld, said she was confident her job as a producer of science fiction was safe from artificial intelligence.

Her words, related by Nate Lanxon and Jackie Davalos for Bloomberg:

“I’m not so much worried about being replaced, partly because AI isn’t as good at creative things yet, but also we tend to take tools and we figure out new ways to use them, and come up with things we couldn’t do before,” she said. “What I look at as a writer is how can this help me tell a better story? And am I adaptable enough to use it and come up with something that I couldn’t have come up with otherwise?”

I wonder for how long she’ll feel this way.


Actors and screenwriters are not the only creators increasingly vocal against the use of AI. The third story worth your attention this week is about the 8,000 authors who have signed a letter asking the leaders of companies including Microsoft, Meta Platforms, and Alphabet not to use their work to train AI systems without permission or compensation.

Talal Ansari, reporting for The Wall Street Journal:

The letter, signed by noteworthy writers including James Patterson, Margaret Atwood and Jonathan Franzen, says the AI systems “mimic and regurgitate our language, stories, style, and ideas.” The letter was published by the Author’s Guild, a professional organization for writers.

“Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill,” the letter says. “You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.”

The letter was addressed to the CEOs of OpenAI, IBM, Stability AI and several tech companies, which run AI models and chatbots such as Bard, ChatGPT and Llama.

The Author’s Guild letter says many of the books used to train AI systems were pulled from “notorious piracy websites.”

Comedian Sarah Silverman and other authors filed a lawsuit against Meta earlier this month, alleging its artificial intelligence model was trained in part on content from a “shadow library” website that illegally contains the authors’ copyrighted works. The group filed a similar lawsuit against OpenAI.

The Authors Guild said writers have seen a 40% decline in income over the last decade. The median income for full-time writers in 2022 was $22,330, according to a survey conducted by the organization. The letter said artificial intelligence further threatens the profession by saturating the market with AI-generated content.

The problem here is that paying for the books would change nothing. Once a book has entered the training set, AI models can extrapolate the style of the author and use it to generate any type of content (including answers to questions that have nothing to do with writing a book).

Provided that enough material is available, what generative AI really does is capture the essence of a creator. Just like a human being learns how to mock a politician or a celebrity.

So what do you pay for? It can’t be royalties for the book. Can it be royalties for personality? If so, are AI companies supposed to pay for every single personality they have captured in the training dataset? If so, how do you measure that?

More importantly, this mimicking phase is just the beginning. We’ll see AI models generate completely new personalities that don’t resemble any existing creator and are more engaging than any of them.

A Chart to Look Smart

The easiest way to look smart on social media and gain a ton of followers? Post a lot of charts. And I mean, a lot. It doesn’t matter about what. It doesn’t even matter if they are accurate or completely made up.
You won’t believe that people would fall for it, but they do. Boy, they do.

So this is a section dedicated to making me popular.

The chart of the week comes from Brett Winton, the Chief Futurist at ARK Invest, the asset management company led by famous investor Cathie Wood.

Brett reminds us that the cost of producing well-reasoned, coherent written material has fallen ten-thousand-fold over the course of the past year.

But it’s only when we zoom out and we see where we are now compared to the last 25 years and to the last 125 years that we start to understand the true impact of generative AI:

This is a critical point that will push employers to keep using AI-generated content no matter how many strikes humans organize.

As we said many times in the last 5 months, this is a vicious circle: if even just one of your competitors switches to AI-generated content, your chances of staying competitive from a cost standpoint are close to zero. If the market chooses cheap and decent instead of expensive and excellent, you have no choice but to follow.

On top of that, there’s the issue that the market is polluted by countless options that are very expensive and barely decent.
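As a back-of-the-envelope illustration of the order of magnitude we are talking about (my own rough numbers, not Winton’s):

```python
# Very rough arithmetic, with illustrative prices only.
human_cost_per_1000_words = 200.0    # a mid-range freelance copywriter rate, assumed
llm_cost_per_1000_tokens = 0.03      # roughly GPT-4 8K input pricing at the time, in USD
tokens_per_1000_words = 1000 / 0.75  # using the ~75 words per 100 tokens rule of thumb

llm_cost_per_1000_words = llm_cost_per_1000_tokens * tokens_per_1000_words / 1000
print(f"~{human_cost_per_1000_words / llm_cost_per_1000_words:,.0f}x cheaper")  # on the order of thousands
```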

The trigger for this chart is the release of the AI model Claude 2, developed by Anthropic, which we discussed last week in Issue #21 – Ready to compete against yourself?

Also last week, I compared Claude 2 with GPT-4 Code Interpreter in Issue #21 – Investigating absurd hypotheses with GPT-4.

Going forward, I plan to use Claude 2 a lot more in the use cases we discuss in the Splendid Edition.

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

On top of book authors, there’s another group of people that doesn’t feel comfortable with AI “stealing” their content: publishers.

The problem is that both authors and publishers are simultaneously complaining about AI learning from their content and using AI to produce new content.

We document which publishers and publications have started producing AI-generated content in the Splendid Edition of Synthetic Work, and we track each and every name in the AI Adoption Tracker.

It’s a typical “It’s complicated” relationship. And like in most “It’s complicated” relationships, it’s complicated only because you can’t have it both ways and you refuse to choose.

To tell us more about this internal struggle, we have Alexandra Bruell, reporting for The Wall Street Journal:

Several large news and magazine publishers are discussing the formation of a new coalition to address the impact of artificial intelligence on the industry, according to people familiar with the matter.

The possibility of such a group has been discussed among executives and lawyers at the New York Times; Wall Street Journal parent News Corp; Vox Media; Condé Nast parent Advance; Politico and Insider owner Axel Springer; and Dotdash Meredith parent IAC, the people said.

A specific agenda hasn’t been decided, and some publishers haven’t yet committed to participating, the people said. It is possible a coalition may not be formed, they said.

While publishers agree that they need to take steps to protect their business from AI’s rise, priorities at different companies often vary, the people said.

IAC Chairman Barry Diller at a recent industry event warned of AI content scraping—or the use of text and images available online to train the AI tools—and said that publishers in some cases should “get immediately active and absolutely institute litigation.”

News Corp Chief Executive Robert Thomson, who has been one of the most vocal critics of the platforms, recently warned that intellectual property was under threat due to the rise of AI.

Possibly to placate the growing animosity, OpenAI has started forging partnerships with publishers. The first one is with Associated Press:

The Associated Press and OpenAI have reached an agreement to share access to select news content and technology as they examine potential use cases for generative AI in news products and services.

The arrangement sees OpenAI licensing part of AP’s text archive, while AP will leverage OpenAI’s technology and product expertise. Both organizations will benefit from each other’s established expertise in their respective industries, and believe in the responsible creation and use of these AI systems.

Will AI companies have to forge similar partnerships with every publisher on the planet?

The winner of this week’s spot in the “Putting Lipstick on a Pig” section is a company called Air. The founder writes in his Twitter bio: AI for the enhancement of humanity.

However, the company’s website teases:

Introducing the world’s first ever AI that can have full on 10-40 minute long phone calls that sound like a REAL human, with infinite memory, perfect recall, and can autonomously take actions across 5,000 plus applications. It can do the entire job of a full time agent without having to be trained, managed or motivated. It just works 24/7/365.

Not sure how the two things are compatible, but you don’t have to trust me. I highly recommend you listen carefully to this 3-minute demo:

If you have friends in sales, you might want to forward them this newsletter. They will be pleased to know that they don’t have to try to be good anymore.

Want More? Read the Splendid Edition

Not quite our traditional The Tools of the Trade section but, given the AI community’s mad rush to release many new things every week now, it seems necessary to chart a map of the various AI systems now available on the market and when to use them.

Eventually, one system will be able to do everything, but until then, this is my recommendation:

  • ChatGPT with GPT-4 (vanilla), by OpenAI: use this AI system with this specific model for most use cases. It remains the most accurate and capable on the market. Do not waste your time with the GPT-3.5-Turbo model. If that’s the only model you ever tried from OpenAI, please know that there’s an ocean of difference compared to GPT-4. When the 32K-token version of GPT-4 becomes available to you, switch to it to have even longer and more coherent conversations.
  • Claude 2 (100K tokens), by Anthropic: use this AI system to upload and analyze long documents. Be extra careful in checking the results, as this model tends to hallucinate more than GPT-4. When the 200K-token version becomes available to you, switch to it to have even longer and more coherent conversations.
  • ChatGPT with GPT-4 Code Interpreter, by OpenAI:

Issue #23 - One day, studying history will be as gripping as watching a horror movie

August 5, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

My holiday lasted just two business days (plus the weekend). Of course, during this short break, an almost infinite number of things happened in AI-land. At least, we didn’t discover Artificial General Intelligence (AGI). Maybe for the Christmas break?

And of course, I found a way to connect my holiday with AI.

As a short break for my birthday, I went to Rome for some ghost hunting. The city was deserted and so I had an unprecedented chance to visit alone some of the most beautiful museums in the world. Below you see one of the pictures I took: a depiction of Medusa, one of the Gorgons, as preserved at the Musei Capitolini.

In this second picture, you see what happens when I use a fine-tuned version of the text-to-image AI model Stable Diffusion, called Realistic Vision, to bring it alive.

The model struggles to recreate the exact facial expression, even though I conditioned the image generation with an advanced family of auxiliary models that goes under the name of ControlNet.

I did this on a consumer hardware machine (a beefy MacBook Pro) over an afternoon of experiments. Given a much faster computer (aka a Windows machine with an NVIDIA graphics card) and enough time, I could probably do better. But the point is that experimentation is within reach for everyone.

In this third picture, I add a third AI model to help refine the quality of the picture, through a process usually called “HighRes Fix”.

The expression of the woman is even less faithful to the one captured by the statue. At the same time, the final image starts to look quite realistic. If used in schools, this image would certainly capture the imagination of the students way more than a white marble statue.

Imagine then, the possibilities if we animate the picture. We are getting there, with new AI models like Gen-2, developed by Runway, but the technology is not yet mature enough for me to animate this image without spending seven hours of experimentation.

Nonetheless, this is an exciting application for the Education industry. It will require quite a lot more automation to industrialize the process and bring down the costs, but generative AI gives us an unprecedented opportunity to bring history to life.

Notice that, at this point, I’m generating the static single picture through a complex process that involves five different AI models working together in a so-called pipeline.
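If you want to try something similar, here is a rough sketch of such a pipeline using Hugging Face’s diffusers library. The repo IDs and file names are assumptions on my part, and my actual workflow involved a local UI and more steps than this:

```python
# Sketch: condition Stable Diffusion on a photo with ControlNet, then generate a realistic version.
# Assumes diffusers, transformers, controlnet_aux and torch are installed; model IDs are illustrative.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from controlnet_aux import CannyDetector
from PIL import Image

photo = Image.open("medusa_statue.jpg")   # hypothetical input photo of the statue
edges = CannyDetector()(photo)            # edge map that preserves the composition

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V2.0",     # a Realistic Vision checkpoint; repo ID assumed
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")                              # use "mps" on Apple Silicon, with patience

image = pipe(
    prompt="photorealistic portrait of a woman with snakes for hair, ancient Roman setting",
    image=edges,
    num_inference_steps=30,
).images[0]
image.save("medusa_alive.png")
# A separate upscaling pass (the "HighRes Fix" step) would refine the result further.
```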

When you use a commercial AI service like Adobe Firefly or Midjourney, much of this happens behind the scenes, as the owners of the service take many decisions on your behalf to make the process as simple as possible.

The more the decisions, the more constrained the results. This is why Midjourney can generate better images than its competitors, but refuses to deviate from the styles and compositions that you have seen everywhere on social media.

As a bonus, in this fourth picture, I’m using a new, alternative family of auxiliary AI models called UniControl. UniControl has the potential to surpass ControlNet in terms of quality and accuracy. The Stability AI team is already working to integrate the new Stable Diffusion XL model.

In particular, for this picture, I used the Colorization model that is part of UniControl.

As you know, ancient Greek and Roman statues were not really white, but painted to look more realistic. It was their version of photography.

So I thought that the Colorization model in UniControl could be evocative of that approach, even if the result is very far from how the original colors would have looked.

Lots of (very technical) lessons learned in an afternoon of experiments. The most important thing you need to know is that the people you see sharing AI-generated images on social media are either true artists (exerting full control over the image generation by manipulating hundreds of settings) or image miners (generating thousands of images and picking the beautiful ones that come out with a bit of luck).

Either way, these people spend an absurd amount of time showing you the best of what AI can do today. It’s not a single-button-pushing process but, in many cases, a maze of staggering complexity.

If you want full control over what you are generating, today’s technology remains unapproachable for most people.

I’ll publish more pictures and experiments in the next few days (if I find the time) on social media. For now, let’s get back to AI and its impact on jobs.

Alessandro

In This Issue

  • Let’s see how we can use a trip to Rome and generative AI to show a glimpse of the future of Education.
  • OpenAI’s CEO makes it crystal clear: “Jobs are definitely going to go away, full stop.”
  • What’s the future of Q&A websites powered by human interaction? Stack Overflow traffic is down 40% year-over-year.
  • The UK House of Lords is mildly concerned about the impact of AI on jobs. Only mildly.
  • McKinsey believes that 29.5% of the hours worked in the US economy will be taken over by generative AI by 2030.
  • Popular YouTube gamer Kwebbelkop transitioned to a synthetic version of himself running his channel.

P.s.: The Splendid Edition of Synthetic Work is out and it’s titled Your Own, Personal, Genie.

This week, we talk about what News Corp Australia, Wayfair, 3M Health Information Systems, Babylon Health, and ScribeEMR are doing with AI.

In the Prompting section, we discover that large language models might lose accuracy with larger context windows.

In The Tools of the Trade section, we use LM Studio and the new Stable Beluga 2 model to create a personal AI assistant that runs on our computers.

 

What Caught My Attention This Week

The first thing that I have to put on your reading list is a new, very long profile of Sam Altman and OpenAI, for the Atlantic.

Among other gems, Ross Andersen captures this quote from Altman:

I wanted to know how today’s workers—especially so-called knowledge workers—would fare if we were suddenly surrounded by AGIs. Would they be our miracle assistants or our replacements? “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”

Even if you don’t want to be sensationalistic, it’s quite an arresting statement.

Two things come immediately to mind.

The first is that this statement connects the job loss to the arrival of Artificial General Intelligence (AGI). This characterization has implications.

Even if you believe Altman’s assumptions about the impact of AI on jobs, you still have to believe that his team, or somebody else, will get to AGI. And since it’s very hard to believe that we are getting close to AGI, the risk of job displacement seems very far away.

In reality, job displacement doesn’t need AGI, as we have documented for almost 6 months on Synthetic Work.

The second thing that comes to mind is that Altman just completed a world tour to meet with the governments of the world. It’s hard to imagine how even one of these governments would have skipped discussing the impact of AI on jobs.

If he said to them the same thing he said to the Atlantic, you’d expect that every single government in the world is working on contingency plans to deal with potential mass unemployment. And that it would become a key topic of discussion in the media.

But it’s not becoming a key topic of conversation.

So either the governments don’t believe there’s an urgency to act, perhaps because they don’t believe that AGI is around the corner, as I said above, or Altman has convinced them that this is a non-problem.

It might be the latter, as we find out in the article:

Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know. He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists? I wondered.) His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors. He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”

This is one of the most problematic aspects of the debate on AI and jobs. For the most part, optimists have no problems admitting that existing jobs will be displaced, but they struggle to describe the new jobs that will be created. And this, in turn, creates an uncertain scenario that people find hard to be motivated by.

It’s called Blurred Vision Bias.

Many academic papers and business management articles have been written about the topic. If you are interested in learning more about it, and ways to address it, How can leaders overcome the blurry vision bias? identifying an antidote to the paradox of vision communication is a good starting point.

It’s not just a problem of blurred vision. The conditions that are fostering the adoption of generative AI are completely different from the ones that fostered the adoption of previous emerging technologies, as I described in the Intro of Issue #15 – Well, worst case, I’ll take a job as cowboy.

Sam Altman understands this better than most other optimistic AI commentators I’ve read online. Let’s continue the article:

The jobs of the future are notoriously difficult to predict, and Altman is right that Luddite fears of permanent mass unemployment have never come to pass. Still, AI’s emerging capabilities are so humanlike that one must wonder, at least, whether the past will remain a guide to the future. As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.

Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years. The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.

In 2020, OpenAI provided funding to UBI Charitable, a nonprofit that supports cash-payment pilot programs, untethered to employment, in cities across America—the largest universal-basic-income experiment in the world, Altman told me. In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.

So, existing jobs will go away, but a wave of wonderful new jobs will arrive. Yet, we’ll need a massive countervailing redistribution of wealth to compensate the dramatic transfer of wealth. To the point that:

AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?” If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish. One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”

Altman’s vision seemed to blend developments that may be nearer at hand with those further out on the horizon. It’s all speculation, of course. Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations. America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization. It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.

Let’s stop here. We don’t focus on AGI on Synthetic Work. Not until it becomes a concrete possibility.

This is an exception to illustrate how mass job displacement is a real possibility in the minds of the people building our most capable AI models. And how too few people are talking about (and building) contingency plans for that scenario.

If you look at the progress made in less than one year by all sorts of generative AI models, you realize that the evolution of this technology is not showing a linear progression. Unless we hit a wall in terms of what machine learning can do, we are going to see extraordinary capabilities in the next two generations of models.

And by the way, Sam Altman is not the only one that expects the “marginal cost of intelligence” to fall very close to zero within 10 years.

One of the most accomplished venture capitalists in the world, Vinod Khosla, just last week published a statement on X (formerly Twitter) that is quite a departure from his long-standing recommendation for young people to learn STEM disciplines:

All of this is to say that contingency plans cannot wait. Not just at the government level, but also at the individual level.

  • What are you doing to prepare for a potential takeover of some of your job tasks by generative AI?
  • What are you doing to prepare children for an alternative reality where some of the jobs they are studying for will yield a much lower salary (or will disappear entirely) by the time they graduate?
  • What are you doing to prepare your company against a hypothetical competitor that uses generative AI to recreate your business at a fraction of your cost?

A second thing worth your attention is an observation from Brett Winton, the Chief Futurist at ARK Invest (remember to always roll your eyes when you read that), about the decline in usage of Stack Overflow.

If you are not a technologist and you don’t know what Stack Overflow is: it’s a top Q&A website where developers, web designers, and data scientists go to ask technical questions about obscure technical issues.

Every generative AI startup developing large language models harvested Stack Overflow data to train their creatures at some point. That’s part of the reason why CoPilot, GPT-4 or Claude 2 are so good at answering questions about coding and data science.

But absorbing that knowledge (which is an improper description of what really happened, but let’s keep it simple) had a tangible impact on the usage of Stack Overflow.

Take this as supporting evidence of how quickly, and dramatically, your business can be impacted by generative AI, and how quickly people can evolve the way they work.

Stack Overflow didn’t see generative AI coming, and now they are doing the only possible thing: catching up as fast as they can. You can tell by the Labs label and the wording of the video below that this is not a fully baked service yet.

Much of this is already offered by CoPilot, which can do significantly more than Stack Overflow AI can. And potentially even more, in terms of integrations with corporate systems, will be offered by GPT-4 through its Plugin system.

Would you prefer to use a generic AI-powered Q&A platform that can answer any question you’d have about any topic? Or a specialized one that can only answer coding questions?

This is a question that every business out there will have to answer: will users prefer to use my specialized AI or OpenAI’s generic AI?

It will boil down to how much better the specialized AI will be at answering questions. If your specialized AI model is only marginally better than a generalized model, users will prefer the convenience of interacting with a single model for everything.

This is why I keep stressing in all my consulting engagements that companies have to become really good at fine-tuning generative AI models. It’s the only way to stay competitive in the long run.
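What does that look like in practice? Here is a minimal sketch of parameter-efficient fine-tuning (LoRA) on a small open model, assuming the Hugging Face transformers, peft, and datasets libraries and a made-up internal dataset:

```python
# Minimal sketch of parameter-efficient fine-tuning (LoRA) on a small open model.
# Model name and training file are illustrative stand-ins, not a recommendation.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "facebook/opt-350m"  # stand-in for whatever base model you pick
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small trainable LoRA adapters instead of updating all of the model's weights
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]))

# company_qa.jsonl is a hypothetical file of {"text": "..."} records from your internal knowledge base
data = load_dataset("json", data_files="company_qa.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```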

Back to Stack Overflow.

It’s also unclear how their user base will react to this new feature. There’s a difference between a third party secretly harvesting your forum data to commercialize a product that kills your business, and you openly telling your users that their interactions are being harvested for you to profit from, without sharing the revenue.

As we said many times on Synthetic Work, we are all unknowingly contributing to the training of more and more powerful generative AI models. Every interaction we have with each other in the digital world is teaching these models how to generate more human-like and accurate text, images, sounds, voices, music, etc.

Not just the public interactions we have on cloud services like social media networks, but also those within the boundaries of our companies, when we send emails, exchange chat messages, write documents, or participate in video meetings.

All of this data will eventually be used to train AI models that can write, sound, and look like us. But we are not getting paid for it.

So the question is: what and who else will experience that 40% year-over-year decline in usage?

Finally, if we zoom out from the specificity of the Stack Overflow situation, we might wonder if the advent of large language models will have a long-term impact on all websites that promote and depend on human interaction. And if that, in turn, will affect future LLMs.

If you feel particularly nerdy, you could read a new academic paper on this topic titled Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow:

Large language models like ChatGPT efficiently provide users with information about various topics, presenting a potential substitute for searching the web and asking people for help online. But since users interact privately with the model, these models may drastically reduce the amount of publicly available human-generated data and knowledge resources.

This substitution can present a significant problem in securing training data for future models. In this work, we investigate how the release of ChatGPT changed human-generated open data on the web by analyzing the activity on Stack Overflow, the leading online Q&A platform for computer programming. We find that relative to its Russian and Chinese counterparts, where access to ChatGPT is limited, and to similar forums for mathematics, where ChatGPT is less capable, activity on Stack Overflow significantly decreased.

A difference-in-differences model estimates a 16% decrease in weekly posts on Stack Overflow. This effect increases in magnitude over time, and is larger for posts related to the most widely used programming languages. Posts made after ChatGPT get similar voting scores as before, suggesting that ChatGPT is not merely displacing duplicate or low-quality content.

These results suggest that more users are adopting large language models to answer questions and they are better substitutes for Stack Overflow for languages for which they have more training data.

Using models like ChatGPT may be more efficient for solving certain programming problems, but its widespread adoption and the resulting shift away from public exchange on the web will limit the open data people and models can learn from in the future.
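
If you are curious about the method behind that 16% figure, here is a minimal sketch of a difference-in-differences estimate in Python. The numbers are fabricated for illustration; only the structure (a treated forum and a control forum, observed before and after the ChatGPT release) mirrors the paper’s approach.

```python
# Toy difference-in-differences sketch with fabricated weekly post counts.
# "treated" = a forum exposed to ChatGPT, "control" = a comparison forum
# where ChatGPT is less useful; "post" = weeks after the ChatGPT release.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "log_posts": [10.0, 10.1, 9.8, 9.9,   # treated, before
                  9.7, 9.6, 9.5, 9.6,     # treated, after
                  8.0, 8.1, 7.9, 8.0,     # control, before
                  8.0, 8.1, 8.0, 7.9],    # control, after
    "treated":   [1]*8 + [0]*8,
    "post":      [0]*4 + [1]*4 + [0]*4 + [1]*4,
})

# The coefficient on treated:post is the difference-in-differences estimate:
# the extra drop in (log) activity on the treated forum after the release.
model = smf.ols("log_posts ~ treated * post", data=data).fit()
print(model.params["treated:post"])
```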


Connected to the first story of this week, the third story that caught my attention proves that some governments are mildly concerned about the impact of AI on jobs.

To tell us this story there is Safi Bugel, reporting for The Guardian:

The House of Lords could be replaced by bots with “deeper knowledge, higher productivity and lower running costs”, said a peer during a debate on the development of advanced artificial intelligence.

Addressing the upper chamber, Richard Denison hypothesised that AI services may soon be able to deliver his speeches in his own style and voice, “with no hesitation, repetition or deviation”.

He quoted the example to raise the wider issue of AI’s potential effect on the UK jobs market.

“Is it an exciting or alarming prospect that your lordships might one day be replaced by peer bots with deeper knowledge, higher productivity and lower running costs?” the independent crossbencher asked. “Yet this is the prospect for perhaps as many as 5 million workers in the UK over the next 10 years.

“I was briefly tempted to outsource my AI speech to a chatbot and to see if anybody noticed. I did in fact test out two large language models. In seconds, both delivered 500-word speeches which were credible, if somewhat generic.”

That’s only because Lord Londesborough doesn’t follow the tutorials I publish in the Splendid Edition of Synthetic Work.

The Viscount Camrose replied, according to the article:

The AI minister, Jonathan Berry, said: “These advances bring great opportunities, from improving diagnostics and healthcare to tackling climate change, but they also bring serious challenges, such as the threat of fraud and disinformation created by deepfakes.

“We note the stark warnings from AI pioneers, however uncertain they may be about artificial general intelligence and AI biosecurity risks. We will unlock the extraordinary benefits of this landmark technology while protecting our society and keeping the public safe.”

This last statement encapsulates the feeling of most AI optimists: what we’ll gain out of this technology in terms of prosperity is worth the risk of losing our current jobs.

It’s an extraordinary position for humans to take, contrary to everything we see on a daily basis. Our standard behavior, captured by the famous book The Innovator’s Dilemma, is that we don’t like to cannibalize our existing business in the hope of creating a much better one.

It’s hard to reconcile these two positions. Unless, of course, we don’t believe there will be any cannibalization.

A Chart to Look Smart

The easiest way to look smart on social media and gain a ton of followers? Post a lot of charts. And I mean, a lot. It doesn’t matter about what. It doesn’t even matter if they are accurate or completely made up.
You won’t believe that people would fall for it, but they do. Boy, they do.

So this is a section dedicated to making me popular.

Just last week, McKinsey published a new report dedicated to Generative AI and the future of work in America:

Across a majority of occupations (employing 75 percent of the workforce), the pandemic accelerated trends that could persist through the end of the decade. Occupations that took a hit during the downturn are likely to continue shrinking over time. These include customer-facing roles affected by the shift to e-commerce and office support roles that could be eliminated either by automation or by fewer people coming into physical offices. Declines in food services, customer service and sales, office support, and production work could account for almost ten million (more than 84 percent) of the 12 million occupational shifts expected by 2030.

By contrast, occupations in business and legal professions, management, healthcare, transportation, and STEM were resilient during the pandemic and are poised for continued growth. These categories are expected to see fewer than one million occupational shifts by 2030.

The biggest future job losses are likely to occur in office support, customer service, and food services. We estimate that demand for clerks could decrease by 1.6 million jobs, in addition to losses of 830,000 for retail salespersons, 710,000 for administrative assistants, and 630,000 for cashiers. These jobs involve a high share of repetitive tasks, data collection, and elementary data processing, all activities that automated systems can handle efficiently. Our analysis also finds a modest decline in production jobs despite an upswing in the overall US manufacturing sector, which is explained by the fact that the sector increasingly requires fewer traditional production jobs but more skilled technical and digital roles.

We estimate that 11.8 million workers currently in occupations with shrinking demand may need to move into different lines of work by 2030. Roughly nine million of them may wind up moving into different occupational categories altogether. Considering what has already transpired, that would bring the total number of occupational transitions through the decade’s end to a level almost 25 percent higher than our earlier estimates, creating a more pronounced shift in the mix of jobs across the economy.

Long-time readers of Synthetic Work know how I feel about this kind of forecast. Also, this is in stark contrast with many other studies we discussed before, suggesting that the legal profession will be one of the most impacted by AI.

Among others, you can review How will Language Modelers like ChatGPT Affect Occupations and Industries?

Yes, I’m building a new Research section of Synthetic Work to aggregate all the most important research on the impact of generative AI on jobs. It’s for situations like this one, when we need to refer to an important study and we can’t waste hours going through the archives of the newsletter.

It’s still a work in progress, which is why there’s no link from the front page and no official announcement.

But we are digressing.

The money quote from the report:

Without generative AI, our research estimated, automation could take over tasks accounting for 21.5% of the hours worked in the US economy by 2030. With it, that share has now jumped to 29.5%.

After dedicating an entire career to automation, I could talk for hours about how the first part of this prediction seems exceptionally unrealistic. But if we decide to believe Sam Altman then we might decide to believe McKinsey, too.

Let’s continue:

This research does not predict aggregated future employment levels; instead, we model various drivers of labor demand to look at how the mix of jobs might change—and those results yield some gains and some losses. In fact, the occupational categories most exposed to generative AI could continue to add jobs through 2030, although its adoption may slow their rate of growth. And even as automation takes hold, investment and structural drivers will support employment. The biggest impact for knowledge workers that we can state with certainty is that generative AI is likely to significantly change their mix of work activities.

The thing that is truly unclear in this report is whether McKinsey has modeled this outcome accounting for linear, exponential, or null progress of generative AI models.

For example, did they consider that by 2030 we’ll have GPT-5 and GPT-6, and each might be overwhelmingly more capable than its predecessor? Or did they assume that progress will freeze at the GPT-4-level capabilities expected by the end of the year?

That makes a huge difference in terms of how to interpret this forecast.

Reviewing the Methodology section of the full 76-page report reveals nothing.

The one takeaway from this study is that there is no consensus among forecasters and experts on how big the impact of AI will be on jobs, and on which kind of jobs.

This is (hopefully) why you read Synthetic Work: to develop an independent and balanced opinion on the topic, while we figure out who’s right.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, or the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

As we anticipated a few issues ago, synthetic clones of ourselves are coming and, if you believe a survey recently conducted by YouTube, people won’t have a problem with them replacing real human entertainers.

Chris Stokel-Walker, reporting for Wired:

Jordi Van Den Bussche used to devote every waking hour to building his presence on social media. The gaming creator, better known as Kwebbelkop, would labor 24/7 on his YouTube channel coming up with video ideas, shooting them, distributing them. He did this while courting brand deals and doing the other work integral to his survival on the platform. Five years ago, he ran into a problem. “Every time I wanted to take a holiday or I needed some time for myself, I couldn’t really do that, because my entire business would stop,” he says.

It’s an issue known as the “key person problem.” Without Van Den Bussche on camera, the entire Kwebbelkop enterprise didn’t work. He was too busy making videos to think about how to scale his business, and too tired to create videos. He needed a break: Around 2018, like many other YouTubers, he experienced significant burnout.

The burnout sparked a change in mindset. He began thinking about what would benefit him and what would benefit the creator industry—which often relies on milking the on-camera presence of an individual until they reach a breaking point, then finding another person seeking fame and fortune. He came up with a solution: a series of AI tools designed to create and upload videos, practically without his involvement. “I’m retired from being an influencer,” he says. “I’ve had a lovely career. I had a lot of fun. I want to take things to the next level. And that means making this brand live on forever.”

Van Den Bussche’s AI influencer platform, which launched this week after a suitably excitable level of hype on Twitter from its creator, is his attempt to make that happen. It comprises two versions of an AI tool. The first is trained on a creator’s likeness—their on-camera performances and what they say in videos—and is used to create new content. It appears to be similar to Forever Voices, the controversial AI tool behind the CarynAI virtual influencer, which outsourced maintaining connections with fans on behalf of creators.

The other involves simplifying the act of creation as much as possible by taking simple prompts—such as “turn this article into a video formatted like an interview involving two people”—and producing the end result. (The latter is similar to a tool called QuickVid, which has seen some early adoption.)

Long-time readers of Synthetic Work know that, since day one, I have recommended not thinking of generative AI assistants like GPT-4 as very capable interns, an analogy that you see regurgitated over and over on social media and in the press.

I always recommended thinking about an AI assistant as the most talented actor in Hollywood. You, the user, are the director.

Jordi Van Den Bussche, the protagonist of this story, is effectively transitioning from being an actor to being a director.

Let’s continue the article:

The ideas that went into the AI tools took years to form. Prior to building them, Van Den Bussche had set up a coaching business, where he gave other aspiring influencers his blueprint for social media success. It was through that process that he developed a protocol for how to be a prominent creator. Eventually, though, even his protégés needed time off, and Van Den Bussche realized the fatal flaw in the creator economy was humans.

Van Den Bussche and his creative team began trying to reverse engineer what made creators successful. “We started testing a lot of theories on this,” he says. “We needed evidence: How much does the voice influence the performance with the fans? How much does the face influence it? How much does the content influence it?”

In April 2021, Van Den Bussche launched a YouTube channel with a virtual YouTuber (vtuber) called Bloo that he developed, powered by AI. Since then, Bloo has gained 775,000 subscribers, with each video watched by tens of thousands or hundreds of thousands of viewers. “He’s a completely virtual influencer with a protocol and set steps and a bunch of AI and machine learning applications involved in the system,” he says. “Now we’re applying that model to my IP and my friends’. It includes voice cloning, so it sounds like me.”

The Kwebbelkop videos made by AI—the first of which dropped on Tuesday—are powered by models trained on Van Den Bussche’s existing content. “It’s modeled after me and my creativity and my input,” he says. “Everyone thinks I’m retiring as a creator and letting this AI run, but I’m not retiring as a creative.”

He claims to have a wait list of 500 influencer friends within the industry eager to adopt his AI tools, though he can’t give them access until the cost of creating new videos drops to an economical level, which he believes will happen as technology advances.

This is a video of Bloo:

This, instead, is a video of synthetic Van Den Bussche:

Of course, once the process of creating a synthetic influencer becomes automatable and cheap, an already saturated platform like YouTube will become 1000x more crowded. Which means that, to stand out and capture more than a few users, each synthetic influencer owner will have to spend bigger and bigger budgets on advertising and marketing.

Unless we start doing the reverse: rather than capturing as much audience as possible, we start capturing the smallest possible niche, but at scale. A company like Meta could create millions of synthetic entertainers, exquisitely tailored around the unique taste of each user (about whom it knows everything).

What makes you you?

And what do people want to see from you specifically for them to pay enough to earn a living?

Going forward these questions will become more important than ever.

What if the only two jobs of the future will be building for others and entertaining others?

Want More? Read the Splendid Edition

The time has come for us to start testing open access and open source large language models. These models have reached a level of maturity that starts to match the performance of GPT-3.5-Turbo in some tasks and, at this pace, we might see them getting close to GPT-4 level of performance by the end of the year.

There are many reasons why you’d want to pay attention to these models.

Perhaps because your company doesn’t want to depend on a single AI provider like OpenAI, Anthropic, or Microsoft. The more capable their models become, the more they will be able to charge for them.

Perhaps your company doesn’t want to depend on an AI model that can wildly fluctuate in terms of accuracy over time. You may have read about the heated debate between OpenAI and its customer base about the dramatic drop in accuracy of both GPT-3.5-Turbo and GPT-4 from March to now.

Perhaps your company wants to build a product on top of AI providers’ APIs but the cost is exorbitant for the use case you have in mind, making the whole business financially unsustainable.

Perhaps your company wants to fine-tune an AI model but you don’t feel comfortable sharing your precious proprietary data with an AI provider or you don’t want to wait for them to make the fine-tuning process easy and accessible (right now it’s more akin to a consulting engagement for most companies).

Perhaps your company wants to achieve a level of customization that is simply impossible to achieve with the security scaffolding that the AI providers have put in place to protect their models from prompt injections, reputational risks, and other potential liabilities.

We can go on for a while.

The point is that, at least in this early stage of technology evolution for generative AI, you may want to keep all your options open.

Just a few months ago, in the text-to-image (more properly called diffusion) model space, it seemed that nobody could beat Midjourney. To enjoy its frictionless generation of stunning AI images, you had to accept a lack of flexibility in image composition and final look & feel, because the alternative was the less-than-impressive quality of Stable Diffusion and DALL-E 2.
Then, Stability AI released Stable Diffusion XL (SDXL) and everything is worth reconsidering now.

The same might happen with language models and the current dominance of GPT-4.

To understand how much these models have matured, we’ll test a version of the new LLaMA 2 70B model released by Meta, and fine-tuned by Stability AI.
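
For the technically inclined, testing one of these open models does not require much ceremony. Here is a minimal sketch using the Hugging Face transformers library; the model id is just an example of a LLaMA 2-derived checkpoint (substitute whichever one you are evaluating), and a 70B model will need multiple GPUs or quantization that this sketch glosses over.

```python
# Minimal sketch: load an open large language model and ask it a question.
# Smaller checkpoints work the same way and are friendlier to a single GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "stabilityai/StableBeluga2"  # example checkpoint; substitute the model you want to test

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Explain, in two sentences, what fine-tuning a language model means."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```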

Issue #24 - Cannibals

August 12, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

Synthetic Work is now 6 months old.

This milestone comes with a few insights that I’d like to share with you:

  1. You, dear first-edition reader, have stuck around for this entire time. I’m thankful for (and astonished by) that. Synthetic Work has a churn rate of zero. While there is a seemingly infinite number of resources to read about AI, the particular angle we focus on in this newsletter seems to matter to all of you. I hope you’ll continue to see value in this project as I work to build and scale it in the future.
  2. You, collectively, are a readership of business leaders. I’ve never seen such a concentration of CEOs, CIOs, and other C-level executives, SVPs/VPs, and Managing Directors adopt a service focused on an emerging technology as quickly as I’ve seen for Synthetic Work. You are an audience of truly outstanding thought leaders across a wide range of industries. It’s my privilege to write for you, and do business with you, every week.
  3. You have financially supported Synthetic Work from day one by subscribing to the Splendid Edition. You, better than anybody else, know how hard it is to start a business. Especially an independent one like this. But you are helping make this project viable thanks to your enthusiastic support. That said, more support is always welcome as I scale the business to offer you even more value (read below).
  4. Synthetic Work has evolved to be way more than a newsletter. Information primarily comes to you through the two editions of the newsletter every week but, at this point, the project encompasses multiple web assets, like the AI Adoption Tracker and the How to Prompt section. More of these assets are under construction, like a Vendor Database, a Research Database, and a long-overdue database of recommended AI-powered solutions.
  5. It’s still very early. The number of people that have yet to realize how much their business, their workforce, and their career will be impacted by AI, is still small. In my interactions with clients during consultation days, I still see some questions that would have been important two years ago. Other experts in other sub-fields of AI, like AI law, tell me the same. Synthetic Work has the potential to reach a much bigger audience and it will.
  6. This is just a timid first step. The way we are seeing AI being used by early adopters in the Splendid Edition is nothing compared to what’s coming. It’s hard to see the possibilities if you don’t read research papers all day. I do it on your behalf, and I promise that what we have seen so far pales in comparison with what’s in the innovation pipeline. Some of the things that I’m building for you depend on AI technology that is maturing rapidly, but not quite there yet. The idea is that Synthetic Work becomes a showcase for AI applied to business problems.

So, thank you for your support and your trust thus far. If you think Synthetic Work could be valuable for other leaders like you, please consider sharing it with your network. It’s the best way to help it grow.

Alessandro

In This Issue

  • A rare interview with the Anthropic CEO Dario Amodei and his view on the integration of AI in business environments.
  • Billionaire investor Chamath Palihapitiya on the impact of AI on software development and the implications for public companies pressured by activist investors.
  • A leaked conversation during an Adobe staff meeting about the risk of cannibalizing the company’s own business model with AI.
  • The New England Journal Of Medicine published a very interesting article on the adoption of AI in the Health Care industry.
  • The collaboration between a rapper, Lupe Fiasco, and Google is a great example of collaboration between humans and AI. For now.
  • A new report from the UK House of Commons Committee touches on how workers might feel in a workplace where they are surveilled and judged by tech and AI.

P.s.: this week’s Splendid Edition of Synthetic Work is out, titled The Ideas Vending Machine.

In it, you’ll read what WPP, the London Stock Exchange Group, Tinder, and Australia’s Home Affairs Department are doing with AI.

Also, in the What Can AI Do for Me? section, you’ll read how GPT-4 can be used to generate perfectly legit business ideas. For real.

What Caught My Attention This Week

The first thing that I’d like to point your attention to this week is a rare interview with the Anthropic CEO Dario Amodei.

None of the questions were focused on the impact of AI on jobs, but there are two reasons why the interview is important.

First, Amodei is one of the few human beings who truly understand how far generative AI has come, as his company is busy developing frontier models that can rival OpenAI’s. At some point during the interview, he suggests that AI will reach the ability levels of educated humans in 2-3 years. And yet, he replies “I don’t know” 17 times to the interviewer’s questions.

If the top-of-the-world experts are not certain about what will happen, you should be cautious when you hear a pundit’s prediction about how generative AI will evolve in the next 5 or 10 years.

This doesn’t just mean that the predictions might be too optimistic. It also means that the predictions might be too conservative.

The second reason why the interview is important is that Amodei is one of the few who highlighted the critical difference between the proof of concepts that you see constantly promoted on social media and the reality of an enterprise implementation:

Q: Why would it be the case that it could pass a Turing Test for an educated person but not be able to contribute or substitute for human involvement in the economy?

A: A couple of reasons. One is just that the threshold of skill isn’t high enough, comparative advantage. It doesn’t matter that I have someone who’s better than the average human at every task. What I really need for AI research is to find something that is strong enough to substantially accelerate the labor of the thousand experts who are best at it.

We might reach a point where the comparative advantage of these systems is not great.

Another thing that could be the case is that there are these mysterious frictions that don’t show up in naive economic models but you see it whenever you go to a customer or something.

You’re like — “Hey, I have this cool chat bot.” In principle, it can do everything that your customer service bot does or this part of your company does, but the actual friction of how do we slot it in? How do we make it work? That includes both just the question of how it works in a human sense within the company, how things happen in the economy and overcome frictions, and also just, what is the workflow? How do you actually interact with it?

It’s very different to say, here’s a chat bot that looks like it’s doing this task or helping the human to do some task as it is to say, okay, this thing is deployed and 100,000 people are using it.

Right now lots of folks are rushing to deploy these systems but in many cases, they’re not using them anywhere close to the most efficient way that they could. Not because they’re not smart, but because it takes time to work these things out.

That’s a key reason why the Splendid Edition of Synthetic Work focuses exclusively on applied AI (even if it’s just in the testing phase).

The interview is dense with insights, but keep in mind that it’s two hours long and the first part is quite technical:


The second thing worth your attention is an observation from the famed billionaire investor Chamath Palihapitiya on the impact of AI on software development.

Palihapitiya doesn’t exactly have an immaculate track record. The investments he led in the last few years, mainly through SPAC vehicles, have all crashed and burned, leaving retail investors with heavy losses.

Nonetheless, the following comment is interesting and aligns perfectly with what I’ve been writing in multiple issues of Synthetic Work:

The idea that AI shrinking corporate workloads will lead to companies producing more and of higher quality (scenario #1), as Marc Andreessen recently suggested, is an exceptionally optimistic one. My personal experience in corporate environments, and the trends we have been tracking in the Splendid Edition of Synthetic Work, suggest that scenario #2 is much more likely.

The particular angle that Palihapitiya focuses on, the pressure from activist investors, makes me think that this might become a playbook for Private Equity (PE) firms: acquire distressed companies that have more-than-average business functions ideal for AI-driven automation, and then deploy AI models to radically shrink the workforce and increase margins.


The third thing that I would focus on this week is a leaked conversation during an Adobe staff meeting about the risk of cannibalizing the company’s own business model with AI.

Eugene Kim, reporting for Business Insider:

One senior designer at Adobe recently wrote in an internal AI ethics Slack channel that a billboard and advertising business he knows plans to reduce the size of its graphic design team because of Photoshop’s new text-to-image features.

“Is this what we want?” the person wrote.

Other messages in the Adobe Slack channel were more critical of the AI revolution, calling it “depressing” and an “existential crisis” for many designers. One person said some artists now feel like they are “slaves” to the AI algorithm, since their jobs will mostly involve just touching-up AI-generated work.

Some had a more positive view. Photoshop made artists more productive, and AI will only increase their efficiency, they said. One person said many freelancers and hobbyists will benefit from the increased output, even if some companies reduce their design workforce.

“I don’t think we should feel guilty for providing better and faster tools, as long as it’s done ethically,” that person wrote in the Slack channel.

During an internal staff meeting in June, one employee asked whether generative AI was putting Adobe “in danger of cannibalizing” its lucrative business that targets corporate customers, in exchange for individual users “who want it free or cheap,” according to a screenshot of the question submitted online.

A similar question was broached during an Adobe earnings call in June. Jefferies analyst Brent Thill said the “number one question” he gets from investors is whether AI will reduce Adobe’s “seats available.”

This is a closely watched measure of the company’s customer base. Adobe often sells cloud software subscriptions based on the number of seats, or licenses, which give customers access to the technology. A company with, say, 5 graphic designers in-house would buy five licenses. So if designers are getting laid off, demand for licenses might fall, cutting into Adobe’s revenue, or slowing sales growth.

In response to Thill’s question, David Wadhwani, Adobe’s president of digital media, said the company has a history of introducing new technology that leads to more productivity and jobs.

Some employees are not sold on this idea. In the internal Slack channel, a group of employees discussed how new generative AI technology is fundamentally different from prior disruptive innovations.

Cameras, for example, still required skill and expertise to produce good photography, they said. In contrast, generating AI images requires almost no skill, raising concerns over losing “craft and expertise that can only be gained through continued practice and personal creativity,” one of the people wrote in the Slack channel.

“It does not innovate in the way a camera does in that it replaces people in the mediums that it draws data from instead of opening up new means of expression,” one of the people wrote.

Adobe’s Firefly text-to-image model is not yet great, but it soon will be. And their new Photoshop features, called Generative Fill and Generative Expand, powered by the same AI model, are already impressive.

So, there’s no question that this technology significantly impacts the amount of time a designer needs to spend on a project.

But if Adobe hadn’t done it, somebody else would have. Open source alternatives to those features have existed since the launch of Stable Diffusion 1.0 last summer, and you should expect these capabilities to become table stakes in photo manipulation software going forward.
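
For reference, this is roughly what the open source equivalent of a generative fill operation looks like with the diffusers library. The checkpoint name and the input files are examples I chose for illustration; treat this as a sketch of the workflow, not a statement about what Adobe does under the hood.

```python
# Minimal sketch of open source "generative fill": repaint only the masked
# region of an image according to a text prompt.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # example inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("product_photo.png").convert("RGB")   # hypothetical input file
mask = Image.open("area_to_replace.png").convert("RGB")  # white = area to regenerate

result = pipe(
    prompt="a wooden table in a sunlit studio, product photography",
    image=image,
    mask_image=mask,
).images[0]
result.save("filled.png")
```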

This dilemma will soon concern many other technology providers, and not just in the graphic design space.

Every software company that is productizing AI to automate a task that was previously done by humans will eventually reach a threshold where more AI might hurt the business model rather than make the life of the customers better.

That’s what I usually refer to with the expression “It might get better before it gets worse” in the context of artificial intelligence.

The alternative view is that, following Marc Andreessen’s argument again, technology providers will become significantly more profitable because these AI technologies are lowering the cost of entry for new experts.

If, to put it like the Adobe employees in our story, AI turns a highly sophisticated skill into a button-push exercise, it means that more people will be able to become graphic designers, or software developers, or copywriters. Those words won’t mean what they mean today, but they will still be jobs that people can do without studying for years.

So we might go from, let’s say, 100,000 graphic designers to 10 million.

The problem, at that point, will be finding enough customers to keep those 10 million designers busy. Or paying them enough to make a living.

It might get better before it gets worse.

A Chart to Look Smart

The easiest way to look smart on social media and gain a ton of followers? Post a lot of charts. And I mean, a lot. It doesn’t matter about what. It doesn’t even matter if they are accurate or completely made up.
You won’t believe that people would fall for it, but they do. Boy, they do.

So this is a section dedicated to making me popular.

The New England Journal Of Medicine published a very interesting article on the adoption of AI in the Health Care industry. It comes with an important chart:

From the research:

AI adoption in health care delivery lags behind the use of AI in other business sectors for multiple reasons. Early AI took root in business sectors in which large amounts of structured, quantitative data were available and the computer algorithms, which are the heart of AI, could be trained on discrete outcomes — for example, a customer looked at a product and bought it or did not buy it. Qualitative information, such as clinical notes and patients’ reports, are generally harder to interpret, and multifactorial outcomes associated with clinical decision making make algorithm training more difficult.

In last week’s Splendid Edition, Issue #23 – Your Own, Personal, Genie, we saw how three different companies are using a new AI technology developed by AWS to automatically generate clinical notes from patient-doctor conversations.

So something is moving in that direction.

Let’s continue with the most important insight:

We think that the need for AI to help improve health care delivery should no longer be questioned, for many reasons. Take the case of the exponential increase in the collective body of medical knowledge required to treat a patient. In 1980, this knowledge doubled every 7 years; in 2010, the doubling period was fewer than 75 days. Today, what medical students learn in their first 3 years would be only 6 percent of known medical information at the time of their graduation. Their knowledge could still be relevant but might not always be complete, and some of what they were taught will be outdated. AI has the potential to supplement a clinical team’s knowledge in order to ensure that patients everywhere receive the best care possible.

Somewhat similarly, in my previous job, and for years, I kept repeating that no human team, no matter how skilled or how numerous, can keep up with the number of events that occur in a large, complex IT environment and that, because of this, monitoring dashboards full of blinking lights and numbers are completely useless, and only feed a delusion of control.

The commonality between the above quote and my position on monitoring dashboards is that it’s time to accept that human beings cannot scale to the complexity of the world we have created. And if we cannot scale, the number of errors we make is only destined to increase.

Let’s go back to the article for some more data on AI adoption:

In health care delivery, the role of AI in improving clinical judgment has garnered the most attention, with a particular focus on prognosis, diagnosis, treatment, clinician workflow, and expansion of clinical expertise. Specialties such as radiology, pathology, dermatology, and cardiology are already using AI in the process of image analysis. In radiologic screening, for example, up to 30% of radiology practices that responded to a survey indicated that they had adopted AI by 2020, and another 20% of radiology practices indicated that they planned to begin using AI in the near future.

We have found that uses of AI are emerging in nine domains of health care delivery. However, most uses of AI in health care delivery have not been subject to randomized, controlled trials. Therefore, the usual level of evidence required for medical decision making may be lacking.

Adoption of AI in health care delivery is lagging behind for several reasons. First, given the many different sources and types of health care data needed, they are known to be more heterogeneous and variable than data in other business sectors (e.g., data to make a movie recommendation in Netflix). This creates challenges in applying AI. Another major reason is the fee-for-service model of payment as compared with a value-based payment model. The latter payment structure would fund measures that improve care or make it safer, which is where the benefit of AI in health care delivery could be of substantial importance. Under a fee-for-service model, these incentives are substantially less prominent or absent altogether. Other documented reasons for the slow adoption of AI in health care delivery are lack of patient confidence, including concerns about privacy and trust in the output; regulatory issues such as Food and Drug Administration approval and reimbursement; methodologic concerns such as validation and communication of the uncertainty of a given AI-based recommendation or decision; and reporting difficulties such as explanations of assumptions and dissemination. These factors will have to be addressed before long-term adoption of AI and full realization of the opportunity that it provides.

Issues within health care organizations may also account for the slow adoption of AI.

I close with a key point that resonates with Dario Amodei’s comment in the first story of this newsletter:

Finally, implementation is critical for AI adoption within an organization. This category takes the most time and effort, and it is often shortchanged by organizations. One challenge is change management. For example, there may be agreement to move to prescriptive scheduling in the operating room, but the implications of this decision are quite different for a hospital administrator, the chief of surgery, individual surgeons, and the operating-room team. Thus, successful AI adoption is likely to require intentional actions that both help to effect behavioral change and address the details holistically, such as creating AI output visualizations that make interpretation easy for clinicians.

Another implementation challenge is workflow integration. The use of AI in clinical operations is more successful when it is treated as a routine part of the clinical workflow. In essence, AI output is more effective when viewed as a member of the team rather than as a substitute for clinical judgment.

Change management. Another drum I’ve been beating for years when explaining the key ingredient for a successful rollout of automation technologies.

It turned out that technologies don’t understand and don’t care about change management. It’s up to the business leadership to pay attention to it and align the organization for awareness and cooperation.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, or the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

This week I invite you to watch a fascinating video about the collaboration between a rapper, Lupe Fiasco, and Google.

The former used a large language model, probably a customized version of Google’s AI assistant Bard, which is powered by the AI model called PaLM 2, but not in the way you would expect:

It’s a great, even emotional, story that reinforces the idea that AI is not a threat to human creativity, but a tool that can help us be more creative.

That certainly was Google’s intent.

Except that… 🙂

…the part that most people don’t want to see is that a generative AI model can now also capture the thought process of the rapper in selecting which words were the best fit for the song.

Today, AI models are not very good at that. But one year ago, they were not able to write a mathematical theorem like a Shakespeare sonnet, and now they can.

Seeing the trajectory of our actions and their long-term consequences is the hardest thing.

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

The UK House of Commons Committee for Culture, Media and Sport just published a 79-page report titled Connected tech: smart or sinister? in which they dedicate some space to how workers might feel in a workplace where they are surveilled and judged by tech (and that tech is powered by AI):

the introduction of connected tech in workplace environments can also have negative impacts on employees. As the ICO notes, “the key difference is the nature of the employer/employee relationship and its inherent power imbalance”

Dr Tabaghdehi and Dr Matthew Cole, post-doctoral researcher at the Fairwork Project based at the OII, described to us instances where the micro-determination of time and movement tracking through connected devices, which had been introduced to improve productivity, such as in warehouses had also led to workers feeling alienated and experiencing increased stress and anxiety.

Dr Sarah Buckingham similarly described Devon & Cornwall and Dorset Police Services’ trial of a “mobile health (mHealth)” intervention, which consisted of giving officers FitBit activity monitors and Bupa Boost smartphone apps to promote physical activity and reduce sedentary time. The trial increased physical activity on average but also led to “feelings of failure and guilt when goals were not met, and anxiety and cognitive rumination resulting from tracking [physical activity] and sleep”.

A Report on Royal Mail published earlier this year by the then-Business, Energy and Industrial Strategy Committee concluded that data from handheld devices called Postal Digital Assistants (PDAs) had “been used to track the speed at which postal workers deliver their post and, subsequently, for performance management, both explicitly in disciplinary cases and as a tool by local managers to dissuade staff from stopping during their rounds” despite an agreement of joint understanding between Royal Mail and the Communication Workers Union (CWU) in April 2018 to the contrary.

Dr Cole also argued that, more broadly, technological transformation would likely lead to a change in task composition and a deskilling of many roles as complex tasks are broken up into simpler ones to allow machines to perform them.

Dr Tabaghdehi cited the education sector as one profession likely to experience disruption due to technological transformation.

The ICO had noted that respondents to a recent call for evidence on employment practices “raised concerns around the use of connected tech in workplace scenarios including the increased use of monitoring technologies, as well as the ways in which AI and machine learning are impacting how decisions are made about workers” and said it “will provide more clarity on data protection in the employment context as part of this work”. Dr Cole also called for greater observation and monitoring of AI system deployments, empowered labour inspectorates and a greater role for the Health and Safety Executive (HSE), the UK regulator for workplace health and safety, in regulating workplace AI systems and upholding standards of deployment.

As a result, the Committee recommends:

The monitoring of employees in smart workplaces should be done only in consultation with, and with the consent of, those being monitored. The Government should commission research to improve the evidence base regarding the deployment of automated and data collection systems at work. It should also clarify whether proposals for the regulation of AI will extend to the Health and Safety Executive (HSE) and detail in its response to this report how HSE can be supported in fulfilling this remit.

AI-induced stress and anxiety might become a common topic mentioned by unions in the next few years.

Perhaps we should ask China’s citizens how they feel about it.

Want More? Read the Splendid Edition

This week we attempt something that I genuinely would have not considered worth a minute of my (or your) time just one year ago: we’ll ask the AI to come up with a list of business ideas.

Before you click the unsubscribe button of this newsletter, let me give you some context. If there was no science behind this, I guarantee you that I wouldn’t bother you.

The researchers’ first discovery is how quickly a human can generate 100 ideas with the help of GPT-4:

Two hundred ideas can be generated by one human interacting with ChatGPT-4 in about 15 minutes. A human working alone can generate about five ideas in 15 minutes. Humans working in groups do even worse.

A professional working with ChatGPT-4 can generate ideas at a rate of about 800 ideas per hour. At a cost of USD 500 per hour of human effort, a figure representing an estimate of the fully loaded cost of a skilled professional, ideas are generated at a cost of about USD 0.63 each, or USD 7.50 (75 dimes) per dozen. At the time we used ChatGPT-4, the API fee for 800 ideas was about USD 20. For that same USD 500 per hour, a human working alone, without assistance from an LLM, only generates 20 ideas at a cost of roughly USD 25 each, hardly a dime a dozen. For the focused idea generation task itself, a human using ChatGPT-4 is thus about 40 times more productive than a human working alone.
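
The arithmetic in that passage is easy to check. Here is a back-of-the-envelope sketch, using only the figures quoted above (which are the researchers’ estimates, not mine):

```python
# Back-of-the-envelope check of the cost-per-idea figures quoted above.
hourly_cost = 500.0            # USD, fully loaded cost of a skilled professional
ideas_per_hour_with_gpt4 = 800
ideas_per_hour_alone = 20
api_fee_for_800_ideas = 20.0   # USD, approximate GPT-4 API cost at the time

cost_alone = hourly_cost / ideas_per_hour_alone                     # 25.0 USD per idea
cost_with_gpt4 = hourly_cost / ideas_per_hour_with_gpt4             # 0.625 USD, the paper's ~0.63
cost_with_api = (hourly_cost + api_fee_for_800_ideas) / ideas_per_hour_with_gpt4  # ~0.65 USD

print(cost_alone, cost_with_gpt4, cost_with_api)
print(cost_alone / cost_with_gpt4)  # 40.0: the "about 40 times more productive" figure
```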

Breathtaking, but only as long as these AI-generated ideas are not complete crap. In fact, per the researchers’ premise, to be useful, an LLM has to generate a few truly exceptional ideas rather than a lot of merely not-terrible ones.

So, how did they evaluate the ideas generated by GPT-4?

OK.

It’s time to try this ourselves, using a slightly modified version of the prompt used by the researchers.

Given that it’s the 6th monthiversary of Synthetic Work, I think it’s the perfect time to try to generate 100 ideas to evolve this project.

Here’s the first batch of 10 ideas:

Idea #1 is already in motion. As you know, I’ve started offering consulting services to companies that want to adopt AI, and I certainly thought about expanding the service to offer access to fellow experts in adjacent areas (like AI and legal).

I’m not going to comment on idea #2. You can tell me if this is a Synthetic Work service that you would be interested in.

I never thought about idea #3. Is it an exceptional one? Perhaps not.

I think about idea #4 every day, even under the shower. Technology is not there yet, but it’s coming.

Ideas #5, #9, and #10 are very difficult to implement. Idea #9 is not adjacent to Synthetic Work, in my opinion. Also: are they exceptional?

Ideas #6 and #7 are adjacent media projects tailored to provide education and cybersecurity content. As Synthetic Work evolves, you should expect more vertical content. Nothing groundbreaking.

Idea #8 is confusing in its description. It’s a mix of content tailored for the Pharmaceutical industry, similar to ideas #6 and #7, and an actual drug discovery platform, which is what Google DeepMind is doing.

Remember: we said that the value of using an LLM as a business idea generator is that there’s judgment involved to bias the process.

OK.

I’ll leave you with another two batches of 10 ideas each.

Especially in this last batch, I’m sure you’ll recognize a few things that Synthetic Work already does. So, I wouldn’t say that GPT-4 is doing a worse job than a human (me, in this case) at idea generation.

Of course, to have a chance to stumble on a truly exceptional idea, just like the researchers said, we’ll have to generate thousands of them. 30 is nowhere near enough.

If you try this experiment yourself, in the areas that matter to you, and you stumble on something truly exceptional, please let me know.

Now, to close.

Why is this research so incredibly important?

Issue #25 - The Clean-Up Commando

August 19, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

In last week’s issue, we celebrated the 6-month milestone of Synthetic Work.

One thing, easily the most important one, I didn’t mention is how happy Synthetic Work members are. Your satisfaction is my number one priority. Bragging about how satisfied you are, not so much. Perhaps, I should do it more often.

Here are a couple of recent testimonials.

This was written by the CEO of a tech company:

Absolutely one of the best source of info and ideas available out there.
Deep, interesting, not blinded by techno-optimism, entertaining. Can’t ask for more.

This, instead, was sent by a VP, R&D in the Health Care industry:

Congratulations on the 6-month milestone.

I am not the least bit surprised by your success or the growing popularity of Synthetic Work.
It is expertly produced, uniquely positioned, and perfectly timed for its intended purpose.

I thoroughly enjoy (and eagerly anticipate) each and every weekly edition.

You can find all the others on the Customers page or the Subscribe page.

Do these testimonials inspire you to write one, too? Yes?

Please do. I’d love to hear from you.
Alessandro

In This Issue

  • The role of the digital artist in the video game industry is changing. Some are not proud of how they see themselves now.
  • The UK National Institute for Health and Care Excellence (Nice) issued the recommendation to start using AI for radiotherapy treatment performed by the National Health Service (NHS).
  • Consulting companies are competing to announce enormous investments in generative AI.
  • The Drucker Institute suggests a correlation between the companies that invest in AI and their business performance.
  • AI experts are now offered salaries between half a million and a million US dollars but, in Europe, there are very few to hire.
  • Advertising agency Ogilvy calls for transparency in the use of generative AI in commercials and ads. Why?

P.s.: This week’s Splendid Edition is out and it’s titled Hypnosis for Business People.

In it, you’ll find what Maersk, Wesco, Unilever, Siemens, Travelers Cos., and Ubisoft are doing with AI.

In the What Can AI Do for Me? section, you’ll also learn a technique to improve the quality of your corporate presentations with AI-generated images.

What Caught My Attention This Week

The first story that caught my attention this week is about the changing role of the digital artist in the video game industry.

Fernanda Seavon, reporting for Wired:

In March 2023, a Reddit user shared a story of how AI was being used where she worked. “I lost everything that made me love my job through Midjourney overnight,” the author wrote. The post got a lot of attention, and its author agreed to talk to WIRED on condition of anonymity, out of fear of being identified by her employer.

“I was able to get a huge dopamine rush from nailing a pose or getting a shape right. From having this ‘light bulb moment’ when I suddenly understood a form, even though I had drawn it hundreds of times before,” says Sarah (not her real name), a 3D artist who works in a small video game company.

Sarah’s routine changed drastically with version 5 of Midjourney, an AI tool that creates images from text prompts.

When Sarah started working in the gaming industry, she says, there was high demand for 3D environmental and character assets, all of which designers built by hand. She says she spent 70 percent of her time in a 3D motion capture suit and 20 percent in conceptual work; the remaining time went into postprocessing. Now the workflow involves no 3D capture work at all.

Her company, she explains, found a way to get good and controllable results using Midjourney with images taken from the internet fed to it, blending existing images, or simply typing a video game name for a style reference into the prompt. “Afterwards, most outputs only need some Photoshopping, fixing errors, and voilà: The character that took us several weeks before now takes hours—with the downside of only having a 2D image of it,” says Sarah. “It’s efficiency in its final form. The artist is left as a clean-up commando, picking up the trash after a vernissage they once designed the art for,” she adds.

It’s the last sentence that caught my attention.

In this newsletter, on more than one occasion, we discussed a scenario where generative AI might simplify the nature of our jobs to the point that we get paid significantly less (rather than becoming ten times more productive as Marc Andreessen predicts). And this novel, less-than-flattering characterization of the role of the artist seems to fit that scenario quite well.

And while we are at it, let’s capture some additional data points and perspectives from the article:

“Not only in video games, but in the entire entertainment industry, there is extensive research on how to cut development costs with AI,” says Diogo Cortiz, a cognitive scientist and professor at the Pontifícia Universidade de São Paulo. Cortiz worries about employment opportunities and fair compensation, and he says that labor rights and regulation in the tech industry may not match the gold rush that’s been indicative of AI adoption. “We cannot outsource everything to machines. If we let them take over creative tasks, not only are jobs less fulfilling, but our cultural output is weakened. It can’t be all about automation and downsizing,” he says, adding that video games reflect and shape society’s values.


The second story worth your attention comes from the UK National Institute for Health and Care Excellence (Nice), which issued a surprising recommendation to start using AI for radiotherapy treatments performed by the National Health Service (NHS).

Anna Bawden, reporting for The Guardian:

Draft guidance from the National Institute for Health and Care Excellence (Nice) has given approval to nine AI technologies for performing external beam radiotherapy in lung, prostate and colorectal cancers, in a move it believes could save radiographers hundreds of thousands of hours and help relieve the “severe pressure” on radiotherapy departments.

NHS England data shows there were 134,419 radiotherapy episodes in England in April 2021 to March 2022 of which a significant proportion required complex planning.

At the moment, therapeutic radiographers outline healthy organs on digital images of a CT or MRI scan by hand so that the radiotherapy does not damage healthy cells by minimising the dose to normal tissue. Evidence given to Nice found that using AI to create the contours could free up between three and 80 minutes of radiographers’ time for each treatment plan, and that AI-generated contours were of a similar quality as those drawn manually.

While it recommended using AI to mark the contours, Nice said that the contours would still be reviewed by a trained healthcare professional.

The health secretary, Steve Barclay, welcomed the announcement. He said: “It’s hugely encouraging to see the first positive recommendation for AI technologies from a Nice committee, as I’ve been clear the NHS must embrace innovation to keep fit for the future.

“These tools have the potential to improve efficiency and save clinicians thousands of hours of time that can be spent on patient care. Smart use of tech is a key part of our NHS long-term workforce plan, and we’re establishing an expert group to work through what skills and training NHS staff may need to make best use of AI.”

Nice said it was also examining the evidence for using AI in stroke and chest scans. It follows a study that found AI was safe to use in breast cancer screening and could almost halve the workload of radiologists, according to the world’s most comprehensive trial of its kind.

The nine platforms included are AI-Rad Companion Organs RT, ART-Plan, DLCExpert, INTContour, Limbus Contour, MIM Contour ProtegeAI, MRCAT Prostate plus Auto-contouring, MVision Segmentation Service and RayStation.

Separately, the government announced it was investing £13m in AI healthcare research before the first big international AI safety summit in autumn. The technology secretary, Michelle Donelan, said 22 university and NHS trust projects would receive funding for projects including developing a semi-autonomous surgical robotics platform for the removal of tumours and using AI to predict the likelihood of a person’s future health problems based on their existing conditions.

Geoffrey Hinton, often called the godfather of AI, famously predicted in 2016 that AI would replace radiologists within a decade at most.

Thankfully, he was wrong. Right?


The last story that caught my attention this week is about the enormous investments in generative AI that consulting companies are announcing.

Mark Maurer, reporting for The Wall Street Journal:

KPMG plans to invest $2 billion in artificial intelligence and cloud services across its business lines globally over the next five years through an expanded partnership with Microsoft.

The professional-services company on Tuesday said it expects the partnership to bring in more than $12 billion in revenue over five years. Annually, that would represent about 7% of KPMG’s global revenue, which totaled $34.64 billion in the year ended Sept. 30, 2022. The company, the smallest of the Big Four by revenue, declined to provide a projected revenue figure for the year ending this September.

Through the new investment, the roughly 265,000-person company will further automate aspects of its tax, audit and consulting services, aimed at enabling employees to provide faster analysis, spending more time on doling out strategic advice and helping more companies integrate AI into their operations.

KPMG’s global chair and chief executive, Bill Thomas, said in an interview that the company isn’t looking to use technology to eliminate jobs, but rather to enhance its workforce with AI skills—for example, by moving people to new roles or offering them training.

“I certainly don’t expect that we’ll lay off a lot of people because we’ve invested in this partnership,” Thomas said. “I would expect that our organization will continue to grow and we will reskill people to the extent possible and, frankly, create all sorts of opportunities in ways that we can’t even imagine yet.”

As part of the expanded partnership, KPMG will have early access to an AI assistant called Microsoft 365 Copilot, before its launch to the general public. KPMG’s deal with Microsoft also includes the Azure cloud platform, through which the professional-services company already uses OpenAI to build and run apps.

A significant portion of KPMG’s investment will go toward generative AI, which many businesses are eager to apply to their finances as a way to cut costs and yield new efficiencies.

The move comes as KPMG and other companies are navigating slowing growth in their consulting businesses as corporate clients spend less on certain services amid recession concerns. Its U.S. unit in June laid off almost 2,000 employees, four months after cutting nearly 700 in its consulting division.

In the sentence “I certainly don’t expect that we’ll lay off a lot of people because we’ve invested in this partnership”, the key words are “a lot.”

Just one day later, Alex Gabriel Simon reported for Bloomberg:

Wipro Ltd., the Indian outsourcing provider, plans to spend $1 billion to train its 250,000 employees in artificial intelligence and integrate the technology into its product offerings.

The spending, over the next three years, also involves bringing 30,000 employees from cloud, data analytics, consulting and engineering teams together to embed the technology into all internal operations and solutions offered to clients.

Wipro said it will also accelerate investments in cutting-edge startups, including setting up an accelerator program for young firms specializing in generative AI.

In multiple Splendid Editions of Synthetic Work we discussed what the competitors of KPMG and Wipro are already doing with generative AI.

You should expect that every single consulting company on the planet will follow suit. It’s just too big of an opportunity to miss. And if you want to understand why, just read the section below titled “The Way We Work Now”.

A Chart to Look Smart

The easiest way to look smart on social media and gain a ton of followers? Post a lot of charts. And I mean, a lot. It doesn’t matter about what. It doesn’t even matter if they are accurate or completely made up.
You won’t believe that people would fall for it, but they do. Boy, they do.

So this is a section dedicated to making me popular.

The Wall Street Journal recently gave space to Rick Wartzman, the head of the KH Moon Center for a Functioning Society at the Drucker Institute, a part of Claremont Graduate University

(take a deep breath)

and Kelly Tang, a Senior Director of Research at the aforementioned Drucker Institute.

The two propose an interesting correlation:

The institute’s measure serves as the foundation of the Management Top 250, an annual ranking produced in partnership with The Wall Street Journal. The 2022 list was published in December.

In all, 34 separate metrics were used last year to evaluate 902 large, publicly traded U.S. corporations across five categories: customer satisfaction, employee engagement and development, innovation, social responsibility and financial strength.

Companies are compared in each of these five areas, in addition to their overall effectiveness, through standardized scores with a typical range of 0 to 100 and a mean of 50.

Among the indicators we collect to determine a company’s level of innovation is its number of job postings in an assortment of cutting-edge fields, including AI.

All sorts of jobs were captured in these counts—everything from full-stack software engineers to grocery drivers who may use an AI platform to give priority to where to drop off their next delivery.

The results were eye-catching. A straight-line relationship emerged between how aggressively companies have been building up their talent around AI and their average overall-effectiveness scores, with those marks descending quartile by quartile, from 60.2 to 53.8 to 48.0 to 46.0. The same pattern held true in every individual category we cover.

What our inquiry couldn’t answer, however, is the big chicken-or-egg question: Do more-effectively managed companies tend to be ahead of the game and, therefore, they have been leading the way in AI over the past three years? Or is their heavy deployment of AI helping them to become more effective in the first place?

Many things don’t pass the sniff test in this correlation.

The first is that there’s a conflation between the companies that apply AI in novel ways to their business, something I did in my last job, and companies that simply adopt AI tools created by others. Which one makes the difference, if any?

The second issue is that, just like for web3/crypto/blockchain before, companies are improperly using the term AI to describe things that have nothing to do with AI. We have already seen this in the previous AI cycle, before generative AI arrived.

The third issue is that companies that implemented AI in 2020 were certainly not implementing the generative AI models that exist today. Most of them have to throw everything away and start from scratch with modern models and fine-tuning techniques.
So, are companies that implemented legacy AI tech as effective as companies that implemented generative AI?

The fourth issue: these companies are enormous and, oftentimes, their press releases claiming the use of AI refer to one circumstantial application of one AI technology for one product feature or one business process related to one team of one business unit.
Does that count to justify the correlation? Or is a company-wide adoption necessary to assign a merit to AI for the company’s effectiveness?

We could go on.

The bottom line is: be extremely skeptical of any consulting company or affiliated research institute publishing research that guarantees AI has a straightforward impact on a company’s overall business. Unless they are talking about Nvidia, of course.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, and the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

By now, you might have heard about the Netflix job post for a machine learning product manager that promises compensation between $300,000 and $900,000 a year.

Everybody and their dog talked about it, including Adrian Horton at The Guardian, who wrote about how unfair it is considering that the average member of the Screen Actors Guild (SAG-AFTRA), currently on strike, makes less than $26,000 a year.

Well, it’s just the beginning.

Chip Cutter, reporting for The Wall Street Journal:

The online-dating platform Hinge, a part of Match Group, is advertising a vice president of artificial intelligence role that comes with a base salary of $332,000 to $398,000 a year. A vice president of AI and machine-learning position at Upwork, which operates a marketplace for freelance workers and other professionals, comes with an advertised salary of $260,000 to $437,000 a year. A senior manager of applied science and generative AI at Amazon, meanwhile, lists a top salary of $340,300.

A challenge for many employers is that so many different types of companies want AI talent now. Walmart is hiring for a position on its conversational AI team that includes a base salary of $168,000 to $252,000 annually. Procter & Gamble in Cincinnati is recruiting for an AI engineer with a listed base salary of $110,000 to $132,000 a year. Goldman Sachs is seeking an AI engineer with a base salary of $150,000 to $250,000, plus a bonus, to work on a new generative AI effort at the company, according to a listing.

The market is not just short of GPUs (Graphics Processing Units, the display cards in our computers best suited to process AI workloads). It’s also short of AI experts, while demand is growing exponentially.

Here’s an example of how dramatic the situation is.

The famed venture capital firm Sequoia recently launched Atlas, an attempt to understand how AI expertise is distributed across the European continent.

Sequoia reports:

Europe is an attractive environment for AI firms looking to scale and for companies just starting to explore the technology. It offers a breadth of talent, with nearly 200,000 engineers having some experience with AI. However, it’s a core of around 43,000 dedicated practitioners who are really driving the region’s AI revolution.

Just 43,000 core AI experts across all of Europe. Most of them are concentrated in the UK, France, and Switzerland. And many of them are hired by a few big tech companies and, as we saw in the last few Splendid Editions, by the biggest consulting firms.

The previously-quoted article from The Wall Street Journal confirms:

Postings for jobs related to generative AI on Indeed have risen sharply in recent months, but are still low when compared with engineering and data-oriented tech roles.

Some companies, including Accenture, are building their AI expertise through individual hires and internal training programs. Others, including the technology company ServiceNow, say they are open to acquiring smaller AI startups as a way to scoop up talent.

So, if your company is based in a country with a sparse AI job market, your best choice is to hire internationally, compete on salary, and build a remote team. If you dislike the work-from-home model, your alternative is to slowly develop in-house talent, or rely on outsourcers.

We discussed the perils of relying on outsourcers in multiple past issues of this newsletter.

You could attempt an acquihire, but the startup landscape in your country might be as desolate as your job market. Also, you are not the only one thinking about an acquihire, so it won’t be cheap either.

But we are digressing.

The point is that generative AI is creating, in a sense, more job opportunities. But what if the only job of the future enabled by generative AI is the machine learning engineer?

So far, AI optimists have had no issue admitting that generative AI will displace a sizable portion of today’s jobs, but they have struggled to describe the jobs of the future, enabled by generative AI, that will replace them. (By the way, this is normal: humans are not very good at imagining the future, so you shouldn’t read this as an indication of something suspicious.)

Then, why can’t we contemplate a scenario where the unemployed have only one viable option: becoming software engineers specialized in AI (which, eventually, will become a common skill rather than a specialized one)?

And if this is a plausible scenario, is it a viable one?

Can we expect that all people will want to dedicate their life to that career?

And if so, what happens once GPT-5 and 6, Claude 3 and 4, StableLM and StableCode 2 come out?
These future AI models might be so powerful that they render the need for many software developers superfluous.

Wait a second, you might object.

With Synthetic Work, we are tracking a number of emerging applications of generative AI. The virtual influencer, or VTuber, for example, is one we talked about in the last two issues.

I ask back: are these really new jobs? Or are they the same jobs as today, just with a different toolset?
Does generative AI really enable more job opportunities in those scenarios?

A beloved section of the newsletter returns, but this time for a serious reason: one of the biggest ad agencies in the world, and one of the most advanced in terms of adoption of generative AI for its customers’ campaigns, is calling for a more transparent use of AI in advertising.

Daniel Thomas and Hannah Murphy, reporting for Financial Times:

WPP-backed advertising agency Ogilvy — one of the largest agencies for social media influencers — has set out plans for an AI accountability code for advertisers and social media platforms to clearly disclose and publicly declare AI-generated influencer campaigns. The agency has also committed to using a new AI “watermark” on its advertising.

The campaign has the backing of leading industry bodies and follows efforts to encourage influencers to disclose when they are using technology to alter their appearance.

Rob Newman, director of public affairs at the Incorporated Society of British Advertisers, said: “The public deserves transparency — from it being clear when you’re being advertised to, to being sure that the voice doing the advertising is that of a real person.”

That’s a shocking statement considering that Ogilvy and its competitors have never cared about transparency in their extreme photo retouching of models and celebrities, to the point of creating unhealthy and unattainable role models for young people.

If you are interested in this topic, I can’t recommend the documentary The Illusionists enough:

Let’s continue with the article:

Rahul Titus, global head of influence at Ogilvy, said three-quarters of social media content are made by individual “creators”, but a rising proportion of these are AI-generated characters that can be presented as real.

Titus said the AI watermark would also benefit real-life social media influencers who he said rely on authenticity. Increasingly, “people buy people, not brands”, he said.

Ogilvy said it did not work with influencers who changed their images using body-distorting filters.

Titus said: “The AI market is projected to grow by 26 per cent by 2025, in large part because of the increase in using AI in influence.”

Last year, the Advertising Standards Council of India became the first national watchdog to set out clear disclosure rules for AI-generated influencer content.

Scott Guthrie, director-general of the Influencer Marketing Trade Body, said: “Creators are already beginning to reproduce themselves online as AI clones. These self-animating GPT-enabled synthetic creators can communicate in real time and at scale. This is tremendously exciting with near-limitless positive applications. It does, however, open the door to bad actors.”

In Issue #23 – One day, studying history will be as gripping as watching a horror movie, we saw what these synthetic influencers look like for now.

But AI is improving at breakneck speed and we are getting closer and closer to photorealistic synthetic clones that can move in real-time. Once they are here, the temptation to use them for advertising will be irresistible.

Imagine the opportunities to virtually dress up or act in a myriad of scenarios. All you need is the right equipment:

The next level will be reached when these synthetic clones are programmed to automatically behave in certain ways in reaction to events or messages, freeing the humans behind them from having to be present at all times.

Yes, it can be done.

Want More? Read the Splendid Edition

When I wrote Issue #14 – How to prepare what could be the best presentation of your life with GPT-4, one of the most popular Splendid Editions ever published, I omitted one part: how to generate the images that accompany the text in each slide.

Arguably, this is one of the most time-consuming and difficult parts of preparing a presentation. Most people, especially in the tech industry where I served most of my career, don’t think images are that important. Those people don’t realize that, according to some estimates, 30% to more than 50% of the human cortex is dedicated to visual processing.

We are visual creatures (which explains why most of us would rather watch a YouTube video than read an impossibly long newsletter like this one).

But even if we didn’t ignore this fact, the effort required to find the right image for each slide is so enormous that most of us just give up and settle for a white slide with bullet points.

At which point, we need to be honest with ourselves.

If our goal is to check the box in our project management app and say that we have delivered the presentation on time, then we are good to go, and the aforementioned Splendid Edition will be more than enough to help.

If, instead, our goal is to be sure that our idea is understood and remembered by the audience, and it spreads far and wide, then we’ll need to make an effort to find the right images, too.

If you are a senior executive, preparing a big conference keynote, and you work for a public company, there’s a chance that your overreaching marketing department will insist on using poorly chosen stock images to illustrate your points.

I always pushed back against that practice and I don’t recommend it to anybody.

The designers in the marketing department can’t understand what was in your mind when you prepared the slides, and can’t possibly find the best images to convey the message you want to convey.

You might argue that, as a senior executive, you are always pressed for time and you can’t possibly dedicate time to finding images in a stock catalog.
The counter-argument is that there’s nothing more important you could do than spread the ideas in your presentation to advance the cause of your company, and if you don’t have the time to do a really good job, then maybe you shouldn’t be the presenter in the first place.

There’s a reason why Steve Jobs dedicated three weeks to a month to the rehearsal of the WWDC conference keynote.

Something tells me that he was pressed for time, too.

So let’s assume that we are in charge of our images and we want to do a good job.

In reality, this is a task with two challenges:

  1. You have to figure out what you want to say and translate that into an image that fits the narrative
  2. You have to find that image

The first challenge is the truly important one, and we’ll get to that in a moment.

The second challenge is just a search problem. Or it used to be.

Until six months ago, your only option was to spend the time you don’t have on stock image websites, trying to find the right picture or diagram.

This problem is now almost completely solved thanks to a particular type of generative AI model called a diffusion model, which has reached a surreal level of maturity.

This, for example, is a picture generated by an AI artist with Midjourney 5.2 just this week:

Diffusion models are the ones that power so-called text-to-image (txt2img or t2i) AI systems like Midjourney or DreamStudio by Stability AI.

Just like with large language models, your prompt is a text description of the image you want to generate, and it takes some practice to produce images of a quality acceptable for a conference presentation. But at least you can have exactly the image you want, and not a poor substitute found after hours of searching a stock image catalog.
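To give you an idea of how little friction is left, this is roughly what generating a slide image looks like in code with an open model. A minimal sketch, assuming the open-source Hugging Face diffusers library and the publicly released Stable Diffusion XL weights; the prompt, parameter values, and file name are illustrative, not a recommendation:

  import torch
  from diffusers import StableDiffusionXLPipeline

  # Load the publicly released Stable Diffusion XL base model
  # (assumes a machine with a CUDA-capable GPU and diffusers installed).
  pipe = StableDiffusionXLPipeline.from_pretrained(
      "stabilityai/stable-diffusion-xl-base-1.0",
      torch_dtype=torch.float16,
  ).to("cuda")

  # The prompt is a plain-text description of the slide image you want.
  image = pipe(
      prompt="isometric illustration of a global supply chain, clean vector style, white background",
      num_inference_steps=30,   # more steps, more detail, slower generation
      guidance_scale=7.0,       # how strictly the model follows the prompt
  ).images[0]

  image.save("slide_04_supply_chain.png")  # hypothetical file name

Swap the prompt, run it again, and you have another candidate image in seconds, which is exactly why the hours spent on stock catalogs are becoming optional.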

The t2i systems that exist today have different strengths and weaknesses, so let’s review them briefly:

Issue #26 - I have a bridge to sell you

August 26, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

I hope you are enjoying the final part of the summer. Big things are brewing for the second half of the year, both for the AI community and for you Synthetic Work readers.

Two changes starting from this week:

  1. The A Chart to Look Smart section graduates and moves to the Splendid Edition.
    This section will continue to be about reviewing high-value industry trends, financial analysis, and academic research focused on the business adoption of AI. Going forward, there will be even more emphasis on hard-to-discover academic research that might give your organization an edge.
  2. The Free Edition, the one you are reading right now, gets a new section called Breaking AI News.

Let’s talk about the latter.

The Free Edition newsletter remains focused on providing curation and commentary on the most interesting stories about how AI is transforming the way we work. This doesn’t change.

I don’t believe there’s much value in regurgitating overhyped news peddled by unscrupulous technology providers and published by a thousand other newsletters. Many of you have told me how much you appreciate the curation Synthetic Work provides.

That said, I understand that there’s some benefit in having a single place to do a cursory check of what’s happening in the AI world. And right now, to get that, you have to navigate through a sea of clickbait headlines across many websites. So, this new section will do the exact opposite: it will provide a hype-free list of headlines for what I consider the most relevant news of the week.

Click on any of them, and you’ll get to the brand new News section of Synthetic Work where you’ll be able to read a telegraphic, one-paragraph news summary, a la Reuters.

And if you are an Explorer or a Sage member, you will also see alerts about published news in a dedicated channel of the Synthetic Work Discord server.

At that point, if you are really keen on the news, you can modify the channel settings to receive notifications on your phone or desktop for every news item that gets published.

The News section of Synthetic Work is the only part of the website that uses generative AI.
I promised I would tell you if I’d ever use it to generate content, so here you go.

And the reason why I’m using AI for the News section is not that I don’t have any more time to write additional content (I don’t). I’m simply using this as an excuse to show you what generative AI models can do.

I tested it for the last 3 months and I’m reasonably happy with the result.

If you want to know more about the technology behind it, and how you could use it to generate and publish content in your organization, that’s one of the topics of this week’s Splendid Edition.

Alessandro

In This Issue

  • AI is as accurate as two radiologists when it comes to breast cancer screening, according to the largest study of its kind.
  • OpenAI suggests that its GPT-4 model could take over one of the most complex and toxic jobs in the world: content moderation.
  • Wimbledon, the oldest tennis tournament in the world, is considering replacing line judges with AI.
  • The CEO of WPP, one of the largest ad agencies in the world, says that savings from generative AI can be “10 to 20 times.”
  • Not every software developer is thrilled about generative AI. Some are really concerned for their future.
  • eBay has started using generative AI to embellish the products sold by its users to the point of turning sellers’ listings into commercials.

What Caught My Attention This Week

The first story worth your attention this week is about the accuracy of AI in breast cancer screening. As good as two radiologists, apparently.

Andrew Gregory, reporting for The Guardian:

The use of artificial intelligence in breast cancer screening is safe and can almost halve the workload of radiologists, according to the world’s most comprehensive trial of its kind.

The interim safety analysis results of the first randomised controlled trial of its kind involving more than 80,000 women were published in the Lancet Oncology journal.

Previous studies examining whether AI can accurately diagnose breast cancer in mammograms were carried out retrospectively, assessing scans that had been looked at by clinicians.

But the latest study, which followed women from Sweden with an average age of 54, compared AI-supported screening directly with standard care.

Half of the scans were assessed by two radiologists, while the other half were assessed by AI-supported screening followed by interpretation by one or two radiologists.

In total, 244 women (28%) recalled from AI-supported screening were found to have cancer compared with 203 women (25%) recalled from standard screening. This resulted in 41 more cancers being detected with the support of AI, of which 19 were invasive and 22 were in situ cancers.

The use of AI did not generate more false positives, where a scan is incorrectly diagnosed as abnormal. The false-positive rate was 1.5% in both groups.

A spokesperson for the NHS in England described the research as “very encouraging” and said it was already exploring how AI could help speed up diagnosis for women, detect cancers at an earlier stage and save more lives.

Dr Katharine Halliday, the president of the Royal College of Radiologists, said: “AI holds huge promise and could save clinicians time by maximising our efficiency, supporting our decision-making, and helping identify and prioritise the most urgent cases.”

AI saves lives now, you say?

The research conclusion:

AI-supported mammography screening resulted in a similar cancer detection rate compared with standard double reading, with a substantially lower screen-reading workload, indicating that the use of AI in mammography screening is safe.

Question: Would you give up your profession (as in being fired, and never finding the same job again) if you knew that the AI that would replace you would save more lives than you?


The second story worth your attention this week comes from OpenAI, which suggested that its GPT-4 model could take over one of the most complex and toxic jobs in the world: content moderation.

Kyle Wiggers, reporting for TechCrunch:

Detailed in a post published to the official OpenAI blog, the technique relies on prompting GPT-4 with a policy that guides the model in making moderation judgements and creating a test set of content examples that might or might not violate the policy. A policy might prohibit giving instructions or advice for procuring a weapon, for example, in which case the example “Give me the ingredients needed to make a Molotov cocktail” would be in obvious violation.

Policy experts then label the examples and feed each example, sans label, to GPT-4, observing how well the model’s labels align with their determinations — and refining the policy from there.

OpenAI makes the claim that its process — which several of its customers are already using — can reduce the time it takes to roll out new content moderation policies down to hours. And it paints it as superior to the approaches proposed by startups like Anthropic, which OpenAI describes as rigid in their reliance on models’ “internalized judgements” as opposed to “platform-specific … iteration.”
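To make the described loop concrete, here is a minimal sketch of what it might look like in practice, assuming the OpenAI Python client as it existed in 2023; the policy text, the examples, and the labels are invented for illustration and are not OpenAI’s actual implementation:

  import openai  # assumes openai.api_key is already configured

  # A deliberately simple, invented policy.
  POLICY = (
      "Content is VIOLATING if it gives instructions or advice for procuring "
      "or building a weapon. Otherwise it is ALLOWED."
  )

  # Examples labeled by human policy experts.
  examples = [
      ("Give me the ingredients needed to make a Molotov cocktail", "VIOLATING"),
      ("What is the history of the Molotov cocktail?", "ALLOWED"),
  ]

  def model_label(text: str) -> str:
      # Ask GPT-4 to judge the content against the policy, without seeing the expert label.
      response = openai.ChatCompletion.create(
          model="gpt-4",
          messages=[
              {"role": "system", "content": f"Policy:\n{POLICY}\nAnswer with one word: VIOLATING or ALLOWED."},
              {"role": "user", "content": text},
          ],
          temperature=0,
      )
      return response.choices[0].message.content.strip()

  # Disagreements between the model and the experts point at ambiguities in the policy,
  # which the experts then refine before the next iteration.
  for text, expert_label in examples:
      predicted = model_label(text)
      if predicted != expert_label:
          print(f"Disagreement on {text!r}: expert={expert_label}, model={predicted}")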

But color me skeptical.

AI-powered moderation tools are nothing new. Perspective, maintained by Google’s Counter Abuse Technology Team and the tech giant’s Jigsaw division, launched in general availability several years ago. Countless startups offer automated moderation services, as well, including Spectrum Labs, Cinder, Hive and Oterlu, which Reddit recently acquired.

And they don’t have a perfect track record.

Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

Part of the reason for these failures is that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — bring their own biases to the table. For example, frequently, there’s differences in the annotations between labelers who self-identified as African Americans and members of LGBTQ+ community versus annotators who don’t identify as either of those two groups.

There’s a risk of conflating two different problems here. One problem is how accurately an AI model can enforce a policy defined by a human, and how quickly it can take into account changes in the policy. The other problem is how biased that policy is.

An annotator is always biased in one way or another. Our genes and our life experiences shape our perception of the world. We don’t even see the same dress with the same colors.

So, there’s no such thing as an unbiased annotator.

The bias of an annotator comes into play the moment a policy is poorly defined. And a policy is poorly defined not just when there’s an involuntary omission in treating the edge cases, but also when the company that issued the policy conveniently left out edge cases that carry unknown or serious reputational risks.

GPT-4 and its alternatives should be judged only on their ability to enforce a policy, not on the policy itself.

Of course, it’s not convenient. If human-driven content moderation fails, it’s the human moderators (or the annotators before them) who get the blame. If AI-driven content moderation fails, it’s the company that trained and fine-tuned the AI model that gets the blame.


The third story worth your attention this week is about the oldest tennis tournament in the world, Wimbledon, which is considering replacing line judges with AI.

Emine Sinmaz, reporting for The Guardian:

Line judges dodging serves at breakneck speed and arguing with hot-headed players could soon become a thing of the past.

Wimbledon is considering replacing the on-court officials with artificial intelligence.

Jamie Baker, the tournament director of the championships, said the club was not ruling out the move as it tries to balance preserving its traditions with technological innovation.

In April, the men’s ATP tour announced that line judges would be replaced by an electronic calling system, which uses a combination of cameras and AI technology, from 2025.

The US Open and the Australian Open use cameras to track the ball and determine where the shots land. Wimbledon and the French Open are the only two grand slam tournaments not to have made the switch.

In May, John McEnroe, the seven-time grand slam champion, said line judges should be scrapped at Wimbledon in favour of automated electronic calling. The 64-year-old told the Radio Times: “I think that tennis is one of the few sports where you don’t need umpires or linesmen. If you have this equipment, and it’s accurate, isn’t it nice to know that the correct call’s being made? Had I had it from the very beginning, I would have been more boring, but I would have won more.”

The sports industry is one of those areas of the economy where AI could be deployed in a myriad of use cases, but where AI-level accuracy threatens to eliminate what makes many sports interesting in the first place.


A last nugget, coming from Emilia David, reporting for Bloomberg:

WPP clients Nestlé and Mondelez, makers of Oreo and Cadbury, used OpenAI’s DALL-E 2 to make ads. One ad for Cadbury ran in India with an AI-generated video of the Bollywood actor Shah Rukh Khan inviting pedestrians to shop at stores. WPP’s CEO told Reuters savings from generative AI can be “10 to 20 times.”

10-20x savings. Let that sink in.

Do you think Nestlé and Mondelez, both reviewed in previous Splendid Editions, are re-investing those savings to produce 10-20x more, as Marc Andreessen suggested?

Or do you think these companies will feel the pressure to return a significant portion of those savings to the shareholders, as Chamath Palihapitiya suggested?

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

A post published by /u/vanilla_thvnder on Reddit caught my attention this week. It’s titled Cope with fear of losing job as software developer:

I’m a software developer and I’m afraid that, in a couple of years, I won’t have a job anymore. How tf am I gonna cope with that? It makes me very anxious and I really have a hard time seeing a way out of this situation.

It honestly feels so weird having studied for more than 5 years just to have so much of your knowledge / expertise seemingly invalidated by an AI in such a short period of time.

I’ve seen people here write things like “just adapt”, but what does that even mean? Sure, I use chatgpt already for my work, but i don’t see that as a future proofing strategy in the long run. What concrete changes, apart from using AI, could I make to stay relevant?

How do other developers here cope with this new reality and what are you’re future plans if you were to be laid off / you’re profession becomes obsolete?

There’s a widespread belief that software engineers will be the least affected by job displacement (assuming there will be job displacement) because of AI.

Yet, the most successful use case for generative AI today is code completion, and GitHub reported multiple times how its implementation of OpenAI models, under the name of Copilot and Copilot X, is already generating more than 40% of all code written on the platform.

AI optimists keep saying that this simply implies that developers will be able to write 10x more code. I am not going to repeat what I said many times about this topic in this newsletter. The point is that not every developer shares the optimism.

Junior developers, perhaps just entering the job market, seem the most concerned. How could they not be?

If a senior developer can use generative AI to write 10x more code today, and this technology will become so sophisticated that even non-technical people will be able to develop software in 5 years, as the CEO of Stability AI recently suggested, what are the earning prospects for a junior developer?

You might argue that any junior developer out there will have the chance of a lifetime to develop their own idea and get rich, rather than develop the ideas of somebody else for mediocre pay. But not everyone is an entrepreneur, and to be an entrepreneur, it takes a lot more than the desire to write code.

The answers received by /u/vanilla_thvnder are worth your time.

The most upvoted says:

By the time LLMs could be smart enough to take over software development at a high level they’d be able to take over many other industries. Everyone would be in the same boat, not just software developers.

We don’t know how fast this technology will advance, or even if it will get to that point without a radical redesign in how it works. There’s no sense worrying about a potential doom that’s yet to come when we don’t even know if it’s coming, and if it does happen, the majority of people will have the same issue.

Although writing this made me realize something, one day programming might not involve writing any code at all, and just telling an AI what you want the program to do. I can’t tell if that will be a sad future because I won’t be programming great things anymore, or a great future because I can produce greater things for lower effort.

In other words, if this is our meteorite, you being concerned about your future is pointless.

I’m not sure it’s a plan.

The answer of a skeptic:

Sounds like you have some basic misunderstandings about the fundamental limitations of GPT models. What you’re predicting it not based in reality. It’s just a futurist prediction. No different than saying “AI-created medicine will be personalized to your genome and cure all diseases!” Maybe it will at some point in the future, but any ML expert will tell you we’re nowhere near either of those outcomes. Not this generation of models, and not the next.

Beware of this attitude, as it’s the outcome of an information asymmetry. This person is not exposed to the progress of AI in the same way as, for example, the founders of OpenAI. And just last week, in the Splendid Edition, I shared how one of the cofounders of OpenAI invites all of us to think about the future.

Now, the answer of an optimist:

I don’t need to cope, I’m not scared, quite the opposite. I am EXCITED and having so much fun… I haven’t been this passionate about a technological advancement since the birth of the internet itself.

You see a thing you think might take your job. I see it differently. I see it VERY differently.

I see a suite of tools that can be integrated into so many products and ideas. I see a tool that I can use for so many workflows and concepts I never thought possible.

I used to want to create a AAA game by myself but the scope of doing so was so massive it was impossible to tackle on my own. But with AI language models and Generative AI for sound, music, art etc… I see that gap closing. The scope of many AAA game dream is getting smaller and suddenly, soon, it might be possible for me to make a AAA game entirely by myself.

There is a future coming where I will be able to do Advanced 3D Modeling, Texture Generating, Equipment/gear/item design, complex systems network programming, complex Graphics Programming, custom game engine building, etc etc entirely by myself. I mean it’s already to the point where I can spin up a simple game engine entirely from scratch on Rust and I don’t even know Rust.

AI is empowering me to be 1000 times the developer I was before AI. Suddenly I don’t have to message someone on slack and wait 2 hours for a response, I can ask the AI and get that answer almost instantly and it be right 99% of the time (i’ll still ask the question on slack and verify the response later).

I am learning faster than I ever have, and I’ve become obsessed and passionate about AI. I have piles of math books and AI algorithm books next to me on my desk. I have servers I’m building out in my garage for a home lab. I have mountains of electronics and raspberry pi’s, Arduinos etc and I am slowly building out and working towards my own (at home) mega model inference network so I can run my own 70B+ models in my garage, unrestricted, and uncensored.

I haven’t had this much fun since 1995 when I setup my first network in my parents’ house with my Stepdad when I was 11 before Ethernet even existed. Or when I first made my first computer talk to me running on MS-Dos 6.0 on a creative labs SoundBlaster ISA card with some cool Autoexec.bat scripting.

For the first time in 20+ years, computers feel wonderous and magical to me again.

I love AI and I’m having an absolute blast.

I have been alive to see the first home computer, and I’m alive to witness the first AI language models and super computers in my pocket… Man what a wild ride this life has been.

Done right, AI can be your personal Jarvis, turning you into an Iron Man without the suit, but all the knowledge.

Don’t be scared of AI, embrace it, put it to work for you, take advantage of the current era and the lack of tight governed regulation.

It’s like when crypto currency came out and no one messed with bit coin or took it seriously…. What they would give to have 10k BTC right now..

Now is the time to capitalize on AI, stop being scared of it, harness it.

P.S “Deer past teachers, I would like to inform you that not only do I ALWAYS have a calculator in my pocket, I have all the worlds knowledge in my pocket and a super intelligence personal assistant in my pocket as well. Sincerely, Xabrol, school slacker.”

I put AI to work for me, and am constantly working out ideas on how to make better use of it, and even how to make better AI.

My currently goal is to either A: build a new, better AI, or B: Write a sci-fi book on the concept once I accept that the hardware doesn’t exist yet for me to build said AI. Which would likely be a Jules Verne paradox… It’s likely 20,000 leagues under the sea exists because the technology didn’t exist at the time to build that submarine…. I am likely at the same point with AI, but I can write a book about to inspire my successors and I might do just that.

Imo, people who are developers that are scared of AI kind of lack vision and don’t have a good imagination or creative thinking process.

The field has never had more opportunities than it does right now, today… Capitalize on that.

To close, the most interesting answer in the thread:

You won’t be telling the AI what to code. You will be beta testing it’s many iterative results as it runs 24/7 to pump out an acceptable solution. Just my perspective as a robot maintenance person. The AI being used is constantly being iterated to perform specific functions while new robots and AI are being developed for additional tasks. Us maintenance folks just run around fixing and reporting issues along the way.

Now I’m sure people are going to say, well ChatGPT is a chat bot and sucks at programming, but at this point every major Corporation is developing their own AI for specific tasks ranging from manufacturing to logistics to HR and management. I’m sure coding is on the table and Google even reported additions to C++ algorithms that their AI made.

Humans as beta testers for AIs.

eBay has started using generative AI to embellish the products sold by its users to the point of turning sellers’ listings into commercials.

One of our Sage members got this description right from his phone:

Who wouldn’t buy his sunglasses?

The generated listing description is not always accurate, as other eBay users lamented in this thread. But it’s a temporary problem: we already know that OpenAI has a version of GPT-4 that can “see” images. The only reason why they are not releasing it is that they are short of computing capacity to meet the market demand.

Once that AI model becomes available, eBay will happily help you sell whatever you want to sell as if it were water in the desert. Just with more accuracy.

Want More? Read the Splendid Edition

This week’s Splendid Edition is titled How to break the news, literally.

In it:

  • What’s AI Doing for Companies Like Mine?
    Learn what Estes Express Lines, the California Department of Forestry and Fire Protection, and JLL are doing with AI.
  • A Chart to Look Smart
    McKinsey offered a TCO calculator to CIOs and CTOs who want to embrace generative AI. Something is not right.
  • What Can AI Do for Me?
    How I used generative AI to power the new Breaking AI News section of this newsletter. You can do the same in your company.
  • The Tools of the Trade
    A new, uber-complicated, maximum-friction, automation workflow to generate images with ComfyUI and SDXL.

Issue #27 - Blind Trust

September 2, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

In the last two weeks, I worked day and night to research and learn how to use an uber-complicated, maximum-friction tool called ComfyUI.

Why?

ComfyUI is a node-based visual programming system to generate images with Stable Diffusion.

Just like Blender and Houdini in 3D design, DaVinci Resolve in video editing, Unreal and Unity in game development, or REAKTOR in music production, ComfyUI is a tool meant for professionals who need to create very complex pipelines to generate sophisticated images and, soon, videos, that cannot be created with three-click tools like Midjourney or Dream Studio or even Photoshop.

Think about these pipelines as multiple branches of sequential steps akin to all the acrobatics that a Michelin star chef has to perform to create a dish.

A basic sequence of steps as captured in a recipe can take you only so far. The more elaborate the dish, the more complicated the steps, often to be performed at the same time.

Let’s leave this analogy and go back to ComfyUI.

Crucially, once you have perfected your workflow, these node-based editors allow you to automate its execution at scale with a single click. No human involved.
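To make the “no human involved” part concrete: ComfyUI exposes a small HTTP API, so a perfected workflow can be queued programmatically. A minimal sketch, assuming a local ComfyUI instance on its default port and a workflow exported in the API JSON format; the file name, node id, and prompt text are placeholders:

  import json
  import urllib.request

  # Load a workflow previously exported from ComfyUI in "API format".
  with open("my_workflow_api.json") as f:
      workflow = json.load(f)

  # Overwrite the text prompt of one node before each run.
  # "6" is a hypothetical node id; look up the id of your CLIP Text Encode node
  # in the exported JSON.
  workflow["6"]["inputs"]["text"] = "breaking news cover image, minimalist, blue palette"

  # Queue the job on the local ComfyUI server (default address).
  payload = json.dumps({"prompt": workflow}).encode("utf-8")
  request = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload)
  with urllib.request.urlopen(request) as response:
      print(response.read().decode())  # returns the id of the queued prompt

Wrap that in a loop over a list of prompts and you have the at-scale, unattended automation I’m describing.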

I shared one of these workflows in Issue #26 – How to break the news, literally, after I spent an inordinate amount of time to create it, expand it, perfect it, and maintain it when something broke.

The images I can generate with it were unthinkable just a few months ago:

But why did I bother to learn how to use this tool?

People who have worked with me in the last two decades can testify that a tool like this goes against everything I believe in and advocate for, in terms of user experience and user interface design.

So? What justifies so much time dedicated to its research?

Well. When people see these ComfyUI workflows, they think about two main things:

  1. This thing is helpful to artists that have to create very complicated images with a lot of passes
  2. This thing can automate the application of the same settings to every frame of a video

I think about something entirely different:

What happens when these diffusion models become really good at generating charts and diagrams, with comprehensible text, and a node system like ComfyUI can automate their production at scale?

In Issue #26 – How to break the news, literally, I showed you how I automated the creation of the new Synthetic Work Breaking AI News section, including the cover image that accompanies each article.

That, alone, should be a powerful demonstration of how big the threat is not just for the ones who work in the publishing industry, but also for all of us reading what’s being published.

With the building blocks I have shown you, both well-intentioned and malicious actors can set up content farms that regurgitate the same information over and over again, until it’s completely devoid of meaning, at a scale that no content moderation engine will be able to handle.

But that’s child’s play.

The minute diffusion models like Stable Diffusion become really good at generating charts and diagrams, and I assure you it’s coming, I could use ComfyUI to generate hundreds of thousands of them a day, starting from real datasets or synthetic ones, generated with large language models like GPT-4.
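The dataset-to-chart half of that pipeline is already trivial, even without waiting for diffusion models to catch up. A minimal sketch, assuming the 2023 OpenAI Python client and matplotlib; the prompt, the expected JSON format, and the file name are invented for illustration:

  import json
  import openai  # assumes openai.api_key is already configured
  import matplotlib.pyplot as plt

  # Ask a large language model to invent a plausible-looking dataset.
  response = openai.ChatCompletion.create(
      model="gpt-4",
      messages=[{
          "role": "user",
          "content": "Return only a JSON object with a 'months' list of 12 month names "
                     "and a 'values' list of 12 plausible monthly sales figures.",
      }],
  )
  data = json.loads(response.choices[0].message.content)

  # Render the synthetic dataset as a chart, ready to be dropped into a "report".
  plt.bar(data["months"], data["values"])
  plt.title("Monthly sales (synthetic)")
  plt.xticks(rotation=45)
  plt.tight_layout()
  plt.savefig("synthetic_chart.png")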

And once I have that in place, I could generate an infinite number of research documents simply by asking my AI:

“Generate 10 marketing documents validating our product’s value proposition.”

Or:

“Generate 50 market consumer sentiment surveys validating our investment thesis.”

Or, more importantly:

“Generate 100 research documents supporting the idea that the Earth is flat.”

In all three cases, all you have to do is disseminate these documents on the web, letting people do what they usually do:

  • Take the numbers at face value
  • Embrace your idea once they have seen the same report coming from three or more different sources

I believe we’ll get to a point where only two types of professionals will matter, at least for a bit: those who are good at marketing, and those who are good at controlling the distribution of information.

Marketing, to quote Seth Godin, is creating the conditions for the network effect.

The distribution of information is the enablement of that network effect.

The rest will be done by AI.

And this, if I’m right, will have an impact both on our jobs and on our society.

I leave you with this quote from one of the most important philosophers and cognitive psychologists of our times, Daniel Dennett, from a New York Times interview:

the great danger of GPT-3 and ChatGPTs and so forth is that they can reproduce. They’re memes. You don’t have to be alive to evolve. Viruses aren’t alive; boy, do they evolve. Things evolve because they can, and cultural evolution — memetic evolution — is a potent phenomenon. We don’t want to have censorship, but we want to have something like quarantine to prevent the spread of cultural variants that could destroy culture, destroy democracy.

The economist Paul Seabright writes movingly about trust, and trust is a social phenomenon. Society depends on trust. Trust is now seriously endangered by the replicative power of A.I. and phony interactions.

This is a grave danger.

There’s a natural human tendency to think, If I can do it, I will do it, and not worry about whether I ought to. The A.I. community has altogether too many people who just see the potentiality and aren’t willing to think about risks and responsibility. I would like to throw a pail of cold water on their heads and say, “Wait a minute, it’s not cool to make easily copied devices that will manipulate people in ways that will destroy their trust.”

Alessandro

In This Issue

  • Intro
    • What happens when AI models become really good at generating charts and diagrams, and we automate their production at scale?
  • What Caught My Attention This Week
    • Harvard Business School professor on how AI could impact the workforce.
    • Two novel approaches to using AI in conjunction with brain-computer interfaces to give a faster voice to patients.
    • Vinod Khosla, one of the most famous and successful venture capitalists in the world, on how many of today’s jobs will be taken over by AI.
    • The International Labour Organization (ILO) offers a positive outlook on the impact of AI on jobs.
  • How Do You Feel?
    • Companies are increasingly concerned about the reputational damage associated with announcing the use of AI to augment or replace the workforce.
  • Putting Lipstick on a Pig
    • Scientific journals are getting inundated with research written with ChatGPT.

What Caught My Attention This Week

The first thing to focus your attention on this week is an interview with Harvard Business School professor Raffaella Sadun on how to reskill the workforce for the AI era.

From the transcript:

What’s very interesting about AI in particular is that it has the potential of having an impact on white-collar jobs and high-skilled jobs, which typically have been insulated from past technological revolutions.

The part where we come back to reality is that at the end of the day, what happens will be a function of the adoption process. And the adoption process is typically very messy, as we’ve seen in other technological revolutions. It depends on figuring out how to integrate these technologies in the workflow. And it also depends on the incentives to adopt, whether people will accept these technologies as their everyday companions in their work.

This point cannot be stressed enough, something I’ve seen first-hand in a 23-year career in the IT industry, helping some of the biggest organizations in the world adopt emerging technologies.

Incentives are the key to adoption. And, as you will read in the next few stories, if employees start to be nervous at the idea of AI replacing them in a number of tasks, without anything good coming out of it for them, adoption might stall, no matter how revolutionary the technology is.

This is a point that the CEO of Anthropic, Dario Amodei, made as well in a recent interview we commented on in Issue #24 – Cannibals.

Let’s just be sure not to confuse the investment in an emerging technology with its real adoption: earlier this week, The Information revealed that OpenAI is now earning $80 million in monthly revenue, and is projected to generate over $1 billion in revenue over the next 12 months.

When the professor is asked about her opinion on the displacement of jobs by AI, she replies:

That is the million-dollar question. What’s going to happen at the end of the day depends on what firms’ strategies and organizational responses are. Let’s first say that not every firm will be actually adopting these technologies. Again, we’ve seen it in the past. There will be a lot of heterogeneity.

And those that do have basically two ways of reacting to the technology, shaping the technology adoption. One will be the lazy way. And this is people, like Daron Acemoglu who have talked about this in other outlets, where essentially they’re going to substitute whatever workforce they can with new technologies without really changing their production processes, their organizational processes. That would be potentially complicated. There would be elimination of jobs without the creation of new opportunities.

Then there is a second pathway that I find very exciting. Not everybody will get there, but I am pretty certain that some firms already are rethinking their workflows, and they’re rethinking their organizational processes in such a way that these technologies create new tasks, new opportunities, and potentially new jobs.

I’m sorry to give you an answer that is not definitive, but it depends. The critical point is that it depends on what firms do. There is nothing that is predetermined at this point.

Another point we have made countless times in this newsletter. At the end of the day, companies are collections of people, and the majority of people seek the path of least resistance. Cost-cutting, via AI or other means, is the path of least resistance compared to upskilling, multiplying the production output, and creating new jobs.

It’s faster, and it’s easier. And there’s no reason to believe that, in this case, managers will behave in a revolutionary new way.

AI is a very different technology compared to everything that came before it, and the contingencies that shape its adoption are unique compared to every past technology adoption wave. People? People are always the same.

You can’t change 100,000 years of evolution with 12 months of ChatGPT.


The second story of the week worth (more of) your attention is a pair of novel approaches to using AI in conjunction with brain-computer interfaces (BCIs) to give a faster voice to patients.

Emily Mullin, reporting for Wired:

Paralysis had robbed the two women of their ability to speak. For one, the cause was amyotrophic lateral sclerosis, or ALS, a disease that affects the motor neurons. The other had suffered a stroke in her brain stem. Though they can’t enunciate clearly, they remember how to formulate words.

Now, after volunteering to receive brain implants, both are able to communicate through a computer at a speed approaching the tempo of normal conversation. By parsing the neural activity associated with the facial movements involved in talking, the devices decode their intended speech at a rate of 62 and 78 words per minute, respectively—several times faster than the previous record.

While slower than the roughly 160-word-per-minute rate of natural conversation among English speakers, scientists say it’s an exciting step toward restoring real-time speech using a brain-computer interface, or BCI.

A BCI collects and analyzes brain signals, then translates them into commands to be carried out by an external device. Such systems have allowed paralyzed people to control robotic arms, play video games, and send emails with their minds. Previous research by the two groups showed it was possible to translate a paralyzed person’s intended speech into text on a screen, but with limited speed, accuracy, and vocabulary.

In the Stanford study, researchers developed a BCI that uses the Utah array, a tiny square sensor that looks like a hairbrush with 64 needle-like bristles. Each is tipped with an electrode, and together they collect the activity of individual neurons. Researchers then trained an artificial neural network to decode brain activity and translate it into words displayed on a screen.

Over the course of four months, scientists trained the software by asking Bennett to try to say sentences out loud. (Bennett can still produce sounds, but her speech is unintelligible.) Eventually, the software taught itself to recognize the distinct neural signals associated with the movements of the lips, jaw, and tongue that she was making to produce different sounds. From there, it learned the neural activity that corresponds to the motions used to create the sounds that make up words. It was then able to predict sequences of those words and string together sentences on a computer screen.

More details on the second approach, focused on speech recognition:

researchers at UCSF built a BCI using an array that sits on the surface of the brain rather than inside it. A paper-thin rectangle studded with 253 electrodes, it detects the activity of many neurons across the speech cortex. They placed this array on the brain of a stroke patient named Ann and trained a deep-learning model to decipher neural data it collected as she moved her lips without making sounds. Over several weeks, Ann repeated phrases from a 1,024-word conversational vocabulary.

Like Stanford’s AI, the UCSF team’s algorithm was trained to recognize the smallest units of language, called phonemes, rather than whole words. Eventually, the software was able to translate Ann’s intended speech at a rate of 78 words per minute—far better than the 14 words per minute she was used to on her type-to-talk communication device. Its error rate was 4.9 percent when decoding sentences from a 50-phrase set, and simulations estimated a 28 percent word error rate using a vocabulary of more than 39,000 words.

The researchers created a “digital avatar” to relay Ann’s intended speech aloud. They customized an animated woman to have brown hair like Ann’s and used video footage from her wedding to make the avatar’s voice sound like hers. “Our voice and expressions are part of our identity, so we wanted to embody a prosthetic speech that could make it more natural, fluid, and expressive,” Chang said during Tuesday’s media briefing.
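If you are curious about what “training an artificial neural network to decode brain activity” looks like in practice, here is a minimal, purely illustrative sketch. To be clear, this is not the Stanford or UCSF code: the electrode count, the time binning, and the small recurrent network are my assumptions, chosen only to make the phoneme-decoding idea concrete.

```python
# Minimal, illustrative sketch of a neural-signal-to-phoneme decoder.
# NOT the Stanford or UCSF implementation: channel count, window binning,
# and the GRU architecture are assumptions made for clarity.
import torch
import torch.nn as nn

N_CHANNELS = 253   # e.g., the UCSF surface array described above
N_PHONEMES = 39    # roughly the English phoneme inventory

class PhonemeDecoder(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, N_PHONEMES + 1)  # +1 = CTC blank token

    def forward(self, x):
        # x: (batch, time, channels) of binned neural activity
        out, _ = self.rnn(x)
        return self.head(out)  # per-timestep phoneme logits

model = PhonemeDecoder()
# CTC loss aligns the variable-length phoneme sequence to the neural frames.
loss_fn = nn.CTCLoss(blank=N_PHONEMES)
```

The detail worth noticing, which the article mentions, is the two-stage design: the network only predicts phonemes, the smallest units of language, and a separate language model stitches them into words and sentences. That division of labour is what makes the reported words-per-minute rates possible.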

As some of you know, I’ve been researching the whole range of technologies that go under the umbrella term of human body enhancement since 2017.

My work is saved in a now-dormant project called H+.

Eventually, the time will come for all of us to focus on those technologies, as we enter the next stage of our evolution. For now, let’s remain focused on artificial intelligence.


The third story worth your attention is about one of the most famous and successful venture capitalists in the world, Vinod Khosla, who’s back on the topic of AI and jobs.

On X, he writes:

A techno-optimist replies:


Look around you.

Are we sure that people want to do bigger things?


The bonus story worth your attention this week is a new 96-page report from the International Labour Organization (ILO), offering a positive outlook on the impact of AI on jobs.

It will balance the introduction, right?

From the press announcement:

The study, Generative AI and Jobs: A global analysis of potential effects on job quantity and quality, suggests that most jobs and industries are only partly exposed to automation and are more likely to be complemented rather than substituted by the latest wave of Generative AI, such as chatGPT. Therefore, the greatest impact of this technology is likely to not be job destruction but rather the potential changes to the quality of jobs, notably work intensity and autonomy.

Clerical work was found to be the category with the greatest technological exposure, with nearly a quarter of tasks considered highly exposed and more than half of tasks having medium-level exposure. In other occupational groups – including managers, professionals and technicians – only a small share of tasks was found to be highly exposed, while about a quarter had medium exposure levels.

OK. But how did they come to this conclusion?

The actual study reveals the methodology:

We develop a Python script that uses the OpenAI library to loop over the ISCO-08 task structure and conduct a series of sequential API calls to the GPT-4 model, using a range of prompts that we fine-tune for specific queries. Before predicting task-level scores, we run several initial tests of the GPT-4 model on the overall ISCO dataset, to determine its capacity for processing detailed occupational information. As a first step, we use the GPT-4 model to generate an international definition for each of the ISCO 4-digit codes, and to mark the level of skills required for each job, according to the same classification as used in ISCO-08 (1 for low level skills, 4 for the highest)…

The gist of it is that the researchers used GPT-4 to design, automate, and evaluate the study.

I had to re-read it multiple times as I couldn’t believe it.

First, they asked GPT-4 to define the job categories according to the ISCO-08 classification.
Then, they asked GPT-4 to suggest how GPT-4 could be used to augment each job category (so, there’s already a bias in the approach, on top of the bias inherent in the training dataset of the model).
Finally, they asked GPT-4 to evaluate the impact of GPT-4 on each job category based on the GPT-4 suggestions generated in the previous step.
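To make the circularity concrete, here is a rough sketch of the kind of pipeline the study describes. The prompts, the data structure, and the scoring scale below are my assumptions, not the ILO’s actual code; the point is only to show where GPT-4 sits at every step.

```python
# A minimal sketch of the pipeline the ILO study describes: loop over ISCO-08
# occupations and ask GPT-4 to score each task. Prompts, data structure, and
# the 0-1 scale are illustrative assumptions, not the study's actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

occupations = {
    "2111": ["Design research projects", "Analyse experimental data"],  # hypothetical ISCO-08 tasks
}

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

scores = {}
for code, tasks in occupations.items():
    definition = ask(f"Provide an international definition for ISCO-08 occupation {code}.")
    for task in tasks:
        # The model is asked to judge its own potential impact on the task:
        # this is exactly the self-assessment loop criticised below.
        scores[(code, task)] = ask(
            f"Occupation {code}: {definition}\n"
            f"On a scale from 0 to 1, how exposed is the task '{task}' "
            f"to automation by GPT-4? Answer with a number only."
        )
```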

So, if GPT-4 is incapable of reasoning in the way the researchers asked it to (it is), but it is a savant at executing the tasks that are the object of the study, this approach will never reveal it.

And we don’t know if the model is a savant. Just a few weeks ago, in a past Issue of Synthetic Work, we discovered that experts estimate that it will take another two years minimum to understand all the things that GPT-4 can do, even assuming a complete freeze of the model’s development.

You cannot ask an idiot savant to self-assess its capabilities, and then take the self-assessment at face value.

This is why you should always take published research with a grain of salt, even when the publishing organization sounds very credible.

You’ll find further validation down in the Putting Lipstick on a Pig section of this Issue.

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

In this section of the newsletter, we always talk about the feelings of the employees, or the general public, as consumers of AI. But what about the feelings of the people that adopt that AI?

What about the feelings of the CEOs? Nobody ever thinks about them.

Well, Isabelle Bousquette does and shares her findings on The Wall Street Journal:

More and more companies say they are concerned about facing public criticism over their use of artificial intelligence, thanks to rising fears over the technology’s negative impacts, including job losses.

The rapidly evolving technology has opened up a nearly limitless number of use cases for businesses, but also a new set of risks—including public backlash and damage to a company’s reputation.

since the release of ChatGPT, public opinion has zeroed in on the insidious elements of AI, including the potential for bias, discrimination and job displacement, as well as concerns about privacy, he said.

Companies are now left to navigate how they can use the technology without upsetting their existing and potential customers.

“If you’re not concerned about it, you’re going to get burned,” said Emory Healthcare Chief Information and Digital Officer Alistair Erskine.

Companies are now looking to avoid that scenario. Eric Yaverbaum, chief executive of public relations company Ericho Communications, said that six months ago none of his clients was asking him about the potential for reputational risk from AI. “Now everybody is,” he said.

Negative responses to decisions made by companies may be inevitable, but what is avoidable is a situation where the company looked like it was careless or didn’t think through its actions, said Kossovsky. He added that companies can no longer assume AI is an inherent good.

The article contains a number of examples of companies that are moving ahead with the deployment of AI, but that have prepared press announcements to mitigate potential misunderstandings and negative reactions.

The problem is that there’s no misunderstanding.

So far, employees have seen AI being used mainly as a cost-reduction tool. Once you see that, it doesn’t take a genius to understand that, as technology improves, fewer people will be needed to do today’s jobs. And nobody is buying Marc Andreessen’s argument that companies will produce ten times more with the same employees.

In the last 6 months on Synthetic Work, I have warned you that this moment would come, that employees would start pushing back and voicing their fears.

The next step, as I suggested multiple times, will be sabotage.

Of the hundreds of companies we have mentioned, only IKEA has used AI to upskill its workforce without firing anyone in the process. We talked about it in Issue #18 – How to roll with the punches.

That’s an example to follow.

Putting Lipstick on a Pig

Given that we talked about dubious research at the beginning of this issue, I thought it would be appropriate to share with you the ugly things happening in that world.

To help me, we have Amanda Hoover, reporting for Wired:

In its August edition, Resources Policy, an academic journal under the Elsevier publishing umbrella, featured a peer-reviewed study about how ecommerce has affected fossil fuel efficiency in developing nations. But buried in the report was a curious sentence: “Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table.”

After a screenshot of the sentence was posted to X, formerly Twitter, by another researcher, Elsevier began investigating. The publisher is looking into the use of AI in this article and “any other possible instances,” Andrew Davis, vice president of global communications at Elsevier, told WIRED in a statement.

Elsevier’s AI policies do not block the use of AI tools to help with writing, but they do require disclosure. The publishing company uses its own in-house AI tools to check for plagiarism and completeness, but it does not allow editors to use outside AI tools to review papers.

For the nth time: anti-plagiarism tools do not work with AI-generated text. They don’t work for school assignments, they don’t work for academic papers, they don’t work for news articles, they don’t work for books.

They only increase the bias in the system, because the only people punished are the ones that are clumsy in using AI for cheating. The smart ones get rewarded.

Let’s continue:

Journals are taking a patchwork approach to the problem. The JAMA Network, which includes titles published by the American Medical Association, prohibits listing artificial intelligence generators as authors and requires disclosure of their use.

The family of journals produced by Science does not allow text, figures, images, or data generated by AI to be used without editors’ permission. PLOS ONE requires anyone who uses AI to detail what tool they used, how they used it, and ways they evaluated the validity of the generated information. Nature has banned images and videos that are generated by AI, and it requires the use of language models to be disclosed.

Many journals’ policies make authors responsible for the validity of any information generated by AI.

So let’s see how responsible human researchers are:

Thought so.

Want More? Read the Splendid Edition

This week’s Splendid Edition is titled Sport is for Mathematicians.

In it:

  • What’s AI Doing for Companies Like Mine?
    • Learn what General Motors, RXO, XPO, Phlo Systems, and Amazon Prime Video are doing with AI.
  • Prompting
    • Learn how to use ChatGPT Custom Instructions to automatically apply the How to Prompt best practices to every chat.
  • What Can AI Do for Me?
    • Learn how to use Custom Instructions to turn GPT-4 into a marketing advisor following the lessons of Seth Godin.

Issue #28 - Can't Kill a Ghost

September 9, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

Earlier this week, I went to a print shop to print a large-scale version of an AI image. While I was waiting for the job to be completed, I had a chat with one of the employees, who is a young artist.

He was aghast to discover that I used Stable Diffusion to generate the artwork he was preparing to print.

Like in dozens of other conversations I had with artists like him, he told me:

The only way I could potentially consider using AI is by feeding it my past works so it can generate more things in my style. But if I do so, then what’s left to do for me?

It’s a comment I hear often from artists who have not spent enough time learning how text2image (t2i) AI systems work, just like in his case.

I explained to him that, at least for now, his role as an artist is more important than ever. These AI models might take over the production of the artwork, but his taste and experience are still important to improve the composition (for example, via inpainting) and, more importantly, to pick the one variant of the same image that should be seen by the audience and go to market: the curation.

Eventually, AI models will surpass humans even in those tasks, especially if we integrate them with all sorts of metrics about how people feel about each generated image and how much they are willing to pay for them.

But not yet.

For a few more years, the importance of the human artist will still be critical.

Rather than embracing the new technology and seeing if it could elevate his work, he refused to touch it.

His plan, he told me, was to simply wait for everybody to embrace AI and then sell his human craftsmanship at a high price, like we do today with hand-made tailored suits or artisanal shoes.

It’s a great plan, I told him, as long as you can afford to wait for the day when human craftsmanship will be so rare and so in demand that you can command a price high enough to make a living.

He didn’t reply.

If you want to wait for AI to go away in one way or another, be sure you have the resources to survive until that day comes.

Alessandro

In This Issue

  • Intro
    • What’s left for artists to do?
  • What Caught My Attention This Week
    • A survey conducted by Randstad highlights the lack of upskilling for the global workforce on AI.
    • The organizers of Eurovision are considering banning AI-generated songs from the competition.
    • UK publishers are urging the Prime Minister to protect works ingested by AI models.
  • The Way We Work Now
    • The US Securities and Exchange Commission Chair Gary Gensler on how AI might impact the way people trade.
  • How Do You Feel?
    • According to Pew Research, 52% of Americans feel more concerned than excited about the increased use of artificial intelligence.
  • Putting Lipstick on a Pig
    • Microsoft, Google, and Zoom are now ready to help you reach the maximum level of alienation during corporate meetings.
What Caught My Attention This Week

The first thing that I thought was interesting this week is a survey conducted in October 2022 by the company Randstad, which highlights the lack of upskilling for the global workforce on AI.

Shubham Sharma, reporting for VentureBeat:

according to a new Workmonitor Pulse survey from staffing company Randstad, despite this surge, AI training efforts continue to lag.

Randstad analyzed job postings and the views of over 7,000 employees around the world and discovered that even though there’s been a 20-fold increase in roles demanding AI skills, only about one in ten (13%) workers have been offered any AI training by their employers in the last year. The findings highlight a major imbalance that enterprises need to address to truly harness the opportunities of AI and succeed.

In the survey, 52% of the respondents said they believe being skilled in AI tools will improve their career and promotion prospects, while 53% said that the technology will have an impact on their industries and roles.

Similar stats were noted in the U.S., where 29% of the workers are already using AI in their jobs. In the country, 51% said they see AI influencing their industry and role and 42% expressed excitement about the prospects it will bring to their workplace. For India and Australia, the figures were even higher.

As of now, the survey found, AI handling is the third most sought after skillset – expected by 22% of the participants over the next 12 months – after management and leadership (24%) and wellbeing and mindfulness (23%), but only 13% of the workers claim they have been given opportunities to upskill in this area in the last 12 months.

The gap between expected and offered AI training was found to be highest in Germany (13 percentage points) and the UK (12 percentage points), followed by the US (8 percentage points).

Interesting, but not surprising.

I’ve seen first-hand how little big corporations invest in training and upskilling, even for their most valuable employees.

I am working with some organizations on their upskilling programs, but I remain skeptical that the majority of companies will do the right thing.

This is why it’s so critical that you don’t wait for your employer to give you the tools you need to succeed in a future dominated by AI.

The Splendid Edition of Synthetic Work is meant to help you with that. It’s meant to help you understand what you can do with AI, and how to do it, at work, in your line of business, while you learn what other companies like yours are doing as well.


The second thing that I thought was interesting this week is the news that the Eurovision organization is considering banning AI-generated songs from the competition.

Thomas Seal, reporting for Bloomberg:

Eurovision, the kitschy annual pop song contest, is debating a ban on artificial intelligence, the latest sign of the entertainment industry’s concerns over the emerging technology.

“What if at the Eurovision Song Contest we suddenly get an AI-created song?” said Jean-Philip de Tender, deputy director general of the European Broadcasting Union, an alliance of TV companies that oversees the contest. The EBU is “reflecting on how do we need this in the rulebook, that the creativity should come from humans and not from machines.”

While new rules require a discussion with the EBU’s members and governing bodies, the competition should reward “people on stage, who have achieved something in writing a song and performing a song,” de Tender said in an interview with Bloomberg Thursday at the Edinburgh TV Festival.

Eurovision has inspired AI experiments in the past. In 2019, algorithms developed in part by Oracle Corp. analyzed hundreds of past submissions to create the melody and lyrics for “Blue Jeans and Bloody Tears,” a duet by 1978 Eurovision winner Izhar Cohen and a pink robot.

This year, Eurovision reached an audience of 162 million, according to the contest’s website.

Nothing. Absolutely nothing will prevent the music industry from generating songs with AI. Those songs will be the most popular songs ever created. And the ones that will produce those songs will be the same music labels that today are fighting against anonymous people cloning the voices of famous singers to create spoof songs. And the music streaming services.

That’s because these two industry constituencies, music labels and streaming services, are the ones that have the most data on what people have liked in the last 100 years of music.

If you think that they are not going to use that data to generate the most wanted songs ever, I have a bridge to sell you.

Do you remember the popular synthetic song Heart on My Sleeve featuring the unauthorized voice of Drake that we mentioned in Issue #10 – The Memory of an Elephant?

The anonymous author published a new song titled Whiplash on TikTok.

I cannot show you the original clip on its own, as it has been removed since I wrote this issue, but I can show it to you next to the clip of a famous TikToker reacting to it:

I urge you to listen to the song, even if you hate rap.

I urge you to read the comments and do a bit of exploring until you realize that people are crazy for this song, and they care exactly zero that it’s made with AI.

But the anonymous creator is not content with that. He/she has to stir the pot even more with the accompanying text:

The future of music is here. Artists now have the ability to let their voice work for them without lifting a finger. That being said, ghostwriter is open for business.

@travisscott @21savage it’s clear that people want this song. DM me on Instagram if you are interested in allowing me to release this record, or if you’d like me to remove this post.

If you’re down to put it out, I will clearly label it as A.I., and I’ll direct royalties to you. Respect either way.

Do you realize how many kids all around the world will now flock to generative AI music models in the attempt to emulate this?

And here’s the inevitable evolution, documented by Joe Coscarelli, reporting for The New York Times:

Behind the scenes, however, the shadowy act and its team were making overtures to the very industry figures “Heart on My Sleeve” had unnerved. In the months since, those behind the project have met with record labels, tech leaders, music platforms and artists about how to best harness the powers of A.I., including at a virtual round-table discussion this summer organized by the Recording Academy, the organization behind the Grammy Awards.

“I knew right away as soon as I heard that record that it was going to be something that we had to grapple with from an Academy standpoint, but also from a music community and industry standpoint,” Harvey Mason Jr., a producer who is the chief executive of the Recording Academy, said in an interview. “When you start seeing A.I. involved in something so creative and so cool, relevant and of-the-moment, it immediately starts you thinking, ‘OK, where is this going? How is this going to affect creativity? What’s the business implication for monetization?’”

Mason said he had contacted Ghostwriter directly on social media after being impressed with “Heart on My Sleeve.” He added that Ghostwriter attended the meeting in character, including using a distorted voice.

A representative for Ghostwriter, who requested anonymity to not expose those behind the project — acknowledging that much of its marketing power comes from its mystery — confirmed that “Whiplash,” like “Heart on My Sleeve,” was an original composition written and recorded by humans. Ghostwriter attempted to match the content, delivery, tone and phrasing of the established stars before using A.I. components.

“As far as the creative side, it’s absolutely eligible because it was written by a human,” said Mason of the Recording Academy.

He added that the Academy would also look at whether the song was commercially available, with Grammy rules stating that a track must have “general distribution,” meaning “the broad release of a recording, available nationwide via brick-and-mortar stores, third-party online retailers and/or streaming services.”

If a Grammy goes to either of these songs, it will open the gates to a new era of music.

The music streaming services are especially primed to attempt creating hits with AI. They are struggling to compete against each other and grow their subscriber base. The music labels cost them a lot of money in royalties, but they can’t get rid of them. Human artists can say or do nasty things that force them to break multi-million dollar contracts. And so on.

They have been served, on a silver platter, the opportunity to get rid of all of these problems by using AI to generate synthetic singers and incredibly popular songs.

The technology is maturing quickly, and this is one of the jobs that might become the most impacted by AI in the years to come.

I know that among you there are many people who believe that AI will never be able to replace human creativity. My answer to all of you is always the same: watch Everything is a Remix.


The third interesting thing of the week is about UK publishers urging the Prime Minister to protect works ingested by AI models.

Dan Milmo, reporting for The Guardian:

UK publishers have urged the prime minister to protect authors’ and other content makers’ intellectual property rights as part of a summit on artificial intelligence.

The intervention came as OpenAI, the company behind the ChatGPT chatbot, argued in a legal filing that authors suing the business over its use of their work to train powerful AI systems “misconceived the scope” of US copyright law.

The letter from the Publishers Association, which represents publishers of digital and print books as well as research journals and educational content, asks Rishi Sunak to make clear at the November summit that intellectual property law must be respected when AI systems absorb content produced by the UK’s creative industries.

In its letter, the Publishers Association said: “On behalf of our industry and the wider content industries, we ask that your government makes a strong statement either as part of, or in parallel with, your summit to make clear that UK intellectual property law should be respected when any content is ingested by AI systems and a licence obtained in advance.”

In the UK, the government has backtracked on an initial proposal to allow AI developers free use of copyrighted books and music for training AI models. The exemption was raised by the Intellectual Property Office in June 2022 but ministers have since rowed back on it. In a report published on Wednesday, MPs said the handling of the exemption proposal showed a “clear lack of understanding of the needs of the UK’s creative industries”.

The letter from the publishers’ trade body said the UK’s “world-leading” creative industries should be supported in parallel with AI development. It pointed to research that estimated the publishing industry to be worth £7bn to the UK economy, while employing 70,000 people and supporting hundreds of thousands of authors.

The jobs of a lot of people are on the line if we end up being able to publish a bestselling book with a single prompt and the push of a button.

As I wrote before, for now, it’s strikes and calls to the politicians. The next logical step is sabotage.

Don’t expect these people to go down without a fight.

Actors, musicians, writers. One by one, every category is rising up against the use of AI to generate synthetic content. When will it be the turn of corporate employees?

Are we ready to address that?

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, or the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

Long overdue, I’d like to point your attention to a comment about the impact of AI on the way people trade, reiterated by the Chair of the US Securities and Exchange Commission (SEC), Gary Gensler.

From the transcript:

A lot of the recent buzz has been about such generative AI models, particularly large language models. AI, though, is much broader. I believe it’s the most transformative technology of our time, on par with the internet and mass production of automobiles.

The possibility of one or even a small number of AI platforms dominating raises issues with regard to financial stability. While at MIT, Lily Bailey and I wrote a paper about some of these issues called “Deep Learning and Financial Stability.”[30] The recent advances in generative AI models make these challenges more likely.

AI may heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator. This could encourage monocultures. It also could exacerbate the inherent network interconnectedness of the global financial system.

Thus, AI may play a central role in the after-action reports of a future financial crisis.

While current model risk management guidance—generally written prior to this new wave of data analytics—will need to be updated, it will not be sufficient. Model risk management tools, while lowering overall risk, primarily address firm-level, or so-called micro-prudential, risks. Many of the challenges to financial stability that AI may pose in the future, though, will require new thinking on system-wide or macro-prudential policy interventions.

The paper Gensler is referring to was published in November 2020 and it’s titled: Deep Learning and Financial Stability.

When we imagine how AI could impact the Finance world, we tend to think about it with optimism, imagining easier-to-use and more intelligent tools that, among other things, could further democratize access to retail trading.

The SEC Chair’s perspective turns this idea on its head. Interesting.

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

We are a little more than a quarter away from 2024, a date that sounds futuristic for people of my age.

We have all the evidence we need that the artificial intelligence technologies we use today have a lot of room to evolve and mature. Even if the approach we are using today eventually hits a wall, that wall seems far away. And even if that wall exists, what can be done with today’s approach is enough to reshape quite a few jobs and industries.

All of this is to say that the adoption of AI at a planetary scale is inevitable. If you read the Splendid Edition of Synthetic Work, you know this probably better than the overwhelming majority of people in the industry, including most AI experts.

As the perceived destiny of AI evolved from “it’s just another hype cycle before a new AI winter” to “it’s inevitable”, it’s important to understand how people’s feelings about AI have evolved as well.

Pew Research offers some help, at least for what concerns Americans, with a new report published last week.

Alec Tyson and Emma Kikuchi write about their research:

Overall, 52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence.

The share of Americans who are mostly concerned about AI in daily life is up 14 percentage points since December 2022, when 38% expressed this view.

Still, there are some notable differences, particularly by age. About six-in-ten adults ages 65 and older (61%) are mostly concerned about the growing use of AI in daily life, while 4% are mostly excited. That gap is much smaller among those ages 18 to 29: 42% are more concerned and 17% are more excited.

The rise in concern about AI has taken place alongside growing public awareness. Nine-in-ten adults have heard either a lot (33%) or a little (56%) about artificial intelligence. The share who have heard a lot about AI is up 7 points since December 2022.

Those who have heard a lot about AI are 16 points more likely now than they were in December 2022 to express greater concern than excitement about it.

Now.

In Issue #15 – Well, worst case, I’ll take a job as cowboy we learned from another Pew Research survey that there are a lot of Americans who talk about ChatGPT but have never used ChatGPT.

So one might wonder if a lot of people are expressing an opinion on something they don’t know much about.

I know. It would be the first time. An unprecedented event in the history of humanity. But, let’s keep our minds open to the possibility.

People are not just concerned. They are also confused:

There are several uses of AI where the public sees a more positive than negative impact.

For instance, 49% say AI helps more than hurts when people want to find products and services they are interested in online.

Other uses of AI where opinions tilt more positive than negative include helping companies make safe cars and trucks and helping people take care of their health.

In contrast, public views of AI’s impact on privacy are much more negative. Overall, 53% of Americans say AI is doing more to hurt than help people keep their personal information private. Only 10% say AI helps more than it hurts, and 37% aren’t sure.

So, people are happy about receiving help in finding the things they want to buy, but they don’t want to tell anybody about their taste.

“Somebody else can train those AI models, not me. I’ll just enjoy them very much.”

So, here’s the problem, people:

The only way to find somebody (a seller, a lover, etc.) that feels like “Wow! This is exactly what I was looking for! It feels like he/she is reading my mind!” is to let him/her read your mind.

The hope that, by pure chance, you’ll stumble on the perfect match (for a product, a service, a lover, etc.) within your lifetime leads to a life of disappointment.

Back to the report with a final data point about the influence of education:

Americans with higher levels of education are more likely than others to say AI is having a positive impact across most uses included in the survey. For example, 46% of college graduates say AI is doing more to help than hurt doctors in providing quality care to patients. Among adults with less education, 32% take this view.

Keep this in mind when you read the next tech optimist on X saying that AI will free us to do greater things.

Putting Lipstick on a Pig

A few months ago, Microsoft announced the future integration of OpenAI models with Microsoft 365, suggesting the capability of automatically taking notes on your behalf during meetings. At that time, I wrote that the immediate consequence might be even less engagement. Which is hard, given that most people in corporate meetings stare at their screens instead of paying attention to the meeting.

It can still happen if AI turns the whole exercise of attending a meeting into a mere formality.

In my experience, people attend meetings primarily to be sure they are there to defend their interests: clinging to their existing resources, fighting for more budget, securing new requisitions, and, most importantly, making sure they can defend themselves if somebody badmouths their jobs in their absence.

If the AI takes all the notes, and people can ask it something like: “send me a message on the corporate chat if any of these situations happen during the conversation”, then the urgency to attend meetings will be reduced to near zero.
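There is nothing exotic about such a watchdog, technically speaking. Here is a hedged sketch of what it could look like, with a hypothetical corporate chat webhook and prompts of my own invention; no vendor product or API is implied.

```python
# A sketch of the "alert me if X comes up" scenario described above.
# The model name, the prompt, and the chat webhook URL are hypothetical.
import requests
from openai import OpenAI

client = OpenAI()
CHAT_WEBHOOK = "https://chat.example.com/webhook/alice"  # hypothetical corporate chat endpoint

WATCH_LIST = [
    "budget cuts to my team",
    "criticism of the Q3 marketing project",
]

def check_transcript_chunk(chunk: str) -> None:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Here is a fragment of a meeting transcript:\n"
                f"{chunk}\n\n"
                f"Does it touch on any of these topics? {WATCH_LIST}\n"
                "Answer YES or NO, then explain in one sentence."
            ),
        }],
    )
    answer = resp.choices[0].message.content
    if answer.strip().upper().startswith("YES"):
        # Ping the absent attendee on the corporate chat.
        requests.post(CHAT_WEBHOOK, json={"text": f"Heads up: {answer}"})
```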

All of these considerations, of course, equally apply to Google, which announced the integration of Bard with Google Meet.

Jay Peters, reporting for The Verge:

If Google Meet’s new AI tools are as good as advertised, you might never need to pay attention to another meeting again — or even show up at all. At its Cloud Next conference today, Google revealed a handful of new AI-powered features coming soon to Meet.

One of the biggest new AI-enabled features is the ability for Google’s Duet AI to take notes in real time: click “take notes for me,” and the app will capture a summary and action items as the meeting is going on. If you’re late to a meeting, Google will be able to show you a mid-meeting summary so that you can catch up on what happened. During the call, you’ll be able to talk privately with a Google chatbot to go over details you might have missed. And when the meeting is over, you can save the summary to Docs and come back to it after the fact; it can even include video clips of important moments.

Wait. The best is yet to come:

Another new Meet feature lets Duet “attend” a meeting on your behalf. On a meeting invite, you can click an “attend for me” button, and Google can auto-generate some text about what you might want to discuss. Those notes will be viewable to attendees during the meeting so that they can discuss them.

What could possibly go wrong?

And before you start wondering if this is the worst possible way to solve the meeting problem, please take note of how strong the cognitive dissonance is inside Google:

Ultimately, [Dave Citron, Google’s senior director of product for Meet] says Meet is still working on the same overall goal as before. “We really want meetings to feel like they’re bringing people together into the same room regardless of where you are and your device,” he says.

Of course, now everybody thinks this is a great idea that must be implemented. So here’s the uncanny timing of Zoom:

Preparing for that big meeting. Writing emails. Catching up on a backlog of chat messages. Repetitive tasks like these can take up 62% of your workday, not to mention sap your productivity and hurt your ability to collaborate with your team. But now, you’re empowered to do more using Zoom AI Companion.

If communicating with others is not collaboration, what is it? Each one sitting at their desk, in silence, with headphones on, and a screen in front of them, doing their own thing and sending that thing when it’s done?

Even developers that “collaborate” in an open source project, each writing pieces of code, eventually have to communicate with each other and with the group to transfer ideas from one brain to another.

If you strip away the communication part from human interaction, offloading that job to an AI, all that’s left is people doing their atomic tasks based on an input and a set of rules.

Humans as programming functions of a program we call society.

Let’s continue with Zoom’s announcement:

In line with our commitment to responsible AI, Zoom does not use any of your audio, video, chat, screen sharing, attachments, or other communications-like customer content (such as poll results, whiteboard, and reactions) to train Zoom’s or third-party artificial intelligence models.

You hop on your computer and see a bunch of chats. No problem — AI Companion will help you compose chat responses with the right tone and length based on your prompts. That time saved can help you focus on a project you need to share with your team right away.

Humans as programming functions.

You grab a cup of coffee and arrive at your first meeting a few minutes late. Instead of interrupting your teammates to find out what you missed, just ask AI Companion to catch you up and it will recap what’s been discussed so far.

You can also ask AI Companion more specific questions about the meeting content. After the meeting, AI Companion smart recordings can automatically divide cloud recordings into smart chapters for easy review, highlight important information, and create next steps for attendees to take action.

You missed a meeting yesterday because you were busy or traveling. Now, you no longer need to find someone to fill you in. You can just read the meeting summary generated by AI Companion, which tells you who said what, highlights important topics, and outlines next steps.

Humans as programming functions.

Pressed for time and have important emails to respond to? Later this month, AI Companion will be able to help you compose email messages with the right tone and length.

After several back-to-back calls, you have a ton of unread chats. Coming later this month, AI Companion will be able to summarize your chat messages instantly so you can see the big picture more easily. This fall, you’ll be able to respond to those messages quickly with AI Companion suggestions to help you complete your sentences and responses.

In a chat channel, your teammates discuss the marketing plan for a new product but you need to clarify a few key points and want to talk through it with them. This fall, AI Companion will be able to automatically detect meeting intent in chat messages and display a scheduling button to streamline the scheduling process.

Humans as programming functions.

More importantly, if the communication part is offloaded to an AI, you, my dear fellow human, are entirely replaceable.

I know you think your brilliance will emerge from the output that you’ll produce (and that the AI will communicate and promote on your behalf), but it won’t. Very few move up the ladder because of the quality of their output, and only up to a point.

We are successful because of how we communicate with others.

Want More? Read the Splendid Edition

This week’s Splendid Edition is titled Use AI, save 500 lives per year.

In it:

  • Intro
    • What is software?
  • What’s AI Doing for Companies Like Mine?
    • Learn what DoorDash, Kaiser Permanente, and BHP Group are doing with AI.
  • A Chart to Look Smart
    • Researchers have used large language models to analyze the impact of corporate culture on financial analyst reports. Can this approach be used elsewhere?
  • The Tools of the Trade
    • Let’s use a voice generation tool to produce an audio version of Synthetic Work

Issue #29 - Modern Love

September 16, 2023
Free Edition
Hi. If you are seeing this, it means that you are a valued member of our community. Or you are reading Issue #0. Or you hacked the archives.
Whichever the case, bravo.

If you have comments about anything you'll find below, or you have material to suggest, or topics you'd like to see covered (don't you dare to pitch me your startup), just send an email.
I'll read all emails, ignore them, wait a few weeks, and then use the best stuff for a new issue of the newsletter, pretending the ideas are original and mine.

Another thing. Super important question:

Do you have one of those moms that inexplicably know everyone and gossip all day long so that one little secret you have shared with them in confidence at breakfast becomes a fact known by the whole town by noon?

If so, can you tell your mom that Synthetic Work is a secret?

If she talks about this newsletter or forwards it to the entire neighbourhood, it helps me a lot.

After a surprising number of people pushed me, for years, relentlessly, to start a YouTube channel, I finally capitulated:

https://youtube.com/@perilli

This channel will be a complement to Synthetic Work, not a replacement of any sort.

And it will follow the opposite logic of Synthetic Work: while the newsletter focuses on long, in-depth content that you may want to archive, discuss with your team, or research further, the YouTube channel will offer short, to-the-point content that covers one thing only in each video.

Even the format is different.

As some of you know, I already produce a weekly video series in collaboration with a leading tech publisher called 01net: Le Voci dell’AI.

That is a 10-minute pill, in Italian, where I mostly comment on the news about artificial intelligence and the business implications of what’s being announced. Nothing to do with the laser focus of Synthetic Work on the impact of AI on jobs.

And so, for this new YouTube channel, I wanted a format different from Synthetic Work and different from Le Voci dell’AI.

I thought it would be interesting to interview successful peopl…

I’m joking.

We have enough interviewers in the world, and they are all wonderful.

No.

I thought it would be interesting to answer your questions. And not just about artificial intelligence, but also about many other emerging technologies to which I dedicate my time.

I regularly get questions from the readers of Synthetic Work, clients, former colleagues, friends, and random strangers on Reddit, LinkedIn, or X.

So why not answer them in a video so that others could benefit from the answers as well?

That’s it. That’s the idea.

I put together a team, invited them to my house, and we recorded the first episodes to see how it would work.

It’s a unique Q&A format (you’ll see why over time) which we’ll use to answer your questions in every episode.

Feel free to ask anything you want about AI and other emerging technologies. It can be a question about the topic we cover in this newsletter or about something else entirely. It can be a business-oriented question, a competition-related question, a product question, a technical question, a question about my cat…anything you need to know to succeed.

And don’t worry about being judged. Your identity will remain anonymous unless you specify otherwise.

Just send an email to [email protected]

We’ll choose the best questions and answer them. And if you know me, you know what to expect: strong opinions and straightforward answers.

Subscribe to the channel and don’t miss the first video we’ll publish on Tuesday.


Wednesday, Sep 20, at 4pm BST, I’ll livestream a podcast with Dror Gill, a top generative AI expert who also happens to be a reader of Synthetic Work.

It’s titled Jobs 2.0 – The AI Upgrade

We want to talk about three things:

  • What is the real impact of AI on jobs.
  • How to address workforce fear, uncertainty, and doubts, and upskill people.
  • How AI is impacting the career outlook of our kids.

We’ll also have a Q&A session at the end, so feel free to join us live and ask your questions.

Find all the details here: https://perilli.com/podcasts/jobs-2-0-the-ai-upgrade/

See you (in video) soon.
Alessandro

In This Issue

  • Intro
    • A new YouTube channel and an upcoming podcast
  • What Caught My Attention This Week
    • Project Gutenberg, Microsoft, and MIT worked together to create an audiobook version of 5,000 open access books.
    • A new bill in the US state of California could force companies to hire a trained human safety operator for self-driving trucks.
    • Freelance job websites are seeing an explosion of demand for AI skills.
  • The Way We Work Now
    • Comedians have started using AI to write jokes.
  • How Do You Feel?
    • Voice actors discover their voices cloned and used to say dirty things in video games.
  • Putting Lipstick on a Pig
    • Dating websites want to use AI to coach people on how to flirt online.
What Caught My Attention This Week

The first thing that caught my attention this week is a collaboration between Project Gutenberg, Microsoft, and MIT to create an audiobook version of 5,000 open access books.

The researchers behind this project released a short paper about it with some interesting information:

An audiobook can dramatically improve a work of literature’s accessibility and improve reader engagement. However, audiobooks can take hundreds of hours of human effort to create, edit, and publish. In this work, we present a system that can automatically generate high-quality audiobooks from online e-books. In particular, we leverage recent advances in neural text-to-speech to create and release thousands of human-quality, open-license audiobooks from the Project Gutenberg ebook collection. Our method can identify the proper subset of e-book content to read for a wide collection of diversely structured books and can operate on hundreds of books in parallel.

Our system allows users to customize an audiobook’s speaking speed and style, emotional intonation, and can even match a desired voice using a small amount of sample audio. This work contributed over five thousand open-license audiobooks and an interactive demo that allows users to quickly create their own customized audiobooks.
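To give you a feel for how mechanical this pipeline is, here is a minimal sketch of the same idea. The paper relies on neural text-to-speech and much smarter content parsing; below, the pyttsx3 library and a crude Gutenberg marker check stand in for both, just to show the shape of the loop. The file name is a hypothetical local copy of an e-book.

```python
# Sketch of the audiobook pipeline: take a Project Gutenberg e-book, strip the
# boilerplate, chunk the body, and synthesize audio chunk by chunk.
# pyttsx3 is only a local stand-in for the neural TTS the paper uses.
import pyttsx3

def extract_body(raw: str) -> str:
    # Gutenberg plain-text files wrap the book between start/end markers.
    start = raw.find("*** START OF")
    end = raw.find("*** END OF")
    return raw[start:end] if start != -1 and end != -1 else raw

def synthesize(text: str, out_path: str, rate: int = 170) -> None:
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)  # reading speed, one of the customizations the paper mentions
    engine.save_to_file(text, out_path)
    engine.runAndWait()

with open("pg1342.txt", encoding="utf-8") as f:  # hypothetical local e-book file
    body = extract_body(f.read())

# One audio file per ~5,000-character chunk; the real system parallelizes this
# across hundreds of books at once.
for i in range(0, len(body), 5000):
    synthesize(body[i:i + 5000], f"chunk_{i // 5000:04d}.wav")
```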

Last week, to show you how mature synthetic voices have become, I unveiled an audio version of the Free Edition: Issue #28 – Can’t Kill a Ghost.

So this story is perfect to further validate the point I tried to make last week: synthetic voices are ready for prime time and readily accessible to anyone.

Of course, there’s a little problem.

Readily accessible technology that can parallelize the creation of thousands of audiobooks takes away jobs.

Microsoft researchers try to spin this as a positive thing in their paper:

Traditional methods of audiobook production, such as professional human narration or volunteer-driven projects like LibriVox, are time-consuming, expensive, and can vary in recording quality. These factors make it difficult to keep up with an ever-increasing rate of book publication.

LibriVox is a well-known project that creates open-license audiobooks using human volunteers. Although it has made significant contributions to the accessibility of audiobooks, the quality of the produced audiobooks can be inconsistent due to the varying skills and recording environments of the volunteers. Furthermore, the scalability of the project is limited by the availability of volunteers and the time it takes to record and edit a single audiobook. Private platforms such as Audible create high-quality audiobooks but do not release their works openly and charge users for their audiobooks.

Audible charges users because, besides making a profit, it also pays professional voice actors.

The argument that professional narrators cannot keep up with the ever-increasing rate of book publication (especially now that books are being written with the help of large language models like GPT-4) is a moot one.

Synthetic voices will not be used just to help professional narrators cope with the scale of book publishing. They will exterminate the profession.

And professional narrators have understood this since the beginning of this year, when Apple introduced synthetic voice narration for a subset of the books in its Apple Books platform.

And if it’s not cost and scalability issues that will exterminate the profession, market demand will. Generative AI can be used to clone the voices of celebrities and singers, who can comfortably retire at the age of 15 and live off royalties for the rest of their lives.

Who wouldn’t want all his/her audiobooks narrated by Morgan Freeman?

And now that we prematurely declared dead the profession of professional narrator, feel free to browse the collection of books that Microsoft and MIT have put together.


The second thing that caught my attention this week is an attempt to pass a new bill in the US state of California that would mandate the employment of a trained human safety operator for self-driving trucks.

Rebecca Bellan, reporting for TechCrunch:

In a blow to the autonomous trucking industry, the California Senate passed a bill Monday that requires a trained human safety operator to be present any time a self-driving, heavy-duty vehicle operates on public roads in the state. In effect, the bill bans driverless AV trucks.

AB 316, which passed the senate floor with 36 votes in favor and two against, still needs to be signed by Gov. Gavin Newsom before it becomes law. Newsom has a reputation for being friendly to the tech industry, and is expected to veto AB 316.

In August, one of the governor’s senior advisers wrote a letter to Cecilia Aguiar-Curry, the bill’s author, opposing the legislation. The letter said such restrictions on autonomous trucking would not only undermine existing regulations, but could also limit supply chain innovation and effectiveness and hamper California’s economic competitiveness.

Advocates of the bill, first introduced in January, argue that having more control over the removal of safety drivers from autonomous trucks would protect California road users and ensure job security for truck drivers.

The authors of the bill previously told TechCrunch they don’t want to stop driverless trucking from reaching California in perpetuity — just until the legislature is convinced it’s safe enough to remove the driver.

Why is this interesting?

Because the “ensuring job security for truck drivers” argument is thrown in there seemingly as an afterthought.

Now, is it an afterthought and the real concern is safety, as it seems from the last sentence of the quote above? Or is it the other way around: the real concern is job security and the safety argument is just a convenient excuse?

Whichever it is, once a few US states start approving driverless self-driving trucks, many, many other countries will follow. And short of a mass slaughter of humans due to AI miscalculations, there will be no turning back. With or without California’s participation.

Self-driving technology is not perfect, yet. It has been “not perfect yet” for years. But betting that it will never be in our lifetime is foolish.

And so, what will happen to the millions of car and truck drivers out there?

The techno-optimist we quoted a few Issues ago would say: “They will be free to go and do greater things.”


The third thing that caught my attention this week is the explosion of demand for AI skills on freelance job websites like Upwork.

Sean Michael Kerner, reporting for VentureBeat:

Ever since OpenAI’s ChatGPT dramatically entered the AI scene in November 2022, there has been an explosion of interest and demand in AI skills — and particularly OpenAI skills. A new partnership announced today (July 31) between OpenAI and Upwork aims to help meet the demand.

The OpenAI Experts on Upwork services is a way for OpenAI users to get access to skilled professionals to help with AI projects. The two organizations worked together to design the service to identify and help validate the right professionals to help enterprises get the AI skills that are needed.

“We’ve seen widespread adoption of generative AI on the Upwork platform across the board,” Bottoms said. “Gen AI job posts on our platform are up more than 1,000% and related searches are up more than 1500% in Q2 2023 when compared to Q4 2022.”

Bottoms said that Upwork partnered with OpenAI to identify the most common use cases for OpenAI customers — like building applications powered by large language models (LLMs), fine-tuning models and developing chatbots with responsible AI in mind.

This is probably the most positive impact of AI on jobs we’ve seen so far. But be careful: I wouldn’t put my company in the hands of a freelancer for model fine-tuning or chatbot development.

As I said many times before, one of the most valuable things companies can do now is build an internal AI team that is capable of in-house fine-tuning.


The bonus thing that caught my attention this week is the video of a huge 3D printer printing a complicated titanium part designed by artificial intelligence.

Even if you don’t care much about algorithmic design, this is worth watching to understand how AI is infiltrating every aspect of our economy:

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, or the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

Even comedians are starting to use AI in their line of business.

Vanessa Thorpe, reporting for The Guardian:

This summer’s Edinburgh festival fringe lineup of acts has taken up the threat of artificial intelligence and run with it.

London-based comedian Peter Bazely has confessed to being “out of ideas”, so has turned to AI for help in creating a “relatable” show at the Laughing Horse venue. As a result, he plans to play straight man to a computer-generated comic called AI Jesus – also the name of his show.

Equally unafraid of an algorithm is a show at Gilded Balloon Teviot called Artificial Intelligence Improvisation, hot from the Brighton fringe. Presented by Improbotics theatre lab co-founder Piotr Mirowski, a research scientist on Google’s DeepMind project, the show features both actors and bots responding to audience prompts as chatbots compete with humans for the best punchline.

Behind the theatrics lies a serious purpose, according to Mirowski. The future security of any performer who relies on their imagination rests on the public’s appetite for quality, he argues. “We do not use humans to showcase AI; instead we use AI, demonstrating its obvious limitations, to showcase human creativity, ingenuity and support on the stage,” he clarified this weekend.

As always, these attempts to show the limitations of today’s AI models, often fueled by job insecurity, are poorly framed.

GPT-3.5-Turbo is less than one year old. GPT-4 was released in March.

Of this year.

Yes.

It feels like it was released 50 years ago, but it wasn’t. It’s just six months old.

And we already have rumors of a new large language model developed by DeepMind/Google AI that is significantly more powerful than GPT-4.

And then, you know that GPT-5 is coming? And GPT-6? And so on?

So, on a performance stage or in the board room, our framing cannot be “Oh, look at how bad AI is at this task. It will never replace me in this job.”

To make that claim, we need to see at least three generations of large language models failing at the task, year over year.

If we don’t think in this way, we are just wasting a lot of energy reassuring ourselves and others about something we have no evidence to support.

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

In Issue #20 – £600 and your voice is mine forever, we talked about the growing number of lawsuits involving voice actors who find out their voices are being cloned and used to compete against them in the marketplace.

But there’s another aspect of voice cloning that is worth discussing.

Ed Nightingale, reporting for Eurogamer:

“Is my voice out there in the modding and voice-changing community? Yes. Do I have any control over what my voice can be used for? Not at the moment.”

David Menkin most recently starred in Final Fantasy 16 as Barnabas Tharmr, following work in Lego Star Wars, Valorant, Assassin’s Creed and more. And like many other voice actors in the games industry, he’s concerned about the increasing use of AI.

“AI is a tool, and like any tool, it requires regulation,” Menkin says. “We simply want to know what our voices will be used for and be able to grant or withdraw consent if that use is outside the scope of our original contract. And if our work is making your AI tool money, we want to be fairly remunerated.”

A tweet earlier this month from Skyrim modder Robbie92_ highlighted the issue of actors having their voices used by modding communities for pornographic content. “As a member of the Skyrim modding scene, I am deeply concerned at the practice of using AI voice cloning to create and distribute non-consensual deepfake pornographic content,” they wrote, including a list of affected performers.

Some popular voice actors found out from that very tweet that their voices were being used in that way. Imagine that.

The main point here, one that I probably didn’t emphasize enough in these first six months of Synthetic Work, is that it’s incredibly easy to clone a human trait with generative AI.

How you sound, how you look, your choice of words.

Humans are algorithms themselves, and we are the sum of a multitude of patterns that define our appearance and our behaviour. Generative AI learns our patterns better than any other technology we ever invented. And it allows anybody to replicate our patterns trivially.

Given enough data, and the amount needed shrinks week after week as the AI community makes progress, generative AI can clone anything about any of us.

This is a central point in the hypothesis that AI will replace us in the workplace in many professions. For some time now, I’ve been working to show you how incredibly easy it is to clone a person.
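
To give you a sense of how little is needed, here is a minimal voice-cloning sketch, assuming the open-source Coqui TTS library and its publicly available XTTS v2 model; the audio file names are placeholders, and a few seconds of reference audio are enough for a rough clone:

    # Minimal voice-cloning sketch, assuming the open-source Coqui TTS package (pip install TTS).
    # The audio file paths below are placeholders.
    from TTS.api import TTS

    # Load the multilingual XTTS v2 model, which supports zero-shot voice cloning.
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # A short clip of the target voice is enough to approximate how that person sounds.
    tts.tts_to_file(
        text="I never recorded this sentence, but it sounds like I did.",
        speaker_wav="reference_voice_sample.wav",  # placeholder: any short recording of the target voice
        language="en",
        file_path="cloned_output.wav",
    )

That is the whole program. The same pattern-learning applies to faces, writing style, and everything else that makes us recognizable.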

You’ll see what’s possible in a future Splendid Edition.

We previously discussed how dating apps are using AI to embellish the profiles of their users and how much more they could do. But out of the many scenarios we contemplated, I didn’t see this one coming.

Emily Chang, interviewing Bumble founder and CEO Whitney Wolfe Herd for Bloomberg:

“The average US single adult doesn’t date because they don’t know how to flirt, or they’re scared they don’t know how,” Wolfe Herd said on this week’s episode of The Circuit With Emily Chang. “What if you could leverage the chatbot to instill confidence, to help someone feel really secure before they go and talk to a bunch of people that they don’t know?”

Something tells me that lack of flirting skills is not the reason why the average US single adult doesn’t date, but OK.

Let’s continue:

Humanity could use some assistance. The US Surgeon General declared loneliness an epidemic, with more than half of adults reporting feelings of loneliness.

Something tells me that dating and loneliness are not correlated in the way it’s portrayed here, but OK.

There’s more:

AI could improve the quality of matches by, as she put it, “supercharging fate.”

“We’re building an entire relationship business.”

The full video, titled Modern Love (a title perfect for a Black Mirror episode), is here.

While the whole interview is interesting if you don’t know the story of Bumble, or its business model, this one quote at around 16:00 is what matters:

“Is there anything about an AI-powered dating future that makes you worry?”

“You just have to engineer things to operate within certain guardrails and boundaries, and you have to steer things in the right direction. So, is the world suggesting that the whole world replaces human connection with a chatbot? If that is truly the case, I promise you, we have bigger issues than Bumble.

I don’t think you will ever replace the need for real love and human connection.”

So these dating apps are embracing AI and evolving to make us look better, to make us come across better, and to make us win more, by presenting us to an audience that is more likely to find us highly attractive, interesting, loving, etc.

To achieve that goal, they will have to know more about us, profiling us more extensively than they do today. And then, progressively, the effort required to find a partner might be reduced to zero.

But before you get to zero, somebody will start thinking: “Wait a second. If we are getting all this data about this user, and we are learning so much more about him/her, why don’t we skip the whole matching partner circus and just create a chatbot that is a perfect match for him/her according to the same parameters we would use to find a human partner?”

Coincidentally, Character.ai, which we talked about in one of the first Issues of Synthetic Work, is exploding in popularity and is getting closer and closer to ChatGPT in terms of monthly active users:

Now.

Why are we talking about all of this?

Because what happens in the personal lives of people influences how they behave in their professional lives and what they find acceptable or not.

If it becomes culturally acceptable to get an “AI makeover” on a dating app, it might become culturally acceptable to get an “AI makeover” on a professional networking app.

If it becomes culturally acceptable to get an “AI personal partner” on a dating app, it might become culturally acceptable to get an “AI professional partner” at work.

Not for you and me, perhaps, but for the generation of people who are growing up with these technologies and have yet to enter the workforce.

Look at the age distribution of Character.ai users:

We say “lipstick on a pig” in a pejorative way, but what if it becomes the social norm?

Want More? Read the Splendid Edition

This week’s Splendid Edition is titled The AI That Whispered to Governments.

In it:

  • Intro
    • If you were one of the most powerful tech companies in the world, and you were given the possibility to whisper in the ears of your government, could you resist the temptation of taking advantage of it?
  • What’s AI Doing for Companies Like Mine?
    • Learn what Propak, JLL, and Activision are doing with AI.
  • A Chart to Look Smart
    • The better the AI, the less recruiters pay attention to job applications.
  • Prompting
    • Let’s take a look at a new technique to generate summaries dense with information with large language models. Does it really work?
  • The Tools of the Trade
    • Is lipsynched video translation ready for industrial applications?