- A big PR agency in China has stopped paying external copywriters and designers because generative AI is more than enough. Let’s see what that does to the image of their clients.
- Independent and employed Chinese artists are seeing dramatically fewer commissions from gaming studios. Now they just have to generate an image of their dinner, and they’ll be fine.
- The UK Trades Union Congress doesn’t find this whole AI thing very amusing.
- New research suggests that observing the AI crushing us in almost every single activity might help us think out of the box.
- When I grow up, I want to be a Knocker Upper.
- AI startups now require candidates to spend a lifetime learning new technologies from the second they are invented and then travel back in time to apply for jobs in the past.
- If you are an AI company and you are marketing the product or the features, you are missing the point.
- Making up research is now a completely legitimate activity. You don’t even have to pretend anymore that you did the work.
P.S.: This week’s Splendid Edition of Synthetic Work is titled The Tools of the Trade because it introduces a new perk for Splendid Edition members: a database of what I consider the best AI-centric tools/products/services to do specific tasks across multiple lines of work.
It describes why I think AI is making a difference and why these tools are better than their (listed) alternatives.
Second monthiversary of Synthetic Work. If you are wondering if I’ll keep doing this every month, the answer is: no.
It’s just that I find it remarkable that this publication is alive and growing, both in terms of sophistication and readership.
Paying members keep it alive, but it’s the curiosity of all of you, and the word of mouth you keep fueling, that helps this newsletter find its members.
So. Thanks for these two months and all the emails that you sent me. It’s been a fun experiment so far.
Alessandro
This week, too, here are three things to focus your attention on:
Thing number one: a big PR agency in China is making a move to replace some of its employees with AI.
This is the moment I would say something like: “To tell you this story, I quote XYZ, writing for ABC:”
But I can’t. And this is the second time in two weeks.
The article I intend to quote doesn’t have a byline other than “Bloomberg News” and a very suspicious final line that says “With assistance by Daniela Wei”.
Normally, I wouldn’t think much about it, but since I wrote the Splendid Edition of Issue #5 – The Perpetual Garbage Generator about how AI is impacting the publishing industry, I find some bylines quite concerning.
Anyway.
To tell you this story, I quote Bloomberg News, I suppose, with the vaguely credited assistance of Daniela Wei:
Bluefocus Intelligent Communications Group Co. plans to replace its external copywriters and graphic designers with ChatGPT-like generative AI models, according to an internal staff memo seen by Bloomberg News. The $3 billion company, one of China’s best-known media and public relations outfits, has reached out to Alibaba Group Holding Ltd. and Baidu Inc. to explore licensing their technology
…
“To embrace the new wave of AI generated content, starting today we’ve decided to halt all spending on third-party copywriters and designers,” according to the internal memo.
Now. Aren’t you curious to know what type of customers Bluefocus serves?
Here are some examples. You can find the rest on their clients’ page.
It will be super fun to see what happens if the customers of, say, BMW, start complaining to BMW that it’s using a PR agency that is replacing people with AI.
But maybe it won’t happen. Maybe everybody will be super happy about the money that Bluefocus and BMW are now saving.
Thing number two: speaking of China, the emergence of AI models to generate images has already started crushing independent and employed artists.
This story is from Viola Zhou, who reports for Rest of World:
Freelance illustrator Amber Yu used to make 3,000 to 7,000 yuan ($430 to $1,000) for every video game poster she drew. Making the promotional posters, published on social media to attract players and introduce new features, was skill-intensive and time-consuming. Once, she spent an entire week completing one illustration of a woman dressed in traditional Chinese attire performing a lion dance — first making a sketch on Adobe Photoshop, then carefully refining the outlines and adding colors.

But since February, these job opportunities have vanished, Yu told Rest of World. Gaming companies, equipped with AI image generators, can create a similar illustration in seconds. Yu said they now simply offer to commission her for small fixes, like tweaking the lighting and skewed body parts, for a tenth of her original rate.
In case you don’t know, dear reader, AI models for image generation (technically called diffusion models), like the ones used in Dall-E 2 by OpenAI, Stable Diffusion by Stability AI, or Midjourney, still have huge issues in generating anatomically correct hands (and feet).
You need to do incredible acrobatics to have decent-looking human limbs in AI-generated images. I have first-hand experience (see what I’ve done here???).
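If you’re curious what those acrobatics look like in practice, here’s a minimal sketch of the usual workaround with Stable Diffusion via Hugging Face’s diffusers library (the model choice and prompts are illustrative examples, not a recipe I’m endorsing): you stack negative prompts to push the model away from its known anatomy failures.

```python
# A sketch of the typical "acrobatics": pile negative prompts on top of
# the actual prompt to steer the model away from mangled hands and limbs.
# Requires: pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model choice
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="portrait of a woman performing a lion dance, detailed hands",
    negative_prompt=(
        "extra fingers, fused fingers, missing fingers, "
        "deformed hands, extra limbs, bad anatomy"
    ),
).images[0]

image.save("poster.png")
```

And even then, you often have to generate dozens of candidates and cherry-pick the one where the hands happen to come out right.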
But diffusion models are maturing, and I’d expect Dall-E 3 and/or Stable Diffusion 3, probably out by the end of 2023, to fix the problem.
Bottom line: very soon, Amber won’t even get the small commissions she’s getting today.
Let’s continue the article:
“AI is developing at a speed way beyond our imagination,” Xu Yingying, illustrator at an independent game art studio in Chongqing, told Rest of World. Xu’s studio produces designs for major game developers in China. Five of the studio’s 15 illustrators who specialize in character design were laid off this year, and Xu believes the adoption of AI image generators was partly to blame. “Two people could potentially do the work that used to be done by 10,” she said.
…
AI-generated art was so skilled that some illustrators talked about giving up drawing altogether. “Our way of making a living is suddenly destroyed,” said a game artist in Guangdong, speaking on condition of anonymity for fear of being identified by her employer, to Rest of World. Yu, the freelance illustrator, said it was “despicable” that algorithms — trained on vast datasets that took humans decades to produce — were on the verge of replacing the artists themselves. Still, Yu plans to train AI programs with her own drawings to improve her productivity. “If I’m a top-notch artist, I might be able to boycott [them]. But I have to eat.”
…
The Guangdong-based game artist, who works at a leading gaming company, said that previously, employees could draw a scene or a character in a day; now, with the help of AI, they could make 40 a day for their bosses to choose from. “I wish I could just shoot down these programs,” the artist told Rest of World, after getting off work late one night. She said fear of impending layoffs had made her colleagues more competitive; many stayed at work late, working longer hours to try to produce more. “[AI] made us more productive but also more exhausted,” she said.
As we said many times on Synthetic Work in these first two months of existence, the problem here is not so much that ad agencies and gaming studios freeze hiring because their existing employees can do 10x with AI tools.
The problem is that, if you don’t learn how to master these new AI tools, you will end up competing against another artist or knowledge worker or developer who has, and you’ll be crushed because you can’t compete in terms of production output (and soon, quality).
Generative AI comes at a time when millions have lost their jobs and hundreds of thousands more might lose theirs. It can be the opportunity of a lifetime, or the final straw.
It depends on how resourceful and resilient each of us is:
The gaming industry’s job market was already precarious after the Chinese government’s monthslong licensing freeze in 2021 threw thousands of game developers out of business. Leo Li, a gaming industry recruiter in Hangzhou, told Rest of World the number of illustrator jobs plunged by about 70% over the last year — not only because of regulatory pressures and a slowing economy, but also the AI boom. Given the rising capabilities of AI tools, “bosses may be thinking they don’t need so many employees,” Li said.
Thing number three: Synthetic Work readers are not the only ones paying attention to the situation, you know? Trade unions are preparing to fight.
Delphine Strauss, economics correspondent, writes for the Financial Times:
The UK government is failing to protect workers against the rapid adoption of artificial intelligence systems that will increasingly determine hiring and firing, pay and promotion, the Trades Union Congress warned on Tuesday.
…Recent high-profile cases include an Amsterdam court’s ruling over the “robo-firing” of ride-hailing drivers for Uber and Ola Cabs, and a controversy in the UK over Royal Mail’s tracking of postal workers’ productivity. But the TUC said AI systems were also widely used in recruitment, for example, to draw conclusions from candidates’ facial expressions and their tone of voice in video interviews.
Here Delphine is referring to the so-called Automated Video Interviews (AVI) that we encountered in the Splendid Edition of Issue #3 – I will edit and humanize your AI content – a dreadful application of AI that I hope none of you will ever see (but if you have already had the displeasure, please send me an email and tell me more about it).
Let’s continue the article:
It had also encountered teachers concerned that they were being monitored by systems originally introduced to track students’ performance. Meanwhile, call-centre workers reported that colleagues were routinely allocated calls by AI programs that were more likely to lead to a good outcome, and so attract a bonus.
…
The TUC argues that the government is failing to put in place the “guard rails” needed to protect workers as the adoption of AI-powered technologies spreads. It described as “vague and flimsy” a government white paper published last month, which set out principles for existing regulators to consider in monitoring the use of AI in their sectors, but did not propose any new legislation or funding to help regulators implement these principles.
…
The TUC also said the government’s Data Protection and Digital Information Bill, which reached its second reading in parliament on Monday, would dilute important existing protections for workers. One of the bill’s provisions would narrow current restrictions on the use of automated decision-making without meaningful human involvement, while another could limit the need for employers to give workers a say in the introduction of new technologies through an impact assessment process, the TUC said.
Now, what do you think is the government’s position?
But a government spokesperson said, “This assessment is wrong,” arguing that AI was “set to drive growth and create new highly paid jobs throughout the UK, while allowing us to carry out our existing jobs more efficiently and safely”.
Why isn’t the government more concerned about protecting existing jobs? For at least two good reasons:
- Nobody actually really for sure undoubtedly knows how this will play out. We know how other technology waves have played out, but AI is different from everything else we have ever invented.
The whole purpose of Synthetic Work is to understand what is happening to us, our jobs, and our economy in the age of AI.
- Goldman Sachs suggested that generative AI could increase world GDP by as much as 7%, as we saw in Issue #7 – The Human Computer. And, of course, every government wants a piece of that. In fact, just like the artist Amber Yu, whom we met at the beginning of this section, no government can afford not to want a piece of that.
This, of course, is only until somebody clever manipulates GPT-4, via fine-tuning or advanced prompting, to pose as a very credible politician and attempts to establish the first AI-powered political party.
Then, you’ll see a radical change in the mindset of governments around the world.
You wouldn’t believe that people would fall for it, but they do. Boy, they do.
So this is a section dedicated to making me popular.
A group of researchers from the City University of Hong Kong, Yale School of Management, and Princeton University, asked a very interesting question:
What effect does AI have on people’s decision-making capabilities?
More specifically: does the quality of our decisions go up or down when we are exposed to AIs so capable that they surpass us in a task?
A mighty fascinating question, if you ask me.
To find out, they conceived a Decision Quality Index (DQI) that they used to measure human behaviour in a very specific scenario: playing the game Go.
The researchers analyzed more than 5.8 million move decisions made by professional Go players over the past 71 years (1950-2021), and this is what they discovered:
Quoting their research:
We find that humans began to make significantly better decisions following the advent of superhuman AI. We then examine human players’ strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI.

Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making.
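To make the idea concrete, here is a back-of-the-envelope sketch of how a decision quality measure of this kind could work (my own illustration, not the paper’s exact formula): score each human move by how much win probability it gives up compared to the best move suggested by a superhuman engine.

```python
# A toy decision-quality score (illustrative, not the paper's exact DQI):
# compare the win probability after the move the human actually played
# with the win probability after the engine's preferred move.

def decision_quality(human_move_winrate: float, best_move_winrate: float) -> float:
    """0.0 means the human found the engine's best move; negative values
    measure how much win probability the human move threw away."""
    return human_move_winrate - best_move_winrate

# Example: the engine's best move keeps a 62% win probability,
# while the move actually played drops it to 57%.
print(decision_quality(0.57, 0.62))  # ~ -0.05: five points of win probability lost
```

Average that over every move in a game, and over millions of moves per year, and you get a curve of human decision quality over time.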
Isn’t this just called learning? Aren’t we simply learning the tricks that the AI used to win Go?
As life teaches us through an endless string of disappointments (about ourselves and others), learning is not a given:
Questions about the impact of superhuman AI on human behavior are related to the literature on cumulative cultural evolution. This literature shows that there is no guarantee that human decision-making will improve in response to innovations, despite the human ability to accumulate knowledge within and across generations. Often, cumulative cultural evolution does occur, as superior forms of decision-making are transferred from one group of individuals to another. However, at times, intrinsic biases and frictions in human learning can delay or derail such process. When there exist suboptimal but familiar decisions whose efficacy has been demonstrated by others, even experts fail to adopt unfamiliar but objectively better alternatives. It is thus not obvious whether human decision-making will improve following advancements in AI.
On top of this very important reminder, the researchers discovered that the quality of the decisions went up not just because people memorized what the AI had done before, but because the AI showed original ways to solve problems that humans had previously discarded for whatever reason.
In other words, the researchers are suggesting that we are not simply parroting the new techniques that the AI shows us, but we are learning from the AI to think out of the box.
Which is the secret ingredient of the most disruptive startups out there.
Now, of course, this is just about playing Go. But it’s a fascinating observation that is worth paying attention to.
Have you ever thought in a novel way after the interaction with ChatGPT/GPT-4?
When we think about how artificial intelligence is changing the nature of our jobs, these memories help put things in perspective. In other words: stop whining.
The Cultural Tutor reminds us that once upon a time we used to have a job called the Knocker Upper.
How did people wake up before alarm clocks?!
In the 19th century there were professional "knocker uppers" who hit your window with a stick until you got out of bed.
And before that? Well, humans had an entirely different understanding of time… pic.twitter.com/qdX9f5k4wP
— The Cultural Tutor (@culturaltutor) April 17, 2023
Now. Where was the trade union when we lost that job to alarm clocks?
This is the material that will be greatly expanded in the Splendid Edition of the newsletter.
Let’s talk about absurd expectations and the job market for a second.
Scott Condron is a machine learning engineer working for the startup Weights & Biases.
Earlier this week, they arranged a great party for the AI community here in London. I attended and met some of the most brilliant minds the UK has to offer, some coming all the way from Cambridge for an opportunity to land a job or to network.
I was there and yes, the queue was a massive 45min wait from the point we got in.
And yes, I shook @emostaque's hand and thanked him (also on your behalf) for the gifts to humanity that #stablediffusion and @stabilityai are. You're welcome. https://t.co/2PG2uay3jp
— Alessandro Perilli ✍️ Synthetic Work 🇺🇦 (@giano) April 20, 2023
Before the event, Scott sent out an alert:
📣 We’re hiring a Prompt Engineer / Chaingineer!
5 years experience with ChatGPT and langchain required.
I'll be recruiting in person at @weights_biases & @StabilityAI's event next week in London 😉https://t.co/O5tF3rsbdY
— Scott Condron (@_ScottCondron) April 13, 2023
We already talked about the emerging job role of the “prompt engineer” multiple times on Synthetic Work. So we don’t care about that.
What we care about is the 5 years of experience.
ChatGPT was released in November 2022. LangChain (I won’t bother you with what it is, it’s not relevant now) was released in October 2022.
So, how can you hope to find somebody who has 5 years of experience in these technologies?
It reminds me of the absurd job postings seen on LinkedIn during the Web3 fever just two years ago: requests for a decade of experience in technologies that are not even 10 years old.
Even Anthropic, one of the most advanced AI startups in the world, was looking for a much more reasonable 2 years of experience when they posted their Prompt Engineer position, as we saw in Issue #1 – When I grow up, I want to be a Prompt Engineer and Librarian.
All this does is incentivise candidates to lie. And at that point, you, as a hiring manager, end up with even more noise than you were hoping to eliminate.
Even if Scott was simply being imprecise (280 characters can force you into horrible compromises sometimes), you have to remember that the first GPT model was released by OpenAI in 2018.
So what is Scott looking for? Probably, an ML engineer who has been studying and working with these AI systems since their inception.
And this, finally, allows me to get to the point I want to make: Scott can afford to make a similar request because the UK is one of the top markets in the world for AI skills. Almost no other country has a similar luxury.
If you are a company operating in Spain, Italy, Germany, Luxembourg, Turkey, Sweden, Brazil,… you name it, you have almost no chance of finding an AI expert with that kind of experience.
And even in markets where there is a non-negligible presence of AI expertise, like the US, UK, France, Canada, and a few others, those professionals are constantly poached by top technology vendors and financial institutions.
Not only are these experts difficult to hire, they are also very expensive to hire.
If you insist on looking for the most experienced candidates on the market for emerging technologies, you’d better have a multi-million dollar recruitment budget. The alternative is to reset your expectations and give these candidates the opportunity to grow their experience internally.
Back to Scott. I truly hope he was trolling:
We are only looking for very experienced chaingineers, sorry. If you have experience with technologies like AGI, you are encouraged to apply.
— Scott Condron (@_ScottCondron) April 13, 2023
For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.
This week, I want to share something I recently said on social media.
Like all of you, I use various types of AI to boost my productivity in a number of ways (as the readers of the Splendid Edition know). Sometimes, these interactions generate a feeling that, hopefully, reflects how many others must be feeling in their exploration of what artificial intelligence can do:
Never wrote a line of Javascript in my life. Today, I had to. If I’d used my usual approach, I’d have spent hours on Google & Stack Overflow. Instead, I asked GPT-4. I still spent hours, but I worked back & forth with the AI to arrive at a solution and I learned a ton.
For a software developer, this is nothing to pay attention to. But for a person that doesn’t code, this is a transformative experience (I already had it with Copilot for some Python code). The sense of empowerment that comes from it is impossible to explain.
This is the real revolution of generative AI: not what you can do with it, but how you feel when you do things with it, and the sense of possibility that you have after you’ve used it. It really feels like you are talking to a primitive version of JARVIS in the Iron Man movies.
GPT-4 is not perfect, and I definitely look forward to the 32K context window, but you leave thinking “it doesn’t matter if it’s imperfect. I can already do more than I ever hoped I would. It will get better.”

AI startups targeting non-technical people need to focus on how their product makes you feel, not just what it does. That is the message that will make the audience look, and yet no attention has been spent on that.

This is also why it’s so hard to convince people to pause the training of AI models more powerful than GPT-4. Whoever has experienced the feeling I’m talking about doesn’t want it to stop. He/she wants GPT-5, and 6, and 7, as fast as possible.

I already said these things many times, but they are worth repeating over & over, every time I use generative models and they amaze me.
When we call for caution with AI, we need to keep in mind that this is one of the powerful feelings people believe we want to stop.
I find it odd that practically no AI startup or established vendor I’ve reviewed so far has made any attempt to describe how non-tech people can feel when AI helps them build something that they thought would be impossible before.
Perhaps, because most of these startups are still in a stage where they only employ engineers, and engineers are not exactly empathy champions.
Perhaps, because it’s much easier to talk about very tangible things like features and the number of parameters than about intangibles like emotions. And that’s what the marketing of the entire IT industry looks like today, and has looked like in the last 23 years I’ve been paying attention to it, regardless of AI.
Perhaps, because we have decided that only Apple can master this elusive skill and anybody else giving it a go just looks clumsy and cringy (I’m looking at you, Microsoft. Google, you shouldn’t even try.)
Anyway.
I’m convinced that people welcome generative AI (unlike every other AI we’ve seen so far) because they feel empowered by it. Not because of what it does (which is often very imperfect).
This week’s protagonist of my favourite section of the newsletter is a sketchy organization called Impossible Labs Limited. They have created a product called Synthetic Users.
Where to start?
Let’s start with something that has nothing to do with AI and this eclectic enterprise: there’s a mountain of research that shows that focus groups for product design are useless.
Useless.
I’ve been repeating this for 9 years at my previous company, but you don’t have to listen to me or to the research. Don’t be rational. Instead, appeal to your emotions. Listen to Steve Jobs:
It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.
Now that your beloved leader has said it, rather than a random Alessandro, will you stop using focus groups?
Of course not.
Because people don’t use market research (including focus groups) to find the truth. They use it to show that they are right and gain political power or resources within a social group.
The dear old confirmation bias.
Which is also why my 9 years of warnings against focus groups have changed exactly nothing. People don’t really care if the outcome of a focus group will eventually doom a product launch, especially in large organizations with little to no individual accountability.
Now. Let’s pretend for a second that you actually care about the truth.
If you do, this Synthetic Users thing is truly terrible.
As we said many, many times in this newsletter, generative AI models like GPT-4, simply spit out what they believe is the most appropriate way to complete a sentence or a question according to everything they have learned from the Internet.
But these models patch together pieces of answers from every corner of their datasets, without the care for coherence or consistency that you would, hopefully, find when you probe a human mind.
These “synthetic users” are not fully formed synthetic personalities, with a set of perspectives, beliefs, educational backgrounds, attitude traits, etc. that shape their worldview and, in turn, the answers they might give in a focus group survey.
In fact, at the moment, the consistency of their opinions is limited to what fits in an 8K token context window (which we have loosely compared to a memory many times). That is barely enough for a few minutes of conversation.
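To see why, here’s a toy illustration (not any vendor’s actual API) of what happens when a conversation outgrows the context window: the oldest turns are simply dropped before the next reply is generated, and whatever persona was established in them goes with them.

```python
# Toy illustration of context-window truncation. The function names and the
# 4-characters-per-token heuristic are my own simplifications, not how any
# specific product works.

CONTEXT_WINDOW_TOKENS = 8_000  # e.g., GPT-4's base window at the time of writing

def approx_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def fit_to_context(turns: list[str]) -> list[str]:
    """Keep only the most recent conversation turns that fit in the window."""
    kept, used = [], 0
    for turn in reversed(turns):  # walk backwards from the newest turn
        cost = approx_tokens(turn)
        if used + cost > CONTEXT_WINDOW_TOKENS:
            break  # everything older than this point is simply forgotten
        kept.append(turn)
        used += cost
    return list(reversed(kept))
```

Whatever carefully crafted backstory you gave your “synthetic user” in turn one is gone the moment it falls out of that window.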
It’s like doing a focus group with a goldfish. (By the way, it’s not true that goldfish have a memory of just 3 seconds. But let’s pretend, because I can’t think of a better analogy.)
There’s no worldview. There’s no personality. There’s no synthetic user.
But let’s hear the opinion of Niloufar Salehi, an assistant professor at UC Berkeley’s School of Information:
SyntheticUsers “interviews” are exactly what should be expected from a pattern synthesis engine: stereotypically likely challenges for low-income immigrant families that lack substance about their actual real lives. This is not a challenge that might be improved with more fine-tuning or better models.
A pattern synthesis engine is fundamentally unaware of anything in the real world, and can not produce any “insights” beyond whatever patterns exist in its training data. Priming it by telling it to act as though it’s someone else does not change this basic fact.
Finally, apart from the fact that misguiding users by acting as though a pattern synthesis engine can actually be “interviewed” is wrong, I’m also not sure why it’s even remotely useful.
The whole point of spending the time to interview people and then spending a lot more time analyzing the large amounts of data gathered, is the ability to connect with them, build trust, dig deeper, ask them to share stories, and learn about their feelings and emotions. Pattern synthesis engines have none of those.
So. What did we learn here? That, finally, you can get zero-friction market research to support whichever twisted idea you have about your product for less than $400.
Let’s celebrate this second month of Synthetic Work with some changes.
Change 1: The Section “If You Only Have a Hammer…” is now renamed “The Tools of the Trade”.
Change 2: Starting today, the Splendid Edition will not dedicate each week to a single topic, but will have something for each and every section. Same content, just a different way to present it. You’ll understand better why below. Let me know if you like this format better or not.
Change 3: Splendid Edition members get a new perk: access to a new database, tentatively called The Best AI-Centric Tools for X or, more colloquially, Best For.
This database will list what I consider the best AI-centric tools/products/services to do specific tasks across multiple lines of work. It will describe why I think AI is making a difference and why these tools are better than their (listed) alternatives.
I’ll keep the database regularly updated. You can see an example of the content here. It’s a work in progress.
While I’ll mention the tools in the database in The Tools of the Trade section, you won’t receive the database updates via the Splendid Edition newsletter. It’s too impractical to deliver that kind of structured content via email.