- How to think about the information you get from Synthetic Work and what to do with it.
- Jakob Nielsen, a pioneer of user experience (UX), has opinions on the generative AI prompt.
- Nikon Peru launches a (very) defensive marketing campaign titled Natural Intelligence.
- The German tabloid Bild announced it will cut jobs, replacing people with AI.
- GitHub reveals that 92% of U.S.-based developers are already using AI coding tools both in and outside of work.
- Tim Urban sees a future where kids will have one human parent and one AI parent.
- The CEO of VTech is preparing to launch teddy bears powered by generative AI to read bedtime stories.
P.S.: this week’s Splendid Edition is titled How to roll with the punches.
In it, you’ll find out how IKEA, Marvel, and more than 200 game studios are using AI.
Also, in the What Can AI Do for Me? section, you’ll learn how to use GPT-4 as a coach to learn how to face criticism.
And, in The Tools of the Trade section, you’ll discover LM Studio, an invaluable tool to test open access AI models.
- Did you make a terrible decision by taking that loan to get your MBA because, by the time you hit the job market, AI will be faster and cheaper than you, eliminating your chance to earn the salary your college promised you’d get with your prestigious education?
- Are you going to lose your job in 3-5 years because your company replaces you with AI?
- Is your competitor going to drive you out of business as they commoditize your value proposition?
- Will you miss the chance to retire before AI-induced mass unemployment?
- If you do manage to retire, will there be anyone left to pay for your pension?
- Will your government do too little, too late in terms of welfare policies to protect you and millions of other workers all losing their jobs at the same time?
That’s what it all boils down to.
And if the answer turns out to be “Yes” to all or some of these questions, there’s only one follow-up question:
“What should you do to save yourself?”
So we have two imperatives here.
- Understand if, indeed, the answer to those fastidious questions is “Yes”. Action to take: read the Synthetic Work Free Edition every week. Read the signals that might hint at a “Yes” answer.
- Understand what you can do to save yourself. Action to take: read the Synthetic Work Splendid Edition every week. It gives you the tools to dominate AI rather than be dominated by it and, crucially, it tells you how quickly others around you are acting to save themselves.
It doesn’t seem like a complicated plan to me.
Except that it is. Because nobody gets to ask the follow-up question: “What should you do to save yourself?”
To get to that question, you first need to assess that there’s a tangible threat to your livelihood. You have to perceive that the questions at the top of this intro are more than 50% likely to be answered with a “Yes”.
In fact, I’d say that most people start to look at that follow-up question when there’s a 90% chance that the answer to all those questions is “Yes”.
And today, you don’t believe that there’s a 50% chance that the answer to all those questions is “Yes”. You don’t believe there’s even a 1% chance.
It’s like being a dinosaur (very cool) and looking up at the sky and seeing a microscopic rock up there. You don’t think it’s going to smash into your planet and ruin your existence (very uncool), do you? Nobody would.
So your attitude likely is:
I don’t care about the doomsday scenario because it’s not happening and, even if it happens, I’ll be long dead by then. And even if it happens before I die, it won’t happen to me.
Just tell me how I can make more money/get ahead with AI.
Or, if you can’t help me make more money/get ahead, tell me about the cool things that can be done with AI today so, at least, I’m entertained.
To change this attitude I would have to tell you that you are at risk of unemployment, right now.
Or, I would have to tell you that your competitor is about to displace you thanks to AI, right now.
Or, I would have to tell you that your retirement, just five years away, won’t happen at all in five years.
And you would have to believe me.
But the truth is that I can’t tell you for sure.
If you have read this Free Edition of Synthetic Work for a while, you know that this newsletter is tracking a growing number of cases of people who have lost their jobs to AI. But it’s still a drop in the ocean. Probably fewer than 5,000 people. Let’s say 10,000, to be generous, assuming some people don’t even know they lost their jobs to AI.
You also know that we have a growing list of CEOs who have committed to replacing their workers with AI, and who have openly admitted it. So the notion that AI is impacting the job market is definitely not a myth. But these are 3-5 year plans. I can’t tell you for sure whether these CEOs will change their minds by the time they have to execute them.
What I can tell you is that, very likely, there is an information asymmetry between us.
By not sleeping very much at night, I get to research what the newest AI models can do.
You, instead, have your job, are a normally functioning human being and, hopefully, sleep at night.
From this observation point, I can tell you that, hard as it is to believe, almost every day a new model comes out that can do something previously thought impossible or uniquely human.
And while we get distracted by the deluge of useless posts on social media by hundreds of “the AI Guy”, these new AI models and their capabilities get adopted at lightning speed by companies in Advertising, Financial Services, Health Care, Retail, Tech, and many other industries.
All you have to do to get a sense of how fast these new AI models are being adopted is to read the Splendid Edition of Synthetic Work every week. Or, take a quick look at the AI Tracker.
After you do that, you are at a bifurcation point:
Every week, I do everything I can to level out the information asymmetry between us, by telling you how AI is reshaping our jobs, the way we work, and the industries of our economy.
You can use that information in two alternative ways: either you decide that the information is an indicator of great potential for you and your business, or you decide that the information is an indicator of a great threat to you and your business.
If you decide that the information is an indicator of great potential, you may want to take advantage of it to make more money or get ahead in your career. In that case, you may want to read the Splendid Edition of Synthetic Work (or something else that gives you the same kind of value).
If you decide that the information is an indicator of a great threat, you may want to take advantage of it to save yourself and your business. In that case, too, you may want to read the Splendid Edition of Synthetic Work (or something else that gives you the same kind of value).
All of this is NOT to promote the Splendid Edition of Synthetic Work. I know it’s a little hard to believe at this point, but it’s true.
All of this is to say that however you decide to interpret the information you read in the Free Edition of this newsletter, my advice to you is to act on it.
Do something with that information. And convince others to do something with that information.
This is shaping up to become either the biggest opportunity or the biggest tragedy of our lifetime.
The last thing you want is to look at it and think “So what?”
Alessandro
The first thing worth your attention this week is the opinion of Jakob Nielsen, the pioneer of user experience (UX), on this new interaction paradigm: the prompt.
Nielsen is probably the second most famous UX expert in the world after Donald Norman, the former VP of Research at Apple, and the two have a renowned consulting firm together.
So, when Nielsen talks, you might want to pay attention. And this is what he says in his latest blog post:
Current generative AI systems like ChatGPT employ user interfaces driven by “prompts” entered by users in prose format. This intent-based outcome specification has excellent benefits, allowing skilled users to arrive at the desired outcome much faster than if they had to manually control the computer through a myriad of tedious commands, as was required by the traditional command-based UI paradigm, which ruled ever since we abandoned batch processing.
But one major usability downside is that users must be highly articulate to write the required prose text for the prompts. According to the latest literacy research (detailed below), half of the population in rich countries like the United States and Germany are classified as low-literacy users. (While the situation is better in Japan and possibly some other Asian countries, it’s much worse in mid-income countries and probably terrible in developing countries.)
…
A small piece of empirical evidence for my thesis is the prevalence of so-called “prompt engineers” who specialize in writing the necessary text to make an AI cough up the desired outcome. That “prompt engineering” can be a job suggests that many business professionals can’t articulate their needs sufficiently well to use current AI user interfaces successfully for anything beyond the most straightforward problems.

Articulating your needs in writing is difficult, even at high literacy levels. For example, consider the head of a department in a big company who wants to automate some tedious procedures. He or she goes to the IT department and says, “I want such and such, and here are the specs.” What’s the chance that IT delivers software that actually does what this department needs? Close to nil, according to decades of experience with enterprise software development. Humans simply can’t state their needs in a specification document with any degree of accuracy. Same for prompts.
Why is this serious usability problem yet to be discussed, considering the oceans of analysis spilled during the current AI gold rush? Probably because most analyses of the new AI capabilities are written by people who are either academics or journalists. Two professions that require — guess what — high literacy. Our old insight you ≠ user isn’t widely appreciated in those lofty — not to say arrogant — circles.
I expect that in countries like the United States, Northern Europe, and East Asia, less than 20% of the population is sufficiently articulate in written prose to make advanced use of prompt-driven generative AI systems. 10% is actually my maximum-likelihood estimate as long as we lack more precise data on this problem.
For sure, half the population is insufficiently articulate in writing to use ChatGPT well.
I am not going to say that this is exactly why I dedicate a significant portion of the Splendid Edition to helping you write better prompts.
No. I am not going to say that.
Instead, I will say that this is a critical issue to keep in mind if you are thinking about implementing a large language model in your organization, like Allen & Overy in the Legal industry or McKinsey in the Professional Services industry have done.
It’s also a critical issue to keep in mind when you attempt to measure the productivity of your workforce after they have been given access to a large language model. If the adoption rate is low, is it because GPT-4 is not good enough, or is it because the employees struggle to articulate what they need as Nielsen is implying?
How to address this issue?
Nielsen suggests:
My best guess is that successful AI user interfaces will be hybrid and combine elements of intent-based outcome specification and some aspects of the graphical user interface from the previous command-driven paradigm. GUIs have superior usability because they show people what can be done rather than requiring them to articulate what they want.
I have been thinking about this for a while, long before this blog post was published, and I agree that the flexibility of this intent-based interface we call prompt is a double-edged sword.
To research and experiment more on this subject, I’m thinking about creating a web app that allows you to benefit from the prompt techniques I shared in the Splendid Edition without having to learn them.
You just go on the website, where there are different problems that you can solve with GPT-4, organized into categories.
You just pick the problem you need some help with, fill in the blanks or answer the questions, and read the response of the AI oracle. Behind the scenes, the prompts will do the work.
And if you want to learn what techniques have been used, you can just click on a button and see the prompt.
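To make the idea more concrete, here is a minimal sketch of how such a site could work behind the scenes, assuming a small library of hidden prompt templates and the OpenAI Python library as it stood in mid-2023. The template names, the blanks, and the solve() helper are all hypothetical, purely to illustrate the mechanism:

```python
# Hypothetical sketch: a library of hidden prompt templates with blanks,
# and one helper that fills the blanks and asks GPT-4 for an answer.
# Template names, fields, and solve() are made up for illustration;
# the API call follows the OpenAI Python library as of mid-2023.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Each "problem" the visitor picks on the website maps to a template.
# The visitor only sees the blanks; the prompt technique stays hidden.
PROMPT_TEMPLATES = {
    "rewrite_for_tone": (
        "You are an experienced editor. Rewrite the text below.\n"
        "Target length: {length}. Tone: {tone}. Audience: {audience}.\n"
        "Preserve the original meaning.\n\nText:\n{text}"
    ),
    "explain_to_a_child": (
        "Explain the following concept to a {age}-year-old, using one "
        "everyday analogy and no jargon.\n\nConcept:\n{text}"
    ),
}

def solve(problem: str, **blanks: str) -> str:
    """Fill the hidden template for `problem` and return GPT-4's answer."""
    prompt = PROMPT_TEMPLATES[problem].format(**blanks)
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(solve(
        "rewrite_for_tone",
        length="150 words",
        tone="warm but direct",
        audience="non-technical executives",
        text="Our quarterly numbers slipped because of supply issues...",
    ))
```

The web app would simply wrap a form around something like solve(): the problem picker selects the template, the form fields fill the blanks, and the “see the prompt” button reveals the filled-in template.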
What do you think? Interesting?
The second thing worth your attention this week is a new media campaign launched by Nikon Peru called “Natural Intelligence”.
The images from this campaign are all over social media and people are hailing the campaign as extremely clever and creative.
The images are beautiful, but I don’t see much positive in this campaign. I see it as a profoundly defensive move.
I don’t believe that most people take photos to document the world for historical reasons. I believe that, mostly unconsciously, we take photos to tell a story. Even the most mundane photo is an attempt to tell a story. Even when we take a picture as a memory, we are trying to tell a story to our future self or our future family.
If this is accurate (for most people, not everybody!), then the camera is just a tool to tell that story. No more than the nail holding the photo on the wall is a tool to hear that story back when we look at it.
This context can help us reframe what’s been happening in the photography industry for the last 20 years.
First, the iPhone. Then, computational photography. Now, generative AI.
At every step, the friction to tell a story has been reduced, to the detriment of those who built the original tools for the job.
When I first suggested, over 10 years ago on Facebook, that the iPhone would destroy the digital reflex (DSLR) camera market, my friends fiercely pushed back. “The quality of the iPhone will never match a DSLR”, they said, missing the point: where is the friction in telling a story?
Then, this happened:
The job of a photographer is to tell a story, not to operate the camera equipment. If generative AI can tell the same story in a frictionless way compared to the camera, what does it mean to be a photographer going forward?
You might say that generative AI models lack the creativity to generate marvels like the ones found in our natural world. But that would be true only for the current generation of models, and only for the natural wonders that the AI models have never seen during their training phase.
Fast forward three years from now, to a near future where AI models are exceptional and the so-called training dataset is the entire Internet. And if you want to see a glimpse of this future, you just need to wait until this July (possibly August) for the release of MidJourney v6.
In that future, what would the photographer use the camera for?
You might say that the camera will still be good for capturing stories about never-seen-before parts of our world. Like the abyss of the ocean, to reference this week’s tragic news. But, if so, the “camera photographer” becomes a full-time explorer.
There are very few of them. What happens to the overwhelming majority who are not explorers?
The third thing worth your attention this week is the German tabloid Bild, which announced it will cut jobs, replacing people with AI.
Jon Henley, reporting for The Guardian:
Germany’s Bild tabloid, the biggest-selling newspaper in Europe, has announced a €100m cost-cutting programme that will lead to about 200 redundancies, and warned staff that it expects to make further editorial cuts due to “the opportunities of artificial intelligence”.
Bild’s publisher, Axel Springer SE, said in an email to staff seen by the rival Frankfurter Allgemeine (FAZ) newspaper that it would “unfortunately be parting ways with colleagues who have tasks that in the digital world are performed by AI and/or automated processes”. The short-term job losses, expected to be in the region of 200, are due to a reorganisation of Bild’s regional newspaper business and are not believed to be related to AI.
The moves follow an announcement in February by the chief executive, Mathias Döpfner, that the publisher was to be a “purely digital media company”. AI tools such as ChatGPT could “make independent journalism better than it ever was – or replace it”, he said.
…
A Bild spokesperson said: “We believe in the opportunities of AI. We want to use them at Axel Springer to make journalism better and maintain independent journalism in the long term.
We dedicated an entire Splendid Edition to the impact of AI on the Publishing Industry: Issue #5 – The Perpetual Garbage Generator
And, since then, we have documented dozens of publications embracing AI for article generation.
I hope that journalists are realizing that this is a Pandora’s box that can’t be closed anymore.
You wouldn’t believe that people fall for it, but they do. Boy, they do.
So this is a section dedicated to making me popular.
A new report published by GitHub reveals that 92% of U.S.-based developers are already using AI coding tools both in and outside of work.
GitHub partnered with Wakefield Research to survey 500 U.S.-based developers at enterprise companies. The relevant part of the resulting report:
Almost all developers have used AI coding tools—92% of those we surveyed say they have used them either at work or in their personal time. We expect this number to increase in the months to come.
70% of developers believe that using AI coding tools will offer them an advantage in their work, with upskilling being the top benefit followed by productivity gains.
Given that upskilling is the number one task developers say improves their workdays, this is notable because AI coding tools can integrate it directly into a developer’s workflow.
What matters here is this: in February 2023, GitHub reported that its AI system, Copilot, was behind an average of 46% of a developer’s code across all programming languages. And now we are talking about 92% of developers using AI coding tools.
How quickly is this adoption happening? And where is it going?
Behind the scenes, in academic circles, there’s a mad rush to create AI models that can write code autonomously. In many cases, the logic is that the first person that figures it out will be able to take 20 concurrent jobs as a junior developer and get rich quickly. Or build an outsourcing software development company at a microscopic fraction of today’s costs.
But until that happens, and until autonomous code generation achieves a reasonable level of quality, security, and reliability, something else might be happening, as captured in this same report:
57% of developers believe AI coding tools help them improve their coding language skills—which is the top benefit they see. Beyond the prospect of acting as an upskilling aid, developers also say AI coding tools can also help with reducing cognitive effort, and since mental capacity and time are both finite resources, 41% of developers believe that AI coding tools can help with preventing burnout.
In previous research we conducted, 87% of developers reported that the AI coding tool GitHub Copilot helped them preserve mental effort while completing more repetitive tasks. This shows that AI coding tools allow developers to preserve cognitive effort and focus on more challenging and innovative aspects of software development or research and development.
AI coding tools help developers upskill while they work. Across our survey, developers consistently rank learning new skills as the number one contributor to a positive workday. But 30% also say learning and development can have a negative impact on their overall workday, which suggests some developers view learning and development as adding more work to their workdays. Notably, developers say the top benefit of AI coding tools is learning new skills—and these tools can help developers learn while they work, instead of making learning and development an additional task.
See, this AI-versus-burnout angle doesn’t convince me very much.
In Issue #16 – Discover Your True Vocation With AI: The Dog Walker, we read Marc Andreessen, the most famous venture capitalist in the world, explaining:
But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self interest.
This is not just Andreessen’s position. This idea of the 10x, 100x, 1000x engineer is all over social media.
But an employee who can produce 10/100/1000x in the same number of working hours doesn’t sound like somebody who is avoiding burnout to me. If anything, it sounds like somebody who, potentially, has to juggle 10/100/1000x more context switching as he/she takes on more projects.
I would have understood if we were told that the aforementioned employee can now work 1/10 of the current working hours and still get paid the same. That would sound more like burnout reduction to me.
But we are starting to have data points, documented in Issue #16 as well, suggesting that when AI can free an employee for 9/10ths of his/her working hours, that employee is simply fired.
So, are we sure generative AI is going to make us less burned out in the long run?
For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful without taking this into account.
Tim Urban, the world-famous author of the blog Wait But Why, tweets about topics we have covered for five months on Synthetic Work:
So it's pretty clear that lots of people are going to have AI friends and relationships. Which probably means there will also be people with AI spouses. And some of those couples will have kids (using a donor). So there will be future kids with one human parent and one AI parent.
— Tim Urban (@waitbutwhy) June 16, 2023
The protagonist of my favourite section of the Free Edition is VTech Holdings. VTech is a Hong Kong-based global supplier of electronic learning products from infancy to preschool and the world’s largest manufacturer of cordless phones.
Chan Ho-him, reporting for Financial Times:
Teddy bears kitted out with generative AI could end up chatting with children and telling them personalised bedtime stories as the costs of artificial intelligence fall, according to one of the world’s biggest toymakers.
The technology behind the ChatGPT chatbot could be available in toys as soon as 2028 and used to teach or even instil values such as not telling lies, said Allan Wong, the chair and chief executive of VTech Holdings, which owns US-based LeapFrog and already develops electronic learning products.
…
Smart toys could use “AI to generate stories customised for the kid rather than reading from a book,” he told the Financial Times.

“You can incorporate not only the kid’s name, but the kid’s daily activities. [It] knows you go to which school . . . who your friends are. It can actually be telling a story and talking almost like a good friend,” he said.
“The kids . . . can actually talk to the toy, and the toy can actually give [them] a response,” he explained. “So [there are] many, many possibilities.”
…
“I think we will have to wait another about five years when the price comes down to a certain level, then we can adapt a subset of those AI chips for toy use. But it’s coming.”
There are so many things wrong with this that I don’t even know where to start.
I understand parents who work 2-3 jobs and can’t even get home in time to read bedtime stories to their kids. Or the ones who are traveling for business and cannot even call their kids because, in their temporary time zone, it’s meeting time.
But the others? For this to be profitable, VTech must be expecting mass adoption, which means an ocean of parents who can’t be bothered to read a bedtime story to their kids and personalize it on the spot.
Second, there is the deeply troubling problem of what these AI models will say to the kids while the parents are not listening.
Do you remember the intro of Issue #16 – Discover Your True Vocation With AI: The Dog Walker?
That introduction was a long elaboration of the following idea:
For the first time in history, we are voluntarily seeking the help of an external entity that spoonfeeds us with what to write and, subtly, what to think. What does it mean?
You might want to go back and read it again thinking about these upcoming toys.
Third, and connected to the previous problem: you know that, if this takes off, the toy manufacturers will allow parents to clone their voices so that the teddy bears can sound like them, right?
And if the AI says things influenced by an advertiser, a government, or a tech company to a young mind with the parent’s voice, it’s phenomenally more effective, right?
A new email just arrived:
I want to thank you for the work you’re doing with Synthetic Work. I recently applied the techniques I learned by reading the prompt section in the Splendid Edition. I think I’m getting the point: using ChatGPT-4 is like having an assistant who strictly follows your instructions, but who gets way too creative whenever you are not precise enough.
For example, I recently wrote a short piece from scratch in around 20 minutes. I wanted to improve it, so I gave it to ChatGPT with a prompt containing precise instructions about length, tone, level of enthusiasm, and the desired involvement of the audience.
In a few seconds, I obtained another version of my text, which I used to write my own third version:
- I used the shorter version as a baseline.
- I modified the parts that were not consistent with the requests I had in mind (and that perhaps I didn’t explain well!).
- I obtained a final version that is a blend of the two.

I thought I could have refined the prompt, but because the piece was not that long, I preferred to edit the content myself. What impressed me, though, is not that I saved time, but that I got several new ideas – some directly using statements produced by ChatGPT, and others by getting a different angle that inspired me to write a new statement that was not there at the beginning.
For me, the most important thing is that I improved the quality of the piece I wrote. The value of that is certainly a multiple of my monthly subscription. Even if I end up using it only every three months from now on, I’m glad to pay for it monthly. I just don’t care.
Chief Marketing Officer
Jun 2023