Issue #45 - Overconfidently skilled

January 21, 2024
Splendid Edition
Generated with Stable Diffusion XL and ComfyUI
In This Issue

  • What’s AI Doing for Companies Like Mine?
    • Learn what Arizona State University, Selkie, Deloitte, DPD, Eli Lilly and Novartis are doing with AI.
  • A Chart to Look Smart
    • Key insights about generative AI in the latest PWC Global CEO Survey.
    • The first Deloitte State of Generative AI in the Enterprise report is a triumph of overconfidence.
    • 2024 GDC survey of over 3,000 game developers reveals key adoption trends of generative AI.
What's AI Doing for Companies Like Mine?

This is where we take a deeper look at how artificial intelligence is impacting the way we work across different industries: Education, Health Care, Finance, Legal, Manufacturing, Media & Entertainment, Retail, Tech, etc.

What we talk about here is not about what it could be, but about what is happening today.

Every organization adopting AI that is mentioned in this section is recorded in the AI Adoption Tracker.

Arizona State University is partnering with OpenAI to offer ChatGPT Enterprise to faculty and staff.

From ASU’s official announcement:

the university announced it has become the first higher education institution to collaborate with OpenAI

Starting in February, ASU will invite submissions from faculty and staff to implement the innovative uses of ChatGPT Enterprise. The three key areas of concentration include: enhancing student success, forging new avenues for innovative research and streamlining organizational processes.

“The goal is to leverage our knowledge core here at ASU to develop AI-driven projects aimed at revolutionizing educational techniques, aiding scholarly research and boosting administrative efficiency,” Gonick said.

ASU’s Knowledge Enterprise – which leads the university’s groundbreaking research activity – has 19 centers, initiatives and laboratories dedicated to exploring and activating AI models, resulting in over $340M in active awards.

Emilia David, reporting for The Verge, adds some details:

The university will begin taking project submissions from faculty and students on where to use ChatGPT beginning in February. Anne Jones, vice provost for undergraduate education, said in an interview some professors already use generative AI in their classes. She mentioned some composition classes that use AI to improve writing and journalism classes that use AI platforms to make multimedia stories. There may even be room for chatbots to act as personalized tutors for ASU students, said Jones.


In the E-commerce industry, the fashion brand Selkie used generative AI to design their new collection.

Morgan Sung, reporting for TechCrunch:

When Selkie, the fashion brand viral on Instagram and TikTok for its frothy, extravagant dresses, announces new collections, reception is generally positive. Known for its size inclusivity — its sizing ranges from XXS to 6X — and for being owned and founded by an independent artist who’s outspoken about fair pay and sustainability in fashion, Selkie tends to be highly regarded as one of the morally “good” brands online.

The brand’s upcoming Valentine’s Day drop was inspired by vintage greeting cards, and features saccharine images of puppies surrounded by roses, or comically fluffy kittens painted against pastel backdrops. Printed on sweaters and dresses adorned with bows, the collection was meant to be a nostalgic, cheeky nod to romance. It was also designed using the AI image generator Midjourney.

“I have a huge library of very old art, from like the 1800s and 1900s, and it’s a great tool to make the art look better,” Selkie founder Kimberley Gordon told TechCrunch. “I can sort of paint using it, on top of the generated art. I think the art is funny, and I think it’s cheeky, and there’s little details like an extra toe. Five years from now, this sweater is going to be such a cool thing because it will represent the beginning of a whole new world. An extra toe is like a representation of where we are beginning.”

Criticism flooded the brand’s Instagram comments. One described the choice to use AI as a “slap in the face” to artists, and expressed disappointment that a brand selling at such a high price point ($249 for the viral polyester puff minidress to $1,500 for made-to-order silk bridal gowns) wouldn’t just commission a human artist to design graphics for the collection. Another user simply commented, “the argument of ‘i’m an artist and i love ai!’ is very icky.” One user questioned why the brand opted to use generative AI, given the “overwhelming number” of stock images and vintage artwork that is not copyrighted, and “identical in style.”

Many of her popular designs incorporate motifs from famous works of art, like Van Gogh’s “Starry Night” and Monet’s “Water Lilies,” which she uses as a base to create a unique, but still recognizable pattern. After she alters and builds upon the already existing work, it’s printed onto gauzy fabric and used to construct billowing dresses and frilly accoutrements.

The Valentine’s Day drop, Gordon argued, is no different, except that she used generated images as the design base, instead of public domain artwork.

“I say this is art. This is the future of art and as long as an artist is utilizing it, it is the same as what we’ve been doing with clip art,” Gordon said. “I think it’s very similar, except it gives the artists a lot more power and allows us to compete in a world where big business has owned all of this structure.”

Resistance is futile. Every fashion brand in the world will rely on generative AI to design increasingly creative and original collections.

The smaller the brand, the more critical it will be to rely on AI to compete with the big brands. Nobody but them can afford to hire the top designers on the market.

We have come to terms with the fact that not every human is infinitely creative and original. In fact, most are not.

Generative AI gives us an infinitely creative and increasingly original synthetic designer. Eventually, no human will be able to compete with it.


In the Professional Services industry, Deloitte is expanding internal access to its generative AI chatbot to 75,000 employees across Europe and the Middle East.

Simon Foy, reporting for Financial Times:

Deloitte is rolling out a generative artificial intelligence chatbot to 75,000 employees across Europe and the Middle East to create power point presentations and write emails and code in an attempt to boost productivity.

The Big Four accounting and consulting firm first launched the internal tool, called “PairD”, in the UK in October, in the latest sign of professional services firms rushing to adopt AI.

However, in a sign that the fledgling technology remains a work in progress, staff were cautioned that the new tool may produce inaccurate information about people, places and facts.

Users have been told to perform their own due diligence and quality assurance to validate the “accuracy and completeness” of the chatbot’s output before using it for work, said a person familiar with the matter.

Unlike rival firms, which have teamed up with major market players such as ChatGPT maker OpenAI and Harvey, Deloitte’s AI chatbot was developed internally by the firm’s AI institute.

Deloitte said its “PairD” tool can be used by staff to answer emails, draft written content, write code to automate tasks, create presentations, carry out research and create meeting agendas.

The Big Four firm said it will provide UK disability charity Scope with free access to PairD.

Despite the journalist’s attempt to portray Deloitte’s chatbot as a work in progress, the recommendation to watch out for hallucination is standard practice and applies to any large language model, as the readers of this newsletter know well.

Instead, the journalist could have focused on the last quoted sentence. Maybe Deloitte is really generous. More likely, this is a clever way to acquire reinforcement learning with human feedback (RHLF) data to improve their AI model for free.


In the Logicists industry, DPD has disabled its customer service chatbot after it started to swear at customers.

Jane Clinton, reporting for The Guardian:

The delivery firm DPD has disabled part of its artificial intelligence (AI) powered online chatbot after a disgruntled customer was able to make it swear and criticise the company.

Musician Ashley Beauchamp, 30, was trying to track down a missing parcel but was having no joy in getting useful information from the chatbot. Fed up, he decided to have some fun instead and began to experiment to find out what the chatbot could do. Beauchamp said this was when the “chaos started”.

To begin with, he asked it to tell him a joke, but he soon progressed to getting the chatbot to write a poem criticising the company.

With a few more prompts the chatbot also swore.

DPD uses AI in its online chat to answer queries as well as human operators. The company said a new update had been behind the chatbot’s unusual behaviour and it had since disabled the part that was responsible and was updating its system as a consequence.

“We have operated an AI element within the chat successfully for a number of years,” the firm said. “An error occurred after a system update yesterday. The AI element was immediately disabled and is currently being updated.”

To be fair, there’s a difference between a large language model that peppers its answers to expected answers with profanity, and a chatbot that swears at customers because aforementioned customers use clever prompt engineering to override the system prompt.

The latter is a risk that every company deploying an LLM faces. And no week goes by without the cybersecurity community discovering a new, more creative way to attack these chatbots.

In this case, it’s not clear what really happened, but it’s clear that DPD didn’t deploy strong-enough safeguards to prevent the average customer from exploiting LLMs weaknesses.

Overconfidence or lack of skills is one way to end up in these situations. As you’ll read in the section below, overconfidence about own technical skills and in-house expertise is more common than it should be.


In the Pharmaceutical industry, Eli Lilly and Novartis are preparing to use AI models provided by Isomorphic Labs for drug discovery.

Kyle Wiggers, reporting for TechCrunch:

Isomorphic Labs, the London-based, drug discovery-focused spin-out of Google AI R&D division DeepMind, today announced that it’s entered into strategic partnerships with two pharmaceutical giants, Eli Lilly and Novartis, to apply AI to discover new medications to treat diseases.

The deals have a combined value of around $3 billion. Isomorphic will receive $45 million upfront from Eli Lilly and potentially up to $1.7 billion based on performance milestones, excluding royalties. Novartis, meanwhile, will pay $37.5 million upfront in addition to funding “select” research costs and as much as $1.2 billion (once again excluding royalties) in performance-based incentives over time.

Isomorphic, which Hassabis launched in 2021 under DeepMind parent company Alphabet, draws on DeepMind’s AlphaFold 2 AI technology that can be used to predict the structure of proteins in the human body. By uncovering these structures, the hope is that researchers can identify new target pathways to deliver drugs for fighting disease.

The tech isn’t perfect. A recent article in the journal Nature pointed out that AlphaFold occasionally makes obvious mistakes and, in many cases, is more useful as a “hypothesis generator” rather than a replacement for experimental data. But the scale at which the model can generate reasonably accurate protein predictions is beyond most methods that came before.

Researchers recently used AlphaFold to design and synthesize a potential drug to treat hepatocellular carcinoma, the most common type of primary liver cancer. And DeepMind is collaborating with Geneva-based Drugs for Neglected Diseases initiative, a nonprofit pharmaceutical organization, to apply AlphaFold to formulating therapeutics for Chagas disease and Leishmaniasis, two of the most deadly diseases in the developing world.

The latest version of AlphaFold can generate predictions for nearly all molecules in the Protein Data Bank, the world’s largest open access database of biological molecules, DeepMind announced in October. The model can also accurately predict the structures of ligands — molecules that bind to “receptor” proteins and cause changes in how cells communicate — as well as nucleic acids (molecules that contain key genetic information) and post-translational modifications (chemical changes that occur after a protein’s created).

Meanwhile, DeepMind continues to lose talent.

Mark Bergen, Benoit Berthelot, Lizette Chapman, and Sarah McBride, reporting for Bloomberg:

A pair of scientists at Google DeepMind, the Alphabet Inc. artificial intelligence division, have been talking with investors about forming an AI startup in Paris, according to people familiar with the conversations.

The team has held discussions with potential investors about a financing round that may exceed €200 million ($220 million) — a large sum, even for the buzzy field of AI, the people said. Laurent Sifre, who has been working as a scientist at DeepMind, is in talks to form the company, known at the moment as Holistic, with fellow DeepMind scientist Karl Tuyls, said the people, asking not to be identified discussing private information. They said the venture may be focused on building a new AI model.

Sifre was a co-author of the 2016 DeepMind research on Go, a seminal work that showed a computer system beating masters of the ancient game for the first time, which sparked an international frenzy over AI. Tuyls has worked on research into game theory and multi-agent reinforcement learning, a branch of AI that explores interactions between autonomous actors, often through video games.

Both Sifre and Tuyls are widely considered leaders in their field.

Google has a huge problem. On one side it continues to release the worst AI models in the industry. On the other hand, it continues to nurture the best AI talent in the industry just to see them leave.

Something has to change there.

A Chart to Look Smart

The easiest way to look smart on social media and gain a ton of followers? Post a lot of charts. And I mean, a lot. It doesn’t matter about what. It doesn’t even matter if they are accurate or completely made up.
You won’t believe that people would fall for it, but they do. Boy, they do.

So this is a section dedicated to making me popular.

Just before Davos, PWC published its annual Global Survey after polling 4,702 CEOs spread across 105 countries.

70% of the surveyed CEOs expect that generative AI to intensify competition in their industry. The same percentage expects the need to reskill their workforce.

The most critical insight is that those who actually tried generative AI are significantly more optimistic about its impact on their business.

Interestingly, only half of them expect a tangible increase in profitability or revenue. Most believe that generative AI will lead to an increase in efficiency.

And this leads to an expected headcount reduction in 2024 of at least 5% due to generative AI.

Unsurprisingly, many expect the Media and Entertainment industry to be the most impacted by generative AI. More surprising is that many believe that the second most impacted industry will be the Financial Services industry.

It’s true that the Financial Services industry is adopting generative AI faster than any other industry besides Media and Entertainment. But the data points we collected in almost one year with Synthetic Work don’t suggest a negative impact on the headcount. Quite the opposite.

When it comes to the adoption challenges, most CEOs are concerned about cybersecurity attacks, involuntary misinformation, legal liabilities, and reputational damage.

No disagreement here. The more we understand about large language models, the more cybersecurity attacks we seem to find.

The last chart from the survey that is worth mentioning is the one about the most significant barriers to reinventing the company. It’s not specific to generative AI, but it’s indirectly related.

Half of the global CEOs surveyed by PWC see the lack of internal skills as a factor inhibiting the capability to reinvent their company. Yet, companies rarely hire innovators, disincentivize them from taking risks, and do nothing to retain those talents. Change the way you hire and your incentives, and you perhaps will reinvent yourself.


Deloitte has published its first State of Generative AI in the Enterprise report.

They interviewed 2,835 business and technology leaders involved in piloting or implementing generative AI in their organizations.

Compared to the PWC survey, the audience here is probably more technical. You can tell by the overconfidence in generative AI skills present in the company:

Deloitte believes this is a byproduct of how they selected the subjects for the survey:

within the specific context of our survey, high levels of confidence seem entirely reasonable since we deliberately chose experienced leaders with direct involvement in AI initiatives at large organizations already piloting or implementing generative AI solutions. However, given how rapidly the field is unfolding, it may be worth questioning the extent to which any leader should feel highly confident in their organization’s expertise and preparedness.

I disagree here, and not just because even the people involved in the rolling out of generative AI in their organizations are not aware of how quickly the technology is evolving.

In one year of Synthetic Work, I’ve interacted with a surprising number of people who should really read the Splendid Edition of this newsletter to improve their generative AI skills, and they don’t because they are convinced that their skills are already high.
I’ve noticed that the more people are convinced that they don’t need to study more about the topic, the more beneficial it would be for them to learn more about it.

Back to the report, another interesting data point is that these AI adopters continue to hope that generative AI will lead to cost reduction (not in the sense of headcount reduction, but in the sense of reducing the cost of doing business).

As we said many times already in this newsletter, just like it happened for cloud computing, AI might make business operations more expensive, not less. Time will tell. But what artificial intelligence gives your organization for sure is an unprecedented speed in execution and an unprecedented capability to scale operations.

Sadly, but not surprisingly, only 26% of the respondents in this chart believe that generative AI will lead to a shift of workers from lower-value to higher-value tasks.

When it comes to adoption, the report highlights a trend towards off-the-shelf solutions:

the vast majority of respondents were currently relying on off-the-shelf solutions. These included productivity applications with integrated generative AI (71%); enterprise platforms with integrated generative AI (61%); standard generative AI
applications (68%); and publicly available large language models (LLMs) (56%), such as ChatGPT.

Relatively few reported using more narrowly focused and differentiated generative AI solutions, such as industry-specific software applications (23%), private LLMs (32%), and/or open-source LLMs (customized to their business) (25%).

This is not an indication of anything because open access and open source large language models are still subpar compared to GPT-4 and GPT-4-Turbo.

The minute a lab releases an open access/open source LLM that can achieve the same results as GPT-4, the adoption trends might change dramatically.

Moreover, enterprise adoption of generative AI is not limited to transformer models. Diffusion models are equally important in some industries, and there, the absolute majority of solutions are based on Stable Diffusion, which is an open access model, rather than on proprietary solutions like Midjourney, Adobe Firefly, or OpenAI DALL-E 3.

Another sign of overconfidence emerging from this report is in the following chart about the organization’s level of preparedness across four areas:

Once again, this doesn’t match my experience in the field in the last 12 months, even with technical leaders.

The fact that 60% of the respondents believe that their organization is moderately to very highly prepared, from a talent skill standpoint, is beyond optimistic.

Perhaps the most interesting chart in the report is about how the adoption of generative AI is influencing the workforce hiring and upskiling strategy:

Even in this case, the report paints a picture, especially about upskilling, that simply doesn’t match my experience in the field.

If your company is behaving in a way that is aligned with the data in this chart, let me know.


More than 3,000 game developers surveyed by the 2024 Game Developers Conference organizers show growing adoption of generative AI and growing worries about it.

The report shows growing adoption of generative AI tools across departments:

Developers at indie studios were most likely to use Generative AI tools, with 37% reporting that they are personally making use of the technology (compared to 21% of developers at AAA and AA studios). Business and marketing professionals were most likely to use them, while folks in quality assurance and narrative were the least likely.

What do game developers want to use these tools for? The bulk of respondents were interested in coding assistance and speeding up the content creation process. Developers were also intrigued by the idea of using AI to automate repetitive tasks.

However, there were several developers who made it clear that they see no use case for AI technology.

The report also shows widespread concern about the impact of AI on the gaming industry.

Unsurprisingly:

Developers working in business, marketing, and programming were more likely to say the technology would have a positive impact. Those in narrative, visual arts, and quality assurance were more likely to say the impact would be negative.

Some were worried about whether Generative AI usage could lead to more layoffs at game companies. Others expressed concerns about how the tools could supercharge copyright infringement of intellectual property, and whether AI toolmakers would train their models using data obtained without the creator’s consent.

In the meanwhile, I’m preparing to release a new version of my AP Workflow for ComfyUI that can upscale to 4K, with very high fidelity, any low-resolution image.

And given that seeing is believing:

As always, I did this with my consumer-grade computer in minutes. Industrial GPU farms could do entire movies, and entire video games, in a breeze.

If someone’s job in the gaming industry is related to creating or polishing assets, no wonder they are concerned.