Issue #32 - Moving Targets

October 7, 2023
Free Edition
Generated with Stable Diffusion XL and ComfyUI
In This Issue

  • Intro
    • Building a Human-AI company.
  • What Caught My Attention This Week
    • The chairman and CEO of JPMorgan Chase confirms that AI will displace some jobs, and predicts a work week of 3.5 days for the next generation.
    • The former chief economist at the US Department of Labor believes that AI is going to eliminate millions of jobs.
    • GitHub CEO promises that “The demand for software developers will continue to outweigh the supply”.
  • The Way We Work Now
    • Some schools have accepted that AI is here to stay and defined “acceptable AI uses” for their students.
  • How Do You Feel?
    • OpenAI encourages you to have quite an emotional, personal conversation with the new talking ChatGPT.
  • Putting Lipstick on a Pig
    • Daniel Dennett believes AI will destroy trust and, with it, civilization.
Intro

Since I left my former employer in December 2022, I’ve been building a company that, essentially, relies on human collaborators and AIs.

Not in the sense that the people I work with use AIs to automate part of their work, but in the sense that certain roles within the company are primarily assigned to AIs while others are primarily assigned to humans.

This is by design.

No human has lost his/her job because, from day 1, certain jobs were assigned to AIs and AIs only. I wanted to research and experience what happens when you treat AI models as collaborators, side by side with their human counterparts.

Well. It’s the strangest experience I’ve ever had in my career. And there’s no instruction manual to rely on.

Over the months, the astonishing progress in AI has allowed me to execute more and more of this vision.

For example, in last week’s Splendid Edition, I showed how I assembled a group of AI models to act as my advisors, each assuming a different role: a marketing advisor, a business advisor, a sales advisor, a negotiator, etc.

The extraordinary thing is that, for the first time ever, I managed to get these synthetic advisors to debate my problem with each other, without my intervention.

I witnessed them asking each other questions, disagreeing with and criticizing each other, until they converged on a consensus.
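Mechanically, the setup is simpler than it sounds. Here is a minimal sketch of such a debate loop in Python, using the OpenAI chat API (the pre-1.0 openai package, current as of this writing); the personas, prompts, and fixed three-round stopping rule are illustrative placeholders, not the actual configuration from the Splendid Edition:

```python
# Minimal sketch of a multi-advisor debate loop.
# Assumes the openai Python package (pre-1.0 API) and an OPENAI_API_KEY
# in the environment. Personas and prompts are illustrative placeholders.
import openai

ADVISORS = {
    "Marketing Advisor": "You are a seasoned marketing advisor. Challenge weak assumptions.",
    "Business Advisor": "You are a pragmatic business advisor. Focus on costs and risks.",
    "Sales Advisor": "You are a veteran sales advisor. Focus on what customers will pay for.",
}

def ask(persona: str, transcript: str) -> str:
    """Ask one advisor to react to the debate so far."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": transcript},
        ],
    )
    return response["choices"][0]["message"]["content"]

problem = "Should we launch the new product in Q1 or wait until Q3?"
transcript = f"Problem: {problem}\n"

# Each round, every advisor sees the full transcript and responds to the others.
for _ in range(3):  # a fixed number of rounds as a naive stopping rule
    for name, persona in ADVISORS.items():
        reply = ask(persona, transcript + "\nRespond to the other advisors. Be brief.")
        transcript += f"\n{name}: {reply}\n"

print(transcript)
```

A real setup would replace the fixed round count with a convergence check, for example by asking a fourth model to judge whether the advisors still disagree.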

It’s mind-bending that we can do this.

It’s mind-bending to think that, in the near future, any business leader will have an entire council of AI advisors ready to discuss any challenge, 24/7, instantaneously, leveraging our species’ collective knowledge, first encoded in business/science/engineering/finance books, and then compressed into a single AI model.

In watching the exchanges among his/her AI advisors, any business leader will have a chance to gain new perspectives or reconsider his/her biases. And, at some point, he/she will be able to count on the encoded experience of accomplished leaders who worked in the most admired companies in the world.

And now, the advent of GPT-4V will further accelerate my plan to build a human-AI company.

Alessandro

What Caught My Attention This Week

The chairman and CEO of JPMorgan Chase, Jamie Dimon, confirms that AI will displace some jobs, and predicts a work week of 3.5 days for the next generation.

Emily Chang, interviewing him for Bloomberg:

Q: But it is going to replace some jobs?

A: Of course. Yeah, but, look folks, people have to take a deep breath, okay? Technology has always replaced jobs. Your children will live to 100 and not have cancer because of technology, and literally they’ll probably be working three and a half days a week.

If, for JPMorgan, it replaces jobs, you know, we hope to redeploy people. Like at First Republic, you know, we’ve offered jobs to, like, 90% of the people, they accepted, but we also, you know, we’ve told them some of those jobs are transitory. But we hire 30,000 people a year. So we expect to be able to get them a job somewhere local in a different branch or different function if we can do that. And we’ll be doing that with any dislocation that takes place as a result of AI.

The full interview is interesting and well worth listening to:


The former chief economist at the US Department of Labor, Betsey Stevenson, believes that AI is going to eliminate millions of jobs.

From her column for Bloomberg:

Artificial intelligence is going to eliminate jobs — millions of them. The uncertainty surrounds which jobs will be lost, and what kinds of jobs will arise to replace them.

The US (and the world) has more jobs today than it did at the start of the 21st century. At the same time, plenty of occupations have declined due to technology. In 2023, there were only 32,000 people working in word processing and typing occupations, a sharp fall from 282,000 in 2000. Similar trends were seen in larger occupations such as sales and office workers, a category that has shed 6 million jobs since the start of the century.

Jobs typically evolve rather than disappearing outright. Some tasks are eliminated, some are added, until eventually the new version of the job no longer looks like the old one. Strictly speaking, the old job has “disappeared,” but if this evolution works well, the worker has not. For example, some office workers are now classified as managers because they have learned to use sophisticated software to manage human resources or payroll functions. So while there may be fewer office workers, there are more people in management positions.

In the past, technological change tended to reward the most skilled workers, because the technology was a complement to their skills.

A recent survey asked a panel of economists whether they thought AI would have a negative impact on the earnings potential of highly skilled workers. They were largely split: Most were uncertain, and almost as many agreed that it would. No one strongly disagreed.

And it’s not just anxiety about jobs; there is also a lot of worry about education. Is a college degree still worth it? Here the surveyed economists were divided — a little more than half agreed that AI would lead to substantially greater uncertainty about the likely returns to investment in education, while nearly 40% disagreed.

My view is that the fear over declining returns to education is overblown. In 1980, college grads earned only about 40% more than those without a college degree. Today they earn roughly 80% more.

In theory, new technology allows humans to consume more while working less. In reality, however, technology displaces workers, as fewer people are needed to produce the same amount of goods or services.

The full survey is here, but let’s get to the key point:

I count myself among those economists who believe generative AI will lead to rising returns to human skill. The most highly skilled people tend to be better positioned to adapt as jobs change. They often find ways to ensure that their skills are enhanced by technology rather than replaced by it.

To be certain, there are big unknowns. Universities need to make sure that students get skills that AI can complement rather than replace.

This is the issue. Nobody, and certainly not the economists, has a clear idea of how much and how quickly large language models are evolving.

You cannot ask schools to focus on teaching skills that AI can complement rather than replace, because generative AI will swallow even more skills in the next six months. You just have to read this week’s Splendid Edition to realize the enormous impact that the new GPT-4V model will have on a broad range of industries.

The only way to truly understand the pace of change is by monitoring the research that I regularly share in the Splendid Edition of Synthetic Work and testing how robust that research is. This is the job that I’ve been doing since the beginning of 2023.

It’s not a job that most companies are prepared to do. If you don’t change that, and you simply wait for the technology to hit the market, as you have done with every other technology wave, this time it might be too late to react.


GitHub CEO promises that “The demand for software developers will continue to outweigh the supply”.

Paul Sawers, reporting for TechCrunch:

GitHub CEO Thomas Dohmke considers AI and software development to now be inextricably linked…But speaking onstage at TC Disrupt today, Dohmke maintained that the snowballing AI revolution won’t be the death knell for the software development industry.

“The amount of software in 10 years is only going to exponentially grow,” Dohmke said. “We have an ever growing number of lines of code we have to manage, we have an ever-growing number of ideas that we have, and quite frankly, every company is now a software company.”

Although AI is undoubtedly here to stay, Dohmke noted that while software development might evolve, there are several reasons why developers will still be in high demand for the foreseeable future. One being the sheer amount of legacy code out there that still exists in its original form.

“If you go to the banks and financial institutions and talk to the CTO, they’ll tell you that they’re running COBOL code from the sixties, and those developers from the sixties are all retired now,” Dohmke said. “And that code back then was not written with unit tests and with CI/CD, so somebody has to maintain that and, hopefully, transform that COBOL code to Java or Python. And we’re not even talking yet about code from the seventies, the eighties, or the nineties.”

Of course. What human being wouldn’t be thrilled at the idea of starting a career as a developer when the prospect is spending the next 40 years maintaining COBOL code?

I spent the last ten years in a software company, and the one thing no software engineer wants to do is port and maintain legacy code.

Also, this argument assumes that the process cannot be automated, and there’s no reason to believe that it can’t be.
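In fact, the scaffolding needed to automate it is almost trivial. Here is a minimal sketch, again in Python with the pre-1.0 openai package; the COBOL snippet is an illustrative toy, and a real migration would obviously require regression tests and human review:

```python
# Minimal sketch: asking a large language model to translate legacy COBOL
# into Python. Assumes the openai Python package (pre-1.0 API) and an
# OPENAI_API_KEY in the environment. The COBOL snippet is a toy example.
import openai

cobol_source = """
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADD-INTEREST.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 BALANCE      PIC 9(7)V99 VALUE 1000.00.
       01 RATE         PIC 9V999   VALUE 0.035.
       PROCEDURE DIVISION.
           COMPUTE BALANCE = BALANCE * (1 + RATE).
           DISPLAY BALANCE.
           STOP RUN.
"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You translate COBOL programs into idiomatic, "
                       "well-commented Python. Preserve the behavior exactly.",
        },
        {"role": "user", "content": cobol_source},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Loop that over a repository, add a test harness to compare outputs, and the “somebody has to maintain that” argument starts to look much weaker.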

But let’s put aside this particular use case for a moment. Let’s talk about the alluring part of Dohmke’s argument: the notion that we’ll have an ever-growing number of ideas to implement.

Nobody questions that.

If you are a long-time reader of Synthetic Work, you’ve read me saying that AI is finally democratizing the act of (digital) creation, allowing anyone to create anything they want without spending vast amounts of time and money learning how to do it.

The point in question is who’s going to build those ideas.

If you are so sure that Dohmke is right, I invite you to read a recent comment made by the famous software developer Simon Willison, who co-created the popular Django web framework and is now working on Datasette, a tool for exploring and publishing data.

In the latest episode of the Rooftop Ruby podcast, he said about the GPT-4 Advanced Data Analysis model (previously called Code Interpreter):

It gave me an existential crisis a few months ago, because my key piece of open source software I work on, Datasette, is for exploratory data analysis. It’s about finding interesting things in data.

I uploaded a SQLite database to Code Interpreter and it did everything on my roadmap for the next two years. It found outliers, and made a plot of different categories.

On the one hand, I build software for data journalism and I thought “This is the coolest tool that you could ever give a journalist for helping them crunch through government data reports or whatever.”

But on the other hand, I’m like, “Okay, what am I even for?” I thought I was going to spend the next few years solving this problem and you’re solving it as a side effect of the other stuff that you can do.

So I’ve been pivoting my software much more into AI. Datasette plus AI needs to beat Code Interpreter on its own. I’ve got to build something that is better than Code Interpreter at the domain of problems that I care about, which is a fascinating challenge.
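To put his reaction in perspective: the analysis Code Interpreter performed unprompted is roughly what a data analyst would otherwise write by hand. A minimal sketch of that manual version, assuming a hypothetical SQLite database with a sales table:

```python
# Minimal sketch of the kind of exploratory analysis Code Interpreter
# performs unprompted: load a SQLite database, flag outliers, and plot
# category totals. The database path, table, and columns are hypothetical.
import sqlite3

import pandas as pd
import matplotlib.pyplot as plt

conn = sqlite3.connect("data.db")
df = pd.read_sql_query("SELECT category, amount FROM sales", conn)
conn.close()

# Flag outliers: rows more than three standard deviations from the mean.
z_scores = (df["amount"] - df["amount"].mean()) / df["amount"].std()
outliers = df[z_scores.abs() > 3]
print(f"{len(outliers)} outliers found:")
print(outliers)

# Plot totals per category.
df.groupby("category")["amount"].sum().plot(kind="bar", title="Total amount by category")
plt.tight_layout()
plt.savefig("categories.png")
```

Code Interpreter writes and runs the equivalent of this, end to end, from a single natural-language request.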

Willison’s is an arresting statement, coming from an extraordinary software developer.

How will the average software developer react when facing the same existential crisis?

And always remember: these models are not static in time. These existential crises won’t show up only once, like a mid-life crisis does.

Every time a new generation of GPT is released, average developers, designers, writers, etc. will have to ask themselves again: “Okay, what am I even for?”

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, or the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

The job of the teacher and the job of the student are changing dramatically. That’s why the very first Splendid Edition, back in February, was fully dedicated to the impact of AI on the Education industry: Issue #1 – Burn the books, ban AI. Screwed teachers: Middle Ages are so sexy.

Some schools insist on relying on AI detection tools, despite every non-financially-biased AI expert in the world telling them that no, they don’t work (including me, back in February). Even OpenAI told them that they don’t work.

Other schools have accepted that AI is here to stay and are trying to set boundaries for its use. So, on Reddit, you get to see posts like this one:

While this approach is much better than an outright ban of ChatGPT and its competitors, the non-obvious implication of this policy is that every time students use AI in an “acceptable-use” situation, they will learn to trust that AI a little bit more, and the knowledge of their teachers will become a little less relevant.

Fast forward a few years, and a few GPT models later, and the job of the teacher might become very similar to the job of the manager in a big corporation: not so much about what you know, but a lot about how you can manage people, students in this case, and push them to do more.

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

GPT-4V is not the only new feature that OpenAI introduced recently. Together with vision, ChatGPT acquired a voice. And if you are a long-term reader of Synthetic Work, you know how much emphasis I put on the importance of voice in the future of AI.

ChatGPT can now speak with five different voices which, one day, will probably become customizable according to user preferences.

Why is voice so incredibly important?

Because voice has an emotional impact on the user that no other form of expression can match. At least for now.

We humans do not create an emotional bond with graphical interfaces because their appearance is too different from how we are made. But we create an emotional bond with an entity that uses language like us. It’s the so-called Eliza effect, which we talked about in so many issues of Synthetic Work. And we certainly create an emotional bond with an entity that sounds like us.

If you still have doubts about this, even after reading all the stories we shared here about the users of Replika and Character.ai, you just have to look at what OpenAI’s Head of Safety is suggesting people do on X:

What OpenAI’s Head of Safety doesn’t say, though, is that if we develop a strong emotional bond with ChatGPT, everything ChatGPT says has a huge impact on our lives.

If, in its answers, ChatGPT suggests using product X instead of Y, or suggests going to restaurant A instead of B, that recommendation is perceived more deeply than, for example, the recommendations we read in Google search results.

And of course, if ChatGPT’s recommendation is the result of sponsorship, the voice of our AI becomes disproportionately valuable, having the ability to influence us more deeply than today’s advertising, and from a very young age.

We’ve already seen that Microsoft didn’t waste a second in inserting sponsored answers into the version of GPT-4 that’s part of Bing (more on this in a future Free Edition). So expect to see the same thing now that ChatGPT has a voice of its own.

The impact of a talking AI is underestimated because Siri, Alexa, and Google Assistant have been practically useless since, well, ever. We might have grown accustomed to their voices, but they never said anything remotely resembling a conversation with another person for more than ten seconds.

Our reaction to an AI that can talk and answer in the same way GPT-4 does in a written form is a whole different story.

Now, there isn’t just a dystopian side to all of this. We shouldn’t come to the conclusion that just because ChatGPT can talk, its voice will be used solely to manipulate us. There are also extremely positive applications worth considering.

For example, imagine the huge opportunity for people with certain disabilities. These people can now rely on GPT-4 to communicate in a way that’s infinitely more effective than the best voice interface ever developed. For these people, ChatGPT’s voice can truly elevate their quality of life.

And then, of course, there are endless business opportunities: a talking AI is an infinitely more interesting teacher to follow, or a doctor that’s infinitely easier to understand, or a customer service representative that’s infinitely more accessible and pleasant.

The voice of artificial intelligence is the most important quality to bet on in the long run. Our emotions will make it more powerful than any other interface.

Putting Lipstick on a Pig

This section of the newsletter is dedicated to AI tools and services that help people and things pretend to be who/what they are not. Another title for this section could be Cutting Corners.

Daniel Dennett, revered philosopher and cognitive scientist, just published his memoir: I’ve Been Thinking.

We have previously mentioned his position on generative AI in this newsletter. He’s sharing more now, and what he’s saying is worth framing in the context of this section of Synthetic Work.

Taylor McNeil interviewed him on behalf of Tufts University, where he served as a professor until last year:

Q: You talk in the book about the early days of AI. Do you think that back in those days you would have imagined that it could turn out the way it has?

A: The large language models, like ChatGPT—the generative pre-trained transformers—are largely unanticipated, certainly by me, but also by many people in the field. Even some of the developers had no idea that they would get so good, so fast. That’s been not just surprising, but shocking and even scary to some of the leaders in the field.

Q: Where do you see AI going? Do you think that it’s something we should be concerned about?

A: A thousand times yes. In fact, in the last few months, I’ve been devoting almost all my energy to this.

I’m an alarmist, but I think there’s every cause for alarm. We really are at risk of a pandemic of fake people that could destroy human trust, could destroy civilization. It’s as bad as that. I say to everybody I’ve talked to about this, “If you can show that I’m wrong, I will be so grateful to you.” But right now, I don’t see any flaws in my argument, and it scares me.

The most pressing problem is not that they’re going to take our jobs, not that they’re going to change warfare, but that they’re going to destroy human trust. They’re going to move us into a world where you can’t tell truth from falsehood. You don’t know who to trust. Trust turns out to be one of the most important features of civilization, and we are now at great risk of destroying the links of trust that have made civilization possible.

Q: AI destroying trust is an unintended consequence, not an intentional feature, right?

A: Yes. AI systems, like all software, are replicable with high fidelity and unbelievably fast mutations. If you have high fidelity replication and mutations, then you have evolution, and evolution can get out of hand, as it has in the past many times.

Darwin wonderfully pointed out that the key to domestication is control of reproduction. There are species that hang around human houses and farms that are synanthropic. They evolved to live well with human beings, but we don’t control their replication. Bedbugs, rats, mice, pigeons—those are synanthropic, but not domesticated.

Feral species are ones that were domesticated and then go feral. They don’t have our interests at heart at all, and they can be extremely destructive—think of feral pigs in various parts of the world.

Feral synanthropic software has arrived—today, not next week, not in 10 years. It’s here now. And if we don’t act swiftly and take some fairly dramatic steps to curtail it, we’re toast.

We will have created the viruses—the mind viruses, the large-scale memes—that will destroy civilization by destroying trust and by destroying testimony and evidence. We won’t know what to trust.

Q: This seems like evolution at work.

A: Absolutely it is. This is cultural evolution. My dear friend and colleague Susan Blackmore, who wrote the book The Meme Machine, has been talking since the time she wrote that book about a third kind of replicator, which she calls “tremes”—technological memes that don’t depend on being replicated by human minds, but can be replicated by other software, taking the human being right out of the picture.

We’ve known about this for 20 or 30 years, but now recent experiments have basically shown that this is not just possible in principle, it’s possible right now.

The counter-argument to this is that we live in a world where almost everybody knows that we can’t trust politicians, tabloids, big corporations, and even our own senses (as our perception of reality is mediated by our brains in odd ways).

Logic suggests that our society should descend into chaos, paralyzed by the impossibility of trusting anything. And yet, it’s not happening. We are, for the most part, a thriving species.

This apparent paradox is maddening.


Want More? Read the Splendid Edition

This week’s Splendid Edition is titled Hackathons For Fun and Profit.

In it:

  • Intro
    • Search is coming.
  • What’s AI Doing for Companies Like Mine?
    • Learn what Walmart, the UK Court of Appeal Civil Division, and JP Morgan Chase are doing with AI.
  • A Chart to Look Smart
    • The new GPT-4V model unlocks a wide range of business applications.
  • What Can AI Do for Me?
    • Let’s organize an internal hackathon for the company employees to invent new business products with generative AI.