Issue #14 - How to prepare what could be the best presentation of your life with GPT-4

May 26, 2023
Splendid Edition
In This Issue

  • How to write a presentation with GPT-4 using many of the techniques in the How to Prompt section of Synthetic Work
Intro

Do you remember when I told you that I was considering activating a new section of Synthetic Work about “What to Prompt?”

Well, this week’s Splendid Edition is dedicated to that. As an experiment.

Let me know if you like it or if you detest it. Or both.
Alessandro

What Can AI Do for Me?

This is a section dedicated to applying the prompting techniques in the How to Prompt section of Synthetic Work to real-world use cases.

In the last few Splendid Editions, we learned about a lot of different techniques on how to interact with GPT-4. Those techniques are listed in the How to Prompt section of Synthetic Work for your reference.

It’s now time to use some of those techniques on some real-world use cases, discovering how AI can help you get the job done, start to finish.

Let’s start with something that I enjoy doing, and did for 20 years, but that many people all around the world detest profoundly: writing a presentation.

I have one main reason to start from this use case: on social media and elsewhere, we are inundated with enthusiastic calls to action from hundreds of self-appointed "AI Guys", enticing us to try a wave of startups that promise to revolutionize our presentations with generative AI.

Sure.

These companies are using generative AI in the silliest way:

  • to select the best template for a slide YOU already created
  • to rephrase certain text YOU already wrote (something that dedicated tools like GrammarlyGO can do much better, as we saw in Issue #11 – Personalized Ads and Personalized Tutors)
  • to generate an image based on a text YOU have to provide
  • etc.

First of all, the burden of the creation is still on you. All this approach does is minimally refine what you have already come up with, often generating poor results that are rarely worth the effort and the money.

Second, they will all be blown to smithereens as soon as Microsoft and Google offer the same, for free, as part of Microsoft 365 and Google Workspace.

Third and more importantly…in fact, critically…these tools do nothing to define or improve your presentation narrative. They do nothing to help you achieve the only thing that matters when you present something to somebody: that your idea spreads.

This doesn’t mean that generative AI is useless for a complex use case like writing a presentation. Quite the opposite. As you’ll see below, it can transform the way you present.

In fact, we’ll go back over and over and over to this use case, as technology evolves and makes it easier and faster to accomplish the goal.

What you can do today is not flawless, but it’s already mindblowing.

Let’s get what you need

For this use case, I’m using GPT-4 Browsing, a particular variant of GPT-4 that is available as part of the ChatGPT Plus subscription. By the time you read this tutorial, everybody across the globe should have access to it.

Just go to chat.openai.com and log in, or sign up and pay your $20 ticket to the future.

OpenAI has also released an official iOS app but (1) for now, the app is available only in a few countries, (2) a smartphone is really not ideal for what we are about to do today, and (3) for now the app doesn’t have access to the GPT-4 Browsing variant that we want.

If you don’t want to pay for the ChatGPT Plus subscription, or you don’t have access to the GPT-4 Browsing variant, the next best thing is using the GPT-4 model that powers Bing Creative Mode inside the Microsoft Edge browser. Which is free.

If you are a Windows user, you already have Microsoft Edge and access to Bing.

If you are not a Windows user, download Microsoft Edge. Then, roll your eyes at the million popups and ads and surveys that Microsoft considers a great user experience.

Now, click on the huge Bing button in the top right corner. A sidebar will open and you’ll be able to choose a conversation style. Be sure to pick Creative Mode.

At this point, you are good to go.

Warning: using the GPT-4 model inside Bing instead of GPT-4 Browsing on the OpenAI website is not exactly the same experience. Not everything you'll read in this tutorial will apply 100%.

But you are a Synthetic Work member. As such, you are incredibly smart and handsome.

You’ll be able to make the necessary adjustments and go through it.

It’s a walk in the park for you.

You got this.

Let’s define the basics of the presentation

Let’s start by getting some help with the biggest problem we have when we write a presentation: most of us are not professional presenters and, even though we have to present our ideas every day (to colleagues, customers, partners, pets), our employers don’t think it’s absolutely imperative to pay for classes to master this discipline.

Excellent communication skills greatly improve collaboration and efficiency because (I hope you are seated) they allow us to understand each other immediately (instead of after 42 one-hour meetings).
So it makes complete sense that companies do not want to invest in developing presentation skills for the majority of their workforce.

Anyway.

AI can help us here by using the technique we called Get To Know What You Don’t Know. We encountered it in Issue #13 – Hey, great news, you potentially are a problem gambler.

This is a precious help both for you and for the AI to shape the narrative of the presentation.

If you are unfamiliar with some of the techniques mentioned by GPT-4, it’s a good idea to ask for an example about a random topic of choice. It doesn’t have to be the topic of the presentation.

Now we need to convince the AI to use these techniques. One thing is knowing that they exist. Another is applying them to the presentation we want to make.

To do this, we use the technique we called Assign a Role. We encountered it in Issue #8 – The Harbinger of Change.

As you’ll see below, the limited size of the GPT-4 context window will force us to remind the AI about this role multiple times.

As a reminder, the context window of an AI model is like the short-term memory in a human brain. It’s a temporary storage of information that we need to carry on a conversation for a brief period of time (like chatting with a friend for 1h at a coffee shop), or to carry on a complex action (like going to the kitchen to get milk from the fridge).

Until OpenAI enables the upcoming 32K-token context window, which we discussed in Issue #10 – The Memory of an Elephant, it’s good practice to remind GPT-4 of its assigned role in every prompt, or every few prompts. You’ll see below.
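As a rough illustration of this habit, here is a minimal sketch in Python. The role text and the reminder interval are my own placeholders, not values from this tutorial; the idea is simply to prepend the assigned role to the first prompt and to every few prompts after it, so it never scrolls out of the model's short-term memory:

```python
# A minimal sketch: re-inject the assigned role every few user prompts
# so it never falls out of the model's limited context window.
# ROLE and REMIND_EVERY are illustrative values, not from the tutorial.

ROLE = "You are a world-class public speaking coach and presentation designer."
REMIND_EVERY = 3  # re-state the role on every 3rd prompt

def build_prompt(user_message: str, turn: int) -> str:
    """Prefix the role reminder on the first turn and every few turns after."""
    if turn % REMIND_EVERY == 0:
        return f"Remember: {ROLE}\n\n{user_message}"
    return user_message

# The reminder appears on turns 0, 3, 6, ... and is omitted in between.
prompts = [build_prompt(f"Work on slide {i + 1}.", i) for i in range(4)]
```

Sending the assembled string to the model is left out on purpose: the point is only the cadence of the reminder, which you can tune based on how quickly the model starts to drift.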

Next, we want to be sure that the AI has all the information necessary to help us. For this, we use the technique called Ask For Follow-up Questions. We encountered it in Issue #8 – The Harbinger of Change as well.
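If you want a template for this technique, here is one possible wording (a sketch of mine, not the exact prompt used in the session): the key is to explicitly forbid recommendations until the model has gathered the missing information.

```python
# A sketch of the "Ask For Follow-up Questions" technique: instruct the
# model to gather missing information before producing anything.
# The wording below is illustrative, not the prompt used in the tutorial.

followup_prompt = (
    "I need to prepare a presentation. Before suggesting anything, "
    "ask me every follow-up question you need to fully understand "
    "the audience, the topic, the speaking slot, and the goal. "
    "Only after I have answered should you make recommendations."
)
```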

At this point, we are ready to hear GPT-4’s recommendations for the high-level aspects of the presentation.

Don’t be afraid to push back if something doesn’t sound right. And, if it’s really terrible, remind GPT-4 of its assigned role.

Let’s start with the title:

I don’t know about you but I’m already loving it. I know that this will not be a TED Talk quality presentation, but it’s the best thing ever invented to create a reasonable draft to brainstorm or to build on in no time.

Or, if nothing else, it’s a great tool to quickly understand what you don’t want to do, or what you don’t want to talk about. As you’ll see below, in fact, visualizing the structure of the presentation right away gives you an immediate idea of how good the approach is.

You are effectively prototyping a presentation.

Let’s ask about how many slides the presentation should have (we already told GPT-4 how much time is allocated for the speaking slot, so it should be able to do the job):

That’s a very good structure for a presentation, but way too dry for my taste. I didn’t expect GPT-4 to generate the entire deck structure as part of this answer; otherwise, this would have been a good opportunity to remind it of its assigned role.

Here you can proceed in many different ways. One approach is to ask for ten different variants. Remember that, unlike a person, the AI has infinite energy and can create as many variants as you need.

However, this approach works well only if you truly have no idea of what you want and you are looking for inspiration. In this case, I know what I want.

I want a presentation structure that is less dry. And so I’d rather use the technique we called Request a Refinement. We encountered it in Issue #8 – The Harbinger of Change as well.

Better, but not quite there. Some more refinement, please:

At this point, you might decide that the narrative structure suggested so far doesn’t follow one of the various techniques that GPT-4 listed in its answer to my first prompt closely enough. If so, this is the time to push for an alternative narrative based on the Hero’s Journey, for example.

I don’t want to make this tutorial too complicated, so we’ll skip this step for now and add it in future revisions.

Regardless, one thing that seems obvious to me from this and the following answers is that GPT-4 doesn’t quite understand what surprises people. Or, at least, we don’t have the same idea of what can be considered surprising. But, let’s go with it.

Also, GPT-4 is not exactly a champion of originality. So we better ask for less banal examples:

OK. Let’s say that we settle for this presentation structure.

Let’s define the high-level content of each slide

It’s now time to be properly spoon-fed by asking what we should put in each slide. This is a critical time to remind GPT-4 of its assigned role.

Notice that here, for the first time since we started, GPT-4 invokes the use of its browsing capability and retrieves a very useful statistic to put in slide 2.

From my tests, GPT-4 Browsing is really unstable and the websites it visits are questionable, but in this case, it did an excellent job.

However, my prompt contains a fatal mistake: it doesn’t make it clear enough that the AI should talk about one slide at a time and not progress any further until I ask it to do so.

Failing to achieve this, especially when the interaction has been going on for a while like in this case, may consume the entire GPT-4 context window. And when that happens, as we said many times, a large language model like GPT-4 starts to forget everything it has been told before.

In this case, in fact, GPT-4 has completely forgotten the presentation structure it previously recommended, describing slides with the wrong titles, and fewer of them than we agreed on. I cut the screenshot short to make it easier to read, so you have to trust me on this.

Going forward, this problem will go away. On top of the imminent 32K-token context window for GPT-4, if you read the news, you know that Anthropic released a version of its AI model Claude with a 100K-token context window.

Until then, how do we fix this problem?

In two ways. First, we remind the AI of what we previously agreed on. Second, we write a more exact prompt to explain that only one slide at a time must be elaborated:

Much better.
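To make the one-slide-at-a-time instruction concrete, here is one way to build such a prompt as a sketch. The slide titles in the `slides` list are placeholders of mine, not the structure GPT-4 actually generated; the important parts are restating the agreed structure and explicitly forbidding the model from moving past the requested slide:

```python
# A sketch of a prompt that restates the agreed deck structure and forces
# the model to elaborate exactly one slide at a time. The slide titles
# below are placeholders, not the ones from this tutorial.

slides = [
    "1. Title slide",
    "2. A surprising statistic about the topic",
    "3. A historical parallel",
    # ...the rest of the agreed structure...
]

def one_slide_prompt(slide_number: int) -> str:
    """Remind the model of the whole structure, then constrain it to one slide."""
    structure = "\n".join(slides)
    return (
        "We previously agreed on this presentation structure:\n"
        f"{structure}\n\n"
        f"Now elaborate ONLY slide {slide_number}. "
        "Do not describe any other slide until I explicitly ask you to."
    )
```

Because the full structure is repeated in every prompt, this also doubles as the reminder that keeps the deck from drifting as the conversation grows.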

The question now is: given this first sign that GPT-4 is starting to forget things, will it have forgotten all the directives we have given it at the beginning on how to write a presentation?

Let’s see what it recommends for slide 2:

This is going better than I expected. It still remembers to be thought-provoking. And it volunteered the addition of a chart without me pushing for a more visual narrative.

Let’s see what happens with slide 3:

I’m very happy.

Notice that I’m not yet fact-checking the historical facts that, supposedly, the AI has retrieved from its training dataset or from the Internet via the browsing capability. It makes no sense to do so now as much of the presentation could still change. I’d rather do it at the very end when everything is crystallized.

Everything goes well up to slide 4:

From slide 5, unfortunately, GPT-4 starts to lose its memory again, giving us the content recommendation for a completely different slide than what we agreed on.

Just like before, I have to remind it what was supposed to be the rest of the presentation, and then it continues diligently with slide 5:

Here a good thing and a bad thing happen.
The good thing is that GPT-4 still remembers that it needs to be surprising for the audience.
The bad thing is that it slightly changed the structure of its content recommendation so there are more talking points, but no suggestion about a visual element anymore.

This gives me the excuse to formalize what I want to see in these content recommendations. I had no idea before, but after being inspired by the first few slides, now I can be more precise.

Let’s polish each slide

First, I let GPT-4 finish its recommendations for slides 6-11 (slide 12 is the Q&A placeholder). Then, it’s time to submit a more precise prompt to polish each slide:

As you can see, this approach doesn’t work because GPT-4 is clearly struggling with its context window.

I could attempt to correct the AI, reminding it that this is not the title of the first slide we agreed on, but things will probably only get worse from here on.

A much better strategy is to start a new GPT-4 session, with a different prompt designed to take the previous recommendations for each slide and further polish them, one by one.

In general, I am against writing huge prompts. Contrary to the recommendations you’ll find on social media, I discourage using them because I believe they increase the friction in an interaction that should otherwise feel effortless.

It’s like asking a person to prepare a speech before talking to a colleague about a problem.

In this case, we have no choice, but the mega prompt you see below is nothing more than a copy & paste of what we previously told GPT-4 to do through multiple interactions. For now, we are forced to use this trick.

Let’s see if the AI has understood the task and its short-term memory, the context window, doesn’t fail me again. Slide 1:

As you can see, the prompt for the image generator is quite generic. It would work decently with Midjourney but not so much with Stable Diffusion, Adobe Firefly, or Dall-E 2.
This is because we didn’t give GPT-4 enough information on how to generate prompts for these systems. So it’s easily fixable. But for now, we won’t:

  • Midjourney is still only available via Discord, which means that many, many people won’t use it.
  • Stable Diffusion is the most powerful, but also the most complex AI model for image generation and very few experts can use it successfully.
  • Adobe Firefly is still in beta and exclusively trained on stock photos so that everything it generates looks plasticky.
  • Dall-E 2 is now embarrassingly inadequate compared to the other three and will soon be replaced by Dall-E 3.

For these reasons, there’s no point in being more precise about the image generation aspect of the presentation right now. These reasons are also why I don’t share prompting techniques about image generation in the Splendid Edition of Synthetic Work just yet.

We’ll revisit this tutorial on how to make an AI-assisted presentation when these tools are more readily available and mature.

For now, let’s see how GPT-4 refines slide 2:

As you can see, GPT-4 still remembers the format we need to follow and even tried to find an infographic online to support the slide. The OpenAI plugin technology is way too immature to deliver on this, but it gives you an idea of the potential of this technology.

Let’s see what happens with slide 3:

Unfortunately, here, GPT-4 starts to forget the structure I indicated for its recommendations. It’s a sign that the context window might be full.

So, the best approach here is to use the mega prompt I composed before in a new session for each slide. Do you have 12 slides, like me in this case? Then open 12 GPT-4 sessions, each dedicated to polishing one slide.
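Sketched in code, the strategy is simply that every slide gets its own, brand-new conversation, so the mega prompt always starts with an empty context window. The `MEGA_PROMPT` text below is a stand-in of mine for the real one assembled earlier in the tutorial:

```python
# A sketch of the "one fresh session per slide" strategy: each slide gets
# its own empty message history, so nothing from another slide's polishing
# can crowd the context window. MEGA_PROMPT is a placeholder.

MEGA_PROMPT = (
    "You are a world-class presentation coach. "  # the assigned role
    "Polish the slide below: keep 3-5 talking points, suggest one visual "
    "element, and propose a prompt for an image generator.\n\n"
    "Slide: {slide}"
)

def fresh_session_for(slide_description: str) -> list[dict]:
    """Return a brand-new message history containing only the mega prompt."""
    return [{"role": "user", "content": MEGA_PROMPT.format(slide=slide_description)}]

# 12 slides -> 12 independent sessions, none sharing context with another.
sessions = [fresh_session_for(f"Slide {i + 1}: ...") for i in range(12)]
```

The trade-off is the friction of opening a new chat per slide; the payoff, as noted below, is a full context window to spend on each slide individually.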

While this approach increases the friction, there are two benefits.

The first benefit is that this is still infinitely faster than writing the presentation all by yourself. I’ve been on stage for over 20 years, an average of once per week, and I can guarantee you that this would have helped me immensely in creating more presentations about more topics I wanted to discuss with my audience.

The second benefit is that, if we dedicate an entire GPT-4 session to polishing a single slide, then we probably have enough space in the context window for a detailed explanation of how to create prompts for Midjourney and the other AI models for image generation.

And that’s it.

If you have followed the process, you should have:

  • A presentation title
  • A slide deck structure
  • A first, high-level description of each slide in the deck
  • A second, detailed list of talking points for each slide

Given the presentations I’ve seen in my career, I’d say that at this point your presentation should be already significantly better than 99% of the decks that are out there.

Let’s go over the top

If you want to push further, there are a few additional things that you might consider exploring:

Ask GPT-4 to articulate the transition between each slide. This must be asked while it generates the first round of descriptions for each slide and not during the polishing phase. This is because, during the polishing phase, the AI looks at the slides individually and doesn’t have a sense of the whole narrative anymore.

Ask GPT-4 to further explain the talking points. If the talking points are still too generic or you don’t understand the logic behind them, you could use the technique we called Awake the Curious Child to get a more articulated chain of thoughts.
We encountered this technique in Issue #9 – The Tools of the Trade.

Ask GPT-4 for further refinements on one or more slides. If you don’t think that one slide carries the characteristics you detailed in your prompt, even after the polishing phase, you can use the technique Request a Refinement that we previously mentioned, until the slide is thought-provoking, surprising, inquisitive, unsettling, reassuring, etc. enough for your taste.

 

OK. I think that’s enough for this week.

If you use the approach described in this Splendid Edition, let me know how it goes. I’ll keep refining the approach, also thanks to your help.

And if you know people who get anxious when they need to present to an audience, share this newsletter with them. They might be forever grateful.