Issue #48 - The forest for the trees

February 10, 2024
Free Edition
Cover image generated with Stable Diffusion XL and ComfyUI.
In This Issue

  • What Caught My Attention This Week
    • Microsoft announced a new initiative to train 2 million Indians on AI by 2025.
    • The UK Intellectual Property Office fails to convince AI providers and news organizations to agree on a voluntary code of practice on copyrighted material.
    • Sainsbury’s is preparing to roll out automated tills, warehouse robots, and AI forecasting tools. Job cuts have not been ruled out.
  • The Way We Work Now
    • The winner of the Japanese Akutagawa Prize reveals that she copied 5% of her book from ChatGPT.
    • A new book uncovers how employers are using AI candidate-screening algorithms.
  • How Do You Feel?
    • A Nobel Prize winner cautions younger generations against piling into STEM subjects.
What Caught My Attention This Week

Let’s start with some good news: Microsoft announced a new initiative to train 2 million Indians on AI by 2025.

From the official blog post:

The ADVANTA(I)GE INDIA initiative is part of Microsoft’s Skills for Jobs program, which is designed to empower India’s workforce with future-ready skills. The initiative is part of Microsoft’s broader commitment to accelerate India’s AI transformation. The skilling initiative is aligned with the company’s responsible AI principles, and training will be delivered in partnership with governments, nonprofit and corporate organizations, and communities.

According to Microsoft’s recent Work Trend Index, 90 percent of Indian leaders say the people they hire will need new skills to prepare them for the growth of AI. Furthermore, 78 percent of Indian workers say they don’t have the right AI capabilities to complete their current work.

To address this needs gap, ADVANTA(I)GE INDIA will focus on training individuals in Tier 2 and Tier 3 cities, as well as rural areas, enabling people to participate in the new era of AI and unlock inclusive socio-economic progress.

The ADVANTA(I)GE INDIA initiative will focus on three key areas to create AI fluency – equipping India’s future workforce, upskilling government officials in AI, and working to build the AI capability of nonprofit organizations.

To deliver ADVANTA(I)GE INDIA, Microsoft will partner with India’s Ministry of Skill Development and Entrepreneurship and 10 state governments to provide basic and advanced training in AI to 500,000 students and job seekers in 100 rural vocational education institutions and training centers.

In addition, Microsoft will provide in-depth AI technical skills training for 100,000 young women through 5,000 trainers at higher education institutions in Tier 2 and Tier 3 cities. This will be achieved by making Microsoft’s AI Trainer Toolkit Guide available for trainers and strengthening skilling programs for women in cloud, AI, and cybersecurity with AI credentials. Microsoft will also provide access to Azure AI services to build tech solutions, and foster industry collaborations for mentorship, internships, and jobs.

As part of the initiative, Microsoft will raise awareness of responsible AI use and AI-enabled careers for 400,000 students in schools in remote and tribal regions, enabling them to be next-generation AI innovators. This will be achieved by piloting three of Microsoft’s global initiatives: Technology Education and Literacy in Schools (TEALS), Farm Beats for Students, and the AI Guidance for Schools Toolkit for teachers.

Microsoft will strengthen its partnership with India’s National Programme for Civil Services Capacity Building, equipping 250,000 government officers with essential knowledge of generative AI and increasing their AI fluency.

This partnership will help enhance the productivity of government officers and transform digital governance in rural India. It will also build capabilities for investments in the next generation of AI-enabled citizen services, meeting citizens where they are located.

Building on the Generative AI Skills Challenge, Microsoft and LinkedIn will convene India Nonprofit Leaders Summit in April 2024. The summit will enable 2,500 nonprofits and nongovernment organizations to leverage AI skilling resources and technologies to further train 750,000 learners – including underserved youths, young women, and jobseekers – in AI fluency and technical skills.

Microsoft needs billions of customers hooked on Copilot to keep growing and keep training more capable models.

Billions of people need to develop AI skills to have any hope of remaining competitive in the job market and keeping their countries’ economies growing.

It’s a win-win situation.

By the time Google figures out how to make an AI model that can actually compete with GPT-4 (the just-released Gemini Ultra can’t), Microsoft will have secured the gratitude and loyalty of millions of people in one of the most important markets in the world.


The UK Intellectual Property Office fails to convince AI providers and news organizations to agree on a voluntary code of practice on copyrighted material.

Daniel Thomas and Cristina Criddle, reporting for the Financial Times:

The Intellectual Property Office, the UK government’s agency overseeing copyright laws, has been consulting with AI companies and rights holders to produce guidance on text and data mining, where AI models are trained on existing materials such as books and music. 

However, the group of industry executives convened by the IPO that oversees the work has been unable to agree on a voluntary code of practice, meaning that it has returned the responsibility back to officials at the Department for Science, Innovation and Technology, according to multiple people familiar with the discussions. Officials in the Department for Digital, Culture, Media and Sport are also involved, they said.

Representatives came from various arts and news organisations, including the BBC, the British Library and the Financial Times, and tech companies Microsoft, DeepMind and Stability.

The government is expected to publish a white paper in coming days that will set out further proposals around AI regulation in the UK. It is likely to refer to the need for industry agreement on AI and copyright in the UK, the people said, but will fall short of setting out any definitive policies.

The failure of the UK talks comes as AI has caused alarm among artists, authors, musicians and media groups who are concerned that their work will be copied and reproduced without payment.

“The industry is asking for transparency on what models have and haven’t been trained on, and what works are being used,” said Reema Selhi, head of policy at the Design and Artists Copyright Society, who was part of the groups tasked with devising the code. “The IPO hasn’t found answers to those questions.”

Two people with knowledge of the situation added that the government was again sounding out “stakeholders” among the different companies to try to get agreement. “The question is where to put the balance. The government will have to come to a position,” one of them said.

The government wants to avoid legislation in such a fast-moving and contentious area, according to those people, and so still favours a voluntary approach such as a new code.

“The IPO has engaged with stakeholders as part of a working group with the aim of agreeing a voluntary code on AI and copyright,” a government spokesperson said. “We will update on that work soon and continue to work closely with stakeholders to ensure the AI and creative industries continue to thrive together.”

As we’ve said many times on Synthetic Work, this outcome has a much smaller impact on jobs in the creative industry than most people think.

In the long term, it will be irrelevant whether AI providers have the right to use copyrighted material to train their models.
All these companies are racing to build synthetic data to train LLMs and diffusion models (the ones that generate pretty pictures), and they are racing to build synthetic characters and IPs that will be loved far more than any existing copyrighted material.

In the long term, copyright holders are akin to middlemen. The goal is to cut the middlemen out of the equation.

Which is exactly the opposite of what the UK House of Lords is focusing on.

Dan Milmo, reporting for The Guardian:

Ministers must defend content creators whose work is being taken without permission by tech companies to build artificial intelligence products such as chatbots that generate “vast financial rewards”, a House of Lords committee has said.

The legal framework in the UK is failing to enforce the basic principles of copyright amid a rise in AI development, said the Lords’ communications and digital committee.

“Some tech firms are using copyrighted material without permission, reaping vast financial rewards,” said the committee.

Urging the government to take action on flouting of copyright, the committee said: “The current legal framework is failing to ensure these outcomes occur and the government has a duty to act. It cannot sit on its hands for the next decade and hope the courts will provide an answer.”

The committee recommended the government decide whether copyright law provides enough protection to copyright holders. If it believes there are legal uncertainties around the issue, peers said, it should set out options for updating legislation.

The government’s intellectual property office is drawing up a code of practice on copyright and AI. Under the 1988 Copyright Act an exemption is made for text and data mining if it is research for “a non-commercial purpose”. In 2022 the government indicated that it would widen that exemption to any use but has now rowed back on that.

Baroness Stowell, the committee’s chair, added that the UK, with its wealth of private and government-owned data, could offer licensed datasets to AI firms hoping to build models on a secure legal basis. “If we can create new licensed datasets, there is a market we ought to be able to take advantage of,” she said.

Not at all, per my previous comment.

Governments depend on the Broadcasting & Media industry to reach the electorate. So there will always be strong interest in protecting that industry.

But the Broadcasting & Media industry is on life support and has no money. Big tech, instead, has just driven the S&P 500 to 5000, and it’s promising to lift the GDP of any country that lets it operate freely.


Sainsbury’s is preparing to roll out automated tills, warehouse robots, and AI forecasting tools. Job cuts have not been ruled out.

Sarah Butler, reporting for The Guardian:

Sainsbury’s is to use more automated tills and warehouse robots as well as AI forecasting tools to ensure it has the right stock in stores as part of a £1bn cost-cutting effort over the next three years.

Simon Roberts, the chief executive of Sainsbury’s, did not rule out job losses as a result of the changes, but made no announcement on redundancies and said workers would be able to change their roles and adapt to new ways of working.

Roberts said the group’s “legacy systems” were slowing it down and leading to more waste than necessary. “We have got to find better ways of doing things,” he said.

Techno-optimists like Marc Andreessen maintain that AI will make companies 2x, 5x, 10x more productive, and that this will lead to an abundance of jobs.

In this newsletter, I have long argued that multiplied productivity must be sustained by a multiplied marketing and sales infrastructure, and must be absorbed by multiplied demand.

More importantly, I have long argued that the humans running today’s companies find it infinitely easier to cut costs than to figure out how to multiply demand.

For most, AI is becoming a cost-cutting tool, not an abundance generator.

I’ll add Sainsbury’s to the AI Adoption Tracker once more details on how it is implementing AI become available.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, or the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

The winner of the Japanese Akutagawa Prize reveals that she copied 5% of her book from ChatGPT.

Christy Choi and Francesca Annio, reporting for CNN:

After Japanese author Rie Kudan won one of the country’s most prestigious literary awards, she admitted she’d had help from an unusual source — ChatGPT.

“I plan to continue to profit from the use of AI in the writing of my novels, while letting my creativity express itself to the fullest,” said the 33-year-old, who was awarded the Akutagawa Prize for the best work of fiction by a promising new writer on Wednesday.

The author then confirmed at a press conference that around 5% of her book “The Tokyo Tower of Sympathy” — which was lauded by committee members as “practically flawless” — was word-for-word generated by AI.

The novel centers around the dilemmas of an architect tasked with building a comfortable high-rise prison in Tokyo where law breakers are rehabilitated, and features AI as a theme.

Kudan said that, in her own life, she would consult ChatGPT about problems she felt she couldn’t tell anyone. “When the AI did not say what I expected,” she said, “I sometimes reflected my feelings in the lines of the main character.”

Writer and prize committee member Keiichiro Hirano took to X, the social media company formerly known as Twitter, to say the selection committee did not see Kudan’s use of AI as a problem.

“It seems that the story that Rie Kudan’s award-winning work was written using generative AI is misunderstood… If you read it, you will see that the generative AI was mentioned in the work,” he wrote. “There will be problems with that kind of usage in the future, but that is not the case with ‘Tokyo Sympathy Tower.’”

If this doesn’t give you pause, nothing will.

The writer you compete with for that book prize is using AI.

The ad agency you compete with for that commercial is using AI.

The photographer you compete with for that editorial shoot is using AI.

The digital artist you compete with for that artwork commission is using AI.

The developer you compete with to develop that app is using AI.

Your colleague, the one you compete with for that promotion, is using AI.


A new book uncovers how employers are using AI candidate-screening algorithms.

Caitlin Harrington, reporting for Wired:

If you’ve worried that candidate-screening algorithms could be standing between you and your dream job, reading Hilke Schellmann’s The Algorithm won’t ease your mind. The investigative reporter and NYU journalism professor’s new book demystifies how HR departments use automation software that not only propagates bias, but fails at the thing it claims to do: find the best candidate for the job.

Schellmann posed as a prospective job hunter to test some of this software, which ranges from résumé screeners and video-game-based tests to personality assessments that analyze facial expressions, vocal intonations, and social media behavior. One tool rated her as a high match for a job even though she spoke nonsense to it in German. A personality assessment algorithm gave her high marks for “steadiness” based on her Twitter use and a low rating based on her LinkedIn profile.

Wired: Software companies often present their products as a way to remove human bias from hiring. But of course AI can absorb and reproduce the bias of the training data it ingests. You discovered one résumé screener that adjusted a candidate’s scores when it detected the phrase “African American” on their résumé.

Schellmann: Of course companies will say their tools ​​don’t have bias, but how have they been tested? Has anyone looked into this who doesn’t work at the company? One company’s manual stated that their hiring AI was trained on data from 18- to 25-year-old college students. They might have just found something very specific to 18- to 25-year-olds that’s not applicable to other workers the tool was used on.

Now obviously, the vendors don’t want people to look into the black boxes. But I think employers also shy away from looking because then they have plausible deniability. If they find any problems, there might be 500,000 people who have applied for a job and might have a claim. That’s why we need to mandate more transparency and testing.

Any company that admits to using these tools potentially opens itself up to a class-action lawsuit and could end up liable for millions of dollars in damages.

More on this:

A lot of lawyers say that the company that does the hiring is ultimately responsible, because that company makes the hiring decision. Vendors certainly always say, “We don’t make the decision. The companies make the decision. The AI would never reject anyone.”

That may be right in some cases, but I found out that some vendors do use automatic rejection cutoffs for people who score under a certain level. There was an email exchange between a vendor and a school district that stipulated that people who scored under 33 percent on an AI-based assessment would get rejected.

A lot of vendors use deep neural networks to build this AI software, so they often don’t know exactly what the tool is basing its predictions on. If a judge asked them why they rejected someone, a lot of companies probably could not answer.

I’ve heard from a couple of whistleblowers who found exactly that. In one case, a résumé screener was trained on the résumés of people who had worked at the company. It looked at statistical patterns and found that people who had the words “baseball” and “basketball” on their résumé were successful, so they got a couple of extra points. And people who had the word “softball” on their résumé were downgraded. And obviously, in the US, people with “baseball” on their résumé are usually men, and folks who put “softball” are usually women.
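To make that failure mode concrete, here is a minimal, entirely hypothetical sketch of such a screener. The training data, keyword names, and scoring scheme below are invented for illustration and are not taken from any vendor Schellmann tested; the only detail borrowed from the excerpt is the idea of a hard under-33-percent rejection cutoff. The point is simply that a model which learns which résumé tokens co-occurred with past “successful” hires will happily encode a gendered proxy like baseball versus softball.

```python
# Hypothetical sketch of the bias mechanism described above.
# All data here is invented: in this toy history, the résumés of
# past hires deemed "successful" happen to mention baseball or
# basketball, while "softball" appears only on unsuccessful ones.
from collections import Counter

PAST_HIRES = [
    ({"python", "baseball"}, True),
    ({"java", "basketball"}, True),
    ({"python", "softball"}, False),
    ({"java", "softball"}, False),
]

def learn_weights(past_hires):
    """Weight each keyword by how often it co-occurred with 'success'."""
    hits, totals = Counter(), Counter()
    for keywords, successful in past_hires:
        for kw in keywords:
            totals[kw] += 1
            if successful:
                hits[kw] += 1
    return {kw: hits[kw] / totals[kw] for kw in totals}

def screen(resume_keywords, weights, cutoff=0.33):
    """Average the learned weights; auto-reject below the cutoff,
    mirroring the under-33-percent rule cited in the excerpt."""
    known = [weights[kw] for kw in resume_keywords if kw in weights]
    score = sum(known) / len(known) if known else 0.0
    return score, score >= cutoff

weights = learn_weights(PAST_HIRES)
# Two candidates identical except for one "innocuous" keyword:
for resume in ({"python", "baseball"}, {"python", "softball"}):
    score, accepted = screen(resume, weights)
    print(sorted(resume), f"score={score:.2f}",
          "accepted" if accepted else "auto-rejected")
```

Nothing in this toy training data mentions gender, yet the baseball résumé is accepted and the otherwise identical softball résumé is auto-rejected: the model has simply laundered the bias of the historical hiring decisions it was trained on.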

For the nth time: using AI to screen candidates is the most shortsighted approach companies could have taken, and one of the most damaging AI applications on the market.

For more on this topic, read the intro of Issue #45 – DeathGPT.

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, how it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever succeed without taking this into account.

A Nobel Prize winner cautions younger generations against piling into STEM subjects.

Tom Rees, reporting for Bloomberg:

A Nobel Prize-winning labor market economist has cautioned younger generations against piling into studying science, technology, engineering, and mathematics (STEM) subjects, saying that “empathetic” and creative skills may thrive in a world dominated by artificial intelligence.

Christopher Pissarides, professor of economics at the London School of Economics, said that workers in certain IT jobs risk sowing their “own seeds of self-destruction” by advancing AI that will eventually take the same jobs in the future.

While Pissarides is an optimist on AI’s overall impact on the jobs market, he raised concerns for those taking STEM subjects hoping to ride the coattails of the technological advances. He said that despite rapid growth in the demand for STEM skills currently, jobs requiring more traditional face-to-face skills, such as in hospitality and healthcare, will still dominate the jobs market.

“The skills that are needed now — to collect the data, collate it, develop it, and use it to develop the next phase of AI or more to the point make AI more applicable for jobs — will make the skills that are needed now obsolete because it will be doing the job,” he said in an interview. “Despite the fact that you see growth, they’re still not as numerous as might be required to have jobs for all those graduates coming out with STEM because that’s what they want to do.”

For many people, this position might be hard to accept: today, the negative impact of AI on creative jobs is already very tangible, while the impact on STEM jobs is mostly positive.

However, Pissarides’ position aligns well with the expression I used many times in this newsletter: “It might get better before it gets worse.”

For a brief explanation of this concept, take a look at this short video of mine.

To use another frequent expression:

While we wait for the utopia of a world where nobody has to walk thanks to cars, everybody might be asked to become a car mechanic. At least, as long as the cars don’t start fixing themselves.

Want More? Read the Splendid Edition

This week’s Splendid Edition is titled The Unbearable Lightness of Being Under Surveillance.

In it:

  • What’s AI Doing for Companies Like Mine?
    • Learn what Transport for London (TfL), Co-Op, Burger King, and the UK Advertising Standards Authority (ASA) are doing with AI.
  • A Chart to Look Smart
    • Promising research shows the positive impact of chatbots on 129,400 patients within England’s NHS services.
  • The Tools of the Trade
    • AP Workflow 8.0 for ComfyUI is out!