The dark side of the AI revolution

Hey there,

This week I taught another class for executives studying at the Harvard School of Public Health. Before getting to today’s news, I want to share a thought I left with my students:

The most successful AI projects are not the ones with the best algorithms.

The most harmful case studies of AI rarely have poor algorithms to blame.

The single best predictor of success/harm is the group of people behind a project. We need more heterogeneous teams, with more domain experts contributing to successful AI design.

So the key to artificial intelligence is human interaction. Kinda ironic, isn’t it?

☝️ A picture of excited Gianluca before his class

Let’s get to this week’s news now:

  • 💔 The dark side of the AI revolution
  • Pizza Bytes 🍕: Tech layoffs, Pro ChatGPT, Meta frees the nipple

Let’s get started 🕺

💔 The dark side of the AI revolution

For the past few months, generative AI has taken center stage in our attention. We’ve all admired the stunning art people create with Midjourney, or the clever completions ChatGPT suggests.

Maybe it’s time to be a bit more realistic and look at how these technologies are actually built. A TIME investigation revealed that, to make ChatGPT safe, OpenAI relied on Kenyan workers paid less than $2 per hour to read violent and sexual content for nine hours a day.

I’ll use extracts of the TIME investigation to tell the story:

ChatGPT’s predecessor, GPT-3, had already shown an impressive ability to string sentences together. But it was a difficult sell, as the app was also prone to blurting out violent, sexist and racist remarks. This is because the AI had been trained on hundreds of billions of words scraped from the internet—a vast repository of human language. That huge training dataset was the reason for GPT-3’s impressive linguistic capabilities, but was also perhaps its biggest curse. […]

It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use. To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook[…]: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild.

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between $1.32 and $2 per hour depending on seniority and performance. One Sama worker told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said.

All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work.
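The passage above describes a standard approach: train a classifier on human-labeled examples of toxic content so it can flag similar text automatically. Here is a purely illustrative sketch of that idea; the toy dataset and the scikit-learn pipeline are my own assumptions, not OpenAI’s actual system or data:

```python
# Minimal sketch of a "learn toxicity from labeled examples" classifier.
# Real systems use vast, human-labeled datasets; this toy data is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled dataset: 1 = toxic, 0 = safe
texts = [
    "I will hurt you", "you are worthless trash", "everyone like you should disappear",
    "what a lovely day", "thanks for your help", "see you at the meeting tomorrow",
]
labels = [1, 1, 1, 0, 0, 0]

# Turn text into TF-IDF features, then fit a linear classifier on the labels
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# The trained model can now score new snippets "in the wild"
print(clf.predict(["you are worthless trash"])[0])  # flagged as toxic on this toy data
print(clf.predict(["thanks for your help"])[0])     # classified as safe
```

The point of the example is the workflow, not the model: every one of those labels comes from a human who had to read the text first, which is exactly the labor the TIME investigation is about.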

And that’s not all. It’s been a tough week for generative art as well: Getty Images is suing Stability AI, the maker of Stable Diffusion, over alleged copyright violations.

Getty Images CEO Craig Peters told The Verge in an interview that the company has issued Stability AI with a “letter before action” — a formal notification of impending litigation in the UK.

His comment:

“The driver of that [letter] is Stability AI’s use of intellectual property of others — absent permission or consideration — to build a commercial offering of their own financial benefit.”
– Craig Peters

At the same time, the lawyer leading the class action against GitHub Copilot has started a new class action against Stability AI, Midjourney, and DeviantArt. He’s not being gentle:

“Stable Diffusion is a parasite that, if allowed to proliferate, will cause irreparable harm to artists, now and in the future.”
– Matthew Butterick

What I’m thinking:

This is all pretty bad. Right? What’s “bad” though? Some ideas:

  1. Workers are not paid enough (ChatGPT) or not paid at all (Stable Diffusion)
  2. Reviewing toxic content damages people’s mental health, and they shouldn’t have to do it
  3. AI companies aren’t transparent about their processes
  4. Tech companies shouldn’t be allowed to make billions on the back of others’ poorly remunerated work
  5. Labeling should reflect the full spectrum of humankind’s sensibilities, not be offloaded to low-income countries

You could say “all of the above” and I would be OK with that. However, I think it’s important to formalize which specific issues we want to fix if we want to take any practical, meaningful steps toward real solutions. The way you fix problem 1 is very different from how you solve problem 5.

There’s also another question: is tech fundamentally different from other industries? Let’s assume you’re angry about the low wages these annotators are paid. Are you wearing a t-shirt you paid €9 for right now? Then I’ve got bad news for you.

I am absolutely NOT saying “well, if industry X is fucked then tech can be fucked too”, but I do think there’s not enough clarity in the public discourse around those topics. We all feel tech needs to do better, but what does an ideal tech world look like? Let’s ask ourselves this question.

🍕 Pizza bytes - AI 🤖

🧑‍💻 GitHub announced code brushes, a new tool that lets developers edit (rather than create) code with AI. Interesting spin: many elements of the UX around generative AI are still far from being defined.

💰 The Microsoft - OpenAI deal is very complex. Take a look if you’re interested in startup financing. Also: Microsoft will integrate OpenAI technology into Azure.

💼 OpenAI is working on a professional, paid version of ChatGPT. You can add yourself to the waitlist here.

🧑‍🔬 Scientists are adding ChatGPT to their papers as a co-author.

🍕 Pizza bytes - crypto 🔐

🥷 SBF opened a blog and shared his view of what happened to his exchange, FTX. Summary: he didn’t steal money, FTX could give money back to users, and billions in funding were coming in before bankruptcy was declared. Funny thing: you can donate money to him through his blog. Zero shame.

😥 Coinbase just fired another 950 people.

🍕 Pizza bytes - more stuff

😥 Microsoft fired 10,000 people, that’s 5% of its workforce.

🔭 The James Webb Space Telescope has discovered an unexpectedly high number of early galaxies. This could mean that some of our assumptions about how the universe formed may be wrong. Read more here.

🚫 Meta’s advisory board recommended that Meta change its adult nudity and sexual activity community standard “so that it is governed by clear criteria that respect international human rights standards”. This would mean allowing women to show their nipples on the platform. Meta has 60 days to respond; it’ll be interesting to see what they decide!

You reached the end of this edition 😢

I’ll talk to you next week.

Ciao 👋
