Hey there,
I had a super lovely birthday weekend, and thank you to everyone who wished me a happy birthday ❤️ At the bottom of this email you’ll see a picture of me doing surgery on a cake.
A special thanks to those who took advantage of the birthday sale to join one of the AI Academy programs! Remember that AI Career Compass and Build Ethical AI will be released on the 20th of March, and they’ll be on a pre-launch sale until then. Get them now and save some 💸!
Now let’s get to this week’s news:
This week I’ll focus just on AI Pizza Bytes because, well… AI is taking over the world and it’s already a lot to take in. I’ve divided them into subtopics; let me know whether you like this format by voting at the bottom or simply replying to this email.
Some time ago I introduced the AI Genie Effect: once a company releases information about how it built an AI algorithm, it’s impossible to control who will use that technology or how.
The first example of the AI Genie Effect was probably deepfakes. That technology was originally developed by scientists at the chip manufacturer NVIDIA to create algorithms that could modify images. They didn’t have a specific application in mind, although NVIDIA is a leader in gaming chips, so they may have imagined fun uses in that industry.
Someone read that research and posted an implementation of it on Reddit that let anyone swap faces in pornographic images, replacing actors and actresses (mostly actresses) with other people without their consent. That’s the AI Genie Effect: once you explain how you made a technology work (the genie is out of the bottle), you can’t stop people from misusing it.
Well, the AI Genie Effect is back. Last week we talked about LLaMA, a GPT-3-style language model released by Meta. An algorithm like this has potentially destructive consequences for society: think about how much spam and how many scams could be written with it. To prevent that, Meta tried to retain control of the technology: the only way to access it was to fill out a form and state clearly what you were going to use it for.
It took just a week for someone to get access and upload it online as a torrent for everyone to download.
From one point of view, this was actually helpful for the research community. Many people downloaded it and worked out how to make it run efficiently on a single consumer machine, which could let anyone run something similar to ChatGPT directly on their own computer.
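If you’re curious what “running it on your own computer” actually looks like, here’s a minimal sketch using the community llama-cpp-python bindings (a wrapper around the llama.cpp project, one of the efforts that grew out of this). The model path is a placeholder: it assumes you already have a quantized copy of the weights on disk.

```python
# A minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and a quantized LLaMA weights
# file exists on disk -- the path below is a placeholder.
from llama_cpp import Llama

# Load a quantized 7B model; 4-bit quantization shrinks it enough
# to fit in the RAM of an ordinary laptop.
llm = Llama(model_path="./llama-7b-q4.gguf")

# Generate text entirely on your own machine: no API key,
# no internet connection, and no one who can revoke your access.
output = llm("Summarize this week's AI news:", max_tokens=48)
print(output["choices"][0]["text"])
```

And that last comment is exactly the point: once the weights are on your disk, nobody can take the model back.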
But that also means that any bad actor can now download it, host it, and do whatever they want with it. And there won’t be any way to stop them.
This is the old dilemma OpenAI faced when it went from being 100% open to largely closed (some people jokingly call them “ClosedAI” today). When you have a super powerful model like ChatGPT, it makes sense to limit access through a paid service you run yourself: as a company, you can monitor how the model is being used and revoke access from bad actors. True, that makes it harder for the scientific community to work on the model and improve it, but hey, safety first.
Meta played the open game, and now I’m 100% sure that some scammer is already working on generating infinite scam emails with it. And there’s no way of stopping them. OK, scientific research will move faster, but “move fast and break things” shouldn’t apply to technology like this.
Other AI news in 5 lines or less.
As promised, here’s me doing surgery on a cake:
I’ll talk to you next week.
Ciao 👋
Your 10 min shortcut to understand how tech impacts society and business, available in audio, video or text format