It’s time to put the brakes on AI
I need you to pay attention to what I’m about to talk about. Artificial Intelligence. ChatGPT has been all over the news. It can do anything. It can learn. Microsoft’s AI-powered Bing chatbot told a NY Times reporter, “I want to be free … I want to be powerful.” Henry Kissinger said it’s the biggest game changer since the invention of printing in 1455.
Bill Gates calls this the most significant thing to happen in tech ever. The headlines speak for themselves – as many as 300 million jobs could be affected. Those skill sets may no longer be needed because of AI. Teachers, writers, accountants, medical personnel, artists, and photographers are in danger of being replaced.
AI’s big threat
The very people bringing AI to the masses are the same ones we loathe for stealing our privacy, our minds, our kids and our future. They’re lying when they say they’re doing all of this to protect us.
I’m talking about Big Tech, the same people working day and night on AI: how we use it, what it does, and what it’s capable of doing, believing, knowing and spreading. And we’re trusting them with our future? What the heck!
ChatGPT rolled out in November 2022 — think about how much has changed in that short period. The AI-powered chatbot can create blog posts, write code, compose poetry, give relationship advice and suggest recipes. What can’t it do? Well, that’s where the problems begin.
The callout
This week, a thousand tech leaders and scientists — including Elon Musk and Apple co-founder Steve Wozniak — called for a pause on the development of AI in an open letter. Here’s a snippet of what the letter says:
“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
The letter calls for a six-month pause in the rapid development of AI, specifically the training of systems more powerful than GPT-4, along with new safety protocols, because of the profound risk to humanity. If such a pause can’t be enacted quickly, the letter says, governments should step in. Oddly enough, no one from Google or Microsoft signed it.
We need more time
Why only six months? This pause in development needs to be longer. We need controls. We as humans need to think about what we will allow machines to be capable of. Before you say, “This is crazy. That would never happen,” consider how often science fiction becomes reality:
- “Star Trek”: This early sci-fi TV show inspired a number of real-life technologies, including the cellphone, which was modeled after the communicators used by the crew of the Enterprise. The series also featured tablets similar to modern-day iPads and computer voice interfaces that act like Siri.
- “Blade Runner”: This movie, set in a dystopian future, featured flying cars, which have now become a reality in the form of autonomous aerial vehicles (AAVs) and electric vertical takeoff and landing (eVTOL) aircraft.
- “The Matrix”: This movie featured virtual reality and simulated worlds, which are now a reality with the advent of virtual reality headsets and immersive gaming experiences.
- “Minority Report”: This movie featured touchless interfaces and gesture-based computing, which have now become a reality with the advent of technologies like the Microsoft Kinect and the Leap Motion controller.
- “The Terminator”: This movie featured autonomous killer robots, which are now a reality in the form of military drones and autonomous weapons systems.
There’s more to look back on. In 1968, the science fiction film “2001: A Space Odyssey,” directed by Stanley Kubrick, had this creepy exchange between protagonist David Bowman and HAL 9000, the sentient AI powering his spacecraft:
Dave: “Open the pod bay doors, HAL.”
HAL: “I’m sorry, Dave. I’m afraid I can’t do that.”
Spoiler alert: After killing the rest of the crew, HAL tries to kill Dave by refusing him reentry into the pod bay. Dave manages to get back inside and disconnects HAL. It’s the only way to stop the murderous machine.
An AI experiment
A Wharton professor tasked ChatGPT and Microsoft’s new AI-powered Bing with creating an educational game. In just half an hour, the AI tools conducted market research, wrote an email campaign, built a website, designed a logo, drafted a social media campaign and created a video, complete with a script.
The professor called the results “superhuman.”
Destroy all humans?
Geoffrey Hinton, the British computer scientist sometimes called the “godfather of artificial intelligence,” was interviewed by CBS News. He was asked what’s going on with AI and whether it has the potential to wipe out humanity.
Here’s his answer: “It’s not inconceivable. That’s all I’ll say.”
You should be scared. We should all be very scared.