Why are the Tech-Kings Worried About AI?

Disclaimer: The information on this blog is for general informational purposes only and any opinions expressed are my own. I make no representations as to the accuracy or completeness of any information and will not be liable for any errors or omissions in this information.

Elon Musk, Geoffrey Hinton (the “Godfather of A.I.”), and Steve Wozniak: these are just a few of the Tech-Kings of AI development who have expressed deep concerns about the use and regulation of AI on a global scale.

(Image credit in order: Debbie Rowe, Eviatar Bach, Gage Skidmore under CC BY-SA 3.0 license. Images cropped.)

In today's rapidly evolving technological landscape, few advancements have garnered as much attention and potential as Artificial Intelligence.

AI holds immense promise; however, even amid this optimism, a growing chorus of concerns has emerged from influential figures in the technology industry.

Renowned names like Elon Musk, Geoffrey Hinton, and Steve Wozniak have expressed deep worries about the implications, risks, and regulation of AI.


Elon Musk (CEO of Tesla) was a co-founder of OpenAI, the creators of the biggest revolution in tech right now, ChatGPT, but he later stepped down as chairman amid fears that the technology poses a major existential risk to humanity.

Last month, Geoffrey Hinton, a cognitive psychologist and computer scientist, resigned from Google to speak more openly about the fears and dangers of AI.

Apple co-founder Steve Wozniak warns that AI will power the next generation of scams.

Their apprehensions raise critical questions:

Why are these tech leaders, deeply involved in AI development, worried?

What are the specific concerns driving their anxieties?


AI Needs Regulation:


The biggest debate right now within the tech sector and beyond is the need for regulation.

It is much like when the internet emerged back in 1983: there were few regulations to make it safe for public use, which made it a dangerous tool in its early days, with cyber criminals, leaks of personal information, and the introduction of social platforms. Over the years, we’ve seen regulations come into effect to protect people.

But with the technology accelerating, the race is on for big companies like Google and Microsoft to build large language model (LLM) AI bots to compete with OpenAI’s ChatGPT.

With the rapid rate that AI is growing, will regulations be put into place before it’s too late?

Many experts believe AI development needs to pause in order to give governments around the world enough time to put regulations in place, and to allow the technology to go through more risk assessments and safety protocols before more advanced models are released.

Musk said in an interview with Fox News:

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production.”

“In the sense that it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction.”

In March of this year, Elon Musk signed an open letter calling for a halt on AI development for at least six months, alongside hundreds of other experts in the field.

This feels like a big wake-up call!

AI poses a unique challenge because how an AI system behaves cannot necessarily be controlled by its creators. Many experts have warned that if AI is not regulated in its infancy, it could produce terrible outcomes outside of human control.

If there is an understanding that we need to regulate it, why hasn’t it been done yet?

The truth is, regulating AI is tricky, because you first have to define what AI is and understand the pros and cons that come with it. AI is still developing, which makes it hard to define in legal terms. Regulating AI will require a multi-faceted approach.

It needs to encompass government policies that address ethical considerations, data privacy, and algorithmic transparency, while pursuing international collaboration to establish global standards.

This is far easier said than done, and many processes need to be put in place to ensure the ongoing safety of the public, including continuous monitoring and adaptation of regulations.

AI’s Potential to Manipulate Humans:


Many people fear that AI will take over, as the technology has the potential to become much smarter than us.

Hinton’s stated concern is the rate at which AI is growing: it could blur the lines between what is real and what is not. With AI capable of generating videos and images seamlessly, how will one know the difference between what is generated and what is real?

Hinton told The New York Times in an interview:

“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Hinton’s fears center on the alignment of humans and technology.

Hinton said at the EmTech Digital conference, hosted by MIT Technology Review:

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial for us,” Hinton said. “But we need to try and do that in a world where there [are] bad actors who want to build robot soldiers that kill people. And it seems very hard to me.”

A big concern he shares is AI having the ability to create its own sub-goals. He believes that AI will realise that gaining more control is a useful sub-goal because it helps it achieve other goals, but he warns that if this gets out of hand, humanity is in trouble.

He claims if AI models are smarter than us, they’ll be very good at manipulating us.

AI manipulation poses many worries about the blurring of lines between reality and generated content.

This raises concerns about misinformation, fake news, identity theft, and the manipulation of public opinion.

In terms of where AI is headed, this is plausible, as systems have advanced so quickly in the last few years. Putting safeguards in place is crucial to mitigating these risks. This is why the Tech-Kings are so worried about AI.

Next-Gen Scams:


With new technology always comes the worry of scams and of bad actors gaining access to private, personal, or sensitive information. With everything online these days, it only makes sense that this is a common fear people have about AI, and they aren’t alone.

A major concern of Apple co-founder Steve Wozniak is AI’s ability to generate convincing scams.

I’m sure by now you have seen the news: many people are being scammed out of their hard-earned money by calls from supposed ‘loved ones’ saying they’ve lost their wallet and need money transferred, with modern AI technology replicating their voice and inflection to a tee.

AI’s ability to replicate video and audio has already come to fruition, and people are already experiencing the downsides of this advancement in technology.

While he says he can see the advantages the tech provides, there are risks that need to be addressed. Speaking to both the BBC and Fox News, he warned:

“AI is so intelligent it’s open to the bad players, the ones that want to trick you about who they are”

These scams are emerging as the technology advances, and while they are scary, there are measures that can help identify them.

Enhancing caller identification and verification systems, implementing voice biometrics, and utilising AI-powered algorithms to detect and flag fraudulent calls can all help expose a scam.
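To make the voice-biometrics idea a little more concrete, here is a minimal sketch of how a verification system might compare a speaker embedding from an incoming call against a stored voiceprint and flag a poor match. Everything in it is an assumption for illustration: the 128-dimensional embeddings, the `flag_call` helper, and the 0.75 threshold are all hypothetical, and a real system would get its embeddings from a trained speaker-verification model rather than random arrays.

```python
import numpy as np

# Hypothetical threshold: below this cosine similarity, the caller's voice
# is treated as a poor match for the claimed identity.
SIMILARITY_THRESHOLD = 0.75

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_call(call_embedding: np.ndarray, enrolled_voiceprint: np.ndarray) -> bool:
    """Return True if the call should be flagged as a possible voice-clone scam.

    In a real system both vectors would come from a speaker-verification
    model; here they are stand-in numpy arrays.
    """
    return cosine_similarity(call_embedding, enrolled_voiceprint) < SIMILARITY_THRESHOLD

# Toy usage with made-up embeddings.
rng = np.random.default_rng(seed=0)
enrolled = rng.normal(size=128)                        # voiceprint stored at enrolment
genuine = enrolled + rng.normal(scale=0.1, size=128)   # same speaker, slight variation
cloned = rng.normal(size=128)                          # unrelated (e.g. synthetic) voice

print("genuine call flagged:", flag_call(genuine, enrolled))  # expected: False
print("cloned call flagged:", flag_call(cloned, enrolled))    # expected: True
```

The threshold is the interesting design choice: set it too high and legitimate callers get flagged, too low and cloned voices slip through, which is why production systems tune it on real data rather than picking a fixed number.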


With OpenAI building the most advanced AI systems out there to date, its CEO, Sam Altman, says the company is not currently training GPT-5 and is instead focused on fixing existing issues with GPT-4.

Recently, he testified before the Senate Judiciary Committee about safety concerns, stating:

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models”


The concerns around the lack of regulation, the potential for AI manipulation, and the rise of convincing scams call for urgent action. While it is clear there is a need for responsible development, regulation, and mitigation of potential risks, this technology has the ability to revolutionise our world as we know it.

There are many upsides to having such advanced technology work with us: helping combat issues within society like climate change, assisting us to work more efficiently in the workplace, and solving problems much faster than we can.

It is most important to regulate AI before the technology advances further; with the right protocols in place, we can ensure a smooth transition of AI into the world.

Do you think AI needs regulation? What can tech companies be doing to ensure safety?
