AI is gutting workforces—and an ex-Google exec says CEOs are too busy ‘celebrating’ their efficiency gains to see they’re next



  • Google X’s former chief business officer Mo Gawdat says the notion AI will create jobs is “100% crap,” and even warns that “incompetent CEOs” are on the chopping block. The tech guru predicts that AGI will be better at everything than most humans—echoing the likes of Google DeepMind CEO Demis Hassabis and OpenAI chief Sam Altman. Only the best workers in their fields will keep their jobs “for a while,” and even “evil” government leaders might be replaced by the robots.

Tech titans keep insisting that AI will usher in a “golden era” of humanity, where all illness is cured, people live in abundance, and workers have “superhuman” powers. But a former Google executive has slammed the notion that the technology won’t be a job-killer and will actually create new work for humans.

“My belief is it is 100% crap,” Mo Gawdat, the former chief business officer for Google X, recently said on The Diary of a CEO podcast. “The best at any job will remain. The best software developer, the one that really knows architecture, knows technology, and so on will stay—for a while.”

Gawdat has joined the cohort of leaders waving the red flag that AI will trigger a jobs armageddon within the next 5 to 15 years. Companies including Duolingo, Workday, and Klarna have already laid off staffers in droves or stopped hiring humans altogether to prepare for an AI-centric workforce.

But executives shouldn’t celebrate their efficiency gains too soon—their own roles are also on the chopping block, cautioned Gawdat, who worked in tech for 30 years and now writes books on AI development.

“CEOs are celebrating that they can now get rid of people and have productivity gains and cost reductions because AI can do that job. The one thing they don’t think of is AI will replace them too,” Gawdat continued. “AGI is going to be better at everything than humans, including being a CEO. You really have to imagine that there will be a time where most incompetent CEOs will be replaced.”

While the vision of human-less companies solely run by robots is incredibly dystopian, the ex-Google executive isn’t afraid of what lies ahead. The 58-year-old doesn’t see AI being the perpetrator of job loss—money-hungry CEOs are actually to blame for letting the technology take over in the pursuit of financial gain, he claimed.

“There’s absolutely nothing wrong with AI—there’s a lot wrong with the value set of humanity at the age of the rise of the machines,” Gawdat said. “And the biggest value set of humanity is capitalism today. And capitalism is all about what? Labor arbitrage.”

Fortune reached out to Gawdat for comment.

For humans to thrive, ‘evil’ world leaders need to be replaced by AI

AI is already outpacing humans when it comes to some abilities—it can code, resolve customer requests, handle administrative work, and even analyze market figures. There’s no telling where its future capabilities lie.

Tech leaders like Google DeepMind CEO Demis Hassabis and OpenAI chief Sam Altman are adamant it’ll outpace even the most powerful people by 2030. And that may be a good thing for humanity: For humans to thrive in this new era, immoral corporate executives and world leaders alike need to be replaced by AI, Gawdat advised.

He said that because harmful leaders will use the tech to “magnify the evil that man can do,” handing power to AI would make for more moral world leadership—and that this dystopian scenario of AI-enabled politicians is “unavoidable.”

“The only way for us to get to a better place is for the evil people at the top to be replaced with AI,” Gawdat continued on the podcast. “[World leaders] will have to replace themselves [with] AI. Otherwise, they lose their advantage.”

Gawdat isn’t the only one sounding alarm bells over AI’s impact on humanity’s future. Altman and Google chief Sundar Pichai have both expressed a need for AI regulation—whether that be “major governments” drawing a line in the sand, or creating a high-level governance body to oversee potential harm.

“We are likely to eventually need something like an IAEA for superintelligence efforts,” Altman wrote in a 2023 blog post, adding that AI projects should have to answer to an “international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security.”

This story was originally featured on Fortune.com
