Who Will Govern AI: Ensuring a Safe Future for Humanity
- Reni Siddall

- May 21, 2024
- 2 min read
With corporatocracy dominating the globe and artificial intelligence developing at its current pace, dystopian depictions of tomorrow seem closer than ever.

AI is Everywhere
In 2024, artificial intelligence investment has found its way into every facet of business. As the world's tycoons strive to use the technology to streamline their business models, the bourgeoisie of today are straining to stay relevant.
"More than 85% of businesses examined for the report see the adoption of big data, cloud computing, and AI as central to their development and success." (Cecchi-Dimeglio, 2023)
AI's capacity to take our jobs is already here, and its ever-increasing autonomy has the potential to leave humans powerless.
"AI has set its sights on 300 million jobs around the globe, about 9.1% of all the world's jobs, and here in the U.S., 25% of the workforce is concerned about being replaced by AI." (Cecchi-Demeglio, 2023)
From Fiction to Reality
With humans already facing an alarming level of unemployment, AI continues to take jobs, and with more power and autonomy, it has the potential to pose a Terminator-level threat to humanity.
Experts say that within this decade, without intervention, Hollywood tales such as the 1927 film Metropolis and the 2023 movie The Creator may become our reality, not just a distant possibility. But what's the plan?

Who’s in Control?
AI is used now as a tool humans can wield to benefit and advance mankind, but in the wrong hands, and without immediate consideration of risk mitigation, this powerful technology could advance its autonomy past the point of no return.
Technology is progressing at an unprecedented pace, and the world's governments are falling behind in regulating the use of AI, says a group of 25 computer-science thought leaders in a paper called "Managing Extreme AI Risks Amid Rapid Progress." (Milmo, 2024)
“AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.” (Bengio et al., 2024)
The risk of malicious intent embedded in AI systems is a reality we can’t ignore. Even developers who create systems and training data to solve the world's problems may inadvertently create AI that pursues undesirable goals in situations never imagined by its inventor.
Staying on Top
Driven by the fear of falling behind, dedicated research and engineering efforts among companies, militaries, and governments may overlook ethics and make risky choices in the name of scientific advancement, "reaping the rewards of AI development while leaving society to deal with the consequences." (Bengio et al., 2024)

Urgent, proactive preparation is needed to prevent AI from destroying humanity. National and global oversight institutions must be established to:
- Foster Scientific Collaboration
- Create Industry Standards
- Educate and Raise Public Awareness
- Limit the Autonomy of AI
- Develop Ethical Guidelines
- Ensure Transparency
For AI to remain a tool that humans control and use to our advantage, immediate caution must be taken to prevent reckless misuse of this rapidly advancing, unpredictable technology.
Allowing the industry to continue with little governance leaves room for erratic behaviour and the potential for human or automated malice; this is a war where no one wins.