Is AI the Last Invention Humans Will Make?

AI Safety – Part 1
Exploring the Future of Artificial Intelligence

'ChatGPT': no AI (Artificial Intelligence) blog would be complete without a nod to it, right? Let's dive into the world of artificial intelligence. Simply put, ChatGPT is an AI model designed to generate human-like responses to text-based input. We use such models for a variety of tasks: creating content, writing emails, drafting blogs, coding, debugging, extracting information from a given context, devising strategies, and more.
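For the curious, here is roughly what using such a model programmatically looks like. This is a minimal sketch assuming the official OpenAI Python SDK (`pip install openai`) and an API key in the OPENAI_API_KEY environment variable; the model name below is an assumption, so swap in whichever chat model you have access to.

```python
# Minimal sketch: asking a ChatGPT-style model to handle one of the
# everyday tasks mentioned above (drafting an email).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: use any chat model available to you
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a short email requesting a meeting next Tuesday."},
    ],
)
print(response.choices[0].message.content)
```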
There are other AI models everyone is unknowingly exposed to, from targeted advertising to the recommendation systems on platforms like YouTube, Amazon, and Netflix, as well as virtual assistants and algorithmic trading. On the large-scale side, AI models contribute to predictive maintenance, self-driving cars, medical diagnosis, and more. Each of these models is designed to excel at a specific task.
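To give a flavour of how one of these families works, here is a toy sketch of content-based recommendation: score each item by the cosine similarity between its feature vector and a user's taste vector, then suggest the top match. All names and vectors below are made up for illustration; real systems use far richer features and models.

```python
# Toy content-based recommender: rank items by cosine similarity
# between a user's taste vector and each item's feature vector.
import numpy as np

items = {
    "action_movie": np.array([0.9, 0.1, 0.0]),  # features: [action, romance, documentary]
    "romcom":       np.array([0.2, 0.9, 0.1]),
    "nature_doc":   np.array([0.0, 0.1, 0.9]),
}
user_taste = np.array([0.8, 0.3, 0.1])  # hypothetically built from watch history

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Recommend the item most similar to the user's taste.
ranked = sorted(items, key=lambda name: cosine(user_taste, items[name]), reverse=True)
print("Recommended:", ranked[0])  # -> "action_movie"
```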
This is not the peak. As a tech-savvy generation, we envision more powerful AI models that do all of this at once, without the hassle of training a separate model on specific inputs for a specific output. AI that can do any human work: it can write code, cook, drive, attend meetings, and run big organizations. Have we achieved this level of AI sophistication? No… not yet. The impact will be unmistakable once we do.

Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI) could potentially learn to accomplish any intellectual or cognitive task that human beings or animals can perform. Many research groups, such as Google DeepMind, Singularity Studio, and GoodAI, are dedicated to developing these models. It is inevitable that someday AGI will arrive and will be marked as one of the most remarkable human innovations. Though it is still in a theoretical phase of research, just imagine its contribution to almost every activity we perform today, be it driving, manufacturing goods, running supply chains, writing software (do we even need software in the AGI case?) and more.

We humans will have nothing to do other than enjoy the passage of time. A true paradise; what a time to be alive on this planet!

The Dual Nature of Innovation

Every significant innovation comes with positive and negative aspects. This dual nature of innovation underscores the importance of weighing potential drawbacks, which are usually ignored by innovators and only recognized once the downside becomes visible. AGI is a remarkable innovation, so what could be bad about it? AGI is supposed to be human-level intelligent or beyond, which means that, given the right set of hardware, it can perform any cognitive task that humans can do. Examples of such hardware are Atlas (Boston Dynamics), Optimus (the Tesla Bot), and more.

This points directly to unemployment as the first drawback, a feared outcome even now in 2024, with AI becoming the norm at every workplace. Most likely, AGI will be used in highly automated systems and will eliminate the human factor entirely, including logical and emotional decision-making. Capitalism built on such highly automated systems leads to a concentration of power and money, causing socioeconomic inequality. A lot of bad things can happen with AGI; there is a good deal of speculation in movies like 2001: A Space Odyssey (1968), I, Robot (2004), Her (2013), Transcendence (2014), Chappie (2015), and the Terminator series (1984–2019).

Movie Reference - 2001: A Space Odyssey

Control Over AI (Artificial Intelligence)

There is a lot of good and bad speculation surrounding the potential of AI, but the most important question is: are we doing something about it? Are AGI innovators doing something about it? Are we, as users, at least aware of it? Are we asking policymakers to enact policies around it? Capitalists will never invest in AI safety if government bodies are not regulating it. We should employ something like carbon credits for AI research.

Make it compulsory to consider AI safety and to create measures that mitigate potentially bad scenarios. We don't want to be locked out of our EVs because of an unmonitored AI system, right?
Or what if AI systems automate warfare, leading to autonomous weapons that can kill without human intervention? Could AI infiltrate social media to manipulate public opinion and control our behaviour in harmful ways? (Again, speculation!)
What we talked about here is why we need AI safety. What we are missing is the set of measures we can take. Can we write a law book for AGI, defining how it should behave? Can we add a kill switch that stops an AI system when it misbehaves? Is it even possible to add a kill switch, when such a system may be intelligent enough to override it? Lots of questions, unanswered!
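To make the kill-switch idea concrete, here is a purely illustrative toy: a wrapper that executes an agent's actions one at a time and halts as soon as a monitor flags one. Every name here (the action strings, the misbehaving check) is hypothetical, and a real AGI kill switch would face exactly the override problem raised above; this is a sketch of the concept, not a safety mechanism.

```python
# Toy kill switch: run an agent step by step, halting on the first
# action a monitor flags. Purely illustrative; not a real safety design.

def misbehaving(action: str) -> bool:
    """Monitor: flag any action outside an allowed set (a stand-in for real oversight)."""
    return action not in {"summarize", "translate", "answer"}

def run_agent(actions):
    for action in actions:
        if misbehaving(action):
            print(f"Kill switch triggered on action: {action!r}. Halting.")
            return
        print(f"Executing safe action: {action!r}")

run_agent(["summarize", "answer", "launch_spam_campaign"])
```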

Stay tuned….
