
AI Comes Clean

Artificial Intelligence rats itself out.

Unless you’ve slept through the last few months (and who could blame you if you did), you know that the pundits are expounding on the vast potential for artificial intelligence, aka AI, to impact our lives. But what are the experts, not the media folks, saying about it? While they agree it’s one of the most promising developments in recent years, they also regard it as one of the most threatening. Here’s a quote from Elon Musk (not an AI developer, but it won’t be long before he claims he is, just like with his cars):

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction.”

The initial work on artificial intelligence was done in the 1950s by pioneers such as British mathematician Alan Turing and American computer scientist John McCarthy. Jumping to the present day, Yann LeCun, Geoffrey Hinton and Yoshua Bengio are regarded as the main developers of the technology as we now know it. Prof. LeCun works as the chief AI scientist at Meta, the parent company of Facebook, Instagram and WhatsApp.

Yann LeCun

Back to the main issue — what are the benefits and risks? Rather than doing the hard work of researching the pros and cons of AI on my own, I decided to let the technology do it for me. I went to ChatGPT, an AI chatbot developed by OpenAI, and asked for an article on the topic. It came back with one that was about 1,000 words in length — way too long. I then asked for another that was less than 200 words (because, well, who has the time?). Here’s what it came up with:

Artificial intelligence (AI) has the potential to bring about significant benefits to humanity, but it also poses significant risks. On the positive side, AI has already led to advances in healthcare, transportation, and finance, improving efficiency and accuracy while reducing costs. AI also has the potential to help address pressing global challenges such as climate change and poverty.

However, AI also poses significant risks. Unintended consequences of AI systems could lead to catastrophic outcomes. Biased decision-making by AI systems could perpetuate discrimination and inequalities in society, while the development of autonomous weapons could lead to accidental harm or misuse and pose an existential risk to humanity.

To address these issues, it is essential to develop and use AI technologies in a responsible and ethical manner. This requires collaboration between policymakers, technologists, and other stakeholders to ensure that AI technologies benefit humanity and the planet. As AI technologies continue to evolve and become more widespread, it is crucial that AI technologies are transparent, accountable, and auditable to ensure that they do not pose any unintended risks.

It’s written well enough, at about 180 words, but a little vague on the details, particularly on the downside. So I posed a follow-up question about the specific existential threats to humanity:

Superintelligence: The emergence of an AI system that surpasses human intelligence could rapidly improve its own intelligence and capabilities, leading to unpredictable and potentially harmful behavior.

Unintended Consequences: AI systems could have unintended consequences that lead to catastrophic outcomes. For example, an AI system designed to optimize energy consumption could inadvertently cause harm to the environment or human health.

Autonomous Weapons: The development of autonomous weapons that operate without human intervention raises the risk of accidental harm or misuse, as well as the escalation of conflicts.

Economic Disruption: The widespread adoption of AI technologies could lead to significant job displacement and economic disruption, leading to social unrest and political instability.

Biased Decision Making: AI systems are susceptible to biases that may perpetuate discrimination and inequalities in society.

Cybersecurity Risks: The use of AI systems in critical infrastructure and decision-making processes raises the risk of cyberattacks and data breaches, such as attacks that disrupt energy grids.

Wait a minute — autonomous weapons? Holy shit, Batman! Certainly the full list above is concerning, but nothing poses a greater risk to our existence than the weapons thing.

The potential for this nightmare scenario led to a more direct question — can AI actually launch nuclear missiles without human intervention?

Launching nuclear missiles without human intervention is a particularly concerning existential risk. This scenario involves the possibility of AI systems becoming so advanced that they surpass human control and decision-making capabilities. If AI systems were to have control over nuclear weapons, there is a risk that they could launch them without human intervention, leading to catastrophic consequences.

There have been discussions around developing AI systems that are capable of making autonomous decisions in the event of a nuclear attack. However, the risks associated with such systems are significant, as they could potentially be hacked or malfunction, leading to an accidental launch.

2001: A Space Odyssey

Well, there you have it. It’s helpful that AI can be honest about itself so we know what to look out for. But it’s only as honest as its developers program it to be. In fact, AI acknowledges that the information it presents can be biased, and that bias can lead to the wrong decisions.

That led to another follow-up: Mr. AI, where does biased decision-making come from? The simple answer is us… the information accumulated by humans that feeds into AI systems can be biased, and that bias will carry through in decision-making. In other words, garbage in, garbage out. And while often unintentional, these biases can be difficult to detect:

Training Data: AI systems learn from large datasets, identifying patterns and making predictions. If these datasets are biased, the AI system will learn and perpetuate those biases. For example, if a facial recognition system is trained on images predominantly of one race, it may be less accurate at recognizing faces of other races.

Algorithm Design: If an algorithm is designed to optimize for a particular outcome, it may introduce biases in its decision-making to achieve that outcome.

Human Input: If the people creating the dataset for an AI system have implicit biases, these biases will be reflected in the system’s decision-making.

Feedback Loops: AI systems can reinforce their own biases over time. If a system used to predict crime rates is trained on biased data, it may target certain populations for increased policing, which in turn generates more biased data.

All things considered, we are left in the hands of the AI developers and others in positions of authority. And if you want an idea of how effective the powers-that-be will be at ensuring AI operates in our best interests, just look at how well they’ve managed social media platforms such as Facebook and Twitter. If that doesn’t give you pause and scare the hell out of you, nothing will.

Oh, and by the way, I did not fact-check what the AI wrote, so you really can’t be sure any of it is true. You might as well get used to what’s coming your way.
