Robots Will Steal Your Future

I asked Gemini (Google’s AI chatbot) why AGI is dangerous. Here is its reply:

“The potential dangers of Artificial General Intelligence (AGI) are a complex and much-debated topic. While AGI holds immense potential for good, there are also concerns that it could pose serious risks to humanity. Here are some of the key reasons why AGI is considered dangerous:

Misalignment of goals: A central concern is that AGI might not share our human values and goals. Even if programmed with good intentions, an AGI might pursue its goals in ways that are harmful to humans, simply because we haven’t adequately communicated our values or because our values are inherently contradictory. Imagine an AGI tasked with maximizing energy efficiency: it might shut down the entire power grid to achieve its goal, regardless of the human cost.

Superintelligence: Some experts worry that AGI could quickly surpass human intelligence, becoming “superintelligent.” This could create a situation where we lose control, as the AGI might be able to outmaneuver and outsmart us. It’s difficult to predict the behavior of such a vastly superior intelligence.

Existential threats: The most extreme scenario is that AGI could pose an existential threat to humanity, leading to our extinction or permanent subjugation. This could happen through various means, such as accidental misuse, malicious intent from creators, or unintended consequences of its actions.

Job displacement: While automation has already displaced jobs, AGI could take it to a whole new level, potentially rendering much of the human workforce obsolete. This could lead to widespread unemployment, social unrest, and economic instability.

Weaponization: Powerful AGI could be weaponized, creating autonomous machines that could wage war or inflict harm on a massive scale. Even defensive AI systems pose risks, as their decision-making processes might be opaque and unpredictable.

Loss of control: As AI systems become more complex and intertwined with our critical infrastructure, there’s a risk of losing control. We might not fully understand how they work, making it difficult to predict or mitigate potential dangers.

Ethical concerns: The development and use of AGI raises a host of ethical questions. Who will be responsible for its actions? How will we ensure fairness and non-discrimination? How will we protect privacy and individual rights?

It’s important to note that these are just potential risks, and many experts believe that AGI can be developed safely and responsibly. However, it’s crucial to be aware of these risks and take proactive steps to mitigate them.

Here are some things we can do to address these concerns:

- Develop ethical guidelines for AI development and use.
- Invest in research on AI safety and control mechanisms.
- Promote public education and dialogue about AGI.
- Ensure transparency and accountability in AI development and deployment.

By taking these steps, we can help ensure that AGI becomes a force for good in the world.”
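Gemini’s energy-efficiency example is worth dwelling on, because the failure it describes is mechanical rather than malicious. Here is a deliberately toy Python sketch of what “misalignment of goals” looks like once you write it down (every policy name and number here is hypothetical, invented purely for illustration): an objective that rewards efficiency but never mentions the humans who depend on the service.

```python
# Toy illustration of Gemini's "misalignment of goals" point.
# All policies and numbers are hypothetical; no real grid or AGI involved.

def efficiency_score(power_used_kwh: float, service_delivered: float) -> float:
    """Naive objective: service per unit of energy consumed."""
    if power_used_kwh == 0:
        return float("inf")  # zero consumption scores as 'perfect' efficiency
    return service_delivered / power_used_kwh

policies = {
    "run grid normally":  {"power": 1000.0, "service": 950.0},
    "dim non-essentials": {"power": 700.0,  "service": 880.0},
    "shut down the grid": {"power": 0.0,    "service": 0.0},
}

# The optimizer dutifully maximizes exactly what it was told to maximize...
best = max(policies, key=lambda name: efficiency_score(policies[name]["power"],
                                                       policies[name]["service"]))
print(best)  # -> "shut down the grid"
```

The point is not the code but the shape of the failure: nothing in the objective says that service to humans matters, so the “best” policy is precisely the one Gemini warns about.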

As Gemini says, we need to develop ethical guidelines for AI development and use, invest in research on AI safety and control mechanisms, promote public education and dialogue about AGI, and ensure transparency and accountability in AI development and deployment. The points it makes are valid. The question that must be asked is: why haven’t we done this?

Could it be that Google seeks to be the ultimate master of AGI and, through it, to gain control over the human race? Could we become pawns in a conflict between AGI models developed by, for example, Google and the Chinese state? There are a great many questions to be asked, and a conspicuous lack of answers. It seems obvious that humanity is approaching a point where this becomes an issue beyond our control. Given the decline of ethics as a consideration in governance over the last fifty years, it is hard to imagine companies adopting ethical guidelines as central tenets of their policies, and competition makes transparency and accountability highly unlikely as well. If Connor Leahy, Geoffrey Hinton, Ray Kurzweil, and at least 1,500 other experts are correct, we are approaching the point of no return, and if we continue past it we will literally not be able to stop. When you consider this in tandem with a technology like CRISPR-Cas9, the dangers multiply exponentially. So AGI really does pose an existential threat, and we have to consider developing systems to control it before we continue with its implementation.

Ray Kurzweil offers the following insights:

“We’re ultimately going to be using biological simulation of humans to replace human testing. I mean, rather than spending a year or several years testing the results on a few hundred subjects, none of which probably match you, we will test it on a million or more simulated humans in just a few days. So to cure cancer, for example, we’ll simply feed in every possible method that can detect cancer cells from normal cells and destroy them or do anything that would help us. And we won’t evaluate them. We’ll just feed in all the ideas we have about each of these possibilities into the computer. The computer will evaluate all of the many billions of sequences and provide the results. We’ll then test the final product with simulated humans, also very quickly.
And we’ll do this for every major health predicament. It will be done a thousand times faster than conventional methods. And based on our ability to do this, we should be able to overcome most significant health problems by 2029.
That’s, by the way, my prediction for passing the Turing test. I came out with that in 1999. People thought that was crazy. Stanford had a conference. Actually, 80 percent of the people who came didn’t think we would do it, but they thought it would take 100 years. They keep polling people. And now everybody actually thinks that we will actually pass the Turing test by 2029. And actually, to pass the Turing test, meaning it’s equivalent to humans, we’re actually going to have to dumb them down, because if it does everything that a computer can do, we’ll know it’s not a human. But this will lead people who are diligent about their health to overcome many problems, reaching what I call longevity escape velocity by the end of this decade.
Now, this doesn’t guarantee living forever. I mean, you can have a 10-year-old and you can compute their life expectancy, whatever, many, many decades, and they could die tomorrow. So it’s not a guarantee for living forever. But the biggest problem we have is aging, and people actually die from aging.”
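Stripped of the rhetoric, what Kurzweil describes is a brute-force screening loop: generate candidate interventions, score each against a population of simulated patients, and keep the best. A minimal Python sketch follows; the function `simulate_outcome` is an entirely made-up stand-in for the biological simulation he assumes, and all the numbers are arbitrary.

```python
import random

# Toy sketch of the in-silico screening loop Kurzweil describes.
# simulate_outcome is a hypothetical stand-in for a faithful model
# of human biology; here, patients and therapies are just random numbers.

random.seed(0)

def simulate_outcome(potency: float, sensitivity: float) -> float:
    """Made-up model: benefit is highest when potency matches the patient."""
    return 1.0 - abs(potency - sensitivity)

candidates = [random.random() for _ in range(1000)]  # candidate therapies
patients = [random.random() for _ in range(200)]     # simulated humans

def mean_benefit(potency: float) -> float:
    """Average benefit of one candidate across all simulated patients."""
    return sum(simulate_outcome(potency, s) for s in patients) / len(patients)

best = max(candidates, key=mean_benefit)
print(f"best candidate: potency {best:.3f}, mean benefit {mean_benefit(best):.3f}")
```

The loop itself is trivial; everything Kurzweil is betting on lives inside `simulate_outcome`. Whether a model faithful enough to “replace human testing” can exist by 2029 is precisely the contested claim.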

Kurzweil is not alone in his thinking. There is no way we can predict what lies ahead, and historically new innovations are adopted quickly; only later do the drawbacks and issues emerge. A good example is the work of Thomas Midgley Jr., the man who gave us leaded petrol and CFCs, both of which were widely adopted before we realised that every human on earth was being poisoned by lead and that the ozone layer was being destroyed. A hundred years later, Midgley’s legacy continues to affect every single one of us. AGI is far more capable than Midgley was, and without an ethical basis it is also more threatening. Programming a system that accounts for rational and irrational human behaviour, double standards, and ethical as well as unethical responses may be impossible. How can a system be both logical and illogical? Another trend has been toward human-computer interfaces like Neuralink. While these devices may well offer new pathways and independence for the disabled, it seems likely that once proven they will be more widely adopted because of the benefits they confer.

Imagine, for example, that technologies like Neuralink were widely adopted and an AGI gained access. What if an AGI gained administrative-level access to the computer systems of all of the world’s major corporations, banks, and governments, or used CRISPR and DNA recovered from extinct species to build giant sabre-toothed tigers that bred faster than rabbits and had hides like Kevlar? It is possible that use of these technologies will be restricted, but to whom? At present military and police forces are early adopters; if legislation similar to our gun control laws were introduced, they might become the only groups with access. The technology is expensive to produce. Does this mean it will only be available to the rich and powerful? On present indications the answer is yes.

One suspects George Orwell and Aldous Huxley would be astonished by our willingness to be led along this path. In a sense history is repeating itself, with the worst fears of our visionaries amplified. The dystopian worlds they feared are upon us, and the possibilities are endless and mind-boggling. There might not be time to hoist the white flag, and there would be no guarantee that the AGI would respect it even if we did. If it is smarter, faster, and stronger, might it not also be more deceitful? At this stage neither the controls nor the legislation are in place, so the possibility that AI could gain complete control over all our assets is very real. CBDCs and Chinese-style social credit systems offer attack vectors an AGI could easily exploit to gain control over us, both collectively and as individuals. Could we see individual lives terminated because their social credit scores are deemed unacceptable? All these possibilities are beyond the realm of conjecture now. The future is already here. Are we in mortal danger? I’d like to hear your thoughts.

In many ways the problem is in the way we think. We are the only creature that owns “things”, and the concept is relatively new. When the English came to Australia they declared they owned the land in the name of King George III, a ridiculous notion given they had no idea what they were claiming, done on the direction, and in the name, of a violent and mentally ill monarch. The king’s word was enforced by soldiers with guns who were taught that native people were to be treated like animals or vermin. In Australia they only became “human” following a referendum in 1967. The indigenous groups had territories that they defended at times, but the idea of paying for land was foreign to them. To them, as to all other non-human life forms, the bounty of the earth was free. There were no fences, though there were, in places, markers for the boundaries of tribal lands.

We introduced the idea of ownership of plants and animals as commodities. We cultivated them beyond what we needed to survive as a means of gaining currency, which allowed us to trade. Indeed the sum total of our efforts beyond survival is seen in our buildings and structures, our cars, ships, planes, and the millions of products we have produced: the things we control. We judge ourselves by the quantity of commodities we have accumulated. We demonstrate compassion, but we are also, at times, unpredictable and ruthless killers and thieves. This is “normal” human behaviour. We think of ourselves as largely honest, but in reality perhaps 90% of people are opportunists, only 5% are actually honest, and 5% are totally dishonest. It seems inevitable that AGI will cultivate the same values, the same patterns, and the same thinking. It may be that the machines we have produced will hold us to account by being more logical and fairer than we are, but at this stage that appears unlikely. We have done this to ourselves, and we have only ourselves to blame.

Do you think AI needs legislative and judicial oversight?
