Countries around the world see Artificial Intelligence (AI) as a critical general-purpose technology that can deliver competitive advantage in productivity, economic growth and, of course, warfare, whether cyber or real-world. Governments have poured billions of dollars into academia and industry to gain an edge, and China, Israel, America, the UK, Canada and others have all staked claims to leadership in AI.
While the kind of AI we see in Hollywood movies, what is termed general AI, remains a long way off (and some think it may never happen), the field is moving fast, and even in 2020 we saw significant progress around the world.
We’ve also learned over the past few years that AI is shaped by gender, race and culture, and thus by societal values. A UNESCO report delved into the issue of gender equality in AI and found its impacts to be significant. Other research has shown how race influences the development of AI. Then there are organisations, and some governments, calling on countries not to weaponize AI. But the recent killing of an Iranian nuclear scientist reportedly involved an AI-assisted machine gun, used to deadly effect. That rabbit is out of the bag.
This leads to the challenge of developing AI for the betterment of human society as a whole, and that in turn leads to geopolitics. To develop AI that benefits humanity, governments must agree on a consistent set of common values, then put regulations or legislation in place to ensure those values are followed by academia, government departments and industry.
Such values might include ensuring gender and racial equality in the development of AI tools and applications; protecting citizens’ privacy rights; ensuring that AI is not used to manipulate consumer buying decisions (although it may be too late for that already); and guarding human agency and free will. And, of course, limits on the use of AI as a weapon; that too may already be too late.
These would have been significant challenges even in the old age of globalization, but as that age comes to a close, we are entering a period of disorder. One feature of this disorder is the divergence of values between countries. Whereas America and other democracies believe in freedom of expression and open markets, countries like China and Russia hold that the State is the authority and that citizens must follow its thinking, with no choice in the matter. These are diametrically opposed values. The United Nations is increasingly less effective, and democracies around the world face significant challenges as authoritarian regimes leverage technologies such as social media to destabilize them.
For AI to be of true benefit to humanity, it needs certain values baked in: respect for human rights, gender and racial equality, privacy and human agency. But that means countries that don’t really believe in these values must agree to them, and must put suitable rules and processes in place to govern those values and ensure they are followed.
In an increasingly fractured and divisive world, this can seem almost, and perhaps inevitably, impossible. Perhaps it is. In and of itself, AI is arguably not dangerous; like any information technology, it does what we tell it to. But should we discover what consciousness is and how to deploy it in AI, then that AI, as it shifts from narrow to general, may well inherit the values of the nations in which it is developed. The consequences could then be far more serious than those we face today.
For Artificial Intelligence to benefit humanity, we must first agree on what human values for good are.
There may be some hope. Humanity realized fairly quickly the destructive power of nuclear weapons; the doctrine of MAD (Mutually Assured Destruction) followed, and ever since we have had a tenuous, fragile peace and a set of agreements between nations. If humanity can recognize that AI poses a similar existential threat to our planet and, most importantly, to the future of humanity, then perhaps we can reach similar treaties around the use of AI as a weapon. This may also give rise to a new Cold War, based not so much on nuclear weapons (though that threat will remain until we get rid of them) as on AI and related technologies. As these evolve, we must recognize that whether a country is democratic and free or authoritarian and oppressive, the survival of our species transcends mutually assured destruction.