Digital supremacy: there’s a reason only the US has the world’s most powerful drones | Urmila Chakrabarti

The news that Google could use its AI in warfare is yet another reminder of the need for regulators to make sure that algorithms, and AI in general, do not become a significant cause of harm. As the Trump administration undermines America’s chances of success in the trade war with China by turning it into a technology war, smart people inside and outside the White House remain caught in an extremely outdated sense of geopolitics. Meanwhile, large American companies such as Alphabet (Google’s parent company) and IBM are planning to bid for contracts within America’s $600bn military-spending budget. In some ways, this is not surprising: intelligence agencies have been using artificial intelligence and related technologies for quite some time, and the Pentagon’s budget is especially lucrative for large tech companies.

And while it would be easy to write off the use of AI in these potentially lucrative areas as corporate greed, it is essential that regulators ensure AI does not create the kinds of risks that have plagued other countries. The US, for instance, is the only major military power without even an ethics charter or regulatory framework for AI research and deployment. Instead, agencies such as the Defense Department and the CIA rely on a thicket of largely non-transparent defense-spending “areas of activity” and “challenge funding”. These controversies should only heighten awareness of how important public accountability is. This level of secrecy raises the chance of self-reinforcing policy that endangers our national security.

On a global level, human rights expert James Lewis reported that the United States has the “highest number of armed drones in the world” among countries that keep such figures, and our relative monopoly on these lethal applications of technology is already an open secret. Any AI-related policy change therefore has to include safeguards that prevent the misuse of emerging technologies.

This is not so much about the risk that the US government will “go rogue” and turn AI weapons against domestic or foreign populations, as some internet activists have portrayed it, but about whether US companies will eventually succeed at deploying AI without human intervention. The biggest use of artificial intelligence is in the military and the public sector, but that doesn’t mean the public should not have a say in how AI is used by our civilian leaders.

No other country uses AI for civilian purposes without reserving for itself the right to defend against foreign and other threats. Similarly, under US law, the executive branch is allowed to deploy its own weapons systems, but it must justify their scientific basis under congressional oversight. The US should therefore be doing more to ensure that we, as citizens, do not have to rely solely on the executive branch for any implementation of AI technology in domestic defense policy.

Beyond policy debates over national security and domestic law, state and local governments also need a way to regulate human involvement in the development and use of algorithms, such as those used in criminal justice cases. Police and criminal justice officials are quickly learning that algorithms in their systems are a likely gateway to mass incarceration, human rights abuses and a range of other wrongs. In some states, so-called “commonsense” rules have been introduced. In California, for example, officials are not allowed to deploy criminal justice algorithms without “informed affirmative consent” from people who have had personal contact with the system.

Meanwhile, the Trump administration has not yet settled on a strategy for steering AI in a positive direction. An executive order from President Trump requiring the federal government to set ethical guidelines for its use of AI does not address the vital issue of accountability for that use. Given this secrecy, the development of AI in a democratic society is likely to take place in adversarial environments without transparency.
