Defense AI and Arms Control Network
Defense AI and Arms Control Network is a platform for monitoring, aggregating, and analyzing worldwide advances and policies on defense AI and arms control. We search for and collect reports and analysis related to military AI ethics and governance and AI arms control, and update our services daily. Inclusion does not imply that we agree with any of the opinions expressed in these resources. Recommendations of sources and reports for the Defense AI and Arms Control Network are highly welcome and appreciated.


commentary

GUEST BLOG: Questions about: SR-72 aircraft, 6G NGAD fighter plane, and B-21 Stealth Bomber

Department of Defense (DoD) Directive 3000.09 basically says that a human must pull the trigger on lethal weapons in a war (man in the loop), but the wording is ambiguous. It appears that fully autonomous weapons are acceptable for defensive operations, or where appropriate in offensive operations. We already have some such systems: the AEGIS Combat System and the Phalanx system can operate autonomously. The directive suggests limits on what the Air Force can do with artificial intelligence (AI) and weapons on these new aircraft.

military embedded systems

statement

Stop Killer Robots statement to the CCW annual Meeting of High Contracting Parties, 13-15 November 2024

Stop Killer Robots statement to the annual Meeting of High Contracting Parties to the CCW

stop killer robots

commentary

Eurofighter Typhoon: It Might Be Europe's Best Fighter Jet Ever

The Eurofighter Typhoon, a joint project of the UK, Germany, Italy, and Spain, represents a landmark in European defense integration. The advent of artificial intelligence (AI) has led many European defense analysts to call for an entirely new Eurofighter, a sixth-generation bird that incorporates things like AI.

national interest

commentary

How to Manage AI Big-Data Risks

Establishing a taxonomy for AI risks would enable researchers, policymakers, and industries to communicate effectively and coordinate their efforts.

national interest

statement

Stop Killer Robots statement to the First Committee on Disarmament and International Security

Read the Stop Killer Robots statement to First Committee at the 79th United Nations General Assembly.

stop killer robots

meeting report

Ethical use of big data for healthy communities and a strong nation: unique challenges for the Military Health System

Recent advances in artificial intelligence (AI) created powerful tools for research, particularly ... , health-related data in the Department of Defense (DoD). Discussions explored researchers’ ethical...

springer

open forum

What’s wrong with “Death by Algorithm”? Classifying dignity-based objections to LAWS

The rapid technological advancement of AI in the civilian sector is accompanied by accelerating attempts to apply this technology in the military sector. This study focuses on the argument that AI-equipped let...

springer

commentary

New AI-powered strike drone shows how quickly battlefield autonomy is evolving

First-person drone piloting is yesterday’s news. Drones are becoming smarter as the electronic environment around them makes operator communication more difficult.

defense one

commentary

Forget F-22, F-35 or NGAD: What a 7th Generation Fighter Could Be Like (In 2070)

Future jets could be fully unmanned, with rapid design and production through 3D printing and AI-driven simulations.

national interest

analysis

Will AI fundamentally alter how wars are initiated, fought and concluded?

In this post, Erica Harper sets out the possible implications of AI-enabled military decision-making as this relates to the initiation of war, the waging of conflict, and peacebuilding. She highlights that while such use of AI may create positive externalities — including in terms of prevention and harm mitigation — the risks are profound. These include the potential for a new era of opportunistic warfare, a mainstreaming of violence desensitization and missed opportunities for peace. Such potential needs to be assessed in terms of the current state of multilateral fragility, and factored into AI policy-making at the regional and international levels.

icrc blog

analysis

Transcending weapon systems: the ethical challenges of AI in military decision support systems

In this post, Matthias Klaus, who has a background in AI ethics, risk analysis and international security studies, explores the ethical challenges associated with a military AI application often overshadowed by the largely dominating concern about autonomous weapon systems (AWS). He highlights a number of ethical challenges associated specifically with DSS, which are often portrayed as bringing more objectivity, effectiveness and efficiency to military decision-making. However, they could foster forms of bias, infringe upon human autonomy and dignity, and effectively undermine military moral responsibility through peer pressure and deskilling.

icrc blog

open forum

International governance of advancing artificial intelligence

New technologies with military applications may demand new modes of governance. In this article, we develop a taxonomy of technology governance forms, outline their strengths, and red-team their weaknesses.

springer

toolkit

Parliamentary engagement

While parliamentarians won’t participate directly in the negotiation of an international treaty banning and regulating killer robots – diplomats do, under instructions from their governments – they will have a vital role in rejecting the automation of killing, ensuring meaningful human control over the use of force, and building momentum towards a treaty. Engaging with them is therefore an important part of our lobbying efforts. This guide aims to assist campaigners in parliamentary engagement. Although there are specificities to approaching and engaging parliamentarians, and you will have to adjust based on your national context, parliamentary outreach should be a critical part of your overall advocacy and lobbying activities.

stop killer robots

commentary

How to Slow the Spread of Lethal AI

Today, it is far too easy for reckless and malicious actors to get their hands on the most advanced and potentially lethal machine-learning algorithms.

national interest

analysis

The risks and inefficacies of AI systems in military targeting support

As AI-based decision support systems (AI DSS) are increasingly used in contemporary battlefields, Jimena Sofía Viveros Álvarez, member of the United Nations Secretary General’s High-Level Advisory Body on AI, REAIM Commissioner and OECD.AI Expert, advocates against the reliance on these technologies in supporting the target identification, selection and engagement cycle as their risks and inefficacies are a permanent fact which cannot be ignored, for they actually risk exacerbating civilian suffering.

icrc blog

analysis

The problem of algorithmic bias in AI-based military decision support systems

Algorithmic bias has long been recognized as a key problem affecting decision-making processes that integrate artificial intelligence (AI) technologies. The increased use of AI in making military decisions relevant to the use of force has sustained such questions about biases in these technologies and in how human users programme with and rely on data based on hierarchized socio-cultural norms, knowledges, and modes of attention.

icrc blog

report

Nuclear Weapons and Artificial Intelligence: Technological Promises and Practical Realities

Recent advances in the capabilities of artificial intelligence (AI) have increased state interest in leveraging AI for military purposes. Military integration of advanced AI by nuclear-armed states has the potential to have an impact on elements of their nuclear deterrence architecture such as missile early-warning systems, intelligence, surveillance and reconnaissance (ISR) and nuclear command, control and communications (NC3), as well as related conventional systems.

sipri

analysis

Artificial intelligence in military decision-making: supporting humans, not replacing them

Militaries are incorporating increasingly complex forms of artificial intelligence-based decision support systems (AI DSS) into their decision-making processes, including decisions on the use of force. The novelty of this development is that the way these AI DSS function challenges the human’s ability to exercise judgement in military decision-making processes. This potential erosion of human judgement raises several legal, humanitarian and ethical challenges and risks, especially in relation to military decisions that have a significant impact on people’s lives, their dignity, and their communities. It is in light of this development that we must urgently and in earnest discuss how these systems are used and their impact on people affected by armed conflict.

icrc blog

statement

Statement by Stop Killer Robots to the GGE on lethal autonomous weapons systems, 26-30 August

Read the statement in full from Stop Killer Robots to the first 2024 session of the Group of Governmental Experts of the High Contracting Parties related to emerging technologies in the area of lethal autonomous weapons systems (LAWS).

stop killer robots

report

Overview of state submissions to UN Secretary-General report on Autonomous Weapons

Stop Killer Robots’ research and monitoring team has produced a publication summarising State submissions to the highly anticipated UN Secretary-General’s report on Autonomous Weapons.

stop killer robots

analysis

Building the Tech Coalition

How Project Maven and the U.S. 18th Airborne Corps Operationalized Software and Artificial Intelligence for the Department of Defense

cset

commentary

Drones Are Destroying Everything In Ukraine: War Will Never Be the Same

There are still concerns about leaving too much control to the machines, lest stories from science fiction become self-fulfilling prophecies. But the fact remains that the use of drones could help keep soldiers out of harm's way.

national interest

commentary

Forget NGAD or F/A-XX: What a 7th Generation Fighter Could Be Like (In 2070)

While the seventh generation isn't yet defined, it may feature autonomous capabilities, advanced materials, and multinational collaboration. However, such advancements could be decades away, possibly emerging in the 2070s or later.

national interest

commentary

'I'm afraid I can't do that': Should killer robots be allowed to disobey orders?

Militaries need to show it’s possible to build ethical killer robots that don’t say no, or engineer a safe right-to-refuse while keeping humans in the loop.

bulletin

report

Towards a Two-tiered Approach to Regulation of Autonomous Weapon Systems: Identifying Pathways and Possible Elements

As the global conversation on how to address the challenges posed by autonomous weapon systems (AWS) evolves, there is now growing support among states that one possible way to proceed is through a ‘two-tiered approach’. Such an approach would, on the one hand, prohibit certain types and uses of AWS and, on the other hand, place limits and requirements on the development and use of all other AWS. A critical task facing states is to agree on how such a two-tiered approach could be enacted.

sipri

original research

Command responsibility in military AI contexts: balancing theory and practicality

Artificial intelligence (AI) has found extensive applications to varying degrees across diverse domains, including the possibility of using it within military contexts for making decisions that can have moral consequences. A recurring challenge in this area concerns the allocation of moral responsibility in the case of negative AI-induced outcomes.

springer

commentary

Here Comes Terminator: Former Joint Chiefs Chairman Predicts U.S. Military Will be Armed With Robots

Retired U.S. Army General Mark Milley predicts that robots and autonomous systems could comprise up to one-third of the U.S. military by 2039, potentially operated and commanded by artificial intelligence (AI).

national interest

commentary

Bomber Drama: The B-21 Raider Nightmare Has Just Begun

The B-21 Raider stealth bomber, still in development, faced a $1.6 billion cost overrun in late 2023, raising concerns about its expense amid the rise of drone warfare. Despite this, the B-21 is intended to replace the B-2 Spirit, whose stealth capabilities are becoming outdated.

national interest

commentary

Is the Age of the Submarine Over?

As warfare evolves, traditional military strategies and platforms must adapt or face obsolescence. The U.S. military, heavily reliant on aircraft carriers, now confronts the growing threat of anti-access/area denial (A2/AD) systems.

national interest

analysis

Enabling Principles for AI Governance

How to govern artificial intelligence is a concern that is rightfully top of mind for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of the benefits and risks; and 3) preserve adaptability and agility by developing policies that can be updated as AI evolves.

cset

original article

AI and Warfare: A Rational Choice Approach

Artificial intelligence has been a hot topic in recent years, particularly as it relates to warfare and military operations. While rational choice approaches have been widely used to understand the causes of war, there is little literature on using the rational choice methodology to investigate the role of AI in warfare systematically.

springer

analysis

Reinventing the wheel? Three lessons that the AWS debate can learn from existing arms control agreements

To help states elaborate on possible elements of a two-tiered approach to the governance of AWS, Laura Bruun from the Stockholm International Peace Research Institute (SIPRI) points to three lessons from past arms control negotiations that can be applied to the AWS debate.

icrc blog

annual report

Campaign to Stop Killer Robots - 2023 Annual Report

The 2023 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2023 to March 2024.

stop killer robots

annual report

Campaign to Stop Killer Robots - 2022 Annual Report

The 2022 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2022 to March 2023.

stop killer robots

annual report

Campaign to Stop Killer Robots - 2021 Annual Report

The 2021 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2021 to March 2022.

stop killer robots

commentary

What Could Go Wrong? Russia Vows to Develop Autonomous Drones

That should be seen as a danger not just for those in Ukraine today, but perhaps all of humanity in the not-so-distant future.

national interest

commentary

The B-21 Raider Question the U.S. Air Force Needs to Ask

Manned aircraft are still relevant. And with the onset of the B-21, manned aircraft should continue to be relevant. But automation and artificial intelligence are coming, and will one day encroach upon the pilot’s job security.

national interest

response paper

Stop Killer Robots submission on autonomous weapon systems to the UN Secretary-General

The submission from Stop Killer Robots to the United Nations Secretary-General in response to Resolution 78/241 on autonomous weapons systems.

stop killer robots

commentary

NATO alleges intensifying campaign of Russian hybrid activities on alliance territory

A 2020 study identified at least 12 NATO member states as using social media to spread computational propaganda and disinformation, while two (the UK and USA) were shown to have high “cyber troop” (government or political party actors tasked with manipulating public opinion online) capacity. Such activities appear to be connected to US special forces and intelligence agencies, and are being linked to private sector initiatives using artificial intelligence.

nato watch

commentary

Autonomous F-16 Fighters Are ‘Roughly Even’ With Human Pilots Said Air Force Chief

The future loyal wingmen of the United States Air Force are inching closer to becoming a reality, and, more importantly, the artificial intelligence (AI) controlled aircraft could be on track to be as good as any human pilot. That was the assessment of Air Force Secretary Frank Kendall, who recently took flight in an autonomously controlled X-62A VISTA (Variable In-flight Simulation Test Aircraft), a modified F-16 Fighting Falcon.

national interest

commentary

How AI is Redefining Middle Eastern Warfare

Israel and the Gulf States are betting on Artificial Intelligence to help them fend off Iranian drones and proxies.

national interest

commentary

Drink the Kool-Aid all you want, but don't call AI an existential threat

Generative AI can wreak havoc in many ways but it’s not an existential threat any more than computer code is.

bulletin

commentary

The U.S. Air Force Is Mock Dogfighting AI Piloted F-16 Fighter Jets

The U.S. Air Force is advancing artificial intelligence (AI) capabilities within its ranks by incorporating AI pilots into F-16 combat aircraft as part of DARPA's Air Combat Evolution (ACE) program.

national interest

commentary

AI Top Gun?: Autonomous F-16 Just Took Part in a Dogfight With Manned Fighter

In the recently disclosed flights, the ACE AI algorithms took control of a specially modified F-16 Fighting Falcon test aircraft designated the X-62A, or VISTA (Variable In-flight Simulation Test Aircraft), at the Air Force Test Pilot School at Edwards Air Force Base (AFB), California. The demonstrations of autonomous combat maneuvers began last year.

national interest

commentary

Listen up, UN: Soldiers aren't fans of killer robots

Surprisingly, people serving in the US military are less likely than the general public to support using unmanned vehicles in military operations, even when doing so could save soldiers’ lives.

bulletin

original paper

Explainable AI in the military domain

In the military domain, numerous bodies have argued that autonomous and AI-enabled weapon systems ought not to incorporate unexplainable AI.

springer

commentary

Algorithms of War: The Use of AI in Armed Conflict

As countries prepare to deploy lethal autonomous weapon systems at scale, artificial intelligence is being integrated into drone operations and to support human decision-making in conflicts around...

carnegie council

commentary

Coming Soon: Autonomous F-16 Fighting Falcons?

The U.S. Air Force is advancing its Next Generation Air Dominance (NGAD) program by integrating autonomous capabilities into older F-16 Fighting Falcons as part of the VENOM-AFT program.

national interest

commentary

US-UK safety pact could shape the future of AI

Two research institutes will collaborate on AI safety tests, among other things.

defense one

commentary

Lawmakers want answers from Pentagon on AI developments with Australia, UK

Senators are seeking more information about AI safety within the AUKUS program.

defense one

article

General Assembly adopts landmark resolution on artificial intelligence

The UN General Assembly announced the unanimous adoption of a 13-point resolution aimed at regulating and ensuring the security of the field of artificial intelligence (AI). The resolution was...

new defence order strategy

analysis

Falling under the radar: the problem of algorithmic bias and military applications of AI

Last week, states parties met for the first session of the Group of Governmental Experts ...

icrc blog

research article

From AI Ethics Principles to Practices: A Teleological Methodology to Apply AI Ethics Principles in The Defence Domain

This article provides a methodology for the interpretation of AI ethics principles to specify ethical criteria for the development and deployment of AI systems in high-risk domains.

springer

commentary

The big AI research DARPA is funding this year

The Defense Department’s key research arm will experiment with ethical chatbots and new robot super pilots.

defense one

report

Enabling Technologies and International Security: A Compendium (2023 Edition)

There is an urgent need for a more thorough and comprehensive examination of enabling technologies as well as their potential impacts on international security.

unidir

statement

Statement by Stop Killer Robots to the GGE on lethal autonomous weapons systems, 4-8 March

Read the statement in full from Stop Killer Robots to the first 2024 session of the Group of Governmental Experts of the High Contracting Parties related to emerging technologies in the area of...

stop killer robots

commentary

The worldwide market for unmanned naval vessels

WARFARE EVOLUTION BLOG. In our last escapade, we investigated the worldwide market for warships and submarines. Out of respect for the literary principle of subject matter continuity, we are forced...

military embedded systems

main paper

Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration

There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous ... assurance relating to the development and use of AI in military setti...

springer

open forum

Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance

The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance...what to the how of AI ethics sees a nascent body of literature published by defence

springer

commentary

Killer Robots Are Coming to the Battlefield

The proliferation of autonomous weapons systems (AWS)—often (mis)labeled ‘killer robots’—is a modern concern.

national interest

commentary

The Future of Missile Defense

New technologies for anti-missile defense are challenging the assumed priority of offense over defense.

national interest

commentary

How NGAD, F/A-XX and B-21 Raider will Transform the U.S. Military

As tensions between Washington and Beijing continue to ramp up, the arms race to develop the world’s first next-generation fighters is on. From submarines and fighters to bombers and main battle tanks, the U.S. and China are prioritizing the development of advanced and cutting-edge technologies. Perhaps the most anticipated sixth-generation designs are the upcoming Next-Generation Air Dominance (NGAD) program, the B-21 stealth bomber and the F/A-XX airframe.

national interest

commentary

Will Artificial Intelligence Lead to War?

The impact of generative AI on Asian deterrence is not well understood and may create greater risks of conflict.

national interest

commentary

Autonomous drone swarms and the contested imaginaries of artificial intelligence

AI-based autonomous weapon systems (AWS) have the potential of weapons of mass destruction. While the effects of AWS are downplayed by the military and the arms industry staging these systems, it is also argued that they can be built on the basis of a ‘responsible’ or ‘trustworthy’ artificial intelligence (AI).

springer

news

Killer Robots: UN Vote Should Spur Action on Treaty

Countries that approved the first-ever United Nations General Assembly resolution on “killer robots” should promote negotiations on a new international treaty to ban and regulate these weapons,...

human rights watch

commentary

Forget NGAD: What a 7th Generation Fighter Could Be Like

While 6th generation fighters like the NGAD, Tempest, and F/A-XX are all the rage, a 7th generation fighter is already being considered in some defense circles.

national interest

commentary

B-21 Raider: The Last Stealth Bomber with a Pilot?

Could the new B-21 Raider be the last U.S. Air Force bomber with pilots at the controls of this expensive warplane?

national interest

commentary

Algorithmic predictions and pre-emptive violence: artificial intelligence and the future of unmanned aerial systems

The military rationale of a pre-emptive strike is predicated upon the calculation and anticipation of threat. The underlying principle of anticipation, or prediction, is foundational to the operative logic of ...

springer

perspective

Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare

With the ongoing AI arms race in the Russia-Ukraine War, it is expected that AI-powered lethal weapon systems will become commonplace in warfare

springer

report

Exploring Synthetic Data for Artificial Intelligence and Autonomous Systems: A Primer

The Primer explores existing data challenges, both technical and organizational, introduces key technical characteristics and methods of generating synthetic data, and analyzes implications of using synthetic data in the context of international security, including for autonomous systems and in the cyber realm.

unidir

commentary

AI and the future of warfare: The troubling evidence from the US military

US military officers can approve the use of AI-enhanced military technologies that they don't trust. And that's a serious problem.

bulletin

main paper

Three lines of defense against risks from AI

Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks—for economic, legal, and ethical reasons. However, it is not always clear who is responsible for A...

springer

article

Autonomous weapons and digital dehumanisation

This short explainer paper discusses autonomous weapons in the context of digital dehumanisation.

stop killer robots

briefing paper

Targeting people and digital dehumanisation

This short briefing paper addresses the need for a prohibition on autonomous weapons systems designed or used to target humans, and the digital dehumanisation inherent in such systems.

stop killer robots

report

Convergences in state positions on human control

This paper presents an examination of convergences in state positions on human control in the context of autonomy in weapons systems.

stop killer robots

news

Today's D Brief: EU to miss Ukraine-aid goal; F-16 training base opens; NATO’s cable focus; DOD’s new ethical-AI tools; And a bit more.

The Pentagon just released a new set of “ethical artificial intelligence tools” to help users use the technology more responsibly

defense one

analysis

Algorithms of war: The use of artificial intelligence in decision making in armed conflict

IHL calls for a ‘human-centered’ approach to the development and use of AI in armed conflict – to try to preserve humanity in what is already an inhumane activity.

icrc blog

interview

'AI Godfather' Yoshua Bengio: We need a humanity defense organization

In this interview, AI godfather Yoshua Bengio discusses attention-grabbing headlines about AI, taboos among AI researchers, and why top AI researchers may disagree about the risks AI may pose to humanity...

bulletin

analysis

The Inigo Montoya Problem for Trustworthy AI (International Version)

Australia, Canada, Japan, the United Kingdom, and the United States emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries define each of these principles in slightly different ways that could have large impacts on interoperability and the formulation of international norms.

cset

news

Protect Humanity from Killer Robots

United Nations Secretary-General António Guterres and President of the International Committee of the Red Cross Mirjana Spoljaric have a new message for governments: “act now to preserve human...

human rights watch

commentary

An Autonomous Osprey MK III Just Passed a Key Military Test

The flights with the Osprey MK III were the first major experimental effort for the new ADAx proving ground, but others are soon to follow.

national interest

q&a

FAQs on UNGA Resolution

A resolution on Autonomous Weapons Systems (AWS) will be tabled at the First Committee of the 78th Session of the United Nations General Assembly in October 2023. This document provides brief answers to some frequently asked questions on the issue.

stop killer robots

policy brief

5 reasons why a resolution at the UNGA is vital for progress on Autonomous Weapons Systems

The First Committee session of the 78th Session of the United Nations General Assembly (UNGA) in October 2023 provides a crucial opportunity for progress.

stop killer robots

commentary

NSA to stand up AI security center

The National Security Agency is standing up an artificial intelligence security center, with the end goal of promoting the secure development, integration, and adoption of AI capabilities within national security systems and the defense industrial base.

defense one

commentary

Washington’s Bet on AI Warfare

The future of warfare will certainly be data-driven and AI-enabled, and, in many ways, it already is.

national interest

commentary

AI and atoms: How artificial intelligence is revolutionizing nuclear material

There's a three-dimensional solution to manage the evolving dual-use concern of AI: advance states-centric monitoring and regulation, promote intellectual exchange between the non-proliferation...

bulletin

commentary

Interview: Emerging military technology expert Paul Scharre on global power dynamics in the AI age

The author of "Four Battlegrounds: Power in the age of artificial intelligence" surveys in matter-of-fact detail the struggle for world leadership in AI—especially as it relates to US-China power...

bulletin

commentary

Why AI for biological design should be regulated differently than chatbots

LLM-based chatbots and bio-design tools influence the biosecurity landscape in different ways and require independent governance.

bulletin

commentary

Biotech promises miracles. But the risks call for more oversight

Despite the dramatic pace of discoveries in the life sciences, the regulatory systems established for other dual-use risk domains, such as chemical and nuclear research, remain far more mature than...

bulletin

conference paper

Nuclear Weapons and the Militarization of AI

This contribution provides an overview of nuclear risks emerging from the militarization of AI technologies and systems. These include AI enhancements of cyber threats to nuclear command, control and communica...

springer

commentary

Convergence: Artificial intelligence and the new and old weapons of mass destruction

Artificial intelligence is capable of amplifying the risks of other technologies, and demands a reevaluation of the standard policy approach.

bulletin

commentary

Inside the messy ethics of making war with machines

AI is making its way into decision-making in battle. Who’s to blame when something goes wrong?

mit technology review

commentary

War is messy. AI can't handle it.

As AI becomes part of military decision-making, it’s important to be wary of the pristine ideas of how technology can transform conflict.

bulletin

report

U.S.-China Competition and Military AI

This report explores how the United States can manage strategic risks—defined as increased risks of armed conflict or the threat of nuclear war—that could be created or exacerbated by military AI in its relationship with China...

cnas

commentary

US is losing AI edge to China, experts tell lawmakers

China is directing more of its AI-related research into defense applications than the United States, whose tech sector is more focused on consumer AI services such as ChatGPT.

defense one

commentary

'Artificial Escalation': Imagining the future of nuclear risk

The reasons not to integrate AI into comprehensive nuclear command, control, and communications systems are manifold. They involve increased speed of warfare, accidental escalation, misperception...

bulletin

analysis

Artificial Intelligence and Arms Control – How and Where to Have the Discussion

The UN Security Council will discuss the implications of artificial intelligence for the maintenance of international peace and security for the first time in July 2023. The impact on arms control is a crucial element. So far, though, discussions have been limited and disjointed.

gcsp

report

Weaponizing Innovation? Mapping Artificial Intelligence-enabled Security and Defence in the EU

The paper provides a cautionary tale regarding the mainstreaming of AI-driven technological solutions into security and defence across the EU, noting that this legitimizes a specific geopolitical and militaristic imaginary of innovation that might not be compatible with the EU’s promotion of responsible, trustworthy and human-centric visions of such systems...

sipri

commentary

Most AI research shouldn't be publicly released

Transparency in scientific research is undeniably valuable. But it would be a mistake for AI research to be completely transparent. To minimize harm, dual use technologies—especially those like AI...

bulletin

news

Palestinian Forum Highlights Threats of Autonomous Weapons

Autonomous weapons systems could help automate Israel’s uses of force. These uses of force are frequently unlawful and help entrench Israel’s apartheid against Palestinians. Without new international law to avert the dangers this technology poses, the autonomous weapon systems Israel is developing today could contribute to their proliferation worldwide and harm the most vulnerable...

human rights watch

news

Cluster Munition Convention Offers Roadmap for New Autonomous Weapons Treaty

Although political and procedural hurdles have impeded progress on addressing autonomous weapons systems, proponents of a new treaty should look to the success of the Convention on Cluster Munitions, and the negotiations that led to it, for inspiration....

human rights watch

opinion

Autonomous Weapons: Implications and Countermeasures

Defeating autonomous weapons requires a constant, preventive effort, as technology development can sometimes outpace politics. Governments, civil society organizations, researchers, and industry players must work together to properly navigate this complex topic and ensure the right and ethical implementation of emerging technology …

wgi

event

RoboEmercom-2023

On May 31, 2023, the III Scientific and Practical Conference on the Development of Robotics in the Field of Life Safety, known as «RoboEmercom», will take place. The conference will discuss experience in the use of Robotics and Technical Systems (RTS) in special military operations, along with associated problems and potential solutions...

new defence order strategy

commentary

Why the United States should prioritize autonomous demining technology

If the United States decides to send cluster munitions to Ukraine, it should consider investing in autonomous capabilities for demining.

bulletin

commentary

Regulate AI to Boost Trustworthiness and Avoid Catastrophe, Experts Tell Lawmakers

The difference between AI that’s a boon to society or a curse lies in truthfulness, a uniquely human concept.

defense one

paper

Adopting AI: how familiarity breeds both trust and contempt

Familiarity plays little role in support for AI-enabled military applications, for which opposition has slightly increased over time.

springer

commentary

How politics and business are driving the AI arms race with China

Commercial competition, politics, and public opinion are driving AI development in the United States—and unnecessarily escalating the AI arms race with China.

bulletin

report

Proposals Related to Emerging Technologies in the Area of Lethal Autonomous Weapons Systems: A Resource Paper (updated)

This resource paper offers a comparative analysis of the content of the different proposals related to emerging technologies in the area of lethal autonomous weapon systems (LAWS) submitted by...

unidir

commentary

Alteration of Rivalry in the 21st Century: from Oil to Artificial Intelligence (AI)

Among the major powers that have recognized the significance of AI in shaping the future of global supremacy dynamics are the United States, China, and Russia. With the goal of gaining an edge over each other, these countries have made significant investments in AI exploration and growth.

wgi

commentary

To avoid an AI “arms race,” the world needs to expand scientific collaboration

What should be done to manage AI and other technological advances that pose catastrophic risks? What the world should have done with nuclear technology: Expand scientific collaboration and avoid...

bulletin

article

AI, Automation, and the Ethics of Modern Warfare

In this blog post, Palantir Global Director of Privacy & Civil Liberties Engineering Courtney Bowman and Privacy & Civil Liberties Government and Military Ethics Lead Peter Austin explore the ethical role of technology providers in the defense domain. Future posts will explore Palantir’s work supporting defense workflows in the most consequential settings.

palantir

commentary

What happened when WMD experts tried to make the GPT-4 AI do bad things

The creators of ChatGPT decided to test whether their AI systems can teach someone to build and use nuclear and biological weapons. Was it enough?

bulletin

commentary

How science-fiction tropes shape military AI

Pop culture influences how people think about artificial intelligence, and that spills over to how military planners think about war—obscuring the more mundane ways AI is likely to be used.

bulletin

commentary

There's a 'ChatGPT' for biology. What could go wrong?

As cutting-edge AI-powered chatbots like ChatGPT come online, observers have begun to worry about the implications of content-producing AI in areas like employment and disinformation...

bulletin

speech

Expert Panel on the Social and Humanitarian Impact of Autonomous Weapons at the Latin American and Caribbean Conference on Autonomous Weapons

Thank you to Costa Rica and FUNPADEM for organizing this important conference. I will address some of the social and humanitarian consequences of autonomous weapons systems. By autonomous weapons...

human rights watch

news

Latin America and Caribbean Nations Rally Against Autonomous Weapons Systems

The push to prohibit and regulate autonomous weapons systems made significant progress last month when nearly every country in Latin America and the Caribbean endorsed a new communiqué calling for the “urgent negotiation” of a binding international treaty.

human rights watch

news

Digital Dehumanization Paves Way for Killer Robots

Last month, members of the Stop Killer Robots campaign met in Costa Rica with 68 campaigners from 29 countries for their first in-person global conference since the Covid-19 pandemic. A central...

human rights watch

analysis

Three lessons on the regulation of autonomous weapons systems to ensure accountability for violations of IHL

We argue that looking at how responsibility for IHL violations is currently ascribed under international law is critical not only to ensuring accountability but also to identifying clearer limits and requirements for the development and use of AWS.

icrc blog

analysis

Reducing the Risks of Artificial Intelligence for Military Decision Advantage

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty in the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood, and containing the consequences of, AI failures.

cset

report

Compliance with International Humanitarian Law in the Development and Use of Autonomous Weapon Systems: What does IHL Permit, Prohibit and Require?

It is undisputed that the development and use of autonomous weapon systems (AWS) must comply with international humanitarian law (IHL). However, how IHL rules should be interpreted and applied in the context of AWS remains, in some respects, unclear or disputed. With a particular focus on human–machine interaction, this report aims to facilitate a deeper understanding of this issue. The report provides a baseline for policymakers to advance discussions around what types and uses of AWS are (or should be) prohibited or regulated under existing IHL.

sipri

commentary

Is Humanity Risking Disaster? The Necessity of Autonomous Weapons System Governance

The logic supporting the development and deployment of autonomous weapons system (AWS) is a continuation of the escalatory deterrence strategy that characterized the Cold War, and fails to grasp how such systems will change the conduct of warfare...

carnegie council

policy

REAIM Call to Action

Government representatives meeting at the REAIM summit have agreed a joint call to action on the responsible development, deployment and use of artificial intelligence (AI) in the military domain.

government of the netherlands

commentary

US Woos Other Nations for Military-AI Ethics Pact

State Department and Pentagon officials hope to illuminate a contrast between the United States and China on AI.

defense one

commentary

Keeping humans in the loop is not enough to make AI safe for nuclear weapons

Increasing automation within nuclear weapon command systems means putting faith, and lives, in the hands of algorithms that may never fully understand.

bulletin

original paper

The irresponsibility of not using AI in the military

The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. H...

springer

news

US: New Policy on Autonomous Weapons Flawed

A new United States Department of Defense directive concerning development of autonomous weapons systems is an inadequate response to the threats posed by removing human control from the use of...

human rights watch

news

Review of the 2023 US Policy on Autonomy in Weapons Systems

A new directive on autonomy in weapons systems issued on January 25, 2023 shows the United States Department of Defense (DoD) is serious about ensuring it has policies and processes in place to...

human rights watch

original paper

Artificial intelligence and humanitarian obligations

Artificial Intelligence (AI) offers numerous opportunities to improve military Intelligence, Surveillance, and Reconnaissance operations. Modern militaries also recognize the strategic value of reducing civili...

springer

news

Implementation and Innovation

It is a time for innovation, especially in addressing the risks and dangers posed by autonomous weapons systems...

human rights watch

speech

Remarks by NATO Secretary General Jens Stoltenberg at the CHEY Institute during his visit to the Republic of Korea

Remarks by NATO Secretary General Jens Stoltenberg at the CHEY Institute during his visit to the Republic of Korea

nato

policy

AI Risk Management Framework (AI RMF 1.0)

On January 26, 2023, NIST released the AI Risk Management Framework (AI RMF 1.0) along with a companion NIST AI RMF Playbook, AI RMF Explainer Video, an AI RMF Roadmap, AI RMF Crosswalk, and various Perspectives.

nist

commentary

NATO's new AI initiatives: full speed ahead for new military technologies

In November 2021 NATO Watch published a critique of NATO's approach to the use of artificial intelligence (AI) for military purposes. This article provides a brief update to the critique following...

nato watch

research article

The Responsibility Gap and LAWS: a Critical Mapping of the Debate

AI has numerous applications and in various fields, including the military domain. The increase in the degree of ... is the assignment of moral responsibility for some AI-based outcomes. Several authors claim tha...

springer

essay

Meeting China's Emerging Capabilities: Countering Advances in Cyber, Space, and Autonomous Systems (Introduction)

This is the introduction to the report “Meeting China’s Emerging Capabilities: Countering Advances in Cyber, Space, and Autonomous Systems.”

nbr

essay

China's Cyber, Space, and Autonomous Weapons Systems: India's Concerns and Responses

This essay examines India’s key concerns about China’s growing technological prowess in the areas of cyberspace, outer space, and artificial intelligence and automation; the Indian response; and...

nbr

essay

Philippine Security Implications from China's Autonomous, Cyber, and Space Weapons Systems

This essay explores the implications of the use of established and emerging technologies by the People’s Republic of China (PRC), the Philippines’ limited response in countering the PRC’s...

nbr

report

Meeting China's Emerging Capabilities: Countering Advances in Cyber, Space, and Autonomous Systems

In this NBR report, experts from Australia, India, Japan, the Philippines, Taiwan, and Vietnam discuss China’s emerging cyber, space, and autonomous weapons capabilities. They examine regional...

nbr

essay

New Domains of Chinese Military Modernization: Security Implications for Japan

This essay examines Japan’s perceptions of and responses to major threats posed by China’s emerging capabilities in space, cyber, and autonomous weapons systems and considers policy options for...

nbr

interview

Artificial Intelligence Warfare in Ukraine: an interview with Fatima Roumate

World Geostrategic Insights interview with Fatima Roumate on the use of artificial intelligence in the Ukrainian conflict. Fatima Roumate, Ph.D., is a Full Professor of International Law at the Faculty of Law, Economic and Social Sciences Agdal, Mohammed V University, Rabat, Morocco, and Founding President of the International Institute of Scientific Research, Marrakech, since 2010. She is a Member of …

wgi

essay

Introduction: The Ethics of Automated Warfare and AI

Without a doubt, the most complex global governance challenges surrounding AI today involve its application to defence and security.

cigi

essay

AI and the Future of Deterrence: Promises and Pitfalls

If ubiquitous sensors result in a tsunami of real-time data, AI might provide the analytic potency needed to anticipate an adversary’s next step, down to the very minute.

cigi

essay

The Third Drone Age: Visions Out to 2040

Each attack during 2022 has acted as a pertinent reminder of what happens when state-manufactured advanced weapons technologies fall — or are perhaps placed — into the hands of hostile non-state organizations.

cigi

essay

Civilian Data in Cyberconflict: Legal and Geostrategic Considerations

Assessing cyberthreats and gaps in legal protection in the biosecurity sector would therefore gain from being considered by technical and legal experts in the field.

cigi

essay

AI and the Actual IHL Accountability Gap

The incentives to network and link military systems have resulted in civilian objects...increasingly becoming dual-use and thus possibly targetable infrastructure.

cigi

essay

Autonomous Weapons: The False Promise of Civilian Protection

Who is to be held accountable for civilians who are hurt or killed and civilian infrastructure that is damaged or destroyed?

cigi

video

The Ethics of Automated Weapons

Current applications of automated systems to many aspects of war and conflict have opened a new Pandora’s box. Systems operating autonomously with little human intervention raise ethical and legal concerns. Ethicists, international legal experts and international affairs specialists have been sounding the alarm on the potential misuse of this technology and the lack of any regulations governing its use.

cigi

video

Regulating Autonomy in Weapons Systems

When deciding how much power an autonomous system has, governments need to consider the impacts of international humanitarian law and ethics, because allowing AI complete, unregulated control could be a runaway nightmare.

cigi

video

The Legal Void in Which AI Weapons Operate

When states consider deploying modern autonomous systems powered by artificial intelligence (AI), they must consider the legal and ethical concerns in addition to the technical specifications of the tool.

cigi

opinion

Autonomy in Weapons Systems and the Struggle for Regulation

Humans have to decide what, when and where to engage, in particular when an application of military force could endanger human life.

cigi

report

Retaining human responsibility in the development and use of autonomous weapon systems

In a report for the Stockholm International Peace Research Institute (SIPRI), Marta Bo with Laura Bruun and Vincent Boulanin tackle how humans can be held responsible for violations of...

asser

article

Autonomy With Limits Essential For Future Drones, Air Force Generals Say (Updated)

Advanced autonomy is key to the Air Force’s future drone plans, but humans will still make key decisions, such as when to fire weapons.

the warzone

commentary

Canada selects Halifax to host new NATO military technology innovation centre

The offices are meant to hone NATO's technological edge by working with private sector companies and academics. Their mandate is to engage with both high-tech startups and established companies to...

nato watch

report

Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration

The increasing autonomy of nuclear command and control systems stemming from their integration with artificial intelligence (AI) stands to have a strategic level of impact that could either increase nuclear stability or escalate the risk of nuclear use.

cser

research article

The Utility of Weapons Reviews in Addressing Concerns Raised by Autonomous Weapon Systems

This article describes the traditional weapons review process and explains why this process may need to be modified to adequately evaluate autonomous weapon systems (AWS)

oxford academic

policy

Position Paper of the People's Republic of China on Strengthening Ethical Governance of Artificial Intelligence (AI)

China, based on its own policies and practices and with reference to useful international experience, published the position paper in the aspects of regulation, research and development, utilization and international cooperation.

mfac

speech

Responsible AI to promote World Peace and Sustainable Development

The United Nations Office for Disarmament Affairs (UNODA) and the European Commission co-hosted a workshop on "Ethics and Emerging Technologies in Weapons Systems" in April 2022. The director of Center for Long-term AI, Prof. Yi Zeng was invited as a speaker. The following is a recording of his speech.

clai

policy

Principles on Military Artificial Intelligence [Draft for Comments]

The military applications of Artificial Intelligence (AI) have already introduced great risks and challenges to the world. As such, we should be vigilant about the lowering of the threshold of war due to the development of military AI, and actively work to prevent avoidable disasters. "Defense Artificial Intelligence and Arms Control Network" published the principles with which the design, research, development, use, and deployment of military AI throughout the whole life cycle should comply.

defense ai and arms control network

interview

How Artificial Intelligence Affects International Security: an interview with Fatima Roumate

World Geostrategic Insights interview with Fatima Roumate on the main opportunities, challenges, and concerns related to the application of Artificial Intelligence (AI) in international relations and global governance, as well as the malicious uses of AI and the impact of AI in the Russia-Ukraine war. Fatima Roumate Ph.D. is a Full Professor of International Law …

wgi

article

Command responsibility of autonomous weapons under international humanitarian law

The use of autonomous weapons is becoming one of the most significant threats to humanity in today’s society. One of the major issues confronting the use of autonomous weapons is that of command...

taylor & francis

press release

CSIS Launches AI Council

The Center for Strategic and International Studies (CSIS) is pleased to announce the formation of the CSIS AI Council.

csis

article

A Manifesto on Enforcing Law in the Age of "Artificial Intelligence"

"A Manifesto on Enforcing Law in the Age of 'Artificial Intelligence'" was recently presented at a gathering in Rome, with a focus on the design of ...

carnegie council

commentary

The US Navy wants swarms of thousands of small drones

Budget documents reveal plans for the Super Swarm project, a way to overwhelm defenses with vast numbers of drones attacking simultaneously.

mit technology review

opinion

The Rise of the Digital Cold War

“The truth is that In this digital world, we all live in the prison and surveillance 24/7, Yes, in the prison of SmartPhones, Sims card, Social applications and definitely Artificial Intelligence (AI).” ~ Dr. Rana Danish Nisar ~ Welcome to the digital cold war: the truth is bitter, and the world is ready for this …

wgi

commentary

DILEMA Lecture by Sven Nyholm

DILEMA Lecture on the topic of ‘The Ethics of Human–Robot Interaction and Traditional Moral Theories’.

asser

statement

Statement by Stop Killer Robots to the 77th UNGA First Committee on Disarmament and International Security

Read a copy of the statement delivered by Stop Killer Robots at the 77th UN General Assembly (UNGA) First Committee on Disarmament and International Security.

stop killer robots

opinion

Summary of NATO's Autonomy Implementation Plan

Summary of NATO's Autonomy Implementation Plan

nato

report

Artificial Intelligence and Arms Control

Advances in artificial intelligence (AI) pose immense opportunity for militaries around the world. With this rising potential for AI-enabled military systems, some activists a...

cnas

commentary

Taylor Woodcock: We should focus on the effects of decision-making aids, tasking, intelligence, surveillance and reconnaissance technology in warfare

In a new podcast episode by On Air, Asser Institute researcher Taylor Woodcock discusses today’s ‘overshadowing focus on autonomous weapon systems (AWS) in warfare’, and the consequential lack of...

asser

commentary

Who’s going to save us from bad AI?

About damn time. That was the response from AI policy and ethics wonks to news last week that the Office of Science and Technology Policy, the White House’s science and technology advisory agency, had unveiled an AI Bill of Rights.

mit technology review

white paper

Increasing Autonomy in Weapons Systems

This paper highlights ten weapons systems with features that might be informative to considerations around autonomy in weapons systems. It seeks to showcase the diversity of types of weapon systems...

stop killer robots

white paper

Artificial intelligence and automated decisions: shared challenges in the civil and military spheres

This paper provides an initial sketch of responses to AI and automated decision-making in wider society while contextualising these responses in relation to autonomy in weapons systems.

stop killer robots

policy

Blueprint for an AI Bill of Rights

To advance President Biden’s vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.

ostp

report

Retaining Human Responsibility in the Development and Use of Autonomous Weapon Systems: On Accountability for Violations of International Humanitarian Law Involving AWS

It is undisputed that humans must retain responsibility for the development and use of autonomous weapon systems (AWS) because machines cannot be held accountable for violations of international humanitarian law (IHL). However, the critical question of how, in practice, humans would be held responsible for IHL violations involving AWS has not featured strongly in the policy debate on AWS. This report aims to offer a comprehensive analysis of that very question.

sipri

report

Ascribing Moral Responsibility for the Actions of Autonomous Weapons Systems – Taking a Moral Gambit

In this article we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human...

ssrn

commentary

In or out of control? Criminal responsibility of programmers of autonomous vehicles and autonomous weapon systems

In a new paper, Asser Institute researcher Marta Bo examines when programmers may be held criminally responsible for harms caused by self-driving cars and autonomous weapons.

asser

report

Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI

The capabilities of Artificial Intelligence (AI) evolve rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding...

ssrn

commentary

State responsibility in relation to military applications of artificial intelligence

In a new paper, Asser Institute senior researcher Bérénice Boutin explores the conditions and modalities under which a state can incur responsibility in relation to violations of international law involving military applications of artificial intelligence (AI) technologies.

asser

commentary

New Publication on State Responsibility and Military AI

Berenice Boutin has recently published a new article entitled ‘State Responsibility in Relation to Military Applications of Artificial Intelligence’.

asser

paper

State Responsibility in Relation to Military Applications of Artificial Intelligence

This article explores the conditions and modalities under which a state can incur responsibility in relation to violations of international law involving military applications of artificial...

ssrn

research article

“Autonomous weapons” as a geopolitical signifier in a national power play: analysing AI imaginaries in Chinese and US military policies

“Autonomous weapon systems” (AWS) have been subject to intense discussions for years. Numerous political, academic and legal actors are debating their consequences, with many calling for strict regulation or e...

springer

report

Managing the risks of US-China war: Implementing a strategy of integrated deterrence

If the United States is to maintain a constructive role in preventing the outbreak of a cross-Strait war, it will need to implement a strategy to deter Chinese aggression against Taiwan that is consistent with U.S. interests and capabilities, and that provides clarity around the existentially important matter of preventing nuclear escalation, in the event a conflict does occur.

brookings

original paper

Artificial intelligence and responsibility gaps: what is the problem?

Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty ...

springer

original paper

The Challenge of Ethical Interoperability

Defense organizations are increasingly developing ethical principles to ... the design, development, and use of responsible AI, most notably for defense, security, and intelligence uses. While these ... lead to m...

springer

statement

Autonomous weapons: The ICRC calls on states to take steps towards treaty negotiations

ICRC statement following the final 2022 session of government experts on lethal autonomous weapons systems of the UN Convention on CCW from 25 to 29 July.

icrc

commentary

Can we Bridge AI’s responsibility gap at Will?

Artificial intelligence (AI) increasingly executes tasks that previously only humans ... medical operation. However, as the very best AI systems tend to be the least controllable ... longer be morally responsible...

springer

article

What you need to know about autonomous weapons

Autonomous weapons are an immediate cause of humanitarian concern. Senior scientific and policy adviser at the ICRC, Neil Davison, explains.

icrc

commentary

AI Is Taking the Army Future Vertical Lift Program to the Next Level

Advanced algorithms equipped with databases of terrain maps, weather, and navigation information can help an aircraft correct its flight path without human intervention.

national interest

paper

Cyborg Soldiers: Military Use of Brain-Computer Interfaces and the Law of Armed Conflict

Recent years have seen a spotlight aimed at new technologies and how they might be used by the military. Scholars and policymakers have given much attention to autonomous weapons systems and...

ssrn

review

“Ethically contentious aspects of artificial intelligence surveillance: a social science perspective”

Artificial intelligence and its societal and ethical implications are complicated and conflictingly interpreted. Surveillance is one of the most ethically challenging concepts in AI. Within the domain of artifici...

springer

commentary

Why business is booming for military AI startups

The invasion of Ukraine has prompted militaries to update their arsenals—and Silicon Valley stands to capitalize.

mit technology review

briefing paper

Negotiating a Treaty on Autonomous Weapons Systems – The Way Forward

This briefing paper sets out a positive vision to encourage governments to commence negotiations on a new treaty on autonomous weapons systems.

stop killer robots

research article

Imaginaries of omniscience: Automating intelligence in the US Department of Defense

The current reanimation of artificial intelligence includes a resurgence of investment in automating military intelligence on the part of the US Department of Defense. A series of programs set forth a technopolitical imaginary of fully integrated, ...

sage

policy

Defence Artificial Intelligence Strategy

This strategy sets out how we will adopt and exploit AI at pace and scale, transforming Defence into an ‘AI ready’ organisation and delivering cutting-edge capability...

government of the uk

commentary

Is It Too Late to Stop the Spread of Autonomous Weapons?

If autonomous weapons are the future of warfare, then the United States has no choice but to grapple with their complexities.

national interest

commentary

Focus on the Human Element to Win the AI Arms Race

The United States must refine its investments to incorporate a deliberate and sustained campaign of mission engineering to accelerate and improve the delivery of trustworthy AI.

national interest

report

Estonia: A Curious and Cautious Approach to Artificial Intelligence and National Security

In this chapter we provide an overview of Estonia’s current AI landscape, detailing a number of public sector use-cases and developments across both industry and the military to examine AI in a...

ssrn

commentary

A Refreshed Autonomous Weapons Policy Will Be Critical for U.S. Global Leadership Moving Forward

The updated policy will hopefully reflect developments in the field and incorporate recent DoD initiatives, paving the way for what future governance of emerging capabilities should look like.

council on foreign relations

commentary

DOD Is Updating Its Decade-Old Autonomous Weapons Policy, but Confusion Remains Widespread

In November 2012, the Department of Defense (DOD) released its policy on autonomy in weapons systems: DOD Directive 3000.09 (DODD 3000.09). Despite being nearly 10 years old, the policy remains frequently misunderstood, including by leaders in the U.S. military. For example, in February 2021, Colonel Marc E. Pelini, who at the time was the division chief for capabilities and requirements within the DOD's Joint Counter-Unmanned Aircraft Systems Office, said, "Right now we don't have the authority to have a human out of the loop. Based on the existing Department of Defense policy, you have to have a human within the decision cycle at some point to authorize the engagement."

csis

policy

Robotics and autonomous systems: defence science and technology capability

Dstl exploits the latest in robotics and AI to create effective and trustworthy uncrewed platforms and autonomous systems for the UK’s security and defence.

government of the uk

statement

High Representative’s statement to the Human Rights Council on the topic of lethal autonomous robotics

Below is the statement of the High Representative for Disarmament Affairs to the 23rd session of the Human Rights Council, on the topic of lethal autonomous robotics. It was delivered on behalf of the High Representative by Mr. Jarmo Sareva, Director of the Geneva Branch of UNODA.

unoda

commentary

‘Collaborative, Portable Autonomy’ Is the Future of AI for Special Operations

Creating autonomous teams in contested environments will be a challenge of technology—and policy.

defense one

report

Military Artificial Intelligence as Contributor to Global Catastrophic Risk

Recent years have seen growing attention to the use of AI technologies in warfare, which has been rapidly advancing. This chapter explores in what ways such military AI technologies might...

ssrn

article

Great power identity in Russia’s position on autonomous weapons systems

Abstract: This article proposes an identity-based analysis of the Russian position in the global debate on autonomous weapons systems (AWS). Based on an interpretation of Russian written and verbal...

taylor & francis

original research

Meaningful human control of drones: exploring human–machine teaming, informed by four different ethical perspectives

A human-centric approach to the design and deployment of AI systems aims to support and augment human ... But what could this look like in a military context? We explored a human-centric approach...

springer

commentary

Artificial intelligence and warfare

As part of the Asser Institute research paper series, Asser researchers Berenice Boutin, Taylor Woodcock and Tomasz Zurek from the research strand ‘Regulation in the public interest: Disruptive...

asser

report

Aspects of Realizing (Meaningful) Human Control: A Legal Perspective

The concept of ‘meaningful human control’ (MHC) has progressively emerged as a key frame of reference to conceptualize the difficulties posed by military applications of artificial intelligence...

ssrn

commentary

New Publication on Meaningful Human Control

Berenice Boutin and Taylor Woodcock have recently published a new Chapter entitled ‘Aspects of Realizing (Meaningful) Human Control: A Legal Perspective’.

asser

research article

Minimum Levels of Human Intervention in Autonomous Attacks

This article discusses an important limitation on the degree of autonomy that may permissibly be afforded to autonomous weapon systems (AWS) in the context of an armed conflict: the extent to which international humanitarian law (IHL) requires that human beings be able to intervene directly in the operation of weapon systems in the course of an attack.

oxford academic

report

Predictability, Distinction & Due Care in the use of Lethal Autonomous Weapon Systems

In this article we address the possibility of using Lethal Autonomous Weapon Systems (LAWS) in compliance with the jus in bello principle of distinction. This principle requires that parties to an...

ssrn

report

Jus in Bello Necessity, the Requirement of Minimal Force, and Autonomous Weapon Systems

In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapon systems (AWS). We begin our analysis with an account of the principle of necessity as it...

ssrn

report

Ascribing Moral Responsibility for The Actions of Autonomous Weapons Systems: A Moral Gambit

In this article we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). We begin our analysis with a description of the ‘responsibility gap’ and the...

ssrn

paper

Autonomous Weapons and Their Compliance with International Humanitarian Law (LLM Thesis)

This research will first try to analyze and shed light on the recent entry of autonomous weapons, together with the issues pertaining to the usage of these lethal weapons and the...

ssrn

commentary

Shared Responsibility: Enacting Military AI Ethics in U.S. Coalitions

America needs to enlist its oldest allies and new partners to build a safer and freer world for the AI era.

national interest

article

Governing through Anticipatory Norms: How UNIDIR Constructs Knowledge about Autonomous Weapons Systems

Abstract: The need for normative change is rarely self-evident but requires the sustained efforts of actors to create a demand for action. With emerging technologies such as autonomous weapons...

taylor & francis

commentary

In Defence of Principlism in AI Ethics and Governance

It is widely acknowledged that high-level AI principles are difficult to translate into practices via explicit rules and design guidelines. Consequently, many AI research and development groups that claim to a...

springer

commentary

Sitting Out of the Artificial Intelligence Arms Race Is Not an Option

The race to build autonomous weapons will have as much impact on military affairs in the twenty-first century as aircraft did on land and naval warfare in the twentieth century.

national interest

report

Utility of Artificial Intelligence to Authoritarian Governance

This paper aims to provide a brief descriptive overview of potential scenarios enabled by AI for the development of authoritarian states by reviewing and discussing recent literature on the impact...

ssrn

report

Autonomous weapons and ethical judgments: Experimental evidence on attitudes toward the military use of "killer robots"

The advent of autonomous weapons brings intriguing opportunities and significant ethical dilemmas. This article examines how increasing weapon autonomy affects approval of military strikes...

ssrn

report

Autonomous Weapon Systems and Jus ad Bellum

In this article we focus on the scholarly and policy debate on autonomous weapons systems (AWS) and particularly on the objections to the use of these weapons which rest on jus ad bellum principles...

ssrn

commentary

Russia may have used a killer robot in Ukraine. Now what?

If open-source analysts are right, a loitering munition capable of using AI to pick a target (a killer robot) was used in the Russia-Ukraine conflict. Autonomous weapons using artificial...

bulletin

statement

Autonomous weapons: The ICRC remains confident that states will adopt new rules

The International Committee of the Red Cross (ICRC) welcomes the continued work of the Group of Governmental Experts (GGE) and urges the High Contracting Parties to the CCW to take their important work forward in line with one of the main purposes of this Convention, namely "the need to continue the codification and progressive development of the rules of international law

icrc

original research

Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D

Artificial Intelligence (AI) seems to be impacting all industry sectors ... a motor for innovation. The diffusion of AI from the civilian sector to the defense sector, and AI’s dual-use potential has drawn attent...

springer

original paper

The Dawn of the AI Robots: Towards a New Framework of AI Robot Accountability

Business, management, and business ethics literature pay little attention to the topic of AI robots. The broad spectrum of potential ethical issues pertains to using driverless cars, AI robots in care homes, a...

springer

commentary

Arms control law chair Thilo Marauhn: We need to adapt current arms control law to address new political challenges

Generally speaking, it is my view that international arms control law has lost too much public support in the past decade, so one of my goals is to make people aware of the relevance of the arms control field. In collaboration with political activists and government experts, I want to contribute to this field's potential to enhance international peace and security.

asser

commentary

Fully autonomous weapon systems

Presentation by Kathleen Lawand, head of the arms unit, ICRC. Seminar on fully autonomous weapon systems, Mission permanente de France, Geneva, Switzerland.

icrc

commentary

The challenges raised by increasingly autonomous weapons

On June 24, 2014, the ICRC Vice-President, Ms Christine Beerli, opened a panel discussion on...

icrc

commentary

Autonomous weapons: What role for humans?

Geneva (ICRC) – Addressing a meeting of experts at the United Nations in Geneva this week, the International Committee of the Red Cross (ICRC) will urge governments to focus on the issue of human control over the use of force in their deliberations on autonomous weapons.

icrc

commentary

Autonomous weapons: ICRC addresses meeting of experts

The ICRC spoke at the meeting of experts on lethal autonomous weapons systems held in the framework of the Conventional Weapons Convention in Geneva from 13 to 16 May 2014.

icrc

report

Jewish Law, Techno-Ethics, and Autonomous Weapon Systems: Ethical-Halakhic Perspectives

Techno-ethics is the area in the philosophy of technology which deals with emerging robotic and digital AI technologies. In the last decade, a new techno-ethical challenge has emerged: Autonomous...

ssrn

report

Deontology of Lethal Autonomous Weapon Systems in The Total People's Defense and Security System

The total people's defense and security system (Sistem pertahanan dan keamanan rakyat semesta-Sishankamrata) is an implementation of the total defense system in Indonesia. A lethal autonomous...

ssrn

report

Does the Use of Lethal Autonomous Weapon Systems Create a Special Problem for International Human Right Law?

Lethal Autonomous Weapon Systems (LAWS) are discussed and considered in relation to the principles of International Humanitarian Law (IHL) and International Human Rights Law (IHRL). In line with legal, moral and...

ssrn

report

Ethical and Legal Limits to the Diffusion of Self-Produced Autonomous Weapons

The theme of self-produced weapons intertwines diversified ideas of an ethical, legal, engineering and data science nature. The critical starting point concerns the use of 3D printing for the self-...

ssrn

report

A Preventive Ban on Lethal Autonomous Weapons Systems: A True-False Good Idea?

After a campaign calling for a ban of Lethal Autonomous Weapons Systems, an expert meeting was held in Geneva in May. The introduction of LAWS raises legal and ethical questions. It should be noted...

ssrn

report

Role of AI in Cyber Crime and hampering National Security

"The development of full AI could spell the end of the human race. It would take off on its own and re-design itself at an ever increasing...

ssrn

report

Role of Artificial Intelligence and Data Science in Lethal Autonomous Weaponry Systems

The advent of Lethal Autonomous Weapon Systems (LAWS) is rapidly becoming a matter of scholarly and public interest. This research primarily focuses on LAWS' implications using Artificial Intelligence (AI)...

ssrn

commentary

Research Project

The ethical and legal implications of the potential use of AI technologies in the military has been on the agenda of the United Nations, governments, and non-governmental organisations for several...

asser

white paper

Protect AI systems from criminal use

The possible consequences have a particular scope: malicious attacks can manipulate AI systems - and thus also the actions of people who use the technology as the basis for certain decisions. Similarly, given a lack of safeguards, AI systems can be used to monitor people, for industrial espionage, or as weapons. Protecting AI systems from misuse by criminals, terrorists, competitors, or employers is therefore a highly relevant task for responsible use of the technology.

lernende systeme

article

Artificial Intelligence and Autonomous Weapons Systems: Technology, Warfare, and Our Most Destructive Machines

The Stanley Center, in partnership with The Origins Project at Arizona State University and the Bulletin of the Atomic Scientists, will co-host a workshop to consider the risks and opportunities...

stanley center

article

Military Applications of Artificial Intelligence

Advances in artificial intelligence (AI), deep-learning, and robotics are enabling new military capabilities that will have a disruptive impact on military strategies. The effects of these...

stanley center

commentary

Conference on Law and Ethics of AI in the Public Sector

The conference will address the multiple challenges raised by the increasing use of artificial intelligence (AI) in the public sector. As AI is progressively deployed in various domains such as...

asser

article

The Techno-Military-Industrial-Academic Complex

The Harvard Strike in the spring of 1969 emerged out of what we students perceived as the university’s complicity in the Vietnam War. After Harvard ...

carnegie council

essay

A necessary step back?

A few years back, the rapid progress of international efforts to ban lethal autonomous weapon systems (LAWS) left arms controllers amazed: only five years after the founding of the International Committee for ...

springer

analysis

A new Solferino moment for humanitarians

This year marks the 160th anniversary of the publication of Henri Dunant’s classic text, ‘A Memory of Solferino’, in 1862. Dunant’s powerful book ...

icrc blog

report

Innovation-Proof Governance for Military AI? How I Learned to Stop Worrying and Love the Bot

Amidst fears over artificial intelligence ‘arms races’, much of the international debate on governing military uses of AI is still focused on preventing the use of lethal autonomous weapons systems...

ssrn

commentary

Giving an AI control of nuclear weapons: What could possibly go wrong?

If an autonomous nuclear weapon concluded with 99 percent confidence that a nuclear war is about to begin, should it fire?

bulletin

original research

Responsibility assignment won’t solve the moral issues of artificial intelligence

Who is responsible for the events and consequences caused by using artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is being done using artificial in...

springer

opinion

Keynote speech by NATO Deputy Secretary General Mircea Geoană at the Cybersec Global 2022 event

Keynote speech by NATO Deputy Secretary General Mircea Geoană at the Cybersec Global 2022 event

nato

research article

The Compatibility of Autonomous Weapons with the Principles of International Humanitarian Law

The emergence of autonomous weapons remains a hot topic in international humanitarian law. Much has been said by States, international organisations, non-governmental organisations and academics on the matter in recent years. However, no agreement has been reached on how best to regulate this nascent technology.

oxford academic

analysis

Shifting the narrative: not weapons, but technologies of warfare

Debates concerning the regulation of choices made by States in conducting hostilities are often limited ...

icrc blog

analysis

Commitment to Control Weaponised Artificial Intelligence: A Step Forward for the OSCE and European Security

Current practices related to the use of weaponised AI are already impacting European stability and security. The OSCE is a promising platform to build on the stalled discussions at the CCW, because it has a history of acting as a bridge between various perspectives of European security.

gcsp

commentary

A new year and a new research agenda: 'Rethinking public interests in international and European law'

At the Asser Institute, we start the new year with a brand-new research agenda (2022-2026), entitled ‘Rethinking public interests in international and European Law: Pairing critical reflection with perspectives for action’. It is organised around questions pertaining to the public interest in international and European public and private law.

asser

open forum

Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence

This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-s...

springer

commentary

China's New AI Governance Initiatives Shouldn't Be Ignored

The government’s three approaches will profoundly shape how algorithms are regulated within China and around the world.

carnegie endowment

report

Integrating Privacy Concerns in the Development and Introduction of New Military or Dual Use Technologies

New and emerging technologies impact the ways in which military operations are conducted. Notable quantum leaps are being achieved in three fields: autonomous weapon systems, military use of...

ssrn

commentary

Dog catchers, drone swarms, anti-vaxxers, gain of function, and more: Some of our best 2021 disruptive tech stories

The Bulletin produced a lot of great coverage of biosecurity, lethal autonomous weapons, and more. Take a look at some of our best disruptive technology stories of the year.

bulletin

commentary

How Does China Aim to Use AI in Warfare?

AI in particular is seen as a “game-changing” critical strategic technology.

the diplomat

statement

The ICRC urges States to achieve tangible results next year towards adopting new legally binding rules on autonomous weapons

ICRC Head of the Arms and Conduct of Hostilities Unit Laurent Gisel on humanitarian concerns raised by the use of certain conventional weapons at the 6th Review Conference of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons

icrc

research

The Role of National Parliaments in EU Defense

The EU’s pursuit of a single European defense market necessitates stronger democratic oversight. Members of the European Parliament and national legislative bodies should play a more proactive role...

carnegie endowment

commentary

Australia Could Be Arming Its Unmanned Aircraft

With Boeing's ATS, the operating air force would increase the risk an enemy faces in entering airspace within the ATS's radius. Either the enemy's fighter force would be burdened with more escort work or vulnerable aircraft might simply have to be kept out of the area.

national interest

report

LAWS in International Law

The emerging international regulatory framework for lethal autonomous weapon systems (LAWS) relies on the continuing applicability of international law and the maintenance of human control and...

ssrn

policy

Position Paper of the People’s Republic of China on Regulating Military Applications of Artificial Intelligence (AI)

The rapid development and wide applications of AI technology have profoundly changed the way people work and live, bringing great opportunities as well as unforeseeable security challenges to the world. One particular concern is the long-term impacts and potential risks of military applications of AI technology in such aspects as strategic security, rules on governance, and ethics.

mfac

statement

Peter Maurer: "Autonomous weapon systems raise ethical concerns for society"

Responsible choices about the future of warfare are needed, including clear and legally binding boundaries to prohibit autonomous weapons systems that are unpredictable or designed to target humans, and strict regulation of the design and use of all others.

icrc

research article

Military autonomous drones (UAVs) - from fantasy to reality. Legal and Ethical implications

Autonomous drones raise important judicial and ethical issues about responsibility for unintentional harm which will be discussed in this paper.

science direct

opinion

Remarks by NATO Secretary General Jens Stoltenberg in a panel discussion at the Friedrich-Ebert-Stiftung Symposium in Berlin

Remarks by NATO Secretary General Jens Stoltenberg in a panel discussion at the Friedrich-Ebert-Stiftung Symposium in Berlin

nato

perspective

Innovation and opportunity: review of the UK’s national AI strategy

The publication of the UK’s National Artificial Intelligence (AI) Strategy represents a step-change in the...signalling’ document. Indeed, we read the National AI Strategy as a vision for innovation and... We pro...

springer

report

Campaign to Stop Killer Robots - 2020 Annual Report

The 2020 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2020 to March 2021.

stop killer robots

commentary

Artificial Intelligence Is the F-16's New Secret Weapon

The F-16 may soon operate within a complex digital ecosystem. 

national interest

report

Artificial Intelligence, Law and National Security

This chapter outlines different implications of artificial intelligence for national security. It argues that AI overlaps with many challenges to the national security arising from cyberspace, but...

ssrn

commentary

The Department of Defense is issuing AI ethics guidelines for tech contractors

The controversy over Project Maven shows the department has a serious trust problem. This is an attempt to fix that.

mit technology review

original research

Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US

Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential har...

springer

analysis

Autonomous weapon systems: what the law says – and does not say – about the human role in the use of force

Intergovernmental discussions on the regulation of emerging technologies in the area of (lethal) autonomous weapon ...

icrc blog

opinion

Keynote speech by NATO Deputy Secretary General Mircea Geoană at the GoTech World 2021 Conference

Keynote speech by NATO Deputy Secretary General Mircea Geoană at the GoTech World 2021 Conference

nato

briefing paper

Stopping Killer Robots: A Guide for Policy Makers

This pamphlet provides guidance for policy makers around the world in developing a new international treaty to overcome the dangers posed by autonomy in weapon systems.

stop killer robots

article

Views and recommendations of the ICRC for the Sixth Review Conference of the Convention on Certain Conventional Weapons

The Sixth Review Conference of the Convention on Certain Conventional Weapons (CCW), in December 2021 in Geneva, is a key moment for High Contracting Parties to take stock of, and build on, the important role the CCW has played in minimizing suffering in armed conflict.

icrc

commentary

NATO's new AI strategy: lacking in substance and lacking in leadership

The October 2021 meeting of NATO's Defence Ministers in Brussels (see NATO Watch Briefing no.87) saw Ministers agreeing to adopt NATO’s new strategy for Artificial Intelligence (AI). The strategy...

nato watch

research article

Truth, Lies and New Weapons Technologies: Prospects for Jus in Silico?

This article tests the proposition that new weapons technology requires Christian ethics to dispense with the just war tradition (JWT) and argues for its development rather than dissolution. Those working in the JWT should be under no illusions, however, ...

sage

commentary

NATO seeks to sharpen its technological advantage and adopts a Janus-inspired strategy: one face towards Russia and the other towards China

An analysis of the NATO Defence Ministers Meeting, Brussels, 21-22 October 2021

nato watch

commentary

Russia Looks to Combat Drones with Marker Robots

The Marker is expected to become the foundation for testing the interaction between ground robots, unmanned aerial vehicles and special operations forces.

national interest

article

An Artificial Intelligence Strategy for NATO

At their October 2021 meeting, Allied Defence Ministers formally adopted an Artificial Intelligence Strategy for NATO. Current and former NATO staff with direct involvement in the development and...

nato review

opinion

Press conference by NATO Secretary General Jens Stoltenberg ahead of the meetings of NATO Defence Ministers on 21 and 22 October at NATO Headquarters

Press conference by NATO Secretary General Jens Stoltenberg ahead of the meetings of NATO Defence Ministers on 21 and 22 October at NATO Headquarters

nato

commentary

Law and ethics of AI in the public sector

The Asser Institute invites abstracts on the topic of ‘Law and ethics of artificial intelligence in the public sector: From principles to practice and policy’, for an interdisciplinary conference...

asser

commentary

Call for Papers: Law and Ethics of AI in the Public Sector

The conference seeks to address the multiple challenges raised by the increasing use of artificial intelligence (AI) in the public sector. As AI is progressively deployed in various domains such as...

asser

research article

Ethical Principles for Artificial Intelligence in National Defence

Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an ... a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Un...

springer

report

NATO's Role in Responsible AI Governance in Military Affairs

In this chapter, we explore a role for the North Atlantic Treaty Organization (NATO) in the emerging military artificial intelligence (AI) governance architecture. As global powers compete for...

ssrn

statement

Civil Society Statement on Race and Intersectionality in Humanitarian Disarmament

Civil Society Statement to the UN General Assembly First Committee on Disarmament and International Security delivered on 8 October 2021.

stop killer robots

analysis

Engaging with the industry: integrating IHL into new technologies in urban warfare

Alongside the urbanization of armed conflict lies a second trend: the increase in the use ...

icrc blog

commentary

Applying arms-control frameworks to autonomous weapons

The development of autonomous weapons and robotics technology is rapidly advancing and poses hard questions about how their use and proliferation should be governed. Existing arms-control regimes may offer a model for how to govern autonomous weapons.

brookings

commentary

DILEMA Lecture by Dr Ingvild Bode

Topics of interest within the scope of this lecture series include technical perspectives on military applications of AI, philosophical enquires into human control and human agency over technologies, analyses of international law in relation to (military) AI, including international humanitarian law and international human rights law, and interdisciplinary contributions related to these topics.

asser

commentary

An Autonomous Robot May Have Already Killed Humans

Here is how the weapons could be more destabilizing than nukes. 

national interest

analysis

Autonomy in weapons systems: playing catch up with technology

For almost eight years now, the international community at the United Nations (UN) has been ...

icrc blog

opinion

Remarks by NATO Deputy Secretary General Mircea Geoană at the AI & Cyber Conference titled “An Abundance of Potential”

Remarks by NATO Deputy Secretary General Mircea Geoană at the AI & Cyber Conference titled “An Abundance of Potential”

nato

policy

The Ethical Norms for the New Generation Artificial Intelligence, China

The National Governance Committee for the New Generation Artificial Intelligence published the “Ethical Norms for the New Generation Artificial Intelligence”. It aims to integrate ethics into the entire lifecycle of AI, to provide ethical guidelines for natural persons, legal persons, and other related organizations engaged in AI-related activities.

most

response paper

Response to GGE Chairs Guiding Questions

This paper sets out the Campaign to Stop Killer Robots’ response to the additional questions circulated by the Chair of the Group of Governmental Experts on 12th August 2021.

stop killer robots

report

Big Data and the Future of Belligerency: Applying the Rights to Privacy and Data Protection to Wartime Artificial Intelligence

The race for military AI is in full swing. Militaries around the world are developing and deploying various AI applications including tools for the advancement of surveillance, command and control...

ssrn

commentary

Israel’s Newest High-Tech Border Guard: The Jaguar Robot

The IDF has broader ambitions to eventually integrate the Jaguar into its conventional warfighting capabilities.

national interest

report

Code of conduct on artificial intelligence in military systems

This draft Code of Conduct for AI-enabled military systems is the product of a two-year consultation process among Chinese, American,…

humanitarian dialogue

original research

Mapping global AI governance: a nascent regime in a fragmented landscape

The rapid advances in the development and rollout of artificial intelligence (AI) technologies over the past years have triggered a frenzy of regulatory initiatives at various levels of government and the priv...

springer

open forum

Professional ethics and social responsibility: military work and peacebuilding

This paper investigates four questions related to ethical issues associated with the involvement of engineers and scientists in 'military work', including the influence of ethical ... )-centred systems perspectiv...

springer

analysis

The value (and danger) of ‘shock’ in regulating new technology during armed conflict

The rules and standards of war are not self-correcting. Contradictions, gaps, and ambiguities often endure until an external pressure makes them salient. This ...

icrc blog

commentary

After Its Gaza War, Israel Is Sending Armed Robots to Watch Hamas

Jaguar robots will reportedly assume routine patrol duties for the Gaza division, reducing by one battalion the forces deployed to guard the barrier.

national interest

statement

Autonomous weapons: The ICRC recommends adopting new rules

The ICRC recommends that states adopt new, legally binding rules to regulate autonomous weapon systems to ensure that sufficient human control and judgement are retained in the use of force. It is the ICRC's view that this will require prohibiting certain types of autonomous weapon systems and strictly regulating all others.

icrc

analysis

Responsible and Ethical Military AI

Allies of the United States have begun to develop their own policy approaches to responsible military use of artificial intelligence. This issue brief looks at key allies with articulated, emerging, and nascent views on how to manage ethical risk in adopting military AI. The report compares their convergences and divergences, offering pathways for the United States, its allies, and multilateral institutions to develop common approaches to responsible AI implementation.

cset

analysis

Military AI Cooperation Toolbox

The Department of Defense can already begin applying its existing international science and technology agreements, global scientific networks, and role in multilateral institutions to stimulate digital defense cooperation. This issue brief frames this collection of options as a military AI cooperation toolbox, finding that the available tools offer valuable pathways to align policies, advance research, development, and testing, and to connect personnel, albeit in more structured ways in the Euro-Atlantic than in the Indo-Pacific.

cset

advisory note

Autonomous Weapon Systems that Target Humans

This advisory note, circulated to campaigners and diplomats, provides the basis for a prohibition on autonomous weapon systems that target humans.

stop killer robots

report

Autonomous Weapon Systems: Understanding the Potential Human Rights Violations

The evolution of artificial intelligence (AI) over the years has brought long-held dreams of robot-human interaction closer to reality. This idea of robot-human interaction on a whole new level has...

ssrn

commentary

The year of Zoom

With another year behind us, we at the Asser Institute want to highlight our achievements of the past year in academic research, collaborations, events and publications through our 2020 Annual Report....

asser

commentary

Lethal Autonomous Weapons: 10 things we want to know

A new podcast series 'Lethal Autonomous Weapons: 10 things we want to know' was launched with Asser researcher Marta Bo. The podcast series is a part of the LAWS & War Crimes research project at...

asser

article

Can AI Weapons Make Ethical Decisions?

The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming...

taylor & francis

article

Weaponizing Artificial Intelligence: The Scary Prospect Of AI-Enabled Terrorism

There has been much speculation about the power and dangers of artificial intelligence (AI), but it’s been primarily focused on what AI will do to our jobs in the very near future. Now, there’s...

bernard marr

article

Is Artificial Intelligence Dangerous? 6 AI Risks Everyone Should Know About

Should we be scared of artificial intelligence (AI)? Since recent developments have made super-intelligent machines possible much sooner than initially thought, the time is now to determine what dangers artificial intelligence poses.

bernard marr

article

Is Artificial Intelligence (AI) Dangerous And Should We Regulate It Now?

Now that artificial intelligence (AI) is no longer just a what-if scenario that gets tech gurus frenzied with the possibilities, but is in use and impacting our everyday lives, there is renewed...

bernard marr

article

A Short History of Machine Learning — Every Manager Should Read

In this post I offer a quick trip through time to examine the origins of machine learning as well as the most recent milestones.

bernard marr

article

Is Artificial Intelligence (AI) A Threat To Humans?

Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? This question has existed since the 1940s, when computer...

bernard marr

report

Key Elements of the Treaty Banning Fully Autonomous Weapon Systems: Perspectives in Southeast Asia

This document interprets the key elements of a treaty through a Southeast Asian perspective, recognising the diversity of national interests in the region.

stop killer robots

report

Bibliography of Resources Relating to Lethal Autonomous Weapon Systems

An interdisciplinary bibliography of resources relating to lethal autonomous weapon systems produced by the LAWS & War Crimes research project team at the Graduate Institute of International and...

ssrn

statement

Statement to the CCW informal discussions on autonomous weapon systems

This statement was delivered to CCW participants at the informal discussions on autonomous weapon systems on 29 June 2021.

stop killer robots

commentary

Israel is Using Robots with Machine Guns to Patrol Gaza Border

The Jaguar's role in a border patrol and possibly anti-riot capacity will likely continue to receive scrutiny as public security services across the world explore deploying unmanned systems with offensive capabilities.

national interest

analysis

Future developments in military cyber operations and their impact on the risk of civilian harm

Over the past decade, several States have begun to develop military cyber elements capable of ...

icrc blog

research article

Locating LAWS: Lethal Autonomous Weapons, Epistemic Space, and “Meaningful Human” Control

This paper analyzes the excessive epistemic narrowing of debate about lethal autonomous weapon systems (LAWS), and specifically the concept of meaningful human control, which has emerged as central to regulatory debates in both the scholarly literature and policy fora.

oxford academic

commentary

US Needs to Defend Its Artificial Intelligence Better, Says Pentagon No. 2

AI safety is often overlooked in the private sector, but Deputy Secretary Kathleen Hicks wants the Defense Department to lead a cultural change.

defense one

analysis

Stepping into the breach: military responses to global cyber insecurity

As the global geo-political landscape continues to experience increasing fragmentation, cyberspace grows in importance as ...

icrc blog

analysis

Avoiding civilian harm during military cyber operations: six key takeaways

In today’s armed conflicts, cyber operations are increasingly used in support of and alongside kinetic ...

icrc blog

commentary

Killer Algorithms: How to Keep Military AI under Human Control

In an interview with the University of Amsterdam, project leader Dr Berenice Boutin discussed some of the challenges associated with military AI and how the DILEMA research project seeks to address them.

asser

policy

Norway’s Policy on Emerging Military Technologies: Widening the Debate on AI and Lethal Autonomous Weapon Systems

Stai, Nora Kristine & Bruno Oliveira Martins (2021) Norway’s Policy on Emerging Military Technologies: Widening the Debate on AI and Lethal Autonomous Weapon Systems, PRIO Policy Brief, 11. Oslo: PRIO.

prio

article

Governance of artificial intelligence

The rapid developments in Artificial Intelligence (AI) and the intensification in the adoption of AI in domains such as autonomous vehicles, lethal weapon systems, robotics and the like pose...

taylor & francis

commentary

If a killer robot were used, would we know?

After a recent UN report suggested that a Turkish-made Kargu-2 had autonomously hunted down retreating troops in Libya, numerous media outlets devoted coverage to the issue of so-called lethal...

bulletin

report

Autonomous Weapon Systems and International Humanitarian Law: Identifying Limits and the Required Type and Degree of Human–Machine Interaction

Compliance with international humanitarian law (IHL) is recognized as a critical benchmark for assessing the acceptability of autonomous weapon systems (AWS). However, in certain key respects, how and to what extent existing IHL rules provide limits on the development and use of AWS remains either subject to debate or underexplored.

sipri

report

Warfare’s Future in the Coming Decade: Technologies and Strategies

The intention of this study is to find an audience amongst policy-making circles, academia, and those interested in the topic of future warfare. It will aim to elucidate and incorporate novel...

ssrn

advisory note

Recommendations on the Normative and Operational Framework for Autonomous Weapon Systems

This advisory note, circulated to campaigners and diplomats, provides recommendations for the normative and operational framework for autonomous weapon systems.

stop killer robots

research article

Presidential use of diversionary drone force and public support

During times of domestic turmoil, the use of force abroad becomes an appealing strategy to US presidents in hopes of diverting attention away from internal conditions and toward a foreign policy success. Weaponized drone technology presents a low cost and ...

sage

commentary

Was a flying killer robot used in Libya? Quite possibly

The Turkish-made Kargu-2 drone can operate in autonomous mode and may have been used to attack retreating soldiers fighting against the UN-recognized government in Libya. There's an ongoing global...

bulletin

position paper

ICRC Position on Autonomous Weapon Systems [position and background paper]

The International Committee of the Red Cross (ICRC) has, since 2015, urged States to establish internationally agreed limits on autonomous weapon systems to ensure civilian protection, compliance with international humanitarian law, and ethical acceptability. With a view to supporting current efforts to establish international limits on autonomous weapon systems that address...

icrc

report

Mördarrobotar: framtid eller fiktion?

The May edition of the Internationella Kvinnoförbundet för Fred och Frihet (IKFF) membership magazine focuses on autonomous weapons.

stop killer robots

commentary

Red Cross Calls for More Limits on Autonomous Weapons

Experts said the group’s unique stature might get governments to the negotiating table at last.

defense one

position paper

ICRC position on autonomous weapon systems [position on autonomous weapon systems paper]

In this position and background paper, the ICRC recommends that States adopt new, legally binding rules, with a view to supporting current efforts to establish international limits on autonomous weapon systems that address the risks they raise.

icrc

statement

Peter Maurer: “We must decide what role we want human beings to play in life-and-death decisions during armed conflicts”

Speech given by Mr Peter Maurer, President of the International Committee of the Red Cross (ICRC), during a virtual briefing on the new ICRC position on autonomous weapon systems.

icrc

article

Leadership Challenges from the Deployment of Lethal Autonomous Weapon Systems: How Erosion of Human Supervision Over Lethal Engagement Will Impact How Commanders Exercise Leadership

Lethal autonomous weapon systems (LAWS) – robotic weapons that have the ability to sense and act unilaterally depending on how they are programmed – will be capable of selecting targets and...

taylor & francis

report

The Lethal Autonomous Weapons Systems: A Concrete Example of AI’s Presence in the Military Environment

Comprehending and analysing Artificial Intelligence (AI) is fundamental to embracing the challenges ahead, specifically for the defence sector. Developments in this sector will involve...

ssrn

report

Grejen med mördarrobotar

Swedish-language document introducing autonomous weapons and the moral, ethical, humanitarian, operational and legal challenges they present.

stop killer robots

report

Principles for the Combat Employment of Weapon Systems with Autonomous Functionalities

These seven new principles concentrate on the responsible use of autonomous functionalities in armed conflict in ways that preserve human judgment and responsibility over the ...

cnas

commentary

Worried about the autonomous weapons of the future? Look at what's already gone wrong

When it comes to future autonomous weapons, many governments say they want to ensure humans remain in control over lethal force. The example of the heavily automated air defense systems that...

bulletin

brief

Securing the heavens

This Brief outlines the major space threats and makes concrete suggestions on how space can support the EU's Strategic Compass.

euiss

commentary

Meet the future weapon of mass destruction, the drone swarm

Drone swarms are getting larger and, coupled with autonomous capability, they could pose a real threat. Think “Nagasaki” to get a sense of the death toll a massive drone swarm could theoretically...

bulletin

commentary

The Air Force Is Testing the Weapon of the Future: Drone Swarms

The future is drones, and modern warfare will never be the same.

national interest

analysis

Ethics and Artificial Intelligence

The law plays a vital role in how artificial intelligence can be developed and used in ethical ways. But the law is not enough when it contains gaps due to lack of a federal nexus, interest, or the political will to legislate. And law may be too much if it imposes regulatory rigidity and burdens when flexibility and innovation are required. Sound ethical codes and principles concerning AI can help fill legal gaps. In this paper, CSET Distinguished Fellow James E. Baker offers a primer on the limits and promise of three mechanisms to help shape a regulatory regime that maximizes the benefits of AI and minimizes its potential harms.

cset

report

Robots tueurs: bientôt opérationnels?

[In French] The objective of this analysis note is to go further by examining the information provided by half a dozen producers of autonomous weapons, in particular loitering munitions.

stop killer robots

commentary

Roundtable on international law's role in the governance of AI

On Friday 26 March, Janne E. Nijman, chair of the board and academic director of the Asser Institute, will convene the online closing plenary of the 2021 virtual annual meeting of the American Society...

asser

commentary

China Is ‘Danger Close’ to US in AI Race, DOD AI Chief Says

JAIC leader stresses that AI ethics guidelines don’t slow down the United States. In fact, they are essential.

defense one

report

The evolution of disruptive technologies and lethal autonomous weapons systems: considerations from the military field

This document addresses the issues of international politics and the different positions and strategies of the main international actors regarding the evolution of Lethal Autonomous Weapons Systems...

stop killer robots

report

Perverse Consequences of Lethal Autonomous Weapons Systems

Lethal Autonomous Weapons Systems (LAWS) refer to military systems that employ human-made algorithms to independently identify, search for, and engage targets without human intervention. LAWS refer...

ssrn

report

Challenges in Regulating Lethal Autonomous Weapons Under International Law

Since 2017, the United Nations (UN) has regularly convened a Group of Governmental Experts (GGE) to explore the technical, legal, and ethical issues surrounding the deployment of lethal autonomous...

ssrn

report

War without Oversight: Challenges to the Deployment of Autonomous Weapons

Autonomous Weapon Systems (AWS) are defined as robotic weapons that have the ability to sense and act unilaterally depending on how they are programmed. Such human-out-of-the-loop platforms will be...

ssrn

report

Leadership Challenges to the Deployment of Autonomous Weapons (AWS)

Autonomous Weapon Systems (AWS) are defined as robotic weapons that have the ability to sense and act unilaterally depending on how they are programmed. Such human-out-of-the-loop platforms will be...

ssrn

commentary

Regulating military AI will be difficult. Here's a way forward

If the international community doesn’t properly manage the development, proliferation, and use of military AI, international peace and stability could be at stake

bulletin

report

Explaining the Nuclear Challenges Posed by Emerging and Disruptive Technology: A Primer for European Policymakers and Professionals

This paper is a primer for those seeking to engage with current debates on nuclear risk in Europe. It demystifies and contextualizes the challenges posed by emerging and disruptive technologies in the nuclear realm. It looks in detail at five significant and potentially disruptive technological developments—hypersonic weapons, missile defence, artificial intelligence and automation, counterspace capabilities, and computer network operations (cyber)—to highlight often-overlooked nuances and explain how some of the challenges presented by these developments are more marginal, established and manageable than is sometimes portrayed. By emphasizing the primacy of politics over technology when it comes to meeting nuclear challenges, this paper also seeks to provide a basis for targeted risk reduction and arms control, as well as normative recommendations for policymakers and professionals working across Europe.

sipri

position paper

ICRC Position Paper: Artificial intelligence and machine learning in armed conflict: A human-centred approach

At a time of increasing conflict and rapid technological change, the International Committee of the Red Cross (ICRC) needs both to understand the impact of new technologies on people affected by armed conflict and to design humanitarian solutions that address the needs of the most vulnerable.

icrc

book chapter

Applying AI on the Battlefield: The Ethical Debates

Reichberg, Gregory M. & Henrik Syse (2021) Applying AI on the Battlefield: The Ethical Debates, in von Braun, Joachim; Margaret S. Archer; Gregory M. Reichberg; & Marcelo Sánchez Sorondo, eds, Robotics, AI, and Humanity: Science, Ethics, and Poli...

prio

commentary

Illiteracy, Not Morality, Is Holding Back Military Integration of Artificial Intelligence

A data-illiterate culture in the military is widening the gap between the United States and its competitors. Success will require deeper and more direct congressional action.

national interest

commentary

The next frontier in drone warfare? A Soviet-era crop duster

Azerbaijan showed during the battle for Nagorno-Karabakh that even an old Soviet-era crop duster could be repurposed and used effectively in drone warfare—another example of how militaries continue...

bulletin

commentary

Morality Poses the Biggest Risk to Military Integration of Artificial Intelligence

Waiting to act on AI integration into our weapons systems puts us behind the technological curve required to effectively compete with our foes.

national interest

analysis

Reducing Military Risks through OSCE Instruments

The OSCE should develop CBMs for partially autonomous weapons systems. Such CBMs should provide information about AWS features and doctrine for their use, to increase transparency and build trust between states.

gcsp

analysis

AI Verification

The rapid integration of artificial intelligence into military systems raises critical questions of ethics, design and safety. While many states and organizations have called for some form of “AI arms control,” few have discussed the technical details of verifying countries’ compliance with these regulations. This brief offers a starting point, defining the goals of “AI verification” and proposing several mechanisms to support arms inspections and continuous verification.

cset

report

Lethal Autonomous Weapons Systems: A Primer for Cambodian Policy

This primer lays out the basics and issues of lethal autonomous weapons and their relevance to Cambodian policy and law.

stop killer robots

report

Lethal Autonomous Weapons: A Primer for Indonesian Policy

This primer lays out the basics and issues of lethal autonomous weapons and their relevance to Indonesian policy and law.

stop killer robots

commentary

How Joe Biden can use confidence-building measures for military uses of AI

The Biden administration has an opportunity to foster international cooperation on military AI to reduce the risk of inadvertent conflict while still pursuing US military leadership in AI.

bulletin

commentary

Don’t Just Harden U.S. Military Bases, Make Them Smarter

While the main threat to military facilities may come from enemy ballistic and cruise missiles, it is time to consider the possibility of unconventional attacks involving small drones and infiltrators.

national interest

report

Lethal Autonomous Weapons Systems: A Primer for Thai Policy

This primer lays out the basics and issues of lethal autonomous weapons and their relevance to Thai policy and law.

stop killer robots

report

Lethal Autonomous Weapons Systems: A Primer for Philippine Policy, Second Edition

This second edition of the primer lays out the basics and issues of lethal autonomous weapons and their relevance to Philippine policy and law.

stop killer robots

commentary

U.S. Military Bases: Could a Drone Swarm Attack Mean Doom?

U.S. military installations, command and control centers and even air, ground and sea war platforms could themselves quickly fall victim to drone swarm strikes.

national interest

report

Lethal Autonomous Weapons Systems: A Primer for Nepalese Policy

This primer lays out the basics and issues of lethal autonomous weapons and their relevance to Nepalese policy and law.

stop killer robots

report

Addressing the Threat of Autonomous Weapons

This paper argues for a legally binding instrument on lethal autonomous weapons systems (LAWS) and for strong positive obligations to ensure meaningful human control over the use of force.

stop killer robots

report

Putting Humans at the Centre of the Governance of the Integration and Deployment of Artificial Intelligence in Peace Support Operations

The United Nations and the North Atlantic Treaty Organization (NATO) have put in place systems governing the integration and deployment of Artificial Intelligence (AI) in their Peace Support...

ssrn

commentary

Meet the U.S. Navy’s Unmanned Ships of the Future

The service will need newer, better high-tech drones to help fight future conflicts.

national interest

commentary

DILEMA Lecture by William Boothby

DILEMA Lecture on the topic of ‘Remote, Autonomous Weapons and Human Agency’.

asser

commentary

Meaningful human control over autonomous weapons and International Criminal Law

In a contribution to international law blog OpinioJuris, Asser researcher Marta Bo writes that international criminal law could provide guidance for operationalising the concept of meaningful human...

asser

commentary

DILEMA lecture: Remote, autonomous weapons and human agency

What does it mean to have autonomy in the age of AI? How are remote, autonomous weapons regulated under international law? On Monday 22 February 2021 (16.00 CET / 15.00 GMT), Professor Bill Boothby...

asser

report

Campaign to Stop Killer Robots - 2019 Annual Report

The 2019 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2019 to March 2020.

stop killer robots

report

The Role of International Organizations in WMD Compliance and Enforcement: Autonomy, Agency and Influence

This paper looks at the role of multilateral verification bodies in dealing with compliance and enforcement, the extent to which they achieve ‘agency’ and ‘influence’ in doing so, and whether and how such capacities might be enhanced. 

unidir

article

Artificial Intelligence at NATO: dynamic adoption, responsible use

The deputy head of NATO’s Innovation Unit lays out current efforts to develop Artificial Intelligence policy at NATO.

nato review

essay

Influence Operations and Disinformation on Social Media

Although COVID-19 has highlighted new and incredible challenges for our globalized society, foreign influence operations that capitalize on moments of global uncertainty are far from new.

cigi

essay

Artificial Intelligence and Keeping Humans “in the Loop”

AI now exceeds our performance in many activities once held to be too complex for any machine to master.

cigi

essay

Public and Private Dimensions of AI Technology and Security

Public-private collaboration is essential to creating innovative governance solutions that can be adapted as the technology develops.

cigi

essay

International Legal Regulation of Autonomous Technologies

Legislatures across the globe should be preparing to amend their laws, and possibly adopt new ones, governing autonomous technologies.

cigi

essay

AI and the Diffusion of Global Power

The oft-used phrase that data is the new oil is, in the context of AI, probably wrong.

cigi

essay

A New Arms Race and Global Stability

Despite the headlines and the catchy titles, the nature and the extent of the AI arms race are hard to discern at this stage.

cigi

commentary

Introduction: How Can Policy Makers Predict the Unpredictable?

While AI applications are expected to have a significantly positive impact on our lives, those same applications will also likely be abused or manipulated by bad actors.

cigi

essay

Renewing Multilateral Governance in the Age of AI

The dream of the intelligent machine now propels computer science, and therefore regulatory systems, around the world.

cigi

book review

AI ethics – a review of three recent publications

In recent years, AI has become a hotly debated topic across different disciplines and fields of society. Rapidly advancing technological innovations, especially in areas such as machine learning (as well as increasingly widespread uses of AI-based systems), have brought about a growing awareness of the need for AI ethics, whether in politics, industry, science, or in society at large.

springer

commentary

Rebecca Crootof on Artificial intelligence, autonomous weapon systems, and accidents in war

The DILEMA Project, led by Asser senior researcher Dr Berenice Boutin, is launching a new lecture series on legal, ethical, and technical perspectives on human agency over military Artificial Intelligence (AI).

asser

commentary

Launch of the DILEMA Lecture Series

The DILEMA Lecture Series will regularly invite academics and other experts working on issues related to the project to present their work and share reflections with a general audience comprising...

asser

policy brief

Policy Brief for CCW meeting on lethal autonomous weapons systems, 2-6 November

This policy brief highlights the need for states to outline parameters of unacceptability for autonomous weapon systems and steps to ensure meaningful human control over the use of force.

stop killer robots

statement

Statement to CCW meeting on lethal autonomous weapons systems, 2-6 November

This statement highlights the challenges that autonomous weapons present and the urgent need for a treaty.

stop killer robots

report

Responsible Artificial Intelligence Research and Innovation for International Peace and Security

In 2018 the United Nations Secretary-General identified responsible research and innovation (RRI) in science and technology as an approach for academia, the private sector and governments to work on the mitigation of risks that are posed by new technologies.

sipri

report

Responsible Military Use of Artificial Intelligence: Can the European Union Lead the Way in Developing Best Practice?

The military use of artificial intelligence (AI) has become the focus of great power competition. In 2019, several European Union (EU) member states called for greater collaboration between EU member states on the topic of AI in defence. This report explores why the EU and its member states would benefit politically, strategically and economically from developing principles and standards for the responsible military use of AI. It maps what has already been done on the topic and how further expert discussions within the EU on legal compliance, ethics and technical safety could be conducted. The report offers concrete ways for the EU and its member states to work towards common principles and best practices for the responsible military use of AI.

sipri

commentary

U.S. Military Bases Of The Future Must Be Smart And Secure

All the service branches understand that both their legacy fixed facilities (such as Guam) and new expedient bases will be subject to an expanded array of threats. Given this, they must be rendered more secure.

national interest

research article

Artificial intelligence and rationalized unaccountability: Ideology of the elites?

In this Connexions essay, we focus on intelligent agent programs that are cutting-edge solutions of contemporary artificial intelligence (AI). We explore how these programs become objects of desire that contain a radical promise to change organizing and ...

sage

report

New Weapons, Proven Precedent: Elements of and Models for a Treaty on Killer Robots

This report outlines how legal and policy precedent can serve as a foundation for constructing a legally binding instrument without starting from scratch.

stop killer robots

commentary

In the debate over autonomous weapons, it's time to unlock the “black box” of AI

As countries around the world race to incorporate AI and greater autonomous functionality into weapons, the years-long debate at the United Nations over what if anything to do about lethal...

bulletin

statement

Statement to the 75th UN General Assembly First Committee on Disarmament and International Security

This statement was delivered to delegates attending the 75th UN General Assembly (UNGA) First Committee on Disarmament and International Security on 13 October 2020.

stop killer robots

discussion

Screening and Discussion of "The Perfect Weapon"

Panelists discuss how cyber became the weapon of choice for nonstate actors and states alike. Directed by John Maggio and based on the book of the same name by David Sanger, The Perfect Weapon explores the rise of cyber conflict as a primary way in which nations now compete with and sabotage one another. 

council on foreign relations

commentary

Pentagon Hosts Meeting on Ethical Use of Military AI With Allies and Partners

This comes in the backdrop of growing interest in global technology cooperation.

the diplomat

statement

Statement to the CCW meeting on lethal autonomous weapons systems, 24 September

This statement was delivered to delegates attending the CCW meeting on lethal autonomous weapons systems on 24 September 2020.

stop killer robots

statement

Statement to the CCW meeting on lethal autonomous weapons systems, 21 September

This statement was delivered to delegates attending the CCW meeting on lethal autonomous weapons systems on 21 September 2020.

stop killer robots

report

Los riesgos de las armas autónomas: una perspectiva interseccional latinoamericana

This Spanish-language publication explores the potential consequences of lethal autonomous weapons among marginalized populations from an intersectional Latin American perspective.

stop killer robots

research article

Dreaming with drones: Palestine under the shadow of unseen war

This article discusses how the first-person genre, especially a Gazan wartime diary, allows both writer and reader to imagine new possibilities for understanding contemporary colonial drone warfare, which is instrumental in the strategic silencing and ...

sage

report

Artificial Intelligence, Emerging Technology, and Lethal Autonomous Weapons Systems: Security, Moral, and Ethical Perspectives In Asia

A report covering the key concerns that development of artificial intelligence, emerging technologies, and autonomous weapons presents to the Asia region.

stop killer robots

report

LAWS and Lawyers: Lethal Autonomous Weapons Bring LOAC Issues to the Design Table, and Judge Advocates Need to be There

This article discusses how risks normally associated with battlefield considerations of the Law of Armed Conflict must be considered and addressed during the design of autonomous platforms'...

ssrn

original paper

Operations of power in autonomous weapon systems: ethical conditions and socio-political prospects

The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology ...

springer

commentary

Should Drones and AI Be Allowed to Kill by Themselves?

It’s a simple question: should robots kill by themselves? The technology is here. Unmanned systems, both ground and air robots, can autonomously seek, find, track, target and destroy enemies without human intervention.

national interest

report

Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control

This report elaborates country positions on banning fully autonomous weapons and retaining human control.

stop killer robots

analysis

Regulating and Limiting the Proliferation of Armed Drones: Norms and Challenges

The international market in armed drones is booming, creating risks of widespread proliferation, especially to non-state actors or states known for their lack of respect for the laws of warfare. This paper analyses these proliferation risks and formulates recommendations on how to mitigate them.

gcsp

commentary

Developing Artificial Intelligence in Russia: Objectives and Reality

Even if AI development becomes Russia’s highest priority, Moscow has no chance of catching up with Washington and Beijing in this field. Under favorable conditions, however, Russia is quite capable...

carnegie endowment

report

Lethal Autonomous Weapons Systems: A Primer for Philippine Policy

This primer lays out the basics and issues of lethal autonomous weapons, and other emerging technologies in the field of weapons development, and their relevance to Philippine policy and law.

stop killer robots

article

World of Drones

This article from the International Security Program examines the proliferation, development, and use of armed drones.

new america

research

Empowering the European Parliament: Toward More Accountability on Security and Defense

The European Parliament should be an important source of democratic oversight and accountability as the EU continues to pursue greater security and defense integration.

carnegie endowment

report

Artificial Intelligence Will Merely Kill Us, Not Take Our Jobs

A prominent recent development in governance of artificial intelligence is the White House Office of Management and Budget’s 2020 Guidance for Regulation of Artificial Intelligence Applications...

ssrn

report

I Met Viki on the 29th of August: Why Autonomous Weapon Systems are Defensible and Should Be Developed

In this paper, I argue that there is no theoretical bar to the development of autonomous weapon systems, and that their practical benefits must be considered. Further, I argue that meaningful human...

ssrn

report

Data is Dangerous: Comparing the Risks that the United States, Canada and Germany See in Data Troves

Data and national security have a complex relationship. Data is essential to national defense — to understanding and countering adversaries. Data underpins many modern military tools from drones to...

ssrn

research

Collaborative Models for Understanding Influence Operations: Lessons From Defense Research

As fears rise over disinformation and influence operations, stakeholders from industry to policymakers need to better understand the effects of such activity. This demands increased research...

carnegie endowment

commentary

Call for Papers on expert panel manuals for the Yearbook of International Humanitarian Law

Manuals on the law of armed conflict come in different guises. The most common one is the military manual, which is a publication issued by a State’s Ministry of Defence or a branch of the armed...

asser

original paper

The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation

In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence (AI), entitled ‘New Generation Artificial Intelligence Development Plan’ (新一代人工智能发展规划). This strategy ...

springer

publication

Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons

The development of autonomous weapon systems raises the prospect of the loss of human control over weapons and the use of force.

icrc

publication

Expert Meeting: Autonomous Weapon Systems, Technical, Military, Legal and Humanitarian Aspects

The ICRC convened an international expert meeting on autonomous weapon systems from 26 to 28 March 2014. It brought together government experts from 21 States and 13 individual experts, including roboticists, jurists, ethicists, and representatives from the United Nations and non-governmental organizations.

icrc

article

Limits on Autonomy in Weapon Systems

Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control

icrc

report

Artificial Intelligence, Strategic Stability and Nuclear Risk

This report aims to offer the reader a concrete understanding of how the adoption of artificial intelligence (AI) by nuclear-armed states could have an impact on strategic stability and nuclear risk and how related challenges could be addressed at the policy level. The analysis builds on extensive data collection on the AI-related technical and strategic developments of nuclear-armed states. It also builds on the authors’ conclusions from a series of regional workshops that SIPRI organized in Sweden (on Euro-Atlantic dynamics), China (on East Asian dynamics) and Sri Lanka (on South Asian dynamics), as well as a transregional workshop in New York. At these workshops, AI experts, scholars and practitioners who work on arms control, nuclear strategy and regional security had the opportunity to discuss why and how the adoption of AI capabilities by nuclear-armed states could have an impact on strategic stability and nuclear risk within or among regions.

sipri

report

Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control

There is wide recognition that the need to preserve human control over weapon systems and the use of force in armed conflict will require limits on autonomous weapon systems (AWS).

sipri

article

The Militarization of Artificial Intelligence

Revolutionary technologies hold much promise for humanity. When taken up for military uses, they can affect international peace and security. The challenge is to build understanding among...

stanley center

report

Morally Opposed? A Theory of Public Attitudes and Emerging Military Technologies

Technology does not exist in a vacuum; it is mediated by individual and institutional choices about development and use. In the case of autonomous weapon systems (AWS), which select military...

ssrn

research article

Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance

Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and prio...

springer

commentary

NATO's science and technology organisation details innovations set to be 'strategic disruptors'

The report argues that the crossovers between these technologies, such as with data, AI and autonomy, would be highly influential on the development of future military capabilities. Commenting on...

nato watch

article

Game of drones? How new technologies affect deterrence, defence and security

Exponential technological progress, especially in the digital domain, is affecting all realms of life. Emerging mainly from the commercial sector, it has led to a democratisation of technologies...

nato review

report

Military Applications of AI Raise Ethical Concerns

Artificial intelligence offers great promise for national defense. For example, a growing number of robotic vehicles and autonomous weapons can operate in areas too hazardous for soldiers. But what are the ethical implications of using AI in war or even to enhance security in peacetime?

rand

commentary

Is Russia Developing Robots Capable of Launching Kamikaze Drones?

A threat to NATO? 

national interest

briefing paper

FAQ Key Elements of a Treaty on Fully Autonomous Weapons

A Frequently Asked Questions paper expands on the Campaign’s position and addresses questions raised by the Key Elements of a Treaty proposal.

stop killer robots

research article

How to translate artificial intelligence? Myths and justifications in public discourse

Automated technologies populating today’s online world rely on social expectations about how “smart” they appear to be. Algorithmic processing, as well as bias and missteps in the course of their development, all come to shape a cultural realm that in turn determines what they come to be about. It is our contention that a robust analytical frame could be derived from culturally driven Science and Technology Studies while focusing on Callon’s concept of translation. Excitement and...

sage

report

The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume III, South Asian Perspectives

This edited volume is the third in a series of three. The series forms part of a SIPRI project that explores regional perspectives and trends related to the impact that recent advances in artificial intelligence could have on nuclear weapons and doctrines, as well as on strategic stability and nuclear risk. This volume assembles the perspectives of eight experts on South Asia on why and how machine learning and autonomy may become the focus of an arms race among nuclear-armed states. It further explores how the adoption of these technologies may have an impact on their calculation of strategic stability and nuclear risk at the regional and transregional levels.

sipri

original paper

Knowledge in the grey zone: AI and cybersecurity

Cybersecurity protects citizens and society from harm perpetrated through computer networks. Its task is made ever more complex by the diversity of actors—criminals, spies, militaries, hacktivists, firms—opera...

springer

research article

Ethics of autonomous weapons systems and its applicability to any AI systems

Most artificial intelligence technologies are dual-use. They are incorporated into both peaceful civilian applications and military weapons systems. Most of the existing codes of conduct and ethical principles on artificial intelligence address the former while largely ignoring the latter.

science direct

report

Table-Top Exercises on the Human Element and Autonomous Weapons System

The project brought together 198 experts from 75 different countries to discuss the technical, military and legal implications of introducing autonomy in various steps of the targeting cycle.

unidir

commentary

How Far Are We From Developing AI-Powered Tanks?

A remote-controlled tank could be in the near future, but a killer robot tank is likely still many years away.

national interest

report

Double Elevation: Autonomous Weapons and the Search for an Irreducible Law of War

What should be the role of law in response to the spread of Artificial Intelligence in war? Fuelled by both public and private investment, military technology is accelerating towards increasingly...

ssrn

report

2018 Activity Report

The 2018 activity report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2018 to March 2019.

stop killer robots

brief

Digitalising defence

Digital technologies can vastly improve the operational readiness and effectiveness of Europe’s armed forces. As this Brief shows, however, the EU needs to better understand the risks and opportunities involved in the digitalisation of defence and it needs to financially invest in its technological sovereignty.

euiss

report

Killer Robots: Fact or Fiction? Autonomous Weapon Systems within the Framework of International Humanitarian Law

Autonomous weapons systems have presented an accelerated development in recent years. The use of this type of weapon in scenarios of armed conflict is not expressly regulated...

ssrn

brief

Digital divide? Transatlantic defence cooperation on AI

In the wake of the Artificial Intelligence Strategy unveiled by the US Department of Defense in 2019, this Brief examines the implications of the initiative for Europe and for transatlantic defence cooperation. It argues that Europeans need to develop a strategy for military innovation, including Artificial Intelligence (AI), while the transatlantic partners need to design a common approach to AI governance.

euiss

toolkit

Campaigners Kit

A toolkit for new and existing campaigners to get an overview of the key issues and steps to take action to prohibit autonomous weapons.

stop killer robots

report

Action kit: Save your university from killer robots

This new PAX action kit provides background reading and resources in order to take action and save universities from killer robots.

stop killer robots

commentary

US Department of Defense Adopts Artificial Intelligence Ethical Principles

The Pentagon adopted a set of ethical guidelines on the use of AI.

the diplomat

commentary

Pentagon to Adopt Detailed Principles for Using AI

Sources say the list will closely follow an October report from a defense advisory board.

defense one

report

Conflicted Intelligence: How universities can help prevent the development of lethal autonomous weapons

This report investigates how universities are contributing to the development of autonomous weapons.

stop killer robots

toolkit

Intersectionality and Racism

This publication explores why intersectionality is important when we are discussing killer robots and racism.

stop killer robots

analysis

A New Year’s resolution: bringing IHL home

As the old year bids farewell and the new year takes shape, we tend to ...

icrc blog

report

A WILPF Guide to Killer Robots

A new resource guide for the WILPF network and broader public about autonomous weapon systems, also known as killer robots, bringing a gender lens to the issue.

stop killer robots

commentary

Killer robots reconsidered: Could AI weapons actually cut collateral damage?

Although activists are calling for an international ban on lethal autonomous weapons, incorporating AI into weapons systems may make them more accurate and result in fewer civilian casualties...

bulletin

commentary

Elsa B. Kania on Artificial Intelligence and Great Power Competition

On AI’s potential, military uses, and the fallacy of an AI “arms race.”

the diplomat

commentary

Death of efforts to regulate autonomous weapons has been greatly exaggerated

Some say trying to use the Convention on Certain Conventional Weapons to pre-emptively ban lethal autonomous weapons systems has failed—and consequently should be abandoned. This argument is wrong.

bulletin

report

Training and Education of Armed Forces in the Age of High-Tech Hostilities

This chapter focuses on the legal challenges posed to States by new technologies in relation to the education and training of the personnel of the armed forces. In recent decades, new technologies...

ssrn

commentary

AI for Peace

The United States should apply lessons from the 70-year history of governing nuclear technology by building a framework for governing AI military technology. An AI for Peace program should articulate the dangers of this new technology, principles to manage the dangers, and a structure to shape the incentives for other states.

rand

analysis

‘Act today, shape tomorrow’: the 33rd International Conference

Today we launch the 33rd International Conference of the Red Cross and Red Crescent, a ...

icrc blog

news release

Asser researcher wins NWO grant to research AI in the military context

As more artificial intelligence (AI) technologies are integrated into different areas of our lives, they bring great benefits to our societies, but also challenges. In the military context, which is the focus of Boutin’s project, AI technologies have the potential to greatly improve military capabilities and offer significant strategic and tactical advantages. At the same time, the increasing use of autonomous technologies and adaptive systems in the military context poses profound ethical, legal, and policy challenges.

asser

statement

States must address concerns raised by autonomous weapons

Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.

icrc

statement

Statement to the CCW annual Meeting of High Contracting Parties, 13-15 November

This statement was delivered to delegates attending the CCW annual Meeting of High Contracting Parties on 14 November 2019.

stop killer robots

report

Slippery Slope: The arms industry and increasingly autonomous weapons

This report analyses developments in the arms industry, pointing to areas of work that have potential for applications in lethal autonomous weapons and shows the trend of increasing autonomy in...

stop killer robots

commentary

The United States should drop its opposition to a killer robot treaty

Active US engagement in negotiating a relatively modest treaty offers the best hope for mitigating the humanitarian risks of autonomous weapons.

bulletin

commentary

DeepMind’s AI has now outcompeted nearly all human players at StarCraft II

AlphaStar cooperated with itself to learn new strategies for conquering the popular galactic warfare game.

mit technology review

original research

Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability

This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this alone already r...

springer

commentary

Non-Proliferation and Emerging Technologies

Policymakers are under the impression that they have come too late and that there is no time left to regulate new dual-use technologies. That impression is misleading.

carnegie endowment

commentary

Military artificial intelligence can be easily and dangerously fooled

AI warfare is beginning to dominate military strategy in the US and China, but is the technology ready?

mit technology review

statement

Military needs can never justify using inhumane or indiscriminate weapons

Statement to UN General Assembly First Committee: General debate on all disarmament and international security agenda items

icrc

statement

Statement to the 74th UN General Assembly First Committee on Disarmament and International Security

This statement was delivered to delegates attending the 74th UN General Assembly (UNGA) First Committee on Disarmament and International Security on 18 October 2019.

stop killer robots

commentary

Ethics of AI and Cybersecurity When Sovereignty is at Stake

Sovereignty and strategic autonomy are felt to be at risk today, being threatened by the forces of rising international tensions, disruptive digital transformations and explosive growth of cybersecurity incide...

springer

commentary

The Air Force Is Testing A Secret Weapon: Drone Swarms

Modern warfare will never be the same.

national interest

report

The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume II, East Asian Perspectives

This edited volume is the second of a series of three. They form part of a SIPRI project that explores regional perspectives and trends related to the impact that recent advances in artificial intelligence could have on nuclear weapons and doctrines, as well as on strategic stability and nuclear risk. This volume assembles the perspectives of 13 experts from East Asia, Russia and the United States on why and how machine learning and autonomy may become the focus of an arms race among nuclear-armed states. It further explores how the adoption of these technologies may have an impact on their calculation of strategic stability and nuclear risk at the regional and transregional levels.

sipri

analysis

Autonomous Weapons Systems: When is the right time to regulate?

Those wishing to control the spread and use of autonomous weapons systems generally favour pre-emptive ...

icrc blog

report

An AI Race for Strategic Advantage: Rhetoric and Risks

The rhetoric of the race for strategic advantage is increasingly being used with regard to the development of artificial intelligence (AI), sometimes in a military context, but also more broadly....

ssrn

report

The Militarization of Artificial Intelligence: A Wake-Up Call for the Global South

The militarization of artificial intelligence (AI) is well under way and leading military powers have been investing large resources in emerging technologies. Calls for AI governance at...

ssrn

commentary

The Role of the United Nations in Addressing Emerging Technologies in the Area of Lethal Autonomous Weapons Systems

It is only natural that advances in the intelligent autonomy of digital systems attract the attention of Governments, scientists and civil society concerned about the possible deployment and use of lethal autonomous weapons. What is needed is a forum to discuss these concerns and construct common understandings regarding possible solutions. ...

unoda

commentary

Responsible Innovation for a New Era in Science and Technology

Today we are at the dawn of an age of unprecedented technological change. In areas from robotics and artificial intelligence (AI) to the material and life sciences, the coming decades promise innovations that can help us promote peace, protect our planet and address the root causes of suffering in our world. ...

unoda

position paper

Responsible AI: requirements and challenges

This position paper discusses the requirements and challenges for responsible AI with respect to two interdependent objectives: (i) how to foster research and development efforts toward socially beneficial app...

springer

analysis

Black magic, zombies and dragons: a tale of IHL in the 21st Century

As we marked the 70th anniversary of the Geneva Conventions last month, I want to ...

icrc blog

research

New Tech, New Threats, and New Governance Challenges: An Opportunity to Craft Smarter Responses?

The array of new technologies emerging on the world stage, the new threats they can pose, and the associated governance dilemmas highlight a set of common themes.

carnegie endowment

article

IHL session in Viet Nam: Experts tackle tough questions on cyber warfare and autonomous weapons

As harsh as it may sound, what do you think is "better"? Being killed by a human being or by a robot? If international humanitarian law (IHL) applies to humans and they are obliged to respect it, what body of law prohibits armed drones or robots from killing people? In the context of cyber warfare and autonomous weapons, is IHL still relevant? Or, is it too old to adapt to the

icrc

q&a

Intel, Ethics, and Emerging Tech: Q&A with Cortney Weinbaum

Cortney Weinbaum studies topics related to intelligence and cyber policy as a senior management scientist at RAND. In this interview, she discusses challenges facing the intelligence community, the risks of using AI as a solution, and ethics in scientific research.

rand

commentary

Dual-use Distinguishability: How 3D-printing Shapes the Security Dilemma for Nuclear Programs

Additive manufacturing is being adopted by nuclear programs to improve production capabilities, yet its impact on strategic stability remains unclear. This article uses the security dilemma to...

carnegie endowment

statement

Statement to CCW GGE meeting on lethal autonomous weapons systems, 20-21 August

This statement was delivered to delegates participating at the CCW GGE meeting on lethal autonomous weapons systems on 20 August 2019.

stop killer robots

research

What the Machine Learning Value Chain Means for Geopolitics

Artificial intelligence, or AI, has become a major source of economic value, contributing as much as $2 trillion to today’s global economy. Sophisticated machine learning technology is driving this...

carnegie endowment

report

Autonomous Weapons

This entry puts forth a proposed definition of autonomous weapons, explains the basis of that definition, distinguishes autonomous weapons from drones and explains how autonomous weapons are not...

ssrn

report

Autonomous Weapon Systems, the Law of Armed Conflict and the Exercise of Responsible Judgment

Military technology continues to outpace the law. Recent developments in cyber warfare, space warfare and air and missile warfare have generated creative initiatives by groups of lawyers,...

ssrn

commentary

In Syria, Russia found the chance to showcase its swagger–and its robot weapons

The Syrian civil war gave Russia the chance to test and purportedly improve its robotic and autonomous weapons. Weapons makers showcased some of their products at a recent convention in Moscow.

bulletin

commentary

The Navy Will Soon Have a New Weapon to Kill 'Battleships' or Submarines

The future is now? 

national interest

policy

Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence

In order to promote the healthy development of the new generation of AI, to better balance development and governance, to ensure the safety, reliability and controllability of AI, to support the economic, social and environmental pillars of the UN Sustainable Development Goals, and to jointly build a human community with a shared future, all stakeholders concerned with AI development should observe the following principles:

most

report

Asleep at the Switch? How Killer Robots Become a Force Multiplier of Military Necessity

Lethal autonomous weapons — machines that might one day target and kill people without human intervention or oversight — are gaining attention on the world stage. While their development,...

ssrn

commentary

Blog: Lethal autonomous weapons, war crimes, and the Convention on Conventional Weapons

Asser Institute and Graduate Institute researcher Dr Marta Bo and Taylor Woodcock argue, in a blog post written for The Global, that there is a lack of discussion on autonomous weapons and criminal...

asser

report

When Speed Kills: Autonomous Weapon Systems, Deterrence, and Stability

While the applications of artificial intelligence (AI) for militaries are broad and go beyond the battlefield, autonomy on the battlefield, in the forms of lethal autonomous weapon systems (LAWS),...

ssrn

commentary

The US Air Force is enlisting MIT to help sharpen its AI skills

The Air Force Artificial Intelligence Incubator aims to develop technologies that serve the “public good,” not weapons development.

mit technology review

commentary

Can We Still Regulate Emerging Technologies?

The rapid pace of advances in technology, from artificial intelligence to military robotics, raises the question of whether it is too late to begin regulating emerging technologies.

carnegie endowment

commentary

The United Nations and the future of warfare

The United Nations has debated whether to ban lethal autonomous weapons for years now. As countries make rapid progress in the autonomous capabilities of weapons systems, will any ban be too late...

bulletin

analysis

Legal regulation of AI weapons under international humanitarian law: A Chinese perspective

Arguably, international humanitarian law (IHL) evolves with the development of emerging technologies. The history of ...

icrc blog

report

The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume I, Euro-Atlantic perspectives

This edited volume focuses on the impact of artificial intelligence (AI) on nuclear strategy. It is the first instalment of a trilogy that explores regional perspectives and trends related to the impact that recent advances in AI could have on nuclear weapons and doctrines, strategic stability and nuclear risk. It assembles the views of 14 experts from the Euro-Atlantic community on why and how machine learning and autonomy might become the focus of an arms race among nuclear-armed states, and how the adoption of these technologies might affect their calculation of strategic stability and nuclear risk at the regional and transregional levels.

sipri

report

The Lawful Use of Autonomous Weapon Systems for Targeted Strikes (Part 3): Evaluating the Outer Limits

Lethal Autonomous Weapon Systems (LAWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently...

ssrn

commentary

Laying Down the LAWS: Strategizing Autonomous Weapons Governance

This post is the second entry in the blog series Transformative Technology, Transformative Governance, which examines the global implications of emerging technologies, as well as measures to mitigate their risks and maximize their benefits.

council on foreign relations

analysis

Safety net or tangled web: Legal reviews of AI in weapons and war-fighting

Editor’s note: For those interested in the topic of legal reviews of weapons, it is ...

icrc blog

analysis

The viability of data-reliant predictive systems in armed conflict detention

Editor’s note: In this post, Tess Bridgeman continues the discussion on detention and the potential use of ...

icrc blog

analysis

Enhanced distinction: The need for a more focused autonomous weapons targeting discussion at the LAWS GGE

The meeting of the Lethal Autonomous Weapon Systems (LAWS) Group of Governmental Experts (GGE) has been taking place in Geneva this week. This ...

icrc blog

analysis

The need for clear governance frameworks on predictive algorithms in military settings

Editor’s note: In this post, as part of the AI blog series, Lorna McGregor continues the discussion on ...

icrc blog

commentary

Why AI researchers should reconsider protesting involvement in military projects

One Defense Department advisor suggests that “constructive engagement” will be more successful than opting out.

mit technology review

analysis

Detaining by algorithm

Editor’s note: As part of this AI blog series, several posts focus on detention and the ...

icrc blog

analysis

Legal reviews of weapons, means and methods of warfare involving artificial intelligence: 16 elements to consider

What are some of the chief concerns in contemporary debates around legal reviews of weapons, ...

icrc blog

analysis

Expert views on the frontiers of artificial intelligence and conflict

Recent advances in artificial intelligence have the potential to affect many aspects of our lives ...

icrc blog

commentary

Chinese, Local Drones Reflect Changing Middle East

Ever since 9/11, drones have been among the most visible, and often controversial, signs of American power in the Middle East and beyond. But as regional powers look to chart their own course, a new generation of cheaper unmanned aerial vehicles, Chinese or locally built and with far fewer restrictions on their use, is taking to the skies.

new america

report

Bio Plus X: Arms Control and the Convergence of Biology and Emerging Technologies

Technological advances in the biological sciences have long presented a challenge to efforts to maintain biosecurity and prevent the proliferation of biological weapons. The convergence of developments in biotechnology with other, emerging technologies such as additive manufacturing, artificial intelligence and robotics has increased the possibilities for the development and use of biological weapons.

sipri

commentary

The next ‘Deep Blue’ moment: Self-flying drone racing

In 1997, IBM’s “Deep Blue” computer defeated grandmaster Garry Kasparov in a match of chess. It was a historic moment, marking the end of an era in which humans could defeat machines in complex strategy games. Today, artificial intelligence (AI) bots can defeat humans not only in chess, but in nearly every digital game that exists.…

mit technology review

commentary

Trump orders some sort of vague action in the AI arms race

Through an executive order, President Donald Trump launched the American AI Initiative, further underscoring the importance of a group of technologies that are reshaping everything from medical...

bulletin

commentary

Does the U.S. Navy's Next Super Weapon Have a Fatal Flaw?

A new study on naval drones warns the real problem with autonomous drones isn’t going berserk, but rather the inability to adapt to the unexpected.

national interest

commentary

China’s military is rushing to use artificial intelligence

A new report shows that a more literal AI arms race is also under way.

mit technology review

commentary

China's Olive Branch to Save the World from AI Weapons

Is China open to arms control over AI weapons development? The United States should find out.

national interest

commentary

Panel discussion: Human control over Autonomous Military Technologies

On Wednesday 13 February, the Asser Institute will be hosting a panel discussion on the topic of ‘Human Control over Autonomous Military Technologies’ from 14:30 to 17:00. The event is organised in...

asser

commentary

Cyberweapons: A Growing Threat to Strategic Stability in the Twenty-First Century

The impact of cyberweapons on strategic stability is a growing problem that extends well beyond the security of the control and communication systems of nuclear forces.

carnegie endowment

analysis

Is arms control over emerging technologies just a peacetime luxury? Lessons learned from the First World War

At the turn of the twentieth century, many engineers with fertile imaginations—from France’s Gustave Gabet to America’s Orville Wright—hoped that their inventions would ...

icrc blog

commentary

Does the United States Face an AI Ethics Gap?

Instead of worrying about an artificial intelligence “ethics gap,” U.S. policymakers and the military community could embrace a leadership role in AI ethics. This may help ensure that the AI arms race doesn't become a race to the bottom.

rand

report

Legal, Regulatory, and Ethical Frameworks for Development of Standards in Artificial Intelligence (AI) and Autonomous Robotic Surgery

Background: This paper aims to move the debate forward regarding the potential for artificial intelligence (AI) and autonomous robotic surgery with a particular focus on ethics, regulation and...

ssrn

commentary

Never mind killer robots—here are six real AI dangers to watch out for in 2019

Last year a string of controversies revealed a darker (and dumber) side to artificial intelligence.

mit technology review

analysis

Machine autonomy and the constant care obligation

The debate about the way the international community should deal with autonomous weapon systems has ...

icrc blog

article

Ethical Aspects of Military Maritime and Aerial Autonomous Systems

ABSTRACT: Two categories of ethical questions surrounding military autonomous systems are discussed in this article. The first category concerns ethical issues regarding the use of military...

taylor & francis

commentary

Air Force Hopes to Arm Stealth F-35s, F-15s and F-16s with the Ultimate Weapon

The Air Force and DARPA are now testing new hardware and software configured to enable 4th-Generation aircraft to command drones from the cockpit in the air, bringing new levels of autonomy, more attack options and a host of new reconnaissance advantages to air warfare.

national interest

commentary

Artificial intelligence: a detailed explainer, with a human point of view

Is artificial intelligence (AI) a threat to our way of life, or a blessing? AI seeks to replicate and maybe replace what human intelligence does best: make complex decisions. Currently, human...

bulletin

commentary

Autonomous Weapons Are Coming, This is How We Get Them Right

Fully autonomous weapons are not only inevitable; they have been in America’s inventory since 1979.

national interest

news release

Autonomous weapons: States must agree on what human control means in practice

Should a weapon system be able to make its own “decision” about who to kill?

icrc

commentary

Learning from South Korea: How artificial intelligence can transform US export controls

How can civilian agencies in the national-security space leverage artificial intelligence to fortify security interests, with far fewer resources than their heavyweight military and intelligence counterparts...

bulletin

commentary

Air Force Plans to Arm Stealth F-35s, F-15s and F-16s with the Ultimate Weapon

Advances in computer power, processing speed and AI are rapidly changing the scope of what platforms are able to perform without needing human intervention.

national interest

commentary

AI is not “magic dust” for your company, says Google’s Cloud AI boss

Andrew Moore says getting the technology to work in businesses is a huge challenge.

mit technology review

commentary

Autonomous Weapons: The Ultimate Military Game Changer?

Know this: if autonomous weapons are developed and introduced into the world’s arsenals, then they are unlikely to immediately revolutionize warfare.

national interest

discussion

Retaining Meaningful Human Control of Weapons Systems

A panel discussion entitled “Retaining Meaningful Human Control of Weapons Systems” was held on the sidelines of the First Committee on Disarmament and International Security.

unoda

statement

Weapons: Statement of the ICRC to the United Nations, 2018

United Nations General Assembly, 73rd Session, First Committee. Statement delivered by Ms. Kathleen Lawand, Head of Arms Unit, ICRC.

icrc

report

The Lawful Use of Autonomous Weapon Systems for Targeted Strikes (Part 2): Targeting Law & Practice

Lethal Autonomous Weapon Systems (LAWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently...

ssrn

analysis

Perils of Lethal Autonomous Weapons Systems Proliferation: Preventing Non-State Acquisition

Terrorist groups, illicit organisations, and other non-state actors have long been fascinated with advanced weapons technologies. However, international efforts to restrict the proliferation of such weapons are currently lagging behind the emergence of new, potentially equally destructive technologies. In particular, the last few years have marked the rapid development of lethal autonomous weapons systems (LAWS).

gcsp

commentary

The U.S. Air Force Plans to Arm F-15s, F-22s and F-35s with the Ultimate Weapon

The Air Force and DARPA are now testing new hardware and software configured to enable 4th-Generation aircraft to command drones from the cockpit in the air.

national interest

commentary

The Pentagon is putting billions toward military AI research

DARPA, the US Defense Department’s research arm, will spend $2 billion over the next five years on military AI projects.

mit technology review

analysis

The (im)possibility of meaningful human control for lethal autonomous weapon systems

This week, the Group of Governmental Experts (GGE) on lethal autonomous weapon systems (LAWS) is holding their third meeting at the UN Certain ...

icrc blog

commentary

What the Campaign to Stop Killer Robots can learn from the antinuclear weapons movement

What today's campaigners against the battlefield use of A.I.-powered autonomous robots can learn from the successful antinuclear movements of yesteryear.

bulletin

analysis

The impact of gender and race bias in AI

Automated decision algorithms are currently propagating gender and race discrimination throughout our global community. The ...

icrc blog

analysis

The human nature of international humanitarian law

International humanitarian law (IHL) regulates the use of force in armed conflict. It inherently provides ...

icrc blog

commentary

Air Force to Arm F-15s, F-22s and F-35s with the Ultimate Weapon

The Air Force and DARPA are now testing new hardware and software configured to enable 4th-Generation aircraft to command drones from the cockpit in the air, bringing new levels of autonomy, more attack options and a host of new reconnaissance advantages to air warfare.

national interest

analysis

Autonomous weapons: Operationalizing meaningful human control

For the second time this year, States will come together in the UN Convention on ...

icrc blog

commentary

Why AI researchers shouldn’t turn their backs on the military

The author of a new book on autonomous weapons says scientists working on artificial intelligence need to do more to prevent the technology from being weaponized.

mit technology review

report

Autonomous Weapon Systems: The Possibility and Probability of Accountability

This paper addresses the challenge of accountability that arises in relation to autonomous weapon systems (AWS), a challenge which focuses on the hypothesis that AWS will make it impossible to...

ssrn

commentary

The Army’s New Futures Command Will Succeed or Fail by Congress’s Hand

Until Congress straightens its never-ending fiscal rollercoaster and Army leadership demonstrates that it has learned from its past, the success of Futures Command remains dubious.

national interest

analysis

Autonomous weapons and human control

Concerns about ensuring sufficient human control over autonomous weapon systems (AWS) have been prominent since ...

icrc blog

analysis

The Impact of Autonomy and Artificial Intelligence on Strategic Stability

The article discusses artificial intelligence, human activities and future autonomous weapons systems.

gcsp

commentary

India Should Be Ready To Reap Military Potential Of AI, It Can Redefine Warfare As We Know It

India has a vast talent pool and a burgeoning start-up scene which, if properly tapped and encouraged, could not only provide indigenous military solutions, but could also create significant...

carnegie endowment

analysis

New types of weapons need new forms of governance

The existing national and international tools used to control the emergence and use of weapons that may contravene international humanitarian law (IHL) have ...

icrc blog

report

The Lawful Use of Autonomous Weapon Systems for Targeted Strikes (Part 1): Concepts, Advantages and Technologies

Lethal Autonomous Weapon Systems (LAWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently...

ssrn

commentary

“A Tale of Two Cities”: The Roles of Geneva and The Hague, two UN cities, in Driving Global Justice

The event brought together speakers from both Geneva and The Hague, to explore and highlight the role of these two UN Cities in linking research to policy in the areas of peace and justice.

asser

interview

Killer Robots and Autonomous Weapons With Paul Scharre

Paul Scharre, senior fellow and director of the technology and national security program at the Center for a New American Security (CNAS), discusses autonomous weapons and the changing nature of warfare with CFR's James M. Lindsay. 

council on foreign relations

commentary

China In Race To Overtake U.S. Military in AI Warfare

AI Weapons: China and America Are Desperate to Dominate This New Technology

national interest

commentary

Call for Papers - Yearbook of International Humanitarian Law, Vol. 21 (2018)

In recent years numerous developments have again highlighted the importance of Weapons Law for preventing and regulating armed conflict. The use of chemical weapons in Syria, the ups-and-downs of...

asser

commentary

The Army's Next Super Weapon: Robot Tanks?

Yes, this is coming. 

national interest

research article

“The Computer Said So”: On the Ethics, Effectiveness, and Cultural Techniques of Predictive Policing

In this paper, I use The New York Times’ debate titled, “Can predictive policing be ethical and effective?” to examine what are seen as the key operations of predictive policing and what impacts they might have in our current culture and society.

sage

report

Normal Autonomous Accidents: What Happens When Killer Robots Fail?

Over the past decade much has been written on lethal autonomous weapons systems (LAWS), commonly known as “killer robots”. This includes legal, ethical and moral concerns as well as issues...

ssrn

commentary

Why the world needs to regulate autonomous weapons, and soon

If machines that autonomously target and kill humans are fielded by one country, it could be quickly followed by others, resulting in destabilizing global arms races. And that’s only a small part...

bulletin

commentary

Manifestos and open letters: Back to the future?

Why UN discussions on the management of lethal autonomous weapons need greater participation by the scientific and research communities and representatives of the private sector. Statements of...

bulletin

commentary

Defending against “The Entertainment”

Amid the published angst about AI and its hypothetical threats, more attention ought to be given to the threat that AI-enabled entertainment poses to our brains and our civilization.

bulletin

commentary

An expert collection on the military applications of AI

Over the course of this week, the Bulletin, in partnership with the Stanley Foundation, is publishing essays by top experts on how to manage the explosion of military AI research and development around the...

bulletin

commentary

The promise and peril of military applications of artificial intelligence

The promise of AI—including its ability to improve the speed and accuracy of everything from logistics and battlefield planning to human decision making—is driving militaries around the world to...

bulletin

report

Preventing Autonomous Weapon Systems from Being Used to Perpetrate Intentional Violations of the Laws of War

Autonomous Weapon Systems (AWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently fielded nor...

ssrn

analysis

Human judgment and lethal decision-making in war

For the fifth year in a row, government delegates meet at the United Nations in ...

icrc blog

commentary

Asser's First Winter Academy on Artificial Intelligence and International Law

For this first edition, the Winter Academy will include general sessions on legal and theoretical perspectives on AI (including on legal personality, collective agency, human control,...

asser

analysis

Autonomous weapon systems: A threat to human dignity?

In the opening scene of Christopher Nolan’s Dunkirk, six British soldiers, looking for food and ...

icrc blog

statement

Towards limits on autonomy in weapon systems

Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems, statement of the ICRC. The International Committee of the Red Cross (ICRC) is pleased to contribute its views to this second meeting of the Group of Governmental Experts on “Lethal Autonomous Weapon Systems”.

icrc

commentary

Here’s how the US needs to prepare for the age of artificial intelligence

Government indifference toward AI could let the US lose ground to rival countries. But what would a good AI plan actually look like?

mit technology review

commentary

To kill killer robots, a brief boycott

If your university partners with a defense contractor to research autonomous weapons, do not expect AI researchers to sit still for it.

bulletin

commentary

Big organizations may like killer robots, but workers and researchers sure don’t

Tech firms and universities interested in building AI-powered weapons for lucrative military contracts are, predictably, facing some significant pushback.

mit technology review

article

Ethics and autonomous weapon systems: An ethical basis for human control?

As part of continuing reflections on the legal and ethical issues raised by autonomous weapons systems, the ICRC convened a round-table meeting in Geneva from 28 to 29 August 2017 to explore the ethical aspects. This report - "Ethics and autonomous weapon systems: An ethical basis for human control?" - summarizes the discussions and highlights the ICRC's main conclusions.

icrc

analysis

Autonomous weapon systems: An ethical basis for human control?

The requirement for human control: The risks of functionally delegating complex tasks—and associated decisions—to sensors ...

icrc blog

article

The Critical Human Element in the Machine Age of Warfare

As major militaries progress toward the introduction of artificial intelligence (AI) into intelligence, surveillance, and reconnaissance, and even command systems, Petrov’s decision should serve as a potent reminder of the risks of reliance on complex systems in which errors and malfunctions are not only probable, but probably inevitable.

stanley center

report

The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence

This paper is an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies.

unidir

commentary

The Army Wants a New Tank to Take On Russia and China

The Army is massively speeding up its early prototyping of weapons and technology for its Next-Gen Combat Vehicle.

national interest

research

The Dangerous Illogic of Twenty-First-Century Deterrence Through Planning for Nuclear Warfighting

Rather than use Cold War principles, nuclear states should shift their nuclear doctrines and capabilities to strategic deterrence as needed by the twenty-first century.

carnegie endowment

commentary

How AI Can Help the Indian Armed Forces

The controversies surrounding autonomous weapons must not obscure the fact that like most technologies, AI has a number of non-lethal uses for militaries across the world, and especially for the...

carnegie endowment

report

The Roboticization of Warfare with Lethal Autonomous Weapon Systems (Laws): Mandate of Humanity or Threat to It?

LAWS are a threat to humanity and, after an objective analysis free of preconceived attachment to a particular outcome, are prohibited by the lex lata. The analysis is not conducted in a...

ssrn

report

The Principle of Proportionality in an Era of High Technology

This chapter, in the forthcoming book Complex Battlespaces: The Law of Armed Conflict and the Dynamics of Modern Warfare (published by Oxford University Press) explores the application of a key...

ssrn

article

Autonomous weapon systems under international humanitarian law

The United Nations Office for Disarmament Affairs published a collection of articles: "Perspectives on Lethal Autonomous Weapon Systems"

icrc

video

India’s Expertise and Influence Essential to Address Challenges of Autonomous Weapons

Dr Hugo Slim, Head of Policy and Humanitarian Diplomacy at the ICRC, visited New Delhi this week to speak at the Raisina Dialogue organised by the Ministry of External Affairs of India and the Observer Research Foundation 16-18 January 2018.

icrc

commentary

A Good Year for Artificial Intelligence

It is necessary to move past the idea of artificial intelligence being a replacement for humans across the board, and begin having a deeper conversation about its effectiveness as a tool in the...

carnegie endowment

interview

Don't fear the robopocalypse: Autonomous weapons expert Paul Scharre

A former Army Ranger—who happens to have led the team that established Defense Department policy on autonomous weapons—explains in a Bulletin interview what these weapons are good for, what they’re...

bulletin

report

Autonomous Weapon Systems: A New Challenge for International Humanitarian Law and International Human Rights Law

This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems. It seeks to address concerns expressed by some scholars that such...

ssrn

commentary

Neuro, cyber, slaughter: Emerging technological threats in 2017

Looking back at our best coverage in 2017 of emerging technological threats.

bulletin

commentary

“As much death as you want”: UC Berkeley's Stuart Russell on “Slaughterbots”

If you never dreamed that toy-like drones from off the shelf at the big-box store could be converted—with a bit of artificial intelligence and a touch of shaped explosive—into face-recognizing...

bulletin

report

Article 36 Reviews: Dealing with the Challenges posed by Emerging Technologies

Article 36 of the 1977 Additional Protocol to the 1949 Geneva Conventions imposes a practical obligation on states to determine whether ‘in the study, development, acquisition or adoption of a new weapon, means or method of warfare’ its use would ‘in some or all circumstances be prohibited by international law’. This mechanism is often colloquially referred to as an ‘Article 36 review’.

sipri

perspective

UNODA Occasional Papers – No. 30, November 2017

Perspectives on Lethal Autonomous Weapon Systems

unoda

commentary

Beware the Robotic Empire

Given the importance of artificial intelligence (AI) in the coming years, India must keep a wary eye on Chinese developments in this field, and develop its own strategic vision of how AI...

carnegie endowment

report

The Weaponization of Increasingly Autonomous Technologies: Autonomous Weapon Systems and Cyber Operations

The interaction of cyber operations and increasingly autonomous physical weapon systems may give rise to new security challenges, as these interactions can multiply complexity and introduce new vulnerabilities.

unidir

statement

Expert Meeting on Lethal Autonomous Weapons Systems

The ICRC welcomes this first meeting of the Group of Governmental Experts on "Lethal Autonomous Weapons Systems".

icrc

commentary

The critical human element in the machine age of warfare

As multiple militaries have begun to use AI to enhance their capabilities on the battlefield, several deadly mistakes have shown the risks of automation and semi-autonomous systems, even when human...

bulletin

analysis

Ethics as a source of law: The Martens clause and autonomous weapons

Ethics evolves, the law changes. In this way, moral progress may occur. Yet the relation ...

icrc blog

report

Mapping the Development of Autonomy in Weapon Systems

The Mapping the Development of Autonomy in Weapon Systems report presents the key findings and recommendations from a one-year mapping study on the development of autonomy in weapon systems.

sipri

discussion

Pathways to Banning Fully Autonomous Weapons

On 16 October 2017, the Permanent Mission to the United Nations of Mexico partnered with the International Committee for Robot Arms Control, Human Rights Watch, Seguridad Humana en Latinoamérica y el Caribe and the Campaign to Stop Killer Robots to host a panel discussion entitled “Pathways to Banning Fully Autonomous Weapons” as part of the First Committee side event series for the 72nd Session General Assembly.

unoda

commentary

The B-52 Bomber: Now Armed with Lasers?

As technology progresses, particularly in the realm of autonomous systems, many wonder if a laser-drone weapon will soon have the ability to find, acquire, track and destroy an enemy target using sensors, targeting and weapons delivery systems – without needing any human intervention.

national interest

discussion

Autonomous Weapon Systems: Understanding Learning Algorithms and Bias

On 5 October 2017, the United Nations Institute for Disarmament Research (UNIDIR) hosted a side event, “Autonomous Weapons Systems: Learning Algorithms and Bias” at the United Nations Headquarters in New York.

unoda

commentary

Why “stupid” machines matter: Autonomous weapons and shifting norms

Should legal and regulatory norms be adjusted to address the threat of hyperintelligent autonomous weapons in the future? Maybe—but dumb autonomous weapons are altering norms right now.

bulletin

analysis

Autonomous weapons mini-series: Distance, weapons technology and humanity in armed conflict

In this blog post, I look at the ethical and legal ramifications of distance in ...

icrc blog

analysis

Introduction to Mini-Series: Autonomous weapon systems and ethics

Autonomous weapon systems & the dictates of public conscience: An ethical basis for human control? On 28–29 August 2017, the ICRC convened a ...

icrc blog

report

Disarmament: A Basic Guide – Fourth Edition (2017)

Conceived as a comprehensive introduction to a field central to the work of the United Nations, Disarmament: A Basic Guide aims to provide a useful overview of the nuanced challenges of building a more peaceful world in the twenty-first century.

unoda

commentary

These are the Weapons China Needs to Crush America in a War

Will Beijing build them? 

national interest

research article

Robot Wars: US Empire and geopolitics in the robotic age

How will the robot age transform warfare? What geopolitical futures are being imagined by the US military? This article constructs a robotic futurology to examine these crucial questions. Its central concern is how robots – driven by leaps in artificial ...

sage

commentary

Killer Robots are Coming, and the U.S. Isn't the Only Buyer

Other countries are competitive when it comes to artificial intelligence and robotics, and much of the skill and technology is available in the private sector - not controlled by governments.

carnegie endowment

research article

When AI goes to war: Youth opinion, fictional reality and autonomous weapons

This paper relates the results of deliberation of youth juries about the use of autonomous weapons systems (AWS). The discourse that emerged from the juries centered on several key issues. The jurors expressed the importance of keeping the humans in the decision-making process when it comes to militarizing artificial intelligence, and that only humans are capable of moral agency.

science direct

commentary

'Terminator' Robots: The U.S. Military's Ultimate Weapon or Ultimate Nightmare?

“Lethal autonomous weapons threaten to become the third revolution in warfare.”

national interest

commentary

Should We Fear Artificial Intelligence?

It is necessary to be open-eyed and clear-headed about the practical benefits and risks associated with the increasing prevalence of artificial intelligence.

carnegie endowment

report

Fully Autonomous Weapons Systems and the Principles of International Humanitarian Law

The development of military technology during the 20th century increased the capabilities of machines and computers while reducing the number and complexity of tasks conducted by the...

ssrn

article

Security by Design

Nuclear weapons have been around for 70 years. They are an old technology, and the norms and institutions that govern them are fairly well established. Emerging technologies, however, could create...

stanley center

article

Autonomous military drones: no longer science fiction

The possibility of life-or-death decisions someday being taken by machines not under the direct control of humans needs to be taken seriously. Over the last few years we have seen a rapid...

nato review

commentary

Why India Needs a Strategic Artificial Intelligence Vision

The present trajectory of AI advancement indicates that future economies and national security will be defined by it, making it among a handful of technologies that will shape global politics.

carnegie endowment

report

At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law

This Article explores the interaction of artificial intelligence (AI) and machine learning with international humanitarian law (IHL) in autonomous weapon systems (AWS). Lawyers and scientists...

ssrn

report

Defending the Boundary: Constraints and Requirements on the Use of Autonomous Weapon Systems Under International Humanitarian and Human Rights Law

The focus of scholarly inquiry into the legality of autonomous weapon systems (AWS) has been on compliance with IHL rules on the conduct of hostilities. Comparatively little attention has been given...

ssrn

report

Autonomous Weapon System: Law of Armed Conflict (LOAC) and Other Legal Challenges

The legality of autonomous weapon systems (AWS) under international law is a swiftly growing issue of importance as technology advances and machines acquire the capacity to operate without human...

ssrn

commentary

The case for banning autonomous weapons rests on morality, not practicality

The failure of the chemical weapons ban in Syria is not a strike against a proposed global ban on autonomous weapons. Bans derive their strength from morality, not practicality.

bulletin

commentary

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

mit technology review

commentary

Autonomous weapon systems: Is a space warfare manual required?

The legality of using Autonomous Weapon Systems (AWS) in space warfare is examined. Currently, there are manuals for air and missile warfare, naval warfare and cyber warfare; a clear gap in the literature is the absence of a manual for space warfare.

science direct

report

Toward Meaningful Human Control of Autonomous Weapons Systems Through Function Allocation

One of the few convergent themes during the first two United Nations Meetings of Experts on autonomous weapons systems (AWS), in 2014 and 2015, was the requirement that there be meaningful human...

ssrn

article

The upside and downside of swarming drones

ABSTRACT: The US and Chinese militaries are starting to test swarming drones – distributed collaborative systems made up of many small, cheap, unmanned aircraft. This new subset of independently...

taylor & francis

commentary

U.S. Air Force Chief Scientist: Stealth Drones and Killer Swarms Could Be Coming Soon

The future looks HUGE for the U.S. Air Force. 

national interest

commentary

How America's Mighty F-15, F-16 or F-35s Could Soon Be Firing Lasers

A big development is almost here.

national interest

analysis

The evolution of warfare: Focus on the Law

How has warfare changed over the past 100 years? Is the international community still sufficiently equipped to reasonably minimize its negative effects on ...

icrc blog

commentary

The U.S. Military Might Be on the Verge of the Ultimate Naval Weapon

Thanks to DARPA and BAE Systems. 

national interest

commentary

The Swarm of War: Is India Ready?

There is a new arms race taking shape, centered around three interconnected technologies: autonomous weapons, swarms, and cyberwarfare.

carnegie endowment

commentary

AI Rules: From Three to Twenty Three

The pace of progress in AI development, the expanding scope of its application, and the growing intensity of the current research effort suggest that it may not be too soon to revisit and...

carnegie endowment

commentary

India Has an Opportunity to Shape the Future of War

India’s elevation as chair of a group designed to kick-start talks on lethal autonomous weapon systems gives it the unique opportunity to take a leadership role in global debates on the issue.

carnegie endowment

report

Legality of Lethal Autonomous Weapons AKA Killer Robots

Automated warfare, including the aerial drones extensively used in ongoing armed conflicts, is now an established part of military technology worldwide. It is only logical to assume that the...

ssrn

commentary

ISIL ramps up fight with weaponised drones

In addition to using drones for reconnaissance in Iraq, ISIL has been sending them out with bombs attached.

new america

commentary

America's Master Plan to Turn the M1 Abrams Tank Into a Super Weapon

Algorithms are progressing to the point where they will allow an Abrams tank crew to operate multiple nearby “wing-man” robotic vehicles in a command-and-control capacity while on the move in combat.

national interest

report

Autonomous weapon system: Law of armed conflict (LOAC) and other legal challenges

The legality of autonomous weapon systems (AWS) under international law is a swiftly growing issue of importance as technology advances and machines acquire the capacity to operate without human control. This paper argues that the existing laws are ineffective and that a different set of laws is needed. It examines several issues that are critical for the development and use of AWS in warfare.

science direct

report

Autonomous Weapon Systems and the Threshold of Non-International Armed Conflict

The ongoing international humanitarian law (IHL) discussion predominantly centers on whether States’ development and employment of AWS can comply with certain fundamental obligations contained in...

ssrn

report

Mapping the Development of Autonomy in Weapon Systems: A Primer on Autonomy

Since 2013 the governance of lethal autonomous weapon systems (LAWS) has been discussed under the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW). The discussion is still at an early stage, with most states parties still in the process of understanding the issues at stake—beginning with the fundamental questions of what constitutes ‘autonomy’ and to what extent it is a matter of concern in the context of weapon systems and the use of force. A number of states parties have stressed that future discussions could usefully benefit from further investigation into the conceptual and technical foundations of the meaning of ‘autonomy’.

sipri

report

Mapping the Innovation Ecosystem Driving the Advance of Autonomy in Weapon Systems

Since 2013 the governance of lethal autonomous weapon systems (LAWS) has been discussed internationally under the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW). Thus far, the discussion has remained at the informal level. Three informal meetings of experts (held in 2014, 2015 and 2016) have been convened under the auspices of the CCW to discuss questions related to emerging technologies in the area of LAWS. Several delegations have, however, already indicated that they have concerns as to the impact that a new protocol on LAWS could have on innovation, particularly in the civilian sphere, since, arguably, much of the technology on which LAWS might be based could be dual use.

sipri

commentary