Artificial Intelligence And The Ethics Of Autonomous Weaponry

It is not often that the UN Special Rapporteur on extrajudicial, summary or arbitrary executions makes headlines, or that his report attracts attention beyond a small circle of human rights activists. Yet this is what happened when Christof Heyns presented his report on the use of lethal force through armed drones from the perspective of protecting the right to life.

The report was prepared in response to a request from the General Assembly, although the text of the resolution used only general language on extrajudicial, summary or arbitrary executions. In a departure from previous reports, it focused on armed drones, which – while not illegal as such, as the report noted – could make it easier for states to deploy lethal and targeted force on the territory of other countries. Heyns argued that the modern concept of human rights rests on the fundamental principle that those responsible for violations must be held accountable, and that a lack of transparency and accountability in the deployment of drones undermines the rule of law and may threaten international security.

In his conclusions, he called for action by the UN Security Council and recommended that states using armed drones be transparent about their development, acquisition and use, and that governments provide meaningful oversight of drone deployments and ensure investigation, accountability and redress for their misuse.

Not surprisingly, these recommendations were not acted upon immediately. As High Representative for Disarmament Affairs, I met with Mr. Heyns shortly after the report was released and raised it repeatedly in my discussions with member states, seeking to move the debate from the Third Committee of the General Assembly (human rights) to the First Committee (disarmament and international security), where I clearly felt it belonged.

Perhaps this was overly ambitious, but sixteen countries in the First Committee expressed support for action under the umbrella of the Convention on Certain Conventional Weapons (CCW) – also known as the Inhumane Weapons Convention – to discuss issues related to emerging technologies in the area of lethal autonomous weapons systems (LAWS). It is true that the structure of the CCW – a chapeau convention with five annexed protocols – is flexible in that it can accommodate additional protocols, and thus offers the possibility of adding a protocol on LAWS. But the CCW has another feature that was clearly important to member states: while the General Assembly decides by majority, the CCW operates by consensus, which means that mandates or decisions can be blocked by a single objecting state.

France, as president of the CCW, therefore proposed an informal meeting of experts, which was convened for four days in May 2014 with the participation of member states as well as industry representatives and a large number of civil society organizations.

It was the first time that many member states had spoken on the issue, and it became clear that few countries had developed a national policy on the matter and that many others, less technologically advanced, had questions about it. Thematic sessions, with significant input from AI scientists, academics and activists, addressed legal, ethical and sociological aspects, meaningful human control over targeting and attack decisions, and operational and military considerations. While it was accepted that international humanitarian and human rights law applied to all new weapons, there was no agreement on whether these weapons were illegal under existing law or whether their use could be permitted under certain circumstances.

The report3 published after this meeting emphasized that it had helped to build a common understanding, but that questions remained and that many countries believed the process should continue.

Clearly a chord had been struck. In 2015 the topic of LAWS became more prominent in the media (where they are often called “killer robots”), in academic circles, and in conferences, workshops, plenary sessions and panels. That year, at one of the world’s leading artificial intelligence conferences, the International Joint Conference on Artificial Intelligence (IJCAI 15), an open letter from AI and robotics researchers was presented. It was signed by nearly 4,000 leading scientists – among them Stuart Russell, Yann LeCun, Demis Hassabis and Noel Sharkey – and by more than 22,000 supporters, including Stephen Hawking, Elon Musk and Jaan Tallinn, to name but a few. The letter warned against the development of AI weapons and argued that LAWS could become “the Kalashnikovs of tomorrow”. It stated that most AI researchers have no interest in building AI weapons and do not want others to tarnish their field by doing so.

These developments encouraged advocacy organizations. The Campaign to Stop Killer Robots, coordinated by Human Rights Watch, was created in 2013 as an international coalition of NGOs working for a preemptive ban on fully autonomous weapons. Influential organizations took note: at its 2016 annual meeting, the World Economic Forum hosted a one-hour televised debate entitled “What If Robots Go to War?”, and it has since followed up with articles by Stuart Russell and others on topics such as “How artificial intelligence could raise the risk of nuclear war”. The International Committee of the Red Cross (ICRC) has stated that the decision to kill and destroy is a human responsibility and has published papers on the subject.

The second informal meeting under the CCW umbrella was held in Geneva in April 2015, one year after the first. Participation was strong and diverse: 90 countries (out of the 125 High Contracting Parties) took part, along with delegations from other UN agencies, the ICRC, academics, scientists, military experts and industry representatives. By the time the meeting ended, 58 countries had made statements on LAWS, mainly expressing support for multilateral dialogue. Some countries voiced clear support for a preventive ban, while others said that a ban should remain on the table for consideration.

Interestingly, no country said that it was actively pursuing fully autonomous weapons or that its armed forces should have them in the future, although there was extensive discussion of the potential advantages of such weapons and the benefits that technological advances could bring.

The only countries that explicitly left the door open to the acquisition of fully autonomous weapons were Israel and the United States, while others (Canada, France, Japan and the United Kingdom) stated that they had no plans to acquire them.

The second session – now called the Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS) – was clearly productive and moved the discussion forward. The Chair (Ambassador Biontino of Germany) had consulted extensively before the meeting and prepared a “food for thought” paper6 outlining the issues to be addressed. Leading experts (including Stuart Russell, Andrea Omicini and Paul Scharre) briefed participants and called for action in view of the rapid advances in AI.

The sessions were devoted to technical issues, characteristics of LAWS, possible challenges to international humanitarian law, overarching issues, and transparency and the way forward. Documents and presentations were uploaded to the UN CCW website; they were the most detailed contributions to this forum so far.8

Looking ahead, the Chair called for continued debate on LAWS at a future meeting, and dozens of CCW members called for a Group of Governmental Experts to take the issue forward. His subsequent report9 (issued in his personal capacity) was very detailed, reflecting the differing views and disagreements while insisting that the discussion be thorough. In particular, it covered the legal review of weapons (Article 36 of Additional Protocol I10), the general acceptability of LAWS with reference to the Martens Clause, ethical issues, and the concepts of meaningful human control, autonomy in critical functions, command and control, and the interaction between humans and the system.

This third meeting was held in April 2016, one year after the second, and the number of participating countries again increased, to 94. Once more, participants received a “food for thought” paper11 from the Chair (Ambassador Biontino of Germany), who had also requested working papers for the meeting; five countries (Canada, France, the Holy See, Japan and Switzerland) as well as the ICRC responded. Another innovation was the inclusion of a new element in the mandate stating that participants “may agree by consensus on recommendations for further work for consideration by the Fifth CCW Review Conference in 2016”. Modest as it may seem, this next step was very important in UN terms.

At the meeting there was general agreement that lethal autonomous weapons do not yet exist, and the concept of “meaningful human control over weapons systems” gained prominence in the discussions. Other formulations were proposed as well, including by
