Autonomous Drones and Warfare

Our team decided to discuss the use of autonomous drones in warfare as the topic of our ethics case study. Over the past few years, nations have been investing in "Remotely Piloted Aircraft," drones whose flight is controlled by a human operator at a distant base station. More advanced drones may be designed with artificial intelligence that allows them to operate fully autonomously, without a human operator at all. There has been debate over both the legality of using such technology in warfare, since the technology seems to be outpacing the laws currently in place, and the morality of using drones capable of taking human life without being directly controlled by another human. Those who support the use of autonomous drones in warfare claim that such drones will reduce the risk of injury or death for human soldiers and give friendly forces a significant technological edge over their enemies. Those against their use point to the risk of unnecessary collateral damage in drone warfare, the ethical issues with allowing drones to "decide" to kill humans, and questions of whether autonomous drones would violate the Law of Armed Conflict.

Here is a link to an article we refer to in our discussion: Autonomous military drones: no longer science fiction

Stakeholders

It is important to identify the stakeholders when discussing an ethical issue because the impact that different actions or decisions have on the various stakeholders will ultimately determine the best or most ethical course of action in that situation. In the discussion of autonomous drones in warfare, several stakeholders need to be considered: the military as a whole, government and military leaders, individual soldiers, civilians in active war zones, and the defense contractors who create and develop the technology.

Constraints

When considering fully autonomous drones with lethal capabilities, there are many political concerns. One of the biggest constraints is the laws of war that drones would have to abide by. Under the law of armed conflict, attacks must be directed at military objectives and combatants and must not cause excessive collateral damage. The collateral damage requirement poses a problem that must be addressed: these drones would need an extremely accurate target verification system. If a high level of accuracy cannot be achieved, then in certain locations, for example dense cities with large civilian populations, the drone should shut itself off and wait for human verification (a sketch of such a fallback rule appears below). Malfunctions must also be considered, as with any technology. If your Roomba falls down the stairs due to a system malfunction, the consequences are low; if a lethal drone malfunctions, the cost could be human lives. Extensive fail-safes would be needed so that a malfunction could not result in civilian casualties.

The law also requires a reasonable commander acting in good faith, which raises the question of how much human control these laws would actually demand. If a drone acts fully autonomously, no human is technically controlling the weaponry, yet a human programmed it and its intentions were set out by that program. How long could the drone be allowed to run on its own without violating these rules? There are many blurry lines the laws of war will need to address before autonomous drone technology is fully developed and in use.

Another political constraint concerns how public the knowledge of drone technology may be. Given the number of ethical issues autonomous drones raise, countries may be unwilling to announce that they have the technology. And if deploying such drones could be perceived as an illegal act of war, countries might not want to devote money and resources to developing them at all.
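To make the human-verification fallback concrete, here is a minimal sketch of the kind of decision rule described above. Everything in it is hypothetical: the names (`TargetAssessment`, `CONFIDENCE_THRESHOLD`) and the threshold values are invented for illustration, and real values would be set by law and policy, not by programmers alone.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real values would be set by policy and law,
# not by engineers alone.
CONFIDENCE_THRESHOLD = 0.99   # minimum confidence that a target is a lawful military objective
CIVILIAN_DENSITY_LIMIT = 0.1  # above this estimated density, defer to a human

@dataclass
class TargetAssessment:
    military_objective_confidence: float  # drone's confidence the target is lawful
    estimated_civilian_density: float     # estimated density of civilians near the target

def decide(assessment: TargetAssessment) -> str:
    """Return the drone's next step under the fallback rule sketched above."""
    # Rule 1: never act on a low-confidence identification.
    if assessment.military_objective_confidence < CONFIDENCE_THRESHOLD:
        return "hold: await human verification"
    # Rule 2: in densely populated areas, shut off autonomous engagement entirely.
    if assessment.estimated_civilian_density > CIVILIAN_DENSITY_LIMIT:
        return "hold: human authorization required in populated area"
    return "proceed under standing rules of engagement"
```

Even this toy rule shows where the blurry lines appear: someone has to choose the thresholds, and that choice is itself an ethical and legal decision.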

Not only do political constraints on lethal autonomous drones exist, but so do social constraints. The value of a life is brought into question when discussing the ethics of autonomous drones: by placing lives in the hands of machines, some people believe that less value is placed on each life. There is also an issue of trust. Plenty of movies, like I, Robot, have depicted what could go wrong when an autonomous system is able to learn and think on its own. In that movie, robots that previously could not harm any human develop the ability to kill, threatening the human population. With ideas like these planted in society's mind, many people would have a problem knowing that autonomous drones are flying around with the ability and instructions to kill human beings. Although technology and machines have been fully integrated into most people's daily lives, we believe that very few would trust technology with human lives.

Lastly, we would like to discuss potential economic constraints. Developing the technology and training officials to maintain and use the drones would not be any more cost-effective than training the pilots and other operators who currently oversee missions. Even so, once people are trained, one drone operator could be in charge of many drones, making the system much more efficient and cost-effective. And although human lives could be saved by not sending soldiers into combat, countries may not want to devote government money to projects with so much controversy surrounding them. Adopting drones would also decrease the number of military jobs needed, which could increase unemployment and negatively affect the economy.

Overall, there are many political, social and economic constraints that need to be considered when developing this new technology. We must always try to abide by the law and remember that human lives are at stake.

Ethics Tests

The formal methods for judging ethical issues include the utilitarian test, the justice test, and the virtue test. The utilitarian test is concerned with producing the best outcomes for everyone affected. The consequences or outcomes of a scenario determine what is ethically right or wrong. Applying this test follows the principle of "the ends justify the means." The action or decision that produces the best possible outcomes is considered the most ethical choice. However, the utilitarian test requires that we predict the outcomes for all stakeholders when choosing an alternative, and these predictions can sometimes be wrong, producing unforeseen negative consequences for one or multiple stakeholders.
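In rough computational terms, the utilitarian test scores each alternative by its predicted consequences for every stakeholder and picks the highest total. The sketch below only illustrates that structure; the stakeholder labels and outcome scores are invented for the example, not real estimates.

```python
# Each alternative maps stakeholders to a predicted outcome score
# (positive = benefit, negative = harm). The numbers are invented
# purely to illustrate the structure of the test.
alternatives = {
    "full autonomy": {"soldiers": 3, "civilians": -4, "military leaders": 1},
    "human-approved strikes": {"soldiers": 2, "civilians": 1, "military leaders": 2},
    "no drones": {"soldiers": -2, "civilians": 2, "military leaders": 0},
}

def utilitarian_choice(alts: dict[str, dict[str, int]]) -> str:
    """Pick the alternative with the best total predicted outcome."""
    return max(alts, key=lambda name: sum(alts[name].values()))

print(utilitarian_choice(alternatives))  # -> "human-approved strikes"
```

The test's known weakness shows up directly in this framing: every score is a prediction, and a single wrong prediction can silently change which alternative "wins."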

The justice test is concerned with whether benefits and burdens are shared fairly among the stakeholders. When applying this test, if an alternative action or decision fairly distributes the benefits and burdens of the outcome, then that alternative is an ethical one. However, there will often be debate and disagreement about what it means for a distribution to be fair, making it difficult to decide which alternative is most fair.

The virtue test asks whether an action represents the kind of people we are or, more importantly, want to be. When applying this test, we must ask ourselves whether a particular action or decision complies with the way we and/or society believe good or ethical people ought to act. This test can be especially useful when existing laws do not completely overlap with existing ethical beliefs or standards. However, a weakness of this test is that we as individuals don't always act consistently across different situations, meaning the test may not always be applied consistently or fairly.

Since there is much uncertainty about how the technology will develop and how quickly, it is difficult to apply the utilitarian test. Predicting the best outcomes is very difficult when you don't know exactly what the technology will be or how it will behave. For example, we don't know how an autonomous drone would assess the benefit-to-cost ratio of its actions, which could lead to degrees of collateral damage considered unacceptable by current standards. The virtue test may also not be the best test to apply here because there may be substantial disagreement about whether an ethical society would allow autonomous drones to take human lives. Some people will feel that we ought to use drones in order to protect soldiers; others will believe that using autonomous drones devalues human life. Since society itself is divided on this issue, it doesn't make sense to judge it based on what society believes is ethical.

Applying the justice test seems like the best option because determining what is fair or unfair to each stakeholder should not be too difficult in this case. If autonomous drones were allowed in warfare, it would be fair for defense contractors to benefit (profit) from the work they do. It would also be fair for soldiers to be taken out of harm's way by using drones instead. It would be fair for friendly forces to gain a technological advantage over their enemies by using drones. However, any unnecessary loss of civilian life resulting from the use of autonomous drones would be an unfair burden on civilians in active war zones. At the same time, a decrease in collateral damage, because drones are not susceptible to human emotions, would be a fair benefit to civilians. Increased responsibility and accountability for deciding to use autonomous drones would be a fair burden on government and military leaders because they would have willingly chosen to use drones instead of another alternative. For these reasons, applying the justice test is the best choice for determining whether using autonomous drones in warfare is ethical.

Possible Solution

A possible solution that could benefit all the stakeholders at least a little is to give military drones a small degree of autonomy, enough to gather data and recommend an action, but to require that a qualified military official approve any military action before the drone carries it out (a sketch of this approval loop appears below). This benefits the military as a whole because the technological capabilities of such drones would enhance the military's strength. It benefits soldiers and pilots because drones could take over some of the dangerous tasks and missions they would otherwise have to perform. Military leaders would prefer this solution to full autonomy because they would retain some control over the drone actions they are responsible for. Civilians would be further protected against unnecessary collateral damage because an actual human could abort any action that risked excessive harm to civilians in a war zone. The solution would also benefit defense contractors because they would profit from developing the drones. Given these anticipated benefits, this solution seems like a viable option.
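As a rough sketch of the human-in-the-loop arrangement we are proposing (the `RecommendedAction` structure and all field names are hypothetical, invented to illustrate the control flow):

```python
from dataclasses import dataclass

@dataclass
class RecommendedAction:
    description: str          # what the drone proposes to do
    target_confidence: float  # confidence the target is a lawful military objective
    collateral_estimate: str  # drone's estimate of likely collateral damage

def drone_mission_step(recommendation: RecommendedAction) -> None:
    """Drone gathers data and recommends; a human approves or aborts."""
    # The drone autonomously surveils and produces a recommendation,
    # but it never acts on that recommendation by itself.
    print(f"Drone recommends: {recommendation.description}")
    print(f"  confidence: {recommendation.target_confidence:.0%}, "
          f"collateral estimate: {recommendation.collateral_estimate}")

    # A qualified official must explicitly approve before anything happens.
    answer = input("Approve action? (yes/no): ").strip().lower()
    if answer == "yes":
        print("Action authorized by human operator; proceeding.")
    else:
        print("Action aborted; drone returns to surveillance.")
```

The important design choice is that the drone's autonomy ends at the recommendation: anything other than an explicit "yes" defaults to abort, so an unsure operator or a failed communication never results in a strike.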