White Paper / April 2020

New Paper: “Artificial Intelligence Ethics in Practice”

Jessica Cussins Newman and Rajvardhan Oak

In a recent paper, “Artificial Intelligence Ethics in Practice,” published in ;login:, the USENIX Magazine, Jessica Cussins Newman, Research Fellow at CLTC and Program Director for the AI Security Initiative, together with Rajvardhan Oak, a graduate researcher at CLTC, describe some of the key ethical challenges associated with artificial intelligence.

“We go beyond naming the problems that have garnered significant attention in recent years, and additionally reference several ongoing efforts to mitigate and manage key ethical concerns,” they wrote. “We hope that this article will result in researchers as well as industry practitioners being more mindful in their design and use of AI systems.”

In the paper, Cussins Newman and Oak provide a variety of examples of ethical challenges related to AI, organized into four key areas: design, process, use, and impact. “The design category includes decisions about what to build, how, and for whom,” they explain. “The process category includes decisions about how to support transparency and accountability through institutional design. The use category includes ways in which AI systems can be used and misused to cause harm to individuals or groups. Lastly, the impact category includes ways in which AI technologies result in broader social, political, psychological, and environmental impacts.”

Members of USENIX can read the full article in ;login:

Design

Cussins Newman and Oak argue that the design of AI systems can have “profound implications” as systems in some cases “still make mistakes that a human would never make. Data sets are always imperfect representations of reality and can generate blind spots and biases.”

They cite the example of tech giant Amazon, where it was discovered in 2018 that algorithms the company had developed to match candidates with jobs were systematically discriminating against female candidates. “The AI system… ranked male candidates over female candidates, since it had seen a greater number of them succeeding,” they explain. “These failure modes are particularly disturbing when they impact people’s livelihoods.”

They also point to how bias can be built into AI systems used for facial recognition, noting that an AI researcher “found that the facial recognition algorithms she was working with could not ‘see’ her because of her dark skin…. As we rely on algorithmic decision-making in an increasing number of high-stakes environments, including decisions about credit, criminal justice, and jobs, the design and training of the systems should be an area of active consideration.”

Process

Cussins Newman and Oak’s paper also describes a variety of processes related to AI — including “implementation of standards and legal requirements, the recognition of principles and best practices, communication with users, and the monitoring of systems’ efficacy and impacts” — that can raise (and mitigate) ethical concerns.

They cite how regulations like the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) can help protect user data, and how companies should be careful to limit potential human harms when testing AI systems. AI developers should also focus on mitigating and monitoring adversarial attacks, in which machine-learning systems are manipulated by “examples that have been crafted specifically to fool a classifier. Typically, these are constructed by adding a small perturbation to the input. This change is so small that humans cannot identify it; but an algorithm might produce a completely different result.”
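The paper itself does not include code, but the construction the authors describe can be sketched with the fast gradient sign method (FGSM), a standard way of crafting such perturbations. Everything in the sketch below (the toy classifier, the random input, and the perturbation size epsilon) is an illustrative assumption, not taken from the paper:

    # A minimal FGSM sketch: nudge the input in the direction that most
    # increases the classifier's loss. The model and data are stand-ins.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    loss_fn = nn.CrossEntropyLoss()

    def fgsm_attack(x, label, epsilon=0.05):
        """Return x plus a perturbation of size epsilon that raises the loss."""
        x = x.clone().detach().requires_grad_(True)
        loss_fn(model(x), label).backward()
        # The sign of the gradient gives the most damaging direction;
        # epsilon keeps the change too small for a human to notice.
        return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    x = torch.rand(1, 1, 28, 28)      # a random stand-in "image"
    label = torch.tensor([3])         # its assumed true class
    print(model(x).argmax(1), model(fgsm_attack(x, label)).argmax(1))

On a trained model, a perturbation of this kind can flip the prediction while leaving the input visually unchanged to a human observer.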

“All machine learning models are capable of making mistakes and being tricked in these ways,” Cussins Newman and Oak write. “And these flaws can be exploited to damaging effect in the real world.”

Use

Ethical concerns can emerge from the possible uses and misuses of AI, Cussins Newman and Oak explain. “For example, recent advances in AI systems capable of generating synthetic text, audio, and video have beneficial uses, but they can also be used to cause significant harm,” they write. “Language models can write short stories and poetry, but they can also generate misleading news articles, impersonate others online, automate the production of abusive content, and automate phishing content.”

Technologies like deep fakes and facial recognition, while potentially beneficial, have already been misused; for example, the Chinese government is using a network of facial recognition technology to track and monitor the Uighurs, a largely Muslim minority. And AI has also been used to develop autonomous weapons, which may reduce casualties, but can also be used to “conduct assassinations, destabilize nations, and even execute terror attacks on a large scale…. These systems are also susceptible to adversarial attacks, biases, and mistakes. Biases in Amazon’s systems caused discrimination against women; biases in autonomous weapons can lead to deaths of innocent people.”

Impact

As their final category for classifying ethical AI issues, Cussins Newman and Oak note that “AI technologies have economic, political, social, psychological, and environmental impacts that extend well beyond their immediate uses.” They cite long-term impacts of AI and robotics on labor markets, for example, and the potential for worsening of economic inequality regionally and between nations.

“The so-called ‘race’ for AI advancement risks other consequential impacts… including international instability and underinvestment in key safety and ethical challenges,” the authors write. “Additionally, AI systems can have long-lasting psychological impacts, as algorithms can be programmed for ‘attention hacking’ and manipulation of human emotions and relationships.”

They also note that AI has implications for security infrastructure, as it can introduce new loopholes such as “susceptibility to adversarial attacks and privacy concerns due to leakage of model parameters.” And AI has potential impacts on the environment. “Deep learning is particularly energy intensive,” they explain, “as it requires the use of significant computational power for processing vast amounts of data.”

Ongoing Efforts

A variety of initiatives have been undertaken to address these ethical challenges, Cussins Newman and Oak explain, including the Asilomar AI Principles, Google’s AI Principles, and the Organization for Economic Cooperation and Development (OECD) AI Principles, which have been endorsed by more than 40 countries (as well as the European Commission and the G20). More than two dozen nations have also released national AI strategies.

Companies, too, can play a role in addressing potential ethical harms, for example by releasing AI software through a staged release process. The authors note that some AI researchers have “proposed that machine learning models should be accompanied by documentation that details their performance characteristics,” including “whether the model performs consistently across diverse populations, and to clarify intended uses and ill-suited contexts.”
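As an illustration only, such documentation might look like the hypothetical sketch below, loosely following the “model cards” idea; every field name and value here is invented for this example rather than drawn from the paper or from any fixed standard:

    # A hypothetical sketch of model documentation of the kind the authors
    # describe. All names and numbers are invented for illustration.
    model_card = {
        "model": "resume-screening-classifier-v2",   # hypothetical system
        "intended_use": "Rank resumes for technical roles, with human review.",
        "ill_suited_contexts": ["fully automated hiring decisions"],
        "performance": {
            "overall_accuracy": 0.91,
            # Disaggregated metrics reveal whether the model performs
            # consistently across diverse populations.
            "accuracy_by_group": {"women": 0.84, "men": 0.93},
        },
        "training_data": "Historical hiring records (may encode past bias)",
    }

Publishing disaggregated numbers, not just an overall accuracy, is what makes inconsistent performance across groups visible before deployment.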

“The question is how to increase awareness and establish practices to promote the ethical development of AI that is robust well into the future,” Cussins Newman and Oak conclude. “The development of ethical AI is a necessary component of sustainable market competition and global leadership…. The need for robust ethical assessment is likely to vary depending on the degree of risk and impact of a given system. However, ethics should not be thought of as an add-on to be considered at the end of production but as a key part of the design process from the outset. Similar to the concept of privacy by design, we need to inculcate the culture of ethics by design. The research community is already at the forefront of many of these debates and is well positioned to play a key role in shaping a positive AI future.”

Read the full paper at ;login:, the USENIX Magazine (open to USENIX members)