

[University of Michigan Law School]



AI Ethics: Take Control of AI Systems

The convergence of massive amounts of big data, the speed and reach of cloud computing platforms, and advances in sophisticated machine learning algorithms has spawned a range of innovations in artificial intelligence (AI). The benefits of AI systems to society are enormous, and so are the challenges and concerns. 

AI systems may be able to get things done quickly, but that doesn't mean they always get things done fairly. If the dataset used to train the machine learning model contains biased data, the system may exhibit the same bias when making decisions in practice. For example, if a dataset contains mostly images of white men, a facial recognition model trained on that data may be less accurate for women or people of different skin tones.
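One simple way to detect this kind of skew is to break a model's accuracy down by demographic group rather than reporting a single overall number. The sketch below uses invented predictions and group labels; the function name and data are hypothetical, not from any particular library:

```python
# Hypothetical illustration: evaluating a classifier's accuracy per
# demographic group to surface the kind of skew described above.

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} so disparities across groups are visible."""
    totals, correct = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

# Toy example: the model is far more accurate for group "A" than "B",
# mirroring a face-recognition model trained mostly on one demographic.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.0}
```

A single aggregate accuracy of 50% would hide the fact that the model works perfectly for one group and fails completely for the other.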

The success of any AI application is intrinsically tied to its training data. You need not only the right quality and quantity of data; you must also proactively ensure that your AI engineers don't pass their own biases on to their creations. If engineers allow their worldviews and assumptions to shape datasets -- for example, by providing data limited to certain demographics or topics -- the applications built on that data will be just as biased and inaccurate, and far less effective.


- Identify AI Bias

AI bias comes in many forms. Cognitive biases originating from human developers can affect machine learning models and training datasets. Essentially, bias is hard-coded into the algorithm. Incomplete data can itself be biased -- especially if information is missing due to cognitive biases. 

When AI trained and developed without bias is put into use, its results can still be affected by deployment bias. Aggregation bias is another risk, arising when small choices made throughout an AI project have a large collective impact on the integrity of the results. In short, any AI pipeline involves many steps at which biases can arise.
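One concrete way to surface incomplete-data bias before training is to check how well each demographic group is represented in the dataset. The following is a minimal sketch with invented labels; the 10% threshold is an arbitrary choice for illustration:

```python
from collections import Counter

def representation_report(samples, threshold=0.10):
    """Return {group: (share, under_represented)} for a list of group labels
    attached to training records. A group is flagged when its share of the
    data falls below the given threshold."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {group: (count / total, count / total < threshold)
            for group, count in counts.items()}

# Invented dataset labels, heavily skewed toward one group.
labels = ["male"] * 90 + ["female"] * 10 + ["nonbinary"] * 2
report = representation_report(labels)
for group, (share, flagged) in report.items():
    print(f"{group}: {share:.1%}" + ("  <-- under-represented" if flagged else ""))
```

Checks like this do not remove bias on their own, but they make gaps in the data visible early, when collecting more representative samples is still cheap.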


- AI Ethics: Maximize AI, Minimize Risk

The ever-evolving technology brings a steep learning curve, and miscalculations and mistakes along the way can lead to unintended harmful effects. AI ethics comprises a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide ethical behavior in the development and deployment of AI technologies. 

We live in a pivotal time: problems in AI systems that can cause harm must be identified and addressed quickly. Identifying the potential risks posed by AI systems therefore means planning countermeasures that can be implemented as soon as possible. 

Public sector organizations can develop and implement ethical, fair, and safe AI systems by creating a culture of responsible innovation that anticipates and prevents potential future hazards. Everyone involved in the design, production, and deployment of AI projects -- including data scientists, data engineers, domain experts, delivery managers, and department heads -- should prioritize AI ethics and safety.


-  Three Major Areas of AI Ethics

AI ethics is a branch of technology ethics specific to AI systems. It is sometimes divided into concern with the ethical behavior of humans as they design, manufacture, use, and treat artificially intelligent systems, and concern with the behavior of the machines themselves (machine ethics). It also includes possible singularity problems arising from superintelligent AI. 

As AI takes on a greater decision-making role in more industries, ethical concerns also increase. AI raises three main areas of social ethical concern: privacy and surveillance, bias and discrimination, and perhaps the most profound and difficult philosophical question of our time, the role of human judgment.


- Robot Ethics

Robot ethics is a growing interdisciplinary research effort, roughly at the intersection of applied ethics and robotics, aimed at understanding the ethical implications and consequences of robotics, especially autonomous robots. Researchers, theorists, and scholars from diverse fields including robotics, computer science, psychology, law, philosophy, and more are working on pressing ethical questions about the development and deployment of robotics in society. 

Many areas of robotics are affected, especially those where robots interact with humans: elder care and medical robots, robots used in a variety of search and rescue missions, military robots, and various service and entertainment robots. 

While military robots were initially the main focus of discussion (e.g., whether and when autonomous robots should be allowed to use lethal force, and whether they should be allowed to make such decisions autonomously), in recent years other types of robots, especially social robots, have also become an increasingly important topic.


- Tackling The Biases in AI Systems

Over the past few years, society has begun to ponder the extent to which human biases can find their way into AI systems—with detrimental consequences. At a time when many companies look to deploy AI systems in their operations, it is imperative to be acutely aware of these risks and work to mitigate them. What can CEOs and their top management teams do to lead on bias and fairness? Among them, we see six essential steps: 

  • First, business leaders need to keep up with the latest developments in this rapidly evolving field of research. 
  • Second, establish responsible processes that mitigate bias when your business or organization is deploying AI. Consider using a combination of technology tools and operational practices such as internal "red teaming" or third-party audits. 
  • Third, have fact-based conversations around potential human biases. This can take the form of running algorithms alongside human decision makers, comparing results, and using "interpretability techniques" to help determine what caused the model to make a decision, and thus to understand why there might be differences. 
  • Fourth, consider how humans and machines can work together to mitigate bias, including human-in-the-loop processes. 
  • Fifth, invest more, provide more data, and take a multidisciplinary approach to bias research (while respecting privacy) to continue advancing the field. 
  • Sixth, invest more in diversifying the AI field itself. A more diverse AI community will be better able to anticipate, scrutinize, and detect bias, and to engage affected communities. 
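The third step above can be sketched in code: run the model alongside human decision makers on the same cases and flag the disagreements for review with interpretability tooling. The case IDs and decisions below are invented for illustration:

```python
# Minimal sketch: surface the cases where a model and human reviewers
# disagree, so those cases can be examined with interpretability tools.

def disagreement_cases(case_ids, human, model):
    """Return the case IDs where the human and model decisions differ."""
    return [cid for cid, h, m in zip(case_ids, human, model) if h != m]

case_ids = ["c1", "c2", "c3", "c4", "c5"]
human    = [1, 0, 1, 1, 0]   # human reviewers' decisions (invented)
model    = [1, 1, 1, 0, 0]   # model's decisions on the same cases
print(disagreement_cases(case_ids, human, model))  # ['c2', 'c4']
```

The disagreement list is only a starting point; the substantive work is the fact-based conversation about why the two sets of decisions diverge.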


- Algorithmic Biases

Algorithmic bias exposes the vulnerability of supposedly "perfect" AI systems: it is the lack of fairness that arises from how a computer system performs. This lack of fairness shows up in different ways, but it can be understood as a set of biases differentiated by specific categories.

Human bias is an issue that has been well studied in psychology for many years. It stems from implicit associations -- biases we are unaware of -- and from how those biases affect the outcomes of events. Over the past few years, society has begun grappling with the extent to which these human biases can find their way into artificial intelligence systems, with damaging consequences. 

As many companies are looking to deploy AI solutions, it is imperative to be deeply aware of these threats and seek to minimize them. Algorithmic bias in AI systems can take many forms, such as gender bias, racial bias, and ageism.
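One common way to quantify such bias is a demographic parity check, which compares the rate of positive decisions across groups. Below is a minimal sketch using a toy hiring example with invented data; real fairness audits weigh several such metrics, not just this one:

```python
# Hypothetical illustration: measuring the demographic parity gap,
# i.e. the spread in positive-decision rates across groups.

def selection_rates(decisions, groups):
    """Return {group: rate of positive decisions} for binary decisions."""
    totals, positives = {}, {}
    for decision, group in zip(decisions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest selection rates;
    0.0 would mean every group is selected at the same rate."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Toy hiring screen: group "X" is selected 75% of the time, "Y" only 25%.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["X", "X", "X", "X", "Y", "Y", "Y", "Y"]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A large gap does not by itself prove discrimination, but it is a measurable signal that warrants investigation of the model and its training data.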

The role of data imbalance is crucial in introducing bias. For example, in 2016, Microsoft released an AI-based conversational chatbot on Twitter that was supposed to engage with people through tweets and direct messages. However, it began replying with highly offensive and racist messages within hours of launch. 

The chatbot was trained on anonymized public data and had built-in online learning, which left it open to a coordinated attack by a group of people who introduced racist bias into the system. Some users were able to inundate the bot with misogynistic, racist, and anti-Semitic language. The event was an eye-opener for a wider audience about the possible negative effects of unfair algorithmic bias in AI systems.



[More to come ...]
