AI Ethics

[University of Michigan Law School]



- AI Systems: Great Promise But Potential For Peril

The convergence of vast amounts of big data, the speed and reach of cloud computing platforms, and advances in sophisticated machine learning algorithms has given birth to an array of innovations in Artificial Intelligence (AI). Indeed, the benefits that AI systems bring to society are great, and so are the challenges and worries.

The learning curve of these evolving technologies implies miscalculations and mistakes, which can result in unanticipated harmful impacts. AI ethics comprises a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and deployment of AI technologies.

We live in a time when it is paramount that the possibility of harm from AI systems be recognized and addressed quickly. Identifying the potential risks posed by AI systems therefore means that a plan of measures to counteract them must be adopted as soon as possible. Public sector organizations can anticipate and prevent future harms by creating a culture of responsible innovation to develop and implement ethical, fair, and safe AI systems. That said, everyone involved in the design, production, and deployment of AI projects, including data scientists, data engineers, domain experts, delivery managers, and departmental leads, should treat AI ethics and safety as a priority.


- Three Major Areas of AI Ethics

The ethics of AI is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.

Ethical concerns mount as AI takes a bigger decision-making role in more industries. AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and, perhaps the deepest and most difficult philosophical question of the era, the role of human judgment.
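To make the bias concern concrete, one common way to quantify disparity in a model's decisions is the demographic parity difference: the gap in favorable-outcome rates between two groups. The sketch below is purely illustrative; the function name, the two-group setup, and the loan-approval data are all hypothetical, not drawn from any particular system.

```python
def demographic_parity_difference(predictions, groups):
    """Gap in positive-outcome rates between groups "A" and "B".

    predictions: list of 0/1 model decisions (1 = favorable outcome)
    groups: list of group labels ("A" or "B"), same length
    """
    rate = {}
    for g in ("A", "B"):
        # Collect the decisions made for members of group g.
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate["A"] - rate["B"])

# Hypothetical loan-approval decisions for two demographic groups:
# group A is approved 75% of the time, group B only 25%.
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # prints 0.5
```

A value of 0 would indicate equal approval rates; the larger the value, the larger the disparity a reviewer would need to investigate and explain.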


- Robot Ethics

Robot ethics is a growing interdisciplinary research effort roughly situated in the intersection of applied ethics and robotics with the aim of understanding the ethical implications and consequences of robotic technology, in particular, autonomous robots. Researchers, theorists, and scholars from areas as diverse as robotics, computer science, psychology, law, philosophy, and others are approaching the pressing ethical questions about developing and deploying robotic technology in societies. Many areas of robotics are impacted, especially those where robots interact with humans, ranging from elder care and medical robotics, to robots for various search and rescue missions including military robots, to all kinds of service and entertainment robots. While military robots were initially a main focus of the discussion (e.g., whether and when autonomous robots should be allowed to use lethal force, whether they should be allowed to make those decisions autonomously, etc.), in recent years the impact of other types of robots, in particular, social robots has become an increasingly important topic as well.

- Biases in AI Systems



- Machine Ethics




[More to come ...]
