
Ethics of AI and Robotics

[Image: University of California at Berkeley]

- Overview

Machines mostly learn through a process of repeated trial and error. Give a machine lots of data, ask it to answer a question, and then tell it whether the answer was correct. After repeating this process millions of times (and getting things wrong much of the time), the machine gradually gets better at giving correct answers, finding patterns in the data, and working out how to solve increasingly complex problems.
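
As a deliberately tiny illustration of that loop, the Python sketch below trains a simple rule by trial and error: the program guesses a label for each example, is told whether the guess was correct, and adjusts its numbers after every mistake. The task, the data, and the learning rate are all invented for illustration; real systems use far larger models and datasets, but the feedback loop is the same.

    # Toy trial-and-error learner (a simple perceptron). All data here is
    # invented: the "correct" label is 1 when x1 + x2 > 1, and 0 otherwise.
    import random

    random.seed(0)

    points = [(random.random(), random.random()) for _ in range(1000)]
    examples = [((x1, x2), 1 if x1 + x2 > 1 else 0) for (x1, x2) in points]

    w1, w2, b = 0.0, 0.0, 0.0      # the machine's current "rules"
    learning_rate = 0.1

    for epoch in range(20):        # repeat the whole process many times
        mistakes = 0
        for (x1, x2), correct in examples:
            guess = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            if guess != correct:   # tell the machine whether it was right
                mistakes += 1
                delta = correct - guess
                w1 += learning_rate * delta * x1
                w2 += learning_rate * delta * x2
                b += learning_rate * delta
        print(f"epoch {epoch:2d}: {mistakes} mistakes")

Early passes over the data are wrong a lot of the time; the mistake count falls as the program gradually works out the pattern in the data.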

Machines can only learn from the data they are presented with - so what happens if there are problems with that data? We have already seen artificial intelligence systems that prefer male over female candidates applying for technical jobs, or make a range of errors about people with dark skin tones, because of the biased data they were trained on.
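
The sketch below is a toy illustration of that failure mode, not a real hiring system: if the historical decisions used as training labels were themselves biased, then a model that faithfully reproduces those labels reproduces the bias too. Every number, group name, and threshold here is invented.

    # Invented historical hiring records: candidates in groups "A" and "B"
    # are equally likely to be qualified, but past decisions favoured "A".
    import random
    from collections import defaultdict

    random.seed(1)

    def past_decision(group, qualified):
        if not qualified:
            return 0
        return 1 if group == "A" else (1 if random.random() < 0.4 else 0)

    records = []
    for _ in range(10000):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5
        records.append((group, qualified, past_decision(group, qualified)))

    # "Train" the simplest possible model: estimate P(hired | group, qualified)
    # from the history and hire whenever that estimate exceeds 0.5.
    counts = defaultdict(lambda: [0, 0])   # (group, qualified) -> [hired, seen]
    for group, qualified, hired in records:
        counts[(group, qualified)][0] += hired
        counts[(group, qualified)][1] += 1

    def model(group, qualified):
        hired, seen = counts[(group, qualified)]
        return 1 if seen and hired / seen > 0.5 else 0

    # The learned rule hires qualified "A" candidates but rejects equally
    # qualified "B" candidates, simply because the historical data said so.
    print("qualified A:", model("A", True))
    print("qualified B:", model("B", True))

The model is accurate with respect to its training labels; the problem is that the labels encoded the bias in the first place.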

We should think very carefully about what it is to be fair and what it means to make a good decision. What does it mean mathematically for a computer program to be fair - not to be racist, ageist, or sexist? These are challenging research questions that we're now facing as we hand these decisions to machines. Those decisions can have serious impacts on someone's life: who gets a loan, who gets welfare, how much we pay for insurance, who goes to jail. We need to be careful about handing those decisions over to machines, because machines may capture the biases of society. That means there's still plenty of work to do: we need to get better at knowing how to teach machines before giving them too much responsibility. Once we do, the benefits to society will be immense, and we can already see real-world examples of how AI is improving our lives.
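
One way to make that question concrete, sketched below with invented numbers, is demographic parity: under this definition, a decision rule is fair if it selects people from each group at roughly the same rate. This is only one of several competing mathematical definitions of fairness (others include equalised odds and calibration), and in general they cannot all be satisfied at once.

    # Demographic parity on invented loan decisions (1 = approved, 0 = denied).
    def selection_rate(decisions):
        return sum(decisions) / len(decisions)

    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # invented outcomes for group A
    group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # invented outcomes for group B

    rate_a = selection_rate(group_a)      # 0.75
    rate_b = selection_rate(group_b)      # 0.375

    print(f"selection rate A: {rate_a:.2f}")
    print(f"selection rate B: {rate_b:.2f}")
    print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")

    # A common heuristic (the "four-fifths rule") flags the decision rule
    # if one group's selection rate is below 80% of the other's.
    print("passes four-fifths rule:", min(rate_a, rate_b) / max(rate_a, rate_b) >= 0.8)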


[More to come ...]

