Artificial intelligence (AI) undoubtedly offers tremendous benefits across many industries, including medicine, agriculture, and transportation. However, there is growing evidence of AI's potential for harm, as biased and inaccurate algorithms routinely make decisions about job applicants, healthcare, social services, parole and criminal justice, policing, and access to credit and insurance.
The AI researcher Noel Sharkey argues that “algorithms are so ‘infected with biases’ that their decision-making processes could not be fair or trusted”, and that we should be “testing AI decision-making machines in the same way as new pharmaceutical drugs are vigorously checked before they are allowed on to the market”.
We have produced a report on the Responsible Use of AI (RAI) to provide guidelines for organisations engaged in the development and deployment of AI-based applications, with particular reference to applications that involve algorithmic decision-making (ADM). The report:
- provides an introduction to AI and machine learning
- describes the AI development and deployment process for organisations
- describes how networks of algorithms can interact to produce AI harms
- defines responsibilities for AI subjects as well as for AI-deploying organisations
- proposes a set of responsible AI principles
- maps risks to AI principles
- presents a wide-ranging set of recommendations for organisations using AI
We use the report in our teaching at UNSW and encourage others to consider it in their classes. Any comments or feedback on the report are welcome.