Automated Decision Making and AI Regulation

Submission in response to Positioning Australia as a Leader in Digital Economy Regulation: Automated Decision Making and AI Regulation

DCA has made a submission to the Digital Technology Taskforce in response to Positioning Australia as a Leader in Digital Economy Regulation: Automated Decision Making and AI Regulation.

Artificial intelligence is an area of active interest for DCA. Together with Monash University and Hudson RPO, we are currently undertaking a series of studies exploring the impact of unconscious bias on recruitment and selection decisions that use artificial intelligence. This research has several important implications for this review.

There is enormous potential for AI and automated decision making (ADM) tools to improve the way we live and work. But there is also the potential for negative or biased outcomes unless diversity and inclusion are considered throughout the design, development and training of algorithms.

Regulating AI and ADM in Australia should seek to minimise these potential negative impacts and ensure that AI ‘works for people’.1 Many Australians also believe that AI will lead to job losses, and that there should be protections in place for people whose jobs are lost to AI.2

Australia should therefore adopt an approach to regulation that ensures AI and ADM technology ‘works for people’.3

In its submission, DCA made the following recommendations:

  1. There is a need for guidance and education for people using AI tools, so they know what the tool does, how it works, and how it can be used in a way that eliminates bias.
  2. Regulations should ensure that people are alerted when they are interacting with an AI, or when they are impacted by decisions made by AI or ADM.
  3. There should be regulation of AI vendors that requires an adequate level of transparency and accountability.
  4. AI and ADM vendors should be required to demonstrate how their products conform to Australia’s AI Ethics Principles.
  5. Regulation should consider measures to address the impacts of AI and ADM on vulnerable groups.
  6. Regulation should ensure that, where appropriate, there is an adequate level of human oversight of AI and ADM.
  7. Education, skills and training programs or strategies that aim to improve AI skills should incorporate measures to include a diversity of people.
  8. Australia’s approach to AI regulation should ensure that a diversity of people work in AI development.

Read & Learn More

Monash University and Diversity Council Australia, AI in recruitment: friend or foe?

Monash University and Diversity Council Australia, Inclusive Artificial Intelligence at Work in Recruitment: From Cautious to Converted