AI Policy

HumanListening’s use of AI aligns with the 10 guardrails of the Australian Government's Voluntary AI Safety Standard, which guides the safe and responsible use of artificial intelligence in Australia:

  • Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
  • Establish and implement a risk management process to identify and mitigate risks.
  • Protect AI systems, and implement data governance measures to manage data quality and provenance.
  • Test AI models and systems to evaluate model performance and monitor the system once deployed.
  • Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle.
  • Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
  • Establish processes for people impacted by AI systems to challenge use or outcomes.
  • Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
  • Keep and maintain records to allow third parties to assess compliance with guardrails.
  • Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

For more information, please visit the Department of Industry, Science and Resources website: https://www.industry.gov.au/publications/voluntary-ai-safety-standard/10-guardrails.