According to industry and science minister Ed Husic, Wednesday will see the release of the government’s response to a consultation process on safe and responsible AI in Australia.
Husic says, “Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled. We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.”
The response also notes public concern about the technology. Husic emphasised that while the government supports the continued growth of “low-risk” applications of AI, certain uses such as self-driving cars or programs evaluating job applications require additional and more stringent regulations.
The Australian Government will also “commence work with industry” on the merits of another suggestion – a voluntary code on including watermarks or labelling of AI-generated content.
The use of generative AI in educational settings is also being evaluated, as is its use across government through the AI in Government taskforce.
The discussion paper “Safe and Responsible AI in Australia”, released by Husic in 2023, raises various consultation questions about the direction and extent of Australia’s strategy for overseeing the swiftly advancing technology.
The document also places specific emphasis on considering the adoption of a ‘risk-based framework,’ a preference seen in other advanced economies such as the European Union (EU).
The Safe and Responsible AI in Australia discussion paper represents Australia’s first step towards defining its own approach to regulating AI.
Regardless of whether Australia adopts a risk-based framework, or chooses another approach, the technical and regulatory burden on entities implementing AI will likely be significant.
The paper also poses more open-ended consultation questions about the general direction of Australia’s AI regulation, including whether sector-specific regulation should be considered, how regulation should apply to models like ChatGPT and whether some AI implementations should be banned.
Interim response to the safe and responsible AI consultation held in 2023
The interim response outlines what the Australian public, academia and businesses told us about safe and responsible AI. It also details how the government is taking action now and how it will act in the longer term.
The consultation received over 500 submissions. Over 20% of submissions were from individuals, showing that citizens care about safe and responsible AI.
The consultation made it clear that AI systems and applications are helping to improve wellbeing and quality of life, as well as growing our economy. However, current regulatory frameworks do not fully address the risks of AI.
Submissions called for further guardrails on legitimate but high-risk uses of AI, as well as unforeseen risks from powerful ‘frontier’ models.
The government wants the design, development and deployment of AI in legitimate high-risk settings to be safe and reliable. However, it aims to ensure that AI can continue being used in low-risk settings largely unimpeded.
The government will do this by:
- using testing, transparency and accountability measures to prevent harms from occurring in high-risk settings
- clarifying and strengthening laws to safeguard citizens
- working internationally to support the safe development and deployment of AI
- maximising the benefits of AI.
The potential move to require Australian tech companies to label AI-generated content marks a significant step towards fostering transparency and accountability in the rapidly evolving landscape of artificial intelligence.
The proposed labelling initiative not only addresses concerns related to misinformation and deepfakes but also establishes a foundation for responsible AI use.