In response to the escalating use of artificial intelligence (AI) technologies, the federal government has introduced a plan aimed at managing potential risks while fostering the growth of low-risk AI. The key elements of this initiative include a risk-based system, mandatory rules for high-risk technologies, and plans to label AI-generated content.
Under the proposed risk-based system, high-risk AI technologies, such as self-driving vehicle software and predictive tools, will face mandatory safeguards. These measures may include independent testing, ongoing audits, and mandatory labeling to enhance transparency and accountability.
Industry Minister Ed Husic emphasized the need for a balanced approach, stating, "The technology will evolve, we understand that, and while a lot of people will want to use the technology for good, there is always going to be someone motivated with ill-will, bad intent, and we're going to have to shape our laws accordingly."
To address concerns about AI-generated content, the government is considering the introduction of watermarks and labels to prevent such content from being mistaken for genuine material. Husic expressed readiness to make these measures mandatory if necessary.
The government's response reflects the desire to build trust and transparency in AI technologies. Tech Council of Australia CEO Kate Pounder commended the proposal for striking a balance between enabling innovation and ensuring safe AI development. She stressed the importance of simultaneously focusing on workforce skills, research funding, and improving digital literacy.
The proposed expert advisory committee will guide the development of mandatory rules for high-risk AI. The government remains open to amending existing laws or introducing a dedicated "AI Act," similar to the European Union's approach.
The response acknowledged the global trend of some jurisdictions banning high-risk technologies, such as real-time facial recognition in law enforcement. However, it did not explicitly state whether Australia would follow suit. Additionally, the government recognized the unique challenges posed by advanced AI models like ChatGPT, suggesting that these models may require targeted attention due to their rapid development outpacing existing legislative frameworks.
As the government moves forward with consultations and the preparation of legislation, the objective remains a regulatory framework that safeguards against potential harms while allowing continued growth and innovation in the AI sector.