[LINK] InfoAge: 'Australia[n business and government]'s Plan to Embrace ... AI'
Roger Clarke
Roger.Clarke at xamax.com.au
Thu Jan 18 16:46:10 AEDT 2024
[ The announcement on AI is yet another complete non-event.
[ The government is yet to decide anything, is committed to nothing, and
is clearly at least as beholden to commercial interests as that of the
USA. It does not even see a need to offer the pitifully inadequate
provisions in the European Commission's measures.
[ The public interest is interpreted as being solely that business
continue its unfettered behaviour, despite the substantial evidence of
serious inadequacies in AI, and of serious harm arising from it, and the
absence of solid evidence of benefits from it. (See companion posting).
[ Meanwhile, the Australian public appears to have lost the capacity to
be sceptical, and 'born suckers' dominate the landscape.
[ At ongoing risk of sending the same message again and again, see:
http://rogerclarke.com/EC/AII.html#Th (2019)
http://rogerclarke.com/EC/AIP.html#App1
http://rogerclarke.com/EC/AIR.html#CRF
http://rogerclarke.com/EC/AITS.html (2023)
> ... modern laws for modern technology ... *any* legislative changes
> For high-risk use, the government is set to introduce mandatory
> safeguards that may include independent testing of products before
> and after their release, continual audits, and labelling for when AI
> has been used.
>
> The government is also mulling requiring organisations to have a
> dedicated role focusing on AI safety under this mandatory code.
>
> These uses may include ... [not much] ...
> The response acknowledged that other jurisdictions, most notably the
> European Union, are moving to outright ban the highest-risk usage of
> AI, but did not flag that Australia will be following suit.
> The Australian government is yet to decide whether it will introduce
> a similar standalone piece of legislation or make amendments to
> existing laws in order to follow its AI policy agenda.
[ So far, I've only been able to find the document at a Google URL, which
seems to confirm the degree of business control over public policy:
https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/safe-and-responsible-ai-in-australia-governments-interim-response.pdf
"The government will consider possible legislative vehicles for
introducing mandatory safety guardrails for AI in high-risk settings in
close consultation with industry and the community. It is committed to a
collaborative and transparent approach to developing obligation ...".
Australia’s plan to embrace safe AI
The days of self-regulation have gone, says Industry Minister.
Denham Sadler
Info Age
Jan 17 2024 01:34 PM
https://ia.acs.org.au/content/ia/article/2024/australia-s-plan-to-embrace-safe-ai.html
Australia will introduce mandatory safeguards for high-risk AI use but
steer clear of outright bans under a new plan aiming to walk the
tightrope of mitigating risks and boosting productivity.
Industry minister Ed Husic on Wednesday unveiled the federal
government’s interim response to industry consultation on the
responsible use of artificial intelligence.
[ Info Age fails to provide a link to any source. ]
The government launched the consultation mid-last year and received 510
submissions by August.
Australia’s overarching response to the explosive growth in artificial
intelligence technology will be “risk-based” and aim to prevent the
high-level risks associated with it as much as possible while not
stifling its innovation potential and huge economic benefits.
“We’re threading multiple needles at the same time with the way the
technology is being developed,” Husic said in a press conference on
Wednesday morning.
“We want innovation to occur, but if they’re going to present a safety
risk then the government has a role to respond.
“We want to get the benefits of AI while shoring up and fencing off the
risks as much as we can, and to design modern laws for modern technology.”
According to McKinsey research, the adoption of AI and automation could
increase Australia’s GDP by up to $600 billion per year.
But this can only be done if there is trust in these technologies.
KPMG research found recently that only a third of Australians are
willing to trust AI systems, and more than 70 per cent back the
government in establishing guardrails.
“We acknowledge that the majority of AI is relatively low-risk, and we
are working to introduce these rules for companies that design, develop
and deploy AI in high-risk settings,” Husic said.
“Low trust is becoming a handbrake against the uptake of the technology
and that’s something we’ve got to confront.”
This is the balancing act the federal government is attempting to
perform in making any legislative changes around AI: to capitalise on
its wealth of potential benefits while avoiding as much of the
associated risk as possible.
“We’ve taken the concerns around AI seriously and sought to listen
widely, respond thoughtfully and cooperate internationally on a complex
issue.
“We also want to set up things so government can respond quicker to
developments in the technology as they occur.”
Plan welcomed by the ACS
The Australian Computer Society (ACS) has welcomed the AI plan, saying
it is an “important step” towards capitalising on the opportunities the
technology offers.
“Given the profound changes AI will make to the workforce in coming
years, ACS welcomes the federal government’s response and looks forward
to working with the proposed Temporary Expert Advisory Group to ensure
Australia has regulation that’s fit for purpose over the coming
decades,” said ACS interim CEO Josh Griggs.
“Consulting with experts and industry leaders is going to be critical in
ensuring that any regulation reaps the benefits of AI while mitigating
the real risks presented from misuse of the emerging technology.
“We look forward to working with the federal government, industry,
educators and all key stakeholders to ensure Australia maximises the
benefits from AI and associated technologies over the coming decade."
The ACS 2023 Digital Pulse report found that 75 per cent of workers will
see their roles changed by AI, with the impact felt across the
Australian economy.
Guardrails for high-risk use
The response will see the federal government distinguishing between the
use of AI in high-risk settings, such as health and law enforcement, and
its more general application, such as through generative AI tools like
ChatGPT.
For high-risk use, the government is set to introduce mandatory
safeguards that may include independent testing of products before and
after their release, continual audits, and labelling for when AI has
been used.
The government is also mulling requiring organisations to have a
dedicated role focusing on AI safety under this mandatory code.
These uses may include self-driving vehicle software, tools to predict
the likelihood of someone reoffending and software to sift through job
applications to identify candidates.
The current legal landscape in Australia does not adequately address the
risks associated with the use of AI in this manner, the government said
in its response.
“Existing laws do not adequately prevent AI-facilitated harms before
they occur, and more work is needed to ensure there is an adequate
response to harms after they occur,” the government response said.
“The current regulatory framework does not sufficiently address known
risks presented by AI systems, which enables actions and decisions to be
taken at a speed and scale that hasn’t previously been possible.”
The response acknowledged that other jurisdictions, most notably the
European Union, are moving to outright ban the highest risk usage of AI,
but did not flag that Australia will be following suit.
Late last year, the European Union agreed on a landmark Artificial
Intelligence Act, which will ban the use of AI for high-risk activities
such as social credit systems and biometric surveillance.
The Australian government is yet to decide whether it will introduce a
similar standalone piece of legislation or make amendments to existing
laws in order to follow its AI policy agenda.
Voluntary rules for low-risk use
For lower-risk AI use, the government will introduce a voluntary scheme
including an AI content label involving “watermarks” to identify when AI
has been used to make content.
An expert advisory committee will be stood up to guide the development
of the mandatory code, and the federal government is preparing to
consult on details of legislation.
The mandatory code will be in place by the end of the year, Husic said,
with plans for the voluntary rules to be in place before this.
The interim response flagged that “frontier” AI tools like ChatGPT may
require future targeted attention but did not outline any steps the
government may take.
“It was also highlighted that AI services are being developed and
deployed at a speed and scale that could outpace the capacity of
legislative frameworks, many of which have been designed to be
technology-neutral,” it said.
--
Roger Clarke mailto:Roger.Clarke at xamax.com.au
T: +61 2 6288 6916 http://www.xamax.com.au http://www.rogerclarke.com
Xamax Consultancy Pty Ltd 78 Sidaway St, Chapman ACT 2611 AUSTRALIA
Visiting Professor in the Faculty of Law University of N.S.W.
Visiting Professor in Computer Science Australian National University