[LINK] InnAus: 'Trump scraps responsible AI order on first day'
Roger Clarke
Roger.Clarke at xamax.com.au
Wed Jan 22 08:38:28 AEDT 2025
[ Looks like I'd better withdraw that paper I submitted for review just
a few days ago:
'Principles for the Responsible Application of Generative AI'
https://www.rogerclarke.com/EC/RGAI-C.html
[ Nonetheless, constructive critique much appreciated! ]
[ Oh, and the pleading for freedom to do harm isn't limited to the USA:
> Founding director of the Australian Institute for Machine Learning,
Professor Anton van den Hengel, meanwhile said the 2023 order was merely
“window dressing” and that countries’ regulatory actions are becoming
“increasingly irrelevant”.
> “The problem isn’t that people are writing irresponsible code, it’s
that people are following irresponsible business models. The real
challenge is the concentration of power in the hands of multinationals,”
he said.
Trump scraps responsible AI order on first day
Justin Hendry
Innovation Aus
21 January 2025
https://www.innovationaus.com/trump-scraps-responsible-ai-order-on-first-day/
US President Donald Trump has abandoned an executive order placing
safety obligations on the developers of artificial intelligence systems,
cutting red tape for his Big Tech backers on his first day in office.
An executive order revoking the 2023 order by former President Joe Biden
was one of almost 100 signed on Tuesday by President Trump. It arrived
shortly after his inauguration, at which leaders from US tech giants
Meta, Tesla, Amazon, Apple and Google had a front row seat.
Some Australian AI experts say there could be ripple effects for
responsible AI efforts globally, particularly if other laws are watered
down, while others believe the influence of individual nations is waning.
Before the signing, developers of AI systems had been required to share
the results of all red-team safety tests with the US government, in
accordance with the Defence Production Act, before releasing those
systems to the public.
The safety and security obligations extended to “any foundation model
that poses a serious risk to national security, national economic
security, or national public health and safety”.
Signed in October 2023 by former US President Joe Biden, the executive
order was intended to protect the privacy of consumers, while “advancing
equity and civil rights”, according to a White House fact sheet released
at the time.
It followed voluntary commitments from OpenAI, Google and other tech
companies in July 2023 to develop responsible AI systems – one of the
first steps towards embedding ethical principles following the arrival
of ChatGPT.
“To realise the promise of AI and avoid the risk, we need to govern this
technology,” Mr Biden said ahead of the executive order, describing it
as the “most significant action any government anywhere in the world has
ever taken on AI safety, security and trust.”
Trump’s executive order on Tuesday breaks with policies and legislation
introduced or proposed in much of the developed world, threatening to
undo efforts to embed safety, security and trust in AI development.
Human Technology Institute director and former Australian Human Rights
Commissioner Ed Santow expects the move will have some impact, but said
it is “too early to say” whether this will be “truly transformative”.
Professor Santow, who sits on the Australian government’s AI expert
group, said the level of impact would, however, be mitigated if the new
President moves to introduce other technology-neutral laws in future.
“If tech-neutral law is enforced rigorously, the impact could be
minimal, because other rules could fill the gap left by the executive
order,” he told InnovationAus.com.
“But if rescinding the executive order is coupled with a watering-down
of other laws that protect the community from harms associated with AI,
and if regulators no longer see this problem as an enforcement priority,
then the risk of harm caused by AI will increase significantly.”
Founding director of the Australian Institute for Machine Learning,
Professor Anton van den Hengel, meanwhile said the 2023 order was merely
“window dressing” and that countries’ regulatory actions are becoming
“increasingly irrelevant”.
“The problem isn’t that people are writing irresponsible code, it’s that
people are following irresponsible business models. The real challenge
is the concentration of power in the hands of multinationals,” he said.
Australia has proposed ten mandatory guardrails for artificial
intelligence in response to the arrival of ChatGPT and other generative
AI systems, but is continuing to consider its legislative options.
Consistent with approaches in Canada and the EU, the proposed guardrails
would require organisations developing or deploying high-risk AI systems
to take steps to reduce the likelihood of harm from their products.
The guardrails, as well as a voluntary AI safety standard, were proposed
in September and followed more than 12 months of consultation with
government, industry and academia.
Mr Trump also introduced other executive orders on Tuesday to end
diversity, equity and inclusion (DEI) programs in government, describing
them as an example of “immense public waste and shameful discrimination”.
It comes just days after Meta chief executive Mark Zuckerberg announced
plans to immediately terminate DEI programs at the tech giant, with UK
staff reportedly among those concerned by the move.
--
Roger Clarke mailto:Roger.Clarke at xamax.com.au
T: +61 2 6288 6916 http://www.xamax.com.au http://www.rogerclarke.com
Xamax Consultancy Pty Ltd 78 Sidaway St, Chapman ACT 2611 AUSTRALIA
Visiting Professorial Fellow UNSW Law & Justice
Visiting Professor in Computer Science Australian National University