[LINK] What does success look like for the second global AI summit?

Stephen Loosley stephenloosley at outlook.com
Tue May 28 22:20:11 AEST 2024


What we learned from the global AI summit in South Korea

One day and six (very long) agreements later, can we call the meeting to
hammer out the future of AI regulation a success?


  [Photo] The Ministers' Session of the AI Seoul Summit, at the Korea
Institute of Science & Technology. Anthony Wallace/AFP/Getty Images


By Alex Hern @alexhern, The Guardian, 28th May 2024 (email)



What does success look like for the second global AI summit?

As the great and good of the industry (and me) gathered last week at the
Korea Institute of Science and Technology, a sprawling hilltop campus in
eastern Seoul, that was the question I kept asking myself.

If we’re ranking the event by the quantity of announcements generated,
then it’s a roaring success.

In less than 24 hours – starting with a virtual “leaders’ summit” at 8pm
and ending with a joint press conference with the South Korean and
British science and technology ministers – I counted no fewer than six
agreements, pacts, pledges and statements, all demonstrating the success
of the event in getting people around the table to hammer out a deal.

There were the Frontier AI Safety Commitments:

The first 16 companies have signed up to voluntary artificial
intelligence safety standards introduced at the Bletchley Park summit,
Rishi Sunak has said on the eve of the follow-up event in Seoul.

“These commitments ensure the world’s leading AI companies will provide
transparency and accountability on their plans to develop safe AI,”
Sunak said.

“It sets a precedent for global standards on AI safety that will unlock
the benefits of this transformative technology.”

The [deep breath] “Seoul Statement of Intent toward International
Cooperation on AI Safety Science”:

Those institutes will begin sharing information about models, their
limitations, capabilities and risks, as well as monitoring specific “AI
harms and safety incidents” where they occur and sharing resources to
advance global understanding of the science of AI safety.

At the first “full house” meeting of those countries on Wednesday,
[Michelle Donelan, the UK technology secretary] warned the creation of
the network was only a first step. “We must not rest on our laurels. As
the pace of AI development accelerates, we must match that speed with
our own efforts if we are to grip the risks and seize the limitless
opportunities for our public.”

The Seoul Ministerial Statement:

Twenty-seven nations, including the United Kingdom, Republic of Korea,
France, United States, United Arab Emirates, as well as the European
Union, have signed up to developing proposals for assessing AI risks
over the coming months, in a set of agreements that bring the AI Seoul
summit to an end.

The Seoul Ministerial Statement sees countries agreeing for the first
time to develop shared risk thresholds for frontier AI development and
deployment, including agreeing when model capabilities could pose
“severe risks” without appropriate mitigations.

"Severe Risks" This could include helping malicious actors to acquire or
use chemical or biological weapons, and AI’s ability to evade human
oversight, for example by manipulation and deception or autonomous
replication and adaptation.

There was also the Seoul Declaration for safe, innovative and inclusive
AI, which outlined the common ground on which 11 participating nations
and the EU agreed to proceed, and the Seoul Statement of Intent toward
International Cooperation on AI Safety Science, which laid out a sense
of what the goals actually were.

And the Seoul AI Business Pledge, which saw 14 companies – only
partially overlapping with the 16 companies that had earlier signed up
to the Frontier AI Safety Commitments – “committing to efforts for
responsible AI development, advancement, and benefit sharing”.

Call to action

It’s understandable if your eyes glazed over. I’m confused, and I was there.

The issue isn’t helped by the dual hosts of the event. For the UK, the
Frontier AI Safety Commitments were the big ticket, announced with a
comment from Rishi Sunak and paired with the offer of an interview with
the technology secretary for press in Seoul.

In the English-language press, the Seoul AI Business Pledge was all but
ignored, despite it being a significant plank of the South Korean
delegation’s achievements.

(My guess, for what it’s worth, is that the focus on “Frontier AI” in
the first set of commitments slightly short-changed the South Korean
technology sector, including as it did just Samsung and Naver; the Seoul
pledge, by contrast, covered six domestic firms and eight international ones.)

It might be simpler to explain it all by breaking it down by group.

There are the two competing pledges from businesses, each detailing a
slightly different voluntary code they intend to follow; there are the
three overlapping statements from nations, laying out what they actually
wanted to get from the AI summit and how they’re going to get there over
the next six months; and there’s the one concrete plan of action from
the national AI safety institutes, detailing how and when they’re going
to work together to understand more about the cutting-edge technology
they’re trying to examine.

There’s an obvious objection here, which is that none of these
agreements have teeth, or even really sufficient detail to identify
whether or not someone is trying to follow them.

“Companies determining what is safe and what is dangerous, and
voluntarily choosing what to do about that, that’s problematic,”
Francine Bennett, the interim director of the Ada Lovelace Institute,
told me.

Similarly, it feels weird to hold the agreements up as the success of
the summit when they were largely set in stone before delegates even
arrived in South Korea. At times, as the agreements and releases
continued to hit my inbox with scant relationship to the events
happening in person, it felt like a summit that could have been an email.

The truth is slightly different: the success of the summit is that it
happened.

That sounds like the faintest of praise, but it’s true of any event like
this. Yes, the key agreements of the summit were all pieced together
before it started – but in providing a hard deadline and a stage from
which to announce success, the event gave everyone a motivation to sign up.

And while the agreements are certainly toothless, they’re also a
starting point for the real work to be done once the summit ends.

Companies and governments signing up to a shared description of reality
is the first step to being able to have the difficult technical
conversations required to fix the problems.

“We all want the same thing, here” is a powerful statement when it’s true.

And, when you draw the boundaries carefully enough, it still is.

