[LINK] Wi-Fi 7
Roger Clarke
Roger.Clarke at xamax.com.au
Fri Jan 5 15:30:29 AEDT 2024
> On 05/01/2024 13:00, Roger Clarke wrote:
>
>> If, and only if, we re-conceive and re-orient can we get the
>> threats back under control, and reap benefits.
>>
>> What's needed is not 'AI', but 'Complementary Artefact Intelligence'.
>>
>> And that requires the use of decision-support thinking, envisioning
>> Artefact Intelligence as being *designed to* integrate with Human
>> Intelligence, to produce Augmented Intelligence.
>>
>> And, while we're at it, we need to build an explicit linkage with
>> robotics - or better still with 'co-botics' - and talk about
>> complementary artefact capability combining with human capability to
>> deliver augmented capability: http://www.rogerclarke.com/EC/AITS.html#F2
On 5/1/24 3:10 pm, David wrote:
>
> Without being in the least disparaging of that argument as a theoretical
> framework, I strongly suspect it would have no traction whatever with
> the usual forces of feral capitalism, the politics of self-interest and
> the conservative Right, even if they were prepared to think
> constructively about AI in the first place.
>
> The major part of the solution is probably legal. Could the
> Commonwealth legislate under their Human Rights powers to make citizens
> ultimately responsible for all decisions, advice, and control *actively*
> made to another citizen by non-human agents in their name, or under
> their control, or owned by them? Suitable penalties, including
> custodial sentences, would apply as though the responsible citizen had
> made the decision personally.
>
> I think this isn't too far from the situation now. One way or another
> we have to build the solution on flesh-and-blood human society living on
> this planet with other sentient beings.
I agree that there are strong incentives for the spruikers to reject
socially-responsible approaches.
But there are a few channels whereby what I'm proposing can take hold.
One is for the public to tell governments to stop gifting corporate
welfare to AI spruikers and spend the money on worthwhile things. Then,
when the next bubble-burst happens, the spruikers will grab the money
they'd squirrelled away, and do a runner, as they always do; and the
next AI winter will follow.
Another is for the researchers who are at the front end of the R&D
funnel to be attracted by these new ideas, and abandon old-AI.
I'd anticipated some resistance to what I'm arguing among ANU and UNSW
AI Institute people. Instead I got signs of recognition and enthusiasm.
The quality researchers are ahead of me. But they don't write treatises
like this. They do experimental software, and play with data-sets.
If they see the right way to do things as being to design decision-support
tools, then features like indicators and contra-indicators for the
application of particular techniques will be provided; and
user-interaction facilities will be designed-in; and explanation (to
the limited extent that it is feasible) will be inherent instead of (at
best) a tacked-on afterthought.
--
Roger Clarke mailto:Roger.Clarke at xamax.com.au
T: +61 2 6288 6916 http://www.xamax.com.au http://www.rogerclarke.com
Xamax Consultancy Pty Ltd 78 Sidaway St, Chapman ACT 2611 AUSTRALIA
Visiting Professor in the Faculty of Law University of N.S.W.
Visiting Professor in Computer Science Australian National University