[LINK] Does current AI represent a dead end?
Roger Clarke
Roger.Clarke at xamax.com.au
Thu Dec 5 16:13:42 AEDT 2024
Music to *my* ears, at least.
http://rogerclarke.com/EC/AII.html#CML (2019)
http://rogerclarke.com/EC/AIEG.html#RF (2020)
But also:
http://www.rogerclarke.com/EC/RGAI.html#GAIC (2024)
_________________
On 5/12/2024 15:33, Kim Holburn wrote:
> https://www.bcs.org/articles-opinion-and-research/does-current-ai-represent-a-dead-end
>
> Many of these neural network systems are stochastic, meaning that
> providing the same input will not always lead to the same output. The
> behaviour of such AI systems is ‘emergent’ — which means despite the
> fact that the behaviour of each neuron is given by a precise
> mathematical formula, neither this behaviour nor the way the nodes are
> connected is of much help in explaining the network’s overall behaviour.
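The stochasticity described above can be illustrated with a toy sampler. This is a minimal sketch, not code from any real system: the logits and the softmax-sampling scheme are invented for illustration.

```python
import math
import random

# Toy scores ("logits") a network might assign to three candidate
# outputs — invented values, for illustration only.
logits = [2.0, 1.5, 0.5]

def sample(logits, temperature=1.0):
    """Sample an index in proportion to softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    r = random.random() * sum(exps)
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e
        if r < cum:
            return i
    return len(exps) - 1

# Identical input, repeated runs: more than one distinct output appears,
# which is the "same input, different output" behaviour described above.
outputs = {sample(logits) for _ in range(1000)}
```

Determinism can be restored only by fixing the random seed or sampling greedily; by default the mapping from input to output is a distribution, not a function.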
>
> ...
>
> This idea lies at the heart of piecewise development: parts can be
> engineered (and verified) separately and hence in parallel, and reused
> in the form of modules, libraries and the like in a ‘black box’ way,
> with re-users being able to rely on any verification outcomes of the
> components and only needing to know their interfaces and their behaviour
> at an abstract level. Reuse of components not only provides increased
> confidence through multiple and diverse use, but also saves costs.
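The black-box reuse pattern the article describes might be sketched as follows. The `stable_sort` and `median` components here are hypothetical illustrations: the point is that the component is verified once against its interface contract, and re-users rely only on that contract, never on its internals.

```python
# A component verified once against its stated contract, then reused
# as a black box. Names and contract are illustrative, not from the
# article.

def stable_sort(items):
    """Contract: returns a new ascending list containing the same
    multiset of items; the input list is left unchanged."""
    return sorted(items)

def verify_component():
    # Verification exercises only the abstract contract, not internals.
    data = [3, 1, 2, 1]
    out = stable_sort(data)
    assert out == [1, 1, 2, 3]              # ascending
    assert sorted(data) == sorted(out)      # same multiset
    assert data == [3, 1, 2, 1]             # input unchanged

verify_component()

# A re-user depends only on the contract; median() never inspects how
# stable_sort works, so either side can be developed in parallel.
def median(items):
    s = stable_sort(items)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
```

The article's claim is that a trained neural network offers no analogous internal seam at which such a contract could be stated and checked.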
>
> ...
>
> Current AI systems have no internal structure that relates meaningfully
> to their functionality. They cannot be developed, or reused, as
> components. There can be no separation of concerns or piecewise
> development. A related issue is that most current AI systems do not
> create explicit models of knowledge — in fact, many of these systems
> developed from techniques in image analysis, where humans have been
> notably unable to create knowledge models for computers to use, and all
> learning is by example (‘I know it when I see it
> <https://www.acluohio.org/en/cases/jacobellis-v-ohio-378-us-184-1964>’).
> This has multiple consequences for development and verification.
>
> ...
>
> Systems are not explainable, as they have no model of knowledge and no
> representation of any ‘reasoning’.
>
> ...
>
> Verification comes with a subset of issues following from the above. The
> only verification that is possible is of the system in its entirety; if
> there are no handles for generating confidence in the system during its
> development, we have to put all our eggs in the basket of post-hoc
> verification.
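Whole-system, post-hoc verification of an opaque model could be sketched like this. Both the stand-in model and the property being checked are invented for illustration; the point is that with no internal handles, all one can do is sample inputs and check outputs statistically.

```python
import random

def opaque_model(x):
    # Stand-in for a trained system whose internals offer no
    # verification handles; here it approximates |x| with small noise.
    return abs(x) + random.uniform(-0.01, 0.01)

def post_hoc_check(model, trials=1000, tolerance=0.05):
    """Black-box spot-check: outputs should be near |x| on sampled
    inputs. This yields only probabilistic confidence over the sampled
    region — never a guarantee, and never an explanation of failures."""
    for _ in range(trials):
        x = random.uniform(-100.0, 100.0)
        if abs(model(x) - abs(x)) > tolerance:
            return False
    return True
```

Contrast with the componentised case above the ellipsis: there, confidence accumulates per component during development; here, all the eggs are indeed in the basket of testing the finished whole.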
>
> ...
>
> So, is there hope? I believe — though I would be happy to be proved
> wrong on this — that current generative AI systems represent a dead end,
> where exponential increases of training data and effort will give us
> modest increases in impressive plausibility but no foundational increase
> in reliability. I would love to see compositional approaches to neural
> networks, hard as it appears.
>
--
Roger Clarke mailto:Roger.Clarke at xamax.com.au
T: +61 2 6288 6916 http://www.xamax.com.au http://www.rogerclarke.com
Xamax Consultancy Pty Ltd 78 Sidaway St, Chapman ACT 2611 AUSTRALIA
Visiting Professorial Fellow UNSW Law & Justice
Visiting Professor in Computer Science Australian National University