[LINK] Does current AI represent a dead end?
Kim Holburn
kim at holburn.net
Thu Dec 5 15:33:23 AEDT 2024
https://www.bcs.org/articles-opinion-and-research/does-current-ai-represent-a-dead-end
Many of these neural network systems are stochastic, meaning that providing the same input will not always lead to the same output.
The behaviour of such AI systems is ‘emergent’: although the behaviour of each neuron is given by a precise mathematical
formula, neither that behaviour nor the way the nodes are connected is of much help in explaining the
network’s overall behaviour.
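The non-determinism described above typically comes from sampling: even for identical inputs (and hence identical output scores), temperature-based sampling can return a different token on each call. A minimal sketch, with hypothetical logits standing in for a real model's output:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample an index from logits via temperature-scaled softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                         # this draw makes the output stochastic
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Same input, repeated calls: the "output" is a distribution, not a value.
logits = [2.0, 1.5, 0.3]                     # hypothetical scores for three tokens
samples = [sample_token(logits) for _ in range(1000)]
print({i: samples.count(i) for i in range(3)})   # counts vary run to run
```

Greedy decoding (always taking the highest-probability token) would be deterministic, but deployed systems usually sample, which is why the same prompt need not yield the same answer.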
...
This idea lies at the heart of piecewise development: parts can be engineered (and verified) separately, and hence in parallel, and
reused in the form of modules, libraries and the like in a ‘black box’ way, with re-users able to rely on the components’
verification outcomes while needing to know only their interfaces and their behaviour at an abstract level. Reuse of components
not only provides increased confidence through multiple and diverse use, but also saves cost.
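In conventional software, that ‘black box’ reuse looks something like the toy sketch below: a component (a hypothetical `clamp` function) is specified by a contract, verified once in isolation, and then reused on the strength of the contract alone, without inspecting its internals:

```python
def clamp(x: float, lo: float, hi: float) -> float:
    """Component: restrict x to [lo, hi]. Contract: lo <= result <= hi."""
    return max(lo, min(hi, x))

# The component can be verified in isolation, once...
assert all(0.0 <= clamp(v, 0.0, 1.0) <= 1.0 for v in (-5.0, 0.3, 9.9))

# ...and re-users rely only on the contract, not the implementation.
def normalise_scores(scores):
    return [clamp(s, 0.0, 1.0) for s in scores]

print(normalise_scores([-0.2, 0.5, 1.7]))   # -> [0.0, 0.5, 1.0]
```

It is exactly this separation (verify the part, trust the interface) that a monolithic neural network offers no purchase for.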
...
Current AI systems have no internal structure that relates meaningfully to their functionality. They cannot be developed, or reused,
as components. There can be no separation of concerns or piecewise development. A related issue is that most current AI systems do
not create explicit models of knowledge — in fact, many of these systems developed from techniques in image analysis, where humans
have been notably unable to create knowledge models for computers to use, and all learning is by example (‘I know it when I see it
<https://www.acluohio.org/en/cases/jacobellis-v-ohio-378-us-184-1964>’).
This has multiple consequences for development and verification.
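The contrast between an explicit knowledge model and learning purely by example can be made concrete. In the sketch below (all data invented for illustration), the first classifier encodes an inspectable rule you can audit and correct; the second merely memorises labelled examples and matches on word overlap, so there is no rule anywhere to point to:

```python
def rule_based_is_spam(subject: str) -> bool:
    """Explicit model: the 'knowledge' is a readable, auditable rule."""
    s = subject.lower()
    return "winner" in s or "free $$$" in s

def nearest_example_is_spam(subject, examples):
    """By-example: no rule, just similarity to labelled instances."""
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    best = max(examples, key=lambda ex: overlap(subject, ex[0]))
    return best[1]   # the label of the most similar known example

examples = [("you are a winner claim now", True),
            ("meeting notes attached", False)]

print(rule_based_is_spam("WINNER! claim your prize"))            # True
print(nearest_example_is_spam("claim your prize now", examples)) # True
```

The second classifier may work, but asked *why* an input was labelled spam, the only honest answer is ‘it resembled an example’.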
...
Systems are not explainable, as they have no model of knowledge and no representation of any ‘reasoning’.
...
Verification comes with a subset of issues following from the above. The only verification that is possible is of the system in its
entirety; if there are no handles for generating confidence in the system during its development, we have to put all our eggs in the
basket of post-hoc verification.
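With no internal structure to test against, verification reduces to black-box testing of the whole system: feed it inputs, check the outputs against desired properties, and accumulate statistical confidence. A sketch of that pattern, with a placeholder `model` (invented here, with artificial noise) standing in for any opaque system:

```python
import random

def model(x: float) -> float:
    """Stand-in for an opaque system; its internals are off-limits."""
    return abs(x) + random.uniform(-0.01, 0.01)   # noisy, unexplained behaviour

def post_hoc_check(system, property_fn, inputs):
    """Black-box verification: only input/output behaviour is observable."""
    failures = [x for x in inputs if not property_fn(x, system(x))]
    return len(failures), len(inputs)

# Desired property: output should be near |x| (tolerance chosen for the demo).
prop = lambda x, y: abs(y - abs(x)) <= 0.05
inputs = [random.uniform(-10, 10) for _ in range(1000)]
failed, total = post_hoc_check(model, prop, inputs)
print(f"{failed}/{total} property violations")    # confidence is only statistical
```

Such testing yields statistical evidence over the sampled inputs, never the per-component guarantees that compositional development provides.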
...
So, is there hope? I believe — though I would be happy to be proved wrong on this — that current generative AI systems represent a
dead end, where exponential increases of training data and effort will give us modest increases in impressive plausibility but no
foundational increase in reliability. I would love to see compositional approaches to neural networks, hard as that appears.
--
Kim Holburn
IT Network & Security Consultant
+61 404072753
mailto:kim at holburn.net aim://kimholburn
skype://kholburn - PGP Public Key on request