[LINK] Does current AI represent a dead end?

Kate Lance kate at lancewood.net
Thu Dec 5 20:54:45 AEDT 2024


Ed Zitron has been debunking the whole AI 'economy' for a while now
and he's delightfully readable:

https://www.wheresyoured.at/godot-isnt-making-it/

"The entire tech industry has become oriented around a dead-end technology that
requires burning billions of dollars to provide inessential products that cost
them more money to serve than anybody would ever pay. 

The obscenity of this mass delusion is nauseating — a monolith to bad
decision-making and the herd mentality of tech's most powerful people, as well
as an outright attempt to manipulate the media into believing something was
possible that wasn't. And the media bought it, hook, line, and sinker.

Hundreds of billions of dollars have been wasted building giant data centers to
crunch numbers for software that has no real product-market fit, all while
trying to hammer it into various shapes to make it pretend that it's alive,
conscious, or even a useful product. 

There is no path, from what I can see, to turn generative AI and its associated
products into anything resembling sustainable businesses, and the only path
that big tech appeared to have was to throw as much money, power, and data at
the problem as possible, an avenue that appears to be another dead end."

Lots more good stuff, especially on NVIDIA's chip problems.

Regards,
Kate



On Thu, Dec 05, 2024 at 03:33:23PM +1100, Kim Holburn wrote:
> https://www.bcs.org/articles-opinion-and-research/does-current-ai-represent-a-dead-end
> 
> Many of these neural network systems are stochastic, meaning that providing
> the same input will not always lead to the same output. The behaviour of
> such AI systems is ‘emergent’ — which means despite the fact that the
> behaviour of each neuron is given by a precise mathematical formula, neither
> this behaviour nor the way the nodes are connected are of much help in
> explaining the network’s overall behaviour.
> 
> ...
> 
> This idea lies at the heart of piecewise development: parts can be
> engineered (and verified) separately and hence in parallel, and reused in
> the form of modules, libraries and the like in a ‘black box’ way, with
> re-users being able to rely on any verification outcomes of the component
> and only needing to know their interfaces and their behaviour at an abstract
> level. Reuse of components not only provides increased confidence through
> multiple and diverse use, but also saves costs.
> 
> ...
> 
> Current AI systems have no internal structure that relates meaningfully to
> their functionality. They cannot be developed, or reused, as components.
> There can be no separation of concerns or piecewise development. A related
> issue is that most current AI systems do not create explicit models of
> knowledge — in fact, many of these systems developed from techniques in
> image analysis, where humans have been notably unable to create knowledge
> models for computers to use, and all learning is by example (‘I know it when
> I see it <https://www.acluohio.org/en/cases/jacobellis-v-ohio-378-us-184-1964#:~:text=TheU.S.SupremeCourtreversed,tohardcorepornography...>’).
> This has multiple consequences for development and verification.
> 
> ...
> 
> Systems are not explainable, as they have no model of knowledge and no representation of any ‘reasoning’.
> 
> ...
> 
> Verification comes with a subset of issues following from the above. The
> only verification that is possible is of the system in its entirety; if
> there are no handles for generating confidence in the system during its
> development, we have to put all our eggs in the basket of post-hoc
> verification.
> 
> ...
> 
> So, is there hope? I believe — though I would be happy to be proved wrong on
> this — that current generative AI systems represent a dead end, where
> exponential increases of training data and effort will give us modest
> increases in impressive plausibility but no foundational increase in
> reliability. I would love to see compositional approaches to neural
> networks, hard as it appears.
> 
> -- 
> Kim Holburn
> IT Network & Security Consultant
> +61 404072753
> mailto:kim at holburn.net  aim://kimholburn
> skype://kholburn - PGP Public Key on request
> 
> _______________________________________________
> Link mailing list
> Link at anu.edu.au
> https://mailman.anu.edu.au/mailman/listinfo/link