[LINK] One for the Numerologists and Mystics...
Stephen Loosley
stephenloosley at zoho.com
Sat Jan 4 16:42:59 AEDT 2025
David writes,
> On Wednesday, 1 January 2025 12:57:09 PM AEDT sylvano wrote:
>
>> One of the perplexity results doesn't add up:
>>
>> **Sum of Consecutive Primes**
>> Interestingly, 2025 can be represented as the sum of five consecutive
>> prime numbers: 397 + 401 + 409 + 419 + 421 = 2025[7]. This connection
>> to prime numbers adds to its number-theoretical significance.
>>
>>
>> 397 + 401 + 409 + 419 + 421 = 2047
>
> However I notice that the sum of five consecutive primes beginning one prime earlier (389)
> does add up to 2015, thus: 389 + 397 + 401 + 409 + 419 = 2015, and the difference from 2025 (10) can
> easily be found by differencing nearby primes.
>
> It would be interesting to know if this erroneous result reflects some internal glitch in Perplexity's
> architecture, and how it arose.
>
> And my best wishes to Linkers for a peaceful, if interesting, 2025 whatever its numerical significance!
> I'd like to live for the next 100 years just to see how it all turns out....
>
> _David Lochrin_
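For what it's worth, David's sums check out, and the claim can be tested exhaustively rather than by spot-checking. Below is a minimal Python sketch (the helper function is my own, not anything from Perplexity) that prints every run of five consecutive primes whose sum lands anywhere near 2025:

# List runs of five consecutive primes with sums near 2025.

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

primes = primes_up_to(500)
for i in range(len(primes) - 4):
    run = primes[i:i + 5]
    if 2000 <= sum(run) <= 2060:
        print(run, "=", sum(run))

# Prints:
# [389, 397, 401, 409, 419] = 2015
# [397, 401, 409, 419, 421] = 2047

No run of five consecutive primes hits 2025 at all; the sums jump straight from David's 2015 to 2047, so Perplexity's "fact" isn't even a near miss that a wider search would rescue.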
AI Is Usually Bad At Math. Here’s Why It Matters
By John Werner, an MIT Senior Fellow. Updated Oct 7, 2024
https://www.forbes.com/sites/johnwerner/2024/10/07/ai-is-usually-bad-at-math-heres-what-will-happen-if-it-gets-better/
We’re seeing some new developments in AI models that are shedding light on one of the technology’s most prominent gaps – its relative inability to do math well.
Some experts note that AI is dysfunctional at math. It tends to produce wrong answers, and can be slow to correct them.
The people at OpenAI are working on new models that are more geared towards solving math problems.
But it’s interesting to think about why there’s this deficit in the first place:
Some engineers like to talk about tokenization, and the use of data in large language models. What it seems to boil down to is that the models themselves are geared toward language: they can produce Shakespearean works of literature, but aren't as good at solving math problems.
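The tokenization point is easy to see first-hand. Here is a minimal sketch, assuming OpenAI's tiktoken package is installed; cl100k_base is the encoding used by recent OpenAI chat models, and the exact splits will vary by model:

# Show how a subword tokenizer carves up an arithmetic expression.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
expression = "397 + 401 + 409 + 419 + 421 ="

token_ids = enc.encode(expression)
pieces = [enc.decode([t]) for t in token_ids]
print(pieces)
# The model works on these learned text fragments, not on the integers
# themselves, which is one reason cited for its shaky arithmetic.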
But another element of this has to do with the higher-level human thought involved in doing math, and how it works.
Now, computers are great at calculating numbers in a deterministic way. You're never going to go wrong consulting a computer on a sum, for instance: just ask the calculator. But the generative AI tools we're now familiar with have trouble.
In earlier models, ChatGPT and other tools tended to produce not accurate math answers, but answers representing what might plausibly come next in language, as LLMs typically do. At present, a lot of this has been solved for simple addition, multiplication and so on, and ChatGPT can solve basic equations by, say, moving like terms around, but this only obscures the deeper issue: the AI isn't using the right methods.
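As a concrete illustration of what "moving like terms around" amounts to, here is a minimal sketch using the sympy library (the example equation is mine, not one from the article):

# Deterministic symbolic algebra: collect like terms and isolate x.
from sympy import symbols, Eq, solve

x = symbols("x")
equation = Eq(3 * x + 5, 2 * x + 9)   # 3x + 5 = 2x + 9

print(solve(equation, x))   # [4], i.e. (3x - 2x) = (9 - 5)

A symbolic engine applies those rewrite rules exactly; an LLM, by contrast, is predicting plausible next tokens, which is the deeper issue the article points at.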
“Math is really, really hard for AI models,” writes Melissa Heikkila at our own MIT Technology Review. “Complex math, such as geometry, requires sophisticated reasoning skills, and many AI researchers believe that the ability to crack it could herald more powerful and intelligent systems. Innovations like AlphaGeometry show that we are edging closer to machines with more human-like reasoning skills. This could allow us to build more powerful AI tools that could be used to help mathematicians solve equations and perhaps come up with better tutoring tools.”
So although AI might get there, it’s starting from a deficit.
You might think about a human mathematician as using a higher-level cognition that you could call the ‘striving’ or ‘searching’ phenomenon – that they’re looking for novel ways to solve problems, and going out on a limb, experimenting, trying new things, and not going back to the same drawing board each time.
By contrast, when working in iterations, AI tends to ‘snap back’ to what it was doing before. That’s the brittleness of many of these models – that they are not as explorative as a human brain would be.
So in light of that, is math problem-solving creative?
And this brings up another question – to solve the problem of AI not being able to do math well, do we apply quantity, or quality?
In other words, if you add more processing power, will the LLM or AI agent eventually learn to be more explorative and experimental – or is that something that’s innate in the human mind in a way that’s not inherent to AI?
If the latter is true, then we’re not going to be able to make AI super good at math problems without really directing it in a granular way.
As for creativity in math, let's look at an article on the University of British Columbia website about the human traits that go into math problem-solving. The authors cite:
Tolerance of information that is incomplete or poorly defined
Constructing one's own internal language in which the mathematical concepts indispensable for solving the problem are set out and explained
Courage, and questioning commonly accepted rules and principles in order to find new or atypical ways of solving the mathematical problem
Ease in analyzing new information and new ways of solving the problematic situation presented in the task at hand
Autonomy, and perseverance in searching for possible solutions to the problematic situation
The ability to critically assess other people's attempts to solve the problem
The acceptance of variability in applying the various problem-solving strategies
All of these point to, again, a higher-level set of cognitive skills that are more based on the old axiom “try, try again.”
So yes, it seems like math is a creative pursuit after all.
It’s that striving and effort to set out on a new trajectory that’s useful in this kind of thinking.
It makes you think of the popular Robert Frost line:
“Two roads diverged in a wood, and I / I took the one less traveled by, / And that has made all the difference.”
Keep an eye on these models as humans try to make them better at math, because the result is going to give us a lot of insight into how these systems mimic our own human behavior.
John Werner has created a career out of bringing ideas, networks and people together to generate powerful results. John is a Managing Director and Partner at Link Ventures.