1. Why Gen AI Sometimes Gives Silly Maths Answers
A helpful way to explain this (building on regression‑model intuition):
Generative AI isn’t a calculator — it’s a probability engine.
Large Language Models (LLMs) don’t compute numbers the way spreadsheets, finance systems, or calculators do.
Instead:
They predict the next most likely word/token based on patterns in their training data.
Maths answers require deterministic logic, but LLMs operate on linguistic probability.
So if the training patterns around certain numbers are messy or sparse, the model may output a plausible-sounding guess rather than a correct calculation (the short sketch at the end of this section makes this concrete).
That’s where the phrase:
“Generative AI lives in a world of words and not numbers”
comes from — because the model’s “native language” is text patterns, not symbolic reasoning.
A simple analogy you can use:
“If you ask a novelist to produce a balance sheet from memory without a calculator, they might write something plausible but still wrong. That’s what LLMs do — they write what looks right, not what is mathematically validated.”
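To make that contrast concrete, here is a deliberately artificial Python sketch: the first line is what a calculator does, while the rest loosely mimics a model choosing between answers that merely "look right". The candidate answers and weights are invented purely for illustration.

```python
# Deterministic logic vs. probabilistic text: a toy contrast.
import random

print(17 * 24)                                  # deterministic logic: always 408

plausible_answers = ["408", "418", "398"]       # text that sounds mathematically reasonable
weights = [0.7, 0.2, 0.1]                       # made-up "learned" probabilities
for _ in range(3):
    print(random.choices(plausible_answers, weights)[0])   # can differ from run to run
```

Run it a few times: the calculator line never changes, while the "model" side can.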
2. How to Clarify the Concept for Finance Colleagues
Since finance people default to logic, controls, precision and reconciliation, explain it this way:
LLMs don’t run formulas; they autocomplete.
Accuracy is not guaranteed unless the model is connected to a tool that performs real calculations (e.g., Python, Excel, a finance engine); a minimal sketch of this appears at the end of this section.
When LLMs hallucinate, it’s because they “sound right” rather than “compute right.”
A phrasing that resonates with finance audiences:
“Think of Gen AI as an expert in predicting language, not executing algorithms. Without a computational back‑end, its maths is like an estimate written in prose rather than a validated entry in Workday or Adaptive.”
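If colleagues ask what "connected to a tool that performs real calculations" looks like in practice, the sketch below shows the idea in plain Python. The `calculate` helper is hypothetical (it is not any vendor's API); the point is simply that the arithmetic is delegated to deterministic code instead of being written out as prose.

```python
# Hypothetical tool a model could call instead of guessing the arithmetic.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calculate(expression: str) -> float:
    """Safely evaluate plain arithmetic: same input, same exact output, every time."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("only plain arithmetic is supported")
    return _eval(ast.parse(expression, mode="eval").body)

print(calculate("1047.50 * 1.2 - 36"))   # 1221.0: a validated number, not a plausible one
```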
3. About the Image You Shared (“8 forces reshaping finance by 2030”)
You noted the text was unclear — and you’re absolutely right:
The resolution is too low to read the items clearly.
If you want, upload a higher‑resolution version or a cropped close‑up, and I can extract the text cleanly for you.
Your explanation of Gen AI as a kind of large‑scale regression engine really clicked — it makes sense why it sometimes gives odd answers to simple maths. The phrase “Generative AI lives in a world of words and not numbers” is thought‑provoking, and here’s how I’ve come to interpret it:
LLMs don’t truly calculate — they predict the next likely word based on patterns in text. So when it comes to maths, they generate something that sounds mathematically reasonable rather than something computed. It’s language prediction, not numerical logic, which explains the occasional silly answers.
At a mathematical level, many AI systems, especially neural networks, do behave like extremely large, stacked, non-linear regression models. This framing helps explain why they sometimes behave unpredictably.
Below is a simple breakdown.
🔹 1. Regression = Predicting an Output from an Input
Regression models learn a relationship between inputs (X) and outputs (Y) by fitting parameters that minimize error.
This aligns with definitions surfaced in the enterprise search results:
Regression predicts continuous outputs by learning relationships between variables. [geeksforgeeks.org]
Machine learning regressions model Y = f(X). [builtin.com]
Regression techniques underpin forecasting, risk scoring, and trend estimation. [geeksforgeeks.org]
This basic principle extends to AI models.
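As a tiny, concrete instance of Y = f(X), the sketch below fits a straight line to a handful of made-up numbers (the data are invented purely to show the mechanics):

```python
# Ordinary least squares on toy data: learn the relationship between X and Y.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # input, e.g. month number
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # output, e.g. sales (illustrative figures)

w, b = np.polyfit(x, y, deg=1)             # fit y ≈ w*x + b by minimising squared error
print(f"learned relationship: y ≈ {w:.2f}*x + {b:.2f}")
print("prediction for x = 6:", round(w * 6 + b, 2))
```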
🔹 2. Neural Networks = Millions of Regression Layers Stacked Together
Each neuron in a neural network performs something very similar to linear regression:
output = (weight1 × input1) + (weight2 × input2) + … + bias
Then the result passes through a non‑linear activation function, allowing the network to model extremely complex relationships—not just straight lines.
So:
One neuron ≈ one tiny regression.
One layer ≈ many regressions combined.
A deep model ≈ many layers of regressions chained together.
This matches enterprise guidance that deep learning builds on patterns and probability distributions across data, not direct logic rules. [AI & Machi...arning POV | PowerPoint]
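A minimal sketch of "one neuron ≈ one tiny regression", with invented inputs and weights:

```python
# One neuron: a weighted sum plus bias (the regression-like part),
# passed through a non-linear activation (what lets stacked layers model curves).
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    linear = np.dot(weights, inputs) + bias
    return np.tanh(linear)

x = np.array([0.5, -1.2, 3.0])   # one input example (made up)
w = np.array([0.8, 0.1, -0.4])   # "learned" weights (made up)
print(neuron(x, w, bias=0.2))

# A layer is many such neurons in parallel; a deep network chains many layers.
```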
🔹 3. Training = Fitting Millions or Billions of Parameters
Just like linear regression minimizes error (e.g., least‑squares), AI models:
Compare predicted vs. actual output.
Measure error via a “loss function”.
Adjust parameters using gradient descent.
This echoes what regression tutorials emphasize—minimizing prediction error and improving the best‑fit function. [bigdataelearning.com]
But AI models do this at a much larger scale, often adjusting billions of parameters instead of a handful of regression coefficients.
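The same loop in miniature: predict, measure the loss, nudge the parameters downhill. The toy data below encode y = 2x + 1, and gradient descent recovers roughly w = 2 and b = 1 (a sketch, not a production training loop):

```python
# Gradient descent on a single-feature regression: the smallest version of "training".
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])           # true relationship: y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05                    # arbitrary starting parameters, learning rate
for _ in range(2000):
    pred = w * x + b                         # 1. predicted vs. actual
    error = pred - y
    loss = np.mean(error ** 2)               # 2. loss function (mean squared error)
    w -= lr * np.mean(2 * error * x)         # 3. adjust parameters along the gradient
    b -= lr * np.mean(2 * error)

print(f"learned w = {w:.2f}, b = {b:.2f}, final loss = {loss:.6f}")
```

LLM training follows the same pattern, just with billions of parameters and a loss defined over predicted tokens rather than a single number.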
🔹 4. Generative AI = Predicting the Next Output Using a Learned Probability Function
Here’s where the regression analogy becomes very powerful:
LLMs like GPT or Claude don’t “think”—they predict the next token (word, symbol) using a huge mathematical function learned from data.
That function behaves like a gigantic non‑linear regression:
Next Word = f(all previous words)
where f is a massive neural network that estimates probabilities.
Enterprise context also notes that generative AI works by understanding probability distributions of language rather than executing deterministic rules.
This is why you said:
“Gen AI lives in a world of words, not numbers.”
Exactly—because it’s predicting linguistic probability, not performing symbolic computation.
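A toy sketch of that function: the network scores a small vocabulary for the next position and samples from the resulting probabilities. The vocabulary and scores below are invented; a real model does this over tens of thousands of tokens.

```python
# "Next Word = f(all previous words)" in miniature: scores -> softmax -> sample.
import numpy as np

vocab = ["42", "roughly 40", "a lot", "Paris"]
scores = np.array([2.1, 1.9, 0.3, -3.0])       # what a trained network might emit for some context

probs = np.exp(scores) / np.exp(scores).sum()  # softmax turns scores into a probability distribution
for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.2f}")

print("chosen next token:", np.random.choice(vocab, p=probs))   # sampled, not calculated
```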
🔹 5. So Why Does It Sometimes Get Maths Wrong?
Because:
AI predicts what looks statistically likely, not what is computationally correct.
Maths requires discrete, exact logic.
Language models only approximate patterns from text data.
This echoes the caution in EU AI guidance: traditional regression and optimization systems are not considered AI systems, because AI models behave probabilistically, not deterministically. [EU AI Act FAQs | dentsu Dot]
So in a sense:
Calculators = logic
AI = probability
And that explains the “silly answers”.
🔹 6. One‑Sentence Summary You Can Give:
Here’s a tight version you can send back:
“Modern AI systems are essentially massive stacked regression models that learn a complex probability function over text. They don’t ‘calculate’ like a finance engine—they predict the most likely next token, which is why they sometimes produce wrong maths but fluent language.”
And yes, the diagram on "8 forces reshaping finance by 2030" is interesting. The eight forces it lists are:
- The end of transactional customisation
- Rise of do-it-yourself tech
- Machine-dominated decision-making
- A workforce of AI agents
- The era of discontinuous regulatory changes
- The lonely enterprise
- The finance talent crash
- Maximally matrixed organisation