
How JEDAI Enables LLMs to Navigate What Machines Can’t Learn

Context

In June 2020, in the middle of the pandemic, I first started exploring machine learning. My approach has always been to connect new ideas with what I already know, so I thought of machine learning as an extension of traditional computing, and that framing helped me discover many nuances. In traditional programming, we use variables to store values and functions to process them into decision-making outputs. We do not have to hardcode every value: the logic is fixed, but the inputs can change.


Then came machine learning. Here, the model does not just receive input variables; it learns how those variables interact, what operations (+, -, <, <=, ± and so on) to apply, and where the thresholds should lie. In other words, the whole equation, the function itself, can be created on the fly.


But there is an important truth that anchors all of this: machines do not think or discover (at least not yet!). They search, optimise, and adjust until their predictions align with 'reality'. The magic lies not in understanding, but in reducing error through feedback.


This is the art of generalisation, and it begins with feature engineering, function fitting, and the discipline of minimising loss.


What You’ll Learn

  • What feature engineering really means (beyond Boolean zeros and ones)

  • Why machine learning is function fitting with feedback

  • How loss becomes the machine’s sense of direction

  • The difference between underfitting, overfitting, and generalisation


1. Feature Engineering: Translating Reality into Numbers

Imagine you are behind the wheel. You see signs, signals, and other drivers. To make sense of them, you interpret everything as usable inputs: distance, speed, position.

That is what feature engineering does for machines. It takes messy, real-world data and turns it into numbers the model can actually process.


  • “Employed for 5 years” becomes employment_years = 5

  • “Age 35” becomes age = 35

  • “High salary” might become a scaled value like salary_scaled = 0.45


Feature engineering is representation. It is how we translate reality into a numerical language that machines can learn from.
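As a minimal sketch of that translation (the field names and the salary scaling range are illustrative assumptions, not from any real dataset):

```python
def engineer_features(applicant):
    """Turn a raw applicant record into a numeric feature vector.

    Salary is min-max scaled to [0, 1] using an illustrative
    range of 20,000-120,000.
    """
    salary_min, salary_max = 20_000, 120_000
    salary_scaled = (applicant["salary"] - salary_min) / (salary_max - salary_min)
    return [
        applicant["employment_years"],  # "Employed for 5 years" -> 5
        applicant["age"],               # "Age 35" -> 35
        round(salary_scaled, 2),        # "High salary" -> e.g. 0.45
    ]

features = engineer_features({"employment_years": 5, "age": 35, "salary": 65_000})
print(features)  # -> [5, 35, 0.45]
```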


2. From Writing Code to Letting Code Learn

In traditional programming, you decide everything: the rules, the equations, the thresholds. You are the driver and the mapmaker. Machine learning changes that. You no longer tell the car how to steer; you show it thousands of examples of good driving, and it figures out the steering rules itself.


| Aspect | Traditional Programming | Machine Learning | Deep Learning |
| --- | --- | --- | --- |
| Who defines logic | You (people decide the rules) | You design features; model learns relationships | You design architecture; model learns features and relationships |
| Input | Raw variables | Engineered features | Raw or lightly processed data |
| Output | Deterministic | Probabilistic | Probabilistic, complex |
| Goal | Execute human-defined function | Learn relationships from data | Learn both representations and relationships |

In short:

  • Traditional programming: You write the function.

  • Machine learning: You supply data; the model finds the function.

  • Deep learning: You design the layers; the model builds its own representations.
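A toy contrast makes the shift concrete (the loan-approval rule and the four labelled examples are invented for illustration): in traditional programming a person hardcodes the threshold; in machine learning, the data picks it.

```python
# Traditional programming: a human writes the rule.
def approve_loan_rule(salary):
    return salary > 50_000  # threshold chosen by a person

# Machine learning (toy version): search for the threshold
# that best separates the labelled examples.
data = [(30_000, False), (45_000, False), (60_000, True), (80_000, True)]

def learn_threshold(examples):
    best_t, best_correct = 0, -1
    for t in range(0, 100_001, 1_000):  # candidate thresholds
        correct = sum((salary > t) == label for salary, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

print(learn_threshold(data))  # -> 45000, found from data, not hardcoded
```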


3. What “Learning” Actually Means

Machines do not understand patterns; they search for them. Learning means adjusting internal parameters until the model’s predictions align with real data. Mathematically, it is about finding the parameter values that minimise the gap between prediction and truth.


That gap is called loss.


The model is not discovering meaning; it is optimising numbers. Each adjustment is a small steering correction on its journey to accuracy.
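For a concrete sense of "the gap", here is mean squared error, one common loss function (the predictions and targets are made-up numbers):

```python
def mse(predictions, targets):
    """Mean squared error: the average squared gap between
    prediction and truth. Smaller is better."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

print(mse([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))  # -> roughly 0.167
```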


4. The Curve and the Road: Function Fitting Made Simple

Picture a learner driver following a winding road. At first, they drift off course. Gradually, they adjust, turning the wheel and easing the brakes until their path follows the road smoothly.


That is function fitting.


In machine learning, data points are the road, and the model’s curve is the car’s path. With each iteration, it adjusts its slope and angle (weights and biases) to follow the data more faithfully. When the road gets twisty (non-linear), the model adds more flexibility: new parameters, more layers, more complex transformations.


The model, like the driver, begins with no understanding of the route ahead. It simply reacts, corrects, and repeats. Each turn of the wheel is an updated weight; each braking point, an adjusted bias. Over time, through feedback and correction, the model learns to align its internal representation with the true shape of the data, just as the driver learns the rhythm of the road.


In deep learning, each layer adds a new level of skill: first recognising the lane, then the road, then the entire landscape.
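The steering-correction loop above can be sketched as plain gradient descent fitting a line y = w·x + b (the data, learning rate, and epoch count are illustrative choices, not from the article):

```python
# Fit y = w*x + b to points generated from the line y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = 0.0, 0.0   # start with no idea of the route
lr = 0.01         # learning rate: how hard to turn the wheel

for epoch in range(5000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y   # how far off course we are
        grad_w += 2 * err * x   # d(squared error)/dw
        grad_b += 2 * err       # d(squared error)/db
    n = len(data)
    w -= lr * grad_w / n        # small steering corrections
    b -= lr * grad_b / n

print(round(w, 2), round(b, 2))  # converges close to 2 and 1
```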


5. Why Fit the Model? The Ball-Throwing Analogy

Now imagine you are learning to throw a ball at a target. The first throw misses badly. You watch where it lands, adjust your aim, and throw again. Each throw gives feedback. Each correction reduces the error distance between the ball’s landing spot and the target. That is exactly what function fitting does!


  • Error tells you how far off you are.

  • Optimisation tells you how to adjust your aim.

  • Loss minimisation is how you improve, throw after throw, epoch after epoch.


The only way a machine learns is by comparing its output (where the ball lands) with the desired output (the target). The smaller the gap, the better it gets at hitting close to centre.
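A minimal sketch of the throw-and-correct loop (the target, starting aim, and correction factor are invented stand-ins for an optimiser's parameters):

```python
target = 10.0   # where the ball should land
aim = 0.0       # where our current throw lands
lr = 0.3        # how strongly we correct after each miss

for throw in range(1, 11):
    error = target - aim   # feedback: landing spot vs target
    aim += lr * error      # adjust the aim toward the target
    print(f"throw {throw}: landed at {aim:.2f}, error {target - aim:.2f}")
```

Each throw shrinks the remaining error by the same factor, which is why the early corrections are large and the later ones are tiny.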


6. Underfitting, Good Fit, and Overfitting: The Three Driving Styles


Think of three types of drivers:

  1. The Novice (Underfitting): Turns too little, ignores the bends, and runs off the road. The model is too simple to capture the pattern.


  2. The Balanced Driver (Good Fit): Learns the rhythm of the road, anticipates curves, and stays centred. The model generalises well.


  3. The Overthinker (Overfitting): Reacts to every pebble, overcorrects, and weaves wildly. The model memorises training data but fails on new roads.


Finding the patterns (OpenAI generated image)

The goal is generalisation: learning enough to handle roads never driven before. The trick is finding the right balance: underfitting misses the pattern, overfitting memorises it, and a good fit captures the underlying trend.
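The three driving styles can be reproduced with polynomial fits of different flexibility (a sketch assuming NumPy is available; the line y = 2x + 1, the noise level, and the degrees 0, 1, and 9 are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed: reproducible noise
x_train = np.linspace(0, 1, 10)
x_test = np.linspace(0.05, 0.95, 10)    # new stretches of road
y_train = 2 * x_train + 1 + rng.normal(0, 0.2, x_train.size)
y_test = 2 * x_test + 1 + rng.normal(0, 0.2, x_test.size)

for degree in (0, 1, 9):                # novice, balanced, overthinker
    coeffs = np.polyfit(x_train, y_train, degree)
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree}: test MSE {test_mse:.3f}")
```

The degree-9 polynomial threads every noisy training point (near-zero training error) yet generalises worse than the straight line, which is the overthinker in numbers.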


7. Embeddings: The Map Inside the Mind

Deep learning goes further. As the model drives through millions of data roads, it builds an internal map called embeddings. These are not meanings, just mathematical positions where similar things are closer together because it helps reduce loss.


“King” ends up near “Queen,” not because the model understands monarchy, but because that spatial arrangement improves its prediction accuracy.


Embeddings are mathematical shortcuts to lower loss, not comprehension.
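A toy illustration of that geometry (the 3-dimensional vectors are invented; real embeddings have hundreds of dimensions learned from data): similarity is nothing more than the angle between vectors.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means
    'pointing the same way', near 0.0 means unrelated directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

embeddings = {
    "king":  [0.9, 0.8, 0.1],   # hypothetical learned positions
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high, near 1
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```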


8. The Big Picture: What’s Really Going On


Here’s the distilled summary:

  • Machine learning is curve fitting guided by feedback.

  • Feature engineering is data representation for that curve.

  • Deep learning is multi-layered curve fitting that learns its own features.

  • Embeddings are coordinates, not concepts.

  • Loss minimisation is the only compass that keeps learning on track.


Machines do not think; they adjust. They do not reason; they respond. They do not learn or discover in the human sense; they match patterns, searching for mathematical alignments that reduce error. Yet through this patterned precision, they approximate the world well enough to navigate it.


Why Tenacium Chose Knowledge Graphs from Day 1

At Tenacium, we realised early that the future of learning systems is not just about how fast models adapt, but about how contextually they understand. This is why we built JEDAI around a Knowledge Graph foundation from day one.


| Approach | Who Defines Logic | What's Learned |
| --- | --- | --- |
| Traditional Programming | You (explicit rules) | Nothing new |
| Machine Learning | You define features | Model learns relationships |
| Deep Learning | You define architecture | Model learns features and relationships |

The table above illustrates the point: knowledge graphs give external AI models or self-hosted LLMs an architecture in which they can learn both representations and relationships faster and more effectively, and answer back with context that actually makes sense.


We also designed a flexible data ingestion mechanism that respects the reality of how data lives inside businesses: scattered, incomplete, and constantly changing. Training agents directly on corporate data would take phenomenal time and cost.


Without a hypergraph foundation like JEDAI, adopting only AI agents is like hiring a consultant who writes a great report but never solves the problem. Knowledge Graphs, in contrast, allow the business to own the intelligence, connect the dots, and act on insight immediately.
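This is not JEDAI's actual implementation, but a minimal sketch of the underlying idea: a knowledge graph stores facts as explicit triples that can be queried and connected immediately, with no training step. All names in the graph are hypothetical.

```python
# Hypothetical triple store: (subject, predicate, object) facts.
graph = [
    ("Acme Ltd", "has_customer", "Globex"),
    ("Globex", "based_in", "Berlin"),
    ("Acme Ltd", "sells", "Widgets"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the non-None fields."""
    return [
        (s, p, o) for s, p, o in graph
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Who are Acme's customers, and where are they based?"
# Two hops across explicit relationships, answered instantly.
for _, _, customer in query("Acme Ltd", "has_customer"):
    for _, _, city in query(customer, "based_in"):
        print(customer, "is based in", city)  # -> Globex is based in Berlin
```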


Call to Action

Machines learn by fitting mathematical functions to data, refining parameters until predictions align with reality. The goal is not perfection but balance: just enough fit to understand the road ahead without memorising every bump or turn.


And at Tenacium, our Knowledge Graph architecture ensures that when machines learn to navigate, they do so with context, continuity, and purpose, transforming raw data into real intelligence.


