The Basic Premise

Think about it. What is the basic premise on which Machine Learning works?
It learns by updating its weights, which in turn updates its “beliefs” about something, based on how often it makes a “mistake.”
We apply for jobs, click apply, get rejected, and, if we’re lucky, get feedback : why not me?
However, most companies avoid giving feedback, because they know you’ll update your belief of what works for that company, and reapply, or help others do the same.
You try sticking to a fitness program, judging it against the twin constraints of results and consistency, and based on that you find what works.
Unable to open a pull door, you realize it’s a push one. You update.
Around people, you learn what they like, what they don’t, what makes them sad, what makes them happy : and you update your beliefs about them.
Many of these updates happen automatically. Most happen subconsciously : we aren’t aware of what we are putting into our heads.
What is Bias Doing Here

A negative experience colors our judgement. How strongly we hold onto that judgement depends on the weights and the bias we carry.
But what exactly is bias in this context?
Bias in traditional machine learning is a value that helps the model generalize to unseen data. More mathematically speaking, bias is a learnable parameter that shifts the output of a neuron independent of its inputs; without it, every neuron is anchored to the origin.
If a neuron carries a huge bias, the corresponding weights need to grow proportionally larger to outweigh it — which means more training iterations. [1]
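The role of the bias term shows up even in a single neuron. A minimal sketch (the `neuron` function and its values are illustrative, not from any particular library):

```python
# A single linear neuron: output = w * x + b.
# Without the bias b, the output is pinned to zero whenever x is zero:
# the neuron is anchored to the origin and can only scale its input,
# never shift it.

def neuron(x, w, b=0.0):
    """Weighted input plus a bias that shifts the output independently of x."""
    return w * x + b

# No bias: x = 0 maps to 0 no matter what value w has learned.
print(neuron(0.0, w=5.0))          # -> 0.0
# With bias: the output is shifted regardless of the input.
print(neuron(0.0, w=5.0, b=2.0))   # -> 2.0
```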
Similarly with humans : if we are more biased, it takes a lot of iterations of data for our beliefs to be impacted.
A high-biased person would need many experiences for their beliefs to shift. Conversely, a low-biased person would require fewer.
Think of bias as the prior you carry into every experience : the verdict you’ve already written before the evidence arrives.
It isn’t always wrong. Sometimes it’s accumulated wisdom. Sometimes it’s accumulated damage. The difference matters.
This connection between ML bias and human cognition is not metaphorical — researchers have formally examined the bias-variance dilemma in the context of human decision-making, finding that the human brain often adopts high-bias, low-variance heuristics to navigate an uncertain world. [2]
High Bias — Stability or Blindness

Someone with high bias doesn’t update easily. That can be a feature or a bug.
The kind of bias a person carries matters. If it’s a bias towards positivity, it can be a great quality.
Think of a person who rarely gets demotivated by negative experiences : it would take a lot of negative experiences for them to turn negative — which is rare, because it’s rare for real world data to be consistently negative.
This is what psychologists call resilience — the ability to resist updating on noise.
A good investor doesn’t panic sell on one bad quarter.
A good doctor doesn’t change their diagnosis based on one outlier result.
Their high bias is doing its job.
Research on cognitive heuristics supports this : a high-bias mind, one that ignores part of the available information, can actually handle uncertainty more efficiently than one that tries to process everything. [3]
Conversely, a person who is negatively biased might not be moved at all by the positive things happening in their life : a phenomenon Burns calls “Disqualifying the Positive” — where neutral or even positive experiences are transformed into negative ones, maintaining a negative belief despite everyday contradictions. [6]
In ML terms : the model has been trained to reject incoming positive signal entirely. The weights never update.
The person who got cheated on once and now treats every partner as a suspect.
The data keeps coming in : a loyal partner, consistent behavior, genuine warmth : and none of it moves the needle.
The bias is too strong. The model has already decided.
In ML terms : underfitting. The model is too rigid to capture reality as it actually is. [4]
The solution is the same as for improving a machine learning model : adjust the bias, and run more iterations.
In life that means deliberately exposing yourself to contradicting data : and actually letting it land. Not just experiencing it. Letting it update you.
Low Bias — Adaptability or Noise Sensitivity

On the other side, someone with low bias re-calibrates fast. This person adapts very quickly to new situations and environments, and can be very creative.
Drop them in a new country, new job, new social circle : they read the room fast, adjust, and move.
Startups love these people. Creative fields are full of them.
Low bias means low resistance to new signal.
The downside, however : this person is one compliment away from confidence and one criticism away from a spiral.
A single bad interaction rewrites the map.
A good morning becomes proof they are a good person. A bad review becomes proof they are a fraud.
They aren’t processing data : they are drowning in it.
In ML terms : overfitting. The model is so sensitive to recent data that it has lost the ability to generalize. [4]
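Both failure modes can be sketched in a few lines. This is a toy illustration, not the cited analysis : a constant predictor stands in for the rigid, high-bias model, and a lookup table that memorizes its training points stands in for the overfit one.

```python
import random

random.seed(0)

# Noisy observations of a simple trend: y = 2x plus a little noise.
train = [(x, 2 * x + random.uniform(-1, 1)) for x in range(10)]

# High-bias model: predicts the same constant regardless of input (underfits).
mean_y = sum(y for _, y in train) / len(train)

def rigid_model(x):
    return mean_y

# Low-bias model: memorizes the training points exactly (overfits).
lookup = dict(train)

def memorizing_model(x):
    return lookup.get(x, 0.0)  # has no idea what to do with unseen inputs

# On a new input x = 20 (true y would be about 40), each fails its own way:
print(rigid_model(20))       # stuck near the training mean, far below the trend
print(memorizing_model(20))  # never saw x = 20, so it answers 0.0
```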
Calibrated Bias — The Actual Goal

Most people think flexibility is always good and rigidity is always bad.
ML shows you it’s a tradeoff.
The goal isn’t low bias. It’s calibrated bias : strong enough to filter noise, loose enough to update on genuine signal.
That’s what good thinkers, good models, and emotionally mature people have in common.
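One way to make “calibrated bias” concrete is an exponential-moving-average belief update, where a step size between 0 and 1 plays the role of how easily the prior yields to new signal. The function and numbers below are purely illustrative :

```python
def run_beliefs(signal, lr):
    """Update a belief toward each incoming observation with step size lr.
    lr near 0 = high bias: almost nothing moves the belief.
    lr near 1 = low bias: the latest observation rewrites it entirely."""
    belief = 0.0  # the prior before any evidence arrives
    for obs in signal:
        belief += lr * (obs - belief)
    return belief

# Ten noisy observations around a true value of 1.0, with one outlier.
signal = [1.1, 0.9, 1.0, 1.2, -5.0, 1.0, 0.9, 1.1, 1.0, 1.0]

print(run_beliefs(signal, lr=0.01))  # barely updated: still near the prior 0.0
print(run_beliefs(signal, lr=1.0))   # equals the last observation: 1.0
print(run_beliefs(signal, lr=0.3))   # absorbs the outlier, climbs back toward 1.0
```

The middle setting is the point : strong enough that one bad observation doesn’t own the belief, loose enough that consistent signal still moves it.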
As Belkin et al. (2019) showed in their work on the double-descent curve, the relationship between model complexity and performance is not simple : the best models are, surprisingly, not always the most flexible ones, but the ones that balance structure with adaptability. [5]
The question worth asking yourself : what is your bias set to right now?
And is it earning its place : or just protecting an old verdict that no longer applies?
References
- IBM Technology. (2025). Bias-Variance Tradeoff. IBM Think. https://www.ibm.com/think/topics/bias-variance-tradeoff
- Wikipedia Contributors. (2025). Bias–variance tradeoff. Wikipedia. https://en.wikipedia.org/wiki/Bias-variance_tradeoff
- Gigerenzer, G., & Brighton, H. (2009). Homo Heuristicus: Why Biased Minds Make Better Inferences. Topics in Cognitive Science, 1(1), 107–143. https://pubmed.ncbi.nlm.nih.gov/25164802/
- Ranglani, H. (2024). Empirical Analysis of the Bias-Variance Tradeoff Across Machine Learning Models. Machine Learning and Applications: An International Journal (MLAIJ), 11. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5086450
- Belkin, M., Hsu, D., Ma, S., & Mandal, S. (2019). Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32). https://www.pnas.org/doi/10.1073/pnas.1903070116
- Burns, D. D. (1980). Feeling Good: The New Mood Therapy. William Morrow and Company. Cognitive Distortion #4: Disqualifying the Positive. https://feelinggood.com/2016/11/18/podcast-10-negative-and-positive-distortions-part-1/