Why does regularisation mostly make your parameters smaller, not bigger?

If they asked me this in a junior quant interview, I probably wouldn't get the job.

Do you know why that's the case? I don't. I was actually just chilling in Regent's Park when some random thoughts came to mind, and now here we are.

Firstly, we will take a look at misspecified models. Intuitively, misspecified models are most likely to be biased; but how do we sharpen that intuition? You may or may not learn some new stuff about linear regression here.
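
To make that concrete, here's a minimal sketch (my own toy in Python, not necessarily the setup the article uses): the true model has two correlated regressors, we leave one out, and the slope on the remaining one comes out biased.

```python
# My own toy, not necessarily the article's setup: omitted-variable bias in OLS.
# True model: y = 1.0*x1 + 0.5*x2 + noise, with x1 and x2 correlated.
# Regressing y on x1 alone is misspecified, and the slope estimate is biased.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)   # correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.standard_normal(n)

# Misspecified: regress y on x1 only
beta_misspec = (x1 @ y) / (x1 @ x1)

# Correctly specified: regress y on both x1 and x2
X = np.column_stack([x1, x2])
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"misspecified slope on x1: {beta_misspec:.3f}   (true 1.0, bias ~ 0.5*0.8 = 0.4)")
print(f"full-model slope on x1:   {beta_full[0]:.3f}")
```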

Next, we will show a loose equivalence between regularisation and correcting the bias of a misspecified model. But this brings up something that is not so intuitive: exacerbating the bias of an already biased model can be beneficial. We will attempt to reconcile this observation with the good old bias-variance tradeoff.
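
For a taste of that non-intuitive bit, here's a toy simulation (again my own, assuming the same omitted-variable setup as above, not the article's derivation): the misspecified slope is already biased towards zero, shrinking it further makes the bias worse, and yet the MSE still drops because the variance drops faster.

```python
# My own toy, not the article's derivation: the misspecified slope here is biased
# towards zero (the omitted variable enters with -0.5, dragging E[b_hat] down to
# ~0.6 vs the true 1.0). Shrinking the estimate further towards zero makes that
# bias WORSE, yet total MSE still falls because variance falls faster.
import numpy as np

rng = np.random.default_rng(1)
true_b1, n, n_sims, shrink = 1.0, 10, 50_000, 0.7

raw = np.empty(n_sims)
for i in range(n_sims):
    x1 = rng.standard_normal(n)
    x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)
    y = true_b1 * x1 - 0.5 * x2 + 2.0 * rng.standard_normal(n)
    raw[i] = (x1 @ y) / (x1 @ x1)          # misspecified, biased OLS slope

for name, est in [("raw", raw), ("shrunk", shrink * raw)]:
    bias, var = est.mean() - true_b1, est.var()
    print(f"{name:6s} bias={bias:+.3f}  var={var:.3f}  mse={bias**2 + var:.3f}")
```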

Of course, a quantymacro article wouldn't be complete without Ridge Regression. Here we will go through what I hope to submit to you: the most intuitive explanation of how exactly Ridge reduces variance.
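
As a teaser (this is just a quick numerical illustration of the claim, not the explanation in the article itself): with two highly collinear regressors, OLS coefficients bounce around wildly from sample to sample, while ridge estimates are far more stable.

```python
# Quick illustration, not the article's explanation: with two highly collinear
# regressors, OLS coefficients swing wildly across samples; ridge estimates are
# far more stable (lower variance), at the cost of a bit of shrinkage bias.
import numpy as np

rng = np.random.default_rng(2)
n, lam, n_sims = 50, 10.0, 5_000
betas_ols, betas_ridge = [], []

for _ in range(n_sims):
    x1 = rng.standard_normal(n)
    x2 = 0.95 * x1 + np.sqrt(1 - 0.95**2) * rng.standard_normal(n)   # corr ~ 0.95
    X = np.column_stack([x1, x2])
    y = 1.0 * x1 + 1.0 * x2 + rng.standard_normal(n)

    betas_ols.append(np.linalg.solve(X.T @ X, X.T @ y))
    betas_ridge.append(np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y))

for name, b in [("OLS", np.array(betas_ols)), ("ridge", np.array(betas_ridge))]:
    print(f"{name:5s}  mean={b.mean(axis=0).round(2)}  std={b.std(axis=0).round(2)}")
```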

And finally, we will go through some examples of regularisation that makes the parameters bigger. Might sound a bit weird, but actually it's not that weird, and there are absolutely some instances where it makes a lot of sense. Some of them might even be things that we all already do but don't think much about.
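
One candidate instance, and this is purely my guess at what such a case could look like rather than what the article covers: penalising deviation from a non-zero target b0 (say, "market beta is about 1") instead of from zero, so noisy estimates that come out below the target get pulled up.

```python
# Purely my guess at one such case, not necessarily the article's: shrinking
# towards a non-zero prior b0 instead of towards zero. The penalty lam*(b - b0)^2
# pulls the estimate towards b0, so a noisy estimate that comes out below b0 gets
# pulled UP, i.e. the parameter gets bigger.
import numpy as np

rng = np.random.default_rng(3)
n, lam, b0 = 30, 50.0, 1.0                    # b0 = prior target, e.g. "beta is about 1"
x = rng.standard_normal(n)
y = 0.6 * x + 1.5 * rng.standard_normal(n)    # true slope 0.6, very noisy sample

beta_ols = (x @ y) / (x @ x)
# Closed form of argmin ||y - x*b||^2 + lam*(b - b0)^2
beta_shrunk = (x @ y + lam * b0) / (x @ x + lam)

print(f"OLS beta:               {beta_ols:.2f}")
print(f"beta shrunk towards b0: {beta_shrunk:.2f}")
# The penalised estimate always sits between the OLS beta and b0, so whenever the
# OLS beta lands below 1, the regularised parameter is the bigger of the two.
```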


This article is part of my new initiative where I write about stuff/ideas that I find interesting, with no conclusive takes. It is meant to provoke some ideas from the audience, in the hope that they will teach me things that I don't know.
