
"If we keep on optimizing the proxy objective, even after our goal stops improving, something more worrying happens. The goal often starts getting worse, even as our proxy objective continues to improve. Not just a little bit worse either — often the goal will diverge towards infinity. This is an extremely general phenomenon in machine learning. It mostly doesn't matter what our goal and proxy are, or what model architecture we use. If we are very efficient at optimizing a proxy, then we make the thing it is a proxy for grow worse."

Source: "Too much efficiency makes everything worse: overfitting and the strong version of Goodhart's law" (Jascha Sohl-Dickstein)
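
The mechanics are easiest to see in the classic overfitting setting the title refers to. Below is a minimal, self-contained numpy sketch, not code from the quoted post: the proxy is training MSE, which keeps improving as least squares is given more polynomial capacity, while the goal, error on held-out data from the same distribution, eventually gets much worse. The data-generating function, degrees, and seed are arbitrary illustrative choices; the same pattern typically appears if capacity is held fixed and a flexible model is simply trained for too long.

```python
# Illustrative sketch of the proxy/goal gap described above.
# Proxy objective: training MSE, which we can always drive down with more capacity.
# Goal: MSE on held-out data from the same distribution, which eventually worsens.

import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Smooth signal plus noise; the noise is what high-capacity fits latch onto.
    x = np.sort(rng.uniform(-1.0, 1.0, size=n))
    y = np.sin(3.0 * x) + 0.3 * rng.normal(size=n)
    return x, y

x_train, y_train = make_data(20)     # small training set: easy to overfit
x_test, y_test = make_data(2000)     # large held-out set stands in for the goal

for degree in (1, 3, 5, 9, 15, 19):
    # Higher degree = optimizing the proxy harder: training error can only go down,
    # since each polynomial family contains the lower-degree ones.
    coeffs = np.polyfit(x_train, y_train, deg=degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}  proxy (train MSE) {train_mse:.5f}  goal (test MSE) {test_mse:.3g}")
```

Typical output shows the proxy falling toward zero while the goal bottoms out around the moderate degrees and then grows by orders of magnitude at the highest ones, which is the overfitting instance of the strong Goodhart effect the quote describes.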

