Show HN: PILF, the ultimate solution to catastrophic forgetting in AI models

by NetRunnerSu on 6/27/2025, 11:10 AM with 9 comments

by vermilingua on 6/28/2025, 12:32 AM

Caution: this appears to be part of a very involved sci-fi LARP (as I understand it), so I’d take whatever claims it makes with a grain of salt.

by Ifkaluva on 6/27/2025, 3:28 PM

It’s an interesting idea; I have two questions.

- Surprise is detected by the norm of the gradients. So, doesn’t this suggest that the model already has a way of adjusting to surprise?

- Is there a danger of model instability when the gradients become larger and the learning rate is also increased?
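For context, here is a minimal sketch (in PyTorch, which the project may or may not actually use) of the mechanism the two questions describe: the gradient norm is treated as a "surprise" signal and the learning rate is scaled up with it. The names surprise_gain and max_scale, and the cap on the scale, are assumptions added for illustration, not taken from PILF; the cap is one simple way to address the instability concern in the second question, since otherwise large gradients and an enlarged learning rate would amplify each other.

    import torch

    def surprise_scaled_sgd_step(model, loss, base_lr=1e-3,
                                 surprise_gain=0.1, max_scale=10.0):
        # "Surprise" proxy: global L2 norm of all parameter gradients.
        model.zero_grad()
        loss.backward()
        grad_norm = torch.sqrt(sum((p.grad.detach() ** 2).sum()
                                   for p in model.parameters()
                                   if p.grad is not None))

        # Scale the learning rate with surprise, but clamp the scale so
        # that big gradients and a big learning rate cannot compound
        # without bound (the instability raised in the second question).
        scale = torch.clamp(1.0 + surprise_gain * grad_norm, max=max_scale)
        lr = base_lr * float(scale)

        # Plain SGD update with the surprise-scaled learning rate.
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is not None:
                    p -= lr * p.grad
        return float(grad_norm), lr

As the first question suggests, the gradient already carries the surprise information and steers the update by itself; scaling the step size by the gradient norm is a second use of the same signal, which is why capping (or otherwise damping) the scale matters.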

by upghost on 6/27/2025, 6:43 PM

This looks absolutely fantastic, please accept my meagre professional jealousy. I have long bemoaned manual hyperparam fiddling. I have on occasion dabbled with nonparametric ("genetic") methods of hyperparam tuning inspired by AutoML... but then you still have to manually tune the evolutionary hyperparams.

Finding a way to derive this from the gradients is amazing.
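For comparison, a toy sketch of the kind of evolutionary hyperparameter search mentioned above, searching over a single learning rate. Everything here (eval_fn, population_size, generations, mutation_scale) is invented for the example, and the last three are exactly the evolutionary hyperparameters that still have to be picked by hand.

    import math
    import random

    def evolve_lr(eval_fn, population_size=8, generations=5, mutation_scale=0.3):
        # eval_fn(lr) -> validation loss; lower is better.
        population = [10 ** random.uniform(-5, -1) for _ in range(population_size)]
        for _ in range(generations):
            ranked = sorted(population, key=eval_fn)
            parents = ranked[: population_size // 2]  # keep the best half
            # Log-normal mutation so learning rates stay positive.
            children = [lr * math.exp(random.gauss(0.0, mutation_scale))
                        for lr in parents]
            population = parents + children
        return min(population, key=eval_fn)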

by hackingonempty on 6/27/2025, 6:56 PM

Parameters I'd Like to Fiddle