
AFAIK it works like this: you have test data you develop against and some secret (bigger) test data that you only get access to for the final score. While you are developing you can overfit if you want, but then you probably won't perform that well on the secret tests. What you are meant to do is perform well on the public test data without overfitting. Even if it is not optimal, it mostly solves the overfitting problem. It might work for a cryptocurrency too.
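The public/secret split above can be sketched in a few lines. The scores here are made-up illustrative values, not from any real benchmark:

```python
# Hypothetical scores for two models: one overfit to the public
# dev set, one that generalises honestly. Ranking on the hidden
# split is what makes overfitting unprofitable.
def evaluate(model_scores, split):
    return model_scores[split]

overfit_model = {"public_dev": 0.99, "hidden_test": 0.62}
honest_model  = {"public_dev": 0.90, "hidden_test": 0.88}

# The overfit model "wins" on the public data...
assert evaluate(overfit_model, "public_dev") > evaluate(honest_model, "public_dev")
# ...but the final score uses only the hidden split, where it loses.
assert evaluate(honest_model, "hidden_test") > evaluate(overfit_model, "hidden_test")
```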


I think the problem described is the following:

1. A researcher puts out a model with bad initial parameters/data.

2. The chain workers/miners train the model as requested.

3. The model fails on the test or verification dataset due to the bad setup.

In this case, the miners would not get paid despite doing exactly what was asked of them.
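That failure mode is easy to see if the payout is gated on model quality rather than on faithful execution. A minimal sketch, with all names and numbers hypothetical:

```python
# A naive scheme that pays only when the verification score
# clears a threshold. A miner who trains exactly as asked on a
# badly-configured job still misses the payout.
PAYOUT_THRESHOLD = 0.80

def naive_payout(verification_score, reward=10):
    # Pays on model quality, not on whether the work was done.
    return reward if verification_score >= PAYOUT_THRESHOLD else 0

# Honest work on a job with bad initial parameters/data:
score_from_honest_work = 0.55
assert naive_payout(score_from_honest_work) == 0  # unpaid despite doing the work
```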


You don't need to test that they perform well, just that they perform the same (for algorithms that should be bit-for-bit identical) or similarly (for those that are less deterministic). If multiple people train the same thing and the results match, you can trust they have faithfully run the training as asked. That rewards "train as asked" rather than "train and get a good result".
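For the bit-for-bit case, the check can be as simple as comparing a hash of the final weights across independent runs. A toy sketch, where `train` is a stand-in deterministic training loop and `fingerprint` is an illustrative name, not part of any real protocol:

```python
import hashlib
import struct

# Toy deterministic "training": seeded gradient steps on a single
# parameter. Stands in for a real bit-reproducible training job.
def train(seed, steps=100):
    w = 0.0
    state = seed
    for _ in range(steps):
        # simple LCG standing in for deterministic data shuffling
        state = (state * 1103515245 + 12345) % (2**31)
        grad = ((state % 1000) / 1000.0) - 0.5
        w -= 0.01 * grad
    return w

def fingerprint(w):
    # Hash the exact bit pattern of the weights; any divergence
    # anywhere in the run changes the digest.
    return hashlib.sha256(struct.pack("<d", w)).hexdigest()

# Two independent miners running the same job must produce the
# same fingerprint, regardless of how good the model turns out.
a = fingerprint(train(seed=42))
b = fingerprint(train(seed=42))
assert a == b
```

In practice floating-point nondeterminism (GPU kernels, reduction order) makes exact matches hard, which is why the "similarly" case needs a tolerance rather than a hash.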

The remaining trust gap can be addressed with a few other mechanisms, such as staking, plus the broad incentive all miners share for people to trust the system as a whole. Quite how to design those mechanisms is a complex problem, but not an insurmountable one, I think.
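One way staking could combine with the redundant-training check above: miners post a stake, and whoever disagrees with the majority result is slashed. All names and amounts here are hypothetical, a sketch rather than a worked-out mechanism:

```python
from collections import Counter

def settle(results, stakes, reward=10):
    # Majority fingerprint is taken as the correct result.
    majority, _ = Counter(results.values()).most_common(1)[0]
    payouts = {}
    for miner, digest in results.items():
        if digest == majority:
            payouts[miner] = stakes[miner] + reward  # stake back + reward
        else:
            payouts[miner] = 0  # stake slashed
    return payouts

results = {"alice": "abc", "bob": "abc", "mallory": "xyz"}
stakes = {"alice": 100, "bob": 100, "mallory": 100}
payouts = settle(results, stakes)
# alice and bob recover their stake plus the reward; mallory is slashed
assert payouts == {"alice": 110, "bob": 110, "mallory": 0}
```

Of course a real design has to handle collusion, ties, and majority attacks; this only shows the basic shape of the incentive.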

Disclaimer: this kind of decentralised, more useful work is something I'm investigating now.



