Hacker News

> You keep on missing the point.

I think you misunderstand. I absolutely understand your point. I'm explicitly rejecting your point. There's a crucial difference.

The evidence laid out in this test does not necessarily lead to the conclusion you have here. There are a number of confounding factors that, if I had more time, I'd like to investigate.

True, your point is ONE possible story for what is going on here. But my instincts suggest something else: I think CNNs have simply been extremely well optimized for the GPU platform, and that indeed, they are one of the few algorithms that run extremely well on a GPU.

I'm curious how a well-put-together "classical" chess AI would perform if it were ported to a GPU. I understand that no such chess AI has ever been written, but that doesn't change my curiosity.

-------

EDIT:

> Chess is not strictly about computing power, and neural-network evaluation functions are vastly better.

I just thought of a way to test this assertion. Instead of porting Stockfish to a GPU, port LeelaZero to a CPU. Run the neural net on the same hardware, and see who wins.

That way, Stockfish keeps its centralized hash table / lazy SMP algorithm (which cannot be scaled to a GPU), while LeelaZero runs with the same compute power that Stockfish has.



It was 14 wins for Leela and 7 wins for Stockfish, which is a pretty large margin at the level they are playing at. In any case, people have thought about using GPUs for chess engines, but it has proven difficult to make work (https://chess.stackexchange.com/questions/9772/cpu-v-gpu-for...). GPUs and CPUs have fundamentally different architectures, and comparing them by raw ops per second without taking their capabilities into account misses the point.


You know very well that LeelaZero was designed to run on a GPU, just like Stockfish was designed to run on a CPU.

You can also do the reverse: it's easy to naively make Stockfish run on a GPU, but it will perform terribly, since its algorithms won't utilize the GPU properly.


I mean, that's what needs to be done, for the test to work.

Either Stockfish's style of algorithm needs to be ported to a GPU (and I'm arguing it's possible to do so efficiently, though a number of unsolved problems would have to be solved first).

Or... Leela Zero needs to be ported to a CPU.

I'm not saying a naive port: I mean a port where the programmer spends a good bit of effort optimizing the implementation. That way it's fair. Those are the two hypotheticals that would make for a "fair" test.

I'm personally more interested in the case of Stockfish somehow ported to a GPU, mostly because it's never been done before. I mean, I don't want to do it, but if anyone ever did, I'd be very interested in reading how they solved all of the issues. :-)



