
> That's patently false.

Then you should file a patent? Or at least email the Clay Mathematics Institute about it, because I believe (I'm not sure) there's a substantial prize associated with proving P=NP.

> For example, you can fix some variables on different instances

You can also parallelize stock picking by just picking stocks at random across multiple portfolios. Do you think I should patent this approach as a winning strategy?



It may well be that for some worst-case instances of SAT/SMT there is no parallel algorithm that solves them faster than a sequential one unless P==NP.

But most people who are not complexity theorists (or perhaps cryptographers) are not generally interested in worst-case instances; they are interested in solving practical problems, which are often not (indeed, practically never) worst-case instances.

My own experience w/ real SMT problems is that I can usually home-brew a fair amount of parallelism by fixing some variables and then solving the derived problems w/ Z3 on each core. But the result often has fairly poor load distribution (e.g. some threads finish long before others), so it could obviously be done better by software that changes which variable it parallelizes on, and distributes those splits adaptively as it goes.
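A minimal sketch of what I mean by "fixing some variables": pick the first k variables, enumerate their 2^k assignments, and solve each derived subproblem independently. Here a brute-force check stands in for Z3 (so it runs anywhere), and the cubes are solved in a loop rather than on separate cores; the function names and the toy CNF are mine, not from any real tool.

```python
# Toy sketch of variable-fixing ("cube") parallelism on a CNF formula.
# Clauses use DIMACS-style integer literals: 2 means x2, -2 means NOT x2.
# A real setup would hand each cube to a Z3/SAT-solver process per core;
# here a brute-force check stands in for the solver.
from itertools import product

def satisfies(clauses, assignment):
    # assignment maps var number -> bool; a clause needs one true literal
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)

def brute_force(clauses, fixed, free_vars):
    # Exhaustively try the unfixed variables under the fixed partial assignment.
    for bits in product([False, True], repeat=len(free_vars)):
        a = {**fixed, **dict(zip(free_vars, bits))}
        if satisfies(clauses, a):
            return a
    return None

def solve_by_cubes(clauses, n_vars, k):
    # Fix variables 1..k, yielding 2^k independent subproblems ("cubes").
    cube_vars = list(range(1, k + 1))
    free_vars = list(range(k + 1, n_vars + 1))
    for bits in product([False, True], repeat=k):
        model = brute_force(clauses, dict(zip(cube_vars, bits)), free_vars)
        if model is not None:
            return model
    return None  # UNSAT: every cube was UNSAT

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(solve_by_cubes(clauses, 3, 1))
```

The point of the loop over `bits` in `solve_by_cubes` is that each iteration is completely independent, which is exactly what makes the naive per-core split possible.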


> But the result often has fairly poor load distribution (e.g. some threads finish long before others), so it could obviously be done better by software that changes which variable it parallelizes on, and distributes those splits adaptively as it goes.

It's astonishing to me that you're staring my point straight in the face and still not getting it. Let me restate in terms of portfolios: If I distribute my stock picks across many portfolios, without an oracle to tell me how to distribute them, some of the portfolios will perform poorly and some will perform well, but on average they will perform as if I just had a single portfolio.


I do wonder how we're talking past each other. Usually when I use an SMT solver I'm interested in finding a satisfying solution. Splitting the problem on some variable the solver would otherwise search over, and distributing the pieces across multiple processors, can improve the wall time until finding one.

I doubt I'm following your metaphors/allusions, but the parallel may be that you're only hoping for any portfolio to have excess performance, and not the sum total.


> I doubt I'm following your metaphors/allusions, but the parallel may be that you're only hoping for any portfolio to have excess performance, and not the sum total.

I'm sorry, but I don't know how else to explain it to you when you're ignoring such an obvious fact: if you're hoping for any portfolio to hit big, but they're all picked without any intelligence (i.e., without an oracle), then you will always be only as good as the worst portfolio. It's such a patently obvious logical argument that I don't know how else to state it so that it becomes clearer. I could easily write out the formula for the expected value of the max of N uniform iid random variables, but you've literally been witness to it by watching the wall-clock time on your threads!
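For what it's worth, the formula alluded to here is standard: the expected maximum of N iid Uniform(0,1) draws is N/(N+1), so the "best portfolio" improves with N but only slowly. A quick sketch checking the closed form against simulation (function name and trial count are mine):

```python
# E[max of N iid Uniform(0,1)] = N/(N+1): the best of N blind picks
# creeps toward 1 only slowly as N grows.
import random

def sim_expected_max(n, trials=100_000, seed=0):
    # Monte Carlo estimate of the expected maximum of n uniform draws.
    rng = random.Random(seed)
    total = sum(max(rng.random() for _ in range(n)) for _ in range(trials))
    return total / trials

for n in (1, 4, 16):
    print(n, n / (n + 1), sim_expected_max(n))
```

(Derivation: P(max <= x) = x^N, so the density is N x^(N-1) and the mean is the integral of x * N x^(N-1) from 0 to 1, i.e. N/(N+1).)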

Let me put it without any metaphors/allusions: you simply don't understand search and/or SAT if you think that dividing a 2^N search space into N pieces will give you a speedup. It will not. It's basically the "geometric" definition of NP.


Of course, dividing the search space into K pieces is not guaranteed to give you a speed-up _in the worst case_: you might pick an unlucky division where K-1 pieces are easily UNSAT but the last remaining piece is as hard as the original (so the wall-clock time is unchanged). However, in practice variable-fixing can and often does give an _expected time_ speed-up, especially if the SAT instance has some inherent parallel search structure (e.g., key or midstate bits for ciphers), and such heuristic tactics are still useful.


> However, in practice variable-fixing can and often does give an _expected time_ speed-up

Proof? Because as far as I know, literally none of the relationships of RP, BPP, or ZPP to NP are known.


Indeed no proof. By "in practice" I meant "on instances encountered in real-world applications."


Then instead of _expected time_ you should say _hoped-for time_, because _expected time_, in this context, is already firmly defined.


I'm afraid I'm still mystified and suspect we must be talking past each other, ... But, in any case, I'm grateful you took the time to try to teach me something.


First of all, it's not P=NP; it would be NC=NP. If you're going to nitpick, at least do it right.

And secondly, when "normal bloody people" talk about parallelizing, we don't mean exponential speedup; we mean using 4 cores in hopes of getting a 3x reduction in time.

And cube'n'conquer does exactly that, and has been a standard tool in SAT solving forever.


NC is a subset of P.

> mean using 4 cores in hopes of getting 3x time shrinking

I've already been over this: reducing 2^N by a factor of 3 is useless.


> I've already been over this: reducing 2^N by a factor of 3 is useless.

And here you're again showing your lack of practice and how deep you are into theory.

It's not useless if it turns a day-long computation into a single work shift.

Because actual people aren't working on theoretical infinite families of problems; they need to solve the specific instances they're stuck on.

You sound like you'd enjoy working with galactic algorithms; too bad inter-universal beings aren't hiring right now.



