
I see a lot of points about how this isn't exactly best practice, but I still don't follow why.

Say you're already running a video sharing site and your servers are serving up all the content to the clients. So, you add your servers as seeders. A client comes in with support for WebRTC, requests pieces in order, gets your servers as seeders along with a couple of other people watching the video, and everyone goes on their merry way.

Rare pieces don't seem to be an issue, because your servers are always seeds, always running, and already have the capacity to support all the demand.

Is this not a win/win to reduce some bandwidth consumption?
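To make the scenario concrete, here is a rough sketch of what that in-order requesting could look like on the client side. The names (Peer, pickNextSequentialPiece) are made up for illustration and are not any real library's API:

    // Sketch of in-order piece selection when the swarm is backed by an
    // always-on server seed. All names here are hypothetical.

    interface Peer {
      id: string;
      isServerSeed: boolean;          // one of the site's own seed servers
      hasPiece: (index: number) => boolean;
    }

    // Pick the lowest-indexed piece we still need that this peer can provide.
    // Playback order matters more than rarity here, because the server seeds
    // keep every piece available regardless of what other viewers hold.
    function pickNextSequentialPiece(
      needed: Set<number>,
      peer: Peer,
      totalPieces: number
    ): number | null {
      for (let i = 0; i < totalPieces; i++) {
        if (needed.has(i) && peer.hasPiece(i)) return i;
      }
      return null; // nothing this peer can give us right now
    }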



Absolutely. All the talk about rejecting streaming really concerns the "true" p2p swarms, where everybody can be a seeder and everybody can be a leecher, and there is only one "true" source, the original seeder. In those cases the peers can go down at any moment, so it is very important for the swarm's vitality that pieces be distributed as efficiently as possible.
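For reference, the usual policy in those free-for-all swarms is rarest-first selection, which looks roughly like this (types and names are illustrative only, not taken from any real client):

    // Rough sketch of rarest-first selection, the usual policy in "true"
    // swarms where any peer may vanish at any time.

    interface SwarmPeer {
      hasPiece: (index: number) => boolean;
    }

    // Count how many connected peers hold each piece, then request the piece
    // we still need that the fewest peers have, so it gets replicated before
    // its holders disappear.
    function pickRarestPiece(
      needed: Set<number>,
      peers: SwarmPeer[],
      totalPieces: number
    ): number | null {
      let best: number | null = null;
      let bestCount = Infinity;
      for (const i of needed) {
        const count = peers.filter(p => p.hasPiece(i)).length;
        if (count > 0 && count < bestCount) {
          bestCount = count;
          best = i;
        }
      }
      return best;
    }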

Your scenario is more or less what we have today in swarms made up of many desktop peers and a few high-speed, always-on seedboxes that already act as a kind of CDN.

The more seeders there are, the better, in any situation. The question for the swarm we're talking about is whether you can expect some seeders to be relatively long-lived (in which case streaming is OK) or whether we're in a free-for-all (in which case it is not). Not all swarms are of the first type, far from it.
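Put another way, the choice boils down to something like this (purely illustrative; whether a seeder counts as long-lived is something you would have to know out of band, since the protocol itself doesn't tell you):

    // If the swarm contains at least one seeder we can treat as long-lived
    // (e.g. the site's own servers), in-order streaming is fine; otherwise
    // fall back to rarest-first. The isLongLived flag is hypothetical.

    interface KnownSeeder {
      isLongLived: boolean;
    }

    type PieceStrategy = "sequential" | "rarest-first";

    function chooseStrategy(seeders: KnownSeeder[]): PieceStrategy {
      return seeders.some(s => s.isLongLived) ? "sequential" : "rarest-first";
    }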


BitTorrent is designed not to depend on such central, always-on servers. Avoiding piece-availability bottlenecks is one of its robustness features.

If you replace that built-in robustness with servers, then yes, of course it will still work. But doing so weakens the decentralized nature of the protocol.



