
Except it is a stretch to say it is "their theme park restaurant". This story was dramatically oversimplified in the media, and Disney's position was nowhere near as unreasonable as it is widely believed to be.

The argument was not "they agreed to a EULA 5 years ago and therefore mandatory arbitration in all disputes with Disney".

This is a privately owned restaurant at a glorified shopping mall within the larger Walt Disney World resort. If you died due to a severe allergic reaction at a normal restaurant in a normal shopping mall in Florida, the mall owners would generally not be liable unless something else were going on.

The theory that Disney is liable here is more than anything based on the *restaurant being featured on their app.* The EULA for *that app* would certainly be relevant to this argument.

Now, the Disney lawyers also tried to argue that the Disney+ EULA would actually (at least plausibly) be relevant. That is more than a bit of a stretch, especially for a free trial from years ago, and I'd be surprised (but IANAL) if such a theory would actually hold up in court. Still, on a spectrum from "person died due to maintenance failure on a Magic Kingdom ride" to "person died from going to a restaurant featured on a Disney+ program", if you're arguing that the Disney+ EULA is relevant, this is a whole lot closer to the latter than the former.


It's my belief the Disney+ EULA claim was just the lawyers doing the "throw everything at the wall and see what sticks" shtick (no pun intended). They knew it was unlikely to hold up, but tried it anyway because, if it did, it would help future claims.


>Disney's position was nowhere near as unreasonable as everyone understands it to be.

>Now, the Disney lawyers also tried to argue that the Disney+ EULA would actually (at least plausibly) be relevant.

Well, you know, they also could have not done _that_. Having done it, they deserve all the flak they've gotten and more, simply because they resorted to a scummy tactic, whatever the reason.


Except that the theme park did present the restaurant as being part of the park, which makes it quite reasonable to hold the theme park financially responsible for the entire debacle.

If a chainsaw juggler on a cruise ship cuts my dad in half while he's sleeping on his deck chair, "That entertainer was not a direct employee of Royal Caribbean" will hold exactly zero water in determining liability.

All arguments were complete shite.


There are very substantial differences between your chainsaw juggler scenario and the Disney one. Notably, the cruise ship is access controlled and your dad didn't actively engage with the chainsaw juggler.

To be clear, this isn't part of Magic Kingdom or one of the proper Disney theme parks. This is a shopping area, open to the public without admission.

For a closer scenario: the cruise ship docks at one of its stops for a day. The area around where the ship docks is owned by Royal Caribbean but open to the public. Most of the stores are privately owned and operated, leasing space from Royal Caribbean. One of those stores is a theater that runs a chainsaw juggling show. Royal Caribbean's website/app includes the full schedule of that theater and highlights that show as perfectly-safe-we-assure-you. Your dad attends that show and gets bisected.

The key point here, entirely not captured by your scenario: the theory making Disney plausibly liable is that Disney's own online services presented this restaurant and its menus in a way that made the plaintiff believe the restaurant was subject to Disney's allergy standards. It is not at all unreasonable to say that EULAs for those online services are relevant to this dispute.


The main downside to not having swap is that under memory pressure Linux may start discarding clean file-backed pages, whereas if swap were available it could instead evict anonymous pages that are actually cold.

On a related note, your program code is very likely (mostly) clean file-backed pages.

Of course, in the modern era of SSDs this isn't as big a problem, but in the late days of running serious systems with OS/programs on spinning rust I regularly saw full-blown collapse this way: processes getting stuck for tens of seconds as everything on the system contended on a single disk, page faulting just to execute code.
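
On Linux you can actually watch this happen. Here's a rough, Linux-only sketch of mine (not from any kernel docs; /bin/ls is just a stand-in path) that uses mincore(2) to count how many pages of a mapped file are currently resident -- under memory pressure on a swapless box you'd see this number fall for cold binaries:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv) {
        // Path is just a stand-in; point it at any binary or library you care about.
        const char* path = argc > 1 ? argv[1] : "/bin/ls";
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }
        void* map = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }
        long page = sysconf(_SC_PAGESIZE);
        size_t npages = (st.st_size + page - 1) / page;
        std::vector<unsigned char> vec(npages);
        // mincore() reports, per page, whether the mapping is resident in RAM.
        if (mincore(map, st.st_size, vec.data()) != 0) { perror("mincore"); return 1; }
        size_t resident = 0;
        for (unsigned char b : vec) resident += b & 1;
        std::printf("%zu of %zu pages of %s are resident\n", resident, npages, path);
        return 0;
    }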


If anything my guess here would be the master/slave/cable select jumper.

Like, last I looked the Linux kernel still had MFM/RLL support, although I'm not sure that's going to get included even as a module in a modern distro.


IIRC, the Soundblaster 16 driver received a bug fix recently.


If the mid range laptop happens to have a Thunderbolt/USB4 port there are a number of Thunderbolt adapters built around Mellanox ConnectX-4 Lx SFP28 NICs.


Congratulations, you've created a server that lets people have shells running as the user running telnetd.

You presumably want them to run as any (non-root) user. The capability you need for that, to impersonate arbitrary (non-root) users on the system, is pretty damn close to being root.
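
For a rough sense of why, a minimal sketch (uid 1001 is just a made-up target): switching a process to another user via setuid() requires root or CAP_SETUID, and fails with EPERM for an ordinary user.

    #include <unistd.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main() {
        uid_t target = 1001;  // hypothetical non-root user to impersonate
        if (setuid(target) != 0) {
            // Without root or CAP_SETUID this is where an unprivileged telnetd-alike stops.
            std::fprintf(stderr, "setuid(%d): %s\n", (int)target, std::strerror(errno));
            return 1;
        }
        std::printf("now running as uid %d\n", (int)getuid());
        return 0;
    }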


Well obviously each user just needs to run their own telnet daemon, on their own port of course.


This document, being from 2010, is of course missing the C11/C++11 atomics that replaced the need for compiler intrinsics or non-portable inline asm when "operating on virtual memory".

With that said, at least for C and C++, the behavior of (std::)atomic across processes is technically slightly outside the scope of the standard, but in practice (and as at least recommended by the C++ standard), atomics for which (atomic_)is_lock_free() returns true are generally usable between processes.
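
As a concrete (and hedged) sketch of what that looks like in practice -- the shm name "/hn_demo" is made up, and a real setup would construct the atomic exactly once rather than on every run:

    #include <atomic>
    #include <cstdio>
    #include <fcntl.h>
    #include <new>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        // Create (or open) a small POSIX shared memory segment.
        int fd = shm_open("/hn_demo", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, sizeof(std::atomic<long>)) != 0) { perror("ftruncate"); return 1; }
        void* p = mmap(nullptr, sizeof(std::atomic<long>), PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        // Placement-new the atomic into the shared mapping (first process only, ideally).
        auto* counter = new (p) std::atomic<long>(0);
        std::printf("is_lock_free: %d\n", (int)counter->is_lock_free());
        // Any other process mapping the same segment can fetch_add concurrently.
        long prev = counter->fetch_add(1, std::memory_order_relaxed);
        std::printf("previous value: %ld\n", prev);
        return 0;
    }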


That's right, atomic operations work just fine for memory shared between processes. I have worked on a commercial product that used this everywhere.


>HDDs typically have a BER (Bit Error Rate) of 1 in 10^15, meaning some incorrect data can be expected around every 100 TiB read. That used to be a lot, but now that is only 3 or 4 full drive reads on modern large-scale drives. Silent corruption is one of those problems you only notice after it has already done damage.

While the advice is sound, this number isn't the right number for this argument.

That 10^15 number is for UREs, which aren't going to cause silent data corruption -- simple naive RAID-style mirroring/parity will easily recover from a known error of this sort without any filesystem-level checksumming. The rates for silent errors, where the disk returns wrong data (the kind that checksumming actually protects against), are a couple of orders of magnitude lower.


RAID would only be able to recover if it KNEW the data was wrong.

Without a checksum, hardware RAID has no way to KNOW it needs to use the parity to correct the block.


My point is that the most common type of failure here has the drive returning an error, not silently returning bogus data.


This is pure theory. Shouldn't BER be counted per sector, etc.? We shouldn't treat all disk space as a single entity, IMO.


Why would that make a difference unless some sectors have higher/lower error rates than others?


For a fixed bit error rate, making your typical error 100x bigger means it will happen 100x less often.

If the typical error is an entire sector, that's over 30 thousand bits. 1:1e15 BER could mean 1 corrupted bit every 100 terabytes or it could mean 1 corrupted sector every 4 exabytes. Or anything in between. If there's any more detailed spec for what that number means, I'd love to see it.
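
Back-of-envelope for the two extremes (assuming 4096-byte physical sectors, which is my assumption, not something the spec states):

    #include <cstdio>

    int main() {
        const double ber = 1e-15;              // spec: one bad bit per 1e15 bits read
        const double sector_bits = 4096 * 8.0; // 32768 bits per (assumed) physical sector
        // If errors land as isolated bits: one per ~1.25e14 bytes (~114 TiB) read.
        double bytes_per_bit_error = 1.0 / ber / 8.0;
        // If errors land as whole sectors: one per ~4.1e18 bytes (~3.6 EiB) read.
        double bytes_per_sector_error = sector_bits / ber / 8.0;
        std::printf("one bad bit    per %.3g bytes read\n", bytes_per_bit_error);
        std::printf("one bad sector per %.3g bytes read\n", bytes_per_sector_error);
        return 0;
    }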


This stat is also complete bullshit. If it were true, your scrubs of any 20+TB pool would get at least corrected errors quite frequently. But this is not the case.

Consumer-grade drives are often given an even worse spec of 1 in 1e14. For a 20TB drive, that's more than one error every scrub, which does not happen. I don't know about you, but I would not consider a drive to be functional at all if reading it out in full would produce more than one error on average. Pretty much nothing said on that datasheet reflects reality.


> This stat is also complete bullshit. If it were true, your scrubs of any 20+TB pool would get at least corrected errors quite frequently. But this is not the case.

I would expect the ZFS code is written with the expected BER in mind. If it reads something, computes the checksum, and goes "uh oh", then it will probably first re-read the block/sector, see that the result is different, possibly re-read it a third time, and if all is OK continue on without even bothering to log an obvious BER-related error. I would expect it only bothers to log or warn about something when it repeatedly reads the same data that breaks the checksum.

Caveat Reddit, but https://www.reddit.com/r/zfs/comments/3gpkm9/statistics_on_r... has some useful info in it. The OP starts off with a similar premise, that a BER of 10^-14 is rubbish, but then people in charge of very large pools of drives wade in with real-world experience to give more context.


That's some very old data. I'm curious as to how things have changed with all the new advancements like helium drives, HAMR, etc. From the stats Backblaze helpfully publish, I feel like the huge variance between models far outweighs the importance of this specific stat in terms of considering failure risks.

I also thought that it's "URE", i.e. unrecoverable with all the correction mechanisms. I'm aware that drives use various ways to protect against bitrot internally.


The almost-as-interesting takeaway I have (which I am sure is in their internal postmortem) is that they presumably don't exercise any glibc getaddrinfo clients in their release regression testing.
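
For reference, the kind of trivial smoke test implied here is only a few lines (example.com is just a placeholder target): resolve a name through glibc's getaddrinfo and fail loudly if it breaks.

    #include <netdb.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        struct addrinfo hints;
        std::memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      // either IPv4 or IPv6
        hints.ai_socktype = SOCK_STREAM;
        struct addrinfo* res = nullptr;
        int rc = getaddrinfo("example.com", "443", &hints, &res);
        if (rc != 0) {
            std::fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;  // a release gate would fail here
        }
        freeaddrinfo(res);
        std::puts("resolved ok");
        return 0;
    }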


The actual algorithm (which is pretty sensible in the absence of delayed ack) is fundamentally a feature of the TCP stack, which in most cases lives in the kernel. To implement the direct equivalent in userspace against the sockets API would require an API to find out about unacked data and would be clumsy at best.

With that said, I'm pretty sure it is a feature of the TCP stack only because the TCP stack is the layer they were trying to solve this problem at, and it isn't clear at all that "unacked data" is particularly better than a timer -- and of course if you actually do want to implement application-layer Nagle directly, delayed acks mean that application-level acking is a lot less likely to require an extra packet.
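
To make the "timer instead of unacked data" point concrete, here's a hedged sketch of the userspace alternative (class and constant names are mine): turn off kernel Nagle with TCP_NODELAY, buffer small writes, and flush on either a size threshold or a short timer driven by the caller's event loop.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <chrono>
    #include <string>

    class CoalescingWriter {
    public:
        explicit CoalescingWriter(int fd) : fd_(fd) {
            int one = 1;  // Nagle is off; coalescing is now the application's job.
            setsockopt(fd_, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
        }
        void Write(const char* data, size_t len) {
            buf_.append(data, len);
            if (buf_.size() >= kFlushBytes) Flush();  // big enough: send immediately
        }
        // The caller's event loop invokes this every millisecond or two.
        void MaybeFlushOnTimer() {
            if (!buf_.empty() &&
                std::chrono::steady_clock::now() - last_flush_ >= kFlushInterval) {
                Flush();
            }
        }
    private:
        void Flush() {
            ::send(fd_, buf_.data(), buf_.size(), 0);  // error handling omitted
            buf_.clear();
            last_flush_ = std::chrono::steady_clock::now();
        }
        static constexpr size_t kFlushBytes = 1400;  // roughly one MSS worth
        static constexpr std::chrono::milliseconds kFlushInterval{5};
        int fd_;
        std::string buf_;
        std::chrono::steady_clock::time_point last_flush_ = std::chrono::steady_clock::now();
    };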


If your application needs that level of control, you probably want to use UDP and run something like QUIC over it.

BTW, hardware-based TCP offload engines exist... I don't think they are widely used nowadays though.


Hardware TCP offloads usually deal with the happy fast path - no gaps or out-of-order inbound packets - and fall back to software when shit gets messy.

Widely used in low-latency fields like trading.


Speaking of Slashdot, some fairly frequent poster back around 2001/2002 had a signature that was something like

mv /bin/laden /dev/null

and then someone explained how that was broken: even if that succeeds, what you've done is replace the device file /dev/null with the regular file that was previously at /bin/laden, and then whenever other things redirect their output to /dev/null they'll be overwriting this random file rather than having their output discarded immediately, which is moderately bad.

Your version will just fail (even assuming root) because mv won't let you replace a file with a directory.

