
There are plenty of cheap NVMe SSDs that can push 3.5 GB/s (Samsung 970, Adata XPG). Even with dual 10GbE NICs, you can't match that, nor the low access times of local storage.
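Quick back-of-the-envelope on that, using raw line rates and ignoring protocol overhead (a sketch, not a benchmark):

    # Local NVMe vs dual 10GbE, raw line rates only
    nvme_read = 3.5              # GB/s, a Samsung 970-class drive
    ten_gbe   = 10 / 8           # 1.25 GB/s per 10GbE port
    dual_nic  = 2 * ten_gbe      # 2.5 GB/s aggregate
    print(nvme_read > dual_nic)  # True: the single SSD outruns both NICs combined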

But I do agree that SAN storage is the norm here.



Lots of animation / multimedia houses use workstations with 40Gbps+ adapters (40GbE or Infiniband) connecting to network storage.

It's done this way so people can connect to the storage that's appropriate for the task at hand.

Different projects can be stored on different SAN/NAS arrays, each spec'd out according to the size/needs of the project.

E.g. a short-run animation doesn't need the same storage capacity as a full-length feature film. They may have similar throughput needs though. (Summarising here, but the general concept is ok.)

So, let's say someone is a Flame editor (Flame generally has high bandwidth needs). They're working on project A today, so they connect to the storage being used for that project from their workstation. The next day they might be working on a different project, so they'll connect to a different set of storage.

Other people using different software will connect to the same storage for their tasks, but can have different latency/throughput needs.

Obviously, this isn't the approach taken by single person multimedia er... "houses". ;)


It's usually direct-attached storage, not network-attached. They use USB-C or Thunderbolt cables to connect to a RAID storage device, and then back up/archive to a network-based storage pool later.


I rarely see direct-attached storage anymore. It's too cost effective from a media management standpoint to just go right to 10GbE RJ45 or fiber network storage. The only direct storage I see is when speed is absolutely critical, and that's very rare, mostly just high-end 3D stuff.


Got it. I'm a few years removed, but fast DAS RAID boxes for each workstation were common, with work product being synced to a network share. Looks like the NICs and SANs are fast enough now to run everything off the network.


We are past that now: new PCIe 4.0 SSDs have just been showcased along with the new AMD chips, and they can do 5 GB/s read and a bit above 4 GB/s write (AMD is rumored to have invested in the R&D of the controller). You'd need 40GbE to match one -- and EPYC Rome, also scheduled for this fall, will have 160 lanes, allowing for dozens of them. You could very easily reach 100 GByte/s read, which no network will match.
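The arithmetic behind that claim, assuming every drive sustains its headline read rate and enough x4 slots exist (a sketch, not a real config):

    # 20 PCIe 4.0 x4 drives at the headline 5 GB/s read each
    per_drive_read = 5               # GB/s per drive
    drives = 20                      # 20 * 4 = 80 lanes, well under the 128-160 available
    print(drives * per_drive_read)   # 100 GB/s aggregate read, in theory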


> You could very easily reach 100 GByte/s read which no network will match.

High end networking gear already has higher throughput:

http://www.mellanox.com/page/ethernet_cards_overview

https://www.servethehome.com/mellanox-connectx-6-brings-200g...

A SAN/NAS using the same PCIe 4.0 SSDs you mention could probably fill the pipes too.

... and it would probably need a bunch of network stack tuning. ;)


>> You could very easily reach 100 GByte/s read which no network will match.

> High end networking gear already has higher throughput:

100GB/s > 200Gbps

You would need 4x 200Gbps ports to reach 100GB/s, so 2x MCX653105A-ECAT (each 2x 16-lanes) at >$700 each, and pay for 1/10th of a ~$30,000 switch. IOW, 100GB/s would cost you ~$4,400, before paying for the storage.

Sure, it could be done, but it wouldn't be cheap, and you'll have used most of the PCIe lanes.
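Roughly how those numbers work out, taking the list prices and port counts above at face value (assumptions, not quotes):

    # Cost sketch for 100 GB/s over the network, using the figures above
    target_gbit  = 100 * 8             # 100 GB/s = 800 Gb/s
    ports        = target_gbit // 200  # 4 x 200Gb/s ports
    nic_cost     = 2 * 700             # 2 cards at ~$700 each (card count debated below)
    switch_share = 30000 / 10          # 1/10th of a ~$30,000 switch
    print(ports, nic_cost + switch_share)   # 4 ports, ~$4,400 before storage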


Agreed. Higher end network gear is $$$. :(

EPYC servers (128 PCIe lanes) would probably be the way to go, not Xeons.

This is just Imagineering though. ;)

With the specifics, wouldn't 4 cards be needed? Each card has 2x 100Gb/s ports, so 8 ports in total.


The 200Gb/s network adapters that you linked are 4 times slower than 100GB/s. The parent comment explicitly wrote 100 GByte/s.


Oops. Didn't spot that, sorry. :)

That being said, after re-reading the comment they're talking about adding multiple PCIe cards to a box to achieve 100GB/s of local total throughput.

That would be achievable over a network by adding multiple PCIe 200Gb/s network cards too. :)


Nah, a motherboard with enough M.2 connectors could easily exist. Or U.2, or OCuLink. We have already seen 1P EPYC servers with six OCuLink connectors...


Sure. My point is just that whatever bandwidth you can do locally, you can also do over the network.

As a sibling comment mentions though... the cost difference would be substantial. :(


Twin ConnectX-6 adaptors give you 800Gbps, or ~1GB/s, at an absolute theoretical max.

It's good to see that local storage has finally returned to the reasonable state of being faster than network storage. SATA / SAS was a long, slow period ...


If it’s 800 gbps then it’s 100GB/s, not 1...


You're right. Brainfade.

So, even with protocol overhead from all the stack layers chewing up maybe an order of magnitude, that'd still leave 10GB/s.

So .. I guess it's still possible, if impractical, to outperform a good PCIe SSD with the latest network interface.
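For anyone tripping over the bits-vs-bytes conversion in this sub-thread, a tiny sanity check (the 10x overhead factor is the pessimistic guess above, not a measurement):

    # 800 Gb/s of NICs is 100 GB/s before overhead
    link_gbit   = 800              # e.g. 4 x 200Gb/s ports
    raw_GBps    = link_gbit / 8    # 100 GB/s
    pessimistic = raw_GBps / 10    # ~10 GB/s if the stack really ate 90%
    print(raw_GBps, pessimistic)   # 100.0 10.0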


... ~10GB/s can be done by a single 100Gb/s adapter.

More 0's needed? :)


*cough* 1GB/s can be done by 10GbE.

Maybe a slight typo there? Need to add a few zeros? :)


You're right, doh. See above.


PCIe 3 was not the bottleneck for SSDs. They typically use only 4 PCIe lanes, when they could go up to 16 for 4x the bandwidth.


But the standard M.2 NVMe interface happens to only have 4 lanes. PCIe4 will double the available bandwidth for these very common SSDs.
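Approximate per-lane numbers behind the x4 vs x16 point (post-encoding figures, rounded; treat as ballpark):

    # Usable bandwidth per direction, after 128b/130b encoding (approximate)
    per_lane = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.97}   # GB/s per lane
    for gen, bw in per_lane.items():
        print(gen, "x4 ~", round(4 * bw, 1), "GB/s | x16 ~", round(16 * bw, 1), "GB/s")
    # PCIe 3.0 x4 ~ 3.9 GB/s | x16 ~ 15.8 GB/s
    # PCIe 4.0 x4 ~ 7.9 GB/s | x16 ~ 31.5 GB/s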


The new X570 motherboards will have PCIe 4.0 soon.


Thunderbolt 3 can beat that though


A typical product aimed at mid-range video producers, the G-Speed Shuttle SSD, can do up to 2800MB/s. That's 32TB of local Thunderbolt 3 attached SSD storage.

Mind you, you'll pay $15K for it, but if you're in that business you can well afford it even if you're not a top tier Hollywood production shop.

Given that your storage array costs that much, the fully loaded Mac Pro price (somewhere in the $20K range?) is not that outrageous. The people who use Red cameras and G-Tech storage arrays are the Mac Pro demographic Apple is going for here.

Disclaimer: I used to work with G-Tech but no longer there.


Prices have dropped then, because the G-Speed Shuttle I use is 96TB and doesn't cost half that much. I've also used almost every model, and in the real world you don't ever get close to the advertised R/W speeds on those. Plus, when the volume gets full it will drop to <100MB/s write.

They are popular though. I see them a lot, but I have had very little success with them over the years.


Thunderbolt 3 basically just provides 4 lanes of PCIe 3.0
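Which puts a rough ceiling on it: TB3's 40 Gb/s link typically leaves something like ~22 Gb/s for PCIe data (an approximation, it varies by implementation), which lines up with the ~2800 MB/s figure quoted upthread:

    # Thunderbolt 3 ballpark: 40 Gb/s link, ~22 Gb/s commonly usable for PCIe data
    tb3_link_GBps = 40 / 8           # 5 GB/s raw link rate
    pcie_data_cap = 22 / 8           # ~2.75 GB/s for storage traffic (approximate)
    print(pcie_data_cap * 1000)      # ~2750 MB/s, close to the 2800 MB/s quoted above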



