
I've been curious about Oxide for a year or two without fully understanding their product. People talking about the "hyperconverged" market in this thread gave me an understanding for the first time.

Given this, can you help me understand in what ways they are different?

When I went to the Nutanix website yesterday, the link showed that I'd previously visited them (not a surprise, I look up lots of things I see mentioned in discussions), but their website does an extremely poor job of explaining their business to someone who lacks foundational understanding, even after I'd just read up on "hyperconverged".




If you want to KNOW the chain of custody for all of your OS and software, from the bootloader to the switch chip, and you want to run this virtualization platform airgapped, buying at rack scale, you want Oxide. They make basically everything in-house. That's government, energy, finance, etc.: customers that need discretion, security, and performance, and something that works very reliably in a high-trust environment.

Also check this out: https://www.linkedin.com/posts/bryan-cantrill-b6a1_unbeknown...

If you need a basic "VM platform", VMware, Proxmox, Nutanix, etc. all fit the bill with varying levels of features and cost. Nutanix has also been making some fairly solid Kubernetes plays, which is nice on hyperconverged infrastructure.

Then if you need a container platform, you go the opposite direction: Kubernetes/OpenShift, where you run your VMs from your container platform instead of running your containers from your VM platform.
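For a concrete sense of the "VMs on the container platform" model, here's a minimal sketch of a KubeVirt VirtualMachine manifest (KubeVirt is what OpenShift Virtualization builds on); the VM name and the containerdisk image are illustrative, not anything from the thread:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: demo-vm                  # illustrative name
    spec:
      running: true                  # start the VM when the object is created
      template:
        spec:
          domain:
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio      # present the disk as a virtio device
            resources:
              requests:
                memory: 1Gi
          volumes:
            - name: rootdisk
              containerDisk:         # boot disk shipped as a container image
                image: quay.io/containerdisks/fedora:latest   # example image

Apply it with kubectl and the VM gets scheduled like any other workload, so the same RBAC, networking, and storage machinery applies to it.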

As far as "hyperconverged"...

"Traditionally" with something like VMware, you ran a 3-tier infrastructure: compute, a storage array, and network switching. If you needed to expand compute, you just threw in another 1U-4U on the shelf. Then you wire it up to the switch, provision the network to it, provision the storage, add it to the cluster, etc. This model has some limitations but it scales fairly well with mid-level performance. Those storage arrays can be expensive though!

As far as "hyperconverged", you get bigger boxes with better integration. One-click firmware upgrades for the hardware fleet, if desired. Add a node, it gets discovered, automatically provisions to the rest of the configuration options you've set. The network switching fabric is built into the box, as is the storage. This model brings everything local (with a certain amount of local redundancy in the hardware itself), which makes many workloads blazing fast. You may still on occasion need to connect to massive storage arrays somewhere if you have very large datasets, but it really depends on the application workloads your organization runs. Hyperconverged doesn't scale compute as cheaply, but in return you get much faster performance.


Here is an answer by steveklabnik on this topic: https://news.ycombinator.com/item?id=30688865




