SirMonkey's comments

At least for the neighboring country (Nicaragua) it's like that. Sometimes the reference points aren't even there anymore, as in the case of some old buildings destroyed in the Managua earthquake in '72.


This is pure speculation - I have no proof. I believe it's also due to the target buyer: AFAIK the biggest adopters are municipalities, not general consumers.


If we are talking about the radio spectrum: not much, other than the threat of the regulators showing up at your doorstep after they track you down with a van. If you are talking about the backbone, so to speak, it's not such a big issue. The original packet forwarder sends JSON over UDP. If you are using a packet forwarder from one of the network operators, some of them use MQTT (over TCP/IP) as the transport.

Some gateways let you filter by network_id. This will not stop someone from polluting/disrupting the radio spectrum, but it will stop your gateway from relaying those packets to your network server.
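For the curious, the "JSON over UDP" wire format is simple enough to poke at yourself. A minimal sketch in Python, assuming the Semtech reference packet forwarder's PUSH_DATA datagram layout (the function name is mine):

```python
import json
import struct

PUSH_DATA = 0x00  # identifier for gateway -> server uplink datagrams

def parse_push_data(datagram: bytes):
    """Parse a Semtech UDP packet-forwarder PUSH_DATA datagram.

    Layout (per the reference packet forwarder):
      byte 0      protocol version
      bytes 1-2   random token
      byte 3      packet identifier (0x00 = PUSH_DATA)
      bytes 4-11  gateway EUI
      bytes 12+   UTF-8 JSON object with rxpk/stat data
    """
    if len(datagram) < 12 or datagram[3] != PUSH_DATA:
        raise ValueError("not a PUSH_DATA datagram")
    version = datagram[0]
    token = struct.unpack(">H", datagram[1:3])[0]
    gateway_eui = datagram[4:12].hex()
    payload = json.loads(datagram[12:].decode("utf-8"))
    return version, token, gateway_eui, payload
```

A real server would also send back a PUSH_ACK echoing the token, but the point is just how thin the protocol is.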


A small trip down memory lane:

My first OpenStack install was on a bunch of VMs. Got that running by following the official documentation. Got enough people interested that I was "allowed" to salvage whatever hardware I could find and put it in a rack.

Getting Neutron to work was a head-first intro to networking, and in the end I had to learn how to configure actual network hardware.

Adding Ceph on top of old spinning disks, watching them die one after the other while the cluster stayed up, with performance no worse than the IBM gear we were paying for, gave me warm fuzzy feelings.

Thanks to TripleO, Fuel and most of the other "installers" of that time, I learned that the "big guys" don't know what they are doing either ;-)

At the end of the day, it took the mystique out of the cloud, and for junior me that was good. It also saved me from the path management wanted me to go down: SharePoint developer/admin.


In reality that "poker player" would probably just get a line of credit at the casino[1]. Moving/Getting money is "hard" mostly if you are poor.

[1] https://en.wikipedia.org/wiki/High_roller


You mean the Helm chart you installed without reading the documentation? Hate the player, not the game.


No. No, no no.

This was a legitimate bug that they immediately fixed when I reported it... There is no legitimate reason to expose the master JNLP port to the internet, ever. The chart did not have a configuration option for this; it just exposed the JNLP port as part of the same k8s Service that exposed the web port. (The JNLP port is for Jenkins workers to phone home to the main instance; it's not for external use.)

"Just read the docs" is not an answer to a chart setup which is just plain broken.
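For anyone wiring this up by hand, the safer shape is to keep the JNLP port out of the externally reachable Service entirely. A rough sketch, assuming the usual Jenkins defaults (8080 for the web UI, 50000 for JNLP agents); the resource names and labels are made up:

```yaml
# Externally reachable: web UI only.
apiVersion: v1
kind: Service
metadata:
  name: jenkins-web        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: jenkins-master
  ports:
  - name: http
    port: 80
    targetPort: 8080
---
# Cluster-internal only: JNLP for agents phoning home.
apiVersion: v1
kind: Service
metadata:
  name: jenkins-agent      # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: jenkins-master
  ports:
  - name: jnlp
    port: 50000
    targetPort: 50000
```

The bug described above was the equivalent of folding both port lists into the first Service.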


I learned about them at my previous job. It's rather easy to cluster, and the webhooks are super easy to get started with. I don't know Erlang myself, but looking at the code and comparing it to some of the Java solutions, it seemed well structured and easier to understand/modify if I wanted to. The only "issue" I have with it right now is the "custom open-source license" they have.


I learned k8s with some NUCs we had laying around at work. Might be easier than a Pi, but not as cheap. Some things I used:

- https://metallb.universe.tf/ (LoadBalancer)

- https://cilium.io/ (networking)

- https://rook.io/ (persistent storage)
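If it helps anyone starting out: MetalLB in layer-2 mode needs almost no configuration. A sketch of its (ConfigMap-era) setup, assuming you can spare a small address range on your LAN for it to hand out:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder: pick a free range on your network
```

With that in place, any Service of type LoadBalancer gets an IP from the pool, just like on a cloud provider.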


I also use MetalLB and Rook. Can't say enough good things about them.

I have also used Kube-router (https://kube-router.io - Digital Ocean's non-virtual networking plugin for bare-metal environments; it puts your containers on your physical network, which is freaking neat) and loved it, but since I started deploying Kubernetes with Rancher, I've found that for dev clusters I don't care which networking is used (currently running Canal).

Not sure what we will decide on when we go to production.


I've had a lot of performance issues with Rook. I set up a k8s cluster six months ago on a few ThinkCentre Tinys (more powerful than a Pi). They each have a single 120 GB SSD, so I've allocated a directory on the filesystem (ext4) to Rook rather than a dedicated device.

Originally I only had 2 GB of RAM and Rook ended up using most of it. Even now, Rook's CPU usage is quite high even when the system is idle. It hasn't lost data, though, even after one of the SSDs died and was unrecoverable, so props for that. To be fair, I haven't tried upgrading since I installed it, so maybe my issues would be resolved by that.

Overall Rook feels somewhat overkill for a homelab user, but I haven't seen any recommended alternatives.


You need to set a memory target for your OSDs, or they will gobble up as much as possible for their cache. In Rook you do that by specifying resource limits on your CephCluster[0].

Sadly, Rook enforces some pretty high minimum values (2 GiB per OSD, 1 GiB per mon), which is fine for production but can make homelab experimentation annoying.

[0]: https://rook.io/docs/rook/v1.1/ceph-cluster-crd.html#cluster...
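Concretely, that looks something like this on the CephCluster spec. A sketch only: the values shown are the minimums mentioned above, the name/namespace are the common defaults, and the rest of the spec is elided:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph          # common default, adjust to your install
  namespace: rook-ceph
spec:
  # ... mon, dashboard, storage sections omitted ...
  resources:
    osd:
      requests:
        memory: "2Gi"
      limits:
        memory: "2Gi"      # caps the OSD cache growth described above
    mon:
      requests:
        memory: "1Gi"
      limits:
        memory: "1Gi"
```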


Does Rook give you the equivalent of EBS root volumes for your nodes then? Is that the function you have it providing? Does it offer something beyond using local host storage and minio?

I ask because I've generally been confused about the use case for Rook despite having read the "what is Rook?" paragraph many times on the project home page. My assumption is that it lets you build your own internal cloud provider. Is that correct?


https://rook.io/docs/rook/v1.1/

Rook will give you:

- Ceph: Block Storage, Shared storage, Object storage

- EdgeFS: Object but apparently will function as Block and File

- Minio: Object

- NFS4: Shared

- Cockroach: Database

- Cassandra: Database

- YugabyteDB: distributed SQL database (new to me)

Only the first two are marked as stable.


I'll mention that they're all independent projects under rook's umbrella.


Not directly; Rook provides a storage backend for in-cluster workload persistent volumes.

In theory you could manage KVM machines with KubeVirt and back them with PVCs from Rook. I have not tried this and would be curious how the performance is.


I would say they're functionally quite equivalent; you just wouldn't pull them in as a root volume.

You should be able to point your default StorageClass at Ceph and have it create RBD block devices for you, auto-provisioning the volumes behind your PVCs, which you then mount your actual data into in your kube manifests.
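A sketch of what that StorageClass looks like with the Ceph CSI driver. The provisioner name, cluster ID, and pool vary by Rook version and namespace; treat all values here as placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
  annotations:
    # make it the cluster default, so plain PVCs land on Ceph
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rook-ceph.rbd.csi.ceph.com   # placeholder; depends on your install
parameters:
  clusterID: rook-ceph                    # placeholder: Rook operator namespace
  pool: replicapool                       # placeholder: an existing CephBlockPool
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
```

Any PVC that doesn't name a StorageClass then gets an RBD image carved out of that pool.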


The parent comment was asking about root volumes for kubernetes nodes

> Does Rook give you the equivalent of EBS root volumes for your nodes then?

I think we are saying the same thing, if I'm not mistaken?


I doubt the parent really cares about whether or not it's a root volume, but for the record you can mount it to root.

It'll just essentially be empty, which means you have the added step of populating root with something useful. You'd have to either use an initContainer to copy in a filesystem at runtime, have Ceph give you a pre-populated directory (probably via thin provisioning), or do something out of band (I've done this in k8s).

This is usually more effort than it's worth, though, as container runtimes populate the root for you with whatever you want anyway. Plus, if you start treating containers as opaque blobs like VMs, you'll end up not knowing where your important variable data lives, which leads to situations where people forget to back it up and test it.


I only said "root volumes" as it's a common use case for EBS volumes. For instance, with etcd running in a container, you would want the host volume to be an EBS volume since etcd is a critical K8s component.


Right, but you would only need to mount the etcd data partition (/var/lib/etcd afaik), rather than the entire node and/or container.

The main problem you have here is chicken and egg: how do you use a StorageClass, a Kubernetes PVC, or Rook to provision block storage for etcd, when you need etcd for Kubernetes to function, and you need Kubernetes to function for Rook et al.?

At some point, you need to bootstrap the world, which is why people start off with cloud APIs, Ansible, or PXE.


>"The main problem you have here, is chicken and egg. How do you use a StorageClass, kubernetes PVC, or rook to provision Block Storage for etcd, when you need etcd for kubernetes to function, and you need kubernetes to function for rook, et al."

I totally agree. The EBS lifecycle management is generally handled by something like Terraform. That's why I was wondering if the use case for Rook is primarily bare-metal Kubernetes, since AWS/GCP et al. already provide these. So even in a bare-metal environment, where you still need config management tools like Ansible/Terraform to do things like provision block storage, what's the upside of Rook over existing iSCSI/Ceph/Minio installations?


>"Not directly, rook provides a storage backend for in-cluster workload persistent volumes."

Right, so is it fair to say that the use case for Rook is running Kubernetes on bare metal? For instance, if I'm running a Kubernetes cluster on AWS, then AWS already provides persistent volumes via EBS and object storage via S3. Or am I overlooking a use case where you would run Rook on a cluster running on AWS/GCP?


Yes, Rook is great for bare metal and enables dynamically provisioned persistent volumes.

Rook runs a Ceph cluster inside your Kubernetes cluster to provide the storage. The downside is that this consumes CPU/memory (and obviously storage) resources, whereas on AWS, EBS is integrated into the platform, so it does not "run" inside your cluster (other than the AWS cloud-provider that provides the integration).

If you wanted to run the same storage backend on bare metal and AWS, rook would enable that.


The funny thing with Envoy is that you can take the source of the authz service and build your own auth mechanism for other protocols. We are doing that for MQTT.
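For context, the hook point on the Envoy side is the ext_authz network filter, which calls out to an external service for each new connection before handing it to the upstream. A sketch of the listener's filter chain; the cluster names are made up, and any MQTT-specific logic would live in the authz service itself, not here:

```yaml
filter_chains:
- filters:
  - name: envoy.filters.network.ext_authz
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.ext_authz.v3.ExtAuthz
      stat_prefix: mqtt_authz
      grpc_service:
        envoy_grpc:
          cluster_name: ext-authz        # made-up cluster pointing at the custom authz service
  - name: envoy.filters.network.tcp_proxy
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
      stat_prefix: mqtt
      cluster: mqtt-broker               # made-up cluster for the upstream MQTT broker
```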


For those too lazy to try it, a demo of the gameplay: https://www.youtube.com/watch?v=Kx2cHmkqWZ4

I remember playing this with my dad and having it installed for a birthday party. Fun times.

