Really thoughtful take. That exact gap, bridging identity-aware tunneling (like WireGuard) with protocol-aware proxy decisions, is what we set out to solve with Border0.
We pair WireGuard-style tunnels with real-time identity (SSO, device, and group context) and protocol-aware proxies for SSH, RDP, HTTP, PostgreSQL, MySQL, MSSQL, Elasticsearch, and Kubernetes. Our policy engine lets you write rules like “only the DBA group can run DELETEs in Prod” or “Support can exec into this pod,” and we log every query, command, or request, all tied back to the user and device.
Think of it as combining the modern VPN experience of Tailscale with the deep authZ and observability of Teleport. I call it VPN plus PAM. Would love your thoughts if you give it a look.
Outages are inevitable, but the Rogers outage in 2022 had some devastating consequences.
Route leaks can happen to anyone, but the fact that it brought down their entire network, including voice and internet services across all provinces, was unacceptable.
What's even more concerning is that they had no out-of-band (OOB) access, which meant no management access to their own network. That explains why the outage lasted a whopping 24 hours.
In my opinion, the lack of OOB access was the most critical failure, and yet the most preventable one. Proper OOB access is a must; I wouldn't operate a network without it, and I don't understand why Rogers thought that was acceptable.
I want to try adding it in af_inet_syscall.go, but it seems like a bit more work than I have time for right now. I'm surprised it's already quite fast, even without it.
I've had a great experience with Tailscale, but when it comes to databases, I lean towards Border0. What stands out for me is the seamless single sign-on access it offers to my database, eliminating the need for a traditional perimeter VPN and shared database credentials.
Another aspect I appreciate is the granular control it provides. With Border0, I can define specific database access levels, like SELECT, DELETE, and INSERT, which can be tailored based on various flexible policies, including SSO identity, IP addresses, or country-specific parameters. Also, the detailed query logs tied to each SSO identity are invaluable for accurate attribution and tracking.
ARIN’s free pool of IPv4 address space was depleted on 24 September 2015. As a result, we no longer can fulfill requests for IPv4 addresses unless you meet certain policy requirements that reserved blocks of IPv4 addresses for special cases. https://www.arin.net/resources/guide/ipv4/
i.e., you have virtually no option other than to buy on the private market.
Based on data from the IPv4 brokerage ipv4.global, the cost of IPv4 addresses has seen a notable increase. In 2014, the price ranged from $6 to $24 per IP, depending on the size of the subnet. By 2021, this range had jumped to between $23 and $60 per IP. The fluctuation between the lowest and highest sales prices for each IPv4 address remained relatively stable until 2021, when we began to see more significant spikes.
The peak prices for IPv4 addresses in 2021 were observed in September and October. During these months, IP addresses allocated by RIPE NCC and ARIN fetched as much as $60 each. Specifically, a /24 block from RIPE NCC sold for $15,360, while ARIN's /22 and /23 blocks went for $61,440 and $30,720, respectively.
Fast forward to October 2022, and the highest sale of the year was recorded: IP addresses allocated by ARIN sold for $60.70 each, or $15,540 for a complete /24 block. Despite these peaks, the IPv4 market appears to have reached a more stable pricing structure, especially when compared to the more volatile price shifts seen in 2021.
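The per-block totals quoted above follow directly from the per-IP price times the number of addresses in the prefix: an IPv4 /n contains 2^(32 - n) addresses. A quick Python sanity check against the figures in this thread:

```python
# Block total = per-IP price * number of addresses in the prefix.
# An IPv4 /n prefix contains 2 ** (32 - n) addresses (256 for a /24).

def block_total(price_per_ip, prefix_len):
    """Total sale price for an IPv4 block at a given per-IP price."""
    return price_per_ip * 2 ** (32 - prefix_len)

print(block_total(60, 24))     # /24 at $60/IP -> 15360
print(block_total(60, 23))     # /23 at $60/IP -> 30720
print(block_total(60, 22))     # /22 at $60/IP -> 61440
print(block_total(60.70, 24))  # /24 at $60.70/IP -> 15539.2 (roughly the quoted $15,540)
```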
You'll need to become a member of one of the Regional Internet Registries (RIRs), like RIPE or ARIN. Then you can buy, say, a /24 and transfer it into your RIPE/ARIN account.
Now you have your own IPv4 range, and you can start using it, for example, for your own servers. To do so, you need to "announce" this new /24 to the internet using a protocol known as BGP. You can do that yourself, using a router, assuming you have an Autonomous System Number (ASN). You can get one of those via RIPE or ARIN as well.
Or rely on your hosting provider to do it for you. For example, AWS supports "bring your own IP address"; in that case, they announce the IP prefix in BGP for you, and you can assign your EC2 instances public IPs out of your range.
Equinix Metal (previously Packet) also makes it easy to do this.
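Concretely, the "announce it yourself with a router" path can be as small as a couple of stanzas in a BGP daemon. Here's a hedged sketch in BIRD 2 syntax; the ASNs (64496/64511), prefix (203.0.113.0/24), and neighbor address (192.0.2.1) are documentation values standing in for your own ASN, your block, and your transit provider's details:

```
# Pull your newly transferred /24 into the routing table so BGP can
# originate it. Blackhole here just anchors the route locally; actual
# hosts in the prefix are reached via more-specific internal routing.
protocol static originate_v4 {
    ipv4;
    route 203.0.113.0/24 blackhole;
}

# Session to your transit provider: export only your own prefix.
protocol bgp upstream {
    local as 64496;
    neighbor 192.0.2.1 as 64511;
    ipv4 {
        export where net ~ [ 203.0.113.0/24 ];
        import all;
    };
}
```

The `export where` filter matters: announcing only the prefixes you actually hold is exactly what keeps you from causing the route leaks discussed elsewhere in this thread.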
Take a look at the state of RPKI. ROA validation is common these days, and ASPA validation will be common soon. You still need to manually validate that your peer truly represents the AS that they claim to, but if that's been done, ROA+ASPA validation prevents unauthorized announcements.
Absent RPKI, people have been filtering based on IRR for ages, which will not necessarily prevent unauthorized announcements, but will require an attacker to leave a paper trail when making one.
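At its core, the origin check that ROA validation performs is simple: a route is "valid" if some ROA covers its prefix, the origin AS matches, and the route is no more specific than the ROA's maxLength. A simplified Python sketch of just that final check (real validators verify cryptographically signed ROA objects fetched from the RIRs' repositories; the ROAs and ASNs below are hypothetical documentation values):

```python
import ipaddress

# Hypothetical ROAs: (covered prefix, maxLength, authorized origin ASN).
ROAS = [
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64496),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64497),
]

def roa_validate(prefix, origin_as):
    """Return 'valid', 'invalid', or 'not-found' for an announcement."""
    route = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        if route.subnet_of(roa_prefix):
            covered = True  # some ROA speaks for this address space
            if origin_as == asn and route.prefixlen <= max_len:
                return "valid"
    # Covered but no match means an unauthorized announcement;
    # uncovered space has no ROA to judge it against.
    return "invalid" if covered else "not-found"

print(roa_validate("203.0.113.0/24", 64496))   # valid: right origin, right length
print(roa_validate("203.0.113.0/24", 64500))   # invalid: hijack from the wrong AS
print(roa_validate("198.51.100.0/25", 64497))  # invalid: more specific than maxLength
print(roa_validate("192.0.2.0/24", 64496))     # not-found: no ROA covers it
```

The "not-found" state is why partial RPKI deployment still leaves gaps: routers typically accept not-found routes, since rejecting them would break the large share of the internet that has no ROAs yet.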
> To do so you need to "announce" this new /24 to the internet, using a protocol known as BGP. You can do that yourself, using a router, assuming you have an Autonomous system number (AS).
But good ISPs filter the prefixes their customers can announce to only those they actually own.
Then you have shitty providers that don't do it, and that's how you get BGP hijacking.
And you can't do this just from any connection, FYI.
You will need a datacenter, cloud host, or residential ISP that actually allows you to peer with them and announce routes. This isn't a standard thing you get just by being a customer.
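On the provider side, that filtering is typically a per-customer prefix list, often generated from IRR data. A sketch of what the ISP's side of such a session might look like, again in BIRD 2 syntax with documentation ASNs and prefixes as placeholders:

```
# ISP-side session for customer AS 64496: only accept the prefixes
# this customer is registered for; reject everything else they try
# to announce (which is what prevents accidental leaks and hijacks).
protocol bgp customer_64496 {
    local as 64511;
    neighbor 192.0.2.2 as 64496;
    ipv4 {
        import where net ~ [ 203.0.113.0/24 ];
        export all;
    };
}
```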
Yah, it certainly seems like maybe that was peak pricing. This write-up has some more data on historical pricing: https://www.ipxo.com/blog/ipv4-price-history/
I've also heard folks pay quite a bit over the average price for novelty IP addresses, so perhaps that skewed the data? I'd love to be able to buy 2.2.2.0/23 or my favorite 42.42.42.0/24
Yeah, one example is Cloudflare and 1.1.1.1, though the story behind that is less about money and far more interesting. Apparently, APNIC had held 1.1.1.1 basically forever but was never able to actually use it for anything because it caught so much garbage traffic. Cloudflare is one of only a handful of service providers that could announce the IP and handle the traffic, so in exchange for helping APNIC's research group sort through the trash traffic, Cloudflare hosts its DNS resolver there.
That’s pretty cool. I’d never thought about bogons and debogonizing before; it’s like chasing off all the squatters on your property while more keep coming. You need some fat pipes and beefy servers to handle all the bogus traffic from machines trying to hit your server, and still be able to actually fulfill your purpose.
Makes sense now why Cloudflare would be one of the only companies that could handle it!
The last public analysis was done in 2010 by RIPE and APNIC. At the time, 1.1.1.0/24 was drawing 100 to 200 Mbps of traffic, most of it audio traffic. In March, when Cloudflare announced 1.0.0.0/24 and 1.1.1.0/24, ~10 Gbps of unsolicited background traffic appeared on our interfaces.
Yeah it broke my use case. I used to run `curl --retry 9999 http://1.1.1.1` and since it didn't exit, the heat generated by the running curl process kept me warm in the winter. But now http://1.1.1.1 returns immediately, so I'm freezing!
I mean, for smaller routers that had static routes set for that subnet, it would probably just keep working - the issue being that trying to get to real addresses in the 1.0.0.0/8 network (or parts of it) wouldn't work.
If you were BGP peering then you'd probably get a real route into your local table though.
So yeah, some stuff would probably have just broken, but that's the risk you take using parts of the IP space you shouldn't be using!
To be honest, I feel as bad for them as I do for Hamachi, whose overlay VPN service (otherwise quite nice, in that it was a spiritual predecessor to Tailscale!) fell apart once 5.0.0.0/8 became publicly assigned.
Quick 2-minute overview: https://www.youtube.com/watch?v=hU7QixSqnSM&t=3s
https://www.border0.com/