
Thank you for the explanation, it was most interesting. I had no idea Bedrock could be coerced into talking to Java servers.

Here are a few ideas:

1. Geoblocking. Not ideal, but it exposes your resolver to fewer people.

2. What if your DNS server only answers queries for a single domain? Depending on the client's system, a fallback DNS server might handle the other requests.

3. You could always hand out a device that connects to their WLAN. Think a cheap ESP32. It only needs to be powered on when doing the resolution. Then you have a bit more freedom: IPv6 router advertisements + VPN, or try hijacking DNS queries (won't work with client isolation), or set it as the resolver (may need manual configuration on each LAN, impractical).

4. An IP whitelist, but ask them to visit an HTTP server from their LAN if it does not work (the switch has a browser, I think). This will give you the IP to allow, and you can even password-protect it.

I'd say 2 is worth a try. 4 is easy enough to implement, but not entirely frictionless.
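A minimal sketch of idea 4, assuming a small Python host is available for it; the port, token value, and handler names are made up for illustration, and the actual ACL update is left as a comment:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

SECRET = "change-me"  # shared out-of-band with your friends; placeholder

def query_token(path: str) -> str:
    """Return the `token` query parameter from a request path, or ""."""
    return parse_qs(urlparse(path).query).get("token", [""])[0]

class AllowlistHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect something like GET /allow?token=change-me
        if query_token(self.path) != SECRET:
            self.send_response(403)
            self.end_headers()
            return
        ip = self.client_address[0]
        # Here you would append `ip` to the resolver's ACL and reload it.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"Allowed: {ip}\n".encode())

# To run: HTTPServer(("", 8080), AllowlistHandler).serve_forever()
```

Friends would just open http://your-host:8080/allow?token=change-me from behind their NAT, and the server sees their public IP.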


This is true, but only relevant if you order enough units (>100k? Depending on price & margin, of course) to customize your die. Otherwise, you have to find a chip with the I/Os that you want, everything else being equal. Good luck with that if you need something specific (8 UARTs, for instance) or obscure.

Hey, glad to see you here. I'm a huge fan of your projects, and the Baochip was one I didn't see coming. Very nice surprise!

I ordered a few, thinking it would make a good logic analyzer (before the details of the BIO were published). Obviously, it's going to be a stretch with multiple cycles per instruction and a reduced instruction set. I'll see how far I can push it if I rely on multiple BIO cores, perhaps with some tricks such as relying on an external clock signal. At first glance, they seemed perfect for doing some basic RLE or Huffman compression on the fly, but I am less sure now; I will have to play with it. Bit-packing may be somewhat expensive to perform, too.
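For reference, a host-side Python sketch of the kind of byte-wise RLE I have in mind; nothing BIO-specific here, purely illustrative:

```python
# A byte-wise run-length encoder: emits (run length, value) pairs, with runs
# capped at 255 so the length always fits in one byte.
def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

print(rle_encode(b"aaab"))  # run of 3 'a', then 1 'b'
```

Even something this simple needs a compare, a bounded counter, and two output bytes per run, which is where the cycles-per-instruction cost would bite on a small core.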

One thing stood out to me in this design: the liberal use of the 16 extra registers. It's a very clever trick, but wouldn't some of these be better exposed as memory addresses? Or do you foresee applications where they are in the hot path (where the inability to write immediate values may matter)? Stuff like core ID, debug, or even GPIO direction could be hard-wired to memory addresses, leaving space for some extra features (not sure which: general-purpose registers? More queues? More GPIOs? A special-purpose HW block?).

I really like the "snap to quantum" mechanism: as you wrote, it is good for portability, though there should be a way to query the frequency if portability is really a goal.

Anyway, it's plenty for a v1, plenty of exciting things to play with, including the MMU of the main core!


The core ID definitely didn't need to be in a register, but the elapsed clocks since reset is actually really handy. Having this in the hot path allows me to build a captouch sensor using the BIO, because the clock increment is 1.42ns and even though the rise time of the pad is microseconds you get plenty of resolution at that counting rate.
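As a back-of-the-envelope check of those numbers (the 1.42 ns tick is from the comment above; the rise time is an assumed figure):

```python
# How many clock ticks fit in one captouch pad rise, at the quoted tick rate.
TICK_NS = 1.42   # BIO clock period quoted above, in nanoseconds
RISE_US = 3.0    # assumed pad rise time, in microseconds (illustrative)

counts = RISE_US * 1000 / TICK_NS
print(f"~{counts:.0f} counts across one rise")
```

Thousands of counts per measurement is plenty of dynamic range for sensing the small capacitance change of a finger.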

I think it will be interesting to see what people end up doing with it and what the pain points are. As you say, it's a v1; with any luck there will be a v2, so we could treat the time starting now as a deliberation period for what goes into v2.

The good news is that it also all compiles into an FPGA, so proposed patches can be tested & vetted in hardware, albeit at a much slower clock rate.


Ah, thank you for the example; I understand how a linearly-increasing counter can be useful if you use it that way. It would obviously be more versatile with write access and configurable clock dividers, preload values, counting direction, etc. The current design probably allows reusing the counter across cores and minimizes space, so it makes sense to me. I should dig into the RTL when I have a bit of time… Maybe I'll make it my bedside reading?

You could also say it's up to the user to implement a fully-fledged timer/counter in a BIO coprocessor if they need one, though ideally there would be a shared register (or a way to configure the FIFO depth and make them non-blocking) to communicate the result.

Small cores like these are really fun to play with: the constraints easily fit in your head, and finding some clever way to use the existing HW is very rewarding. Who needs Zachtronics games when you have a BIO or PIO?


I typically just create a "new" connection in a separate tab when I want to add tunneling.

I put "new" in quotes because I use another little-known feature, ControlMaster. It multiplexes multiple connections into one, which makes opening "new" sessions instant (it can also be configured to persist for a bit after disconnecting). It is also useful for tab-completing remote paths. It does not prompt for authentication again, though. And it's a bit annoying when the connection hangs (that can be solved with `ssh -O exit`, IIRC).
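For reference, a typical ~/.ssh/config stanza for this setup (the host names are placeholders; ControlMaster, ControlPath, and ControlPersist are standard OpenSSH options):

```
Host myserver
    HostName myserver.example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With this, the first `ssh myserver` creates the master, later sessions reuse it instantly, and `ssh -O exit myserver` tears the master (and anything riding on it) down.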


> I use another little-known feature, "ControlMaster". Multiplexes multiple connections into one, it makes making "new" sessions instant

Is this what SecureCRT used as well? I remember this being all the rage back when I used Windows; it let you spawn a new session by reusing the main one.


I'm using that as well, but I had issues with tunneling: it creates the tunnel in the background and terminates, so you might not know the random port it assigned, and I couldn't figure out how to un-tunnel and then tunnel again to the same port. I just bypassed ControlMaster in those cases.
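For what it's worth, OpenSSH lets you manage forwardings on a live master with the `-O` control commands, which avoids both problems (host name and ports here are placeholders):

```
# Add a forwarding to the existing master (explicit port, nothing random):
ssh -O forward -L 8080:localhost:80 myserver

# Remove that same forwarding without dropping the session:
ssh -O cancel -L 8080:localhost:80 myserver

# Tear the whole master down:
ssh -O exit myserver
```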


TIL; thanks, that's interesting (and somehow escaped my 20+ years of using ssh)! As usual the gold is in the comments :-)


Note that it only works after pressing enter, so the odds are slim. In practice, I don't think I ever hit it by accident.


I have noticed it while running ~/bin/some_command. The ~ doesn't echo until I also type the /. It doesn't cause any misbehavior, because there is no binding for ~/, but it can be slightly surprising.


I find it odd that you would have commands in ~/bin but not have it be the highest priority in your PATH. I use ~/.local/bin, but I would never type the full path, because I wouldn't have binaries that overlap shell commands, and no other path would have priority.


Usually, it is. IIRC, this was when I was just setting up my environment on a new host, after I had populated ~/bin but before I restarted my shell to pick up PATH modifications.
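For completeness, the usual profile line that makes this take effect in new shells (the directory is whatever your personal bin is):

```shell
# Prepend the personal bin directory so it wins over system paths.
export PATH="$HOME/bin:$PATH"
```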


I'm curious, what is the power draw for such a system? Of course, it heavily depends on the disks, but does it idle under 200W?

I personally feel like I will downscale my homelab hardware to reduce its power draw. My HW is rather old (and leagues below yours); more recent HW tends to be more efficient, but I have no idea how well these high-end server boards can lower their idle power consumption.


That's an "if you have to ask, it's not for you" question. Also, the noise these things make... You better have a separate garage. The constraints of a data center are really far from those of a homelab.


I don't remember, but I know I measured it once. I believe around 200W or a bit above.


Usually you can bind ZigBee devices together. I have multiple IKEA "rodret" switches bound to generic ZigBee smart plugs from Aliexpress. Works great, with minimal latency.

With ZHA, you can bind them together from the Home Assistant device page.

I usually favor an architecture that can work without Home Assistant, such as standalone ZigBee dimmers, or contactors that can work with existing wiring. Home Assistant brings automation on top, but it doesn't matter much if it breaks (I mostly notice the shutters not opening with sunrise). Then Internet connectivity can bring additional features, but most things still work if it's down.

I'd say it has been pretty solid for years, and I don't stress too much when I have server issues.


The beam is split and re-emitted at multiple points. By controlling the optical length of the path that leads to each emitter (via the refractive index, or just the length of the waveguide by using optical junctions), the phase can be adjusted.

In practice, this can be done with phase-change materials (heating/cooling materials to change their index) or micro-ring resonators (to divert light from one waveguide to another).

The beams then interfere, and the resulting interference pattern (constructive or destructive depending on the direction) is used to steer the beam.
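A toy far-field model of that steering, assuming N identical emitters on a uniform pitch d with a constant per-emitter phase step Δφ (all numbers made up for illustration): the interference peak lands where sin(θ) = -Δφ/(k·d).

```python
import cmath
import math

# N emitters with pitch d; each adds an extra dphi of phase relative to its
# neighbor. Scan the far-field intensity over angle and locate the peak.
N = 16                  # number of emitters (illustrative)
d = 0.8e-6              # emitter pitch, meters (illustrative)
lam = 1.55e-6           # wavelength, meters (telecom band)
k = 2 * math.pi / lam
dphi = -0.6             # per-emitter phase step, radians (illustrative)

def intensity(theta):
    # Array factor: coherent sum of unit phasors from each emitter.
    field = sum(cmath.exp(1j * n * (k * d * math.sin(theta) + dphi))
                for n in range(N))
    return abs(field) ** 2

angles = [i * 1e-4 - 0.5 for i in range(10001)]   # scan -0.5 .. 0.5 rad
peak = max(angles, key=intensity)
predicted = math.asin(-dphi / (k * d))
print(f"peak found at {peak:.4f} rad, predicted {predicted:.4f} rad")
```

Sweeping dphi (which the phase shifters do physically) sweeps the peak, which is the whole steering trick.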

You are right that a single source is needed, though I imagine you could also use a laser source and shine it at another "pumped" material to have it emit more coherent light.

I've been thinking about possible use-cases for this technology besides LIDAR. Point-to-point laser communication could be an interesting application: satellite-to-satellite communication, or drone-to-drone in high-EMI settings (a battlefield with jammers). This would make mounting laser designators on small drones a lot easier. Here you go, free startup ideas ;)


Hey, thanks for the new release. I should definitely fix my wristband and start wearing my AsteroidOS watch again (LG Lenok).

You have probably addressed that somewhere, but would it be possible to run your UI stack somewhere else (postmarketOS, for instance)?

My other wish for AsteroidOS would be for it to leverage Wi-Fi better. I'm not sure how much more energy it would use, but having a longer range for my notifications would be nice (at least on the LAN). Being able to perform a few other actions independently of my phone would be great: weather & time updates, e-mail notifications, Home Assistant control, etc. I get that it may affect battery life as well.

While I'm at it: tiny bug report, but I adjusted the time while the stopwatch was running, and this affected the stopwatch result.


Nice, thanks for the bug report! I have made an issue in the stopwatch repo: https://github.com/AsteroidOS/asteroid-stopwatch/issues/13

We have implemented a Wi-Fi toggle in the quickpanel with 2.0, but the Wi-Fi credentials still need to be entered into connmanctl on the CLI. As soon as you have Wi-Fi set up and connected, you can already sync weather data using asteroid-weatherfetch. But right, Wi-Fi usually uses up to 30% more power and should be enabled selectively.
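In case it helps anyone, the rough flow inside an interactive connmanctl session looks like this (the wifi_... service name is device-specific and elided here):

```
$ connmanctl
connmanctl> enable wifi
connmanctl> scan wifi
connmanctl> services          # lists entries like wifi_<...>_managed_psk
connmanctl> agent on
connmanctl> connect wifi_...  # the agent prompts for the passphrase
connmanctl> quit
```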

As for the postmarketOS question: yes, it is our long-term goal to mainline watches, which we are sort of doing in cooperation with the postmarketOS folks. But that's a humongous task, and part of the idea of this 2.0 release is to interest capable contributors in pushing things further ;)


> Just like the US not bankrolling half of Ukraine's defense would be unthinkable...

This is outdated. Look at page 4 of this report for instance: https://www.kielinstitut.de/publications/europe-steps-up-ukr...

Their data is not perfect as they rely on public sources, and some governments are more transparent than others, but the reality is that US funding all but vanished in 2025.

Back to the topic, there is also a pattern of promising European startups being bought by wealthy US incumbents. This is also happening to established companies: see ARM, Alstom Power, etc. As Europe decouples from the USA in the current context, I suspect (and hope) that such acquisitions will come under more regulatory scrutiny.

