Regarding the first problem: are you looking at NCP maps for non-Markovian processes given you mention C*-algebra? Or is it more of a continuous weak monitoring of a stochastic system that results in dynamics with memory effects?
I'd be very curious to know how any LLMs fare. I completely understand if you don't want to continue the discussion because of anonymity reasons.
More of the latter. It's a pet project of mine, and all of the LLMs tend to utterly fail at getting anywhere with it, at least in chats. In an agentic setup, it can chip away at some aspects, but it needs serious guidance on relevant language, notation, and concepts. To me, it demonstrates that the LLMs are not particularly good at crossing literatures, but then again, humans rarely seem to be good at that either...
I echo similar sentiments. It is high time to choose self-hosting over handing essentials to the cloud; you never know when it could become inaccessible for a plethora of reasons. It's just that every time I've looked into setting up a home lab, it has felt prohibitively expensive.
Well, that's your problem right there. Home-labber setups are for experimentation or "hot rodding" purposes, and they typically way overbuild their solutions.
What most people need is an old desktop in a corner somewhere (preferably close to your router so you can get to it with an ethernet cable).
It won't be Grandma-proof, but if you're remotely technical you can write a docker compose file that glues together whichever home server utilities sound interesting to you.
My setup is roughly speaking: Ubuntu LTS, ZFS (with 4 disks in a RAID10 style config), and a docker compose file that runs plex, transmission, syncthing, vaultwarden behind an nginx-proxy[1] container that even automagically renews my Let's Encrypt certs for me (though it's probably even easier if you use a Cloudflare tunnel).
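For illustration, a minimal sketch of what such a compose file might look like (the hostnames, email, and volume paths are placeholders, and the service list is trimmed; nginx-proxy routes traffic based on each container's VIRTUAL_HOST variable, with the acme-companion container handling Let's Encrypt renewals):

```yaml
# Hypothetical docker-compose.yml sketch -- adjust hostnames and paths for your setup
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro  # watches containers to build routes
      - certs:/etc/nginx/certs

  acme-companion:
    image: nginxproxy/acme-companion   # renews Let's Encrypt certs for nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
    environment:
      - DEFAULT_EMAIL=you@example.com

  vaultwarden:
    image: vaultwarden/server
    environment:
      - VIRTUAL_HOST=vault.example.com      # nginx-proxy picks this up
      - LETSENCRYPT_HOST=vault.example.com  # acme-companion picks this up
    volumes:
      - /tank/vaultwarden:/data             # e.g. a dataset on the ZFS pool

volumes:
  certs:
```

Additional services (plex, transmission, syncthing) slot in the same way: add a block with the image and a VIRTUAL_HOST, and the proxy wires it up on restart.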
If you're confident all your apps are available on those platforms, the storage part is easier with something like TrueNAS or Unraid. If you don't need storage at all, you can slim down your hardware a lot and just use a Raspberry Pi.
IMO, just find an old beater machine and get hacking :)
What would make it grandma-proof is software that makes it extremely simple, like a home appliance, and that is within the realm of possibility.
The simpler way to go on most fronts is some form of Proxmox with things like the above managed for you; it takes care of much of the overhead, either on its own or through a reasonably point-and-click interface, which could be pre-configured.
I moved my DO server to a Pi that was gathering dust. I agree, folks need to get off the cloud; find an old laptop or an old $40 Mac mini, they are usually low-power enough.
>It is just that, every time I looked into setting up a home lab, it feels cost prohibitively expensive.
You'd say that looking at my 12U rack right now, but only six months ago all I had was two second-hand Dell SFF computers from eBay that cost me maybe AUD300.
Before that, I had one of those mini PCs with two network ports that cost me AUD200. I installed Proxmox on it, then ran OPNsense (router) and Pi-hole as virtual machines, and it ran like that for years.
Install Proxmox on them and you can run everything.
This is the major misconception about homelabs: you absolutely don't need expensive gear.
A single mini PC + Proxmox is all you need to start. Try to have at least 16GB of memory; a 256GB NVMe drive is more than enough to begin with.
Don't let those massive homelab setups you see on the internet convince you that's the only way :)
We're teetering on the brink of highly capable software agents that can run on a phone with a local model and manage things like basic digital hygiene and operating a self-hosted cloud. With Tailscale and other private VPNs leveraging your own home internet service, plus a well-maintained set of firewall rules and locked-down access, it's actually practical.
An inspired nerd can do it right now, but grandma will be able to do a curated, accessible set of things by the end of the year, and by the end of 2027 the internet and self-hosted things are going to be incredibly different. When people can self-host Plex and anonymously pirate anything, and their local model can handle the ethically gray-area stuff like making sure they don't get caught, cloud services can't compete with that. Cable, Netflix, Spotify, and the rest are going to have to up their game, and not do the stupid lashing-out, price-gouging, pirate-hunting thing, or they're just going to burn down faster.
We're headed for some really cool, interesting times.
People overengineer homelabs all the time for fun and practice. To selfhost (which is not, in fact, the same thing), you can get a mini PC and probably host all of the basics. A small two-bay NAS plus a mini PC and you're really cooking with gas.
Homelab = Experimenting with environments you might use at work.
Selfhost = Hosting what you need at home.
These are two very different goals with very different reasonable choices. People homelab with Kubernetes clusters, selfhosting with Kubernetes is dumb.
If
1. Rx6, it is stalemate. So it must be
1. N4 N5.
Then we could proceed with
2. Nx6+ K7.
Now, if you capture the knight (Rxe), it is stalemate again. So sacrifice the knight instead,
3. R4 Kx6
so that you force Black into zugzwang with
4. K2 K7,
and finally,
5. Rx5#
More like Heroes of Might and Magic. It's a turn-based strategy game where battles take place on a hex grid map. It's got full campaigns, lots of factions and units, resources to gather... it's one of my favorite OSS projects. Wesnoth has been in active development forever and is a real labor of love, as well as a showcase of collaborative game development.
Not really. This game uses a turn-based combat system on a hex grid. It's more like Sid Meier's Civilization, but with a drastically simplified economy and a strong focus on battles. It also has a Tolkien-esque fantasy theme instead of a real-world history theme.
If that sounds at all interesting, I suggest giving it a shot.
I agree with some parts, but I mostly don't see the point of this article: shooting down ideas is a skill, in academia, in industry, in any field where decisions have huge opportunity costs. One needs to shoot down ideas pretty often, because really good ideas are rare.
Things genuinely worth someone's time should be well thought out, stress-tested, and collectively agreed upon by at least a few people. So shoot down the unfeasible ones straight away so you don't waste time on them. Just don't make it personal; it's the ideas that need judgement, not the people.
I hate to read this line when academics and graduate students working in basic and hard sciences have their funding cut. The grant funding that pays minimum wage to grad students is treated as a burden on this society, yet a company that took all its valuable data from sources that never got credit raises billions of dollars. "Open" is in the name, but closed is how it operates. Sorry for this rant, but the priorities of this world suck.
Or all of the people they didn't ask, let alone compensate, who made all of the stuff that got munged up for training data so they could sell cheap knockoffs in the same markets.