> You can comfortably play AV1 video in software on your phone
Maybe for about 15 minutes before your battery is drained to 20%. I'm not aware of any software video decoder at all that won't unacceptably heat up your phone and kill your battery.
> Your "old iPhone 7" probably wont even boot up now, let alone play 1080p AV1 video for more than 5 minutes.
None of your predictions came true. In fact, your estimate was off by a factor of more than 24.
My 10-year-old iPhone 7, with its 10-year-old battery and a small crack in the screen (hardware that predates AV1 itself), did boot up. I charged it to 99%.
I downloaded a 4-minute music video: 1080p25 AV1 video at 1609 kbps, with 48 kHz 122 kbps stereo Opus audio, in an MP4 container.
Using VLC 3.7.2 (which uses dav1d for decoding AV1), I played the video continuously on repeat. It took 122 minutes for the battery to go from 99% to 20%. At 20% the phone switched to low power mode and kept playing the video.
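The "more than 24 times" figure works out directly from these numbers:

```python
# Sanity check on the claim above: measured runtime over predicted runtime.
predicted_minutes = 5    # "more than 5 minutes" from the prediction
measured_minutes = 122   # measured playback time from 99% down to 20%

print(measured_minutes / predicted_minutes)  # -> 24.4
```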
I should have put it in low power mode the whole time. I'll try that next to see if it can go longer.
In the meantime, what we can conclude is that the iPhone 7 is mighty.
Thanks for sharing that. Zerobox _does_ use the native OS sandboxing mechanisms (e.g. seatbelt) under the hood. I'm not trying to reinvent the wheel when it comes to sandboxing.
Re the URLs, I agree, that's why I added wildcard support, e.g. `*.openai.com` for secret injection as well as network call filtering.
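A wildcard rule like that can be sketched in a few lines. This is purely an illustrative Python helper (not Zerobox's actual code), using shell-style pattern matching:

```python
from fnmatch import fnmatch

def host_allowed(host: str, patterns: list[str]) -> bool:
    """Return True if the hostname matches any allow-list pattern,
    e.g. "*.openai.com" or an exact name like "api.example.com"."""
    return any(fnmatch(host, p) for p in patterns)

allow = ["*.openai.com"]
print(host_allowed("api.openai.com", allow))   # True
print(host_allowed("openai.com", allow))       # False: bare domain needs its own entry
print(host_allowed("evil-openai.com", allow))  # False: suffix must include the dot
```

One design wrinkle worth noting: with `fnmatch`, `*.openai.com` does not match the bare `openai.com`, so an allow-list typically needs both entries.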
You know, the thing is that it's super easy to create such tools with AI nowadays. And if you create your own, you can avoid these unnecessary abstractions. You get exactly what you want.
Zerobox creates a cert in `~/.zerobox/cert` on the first proxy run and reuses it after that. The MITM process uses that cert to make the calls, inject secrets, etc. This is actually done by the underlying Codex crate.
Yeah, but how does the sandboxed process “know” that it has to go through the proxy? How does it trust your certificate? Is the proxy fully transparent?
Great question! On Linux, yes: network namespaces enforce that, and all net traffic goes through the proxy. Direct connections are blocked at the kernel level even if the program ignores proxy env vars, though I still want to test this case a bit more (unsure how to, though; most network clients do respect HTTPS_PROXY and similar env vars).
That being said, the default behaviour is no network, so nothing will be routed if it's not allowed regardless of whether the sandboxed process respects env vars or not.
On macOS, the proxy is best effort. Programs that ignore HTTPS_PROXY/HTTP_PROXY can connect directly. This is a platform limitation (macOS Seatbelt doesn't support forced proxy routing).
BUT, the default behaviour (no net) is fully enforced at the kernel level. Domain filtering relies on the program respecting proxy env vars.
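To illustrate why the filtering is cooperative, here's a minimal Python sketch (the proxy address is made up): libraries consult the proxy env vars, but a raw socket never does.

```python
import os
import urllib.request

# Made-up local proxy address, just for illustration.
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8888"

# urllib (like curl, requests, etc.) reads the environment...
print(urllib.request.getproxies().get("https"))  # -> http://127.0.0.1:8888

# ...but a raw socket.connect(("api.example.com", 443)) would go straight
# out, ignoring the variable entirely. That direct path is exactly what a
# kernel-level "no network by default" rule has to catch.
```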
It does, but because I'm inheriting the Seatbelt settings from Codex, I'm not resetting them in Zerobox (I thought that was the safer option). Let me look into this; there should be a way to take Codex's profile and safely combine/modify it.
There is no such thing as an agentic codebase. If humans don't understand it, nothing really does. Agents give zero fucks about anything. Whether they burn 100 tokens or a million to add a feature, they don't care.
It's the developer's responsibility to keep it under control.
100% this. With these new tools it's tempting to one-shot massive changesets crossing multiple concerns in preexisting, stable codebases.
The key is to keep any changes to code small enough to fit in your own "context window." Exceed that at your own risk. Constantly exceeding your capacity for understanding the changes being made leads to either burnout or indifference to the fires you're inevitably starting.
Be proactive with these tools w.r.t. risk mitigation, not reactive. Don't yolo out unverified shit at scales beyond basic human comprehension limits. Sure, you can now randomly generate entirely (unverified) new software into being, but 95% of the time that's a really, really bad idea. It is just gambling and likely some part of our lizard brains finds it enticing, but in order to prevent the slopification of everything, we need to apply some basic fucking discipline.
As you point out, it's our responsibility as human engineers to manage the risk reward tradeoffs with the output of these new tools. Anecdotally, I can tell you, we're doing a fucking bad job of it rn.
- A Chrome extension whose original "developer" can no longer work on it, because the bots wreck something else with every new feature he tries to add or bug he tries to fix
I think we're a ways out from truly complex code bases that only agents understand.
I've seen a bunch of hype videos where people spend lord knows how much money to have a bunch of these things run around and, I guess... use Facebook, and make reports to distribute amongst themselves, and then the human comes in and spends all their time tweaking this system. And then apparently one day it's going to produce _something_, but it's been two years and counting and, much like bitcoin, I've yet to see much of this _something_ materialize in the form of actual, working, quality software that I want to use.
My buddy made a thing that tells him how many people are at the gym by scraping their API and pushing it into a small app package... I guess that's kind of nice.
> Rewriting it in C can prove that the problem is not in your programming language
Rewriting it in C is a lot of work. It's doable, but impractical. And such a rewrite may introduce new bugs.
Proving that the problem wasn't in my language wasn't necessary: there was no room for it to introduce that kind of bug. Language bugs usually manifest themselves in a different way (like broken binary code leading to crashes, or invalid code being accepted). That's why I asked my question in the first place: I was already sure it wasn't a language bug.
> Outdated OS can be the problem as well.
An outdated OS can't be the problem. Basic socket functionality was implemented eons ago, and all known bugs should be fixed even in pretty outdated kernel versions.
> What kind of advice did you expect?
Eventually I found out myself that my test program was creating too many connections to the TCP server, overflowing its internal connection queue. But it took some time to find, much longer than it would have taken if my SO question had been answered. The problem wasn't that hard to spot, considering I had described exactly what I was doing. Even code examples weren't necessary; the textual description was enough.
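For anyone curious, this failure mode is easy to reproduce in a few lines of Python (the numbers here are illustrative): a server that listens with a tiny backlog and never calls `accept()` lets the first few handshakes complete, and further connection attempts stall or get refused, depending on kernel settings.

```python
import socket

# Server with a deliberately tiny accept queue that is never drained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)  # backlog of 1: the kernel queues only a couple of connections
port = server.getsockname()[1]

succeeded, failed, clients = 0, 0, []
for _ in range(10):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.settimeout(0.5)  # fail fast once the queue overflows
    try:
        c.connect(("127.0.0.1", port))
        succeeded += 1
        clients.append(c)
    except OSError:  # timeout (SYN dropped) or refusal, kernel-dependent
        failed += 1
        c.close()

print(succeeded, failed)  # only the first few connects succeed

for c in clients:
    c.close()
server.close()
```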
It could very well be that your language was the issue, and writing a small proof of concept for the problematic use case in a battle-tested language other people know is a standard troubleshooting step. Especially for a rare, limit-related error: that sounds like a trivial thing to implement in C with a simple for loop and perhaps some dummy data.
Same with the OS. Just because socket functionality is decades old doesn't mean you can't hit a recently introduced bug that has since been fixed in a newer version. That, too, is a standard troubleshooting step.
Advice doesn't become bad just because you're too lazy to follow it.
Getting Java to run is a base requirement for running most software written in Java.
However, there is a dedicated Dockerfile for creating a native image (Java-speak for "binary") that shouldn't require a JVM. I haven't tested running the binary myself, so it's possible there are dependencies I'm not aware of, but I'm pretty sure you can just grab the binary out of the container image and run it locally if you want to.
It'll produce a Linux binary, of course; if you're on macOS or Windows, you'd have to create a native image for those platforms manually.
Isn't a Docker image basically a universal binary at this point? It's a way to ship a reproducible environment with little more config than setting an env var or two. All my local stuff runs under a Docker Compose stack, so I have a container for the db, a container for Redis, LocalStack, etc.
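For anyone who hasn't seen one, a stack like that is just a few lines of compose config (this is a hypothetical example; service names and images are illustrative):

```yaml
# docker-compose.yml: one service per dependency, configured via env vars.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
  redis:
    image: redis:7
  localstack:
    image: localstack/localstack
    environment:
      SERVICES: s3,sqs
```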
I'm not saying it's ideal, just saying that's what we've shifted to for repeatable programs. Your Linux "universal" binary certainly won't work on your Mac directly either...