nateb2022's comments | Hacker News


Wherever practical, I also recommend using devcontainers, so that in addition to compromising the supply chain, an attacker would also need an unpatched sandbox escape to do large-scale damage.
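A minimal sketch of a `.devcontainer/devcontainer.json` for that setup (the base image is a real published one; the name and post-create command are illustrative placeholders, adjust for your stack):

```json
{
  "name": "sandboxed-dev",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "postCreateCommand": "echo 'container ready'",
  "remoteUser": "vscode"
}
```

The point is that your tools and dependencies run inside the container, so a malicious package only sees the workspace, not your home directory.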

I'll plug Pyreqwest here: https://github.com/MarkusSintonen/pyreqwest

It's been a pleasure to use, has an httpx compatibility layer for gradually migrating to its API, and it's a lot more performant (right now, I think it's the most performant Python HTTP client out there: https://github.com/MarkusSintonen/pyreqwest/blob/main/docs/b...)


[dupe] https://news.ycombinator.com/item?id=47506251 (18 minutes older, 6 comments)

This is my condensed version for the SBCwiki documentation, focused on the key facts without all the unnecessary marketing around it.

This is substantially more useful than the marketing fluff in the press release. It probably would have made sense to post this in that thread, though.

I added a cost/performance analysis for that at https://news.ycombinator.com/item?id=47509236 in case anyone's interested.


See also: https://github.com/finbarr/yolobox "Let your AI go full send. Your home directory stays home."

This is basically a vibe-coded Ollama?

Strongly agree. Plus, for all but very specific use cases, most people will spend less money by paying for cloud services ("most" here referring to the general population).
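A rough way to frame that claim is a break-even calculation; every dollar figure below is a hypothetical placeholder, purely for illustration:

```python
# Back-of-envelope break-even between buying local hardware and paying
# for a cloud service. All dollar figures are hypothetical placeholders.
def breakeven_months(hardware_cost: float, cloud_monthly: float) -> float:
    """Months of cloud spend needed to match the up-front hardware cost."""
    return hardware_cost / cloud_monthly

# e.g. a $2,400 workstation vs. a $20/month cloud subscription
print(breakeven_months(2400, 20))  # 120.0 months, i.e. 10 years
```

If you'd replace the hardware before the break-even point anyway, the cloud option comes out cheaper.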

I disagree. Let's take the M1 vs the M5 (https://www.macrumors.com/2025/11/10/apple-silicon-m1-to-m5-...):

  - 6× faster CPU/GPU performance
  - 6× faster AI performance
  - 7.7× faster AI video processing
  - 6.8× faster 3D rendering
  - 2.6× faster gaming performance
  - 2.1× faster code compiling
Over the span of 5 years.
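Taking the headline 6× figure at face value, a quick sanity check on what that implies per year:

```python
# Implied compound annual improvement from the "6x over 5 years" figure.
years = 5
total_speedup = 6
annual = total_speedup ** (1 / years)  # ~1.43x per year
print(f"~{(annual - 1) * 100:.0f}% per year")
```

That's roughly a 43% compound annual improvement, which is why the 2.x numbers further down look more like typical generational gains.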

Plus, realistically, what makes an "AI" server different from any other computer? The "lineage info of the family may be passed down through generations" pitch sounds nice, but do you know anyone whose hand-me-down Commodore 64 or Apple II remains in daily use? I fail to see how "AI" would protect something from obsolescence.


It doesn't matter if computers keep getting faster; what matters is whether they eventually get to the point where everything is good enough for good AI.

That being said, I feel like we're gonna get to that point for most other stuff way sooner than for AI (and already have for many pieces of software).


I love this conundrum.

I have a good analogy. 10 years ago, I was convinced that a 24-inch 1080p monitor at arm's length was perfection. There could never be any reason to improve over it. I could do everything I ever wanted to, to a standard I would never need to improve upon.

Yet here we are. The simplest and most obvious improvement is a 24" 4k monitor at 200% scaling. Basically, better in every way.

There's a discussion to be had about whether you need the better setup, which I think is your point, but there's no denying you'd want it (all other variables the same).


At some point specs don’t matter. I don’t wonder about the processor in my thermostat either. I don’t know how many horsepower my XC90 has. I don’t know the rated power of my chainsaw.

All I care about is: do they work, are they ‘safe’, are they comfortable, etc.


A thermostat’s capabilities, and what’s expected of it, won’t change even if the tech gets better, though, and that’s the key difference.

That first bullet is a bit sketchy. Benchmark scores, particularly Geekbench, may have increased 6×, but those numbers are easy to game.

The GPUs have become much larger, so 6.8x is believable there, as is the inclusion of a matmul unit boosting AI.

The 2.x numbers are the most realistic, especially because they represent actual workloads.


Even the Geekbench numbers from the link only roughly doubled, for both single- and multi-core CPU and the Metal GPU.

Today, not much differentiates them. But as time passes, our only option for realistic gains will be to further specialize the hardware; at some point, perhaps a purpose-built "analog computer" kind of thing will become so useful that it resembles the "Standard Template Constructs" concept from Warhammer 30k. So what if you can make a faster AI, when the current one can already "teach everyone, basically anything"?

It depends on what and how you're comparing. Core to core, according to CPU benchmark scores, it's roughly 3600 (M1) vs. 5800 (M5), so we're still not quite at 2×.
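Checking that arithmetic with the two quoted scores:

```python
# The two single-core scores quoted above are 5800 and 3600; whichever
# chip holds which score, the gap is about 1.6x, short of a full 2x.
ratio = 5800 / 3600
print(round(ratio, 2))  # 1.61
```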

Overall system performance is better, at about a 2× improvement, thanks to extra cores and other changes. I could see more specialized benchmarks improving further thanks to gains in other components (GPU/NPU/etc.).


I guess they're going after companies like Repl.it?

Probably not explicitly, but this (and continued AI Studio improvements) is going to erode the value proposition of a whole lot of tools.
