Hacker News | kolinko's comments

Because Amazon stops supporting devices after 14 years? (while they can still be used to read books already downloaded)

Really?


In this case the reason for dropping support is most likely that the only DRM they can support on that older hardware has been broken. There's no technical reason why it can't be supported, and I doubt it would cost them much (or even anything) to continue support.

Meanwhile, I can still read physical books I've had since I was a child, 40 years ago. The Kindle is undeniably more convenient than physical books, but this is absolutely an unnecessary sunset of these devices.


You can still remove the DRM and sideload them.

In my post I said "this kind of stuff" which also includes their DRM policies (which is the real reason they are ending the users' kindle support).

My Kindle 4 hardware works great, I still read it nightly. Since it doesn’t feel like it’s obsolete (in fact it has physical buttons so may be slightly better than a modern Kindle), it feels like a blatant cash grab by Amazon to get us to buy new devices that probably are laden with ads or other revenue generators.

Since the November/December Opus and Claude Code, I found I don't need to read the code any more. Architecture overview sure, and testing yes, but not reading the code directly any more.

Me (and my friends similarly) inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture etc.


I do regularly read the code that Claude outputs. And about 25% of the time the tests it writes will reimplement the code under test in the test.

Another 25% of the time the tests are wrong in some other way. Usually mocking something in a way that doesn't match reality.

And maybe 5% of the time Claude does some testing that requires a database, it will find some other database lying around and try to use that instead of what it's supposed to be doing.

And even if Claude writes a correct test, it will generally have it skip the test if a dependency isn't there--no matter how fervently I tell it not to.

If you're not looking at the code at all, you're building a house of cards. If you're not reading the tests, you're not even building - you're just covering the floor in a big sloppy pile of runny shit.


> I do regularly read the code that Claude outputs

You probably could have s/Claude/Human/ in your rant and been just as accurate. I don't know how many times I've flagged these issues in code reviews. And that's only assuming the human even bothered to write tests...

What I find is that when I ask AI to write tests it writes too many, and I agree with you that a lot of them are useless. But then I just tell it that, and it agrees with me and cleans it up. Much faster feedback loop and much better final result.

I feel like people that look at a poor result and stop there and conclude it's useless have made up their mind and don't want to see the better results that are right in front of them if they just spend an extra 5 seconds trying.


How do you know whether the tests it spits out are bad if you don’t read the tests?

We’re not dealing with AGI here. Tests aren’t strictly necessary for humans. They are for AI. AI requires guardrails to keep from spinning out. That’s essentially the entire premise of the agentic workflow.


> How do you know whether the tests it spits out are bad if you don’t read the tests?

I do read the tests (quickly, I admit) and so does OP:

Architecture overview sure, and testing yes, but not reading the code directly any more.

Reading that again I may have misunderstood what they meant by "testing yes", though.


I’m pretty sure they just meant that they do testing, not that they read the tests, and that’s how everyone else who responded interpreted it as well.

You can get Claude to write good tests but based on what I’m seeing at work that’s not what’s happening. They always look plausible even when they’re wrong, so people either don’t read them, skim them very quickly, or read the first few assume the rest work and commit.

I think Claude is great for testing because setting up test data and infrastructure is such a boring slog. But it almost always takes a lot of back and forth and careful handholding to get it right.


I read the tests, and it also really helps to have Claude verify that removing the changes in question breaks the tests. This brings the quality way, way up for me.

In comparison, I see these issues in fewer than 1% of the changes I review, because when it happens you can effectively teach people to stop doing it.

I'd understand not reading the code of the system under test, but you don't even read the tests? I'd do that if my architecture and design were very precise, but at this point I'd have spent too much time designing rather than implementing (and possibly uncovering unknown unknowns in the process).

> Me (and my friends similarly) inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture etc.

Doesn't this take longer than reading the code?

I can see how some of this is part of the future (I remember this article talking about python modules having a big docstring at the top fully describing the public functions, and the author describing how they just update this doc, then regenerate the code fully, never reading it, and I find this quite convincing), but in the end I just want the most concise language for what I'm trying to express. If I need an edge case covered, I'd rather have a very simple test making that explicit than more verbose forms. Until we have formal specifications everywhere I guess.

But maybe I'm just not picturing what you mean exactly by "reports".


I've seen the code these models produce without a human programmer going over the results with care. It's still slop. Better slop than in the past, but slop none the less. If you aren't at minimum reading the code yourself and you're shipping a significant amount of it, you're either effectively the first person to figure out the magic prompt to get the models to produce better code, or you're shipping slop. Personally, I wouldn't bet on the former.


Yeah, these models have definitely become more useful in the last months, but statements like "I don't need to read the code any more" still say more about the person writing that than about agents.


If I were you I’d be very worried about getting laid off. That kind of work isn’t going to keep earning a software engineer salary.

you're a slop maker

No, the rules have to be the same for all EU citizens.


Have you traveled in Europe?

My friends living in Switzerland (near the border) always go to Germany to fuel up. And even without a crisis, gas stations on the cheaper side of a border are often far more crowded than on the other side.

Also, keep in mind that Slovenia is roughly the size of Los Angeles, or not much wider than Long Island. If fuel were 30% cheaper on one side of Long Island than on the other, I'm sure plenty of people wouldn't think twice about making the trip.


Ah yes, the rich people of Switzerland, doing their weekly shopping in Germany :)

Aside from cost? It's also managing an actual human being and making sure they have enough work. If the place gets 5-10 calls a day, then it's pointless to hire a receptionist who will do nothing for an hour and then have a two-minute chat. It used to be pointless to build software to do that, but since Claude Code it's cheap enough to make sense.


receptionist as a service has been a thing for like... forever. You are never going to solve the problem of accurately estimating and quoting with AI or an answering service, so pay for someone to answer the phone and take down the details; have a mechanic or trained service rep review and estimate. Cheap code that doesn't solve the problem is not cheap.


Couldn't an ai take down the details and pass it to a mechanic or trained service rep?


Yes, of course. The bot can request information and the customer can provide it if they feel like it, and then someone qualified can call them back when they have their hands free.

But there's no bot, per se, needed at all. An answering machine from 1993 can do this same information-gathering job. :)


I can see a useful simple case: structure a good answering system, use AI to do the STT, then use Claude to structure the callback data.
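A minimal sketch of the "structure the callback data" step, assuming the transcription has already happened. The regex extraction here is a crude stand-in for the actual Claude call, and all names and fields are hypothetical:

```python
import re

def structure_callback(transcript: str) -> dict:
    """Crude stand-in for an LLM structuring pass: pull a caller name and
    phone number out of a voicemail transcript into callback fields."""
    phone = re.search(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", transcript)
    name = re.search(r"(?:this is|it's)\s+([A-Za-z]+)", transcript, re.IGNORECASE)
    return {
        "name": name.group(1) if name else None,
        "phone": phone.group(0) if phone else None,
        "transcript": transcript,  # keep the raw text for the human callback
    }

msg = "Hi, this is Bill, my Honda is making a noise, call me back at 555-867-5309."
print(structure_callback(msg))
```

The LLM's real advantage over the regex is handling messy phrasing, but either way the output is just a queue entry for a qualified human to act on.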


Good point.

So update the device from 1993's new-fangled digital answering machine to 2009's Google Voice, and have it do the transcription from voicemail to text.

Someone will still have to call Bill back about his Honda (which is actually the Kia he bought for his daughter -- Bill is not a very technical guy these days[1] and he confuses such concepts regularly) in order to get any trading of money for services done.

It doesn't take an LLM to get there, and Bill would probably prefer to avoid being frustrated by the bot's insistent nature.

[1]: https://news.ycombinator.com/item?id=47356166


Look, you're kicking an open door. I think LLMs applied like this are just a layer of complexity that is mostly replacing lower-level programming solutions that could do the same thing.


The transcription + callback loop is honestly underrated. Most of the value here is just capturing intent accurately ("Honda" vs "Kia" aside) so the mechanic can prioritize callbacks. A dumb voicemail-to-text pipeline handles that fine. The LLM layer adds complexity without solving the actual bottleneck, which is someone qualified picking up the phone.


You nailed it.

But I'm not sure that a bot can be trusted to make good decisions about priority, either. Even if it can reason well from context (which it increasingly often can, though not always), it lacks the context that is necessary to form the basis of good decisions.

Suppose a message comes into the box with this form: "This is Wendy, can you call me? My car is making that noise again."

The bot might deprioritize that call because it lacks actionable contextual information. "My job as a bot is to get more jobs into the shop. This call does not have enough data to do that, so I'll shove it to the bottom of the list of callbacks, behind more-actionable jobs."

But the mechanic? The mechanic knows Wendy's Ford very well, and he also knows Wendy. She's been a good customer for over a decade. The mechanic also knows the noise, and that Wendy has 3 little kids and that she's vacationing 900 miles away on a road trip with those kids in that Ford. The context is all there inside of the mechanic's brain to combine and mean that this might be the highest-priority call he gets all week.

Wendy may not have actively relayed any urgency in her message, but the urgency is real and she needs to be called back right away. She needs answers about what to do (keep driving and look into it when she gets back? pull over immediately and get a tow to a decent local shop? maybe she even needs help finding such a shop?) pretty much immediately. Not because it means more business today, but because it means more business for years.

The mechanic can spot this from a list of transcripts in an instant and give her a ring back Right Now. The bot is NFG at this.

The addition of the bot only adds noise to the process, and that noise only works to Wendy's detriment. When the bot adds detrimental noise to Wendy's situation, it also adds detriment to the shop's longevity.

The presence of the bot -- even as a prioritizing sorting mechanism -- asymptotically shifts the state from an excellent shop that knows their customers very well to a bot-driven customer-averse hellscape.

(And no, the answer isn't to make the bot into an all-knowing oracle that actively gets fed all context. The documentation burden would be more expensive, time-wise (and thus money-wise) than hiring a competent human receptionist who answers the phone, handles the front door traffic, and absorbs context from their surroundings. A person who chatted with Wendy last Thursday right before she left for her trip is always going to be superior to a bot.)


If someone put on their website and voicemail that they were available for calls only from 8-10am (for example), or that they would return my call at that time, I'd make a point to call them then. It's reasonable that people are busy too.


Instead of asking "what's next", a good question to ask is "what jobs are now feasible that were previously constrained by the cost of producing software?"


I think the parent meant vs MacOS, not vs Linux.


Users of MacOS rarely have an active dislike for Windows, nor are they likely to announce this.


I use macOS and I do actively dislike Windows: here I announce it.


I liked the Apple II, and the TRS-80, as I rather liked BASIC. Then I didn’t hate DOS, and then I actively hated the graphical shell of Windows 3, but could not afford a Macintosh, so I suffered through it where I had to, but mainly used DOS. Then I discovered UNIX, and did almost all of my work on a timeshare, in the early 90s!

Then Windows 95 came out and I actively hated it, but did think it was amazingly pretty - somehow this was the impetus for me to get a pc again, which I put Windows NT on. Which was profitable for freelance gigs in college. Soon after that, I dual booted it to Linux and spent most of my time in Slackware.

After that, I graduated and had enough money to buy a second rig, which I installed OS/2 Warp on, which was good for side gigs. And I really liked it. A lot. But my day job required that I have a Windows NT box to shell into the Solaris servers we ran. Then I got a better class of employer, and the next several let me run a Linux box to connect to our Solaris (or AIX) servers.

Next my girlfriend at the time got a PowerBook G4 and installed OS X on it. It was obviously amazing. Windows XP came out, and it was once again so much worse than Windows NT - and crashed so much more - which was odd as it was based on Windows NT. (yes 98 was before this but it was really bad). Anyhow, right about here the Linux box I was running at home, died. And it was obvious that I was not going to buy an XP box, so I bought my first Mac.

And it’s been the same for the last 25 years - every time I look at a Windows box it’s horrible. I pretty much always have a Linux box headless somewhere in the house, and one rented in the cloud, and a Mac for interacting with the world.

And like the parent I actively dislike windows. And that’s interesting because I’ve liked most other operating systems I’ve used in my life, including MS-DOS. Modern windows is uniquely bad.


DOS was bad by UNIX standards too. Only Windows NT/2000 was decent.


I use Windows and absolutely hate the Mac UI. Having the current application's menu bar always at the top of the screen doesn't make any sense when you have a very big monitor. It only made sense with the tiny monitors available when the Mac UI was originally created.


Yeah, that is an annoyance for me too, but for a different reason. I have set the menu bar to be only on the internal display (to avoid issues with my OLED external monitor), so when I have a window on the external monitor, I have to move the mouse to the internal monitor's screen space if I want to open something that is in the app's menu bar.

On the other hand, it is actually useful that there is mostly one specific place where you find settings etc.; on Windows/Linux, where to find those tends to vary by app (is there a bar on top of the window? Is there a button to expand a menu somewhere? Something else? Who knows).


The very idea of being able to have the programs' main menu on a different screen is so silly.


Me, personally, I have an active dislike for windows and I announce it broadly. But I may be weird :)


As someone who spent most of my time with computer scientists - the last thing I’d like would be for them to run the world.


Yup. Unhinged to put it mildly.

People who are liberal artsy at the core but do computer science? Yes.


Steam solves delivery, not so much discovery


The article lists the reasons quite clearly.


For everyone else,

The reason is that arXiv is growing significantly, leading to a $297,000 deficit in operating costs for 2025 alone. Cornell has helped with donations, along with other organizations that pay membership fees.

As a result, donors and leaders of arXiv think it's best to spin it off to increase funding.


What is unclear is why they need a staff of 27 and $6.7 million to operate an essentially static hosting website in 2026.


The "essentially static hosting" isn't the cost centre (although with 5 million MAU, it's nothing to sneeze at). The real costs are on the input side - they have an ingestion pipeline that ensures standardised paper formatting and so on, plus at least some degree of human review.


Do you mean that the CPU compute cost of turning latex into pdf/HTML is the main cost?


No, I mean that the pipeline requires software engineers to build/maintain, and salaries are (as in basically every tech organisation) the dominant cost


Then drop it and make people upload a pdf and a zip of the latex sources.

Most people I talk to hate that pipeline and spend a lot of debug hours on it when Arxiv can't compile what overleaf and your local latex install can.


Arxiv can recompile latex to support accessibility and html. Going to pdf submissions would be a major step backward.


Make it an external service then, and leave the thing that's already working great to just be.

The reason authors like and use arxiv is that it gives 1) a timestamp, 2) a standardized citable ID, and 3) stable hosting of the pdf. And readers like the no-nonsense single click download of the pdf and a barebones consistent website look.

All else is a side show.


You have to keep in mind that an increasing portion of their time and labor is going towards moderation and filtering due to a mass influx of nonsensical AI generated papers, non-academic numerology-tier hackery, and other useless drivel.

Spinning the service off forces the labor out onto other universities rather than leaving it solely to Cornell.


Is the problem the storage cost for hosting them, the HDDs? I'm sure they can be offloaded to cold storage because most of that slop won't be opened by anyone.

Arxiv doesn't need moderation. Nobody is asking for Arxiv moderation. It needs minimal checks to remove overtly illegal content.


> Arxiv doesn't need moderation. Nobody is asking for Arxiv moderation

Seems like a lot of people are asking for moderation. And moderation is a pretty big part of the existing offering[1].

[1]: https://info.arxiv.org/help/moderation/index.html


When you stop moderating input, that's when someone builds a fuse filesystem on top of it. We had those for discord (dsfs), twitterfs, redditfs, yt-media-storage, etc. It's also when someone starts using it to distribute malware, like websites built on a combination of GitHub and a cdn.


We are talking about a different kind of moderation. People want to filter out incorrect information that in their opinion damages the reputation of Arxiv, eg covid stuff. It's not about dumping binary data.

This is a motte and bailey fallacy. The real question is about moderation with the goal of checking truth and the scientific content. Obviously illegal content and ddos type overloading attacks need to be blocked.

Very different philosophies are clashing here. Arxiv came about in an age of different zeitgeist. We may never get back to that moment.


> Is the problem the storage cost for hosting them, the HDDs?

No. Around half the cost is infrastructure. The other half of the cost is people. i.e. engineers to maintain infra and build mod tools for moderators to operate.

> Arxiv doesn't need moderation. Nobody is asking for Arxiv moderation.

This is just not true. Tons of people ask for arxiv to have moderation. Especially since covid, etc when antivaxxers and alternative medicine peddlers started trying to pump the medical categories of arxiv with quack science preprints and then go on to use the arxiv preprint and its DOI to take advantage of non academics who don't really understand what arxiv is other than it looks vaguely like a journal.

And doubly so now that people keep submitting AI generated slop papers to the service trying to flood the different categories so they can pad their resumes or CVs. And on top of that people who don't actually understand the fields they are trying to write papers in using AI to generate "innovative papers" that are completely nonsensical but vaguely parroting the terms of art.

The only reason you don't see more people calling for arxiv moderation is because they already spend so much time on it. If they were to stop moderating the site it would overflow into an absolute nightmare of garbage near overnight. And people wouldn't be upset with the users uploading this of course, they'd be upset with arxiv for failing to take action.

Moderation is inherently unappreciated because in the ideal form it should be effectively invisible (which arxiv's mostly is).

If you want to see the type of stuff that arXiv keeps out, go over to viXra [1], or you can watch k-theory's video [2] having fun digging through some of the quality posts that live over on that site.

1. https://en.wikipedia.org/wiki/ViXra

2. https://www.youtube.com/watch?v=1at9BjQP8CI


The PDF formatting is anything but standardised. They ingest LaTeX sources, which are formatted according to the authors' whims (most likely, according to whatever journal or conference they just submitted the manuscript to). I'll concede that the (relatively novel) HTML formatter gives papers a more uniform appearance. They also integrate a bunch of external services for e.g. citation metrics and cross-references. Still, it's hard to justify such a high cost to operate, but eh.

Also, the "human review" is a simple moderation process [1]. It usually does not dig into the submission's scientific merits.

[1] https://info.arxiv.org/help/moderation/index.html


https://info.arxiv.org/about/reports/2024_arXiv_annual_repor...

A critical component of the arXiv-CE project is moving our services entirely off of Cornell University’s infrastructure — this goal is also known as Milestone 1. Milestone 1 completion is projected for the end of fiscal year 2026.

Imagine you are a library, and every day 3,000 half-baked so-called books are brought to the librarians, who have to make sure each one is meaningful, readable, and printable; they accept them and put them on the right bookshelf, and the entire internet reads every one of them on the shelf multiple times via AI bots, search engines, and researchers.

They are not only making a new library, they are also maintaining both and syncing two libraries because Cornell cannot handle the volume of access by bots.

It is not static. It is essentially running two ships side by side, and the two ships need to appear as one from the outside. And the new ship is still only half built: it is still being designed and constructed. A staff of 27 seems small to me.


I don't see it as an especially exorbitant structure or budget. I've seen larger teams with bigger budgets struggle to maintain smaller applications.

I've contracted into some consultancy teams which you could uncharitably describe as "15 people and $4mn/yr to create one PDF per month".


Now they're going to have a deficit of 600,000 in operating costs.


> The reason is because arxiv is growing significantly leading to 297,000 deficit in operating costs for 2025 alone.

Dollars? So 300 people's cable bill? That's basically nothing. They're spending too much, and it's still nothing, and the solution is going to be to privatize it and eventually loot it.

You can't hand out a collection plate and get $300K for Arxiv? Your local neighborhood church can. Civilization is obviously collapsing.

