Hacker News | spindump8930's comments

Not clear that they even have any GPUs yet:

> Allbirds, which will be renamed “NewBird AI,” said it executed a $50 million deal with an unnamed institutional investor to acquire “high-performance GPU assets” to begin transitioning into a “fully integrated GPU-as-a-Service”

Serious folks know it's not straightforward to suddenly acquire GPUs in quantity these days, even with that level of money.


Yes, the paper itself tells a different story than the bullet points in this article.

The article seems quite editorialized, shifting between describing "large-scale AI models" and "neural network-based approaches".

The underlying paper itself is more precise, comparing against LUAR, a 2021 method based on BERT-style embeddings (i.e. a model with 82M parameters, roughly 0.2% the size of, e.g., the recent open-source Gemma models). I don't fault the authors of the paper at all for this; their method is interesting and more interpretable! But if you check the publication history, their paper was originally uploaded in 2024: https://arxiv.org/abs/2403.08462

A good example of why some folks are bearish on journals.

"AI bad" seems to sell in some circles, and while there are many level-headed criticisms to be made of current AI fads, I don't think this qualifies.


I don't see it. Seems even-keeled for the most part. Not a polemic.

"Researchers found that a relatively simple, linguistically grounded method can perform as well as - and in some cases better than - complex artificial intelligence systems in identifying authorship.

The study suggests that increasingly sophisticated AI is not always necessary for high-performing writing analysis, particularly when methods are designed around established principles of how language works."


Are you prepared to demonstrate a superior result with models newer than those available when the research was done? Can you suggest a candidate experiment design to test your hypothesis?

Yes, it's far more certain that Meta released this, which is less convincing on evals, as a result of the mythos the previews built up.

Re: changes, there's been enormous turnover in AI organizations, and in theory this one was developed by a "new" org. Whether that means less or more benchmaxxing is anyone's guess.

More, I'd guess, since the new org needs to prove itself long enough for stock to vest. Fudging the benchmarks gives them a longer horizon before they're all fired anyway.

Spending tons of money on Claude, and the recent token benchmarks, came WELL after Meta's huge investments in compute infrastructure for AI, as well as the company's long history of language model development inside its science divisions.

Only for poor-quality systems. Unfortunately there are many systems that chased easy hype but are the equivalent of an ML 101 class-project classifier.

If one measures perplexity (how likely text is under a given language model), common text from the training set will score as very likely. But you can easily build better models.
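As a toy illustration of the perplexity point (this is a hypothetical character-bigram model sketched for this comment, not any real detector's implementation), text drawn from the training set scores as far more likely, i.e. lower perplexity, than unseen text:

```python
import math
from collections import Counter

def train_bigram(corpus):
    # Count character bigrams and their left-context characters (toy model).
    pairs = Counter(zip(corpus, corpus[1:]))
    contexts = Counter(corpus[:-1])
    return pairs, contexts

def perplexity(model, text, vocab_size=128, alpha=1.0):
    # Add-alpha smoothed bigram perplexity: exp of mean negative log-likelihood.
    pairs, contexts = model
    nll, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        p = (pairs[(a, b)] + alpha) / (contexts[a] + alpha * vocab_size)
        nll -= math.log(p)
        n += 1
    return math.exp(nll / n)

model = train_bigram("the quick brown fox jumps over the lazy dog " * 50)
seen = perplexity(model, "the quick brown fox")      # bigrams all seen in training
unseen = perplexity(model, "zxqj vvkk wwpp qqrr")    # bigrams never seen
assert seen < unseen  # training-set text is judged far more likely
```

The same effect is why a naive "low perplexity means AI-generated" detector misfires on common, formulaic human text.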


Pangram has been shown time after time to be the only detector that mostly works. And that paper is pretty old now! There are recent papers from academics independently benchmarking and studying detectors, e.g. https://arxiv.org/abs/2501.15654


Is the proxy here linkedin messaging/mail instead of direct email?


https://marco.org/2013/10/25/linkedin-intro-insecurity

I don't recall all of the specific details, but I remember reading about it at the time and how they bypassed some of iOS's security protections to do it. That they didn't get perma-banned from the various app stores back then is beyond me. It's a huge part of why I avoid installing apps on my phone in general.


It was mentioned elsewhere in the thread but this article is relevant: https://www.cbssports.com/mlb/news/guardians-reliever-emmanu...

The ability to bet on short-term individual events (such as a single pitch) means that a single, otherwise nearly inconsequential pitch can be abused.

