Hacker News | johnbarron's comments


Oh this is a good article. Thanks for that.

Towards the bottom they list some satellite imagery and a statement indicating they are possibly using the taxiways as parking.

Still leaves open the question of who might have been injured and where, but at least answers how the Iranians could have possibly hit a taxiing plane — they didn’t.


Iran has now released their own images from their satellites. Shows the U.S. Air Force E-3G Sentry AWACS aircraft damaged on the runway.

See update at bottom of the story: https://www.twz.com/air/images-purportedly-show-e-3-sentry-t...

Also reported damage to five refueling airplanes...



>> I'm amazed that wasn't taken into account!

This was taken into account: https://news.ycombinator.com/item?id=47563392


You found a paper saying that contamination is possible. That doesn’t mean that most of these plastic studies are doing the necessary controls, let alone the (almost impossible) task of preventing the contamination in a laboratory setting where nanomolar detection levels are used to make broad claims.

Are more “controls” what is necessary here? The problem wasn’t plastic contamination, it was the presence of stearates. Distinguishing between stearates and microplastics sounds like a classification problem, not a control problem.

There is practically universal recognition among microplastics researchers that contamination is possible and that strong quality controls are needed, and to be transparent and reproducible, they have a habit of documenting their methodology. Many papers and discussions suggest avoiding all plastics as part of the methodology, e.g. “Do’s and don’ts of microplastic research: a comprehensive guide” https://www.oaepublish.com/articles/wecn.2023.61

Another thing to consider is that papers generally compare against baseline/control samples, and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.
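To make that ratio effect concrete, here is a toy sketch in Python (all numbers are invented for illustration, not drawn from any study):

```python
# Toy numbers: suppose the control truly holds 5 microplastic particles
# and the test sample truly holds 50 (a 10x true ratio).
true_control, true_test = 5, 50

# If a classification error (e.g. stearates counted as plastic) adds the
# same 45 false positives to every sample processed the same way:
false_positives = 45
reported_control = true_control + false_positives   # 50
reported_test = true_test + false_positives         # 95

print(true_test / true_control)          # 10.0 (true ratio)
print(reported_test / reported_control)  # 1.9  (reported ratio)
```

Overcounting inflates both absolute numbers but compresses the between-group ratio toward 1, so the relative prevalence in the test sample is understated.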


Many papers in this field are missing obvious controls, but you’re correct that controls alone are insufficient to solve this problem.

When you are taking measurements at the detection limit of any molecule that is widespread in the environment, you are going to have a difficult time distinguishing signal from background. This requires sampling and replication and rigorous application of statistical inference.

> Another thing to consider is that papers generally compare against baseline/control samples,

Right, that’s what a control is.

> and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.

There’s no such thing as “overestimating in baseline samples”, unless you’re just doing a different measurement entirely.

What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant. This is a feature, not a bug.


You’re still bringing up different issues than this article we are commenting on.

> There’s no such thing as “overestimating in baseline samples”

What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.

> What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant.

No. What I was trying to say is that if the control is either mis-measured, for example by accidentally counting stearates as microplastics, or contaminated, then the summary outcome may underestimate or understate the prevalence of microplastics in the test sample, even though the measurement over-estimated it.


> What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.

The entire point of a control is to test for that sort of contamination (or more generally, for malfunctions in the experimental workflow). In the case of a negative control, specifically, you're looking for a "positive" where one should not exist. If an experiment is set up such that you can obtain differential contamination in the controls but not the experimental arms, as you've described, then the entire experiment is invalid.

> What I was trying to say is that if the control is either mis-measured, for example by accidentally counting stearates as microplastics, or contaminated, then the summary outcome may underestimate or understate the prevalence of microplastics in the test sample, even though the measurement over-estimated it.

The control cannot be "mis-measured", any more or less than the other arms can be "mis-measured". You treat them identically, otherwise the control is not a control. Neither example you've given is an exception: if the assay mistakes chemical B for chemical A, then it will also do so for the non-controls. If the experimental process contaminates the controls, it will also contaminate the non-controls.

What you're missing is that there's no absolute "correct" measurement -- yes, the control may itself be contaminated with something you don't even know about, thus "understating" the absolute measurement of whatever thing you're looking for, but the absolute measurement was never the goal. You're looking for between-group differences, nothing more.

Just to make it clearer, if I were going to run an extremely naïve experiment of this sort (i.e. detection of trace chemical contamination C via super-sensitive assay A) with any hope of validity, I'd want to do multiple replications of a dilution series, each with independent negative and positive controls. I'd then use something like ANOVA to look for significant deviations across the group means. This is like the "science 101" version of the experimental design. Any failure of any control means the experiment goes in the trash. Any "significant" result that doesn't follow the expected dilution series patterns, again, goes in the trash.

(This is, of course, after doing everything you can to mitigate for baseline levels of the contaminant in the lab environment, which is a process that itself probably requires multiple failed iterations of the experiment I just described.)
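For what it's worth, the naïve design above can be sketched with a hand-rolled one-way ANOVA (stdlib only; the readings below are invented placeholders, not real assay data):

```python
from statistics import mean

# Hypothetical assay readings: 3 replicates at each of 3 dilution levels,
# plus a negative-control (blank) group.
groups = {
    "blank": [0.9, 1.1, 1.0],
    "1:100": [1.9, 2.1, 2.0],
    "1:10":  [10.8, 11.2, 11.0],
    "1:1":   [101.0, 99.0, 100.0],
}

def one_way_anova_f(samples):
    """F = between-group mean square / within-group mean square."""
    all_vals = [x for g in samples for x in g]
    grand = mean(all_vals)
    k = len(samples)   # number of groups
    n = len(all_vals)  # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in samples)
    ss_within = sum((x - mean(g)) ** 2 for g in samples for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f(list(groups.values()))

# Sanity checks before trusting the F statistic: a "signal" in the blank,
# or means that fail to track the dilution factor, invalidate the run.
assert mean(groups["blank"]) < 2.0
print(f_stat)
```

A real analysis would also include positive controls, convert F to a p-value against the F distribution, and repeat the whole series independently; this only shows the shape of the comparison.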

Most of the plastic contamination papers I have read are far, far from even that naïve baseline.


> The entire point of a control is to test for that sort of contamination

No, the point of a control is to give you a reference point that shares all the systemic biases and unknown unknowns, not to detect those biases. If you follow the same procedure on a known null and on your experiment and observe an effect, assuming you really did exactly the same thing except the studied intervention, you can subtract out the bias.

This is one example of technical jargon diverging from colloquial or intuitive use, and it is the type of thing that people who haven't had statistics or scientific-process education often struggle with, because they keep applying their colloquial intuitions.

You talk like you understand this in the rest of the comment, so I'm confused by this framing. The person you are replying to points out (in my reading) that contamination of the control 1) does happen in practice (in the sense that there was an accidental intervention) and 2) if the gloves contaminated both the measurements and the control the same way, then the control is serving exactly its purpose.


You’re repeating several of my points in your own words, supporting them and not arguing with them, even though your language and emphasis suggests you think you are arguing.

> then the entire experiment is invalid

Isn’t that what I said? You even quoted me saying it. But I didn’t say anything about only control being contaminated or mis-measured, I think you’re assuming something I didn’t say. Validity is, of course, compromised if the control is compromised, regardless of what happens to the test samples.

> The control cannot be “mis-measured” […] yes, the control may itself be contaminated […]

So which is it? Isn’t the article we’re commenting on talking about the possibility of mis-measuring? Are you suggesting this article cannot possibly be an issue when measuring control samples? Why not?

Controls absolutely can be mis-measured or contaminated or both. It has been known to happen. It’s bad when this happens because it means the experiment has to be re-done.

> If the experimental process contaminates the controls, it will also contaminate the non-controls

Yes! This is exactly what I was implying, and is exactly how you might end up underestimating the relative presence of whatever you’re looking for in the test, if your classification procedure overestimates it.

> You’re looking for between-group differences

Yes! and this is why if, for example, you didn’t notice your control had stearates and you counted them as microplastics accidentally, and then reported that your test sample had 2x more microplastics than your control, you might have missed the fact that your test actually had 10x more microplastics, or that your control actually had none when you thought incorrectly that it had some.

This, of course, is not the only possible outcome, nor the only way that the results might be distorted. But it is one possible outcome that the Michigan paper at hand is warning against, no?

> Most of the papers I have read are far, far from even that naïve baseline.

Short of it, or exceeding it? Based on earlier comments, I assume you mean they’re not meeting your standards. I don’t know what you’ve read, and my brief googling did not seem to support your claims here so far. Can you provide some references? It would be especially helpful if you showed recent/modern SOTA papers, work that is considered accurate, and is highly referenced.


Any scientific paper that does not document how things were done (methodologies) is basically worthless in the search for truth.

I agree completely. My point is that documenting methodology is standard practice, as is strict quality control, in the microplastics literature. I don’t know what controls are missing according to GP, and we don’t yet have references here to back up that claim. By and large I think researchers are aware of the difficulties measuring this stuff, and doing everything they can to ensure valid science.

Luckily HN software developers, the foremost authority on literally every subject imaginable, are here to bless the world with their insights.

I think there's an important distinction between kinds of smug better-knowing:

"I have unique insight as a non-expert that all experts miss and the entire field is blind to" -> usually nonsense

"I think in this specific instance academically qualified people are missing something that's obvious to me" -> often true.


There’s also the possibility that some of us actually, you know…have subject-matter expertise.

Doubtful, in your case, no?

"Nanomolar" is a dissolved-species concentration unit. It doesn't apply to spectroscopic particle counting.


Uh, yeah. I know what the word means. See my response to the other comment where you say the same thing.

Spiritual equivalent of a life sciences forum discovering memory safety, one person who wrote code for a bit saying they wrote a memory bug in C once, then someone clutching pearls about why all programmers irresponsibly write memory unsafe code given it has a global impact.

Been here 16 years, it's always an adventure seeing whether stuff like this falls into:

A) Polite interest that doesn't turn into self-keyword-association

B) Science journalism bad

C) Can you believe no one else knows what they're doing.

(A) almost never happens, has to avoid being top 10 on front page and/or be early morning/late night for North America and Europe. (i.e. most of the audience)

(B) is reserved for physics and math.

(C) is default leftover.

Weekends are horrible because you'll get a "harshin' the vibe" penalty if you push back at all. People will pick at your link but not the main one and treat you like you're argumentative. (i.e. 'you're taking things too seriously' but a thoughtful person's version)


> Spiritual equivalent of a life sciences forum discovering memory safety, one person who wrote code for a bit saying they wrote a memory bug in C once, then someone clutching pearls about why programmers irresponsibly write memory unsafe code given it has a global impact.

I used to be a code monkey, I wrote systems software at megacorps, and still can't understand why so many programmers irresponsibly write memory unsafe code given it has a global impact.

So Poe's law applies here.


That's the analogy working as intended: the answer to "why do programmers still write memory-unsafe code" is the same shape as "why do microplastics researchers still wear gloves." The real answer is boring and full of tradeoffs. The HN thread version skips to indignation: "they never thought of contamination so ipso facto all the research is suspect"

(to go a bit further, in case it's confusing: both you and I agree on "why do people opt-in to memunsafe code in 2026? There’s no reason to" - yet, we also understand why Linux/Android/Windows/macOS/ffmpeg/ls aren't 100% $INSERT_MEM_SAFE_LANGUAGE yet, and in fact, most new code written for them is memunsafe)


Thank you for helping me understand. I get it now.

You’re ignoring the article to grind your axe.

What do you mean? (Genuinely seems you replied to wrong comment to me. What axe? What’s in the article that’s been ignored?)

They may have meant .exe

You joke, but given that SWE/AI researchers literally invented AI that does everything else for them and is often superhuman at most tasks, I would unironically prefer the opinion of the creator of such a system over most others for most things.

I cooked a steak yesterday therefore I am an expert in biology.

Creating a user interface for the world’s knowledge doesn’t make the developer an expert on the knowledge that the interface holds in its database. Regardless of how sophisticated that interface might be.


'I disagree, therefore I am an expert in skepticism.' The sword cuts both ways.

No it doesn’t. What you’re describing is an oxymoron.

Please. You don't get special treatment for being a skeptic. Either you have the credentials or you don't. Prove you're qualified.

You don’t need to be qualified to be unsure about something. Being unsure is a healthy position because it’s an acknowledgment that you don’t know something entirely. Which can also mean you have an open mind to learn more about that subject.

Being certain, on the other hand, requires an assumption that you are a subject expert.

But this is all moot anyway because you’re constructing an elaborate strawman here. The original point was that the GP (possibly you?) trusts SWE more than others because they built AI. And I said building databases doesn’t make you smart at the subject loaded into the database.

Really, this whole premise of SWEs assuming expertise on subjects they’ve trained AI on says more about the Dunning-Kruger effect than anything of value in our little tangent.


You can be skeptical in wrong ways. See solipsism for example.

Typically when I get genuine responses to the question, "What would change your mind?" it's an incredibly high bar that is practically impossible to achieve. That's not necessarily a bad thing, but when skepticism is applied without deliberation, it supports biases rather than truth.

So yes, you do need to be qualified to be skeptical, SWEs doubly so.


Oh wow, I've never seen such a prime specimen in the wild. I feel like you should be pinned to a piece of cardboard in a drawer somewhere.

You'd trust a programmer to be your doctor? Or design the structure of your house?

Not OP, but:

> "You found a paper"

johnbarron didn't find it. The authors cited it as foundational to their own work. It's ref. 38 in the paper under discussion. From the paper: "this finding had not been reported in the MP literature until 2020, when Witzig et al. reported that laboratory gloves submerged in water leached residues that were misidentified as polyethylene."[1]

> "most of these plastic studies are [not] doing the necessary controls"

Which studies? The paper they linked surveys 26 QA/QC review articles[1]. Seems well understood.

> "a laboratory setting where nanomolar detection levels are used to make broad claims"

This is like saying "miles per gallon" when discussing weight. "nanomolar detection levels"...microplastics are individual particles identified by spectroscopy, reported as particles per mm^2. "Nanomolar" is a dissolved-species concentration unit. It has nothing to do with particle counting. (I, and other laymen, understand what you mean but you go on later in the thread to justify your unsourced and unjustified claims here via your subject-matter expertise.)

> "(almost impossible) task of preventing the contamination"

The paper provides open-access spectral libraries and conformal prediction workflows to identify and subtract stearate false positives from existing datasets[1]. Prevention isn't the strategy. Correction is. That's the entire point of the paper they linked and the follow-up in [2]

[1] https://pubs.rsc.org/en/content/articlehtml/2026/ay/d5ay0180...

[2] https://news.umich.edu/nitrile-and-latex-gloves-may-cause-ov...


> This is like saying "miles per gallon" when discussing weight. "nanomolar detection levels"...microplastics are individual particles identified by spectroscopy, reported as particles per mm^2. "Nanomolar" is a dissolved-species concentration unit. It has nothing to do with particle counting. (I, and other laymen, understand what you mean but you go on later in the thread to justify your unsourced and unjustified claims here via your subject-matter expertise.)

This paper used “light-based spectroscopy” [1]. Many others use methods that depend on gas chromatography or NMR. A relatively infamous recent example used pyrolysis GCMS to make low-concentration measurements (hence: nanomolar), which they credulously scaled up by some huge factor, and then made idiotic claims about plastic spoons in brains.

Relatively little quantitative science in this area depends on counting plastic particles in microscopic images, but it’s what gets headlines, because laypeople understand pictures.

[1] as an aside, the choice of terminology here is noteworthy. A simple visible-light absorption spectrum is also “light based spectroscopy”, but it measures the aggregate response of a sample of a heterogeneous mixture, and is conventionally converted to molar equivalents via some sort of calibration curve (otherwise you can’t conclude anything). But there could be other approaches that are closer to microscopy, which they also discuss. “Particles per square millimeter” is also a unit of concentration (albeit a shitty one, unless your particles are of uniform mass).

Anyway, the point is that these kinds of quantitative analyses are all trying to do measurements that are fundamentally about concentration, which is why I chose the words that I did.


> ...

"1 nanomole of polyethylene" requires you to pick an arbitrary average molecular weight.

This changes the answer by orders of magnitude depending on what you pick.

Which is why nobody does it.
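A quick back-of-the-envelope sketch of the point (the chain lengths are arbitrary assumptions, purely for illustration):

```python
# Converting a mass of polyethylene to moles requires picking an average
# chain length, and the answer swings by orders of magnitude with that pick.
MONOMER_G_PER_MOL = 28.05  # ethylene repeat unit, C2H4

def micrograms_to_nanomoles(mass_ug, monomers_per_chain):
    molar_mass = MONOMER_G_PER_MOL * monomers_per_chain  # g/mol
    return (mass_ug * 1e-6) / molar_mass * 1e9           # nmol

short_chains = micrograms_to_nanomoles(1.0, 1_000)     # ~0.036 nmol
long_chains = micrograms_to_nanomoles(1.0, 100_000)    # ~0.00036 nmol
print(short_chains / long_chains)  # 100x spread from chain length alone
```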

> Relatively little quantitative science in this area depends on counting plastic particles in microscopic images...Many others use methods that depend on gas chromatography or NMR.

So we're dismissive of some subset of papers, because they get false positives using toy methods.

Real science would use gas chromatography.

But...the paper we're dismissing tested gas chromatography. And found the same false positive. [1, in abstract]

> A relatively infamous recent example used pyrolysis GCMS to make low-concentration measurements (hence: nanomolar)

The brain study I'm guessing you are referring to, [2], measured low concentrations, yes.

But it reported them in ug/g.

Because polymers don't have a defined molecular weight.

> made idiotic claims about plastic spoons in brains

The brain study I'm guessing you are referring to, [2], does not mention spoons, or come close.

Are we sure there's a paper that did that?

[1] Witzig et al, https://pubs.acs.org/doi/10.1021/acs.est.0c03742, "Therefore, u-Raman, u-FTIR, and pyr-GC/MS were further tested for their capability to distinguish among PE, sodium dodecyl sulfate, and stearates. It became clear that stearates and sodium dodecyl sulfates can cause substantial overestimation of PE."

[2] Campen et al, https://pubmed.ncbi.nlm.nih.gov/38765967/, "Bioaccumulation of Microplastics in Decedent Human Brains"


Doesn't take an expert to see that fatty acids and hydrocarbon chains from the degradation of polyethylene look nearly the same.

Not sure what you mean or how it’s related. If the idea is microplastics aren’t actually a problem, I’m totally open to that. But “it’s possible everyone involved is overrating it due to scientists seeing fatty acids or hydrocarbons and calling it plastic” needs a little more than anon assertion :)

PE consists of very long hydrocarbon chains. It can degrade into shorter hydrocarbon chains. Fatty acids also have long hydrocarbon chains. The detection method for microplastics commonly involves pyrolysis, which breaks down polymers into smaller molecules. It's not hard to see that they'll end up looking nearly the same.

Fair enough! I'll add that to my pile of "evidence of microplastics overestimation"

>> That doesn’t mean that most of these plastic studies are doing the necessary controls

That was never my argument. Read it again.


A rediscovery...six years later:

"When Good Intentions Go Bad — False Positive Microplastic Detection Caused by Disposable Gloves" - https://pubs.acs.org/doi/10.1021/acs.est.0c03742

From the study in the OP you cannot derive that current studies on microplastics are not valid. The headline framing, that scientists have been measuring their own gloves, is science journalism doing what it does best...

Stearates are water-soluble soaps, so any study using standard wet-chemistry extraction, and that is most of them, washes them away before analysis even begins. Stearates also can't mimic polystyrene, PET, PVC, nylon, or any of the dozens of other polymers routinely found in environmental and human tissue samples.

Nothing to see here.


Why do you say "nothing to see here"? The existence of the earlier paper does not imply that procedures were corrected for this afterwards. Is there any published protocol for a study since that first article that mentions avoiding stearate powder from gloves?

"Israeli politician Yitzik Kroizer endorses targeting children": https://www.reddit.com/r/PublicFreakout/comments/1s42uo8/isr...

"IDF assault and arrest CNN journalists covering the settlements": https://www.reddit.com/r/PublicFreakout/comments/1s5zzey/isr...

"Israel responsible for two-thirds of record 129 press killings in 2025" - https://www.theguardian.com/media/2026/feb/25/record-number-...


The movie got 12 posts here on HN, pumped by Amazon Studios within a day of release, but suddenly it can't be discussed?

If you have some proof of astroturfing you should write a blog post and share it on HN; it might make for a very good post here. Otherwise it feels wildly inappropriate (not to mention incredibly unlikely that they would spend marketing money on astroturfing here of all places). Andy Weir has written some books that are incredibly successful in tech-industry circles, with Hail Mary currently the most popular, if not slightly under The Martian; chances are there's just going to be a lot of talk about it. But even if there is astroturfing, telling people not to watch the movie in a thread where someone is showing off their space photography is inappropriate and misplaced.

The author of the great astrophotography is not the OP of the HN post.

And that is already one early, possibly isolated, indicator of astroturfing: when the movie-related posts got no traction, they went looking for related subjects...


That proves nothing. You are making assumptions. Did you look at the submission history of the poster?

HN runs on user-submitted posts. People submit things they find interesting, and things they believe others will find interesting.


>>People submit things they find interesting, and things they believe others will find interesting.

I can hear the sounds of Kumbaya, My Lord.... this is a more realistic take: https://news.ycombinator.com/item?id=47520761


>> HN runs on user-submitted posts.

It's about the timing.


You are still making assumptions.

That is how every investigation starts....

Under the assumption that Amazon has decided to astroturf on this niche tech news site, that is still okay and allowed, as long as the article provided is interesting. There are countless posts here which are just blogs from various companies, sometimes posted here by the companies themselves: Meta, Apple, small startups, movie companies, whatever. It's all allowed here as long as the content posted has substance and is interesting. What's important is that the comments are productive and interesting and not being used as a soapbox, which is likely why your parent comment is negatively voted. There are countless platforms for you to suggest others not see a movie, but an HN post about astrophotography is not one of them.

>> Under the assumption that amazon has decided to astroturf on this niche tech news site

Even Microsoft astroturfs here...

Satya Nadella, Microsoft FY2019 Q1 earnings call [1]:

“In fact, this morning, I was reading a news article in Hacker News, which is a community where we have been working hard to make sure that Azure is growing in popularity and I was pleasantly surprised to see that we have made a lot of progress..."

[1] - https://www.fool.com/earnings/call-transcripts/2018/10/24/mi...


There is a stark difference between a division aimed at developers attempting to astroturf on a tech industry (mostly developer) news aggregator and a movie studio.

It's a whole industry with hundreds of employees and thousands of bots: https://news.ycombinator.com/item?id=47520761

I don't think I have ever submitted my own work to hn, but I'm not astroturfing what I do submit.

>> It’s about friendship and loneliness and the fragility of the human experience and the triumph of the human spirit!

So is every Disney movie, and that is what this is, but with the crappy Amazon Studios take on it.

>> Anyways, everybody’s a critic these days,

Do you believe a movie can objectively be considered good or bad? If you do, then you believe some are better critics than others, the same way some are better coders than others or better basketball players than others?


You're asking the wrong person lol. I can give you a list of "objectively bad" movies that I think are incredible for a variety of defensible reasons.

Just off the top of my head as I briefly scan shit sitting on the shelves of my office:

- Joe Dirt

- Death Wish 3

- Thrashin

- Hackers

- Mortal Kombat

- Uncle Buck

- The Incredible Burt Wonderstone

- Tapeheads

- Prayer of the Rollerboys

- Weekend at Bernie's

Not exactly Fellini, and some are barely even Andy Sidaris if we're being honest, but every movie in that list is amazing for different reasons. An objective critique of any of them (especially in context with "film", as a shapeless, vague concept) misses the point and the spirit of each and every one. But I am an uncultured heathen, so ...


Uncle Buck is on your list of objectively bad movies?!?!?

Yes, all this talk about AI is extremely distracting... https://www.youtube.com/shorts/LDPDDS3HaGo

Reddit aviation groups are full of professional pilots saying how terrified they are of flying into La Guardia or JFK, recounting close calls, with one saying how he avoided those two for 10 years...
