Hacker News | anderskaseorg's comments

The point of trusted publishing is supposed to be that the public can verifiably audit the exact source from which the published artifacts were generated. Breaking that chain via a private repo is a step backwards.

https://docs.npmjs.com/generating-provenance-statements

https://packaging.python.org/en/latest/specifications/index-...


Pardon my limited understanding, but my read of the suggestion was simply to perform the exact same operation the public would use to verifiably audit the exact source when generating the official published artifacts; the point was just that there was no automation to do so directly from the public repo.

The maintainer can verify the correspondence between source and release, but the public has been deprived of this verifiability.

This matters. Consider the XZ Utils compromise where a malicious maintainer hid the line that triggers compilation of the (otherwise dormant) backdoor payload in a generated file present only in the release tarball: https://www.openwall.com/lists/oss-security/2024/03/29/4. If the public had the ability to audit that the release tarball was correctly built from the version-controlled code, this would have been much more difficult to hide.


Thanks for circling back.

I interpret your comment as emphasizing that the current norm, in which publicly accessible (GitHub) infrastructure builds releases in the open and thus allows public review of logs and artifacts, provides tremendous value. (I admit that true 100% binary reproducibility is often a nearly unreachable goal and is not yet typically expected as the norm.)

> Breaking that chain via a private repo is a step backwards

I was stating that performing a reproducible build elsewhere and distributing the output could in theory still be validated, though it would require re-running said build for oneself and comparing the outputs. This might encourage the pursuit of 100% reproducibility! The chain need not be considered broken just because the final link is private.

> the public has been deprived of this verifiability

This is not what I was trying to point out, though I agree the cost of verifying reproducibility would be higher. My point was that anyone could still perform the same steps themselves and verify the output. Yes, this would be more work than reviewing logs on GitHub.

OP's primary concern with today's standard approach appears to be the automated connection from GitHub build action -> release. Even simply requiring manual maintainer intervention to copy the action output over to a release seems to satisfy both their and your concerns.

> If the public had the ability to audit that the release tarball was correctly built from the version-controlled code

I am not intimately familiar with all the details of the XZ fiasco, but agree that it offers an opportunity to learn and make changes toward ensuring nothing similar can happen to any project again. If I am reading your link correctly, it serves as an example of members of the public (not a maintainer of XZ) doing exactly what you said: auditing the release tarball. IIRC, this occurred only because of a performance regression, and only after an additional point release (which apparently allowed the attacker to fix a bug in their backdoor).


Right, the public was able to spend manual effort hand-auditing one specific tarball after it had already been singled out as suspicious for other reasons. In order for verification to effectively increase supply chain security, it needs to become uniformly standardized, fully automated, and ubiquitous. That’s the ultimate goal of the provenance attestation mechanisms that would be defeated by indirection through private repositories.

If you want to require extra maintainer intervention for releases, there are better mechanisms available for that, such as workflow_dispatch.
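For instance, a release job gated on workflow_dispatch might look roughly like this (an illustrative sketch, not a drop-in config; names and steps are assumptions):

```yaml
# Illustrative sketch: the release only runs when a maintainer explicitly
# triggers it, while the build itself still happens in public CI.
name: release
on:
  workflow_dispatch:
    inputs:
      tag:
        description: "Tag to release"
        required: true
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # needed for trusted publishing / provenance attestation
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.tag }}
      # build and publish steps would go here
```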


There is a well-defined reload point—it’s the `subsecond::call` wrapper around `tick()`. But the hypothetical design that you seem to have in mind where this doesn’t exist would not have a well-defined reload point, so it would need to be able to preempt your program anywhere.

Layout changes are supported for structs that don’t persist across the well-defined reload point.
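A minimal sketch of the idea (the names here are hypothetical, not the actual subsecond API): the host loop only swaps in a patched implementation at the boundary between ticks, so the swap can never preempt a tick in progress.

```rust
// Hypothetical sketch of a well-defined reload point; not the real
// subsecond API. The tick implementation is only ever replaced between
// calls, never while a tick is running.
type Tick = fn(&mut u64);

fn tick_v1(state: &mut u64) {
    *state += 1;
}

fn tick_v2(state: &mut u64) {
    *state += 10; // stand-in for a hot-patched version
}

fn main() {
    let mut state = 0u64;
    let mut current: Tick = tick_v1;
    for frame in 0..4 {
        // The reload point: swapping implementations happens only here,
        // at the top of the loop, never in the middle of a tick.
        if frame == 2 {
            current = tick_v2;
        }
        current(&mut state);
    }
    // Frames 0 and 1 add 1 each; frames 2 and 3 add 10 each.
    assert_eq!(state, 22);
}
```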


Yes, uv has a standard permissive open source license (Apache-2.0 OR MIT): https://github.com/astral-sh/uv/?tab=readme-ov-file#license


Yes, allowing this to execute would be very unsound:

    let lock = RwLock::new(Box::new(111));
    let r: &i32 = &**lock.read().unwrap(); // points to 111
    *lock.write().unwrap() = Box::new(222); // allocates a new Box and deallocates 111
    println!("{}", *r); // use after free


It can be done safely with an upgrade method that requires an owned read guard. The RwLock implementation provided by the parking_lot crate supports this:

    let lock = RwLock::new(Box::new(111));
    let read = lock.upgradable_read();
    let r: &i32 = &**read; // points to 111
    *RwLockUpgradableReadGuard::upgrade(read) = Box::new(222); // error[E0505]: cannot move out of `read` because it is borrowed
    println!("{}", *r);


Of course (but that’s not relevant to the original scenario, where the programmer is hypothetically not aware that the read lock is still being held, let alone that they could manually upgrade it after changing to a different lock library).


The problem is that in the `if let` case, the `else` block has no access to the read guard. It's out of scope, except that the compiler hasn't dropped it yet.


Clippy already has a lint for this pattern with Mutex. It should be trivial to extend it to cover RwLock.

    error: calling `Mutex::lock` inside the scope of another `Mutex::lock` causes a deadlock
      --> src/main.rs:5:5
       |
    5  |       if let Some(num) = *map.lock().unwrap() {
       |       ^                   --- this Mutex will remain locked for the entire `if let`-block...
       |  _____|
       | |
    6  | |         eprintln!("There's a number in there: {num}");
    7  | |     } else {
    8  | |         let mut lock2 = map.lock().unwrap();
       | |                         --- ... and is tried to lock again here, which will always deadlock.
    9  | |         *lock2 = Some(5);
    10 | |         eprintln!("There will now be a number {lock2:?}");
    11 | |     }
       | |_____^
       |
       = help: move the lock call outside of the `if let ...` expression
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#if_let_mutex
       = note: `#[deny(clippy::if_let_mutex)]` on by default


There are a lot of alternative lock implementations that are used all the time, with the most common ones probably being tokio's RwLock/Mutex and parking_lot.

That lint won't help for those.


There was an idea floated for an attribute, `#[diagnostics::lint_as_temporary(reason = "...")]`, which would be added by implementors of such locks: https://github.com/rust-lang/rust/issues/131154#issuecomment...


There are plenty of systems that sacrifice consistency even while the network is fully connected, in the name of performance—for example, DNS, or any system with a caching proxy server.
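The caching-proxy case can be sketched in a few lines (an illustrative DNS-style TTL cache, not any particular resolver's implementation): within the TTL, the cache answers without consulting the origin at all, so it can serve stale data even while the network is fully connected.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch of a DNS-style cache: within the TTL it answers from the cache
// without consulting the origin, trading consistency for latency.
struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    // Look up `name`, falling back to (and caching) the origin's answer.
    fn resolve(&mut self, name: &str, origin: &HashMap<String, String>) -> Option<String> {
        if let Some((cached_at, value)) = self.entries.get(name) {
            if cached_at.elapsed() < self.ttl {
                return Some(value.clone()); // fast path: possibly stale
            }
        }
        let value = origin.get(name)?.clone();
        self.entries.insert(name.to_string(), (Instant::now(), value.clone()));
        Some(value)
    }
}

fn main() {
    let mut origin = HashMap::new();
    origin.insert("example.test".to_string(), "192.0.2.1".to_string());

    let mut cache = TtlCache::new(Duration::from_secs(300));
    assert_eq!(cache.resolve("example.test", &origin).as_deref(), Some("192.0.2.1"));

    // The origin updates, but the cache keeps answering the old value
    // until the TTL expires, even though the origin is fully reachable.
    origin.insert("example.test".to_string(), "192.0.2.2".to_string());
    assert_eq!(cache.resolve("example.test", &origin).as_deref(), Some("192.0.2.1"));
}
```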


Yeah, CAP is about the best possible behavior a system can have, but you can always do worse.


You’re certainly allowed to make the client a more active participant in your consensus protocol, but then it needs to play by the same rules if you want the system to have guarantees. For example, you need to handle network partitions between clients and some servers, and you need to be able to reconcile multiple reads from servers that might have seen different sets of writes. The CAP theorem still applies to the system as a whole.
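The reconciliation step can be sketched as follows (an illustrative last-writer-wins rule over version-stamped reads; the scheme and names are my own, not any particular system's protocol): the client collects answers from several replicas and keeps the highest-versioned one.

```rust
// Sketch: a client reading from several replicas must reconcile answers
// that reflect different sets of writes. Here each replica tags its value
// with a monotonically increasing version, and the client applies a
// simple last-writer-wins rule.
fn reconcile(reads: &[(u64, &str)]) -> Option<(u64, String)> {
    reads
        .iter()
        .max_by_key(|(version, _)| *version)
        .map(|(version, value)| (*version, value.to_string()))
}

fn main() {
    // Three replicas; one is behind (it missed the write at version 7).
    let reads = [(7, "new"), (7, "new"), (3, "old")];
    assert_eq!(reconcile(&reads), Some((7, "new".to_string())));

    // With no reachable replicas (a partition), the client gets nothing;
    // it cannot manufacture an answer and keep the system's guarantees.
    assert_eq!(reconcile(&[]), None);
}
```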


They’re doing a slow phase-out over a long time to try to avert a wave of bad publicity that threatens their browser monopoly, but the phase-out has already begun as of June.

https://developer.chrome.com/docs/extensions/develop/migrate...

https://www.bleepingcomputer.com/news/google/google-chrome-w...


The mechanism is a clever application of quines (self-reproducing programs), first explained in the classic lecture “Reflections on Trusting Trust” by Ken Thompson:

https://dl.acm.org/doi/pdf/10.1145/358198.358210

Russ Cox obtained the actual code for Thompson’s compiler backdoor and presented it here:

https://research.swtch.com/nih


The jurisdictional status of .local and other standards-reserved special-use domains is explained by RFC 6761 section 3:

https://datatracker.ietf.org/doc/html/rfc6761#section-3

And ICANN is bound by the IETF/ICANN Memorandum of Understanding Concerning the Technical Work of the IANA, which prevents it from usurping that jurisdiction:

https://www.icann.org/resources/pages/agreements-en

