
In recent years, it's started to feel like you can't trust third-party dependencies and extensions at all anymore. I no longer install npm packages that have more than a few transitive dependencies, and I've started to refrain from installing vscode or chrome extensions altogether.

Time and time again, they either get hijacked and malicious code added, or the dev themselves suddenly decides to betray everyone's trust and inject malicious code (see: Moq), or they sell out to some company that changes the license to one where you have to pay hundreds of dollars to keep using it (e.g. the recent FluentAssertions debacle), or one of those happens to any of the packages' hundreds of dependencies.

Just take a look at eslint's dependency tree: https://npmgraph.js.org/?q=eslint

Can you really say you trust all of these?



> Can you really say you trust all of these?

We need better capabilities. E.g. when I run `fd`, `rg`, or a similar tool, why should it have Internet access?

IMHO, just eliminating Internet access for all tools (e.g. in a power mode) might fix this.
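On Linux you can already approximate this per-command with network namespaces. A sketch (assumes unprivileged user namespaces are enabled; `rg` and the search pattern are purely illustrative):

```shell
# Give the tool its own empty network namespace: full filesystem access,
# but no route to the Internet at all. -r maps you to root inside the
# namespace so no sudo is needed; -n unshares the network.
unshare -r -n rg 'some-pattern' src/
```

Anything the tool tries to connect to fails immediately, since the namespace has no configured interfaces.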

The second problem is that we have merged CI and CD. The production/release tokens should ideally not be on the same system as the ones doing regular CI. More users need access to CI (especially in the public case) than CD. For example, a similar one from a few months back https://blog.yossarian.net/2024/12/06/zizmor-ultralytics-inj...
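One concrete mitigation for this class of attack (and the standard advice after the tj-actions incident) is to pin third-party actions to a full commit SHA instead of a mutable tag. A sketch of a workflow step; the action name and SHA are purely illustrative:

```yaml
steps:
  # A tag like @v4 can be re-pointed by whoever compromises the repo;
  # a full 40-character commit SHA cannot be silently swapped.
  - uses: some-org/some-action@0123456789abcdef0123456789abcdef01234567 # v4
```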


> We need better capabilities. E.g. when I run `fd`, `rg` or similar such tool, why should it have Internet access?

Yeah!! We really need to auto sandbox everything by default, like mobile OSes. Or the web.

People browse the web all the time (well, except Richard Stallman), running tons of wildly untrusted code, plenty of it malicious. And apart from the occasional zero-day, people don't pay much attention to it, and will happily visit any random website on the same machine where they also store sensitive data.

At the same time, when I open a random project from GitHub in VSCode, it asks whether the project is "trusted". If not, it disables most features, like the LSP server. And why? Because the OS doesn't sandbox anything by default. It's maddening.


I’ve been doing all of my dev work in a virtual machine as a way to clamp things down. I’ve even started using a browser in a VM as a primary browser.

Computers are fast enough that the overhead isn't noticeable for what I do.

For development, I think Vagrant should make a comeback as one of the first things to set up in a repo/group of repos.


https://www.qubes-os.org/ is the extension of this.


I’m not sure I can recommend Qubes entirely due to the usability aspect.

I’ve used Qubes several times for a week at a time over the last few years. It’s gotten better, but they really need someone to look at the user experience of it all for it to be a compelling option.

I regularly question whether what I’m doing is making it less secure, because I don’t understand exactly everything Qubes is doing. I know how all the pieces work individually (Xen, etc.).

Outside of configuration, I believe I’d have to ditch any hope of running 3D-anything with any expectation of performance. That’s simply a non-starter as someone who has written off “nation-state actor targeting me, specifically” as something I can defend against.

And lastly, I’m deeply skeptical of anything that loudly wears the Snowden-badge-of-approval as that seems to follow grifts.

My main workstation is a Mac and I’m doing this on Parallels. Would Qubes probably be more secure? Maybe. But it comes at a massive usability hit.


I get a lot of usability from having one single operating system.

Sure it’s less secure than full isolation, but full isolation is a real pain.


OpenBSD's pledge[0] system call is aimed at helping with this, although it's more of a defense-in-depth measure on the maintainer's part, not the user's.

> The pledge() system call forces the current process into a restricted-service operating mode. A few subsets are available, roughly described as computation, memory management, read-write operations on file descriptors, opening of files, networking (and notably separate, DNS resolution). In general, these modes were selected by studying the operation of many programs using libc and other such interfaces, and setting promises or execpromises.

[0]: https://man.openbsd.org/pledge.2


Pledge is for self-isolating, it helps with mistakes but not against intentional supply chain attacks.


How so? Obviously this is ineffective at the package level, but if the thing spawning these processes (like the GitHub runners, or Node itself) added support to enter a "restricted" mode and pledged, then that would help, no?


According to https://www.openbsd.org/papers/eurobsdcon2017-pledge.pdf pledge turns off upon execve. Surely it would be quite limiting for runners to use it.

As far as I can see, its purpose is mostly mitigation/self-defence for vulnerabilities in C-based apps: basically limiting what happens once the attacker has exploited a vulnerability. Maybe it has other uses.

It could be used to defend against bugs in the Node runtime itself, as you say, but as I understand it, vulnerabilities in the Node runtime itself are quite rare, so more fine-grained limitations could be implemented within the runtime.


I'm not much of an OpenBSD user, but I have been meaning to understand whether this is the hole execpromises is intended to fill.

At the very least, I think execpromises was added a year after the documentation that you linked, so it's worth looking into.


I've found firejail to be a useful tool for this (https://github.com/netblue30/firejail), and additionally use opensnitch (https://github.com/evilsocket/opensnitch) to monitor for unexpected network requests.
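As a sketch of the kind of thing firejail makes easy (assuming it's installed; the build command is illustrative):

```shell
# --net=none creates an empty network stack for the process, so a
# compromised build script cannot phone home or exfiltrate anything.
firejail --net=none npm run build
```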

For CI/CD, using something like ArgoCD lets you avoid giving CI direct access to prod. It still needs write access to a git repo, and ideally some read access to Argo to check whether the deployment succeeded, but it limits the surface area.


Great points! Harden-Runner (https://github.com/step-security/harden-runner) is similar to Firejail and OpenSnitch but purpose-built for CI/CD context. Harden-Runner detected this compromise due to an anomalous outbound network request to gist.githubusercontent.com.

Interestingly, Firejail itself uses Harden-Runner in its GitHub Actions workflows! https://github.com/search?q=repo%3Anetblue30%2Ffirejail%20ha...


bubblewrap is a safer alternative to firejail because it does not use setuid to do its job, and it is used by flatpak (so hopefully has more eyes on it, but I have no idea).

https://wiki.archlinux.org/title/Bubblewrap

You do have to assemble isolation scripts by hand, though; it's pretty low-level. Here is a decent comment which closely aligns with what I'm using to isolate npm/pnpm/yarn/etc., so I see no need to repeat it:

https://news.ycombinator.com/item?id=43369927


FreeBSD has Capsicum [0] for this. Once a process enters capability mode, it can't do anything except by using already opened file descriptors. It can't spawn subprocesses, connect to the network, load kernel modules or anything else.

To help with things that can't be done in the sandbox, e.g. DNS lookups and opening new files, it provides the libcasper library which implements them using helper processes.

Not all utilities are sandboxed, but some are and hopefully more will be.

Linux recently added Landlock [1] which seems sort of similar, although it has rulesets and doesn't seem to block everything by default, as far as I can tell from quickly skimming the docs.

[0] https://wiki.freebsd.org/Capsicum

[1] https://docs.kernel.org/userspace-api/landlock.html


I don't think it would help in this case, where the entire process can be replaced with a malicious version. It just won't make the Capsicum call.

What you really want is something external and easily inspectable, such as systemd per-service security rules, or flatpak sandboxing. Not sure if FreeBSD has something like this.


You also need to block write access, so they can’t encrypt all your files with an embedded public key. And read access so they can’t use a timing side channel to read a sensitive file and pass that info to another process with internet privileges to report the secret info back to the bad guy. You get the picture, I’m sure.


I get the picture, yes, namely that probably 99% of project dependencies don't need I/O capabilities at all.

And even if they do, they should be controlled in a granular manner i.e. "package org.ourapp.net.aws can only do network and it can only ping *.aws.com".

Having a finer-grained security model that is enforced at the kernel level (and is non-circumventable barring rootkits) is like 20 years overdue at this point.

Every single big org is dragging their feet.


> You also need to block write access, so they can’t encrypt all your files with an embedded public key. And read access so they can’t use a timing side channel to read a sensitive file and pass that info to another process with internet privileges to report the secret info back to the bad guy. You get the picture, I’m sure.

Indeed.

One can think of a few broad capabilities that will drastically reduce the attack surface.

1. Read-only vs. read-write access

2. Access only to the current directory and its subdirectories

3. Configurable Internet access

Docker mostly gets it right. I wish there was an easy way to run commands under Docker.

E.g.

If I am running `fd`

1. Mount the current directory into Docker read-only, without Internet access (and without access to the local network or other processes)

2. Run `fd`

3. Print the results

4. Destroy the container
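A minimal wrapper along those lines might look like this (a sketch, assuming Docker is installed; the image is illustrative and would need the tool available in it):

```shell
#!/bin/sh
# Run a single command against the current directory: read-only mount,
# no network, container destroyed afterwards (--rm).
docker run --rm \
    --network none \
    --read-only \
    --volume "$(pwd)":/work:ro \
    --workdir /work \
    alpine:latest \
    "$@"
```

Invoked as e.g. `./contain.sh grep -r pattern .` (the script name is hypothetical).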


This is exactly what the tool bubblewrap[1] is built for. It is pretty easy to wrap binaries with it and it gives you control over exactly what permissions you want in the namespace.

[1]: https://github.com/containers/bubblewrap


> 1. Mount current read-only directory to Docker without Internet access (and without access to local network or other processes) 2. Run `fd` 3. Print the results 4. Destroy the container

Systemd has a lot of neat sandboxing features [1] which aren't well known but can be very useful for this. You can get pretty far using systemd-run [2] in a script like this:

  #!/bin/sh
  
  uid="$(id -u)"
  gid="$(id -g)"
  cwd="$(pwd -P)"
  
  sudo systemd-run --system --pty --same-dir --wait --collect --service-type=exec \
      --uid="$uid" \
      --gid="$gid" \
      -p "TemporaryFileSystem=/:ro /tmp" \
      -p "BindReadOnlyPaths=-/bin -/sbin -/usr/bin -/usr/sbin -/lib -/lib64 -/usr/lib -/usr/lib64 -/usr/libexec" \
      -p "BindPaths=$cwd" \
      -p "PrivateNetwork=true" \
      -p "PrivateDevices=true" \
      -p "PrivateIPC=true" \
      -p "RestrictNamespaces=true" \
      -p "RestrictSUIDSGID=true" \
      -p "CapabilityBoundingSet=" \
      "$@"
This creates a blank filesystem with no network or device access, and bind-mounts only the specified paths.

Unfortunately, TemporaryFileSystem requires running under the system instance of the service manager rather than a per-user instance, so that will generally mean running as root (hence sudo). One approach is to create a setuid binary that does the same without needing sudo.

[1] https://www.freedesktop.org/software/systemd/man/latest/syst...

[2] https://www.freedesktop.org/software/systemd/man/latest/syst...

You could also use bubblewrap [3] pretty similarly, and may not need to use sudo if unprivileged user namespaces are allowed by your kernel.

  #!/bin/sh
  
  cwd="$(pwd -P)"
  
  bwrap --new-session --die-with-parent \
      --tmpfs /tmp \
      --ro-bind-try /bin /bin \
      --ro-bind-try /sbin /sbin \
      --ro-bind-try /usr/bin /usr/bin \
      --ro-bind-try /usr/sbin /usr/sbin \
      --ro-bind-try /lib /lib \
      --ro-bind-try /lib64 /lib64 \
      --ro-bind-try /usr/lib /usr/lib \
      --ro-bind-try /usr/lib64 /usr/lib64 \
      --ro-bind-try /usr/libexec /usr/libexec \
      --bind "$cwd" "$cwd" \
      --dev-bind /dev/null /dev/null \
      --dev-bind /dev/zero /dev/zero \
      --dev-bind /dev/random /dev/random \
      --unshare-net \
      --unshare-ipc \
      --cap-drop ALL \
      --chdir "$cwd" \
      "$@"
[3] https://github.com/containers/bubblewrap


If you require network (for pnpm to install packages, etc), you also often have to add readonly access to /etc/ssl, or https wouldn't work.

It might also be helpful to just use --unshare-all, and then whitelist things you actually need (--share-net, etc).
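That inverted approach might look like the following sketch (paths assume a typical merged-/usr Linux layout; the install command is illustrative):

```shell
# Start from nothing (--unshare-all) and grant back only what's needed:
# the network, read-only system dirs, TLS certificates, and DNS config.
bwrap --unshare-all --share-net \
    --ro-bind /usr /usr \
    --symlink usr/bin /bin \
    --symlink usr/lib /lib \
    --symlink usr/lib64 /lib64 \
    --ro-bind /etc/ssl /etc/ssl \
    --ro-bind /etc/resolv.conf /etc/resolv.conf \
    --proc /proc --dev /dev --tmpfs /tmp \
    --bind "$PWD" "$PWD" --chdir "$PWD" \
    --die-with-parent \
    pnpm install
```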


But that's what firejail and docker/podman are for. I never run any build pipeline on my host system, and neither should you. Build containers are a pretty good mitigation for these kinds of security risks.


> We need better capabilities.

I'd love to say "just use Kubernetes and run Nexus as a service inside" but unfortunately Network Policies are seriously limited [1]...

[1] https://kubernetes.io/docs/concepts/services-networking/netw...


This is the death of fun. Like when you had to use SSL for buying things online.

Adding SSL was not bad, don't get me wrong. It's good that it's the default now. However. At one point it was sorta risky, and then it became required.

Like when your city becomes crime ridden enough that you have to lock your car when you go into the grocery store. Yeah you probably should have been locking it the whole time. what would it have really cost? But now you have to, because if you don't your car gets jacked. And that's not a great feeling.


Just you wait. Here in America when your city becomes crime ridden enough you start leaving it unlocked again.


Used to live near San Francisco, and had a lot of coworkers say they intentionally leave their windows down when parking in SF so that burglars don't break the glass to steal something!


Crime is lower than the 80s and 90s. It has been declining since 2023.


On the other extreme, I can (and do) leave my keys inside my running car while I shop for groceries!


In the era of the key fob it's pretty automatic to lock the car every time. Some cars even literally do it for you. I hardly think of this, let alone get not great feelings about it.


I liked living in a city where I could leave my doors unlocked and windows down. It was less to worry about.


Yes. Same with browser plugins. I've heard multiple free-plugin authors say they're receiving regular offers to purchase their projects. I'm sure some must take up the offer.


For an example of a scary list of such offers, see https://github.com/extesy/hoverzoom/discussions/670


This is why I fork the extensions I use, with the exception of uBlock. Basically just copy the extension folder, if I can't find it on GitHub. That way I can audit the code and not have to worry about an auto-update sneaking in something nefarious. I've had two extensions in the past suddenly start asking for permissions they definitely did not need, and I suspect this is why.

Btw, here's a site where you can inspect an extension's source code before you install it: https://robwu.nl/crxviewer/


Yeah, and thx for the link to the neat crx explorer.

Close to what you do, I started writing my own addon to replace a couple addons whose featureset I use only partially.

For example, when I use Chromium I want 1. to customize the New Tab page, and 2. to add a keyboard shortcut to pin/unpin a tab. These two features are absolutely available in existing extensions, but in addition to the security risk I find them heavy (I don’t need the kitchen sink, just these 2 micro-features!). And so, I have my little personal addon with zero resource usage and just these two features. It’s tiny (20 lines of code!), git-versioned, and never changes / gets pwned. When I need an extra micro-feature, it’s easy enough to add it by searching the addons docs, or asking an LLM.


You shouldn’t need an extension just to add a keyboard shortcut for a menu item. Doesn’t your OS let you map that? On macOS you can in Keyboard Settings


Indeed, one point for MacOS! I use GNOME.


do you know of any other ones like this that post their offers?


No I don’t. But Wladimir Palant is where I get most of my information on the topic (and is probably where I got this link). His blog might have a post (or a comment) that links to similar lists: https://palant.info/categories/security/


This is cool but useless because they redacted all the company names. The opposite of a name and shame, because no name and no shame.


It's not useless. It shows the scale at which extension authors get offers for buyouts. The intended buyer doesn't exactly matter.


Precisely. Thank you.


I have long since stopped using any extension that doesn’t belong to an actual company (password managers for example). Even if they aren’t malware when you installed them, they will be after they get sold.


A bit off topic, but how is the bitwarden browser extension protected against supply-chain attacks (npm dependencies)?


Actual companies also get sold and churned into shit. See LastPass for example.


I got an outreach for an extension I made as a joke. It had like maybe 5000 downloads ever.


> eslint's dependency tree

And if you turn on devDependencies (top right), it goes from 85 to 1263.


I'd also emphasize that there's nothing safe about it being "only dev", given how many attacks use employee computers (non-prod) as a springboard elsewhere.


The original .NET (and I think Java?) had an idea of library-level capability permissions.

That sort of idea increasingly seems like what we need, because reputation-based systems can be gamed too easily: there's no reason an action like this ever needed network access.


It was only recently removed in Java and there was a related concept (adopted from OSGi) designed to only export certain symbols -- not for security but for managing the surface area that a library vendor had to support

But I mentioned both of those things because [IMHO] they both fell prey to the same "humanity bug": specifying permissions for anything (source code, cloud security, databases, Kubernetes, ...) is a lot of trial and error, whereas {Effect: Allow, Action: ["*:*"]} always works, so people just drop a "TODO: tighten permissions" and move on to the next Jira ticket.

I had high hopes for the AWS feature "Make me an IAM Policy based on actual CloudTrail events" but it talks a bigger game than it walks


Are there examples of these types of actions in other circles outside of the .NET ecosystem? I knew about the FluentAssertions ordeal, but the Moq thing was news to me. I guess I've just missed it all.


node-ipc is a recent example from the Node ecosystem. The author released an update with some code that made a request to a geolocation webservice to decide whether to wipe the local filesystem.


Yeesh. Found an article for anyone interested: https://snyk.io/blog/peacenotwar-malicious-npm-node-ipc-pack...

I like this comment from u/mailto_devnull (https://www.reddit.com/r/node/comments/tg451e/do_not_use_nod...):

  Where do I stand on the war? I stand with Ukraine.
  Where do I stand on software supply chain issues? I stand with not fucking around with the software supply chain.


Missed them too. I was always annoyed by FluentAssertions anyway; some contractor added it to a project that we took over, and I couldn't see the value add.


Stealing crypto is so lucrative. So there is a huge 'market' for this stuff now that wasn't there before. Security is more important now than ever. I started sandboxing Emacs and python because I can't trust all the packages.


What do you use for sandboxing?


You should never have trusted blindly in third-party dependencies in the first place.

Abnormal behavior was to trust by default.


Yes, this...

I hope the irony is not completely lost on the fine folks at semgrep that the admittedly "overkill" suggested semgrep solution is exactly the type of pattern that leads to this sort of vulnerability: that of executing arbitrary code that is modifiable completely outside of one's own control.


You should have never trusted them. That ecosystem is fine for hobbyists, but for professional usage you can't just grab something random from the Internet and assume it's fine, security- or quality-wise.


If you want a Cathedral they still exist. Use .NET and only MS Nuget packages.


Yeah, I’ve moved off vscode entirely, back to fully featured out-of-the-box IDEs. JetBrains makes some excellent tools, and I don’t need to install 25 (dubious) plugins for them to be excellent


The alternative would be to find a sustainable funding model for open source. That's the source of these betrayals: almost all of the maintainers end up having to sell their projects just to make a living.

The problem you're describing is an economical and a social one.

Currently, companies exploit the maintainers of open source projects. A rare few projects, like webpack, make it to real funding thanks to their popularity... but the actual state of things is that the maintainers of everything webpack depends on didn't get a single buck for it, which is unfair, don't you think?

On top of sustainable funding, we need to change our workflows towards reproducible build ecosystems that can also revert independently of git repositories. GitHub has become almost the single source of code for the planet, which is insane to bet on from a risk-assessment standpoint. But it's almost impossible to maintain your own registry or mirror of code in most ecosystems due to the sheer number of transitive dependencies.

Take go mod vendor, for example. It's great for pinning your dependencies, but it comes with a lot of overhead work when something like OP's scenario happens to its supply chain. And we need to account for that in our workflows.


It's not going to happen. If buying a forever license for unlimited usage of an open source library cost $1, I'd skip it. Not because I don't want to give money to people who deserve it, but because of the absolute monstrous bureaucratic nightmare that comes from trying to purchase anything at a company larger than 10 people.

Don't even talk about when the company gets a lawyer who knows what a software license is.


Open source has a very sustainable funding model as evidenced by 50 years of continuous, quality software being developed and maintained by a diverse set of maintainers.

I say sustainable because it has been sustained, is increasing in quantity and quality, and reasonably seems to be continuing.

> companies exploit maintainers of open source projects

Me giving something away and others taking what I give is not exploitation. Please don’t speak for others and claim people are exploited. One of the main tenets of gnu is to prevent exploitation.


This amuses me:

> But Lewis Ardern on our team wrote a Semgrep rule to find usages of tj-actions, which you can run locally (without sending code to the cloud) via: semgrep --config r/10Uz5qo/semgrep.tj-actions-compromised.

So "remote code you download from a repo automatically and run locally has been compromised, here run this remote code you download from a repo automatically and run locally to find it"


A semgrep rule is not code; it does not run anything.
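For illustration, a local rule file is purely declarative YAML describing patterns to match; a hypothetical rule in that spirit (the id, message, and exact pattern syntax per Semgrep's YAML-language support are assumptions) might look like:

```yaml
rules:
  - id: flags-tj-actions-usage
    languages: [yaml]
    severity: WARNING
    message: Workflow references tj-actions; verify the pinned commit.
    patterns:
      - pattern: "uses: $ACTION"
      - metavariable-regex:
          metavariable: $ACTION
          regex: ^tj-actions/
```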


I think the conventional approach of checking for vulnerabilities in 3rd party dependencies by querying CVE or some other database has set the current behaviour, i.e. if it's not vulnerable, it must be safe. This implicit trust in vulnerability databases has been exploited in the wild to push malicious code to downstream users.

I think we will see security tools shifting towards "code" as the source of truth when making safety and security decisions about 3rd party packages, instead of relying only on known-vulnerability databases.

Take a look at vet, we are working on active code analysis of OSS packages (+ transitive dependencies) to look for malicious code: https://github.com/safedep/vet


npm supply chain attacks are the lone thing that keeps me up at night, so to speak. I shudder thinking about the attack surface.

I go out of my way to advocate for removing dependencies and push back against introducing small dependencies in a large Ruby codebase. Some dependencies that suck and impose all sorts of costs, from funky idiosyncratic behavior to absurd file sizes (looking at you, any Google-produced Ruby library, especially the protocol-buffer-dependent ones), are unavoidable. But I try to keep fellow engineers honest about introducing libraries that do things like determine the underlying OS or whatever, and push them towards just figuring that out themselves or, at the least, taking "inspiration" from the code in those libraries and reproducing the behavior.

A nice side effect of AI agents and copilots is they can sometimes write "organic" code that does the same thing as third party libraries. Whether that's ethical, I don't know, but it works for me.


Yeah, I’m working on a library where the core is dependency free. It takes longer but I know the provenance of everything—me!


Did you turn off updates on your phone as well? Because 99.999% of people have app auto-updates and every update could include an exploit.

I'm not saying you're wrong not to trust package managers and extensions, but your life is likely full of the same thing. The majority of apps are made from 3rd party libraries, which are made of 3rd party libraries, and so on. At least on phones they update constantly, and every update is a chance to install more exploits.

The same is true for any device that gets updates: a smart TV, router, printer, and so on. I mostly trust Apple, Microsoft, and Google to check their 3rd party dependencies (mostly), but I don't trust any other company, and yet I can't worry about it. Don't update, and I don't get security vulnerabilities fixed. Do update, and I take the chance that the latest update has an exploit buried in a 3rd party library.


I don't trust apps. I trust Apple (enough) that they engineered iOS to have a secure enough sandbox that a random calculator app can't just compromise my phone.

Most developer packages have much higher permission levels because they integrate with your code without a clear separation of boundaries. This is why attackers now like to attack GitHub Actions: if you get access to secrets, you can do a lot of damage.


How far will you go? If you are a Linux user, are you going to inspect all the sources before using a distribution?


This is why I have begun to prefer languages with comprehensive, batteries-included standard libraries, so that you need very few dependencies. Dependency management has become a full-time headache nowadays, with significant effort going into CVE analysis.


I think this is the root of the problem.

I think library/runtime makers aren't saying "let's make an official/blessed take on this thing that a large number of users are doing" as much as they should.

Popular libraries for a given runtime/language should be funded/bought/cloned by the runtime makers (e.g. MS for .NET, IBM/Oracle for Java) more than they are now.

I know someone will inevitably mention concerns about monopolies/anti-trust/"stifling innovation" but I don't really care. Sometimes you have to standardize some things to unlock new opportunities.


Instead of bloating the base language for this, a trusted entity could simply fork those libraries, vet them, and repackage into some "blessed lib" that people like you can use in peace. In fact, the level of trust needed to develop safe libraries is less than developing language features.


That's basically what Boost[1] brought to C++.

[1]: https://www.boost.org/


I agree completely.

If I see a useful extension I want to use, I fork it on GitHub. Sometimes I make a bookmarklet with the code instead.

I keep most extensions off until I need to use them. Then, I enable them, use them, and turn them off again. I try to even keep Mac apps to a minimum.


49 modules with only one maintainer, and over 600 such modules if devDependencies are included. It's only a matter of time until the next module gets compromised.


You can trust (in time), but you can't blindly upgrade. Vendor or choose to "lock" with a cryptographic hash over the files your build depends on. You then need to rebuild that trust when you upgrade (wait until everyone else does; read the diffs yourself).
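A minimal sketch of that "lock with a hash" idea, using a stand-in file (the filename and contents are hypothetical):

```shell
# At audit time: record a content hash of the vendored dependency.
printf 'vendored dependency contents\n' > dep-1.2.3.tgz
sha256sum dep-1.2.3.tgz > dep-1.2.3.tgz.sha256

# At build time: refuse to build if the bytes changed since the audit.
sha256sum -c dep-1.2.3.tgz.sha256 || exit 1
```

Upgrading then means re-reviewing the new files and re-recording the hash, i.e. deliberately rebuilding the trust.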

There is something to be said for the Go proverb "a little copying is better than a little dependency", as well. If you want a simple function from a complicated library, you can probably copy it into your own codebase.


> the Go proverb "a little copying is better than a little dependency"

What a nice way to put it! Thanks for the mention and thanks for making me discover https://go-proverbs.github.io/ .


Years ago I saw that most browser extensions ask for the permission “can access all data on all websites” and thought yeah let’s not do that…


> In recent years, it's started to feel like you can't trust third-party dependencies and extensions at all anymore.

Was it really a recent thing?

> Just take a look at eslint's dependency tree

Npm / node has always been extra problematic, though. Where's the governance / validation on these packages? It's a free-for-all.


When using NuGet packages I usually won't even consider ones with non-Microsoft dependencies, and I like to avoid third-party dependencies altogether. I used to feel like this made me a weird conspiracy theorist but it's holding up well!

It also has led to some bad but fun choices, like implementing POP3 and IMAP directly in my code, neither of which worked well but taught me a lot?


I have used https://github.com/lirantal/npq for a good while now, but I am yearning for something that'd look deeper into the health of the package at hand.


This isn't new - Thompson warned us 40 years ago (and I believe others did before him) in his Reflections on Trusting Trust paper.

It's something I've been thinking about lately because I was diving into a lot of discussion from the early 90s regarding safe execution of (what was, at the time, called) "mobile code" - code that a possibly untrustworthy client would send to have executed on a remote server.

There's actually a lot of discussion still available from w3 thankfully, even though most of the papers are filled with references to dead links from various companies and universities.

It's weirdly something that a lot of smart people seemed to have thought about at the start of the World Wide Web which just fell off. Deno's permissions are the most interesting modern implementation of some of the ideas, but I think it still falls flat a bit. There's always the problem of "click yes to accept the terms" fatigue as well, especially when working in web development. It's quite reasonable for many packages one interacts with in web development to need network access, for example, so it's easy to imagine someone just saying "yup, makes sense" when a web-related package requests network access.
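For reference, Deno's flags make that granularity concrete (a sketch; the script name, directory, and host are illustrative):

```shell
# Network access only to the named host, file reads only under ./data;
# env vars, subprocess spawning, and every other host are denied unless
# granted, with Deno prompting at runtime rather than failing silently.
deno run --allow-net=api.example.com --allow-read=./data fetch_report.ts
```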

Also, none of this even touches on the reality that so much code exists simply to meet a business need (or perceived need) as fast as possible. Try telling your boss you need a week or two to audit every one of the thousands of packages for the report generator app.


Trusting Trust is not about this at all. It's about the compiler being compromised, and making it impossible to catch malicious code by inspecting the source code.

The problem here is that people don't even bother to check the source code and run it blindly.


I'm just going to say this out loud: It's mostly a Javascript thing.

Not that every other platform in the world isn't theoretically vulnerable to the same sort of attack, but there's some deep-rooted culture in the javascript community that makes it especially vulnerable.

The charitable interpretation is "javascript evolves so fast!". The uncharitable interpretation is "they are still figuring it out!"

Either way, I deliberately keep my javascript on the client side.


The solution for trusting dependencies is signed public builds and ML 'weirdness' detectors that require manual review.


If this were “the solution”, then the many, many smart individuals and teams tasked with solving these problems throughout the software industry would’ve been out of work for some time now.

It’s obviously more complicated than that.

Signed public builds don’t inherently mean jack. It highly depends on the underlying trust model.

Malicious actor: “we want to buy your browser extension, and your signing credentials”.

Plugin author: “Well, OK”.

Malicious actor: hijacks npm package and signs new release with new credentials

The vast majority of dependent project authors: at best, they see a “new releaser” warning from their tooling, which is far from unusual for many dependencies, and ignore it. After all, what are they going to do?

Hacker News, as usual, loves to pretend it has all the answers to life’s problems, and the issue is that nobody has listened to them.


> Hacker News, as usual, loves to pretend it has all the answers to life’s problems, and the issue is that nobody has listened to them.

eh, it’s not just HN.

like, there’s no single technical/material solution to something as complex and widespread as humanity’s apparent base need to “get more stuff”. which is the root cause for acting maliciously — it’s just “getting more stuff” in a way that’s harmful to others.

but that won’t stop people from claiming that they can come up with a technical solution. whether that’s politicians, tech bros, HN commentators or that guy down the pub on a thursday evening.

that being said, signing software is better than doing nothing… so a better way of phrasing it from the GP would probably have been: it’s a partial mitigation for the problem in some cases.



