ProTip: use PNPM, not NPM.
pnpm 10.x shuts down a lot of these attack vectors:
1. It does not run post-install scripts by default (you must manually approve each one).
2. It lets you set a minimum age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers and the registry have time to clean up compromised releases (see the sketch below).
NPM is too insecure for production CLI usage.
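Roughly what that looks like in a recent pnpm 10.x setup - a sketch, so double-check the option names against your pnpm version's docs; the `esbuild` entry is just an example allow-list entry:

```sh
# Sketch: append cooldown + script allow-list settings to pnpm-workspace.yaml
cat >> pnpm-workspace.yaml <<'EOF'
# don't install versions published less than ~4 days ago (value is in minutes)
minimumReleaseAge: 5760
# install/build scripts stay blocked unless the package is explicitly listed
onlyBuiltDependencies:
  - esbuild
EOF
```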
And of course make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-restrict it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if someone exfiltrated the keys, they couldn't publish from outside CI.
npm is what happens when you let tech debt stack up too far for too many years. It took them five attempts to get lock files to actually behave the way lock files are supposed to behave (lockfile version 3, plus at least 2 unversioned attempts before that).
It’s clear from the structure and commit history they’ve been working their asses off to make it better, but when you’re standing at the bottom of a well of suck it takes that much work just to see daylight.
The last time I chimed in on this I hypothesized that there must have been a change in management on the npm team but someone countered that several of the maintainers were the originals. So I’m not sure what sort of Come to Jesus they had to realize their giant pile of sins needed some redemption but they’re trying. There’s just too much stupid there to make it easy.
I’m pretty sure it still cannot detect a premature EOF during file transfer. It keeps the incomplete file in the cache, where the SHA check fails until you wipe your entire cache. Which means people with shit internet connections and large projects basically waste hours several times a week on updates that fail.
> And of course make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-restrict it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if someone exfiltrated the keys, they couldn't publish from outside CI.
I've by now grown to like HashiCorp Vault's/OpenBao's dynamic secret management for this. It's a bit complicated to understand and get working at first, but it's powerful:
You mirror/model the lifetime of a secret consumer as a lease. For example, a Nomad allocation or Kubernetes pod gets a lease when it is started, and the lease is revoked immediately after it is stopped. We're kinda discussing whether we could have this in CI as well - create a lease for a build, destroy the lease once the build is over. Leases also support TTLs, TTL refreshes and enforced max TTLs.
With that in place, you can tie dynamically issued secrets to this lease, and the secrets are revoked as soon as the lease is terminated or expires. This has confused developers with questionable practices a lot: you can print database credentials in your production job and paste them into a local database client, but as soon as you deploy a new version, those credentials are gone. It also gives you automated, forced database credential rotation for free through the max_ttl, including a full audit log of all credential accesses and refreshes.
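Roughly what the database flow looks like with the Vault CLI - a sketch with placeholder connection details, role names and TTLs, not our actual setup:

```sh
# Enable the database secrets engine and point it at a database (placeholder values)
vault secrets enable database
vault write database/config/appdb \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@db.internal:5432/appdb" \
    allowed_roles="ci-build" \
    username="vault-admin" password="..."

# A role that mints short-lived credentials; max_ttl forces rotation regardless
vault write database/roles/ci-build \
    db_name=appdb \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=1h max_ttl=4h

# Each build/allocation reads its own credentials, tied to its lease...
vault read database/creds/ci-build
# ...and the lease (plus the credentials) can be revoked the moment it ends
vault lease revoke database/creds/ci-build/<lease-id>
```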
I know that would be a lot of infrastructure for a FOSS project by Bob from Novi Zagreb. But with some plugin work, for a company, it should be possible to keep long-term access credentials hidden in Vault and supply CI builds only with short-lived, scoped-down tokens with enforced expiry.
As much as I hate running after these attacks, they are spurring interesting security discussions at work, which can create actual security -- not just checkbox-theatre.
Or just `npm ci`, so you install exactly what's in your package-lock.json instead of the latest version bumps of those packages. This "automatic updating" is a big factor in why these attacks are working in the first place. Make package updating deliberate instead of instant or on an arbitrary lag.
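A sketch of that deliberate workflow (the package and version are just examples):

```sh
npm ci                       # install exactly what package-lock.json pins, nothing newer
npm outdated                 # check what updates exist, on your own schedule
npm install lodash@4.17.21   # bump one package explicitly...
git diff package-lock.json   # ...and review the lockfile change before committing
```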
NPM was never "too insecure" and remains not "too insecure" today.
This is not an issue with npm, JavaScript, NodeJS, the NodeJS foundation or anything else; it's the consumers of these libraries pulling in code from 3rd parties and pushing it to production environments without a single review. How this still flies today, and has since the inception of public "easy to publish" repositories, remains a mystery to me.
If you're maintaining a platform like Zapier, which gets hacked because none of your software engineers actually review the code that ends up in your production environment (yes, that includes 3rd party dependencies, no matter where they come from), I'm not sure you even have any business writing software.
The internet has been a hostile place for so long that most of us "web masters" are used to it by now. Yet developers of all ages seem to fall into the "what's the worst that can happen?" trap when pulling in either one dependency with 10K LoC without any review, or thousands of dependencies with 10 lines each.
Until you fix your processes and workflows, this will continue to happen, even if you use pnpm. You NEED to be responsible for the code you ship, regardless of who wrote it.
They didn't deploy the code. That's not how this exploit works. They _downloaded_ the code to their machine. And npm's behavior is to implicitly run arbitrary code as part of the download - including, in this case, a script to harvest credentials and propagate the worm. That part has everything to do with npm behavior and nothing to do with how much anybody reviewed 3P deps. For all we know they downloaded the new version of the affected package to review it!
wait, I short-circuited here. wasn't the very concept of "libraries" created to *not* have to think about what exactly the code does?
imagine reviewing every React update. yes, some do that (Obsidian claims to review every dependency, whether new or an update), but that's due to flaws of the ecosystem.
take a look at Maven Central. it's harder to get into, but that's the price of security. you have to verify the namespace so that no one will publish under e.g. `io.gitlab.bpavuk.` namespace unless they have access to the `bpavuk` GitLab group or user, or `org.jetbrains.` unless they prove the ownership of the jetbrains.com domain.
Go is also nice in that regard - you depend on Git repositories directly, so an attacker would have to hijack the Git repo permissions and poison the source code there.
“Personally, I never wear a seatbelt because all drivers on the road should just follow the road rules instead and drive carefully.”
I don’t control all the drivers on the road, and a company can’t magically turn all employees into perfect developers. Get off your high horse and accept practical solutions.
> and a company can’t magically turn all employees into perfect developers
Sure, agree, that's why professionals have processes and workflows, everyone working together to build the greatest stuff you can.
But when not a single person in the entire company reviews the code that gets deployed and run by users, you have to start asking what kind of culture the company has, it's borderline irresponsible I'd say.
I never, ever, do development outside of a podman container these days.
Basically if I am going to run some code from somewhere and I haven't read it, it goes in a container.
I know it's not foolproof, but I can't believe how often people run code they haven't read where it can make a huge mess, steal secrets, etc. I'll probably get owned someday, I'm sure, but this feels like a bare minimum.
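For anyone curious, the basic shape of it - one reasonable setup, not a hardened recipe; the image tag and flags are just what I'd reach for:

```sh
# Throwaway container for an untrusted install: only the project dir is mounted,
# so the host's ~/.npmrc, ~/.ssh, cloud creds etc. aren't visible to install scripts.
podman run --rm -it \
  --userns=keep-id \
  -v "$PWD":/work:Z \
  -w /work \
  node:22-bookworm \
  npm install --ignore-scripts
```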
Probably because it’s fine 99.99% of the time and humans aren’t intuitively good at handling risk that functions like that. Besides, security is something handed off to specialists to free the devs up to focus on building things in most companies. We’re not going to change that no matter how much it represents some ideal.
You are just reducing the blast radius with use of podman; you will likely need secrets for your app to work, which will be exposed regardless of the podman approach.
>You have to make sure you're not putting any secrets in the container environment.
How does this work exactly?
containers still need env vars and access to databases and cloud environments. Without these, the container is just a useless, isolated pod.
Not who you asked, but I have a similar setup. I can run everything I need for local development in that image (db, message queue emulator, cache, other services). So, setting things like environment variables or running postgres work the same as they do outside the container.
The image itself isn't the same image that the app gets deployed in, but is a portable dev environment with everything needed to build and run my apps baked in.
This comes with some nice side effects, like being able to instantly spin up clean work environments on my laptop, someone else's, or a remote VM.
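A rough sketch of the idea (the base image and the baked-in tools are just illustrative):

```sh
# Build a reusable dev image once (bake in whatever your projects need)...
podman build -t devbox - <<'EOF'
FROM node:22-bookworm
RUN apt-get update && apt-get install -y --no-install-recommends \
      postgresql redis-server git \
    && rm -rf /var/lib/apt/lists/*
EOF

# ...then work inside it, with only the current project mounted
podman run --rm -it -v "$PWD":/work:Z -w /work devbox bash
```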
What do you mean? You can drop into bash in a container and run any arbitrary command, so `npm install foo` works just fine. Why would posthog's SDK be a special case?
I think the issue is more about what else has to go into or be connected to that container. Posthog isn't really useful if it's air-gapped. You're going to give it keys to access all kinds of juicy databases and analytics, and those NPM tokens, AWS/GCP/Azure credentials, and environment variables are exactly what it exfiltrates.
I don't run much on the root OS of my dev machine, basically everything is in a container or VM of some kind, but that's more so that I can reproduce my environment by copying a VMDK than in an effort to limit what the container can do to itself and data it has access to. Yeah, even with root access to a VM guest, an attacker won't get my password manager, personal credit card, socials, etc. that I only use from the host OS... But they'll get everything that the container contains or has access to, which is often a lot of data!
You could create/run thin proxies for every external service that handle the auth side, and run each in a separate container. Orchestrate everything with docker-compose. Need to connect to cloud services for local development? Have a container with a proxy that transparently handles authentication. Now only that container has the secrets for talking to that service.
That's a lot of work though, and increases the difference between your local dev environment and prod.
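For what it's worth, the rough shape I have in mind - the proxy image and env file below are hypothetical placeholders, any credential-injecting proxy would do:

```sh
# docker-compose.yml sketch: the app container never holds real cloud credentials,
# only the proxy container does.
cat > docker-compose.yml <<'EOF'
services:
  app:
    build: .
    environment:
      S3_ENDPOINT: http://s3-proxy:9000        # app talks to the proxy, unauthenticated
  s3-proxy:
    image: example.org/signing-proxy:latest    # hypothetical credential-injecting proxy
    env_file: ./secrets/s3.env                 # real credentials live only here
EOF
docker compose up
```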
You're severely limiting the blast radius. This malware works by exfiltrating secrets during installation, if I understood it correctly. If you would properly containerize your app and limit permissions to what is absolutely required, you could be compromised and still suffer little to no consequences.
Of course, this is not a real defense on its own; it's just good practice to limit blast radius, much like not giving everybody admin rights.
Sure, but only the container is affected and it is always your responsibility to grant as little access as possible to the various credentials you may need to supply that environment. AFAICT with this worm, if you don't supply write-level GitHub credentials to the container (and you shouldn't!) and you install infected packages, the exploit goes no further.
Because PostHog's "Talk to a human" chat instead gets a grumpy gatekeeping robot (which also doesn't know how to get you to a working urgent support link), and there's nothing prominently on their home page or github about this:
Your status page isn't clear, but are all versions between the compromised and "safe to install" versions compromised or just the ones listed?
For example I installed `posthog-react-native` version `4.12.4` which is between the `4.11.1` version which is compromised and the safe to install version `4.13.0`. Is that version compromised or not?
I have a Slack channel with them; these are the versions they mentioned:
posthog-node 4.18.1
posthog-js 1.297.3
posthog-react-native 4.11.1
posthog-docusaurus 2.0.6
I don't know why you were downvoted. The actual page does not say SHA1, the attack as far as I know is not related to the SHA1 algorithm, and the name of the worm isn't intended as that sort of pun.
Upgrading once a month is insane at any rate; I could see the point in upgrading maybe once a year. For stable projects, you're very much fine upgrading only when there's a vulnerability or you need something from a newer release. Upgrade when you actually need to and use stable versions that have been out for a while; no need to hamster-wheel it.
> bun_environment.js is a highly obfuscated malicious JavaScript file. It is over 10MB in size and contains a significant amount of built-in logic for information theft.
That seems a bit silly. Even on the beefy boi I used to work on a 10MB hiccup in deployable size would have been sufficient to make me look.
I released one of the packages I work on last night so of course this drew my eye. I assume checking the unpacked size hasn’t gotten ridiculous confirms that your code is not infected yeah? And looks like it’s past time for me to set up a separate account for release management.
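For a quick sanity check, this is what I'd look at (the package name is a placeholder):

```sh
# What would actually ship, and how big it is
npm pack --dry-run                              # file list, tarball size, unpacked size

# Compare against what the registry has for the version you just published
npm view your-package@latest dist.unpackedSize  # 'your-package' is a placeholder
npm view your-package versions
```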
Vendoring wouldn't really affect this at all. If anything it would keep you vulnerable for longer because your vendored copy keeps "working" after the bad package got removed upstream. There's a tiny chance that somebody would've caught the 10MB file added in review but that's already too late - the exploit happened on download, before the vendored copy got sent for review.
pnpm is the better comparison maybe in this context. Most of Deno's approach to security is focussed on whole program policies which doesn't do much in this context. Just like pnpm and others, they do have opt-in for install scripts though. The npm CLI is an outlier there by now.
Why? Not everyone will use cooldowns, but the key is to have just enough people running brand-new releases to set off a warning - dependency-scanning systems flag the vulns and the packages get pulled before production builds pick them up.
Monocultures where everyone pulls and builds every brand-new thing for the most minor changes are dangerous.
Unfortunately you need to `npm login` with username and password in order to publish the very first version of a package to set up OIDC.
Each of those deps contains a constraint installing only for the relevant platform.
Windows: ~/AppData/Local/pnpm/config/rc
macOS: ~/Library/Preferences/pnpm/rc
Linux: ~/.config/pnpm/rc
Funny that this is getting downvoted, but it installs dependencies super fast and has the same approval feature as pnpm, all in a single binary.
How does this work? Every single npm package has tons of dependency tree nodes
Most attacks on popular packages last at most a few months before detection.
Hey PostHog! What version do we need to avoid?
- posthog-node 4.18.1, 5.13.3 and 5.11.3
- posthog-js 1.297.3
- posthog-react-native 4.11.1
- posthog-docusaurus 2.0.6
If you make sure you're on the latest version you should be good.
The e18e community are reducing dependencies in popular libraries and building tools to prevent and reduce the impact of such attacks. Join if you want to help out! https://e18e.dev/
Just this morning, after trying to make the case over the past year, we had a change landed to remove more than a dozen dependencies from typescript-eslint! https://bsky.app/profile/benmccann.com/post/3m6fcjax7ec2h
> SHA1-Hulud the Second Comming – Postman, Zapier, PostHog All Compromised via NPM
Should be Shai-Hulud, not SHA1-Hulud.
I'm surprised github is leaving these up.
If I want more stability for my OS I can choose Debian-stable rather than Ubuntu-nightly.
But for npm, there doesn't seem to be the same choice available. Either I sign up to the fire-hose or I don't.
I can choose to only upgrade once a month, but there's a chance I'm still getting a package that dropped 5 minutes before.
My devs don't have access to production keys at all (and would never need them).
typo after the listed affected packages
shinhan is a large korean bank and this admin area geo json util seems to be embedded in many korean gov services.
shinhan-limit-scrap
korea-administrative-area-geo-json-util
As it arguably would have reduced impact
(I'm one of the Renovate maintainers and have recently pushed for this to be more of a widely used feature)
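For anyone who wants to opt in, a minimal renovate.json sketch of the cooldown setting, `minimumReleaseAge` (the duration and preset here are just examples):

```sh
# renovate.json sketch: don't open update PRs for versions younger than 4 days
cat > renovate.json <<'EOF'
{
  "extends": ["config:recommended"],
  "minimumReleaseAge": "4 days"
}
EOF
```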
All libraries should strive to have dependency only on it.