The quant fund use case is the most interesting angle here. WARN filings have the rare property of being legally mandated with specific timing (60-day advance notice), which makes the signal horizon predictable in a way that most alternative data is not.
The big caveat: compliance is uneven. Companies under 100 employees are exempt, and there is a documented pattern of employers paying WARN Act penalties retroactively rather than filing -- especially in fast-moving situations where 60 days advance notice is operationally inconvenient. So the signal has systematic gaps at exactly the moments of highest market interest.
Have you looked at coverage rates vs. announced layoffs (e.g., correlation with Challenger Gray reports or JOLTS)? That gap number is basically the signal noise floor for any quant strategy built on this data.
> and there is a documented pattern of employers paying WARN Act penalties retroactively rather than filing -- especially in fast-moving situations where 60 days advance notice is operationally inconvenient.
Oh, I got a solution for that. Don't just go for WARN Act penalties. Go after offenders with the hammer called SEC and market manipulation regulations. That kind of stuff really hurts.
This dataset looks interesting but the site doesn’t instill a lot of confidence in data integrity.
On the Charts page the selected time range is 12/01/2025 to 02/28/2026 and shows 106,603 employees affected. But the horizontal bar chart with state level data shows numbers in millions. For example, CA has more than 2 million and IL has more than 1.7 million employees affected. Then the layoff map at the bottom shows only layoffs in Texas.
You can think about LLM-generated UIs/apps the same way you think about LLM-generated responses. It's a bunch of garbage, but if you know what you're looking for, you might find something useful.
This doesn't seem to work at all for stats-related apps/sites though, since you can't judge the accuracy of what's being presented. If the site claims it'll "take you to space," you don't take that literally, you just treat it as another AI artifact. But with numbers, you have no way to tell what's accurate and what's just made up.
> It's a bunch of garbage, but if you know what you're looking for, you might find something useful.
If you mean an LLM can be a brainstorming and hypothesis machine, and you have prior expertise to evaluate the proposals, then I can see that value. (Maybe that's what you meant, of course.)
But prior expertise is absolutely necessary. Otherwise we make ourselves victims of mis/disinformation. People say the Internet is a cesspool of mis/disinfo, yet nobody thinks it could affect them - we're all too smart, of course (no really, I'm the exception!). [0]
> This doesn't seem to work at all for stats-related apps/sites though, since you can't judge the accuracy of what's being presented.
I don't see the difference. If it's obvious nonsense, in numbers or in text, it's detectable. Everything else, see above.
[0] Research shows that thinking is a big reason people get fooled, and better educated people are easier to fool.
Interesting, though a lot of the UI seems broken. For my state I see some notice dates in the future, and it's not explained whether that's when the filing takes effect or just an incorrect filing date, since the column is labeled only "Notice Date".
Some of the entries pull up a page that says "Failed to load company data: No company name provided in URL" from the state specific view (e.g., any link on https://warnfirehose.com/data/layoffs/california ). Has a vibe-coded feel to it.
I saw a lot of "Purchase dataset for city details" in places which was annoying. Wondering how much processing is being done on the base dataset to justify the pricing. Could you explain a bit on the normalization/cleaning process?
Definitely vibe coded. It follows the same generic Claude UI patterns for a data app / data oriented website. Not necessarily a bad thing per se if it's still curated and tweaked with human taste. And ofc validated to work :)
Great site, thank you. Just curious: I looked up my company (more than 40k employees across the world, including many US states) and it seems like I am not seeing the layoffs that colleagues have experienced. This is probably expected, as I'm probably missing some criteria. Do all layoffs have to have a WARN notice, or are there mechanisms/criteria that allow companies to lay people off without filing these notices?
The layoffs in the report are not listed in NJ's own warn notice https://www.nj.gov/labor/assets/PDFs/WARN/2026_WARN_Notice_A...