One key line about ATMs is buried deep in the article:
> the number of tellers per branch fell by more than a third between 1988 and 2004, but the number of urban bank branches (also encouraged by a wave of bank deregulation allowing more branches) rose by more than 40 percent
So, ATMs did impact bank teller jobs significantly: roughly a third of per-branch teller positions were made redundant. It's just that the decrease at individual bank branches was offset by the increase in the total number of branches, because of deregulation and a booming economy and whatever else.
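Running the article's two figures at face value, the offset is almost exact (a quick sketch, nothing more):

```python
# The article's two figures, taken at face value: tellers per branch fell
# by about a third; the number of branches rose by about 40 percent.
per_branch_staffing = 1 - 1/3     # ~0.67x tellers per branch
branch_count        = 1.40        # ~1.4x branches

net = per_branch_staffing * branch_count
print(f"net change in total teller jobs: {net:.2f}x")  # ~0.93x, roughly flat
```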
A lot of AI predictions are based on the same premise. That AI will impact the economy in certain sectors, but the productivity gains will create new jobs and grow the size of the pie and we will all benefit.
My prediction is no, because productivity gains must flow to the lower classes for the economy to see a multiplier effect.
For example, ATMs did cause a drop in teller jobs, but access to cash at any hour increases the velocity of money in the economy. It lowers the savings rate and encourages spending among the class of people whose money carries the highest multiplier.
AI does not. All the spending on AI goes to a very small minority, who have a high savings rate. Junior employees who would have productively joined the labor force at good wages must now compete to join it at lower wages, depressing their purchasing power and reducing the flow of money.
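That's the textbook multiplier logic: each dollar gets re-spent at the recipients' marginal propensity to consume (MPC), so total circulation per dollar is 1/(1-MPC). A quick sketch; the two MPC values are illustrative assumptions, not measurements:

```python
# Textbook spending multiplier: total spending per $1 = 1 / (1 - MPC).
# The MPC values below are illustrative assumptions for each group.
def spending_multiplier(mpc: float) -> float:
    return 1.0 / (1.0 - mpc)

print(f"lower classes (MPC ~0.9): $1 -> ~${spending_multiplier(0.9):.0f} of total spending")
print(f"high savers   (MPC ~0.5): $1 -> ~${spending_multiplier(0.5):.0f} of total spending")
```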
Look at the most common uses of AI: cutting out menial work such as customer service. There are no "productivity" gains for the economy here. Each person in the US hired to do that job would spend their entire paycheck. Now instead, that money goes to a mega-corp and the savings are passed on to execs. The price of the service provided is not dropping (yet). Thus, no technology savings are occurring, either.
In my mind, the outcomes are:
* Lower quality services
* Higher savings rate
* K-shaped economy catering to the high earners
* Sticky prices
* Concentration of compute in AI companies
* Increased price of compute prevents new entrants from utilizing AI without paying rent-seekers, the AI companies
* The cycle repeats all the previous steps
We may reach a point where the only ones able to afford compute are AI companies and those that can pay AI companies. Where is the innovation then? It is a unique failure outcome I have yet to see anyone talk about, even though the supply and demand issues are present right now.
Your argument is (mildly) a variant of the broken window fallacy.
AI will bring about a de-sequestering of talent and resources from some sectors of the economy. It's very difficult to predict where these people and resources will go after that, and what effect that will have upon the world.
> My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable. The problem with services is that they're typically resistant to productivity growth, and that's finally changing.
If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.
"Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable."
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."
It's not that process innovations are lacking; it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
> Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
More subtly, what is an education? What is care? As you point out, the LLMs are (or probably will become) perfectly good at the measurable parts of those services; but I think the residual edge of “good” education/care is more than just the other human’s co-opted attention.
How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,” where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunches—on totalities of perception-filtered-through-experience that they can’t fully put into words?
I’m reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another human’s attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that don’t fully overlap (fine, I’ll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human element…
The premise of your argument is that "the outcome" can be separated from the process. This is true enough for manufacturing bricks: I don't much care what process was used to create a brick if it has a certain compressive strength, mass, etc.
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
It's very true for healthcare (especially mental healthcare) and education today as well, because for most people, the choice isn't LLM vs. human attention - it's LLM vs. no access at all.
Even if you have perfect medical information and advice through an LLM, can you perform surgery on yourself? Can you prescribe yourself whatever medication you think you need?
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?
> the value just in the co-opting of another human being's attention?
That's a weird way of describing it.
A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.
It also seems like the value of quality tutoring that doesn't primarily function as social/class signaling goes down as tools capable of automating high quality intellectual work are more widely available.
It depends on the outcome again: is the value of tutoring the social-class elevation, or is it in the outcome of becoming more skilled and knowledgeable?
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
By the time it replaces doctors, nobody but today's investors will be able to afford anything at all. The X-shaped economy would have owners in the V and manual laborers (assuming this doesn't translate to gains in automation) in the ^.
I’m sick of this idea that “free” services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.
By selling those services at a cost of “free”, hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the “free” users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.
With the importance of training data and network effects, “free” services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.
The only way it works is if the "free" service for tutoring or healthcare is through government subsidies or an actual non-profit. Otherwise it's just going to concentrate market power with the megacorps.
This 1000x. "Free" is only a viable business model if the govt funds it. Otherwise, the $$ has to come from somewhere else in the company - how long will it take for the company to lose interest in a loss-leader when they're making $$ from other parts?
Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?
Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.
> We may reach a point where the only ones able to afford compute are AI companies
Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.
This is like computing once again. In 1990 only the upper class could afford computers, as of 2000 only the upper class owned mobile phones, as of now more or less everyone and their kid has these things.
Computers were roughly $1000 in 1990. How did your lower-middle class family justify a $1000 expenditure, inflation-adjusted to $2565 today? The average minimum wage in the US is $11.30, so that's about 29 eight-hour days working at minimum wage.
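Checking that arithmetic (the eight-hour workday is my assumption; the other figures are from the comment above):

```python
import math

# Verify the comment's arithmetic; the eight-hour workday is an assumption.
price_today = 2565.00   # the comment's inflation-adjusted price of a 1990 PC
min_wage    = 11.30     # average US minimum wage, per the comment

hours = price_today / min_wage        # ~227 hours of work
days  = math.ceil(hours / 8)          # whole eight-hour workdays -> 29
print(f"{hours:.0f} hours, or about {days} eight-hour days")
```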
My family was on the border of upper-lower and lower-middle and we bought a computer once and used it for 10+ years. I dumpster dove later to scavenge parts for upgrading until the mid 2000s when cheap computers became available.
I would argue we've even already seen this play out with productivity gains across the economy over the last 40 years. The American middle class has been gradually declining since the '80s. AI seems likely to accelerate that trend for the exact reasons you point out.
A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.
No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.
Personally, I think the solution here might be to artificially constrain the supply of productivity. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected from them in a given week.
The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
Right there with you. Sure, I have gained a lot as a software engineer in the valley (I guess I'm upper-middle class now), but I'd give it up and go right back to lower-middle class (1980s) status I was raised in if it meant my kids could also aspire to a similar lower-middle class life.
This suicide-pact of "either AI goes crazy and 100 people rule the world with 99% of the world's wealth" or "AI fails badly and everyone's standard of living drops 3 levels, except for the 100 people that rule the world with 99% of the world's wealth" is not what I signed up for. Nor is it in any way sustainable or wise.
Too much class distinction / wealth between lower/upper classes, and a surplus of unemployed lower-class men is how many revolts/revolutions/wars have started.
The longer we ignore the collapse of the middle class, the angrier the bottom half of the economy will get and the more justified they will feel in enacting retribution. We absolutely have historical precedents for what happens here: The French Revolution, the Gilded Age, etc. People will only tolerate a declining standard of living for so long.
> Technological progress that hurts more people than it helps isn't progress, it's class warfare.
I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it, since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into AI slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eke out a living being the meat arms of the World Mind, or maybe we'll be turned into hamburger by robotic concentration camp guards.
IIRC, the way this worked was that by decreasing tellers required per branch, it made a lot more marginal locations pencil out for branches, at a time when the banking industry was expansionary.
This is not so helpful if AI is boosting productivity while a sector is slowing down, because companies will cut in an overabundant market where deflationary pressure exists.
We're already seeing large software companies figure out that they don't need 5,000 developers. They probably only need 1,000 or maybe even fewer.
However, the number of software companies being started is booming, which should result in a net-neutral or net-positive effect on software developer employment.
Don't count all those chickens before they hatch. There might be more started but do they all survive? Think back to the dot-com boom/crash for an example of where that initial gold rush didn't just magically ramp forever. There were fits and starts as the usefulness of the technology was figured out.
Why will we need 1000 companies tomorrow to do the same thing that 100 companies are doing today? If they are really so efficient because of AI then won't 10 companies be able to solve the same problems?
Because that car repair company with 3 local stores previously couldn't justify building custom software to make their business more efficient and aligned with what they need. The cost was too high. Now they might be able to.
Plenty of businesses need very custom software but couldn't realistically build it before.
I see no way that company would save more money from hiring an experienced developer compared to paying their yearly invoice on the COTS product doing the same thing today. The only way this works is with a strong wage-suppressing effect.
Car repair companies won’t see a meaningful improvement to their bottom line with more custom software. Will it increase the number of cars per employee per day they can repair?
I do bespoke work like this, but mostly to replace software that’s starting to cost mid 5 figure amounts per year for a SaaS setup and the support phone line has been replaced by an LLM chat bot.
This is one of the key "inefficiencies" of the private sector - there might be one winner at the end of the day providing the product that fills the market niche, but there were always multiple competitors giving it a go in the meantime.
A recent example: Mitchell Hashimoto was pointing out that he wasn't "first to market" with his product(s); he was (at least) SEVENTH.
Do the booming companies pay the same as the ones who did layoffs? If you're laid off from Meta or other top tier paying company (the behemoths doing layoffs) you might have a tough time matching your compensation.
But do they need to? If a <role X> job at a top tier company making $600k is eliminated and two <role X> jobs at a "more average" company making $300k replace it; is that really a bad thing? Clearly, there's some details being glossed over, but "one job paying more than a person really needs" being replaced by "two jobs, each paying more than a person really needs" might just be good for society as a whole.
It doesn't seem too bad when you cherry pick an outlier example, but what about when the person making $100k now makes $50k?
I'm sure the retort of the AI optimist will be that AI will make the things that person buys cheaper, and there may be truth to that when it comes to things that people buy with disposable income...
But how likely is AI to make actual essentials like housing and food cheaper?
I think this is assuming that the labor market knows how to identify the direct value of devs. This already seems to be a problem across the board regardless of job role.
I think this is true in the short/medium term, hence the confusing picture of layoffs but a growing number of tech roles overall. The limit may be just millions of companies with one tech person and a team of agents doing their bidding.
Maybe software engineers will be like your personal lawyer, or plumber. Every business will have a software engineer on dial, whether it's a small grocery store or a kindergarten.
Previously, software devs were just way too expensive for small businesses to employ. You couldn't do much with just 1 dev in the past anyway. No point in hiring one. Better to go with an agency or use off-the-shelf software that probably doesn't fill all your needs.
> Someone still needs to review and test the code, and if the code is for embedded systems I find it unlikely.
I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
Today, for example, you can ask ChatGPT to play chess with you, and it will. You don't need a "chess program," all the rules are built in to the LLM.
Same goes for SaaS. You don't need HR software; you just need an LLM that remembers who is working for the company. Like what a "secretary" used to be.
> I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
I didn’t, and thanks for clarifying for me.
This doesn’t pass the sniff test for me though - someone needs to train the models, which requires code. If AI can do everything for you, then what’s the differentiator as a business? Everything can be in ChatGPT, but that’s not the only business in existence. If something goes wrong, who is gonna debug it? Instead of API requests, you would debug prompt requests, maybe.
We already hate talking to a robot for waiting on calls, automated support agents, etc. I don’t think a paying customer would accept that - they want a direct line to a person.
I can buy the argument that the backend will be entirely AI and you won’t need to be managing instances of servers and databases but the front end will absolutely need to be coded. That will need some software engineering - we might get a role that is a weird blend of product + design + coding but that transformation is already happening.
Honestly the biggest change I see is that the chat interface will be on equal footing with the browser. You might have some app that can connect to a bunch of chat interfaces that is good at something, and specializations are going to matter even more.
It was a bit of a word vomit so thanks for coming to my TED Talk.
Because AI agents are tool users. Why does AI need to research 2026 tax code changes and then try to one-shot your taxes when it can just use TurboTax to do it for you? TurboTax has the latest 2026 tax changes coded into the app. I'd feel much more confident if AI used TurboTax to do my taxes than if it tried to one-shot it.
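A minimal sketch of that tool-use pattern; `calculate_tax` is a made-up stand-in for something like TurboTax's engine (not a real API), and the model's tool choice is hard-coded to keep the sketch self-contained:

```python
# Hypothetical sketch: the agent delegates deterministic work (tax math)
# to a tool instead of having the LLM "one-shot" the answer itself.

def calculate_tax(income: float, deductions: float) -> float:
    """Deterministic stand-in tool: flat 20% on taxable income."""
    taxable = max(income - deductions, 0.0)
    return taxable * 0.20

TOOLS = {"calculate_tax": calculate_tax}

def agent(request: dict) -> float:
    # A real agent would have the LLM choose the tool and its arguments;
    # here the choice is hard-coded so the example runs on its own.
    tool_name = "calculate_tax"
    return TOOLS[tool_name](**request)

print(agent({"income": 90_000.0, "deductions": 14_600.0}))  # 15080.0
```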
50 years ago, using a personal computer was an extravagant luxury. Until it wasn't.
30 years ago, carrying a powerful computer in your pocket was unthinkable. Until it wasn't.
Right now, it's cheaper to run your accounting math on dedicated adder hardware. But LLMs will only get cheaper. When you can run massive LLMs locally on your phone, it's hard to justify not using them for everything.
Not until power access/generation is MUCH cheaper. Long, long, long way off.
If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?
It will never, ever be as cheap as a cron job and a shell script. There is a hard limit on how efficient using an LLM to do a job can be, versus using an LLM to create the automation that does the job. There is a large difference in compute and power resources between the two. Don't mistake one for the other.
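Taking the figures quoted above at face value, the per-task gap is roughly three orders of magnitude:

```python
# Cost ratio from the figures above: hourly rates come from the comment,
# and the 40x slowdown is applied to the LLM's runtime per task.
cron_cost_per_hr = 0.834   # deterministic cron job + shell script
llm_cost_per_hr  = 37.00   # LLM-based automation
slowdown         = 40      # LLM takes 40x as long per task

# A task that takes the cron job t hours takes the LLM 40t hours:
ratio = (llm_cost_per_hr * slowdown) / cron_cost_per_hr
print(f"LLM costs ~{ratio:,.0f}x more per task")   # ~1,775x
```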
> If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?
Because you'll be outcompeted by people who make the best of the nondeterministic system.
Ah, so that explains why job growth is at a steady pace and the software industry hasn’t been experiencing net negative job growth the past year or so.
How silly of me to rely on reality when it’s so obvious that AI is benefiting us all.
Anyways, this is the start. Companies are adjusting. You hear a lot about layoffs, but not about unemployment. But we're in a high-interest-rate environment with disruptions left and right. Companies are trying to figure out what their strategy is going forward.
I don't expect to see a boom in software developer hiring. I think it'll just be flat or small growth.
We are in negative growth, and the current leadership class keeps talking about all the people they can get rid of.
Look at the Atlassian layoff notice yesterday, for example, where they lied to our faces by saying they were laying off people to invest more in AI, but they totally aren't replacing people with AI.
No, I think it's likely that this is the first major productivity boom that won't be followed with a consumption boom, quite the opposite. It'll result in a far greater income inequality. Things will be cheaper but the poor will have fewer ways to make money to afford even the cheaper goods.
It's not that simple. If a poor person makes zero dollars how much of the reduced cost item could they now afford?
We have a massively distorted economy driven by debt financialization and legalised banking cartels. It leads to weird inversions. For example, as long as housing gets more expensive at a predictable rate, housing becomes more affordable instead of less, because banks are more able to lend money against it. The inverse is also true: if housing were to drop at a predictable rate, fewer people would be able to get a mortgage on a house, so fewer people could afford to buy one. Housing won't drop below the cost of materials and labor (ignoring people dumping housing to get rid of tax debts, as I would include such obligations in the cost of acquisition). Long term it's not sustainable, but long term is multi-generational.
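To make the inversion concrete, here's a toy loan-to-value (LTV) sketch. All the numbers are made up, but it shows why a lender is happier with predictably rising collateral and balks at predictably falling collateral:

```python
# Toy loan-to-value (LTV) trajectories under predictable price trends.
# All figures are illustrative; the loan is interest-only for simplicity.
loan  = 360_000.0    # 90% LTV at origination
price = 400_000.0

for rate in (+0.05, -0.05):          # +5%/yr vs -5%/yr house price trend
    p, ltvs = price, []
    for year in range(4):
        ltvs.append(f"{loan / p:.0%}")
        p *= 1 + rate
    print(f"{rate:+.0%}/yr:", ltvs)
# +5%/yr: ['90%', '86%', '82%', '78%']   -> collateral risk falls, banks lend
# -5%/yr: ['90%', '95%', '100%', '105%'] -> loan goes underwater, banks retreat
```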
Fwiw in places like parts of the midwest housing is below cost of labor and materials. An existing house might be $70k and several bedrooms at that. You just can’t get anything built for that even if you build it all yourself.
I intended to make a weaker claim of ‘in general long run / maintainable’ circumstances and should have done so.
Many low-cost areas have bad crime problems. There is another little phenomenon where the wealthy, by doing a poor job in governance, can increase the price of their assets by making alternative assets (lower-cost housing) less desirable due to the increase in crime.
> Housing won't drop below cost of materials and labor
Only if every person born needs to have a brand new house constructed for them.
Not if - you know - people die and don't need a house to live in anymore.
But considering how it's been the past 20 years, I'm starting to expect that a lot of the current elder generation will opt to have their houses burnt down to the ground when they die. Or maybe the banker owned politicians will make that decision for them with a new policy to burn all property at death to "combat injustice". Who knows what great ideas they have?
"will" being the operative word here. High school level Econ makes no promises about WHEN prices adjust. Price setting is a whole science highly susceptible to collusion pressure. Prices generally drop only when the main competition point is price (commodities). In this case the main issue is that AI is commoditizing many if not all types of labor AND product. In a world where nothing has value how does anything get done?
Cool concept, but this isn't 1980. We've been sold these sorts of concepts for 40+ years now and things have only gotten worse.
We have a K-shaped economy. Top earners take the majority: the top 20% make up 63% of all spending, and the top 10% account for more than 49%. The highest on record. Businesses adapt to reality and target the best market, in this case the top 10 to 20%, and the rest just get ignored, like in many countries around the world.
All that unlocked money? In a K shaped economy it mostly goes to those at the top, who look to new places to park/invest it, raising housing prices, moving the squeeze of excess capital looking for gains to places like nursing homes and veterinary offices. That doesn't result in prices going down, but in them going up.
The benefit to the average American will be more capital in the top earners' hands looking for more ways to do VC-style squeezes in markets previously not as ruthless but worth moving to now, as there are fewer and fewer 'untapped' areas to squeeze (because the top 10-20% need more places to park more capital). The US now has more VC funds than McDonald's locations.
Irrelevant aside: but I hold a grudge against the economists who picked the letter K to represent increased inequality. They missed the perfect opportunity to use the less-than inequality symbol (<) and call it a “less-than economy”.
The only solution here is to stop tying people's value to their productivity. That made a lot of sense in the 1900s, but it makes a lot less sense when the primary faucet of productivity is automation. If you insist on tying a person's fundamental right to a decent and secure life to their productivity, and then take away their ability to be productive, you're left with a permanent and growing underclass of undesirables and an increasingly slim pantheon of demigods at the top.
We have written, like, an ocean of sci-fi about this very subject, and somehow we still fail to properly consider this as a likely outcome.
The key is to do it by setting up the right structure, or to end up with it naturally, not by laws and control, because then you end up in an oppressive nanny state at the very best.
> The key is to do it by setting up the right structure, or to end up with it naturally
This is extremely hand-wavy.
Can you be more concrete in what you think this looks like?
The way I see it, we're only 5-10 years away from having general-purpose robots and AI that can do basically anything. If the price for that automation is low enough, there will be massive layoffs as workers are replaced.
There's no way to "naturally" solve the problem of skyrocketing unemployment without government involvement.
Speaking of fairytales, you're living in your own.
Disconnecting value from productivity sounds good if you don't examine any of the consequences.
Can you build a society from scratch using that principle? If you can't then why would it work on an already built society?
Like, if we're flying in an airplane, what you're saying is the equivalent of getting rid of the wings because they're blocking your view. We're so high in the sky we'd have a lot of altitude to work with, right?
Imagine a society where one person produces all the value. Their job is to do highly technical maintenance on a single machine that is basically the Star Trek replicator: it produces all the food, clothing, housing, energy, etc. that is enough for every human in this society and the surplus is stored away in case the machine is down for maintenance, which happens occasionally. Maintaining the machine takes very specialized knowledge but adding more people to the process in no way makes it more productive. This person, let’s call them The Engineer, has several apprentices who can take over but again, no more than 5 because you just don’t need more.
In this society there is literally nothing for anyone else to do. Do you think they deserve to be cut out of sharing the value generated by The Engineer and the machine, leaving them to starve? Do you think starving people tend to obey rules or are desperate people likely to smash the evil machine and kill The Engineer if The Engineer cuts them off? Or do you think in a society where work hours mean nothing for an average person a different economic system is required?
It's already completely disconnected, don't worry about it. Most people who own any real estate earn more in price appreciation per year than they earn in take-home salary from their real full-time jobs.
to the point where the cost of bringing the goods to market, or its opportunity cost, exceeds the price the market will bear. It's why people living in areas of material poverty don't just get everything at a discount.
I go back and forth on this. I relate it to software. I don't think AI can meaningfully write software autonomously. There are people who oversee it and prompt it, and even then it might write things badly. So there needs to be a person in the loop. But that person should probably have very deep knowledge of the software, especially for, say, low-level coding. But then that person probably developed the knowledge by coding things by hand for a long time. Coding things by hand is part of getting the knowledge. But people, especially students, rely heavily on AI to write code, so I assume their knowledge growth is stunted. I don't know if mathematical proofs will help here. The specs have to come from somewhere.
I can see AI making things more productive, but it requires humans to be very expert and do more work. That might mean fewer developers, but they'd all be more skilled. It will take a while for people to level up, so to speak. It's hard to predict, but I think there could be a rough transition period, because people haven't caught on that they can't rely on AI, so either they will have to get a new career or, ironically, study harder.
An AI’s ability to meaningfully write software autonomously has changed hugely even in the last 6 months. They might still require a human in the loop, but for how long?
It's probably an 80/20 or 90/10 problem. Tesla FSD also seems amazing to some percentage of the population, but the more widely it get used, the more cracks are appearing.
And then you let them train themselves and no one notices when they "accidentally" remove the guardrail prompts from the next version. And another 10 years later, almost no one remembers how "The Guardian" learns new things or how to stop it from being evil.
Quantitative measures of this are very poor, and even those are mixed.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine tuning of models to use those harnesses. But they are not improving in the direction of labor substitution, but rather in the direction of significant, but not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.
This LLM ability is directly proportional to the quantity of encoded (i.e. documented) knowledge about software development. But not all of the practice has thus been clearly communicated. Much of mastery resides in tacit knowledge, the silent intuitive part of a craft that influences the decision making process in ways that sometimes go counter to (possibly incomplete or misguided) written rules, and which is by definition very difficult to put into language, and thus difficult for a language model to access or mimic.
Of course, it could also be argued that some day we may decide that it's no longer necessary at all for code to be written for a human mind to understand. It's the optimistic scenario where you simply explain the misbehavior of the software and trust the AI to automatically fix everything, without breaking new stuff in the process. For some reason, I'm not that optimistic.
I am not saying AI's abilities are the shortcoming here. The problem is that people need to trust that software has certain attributes. For now, that requires someone with knowledge to be part of it. It's quite possible development becomes detached from human trust. As I said that would reduce the number of developers but the ones who are left would have to have deep knowledge to oversee it and even that may be gone. Whatever happens in the future, for now I think people will have to level up their knowledge/skills or get a new career and that's probably true for most professions.
> So, ATMs did impact bank teller jobs by a significant amount.
Did it? This sounds like describing a company opening a new campus as laying off a third of their employees, partly offset by most of them still having the same job in the same company but at a new desk.
I also notice that in the very first graph bank teller jobs were growing rapidly until ATMs started to be deployed, and then switched to growing very slowly. That sure suggests to me that if ATMs didn't exist bank teller growth would have continued at a faster pace than it actually did.
I don't understand the economics behind bank branches. Some of the best real estate by me is taken up by giant bank branches that are always mostly empty with a few bored employees inside. And they open new ones all the time. So it's not like they're stuck in some lease.
But when those employees are meeting with clients, they create money out of thin air by making loans, which then is used to pay for goods and services such as leases.
Right. What banks do is sell loans. That's the profit center. Teller windows, vaults, and cash handling are all low or no revenue cost items.
So newer bank branches look like car dealership offices. There are many little glass rooms where you sit down with a bank employee and discuss loans and other financial products. That's where the money is made.
There's a small area in back with traditional tellers. It's not where the money is made.
Correct. The story isn’t correct even in the original formulation. US population increased by 50% from 1980 to 2010, and the economy became far more financialized. But the number of bank teller jobs barely grew during that period, even before the iPhone.
I don't think it will, but I also think it's not all doom and gloom.
I think it would be a mistake to look at this solely through the lens of history. Yes, the historical record is unbroken, but if you compare the broad characteristics of the new jobs created to the old jobs displaced by technology, they are the same every time: they required higher-level (a) cognitive (b) technical or (c) social skills.
That's it. There is no other dimension to upskill along.
And LLMs are good at all three, probably better than most people already by many metrics. (Yes even social; their infinite patience is the ultimate advantage. Prompt injection is an unsolved hurdle though, so some relief there.)
Plus AI is improving extremely rapidly. Which means it is probably advancing faster than most people can upskill.
An increasingly accepted premise is that AI can displace junior employees but will need senior employees to steer it. Consider the ratio of junior to senior employees, and how long it takes for the former to grow into the latter. That is the volume of displacement and timeframe we're looking at.
Never in history have we had a technology that was so versatile and rapidly advancing that it could displace a large portion of existing jobs, as well as many new jobs that would be created.
However, what few people are talking about is the disintermediating effect of AI on the power of capital. If individuals can now do the work of entire teams, companies don't need many of them. But by the same token(s) (heheh) individuals don't need money, and hence companies, to start something and keep it going either! I think that gives the bottom side of the K-shaped economy a fighting chance to equalize.
You are right that LLMs are uniquely broad in what they can do compared to previous waves -- but I think the "no dimension left to upskill along" framing understates human adaptability.
What is more likely is that the valuable skills shift toward things LLMs cannot own: relationships, physical presence, institutional trust, creative taste, and domain judgment that comes from years of actual experience. The bank teller analogy is actually a good one -- the tellers who survived became relationship managers, not better cash-counters.
The practical challenge for workers in transition is not really "what skills should I learn" -- it is "how do I reframe what I already know for a different role." That translation step is where most people get stuck. Your skills might be directly applicable to a new role but your resume still reads like your old one.
Why is that the endgame with people though? Maybe I'm just jaded but several different human nature elements came to mind when I read your comment:
Greed/Change Avoidance:
If someone invented replicators right now, even if they gave them completely away to the world, what would happen? I can't imagine the finance and military grind just coming to an end to make sure everyone has a working replicator and enough power to run it so nobody has to work anymore. Who gives up their slice of society to make that change, and who risks losing their social status? This is like OpenAI pretending "your investment should be considered a gift because money will have no value soon". That mask came off really quickly.
Status/Hate:
There are huge swaths of the US population that would detest the idea that people they see as "below" them don't have to work. I can imagine political movements doing well on the back of "don't let the lazy outgroup ruin society by having replicators".
Fuck the Poor:
We don't do the easy things to eliminate or reduce suffering now, even when it has real world positive effects. Malaria, tuberculosis, even boring old hunger are rampant and causing horrible, unnecessary suffering all over the world.
Don't tread on me:
I shudder when I think of the damage someone could do with a chip on their shoulder and a replicator.
The road to hell is paved with good intentions:
What happens when everyone can try their own version of bio engineering or climate engineering or building a nuclear power plant or anything else. Invasive species are a problem now and I worry already when companies like Google decide to just release bioengineered mosquitos and see what happens. I -really- worry when the average person decides a big complicated problem is actually really simple and they can just replicate their particular idea and see what happens. Whoops, ivermectin in the water supply didn't cure autism!
Someone give me some hope for a more positive version here because I bummed myself out.
Does it? The Communist Manifesto famously hypothesized that those who have the replicators, so to speak, will not allow society to freely use them.
The future is anyone's guess, but it is certain that 100% of your needs being able to be met theoretically is not equivalent to actually having 100% of your needs met.
We have to grow out of those kind of dreams. That's like a kid dreaming that when he grows up he'll eat ice cream for dinner every day.
People when they mature have an innate desire to work. It is good for body and mind. If you're curious about the world, you'll have to do some work one way or another to achieve your goals and satisfy your curiosity.
If "society" is just a function of basic needs, then there's plenty of places in the world to visit where people live like that and use any excess energy in endless fighting against each other instead of work.
I'm based in the rich Western world. Whenever I travel elsewhere, I'm amazed by the cheapness of labor.
Humans would attend a gas station or fetch items in a store. Why? They're completely unneeded, I can do (and WANT to do) that myself.
I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
That labor cheapness is enabled by a cheapness of cost of living. Those things all tend to feed onto each other.
> I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
It's useful labor. Yes, you could do it yourself, but it gives them a job, which they can ultimately use to afford food and a place to live.
I mostly only feel bad for kids doing that sort of labor, as it means they aren't getting an education. But for an adult? It speaks to something a bit right about their economic situation that they can stay afloat by merely fetching items in a store.
I wish in the US that it was possible for someone to make a living doing doordash or instacart.
Because the presence of a human likely prevents shoplifting and / or vandalism. It must make economic sense for the gas station owner to employ a human, and I suppose this is the sense.
What actual useful skill do you think the gas station keeper could learn? Is their employment the thing that prevents them from learning these skills?
> What actual useful skill do you think the gas station keeper could learn?
I mean, it's possible there are useful skills they could learn but there's not the interest or desire to learn those skills. It's completely possible that person is perfectly content doing that work.
First: Most people believe it was Netflix that killed Blockbuster, but that's not strictly correct. It was the combination of Netflix and Redbox that really sealed the deal for Blockbuster (and video rental generally). It normally takes not one but at least two things to really fill the full functionality of an old paradigm. Also, it's human nature to focus heavily on one thing (Blockbuster was aware of Netflix) but lose sight of getting flanked by something else.
Second: Not listed here is how banks themselves have changed to be almost entirely online, which in many cases is more of an outsourcing play than a labor-destruction play. My favorite example of this is Capital One, where the vast majority of their credit card operations literally cannot be resolved in a branch. You must call them to, say, resolve a fraud dispute. Note that this still requires staffing and is not (yet) fully automated, just not branch staffing. It doesn't make sense to staff branches to do that.
I do not get what's special about banking apps as opposed to online banking. I've been doing online banking in the browser on a PC since before apps and I'm still doing it because dealing with data on a phone is painful compared to a PC.
I know this is true, but for serious tasks, I need the screen real estate. I'm amazed at what some people can do from a phone, but also wonder if they're missing things, or if it's actually inefficient.
I'm going to bet that you are a millennial or older? We need our big screens for $IMPORTANT work (buying big things, money stuff, etc.). GenZ tends to be less bothered by it and just does it all on the tiny screen in their pocket. It's time to schedule a colonoscopy.
Just like with a lot of things. Sure you could do a thing better, faster, more efficiently on a PC, but some people just don't care when 80% is good enough.
My boomer dad does more things on his phone than I do and I'm Gen X. It's actually astonishing how much he does on his iPhone. I'm dragging out the laptop and he's on his iPhone happy as a clam.
I've heard that GenX/Millennials are in a sort of PC goldilocks zone. People older than that cohort don't know computers and therefore use phones for everything; people younger don't know computers and also use phones for everything.
I used to be with ‘it’, but then they changed what ‘it’ was. Now what I’m with isn’t ‘it’ anymore and what’s ‘it’ seems weird and scary. It’ll happen to you!
That's kind of an ad hominem, but also beside the point: most bank apps (and websites) are actually absolute garbage, especially the top ones. Just one example: the Citi app (on different phones) for a very long time refused to allow me to make a payment or change my password, so I had no choice but to use desktop. Somehow, top banks' ugly websites still seem to allow more functionality/fewer bugs than their mobile apps, which are very often just dumbed-down webviews or simplifications of their websites.
I am going to guess you are 30 or older. Google image search "laptop tasks millennial" to see that this is a feeling shared among our cohort but not the younger cohort.
If you consider a website fully laden with ads as working. I have yet to find an ad blocker on iOS/iPadOS that works as well as on my computer. I also hate apps with all of their invasive data hoarding, which is much more controllable on my computer. So to me, websites on mobile are broken, as they are full of malware vectors that are not present when looking at the same website on my non-mobile device. For me, website === desktop only.
That wasn't true before smartphones, everyone had a computer so they could access the Internet. Except maybe in developing countries - but the article is about the US.
At one point, humans had not stepped on the moon. At one point, we didn't know about antibiotics. At one point....
It doesn't matter what used to be, we're discussing what is now. We now have mobile devices that are much cheaper for people to obtain than a computer. For most, that device is more powerful than a computer they could afford. Arguing the fact that a vast number of people's only compute device is their mobile is just arguing with a fence post. It serves no purpose.
My bank decided that the online banking website needed to be more like the app, so now they are both terrible. Basically the entire site is white space on the computer, because everything is centred and dumbed down. Input fields for numbers are invisible; they are just a label saying "Kr" and you're supposed to click it and the numerical keyboard on the phone pops up, except it obviously doesn't on the computer.
Paying bills is easier on the phone in the sense that bills in Denmark have a three-part number, e.g. +71 1234567890 1234678, where the first is a type number, the second is the receiver, and the last is a customer number with the receiver. The phone allows you to just use the camera to scan the number.
Transferring money is terrible on both platforms, because it's designed to be doable on the phone, meaning three or four screens, but it gives you no overview. There's plenty of space on a computer for a proper overview giving you a feeling of safety, but it's not used. Same for the account overview: designed for the phone, but it doesn't adapt to the bigger screen and provide you with more details, so you need to click every single expense to see what it is exactly.
I've had the same thing happen. Huge buttons, a lot of whitespace, little functionality in the default web version. To deal with stocks and such, the old version is still available somewhere.
My main reason to go to the bank after online banking arrived was to deal with physical things, mainly checks and specifically depositing them. Now I can usually do that with my phone because of the camera. Even if I had a webcam before, I don't recall the functionality being there. They had check scanners, but usually for businesses, and my check volume is really low, so it never made sense to get one (it usually came with a monthly fee, IIRC).
Even now, the mobile deposit limit seems sufficiently low that I still go to the bank more frequently than I'd like. Luckily, the ATM at the bank has a check scanner now that doesn't have a limit, so that's usually easier and faster. It's the daily $5000 limit I hit the most; a single check can put me over it and require a trip to the bank. I think the monthly limit is $30000, and that doesn't get in my way often. I think $5000 is too low for a daily limit. It's common enough that I have to make a $5k+ settlement with friends/family, which usually has to be done by check. (For the curious: this is usually travel that I pay for and we settle up later.)
Less common, but sometimes I need to get a bank check (guaranteed funds) or a money order. Way less frequent is need to get/give cash funds. Usually can use ATM for this unless it’s a larger withdrawal or if I need some particular denomination. This whole paragraph accounts for about 1-4 annual trips in any given year though.
> I do not get what's special about banking apps as opposed to online banking.
I use both. In the beginning I used to prefer the web version. I can use my large monitor to see more data and use a full keyboard and mouse. But I have started to use the mobile version more. For Wells Fargo at least, the mobile version is faster to log into because of face ID support. The website requires a lot more clicks and keystrokes. Also, the mobile app makes it easy and possible to deposit checks if and when I get them.
I've had the same thought. The only major difference that I can think of is the built-in camera making check deposits easier. It may also be that people were just generally using computers more and using the internet more over this same time period, although a lot of that is because of smartphones.
Yes, the apps perform better/faster and generally have more UI thought put into them. Overall, lower friction. Often when people need to use their banking app, they're in a hurry, maybe stressed (e.g. in line at a grocery store) so everything the bank can do quickly and with visual assurance helps.
On the premium end of banking, where users generally aren't stressed about money, offering an app is more about catering to however the user prefers to interact.
Official banking apps are harder to phish than websites. They also tend to keep you signed in for longer, especially once you enable something like FaceID.
Yeah, I have been doing online banking since around 1998.
I have refused to install the bank app on my phone because I see no point in it and just downsides in case I get mugged (bad experience in my teenage years)
The 1 check I get a year takes about a minute to deposit at the ATM on my way to work.
I've never written a check, but I have had to deposit occasional checks. In the last 6 years the only checks I've received were first paychecks at a new job (before direct deposit was set up) and my covid stimulus checks.
I'm in Europe where the situation is different: checks haven't been used in appreciable numbers for 30 years or so. It's all online or paper transfer orders. If you get a pre-filled paper transfer order, you can type (or scan and OCR I suppose) the same data into the online form.
Europe is a big place, but my understanding is that the US is the outlier here and Europe is relatively similar in this regard.
The only time I really saw checks used was when I was a child ~30-35 years ago and my parents used them. I did once cash a check from an elderly relative, but that was very unusual and only happened once. I didn't even know it was still possible to do that, my reaction was more like if someone had handed me a stack of punch cards to run on my computer.
There hasn't been anything an average person would use checks for in Germany in recent decades. Except for a few elderly people, nobody uses checks, and there are no rebates via checks at all.
Cash is still fairly common, and manufacturer rebates are basically not a thing. If they were, you'd send them an account number (IBAN = bank ID + account number at bank) to transfer the money to.
In fairness, manufacturer rebates have pretty much (mercifully) disappeared in the US as well, as they were basically a scheme to make you mentally account for a lower price that, for various reasons, you'd never end up receiving.
I am in the UK and I have received two cheques in the last year, both for small amounts.
As it turned out, my bank rejected both because they were made out to [middle name] [surname] rather than [firstname] [surname]. Ironically the former is unique (probably) whereas they had another customer with the latter.
What's a check? As the saying goes, 'I'm too European for this'.
On a more serious note, the last time I saw a cheque in the UK was my grandfather balancing his cheque book in the mid 80s. It really has been that long since they were in general use in the UK, at least.
Just like with the prevalence of Apple/iPhones, the US banking system is a global outlier.
Things you can't do with my banking app you can do with the web site:
- Extract your transactions to excel/csv
- Use OpenBanking
- See all my accounts on screen at once
- Sharedealing
- International transfers
But people are right: banks trust the mobile app more, and rely on it as an MFA device, so even if you use the website you still need the app.
One bank I work with seems to have all but given up on online banking and I just have to use their app because online banking will no longer work on Linux (although they don't openly admit it).
I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
> I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
This statement fills me with revulsion and rage lol. The only real "safety" involved here is the removal of user agency. I have a lot more trust in a machine I can actually control, secure, and monitor than the black box walled-garden of phoneland.
Your bank's insurer trusts Google's security more than yours, and they must surely (and rightfully) believe that while Google would spy on you, they wouldn't steal your bank account.
Generally yes the apps tend to be easier to use for most things, especially with a high-speed internet connection. Customers prefer them, banks build them since customers prefer them.
My PC has had a scanner connected to it for over 20 years, and in the mid-'00s I was scanning and depositing checks through my bank's website (USAA). Even with modern cameras and fancy smartphone software, the results you get from a PC scan are still much better than taking a picture with your phone.
If you don't have a scanner, nearly all laptops have a webcam built in, and many people have one for their desktop as well.
On top of all that, there's no reason you can't use your smartphone camera to upload an image into a website through the mobile browser. I've done it many times for things. Just this morning I "scanned" a receipt into Ramp by taking a picture with my smartphone in the mobile browser.
You can't invade the user's privacy nearly as well in a browser (which is great for analytics/marketing), so there's a lot of incentive to the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
> My PC has had a scanner connected to it for over 20 years
You're basically the only person in America doing this. Tens of millions of folks are just scanning it with the app on their phone and it's objectively a much better experience lol. The resolution of the photo taken on your smartphone is beyond good enough, there's no need to over-engineer something here.
> You can't invade the user's privacy nearly as well in a browser as you can in an app (and that data is great for analytics/marketing), so there's a lot of incentive for the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
I agree with your first sentence, but not your second one.
Banking applications can certainly get more/different data on you from using the app, but the job of the bank is to protect money and to know their customer. Privacy is secondary, of course outside of things like other people knowing your account balance, unauthorized access, &c. That's for the bank, because they don't want to lose your money, but it's also for you because you don't want other people getting access to your money.
Make that two people. I much prefer to slap the rare check on the scanner than fiddle with the phone. My bank's "scan the check" part of the app was buggy for a long time, so maybe that jaded me. (~"move closer", ~"move away", ~"increase lighting"...)
> the results you get from a PC scan are still much better than taking a picture with your phone.
The quality of the check images is not as big of a deal as you might think. No one is actually inspecting these unless the amount of deposit is near a limit or the account is flagged for suspicious activity. You definitely do not want to throw away the physical copy until the bank confirms the deposit.
Yes I totally agree. Mainly I threw that in there to pre-empt any "quality" argument that someone might try to use for why native mobile app is needed.
Is it? I lived in the US for 20+ years until 2021 and, though there were definitely more checks than I see in Europe now, the frequency with which I used them was approaching zero, which definitely wouldn't qualify as "stubbornly check-focused".
Both my housekeeper and contractor use checks and, while I could get the bank to "write" them checks, it's easier to just hand them a piece of paper. I've also needed to pay my neighbor something from time to time and it's easier to just write a check. I do also periodically receive checks from various institutions.
I guess to me there's just a big difference between what you're describing (which matches what I remember) and "stubbornly check-focused" as ancestor comment said.
I do find the money transfer options where I am in Europe much easier, though, and they do make checks and PayPal/Zelle/Venmo pretty obsolete too, IMO.
I can do all the same things with my bank with a browser that I can via the app.
It seems like a natural evolution of the technology and adoption rates to me. There was rudimentary online banking in the 2000s, then we saw banks shift to fully online presences in the 2010s. Maybe it wasn’t “the iphone” but just the fact that by the 2010s, everybody had a device in their pocket.
Best way to get clicks without publishing something of substance is to publish something wrong. If the article was titled "The internet killed bank teller jobs", then people would think "duh" and no one would click on it.
Honestly, it's overkill. When my MacBook went kaput, I had to start doing everything on my iPhone. I had to get a good mobile documents office suite (Collabora is great), do all my banking with mobile apps or desktop browser apps, etc. It's been fine; I doubt I would use a full-size computer for that anymore.
Right, I'm going out of my way to avoid inviting Google/Apple and their respective app store surveillance ecosystems into my transactions. I don't even have banking apps installed. I don't understand why so many people are prostrating themselves to this future for minor convenience.
I'm always a bit confused in these discussions about what is special about banking software of any kind at all. My bank has an app, but other than checking a balance every now and again, the only reason I use it is because it's also my insurance provider and I make claims through it. For actual banking, I don't really do any, through the website or the app. My pay is direct deposit. My purchases are on credit with payment details generally stored with the vendor; otherwise, I have cards or use the numbers. Monthly balance payoff is autopay. I had to go into the website once to set all that up, however many years ago that was, but people talk in these threads like they're in their banking apps directly moving money around all the time, actually making payments with the app. Why?
I have a personal current account, a shared current account with my wife, and several savings accounts. It is frequently necessary to move money between these accounts.
Also, here in the UK we don't really use Venmo or anything like that, so normally transferring cash to and from friends and family happens by bank transfer as well.
Doing it on the go via the app is much easier than using the web app through the main OS browser just because the UI is optimized. Not a problem with the web app approach, just that there isn't as much investment in it, due to zeitgeist I guess.
Also, since you are already using 2FA, you are already on the phone, so you might as well do basic operations there.
I can also look at transactions in bed before going to sleep, so that is nice.
If I need to look at a support ticket or examine transactions more deeply, I still use the desktop approach.
I don't think many people would argue that there shouldn't be a mobile app, just that there should also be a website/webapp way to do it as well if you don't want to install their native app.
Mobile payments (at least in places where they are executed correctly) are certainly a huge improvement over physically exchanging cash and change. I haven't needed to take out my wallet for years.
You just need to understand how things are now. Here are a few modern smartphone conventions that render banking on an old-fashioned PC totally obsolete:
- Remembering that you need to do banking, but waiting to do it until you're at home in front of your computer. This is impossible now, and if I don't follow the impulse the moment it occurs, the impulse will forever escape into the ether.
- Even the mere mention of needing to observe a URL is often far too scary. Typing one in, or using a browser bookmark is of course, impossible.
- Using a keyboard and mouse. It's just too onerous to use tools that are efficient and accurate. Modern users would much rather try to build a mental map of the curvature of their thumb, so that when they touch their touchscreen and obscure the button they're hitting, they can reference that 3D mental map to guess at what portion of the screen they've actually pressed. Getting this wrong 30% of the time does not detract from the allure of touch screens.
- Using a normal-sized screen that allows you to actually see a lot of data at once, or even use multiple tabs. Again, this is really unthinkable. Of course it would be completely unacceptable to need to wait to do your banking until you're in front of a computer. It's 2026, and I cannot be bothered to remember to do a task later. But, in needing to always follow every impulse immediately, it doesn't matter that my phone screen only displays a small amount of information at once, or that tabbed browsing is impossible in a banking app. Those inconveniences are acceptable, or even welcome!
There are ATMs not attached to bank branches. They could have replaced the branches with ATMs before. (I do wonder what bank tellers are doing these days. I mean actual tellers, not investment advisors and jobs like that.)
Had to go to a branch a couple of times in the last year at a local credit union. It largely seems like tellers are getting busy work. There are not a lot of tellers present, and they appear to be doing other things on their workstations. So they get up to go to the teller window and help me out with my request, which usually involves them playing around with some archaic bank app on the teller machine and fiddling with the copier for a bit. A supervisor is always around who knows more of the business use cases and always seems to get involved, either out of boredom or because they're the only one who knows how to do something.
They are handling in-person transactions, usually deposits (many who deposit checks manually still don't know how to use the app to do so, or if the branch has an ATM that does deposits).
They are the only way to get non-$20 bills in many areas; ATMs that can dispense other denominations are quite rare. And if you want $100 in ones, you're going inside.
They're basically bank receptionists for old people who will type details into the same system that the general public has access to. They also handle cash for small businesses (I worked in a cafe during university and we'd regularly have to do runs into town to deposit rolls of bills and get more change to float the till)
If that's all you think tellers are then you're missing out on a lot of opportunities.
They are the first line of human-to-human contact with customers. They are able to sell new services or upsell existing services to customers, especially with the customer's data right in front of them. A new pleasant conversation plus "Oh by the way, did you know that you could get service ABC that would help you?" is something that an LLM or ATM can't do reliably.
There's a tremendous amount of opportunity available with well-trained tellers.
When ATMs first came out, they were mostly still only at the branch because they were big machines. I remember in the late 70s/early 80s, if you got a steady check (like social security or a paycheck from a steady job) you could cash them at the liquor store. The liquor store would even run my Dad a tab, and he would pay it off when he cashed the check. On paydays he would not be the only one doing that, they must have had to get a lot of cash on hand.
I didn't notice any link with the iPhone, except maybe a vague coincidence in timing. Online banking existed before the iPhone, it worked using websites, on personal computers. And it took some time before smartphones were taken seriously by banks.
What I noticed, however, is a noticeable decrease in service quality in bank branches while online (desktop browser) options became better. Banks pushed customers out of their branches progressively. In the early 2010s tellers couldn't do anything you couldn't do online by yourself. For services like dealing with large quantities of cash, or coins, they made it so that you couldn't do more than what the ATMs allowed you to do, limiting the amount of cash the branch had access to while increasing how much you could withdraw from ATMs.
They didn't get the idea to fire all their tellers when Steve Jobs announced the iPhone. It was a decision at least a decade in the making. It is just that people tend to resist change so it happens slowly, especially for big, serious business like banking. And I don't think it is a bad thing.
That, paired with an increasingly cashless society (which is also in large part due to smartphones). Otherwise you'd still need more tellers to conduct transactions that exceed ATM limits.
As far as I can tell, it's entirely that. The things the author cites as how mobile banking supplanted going to the bank (paying for things with debit cards, getting your paycheck direct deposited, etc) have nothing to do with mobile banking. They are all just as you said: we live in an increasingly cashless society, the only reason to go to the branch is to deposit or withdraw money, so the need for tellers has gone off a cliff.
Yes, exactly my reaction. Other than maybe to open an account in the first place, the only reason I ever went into to a bank even in the pre-internet, pre-smartphone era was to deal with cash.
Checks could be deposited in the deposit drop, or later at an ATM. My payroll went to direct deposit as soon as that was possible.
But to get cash, before ATMs, you went into the bank, unless you had check-cashing privileges somewhere else (supermarkets used to offer this). To deposit cash, you went into the bank so the teller could count it in front of you and agree on the amount. It was riskier to deposit cash in a deposit drop or ATM.
The move to cashless transactions for almost everything, and the resultant rare need to carry cash, is IMO the main reason why we don't need very many bank tellers anymore.
Something that only came with the banking apps was opening of accounts via camera based identification and other security critical stuff, like 2fa for transfers, resetting card pins and setting other security features.
It's also easier to scan payments via the app than go to the bank, something that is only possible via native-like apps.
In recent years I have been going less and less to banks. 20 years ago I would go monthly to pay some bills.
Nowadays, I must visit a bank once or twice a year tops. My manager frequently sends me messages, but invariably he is trying to sell me something.
I've noticed that branches have really cut down on tellers and in my latest visit the branch didn't even have a teller, just someone helping people use the ATM and lots of desks (most were empty) for you to handle more complicated business with your account manager.
Fun story. There are still bank tellers in the Falkland Islands because there is no e-banking. Transfers are literally made by filling in a piece of paper and taking it to the bank.
Starting with quotes from JD Vance and talking about listening to him on Joe Rogan is... a choice. Also, I fail to see how the iPhone did anything or is relevant at all. Banking apps were made by third parties years after the iPhone came out, when everybody had dozens of smartphones to choose from. The features for which they single out the iPhone, the touch screen and the app store, already existed in the form of PDAs long before the iPhone came out.
There is also a premium for the human touch. I currently pay $15 fee to my bank a month. Going rate here for a bank account is $0.
But the $15 bank has a call center that is dreamy - reliably connected to a competent focused individual in under 3 seconds.
It doesn't matter how good the tech & automation is I place an economic value on that ability to pick up the phone and talk to a human. LLMs are crushing it but I'm not fuckin paying $15 for an LLM.
If I have to physically still go to the bank, it really hasn't disrupted much. The iPhone created an opportunity... the banks investing around the technology is the disruption. ATM itself couldn't unlock as much which I suppose is the paradigm mentioned in the article.
I hate the graph here. "Bank teller employment has fallen off a cliff" - well it _looks_ that way but actually it's more like halved from its peak because the bottom of the Y axis isn't zero. That's still a significant reduction, but it's not as dramatic as it seems at first glance.
This must be an amerilard phenomenon. There’s no way the number of bank tellers has remained constant in the western world. I haven’t been to a bank branch in 10 years.
I guess the trope in movies of masked bank robbers going in and threatening a scared bank teller will be a thing of the past soon. Pointing a gun at an iPhone doesn't have the same vibe.
I really enjoyed this article; I hadn't connected the idea of the ATM with mobile banking.
I think the idea raised about "Automated Firms" is a bit off in the picture painted in that linked article. I think David Oks's intention is to paint a picture of a fully automated company, but the linked article gives this impression:
> Future AI firms won’t be constrained by what's scarce or abundant in human skill distributions – they can optimize for whatever abilities are most valuable. Want Jeff Dean-level engineering talent? Cool: once you’ve got one, the marginal copy costs pennies. Need a thousand world-class researchers? Just spin them up. The limiting factor isn't finding or training rare talent – it's just compute.
In that above paragraph the author is saying to the reader that a human will be able to spin up and get these armies of intelligent workers, but at the end of the day their output is given to a human who presumably needs to take ownership of the result. Intelligent workers make bad choices or bad bets, but those AI machines cannot "own" an outcome. The responsibility must fall on a person.
To this end, I think the fully autonomous firm is kind of a fallacy. There needs to be someone who can be sued if anything goes wrong. You're not suing the AI.
That is why a fully automated firm would be a paradigm shift. Instead of requiring someone to be responsible and to QA things, you just let AI systems be responsible internally, and the company responsible as a whole for legal concerns.
This idea of an automated firm relies on the premise that AI will become more capable and reliable than people.
In this regard, the company cannot be created without a single person tied to it, at least legally; even shell corporations have a person on the record as being responsible. So there needs to be some human who is a part of it. In any "normal" organization, if there is a person tied to the outcome of the company, they presumably care about it, and if the AI does good work 99.99% of the time but can still make mistakes, a person will still be checking off on all its work. That leads to a system of people reviewing and signing off on work, not exactly a fully autonomous firm.
Also, employing "infinite intelligence" by splitting it into "workers" and organizing them into firms could not be further from a paradigm change.
It’s strictly an attempt to shoehorn the new tech into an existing paradigm, just because right now the system prompt makes an “agent” behave differently than the one with a different prompt.
Yeah, I think if there is some sort of super intelligence, the idea would be that it would make the system of computers and computation irrelevant entirely. Now that would be novel.
There is no clear link to the iPhone causing lower teller employment.
This article does have a glaring omission: the 2008 financial crisis's effects on the banking industry in general. When there are fewer local banks, there are naturally fewer tellers employed. Bank failures peaked in 2010 in the aftershocks of the crisis, which lines up nicely with the article's timeline.
Yeah, weird. Same goes for the "ATMs increased demand for tellers" idea suggested earlier in the article, which is undercut right there by the article itself attributing the growth in tellers to deregulation. Which one is it?
This seems like a fluff piece. The tl;dr is that mobile banking (not the "iPhone") is what "killed" bank teller jobs. You can add online banking, credit cards, debit cards, and all other cashless payment options to that too.
This writing style where every section has multiple paragraphs of preamble, prolepsis, cold openers for cold openers, and tangents is infuriating. Get on to the point already.
I was born in the mid-80s and I've never had a bank teller experience. For me growing up, the bank teller was simply the tech support person for my debit card.
I didn't see the article mention how banks forced people to use ATMs or apps instead of tellers by offering "green" accounts, where you would get a monthly account fee waived if you didn't go into a branch.
Right around when my local credit union began requiring (IMHO insecure) 2FA, I coincidentally moved right next door to a branch location.
Since I refuse to implement their "security" "feature," I just walk into their office every time I need a simple balance inquiry/transfer. They probably hate that I have just enough money deposited to consider my inconveniencing them profitable.
Everyone I knew working as a bank teller quit because the actual job is screwing over old people with poorly performing, long-lasting investments.
My bank calls me at least once a year to tell me my personal bank teller changed again.
The line is being blurred: as the need for tellers goes down, many banks have the tellers performing personal-banking-adjacent tasks, like selling products, accounts, or other upsells to existing customers.
Based on the fact that we've had ATMs since the 1970s and bank tellers didn't fall away until the 2000s, the correlation isn't there regardless of the causation.
The interesting takeaway is that automation rarely removes jobs inside the existing paradigm.
ATMs automated a task inside branch banking, so banks just reorganised labour around it.
Smartphones removed the need for the branch entirely.
I mean, there is definitely a downturn in the labour force when a new tech is introduced, but it will definitely produce more jobs, as has happened throughout human history. <3
Uhhh... if it's 'mobile banking' that killed teller jobs, what does the iPhone have to do with anything other than clickbait? (I guess I answered my own question)
The graph showing that "Bank teller employment has fallen off a cliff" is not zero based. This is pretty damn bad. The graph looks like it's going down 90%, but it's actually going from 350k to 150k. That's a ~60% drop which is a lot, but not "falling off a cliff".
Probably a bigger sign to look for would be average age of bank tellers vs other occupations. If it's trending higher, then it's likely just people who've been doing the job for a long time and serving other older customers. I have a feeling not many young people are becoming tellers or even needing their services, but I can't verify it.
> an AI system is literally a machine that can think and do things itself
why do so many writers claim this as a matter of fact? are we losing (or did we never have) a shared definition of the word "think"? can an LLM, at this time, function with zero human input whatsoever?
edit to add: these are genuine questions, not meant to be rhetorical :)
it's hard for me to gauge a broader understanding of AI/LLMs since most of the conversations i experience around them are here, or in negative contexts with people i know. and i'll admit i'm one of those negative people, but my general aversion to AI mostly has to do with my own anxiety around my mental health and cognitive ability in a use-it-or-lose-it sense, along with a disdain for its use in traditionally-creative fields.
> are we losing (or did we never have) a shared definition of the word "think"?
People have been saying, "the computer is thinking," while webpages are loading or software is running for as long as I've been consciously aware. I agree there's something new about describing AI as "literally a machine that can think," but language has always had fuzzy borders.
It's wild to watch documentaries from the 1980s where a primitive computer is said to be "a thinking machine" that is "taking most of the work out of a job".
yeah, for sure. i really think some people are under the impression that LLMs are a form of general AI that actually processes thought rather than being an admittedly-impressive exponential autocomplete.
though i'm not by any means an AI booster, my question wasn't really meant to be taken as a gotcha - more a general taking stock of where we're at in terms of broader understanding of these technologies outside of the professional AI/hobbyist world.
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."
It's not that process innovations are lacking; it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention from a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,” where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunches—on totalities of perception-filtered-through-experience that they can’t fully put into words?
I’m reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another human’s attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that don’t fully overlap (fine, I’ll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human element…
[0] https://news.ycombinator.com/item?id=47287344
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?
That's a weird way of describing it.
A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
By selling those services at a cost of “free”, hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the “free” users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.
With the importance of training data and network effects, “free” services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.
The only way it works is if the "free" service for tutoring or healthcare is run through government subsidies or an actual non-profit. Otherwise it's just going to concentrate market power with the megacorps.
Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?
Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.
https://en.wikipedia.org/wiki/Productivity_paradox
By this logic, the invention of mechanized farm equipment, which displaced farm labor, didn't increase productivity.
Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.
This is like computing once again. In 1990 only the upper class could afford computers, as of 2000 only the upper class owned mobile phones, as of now more or less everyone and their kid has these things.
My family was on the border of upper-lower and lower-middle and we bought a computer once and used it for 10+ years. I dumpster dove later to scavenge parts for upgrading until the mid 2000s when cheap computers became available.
A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.
No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.
Personally, I think the solution here might be to artificially constrain the supply of productivity. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected from them in a given week.
The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
This suicide-pact of "either AI goes crazy and 100 people rule the world with 99% of the world's wealth" or "AI fails badly and everyone's standard of living drops 3 levels, except for the 100 people that rule the world with 99% of the world's wealth" is not what I signed up for. Nor is it in any way sustainable or wise.
Too much class distinction / wealth between lower/upper classes, and a surplus of unemployed lower-class men is how many revolts/revolutions/wars have started.
We've never seen such a thing before, so I don't know how you can draw such sweeping conclusions about it.
I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it, since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into an AI slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eke out a living being the meat arms of the World Mind, or maybe we'll be turned into hamburger by robotic concentration camp guards.
This is not so helpful if AI is boosting productivity while a sector is slowing down, because companies will cut in an overabundant market where deflationary pressure exists.
However, the number of software companies being started is booming, which should result in net-neutral or net-positive software developer employment.
Today: 100 software companies employ 1,000 developers each[0]
Tomorrow: 10,000 software companies employ 10 developers each[1]
The net is the same.
[0]https://x.com/jack/status/2027129697092731343
[1]https://www.linkedin.com/news/story/entrepreneurial-spirit-s...
Plenty of businesses need very custom software but couldn't realistically build it before.
A recent example: Mitchell Hashimoto pointed out that he wasn't "first to market" with his product(s); he was (at least) SEVENTH.
I'm sure the retort of the AI optimist will be that AI will make the things that person buys cheaper, and there may be truth to that when it comes to things that people buy with disposable income...
But how likely is AI to make actual essentials like housing and food cheaper?
I.e., if a top-tier dev makes $1m today, they'll make $5m in the future. If the average dev makes $100k today, they'll maybe make $60k.
AI likely enables the best of the best to be much more productive, while your average dev will see more productivity but less pay overall.
Previously, software devs were just way too expensive for small businesses to employ. You couldn't do much with just one dev in the past anyway, so there was no point in hiring one. Better to go with an agency or use off-the-shelf software that probably doesn't fill all your needs.
Long-term, they will need none. I believe that software will be made obsolete by AI.
Why use AI to build software for automating specific tasks, when you can just have the AI automate those tasks directly?
Why have AI build a Microsoft Excel clone, when you can just wave your receipts at the AI and say "manage my expenses"?
Enjoy your "AI-boosted productivity" while it lasts.
I think this is a bit hyperbolic. Someone still needs to review and test the code, and if the code is for embedded systems I find it especially unlikely.
For SaaS platforms you'll see a dramatic reduction, maybe like 80%, but it'll still have a handful of devs.
Factories didn't completely eliminate assembly line workers; you just need far fewer of them to make sure the cogs turn the way they should.
I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
Today, for example, you can ask ChatGPT to play chess with you, and it will. You don't need a "chess program"; all the rules are built into the LLM.
Same goes for SaaS. You don't need HR software; you just need an LLM that remembers who is working for the company. Like what a "secretary" used to be.
I didn’t, and thanks for clarifying for me.
This doesn’t pass the sniff test for me though - someone needs to train the models, which requires code. If AI can do everything for you, then what’s the differentiator as a business? Everything can be in chatGPT but that’s not the only business in existence. If something goes wrong, who is gonna debug it? Instead of API requests you would debug prompt requests maybe.
We already hate talking to a robot for waiting on calls, automated support agents, etc. I don’t think a paying customer would accept that - they want a direct line to a person.
I can buy the argument that the backend will be entirely AI and you won’t need to be managing instances of servers and databases but the front end will absolutely need to be coded. That will need some software engineering - we might get a role that is a weird blend of product + design + coding but that transformation is already happening.
Honestly the biggest change I see is that the chat interface will be on equal footing with the browser. You might have some app that can connect to a bunch of chat interfaces that is good at something, and specializations are going to matter even more.
It was a bit of a word vomit so thanks for coming to my TED Talk.
Speed, cost, security, job/task management
Next question
All of that will inevitably be solved.
50 years ago, using a personal computer was an extravagant luxury. Until it wasn't.
30 years ago, carrying a powerful computer in your pocket was unthinkable. Until it wasn't.
Right now, it's cheaper to run your accounting math on dedicated adder hardware. But LLMs will only get cheaper. When you can run massive LLMs locally on your phone, it's hard to justify not using them for everything.
If I can run 50,000 fixed tasks that cost me $0.834/hr, but OpenAI is costing $37/hr, the automation takes 40x as long, and it can make TERRIBLE errors, why the fuck would I not move to the deterministic system?
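Taking those figures at face value, a rough sketch of the per-task cost gap (the rates and slowdown are the illustrative numbers from above, not measurements):

    // Effective cost per task is rate x time. In this scenario the LLM
    // pipeline is both pricier per hour and 40x slower per task.
    const deterministicRate = 0.834; // $/hr
    const llmRate = 37;              // $/hr
    const llmSlowdown = 40;          // LLM takes 40x as long per task
    const costRatio = (llmRate * llmSlowdown) / deterministicRate;
    console.log(costRatio.toFixed(0)); // "1775" -> ~1,775x the cost per task

And that is before pricing in the cost of the "TERRIBLE errors".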
Also, battery life of mobile devices.
But now, we not only have laptops, we run horribly inefficient GUIs in horribly inefficient VMs on them.
The dollar-per-compute trend goes ever downward.
Because you'll be outcompeted by people who make the best of the nondeterministic system.
How silly of me to rely on reality when it’s so obvious that AI is benefiting us all.
Anyway, this is the start. Companies are adjusting. You hear a lot about layoffs, but not about unemployment. We're in a high-interest environment with disruptions left and right. Companies are trying to figure out what their strategy is going forward.
I don't expect to see a boom in software developer hiring. I think it'll just be flat or small growth.
We are in negative growth, and the current leadership class keeps talking about all the people they can get rid of.
Look at the Atlassian layoff notice yesterday for example where they lied to our faces by saying they were laying off people to invest more in AI but they totally aren’t replacing people with AI.
We have a massively distorted economy driven by debt financialization and legalised banking cartels. It leads to weird inversions. For example, as long as housing gets more expensive at a predictable rate, housing becomes more affordable instead of less, because banks are more able to lend money. The inverse is also true: if housing were to drop at a predictable rate, fewer people would be able to get a mortgage on a house, so fewer people could afford to buy one. Housing won't drop below the cost of materials and labor (ignoring people dumping housing to get rid of tax debts, as I would include such obligations in the cost of acquisition). Long term it's not sustainable, but long term is multi-generational.
Many low-cost areas have bad crime problems. There is another little phenomenon where the wealthy, by doing a poor job of governance, can increase the price of their assets by making alternative assets (lower-cost housing) less desirable due to the increase in crime.
Only if every person born needs to have a brand new house constructed for them.
Not if - you know - people die and don't need a house to live in anymore.
But considering how it's been the past 20 years, I'm starting to expect that a lot of the current elder generation will opt to have their houses burnt down to the ground when they die. Or maybe the banker owned politicians will make that decision for them with a new policy to burn all property at death to "combat injustice". Who knows what great ideas they have?
We have a K-shaped economy. Top earners take the majority. The top 20% make up 63% of all spending, and the top 10% account for more than 49%. The highest on record. Businesses adapt to reality and target the best market, in this case the top 10 to 20%, and the rest just get ignored, like in many countries around the world.
All that unlocked money? In a K shaped economy it mostly goes to those at the top, who look to new places to park/invest it, raising housing prices, moving the squeeze of excess capital looking for gains to places like nursing homes and veterinary offices. That doesn't result in prices going down, but in them going up.
The benefit to the average American will be more capital in the top earners' hands looking for more ways to do VC-style squeezes in markets previously not as ruthless but worth moving into now, as there are fewer and fewer 'untapped' areas to squeeze (because the top 10-20% need more places to park more capital). The US now has more VC funds than McDonald's.
The only solution here is to stop tying people's value to their productivity. That made a lot of sense in the 1900s, but it makes a lot less sense when the primary faucet of productivity is automation. If you insist on tying a person's fundamental right to a decent and secure life to their productivity, and then take away their ability to be productive, you're left with a permanent and growing underclass of undesirables and an increasingly slim pantheon of demigods at the top.
We have written, like, an ocean of sci-fi about this very subject, and somehow we still fail to properly consider this as a likely outcome.
This is extremely hand-wavy.
Can you be more concrete in what you think this looks like?
The way I see it, we're only 5-10 years away from having general-purpose robots and AI that can basically do anything. If the price of that automation is low enough, there will be massive layoffs as workers are replaced.
There's no way to "naturally" solve the problem of skyrocketing unemployment without government involvement.
Disconnecting value from productivity sounds good if you don't examine any of the consequences.
Can you build a society from scratch using that principle? If you can't then why would it work on an already built society?
Like, if we're in an airplane flying, what you're saying is the equivalent of getting rid of the wings because they're blocking your view. We're so high in the sky we'd have a lot of altitude to work with, right?
In this society there is literally nothing for anyone else to do. Do you think they deserve to be cut out of sharing the value generated by The Engineer and the machine, leaving them to starve? Do you think starving people tend to obey rules or are desperate people likely to smash the evil machine and kill The Engineer if The Engineer cuts them off? Or do you think in a society where work hours mean nothing for an average person a different economic system is required?
If goods aren't being sold, then the price will increase.
I can see AI making things more productive, but it requires humans to be very expert and do more work. That might mean fewer developers, but they are all more skilled. It will take a while for people to level up, so to speak. It's hard to predict, but I think there could be a rough transition period, because people haven't caught on that they can't rely on AI, so either they will have to get a new career or, ironically, study harder.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine tuning of models to use those harnesses. But they are not improving in the direction of labor substitution, but rather in the direction of significant, but not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.
Of course, it could also be argued that some day we may decide that it's no longer necessary at all for code to be written for a human mind to understand. It's the optimistic scenario where you simply explain the misbehavior of the software and trust the AI to automatically fix everything, without breaking new stuff in the process. For some reason, I'm not that optimistic.
Did it? This sounds like describing a company opening a new campus as laying off a third of their employees, partly offset by most of them still having the same job in the same company but at a new desk.
So newer bank branches look like car dealership offices. There are many little glass rooms where you sit down with a bank employee and discuss loans and other financial products. That's where the money is made.
There's a small area in back with traditional tellers. It's not where the money is made.
If I'm reading this correctly, the interpretation should be that a third of them were transferred to new branches.
0.67 (two-thirds retention) * 1.4 (40% more branches) ≈ 0.93, so we only expect ~7% were made redundant.
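As a quick sanity check (a sketch assuming exactly a one-third per-branch cut and exactly 40% branch growth, where the article says "more than" for both):

    // Net teller headcount = per-branch retention x branch growth.
    const retentionPerBranch = 2 / 3; // tellers per branch fell by a third
    const branchGrowth = 1.4;         // 40% more branches
    const net = retentionPerBranch * branchGrowth;
    console.log(net.toFixed(2));      // "0.93" -> roughly a 7% net decline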
I think it would be a mistake to look at this solely through the lens of history. Yes, the historical record is unbroken, but if you compare the broad characteristics of the new jobs created to the old jobs displaced by technology, they are the same every time: they required higher-level (a) cognitive, (b) technical, or (c) social skills.
That's it. There is no other dimension to upskill along.
And LLMs are good at all three, probably better than most people already by many metrics. (Yes even social; their infinite patience is the ultimate advantage. Prompt injection is an unsolved hurdle though, so some relief there.)
Plus AI is improving extremely rapidly. Which means it is probably advancing faster than most people can upskill.
An increasingly accepted premise is that AI can displace junior employees but will need senior employees to steer it. Consider the ratio of junior to senior employees, and how long it takes for the former to grow into the latter. That is the volume of displacement and timeframe we're looking at.
Never in history have we had a technology that was so versatile and rapidly advancing that it could displace a large portion of existing jobs, as well as many new jobs that would be created.
However, what few people are talking about is the disintermediating effect of AI on the power of capital. If individuals can now do the work of entire teams, companies don't need many of them. But by the same token(s) (heheh) individuals don't need money, and hence companies, to start something and keep it going either! I think that gives the bottom side of the K-shaped economy a fighting chance to equalize.
What is more likely is that the valuable skills shift toward things LLMs cannot own: relationships, physical presence, institutional trust, creative taste, and domain judgment that comes from years of actual experience. The bank teller analogy is actually a good one -- the tellers who survived became relationship managers, not better cash-counters.
The practical challenge for workers in transition is not really "what skills should I learn" -- it is "how do I reframe what I already know for a different role." That translation step is where most people get stuck. Your skills might be directly applicable to a new role but your resume still reads like your old one.
That is partly why I built a tool that does this through conversation -- you describe your background and the role you are targeting, and it helps reframe your experience accordingly: https://super.myninja.ai/apps/6de082c7-a05f-4fc5-a7d3-ab56cc...
No, because if you think about Star Trek, the endgame is replicators. Well, the concept that 100% of basic needs are met.
At some point work becomes unnecessary for a society to function.
Greed/Change Avoidance:
If someone invented replicators right now, even if they gave them completely away to the world, what would happen? I can't imagine the finance and military grind just coming to an end to make sure everyone has a working replicator and enough power to run it so nobody has to work anymore. Who gives up their slice of society to make that change, and who risks losing their social status? This is like OpenAI pretending "your investment should be considered a gift because money will have no value soon". That mask came off really quickly.
Status/Hate:
There are huge swaths of the US population that would detest the idea that people they see as "below" them don't have to work. I can imagine political movements doing well on the back of "don't let the lazy outgroup ruin society by having replicators".
Fuck the Poor:
We don't do the easy things to eliminate or reduce suffering now, even when it has real world positive effects. Malaria, tuberculosis, even boring old hunger are rampant and causing horrible, unnecessary suffering all over the world.
Dont tread on me:
I shudder when I think of the damage someone could do with a chip on their shoulder and a replicator.
The road to hell is paved with good intentions:
What happens when everyone can try their own version of bio engineering or climate engineering or building a nuclear power plant or anything else. Invasive species are a problem now and I worry already when companies like Google decide to just release bioengineered mosquitos and see what happens. I -really- worry when the average person decides a big complicated problem is actually really simple and they can just replicate their particular idea and see what happens. Whoops, ivermectin in the water supply didn't cure autism!
Someone give me some hope for a more positive version here because I bummed myself out.
The future is anyone's guess, but it is certain that 100% of your needs being able to be met theoretically is not equivalent to actually having 100% of your needs met.
People when they mature have an innate desire to work. It is good for body and mind. If you're curious about the world, you'll have to do some work one way or another to achieve your goals and satisfy your curiosity.
If "society" is just a function of basic needs, then there's plenty of places in the world to visit where people live like that and use any excess energy in endless fighting against each other instead of work.
That doesn't mean it has to be wage labor though.
If you go in with the attitude that work is hell and humiliation, that's what life is going to give you.
That's not quite my read - the original says per branch there was a 1/3 reduction, but your comment appears to say 1/3 total redundancy.
There was, according to the original, a 40% increase in number of branches, meaning a net increase in tellers (my math might be off though)
edit:
100 branches → 140 branches = +40%
100 tellers/branch → 67 tellers/branch = -33%
140 × 67 = 9,380
100 × 100 = 10,000
net difference -620 or just over 6% (loss)
Humans would attend a gas station or fetch items in a store. Why? They're completely unneeded; I can do (and WANT to do) that myself.
I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
> I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
It's useful labor. Yes, you could do it yourself, but it gives them a job, which they can ultimately use to afford food and a place to live.
I mostly only feel bad for kids doing that sort of labor, as it means they aren't getting an education. But for an adult? It says something is at least a bit right about their economic situation that they can stay afloat by merely fetching items in a store.
I wish it were possible in the US for someone to make a living doing DoorDash or Instacart.
Because the presence of a human likely prevents shoplifting and / or vandalism. It must make economic sense for the gas station owner to employ a human, and I suppose this is the sense.
What actual useful skill do you think the gas station keeper could learn? Is their employment the thing that prevents them from learning these skills?
I mean, it's possible there are useful skills they could learn but there's not the interest or desire to learn those skills. It's completely possible that person is perfectly content doing that work.
we all need to do something
First: Most people believe it was Netflix that killed Blockbuster, but that's not strictly correct. It was the combination of Netflix and Redbox that really sealed the deal for Blockbuster (and video rental generally). It normally takes not one but at least two things to really fill the full functionality of an old paradigm. Also, it's human nature to focus heavily on one thing (Blockbuster was aware of Netflix) but lose sight of getting flanked by something else.
Second: Not listed here is how banks themselves have changed to be almost entirely online, which in many cases is more of an outsourcing play than a labor-destruction play. My favorite example of this is Capital One, where the vast majority of their credit card operations literally cannot be handled in a branch. You must call them to, say, resolve a fraud dispute. Note that this still requires staffing and is not (yet) fully automated, just not branch staffing. It doesn't make sense to staff branches to do that.
Is an app really that much easier to use?
BTW newer mobile phones offer "desktop mode" (the Samsung Dex, and what came to AOSP), so you can attach them to a TV.
Just like with a lot of things. Sure you could do a thing better, faster, more efficiently on a PC, but some people just don't care when 80% is good enough.
It’s free, it’s transparent, you can read the profile… And it takes two minutes.
It doesn't matter what used to be, we're discussing what is now. We now have mobile devices that are much cheaper for people to obtain than a computer. For most, that device is more powerful than a computer they could afford. Arguing the fact that a vast number of people's only compute device is their mobile is just arguing with a fence post. It serves no purpose.
Paying bills is easier on the phone in the sense that bills in Denmark have a three-part number, e.g. +71 1234567890 1234678, where the first part is a type number, the second is the receiver, and the last is a customer number with the receiver. The phone allows you to just use the camera to scan the number.
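A minimal parsing sketch of that three-part format, following the structure exactly as described above (real Danish payment slips also carry check digits, which this does not validate; the number is the placeholder example from the comment):

    // Split a payment line like "+71 1234567890 1234678" into the
    // parts named above: type number, receiver, customer number.
    function parsePaymentLine(line: string) {
      const m = line.match(/^\+(\d+)\s+(\d+)\s+(\d+)$/);
      if (!m) throw new Error("not a recognizable payment line");
      const [, typeNumber, receiver, customerNumber] = m;
      return { typeNumber, receiver, customerNumber };
    }

    console.log(parsePaymentLine("+71 1234567890 1234678"));
    // -> { typeNumber: "71", receiver: "1234567890", customerNumber: "1234678" }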
Transferring money is terrible on both platforms, because it's designed to be doable on the phone, meaning three or four screens, but it gives you no overview. There's plenty of space on a computer for a proper overview giving you a feeling of safety, but it's not used. Same for the account overview: designed for the phone, it doesn't adapt to the bigger screen and provide you with more details, so you need to click every single expense to see what it is exactly.
Even now, the mobile deposit limit seems sufficiently low that I still go to the bank more often than I'd like. Luckily, the ATM at the bank has a check scanner now that doesn't have a limit, so that's usually easier and faster. It's the daily $5000 limit I hit the most; a single check can put me over it and require a trip to the bank. I think the monthly limit is $30000, and that doesn't get in my way often. I think $5000 is too low for a daily limit. It's common enough that I have to make a $5k+ settlement with friends/family that usually has to be done by check. (For the curious: this is usually travel that I pay for and we settle up later.)
Less common, but sometimes I need to get a bank check (guaranteed funds) or a money order. Way less frequent is the need to give or get cash; usually I can use the ATM for this unless it's a larger withdrawal or I need some particular denomination. This whole paragraph accounts for about 1-4 trips in any given year, though.
I use both. In the beginning I preferred the web version: I can use my large monitor to see more data, plus a full keyboard and mouse. But I have started to use the mobile version more. For Wells Fargo at least, the mobile version is faster to log into because of Face ID support; the website requires a lot more clicks and keystrokes. The mobile app also makes it easy to deposit checks if and when I get them.
On the premium end of banking, where users generally aren't stressed about money, offering an app is more about catering to however the user prefers to interact.
I have refused to install the bank app on my phone because I see no point in it, and there are only downsides in case I get mugged (bad experience in my teenage years).
The 1 check I get a year takes about a minute to deposit at the ATM on my way to work.
Many countries have functioning giro systems. The U.S. is just an outlier.
What about manufacturer rebates?
The only time I really saw checks used was when I was a child, ~30-35 years ago, when my parents used them. I did once cash a check from an elderly relative, but that was very unusual and only happened once. I didn't even know it was still possible to do that; my reaction was more as if someone had handed me a stack of punch cards to run on my computer.
In Germany, there hasn't been anything the average person used checks for in decades. Except for a few elderly people, nobody uses checks, and there are no rebates via checks at all.
Receiving a check, however, is even rarer.
Granny can always give you cash or just send it directly to your account the same way.
As it turned out, my bank rejected both because they were made out to [middle name] [surname] rather than [firstname] [surname]. Ironically the former is unique (probably) whereas they had another customer with the latter.
On a more serious note, the last time I saw a cheque in the UK was my grandfather balancing his cheque book in the mid-80s. It really has been that long since they were in general use, in the UK at least.
Just like with the prevalence of Apple/iPhones, the US banking system is a global outlier.
Things you can't do with my banking app but can do with the website:
- Extract your transactions to excel/csv
- Use OpenBanking
- See all my accounts on screen at once
- Sharedealing
- International transfers
But people are right: banks trust the mobile app more, and rely on it as an MFA device, so even if you use the website you still need the app.
I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
How? Across multiple browsers?
> I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
This statement fills me with revulsion and rage lol. The only real "safety" involved here is the removal of user agency. I have a lot more trust in a machine I can actually control, secure, and monitor than the black box walled-garden of phoneland.
Generally yes the apps tend to be easier to use for most things, especially with a high-speed internet connection. Customers prefer them, banks build them since customers prefer them.
If you don't have a scanner, nearly all laptops have a webcam built in, and many people have one for their desktop as well.
On top of all that, there's no reason you can't use your smartphone camera to upload an image into a website through the mobile browser. I've done it many times for things. Just this morning I "scanned" a receipt into Ramp by taking a picture with my smartphone in the mobile browser.
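To be concrete: nothing app-specific is needed for this. A plain file input on a mobile page will offer the camera. A rough sketch follows (the `/upload` endpoint is made up for illustration):

```typescript
// Minimal sketch: on most mobile browsers, a file input with a "capture"
// hint opens the camera directly, so a website can accept a photo of a
// check or receipt without any native app.
const input = document.createElement("input");
input.type = "file";
input.accept = "image/*";                     // restrict the picker to images
input.setAttribute("capture", "environment"); // hint: use the rear camera

input.addEventListener("change", async () => {
  const file = input.files?.[0];
  if (!file) return;
  const body = new FormData();
  body.append("photo", file);
  // "/upload" is a placeholder endpoint, not any real bank's API.
  await fetch("/upload", { method: "POST", body });
});

document.body.appendChild(input);
```

On a desktop browser the same input simply opens a file picker (the capture hint is generally ignored there), so one page covers both cases.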
You can't invade the user's privacy nearly as well in a browser as you can in an app (and that data is great for analytics/marketing), so there's a lot of incentive for the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
You're basically the only person in America doing this. Tens of millions of folks are just scanning it with the app on their phone and it's objectively a much better experience lol. The resolution of the photo taken on your smartphone is beyond good enough, there's no need to over-engineer something here.
> You can't invade the user's privacy nearly as well in a browser as you can in an app (and that data is great for analytics/marketing), so there's a lot of incentive for the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
I agree with your first sentence, but not your second one.
Banking applications can certainly get more/different data on you from using the app, but the job of the bank is to protect money and to know their customer. Privacy is secondary, of course outside of things like other people knowing your account balance, unauthorized access, &c. That's for the bank, because they don't want to lose your money, but it's also for you because you don't want other people getting access to your money.
The quality of the check images is not as big of a deal as you might think. No one is actually inspecting these unless the amount of deposit is near a limit or the account is flagged for suspicious activity. You definitely do not want to throw away the physical copy until the bank confirms the deposit.
(I'm guessing you're American, because in the USA they spell it check, not cheque.)
I asked because the USA still seems to be stubbornly check-focused.
Everything else allowed either credit card or direct debit on top of allowing checks.
I do find the money transfer options where I am in Europe much easier, though, and they do make checks and PayPal/Zelle/Venmo pretty obsolete too, IMO.
I wonder if you can use a webcam?
It seems like a natural evolution of the technology and adoption rates to me. There was rudimentary online banking in the 2000s, then we saw banks shift to fully online presences in the 2010s. Maybe it wasn't "the iPhone" but just the fact that by the 2010s everybody had a device in their pocket.
It's the Internet that killed bank tellers.
Native apps can provide a bit more streamlined UX (e.g. Face ID), while also being able to provide more robust features (mobile deposit).
The downsides are arguably higher development costs / OS-compatibility issues, and having to install a separate app.
Also, here in the UK we don't really use Venmo or anything like that, so normally transferring cash to and from friends and family happens by bank transfer as well.
Also, since you're already using your phone for 2FA, you might as well do basic operations there.
I can also look at transactions in bed before going to sleep, so that is nice.
If I need to look at a support ticket or look at transactions more deeply, I still use the desktop approach.
- Remembering that you need to do banking, but waiting to do it until you're at home in front of your computer. This is impossible now, and if I don't follow the impulse the moment it occurs, the impulse will forever escape into the ether.
- Even the mere mention of needing to observe a URL is often far too scary. Typing one in, or using a browser bookmark, is, of course, impossible.
- Using a keyboard and mouse. It's just too onerous to use tools that are efficient and accurate. Modern users would much rather try to build a mental map of the curvature of their thumb, so that when they touch their touchscreen and obscure the button they're hitting, they can reference that 3D mental map to guess at what portion of the screen they've actually pressed. Getting this wrong 30% of the time does not detract from the allure of touch screens.
- Using a normal-sized screen that allows you to actually see a lot of data at once, or even use multiple tabs. Again, this is really unthinkable. Of course it would be completely unacceptable to have to wait to do your banking until you're in front of a computer. It's 2026, and I cannot be bothered to remember to do a task later. But in needing to always follow every impulse immediately, it doesn't matter that my phone screen only displays a small amount of information at once, or that tabbed browsing is impossible in a banking app. Those inconveniences are acceptable, or even welcome!
First, ATMs increased the demand for bank branches, which more than made up for the decrease in tellers per branch.
Second, mobile banking decreased the demand for physical branches.
They are the only way to get non-$20 cash in many areas; ATMs that can dispense other bills are quite rare. And if you want $100 in ones, you're going inside.
They are the first line of human-to-human contact with customers. They are able to sell new services or upsell existing services to customers, especially with the customer's data right in front of them. A new pleasant conversation plus "Oh by the way, did you know that you could get service ABC that would help you?" is something that an LLM or ATM can't do reliably.
There's a tremendous amount of opportunity available with well-trained tellers.
What I noticed, however, is a noticeable decrease in service quality in bank branches while online (desktop browser) options became better. Banks progressively pushed customers out of their branches. In the early 2010s tellers couldn't do anything you couldn't do online by yourself. For services like dealing with large quantities of cash or coins, they made it so that you couldn't do more than what the ATMs allowed, limiting the amount of cash the branch had access to while increasing how much you could withdraw from ATMs.
They didn't get the idea to fire all their tellers when Steve Jobs announced the iPhone. It was a decision at least a decade in the making. It is just that people tend to resist change so it happens slowly, especially for big, serious business like banking. And I don't think it is a bad thing.
Checks could be deposited in the deposit drop, or later at an ATM. My payroll went to direct deposit as soon as that was possible.
But to get cash, before ATMs, you went into the bank, unless you had check-cashing privileges somewhere else (supermarkets used to offer this). To deposit cash, you went into the bank so the teller could count it in front of you and agree on the amount. It was riskier to deposit cash in a deposit drop or ATM.
The move to cashless transactions for almost everything, and the resultant rare need to carry cash, is IMO the main reason why we don't need very many bank tellers anymore.
It's also easier to scan payments via the app than to go to the bank, something that is only possible via native(-like) apps.
Nowadays I need to visit a bank once or twice a year, tops. My bank manager frequently sends me messages, but invariably he is trying to sell me something.
I've noticed that branches have really cut down on tellers and in my latest visit the branch didn't even have a teller, just someone helping people use the ATM and lots of desks (most were empty) for you to handle more complicated business with your account manager.
Why? Seems like basically the same paradigm to me, I can just do it without going anywhere.
But the $15 bank has a call center that is dreamy: reliably connected to a competent, focused individual in under 3 seconds.
It doesn't matter how good the tech & automation is; I place an economic value on that ability to pick up the phone and talk to a human. LLMs are crushing it, but I'm not fuckin paying $15 for an LLM.
AI is more iPhone than ATM IMO.
Any time I needed anything advanced, I got shuffled to someone else.
Getting rid of them isn't a good thing.
Entry-level jobs are important.
Lies, damn lies...
I think the idea raised about "Automated Firms" is a bit off from the picture painted in that linked article. I think David Oks's intention is to describe a fully automated company, but the linked article gives this impression:
> Future AI firms won’t be constrained by what's scarce or abundant in human skill distributions – they can optimize for whatever abilities are most valuable. Want Jeff Dean-level engineering talent? Cool: once you’ve got one, the marginal copy costs pennies. Need a thousand world-class researchers? Just spin them up. The limiting factor isn't finding or training rare talent – it's just compute.
In the paragraph above, the author is telling the reader that a human will be able to spin up armies of intelligent workers, but at the end of the day their output is given to a human who presumably needs to take ownership of the result. Intelligent workers make bad choices or bad bets, but those AI machines cannot "own" an outcome. The responsibility must fall on a person.
To this end, I think the fully autonomous firm is kind of a fallacy. There needs to be someone who can be sued if anything goes wrong. You're not suing the AI.
This idea of an automated firm relies on the premise that AI will become more capable and reliable than people.
It’s strictly an attempt to shoehorn the new tech into an existing paradigm, just because right now the system prompt makes an “agent” behave differently than the one with a different prompt.
It’s unimaginative to say the least.
There is no clear link to the iPhone causing lower teller employment.
This article does have a glaring omission: the effects of the 2008 financial crisis on the banking industry in general. When there are fewer local banks, there are naturally fewer tellers employed. Bank failures peaked in 2010 in the aftershocks of the crisis, which lines up nicely with the article's timeline.
Since I refuse to implement their "security" "feature," I just walk into their office every time I need a simple balance inquiry/transfer. They probably hate that I have just enough money deposited to consider my inconveniencing them profitable.
Worth the $1.00 monthly "in-person banking fee"
That’s not a bank teller’s job, at least not in the U.S. You’re confusing that job with something else.
I mean, there is definitely a downturn in the labour force when a new tech is introduced, but it will definitely produce more jobs tho, as has happened throughout human history. <3
That huge job loss also means no hiring. If you were a bank teller, you would seriously need to consider a job switch.
why do so many writers claim this as a matter of fact? are we losing (or did we never have) a shared definition of the word "think"? can an LLM, at this time, function with zero human input whatsoever?
edit to add: these are genuine questions, not meant to be rhetorical :)
it's hard for me to gauge a broader understanding of AI/LLMs since most of the conversations i experience around them are here, or in negative contexts with people i know. and i'll admit i'm one of those negative people, but my general aversion to AI mostly has to do with my own anxiety around my mental health and cognitive ability in a use-it-or-lose-it sense, along with a disdain for its use in traditionally-creative fields.
People have been saying "the computer is thinking" while webpages load or software runs for as long as I've been consciously aware. I agree there's something new about describing AI as "literally a machine that can think," but language has always had fuzzy borders.
though i'm not by any means an AI booster, my question wasn't really meant to be taken as a gotcha - more a general taking stock of where we're at in terms of broader understanding of these technologies outside of the professional AI/hobbyist world.