I find OP's communication style abrasive and off-putting, which tracks with them saying they've been coached on this, and found that advice lacking.
Maybe it's still insufficient advice, but it hasn't worked for them at least in part because they haven't figured out how to apply it.
From the post, I see low empathy and an air of superiority (perhaps earned by genuinely being smarter than their peers -- that doesn't make it more attractive).
That's going to cause friction because a team is a _social_ construct.
I realize it's been "written" by an LLM, but the content could have been written by someone I know. It's eerie how this person thinks exactly the same way. It's never their fault, always the others', and they are always obviously right and no amount of arguing can change their mind.
Yeah, a lot of the examples made me think "wait, there's something else going on there, right?", which would make sense if the author has difficulty communicating or negotiating their proposals.
In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
> In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
I do not find anything missing here. This is how things often play out in reality, both in your retelling of it and in what was actually written in the article.
Your retelling: some people agree and some disagree with the new metric. That is completely normal. Then someone who agrees, or wants to keep the peace, or just temporarily doesn't feel like doing "real Jira" tasks, fixes the warnings. The team moves on.
Actual article: the warnings get fixed when it becomes apparent that one of them caused a production issue. That is when the "this new process step matters" side wins.
> I find OP's communication style abrasive and off-putting
Your comment is hilarious on a meta-level: it's an example of exactly the sort of socially-mediated gatekeeping the author of the article (machine or human, I don't care) criticizes. It is, in fact, essential to match authority and responsibility to achieve excellence in any endeavor, and it's a truth universally acknowledged that vague consensus requirements are tools socially adept cowards use to undermine excellence.
LLMs originally learned these patterns from LinkedIn and the “$1000 for my newsletter” SEO pillions. Both accomplish a goal. Now that's become a loop.
There is a delayed but direct association between RLHF results we see in LLM responses and volume of LinkedIn-spiration generated by humans disrupting ${trend.hireable} from coffee shops and couches.
// from my couch above a coffee shop, disrupting cowork on HN. no avatars. no stories. just skills.md
The titles are giveaways too: Comfort Over Correctness, Consensus As Veto, The Nuance, Responsibility Without Authority, What Changes It. Has that bot taste.
If you want I can compile a list of cases where this doesn't happen. Do you want me to do that?
Neither is Vonnegut's (which your short, choppy sentences reminded me of), but he was a very successful and beloved author. I'm in no way comparing myself to Vonnegut, but my point is just because it doesn't appeal to you, it doesn't mean it isn't good.
Writing is art. Does it get the intended point across? Does it resonate with the reader? Does it make them feel something? Then it is good.
OP's dismissiveness of soft skills is a big red flag. Unless you're a solo dev, software development is a social activity, and understanding the social dynamics is key to effecting change.
Your efforts to improve quality could be vetoed by your coworkers for a variety of reasons: they don't care, they don't trust your judgement, they see other things as a higher priority... the list goes on and on. Some of these things can't be changed by you, but some can, and that's where the soft skills come into play.
Side note, this is why I'm not that worried even if AI becomes even better at writing code. The only times I've spent "too long" on features are times when I basically had an empty ticket. I need to find the right people to talk to, figure out requirements, iterate on changing requirements, etc.
That's only marginally sped up even if you could generate the code with a click of a button.
This was somehow related to the "social activity" part :D
100%. The thing I'm currently working on has been a pain probably 80% because the work was underspecified and didn't take a bunch of legacy concerns into account, and probably 20% because of the nature of the code itself.
If it was better specified I'd be done already, but instead I've had to go back and forth with multiple people multiple times about what they actually wanted, and what legacy stuff is worth fixing and not, and how to coordinate some dependent changes.
Most of this work has been the oft-derided "soft skills" that I keep hearing software engineers don't need.
They do not dismiss soft skills. But they do not know how to play the politics, and they were given bad advice. I would even say that their observations are entirely correct; they accurately describe how teams function. What they do not know is how to influence people.
Bad advice given to them:
> The standard advice is always "communicate better, get buy-in, frame it differently." [...] The advice for this position is always the same: communicate better. Get buy-in. Frame it as their idea. Pick your battles. Show, don't tell.
That sort of naive kindergarten advice is how people want things to work, but rarely how they actually work. Literally the only functional part of it is "pick your battles." That one is necessary, but not sufficient. The listed advice will make you be seen as a nice, cooperative person. It is not how you achieve change.
So OP comes to the "the problem isn't communication. It's structural." conclusion.
The point is that if you unite authority and responsibility in the same individual, you can move fast and confidently because you don't drain people's time and energy by making them "influence people". In a healthy organization, responsible people act and are held to account by their results. Democracy is a choice, not an obligation.
You're right that organizations do often become consensus-driven. It's a failure mode, not something to which we should aspire. "Disagree and commit" is a good thing. Escalating disagreement to a "single threaded owner" for a quick decision is a good thing. It avoids endless argumentation and aligns incentives the right way. Committees (formal or not) diffuse responsibility. Maturity is understanding that hierarchy is normal and desirable.
Where did they dismiss soft skills? The point is that every improvement is met with a "just get better soft skills, bro" dismissal, which in reality has nothing to do with soft skills. I've experienced this firsthand.
That's a very strong foundational claim right at the start. And in my experience, a completely false one. Which makes the whole argument that follows it completely unsound.
Also, the author seems to treat the terms "consensus" and "buy-in" as synonymous. They're not, and this distinction can make a huge difference in how healthy teams operate. Patrick Lencioni covers this well in his classic book, "The Five Dysfunctions of a Team".
I worked for 7 years in a place where my technical insight slowly turned into my decisions and expertise being questioned (this was after 3 years as a tech lead and 2 as a staff engineer). Sometimes the solution is just to walk away.
While I can't say that I observe that kind of radical shift for myself, one of the reasons I still can see something similar is AI development.
Basically, my manager asks me something and also asks the AI the same thing.
I'm not always using so-called "common wisdom". I might decide to use a library or framework that the AI won't suggest. I might use technology that the AI considers too old.
For example, I suggested writing a small Windows helper program in C, because it needs access to WinAPI; I know C very well; and we need to support old Windows versions back to Vista at least, preferably back to Windows XP. However, the AI suggests using Rust, because Rust is, well, today's hotness. It doesn't care that I know very little Rust, and it doesn't care that I'd have to jump through hoops to build Rust on old Windows (if that's even possible).
So in the end I suggest to use something that I can build and I have confidence in. AI suggests something that most internet texts written by passionate developers talk about.
But the manager probably has doubts about me, because I'm not a world-famous, trillion-dollar celebrity, just some grumpy old developer, so he might use the AI to question my expertise.
You mention the tradeoffs of Rust, including the high level of uncertainty and the increased lead time as you learn the language.
The manager, now having that information, can insist on using Rust, and you get a great opportunity to learn it, while being totally off the hook even if the project fails, since you flagged the risks.
I think when you are new with good ideas, you are judged against average. If you are above average, you are listened to.
As years pass, you are judged against the standard you set, and if you do not keep raising that standard, you start being seen as average, even if you are performing the same as when you joined.
I've seen this play out many, many times.
When an incompetent person is hired, even if the issues are acknowledged, if they somehow stay, expectations will be set to their level. The feedback will stop, because if you complain about the same issues or the same person's work every time, people will start seeing it as a you problem. Everyone quietly avoids this, so the person stays.
When a competent person is hired, it plays out the same. After 3/5/10 years, you are getting the same recognition and rewards as the incompetent person as long as you both maintain your competency.
However, I've seen (very few) people who consistently raised their own standards and improved their impact and they've climbed quickly.
I've seen people lowering their own standards and they were quickly flagged as under-performers, even if their reduced impact was still above average.
“Truly I tell you,” he continued, “no prophet is accepted in his hometown."
- Luke 4:24
It's why people often trust consultants over the people inside the organization. It's why people often want to elect new leaders even if the current leaders are doing a decent job.
The baby almost always gets thrown out with the bath water.
>Ignoring it costs more later, but later is someone else's problem
Given the standard advice to job hop every 1-3 years, and the intern/coop work pattern of semester long stints, is this not just a structural consequence?
Do you gain competitive advantage as a company with longer tenures? Or shorter, even?
Or is it an attitude problem, compare with old people planting shade trees:
“Codebases flourish when senior devs write easily maintainable modules in whose extensions they will never work”
I feel the author's frustration, but I think it's partially self-inflicted. I get the sense that they aren't thinking of their coworkers as respected equals.
For every developer who feels strongly that the codebase needs stricter lint rules, a doc comment on every public function, and far more tests, there's a developer of equal rank who feels equally strongly that all three of those changes would create needless, expensive busy-work. (Between one month and the next, I could be either of those people; the correct level of "technical excellence" is very situational!)
If it's correct for the first developer to impose their preferences on the second, it would be equally correct for the second developer to impose their preferences on the first. Let's introduce a CI nag which tracks the ratio between "lines of test code edited" and "lines of real code edited". The linter has been getting on my nerves, so I've thrown together a quick PR to switch off half of the rules, the ones which are clearly useless. You shouldn't introduce any Rust into the codebase, I'm really not a fan of that language.
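For concreteness, here's a toy sketch of that satirical CI nag. The `git diff --numstat` input format is real, but the path convention (anything under `tests/` or with `_test.` counts as test code) and the example diff are made up for illustration:

```python
# Toy sketch of the satirical "test-to-real-code ratio" nag described
# above, fed lines from `git diff --numstat` ("<added>\t<deleted>\t<path>").

def edit_ratio(numstat_lines):
    """Return (test lines edited) / (real lines edited)."""
    test_edits = real_edits = 0
    for line in numstat_lines:
        added, deleted, path = line.split("\t")
        if added == "-":  # binary files report "-"; skip them
            continue
        edits = int(added) + int(deleted)
        if path.startswith("tests/") or "_test." in path:
            test_edits += edits
        else:
            real_edits += edits
    return test_edits / real_edits if real_edits else float("inf")

diff = ["120\t30\tsrc/parser.py", "50\t0\ttests/test_parser.py"]
print(edit_ratio(diff))  # 50 test lines / 150 real lines
```

Which, of course, illustrates the problem: the "correct" threshold for this ratio is exactly the kind of thing two equally-ranked developers would never agree on.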
If you feel that you have very little power over your peers - the ones who are just too complacent to take a risk on your technically excellent suggestions - then that's by design. You aren't supposed to exert much control over coworkers of equal rank, you're supposed to peacefully coexist with them. With `n` peers, your influence within the team is meant to be `1/n`. If you try to exert more control over the team, you'll lose what little soft power you had, because saying "yes" to your suggestions will feel like tacit approval of an illegitimate power grab!
There are a few ways out of this frustration trap. You could find legitimate authority, by getting promoted or founding your own startup; you could build up a ton of respect with your colleagues, until your influence is an enormous `2/n` or `3/n`; or you could find work which is a little more solitary, such as turning a side project into a small business. Until then, I think it's best to squeeze as much impact as you can out of your `1/n` share, and practice being chill about the `(n-1)/n` which is outside of your control.
>If you're in this position (relied upon, validated, powerless), you're not imagining it. And it's not a communication problem. "Just communicate better" is the advice equivalent of "have you tried not being depressed?"
How about "have you tried unionizing?" Because the common theme here is lack of respect, which is ultimately limited by your own bargaining power. That means it's your individual value against the collective will of the company, and the individual is going to lose that fight more often than not (with very rare exceptions for extremely talented people who won the life lottery and are smarter than everyone at the company).
Hard to unionize a digital industry, very easy to find someone willing to take lower pay and “scab” when location is not truly a factor. Not to say impossible, but software development is one of few trades that just by the nature of being digital is pretty hard to unionize.
If we believe that, then the real question/answer for the OP's worries is that software development is in a race to the bottom, and then the advice becomes "have you tried switching to hard-to-automate-and-outsource industry?" because you are certainly never going to get respect by volunteering to be paid less just to remain competitive with cheaper workers
I hate seeing the idea that unionization is the answer. I grew up in South GA. Every single time that a corporation didn’t want to deal with a union, they just picked up and left.
> Authority matching responsibility. That's the only fix I've seen work.
So, if I understood correctly, he's complaining that his architectural advice for other teams/people was constantly ignored, and his solution is the same thing he was complaining about.
I.e. the teams he was advising also thought authority should match responsibility, so they did what they wanted and ignored him?
Ouch, I don't want to work there! It seems extreme. A decent place to work lets you do your thing. There will be guardrails. But at my current job my boss has never told me not to do something. Getting the time to do it is another story, and there are solutions. Sometimes picking the battle and letting it go. Sometimes driving a decision and agreement. But if you do that, people like it. And I work somewhere pretty well mocked on HN and Reddit etc. But they are good.
Other places I worked it is usually another engineer throwing a spanner in the works. Smaller companies have a lot of pets in the code and architecture. But if you avoid the pets you can change things.
That is why I think technical excellent people should be in charge. They are the ones able to see the trade offs. They can see who is actually doing great work. Think Linus, Guido, Larry Wall, or Carmack.
Technical excellence is often overlooked by the MBA groups. They will simply walk into a project, pick something perfectly functional and ask you to tear it down for no fucking reason other than to demonstrate "they add value" to the company. They will be really good with the slides and graphs and that's what is visible to management anyway.
Not the framework you developed. Not the fact that your work powers millions of users. To them, you're just a replaceable worker bee. You are only needed when something breaks. Architectural decisions are made by anecdotal experiences by them and it's just stone, paper, scissors all over again.
And when shit blows up right in their faces, it will not be about their judgement or lack thereof - it will be about how you didn't communicate about the issue properly. It will always be you who will be under the bus. And then the bunch of these clowns go and vibe code some stupid-ass product and sell it to gullible investors "wHo NeEds EnGiNeErs?"
And then you read about how 1000s of users' information went public all over the internet post their launch...the very next day.
> "Discuss before shipping" sounds reasonable. In practice, when you're discussing with people who resist the category of change you're proposing, the outcome is predetermined. The discussion isn't evaluation, it's a veto dressed as process.
I literally had this discussion with my boss yesterday. I spent time writing up what I already knew to be true (we have systemic issues which are unsolved, because we only ever fix symptoms, not root causes), replete with 10+ incidents all pointing to the same patterns, and was told I need to get the opinions of others on my team before proceeding with the fixes I recommended. “I can do that, but I also already know the outcome.”
> Responsibility Without Authority
This. So much this. Every time I hear someone excitedly explain that their dev teams “own their full stack,” I die a little inside. Do they fix their [self-inflicted] DB problems, or do they start an incident, ask for help, and then refuse to make the necessary structural changes afterwards? Thought so.
That's exactly what happens in some organizations. I couldn't believe it the first time I saw it, but it is what it is. And the reason is that some bosses are addicted to consensus. Infuriating, but there's really no option other than shrugging off the problems, waiting for staff changes, or looking for another job.
I suspect the author has little to no experience running a commercial organisation.
Business outcome comes first, and it is only rarely aligned with technical excellence. Closing a deal might involve making an unreasonable promise, and implementing it might not require more than an ugly hack, so you go with the ugly hack and make the money.
Comfort could be important but many people don't perform well when comfortable, so the organisation has to add some degree of confusion and pressure to keep them at a productive equilibrium where they don't fall into either apathy or burst into flames.
And yes, the boss decides, not because they are especially accountable or responsible, but because the power comes from ownership. In some organisations this is veiled and workers get a say most of the time, but in a pinch it'll be the higher-ups that actually have that power.
You have to be able to pick your battles. Sometimes people are in the wrong teams. Sometimes they are just assholes who think they are always right. Too often the "right thing" is subjective.
> Ignoring it costs more later, but later is someone else's problem.
and then the blame can be shifted to future generations; it's their incompetence, after all.
> Correctness wins when the cost of ignoring it becomes impossible to miss: an outage, a customer complaint, data loss. Until then, comfort wins every time.
Those who tolerate comfort winning aren't engineers and shouldn't be allowed anywhere near engineering systems, especially outside the software industry.
It’s a trust issue. There’s no one more of a PITA than a new team member who joins and starts questioning every little thing and demanding it be changed (the initial questioning is fine, so long as you accept “because” as a reason). OF COURSE any team that’s shipping software will have things that don’t make sense prima facie, because they’re accumulated tech debt or historical accident.
Go beyond identifying all these problems towards solving them. Choose a small problem, where you won’t have to fight and argue, just a little dust bunny you can sweep out of the way. Do it again, and again, and again. This is how you build trust. As you build trust, it becomes easier to seek change.
Additionally, you may also find that not all the little problems are worth solving, and what’s more interesting are the bigger problems around product-market fit, usability, and revenue.
> Additionally, you may also find that not all the little problems are worth solving, and what’s more interesting are the bigger problems around product-market fit, usability, and revenue.
TFA author (and me), and you have wildly different motivations. I don't know the author, but have said verbatim much of what they wrote, so I feel like I can speak on this.
Beyond the fact that I recognize the company has to continue exist for me to be employed, none of those hold the slightest bit of interest for me. What motivates me are interesting technical challenges, full stop. As an example, recently at my job we had a forced AI-Only week, where everyone had to use Claude Code, zero manual coding. This was agony to me, because I could see it making mistakes that I could fix in seconds, but instead I had to try to patiently explain what I needed to be done, and then twiddle my thumbs while cheerful nonsense words danced around the screen. One of the things I produced from that was a series of linters to catch sub-optimal schema decisions in PRs. This was praised, but I got absolutely no joy from it, because I didn't write it. I have written linters that parse code using its AST before, and those did bring me joy, because it was an interesting technical challenge. Instead, all I did was (partially) solve a human challenge; to me, that's just frustration manifest, because in my mind if you don't know how to use a DB, you shouldn't be allowed to use the DB (in prod - you have to learn, obviously).
I am fully aware that this is largely incompatible with most workplaces, and that my expectations are unrealistic, but that doesn't change the fact that it is how I feel.
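For what it's worth, the AST-linter idea mentioned above is straightforward to sketch. This is a minimal illustration only: the rule here (flagging bare `except:` clauses) is my own stand-in, not the schema checks described in the comment:

```python
# Minimal sketch of an AST-based linter: walk a module's syntax tree
# and report line numbers of bare `except:` handlers.
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare `except:` clauses in `source`."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # A bare `except:` has no exception type attached.
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

code = "try:\n    risky()\nexcept:\n    pass\n"
print(find_bare_excepts(code))  # -> [3]
```

The appeal of this kind of work is exactly what the commenter describes: the interesting part is the tree-walking, not the rule itself.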
Don't really have anything to add but I do want to say you're not alone - I feel very similarly about AI tooling, the level of satisfaction I get from using them (none), the need for interesting technical challenges, etc. etc.
Re: AI, that's not to say I don't use it, I just view it as a sometimes useful tool that you have to watch very closely. I also often view their use as an X-Y problem.
Another recent example: during the same AI week, someone made an AI Skill (I'm not sure how that counts as software, but I digress) that connects to Buildkite to find failed builds, then matches the symptoms back to commit[s]. In their demo, they showed it successfully doing so for something that "took them hours to solve the day before." The issue was having deployed code before its sibling schema migration.
While I was initially baffled at how they missed the logs that very clearly said "<table_name> not found," after having Claude go do something similar for me later, I realized it's at least partially because our logs are just spamming bullshit constantly. 5000-10000 lines isn't uncommon. Maybe if you weren't mislabeling what are clearly DEBUG messages as INFO, and if you didn't have so many abstractions and libraries that the stack traces are hundreds of lines deep, you wouldn't need an LLM to find the needle in the haystack for you.
I’m a development manager and senior developer. I have seen the described behavior from TFA play out on several different teams. Sometimes such team members learn to adapt their approach while holding onto their ideals, and they become valued colleagues. Other times they don’t and they leave out of frustration or are fired or spin their wheels. I have no doubt there’s a great deal of truth in the author’s description, but there’s also maybe some truth in the feedback they’ve received.
I also share some of your philosophy — life is too short for us not to find joy at work, if we can. It’s a lot easier to find that joy when the team’s shipping valuable software, of course.
> Sometimes such team members learn to adapt their approach while holding onto their ideals, and they become valued colleagues.
What's frustrating (I've said that a lot, I know) to me is that my skills are seen as valued, but my opinions aren't. I also have a pathological need to help people, and so when someone asks me, I can't help but patiently explain for the Nth time how a B+tree works (I include docs! I've written internal docs at varying levels!) and why their index design won't work. This is usually met with "Thanks!" because I've solved their problem, until the next problem occurs. When I then point out that they have a systemic issue, and point to the incidents proving this, they don't want to hear it, because that turns "I made an error, and have fixed it" into "I have made a deep architectural mistake," and people apparently cannot stand to be wrong.
That also baffles me - I don't think I'm arrogant or conceited; when I'm wrong, I publicly say so, and explain precisely where I was mistaken, what the correct answer is, and provide references. Being wrong isn't a moral failing, or even necessarily an indictment on your skills, but for some reason, people are deathly afraid to admit they were wrong.
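To make the index-design point above concrete: a composite B-tree index only helps queries that constrain its leftmost column(s). A quick sketch using SQLite as a stand-in (the actual database in the story isn't named; SQLite's B-tree indexes behave like the B+trees being explained):

```python
# A composite index on (a, b) serves queries filtering on `a`,
# but a query filtering only on `b` falls back to a full scan.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INT, b INT, payload TEXT)")
con.execute("CREATE INDEX idx_ab ON t (a, b)")

def plan(query):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3.
    return " ".join(row[3] for row in con.execute(
        "EXPLAIN QUERY PLAN " + query))

print(plan("SELECT * FROM t WHERE a = 1"))  # e.g. SEARCH ... USING INDEX idx_ab
print(plan("SELECT * FROM t WHERE b = 1"))  # e.g. SCAN t
```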
So basically you get hired with 10-15 years of experience and you start with nothing but earning trust by fixing small problems, for how long? That sounds like a great way to land in "does not meet expectations" territory very quickly.
I've seen this pattern play out, and been frustrated by it many times over.
> Authority matching responsibility. That's the only fix I've seen work. Either you get decision-making power that matches the decisions you're already making, or you find a place that treats your judgment as an asset instead of something to manage.
I don't think the solution is to become some kind of dictator. And I don't think it's about not valuing your judgement.
The key issue is a fundamental misalignment of core values. In the examples given, the culture is such that quality is not the highest priority. A system based on consensus only really works if core values are shared, or there will always be discontent. Consensus won't work under these circumstances. You'll never be able to 'trust' your colleagues to 'do the right thing'.
If you care about quality, you have to look for another organisation and have a lot of questions about how they assure quality.
> The key issue is a fundamental misalignment of core values.
Agreed, but my main frustration is what glitchc wrote a few comments down: "No one actually claims their product is crap and quality doesn't matter."
I have never met anyone in management who will admit that they value velocity over correctness and uptime, but their actions do. If you want to optimize for velocity, growing your user base, expanding your features, that's fine - but you need to acknowledge that you're making a trade-off in doing so. If you're a solo dev, or working at an extremely small shop with high trust, it's possible that you can have high velocity and high quality, but the combination is vanishingly rare at most places.
The world is a big place with all kinds of organizations and people that fit in different ways on those organizations.
Some organizations do in fact optimize for correctness, and some people are good at it.
Some people are good at everything (totally possible; the universe doesn't care about keeping dichotomies). Maybe that technical guy was only technical up until now because that's where he added the most value. People often don't consider that.
Right now, we're seeing some small changes in value dynamics. It makes us foster those (mostly pointless) meta-conversations about what organizations are and how people fit in them. But the truth stays the same, both are incredibly diverse.
Okay I'll bite: How does one find these organizations? They all have high quality listed in marketing blurbs on their websites. No one actually claims their product is crap and quality doesn't matter.
IME, here are some signals that a company actually values correctness. This is not all-inclusive, nor is any one of them a guarantee.
* Their codebase is written in something relatively obscure, like Elixir or Haskell.
* They're an infrastructure [0] or monitoring provider.
* They're running their code on VMs, and have a sane instantiation and deployment process.
* They use Foreign Key Constraints in their RDBMS, and can explain and defend their chosen normalization level.
* They're running their own servers in a colo or self-owned datacenter.
And here are some anti-signals. Same disclaimers apply.
* Their backend is written in JS / TS (and to a somewhat lesser extent, Python [1]).
* They're running on K8s with a bunch of CRDs.
* They've posted blog articles about how they solved problems that the industry solved 20 years ago.
* They exclusively or nearly exclusively use NoSQL [2].
0: This is hit or miss; reference the steady decline in uptime from AWS, GitHub, et al.
1: I love Python dearly, and while it can be made excellent, it's a lot easier to make it bad.
2: Modulo places that have a clear need for something like Scylla - use the right tool for the job, but the right tool is almost never a DocumentDB.
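On the FK-constraint signal, a tiny demonstration of what enforcement actually buys you. Using SQLite here purely for portability (which, tellingly, ships with FK enforcement off by default and needs a pragma -- comfort over correctness in miniature); the schema is invented for the example:

```python
# With foreign keys enforced, an orphaned row is rejected at write
# time instead of becoming a latent data-integrity bug.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite!
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id))""")

con.execute("INSERT INTO users (id) VALUES (1)")
con.execute("INSERT INTO orders (user_id) VALUES (1)")  # fine
try:
    con.execute("INSERT INTO orders (user_id) VALUES (99)")  # no such user
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # FOREIGN KEY constraint failed
```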
Look at what they do instead, not their marketing. NASA is the obvious and biggest example. They won't be vibe coding and skipping QA any time soon. Probably ever.
Look at any high quality open source software, and the care people put into them. Those are organizations, made up of people, some of them highly technical.
Startups often don't optimize for correctness. They can't afford it. But that's a niche. Funny enough, it's the one that's being most affected by the shift in value dynamics right now, so I understand that some people here might see the world as just this, but it isn't.
> Organizations don't optimize for correctness. They optimize for comfort
...do I need to say it?
Stopped here. That pattern.
I recognize this pattern from this AI "companion" my mate showed me over Christmas. It told a bunch of crazy stories using this "seize the day" vibe.
It had an animated, anthropomorphized animal avatar. And that animal was an f'ing RACCOON.
There is a delayed but direct association between RLHF results we see in LLM responses and volume of LinkedIn-spiration generated by humans disrupting ${trend.hireable} from coffee shops and couches.
// from my couch above a coffee shop, disrupting cowork on HN. no avatars. no stories. just skills.md
- It is not X. It is Y.
- X [negate action] Y. X [action] Z.
The titles are giveaways too: Comfort Over Correctness, Consensus As Veto, The Nuance, Responsibility Without Authority, What Changes It. Has that bot taste.
If you want I can compile a list of cases where this doesn't happen. Do you want me to do that?
Writing is art. Does it get the intended point across? Does it resonate with the reader? Does it make them feel something? Then it is good.
Your efforts to improve quality could be vetoed by your coworkers for a variety of reasons: they don't care, they don't trust your judgement, they see other things as a higher priority... the list goes on and on. Some of these things can't be changed by you, but some can, and that's where the soft skills come into play.
That's only marginally sped up even if you could generate the code with a click of a button.
This was somehow related to the "social activity" part :D
If it was better specified I'd be done already, but instead I've had to go back and forth with multiple people multiple times about what they actually wanted, and what legacy stuff is worth fixing and not, and how to coordinate some dependent changes.
Most of this work has been the oft-derided "soft skills" that I keep hearing software engineers don't need.
Bad advice given to them:
> The standard advice is always "communicate better, get buy-in, frame it differently." [...] The advice for this position is always the same: communicate better. Get buy-in. Frame it as their idea. Pick your battles. Show, don't tell.
That sort of naive kindergarten advice is how people want things to work, but rarely how they actually work. Literally the only functional part of it is the "pick your battles" part. That one is necessary, but not sufficient. The listed advice will make you be seen as a nice, cooperative person. It is not how you achieve change.
So OP comes to the "the problem isn't communication. It's structural." conclusion.
You're right that organizations do often become consensus-driven. It's a failure mode, not something to which we should aspire. "Disagree and commit" is a good thing. Escalating disagreement to a "single threaded owner" for a quick decision is a good thing. It avoids endless argumentation and aligns incentives the right way. Committees (formal or not) diffuse responsibility. Maturity is understanding that hierarchy is normal and desirable.
That's a very strong foundational claim right at the start. And in my experience, a completely false one. Which makes the whole argument that follows it completely unsound.
Also, the author seems to treat the terms "consensus" and "buy-in" as synonymous. They're not, and this distinction can make a huge difference in how healthy teams operate. Patrick Lencioni covers this well in his classic book, "The Five Dysfunctions of a Team".
Basically, my manager asks me something and asks an AI the same thing.
I'm not always following so-called "common wisdom". I might decide to use a library or framework that the AI won't suggest. I might use a technology that the AI considers too old.
For example, I suggested writing a small Windows helper program in C, because it needs access to the WinAPI; I know C very well; and we need to support old Windows versions, back to Vista at least, preferably back to Windows XP. The AI, however, suggests using Rust, because Rust is today's hotness. It doesn't care that I know very little Rust, and it doesn't care that I would have to jump through hoops to build Rust for old Windows (if that's even possible).
So in the end, I suggest something that I can build and have confidence in, while the AI suggests whatever most internet texts written by passionate developers talk about.
But the manager probably has doubts about me: I'm not a world-famous, trillion-dollar celebrity, I'm just some grumpy old developer, so he might question my expertise using the AI.
Maybe he's even right, who knows.
You mention the tradeoffs of Rust, including the high level of uncertainty and the increased lead time while you learn the language.
The manager, now having that information, can insist on using Rust, and you get a great opportunity to learn it - while being totally off the hook even if the project fails, since you flagged the risks.
As years pass, you are judged against the standard you set, and if you do not keep raising that standard, you start being seen as average, even if you are performing the same as when you joined.
I've seen this play out many, many times.
When an incompetent person is hired, even if the issues are acknowledged, if they somehow stay, expectations will be set to their level. The feedback will stop, because if you complain about the same issues or the same person's work every time, people will start seeing it as a you problem. Everyone quietly avoids the issue, so the person stays.
When a competent person is hired, it plays out the same way. After 3/5/10 years, you get the same recognition and rewards as the incompetent person, as long as you both maintain your respective levels.
However, I've seen (very few) people who consistently raised their own standards and improved their impact and they've climbed quickly.
I've seen people lowering their own standards and they were quickly flagged as under-performers, even if their reduced impact was still above average.
- Luke 4:24
It's why people often trust consultants over the people inside the organization. It's why people often want to elect new leaders even if the current leaders are doing a decent job.
The baby almost always gets thrown out with the bath water.
https://en.wikipedia.org/wiki/Don't_throw_the_baby_out_with_...
Given the standard advice to job hop every 1-3 years, and the intern/co-op work pattern of semester-long stints, is this not just a structural consequence?
Do you gain competitive advantage as a company with longer tenures? Or shorter, even?
Or is it an attitude problem, compare with old people planting shade trees:
“Codebases flourish when senior devs write easily maintainable modules in whose extensions they will never work”
For every developer who feels strongly that the codebase needs stricter lint rules, a doc comment on every public function, and far more tests, there's a developer of equal rank who feels equally strongly that all three of those changes would create needless, expensive busy-work. (Between one month and the next, I could be either of those people; the correct level of "technical excellence" is very situational!)
If it's correct for the first developer to impose their preferences on the second, it would be equally correct for the second developer to impose their preferences on the first. Let's introduce a CI nag which tracks the ratio between "lines of test code edited" and "lines of real code edited". The linter has been getting on my nerves, so I've thrown together a quick PR to switch off half of the rules, the ones which are clearly useless. You shouldn't introduce any Rust into the codebase, I'm really not a fan of that language.
If you feel that you have very little power over your peers - the ones who are just too complacent to take a risk on your technically excellent suggestions - then that's by design. You aren't supposed to exert much control over coworkers of equal rank, you're supposed to peacefully coexist with them. With `n` peers, your influence within the team is meant to be `1/n`. If you try to exert more control over the team, you'll lose what little soft power you had, because saying "yes" to your suggestions will feel like tacit approval of an illegitimate power grab!
There are a few ways out of this frustration trap. You could find legitimate authority, by getting promoted or founding your own startup; you could build up a ton of respect with your colleagues, until your influence is an enormous `2/n` or `3/n`; or you could find work which is a little more solitary, such as turning a side project into a small business. Until then, I think it's best to squeeze as much impact as you can out of your `1/n` share, and practice being chill about the `(n-1)/n` which is outside of your control.
How about "have you tried unionizing?" Because the common theme here is a lack of respect, which is ultimately limited by your own bargaining power. That means it's only your individual value against the collective will of the company, and the individual is going to lose that fight more often than not (with very rare exceptions for extremely talented people who won the life lottery and are smarter than everyone at the company).
How hard do you think it is to get cheaper developers from LatAM?
So, if I understood correctly, he's complaining that his architectural advice for other teams/people was constantly ignored, and his solution is the same thing he was complaining about.
i.e., the teams he was advising also thought authority should match responsibility - so they did what they wanted and ignored him?
Other places I worked it is usually another engineer throwing a spanner in the works. Smaller companies have a lot of pets in the code and architecture. But if you avoid the pets you can change things.
I’m confused. The polite way to say no at work is to make it about not having time.
Not the framework you developed. Not the fact that your work powers millions of users. To them, you're just a replaceable worker bee. You are only needed when something breaks. Architectural decisions are made from their anecdotal experiences, and it's just rock, paper, scissors all over again.
And when shit blows up right in their faces, it will not be about their judgement or lack thereof - it will be about how you didn't communicate the issue properly. It will always be you who gets thrown under the bus. And then these clowns go and vibe code some stupid-ass product and sell it to gullible investors: "wHo NeEds EnGiNeErs?"
And then you read about how 1000s of users' information went public all over the internet post their launch...the very next day.
/endrant
I literally had this discussion with my boss yesterday. I spent time writing up what I already knew to be true (we have systemic issues which are unsolved, because we only ever fix symptoms, not root causes), replete with 10+ incidents all pointing to the same patterns, and was told I need to get the opinions of others on my team before proceeding with the fixes I recommended. “I can do that, but I also already know the outcome.”
> Responsibility Without Authority
This. So much this. Every time I hear someone excitedly explain that their dev teams “own their full stack,” I die a little inside. Do they fix their [self-inflicted] DB problems, or do they start an incident, ask for help, and then refuse to make the necessary structural changes afterwards? Thought so.
Insert fire writing gif here.
Business outcome comes first, and it is only rarely aligned with technical excellence. Closing a deal might involve making an unreasonable promise, and implementing it might not require more than an ugly hack, so you go with the ugly hack and make the money.
Comfort could be important but many people don't perform well when comfortable, so the organisation has to add some degree of confusion and pressure to keep them at a productive equilibrium where they don't fall into either apathy or burst into flames.
And yes, the boss decides, not because they are especially accountable or responsible, but because the power comes from ownership. In some organisations this is veiled and workers get a say most of the time, but in a pinch it'll be the higher-ups that actually have that power.
And then the blame can be shifted to future generations; it's their incompetence, after all.
> Correctness wins when the cost of ignoring it becomes impossible to miss: an outage, a customer complaint, data loss. Until then, comfort wins every time.
Those who tolerate comfort winning aren't engineers, and shouldn't be allowed anywhere near engineering systems, especially outside the software industry.
Go beyond identifying all these problems towards solving them. Choose a small problem, where you won’t have to fight and argue, just a little dust bunny you can sweep out of the way. Do it again, and again, and again. This is how you build trust. As you build trust, it becomes easier to seek change.
Additionally, you may also find that not all the little problems are worth solving, and what’s more interesting are the bigger problems around product-market fit, usability, and revenue.
The TFA author (and I) have wildly different motivations from you. I don't know the author, but I have said verbatim much of what they wrote, so I feel like I can speak on this.
Beyond the fact that I recognize the company has to continue to exist for me to be employed, none of those hold the slightest bit of interest for me. What motivates me are interesting technical challenges, full stop. As an example, we recently had a forced AI-only week at my job, where everyone had to use Claude Code, zero manual coding. This was agony for me, because I could see it making mistakes that I could fix in seconds, but instead I had to patiently explain what I needed done, then twiddle my thumbs while cheerful nonsense words danced around the screen. One of the things I produced that week was a series of linters to catch sub-optimal schema decisions in PRs. This was praised, but I got absolutely no joy from it, because I didn't write it. I have written linters that parse code using its AST before, and those did bring me joy, because they were an interesting technical challenge. This time, all I did was (partially) solve a human challenge; to me, that's just frustration manifest, because in my mind, if you don't know how to use a DB, you shouldn't be allowed to use the DB (in prod - you have to learn, obviously).
I am fully aware that this is largely incompatible with most workplaces, and that my expectations are unrealistic, but that doesn't change the fact that it is how I feel.
Re: AI, that's not to say I don't use it, I just view it as a sometimes useful tool that you have to watch very closely. I also often view their use as an X-Y problem.
Another recent example: during the same AI week, someone made an AI Skill (I'm not sure how that counts as software, but I digress) that connects to Buildkite to find failed builds, then matches the symptoms back to commit[s]. In their demo, they showed it successfully doing so for something that "took them hours to solve the day before." The issue was having deployed code before its sibling schema migration.
While I was initially baffled at how they missed the logs that very clearly said "<table_name> not found," after having Claude go do something similar for me later, I realized it's at least partially because our logs are just spamming bullshit constantly. 5000-10000 lines isn't uncommon. Maybe if you weren't mislabeling what are clearly DEBUG messages as INFO, and if you didn't have so many abstractions and libraries that the stack traces are hundreds of lines deep, you wouldn't need an LLM to find the needle in the haystack for you.
I also share some of your philosophy — life is too short for us not to find joy at work, if we can. It’s a lot easier to find that joy when the team’s shipping valuable software, of course.
What's frustrating (I've said that a lot, I know) to me is that my skills are seen as valued, but my opinions aren't. I also have a pathological need to help people, and so when someone asks me, I can't help but patiently explain for the Nth time how a B+tree works (I include docs! I've written internal docs at varying levels!) and why their index design won't work. This is usually met with "Thanks!" because I've solved their problem, until the next problem occurs. When I then point out that they have a systemic issue, and point to the incidents proving this, they don't want to hear it, because that turns "I made an error, and have fixed it" into "I have made a deep architectural mistake," and people apparently cannot stand to be wrong.
That also baffles me - I don't think I'm arrogant or conceited; when I'm wrong, I publicly say so, and explain precisely where I was mistaken, what the correct answer is, and provide references. Being wrong isn't a moral failing, or even necessarily an indictment on your skills, but for some reason, people are deathly afraid to admit they were wrong.
> Authority matching responsibility. That's the only fix I've seen work. Either you get decision-making power that matches the decisions you're already making, or you find a place that treats your judgment as an asset instead of something to manage.
I don't think the solution is to become some kind of dictator. And I don't think it's about not valuing your judgement.
The key issue is a fundamental misalignment of core values. In the examples given, the culture is such that quality is not the highest priority. A system based on consensus only really works if core values are shared; otherwise there will always be discontent. Consensus won't work under these circumstances. You'll never be able to "trust" your colleagues to "do the right thing".
If you care about quality, you have to look for another organisation, and ask a lot of questions about how they assure quality.
Agreed, but my main frustration is what glitchc wrote a few comments down: "No one actually claims their product is crap and quality doesn't matter."
I have never met anyone in management who will admit that they value velocity over correctness and uptime, but their actions do. If you want to optimize for velocity, growing your user base, expanding your features, that's fine - but you need to acknowledge that you're making a trade-off in doing so. If you're a solo dev, or working at an extremely small shop with high trust, it's possible that you can have high velocity and high quality, but the combination is vanishingly rare at most places.
Some organizations do in fact optimize for correctness, and some people are good at it.
Some people are good at everything (totally possible; the universe doesn't care about preserving our dichotomies). Maybe that technical guy was only technical up until now because that's what added the most value. People often don't consider that.
Right now, we're seeing some small changes in value dynamics. It makes us foster those (mostly pointless) meta-conversations about what organizations are and how people fit in them. But the truth stays the same, both are incredibly diverse.
* Their codebase is written in something relatively obscure, like Elixir or Haskell.
* They're an infrastructure [0] or monitoring provider.
* They're running their code on VMs, and have a sane instantiation and deployment process.
* They use Foreign Key Constraints in their RDBMS, and can explain and defend their chosen normalization level.
* They're running their own servers in a colo or self-owned datacenter.
And here are some anti-signals. Same disclaimers apply.
* Their backend is written in JS / TS (and to a somewhat lesser extent, Python [1]).
* They're running on K8s with a bunch of CRDs.
* They've posted blog articles about how they solved problems that the industry solved 20 years ago.
* They exclusively or nearly exclusively use NoSQL [2].
0: This is hit or miss; reference the steady decline in uptime from AWS, GitHub, et al.
1: I love Python dearly, and while it can be made excellent, it's a lot easier to make it bad.
2: Modulo places that have a clear need for something like Scylla - use the right tool for the job, but the right tool is almost never a DocumentDB.
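The Foreign Key Constraints signal is easy to demonstrate with SQLite (where, notoriously, enforcement is opt-in per connection). A minimal sketch, with hypothetical `team`/`player` tables of my own:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.execute("CREATE TABLE team (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE player ("
    "  id INTEGER PRIMARY KEY,"
    "  team_id INTEGER NOT NULL REFERENCES team(id))"
)
conn.execute("INSERT INTO team (id) VALUES (1)")
conn.execute("INSERT INTO player VALUES (1, 1)")  # fine: team 1 exists

try:
    conn.execute("INSERT INTO player VALUES (2, 99)")  # no team 99: orphan row
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True  # the database, not application code, caught the bug
```

Teams that rely on the constraint catch this class of bug at write time; teams without it find out later, in production.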
Look at any high quality open source software, and the care people put into them. Those are organizations, made up of people, some of them highly technical.
Startups often don't optimize for correctness. They can't afford to. But that's a niche. Funnily enough, it's the niche being most affected by the shift in value dynamics right now, so I understand that some people here might see the world as just this, but it isn't.