Why is so much of generated AI art so... literal? The cover art of this PDF literally spells out what the graphics are supposed to represent. The vast majority of AI visuals on LinkedIn are the same way. If this is what's in store as the future of art, at least commercial art -- feels like a huge step backwards if I'm being honest.
And anyway, what's the point of generating a massive tome like this on a topic evolving as fast as agentic software? Sure it will be outdated within months, if not weeks...
I’m guessing it’s because it’s not produced by artists, or by people who have an eye for art generally.
When it comes to the book, chances are 80% of it is written by AI. A lot of content now is pure AI output; I follow some AI subreddits, and the majority of posts are very obviously generated with a couple of prompts. They don’t even bother styling them when copy-pasting. I’m really struggling to read online content recently.
Partly it's a byproduct of the way prompting works. Partly it's that most people generating content with AI aren't skilled at conceptualizing imagery the way creative professionals are. I think it's mostly the latter.
For writing with an LLM you really have to be precise and honest about your AI usage, and I don't think the disclosure here does a good job of that. If you sample a few random pages, there are huge shifts in style and tone.
The future where your AI expands your sentence into a few paragraphs that my AI distills back down into a sentence sucks. Just send me your rough draft.
I'd rather people bounce back and forth with the AI until they're happy, but then not fluff it up and expand it needlessly in the final step. Spit out a condensed, tight bullet list: no fluff, no purple prose, no emojis. Send me that instead.
And so the full circle begins: AI writes a lot of content, and we ask another AI to summarize it. It’s like that project where a guy kept uploading and re-downloading a video to YouTube until it was just a mess of pixelated frames. AI-written content feels similar.
I skimmed this; it isn't terrible content (even though many parts are clearly AI written).
But I don't understand the purpose of this book. Is it educational material, speculative fiction, or an essay trying to convince the reader of something?
Because if you wanted any of these things, you could literally skip the book and go straight to the AI that will give it to you, tailored for your project.
This isn't a criticism, more a philosophical question I'm asking myself after 25 years of coding.
The book ends with: "There’s a fundamental truth about this transformation that no book can fully convey: working with AI teammates is something you must experience to understand."
If you're going to write with AI, give me the prompts. Don't produce this strange, tone-shifting, fluff-filled mess where most of it is machine filler. The ratio of ideas to text is absurdly low.
Let's just be open and honest with one another. If I really want to read something like this, my AI can generate it from your prompts just as well as yours can.
The skepticism in this thread is fair, but I think it misses the more interesting question: what changes about software engineering verification when the author is an AI rather than a human?
When a human writes code, you can reason about intent. When an AI writes it, the cognitive overhead of understanding the output is higher, not lower. This makes formal guarantees at the output level more valuable -- not less. The interesting work in "agentic SE" isn't coordination patterns, it's: how do you specify what correct looks like in a way that's verifiable at generation time?
Most current AI coding tools solve the wrong problem: they help AI write human-readable code. But if the human is primarily reviewing, not writing, the bottleneck shifts to verification, not readability.
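To make "verifiable at generation time" concrete, here's a minimal sketch (all names hypothetical, plain Python) of gating a generated function behind an executable spec rather than line-by-line human review:

```python
# Sketch: accept AI-generated code only if an executable spec holds on test inputs.
# Everything here is illustrative; no real tool's API is being described.

def spec_sort(inp, out):
    """Executable definition of 'correct' for a sort: same multiset, ascending order."""
    return sorted(inp) == out

def ai_generated_sort(xs):
    # Stand-in for model output; in practice this arrives as untrusted code.
    return sorted(xs)

def verify(candidate, spec, cases):
    """Accept the candidate only if the spec holds on every test input."""
    # Pass a copy so a mutating candidate can't corrupt the reference input.
    return all(spec(c, candidate(list(c))) for c in cases)

cases = [[3, 1, 2], [], [5, 5, 1], list(range(10))[::-1]]
accepted = verify(ai_generated_sort, spec_sort, cases)
```

The point of the sketch: the human effort goes into `spec_sort` (what correct looks like), and the generated `ai_generated_sort` is checked mechanically, so review cost no longer scales with how readable the generated code is.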
It's not that I don't like AI-generated text; it's that I'm tired of the whole "it's not this, it's that" style of writing.
AI code is much easier to read than AI prose (or a whole AI book). The way people here feel about the AI-generated cover is how I feel about generic AI writing.
This is way too dense. You need to distill your thesis and the interesting ideas down to a short post if you expect people to spend time reading a 417-page PDF.
You're crazy if you think the target demographic of "business leaders" and "thought leaders" isn't going to dump it into their favorite LLM first thing and prompt their way to a summary.
The author appears to be a CS professor. If it is the same person, it is interesting that he chose not to reveal his affiliation or mention this document on his page.
The researcher seems to be real, at least? Perhaps the quote has not previously been written down?
https://www.linkedin.com/posts/ahmed-e-hassan_%F0%9D%90%80%F...
So :shrug:
Edit: Downloaded the PDF and started reading it. So much slop. I think anything of value could be surfaced much earlier.