How to Make Websites That Will Require Lots of Your Time and Energy - Jim Nielsen’s Blog
- Install Stuff Indiscriminately From npm
- Pick a Framework Before You Know You Need One
- Always, Always Require a Compilation Step
People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.
This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.
Following on from my earlier link about AI etiquette, what Trys experienced here is utterly deflating:
I spent a couple of hours working through my notes and writing up a review before sending it to my manager, awaiting their equivalent review for me.
However, the review I received back was, quite simply, quintessential AI slop.
When slopagandists talk about “AI” boosting productivity, this is the kind of shite they’re talking about.
This page collects my blog posts on the topic of fighting off spam bots, search engine spiders and other non-humans wasting the precious resources we have on Earth.
For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.
Now, AI has made text very, very, very cheap. … Any text can be AI slop. If you read it, you’re injured in this war. You engaged and replied – you’re as good as dead. The dead internet is not just dead, it’s poisoned.
I think that realistically, our main weapon in this war is AI etiquette.
Most obviously, aliveness is what generally feels absent from the written and visual outputs of ChatGPT and its ilk, even when they’re otherwise of high quality. I’m not claiming I couldn’t be fooled into thinking AI writing or art was made by a human (I’m sure I already have been); but that when I realise something’s AI, either because it’s blindingly obvious or when I find out, it no longer feels so alive to me. And that this change in my feelings about it isn’t irrelevant: that it means something.
More subtly, it feels like our own aliveness is what’s at stake when we’re urged to get better at prompting LLMs to provide the most useful responses. Maybe that’s a necessary modern skill; but still, the fact is that we’re being asked to think less like ourselves and more like our tools.
It feels like someone just harvested lumber from a forest I helped grow, and now wants to sell me the furniture they made with it.
AI presents design leaders with a quandary, requiring us to tread a fine line between what is acceptable and useful, and what is problematic and harmful.
This document is not a manifesto or an agenda. It is a series of prompts written by design leaders for design leaders, conceived to help us navigate these tricky waters.
Here’s what the “AI will replace developers” crowd fundamentally misunderstands: code is not an asset—it’s a liability. Every line must be maintained, debugged, secured, and eventually replaced. The real asset is the business capability that code enables.
If AI makes writing code faster and cheaper, it’s really making it easier to create liability. When you can generate liability at unprecedented speed, the ability to manage and minimize that liability strategically becomes exponentially more valuable.
This is particularly true because AI excels at local optimization but fails at global design. It can optimize individual functions but can’t determine whether a service should exist in the first place, or how it should interact with the broader system. When implementation speed increases dramatically, architectural mistakes get baked in before you realize they’re mistakes.
Frankly, I’d rather quit my career than live in the future they’re selling. It’s the sheer dystopian drabness of it. Mediocrity as a service.
I tried the tab-completion slot machines; not my cup of tea. I tried image generation and was overcome with literal depression. I don’t want a future as a “prompt artist”.
I’m mostly linking this for what it says, but oh boy, do I love the way it says it with this wonderful HTML web component.
Engaging with AI as a technology is to play the fool—it’s to observe the reflective surface of the thing without taking note of the way it sends roots deep down into the ground, breaking up bedrock, poisoning the soil, reaching far and wide to capture, uproot, strangle, and steal everything within its reach. It’s to stand aboveground and pontificate about the marvels of this bright new magic, to be dazzled by all its flickering, glittering glory, its smooth mirages and six-fingered messiahs, its apparent obsequiousness in response to all your commands, right up until the point when a sinkhole opens up and swallows you whole.
👏👏👏
I heard you like divs…
If I’m understanding Greg correctly here, he’s saying it’s okay for people to use large language models …because they’re being forced to?
A good overview of how large language models work:
The words flow together because they’ve been seen together many times. But that doesn’t mean they’re right. It just means they’re coherent.
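To make that point concrete, here’s a toy sketch of my own (not anything from the linked post): a bigram “model” that simply emits whichever word it has most often seen following the previous one. The result reads fluently because the words have been seen together, but nothing anywhere checks whether it’s true.

```python
# Toy sketch: pick the most frequently seen next word.
# Coherent-looking output, no notion of correctness.
from collections import Counter, defaultdict

corpus = "the moon is made of rock . the moon is made of cheese . the moon is bright".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_text(word, length=6):
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]  # most-seen next word
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # e.g. "the moon is made of rock ." — plausible-sounding, not necessarily right
```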
AI is really good for helping you if you’re bad at something, or at least below average. But it’s probably not the right tool if you’re great at something. So why would these CEOs be saying, almost all using the exact same phrasing, that everyone at their companies should be using these tools? Do they think their employees are all bad at their jobs?
Thoughtful analysis from Ben (as always).
Instead of that deep immersion where I’d craft each function, I’m now more like a curator? I describe what I want, evaluate what the AI gives me, tweak the prompts, and iterate. It’s efficient, yes. Revolutionary, even. But something essential feels missing — that state of flow where time vanishes and you’re completely absorbed in creation. If this becomes the dominant workflow across teams, do we risk an industry full of highly productive yet strangely detached developers?
It’s an annoying cognitive task: detecting weird photo artifacts, bizarre movement in videos, impossible animals and body horror, and reading through reams of anodyne text to determine if the person who prompted the synthetic media machine cared enough to dedicate time and energy to the task of communicating to their audience.
I hate that this is the bleak future which venture capitalists and AI boosters have gleefully laid out for us, that they consider this to be a “democratizing” technology in any real sense of the word. Far from strengthening democracy, these are technologies more apt at propping up scam capitalism and multi-level marketing schemes. I would like my time and mental space back.
You won’t be able to unsee this. It’s like the FedEx logo …if the arrow was an anus.
- Circular shape (often with a gradient)
- Central opening or focal point
- Radiating elements from the center
- Soft, organic curves
Sound familiar? It should, because it’s also an apt description of… well, you know.