Tags: generative

Thursday, July 24th, 2025

The sound of inevitability | My place to put things

People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.

This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.

Tuesday, July 22nd, 2025

A human review | Trys Mudford

Following on from my earlier link about AI etiquette, what Trys experienced here is utterly deflating:

I spent a couple of hours working through my notes and writing up a review before sending it to my manager, awaiting their equivalent review for me.

However, the review I received back was, quite simply, quintessential AI slop.

When slopagandists talk about “AI” boosting productivity, this is the kind of shite they’re talking about.

Butlerian Jihad

This page collects my blog posts on the topic of fighting off spam bots, search engine spiders and other non-humans wasting the precious resources we have on Earth.

It’s rude to show AI output to people | Alex Martsinovich

For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

Now, AI has made text very, very, very cheap. … Any text can be AI slop. If you read it, you’re injured in this war. You engaged and replied – you’re as good as dead. The dead internet is not just dead, it’s poisoned.

I think that realistically, our main weapon in this war is AI etiquette.

Friday, June 20th, 2025

The Imperfectionist: Navigating by aliveness

Most obviously, aliveness is what generally feels absent from the written and visual outputs of ChatGPT and its ilk, even when they’re otherwise of high quality. I’m not claiming I couldn’t be fooled into thinking AI writing or art was made by a human (I’m sure I already have been); but that when I realise something’s AI, either because it’s blindingly obvious or when I find out, it no longer feels so alive to me. And that this change in my feelings about it isn’t irrelevant: that it means something.

More subtly, it feels like our own aliveness is what’s at stake when we’re urged to get better at prompting LLMs to provide the most useful responses. Maybe that’s a necessary modern skill; but still, the fact is that we’re being asked to think less like ourselves and more like our tools.

Tuesday, June 17th, 2025

Large Language Muddle • Jason Santa Maria

It feels like someone just harvested lumber from a forest I helped grow, and now wants to sell me the furniture they made with it.

Critical questions for design leaders working with artificial intelligence, New York 2025 | Leading Design

AI presents design leaders with a quandary, requiring us to tread a fine line between what is acceptable and useful, and what is problematic and harmful.

This document is not a manifesto or an agenda. It is a series of prompts written by design leaders for design leaders, conceived to help us navigate these tricky waters.

The Recurring Cycle of ‘Developer Replacement’ Hype

Here’s what the “AI will replace developers” crowd fundamentally misunderstands: code is not an asset—it’s a liability. Every line must be maintained, debugged, secured, and eventually replaced. The real asset is the business capability that code enables.

If AI makes writing code faster and cheaper, it’s really making it easier to create liability. When you can generate liability at unprecedented speed, the ability to manage and minimize that liability strategically becomes exponentially more valuable.

This is particularly true because AI excels at local optimization but fails at global design. It can optimize individual functions but can’t determine whether a service should exist in the first place, or how it should interact with the broader system. When implementation speed increases dramatically, architectural mistakes get baked in before you realize they’re mistakes.

Friday, May 30th, 2025

Ensloppification – David Bushell – Web Dev (UK)

Frankly, I’d rather quit my career than live in the future they’re selling. It’s the sheer dystopian drabness of it. Mediocrity as a service.

I tried the tab-completion slot machines; not my cup of tea. I tried image generation and was overcome with literal depression. I don’t want a future as a “prompt artist”.

I’m mostly linking this for what it says, but oh boy, do I love the way it says it with this wonderful HTML web component.

Wednesday, May 28th, 2025

The Who Cares Era | dansinker.com

AI is, of course, at the center of this moment. It’s a mediocrity machine by default, attempting to bend everything it touches toward a mathematical average. Using extraordinary amounts of resources, it has the ability to create something good enough, a squint-and-it-looks-right simulacrum of normality. If you don’t care, it’s miraculous.

In the Who Cares Era, the most radical thing you can do is care.

In a moment where machines churn out mediocrity, make something yourself. Make it imperfect. Make it rough. Just make it.

Tuesday, May 27th, 2025

Uses

I don’t use large language models. My objection to using them is ethical. I know how the sausage is made.

I wanted to clarify that. I’m not rejecting large language models because they’re useless. They can absolutely be useful. I just don’t think the usefulness outweighs the ethical issues in how they’re trained.

Molly White came to the same conclusion:

The benefits, though extant, seem to pale in comparison to the costs.

Rich has similar thoughts:

What I do know is that I find LLMs useful on occasion, but every time I use one I die a little inside.

I genuinely look forward to being able to use a large language model with a clear conscience. Such a model would need to be trained ethically. When we get a free-range organic large language model I’ll be the first in line to use it. Until then, I’ll abstain. Remember:

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Still, in anticipation of an ethical large language model someday becoming reality, I think it’s good for me to have an understanding of which tasks these tools are good at.

Prototyping seems like a good use case. My general attitude to prototyping is the exact opposite to my attitude to production code: use absolutely any tool you want and prioritise speed over quality.

When it comes to coding in general, I think Laurie is really onto something when he says:

Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.

In other words, despite what the hype says, these tools are far better at transforming than they are at generating.

Iris Meredith goes deeper into this distinction between transformative and compositional work:

Compositionality relies (among other things) on two core values or functions: choice and precision, both of which are antithetical to LLM functioning.

My own take on this is that transformative work is often the drudge work—take this data dump and convert it to some other format; take this mock-up and make a disposable prototype. I want my tools to help me with that.

But compositional work that relies on judgement, taste, and choice? Not only would I not use a large language model for that, it’s exactly the kind of work that I don’t want to automate away.

Transformative work is done with broad brushstrokes. Compositional work is done with a scalpel.

Large language models are big messy brushes, not scalpels.

Saturday, May 24th, 2025

The luxury of saying no.

If I’m understanding Greg correctly here, he’s saying it’s okay for people to use large language models …because they’re being forced to?

Friday, May 23rd, 2025

Tools

One persistent piece of slopaganda you’ll hear is this:

“It’s just a tool. What matters is how you use it.”

This isn’t a new tack. The same justification has been applied to many technologies.

Leaving aside Kranzberg’s first law, large language models are the very antithesis of a neutral technology. They’re imbued with bias and political decisions at every level.

There’s the obvious problem of where the training data comes from. It’s stolen. Everyone knows this, but some people would rather pretend they don’t know how the sausage is made.

But if you set aside how the tool is made, it’s still just a tool, right? A building is still a building even if it’s built on stolen land.

Except with large language models, the training data is just the first step. After that you need to traumatise an underpaid workforce to remove the most horrifying content. Then you build an opaque black box that end-users have no control over.

Take temperature, for example. That’s the setting that controls how much randomness a large language model applies when choosing the next token. Dial the temperature too low and the tool will parrot its training data too closely, making it a plagiarism machine. Dial the temperature too high and the tool generates what we kindly call “hallucinations”.
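
Here’s a minimal sketch in Python of what that dial does, with made-up scores for three imaginary tokens. It’s an illustration of the sampling maths, not any vendor’s actual code:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from raw model scores ("logits")."""
    # Scale the raw scores by the temperature...
    scaled = [score / temperature for score in logits]
    # ...then squash them into probabilities (a softmax)
    biggest = max(scaled)
    exps = [math.exp(s - biggest) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice among the candidate tokens
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Three imaginary candidate tokens with made-up scores:
logits = [2.0, 1.0, 0.1]
print(sample_next_token(logits, temperature=0.2))  # low: almost always token 0 (parroting)
print(sample_next_token(logits, temperature=2.0))  # high: much more random (surprises)
```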

Either way, you have no control over that dial. Someone else is making that decision for you.

A large language model is as neutral as an AK-47.

I understand why people want to feel in control of the tools they’re using. I know why people will use large language models for some tasks—brainstorming, rubber ducking—but strictly avoid them for any outputs intended for human consumption.

You could even convince yourself that a large language model is like a bicycle for the mind. In truth, a large language model is more like one of those hover chairs on the spaceship in WALL·E.

Large language models don’t amplify your creativity and agency. Large language models stunt your creativity and rob you of agency.

When someone applies a large language model, it is an example of tool use. But the large language model isn’t the tool.

Thursday, May 22nd, 2025

Can Directories Rise Again? - The History of the Web

Search has bent in quality towards its earliest days, difficult to navigate and often unhelpful. And the remedy may be the same as it was a quarter century ago.

Wednesday, May 7th, 2025

AI doesn’t need to think. We do! - craigabbott.co.uk

A good overview of how large language models work:

The words flow together because they’ve been seen together many times. But that doesn’t mean they’re right. It just means they’re coherent.
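
To see “coherent but not necessarily right” at its most primitive, here’s a toy bigram model in Python. It’s a hypothetical, vastly simplified stand-in for a real large language model, but the principle is the same: words follow words because they’ve been seen together, nothing more:

```python
from collections import Counter, defaultdict
import random

# A toy corpus; the "training data"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow which (wrapping around so
# every word has at least one successor)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follows[prev][nxt] += 1

def next_word(word):
    # Pick a follower, weighted by how often it was seen
    candidates = follows[word]
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

# Generate a fluent-looking sequence with no notion of truth
word = "the"
output = [word]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat ate the mat the"
```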

Wednesday, April 30th, 2025

Codewashing

I have little understanding for people using large language models to generate slop; words and images that nobody asked for.

I have more understanding for people using large language models to generate code. Code isn’t the thing in the same way that words or images are; code is the thing that gets you to the thing.

And if a large language model hallucinates some code, you’ll find out soon enough:

With code you get a powerful form of fact checking for free. Run the code, see if it works.
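
A trivial illustration of that free fact-checking: if a model confidently conjures up a plausible-sounding function that doesn’t exist, running the code exposes the fiction straight away. (The json.parse call below is my hypothetical hallucination; the real Python function is json.loads.)

```python
import json

# A plausible-sounding hallucination: Python's json module
# has no parse() function...
data = json.parse('{"answer": 42}')
# ...so the line above raises AttributeError the moment you
# run it, and this line is never reached. Running the code
# is the fact check.
print(data["answer"])
```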

But I want to push back on one justification I see repeatedly about using large language models to write code. Here’s Craig:

There are many moral and ethical issues with using LLMs, but building software feels like one of the few truly ethically “clean”(er) uses (trained on open source code, etc.)

That’s not how this works. Yes, the large language models are trained on lots of code (most of it open source), but they’re not only trained on that. That’s on top of everything else; all the stolen books, all the unpaid creative work of others.

Even Robin Sloan, who first says:

I think the case of code is especially clear, and, for me, basically settled.

…goes on to acknowledge:

But, again, it’s important to say: the code only works because of Everything. Take that data away, train a model using GitHub alone, and you’ll get a far less useful tool.

When large language models are trained on domain-specific data, it’s always in addition to the mahoosive amount of content they’ve already stolen. It’s that mahoosive amount of content—not the domain-specific data—that enables them to parse your instructions.

(Note that I’m being very deliberate in saying “parse”, not “understand”. Though make no mistake, I’m astonished at how good these tools are at parsing instructions. I say that as someone who tried to write natural language parsers for text-only adventure games back in the 1980s.)

So, sure, go ahead and use large language models to write code. But don’t fool yourself into thinking that it’s somehow ethical.

What I said here applies to code too:

If you’re going to use generative tools powered by large language models, don’t pretend you don’t know how your sausage is made.

An Entirely Other Day: The Triumph of Triumphalism

Scratch the skin of wild-eyed AI proponents, and a thick syrup oozes out, made up of the blendered remains of Roko’s Basilisk, barely sublimated Christian end-times thinking, and the mis-remembered plot of that one cool science-fiction story they read when they were twelve. This is the basis for the new order, just like the blockchain was a couple of years ago, and a dead-eyed, low-poly, pantsless rendering of Mark Zuckerberg was a couple of years before that.

“You’re going to be left behind” is only the latest version of “Have fun staying poor.” It’s got every ounce of the smug self-satisfaction that it shouldn’t need if the inevitability it promises were actually inevitable.

“AI-first” is the new Return To Office - Anil Dash

AI is really good for helping you if you’re bad at something, or at least below average. But it’s probably not the right tool if you’re great at something. So why would these CEOs be saying, almost all using the exact same phrasing, that everyone at their companies should be using these tools? Do they think their employees are all bad at their jobs?

Saturday, April 26th, 2025

The Hidden Cost of AI Coding – Terrible Software

Feels like an emerging trend:

Instead of that deep immersion where I’d craft each function, I’m now more like a curator? I describe what I want, evaluate what the AI gives me, tweak the prompts, and iterate. It’s efficient, yes. Revolutionary, even. But something essential feels missing — that state of flow where time vanishes and you’re completely absorbed in creation. If this becomes the dominant workflow across teams, do we risk an industry full of highly productive yet strangely detached developers?