Tags: models

Tuesday, August 5th, 2025

Vibe code is legacy code | Val Town Blog

When you vibe code, you are incurring tech debt as fast as the LLM can spit it out. Which is why vibe coding is perfect for prototypes and throwaway projects: It’s only legacy code if you have to maintain it!

The worst possible situation is to have a non-programmer vibe code a large project that they intend to maintain. This would be the equivalent of giving a credit card to a child without first explaining the concept of debt.

If you don’t understand the code, your only recourse is to ask AI to fix it for you, which is like paying off credit card debt with another credit card.

Thursday, July 24th, 2025

The sound of inevitability | My place to put things

People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.

This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.

Tuesday, July 22nd, 2025

A human review | Trys Mudford

Following on from my earlier link about AI etiquette, what Trys experienced here is utterly deflating:

I spent a couple of hours working through my notes and writing up a review before sending it to my manager, awaiting their equivalent review for me.

However, the review I received back was, quite simply, quintessential AI slop.

When slopagandists talk about “AI” boosting productivity, this is the kind of shite they’re talking about.

Butlerian Jihad

This page collects my blog posts on the topic of fighting off spam bots, search engine spiders and other non-humans wasting the precious resources we have on Earth.

It’s rude to show AI output to people | Alex Martsinovich

For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

Now, AI has made text very, very, very cheap. … Any text can be AI slop. If you read it, you’re injured in this war. You engaged and replied – you’re as good as dead. The dead internet is not just dead, it’s poisoned.

I think that realistically, our main weapon in this war is AI etiquette.

Saturday, July 19th, 2025

Vibe coding and Robocop

The short version of what I want to say is: vibe coding seems to live very squarely in the land of prototypes and toys. Promoting software that’s been built entirely using this method would be akin to sending a hacked weekend prototype to production and expecting it to be stable.

Remy is taking a very sensible approach here:

I’ve used it myself to solve really bespoke problems where the user count is one.

Would I put this out to production: absolutely not.

Friday, June 20th, 2025

The Imperfectionist: Navigating by aliveness

Most obviously, aliveness is what generally feels absent from the written and visual outputs of ChatGPT and its ilk, even when they’re otherwise of high quality. I’m not claiming I couldn’t be fooled into thinking AI writing or art was made by a human (I’m sure I already have been); but that when I realise something’s AI, either because it’s blindingly obvious or when I find out, it no longer feels so alive to me. And that this change in my feelings about it isn’t irrelevant: that it means something.

More subtly, it feels like our own aliveness is what’s at stake when we’re urged to get better at prompting LLMs to provide the most useful responses. Maybe that’s a necessary modern skill; but still, the fact is that we’re being asked to think less like ourselves and more like our tools.

Tuesday, June 17th, 2025

Large Language Muddle • Jason Santa Maria

It feels like someone just harvested lumber from a forest I helped grow, and now wants to sell me the furniture they made with it.

Critical questions for design leaders working with artificial intelligence, New York 2025 | Leading Design

AI presents design leaders with a quandary, requiring us to tread a fine line between what is acceptable and useful, and what is problematic and harmful.

This document is not a manifesto or an agenda. It is a series of prompts written by design leaders for design leaders, conceived to help us navigate these tricky waters.

The Recurring Cycle of ‘Developer Replacement’ Hype

Here’s what the “AI will replace developers” crowd fundamentally misunderstands: code is not an asset—it’s a liability. Every line must be maintained, debugged, secured, and eventually replaced. The real asset is the business capability that code enables.

If AI makes writing code faster and cheaper, it’s really making it easier to create liability. When you can generate liability at unprecedented speed, the ability to manage and minimize that liability strategically becomes exponentially more valuable.

This is particularly true because AI excels at local optimization but fails at global design. It can optimize individual functions but can’t determine whether a service should exist in the first place, or how it should interact with the broader system. When implementation speed increases dramatically, architectural mistakes get baked in before you realize they’re mistakes.

Friday, May 30th, 2025

Ensloppification – David Bushell – Web Dev (UK)

Frankly, I’d rather quit my career than live in the future they’re selling. It’s the sheer dystopian drabness of it. Mediocrity as a service.

I tried the tab-completion slot machines; not my cup of tea. I tried image generation and was overcome with literal depression. I don’t want a future as a “prompt artist”.

I’m mostly linking this for what it says, but oh boy, do I love the way it says it with this wonderful HTML web component.

Toolmen | A Working Library

Engaging with AI as a technology is to play the fool—it’s to observe the reflective surface of the thing without taking note of the way it sends roots deep down into the ground, breaking up bedrock, poisoning the soil, reaching far and wide to capture, uproot, strangle, and steal everything within its reach. It’s to stand aboveground and pontificate about the marvels of this bright new magic, to be dazzled by all its flickering, glittering glory, its smooth mirages and six-fingered messiahs, its apparent obsequiousness in response to all your commands, right up until the point when a sinkhole opens up and swallows you whole.

👏👏👏

Wednesday, May 28th, 2025

The Who Cares Era | dansinker.com

AI is, of course, at the center of this moment. It’s a mediocrity machine by default, attempting to bend everything it touches toward a mathematical average. Using extraordinary amounts of resources, it has the ability to create something good enough, a squint-and-it-looks-right simulacrum of normality. If you don’t care, it’s miraculous.

In the Who Cares Era, the most radical thing you can do is care.

In a moment where machines churn out mediocrity, make something yourself. Make it imperfect. Make it rough. Just make it.

Tuesday, May 27th, 2025

Uses

I don’t use large language models. My objection to using them is ethical. I know how the sausage is made.

I wanted to clarify that. I’m not rejecting large language models because they’re useless. They can absolutely be useful. I just don’t think the usefulness outweighs the ethical issues in how they’re trained.

Molly White came to the same conclusion:

The benefits, though extant, seem to pale in comparison to the costs.

Rich has similar thoughts:

What I do know is that I find LLMs useful on occasion, but every time I use one I die a little inside.

I genuinely look forward to being able to use a large language model with a clear conscience. Such a model would need to be trained ethically. When we get a free-range organic large language model I’ll be the first in line to use it. Until then, I’ll abstain. Remember:

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Still, in anticipation of an ethical large language model someday becoming reality, I think it’s good for me to have an understanding of which tasks these tools are good at.

Prototyping seems like a good use case. My general attitude to prototyping is the exact opposite of my attitude to production code: use absolutely any tool you want and prioritise speed over quality.

When it comes to coding in general, I think Laurie is really onto something when he says:

Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.

In other words, despite what the hype says, these tools are far better at transforming than they are at generating.

Iris Meredith goes deeper into this distinction between transformative and compositional work:

Compositionality relies (among other things) on two core values or functions: choice and precision, both of which are antithetical to LLM functioning.

My own take on this is that transformative work is often the drudge work—take this data dump and convert it to some other format; take this mock-up and make a disposable prototype. I want my tools to help me with that.

But compositional work that relies on judgement, taste, and choice? Not only would I not use a large language model for that, it’s exactly the kind of work that I don’t want to automate away.

Transformative work is done with broad brushstrokes. Compositional work is done with a scalpel.

Large language models are big messy brushes, not scalpels.

Keeping up appearances | deadSimpleTech

Looking at LLM usage and promotion as a cultural phenomenon, it has all of the markings of a status game. The material gains from the LLM (which are usually quite marginal) really aren’t why people are doing it: they’re doing it because in many spaces, using ChatGPT and being very optimistic about AI being the “future” raises their social status. It’s important not only to be using it, but to be seen using it and be seen supporting it and telling people who don’t use it that they’re stupid luddites who’ll inevitably be left behind by technology.

Saturday, May 24th, 2025

The luxury of saying no.

If I’m understanding Greg correctly here, he’s saying it’s okay for people to use large language models …because they’re being forced to?

Friday, May 23rd, 2025

Tools

One persistent piece of slopaganda you’ll hear is this:

“It’s just a tool. What matters is how you use it.”

This isn’t a new tack. The same justification has been applied to many technologies.

Leaving aside Kranzberg’s first law, large language models are the very antithesis of a neutral technology. They’re imbued with bias and political decisions at every level.

There’s the obvious problem of where the training data comes from. It’s stolen. Everyone knows this, but some people would rather pretend they don’t know how the sausage is made.

But if you set aside how the tool is made, it’s still just a tool, right? A building is still a building even if it’s built on stolen land.

Except with large language models, the training data is just the first step. After that you need to traumatise an underpaid workforce to remove the most horrifying content. Then you build an opaque black box that end-users have no control over.

Take temperature, for example. That’s the parameter that controls how much randomness a large language model uses when choosing the next token. Dial the temperature too low and the tool will parrot its training data too closely, making it a plagiarism machine. Dial the temperature too high and the tool generates what we kindly call “hallucinations”.
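
For the curious, here’s roughly what that dial does under the hood: a minimal sketch assuming the standard softmax-with-temperature formulation, with made-up scores standing in for the model’s real ones (a real model weighs tens of thousands of candidate tokens, not four).

```python
import math

# Made-up scores ("logits") for four hypothetical candidate next tokens.
logits = [2.0, 1.0, 0.5, -1.0]

def next_token_probabilities(logits, temperature):
    """Softmax with temperature scaling."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Low temperature: the top token dominates (closer to parroting).
print(next_token_probabilities(logits, 0.2))  # ≈ [0.99, 0.007, 0.0005, 0.0]
# High temperature: the distribution flattens (more surprises).
print(next_token_probabilities(logits, 2.0))  # ≈ [0.43, 0.26, 0.21, 0.10]
```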

Either way, you have no control over that dial. Someone else is making that decision for you.

A large language model is as neutral as an AK-47.

I understand why people want to feel in control of the tools they’re using. I know why people will use large language models for some tasks—brainstorming, rubber ducking—but strictly avoid them for any outputs intended for human consumption.

You could even convince yourself that a large language model is like a bicycle for the mind. In truth, a large language model is more like one of those hover chairs on the spaceship in WALL·E.

Large language models don’t amplify your creativity and agency. Large language models stunt your creativity and rob you of agency.

When someone applies a large language model, it is an example of tool use. But the large language model isn’t the tool.

Thursday, May 22nd, 2025

Can Directories Rise Again? - The History of the Web

Search has bent in quality towards its earliest days, difficult to navigate and often unhelpful. And the remedy may be the same as it was a quarter century ago.

Wednesday, May 14th, 2025

In 2025, venture capital can’t pretend everything is fine any more – Pivot to AI

Here is the state of venture capital in early 2025:

  • Venture capital is moribund except AI.
  • AI is moribund except OpenAI.
  • OpenAI is a weird scam that wants to burn money so fast it summons AI God.
  • Nobody can cash out.