Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channelled that frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use case is going to appear on the cover of Wired magazine anytime soon, but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines did the same for all the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.
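The post doesn’t include an actual expression, but here’s a purely hypothetical illustration of the workflow: suppose the request was “match ISO-style dates like 2023-03-23”. A chatbot might hand back something like the pattern below, which you can sanity-check directly with Python’s `re` module before trusting it (the pattern and test strings here are my own, not from the post):

```python
import re

# Hypothetical chatbot output: match calendar dates in YYYY-MM-DD form.
iso_date = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

# Check it against examples with known answers before using it for real.
assert iso_date.match("2023-03-23")        # valid date: matches
assert not iso_date.match("2023-13-01")    # month out of range: no match
assert not iso_date.match("23-03-2023")    # wrong field order: no match
```

A handful of deliberately tricky test cases like these catches most of the ways a generated expression can be subtly wrong.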

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”
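The post doesn’t give the friend’s schema or query, so here’s a made-up sketch of that workflow using SQLite: an invented two-table schema of the kind you’d describe to the chatbot, and the sort of statement it might hand back, run against known data so you can verify the answer before pointing it at a real database:

```python
import sqlite3

# A hypothetical schema, like one you might describe to a chatbot.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total REAL
    );
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Charles');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 30.0);
""")

# The kind of statement a chatbot might produce for the request
# "select each customer's total spend, highest first".
query = """
    SELECT c.name, SUM(o.total) AS spend
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
    ORDER BY spend DESC;
"""

# Run it against data where you already know the right answer.
print(db.execute(query).fetchall())  # → [('Ada', 200.0), ('Charles', 30.0)]
```

Because the sample data is small enough to total up in your head, a wrong join or a missing GROUP BY shows up immediately.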

Playing chatbots off against each other like this is kinda reminiscent of a technique used in machine learning: generative adversarial networks, where one model generates output and a second model tries to find fault with it.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.


Responses

Mark Root-Wiley

@adactio It strikes me that an LLM is not the best tool for validation. Wouldn’t a tool like RegExr.com that literally explains an expression for you with 100% accuracy (and provides a sweet testing tool!) work better for Step 2? Sometimes I feel like LLMs make me quickly forget about old special purpose tools that are more powerful in their tiny little domain (and may always be?).

