Tags: computers

Tuesday, November 18th, 2025

The premature sheen

I find Brian Eno to be a fascinating chap. His music isn’t my cup of tea, but I really enjoy hearing his thoughts on art, creativity, and culture.

I’ve always loved this short piece he wrote about singing with other people. I’ve passed that link on to multiple people who have found a deep joy in singing with a choir:

Singing aloud leaves you with a sense of levity and contentedness. And then there are what I would call “civilizational benefits.” When you sing with a group of people, you learn how to subsume yourself into a group consciousness because a cappella singing is all about the immersion of the self into the community. That’s one of the great feelings — to stop being me for a little while and to become us. That way lies empathy, the great social virtue.

Then there’s the whole Long Now thing, a phrase that originated with him:

I noticed that this very local attitude to space in New York paralleled a similarly limited attitude to time. Everything was exciting, fast, current, and temporary. Enormous buildings came and went, careers rose and crashed in weeks. You rarely got the feeling that anyone had the time to think two years ahead, let alone ten or a hundred. Everyone seemed to be passing through. It was undeniably lively, but the downside was that it seemed selfish, irresponsible and randomly dangerous. I came to think of this as “The Short Now”, and this suggested the possibility of its opposite - “The Long Now”.

I was listening to my Huffduffer feed recently, where I had saved yet another interview with Brian Eno. Sure enough, there was plenty of interesting food for thought, but the bit that stood out to me was relevant to, of all things, prototyping:

I have an architect friend called Rem Koolhaas. He’s a Dutch architect, and he uses this phrase, “the premature sheen.” In his architectural practice, when they first got computers and computers were first good enough to do proper renderings of things, he said everything looked amazing at first.

You could construct a building in half an hour on the computer, and you’d have this amazing-looking thing, but, he said, “It didn’t help us make good buildings. It helped us make things that looked like they might be good buildings.”

I went to visit him one day when they were working on a big new complex for some place in Texas, and they were using matchboxes and pens and packets of tissues. It was completely analog, and there was no sense at all that this had any relationship to what the final product would be, in terms of how it looked.

It meant that what you were thinking about was: How does it work? What do we want it to be like to be in that place? You started asking the important questions again, not: What kind of facing should we have on the building or what color should the stone be?

I keep thinking about that insight: “It didn’t help us make good buildings. It helped us make things that looked like they might be good buildings.”

Swap the word “buildings” for whatever output is supposedly being revolutionised by generative models today. Websites. Articles. Public policy.

Friday, July 18th, 2025

Frame of preference – Aresluna

Marcin has outdone himself this time. Not only has he created an exhaustive history of the settings controls in Apple interfaces, he’s gone and made them all interactive!

While it’s easy to be blown away by the detail of the interactive elements here, it’s also worth taking a moment to appreciate just how good the writing is too.

Bravo!

Tuesday, April 15th, 2025

An Ars Technica history of the Internet, part 1 - Ars Technica

Here’s a fun account of the early days of the ARPANET.

Tuesday, October 15th, 2024

She Built a Microcomputer Empire From Her Suburban Home

The story of Lore Harp McGovern is like something from Halt and Catch Fire.

Friday, September 27th, 2024

Capt. Grace Hopper on Future Possibilities: Data, Hardware, Software, and People (Part One, 1982) - YouTube

Wow! Grace Hopper has always been a hero to me, but I had no idea she was such a fantastic presenter. She’s completely engaging, with the timing and deadpan delivery of a stand-up comedian at times.

Sunday, June 16th, 2024

Your brain does not process information and it is not a computer | Aeon Essays

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Tuesday, February 6th, 2024

A holy communion | daverupert.com

You and I are partaking in something magical.

Monday, May 15th, 2023

AI isn’t the app, it’s the UI - Stack Overflow Blog

In some ways, the fervor around AI is reminiscent of blockchain hype, which has steadily cooled since its 2021 peak. In almost all cases, blockchain technology serves no purpose but to make software slower, more difficult to fix, and a bigger target for scammers. AI isn’t nearly as frivolous—it has several novel use cases—but many are rightly wary of the resemblance. And there are concerns to be had; AI bears the deceptive appearance of a free lunch and, predictably, has non-obvious downsides that some founders and VCs will insist on learning the hard way.

This is a good level-headed overview of how generative language model tools work.

If something can be reduced to patterns, however elaborate they may be, AI can probably mimic it. That’s what AI does. That’s the whole story.

There’s very practical advice on deciding where and when these tools make sense:

The sweet spot for AI is a context where its choices are limited, transparent, and safe. We should be giving it an API, not an output box.
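
As a rough sketch of what “an API, not an output box” might look like in practice, here’s one way to constrain a model’s choices, in Python. The call_model function is hypothetical, a stand-in for whichever language-model API you happen to use; the point is that the model’s free-form text never reaches the user directly, it only gets to pick from a fixed, safe set of actions.

```python
ALLOWED_ACTIONS = {"summarise", "translate", "flag_for_human"}

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real language-model call;
    returns canned text here so the sketch runs on its own."""
    return "summarise"

def choose_action(user_request: str) -> str:
    """Ask the model to pick one action from a closed set."""
    raw = call_model(
        f"Pick exactly one of {sorted(ALLOWED_ACTIONS)} for: {user_request}"
    ).strip().lower()
    # Limited, transparent, safe: anything outside the set falls
    # back to a human instead of reaching the user directly.
    return raw if raw in ALLOWED_ACTIONS else "flag_for_human"

print(choose_action("Condense this meeting transcript, please."))
```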

Tuesday, April 18th, 2023

The Technium: Dreams are the Default for Intelligence

I feel like there’s a connection here between what Kevin Kelly is describing and what I wrote about guessing (though I think he might be conflating consciousness with intelligence).

This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.

Tuesday, March 14th, 2023

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our heads.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of input before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. Their whole value rests on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently, facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?
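
For the curious, here’s a minimal sketch of what that dial actually does, assuming a toy four-word vocabulary and made-up scores. The model’s raw scores are divided by the temperature before being turned into probabilities, so cranking it up flattens the distribution and makes the unlikelier guesses more probable.

```python
import numpy as np

# A toy vocabulary and made-up raw scores ("logits") for the next
# word. A real model has tens of thousands of entries.
vocab = ["the", "a", "purple", "quantum"]
logits = np.array([3.0, 2.5, 0.5, 0.1])

def sample(temperature=1.0):
    """Pick the next word. Low temperature favours the safest
    prediction; high temperature strays further from it."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for stability
    probs /= probs.sum()
    return vocab[np.random.choice(len(probs), p=probs)]

print(sample(temperature=0.2))  # almost always "the"
print(sample(temperature=2.0))  # noticeably more fanciful
```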

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
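
To make that concrete, imagine each generated statement arrived with a confidence score attached (a hypothetical number; chatbot interfaces don’t currently expose one this neatly). The content-design version could then be as blunt as this sketch:

```python
def hedge(statement: str, confidence: float) -> str:
    """Prefix a statement with a caveat reflecting how much guessing
    went into it. The thresholds are arbitrary, purely illustrative."""
    if confidence >= 0.9:
        return statement
    prefix = "Perhaps" if confidence >= 0.6 else "It’s possible but unlikely that"
    return f"{prefix} {statement[0].lower()}{statement[1:]}"

print(hedge("The Long Now Foundation was co-founded by Brian Eno.", 0.95))
print(hedge("The phrase was coined in 1836.", 0.2))
```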

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Thursday, February 10th, 2022

Automate Mindfully | Jorge Arango

But a machine for writing isn’t the same as a machine that writes for you. A machine for viewing photos isn’t the same thing as a machine that travels in your stead. A machine for sketching isn’t the same thing as a machine that designs. I love doing these things and doing them more efficiently. But I have no desire to have them done for me. It’s a key distinction: Do not automate the work you are engaged in, only the materials.

Tuesday, March 30th, 2021

Let’s Not Dumb Down the History of Computer Science | Opinion | Communications of the ACM

I don’t think I agree with Don Knuth’s argument here from a 2014 lecture, but I do like how he sets out his table:

Why do I, as a scientist, get so much out of reading the history of science? Let me count the ways:

  1. To understand the process of discovery—not so much what was discovered, but how it was discovered.
  2. To understand the process of failure.
  3. To celebrate the contributions of many cultures.
  4. Telling historical stories is the best way to teach.
  5. To learn how to cope with life.
  6. To become more familiar with the world, and to know how science fits into the overall history of mankind.

Wednesday, March 10th, 2021

Top Secret Rosies

I need to seek out this documentary, Top Secret Rosies: The Female Computers of World War II.

It would pair nicely with another film, The ENIAC Programmers Project.

Wednesday, December 2nd, 2020

Hyperland, Intermedia, and the Web That Never Was — Are.na

In 1990, the science fiction writer Douglas Adams produced a “fantasy documentary” for the BBC called Hyperland. It’s a magnificent paleo-futuristic artifact, rich in sideways predictions about the technologies of tomorrow.

I remember coming across a repeating loop of this documentary playing in a dusty corner of a Smithsonian museum in Washington DC. Douglas Adams wasn’t credited but I recognised his voice.

Hyperland aired on the BBC a full year before the World Wide Web. It is a prophecy waylaid in time: the technology it predicts is not the Web. It’s what William Gibson might call a “stub,” evidence of a dead node in the timeline, a three-point turn where history took a pause and backed out before heading elsewhere.

Here, Claire L. Evans uses Adams’s documentary as an opening to dive into the history of hypertext, starting with Bush’s Memex, Nelson’s Xanadu and Engelbart’s oNLine System. But then she describes some lesser-known hypertext systems:

In 1985, the students at Brown who encountered Intermedia had never seen anything like it before in their lives. The system laid a world of information at their fingertips, saved them hours at the library, and helped them work through tangles of thought.

Sunday, August 9th, 2020

Beyond Smart Rocks

Claire L. Evans on computational slime molds and other forms of unconventional computing that look beyond silicon:

In moments of technological frustration, it helps to remember that a computer is basically a rock. That is its fundamental witchcraft, or ours: for all its processing power, the device that runs your life is just a complex arrangement of minerals animated by electricity and language. Smart rocks.

Wednesday, January 22nd, 2020

Living in Alan Turing’s Future | The New Yorker

Portrait of the genius as a young man.

It is fortifying to remember that the very idea of artificial intelligence was conceived by one of the more unquantifiably original minds of the twentieth century. It is hard to imagine a computer being able to do what Alan Turing did.

Saturday, July 6th, 2019

Chaos Design: Before the robots take our jobs, can we please get them to help us do some good work?

This is a great piece! It starts with a look back at some of the great minds of the nineteenth century: Herschel, Darwin, Babbage and Lovelace. Then it brings us, via JCR Licklider, to the present state of the web before looking ahead to what the future might bring.

So what will the life of an interface designer be like in the year 2120? or 2121 even? A nice round 300 years after Babbage first had the idea of calculations being executed by steam.

I think there are some missteps along the way (I certainly don’t think that inline styles—AKA CSS in JS—are necessarily a move forwards) but I love the idea of applying chaos engineering to web design:

Think of every characteristic of an interface you depend on to not ‘fail’ for your design to ‘work.’ Now imagine if these services were randomly ‘failing’ constantly during your design process. How might we design differently? How would our workflows and priorities change?

Sunday, April 28th, 2019

Norbert Wiener’s Human Use of Human Beings is more relevant than ever.

What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.

Saturday, February 2nd, 2019

1969 & 70 - Bell Labs

Pictures of computers (of the human and machine varieties).