Link tags: computers

Frame of preference – Aresluna

Marcin has outdone himself this time. Not only has he created an exhaustive history of the settings controls in Apple interfaces, he’s gone and made them all interactive!

While it’s easy to be blown away by the detail of the interactive elements here, it’s also worth taking a moment to appreciate just how good the writing is too.

Bravo!

An Ars Technica history of the Internet, part 1 - Ars Technica

Here’s a fun account of the early days of the ARPANET.

She Built a Microcomputer Empire From Her Suburban Home

The story of Lore Harp McGovern is like something from Halt And Catch Fire.

Capt. Grace Hopper on Future Possibilities: Data, Hardware, Software, and People (Part One, 1982) - YouTube

Wow! Grace Hopper has always been a hero to me, but I had no idea she was such a fantastic presenter. She’s completely engaging, with the timing and deadpan delivery of a stand-up comedian at times.

Your brain does not process information and it is not a computer | Aeon Essays

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

A holy communion | daverupert.com

You and I are partaking in something magical.

AI isn’t the app, it’s the UI - Stack Overflow Blog

In some ways, the fervor around AI is reminiscent of blockchain hype, which has steadily cooled since its 2021 peak. In almost all cases, blockchain technology serves no purpose but to make software slower, more difficult to fix, and a bigger target for scammers. AI isn’t nearly as frivolous—it has several novel use cases—but many are rightly wary of the resemblance. And there are concerns to be had; AI bears the deceptive appearance of a free lunch and, predictably, has non-obvious downsides that some founders and VCs will insist on learning the hard way.

This is a good, level-headed overview of how generative language model tools work.

If something can be reduced to patterns, however elaborate they may be, AI can probably mimic it. That’s what AI does. That’s the whole story.

There’s very practical advice on deciding where and when these tools make sense:

The sweet spot for AI is a context where its choices are limited, transparent, and safe. We should be giving it an API, not an output box.

The Technium: Dreams are the Default for Intelligence

I feel like there’s a connection here between what Kevin Kelly is describing and what I wrote about guessing (though I think he might be conflating consciousness with intelligence).

This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.

Automate Mindfully | Jorge Arango

But a machine for writing isn’t the same as a machine that writes for you. A machine for viewing photos isn’t the same thing as a machine that travels in your stead. A machine for sketching isn’t the same thing as a machine that designs. I love doing these things and doing them more efficiently. But I have no desire to have them done for me. It’s a key distinction: Do not automate the work you are engaged in, only the materials.

Let’s Not Dumb Down the History of Computer Science | Opinion | Communications of the ACM

I don’t think I agree with Don Knuth’s argument here from a 2014 lecture, but I do like how he sets out his table:

Why do I, as a scientist, get so much out of reading the history of science? Let me count the ways:

  1. To understand the process of discovery—not so much what was discovered, but how it was discovered.
  2. To understand the process of failure.
  3. To celebrate the contributions of many cultures.
  4. Telling historical stories is the best way to teach.
  5. To learn how to cope with life.
  6. To become more familiar with the world, and to know how science fits into the overall history of mankind.

Top Secret Rosies

I need to seek out this documentary, Top Secret Rosies: The Female Computers of World War II.

It would pair nicely with another film, The Eniac Programmers Project.

Hyperland, Intermedia, and the Web That Never Was — Are.na

In 1990, the science fiction writer Douglas Adams produced a “fantasy documentary” for the BBC called Hyperland. It’s a magnificent paleo-futuristic artifact, rich in sideways predictions about the technologies of tomorrow.

I remember coming across a repeating loop of this documentary playing in a dusty corner of a Smithsonian museum in Washington DC. Douglas Adams wasn’t credited but I recognised his voice.

Hyperland aired on the BBC a full year before the World Wide Web. It is a prophecy waylaid in time: the technology it predicts is not the Web. It’s what William Gibson might call a “stub,” evidence of a dead node in the timeline, a three-point turn where history took a pause and backed out before heading elsewhere.

Here, Claire L. Evans uses Adams’s documentary as an opening to dive into the history of hypertext, starting with Bush’s Memex, Nelson’s Xanadu and Engelbart’s oNLine System. But then she describes some lesser-known hypertext systems:

In 1985, the students at Brown who encountered Intermedia had never seen anything like it before in their lives. The system laid a world of information at their fingertips, saved them hours at the library, and helped them work through tangles of thought.

Beyond Smart Rocks

Claire L. Evans on computational slime molds and other forms of unconventional computing that look beyond silicon:

In moments of technological frustration, it helps to remember that a computer is basically a rock. That is its fundamental witchcraft, or ours: for all its processing power, the device that runs your life is just a complex arrangement of minerals animated by electricity and language. Smart rocks.

Living in Alan Turing’s Future | The New Yorker

Portrait of the genius as a young man.

It is fortifying to remember that the very idea of artificial intelligence was conceived by one of the more unquantifiably original minds of the twentieth century. It is hard to imagine a computer being able to do what Alan Turing did.

Chaos Design: Before the robots take our jobs, can we please get them to help us do some good work?

This is a great piece! It starts with a look back at some of the great minds of the nineteenth century: Herschel, Darwin, Babbage and Lovelace. Then it brings us, via JCR Licklider, to the present state of the web before looking ahead to what the future might bring.

So what will the life of an interface designer be like in the year 2120? or 2121 even? A nice round 300 years after Babbage first had the idea of calculations being executed by steam.

I think there are some missteps along the way (I certainly don’t think that inline styles—AKA CSS in JS—are necessarily a move forwards) but I love the idea of applying chaos engineering to web design:

Think of every characteristic of an interface you depend on to not ‘fail’ for your design to ‘work.’ Now imagine if these services were randomly ‘failing’ constantly during your design process. How might we design differently? How would our workflows and priorities change?

Norbert Wiener’s Human Use of Human Beings is more relevant than ever.

What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.

1969 & 70 - Bell Labs

Pictures of computers (of the human and machine varieties).

Two-Bit History

An ongoing timeline of computer technology in the form of blog posts by Sinclair Target (that’s a person, not a timeslipping transatlantic company merger).

Ways to think about machine learning — Benedict Evans

This strikes me as a sensible way of thinking about machine learning: it’s like when we got relational databases—suddenly we could do more, quicker, and easier …but it doesn’t require us to treat the technology like it’s magic.

An important parallel here is that though relational databases had economy of scale effects, there were limited network or ‘winner takes all’ effects. The database being used by company A doesn’t get better if company B buys the same database software from the same vendor: Safeway’s database doesn’t get better if Caterpillar buys the same one. Much the same actually applies to machine learning: machine learning is all about data, but data is highly specific to particular applications. More handwriting data will make a handwriting recognizer better, and more gas turbine data will make a system that predicts failures in gas turbines better, but the one doesn’t help with the other. Data isn’t fungible.