Kumiho. — Ethan Marcotte

Ethan shares my reaction to Google Duplex:

Frankly, this technology was designed to deceive humans.

And he points out that the team’s priorities are very revealing:

I’ll say this: it’s telling that matters of transparency, disclosure, and trust weren’t considered important for the initial release.

Related links

Google Duplex and the canny rise: a UX pattern – UX Collective

Chris weighs up the ethical implications of Google Duplex:

The social hacking that could be accomplished is mind-boggling. For this reason, I expect that having human-sounding narrow AI will be illegal someday. The Duplex demo is a moment of cultural clarity, where it first dawned on us that we can do it, but with only a few exceptions, we shouldn’t.

But he also offers alternatives for designing systems like this:

  1. Provide disclosure, and
  2. Design a hot signal:

…design the interface so that it is unmistakeable that it is synthetic. This way, even if the listener missed or misunderstood the disclosure, there is an ongoing signal that reinforces the idea. As designer Ben Sauer puts it, make it “Humane, not human.”

Ensloppification – David Bushell – Web Dev (UK)

Frankly, I’d rather quit my career than live in the future they’re selling. It’s the sheer dystopian drabness of it. Mediocrity as a service.

I tried the tab-completion slot machines; not my cup of tea. I tried image generation and was overcome with literal depression. I don’t want a future as a “prompt artist”.

I’m mostly linking to this for what it says, but oh boy, do I love the way it says it with this wonderful HTML web component.

Will A.I. Become the New McKinsey? | The New Yorker

Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.

Once again, absolutely spot-on analysis from Ted Chiang.

I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.

morals in the machine | The Roof is on Phire

We are so excited by the idea of machines that can write, and create art, and compose music, with seemingly little regard for how many wells of creativity sit untapped because many of us spend the best hours of our days toiling away, and even more can barely fulfill basic needs for food, shelter, and water. I can’t help but wonder how rich our lives could be if we focused a little more on creating conditions that enable all humans to exercise their creativity as much as we would like robots to be able to.

Folding Beijing - Uncanny Magazine

The terrific Hugo-winning short story about inequality, urban planning, and automation, written by Hao Jingfang and translated by Ken Liu (who translated The Three-Body Problem series).

Hao Jingfang also wrote this essay about the story:

I’ve been troubled by inequality for a long time. When I majored in physics as an undergraduate, I once stared at the distribution curve for American household income that showed profound inequality, and tried to fit the data against black-body distribution or Maxwell–Boltzmann distribution. I wanted to know how such a curve came about, and whether it implied some kind of universality: something as natural as particle energy distribution functions, so natural it led to despair.
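
As an aside, here’s a minimal sketch (not from the essay; the income figures are entirely made up for illustration) of the kind of curve-fitting Hao Jingfang describes, using scipy to fit a Maxwell–Boltzmann-shaped curve to a hypothetical household-income histogram:

```python
# Toy illustration: fitting a Maxwell-Boltzmann-shaped curve to invented income data.
import numpy as np
from scipy.optimize import curve_fit

def maxwell_boltzmann(x, scale, amplitude):
    """Maxwell-Boltzmann speed-distribution shape with a free overall amplitude."""
    return amplitude * np.sqrt(2 / np.pi) * x**2 * np.exp(-x**2 / (2 * scale**2)) / scale**3

# Hypothetical histogram: household income bin centres (in $1000s) and densities.
income = np.array([10, 30, 50, 70, 90, 110, 130, 150, 170, 190], dtype=float)
density = np.array([0.008, 0.012, 0.011, 0.009, 0.007, 0.005, 0.003, 0.002, 0.001, 0.0005])

# Least-squares fit of the two free parameters.
params, _ = curve_fit(maxwell_boltzmann, income, density, p0=[50.0, 1.0])
print(f"fitted scale: ${params[0]:.0f}k, amplitude: {params[1]:.3f}")
```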

Related posts

Google Duplicitous

Computer Human Interaction