Link tags: risk

Critical questions for design leaders working with artificial intelligence, New York 2025 | Leading Design

AI presents design leaders with a quandary, requiring us to tread a fine line between what is acceptable and useful, and what is problematic and harmful.

This document is not a manifesto or an agenda. It is a series of prompts written by design leaders for design leaders, conceived to help us navigate these tricky waters.

The Shape of a Mars Mission (Idle Words)

You can think of flying to Mars like one of those art films where the director has to shoot the movie in a single take. Even if no scene is especially challenging, the requirement that everything go right sequentially, with no way to pause or reshoot, means that even small risks become unacceptable in the aggregate.
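The compounding effect described above can be made concrete with a quick back-of-the-envelope calculation (the numbers here are illustrative, not from the article): even if each stage of a mission is individually very reliable, requiring every stage to succeed in sequence erodes the overall odds fast.

```python
# Hypothetical illustration: each sequential stage is 99% reliable,
# and the mission only succeeds if every stage succeeds in order.
per_stage_success = 0.99   # assumed per-stage reliability
stages = 20                # assumed number of sequential stages

overall_success = per_stage_success ** stages
print(f"Overall success probability: {overall_success:.3f}")
```

With these assumed figures, twenty 99%-reliable stages in a row leave only about an 82% chance of overall success — a 1-in-100 risk per scene becomes nearly 1-in-5 for the single-take film.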

Does AI benefit the world? – Chelsea Troy

Our ethical struggle with generative models derives in part from the fact that we…sort of can’t have them ethically, right now, to be honest. We have known how to build models like this for a long time, but we did not have the necessary volume of parseable data available until recently—and even then, to get it, companies have to plunder the internet. Sitting around and waiting for consent from all the parties that wrote on the internet over the past thirty years probably didn’t even cross Sam Altman’s mind.

On the environmental front, fans of generative model technology insist that eventually we’ll possess sufficiently efficient compute power to train and run these models without the massive carbon footprint. That is not the case at the moment, and we don’t have a concrete timeline for it. Again, waiting around for a thing we don’t have yet doesn’t appeal to investors or executives.

AI Safety for Fleshy Humans: a whirlwind tour

This is a terrifically entertaining, level-headed, in-depth explanation of AI safety. By the end of this year, all three parts will be published; right now the first part is ready for you to read and enjoy.

This 3-part series is your one-stop-shop to understand the core ideas of AI & AI Safety — explained in a friendly, accessible, and slightly opinionated way!

( Related phrases: AI Risk, AI X-Risk, AI Alignment, AI Ethics, AI Not-Kill-Everyone-ism. There is no consensus on what these phrases do & don’t mean, so I’m just using “AI Safety” as a catch-all.)

The Intelligence Illusion

Baldur has a new book coming out:

The Intelligence Illusion is an exhaustively researched guide to the business risks of Generative AI.

The Dangerous Ideas of “Longtermism” and “Existential Risk” ❧ Current Affairs

I should emphasize that rejecting longtermism does not mean that one must reject long-term thinking. You ought to care equally about people no matter when they exist, whether today, next year, or in a couple billion years henceforth. If we shouldn’t discriminate against people based on their spatial distance from us, we shouldn’t discriminate against them based on their temporal distance, either. Many of the problems we face today, such as climate change, will have devastating consequences for future generations hundreds or thousands of years in the future. That should matter. We should be willing to make sacrifices for their wellbeing, just as we make sacrifices for those alive today by donating to charities that fight global poverty. But this does not mean that one must genuflect before the altar of “future value” or “our potential,” understood in techno-Utopian terms of colonizing space, becoming posthuman, subjugating the natural world, maximizing economic productivity, and creating massive computer simulations stuffed with 10^45 digital beings.

Social Cooling

If you feel you are being watched, you change your behavior. Big Data is supercharging this effect.

Some interesting ideas, but the tone is so alarming as to render the message meaningless.

As our weaknesses are mapped, we are becoming too transparent. This is breeding a society where self-censorship and risk-aversion are the new normal.

I stopped reading at the point where the danger was compared to climate change.

Technical Credit by Chris Taylor

Riffing on an offhand comment I made about progressive enhancement being a form of “technical credit”, Chris dives deep into what exactly that means. There’s some really great thinking here.

With such a wide array of both expected and unexpected properties of the current technological revolution, building our systems in such a way to both be resilient to potential failures and benefit from unanticipated events surely is a no-brainer.

(Xrisk 101): Existential Risk for Interstellar Advocates | Heath Rezabek - Academia.edu

Exemplars proposing various solutions for the resilience of digital data and computation over long timeframes include the Internet Archive; redundantly distributed storage platforms such as GlusterFS, LOCKSS, and BitTorrent Sync; and the Lunar supercomputer proposal of Ouliang Chang.

Each of these differs in its approach and its focus; yet each shares with Vessel and with one another a key understanding: The prospects of Earth-originating life in the future, whether vast or diminishing, depend upon our actions and our foresight in this current cultural moment of opportunity, agency, awareness, ability, capability, and willpower.