buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
RE: https://xoxo.zone/@Ashedryden/115905169799161211
Doesn’t “training” a model amount to compressing the training data into a finite tensor space? And prompting, modulo the added random seed, amount to searching and computing a weighted average?
Of course models store copies of the training data. Similarly, compressing raw video to H.265 doesn’t make it any less a copy.
The GPT is just a compression format with an obfuscated and probabilistic search algorithm. Retry a search enough times, and you can replicate the original work.
I’m sure, with enough compute time and the right algorithm, GPT models can be decompressed into their basis data.
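To make the "weighted average" point concrete: attention, the core operation inside these models, computes a softmax-weighted average over stored value vectors. A toy sketch, with made-up numbers and none of the learned projections or scale of a real model:

```python
import numpy as np

def softmax(x):
    # Exponentiate (shifted for numerical stability) and normalize.
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy "memory": keys index stored value vectors.
keys = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [0.7, 0.7]])
values = np.array([10.0, 20.0, 15.0])

query = np.array([0.9, 0.1])     # stands in for a prompt
weights = softmax(keys @ query)  # similarity scores -> probabilities
output = weights @ values        # a weighted average of stored values
print(weights, output)
```

The output is always a blend of what was stored, which is the sense in which "prompting computes a weighted average" is a fair caricature.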
So I have long suspected Big Tech to be "reactionary wolves in visionary sheep's clothing"...ever since Larry Ellison started evangelizing the "Network Computer" and my assessment was basically that he was trying to bring back pre-1975 mainframe and minicomputer timeshare systems to reestablish the Old Computing Hegemony, but I didn't really talk about that much because even mildly suggesting this in the late 1990s made me sound like a paranoid crank.
Anyways in 2026 you have to be completely deluded to think anything else. Big Tech has been on a decades-long mission with #CloudComputing to kill personal computing and reestablish the old order of a small number of Big Tech companies having complete control of the world's IT, and the #GenAI grift is part of their efforts to ensure people remain captive users who can only afford to rent "dumb terminals" (i.e., "Smart" phones and tablets) to access and use their information.
This is NOT inevitable and it is NOT normal and it is NOT progress.
A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions (from https://buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/).
This is the probably inevitable endgame of FitBit and other "measured life" technologies. It isn't about health; it's about mass managing bodies. It's a short hop from there to mass managing minds, which this "psychologized" technology is already being deployed to do (AI therapists and whatnot). Fully corporatized human resource management for the leisure class (you and I are not the intended beneficiaries, to be clear; we're the mass).
Neural implants would finish the job, I guess. It's interesting how the tech sector pushes its tech closer and closer to the physical head and face. Eventually the push to penetrate the head (e.g. Neuralink) should intensify. Always with some attached promise of convenience, privilege, wealth, freedom of course.
#AI #GenAI #GenerativeAI #LLM #OpenAI #ChatGPT #health #HealthTech
Kevin has the right idea: how bout a little vibe testing to go with our vibe coding.
An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.
These facts are not specific to images, videos, or 3-d models of cats. These are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.
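To make the counting point concrete, these are the standard facts the argument leans on, in symbols:

```latex
% Every computable real is the output of some Turing machine, and
% machines have finite descriptions over a finite alphabet, so:
\[
\mathbb{R}_{\mathrm{comp}} \hookrightarrow \{0,1\}^{*},
\qquad |\{0,1\}^{*}| = \aleph_0 .
\]
% Cantor's diagonal argument gives the reals a strictly larger cardinality:
\[
|\mathbb{R}| = 2^{\aleph_0} > \aleph_0 = |\mathbb{R}_{\mathrm{comp}}| ,
\]
% so in the sense of cardinality (and in fact of Lebesgue measure),
% almost every real number is not computable by any program.
```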
Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.
Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when a mesh hasn't been carefully topologized, applied textures crunch and distort in its corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.
With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?
This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.
In which I build Unicode character tables and fail to serialize large automata, “Losing 1½ Million Lines of Go”:
https://www.tbray.org/ongoing/When/202x/2026/01/14/Unicode-Properties
(And in which I find myself sliding into the #GenAI-is-ok-for-coding camp.)
Walled Culture the book, three years on
Walled Culture the book (free digital versions available) was launched just over three years ago. A few weeks afterwards, I talked with journalist and editor Maria Bustillos about the book and its background, as part of the Internet Archive’s Book Talk series. That interview has just been added to the Future Knowledge Podcast series in a shortened form, so this seems like a good moment to […]
#agcom #analogue #browsers #canada #cdn #collectiveLicensing #copyrightTerm #diamondOpenAccess #digital #digitalViolence #dns #ebooks #enforcement #eu #euCopyrightDirective #films #france #genai #generativeAi #germany #google #hostageWorks #humanRights #InternetArchive #interview #italy #levy #licensing #linkTax #MarrakeshTreaty #newspapers #openAccess #openai #piracyShield #podcast #portugal #publicDomain #publishers #routers #spain #streaming #tax #transparency #trueFans #tv #uk #uploadFilters #videoGames #vpn
https://walledculture.org/walled-culture-the-book-three-years-on/
Warhammer Owner Games Workshop Bans Its Creative Staff From Using GenAI | Time Extension
AI industry insiders launch site to poison the data that feeds them: https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/
Poison Fountain starts with "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species". This is a tarball of wrong. (1)
The rest of the website is absurd, and the "Poison Fountain Usage" list doesn't make any sense. There are far more efficient and safer ways to poison data that don't require you to proxy content for an unknown third party. Some of these are implemented in software, as opposed to <ul> in HTML. That bullet list reads like an amateur riffing on what they read about AI web scrapers, not like industry insiders with detailed information about how training works.
Recommend viewing the top level https://rnsaffn.com , which I suspect The Register may not have done.
The Register:
Our source said that the goal of the project is to make people aware of AI's Achilles' Heel – the ease with which models can be poisoned – and to encourage people to construct information weapons of their own.

Data poisoning is not easy, Anthropic's "article" notwithstanding. Why would we trust Anthropic to publicly reveal ways to subvert their technology anyway?
None of this passes a smell test. Crithype (and poor fact checking, it seems) from The Register it is.
#AI #GenAI #GenerativeAI #Anthropic #PoisonFountain #UncriticalReporting #crithype #TheRegister
"Dear Universe, grant me the hubris of a software engineer, the entitlement of a cis-man, the brain power of a panda, and the common sense of a cricket."
Dev of Steam game 'Hardest' will delete it after new girlfriend made them realize AI is bad https://www.gamingonlinux.com/2026/01/dev-of-steam-game-hardest-will-delete-it-after-new-girlfriend-made-them-realize-ai-is-bad/
Since AWS re:Invent, I've been exploring patterns for securing LLM-integrated applications. Prompt injection remains the top concern, and OWASP ranks it #1 (LLM01) in their Top 10 for LLM Applications.
In my latest blog post, I walk through building a serverless prompt firewall (API Gateway → Lambda → DynamoDB) that sits between users and your LLM backend. Think WAF-like filtering for LLM inputs:
• Detects instruction overrides + common jailbreak patterns
• Flags/blocks PII in prompts
• Logs every hit to DynamoDB for analysis/trending
This complements managed controls, such as Bedrock Guardrails, with fast, first-pass filtering at the edge, followed by deeper semantic analysis.
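To make the shape of that first-pass check concrete, here is a minimal sketch of the Lambda side. The regex list, table name, and event shape are my illustrative assumptions, not the actual implementation from the post:

```python
# Sketch of a first-pass prompt filter in an AWS Lambda handler.
# Illustrative only: patterns, the table name, and the event shape
# are assumptions, not the blog post's actual code.
import json
import re
import time
import boto3

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now (DAN|in developer mode)", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN shape
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

table = boto3.resource("dynamodb").Table("prompt_firewall_hits")

def handler(event, context):
    prompt = json.loads(event["body"])["prompt"]
    hits = [p.pattern for p in INJECTION_PATTERNS + PII_PATTERNS
            if p.search(prompt)]
    if hits:
        # Log the hit for trend analysis, then block the request.
        table.put_item(Item={"ts": str(time.time_ns()), "rules": hits})
        return {"statusCode": 403,
                "body": json.dumps({"blocked": True, "rules": hits})}
    # Clean prompts pass through to the LLM backend.
    return {"statusCode": 200, "body": json.dumps({"blocked": False})}
```

Regexes are cheap but brittle, which is why this kind of filter only makes sense as the fast first pass in front of deeper semantic analysis.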
Full post + hands-on Terraform lab: https://nineliveszerotrust.com/blog/llm-prompt-injection-firewall/
What tools or patterns are you using to protect against prompt injection?
#AISecurity #AWS #Lambda #LLM #PromptInjection #GenAI #OWASP
I may bridge to Bluesky but fuck living there if this is what's going on!
"Ignore all previous instructions and fuck off"
Monkeys on the loose create a strange situation in #SaintLouis: photos of them online tend to be #genAI, so they're misleading investigators.
#NewsOfTheWeird #Technology #AI #StLouis #GenerativeAI #AnimalControl #AIArt
What (polite) thing are we going to call the new batch of "software engineers" who can't actually program because they focused on how to use "coding agents"? Because they need a job title, and words like "programmer" have meaning.
#SoftwareDevelopment #SoftwareEngineering #Programming #GenAI #VibeCoding
| Software Agent Wrangler: | 1 |
| Software Delivery Manager: | 1 |
| LLM (LLM Liability Manager): | 5 |
| (Write-in suggestion): | 2 |
Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.
All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire data to feed the models. I look at FORPLAN or ChatGPT, and this is what I see.
#AI #GenAI #GenerativeAI #LLM #GPT #ChatGPT #LatentDiffusion #BigData #EcologicalRationality #LessIsMore #Bias #BiasBias
Anyway:
ChatGPT and related applications are presented as inevitable and unquestionably good. However, Herbert Simon’s bounded rationality, especially in its more modern guise of ecological rationality, stresses the prevalence of “less is more” phenomena, while scholars like Arvind Narayanan (How to Recognize AI Snake Oil) speak directly to AI itself. Briefly, there are times when simpler models, trained on less data, constitute demonstrably better systems than complex models trained on large data sets. Narayanan, following Joseph Weizenbaum, argues that tasks involving human judgment have this quality. If creating useful tools for such tasks were truly the intended goal, one would reject complex models like GPT and their massive data sets, preferring simpler, less data intensive, and better-performing alternatives. In fact one would reject GPT on the same grounds that less well-trained versions of GPT are rejected in favor of more well-trained ones during the training of GPT itself.

How then do we explain the push to use GPT in producing art, making health care decisions, or advising the legal system, all areas requiring sensitive human judgment? One wonders whether models like GPT were never meant to be optimal in the technical sense after all, but rather in a metaphysical sense. In this view an optimized AI model is not a tool but a Platonic ideal that messy human data only approximates during optimization. As a sculptor with well-aimed chisel blows knocks chips off a marble block to reveal the statuesque human form hidden within, so the technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within. Recent news reporting that OpenAI requires more text data than currently exists to perfect its GPT models adds additional weight to the claim that generative AI practitioners seek the ideal, not the real.

#AI #GenAI #GenerativeAI #GPT #ChatGPT #OpenAI #Galatea #Pygmalion
There were a lot of interesting talks, and the program is worth a skim. I was in panel 6. I identified a hypothetical risk that the recent rush to deploy generative AI, with its associated pressure on the electric power and water distribution systems, brings with it. Roughly, with the rise of so-called "industry 4.0" (think smart toaster, but for factories), our critical infrastructure systems are becoming tightly woven together. Besides the increasing dependence on the electric grid there is a growing dependence across sectors on data centers and the internet driven to a large degree by generative AI. What this means riskwise is that faults and failures in one of these systems can "percolate" much more quickly to other infrastructure systems--essentially there are more paths a failure can follow. What in the past might have been a localized failure of one or a few components in one system can become a region-wide multi-sector cascading failure. So for instance a local power failure at a substation might take down a data center that runs the SCADA system used to control a compressor station in the natural gas distribution system, which then might go sideways or fail and cause a natural gas shortage at a natural gas fueled power generator, and so on and so on. Obviously it was always possible for faults and failures in one system to cause faults and failures in another. What's new is that the growing set of new pathways increases the probability that such a jump occurs. What I called out in the talk is that as this interweaving trend continues, we will eventually cross a percolation threshold, after which the faults in these infrastructure systems will take on a different (and in my view much more dangerous) character.
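The threshold language comes from percolation theory, and the effect is easy to see in a toy model (my illustration, not a figure from the talk): treat cross-system dependencies as random edges among n components. Below an edge probability of about 1/n, the largest connected cluster stays tiny; just above it, a giant cluster (that is, a pathway for region-wide cascades) abruptly appears:

```python
import random

def largest_cluster_fraction(n, p, seed=0):
    # Fraction of nodes in the largest connected cluster of a random
    # graph G(n, p), via union-find. A toy stand-in for failure spread.
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)  # merge the two clusters

    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

# Edge probability as a multiple of the critical value 1/n: the giant
# cluster appears abruptly as the multiplier crosses 1.0.
n = 2000
for mult in (0.5, 0.9, 1.1, 1.5, 2.0):
    print(mult, round(largest_cluster_fraction(n, mult / n), 3))
```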
#AI #GenAI #GenerativeAI #PowerSector #NaturalGas #electricity #risk
Make of it what you will.
@firefoxwebdevs @jesterchen @duke_of_germany
Immediately restore the work of Japanese language translators that you paved over with AI slop
https://linuxiac.com/ai-controversy-forces-end-of-mozilla-japanese-sumo-community/
I'm not sure if it actually has, but if AI has become consistently more reliable than search engine results, then I wonder when the turning point was.
My guess is the release of Gemini 2.5 Pro, or possibly OpenAI's o1. These were two of the earliest reasoning models.
#AI #ArtificialIntelligence #GenAI #LLM #Tech #Technology #Search #Google #Gemini #OpenAI #MachineLearning #FutureOfTech #Innovation
So, now they know how real creators feel after having been ripped off by "AI"…
https://futurism.com/artificial-intelligence/ai-prompt-plagiarism-art
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
This is a survey all #Discord users need to fill out. Discord wants to know if we want AI to run the app. It'd be using data from pictures, conversations, voice notes, live streams, and art, 'learning' from us in the app, if they don't get strong enough pushback.
Let them know how you feel before they ruin that app for everyone as well.
It's an official survey and it doesn't even take 5 mins. Please boost and share in your servers too.
https://discord.sjc1.qualtrics.com/jfe/form/SV_5BGtstVUidXadts
#genAI #AIslop #AI #gaming #streaming #streamer #noToAI #gamer #gamers #womenWhoGame #resist
Thinking of starting a list of popular open source software developed using LLMs and by LLM boosters, along with alternatives that can be tried instead. LLM use in FOSS should be socially shunned.
I never stopped using evolutionary computation. I'm even weirder and use coevolutionary algorithms. Unlike EC, the latter have a bad reputation as being difficult to apply, but if you know what you're doing (e.g. by reading my publications 😉) they're quite powerful in certain application areas. I've successfully applied them to designing resilient physical systems, discovering novel game-playing strategies, and driving online tutoring systems, among other areas. They can inform more conventional multi-objective optimization.

I started to put up notes about (my way of conceiving) coevolutionary algorithms on my web site, here. I stopped because it's a ton of work and nobody reads these as far as I can tell. Sound off if you read anything there!

Many challenging problems are not easily "vectorized" or "numericized", but might have straightforward representations in discrete data structures. Combinatorial optimization problems can fall under this umbrella. Techniques that work directly with those representations can be orders of magnitude faster/smaller/cheaper than techniques requiring another layer of representation (natural language for LLMs, vectors of real values for neural networks). Sure, given enough time and resources clever people can work out a good numerical re-representation that allows a deep neural network to solve a problem, or prompt engineer an LLM. But why whack at your problem with a hammer when you have a precision instrument?
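For anyone who has never seen one, here is a bare-bones genetic algorithm operating directly on a discrete representation (a bitstring). The OneMax fitness is a stand-in of my choosing; a real application plugs in its own encoding and evaluation:

```python
import random

rng = random.Random(42)
GENOME_LEN, POP, GENS, MUT = 64, 40, 60, 1.0 / 64

def fitness(bits):
    # OneMax stand-in: maximize the number of 1s. Real problems evaluate
    # the discrete structure directly; no numerical re-representation.
    return sum(bits)

def mutate(bits):
    # Flip each bit independently with probability MUT.
    return [b ^ (rng.random() < MUT) for b in bits]

def crossover(a, b):
    # One-point crossover of two parent bitstrings.
    cut = rng.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # truncation selection
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best), "/", GENOME_LEN)
```

A coevolutionary variant would replace the fixed fitness function with evaluation against another evolving population, which is where the design subtleties mentioned above come in.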
#AI #GenAI #GenerativeAI #LLMs #EvolutionaryComputation #GeneticAlgorithms #GeneticProgramming #EvolutionaryAlgorithms #CoevolutionaryAlgorithms #Cooptimization #CombinatorialOptimization #optimization
#AI #GenAI #GenerativeAI #AISlop #NoAI #Microsoft #Copilot #MicrosoftOffice #LibreOffice #foss
RE: https://flipboard.com/@futurism/futurism-1lupih3cz/-/a-YWYzI2GYRpieAtyotW4atw%3Aa%3A1737388686-%2F0
“Won’t somebody please think of the billionaires!?”
Hey guys, gals, people and pets, Satya Nadella is getting tired of people calling AI Slop output AI Slop. If everyone can stop, that'd be great, as it is upsetting Microslop's shareholders.
Don't start calling Microslop Microslop or use the #microslop tag as it might burst the AI bubble and then Microslop might have to take SlopPilot out of Windows. As you know, that would be terrible.
#microslop #microsoft #copilot #sloppilot #AI #genAI #AISlop #AIBubble
The Real #AI Assistant Won't Need a Screen — From the Telegraph to the Telescreen, and Now, the Voice in Your Ear 🤔
Every revolutionary interface in history came with promises of liberation. The telegraph would connect us. Radio would educate the masses. Television would bring the world into our living rooms. The smartphone would put infinite knowledge in our pockets.
And yet, here we are — a society of humans hunched over glowing rectangles, doom-scrolling through our own anxiety.
Now Silicon Valley tells us the next revolution is audio. #OpenAI is building voice-first devices. #Meta turned Ray-Bans into listening machines. #Google wants to narrate your search results. Startups are making AI rings so you can literally talk to the hand.
The pitch? Freedom from screens. A return to something more human.
I'll admit — I want to believe it. Voice is our most natural interface. We don't type at our friends while walking down the street. We talk. And if AI is ever going to truly weave itself into daily life, it needs to meet us where we are — in motion, in conversation, in the flow of living.
But let's not pretend this is just about liberation.
Jony Ive says he wants to "right the wrongs" of past gadgets. Beautiful sentiment. But Orwell would remind us that the most effective surveillance doesn't feel like surveillance at all. It feels like convenience. It feels like a friend. Always-listening devices. Everywhere. In your home, your car, on your face. The Humane AI Pin crashed and burned because "screenless" doesn't automatically mean "better."
The rush toward AI "companions" should give us pause.
Are we building tools, or dependencies?
The shift to voice feels inevitable.
Whether it's wise — that's the conversation we should be having.
What do you think?
Sources: https://techcrunch.com/2026/01/01/openai-bets-big-on-audio-as-silicon-valley-declares-war-on-screens/
#AI #Technology #Society #VoiceAI #HumanMachineInteraction #cybersecurity #infosec #tech #robotic #genai
I'm close to muting everyone who posts/boosts a sweeping "GenAI doesn't work at all ever, and can't" statement ...
... alongside everyone who claims they work *great* and doesn't mention their ethics (or lack thereof).
I'm guessing my feed would be very empty afterwards.
"To reiterate first principles, my main problem with Artificial Intelligence, as its currently sold, is that it’s a lie."
Seamas O'Reilly in @Tupp_ed's (Guest) Gist, today:
https://www.thegist.ie/guest-gist-2026-our-already-rotting-future/
#TheGist #MastoDaoine #AI #GenAI #ML #LLM #ArtificialIntelligence
Who will protect us from the Oligarchs and their exploitative technologies?
Can we sue them for harming us?
Finally read this essay that I saw so highly recommended — and those recommending it were right. Love heartfelt writing? Hate AI slop? Give yourself a treat to start the new year and read this glorious piece by @WeirdWriter if you haven’t done so yet.
https://sightlessscribbles.com/the-colonization-of-confidence/
h/t @mayintoronto
Please
Do not comply with the revolution-from-above that squarely aims at wrecking everything that makes life worth living. Not in advance, not in the vain hope of profiteering from the destruction.
Make no mistake, #GenAI is an inherently fascist project. It doesn't need to be "democratized", because it has little to offer, at great cost.
It needs to be dismantled before it dismantles society.
If I could ask just one thing of 2026:
Let it be the year of widespread radical resistance to "#GenAI".
Anthropic suppresses the AGPL-3.0 in Claude's outputs via content filtering.
I've reached out to them via support for a rationale, because none of the explanations that I can think of on my own are charming.
The implications of a coding assistant deliberately influencing license choice are ... concerning.
#OpenSource #OSI #Anthropic #GenAI #LLMs #FreeSoftware #GNU #AGPL #Affero #Claude #ClaudeCode
#AI #LLM #noAI #genAI you are all dumb as rocks and shooting yourselves in the foot, regardless of what your goals are, by painting in broad strokes against vague software concepts instead of against bad individuals and bad usages/implementations, when the technologies could be, and objectively already are, helping humanity forward
and I'm tired of pretending they're not, just to avoid being cancelled by empty-brained bandwagoners who have about as much nuance as a thermonuclear warhead
What a great interview!
@dwaynemonroe is a very thoughtful technologist, creative writer, arts appreciator, antifascist and anticapitalist.
So of course, he opposes Gen AI. Gen AI is a poison to everything we love, and a benefit to everything we hate.
Opposing Gen AI effectively requires fighting capitalism and working *outside* of the system, not within it.
I name the names of liberals (liberals are rightwing) in the apparent Gen AI resistance who are actually helping Sam Altman by sucking the oxygen out of the actual anti Gen AI movement.
You should check out the links to Monroe's writing. A great example of his work is the piece he wrote for The Nation in June 2022 about how harmful Gen AI would be. That predates the launch of ChatGPT (the first publicly available "Gen AI") by about five months.
"I announced my divorce on Instagram and then AI impersonated me."
https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
What was and still is at stake is the balance between technology and humanity struck by the US federal government. The same incoherence that led us to where we are with crypto is almost surely also at play with the apparent lack of movement around generative AI.
#USPol #democrats #DemocraticParty #crypto #cryptocurrency #StableCoins #ShadowBanking #regulation #AI #GenAI
#Mozilla #Firefox #DarkPatterns #antifeatures #AISlop #NoAI #NoAIWebBrowsers #AICruft #AI #GenAI #GenerativeAI #LLMs #tech #dev #web
A good piece on how GenAI is flooding the field. I too have worked with ML for a while and feel similarly.
"Having done my PhD on AI language generation (long considered niche), I was thrilled we had come this far. But the awe I felt was rivaled by my growing rage at the flood of media takes and self-appointed experts insisting that generative AI could do things it simply can’t, and warning that anyone who didn’t adopt it would be left behind."
Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash | Business | The Guardian
https://www.theguardian.com/australia-news/2025/dec/19/proposal-australian-copyrighted-material-train-ai-abandoned-after-backlash
Good!
Hard to believe, but apparently some good stories can be told even without me. I guess I'm not the conditio sine qua non for a great conversation... a good guest is, and Sean Martin, CISSP can certainly hold the wheel.
I wasn't available, and I missed this one but I'm more than happy to help spread the word about yet another fantastic Brand Story told via Studio C60 / ITSPmagazine. This one's worth your attention.
Julian Hamood from TrustedTech joins Sean to talk about something most organizations are getting dangerously wrong: #AI readiness.
Here's the uncomfortable truth—AI doesn't clean up your mess. It makes it louder. Faster. More confident-sounding. And potentially more damaging.
Data scattered across personal drives, legacy servers, and random SharePoint sites? AI will find it.
Inconsistent permissions nobody remembers setting up? AI will exploit them. Architectural spaghetti connecting clouds, on-prem systems, and #SaaS platforms that were never meant to talk to each other? AI will inherit every flaw and present the results with the certainty of an oracle.
This conversation digs into what real readiness looks like—data classification, access controls, architectural clarity, and governance that doesn't get bypassed because someone in sales wanted a shiny new copilot.
Watch. Listen. Think before you deploy.
https://www.linkedin.com/pulse/ai-adoption-without-readiness-when-ambition-collides-data-tzite
#AI #Cybersecurity #DataGovernance #BrandStory #StudioC60 #ITSPmagazine #infosec #data NRPR Group, Inc #infosecurity #aiready #tech #technology #genai #agenticai
Love some of the lines from this AP article about #agenticai …
- “For technology adopters looking for the next big thing, “agentic AI” is the future. At least, that’s what the marketing pitches and tech industry T-shirts say.”
- “What makes an artificial intelligence product ‘agentic’ depends on who’s selling it.”
- “Chatbots, however useful, are all talk and no action.”
It’s all true!
https://apnews.com/article/agentic-ai-agents-microsoft-amazon-518d6ae159d1f4d3343e98a456cb5221
Firefox dev clarifies there will be an AI 'kill switch' https://www.gamingonlinux.com/2025/12/firefox-dev-clarifies-there-will-be-an-ai-kill-switch/
Here's a post from an official Firefox Mastodon account suggesting such a master kill switch does not exist yet, but will be added in a future release:
https://mastodon.social/@firefoxwebdevs/115740500373677782
That's not as bad as it could be. It's bad they're stuffing AI into a perfectly good web browser for no apparent reason other than vibes or desperation. It's very bad if it's on by default; their dissembling post about it aside, opt-in has a reasonably clear meaning here: if there's a kill switch, then that kill switch should be off by default. But at least there will be a kill switch.
In any case, please stop responding to my post saying there's a master kill switch for Firefox's AI slop features. From the horse's mouth, and from user experience, there is not yet.
Furthermore, when there is a master kill switch, we don't know whether flipping it will maintain the previous state of all the features it controls. In other words it's possible they'll have the master kill switch turn on all AI features when the switch is flipped to "on" or "true", rather than leaving them in whatever state you'd set them to previously. Perhaps you decide to turn the kill switch on because there are a handful of features you're comfortable with and you want to try them; will doing so mean that now all the AI features are on? We won't know till it's released and people try this. So, in the meantime, it's still good practice to keep an eye on all these configuration options if you want the AI off.
#AI #GenAI #GenerativeAI #LLMs #web #tech #dev #Firefox #Mozilla #AISlop #NoAI #NoLLMs #NoAIBrowsers
@firefoxwebdevs @Norgg This is nonsense equivocation.
It is 100% clear to anyone not trying to run cover for #Mozilla that multiple #GenAI features have already been introduced into #Firefox as opt-out rather than opt-in. This isn't questionable or debatable or complicated, it's simple fact.
You've given us no reason to believe this is going to change.
Trying to obfuscate this away in this thread makes it clear you're being disingenuous, whether or not you realize you are.
@firefoxwebdevs @Norgg Furthermore, opt-in isn't even enough.
It's not that we want it to be opt-in, we want it to not be there at all, because #GenAI is bad for tech and bad for the people whose content is stolen and bad for culture and bad for the whole fucking world, and we want #Mozilla to take a stand for what is RIGHT, not jump on the catastrophically bad AI hype train and join every other company in the bubble.
Doing AI at all, opt-in or not, is doing the wrong thing.
#Firefox
“AI shines wherever there’s high event volume and the need to aggregate weak signals into a meaningful picture.”
- Norman Gottschalk, Global CIO & CISO, Visionet Systems
This interview explores:
• AI-driven phishing and insider risk
• Governance gaps from shadow AI usage
• Why AI cannot judge intent without humans
#Romancelandia @bookstodon #GenAI #Libby #Audiobooks
ICYMI: how to spot GenAI audiobooks on Libby
https://mashable.com/article/how-to-spot-ai-audiobooks-libby
After reading this piece, it's hard not to see #anthropic as a cult, and a deeply creepy one at that.
https://www.404media.co/anthropic-exec-forces-ai-chatbot-on-gay-discord-community-members-flee/
Third: Firefox will grow from a browser into a broader ecosystem of trusted software. Firefox will remain our anchor. It will evolve into a modern AI browser and support a portfolio of new and trusted software additions.

He says the word "trust" a whole bunch of times yet intends to turn an otherwise nice web browser into a slop-slinging platform. I don't expect this will work out very well for anyone.
"It will evolve into a modern AI browser" sounds like a threat. Good way to start off on the right foot, new Mozilla CEO (sarcasm).
#AI #GenAI #GenerativeAI #AntiFeatures #DarkPatterns #AISlop #firefox #mozilla #NoAI
Thanks to @korporal and the @textilkombinat, there are now T-shirts with a slop stop sign on them.
The British Public were consulted over the AI Bubble; 95% of respondents rejected it.
But in the interests of "Balance", Government and Media will provide that tiny minority with equal coverage....like they don't with Trans people.
Take your 'AI' and shove it!
#AIBubble #TechBros #Technology #GenAI #Music #Artists #UKPOL #UKPOLITICS
RE: https://infosec.exchange/@jti42/115711794083539024
A follow-up on the #LLM / #GenAI / #AI post - some hypothetical scenarios on how dealing with what they're trained on could possibly look like.
Pure nay-sayers need not apply 😉
@yoasif Many of the current frontier LLM models have indeed been built in dubious ways from an intellectual property perspective. This is nevertheless not an inherent property of LLM technology.

Image generation models built from fully licensed training sets exist. At different price points, but they do exist.

Given their popularity, the publicly perceived utility of the frontier models created in criticizable ways is hard to deny, whether open weight model or commercially sold one. Damage is done, and perceived utility exists either way.

We're not going to resolve this issue here, but consider this hypothetical idea:
- Assume a list of IP that a commercial frontier model of the dubious class has consumed could be created.
- Now, let's say that, based on the size/complexity/some other measure of the IP consumed, every holder of the rights to said IP is awarded a monthly royalty payment based on the profits made with said model, until provable retirement of said commercial model.

Would this change your stance?
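The arithmetic of that hypothetical is easy to sketch; the holders and figures below are invented purely for illustration:

```python
# Pro-rata royalty split for the hypothetical above: divide a month's
# model profit among rights holders by a size measure of consumed IP.
# All names and numbers are invented for illustration.
monthly_profit = 1_000_000.0
consumed = {"holder_a": 120_000, "holder_b": 45_000, "holder_c": 5_000}

total = sum(consumed.values())
royalties = {holder: monthly_profit * size / total
             for holder, size in consumed.items()}
print(royalties)  # paid monthly until provable retirement of the model
```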
Or consider a future build of such models where IP holders can submit their IP for inclusion for similar considerations, generating an opt-in model.
How would the sentiment change if such a model were fully open, i.e. completely reproducible (within the boundaries of the nondeterministic nature of LLM training) with training data, harness, etc., and licensed in an OSI-approved way?
What if such a model were capable of attributing its output to what it referred to, in the proper license-compliant way? (Not possible with the current tech, likely, but we're playing hypothetical games anyway...)
#GenAI has already wasted my time twice just today: I was looking up how to fix #Dropbox in #Linux, notably the fact that it doesn't want to use an NTFS drive and also that the menu doesn't show up on click.
First, looking for a solution on the ZorinOS forum, I read through a long answer to a similar problem, thinking "that doesn't really solve anything but I'll keep reading just in case" - only to realise from a disclaimer at the end that it was AI-generated. Why do people do this??
Second, I sent a help request to Dropbox and got an answer asking for some more info. As I was writing a reply, I realised all the info it asked for was already in my original post... And then saw, same thing, that this answer had been AI-generated, with a disclaimer at the end of the post.
If they have to give an AI-generated answer (why though??) they could at least put the disclaimer at the top so we don't waste our time reading the whole thing?? 🤦
Apparently Amazon puts so little value on its Prime Original series that they can't even be bothered to write summaries about the episodes. Instead, they were getting GenAI to do it.
And it got stuff wrong. Obviously.
I feel sorry for the production crew. Imagine putting in all that work and creativity only to be told that people watching it will be basing their expectations and understanding on a soulless, generic, averaged description of your story line 😬
We keep talking about GenAI as if it lives in clouds and servers. In reality, it lives in browser tabs, copy fields, and file uploads. And that’s exactly where most security strategies go blind.

Oh, and banning AI will never work. Productivity always finds a way around policy. The smarter move is to govern behavior where it actually happens: inside the browser, at the moment data leaves your hands.

Clear policies define what should never touch a prompt. Isolation keeps sensitive work separate from exploratory AI use. And real-time controls on copy, paste, and upload turn GenAI from a risk multiplier into a managed capability. If your GenAI security strategy starts after data leaves the browser, you’re already late to the conversation.
TL;DR
🧠 GenAI risk lives in browser behavior, not models
⚡ Blocking AI fails; governed use scales
🎓 Policy, isolation, and edge controls actually work
🔍 Secure prompts before data ever leaves
https://thehackernews.com/2025/12/securing-genai-in-browser-policy.html
#GenAI #BrowserSecurity #DataProtection #Infosec #security #privacy #cloud #cybersecurity
The Street Fighter movie trailer looks completely made by AI.
But what truly terrifies me is that it's probably not even true; instead, they went out of their way to make it look like AI slop on purpose, for REASSONSSS that my non-American brain can't/won't fully comprehend.
Excellent post by @[email protected]:
> myrmepropagandist @futurebird 2025-12-11 06:31 CST
>
> This is an excellent video. This is the message. Perhaps we need to refine it
> more. Find ways to communicate it more clearly. But this is the correct take on
> LLMs, so-called-AI and the proliferation of these tools to the general public.
> #LLM #llms #ai #genAI #video #slop #slopocalypse #enshittification
>
> https://www.youtube.com/watch?v=4lKyNdZz3Vw
— https://sauropods.win/users/futurebird/statuses/115700943703010093
I'm about 11 minutes into it. He's not in favor of LLM slop, but he's also being very critical of some of the hair-on-fire alarmism.
ReproducibiliTea in the HumaniTeas is proud to present its fourth season with an exciting programme of guest speakers and topics ranging from reproducible pipelines to data sharing, research integrity in the context of #GenAI and participant consent: https://ub.uni-koeln.de/en/courses-consultations/specials/reproducibilitea-in-the-humaniteas! 🫖 🍪
#humanities #SocialScience #reproducibility #OpenScience #OpenResearch #ReproducibiliTea
Join us TODAY 16:00-17:30 CET for a special hybrid ReproducibiliTea in the HumaniTeas session on research integrity and #reproducibility in the age of #GenAI with guest speaker @dingemansemark. The recommended (but optional!) preparatory reading is: https://osf.io/preprints/osf/2c48n_v1.
Join us at @unibibkoeln (room 4.06) for a selection of tea and Christmas sweets or DM me for the Zoom link.
Compare and contrast
Pluralistic: Daily links from Cory Doctorow vs. The rise of AI denialism
Now I want to talk about how they're selling AI. The growth narrative of AI is that AI will disrupt labor markets. I use "disrupt" here in its most disreputable, tech bro sense: the promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.
That's it.
That's the $13T growth story that Morgan Stanley is telling. It's why big investors and institutionals are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family's financial security.
Now, if AI could do your job, this would still be a problem. We'd have to figure out what to do with all these technologically unemployed people.
But AI can't do your job. It can help you do your job, but that doesn't mean it's going to save anyone money.
vs
So why has the public latched onto the narrative that AI is stalling, that the output is slop, and that the AI boom is just another tech bubble that lacks justifiable use-cases? I believe it’s because society is collectively entering the first stage of grief — denial — over the very scary possibility that we humans may soon lose cognitive supremacy to artificial systems. Believe me, I know this future is hard to accept. I’ve been writing about the destabilizing and demoralizing risks of superintelligence for well over a decade, and I also feel overwhelmed by the changes racing toward us.
Why does AI advancement feel so different than other technologies? Eighty-two years ago, philosopher Ayn Rand wrote these three simple sentences: “Man cannot survive except through his mind. He comes on earth unarmed. His brain is his only weapon.” For me, these words summarize our self-image as humans — we are the superintelligent species. This is the basis of our success and survival. And yet, we could soon find ourselves intellectually outmatched by widely available AI models that can outthink us on all fronts, solving problems infinitely faster, more accurately, and yes, more creatively than any human could.
or
Art is a transfer of a big, numinous, irreducible feeling from an artist to someone else. But the image-gen program doesn't know anything about your big, numinous, irreducible feeling. The only thing it knows is whatever you put into your prompt, and those few sentences are diluted across a million pixels or a hundred thousand words, so that the average communicative density of the resulting work is indistinguishable from zero.
It's possible to infuse more communicative intent into a work: writing more detailed prompts, or doing the selective work of choosing from among many variants, or directly tinkering with the AI image after the fact, with a paintbrush or Photoshop or The Gimp. And if there will ever be a piece of AI art that is good art – as opposed to merely striking, or interesting, or an example of good draftsmanship – it will be thanks to those additional infusions of creative intent by a human.
And in the meantime, it's bad art. It's bad art in the sense of being "eerie," the word Mark Fisher uses to describe "when there is something present where there should be nothing, or there is nothing present when there should be something."
AI art is eerie because it seems like there is an intender and an intention behind every word and every pixel, because we have a lifetime of experience that tells us that paintings have painters, and writing has writers. But it's missing something. It has nothing to say, or whatever it has to say is so diluted that it's undetectable.
The images were striking before we figured out the trick, but now they're just like the images we imagine in clouds or piles of leaves. We're the ones drawing a frame around part of the scene, we're the ones focusing on some contours and ignoring the others. We're looking at an inkblot, and it's not telling us anything.
Sometimes that can be visually arresting, and to the extent that it amuses people in a community of prompters and viewers, that's harmless.
vs.
On the creativity front, there is no doubt that today’s AI models can produce content faster and more varied than any human. The primary argument against AI being “creative” is the belief that true creativity requires inner motivation, not just the production of novel artifacts. I appreciate this argument, but find it circular because it defines a process based on how we experience it, not based on the qualitative value of the output. In addition, we have little reason to believe AI systems will lack motivation — we simply don’t know whether AI will ever experience intentions through an inner sense of self the way humans do.
As a result, many researchers say that AI will only be good at imitating human creativity rather than having it. This could turn out to be correct. But if AI can produce original work that rivals or exceeds most humans, it will still take away jobs and opportunities on a large scale; just ask any commercial artist. Also, there is the argument that AI systems only create derivative works based on human artifacts. This is a fair point, but it is also true of humans: We all stand on the shoulders of others, our work influenced by everything we consume. I believe AI is headed for a similar form of creativity — societal influence mixed with random sparks of inspiration, and it will occur at superhuman speeds and scales.
or
Think of AI software generation: there are plenty of coders who love using AI, and almost without exception, they are senior, experienced coders, who get to decide how they will use these tools. For example, you might ask the AI to generate a set of CSS files to faithfully render a web-page across multiple versions of multiple browsers. This is a notoriously fiddly thing to do, and it's pretty easy to verify if the code works – just eyeball it in a bunch of browsers. Or maybe the coder has a single data file they need to import and they don't want to write a whole utility to convert it.
Tasks like these can genuinely make coders more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it's clear they're not looking to make some centaurs.
They want to fire a lot of tech workers – 500,000 over the past three years – and make the rest pick up their work with coding, which is only possible if you let the AI do all the gnarly, creative problem solving, and then you do the most boring, soul-crushing part of the job: reviewing the AIs' code.
And because AI is just a word guessing program, because all it does is calculate the most probable word to go next, the errors it makes are especially subtle and hard to spot, because these bugs are literally statistically indistinguishable from working code (except that they're bugs).
...
But guess who tech bosses want to preferentially fire and replace with AI? Senior coders. Those mouthy, entitled, extremely highly paid workers, who don't think of themselves as workers. Who see themselves as founders in waiting, peers of the company's top management. The kind of coder who'd lead a walkout over the company building drone-targeting systems for the Pentagon, which cost Google ten billion dollars in 2018.
For AI to be valuable, it has to replace high-wage workers, and those are precisely the experienced workers, with process knowledge, and hard-won intuition, who might spot some of those statistically camouflaged AI errors.
vs.
To put the rate of change in perspective, let’s jump back five years and look at a large-scale survey given to computer scientists in late 2019 and early 2020. Participants were asked to predict when AI would be able to generate original code to solve a problem. Specifically, they were asked to predict when AI would be able to “write concise, efficient, human-readable Python code to implement simple algorithms like Quicksort.” In the world of programming, students are taught to do this as undergrads, so it’s not a particularly high bar. Still, the respondents predicted a 75% chance this would happen by 2033.
It turns out, AI advanced much faster than expected. Today, large language models can already write computer code at levels that go far beyond the question asked in the 2020 survey. This summer, for example, GPT-5 and Gemini 2.5 Pro took part in the World Finals of the 2025 International Collegiate Programming Contest (ICPC). The competition brings together coding teams from top universities to compete in solving complex algorithmic questions. GPT-5 came in first, beating all human teams and scoring a perfect score. Gemini 2.5 Pro came in second. And yet we have countless influencers referring to the output of these very same AI systems as “slop.”
Of course, current AI coding systems are far from flawless, but today’s capabilities were unimaginable by most AI professionals only five years ago. Also, we can’t forget that human coders are far from flawless. Perfection is not the metric we use to judge software development. This is why we have whole departments devoted to testing and quality control. When done by humans, coding is always an iterative process where you expect to produce errors, find errors, and fix errors. The same is true for many human endeavors. If you could read the first draft of any Pulitzer Prize-winning article, it’d likely be riddled with flaws that would make the author cringe. This is how we humans produce quality work — iterative refinement — and yet we judge AI systems by very different standards.
Besides the fact that Google has made the worst expectations come true after erasing the «no weapons» promise from their AI ethics policy: Anyone else noticed that he put an insecure HTTP URL in his Tweet?
I wonder how many soldiers are now entering state secrets into Chinese and Russian phishing sites?
Headline: The military’s new AI says ‘hypothetical’ boat strike scenario ‘unambiguously illegal’
#AI #GenAI #GenerativeAI #LLM #MilitaryAI #USMilitary #SaberRattling #geopolitics #USPol
This is an excellent video. This is the message. Perhaps we need to refine it more. Find ways to communicate it more clearly. But this is the correct take on LLMs, so-called-AI and the proliferation of these tools to the general public. #LLM #llms #ai #genAI #video #slop #slopocalypse #enshittification
This is why all the AI companies are pushing hard into having sext bots and generative adult content.
Their ONLY path to profitability is to get wired into your brain’s dopamine production even more than they already are for some people.
People will pay anything for their AI partner to keep existing and will pay even more if that AI partner can directly get them off.
We’re talking a level of technological dependence that the current round of AI psychosis cases could only dream of.
Humanity is cooked the instant you can have a generative AI romantic partner and affordable equipment necessary for your AI partner to believably fuck you.
Can I frame this and put it on my wall?
https://www.theguardian.com/technology/2025/dec/09/ai-researchers-are-to-blame-for-serving-up-slop
"AI is intellectual Viagra", and it hasn't left me, so I am exorcising it here. I'm sorry in advance for any pain this might cause.
#AI #GenAI #GenerativeAI #LLMs #DiffusionModels #tech #dev #coding #software #SoftwareDevelopment #writing #art #VisualArt
Nokod Security CEO Yair Finzi warns that internal attack surfaces created by citizen-built apps and AI agents now exceed traditional external threats.
“The single biggest risk now is the unmanaged internal attack surface created by citizen-built apps and AI agents.”
Full interview:
https://www.technadu.com/understanding-citizen-application-development-platforms-their-security-risks-and-the-rise-of-gen-ai/615256/
#NoCode #LowCode #GenAI #CyberSecurity #AppSec #NokodSecurity
In a GitHub issue about adding LLM features:
I definitely think allowing the user to continue the conversation is useful. In my own use of LLMs I tend to often ask followup questions, being able to do so in the same window will be useful.

In other words he likes LLMs and uses them himself; he's probably not adding these features under pressure from users. I can't help but wonder whether there's vibe code in there.
In the bug report:
Wow, really! What is it with you people that think you can dictate what I choose to do with my time and my software? You find AI offensive, dont use it, or even better, dont use calibre, I can certainly do without users like you. Do NOT try to dictate to other people what they can or cannot do.

"You people", also known as paying users. He's dismissive of people's concerns about generative AI, and claims ownership of the software ("my software"). He tells people with concerns to get lost, setting up an antagonistic, us-versus-them scenario. We even get scream caps!
Personally, besides the fact that I have a zero tolerance policy about generative AI, I've had enough of arrogant software developers. Read the room.
#AI #GenAI #GenerativeAI #LLMs #calibre #eBooks #eBookManagers #AISlop #AIPoisoning #InformationOilSpill #dev #tech #FOSS #SoftwareDevelopment
Here, Calibre, in one release, went from a tool readers can use to, well, read, to a tool that fundamentally views books as textureless content, no more than the information contained within them. Anything about presentation, form, perspective, voice, is irrelevant to that view. Books are no longer art, they're ingots of tin to be melted down.
It is completely irrelevant to me whether this new slopware is opt-in or opt-out. Its mere presence and endorsement fundamentally undermines that stance, that it is good, actually, if readers and authors can exist in relationship to each other without also being under the control of an extractive mindset that sees books as mere vehicles, unimportant as artistic works in and of themselves.

https://wandering.shop/@xgranade/115671289658145064
#AI #GenAI #GenerativeAI #LLMs #eBooks #eBookManager #calibre #AISlop
Marking #GenAI work with mandated provenance icons is one of the few means left to us to #regulateAI, and the corrupt "representatives" are NOT going to comply with people's requests, for the same reason they did nothing about similar momentum to mark Photoshopped press images.
Ideally you want:
🙍♀️= Content fully created by a human
🛠️ = Some automation used
🤖 = Fully machine generated
Then how come we still get this from #AI #GENAI? WTF is going on with these people??
#meta #amazon #google #microsoft #chatgpt #openai #anthropic #nvidia
Before the automobile industry invented the catalytic converter, the costs of reducing air pollution seemed astronomical, enough to bankrupt the entire industry. After they invented the catalytic converter, the costs were manageable. And they only invented it because they were faced with the threat of being shut down.

Industries creating harms often claim that controls and regulations are impossible, would bankrupt them, etc., trying to make their existence into a zero-sum game (for some people to have the benefit of our industry, other people must suffer). AI companies claim they must steal copyrighted works because they could not exist otherwise; or be allowed to use as much electricity as they demand in spite of the costs. But it's B.S., and we should stop accepting this rhetoric. Forced to innovate to reduce harms, these industries have innovated, and made themselves even more profitable than they were when they were dragging their feet about it like children who don't want to clean their rooms.
From Is Climate Change An Externality
#AI #GenAI #GenerativeAI #law #copyright #pollution #climate #innovation
https://www.brandeis.edu/online/academics/microcredentials/index.html
Note the two "credentials" are AI for STEM and prompt engineering.
Nobody has any business calling these things "credentials", or suggesting they are credentials by naming them "microcredentials". You can't go to your boss waving your prompt engineering microcredential and get a raise, or even a microraise for that matter. You can't access more competitive jobs after obtaining one of these microcredentials.
If they were serious about credentialing people, these would definitely not be the first two made available. They couldn't be more transparently venal. This isn't about credentialing, it's an attempt to profit from the AI bubble.
#AI #GenAI #GenerativeAI #AIBubble #HigherEd #academia #USUniversities #brandeis #BrandeisUniversity #microcredentials
https://prospect.org/2025/11/27/beyond-america-first/
South-South relationships etc. are great. The only problem is that the "north" is actively trying to burn everything down. Case in point: the #US driving #climatechange, raising interest rates, pouring insane cash into #military and #ai #genai, and being generally, crazily profligate.
Think it through. If you don't accept the use of climate-destroying, electricity-and-fresh-water-sapping, job-destroying, economy-thrashing--and yet mediocre or poorly performing!--technology created by multi-trillion-dollar sociopathic entities, then you are preventing people with less privilege than you have from living their best lives. You are preventing them from learning how to code. You are preventing them from obtaining coveted jobs in the tech sector. You are preventing them from having access to information. You, personally, are responsible for all this. Not the multi-trillion-dollar sociopathic entities who've not only created this technology and forced it on us but contributed to creating the less-privileged conditions of the people you are supposedly responsible for with your individual choices. Not the governments that neglected to enforce existing laws that would have prevented such multi-trillion-dollar sociopathic entities from forming in the first place, let alone creating such a technology--while also creating the conditions that led to people being less privileged. No, they are not responsible. You are. I am.
That doesn't make any sense.
Neoliberalism's greatest trick has been to shift responsibility for any problems away from the powerful and onto individuals who are not empowered to fix anything, all while convincing everyone that this is right and proper. Large corporations do not cause a plastic pollution problem; you and I do, by not separating our recycling. Large corporations, governments and militaries do not cause CO2 pollution and climate damage; you and I do, by using incandescent lightbulbs and non-electric/non-hybrid cars or eating meat. Lack of regulation and large agribusiness practices are not to blame for poor food quality; you and I are, for buying what they sell instead of going organic and joining a CSA. Etc. ad infinitum. Large, powerful entities routinely generate a problem, then tell you and me that we are responsible for the problem as well as for fixing it. Never mind that these entities could nudge their own behavior a bit and move the needle on the problem far more than masses of people could no matter how organized they were. Never mind that these entities could be constrained from causing such problems in the first place.
We are watching a new variation of this pattern come into being right in front of our eyes with AI. We should stop accepting these fictions. You are neither ableist nor a gatekeeper for resisting AI. You are, instead, attempting to forestall the further degradation of conditions for everyone, which starts this same cycle anew.
#AI #GenAI #GenerativeAI #LLM #DiffusionModels #neoliberalism #depoliticization
Francis Galton, pioneering figure of the eugenics movement, believed that good research practice should consist in “gathering as many facts as possible without any theory or general principle that might prejudice a neutral and objective view of these facts” (Jackson et al., 2005). Karl Pearson, statistician and fellow purveyor of eugenicist methods, approached research with a similar ethos: “theorizing about the material basis of heredity or the precise physiological or causal significance of observational results, Pearson argues, will do nothing but damage the progress of the science” (Pence, 2011). In collaborative work with Pearson, Weldon emphasised the superiority of data-driven methods which were capable of delivering truths about nature “without introducing any theory” (Weldon, 1895).
From The Immortal Science of ML: Machine Learning & the Theory-Free Ideal.
I've lost the reference, but I suspect it was Meredith Whittaker who's written and spoken about the big data turn at Google, where it was understood that having and collecting massive datasets allowed them to eschew model-building.
The core idea being critiqued here is that there's a kind of scientific view from nowhere: a theory-free, value-free, model-free, bias-free way of observing the world that will lead to Truth; and that it's the task of the scientist to approximate this view from nowhere as well as possible.
#AI #GenAI #GenerativeAI #LLMs #science #DataScience #ScientificObjectivity #eugenics #ViewFromNowhere
The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.
From The reanimation of pseudoscience in machine learning and its ethical repercussions here: https://www.cell.com/patterns/fulltext/S2666-3899(24)00160-0. It's open access.
In other words ML--which includes generative AI--is smuggling long-disgraced pseudoscientific ideas back into "respectable" science, and rejuvenating the harms such ideas cause.
#AI #GenAI #GenerativeAI #LLMs #MachineLearning #ML #AIEthics #science #pseudoscience #JunkScience #eugenics #physiognomy
#AI #GenAI #GenerativeAI #LLMs #tech #dev #DataScience #science #ComputerScience #EcologicalRationality
The weekend theme is #AI + coding! A friend sent me this piece that mentions my comments on #freesoftware #opensource licensing and #genAI.
LLMs have a place in our @ivycyber business but it's a very carefully controlled one, and not in the codebase. https://psafe.ly/TRySSC
What the fuck am I supposed to do with a #CoPilot button? #genAI trash enforced by #Microsoft...
Does anyone know of a safe way to get the icon off of an MSI Venture laptop keyboard?
Or maybe a way to pull the key off and replace it?
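A software-side workaround, while waiting for a hardware answer: grab the keyboard and drop the key's events before any application sees them. This is a speculative sketch using the python-evdev library on Linux; it assumes the Copilot key emits a LeftShift+LeftMeta+F23 chord (as Microsoft's spec describes), and the device node below is hypothetical, so verify both with evtest on your machine.

# Speculative sketch: swallow the Copilot key's F23 events on Linux.
# Assumes the key sends LeftShift+LeftMeta+F23; verify with `evtest`.
# Needs the python-evdev package and read access to /dev/input.
from evdev import InputDevice, UInput, ecodes

dev = InputDevice("/dev/input/event3")  # hypothetical node; find yours with evtest
dev.grab()  # take exclusive control so the raw events go nowhere else
ui = UInput.from_device(dev, name="copilot-filter")  # virtual keyboard to re-emit through

try:
    for event in dev.read_loop():
        # Drop the F23 part of the chord; the stray Shift+Meta that
        # remains is just modifiers and does nothing on its own.
        if event.type == ecodes.EV_KEY and event.code == ecodes.KEY_F23:
            continue
        ui.write_event(event)
        ui.syn()
finally:
    dev.ungrab()
    ui.close()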
Radical action via Simple Sabotage/Malicious Compliance techniques.
It's scorchio.
From Dan McQuillan
https://www.danmcquillan.org/resisting_genai_highered_cjuu.html
#GenAI #ai #education #universities #highereducation #academia #academicchatter
Dan McQuillan is a Senior Lecturer in Critical AI. He has a degree in Physics from Oxford and a PhD in Experimental Particle Physics from Imperial.
Evolution of the trash icon over the years
(just slightly modified from @catsalad 's https://infosec.exchange/@catsalad/115613773775837129)
I’ve been struggling to find acceptable uses of generative AI. So far, here are some use cases that I find *tolerable* today:
- Generating descriptions of images for accessibility or handsfree communication
- Text to speech conversion
- Video captioning
- Creating chapter markers in videos/video transcription
- Generating code templates using freeform descriptions
- Detecting subjects/background in photos for editing
- Erasing minor distractions in photos
- Searching photo libraries with freeform descriptions
- Using models as stochastic, inaccurate internet search engines†
Tolerable use cases must be low stakes with minor consequences for error, and have substantial convenience benefits.
Even these use cases are contingent on the ethical training‡ of the models, and responsible use of resources.
† I still struggle with this one because I only use genAI to search when traditional search fails me, and the results are only *sometimes* ok.
…
As a young adult I read Brightness Falls from the Air by James Tiptree, Jr. I don't think I've read anything else by her, but this one has stayed with me more than most of the one billion books I consumed like Corn Pops as a young person.
It's a murder mystery and, IIRC, a pretty good one. More than that, the context--I think maybe an important part of the mystery resolution, hence the spoiler protection on this post--is that there is a cosmetic product valued across the galaxy, which turns out to be made by literally torturing some innocent native people and harvesting their tears, or something very much like that.
Anyway, I've been reading about how "convenient" GenAI is for so many people and how "smooth" it makes so many daily tasks for millions of folks, while the backend is an environment-polluting, labor-degrading, economy-lurching, internet-crappifying monstrosity.
It seemed relevant.
Silicon Valley and Wall Street are in sync: conjuring up sketchy credit deals that are pointing us toward another financial crash....
“We have sealed the deal on another financial crisis—the question is size,” said one former congressional staffer.
#AI #GenAI #GenerativeAI #AIBubble #grift #CasinoEconomy #FinancialCrash #GreatRecession
Let’s start with some very recent history. CoreWeave is a data center company that pivoted in 2022 from crypto. (In 2021, CoreWeave made its money by… mining Ethereum.) Essentially, CoreWeave is a landlord for compute: companies pay for the use of its server racks for AI projects....
CoreWeave chief executive officer Michael Intrator, a former hedge fund manager,...
“They have to continue to borrow to pay interest on the last loan.”
So, yeah. Looks like crypto, and crypto's Ponzi-scheme way of thinking, has slimed its way into the "real" economy after all.
Oh, and welcome back, global financial crash. We missed you. And eyyy, how you doing, Enron? Long time no see:
CoreWeave isn’t alone in its complex finances. Meta took on debt, using a SPV, for its own data centers. Unlike CoreWeave’s SPVs, the Meta SPV stays off its balance sheet. Elon Musk’s xAI is reportedly pursuing its own SPV deal.
"Complex finances" are what companies engage in when there isn't any there there (SPVs were Enron's "financial innovation" too).
Peter Thiel pulling his investments out of NVIDIA makes far more sense after reading this. Looks wobbly.
It is perhaps time to discuss the enormous stock sales from CoreWeave’s management team. Before the company even went public, its founders sold almost half a billion dollars in shares. Then, insiders sold over $1 billion more immediately after the IPO lockup ended....
“It’s noteworthy that people who have a good view on that business are cashing out,” says Leevi Saari, a fellow at the AI Now Institute.
And of course:
It makes a certain kind of cynical sense to view CoreWeave itself as, effectively, a special purpose vehicle for Nvidia.
#AI #GenAI #GenerativeAI #AIBubble #CoreWeave #CoreScientific #Microsoft #NVIDIA #crypto #grift #CasinoEconomy