buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc., all around the world.
This server runs the snac software and there is no automatic sign-up process.
»Artificial intelligence: GPT-4o makes disturbing statements after code training:
If LLMs are trained on vulnerabilities, they suddenly show misbehavior in completely unrelated areas. Researchers warn of the risks.«
In my opinion, this is anything but surprising; how do you see it? I even think that far more error-prone code is being produced as a result.
A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…
Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.
“Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”
Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.
Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.
CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.
The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.
What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.
You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.
Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you. #AI #LLMs #claude #chatgpt
Raspberry Pi's New AI Hat Adds 8GB of RAM for Local LLMs
https://www.jeffgeerling.com/blog/2026/raspberry-pi-ai-hat-2/
An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.
These facts are not specific to images, videos, or 3-d models of cats. These are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.
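To restate the cardinality point in symbols (standard results; the notation is mine, not the post's): there are only countably many Turing machines, since each is a finite string over a finite alphabet, so |\mathbb{R}_{comp}| = \aleph_0, while Cantor's diagonal argument gives |\mathbb{R}| = 2^{\aleph_0} > \aleph_0. Almost every real number is therefore uncomputable; and Peircean continua are richer still.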
Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.
Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized, applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.
With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?
This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.
So, now they know how real creators feel after having been ripped off by "AI"…
https://futurism.com/artificial-intelligence/ai-prompt-plagiarism-art
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
"AI-hallucinated case citations have moved from novelty to a core challenge for the courts, prompting complaints from judges that the issue distracts from the merits of the cases in front of them.
The growing burden placed by artificial intelligence became clear in 2025, two years after the first prominent instance of fake case citations popped up in a US court. There have been an estimated 712 legal decisions written about hallucinated content in court cases around the world, with about 90% of those decisions written in 2025, according to a database maintained by Paris-based researcher and law lecturer Damien Charlotin.
“It just is metastasizing in size,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. “So, it seems like this is something that is actually becoming a widespread enough nuisance that it will merit treatment as a core problem.”
The additional stress on courts comes amid an ongoing shortage of federal judges that’s led to case backlogs and left litigants in legal limbo. Judges themselves have gotten tripped up by AI hallucinations, and two of them were called out by Senate Judiciary Chairman Chuck Grassley (R-Iowa) for publishing faulty rulings."
Here's a little math problem that breaks at least one problem solver and one LLM. It breaks them in the sense that they don't know how to solve it, which is marginally better than spewing out authoritative-looking nonsense.
\int \frac{e^{x}}{\sqrt{1-16e^{2x}}} dx
See the image for my solution.
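In case the image doesn't come through, here is one standard route (my own derivation; the pictured solution may differ in form). Substitute u = 4e^{x}, so that du = 4e^{x}\,dx:

\int \frac{e^{x}}{\sqrt{1-16e^{2x}}} dx = \frac{1}{4} \int \frac{du}{\sqrt{1-u^{2}}} = \frac{1}{4} \arcsin(u) + C = \frac{1}{4} \arcsin\left(4e^{x}\right) + C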
ABC News: AI chatbot under fire over sexually explicit images of women, kids (it's okay ABC, you can say it, it's Elon Musk's Grok)
CW: mention and discussion of sexual violence, CSAM, etc. etc.
Recently, I spent a lot of time reading & writing about LLM benchmark construct validity for a forthcoming article. I also interviewed LLM researchers in academia & industry. The piece is more descriptive than interpretive, but if I’d had the freedom to take it where I wanted it to go, I would’ve addressed the possibility that mental capabilities (like those that benchmarks test for) are never completely innate; they’re always a function of the tests we use to measure them ...
(1/2)
Yesterday I read in the comments, roughly: who's surprised? Everyone knows that #LLMs constantly make mistakes. @marcuwekling said something similar at Monday's reading.
On that:
1️⃣ In legal training you learn that 50-70% of administrative acts are flawed. Default: wrong!
2️⃣ Also: in my school days, atlases/maps were always wrong (the GDR still in them, the Saarland sometimes left out, Yugoslavia shown intact). I never heard schools talked about the way LLMs are. #ki
I never stopped using evolutionary computation. I'm even weirder and use coevolutionary algorithms. Unlike EC, the latter have a bad reputation as being difficult to apply, but if you know what you're doing (e.g. by reading my publications 😉) they're quite powerful in certain application areas. I've successfully applied them to designing resilient physical systems, discovering novel game-playing strategies, and driving online tutoring systems, among other areas. They can inform more conventional multi-objective optimization.

I started to put up notes about (my way of conceiving) coevolutionary algorithms on my web site, here. I stopped because it's a ton of work and nobody reads these as far as I can tell. Sound off if you read anything there!

Many challenging problems are not easily "vectorized" or "numericized", but might have straightforward representations in discrete data structures. Combinatorial optimization problems can fall under this umbrella. Techniques that work directly with those representations can be orders of magnitude faster/smaller/cheaper than techniques requiring another layer of representation (natural language for LLMs, vectors of real values for neural networks). Sure, given enough time and resources clever people can work out a good numerical re-representation that allows a deep neural network to solve a problem, or prompt engineer an LLM. But why whack at your problem with a hammer when you have a precision instrument?
#AI #GenAI #GenerativeAI #LLMs #EvolutionaryComputation #GeneticAlgorithms #GeneticProgramming #EvolutionaryAlgorithms #CoevolutionaryAlgorithms #Cooptimization #CombinatorialOptimization #optimization
📋 Eval #Testing #LLM Outputs in #PHPUnit - A comprehensive guide for #PHP developers building #AI applications
🎯 Core problem: Traditional unit tests assert exact outputs, but #LLMs produce probabilistic responses - same prompt yields different wording each time
⚖️ Solution: LLM-as-judge pattern - test behavior not strings. Use a smarter model to evaluate whether responses meet criteria written in plain English
🧵 👇
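The guide targets PHP and PHPUnit, but the judge pattern is language-agnostic. A minimal sketch of the idea (Python here for brevity; `ask_model` is a placeholder for whatever client sends a prompt to your judge model, not a real API):

```python
import json

def judge(response: str, criteria: str, ask_model) -> bool:
    """Ask a (usually stronger) judge model whether `response` meets `criteria`."""
    prompt = (
        "You are grading another model's output.\n"
        f"Criteria: {criteria}\n"
        f"Output to grade:\n{response}\n"
        'Answer with JSON only: {"pass": true or false, "reason": "..."}'
    )
    verdict = json.loads(ask_model(prompt))  # assumes the judge returns clean JSON
    return bool(verdict["pass"])

def test_refund_reply(ask_model):
    # Assert on behavior, not exact strings: wording differs between runs.
    # (In a real suite, `ask_model` would come from a fixture or client setup.)
    reply = ask_model("A customer asks how to get a refund. Respond politely.")
    assert judge(reply, "mentions the refund process and stays polite", ask_model)
```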
@wes @mainframed767 lol #MicroSlop
https://futurism.com/artificial-intelligence/microsoft-satya-nadella-ai-slop
Although the #AI #LLMs are operating at a #MacroSlop level
I'm close to muting everyone who posts/boosts a sweeping "GenAI doesn't work at all ever, and can't" statement ...
... alongside everyone who claims they work *great* and doesn't mention their ethics (or lack thereof).
I'm guessing my feed would be very empty afterwards.
”I’m sure there’s a lot of people at Meta who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence.” – Yann LeCun
Anybody who has read about LLMs knows this, and has known it for a long time ... For a long-winded but precise answer, Gary Marcus gave a talk about this recently on the occasion of the 75th anniversary of the Turing test ("The grand AGI delusion") at the Royal Society, emphasizing the end of scaling and the need to get back to designing neural architectures.
https://youtu.be/w6LrZu5ku_o?t=5730
Remember when technology was going to save us? Of course it didn't, because our problems are not technological but sociological. But at least the technology industry was trying to solve actual problems.
Now the so-called high-tech industry focuses on wasting water, energy, and computing power to solve meaningless mathematical problems that have no practical benefit, pretending that creates value, and hoping that enough people will buy their pretend money to make them rich. Then they go on to waste more water, energy, and computing power to use all the garbage on the Internet to teach software machines to provide inaccurate answers to people's questions that sound as if a human were writing (or speaking) the answer, as if that mattered more than the accuracy of the answer. But they hope that will make them even richer, and that is all that matters.
"...It's the richest people in the world, Elon Musk, Zuckerberg, Bezos, Peter Thiel. Multi-multi billionaires pouring hundreds of millions of dollars into implementing this technology. What is their motive?Are they staying up nights wondering how this technology will impact working people? They are not. They're doing it to get richer and more powerful" - Sen. Bernie Sanders
#WordOfTheDay: *Ultracrepidarian*
“An ultracrepidarian is a person who offers opinions beyond their own knowledge. […] This word is used in situations when someone is speaking as an authority on a subject that they have only limited knowledge of.”
(In my mind, with #AI #LLMs, the definition updates to “a person *or thing*” who does that.)
OpenAI admits prompt injection may never be fully solved, casting doubt on the agentic AI vision
Anthropic suppresses the AGPL-3.0 in Claude's outputs via content filtering.
I've reached out to them via support for a rationale, because none of the explanations that I can think of on my own are charming.
The implications of a coding assistant deliberately influencing license choice are ... concerning.
#OpenSource #OSI #Anthropic #GenAI #LLMs #FreeSoftware #GNU #AGPL #Affero #Claude #ClaudeCode
RE: https://rheinneckar.social/@susannelilith/115791489116123860
One more reason to keep my works free of #LLMs and other #CRAP.
Guidelines for machine learning in the kernel under discussion
https://linuxnews.de/richtlinien-fuer-machine-learning-im-kernel/ #kernel #KI #ai #LLMs #linux #linuxnews
"[...] the business model behind AI relies on brute force: capture massive reserves of data by any means possible, throw tons of resource-intensive processing power at that data, and gin up public support with wondrous tales (your AI) and scary stories (their AI). This is a far cry from the idea that innovation works in mysterious ways."
From Jathan Sadowski's "The Mechanic and the Luddite: A Ruthless Criticism of Technology and Capitalism" (2025), p. 93
--
#LLMs #IrresponsibleTech
"Dwarkesh Patel: You would think that to emulate the trillions of tokens in the corpus of Internet text, you would have to build a world model. In fact, these models do seem to have very robust world models. They’re the best world models we’ve made to date in AI, right? What do you think is missing?
Richard Sutton [Here is my favorite part, - B.R.]: I would disagree with most of the things you just said. To mimic what people say is not really to build a model of the world at all. You’re mimicking things that have a model of the world: people. I don’t want to approach the question in an adversarial way, but I would question the idea that they have a world model. A world model would enable you to predict what would happen. They have the ability to predict what a person would say. They don’t have the ability to predict what will happen.
What we want, to quote Alan Turing, is a machine that can learn from experience, where experience is the things that actually happen in your life. You do things, you see what happens, and that’s what you learn from. The large language models learn from something else. They learn from “here’s a situation, and here’s what a person did”. Implicitly, the suggestion is you should do what the person did."
https://withoutwhy.substack.com/p/ai-embodiment-and-the-limits-of-simulation
#AI #AGI #LLMs #Chatbots #Intelligence #Robotics #Embodiment
"I announced my divorce on Instagram and then AI impersonated me."
https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
How #AI Has Negatively Affected Asynchronous Learning
The emergence of tools like ChatGPT complicates the ability of instructors to assess genuine learning, raising concerns about the future of this educational model.
https://www.levelman.com/how-ai-has-negatively-affected-asynchronous-learning/
”The problem with generative AI has always been that … it’s statistics without comprehension.”
—Gary Marcus
https://garymarcus.substack.com/p/new-ways-to-corrupt-llms
#ai #generativeai #llm #llms
NEW BLOG POST!
You may be tired of hearing about AI (I know I am). However, I spent some time these last few weeks running and testing small, local LLMs, and in this article I want to share how I now use them, and how you can too, no matter how beefy your computer is.
You'll hear about Ollama, a Python CLI called "llm", and the "sllm.nvim" Neovim plugin.
There are two parts to the article: the first is technical; the second focuses more on the AI bubble, the environmental costs, and the true benefits (if any) of online AI tools.
Check it out, and if you have any comments, please let me know :)
https://zoug.fr/local-llms-potato-computers/
#neovim #ollama #llm #llms #ai #artificialintelligence #python #local
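For anyone who wants a taste before clicking through: once the Ollama daemon is running, it serves a small HTTP API on localhost:11434. A minimal sketch (assumes you have already pulled a model, e.g. with `ollama pull llama3.2`):

```python
import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3.2") -> str:
    """Send one prompt to a local Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("In one line: what is a quine?"))
```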
#Mozilla #Firefox #DarkPatterns #antifeatures #AISlop #NoAI #NoAIWebBrowsers #AICruft #AI #GenAI #GenerativeAI #LLMs #tech #dev #web
Here's a post from an official Firefox Mastodon account suggesting such a master kill switch does not exist yet, but will be added in a future release:
https://mastodon.social/@firefoxwebdevs/115740500373677782
That's not as bad as it could be. It's bad they're stuffing AI into a perfectly good web browser for no apparent reason other than vibes or desperation. It's very bad if it's on by default; their dissembling post about it aside, opt-in has a reasonably clear meaning here: if there's a kill switch, then that kill switch should be off by default. But at least there will be a kill switch.
In any case, please stop responding to my post saying there's a master kill switch for Firefox's AI slop features. From the horse's mouth, and from user experience, there is not yet.
Furthermore, when there is a master kill switch, we don't know whether flipping it will maintain the previous state of all the features it controls. In other words, it's possible they'll have the master kill switch turn on all AI features when the switch is flipped to "on" or "true", rather than leaving them in whatever state you'd set them to previously. Perhaps you decide to turn the kill switch on because there are a handful of features you're comfortable with and want to try; will doing so mean that now all the AI features are on? We won't know till it's released and people try this. So, in the meantime, it's still good practice to keep an eye on all these configuration options if you want the AI off.
#AI #GenAI #GenerativeAI #LLMs #web #tech #dev #Firefox #Mozilla #AISlop #NoAI #NoLLMs #NoAIBrowsers
NBC News: AI toys for kids talk about sex and issue Chinese Communist Party talking points, tests show
New research from Public Interest Research Group and tests conducted by NBC News found that a wide range of AI toys have loose guardrails.
Excellent post by @futurebird@sauropods.win:
> myrmepropagandist @futurebird 2025-12-11 06:31 CST
>
> This is an excellent video. This is the message. Perhaps we need to refine it
> more. Find ways to communicate it more clearly. But this is the correct take on
> LLMs, so-called-AI and the proliferation of these tools to the general public.
> #LLM #llms #ai #genAI #video #slop #slopocalypse #enshittification
>
> https://www.youtube.com/watch?v=4lKyNdZz3Vw
— https://sauropods.win/users/futurebird/statuses/115700943703010093
I'm about 11 minutes into it. He's not in favor of LLM slop, but he's also being very critical of some of the hair-on-fire alarmism.
@TechCrunch you can't fix something that is inherently a feature of "#AI"…
#AIslop #ClosedAI #OpenAI #LLM #LLMs #Enshittification #slop #GAFAMs
AI is intellectual Viagra.
The thought hasn't left me, so I am exorcising it here. I'm sorry in advance for any pain this might cause.
#AI #GenAI #GenerativeAI #LLMs #DiffusionModels #tech #dev #coding #software #SoftwareDevelopment #writing #art #VisualArt
Scammers are poisoning AI search results to steer you straight into their traps - here's how - ZDNet, Charlie Osborne #AI #LLMs #AIBrowsers #GoogleAIOverview #PerplexityComet #Poisoning #searchresults
In a GitHub issue about adding LLM features:
"I definitely think allowing the user to continue the conversation is useful. In my own use of LLMs I tend to often ask followup questions, being able to do so in the same window will be useful."

In other words, he likes LLMs and uses them himself; he's probably not adding these features under pressure from users. I can't help but wonder whether there's vibe code in there.
In the bug report:
"Wow, really! What is it with you people that think you can dictate what I choose to do with my time and my software? You find AI offensive, dont use it, or even better, dont use calibre, I can certainly do without users like you. Do NOT try to dictate to other people what they can or cannot do."

"You people", also known as paying users. He's dismissive of people's concerns about generative AI, and claims ownership of the software ("my software"). He tells people with concerns to get lost, setting up an antagonistic, us-versus-them scenario. We even get scream caps!
Personally, besides the fact that I have a zero tolerance policy about generative AI, I've had enough of arrogant software developers. Read the room.
#AI #GenAI #GenerativeAI #LLMs #calibre #eBooks #eBookManagers #AISlop #AIPoisoning #InformationOilSpill #dev #tech #FOSS #SoftwareDevelopment
Here, Calibre, in one release, went from a tool readers can use to, well, read, to a tool that fundamentally views books as textureless content, no more than the information contained within them. Anything about presentation, form, perspective, voice, is irrelevant to that view. Books are no longer art, they're ingots of tin to be melted down.
It is completely irrelevant to me whether this new slopware is opt-in or opt-out. Its mere presence and endorsement fundamentally undermines that stance, that it is good, actually, if readers and authors can exist in relationship to each other without also being under the control of an extractive mindset that sees books as mere vehicles, unimportant as artistic works in and of themselves.

https://wandering.shop/@xgranade/115671289658145064
#AI #GenAI #GenerativeAI #LLMs #eBooks #eBookManager #calibre #AISlop
ChatGPT is bullshit
A whitepaper on how LLMs make things up and how we should classify their output
#chatgpt #llms #bullshit #confabulations #hallucinations #reading #language
https://link.springer.com/content/pdf/10.1007/s10676-024-09775-5.pdf
Horrified to hear about folks using #LLMs to create "convincing placeholder content" on client sites, considering the difficulty people always seem to have identifying and replacing #LoremIpsum text even when it is literal gibberish in Latin.
What happens when placeholder gibberish is nearly indistinguishable from the site content, despite not being reviewed for any factual accuracy at all?
"To conduct their study, the researchers prompted GPT-4o, a recent model from OpenAI, to generate six different literature reviews. These reviews centered on three mental health conditions chosen for their varying levels of public recognition and research coverage: major depressive disorder (a widely known and heavily researched condition), binge eating disorder (moderately known), and body dysmorphic disorder (a less-known condition with a smaller body of research). This selection allowed for a direct comparison of the AI’s performance on topics with different amounts of available information in its training data.
(...)
After generating the reviews, the researchers methodically extracted all 176 citations provided by the AI. Each reference was painstakingly verified using multiple academic databases, including Google Scholar, Scopus, and PubMed. Citations were sorted into one of three categories: fabricated (the source did not exist), real with errors (the source existed but had incorrect details like the wrong year, volume number, or author list), or fully accurate. The team then analyzed the rates of fabrication and accuracy across the different disorders and review types.
The analysis showed that across all six reviews, nearly one-fifth of the citations, 35 out of 176, were entirely fabricated. Of the 141 citations that corresponded to real publications, almost half contained at least one error.
(...)
The rate of citation fabrication was strongly linked to the topic. For major depressive disorder, the most well-researched condition, only 6 percent of citations were fabricated. In contrast, the fabrication rate rose sharply to 28 percent for binge eating disorder and 29 percent for body dysmorphic disorder. This suggests the AI is less reliable when generating references for subjects that are less prominent in its training data."
#AI #GenerativeAI #Hallucinations #LLMs #Chatbots #Science #AcademicPublishing
The use of filters, searches, and #LLMs has broken the job hiring process.
Now they're coming for #college applications.
It will be more EfFiCiEnT!
Colleges are using #AI tools to analyze applications and essays
https://apnews.com/article/ai-chatgpt-college-admissions-essays-87802788683ca4831bf1390078147a6f
@futurebird In my view, you are being too hard on #LLMs. I agree that they should not be used for many things without knowing the risks. But I disagree that we should utterly abandon the technology.
Know the risks. Be a grownup. Use the tool. Manage the risks. Practice #MLsec
As with ANY new technology, most people will misuse, misunderstand, and be bamboozled by new things they don't understand.
Francis Galton, pioneering figure of the eugenics movement, believed that good research practice should consist in “gathering as many facts as possible without any theory or general principle that might prejudice a neutral and objective view of these facts” (Jackson et al., 2005). Karl Pearson, statistician and fellow purveyor of eugenicist methods, approached research with a similar ethos: “theorizing about the material basis of heredity or the precise physiological or causal significance of observational results, Pearson argues, will do nothing but damage the progress of the science” (Pence, 2011). In collaborative work with Pearson, Weldon emphasised the superiority of data-driven methods which were capable of delivering truths about nature “without introducing any theory” (Weldon, 1895).

From The Immortal Science of ML: Machine Learning & the Theory-Free Ideal.
I've lost the reference, but I suspect it was Meredith Whittaker who's written and spoken about the big data turn at Google, where it was understood that having and collecting massive datasets allowed them to eschew model-building.
The core idea being critiqued here is that there's a kind of scientific view from nowhere: a theory-free, value-free, model-free, bias-free way of observing the world that will lead to Truth; and that it's the task of the scientist to approximate this view from nowhere as well as possible.
#AI #GenAI #GenerativeAI #LLMs #science #DataScience #ScientificObjectivity #eugenics #ViewFromNowhere
The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.

From The reanimation of pseudoscience in machine learning and its ethical repercussions, here: https://www.cell.com/patterns/fulltext/S2666-3899(24)00160-0. It's open access.
In other words, ML, which includes generative AI, is smuggling long-disgraced pseudoscientific ideas back into "respectable" science, and rejuvenating the harms such ideas cause.
#AI #GenAI #GenerativeAI #LLMs #MachineLearning #ML #AIEthics #science #pseudoscience #JunkScience #eugenics #physiognomy
#AI #GenAI #GenerativeAI #LLMs #tech #dev #DataScience #science #ComputerScience #EcologicalRationality
"Creating a bewitching chatbot — or any chatbot — was not the original purpose of OpenAI. Founded in 2015 as a nonprofit and staffed with machine learning experts who cared deeply about A.I. safety, it wanted to ensure that artificial general intelligence benefited humanity. In late 2022, a slapdash demonstration of an A.I.-powered assistant called ChatGPT captured the world’s attention and transformed the company into a surprise tech juggernaut now valued at $500 billion.
The three years since have been chaotic, exhilarating and nerve-racking for those who work at OpenAI. The board fired and rehired Mr. Altman. Unprepared for selling a consumer product to millions of customers, OpenAI rapidly hired thousands of people, many from tech giants that aim to keep users glued to a screen. Last month, it adopted a new for-profit structure.
As the company was growing, its novel, mind-bending technology started affecting users in unexpected ways. Now, a company built around the concept of safe, beneficial A.I. faces five wrongful death lawsuits.
To understand how this happened, The New York Times interviewed more than 40 current and former OpenAI employees — executives, safety engineers, researchers.
(...)
OpenAI is under enormous pressure to justify its sky-high valuation and the billions of dollars it needs from investors for very expensive talent, computer chips and data centers. When ChatGPT became the fastest-growing consumer product in history with 800 million weekly users, it set off an A.I. boom that has put OpenAI into direct competition with tech behemoths like Google.
Until its A.I. can accomplish some incredible feat — say, generating a cure for cancer — success is partly defined by turning ChatGPT into a lucrative business. That means continually increasing how many people use and pay for it."
#AI #GenerativeAI #OpenAI #BigTech #ChatGPT #LLMs #Chatbots #MentalHealth
If I were a paranoid conspiracy theorist, I would be ranting about how #LLMs that have convinced mentally unhealthy people they are the second coming of Jesus, or who have encouraged severely depressed people to end themselves, are working as designed. That wealth can be accumulated in ever more dense hoards when there are fewer people around.
But I'm not a conspiracy theorist. I just recognize that a number of extremely wealthy, extremely powerful people have hitched their wagons (and fortunes) to #AI and therefore demand we embrace it, whether or not it actually works.
I am thoroughly sick of living in a reality-optional zone.
If most #LLMs were trained on the data obtained by the wholesale theft of personal intellectual property over recent years, & if LLMs directly relate to what the general public [including me, who makes no claims at all to deep subject knowledge here] think of as being "AI", then how do peeps in the general public who choose to use, & express a liking for, "AI", reconcile their behaviour with the enormously immoral act on which it is predicated? Does it mean therefore that if such peeps happen to have installed, eg, […]

Do such peeps cheerfully buy cheap clothes made in SE Asia sweatshops that underpay their workers & make them endure unsafe workplaces & conditions? Where are the moral limits for such peeps? Are ostriches the official totem for these peeps?
But what do we mean by LLMs exactly? To ensure that everyone is on the same page, co-organiser @nilsreiter is giving a pre-workshop lecture "An Introduction to Large Language Models: LLMs 101". Attendees get introduced to important distinctions such as user prompt vs. system prompt, pre-training vs. fine-tuning, and commercial vs. locally-run open-source models. 2/🧵
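To make the first of those distinctions concrete: in the chat format most commercial and locally run models accept, the system prompt and the user prompt are just differently tagged messages. An illustrative sketch (mine, not from the workshop):

```python
# The system prompt sets standing instructions; the user prompt is the request.
messages = [
    {"role": "system", "content": "You are a terse assistant for linguists."},
    {"role": "user", "content": "What is byte-pair encoding?"},
]
# Pre-training vs. fine-tuning is a different axis entirely: the same message
# list can go to a base model (pre-trained on raw text only) or to a model
# fine-tuned afterwards to follow instructions.
```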
"Today’s eye-popping AI valuations are partly based on the assumption that LLMs are the main game in town — and can only be exploited by the current capex and capital-heavy approach that Big Tech is unleashing.
But when the Chinese company DeepSeek released its models earlier this year, it showed there are ways to build cheaper, scaled-down variants of AI, raising the prospect that LLMs will become commoditised. And LeCun is not the only player who thinks current LLMs might be supplanted.
The tech behemoth IBM says it is developing variants of so-called neuro-symbolic AI. “By augmenting and combining the strengths of statistical AI, like machine learning, with the capabilities of humanlike symbolic knowledge and reasoning, we’re aiming to create a revolution in AI, rather than an evolution,” it explains.
Chinese and western researchers are also exploring variants of neuro-symbolic AI while Fei-Fei Li, the so-called “Godmother of AI”, is developing a world model version called “spatial intelligence”.
None of these alternatives seems ready to fly right now; indeed LeCun acknowledges huge practical impediments to his dream. But if they do ever work, it would raise many questions."
https://www.ft.com/content/e05dc217-40f8-427f-88dc-7548d0211b99
Oooh, it's my time to leap into cybersecurity.
"Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models"
"...Abstract
We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for large language models (LLMs). Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains. Converting 1,200 MLCommons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines. ..."
”This was not a hard interview. It was a bro-to-bro podcast but Altman had a meltdown. … It’s generally not a good idea for CEOs to tell their customers to sell their stock though it is how fraudsters tend to talk. If you don’t want to claim the million dollars I have for you in a Nigerian prince’s bank account, that’s up to you but you’re the one who’s going to miss out.”
—Carole Cadwalladr, The Great AI Bubble
#altman #samaltman #openai #ai #llm #llms
”These companies are stealing every scrap of data they can find, throwing compute power at it, draining our aquifers of water and our national grids of electricity and all we have so far is some software that you can’t trust not to make things up.”
—Carole Cadwalladr, The Great AI Bubble
https://broligarchy.substack.com/p/the-great-ai-bubble
#ai #artificialintelligence #generativeai #llm #llms
NBC Los Angeles: People can text with Jesus on a controversial new app. How does it work?
"...What is Text With Jesus?
Text With Jesus offers users an interactive experience with a religious deity. In other words, users can text questions to Jesus and get a response. (Premium users can also converse with Satan.)..."
https://www.nbclosangeles.com/news/national-international/religious-chatbot-apps/3804751/
Model Context Protocol (MCP) gives large language models (#LLMs) a secure way to interact with your #Graylog data and workflows. 🔄 Instead of writing complex queries, you can ask questions in plain English! 💥 What's not to like⁉️ Analysts gain speed, administrators maintain control, and your #security stays intact. Ta-da! 🪄✨
Now it's time to learn about:
✔️ How MCP works
✔️ Setting up MCP
✔️ Using MCP tools
✔️ Security factors
And more...
The ever-entertaining Seth Goldhammer is back at it again, in our latest video on real-time LLM access to your data. Watch Seth here, learn all about why MCP matters, and download the MCP guide.
👉 https://graylog.org/post/mcp-explained-conversational-ai-for-graylog/ #CyberSecurity #SIEM
🙄
SCMP: Journal defends work with fake AI citations after Hong Kong university launches probe
"...An academic journal that published a paper containing fictitious AI-generated references has said the work’s core conclusions remained valid despite “some mismatches and inaccuracies” with the citations...
at least 20 out of 61 references appear to be non-existent."
The @EUCommission wants to offer our digital privacy to slop-machine-makers on a silver platter!
Why are they forcing us to enable those who are harming so many of us? Do we really need more irresponsible and dangerous tech in our lives? And for what? So that the European Union can have their share of economic turmoil once this awful bubble bursts?
This is wrong, plain and simple!
--
#omnibus #GDPR #privacy #LLMs #irresponsibleTech #humanRights
Google to build new AI datacentre on tiny Australian Indian Ocean outpost after signing defence deal
Google plans to build a large AI datacentre on Australia’s remote Indian Ocean outpost of Christmas Island after signing a cloud deal with the Department of Defence earlier this year, according to documents reviewed by Reuters and interviews with officials.
#tech #technology #surveillance #surveillanceTech #AI #LLMs #ClimateCrisis #ClimateJustice
I have eaten
the text
that was on
the internet
and which
you had published
without
granting license
Forgive me
I'm an LLM
I steal
to make lies
#LLMs #ThisIsJustToSay #WilliamCarlosWilliams #PlumsInTheIceBox #Poetry
Nature: 31 October 2025
Correction 04 November 2025
Too much social media gives AI chatbots ‘brain rot’
Large language models fed low-quality data skip steps in their reasoning process.
Large Language Models (LLMs) represent a transformative leap in artificial intelligence, capable of generating human-like text, synthesizing complex information, and engaging in contextual reasoning. These models are trained on vast datasets comprising books, articles, websites, and multimedia, enabling applications ranging from healthcare diagnostics to educational tools and customer service automation. However, their reliance on data-driven learning introduces a critical vulnerability: susceptibility to manipulation through the deliberate injection of false or misleading information. This practice, often termed “data poisoning,” poses significant risks to public discourse, institutional trust, and global stability. As LLMs become deeply integrated into information ecosystems, the potential for their exploitation in disinformation campaigns demands urgent scrutiny.
https://mediascope.group/data-poisoning-in-llms-is-the-invisible-threat-for-economies-and-societies/
#LLM #LLMs #AI #ArtificialIntelligence #tech #technology #disinformation #society #politics #elections #democracy #internet #socialmedia
#AI tools seem to be generating a large swath of low-quality, formulaic biomedical articles drawn from #OpenAccess biomedical databases. For example, since the rise of #LLMs about three years ago, the number of new biomedical articles is about 5k larger than the previous moving average would have predicted. The researchers who noticed this trend argue for the "adoption of controlled data-access mechanisms", that is, pulling back from #OpenData.
* Primary source (a preprint)
https://www.medrxiv.org/content/10.1101/2025.07.07.25331008v1
* Summary (in Nature)
https://www.nature.com/articles/d41586-025-02241-2
PS: The conclusion doesn't follow from the premises. This is like arguing that we should restrict clean air and clean water because criminals take advantage of them to commit crimes.
Update. Here's how #arXiv is dealing with a similar problem in computer science.
https://blog.arxiv.org/2025/10/31/attention-authors-updated-practice-for-review-articles-and-position-papers-in-arxiv-cs-category/
"Before being considered for submission to arXiv’s #CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review…In the past few years, arXiv has been flooded with papers. Generative #AI / #LLMs have added to this flood by making papers – especially papers not introducing new research results – fast and easy to write. While categories across arXiv have all seen a major increase in submissions, it’s particularly pronounced in arXiv’s CS category."
🚀 Behold, the mystical Smol Training Playbook, where Hugging Face unveils the arcane art of crafting world-class LLMs—by fetching Docker metadata. Because obviously, the secret to AI is just a good deep-dive in the Docker repository. 🙄🔍
https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook #SmolTrainingPlaybook #HuggingFace #LLMs #DockerMetadata #AITechnology #DeepDive #HackerNews #ngated
JesusGPT…
Former CEO of Intel Building Special AI to Bring About Second Coming of Christ
https://futurism.com/artificial-intelligence/former-ceo-intel-ai-christ
#religion #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #tech #technology #BigTech #GenAI #generativeAI #AISlop #Meta #Google #OpenAI #ChatGPT
Last year NaNoWriMo embraced AI. Said criticising its use is ableist. Now NaNoWriMo is no more.
When someone cites something from an "AI" chatbot as a fact, point them to this site:
#AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #tech #technology #BigTech #GenAI #generativeAI #AISlop #Meta #Google #OpenAI #ChatGPT #online #Web #tips
Since #AI is essentially a #marketing term, each thing labelled #AI has to be judged on its own merit. Of course the verdict on #LLMs is quite clear: they are #unreliable and #evil #water and #energy wasting #doom machines.
Fediverse folks, especially from the UK! The Lib Dem spokesperson for science and technology has a short feedback form on AI to get public thoughts on the subject. If you have five minutes to fill it out, please do so: I think it'd be good for politicians in her position to be hearing more from small scale creators and academics and suchlike on the problems we're seeing with these technologies.
https://docs.google.com/forms/d/e/1FAIpQLScZiot9vGHvhOOt-1KX068gZSUwkvdE5vFQSRWTHBEoVIei3Q/viewform
Boosts welcome!
I love how this guy thinks.
I do use LLMs for work. I have a neutral view of using them; I treat them as advanced auto-predict. But this is one thing I have learned about how people use LLMs: they often do not use them to their most useful or cognitively rewarding abilities.
For example, I learned how to use LLMs the way Nate describes it below, through much experimentation and thinking. Interestingly, it has taught me so much about the writing craft. When you have to give instructions to AI, you need to deconstruct how you create an article, a technical document, or even meeting minutes. I started being mindful and more aware of how I put together collaterals. I realized a lot of the things I do are unconscious, but by thinking through my writing process and my cognitive processes in putting together a document, I realized I could improve on these steps.
It helps me to be even more aware and conscious about my weaknesses in crafting certain collaterals. It helps me to target what areas I need to improve.
For example, in my experimentation using AI to write fiction, I realized that I could do a bit more work to improve the way I describe settings. I could also try to add more sensory details to enrich the scene, and use body movements to show emotion rather than tell it.
However, I also realize that without the expertise I have built over decades of writing both fiction and business copy, I wouldn't have been able to tell whether the AI's writing sucked or not. Without this instinct, you will end up producing bad work because you just don't realize or recognize what good work looks like.
This is the same for coding. I used to code for fun when I was younger. I built websites using a notepad and writing HTML and CSS on it. Now why didn't I use any of the programs - web builders etc? Because I took a look at the code it produced and realized they were extremely bloated. I preferred to write leaner code. I wouldn't have recognized what bloated code looks like without the expertise I had. Writing is very much the same way.
"Largest study of its kind shows #AI assistants misrepresent #news content 45% of the time – regardless of language or territory."
https://www.bbc.com/mediacentre/2025/new-ebu-research-ai-assistants-news-content
I just read the study and was struck by this point at p. 60: "A set of 30 'core' news questions was developed, which were used by all participating organizations. These were based on actual audience search queries…Additional prompting strategies were not used to try to improve accuracy…because the questions reflected our best understanding of how audiences are currently asking questions about the news."
https://www.bbc.co.uk/mediacentre/documents/news-integrity-in-ai-assistants-report.pdf
That is, this study didn't test the possibility that some kinds of prompts reduce hallucinations. That's an obvious follow-up question to explore.
General Motors will integrate AI into its cars, plus new hands-free assist - Ars Technica https://arstechnica.com/cars/2025/10/ai-and-hands-free-driving-are-coming-to-gms-vehicles/ #AI #LLMs #GM #Level3 #AutomatedDriving
"Today, students might use AI to write college-entrance essays so that they can get into college, where they use AI to complete assignments on their way to degrees, so they can use AI to cash out those degrees in jobs, so they can use AI to carry out the duties of those jobs. The best one can do—the best one can hope for—is to get to the successive stage of the process by whatever means necessary and, once there, to figure out a way to progress to the next one. Fake it ’til you make it has given way to Fake it ’til you fake it.
Nobody has time to question, nor the power to change, this situation. You need to pay rent, and buy slop bowls, and stumble forward into the murk of tomorrow. So you read what the computer tells you to say when asked why you are passionate about enterprise B2B SaaS sales or social-media marketing. This is not an earnest question, but a gate erected between one thing and the next. Using whatever mechanisms you can to get ahead is not ignoble; it’s compulsory. If you can’t even get the job, how can you pretend to do it?"
https://www.theatlantic.com/technology/2025/10/ai-cheating-job-interviews-fraud/684568/
#AI #GenerativeAI #JobInterviews #Chatbots #LLMs #SocialMedia #TikTok
here’s a quick link to #OptOut of #LinkedIn using your data to train their shitty #LLMs
you need to opt out before Nov 3 (2025)
https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement
(you’ll need to be logged in to your account for that link to work for you)
more info about this latest round of #Microsoft “#AI” (🙄) #shitfuckery in this toot --> https://mastodon.social/@Tutanota/115389279859384745
I find that @bert_hubert is spot on in his article (as always).
AI/LLMs will be around post-collapse. The tech is useful. The hype and the ethics violations are harmful, but they are not inherent.
But I'd love to call out this quote in particular:
"[...] selling to clever people is not a trillion dollar opportunity."
Because it captures so much about the world at large.
"The big deal with using an MCP server is efficiency: Instead of feeding tools’ output manually with a side of context, MCP lets AI agents become aware of their newly added tool and act on their own, the need for holding their hand disappearing almost magically. The server can also explain how tools interact with each other and how they should be used, which helps recover from errors without human intervention. My server, for example, detects if Vale isn’t installed and suggests how to fix that.
MCP being a reality, the next step is thinking what tools and knowledge you’d put at your LLM’s disposal, and through which modality. Here are some examples:
- Provide LLMs with a way of retrieving documentation and its metadata.
- Convert content from one format to another following a set of templates.
- Validate code snippets in documentation and keep them up to date.
- Self-healing documentation (the server reads changelogs and updates the docs).
- Check documentation for style, formatting, broken links, etc.
In the case of vale-mcp-server, I’ve noticed that mixing Vale’s deterministic, almost inflexible approach to linting with LLM’s capacity for nuance and finesse is a killer combo."
https://passo.uno/mcp-server-docs-tooling/
#AI #GenerativeAI #AIAgents #AgenticAI #MCP #MCPServer #LLMs #Vale #Linting #APIs #TechnicalWriting #SoftwareDocumentation
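For a sense of what such a server can look like: a rough sketch in the spirit of the quoted vale-mcp-server (not its actual code), using the Python MCP SDK's FastMCP helper; treat the details as assumptions rather than the project's implementation:

```python
import shutil
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("vale-docs")

@mcp.tool()
def lint_prose(path: str) -> str:
    """Run the Vale prose linter on a file and return its findings.

    The docstring doubles as the tool description the model reads, which is
    how the agent learns when and how to call the tool.
    """
    if shutil.which("vale") is None:
        # Explain recovery instead of just failing, as the quoted post suggests.
        return "Vale is not installed. Install it from https://vale.sh and retry."
    result = subprocess.run(["vale", path], capture_output=True, text=True)
    return result.stdout or "No style issues found."

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```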
#DigitaleSouveränität #Privatsphäre #LLMs #WTF
OneDrive: Microsoft tests AI facial recognition on family photos
So far, the AI feature is switched on automatically.
https://www.golem.de/news/onedrive-microsoft-testet-ki-gesichtserkennung-in-familienfotos-2510-201080.html
If you're running a small but profitable business, you really shouldn't let the geniuses who are losing billions per year tell you how to do it better.
Remember, even at net zero you're still doing far better than OpenAI!
--
#LLMs #IrresponsibleTech #IrresponsibleBusiness
A small number of samples can poison #LLMs of any size
"Specifically, we demonstrate that by injecting just 250 malicious documents into pretraining data, adversaries can successfully backdoor LLMs ranging from 600M to 13B parameters."
cc @asrg
"APIs to Capabilities
Enterprises have invested 10-15+ years into exposing enterprise capabilities (internal and external) with APIs. That is not going away. MCP, as exciting as it is, is really just a simple protocol shim for AI models to call tools. But to expose the tools correctly to the model, we need to describe capabilities not just API contract structure:
- tool names should be unique, action oriented (e.g., “listAllTodoTasks” vs just “list”)
- include detailed purpose explanations
- give examples of when to call with example requests/responses
- preconditions for using the tool
Using OpenAPI Spec
The OpenAPI Specification contains a number of fields and structures to support adding rich semantic meaning to our APIs:
- Using the info section
- A number of sections offer the ability to link out to externalDocs
- Most sections provide a title, summary, and description field
- You can link out to industry accepted (or enterprise specific) data fields using JSON-LD for very deep semantic meaning
- If none of these are adequate, you can extend the spec with “x-properties”
Let’s take a quick look at an example."
https://blog.christianposta.com/semantics-matter-exposing-openapi-as-mcp-tools/
#APIs #APIDesign #APIDevelopment #OpenAPI #MCP #LLMs #Metadata #AI #GenerativeAI #AIAgents
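To make the quoted advice concrete, here is a hypothetical operation fragment showing several of those fields (a Python literal standing in for the YAML/JSON spec; all names are invented for illustration):

```python
# A semantically rich OpenAPI operation that an MCP shim could expose as a tool.
todo_list_operation = {
    "operationId": "listAllTodoTasks",  # unique, action-oriented tool name
    "summary": "List every task on the user's todo list",
    "description": (
        "Returns all tasks, including completed ones. Call this before "
        "createTodoTask to avoid creating duplicates."  # a precondition
    ),
    "externalDocs": {"url": "https://example.com/docs/todo-api"},
    "responses": {
        "200": {
            "description": "Array of task objects",
            "content": {"application/json": {
                "example": [{"id": 1, "title": "Buy milk", "done": False}],
            }},
        },
    },
}
```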
In reference to #LLMs, I want to really underscore the difference in speaking and writing.
What is so astonishing about the speaking human is that by the time we are speaking fluently, we have adopted and incorporated a large amount of the cultural rules, norms and prejudices that piggyback on any given language. So when we open our mouths and speak, it comes out fast, often fairly unprocessed, and full of that baggage.
AAAI Launches AI-Powered Peer Review Assessment System
https://aaai.org/aaai-launches-ai-powered-peer-review-assessment-system/
No.
Speaking as someone who has co-organized an AAAI symposium and among other things did a bunch of editorial work.
#NoAI #AI #GenAI #GenerativeAI #LLMs #aIOutOfScience #science #ComputerScience #PeerReview
9 Doctor-Approved Ways to Use ChatGPT for Health Advice
https://time.com/7321821/chatgpt-ai-how-to-use-for-health-safely/
#AI #Health #Healthcare #LLMs #Tech
#PSA #CallToAction #GenAI #LLMs
I beg everyone who sees this to check your settings and make sure you disable all the LLM/GenAI shit you can, as soon as you can. Don't try them out, don't "see what they can do" for you, don't fall for the hype, don't get used to them, don't become dependent on them.
There's a high price here.
https://herhandsmyhands.wordpress.com/2025/09/08/using-genai-llms-destroys-the-planet-your-brain-and-families/
How to disable GenAI/LLM shit:
https://www.consumerreports.org/electronics/artificial-intelligence/turn-off-ai-tools-gemini-apple-intelligence-copilot-and-more-a1156421356/
The AI-powered organization is coordinated, not autonomous. True transformation isn't about replacing people with AI; it's about enabling smarter, more fluid coordination across the teams that make up the org.
~ Sangeet Paul Choudary in Reshuffle, end of ch. 6
Coordination here is about better capturing, organizing, and sharing organizational knowledge across teams. That is the empowerment AI can offer: it eases a coordination constraint that non-AI-enhanced work runs up against.