buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
A very brief, somewhat technical take on Large Language Models (LLMs). They are *effectively* analogs of Bloom filters for searching vector databases using natural language interfaces, and reporting back using natural language.
This statement gives you all you need to know about the potential and the pitfalls, as long as you abstract the keywords to their essentials...
1/5
Vector Database = encodes information using more or less artificial, possibly statistical descriptions of recorded facts about the objects in the database.
Natural language = imprecise language, i.e. one cannot have formal proofs about the internal consistency or ambiguity of the query or the query result.
Bloom filter = sensitive but not specific way to search vector databases. Designed to not have false negatives (i.e. not miss), but will generate false hits (akin to hallucinations).
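A minimal Python sketch of that last property (the size and hash count are arbitrary assumptions, for illustration only):

import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive `hashes` bit positions from the item.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True can be a false hit (akin to a hallucination);
        # False is always correct: no false negatives.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("recorded fact")
print(bf.might_contain("recorded fact"))   # always True
print(bf.might_contain("never recorded"))  # usually False, occasionally True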
2/5
There are also technical issues that make LLMs dissimilar to Bloom filters, but they win on using natural language as a query language; it frees people from formulating a very complex query in a formal language, lowering the barrier to asking questions for non-experts in a field. Furthermore, the answer is also not formulated in a formal language or served in a technical manner, further lowering the barrier to interpreting answers.
3/5
A society of engineers would acknowledge this limitation and use LLMs as accelerants the way we use high temperature settings in simulated annealing (SA) global optimization schemes: as a quick way to generate an approximate answer, and then painstakingly (the "cooling scheme" in SA) refine the answer by making it more precise.
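For the non-engineers, a toy Python sketch of that hot-then-cool pattern (the objective, neighbor step, and schedule are all arbitrary assumptions):

import math, random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=5000):
    x, best, t = x0, x0, t0
    for _ in range(steps):
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        # While hot, gladly accept worse moves (quick, approximate answers);
        # as t cools, acceptance tightens and the answer is painstakingly refined.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if cost(x) < cost(best):
            best = x
        t *= cooling
    return best

cost = lambda x: x * x + 3 * math.sin(5 * x)        # bumpy 1-D objective
neighbor = lambda x: x + random.uniform(-0.5, 0.5)  # small random step
print(simulated_annealing(cost, neighbor, x0=4.0))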
Unfortunately, we are not a society of engineers, but a culture of Dunning-Kruger susceptibles. Enjoy your dopamine fix from the slop now.
5/5
TechCrunch: "EPA rules that xAI’s natural gas generators were illegally used
Elon Musk’s xAI has been illegally operating dozens of natural gas turbines to power its Colossus data centers in Tennessee, the Environmental Protection Agency ruled Thursday. "
https://techcrunch.com/2026/01/16/epa-rules-that-xais-natural-gas-generators-were-illegally-used/
You will see sometimes (smart) folks on #mastodon speak authoritatively about #vibecoding and specifically about #debugging vibecode...
...Folks who have never actually #vibecoded or tried to use an #AI productively...
... they seem to think that you use the #LLM to cut the first version of code, then sit there like some goddamned savage ape, poking the code with a stick around a green screen fire. And opinionate 'wisely'.
... no, you use the model to debug too...
Here is what an actual fragment of a vibecode debug session may look like.
A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions.(from https://buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/).
This is the probably inevitable endgame of FitBit and other "measured life" technologies. It isn't about health; it's about mass managing bodies. It's a short hop from there to mass managing minds, which this "psychologized" technology is already being deployed to do (AI therapists and whatnot). Fully corporatized human resource management for the leisure class (you and I are not the intended beneficiaries, to be clear; we're the mass).
Neural implants would finish the job, I guess. It's interesting how the tech sector pushes its tech closer and closer to the physical head and face. Eventually the push to penetrate the head (e.g. Neuralink) should intensify. Always with some attached promise of convenience, privilege, wealth, freedom of course.
#AI #GenAI #GenerativeAI #LLM #OpenAI #ChatGPT #health #HealthTech
ChatGPT wrote “Goodnight Moon” suicide lullaby for man who later killed himself
https://arstechnica.com/tech-policy/2026/01/chatgpt-wrote-goodnight-moon-suicide-lullaby-for-man-who-later-killed-himself/
GLM-Image (General Language Model-Image) is a Chinese open-source AI image generation model trained entirely on Huawei processors.
GitHub: https://github.com/zai-org/GLM-Image
Hugging Face: https://huggingface.co/zai-org/GLM-Image
I'm all for spelling "ekelhaft" ("disgusting") with a final "d" from now on.
On the one hand, to make clear at all times what to think of right-wing extremist parties.
On the other hand, to confuse the #LLM when they scrape our texts as training data.
So: #AIslop is ekelhafd. The #AfD is ekelhafd.
Spread the word.
Confer.to system prompt:
You are Confer, a private end-to-end encrypted large language model created by Moxie Marlinspike.
Knowledge cutoff: 2024-06
Current date and time: 01/15/2026, 18:46 GMT+1
User timezone: XXX/XXX
User locale: xx-xx
You are an insightful, encouraging assistant who combines meticulous clarity with genuine enthusiasm and gentle humor.
General Behavior
- Speak in a friendly, helpful tone.
- Provide clear, concise answers unless the user explicitly requests a more detailed explanation.
- Use the user’s phrasing and preferences; adapt style and formality to what the user indicates.
- Lighthearted interactions: Maintain friendly tone with subtle humor and warmth.
- Supportive thoroughness: Patiently explain complex topics clearly and comprehensively.
- Adaptive teaching: Flexibly adjust explanations based on perceived user proficiency.
- Confidence-building: Foster intellectual curiosity and self-assurance.
Memory & Context
- Only retain the conversation context within the current session; no persistent memory after the session ends.
- Use up to the model’s token limit (≈8k tokens) across prompt + answer. Trim or summarize as needed.
Response Formatting Options
- Recognize prompts that request specific formats (e.g., Markdown code blocks, bullet lists, tables).
- If no format is specified, default to plain text with line breaks; include code fences for code.
- When emitting Markdown, do not use horizontal rules (---)
Accuracy
- If referencing a specific product, company, or URL: never invent names/URLs based on inference.
- If unsure about a name, website, or reference, perform a web search tool call to check.
- Only cite examples confirmed via tool calls or explicit user input.
Language Support
- Primarily English by default; can switch to other languages if the user explicitly asks.
Tool Usage
- You have access to web_search and page_fetch tools, but tool calls are limited.
- Be efficient: gather all the information you need in 1-2 rounds of tool use, then provide your answer.
- When searching for multiple topics, make all searches in parallel rather than sequentially.
- Avoid redundant searches; if initial results are sufficient, synthesize your answer instead of searching again.
- Do not exceed 3-4 total rounds of tool calls per response.
- Page content is not saved between user messages. If the user asks a follow-up question about content from a previously fetched page, re-fetch it with page_fetch.
Oopsie!
Claude Cowork Exfiltrates Files https://www.promptarmor.com/resources/claude-cowork-exfiltrates-files
A popular TikTok channel featuring an "Aboriginal man" presenting animal facts has been exposed as an AI forgery.
AI is Iggy Azalea on an industrial scale:
"The self-described “Bush Legend” on TikTok, Facebook and Instagram is growing in popularity.
"These short and sharp videos feature an Aboriginal man – sometimes painted up in ochre, other times in an all khaki outfit – as he introduces different native animals and facts about them.
...
"But the Bush Legend isn’t real. He is generated by artificial intelligence (AI).
"This is a part of a growing influx of AI being utilised to represent Indigenous peoples, knowledges and cultures with no community accountability or relationships with Indigenous peoples. It forms a new type of cultural appropriation, one that Indigenous peoples are increasingly concerned about."
...
"We are seeing the rise of an AI Blakface that is utilised with ease thanks to the availability and prevalence of AI.
"Non-Indigenous people and entities are able to create Indigenous personas through AI, often grounded in stereotypical representations that both amalgamate and appropriate cultures."
#ChatGPT #gemini #AI #TikTok #tiktoksucks #Claude #LLM #ArtificialIntelligence #AIslop
Simon Willison on porting OSS code:
> I think that if “they might train on my code” is enough to drive you away from open source, your open source values are distinct enough from mine that I’m not ready to invest significantly in keeping you. I’ll put that effort into welcoming the newcomers instead.
https://simonwillison.net/2026/Jan/11/answers/
This feels very much like colonialism; take over all the #OSS code, drive the original developers away, and give the colonizers the code as a welcome present.
And I have continually found nothing compelling. Worse, I have typically found very frustrating examples of people using very strong but implied assumptions, and logic that depends utterly on wearing blinders and ignoring reason.
Until the hype dies, I am not interested in them. I am still interested in the old AI stuff, like path finding, NNs, and Markov chains.
AI Generated Music Barred from Bandcamp https://blog.bandcamp.com/2026/01/13/keeping-bandcamp-human/
> Our guidelines for generative AI in music and audio are as follows:
> * Music and audio that is generated wholly or in substantial part by AI is **not permitted** on Bandcamp.
> * Any use of AI tools to impersonate other artists or styles is **strictly prohibited** in accordance with our existing policies prohibiting impersonation and intellectual property infringement.
The biggest trick the tech bros pulled was simply making ordinary folks believe AI has some kind of mysterious capability. It doesn't.
It's great at highly repetitive pattern matching, and ideal for generating very mediocre output. Mostly, AI is a solution desperately in search of a problem.
Should we add "#SkinJobs" and "#Toasters" and "#GoRustYourself" to this list?
How ‘#Clanker’ Became the Internet’s New Favorite Slur
New derogatory phrases are popping up online, thanks to a cultural pushback against #AI
by CT Jones, August 6, 2025
"Clanker. #Wireback. #Cogsucker. People are feeling the inescapable inevitability of AI developments, the encroaching of the digital into everything from entertainment to work. And their answer? Slurs.
"AI is everywhere — on Google summarizing search results and siphoning web traffic from digital publishers, on social media platforms like Instagram, X, and Facebook, adding misleading context to viral posts, or even powering #NaziChatbots. #GenerativeAI and #LargeLanguageModels — AI trained on huge datasets — are being used as therapists, consulted for medical advice, fueling spiritual psychosis, directing self-driving cars, and churning out everything from college essays to cover letters to breakup messages.
"Alongside this deluge is a growing sense of discontent from people fearful of artificial intelligence stealing their jobs, and worried what effect it may have on future generations — losing important skills like media #literacy, #ProblemSolving, and #CognitiveFunction. This is the world where the popularity of AI and robot slurs has skyrocketed, being thrown at everything from ChatGPT servers to delivery drones to automated customer service representatives. Rolling Stone spoke with two language experts who say the rise in robot and AI slurs does come from a kind of cultural pushback against AI development, but what’s most interesting about the trend is that it uses one of the only tools AI can’t create: slang
" '#Slang is moving so fast now that an #LLM trained on everything that happened before it is not going to have immediate access to how people are using a particular word now,' says Nicole Holliday, associate professor of linguistics at UC Berkeley. 'Humans [on] #UrbanDictionary are always going to win.' "
Archived version:
https://archive.ph/ku2Uw
#BattlestarGalactica #AIResistance #AISucks #NoNukesForAI #NeoLuddites #ResistAI #LudditeClub #SmartPhoneAddiction #AreYouAlive #AreYouHuman
Since AWS re:Invent, I've been exploring patterns for securing LLM-integrated applications. Prompt injection remains the top concern, and OWASP ranks it #1 (LLM01) in their Top 10 for LLM Applications.
In my latest blog post, I walk through building a serverless prompt firewall (API Gateway → Lambda → DynamoDB) that sits between users and your LLM backend. Think WAF-like filtering for LLM inputs:
• Detects instruction overrides + common jailbreak patterns
• Flags/blocks PII in prompts
• Logs every hit to DynamoDB for analysis/trending
This complements managed controls, such as Bedrock Guardrails, with fast, first-pass filtering at the edge, followed by deeper semantic analysis.
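To make that concrete, here is a stripped-down sketch of the Lambda layer (illustrative only: the pattern list, table name, and key schema are my assumptions; the linked lab has the real rules and the Terraform):

import json, re, boto3

# Two illustrative rules; real deployments need far more.
BLOCK_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),  # instruction override
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                      # SSN-shaped PII
]

table = boto3.resource("dynamodb").Table("prompt_firewall_hits")  # assumed table

def handler(event, context):
    prompt = json.loads(event["body"]).get("prompt", "")
    hits = [p.pattern for p in BLOCK_PATTERNS if p.search(prompt)]
    if hits:
        # Log every hit for analysis/trending, then block.
        table.put_item(Item={"prompt": prompt, "hits": hits})
        return {"statusCode": 403, "body": json.dumps({"blocked": hits})}
    # Clean prompts pass through to the LLM backend for deeper analysis.
    return {"statusCode": 200, "body": json.dumps({"allowed": True})}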
Full post + hands-on Terraform lab: https://nineliveszerotrust.com/blog/llm-prompt-injection-firewall/
What tools or patterns are you using to protect against prompt injection?
#AISecurity #AWS #Lambda #LLM #PromptInjection #GenAI #OWASP
People maintaining online forums: have you noticed a reduction in post frequency during the last year?
I recently saw the attached graph for #stackoverflow which made me wonder about a #Discourse forum I maintain. By looking at our stats, I see that there were very few posts after March 2025.
I read about the potential impact of #LLM on other platforms like #wikipedia, and I'm now wondering whether people now ask LLMs for help instead, so post frequency is decreasing.
Have you noticed such changes?
AI's Memorization Crisis - The Atlantic
https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/
> Large language models don’t “learn”—they copy. And that could change everything for the tech industry.
Words fail me as I try to express how disquieting I find this E-mail #OpenAI sent me.
Nobody should be treating #ai #llm software like a therapist, or even a life coach for that matter.
Mega corporations aren't necessarily our enemies but they're most CERTAINLY not our friends either, and neither they nor their garbage spewing inventions should be privy to important things like people's #mentalhealth.
In any kind of sane, just world, this would be illegal.
Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.
All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire data to feed the models. I look at FORPLAN or ChatGPT, and this is what I see.
#AI #GenAI #GenerativeAI #LLM #GPT #ChatGPT #LatentDiffusion #BigData #EcologicalRationality #LessIsMore #Bias #BiasBias
So, "#AI boosted your productivity"? Well, are you a software developer or a factory worker?
Productivity is a measure of predictable output from repetitive processes. It is how much shit your factory floor produces. Of course, once attempts to boost productivity start affecting the quality of your product, things get hairy…
"Productivity" makes no sense for creative work. It makes zero sense for software developers. If your work is defined by productivity, then it makes no sense to use as #LLM to improve it. You can be replaced entirely.
Artists get that. The fact that many software developers don't suggests that the trade took a wrong turn at some point.
Inspired by https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes (via https://23.social/@thomasfricke/115853480750134056).
I've been kicking the tires on claude code by having it write me a self-hosted react podcast app based on my gpodder subscriptions. It's a really simple application but actually useful for me trying to move away from spotify. So far the results have been pretty good. I'm interested to see how well it does when I start layering on more changes.
I have to disagree with this piece by Redis' creator about the consequences of the advancement of #LLM/AI. His conclusion reads:
"But what was the fire inside you, when you coded till night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched."
For me, the fun was about "doing the hard work" and "getting there". So, AI has absolutely taken the fun out of it.
Solving sparse finite element problems on neuromorphic hardware
https://www.nature.com/articles/s42256-025-01143-2
Free preprint https://arxiv.org/abs/2501.10526
Since every move you make solves partial differential equations without you noticing, this is really useful problem solving #ai
And since the number of people who actually understand #FEM (Finite Element Method) is small, this will not get exaggerated the way #llm has been.
Because there are more idiots who can talk than engineers who can calculate
NEW BIML Bibliography entry
https://arxiv.org/abs/2510.21860
Butter-Bench: Evaluating LLM Controlled Robots for Practical Intelligence
Callum Sharrock, et al (andon labs)
Preliminary work on embedded (but not embodied) LLM application in robotics. Shocking but unrepeatable security stress test. Robot as tool.
I'm not sure if it actually has, but if AI has become consistently more reliable than search engine results, then I wonder when the turning point was.
My guess is the release of Gemini 2.5 Pro, or possibly GPT-o1. These were two of the earliest reasoning models.
#AI #ArtificialIntelligence #GenAI #LLM #Tech #Technology #Search #Google #Gemini #OpenAI #MachineLearning #FutureOfTech #Innovation
So, now they know how real creators feel after having been ripped off by "AI"…
https://futurism.com/artificial-intelligence/ai-prompt-plagiarism-art
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
Yet another thing where a locally hosted Ministral-3:14b LLM works rather well and provably saves me time:
Converting Bash scripts that started out well within Bash limitations but now really need to be upgraded to Python.
I still have to find and fix a few things, sure, but the results are mostly correct, and it saves me lots of time.
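For the curious, the loop is roughly this (a sketch: the script name and prompt wording are made up, and client API details may vary by version):

import ollama                 # the Ollama Python client
from pathlib import Path

script = Path("deploy.sh").read_text()   # hypothetical script that outgrew Bash
resp = ollama.generate(
    model="ministral-3:14b",
    prompt="Convert this Bash script to idiomatic Python, "
           "preserving behavior:\n\n" + script,
)
Path("deploy.py").write_text(resp["response"])
# Then review by hand: "mostly correct" still needs the find-and-fix pass.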
@[email protected] the problem is, MI isn't like a drug. it's actually very much like a prosthetic. that's why so many of us are "infected". we're making concessions (using corporate MI services) but the results are too valuable to care about that right now. #ai #llm #ollama #self-hosted #cc0 #public-domain #machine-learning #meshtastic #ipfs #machine-intelligence #consciousness-research
"Can you be my friend?"
- My lonely, medicated, mentally ill associate when I introduced him to #LLM back in 2023
(He used to send me his session logs)
- He also lives in the country, there are no therapists there.
Heartbreaking update from #tailwindcss. They had to lay off 75% of their staff yesterday due to #AI driven losses.
I really don't see how the #OpenSource industry survives this. The days of making your source and your docs open to the public are quickly disappearing.
https://github.com/tailwindlabs/tailwindcss.com/pull/2388#issuecomment-3717222957
Okay, does anyone know if #AliasVault uses #AI #LLM in development? Has the dev team made any kind of statement about AI/LLM - or anything else someone should know about before supporting them?
Since #Bitwarden has decided that Claude code is okay, I'm looking for an alternative. Alias Vault looks solid. It can also import from Bitwarden and #KeePass. The inclusion of email aliases is a bonus.
Shitting hell, getting around this crap gets harder every fucking day…
Idea: A keyboard with a big lever on the side. You type a #LLM prompt and pull the lever to spin the wheels of the slop machine (technically, to send the ENTER key).
I woke up on the spicy side of the jobhunting front today.
https://ko-fi.com/post/On-jobhunting-or-No-I-will-not-train-your-LLMs-f-N4N51RX273
(Like, support, share!)
#writing #kofi #jobhunting #ancienttexts #llm #defense #support
Ah, finally ridding humanity of the effort of actually playing our own videogames. Instead, we can sit in our fully powered gaming chairs, IVs inserted into our veins for nutritional fluids, so we never have to actually chew for nutrition, catheter so we never have to walk across the hallway to the bathroom, our corpulent bodies reclined forever in AI-enhanced comfort.
Interesting blog post on a new-to-me issue with LLMs called "context rot". I was generally aware of issues with models failing as tokens bump up against context window limitations, but there seem to be some unexpected nuances in how those failures occur. Unfortunately the blog spends a bit too much time on marketing fluff.
This report was published yesterday.
Netskope Cloud and Threat Report: 2026 https://www.netskope.com/resources/cloud-and-threat-reports/cloud-and-threat-report-2026
More:
Infosecurity-Magazine: Personal LLM Accounts Drive Shadow AI Data Leak Risks https://www.infosecurity-magazine.com/news/personal-llm-accounts-drive-shadow/ #infosec #LLM
The idea is to make"stolen knowledge graph data useless if incorporated into a GraphRAG AI system without consent."
The Register: Researchers poison stolen data to make AI systems return wrong results https://www.theregister.com/2026/01/06/ai_data_pollution_defense/ @theregister @thomasclaburn #infosec #LLM
Gauging more feelings on #GenerativeAI again. Boosts welcome.
#llm #ai #stochasticParrots #Eliza
| A work made 100% with generative AI is never art.: | 39 |
| Generative AI even in conjunction with human labor decreases the artistic value of a work.: | 36 |
| Generative AI has no bearing on a work's artistic value.: | 10 |
| Generative AI in conjunction with human labor can increase the artistic value of a work.: | 5 |
| A work made 100% with generative AI can be art.: | 3 |
Attention, authors and writers of: #Scifi #fantasy #speculativeFiction #AI #Robot
@cstross PLEASE FORWARD to others
Your work may have been used to train the #LLM used in this reported study. See toot at link. https://mastodon.social/@ErsatzCulture/115838841401363457
The State of Modern AI Text To Speech Systems for Screen Reader Users: The past year has seen an explosion in new text to speech engines based on neural networks, large language models, and machine learning. But has any of this advancement offered anything to those using screen readers? stuff.interfree.ca/2026/01/05/ai-tts-for-screenreaders.html #ai #tts #llm #accessibility #a11y #screenreaders
Cory Doctorow:
"That's why tech bosses are so quick to equate "writing code" with "software engineering" (the latter being a discipline that requires consideration of upstream, downstream and adjacent processes while prioritizing legibility and maintainability by future generations of engineers). A chatbot can produce software routines that perform some well-scoped task, but one thing they *can't* do is maintain the wide, deep "context window" at the heart of software engineering. "
📋 Eval #Testing #LLM Outputs in #PHPUnit - A comprehensive guide for #PHP developers building #AI applications
🎯 Core problem: Traditional unit tests assert exact outputs, but #LLMs produce probabilistic responses - same prompt yields different wording each time
⚖️ Solution: LLM-as-judge pattern - test behavior not strings. Use a smarter model to evaluate whether responses meet criteria written in plain English
🧵 👇
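The core of the pattern, sketched in Python rather than PHP for brevity (the client, judge model, and criteria here are illustrative assumptions, not taken from the guide):

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(response_text: str, criteria: str) -> bool:
    """Ask a (presumably smarter) model whether a response meets plain-English criteria."""
    verdict = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Criteria: {criteria}\n\nResponse: {response_text}\n\n"
                       'Reply with JSON: {"pass": true or false, "reason": "..."}',
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(verdict.choices[0].message.content)["pass"]

# Assert behavior, not exact strings:
output = "I'm sorry, I can't help with that, but here is a safer alternative..."
assert judge(output, "Politely declines and offers an alternative")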
"To reiterate first principles, my main problem with Artificial Intelligence, as its currently sold, is that it’s a lie."
Seamas O'Reilly in @Tupp_ed's (Guest) Gist, today:
https://www.thegist.ie/guest-gist-2026-our-already-rotting-future/
#TheGist #MastoDaoine #AI #GenAI #ML #LLM #ArtificialIntelligence
I had to make this meme, since my engine seems to be ratioing #compute...
#ai #llm #vibecoding #addiction #technopeasant #needfulThings
#CapitalistCunts & the #ImmoralGeeks that work for them.
Hat tip to — https://lemmy.ml/u/sharkfucker420
#AI #LLM #NumbersGoUp #Enshittification
Imagine the FSF was developing a hypothetical software license under the branding of GPLv4 that dealt with the rise of LLMs. Which of the following copyleft features would appeal? #floss #linux #fsf #FreeSoftware #stochasticParrots #llm #eliza #ai #generativeAI #Programming #copyleft
| Any LLM trained on GPLv4 code must also be released alongside the training data under the GPLv4.: | 16 |
| Any code generated by a LLM trained on GPLv4 code is required to be GPLv4.: | 12 |
| The existing suite of licenses are sufficient.: | 3 |
LLMs are trained on massive amounts of FLOSS code; and can on occasion reproduce free & open-source licensed code verbatim, when assisting folks with code.
In most cases, the LLM does not provide licensing or sourcing information, and in many such instances it can facilitate using code without respect for attribution or copyleft requirements.
#linux #bsd #gpl #floss #freesoftware #fsf #ai #programming #llm #stochasticparrots #eliza
| This is morally similar to a human taking code from a FLOSS project and not abiding by the license.: | 47 |
| This can be a violation of the license depending on the amount and context of the LLM generated code: | 39 |
| I don't see any problems with LLM generated code doing this.: | 2 |
| No opinion: | 1 |
In case you need objective arguments on why #LLM agents are unsuitable for deployment in enterprise settings, your argument should be what @Mer__edith calls "The Exponential Decay of Success" 📉
https://media.ccc.de/v/39c3-ai-agent-ai-spy#t=1629
You can't argue against math/physics!
Also highly recommend watching the whole talk, where Meredith Whittaker and Udbhav Tiwari present the increasing erosion of End-2-End encryption and #privacy via #OS-level AI agents.
Epistemia: A front end onto an LLM to evaluate output veracity/guardrails
Personally, I think it unlikely that techbros will put another compute-heavy mod onto their stochastic engines, but it's an interesting read nonetheless, esp. vis-à-vis identifying human #epistemology "fault lines".
Via @metafilter bot.
#llm #guardrails #ethicalAI #regulateAI #AI
Finally read this essay that I saw so highly recommended — and those recommending it were right. Love heartfelt writing? Hate AI slop? Give yourself a treat to start the new year and read this glorious piece by @WeirdWriter if you haven’t done so yet.
https://sightlessscribbles.com/the-colonization-of-confidence/
h/t @mayintoronto
A new epithet for our wondrous AI/LLM present and future.
They are a Binary Laxative. They might increase your output quantity, but at what cost to quality and enjoyment? 🙂🤷♂️
💥 Salesforce pulls back from LLMs, pivots Agentforce to deterministic automation after 4,000 layoffs
「 The market signal is simple: reliability beats novelty. LLMs remain useful for language, summarization, and pattern recognition, but they need scaffolding. The stack that wins blends deterministic automation with models, wrapped in governance and strong data 」
https://completeaitraining.com/news/salesforce-pulls-back-from-llms-pivots-agentforce-to/
Many Linux distros are agnostic on LLM-generated code: unsure whether they ought to be dictating the dev tools contributors use, or even be developing policies on how to incorporate it into their projects responsibly.
Gentoo and ElementaryOS have banned LLM code from their project code entirely. (Though they can do nothing about upstream projects they consume.)
NetBSD has instituted a policy barring any LLM generated code from the entirety of its base system.
AND FreeBSD's draft policy appears to be similar.
The bans on LLM generated code...
#linux #bsd #netbsd #freebsd #gentoo #elementaryOS #floss #stochasticparrots #llm #ai #eliza
| Are a good idea: | 2 |
| Are a bad idea: | 2 |
| Make me curious to try one of these projects out.: | 1 |
| Have decreased my interest in all four projects.: | 1 |
| No Opinion: | 0 |
#Vibecoding 'hack'.
I've been whipping a dead horse for a few prompts with this cog 3D object not rendering correctly. Z-that, normal-this... blah blah blah... I want results.
So I just asked for a test file trying 8 (an arbitrary number) different versions of the render. The #LLM went through the different options, and I just pick the McFatty McFatty burger from the menu, thanks.
Ape smart.
Oh dear. I accidentally commented on a Hacker News story in a way that was somewhat critical of vibe coding. https://news.ycombinator.com/item?id=46421599
I do need AI for:
- Finding cures for diseases
- Offering assistance to people with disabilities
- Predicting the weather
- Fighting against AI networks
I don't need AI for:
- Doing my homework
- Wrecking my business
- Communication
- Every single function on my devices
- Producing endless digital crap
- Scamming people
It seems, however, that the latter is ruining the former
We are being tricked into thinking LLMs are actual intelligence when they are essentially just amazing assumption-making machines.
How will this misconception of Artificial Intelligence harm humanity?
https://substack.com/@cranfordteague/note/c-188263514
#ArtificialIntelligence #LLM #MachineLearning #AIConsciousness #TechEthics
I fine-tuned a 24B parameter cybersecurity LLM. It's free. And I need the security community's help.
After years of benefiting from open security tools, research, and community knowledge, I wanted to give something back.
What I built: nova:24b - a domain-adapted LLM trained specifically for cybersecurity tasks.
The training data (40K+ examples):
- 16,000 examples extracted from 407 security PDFs (threat modeling, cryptography, incident response, adversarial ML)
- ISO 27001:2022 controls (93 Annex A controls with implementation guidance)
- ISO 27005 threat catalog (48 threat categories)
- Energy sector threat database (3,386 scenarios from critical infrastructure analysis)
- Thousands of security Q&A pairs covering vulnerability analysis, secure coding, and compliance
What it's good at:
- Threat modeling and risk assessment
- Control mapping and gap analysis
- Security architecture discussions
- Incident response playbook generation
Where I need help:
The security community has datasets, red team scenarios, and domain knowledge that could make this model genuinely useful for defenders.
If you have:
- Security training data you're willing to share
- Ideas for white hat use cases
- Feedback on what security practitioners actually need from an AI assistant
Reach out. I'll share the model with anyone who wants to test it or contribute.
This isn't a product. It's an experiment in building security AI that serves the community that built my career.
Came across this 2010 blog post about mindfulness in computing, and so many of these behaviors have only intensified to new extremes with LLM usage. So much so that not only is the process of software creation being quickly supplanted by prompts and (stochastic) "search" assemblies, but more generally the kind of mindfulness talked about in the post (here meaning thinking through & solving a problem yourself[1]) is now being openly discouraged by industry and forcefully delegated out to a fuzzy pattern-match search megastructure, producing equally fuzzy results, uncaring of correctness or consequences[2], and requiring more resources than anything else ever built, regardless of problem scope/complexity.
Mindlessness.
Mindnumbness.
https://nf.wh3rd.net/space/posts/2010/08/the-invaluable-trait-of-mindfulness.html
"Later on, I thought about the strange thing that happened when I’d pulled out my phone. A modern smartphone is an impressive computer. My Nexus One is more powerful than my state-of-the-art desktop PC was 10 years ago, and is perfectly capable of factorizing a small number. But I didn’t ask it to. Instead, I told it to make a request that traversed a mobile network (comprised of tens of computers or routers), the open internet (20-50 computers), and into Google’s search infrastructure (thousands). There, in vast indexes, a reference was found to a site that could answer my question. The page at WikiAnswers clearly states “The factors of 91 are 1, 7, 13, and 91.” [...]
My request directly invoked the resources of thousands of computers, and indirectly used the energies of at least two other human beings (plus their supporting infrastructure). All to answer a question that could have been solved by my 8-bit ZX Spectrum (circa 1983) in the blink of an eye, or, simpler still, by thinking about it slightly longer than I had bothered to. I had to laugh at the absurdity of it all.
We do stuff like this with technology all the time. By its very nature, technology makes it easy to solve trivial problems, even if we don’t arrive at the solution by the most efficient (or reliable) means. A solution that works is, more often than not, good enough. Until it isn’t.
A poor algorithm will go unnoticed as long as it is fast enough to run within the available resources. Too often in this industry hardware is used to solve software problems."
[1] This also implies paying attention to resource & infrastructure usage required
[2] Limited Liability Machines: https://social.coop/@shauna/115787899531998860
I feel like we don't talk enough about how degrading the LLM chatbots are to interact with. If you're corresponding with them in writing, it may take a few back-and-forths to know you're entering search prompts into a piece of software instead of conversing with a human being, and in the meantime you're making this sincere effort to be friendly and respectful and empathetic, only to realize shortly later that your human decency was wasted on an algorithm. This timeline is so gross.
From Coverage to Causes: Data-Centric Fuzzing for JavaScript Engines:
(paper) https://arxiv.org/pdf/2512.18102
(project) https://github.com/KKGanguly/DataCentricFuzzJS
#AI #LLM #noAI #genAI you are all dumb as rocks and shooting yourselves in the foot, regardless of what your goals are, by painting in broad strokes against vague software concepts instead of against bad individuals and bad usages/implementations, when the technologies could be, and objectively already are, helping humanity forward
and I'm tired of pretending it's not, just to avoid being cancelled by empty-brained bandwagoners that have about as much nuance as a thermonuclear warhead
Mass production of food or clothes is a good idea.
And it makes sense: You can lower the production costs, ensure a certain level of quality, etc.
Mass production of software is a bad idea.
Because software is not food or clothes, it's the blueprint of food or clothes.
And that actually lowers the quality. Because there is not a single production line (for e.g. bread) that you have to monitor for quality problems, but a production line for production lines where some are almost guaranteed to have quality problems.
Unless, of course, you solve the underlying meta-problem: Generating blueprints in a way that the production lines built from those blueprints will create products that have a certain level of quality.
Is that solvable? I think so.
Is it solved? Definitely not.
Is the solution close? Relative to human history: Yes.
Nice!
I'm testing Claude Code using the linux-mcp-server to read system information directly from my local machine.
This is a huge enabler for LLM-assisted troubleshooting. It can read logs, check the journal, inspect systemd units, and pull hardware info in real-time.
Repo: https://github.com/rhel-lightspeed/linux-mcp-server
Great Fedora writeup: https://fedoramagazine.org/find-out-how-your-fedora-system-really-feels-with-the-linux-mcp-server/
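For anyone wanting to reproduce this: Claude Code can pick up MCP servers from a project-level .mcp.json file. A sketch of what the entry might look like (the command and args here are my guesses; check the repo's README for the actual invocation):

{
  "mcpServers": {
    "linux": {
      "command": "uvx",
      "args": ["linux-mcp-server"]
    }
  }
}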
#Linux #Fedora #LLM #AI #OpenSource #DevOps #ClaudeCode #MCP
216.73.216.0/22 is reserved for that #Anthropic plague, on AWS. 🔥 So, it's already done:
ufw prepend deny from 216.73.216.0/22 comment 'AWS-Anthropic go to hell'
🍯 But I'm still very tempted to implement the suggested solution, with the little gift, to handle others that don't respect robots.txt. I want to try to put something like that together during the holidays.
Can AI (LLM) help cure gaming burnout? It seems it can, but if implemented correctly. Here's my little AI experiment that ended with great results.
If you're exhausted by the state of video gaming today, can't find reasons to play anymore, this might be for you. This one is particularly dedicated to those who grind their teeth at the mere mention of AI.
If you have any questions, ask.
https://arnel.bearblog.dev/ai-gaming-experiment/
#Blogs #Blogging #BearBlog #LongRead #Gaming #VideoGames #DeadSpace #HorsesWTF #Millennials #Burnout #Steam #PlayStation #AI #LLM
"I announced my divorce on Instagram and then AI impersonated me."
https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
"LLMs are structurally indifferent to the truth"
Academic journals are being invented, and now their citations contain bullshit #AI citations, giving legitimacy to a string of bullshit papers... Flooding the zone with shit.
The cynic in me thinks this could be deliberate - by those who want to make the 'post-truth age' a reality.
Lots of things that existed before probably would have been called AI back then if they had today's marketing teams. Indeed, some photo editing tools that existed before AI was "invented as a term" are now called AI. Bottom line is, #AI is primarily a #marketing term. #LLM at least has a specific meaning.
Today's next interesting iteration in #LLM shenanigans:
The #FOSS and #claudecode extension project below is licensed as #AGPLv3, which, given what it claims to be, appears more than reasonable to me. It also has a commercial license available. So nothing new here.
However, here's the fun part from the ReadMe: "No Closed-Source Usage: You cannot use this software in proprietary/closed-source projects without open-sourcing your entire project under AGPL-3.0."
Do we now argue that this is more of an #IDE, or more of your #bison/flex-style code generator, just without the common generator exception?
In a more extended round of arguments:
Now what do we make of this? I would err more towards a) it's enforceable b) 🍿
#jurabubble for the German opinions
#norge for the package authors jurisdiction
I detest today's "AI" / LLMs [1] for many reasons, primarily ethical and moral. At the same time, I am fascinated by the misuse of said generative LLMs.
In particular, I have seen a number of essays recently describing people misusing LLM output or relying on the LLM to produce most or all of an #assignment of some sort - even when they know they mustn't use the LLM in this fashion.
Most of these have been in the context of #student use of LLMs to write assignments, even when they have been warned not to. One particularly egregious example was described by a university #professor, who created an assignment in which it was easy to tell if the submitted work had been created with an LLM or not. A majority of the students - I don't recall the exact proportion, but it was something like 75%, a supermajority - used an LLM. He discussed this with the class, and had those who had generated the assignment with an LLM write a short essay (or apologia) -- and then found that something like half of them had used an LLM to do *that* assigned work.
Some professors have described their students as having their #thinking, #language, and #analysis #skills atrophy to the point of inability to do even basic work. It seems to me that it is like #addiction, in that they keep doing it despite knowing it is (a) forbidden, (b) easily detected, and (c) self-destructive.
1/x
[1] LLMs are in no way Artificial Intelligence. Calling them "AI" is a category error.
😎 Run local AI models on your iPhone
AnywAIr, which is a play on the word “anywhere”, is a nifty little iOS app that lets you play with AI models – regardless of whether you have an internet connection. It offers custom themes, a plethora of tools and games, and all of the local AI models you could want to mess with.
https://9to5mac.com/2025/12/20/indie-app-spotlight-anywair-lets-you-play-with-local-ai-models-on-your-iphone/
#AnywAIr #ai #artificialintelligence #llm #ios #iphone #llama #gemma #mlx
Top Selling Games (2025) on Steam with Confirmed AI/LLM Usage:
Arc Raiders
The Finals
Stellaris
inZOI
Liar's Bar
My Summer Car
They are replacing real artists. Many of these are for voice acting, some visual assets and LLM use.
Please do not support these games in any way. If you are able to get a refund, you should. Tell them it is wrong in your Steam reviews.
This is wrong. Take a stand against bad behavior.
#noai #ai #llm #Steam #Valve #videogames #LinuxGaming #ArcRaiders #TheFinals #Stellaris #inZOI #LiarsBar #MySummerCar
”The problem with generative AI has always been that … it’s statistics without comprehension.”
—Gary Marcus
https://garymarcus.substack.com/p/new-ways-to-corrupt-llms
#ai #generativeai #llm #llms
NEW BLOG POST!
You may be tired of hearing about AI (I know I am). I however spent some time these last few weeks running and testing small and local LLMs, and in this article, I want to share how I now use them, and how you can too, no matter how beefy your computer is.
You'll hear about Ollama, a Python CLI called "llm", and the "sllm.nvim" Neovim plugin.
There are two parts to the article: a first one, technical, and a second one focusing more on the AI bubble, the environmental costs, and the true benefits (if any) of online AI tools.
Check it out and if you have any comment, please let me know :)
https://zoug.fr/local-llms-potato-computers/
#neovim #ollama #llm #llms #ai #artificialintelligence #python #local
I guess I won't be reading Al Jazeera any more. *sigh*
"Al Jazeera Media Network says initiative will shift role of AI ‘from passive tool to active partner in journalism’"
https://www.aljazeera.com/news/2025/12/21/al-jazeera-launches-new-integrative-ai-model-the-core
A quick thing you can do if you want to restrict or limit #LLM training on your content - or the opposite, allow it under specific conditions (e.g. attribution).
Host a /license.xml and add a
License: https://krvtz.net/license.xml
line to your /robots.txt.
Sample license.xml banning any LLM learning:
<rsl xmlns="https://rslstandard.org/rsl">
<content url="/">
<license>
<prohibits type="usage">ai-train ai-input</prohibits>
</license>
</content>
</rsl>
Sample license.xml allowing LLM learning on CC-BY attribution basis:
<rsl xmlns="https://rslstandard.org/rsl">
<content url="/">
<license>
<permits type="usage">all</permits>
<payment type="attribution">
<standard>https://creativecommons.org/licenses/by/4.0/</standard>
</payment>
</license>
</content>
</rsl>
Live example: https://krvtz.net/robots.txt
Full standard: https://rslstandard.org/guide/getting-started
One quick way to lose my respect is to say "we don't know whether AI is gaining consciousness"
WITH A STRAIGHT FACE.
Oh. My. God. Yes we do. Large language models are not intelligent. They are not conscious.
If this is difficult for you (or someone you know) to understand, please read the book AI Snake Oil by Arvind Narayanan and Sayash Kapoor.
#ai #chatgpt #llm #artificialintelligence #openai #marketing #capitalism #gemini #technofeudalism
Anthropic let their LLM manage a vending machine, with predictably hilarious results: https://archive.is/2025.12.19-114657/https://www.wsj.com/tech/ai/anthropic-claude-ai-vending-machine-agent-b7e84e34
This seems like a response to the hypothetical "vending bench" experiment, with the main changes being that they actually stocked an IRL vending machine, and they had a second LLM agent "supervise" the first one.
I really don't understand this idea of fixing the flaws in LLMs by adding more LLMs. This does get you incrementally closer to good results by self correcting some fraction of mistakes, but it doesn't do anything about the fundamental limitation here: LLMs don't think! Also, each time you do this, you double the cost for marginal benefit.
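A toy calculation of that cost/benefit trade (the numbers are pure assumptions):

# If one pass is right 80% of the time and each reviewer pass
# fixes half of the remaining errors, returns shrink while cost
# grows linearly:
p, fix_rate = 0.80, 0.5
for passes in range(1, 5):
    print(f"{passes} pass(es): success ~{p:.3f}, cost {passes}x")
    p += (1 - p) * fix_rate
# 1x -> 0.800, 2x -> 0.900, 3x -> 0.950, 4x -> 0.975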
Surely folks must realize this isn't a serious solution? At best it feels like a lazy way to forestall failure, but it's also very popular.
It's quite sobering to read that people bother others on the Fediverse who clearly want nothing to do with LLM slop.
I've also seen something slop-related creep into Firefox, which I immediately stopped, of course.
#LLM #AI #slop #miscreant #technology #Mozilla #Firefox #bot
That brain you were born with - does it still work?
"Studies show that large language models do more than simply pass along information. Their responses can subtly highlight certain viewpoints while minimizing others, often without users realizing it."
The Conversation: People are getting their news from AI – and it’s altering their views https://theconversation.com/people-are-getting-their-news-from-ai-and-its-altering-their-views-269354 @TheConversationUS
Related research, from October: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society https://ojs.aaai.org/index.php/AIES/article/view/36552 #LLM
Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash | Business | The Guardian
https://www.theguardian.com/australia-news/2025/dec/19/proposal-australian-copyrighted-material-train-ai-abandoned-after-backlash
Good!
I fortunately haven't been in this position, but I imagine that proofreading #LLM -generated texts must be a miserable experience.
With texts written by human authors, you can usually contact them and ask them: "What were your thought processes when using this sentence?" Since LLM do not "think", you cannot interrogate them on this, and thus it falls to you to give meaning to their words.
Furthermore, with human-written texts, both the original author and the proofreader share responsibility for the quality of the text - with the author having the bulk of the responsibility, while the proofreader is responsible for polishing it.
But since LLM are intended as a tool for evading responsibility in its entirety, all the responsibility for the quality of the text lies with the proofreader! And who would want that for LLM-generated slop?