I love and hate that I can make a complex, animated, interactive data visualization like this with matplotlib.
I can pick any trial from my experiment and play back the simulation in full detail. I can jump around, pause, and resume just by clicking on the figure. I can see nearly the full state of all the agents and their fitness as they evolve over time. It's very information dense, but useful, and it actually looks pretty decent!
On the other hand, the code is horrifying. Matplotlib has got to have one of the worst APIs of all time, and the animation tools are particularly gnarly.
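For the curious: the click-to-pause/resume part is genuinely only a few lines once you find the right incantation. A minimal sketch with fake data, nothing like the real figure (note that anim.pause()/anim.resume() need matplotlib 3.4 or newer):

```python
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np

# Fake per-frame state standing in for the real simulation playback.
rng = np.random.default_rng(0)
data = rng.random((200, 10))  # 200 frames, 10 agents

fig, ax = plt.subplots()
line, = ax.plot(data[0])
paused = False

def update(frame):
    # Redraw one frame of the playback.
    line.set_ydata(data[frame])
    ax.set_title(f"frame {frame}")
    return (line,)

anim = animation.FuncAnimation(fig, update, frames=len(data), interval=50)

def on_click(event):
    # Any mouse click in the figure toggles pause/resume.
    global paused
    anim.resume() if paused else anim.pause()
    paused = not paused

fig.canvas.mpl_connect("button_press_event", on_click)
plt.show()
```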
I once had a gnuplot script that parsed the output log of the coev process, made a plot (replacing the contents of the gnuplot output window with the new plot), and then re-loaded itself to do it all over again. Modern tools might be gnarly but they are meaningfully less gnarly, it seems to me!

> A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions.

(from https://buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/)
This is probably the inevitable endgame of FitBit and other "measured life" technologies. It isn't about health; it's about mass managing bodies. It's a short hop from there to mass managing minds, which this "psychologized" technology is already being deployed to do (AI therapists and whatnot). Fully corporatized human resource management for the leisure class (you and I are not the intended beneficiaries, to be clear; we're the mass).
Neural implants would finish the job, I guess. It's interesting how the tech sector pushes its tech closer and closer to the physical head and face. Eventually the push to penetrate the head (e.g. Neuralink) should intensify. Always with some attached promise of convenience, privilege, wealth, freedom, of course.
#AI #GenAI #GenerativeAI #LLM #OpenAI #ChatGPT #health #HealthTech
@abucci I would rather break both arms lengthwise than have a fucking Neuralink implanted.
@abucci I would *still* rather break both arms lengthwise. There is not enough money in the world to pay me to get a Neuralink installed.
@abucci Very interested in your post. I've done a lot on my podcast (aiGED) to help the 65+ crowd know how to use and not use AI on health and medical issues.
An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.
These facts are not specific to images, videos, or 3-d models of cats. They are necessary features of digital computers, even in theory. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two, and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.
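To put that cardinality gap in symbols (a standard fact, not mine): every computable real is named by some finite program, and there are only countably many finite programs, while Cantor's diagonal argument shows the reals are uncountable:

```latex
|\{\, x \in \mathbb{R} : x \text{ is computable} \,\}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}|
```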
Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.
Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized, applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.
With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?
This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.
@abucci don't want to spoil this, but the real world is discrete on a fundamental level. You don't have, for example, continuous energy states for things like single electrons. On the other hand our brains also only contain a finite number of cells to process and store information. So, I think your argument does not disprove the possibility for artificial intelligence.
That said, I don't think that LLMs are capable of intelligence.
I'm well aware of quantum mechanics, quantum field theory and so on. I'm no physicist, but my view is that quantization is not discretization of space and time, even though it's common to see folks confuse the two. The "basic stuff" of physics is continuous, even as phenomena we care about might not be. Likewise, a continuous guitar string has vibrational modes that are discrete; this fact doesn't mean the guitar string is discrete in nature nor that it vibrates in a discrete time the way a computer does.
Here are some examples illustrating why I say that:
The Schroedinger equation happens over continua: the wave function isn't over the natural numbers or integers, but over topological manifolds. This means in particular that if the wave represents a particle's possible positions, those positions could be anywhere in a continuous space. The wave function evolves according to continuous time, not a chunky time like a computer.
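Concretely (textbook form, single particle): the wave function is defined on a continuum of positions and evolves in a continuum of times; only the spectrum of the Hamiltonian brings in anything discrete.

```latex
i\hbar \,\frac{\partial}{\partial t}\,\psi(\mathbf{x},t) \;=\; \hat{H}\,\psi(\mathbf{x},t),
\qquad
\psi : \mathbb{R}^3 \times \mathbb{R} \to \mathbb{C}
```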
The strings in string theory (at least the variations of string theory I'm aware of) are continuous entities existing in a continuous space. They vibrate in different modes, which give rise to discrete phenomena analogous to guitar strings, but they can be located anywhere in a continuous space, and they vibrate in continuous time.
A free electron moving through a vacuum doesn't cha-chunk cha-chunk from one discrete point to the next; it moves smoothly along, as far as we know. The electron's energy comes in discrete increments, which affects such things as which orbitals around a nucleus it can occupy, but the orbitals themselves are probability distributions over a continuous space: the theory tells us where the electron is most likely to be, not that it is 100% forbidden from being in certain places. That, too, is continuous in nature.
Something like causal set theory is a proper discretization of physics. In that, the physical world really is a discrete set of points, and what looks to us like continuity is only an appearance that emerges from our vast size relative to the Planck scale. But causal set theory and its relatives are not accepted as standard physics. Perhaps they will be some day, but that day is not today.
I hope that helps clarify where I'm coming from.
@abucci first, let me thank you for taking the time to explain. Please forgive my unintended rudeness. I did not want to come across as condescending.
I do disagree with you on the point of everything having a continuous base underneath. But I will have to read up on that.
Disagreement aside, you gave me something interesting to think about. Thank you for the input.
I need US politicians to pay attention to the Polish PM's warning here:
> “an attempt to take over [part of] a Nato member state by another Nato member state would be a political disaster,” and “the end of the world as we know it.”
@baldur
When I meet someone from Iceland online, I'm always reminded that the powerful don't always win...
I was awake, watching the game.
Greetings from a Brazilian girl.
~
https://www.youtube.com/watch?v=Iplx6S_sz0U
@baldur
Or what the former chief of defence staff Sverre Diesen wrote in an op ed in a conservative paper dn.no recently: ….
Good morning from Kennebunk! Cloudy with a chance of showers this morning, then partly sunny this afternoon. Areas of fog this morning, then patchy fog this afternoon. Visibility one quarter mile or less at times this morning. Highs in the mid 40s. South winds around 10 mph. Chance of rain 50 percent. There is 1 watch, warning, or advisory in effect. #MEwx
AI industry insiders launch site to poison the data that feeds them: https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/
Poison Fountain starts with "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species". This is a tarball of wrong. (1)
The rest of the website is absurd, and the "Poison Fountain Usage" list doesn't make any sense. There are far more efficient and safer ways to poison data that don't require you to proxy content for an unknown third party. Some of these are implemented in software, as opposed to <ul> in HTML. That bullet list reads like an amateur riffing on what they read about AI web scrapers, not like industry insiders with detailed information about how training works.
Recommend viewing the top level https://rnsaffn.com , which I suspect The Register may not have done.
The Register:
> Our source said that the goal of the project is to make people aware of AI's Achilles' Heel – the ease with which models can be poisoned – and to encourage people to construct information weapons of their own.

Data poisoning is not easy, Anthropic's "article" notwithstanding. Why would we trust Anthropic to publicly reveal ways to subvert their technology anyway?
None of this passes a smell test. Crithype (and poor fact checking, it seems) from The Register it is.
#AI #GenAI #GenerativeAI #Anthropic #PoisonFountain #UncriticalReporting #crithype #TheRegister
This is not really a call for advice--I have a thousand and one strategies, both my own and ones suggested to me by mentors and colleagues past--though I love to collect hot tips about how to write more effectively so I'm not opposed either. Mostly just venting because the other window is a blank page.
#motivation #writing #AcademicPublishing #AcademicWriting #TechnicalWriting
I have to confess that another dimension of this is the piss poor state of publishing. I find APCs deeply offensive and don't want to face them, but I don't intend to sign away copyright to a predatory publisher either. Something like 12 of my papers are in the libgen database and I'm not keen to stuff a 13th into the blender (then again, good luck trying to make sense of this one with an LLM lol). As a graduate student I found publishing a challenging but exciting opportunity to communicate with peers. Now I find it to be an onerous chore of navigating smarmy vampiric middlemen to get at the valuable things they unjustly control (distribution channels mainly). I feel compelled to publish this one in conventional-ish channels because it's reporting on grant-funded work and I think it merits, and I'd like to have, an archival record.
Wah wah wah. Like I said, doom loop.
#motivation #writing #AcademicPublishing #AcademicWriting #TechnicalWriting
> In the Greco-Roman worldview, the sea forms a permeable boundary between the realms of humans, the gods, and the dead. This article demonstrates that seabirds embody the connective role of the sea in Greco-Roman mythology. Seabirds nest on land, feed by diving into water, and fly in the air. Therefore, these birds are imagined connecting the world of mortals with that of the dead and the gods, and they illustrate the transitions humans live through in their interactions with the gods and their experience of death.

Here: https://muse.jhu.edu/pub/2/article/979701
I find it fascinating that some of the arguments about the role birds play in Greek myths become visible in such analyses. For instance, one of the things we've observed in the data is that the words/concepts "female" and "metamorphosis" appearing in a myth fragment are strongly associated with some form of "diving into the sea" also appearing (the metamorphoses in question often being death-related, and diving into the sea, as the above paper argues, represents death to the ancient Greeks).
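Here's a toy version of that kind of check, with invented fragments and tags standing in for our data (the real analysis uses FCA/rough-set machinery, not hand-rolled set operations):

```python
# Hypothetical miniature of the association check: does every myth
# fragment tagged with both "female" and "metamorphosis" also carry
# "diving into the sea"? Fragment names and tags here are made up.
fragments = {
    "fragment A": {"female", "metamorphosis", "diving", "bird"},
    "fragment B": {"metamorphosis", "diving", "bird"},
    "fragment C": {"metamorphosis", "bird"},
    "fragment D": {"female", "metamorphosis", "diving"},
}

premise = {"female", "metamorphosis"}
conclusion = {"diving"}

having_premise = [f for f, tags in fragments.items() if premise <= tags]
confirming = [f for f in having_premise if conclusion <= fragments[f]]

# support = how often the premise occurs; confidence = how often the
# conclusion also holds when it does.
print(f"support: {len(having_premise)}, "
      f"confidence: {len(confirming)}/{len(having_premise)}")
```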
#writing #AcademicPublishing #AcademicWriting #AncientGreece #GreekMythology #birds #death #FormalConceptAnalysis #FCA #RoughSetTheory #RST
> Computer theorists thus form a neo-mechanistic school of philosophy. Their tenacious defense of some grossly exaggerated claims of what computers can and will do is more understandable if we realize that they represent a school of metaphysics.

From Epistemology, the Mind and the Computer, Henryk Skolimowski, 1972.
#AI #ComputerScience #CognitiveScience #mind #PhilosophyOfMind
@abucci What does Skolimowski mean by "neo-mechanistic"?
> I shall call computer universalism (or neo-mechanism) the thesis which claims that the computer will ultimately be able to perform all the functions that human beings perform. This universalism is based on a reductive programme. It reduces the human being to the functions of the mind; the mind to the brain; the brain to its neurophysiological structure; and this structure is then isomorphized by the structure of the computer.
> Rationality could thus be defined as a liberation from biological necessity. Rationality is thus a faculty which enables the animal who possesses it to even turn against the biological forces which sustain the life of this animal. The only animals who possess this faculty are human beings. No possible analog can be found among other animals or plants; their biology must be unconditionally obeyed.

"liberation from biological necessity" sounds like Physiocratic economic thinking (I've found you should always poke around for the economics when someone's talking about "rationality"). "No possible analog can be found among other animals or plants" seems like Enlightenment-era wishful thinking.
This 2023 post from @baldur is still prescient: https://www.baldurbjarnason.com/letters/llmentalist/
I saw a post from a prominent tech person this morning basically saying that they now see why people are excited about “modern” llm coding tools. Because it pauses and spits out a rationale, they “feel in control”.
But that’s the illusion. None of the fundamentals of LLMs have changed, they’ve just poured massive effort into making the illusion more convincing.
@cthos @baldur I'm not by any means an expert, but as far as I understand it they've changed the training strategy so that rather than doing RL on autocompleting to something that looks like it comes from the training corpus, they now do RL on completing programming tasks that can be verified at training time. This is why they got a lot better at solving programming problems (without also getting better at obscure reptile trivia or giving health advice).
(I'm not personally excited about them at all; I am terribly depressed about them.)
@cthos @baldur In ancient Greece, people were just pneumatic engines, in late medieval France they were just intricate clockwork mechanisms, etc. When I was a child they were just stored-program computers, now they're just statistical models.
You'd almost think that we were a species of toolmakers who get very excited about our tools.
@cthos @baldur Sure it is, but even as someone who really dislikes these things and strongly prefers not to use them*, I have to concede that they've gotten better at actually solving (some) real-world tasks - RL isn't magic, but it *can* be quite effective at making better statistical weights. I regularly see people make non-trivial (but not exactly load-bearing) software with them, things LLMs absolutely couldn't do a year ago.
*) Aside from the admittedly self-serving motivation that I enjoy coding and really don't like ceding one of my favourite activities to chatbots: I'm pretty much *exactly* the kind of person who would get sucked into the deep end and get my mental health absolutely destroyed. I have OCD, and I'm a former addict. Interactive LLMs are *built* to get people like me hooked. I spent the last ten years painfully learning that ML recommender systems can't tell the difference between an exciting new interest/sales opportunity and a terrible compulsive episode. I dread when upper management introduces mandatory LLM use, because that will be the ruin of what remains of my mental health.
@datarama I'm going to untag Baldur because I feel bad about usurping his mentions.
But yes, part of my objection to LLMs is that they use RLHF to nudge weights in a direction they want to go, and to make it likely to spit out sycophantic responses. Using underpaid / exploited labor from the global south.
But that doesn't change that it's a stochastic process that only incidentally spits out things that we perceive to be "useful".
And yeah man, I firmly believe they are bad for mental health
@cthos Earlier today, I Googled* something and their "AI overview" spat out a bunch of complete horseshit about something I happen to know about (it claimed that movable eyelids are a rare trait in lizards, but actual reality is that only geckos - and not even all geckos - *don't* have them). It gave references to web pages that said nothing of the sort. Some non-English queries that it used to bungle badly have apparently been given a hit with the RLHF hammer, but if I change the wording they still get bungled badly, which demonstrates to me that they don't "understand" things in any way that even resembles human understanding. (More evidence of this is that they can solve logic puzzles and answer factual questions correctly in English, but completely and utterly screw up exactly the same queries in other languages, while maintaining a confident and authoritative tone.) A friend of mine has had Opus 4.5 tell him completely ridiculous things about some garbage collection algorithms. I don't think this problem is ever going to go away. Language can express descriptions of reality, but reality is far more than just language (and language can readily describe unreality), so I strongly believe that this behaviour is intrinsic to how language models work.
But at this point, I think it's hard to deny that they can be useful in programming. They may not be *as* useful as boosters claim, and there may be so many terrible externalities to them that "are they useful" shouldn't be the first question we asked anyway, but it's a fact that that dumb stochastic process can emit code that compiles, passes tests and does useful work. It doesn't *always* do that, and how often it does it depends on where in the training set data distribution you're working, but it does it *enough* that - even as much as I loathe them and the people pushing them - I can't really deny that they can sometimes effectively reduce the amount of labour required to do some programming tasks.
*) I default to using DDG, but I still have muscle memory that I start search queries with "g" in my browser. I should switch Google to a more inaccessible prefix.
@datarama Oh I absolutely believe that even if they *were* useful it doesn’t matter because of all the terrible externalities.
That said I still do not believe they are actually net “useful”. I think they create the illusion in their users that they are going faster; in the sense that they churn out code that probably compiles, maybe they are, but faster != more productive or better. Especially coupled with the deleterious effects on the operator.
@cthos I don't know. I've seen examples that go both ways. I've seen someone very quickly vibe-code something that then required several months of tweaking to get to actually work well enough that it could be used. I've also seen internal tools get spun up that provide actual value and would not have existed without LLMs. One example, about a year old, is a Linux programmer with nearly no knowledge of Windows who used an LLM to write a native Windows GUI for a testing tool in an afternoon, something that would *definitely* have taken a lot longer if he had to do it by hand - and also he wouldn't have done it by hand at all, because it was too tangential to his core responsibilities for a large time investment. An LLM made this time investment small enough that he just went ahead and did it.
@datarama That's the same argument that Mr. Willison regularly makes: "It's helping me make things I'd not make otherwise" and like, okay, but I'm still not sure that you're net gaining on that front. You're certainly not learning more about the windows GUI ecosystem, and if it's an internal tool .... those tend to quickly become load bearing and if you're rolling out something with subtle bugs....
I dunno, maybe I just weight "learning" much more highly on the usefulness scale.
@cthos It was used to test the behaviour of some new hardware we'd made, and was made specifically so hardware engineers could take on a larger part of the testing and diagnosis process rather than twiddling their thumbs and waiting for us to respond. My colleague wrote all the fiddly lowlevel measurement, verification and diagnostics parts by hand, but had an LLM generate most of the Windows GUI for it (translating from idioms and API for a Linux UI toolkit he was familiar with, but I don't know which). It worked well, in the sense that the software-related bugs we got reported by the hardware people were of higher quality (because they had much more information to work off of, and could therefore also troubleshoot a lot more things within their own area).
A couple of caveats: That UI was ugly as sin, its entire "user experience" worked from the assumption that the only people who would ever use it were hardware engineers in a specific company working on a specific product, it had no considerations for accessibility or internationalization, and would be an embarrassment to put out into the public. The *only* consideration in that UI was that it displayed values correctly. It's no longer in use, because there is no need for it anymore.
@datarama Yeah, what I'd have done in that position is used a cross-platform UI toolkit if I didn't have the investment time to learn native windows UI idioms, because I'd still get more utility out of having the UI toolkit knowledge added to my toolkit rather than having an LLM do most of it for me. The gained skill is part of the overall equation.
But like, I understand I have a privilege that I can invest more time in stuff than others. I'm not trying to dunk on your colleague.
I was reviewing some older notes of mine from the event. This one stood out at the time and still does:
> Meghan Wiessner and Nathan Ensmenger talked about FORPLAN, a large linear programming model the US forestry service used to generate forest use plans. They both noted its complexity and its shortcomings, how it did not take account of local knowledge and otherwise oversimplified forestry, and how it was divisive.

If you've never come across FORPLAN I recommend looking it up (this is good if you're OK with technical reports). It went into use in late 1979 and was controversial from the beginning. It relied on (then) large-scale linear programming methods to determine how to manage the US's forests. Like so many efforts before and since, it set aside expert and/or local knowledge of the domain, made horrendous miscalculations, yet was treated as if it were making divine proclamations that must be followed. One of the early critics of it started a libertarian blog called the Antiplanner to argue against government land-use planning.
Anyway:
ChatGPT and related applications are presented as inevitable and unquestionably good. However, Herbert Simon’s bounded rationality, especially in its more modern guise of ecological rationality, stresses the prevalence of “less is more” phenomena, while scholars like Arvind Narayanan (How to Recognize AI Snake Oil) speak directly to AI itself. Briefly, there are times when simpler models, trained on less data, constitute demonstrably better systems than complex models trained on large data sets. Narayanan, following Joseph Weizenbaum, argues that tasks involving human judgment have this quality. If creating useful tools for such tasks were truly the intended goal, one would reject complex models like GPT and their massive data sets, preferring simpler, less data-intensive, and better-performing alternatives. In fact one would reject GPT on the same grounds that less well-trained versions of GPT are rejected in favor of more well-trained ones during the training of GPT itself.

How then do we explain the push to use GPT in producing art, making health care decisions, or advising the legal system, all areas requiring sensitive human judgment? One wonders whether models like GPT were never meant to be optimal in the technical sense after all, but rather in a metaphysical sense. In this view an optimized AI model is not a tool but a Platonic ideal that messy human data only approximates during optimization. As a sculptor with well-aimed chisel blows knocks chips off a marble block to reveal the statuesque human form hidden within, so the technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within. Recent news reporting that OpenAI requires more text data than currently exists to perfect its GPT models adds additional weight to the claim that generative AI practitioners seek the ideal, not the real.
Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.
All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire the data to feed the models. I look at FORPLAN or ChatGPT, and this is what I see.
#AI #GenAI #GenerativeAI #LLM #GPT #ChatGPT #OpenAI #Galatea #Pygmalion #LatentDiffusion #BigData #EcologicalRationality #LessIsMore #Bias #BiasBias
@abucci The UK's National Coal Board applied similarly complex but also somewhat naive LP models to decisions about which coal mines to close in the 1960s, when the industry was being run down - over 500 closed between 1953 and 1976 when there were still some 300+ mines working.
When I joined them in the mid-1970s the LP approach had been abandoned.
I had a look in the National Archive for more detail, but the Operational Research Executive reports from back then have yet to be digitised.
ORE did continue to use LP and related optimisation models for mining supplies tender allocation - a much better defined problem - into the 1980s.
@marjolica @abucci do you recall what the problems were? (At a high level)
I would also love to read those reports if you find them
@j2kun @abucci I'm afraid I don't have any detail as to the formulation of the LP model that the NCB used to decide colliery closures. When I joined the NCB's Operational Research Executive in 1973 there were still people working in the same office as me who had worked on it. My recollection is that I was told it was an LP using a simplex method and didn't include Mixed Integer Programming, which is what I would have expected to use to model binary decisions such as whether to keep or close a number of collieries.
However I doubt it would have been possible to model 300-500 collieries in enough detail to make good decisions at the individual colliery level - though at that time there was much less understanding of the obvious limitations of LP, in particular how easy it was to perturb solutions.
Of course at that time the computing power we had was also pretty limited, and even in the 1980s my colleagues were still using heuristic search for the much smaller tender allocation problem I also mentioned, even though by that stage they could have used MIP (after I left the industry and became a university lecturer I did, as an exercise, come up with an appropriate MIP formulation, but by that time the industry had so shrunk that it would have had no practical application).
There were a lot of interesting talks, and the program is worth a skim. I was in panel 6. I identified a hypothetical risk that the recent rush to deploy generative AI, with its associated pressure on the electric power and water distribution systems, brings with it. Roughly, with the rise of so-called "industry 4.0" (think smart toaster, but for factories), our critical infrastructure systems are becoming tightly woven together. Besides the increasing dependence on the electric grid there is a growing dependence across sectors on data centers and the internet driven to a large degree by generative AI. What this means riskwise is that faults and failures in one of these systems can "percolate" much more quickly to other infrastructure systems--essentially there are more paths a failure can follow. What in the past might have been a localized failure of one or a few components in one system can become a region-wide multi-sector cascading failure. So for instance a local power failure at a substation might take down a data center that runs the SCADA system used to control a compressor station in the natural gas distribution system, which then might go sideways or fail and cause a natural gas shortage at a natural gas fueled power generator, and so on and so on. Obviously it was always possible for faults and failures in one system to cause faults and failures in another. What's new is that the growing set of new pathways increases the probability that such a jump occurs. What I called out in the talk is that as this interweaving trend continues, we will eventually cross a percolation threshold, after which the faults in these infrastructure systems will take on a different (and in my view much more dangerous) character.
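As a toy illustration of the threshold (just bond percolation on a random graph, not a model of any real infrastructure): treat components as nodes and couplings as edges, seed a single failure, and let it spread. Below the percolation threshold cascades stay local; above it they abruptly engulf much of the network.

```python
import random

def cascade_size(n, p_link, seed):
    """How far does one failure spread in a random coupling graph?"""
    rng = random.Random(seed)
    # Random interdependency graph: an edge means "failure can propagate".
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_link:
                adj[i].append(j)
                adj[j].append(i)
    # Fail one random component, then propagate along all couplings.
    start = rng.randrange(n)
    failed, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nbr in adj[node]:
            if nbr not in failed:
                failed.add(nbr)
                frontier.append(nbr)
    return len(failed)

# For n=200 the threshold sits near p = 1/n = 0.005: mean cascade
# size stays tiny below it and jumps sharply above it.
for p in (0.002, 0.004, 0.006, 0.008, 0.010):
    sizes = [cascade_size(200, p, seed=s) for s in range(30)]
    print(f"p={p:.3f}  mean cascade size = {sum(sizes)/len(sizes):.1f}")
```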
#AI #GenAI #GenerativeAI #PowerSector #NaturalGas #electricity #risk
Nevertheless, I think there needs to be a space to talk about systemic risk, because it's quite real and has predictable consequences. Folks like to call the latter "black swan events", but if you've chosen not to be aware of a set of issues and then one comes to pass, was it really unpredictable?
Anyway, I'm grateful to Mar Hicks (@histoftech@mastodon.social) for co-organizing this event and making space for these kinds of conversations. The attendees and other speakers were very thoughtful and engaged and it was a great experience.
#AI #GenAI #GenerativeAI #PowerSector #NaturalGas #electricity #risk
@abucci increasing specialisation/optimisation leads to increased fragility? Who would have thought?!
(I'm not asking for advice about dealing with spam. SpamAssassin has this one well in hand. I'm just curious if anyone else has been seeing spam like this).
@abucci I see I’m not the only one being bombarded by them
@abucci I usually use SpamAssassin to filter them.
@abucci Ah, my bad, I must’ve missed that, sorry :)
Assuming I'm not too mentally spent to draw after work tomorrow, would you prefer to see an absolute amateur amuse himself by drawing...
an ant: 4 votes
a Commodore 64: 9 votes

(Poll closed.)
@datarama Can’t believe people are choosing a hunk of plastic over an animal. I’d much rather see an ant.
@rtfm So far I've drawn and shared three animals, a creature from Scandinavian folklore, a plant and a hunk of metal - I think at this point a hunk of plastic is acceptable.
But to keep up some animal representation, here is a portrait of my pet lizard Igor. (before I started using coloured pencils; this is just graphite).
@datarama Igor is awesome and at least the plastic hunk is a different challenge. I think you should do some of your bonsai though. :)
@rtfm He *is* awesome! He finally went into winter brumation a week ago, and I miss him dearly.
I've tried to draw one of the bonsai (the Japanese yew), but I badly screwed it up - in large part because I tried doing all the colouring using ink, and I'm beginning to think coloured pencils are more my medium than ink is. I'm going to give it another shot later.
All of them are currently on life support; tropical trees don't appreciate the light levels of a Scandinavian winter. The sorry-looking twig in the middle is my baobab, the only one I have that isn't an evergreen. (I grew it from a seed last year, and it's already by far the tallest tree I have! And what you see here has had more than half cut off. I've tried simulating a dry season for it while the others are on life support; hopefully it'll start budding again in the spring when I give it more water. But I'm really not used to taking care of deciduous trees.)
@datarama It sounds like you had a bit longer with him before he went into brumation this year, so that’s something.
You’re doing well to have more than one medium! I’m okay at best with pencils and that’s it.
Your bonsai are doing exceptionally well. Mine succumbed to a thrip infestation I had in my other houseplants last year. I thought I was struggling with winter light in the UK! Scandinavian winter must be no joke.
@rtfm Even if I miss him in the winter, I'm glad he has retained the impulse to brumate - species who normally do it usually have shorter lifespans if they don't. He's 18 now, so although he's not *old*, he's not young anymore either. In general he's absurdly good at taking care of himself - he never overeats, for example, which is something blue-tongues are infamous for. (The other members of that species I've known have been absolute gluttons who had to have their food portioned rather meticulously). Also he's the gentlest lizard I've ever met.
I think I'm getting OK at drawing again - at least, I'm *definitely* no longer in the "13-year-old me would be embarrassed if he saw this" state I was in a couple of months ago! I've never been able to paint on paper or canvas even passably well (though I used to be pretty OK at painting e.g. Warhammer minis). I want to get better at inking, but I'm fine with perhaps inking contours and details and using coloured pencils for colour.
Several of my bonsai died due to an aphid infestation earlier this year. I was particularly sad to lose my sageretia; I'd worked a lot on its shape and had gotten really happy with it. The baobab was also hit hard, but I managed to save it (although we'll see if it survives the winter; they famously either die very young or last forever).
Wanted: Advice from CS teachers
When teaching a group of students new to coding I've noticed that my students who are normally very good about not calling out during class will shout "it's not working!" the moment their code hits an error and fails to run. They want me to fix it right away. This makes for too many interruptions since I'm easy to nerd snipe in this way.
I think I need to let them know that fixing errors that keep the code from running is literally what I'm trying to teach.
Example of the problem:
Me: "OK everyone. Next we'll make this into a function so we can simply call it each time-"
Student 1: "It won't work." (student who wouldn't interrupt like this normally)
Student 2: "Mine's broken too!"
Student 3: "It says error. I have the EXACT same thing as you but it's not working."
This makes me feel overloaded and grouchy. Too many questions at once. What I want them to do is wait until the explanation is done and ask when I'm walking around.
I’ve taught programming like this, but I’m an increasingly huge fan of the debugging-first approach that a few people have been trying more recently. In this model, you don’t teach people to write code first, you teach them to fix code first.
I’ve seen a bunch of variations of this. If you have some kind of IDE (Smalltalk is beautiful for this, but other languages usually have the minimum requirements) then you can start with some working code and have them single-step through it and inspect variables to see if the behaviour reflects their intuition. Then you can give them nearly correct code and have them use that tool to fix the issues.
Only once they’re comfortable with that do you have them start writing code.
Otherwise it’s like teaching them to write an essay without first teaching them how to erase and redraft. If you teach people to get stuck before teaching them how to unstick themselves, it’s not surprising that they stop and give up at that point.
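For instance, a first debugging exercise might hand students something like this (an invented example, not from any published curriculum): run it, watch `total` and the divisor in the debugger, and find where the behaviour stops matching your intuition.

```python
# "Nearly correct" code for students to fix: the average is computed
# with an off-by-one in the divisor.
def average(xs):
    total = 0
    for x in xs:
        total += x
    return total / (len(xs) - 1)   # bug: should divide by len(xs)

print(average([2, 4, 6]))  # prints 6.0; a student expects 4.0
```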
Tangentially related:
"AI can write code so why teach how to code?"
"Great point! It can write an essay too, so why teach how to read."
Like. We've had calculators for decades and still teach arithmetic. And functionally, the average person probably needs to know more about mathematics, and needs to read more, than they did a century ago. The same will apply to code.
@futurebird @david_chisnall I mean… if AI could do what it promises, why are these companies hiring?
@mhoye @futurebird @david_chisnall It is always the same: Six months from now, the models will obsolete the humans they're hiring now.
I don't know* why I keep freaking out and getting terrified and depressed now, 36 months into "programmers will be gone in 6 months".
*) I suppose I do know; it's because I have an anxiety disorder.
@futurebird @david_chisnall @datarama @mhoye It is hard because the hype men keep saying that now, for real, it got so much better, we are blown away, this changes everything, etc etc. So we end up wondering if this actually happened.
That and anxiety
@datarama @mhoye @futurebird @david_chisnall if it's any consolation I've been a book editor for nearly 20 years and I still get asked what I do that spellcheck doesn't. The latest by a guy who designs fitted wardrobes. What does he do that IKEA doesn't?!
I would make a slightly different point, I think.
When I was at university, doing a degree in computer science, the first language they taught us was Pascal. The second was Prolog. I can’t remember which order the third and fourth were taught in, but they were Java and Haskell.
Of these, Java was the only one widely used in industry. In my subsequent career, I have rarely used any of these. But I have used the concepts I learned repeatedly.
The tools change. Eventually, modern IDEs will catch up with 1980s Smalltalk in functionality. But the core concepts change far more slowly.
And this matters even more for school children, because they’re not doing a degree to take them on a path where the majority will end up as programmers, they’re learning a skill that they can use in any context.
I spent a little bit of time attached to the Swansea History of Computing Collection working to collect oral histories of early computing in Wales. Glamorgan University was the first to offer a vocational programming qualification. They had one day of access to a computer at the Port Talbot steelworks (at the time, the only computer in Wales) each week. Every week, the class would take a minibus to visit the computer. They would each take it in turns to run their program (on punch cards). If it didn't work, they would try to patch the code (manually punching holes or taping over them) and would get to have another go at the end.
Modern programming isn’t really like that (though it feels like it sometimes). The compile-test cycle has shortened from a week to a few seconds. Debuggers let you inspect the state of running programs in the middle. Things like time-travel debugging let you see an invalid value in memory and then run the program backwards to see where the value was written!
But the concepts of decomposing problems into small steps, and creating solutions by composing small testable building blocks remain the same.
The hard part of programming hasn’t been writing the code since we moved away from machine code in punched tape. It’s always been working out what the real problem is and expressing it unambiguously.
In many ways, LLMs make this worse. They let you start with an imprecise definition of the problem and will then fill in the gaps based on priors from their training data. In a classroom setting, those priors will likely align with the requirements of the task. The same may be true if you’re writing a CRUD application that is almost the same as 10,000 others with a small tweak that you put in the prompt. But once it has generated the code then you need to understand that it’s correct. LLMs can generate tests, but unless you’re careful they won’t generate the right tests.
The goal isn’t to produce children who can write code. It’s to empower the children with the ability to turn a computer into a machine that solves their problems whatever those problems are and to use the kind of systematic thinking in non-computing contexts.
The latter of these is also important. I’ve done workflow consulting where the fact that the company was operating inefficiently would be obvious to anyone with a programming background. It isn’t just mechanical systems that have these bottlenecks.
And this should feed into curriculum design (the Computer Science Unplugged curriculum took this to an extreme and produced some great material). There’s no point teaching skills that will be obsolete by the time that the children are adults, except as a solvent for getting useful transferable skills into their systems. A curriculum should be able to identify and explain to students which skills are in which category.
(And, yes, I am still bitter my schools wasted so much time on handwriting, a skill I basically never use as an adult. If I hand write 500 words in a year, it’s unusual, but I type more than that most days)
@david_chisnall @futurebird This is a tangent, but...
As a schoolkid, I *hated* handwriting - and especially that we got graded for it. I have joint hypermobility and holding a pencil was fatiguing and often painful. If I used any of the specialized grip aids, I needed to move my wrist much more than my fingers, and that made it *even worse*. My handwriting looked like a 5-year-old's well into my teens - at which point I got a special dispensation and was allowed to use an electric typewriter (wait, am I *that* old? No, it must be my school that used archaic equipment. :-P )
Now I'm middle-aged, and I use a handwritten notebook just because I like it. I've even learned some basic calligraphy just for fun, and if I put in a little effort I can write in a pretty nice cursive; I even came up with my own partially-looped script. I'm sure that *part* of this is that it took me a longer time to develop better muscle tone in my hands (infamously underdeveloped in hypermobile kids) - but, really, I think it's *much* more about finding some intrinsic motivation for it and not being forced to do it when I simply couldn't see any point in it.
I can type about as fast as I write by hand, and far more legibly (and I then get something searchable and editable). I was kept in at break times to practice handwriting. When I was 14, I was allowed to type essays in English and my average grade jumped from C to A. My first book was published when I was 25 and I was paid to write from about age 21. It turned out I had no problem with the ‘framing thoughts’ bit of English, just with using a pen.
So much schooling treated penmanship as a prerequisite for the skill that they were trying to teach. I have no objection to people enjoying calligraphy or shorthand note taking. Just don’t judge unrelated skills based on this.
@david_chisnall @futurebird Here, we got two separate grades for written work: One for content and one for presentation (which basically meant penmanship). I consistently got great grades in the former, and terrible ones in the latter. After said dispensation I no longer got presentation grades and all my earlier ones were annulled, so my average took a *huge* leap. AFAIK schoolchildren here now just write on computers - though some schools want to bring back analogue tools because of AI.
I can type much faster than I write by hand. I just find handwriting pleasant, and the slower pace (and the fact that I can't easily erase or cut and paste) means I "gather my thoughts" (for lack of a better term) in another way than when I'm typing. I've come to enjoy that bit of friction - but that only happened long after anyone stopped trying to force me to do it. But it's really not an "essential" skill, and I'd say that it already wasn't when I was a kid.
> Here, we got two separate grades for written work: One for content and one for presentation (which basically meant penmanship)
The thing is, these are not separable concerns. People who struggle to type tend to write code that’s hard to read because typing comments and using meaningful function and variable names is a struggle. So you’re implicitly measuring input-device proficiency when you judge code (this is slightly different with autocomplete, because typing a meaningful name once is effort, but autocompleting it thereafter may be easier than autocompleting or typing a shorter one).
Exactly the same applies with essay writing. Writing with a pen has (for me) a much higher cognitive load. When I type, I frame a sentence in my head, edit it a bit, then flush the buffer asynchronously through my fingers without any involvement of my conscious mind. When I write with a pen, I have to think about forming the shapes. For my mother, it’s the exact opposite. We both have a thing where we can’t spell a word if you ask us, but get her to write it or me to type it and it will be correct.
If you had to type an essay using my nose, no amount of separation of presentation and content marks would save you from a low grade. For me, using a pen is different in degree but similar in kind.
@david_chisnall @futurebird I mean, I agree. I'm describing the writing assessments of the dysfunctional Danish school system of the 1980s, not praising them. Having been a teacher myself, I also figure it would probably be very hard for a *reader* to separate those two concerns - if you're distracted by illegible writing (or impressed by wonderful penmanship), then it's going to be hard to completely separate your impression of the content from that.
Back then, for home assignments, I would actually often type my essay into Geowrite on my Commodore 64 first and *then* copy it to paper by writing it down with a pen - which really perfectly illustrates how ridiculous that part of the exercise was.
I'm kind of shocked that functions are hard. Are they hard for students who understand functions in the context of mathematics?
Add in functions that have side effects, functions that don't return a value (procedures), functions that trap the rest of the execution (continuations), etc., and you're well outside of what most people understand mathematical functions to be like. The mathematical sine function can't make a network connection or write to a file or...
You can sometimes suss this out by comparing a function to a dictionary (or similar lookup type data structure). Those don't involve changes in the flow of control, and students tend to grasp what they're doing much faster. Students who grasp dictionaries sometimes cannot transfer that understanding to functions because of the flow of control issue, I think, so it can be helpful to probe whether they understand one but not the other and try to figure out why.
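One concrete way to frame that probe (a made-up snippet; any language would do):

```python
# A lookup table and a pure function behave alike: same input in,
# same output out, no surprises in the flow of control.
doubles = {1: 2, 2: 4, 3: 6}

def double(n):
    return 2 * n

assert doubles[3] == double(3)

# Where the analogy breaks: a function can do things besides
# returning a value. No dictionary lookup ever appends to a log.
log = []

def double_and_log(n):
    log.append(n)   # side effect: mutates state elsewhere
    return 2 * n
```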
Make of it what you will.
#firefox #mozilla #AI #GenAI #GenerativeAI #SmartIsSurveillance #tech #dev #web
- browser.aiwindow.enabled
- browser.ml.chat.enabled
- browser.ml.chat.menu
- browser.ml.chat.page.footerBadge
- browser.ml.chat.page.menuBadge
- browser.ml.chat.page
- browser.ml.chat.shortcuts
- browser.ml.chat.sidebar
- browser.ml.enable
- browser.ml.linkPreview.enabled
- browser.ml.pageAssist.enabled
- browser.ml.smartAssist.enabled
- browser.tabs.groups.smart.enabled
- browser.tabs.groups.smart.userEnabled
- extensions.ml.enabled
- sidebar.notification.badge.aichat
Enter "about:config" in the browser bar and then search for each of these and disable them, turn them off, or set them to false as appropriate.
Depending on which version of Firefox you have you may not have all these configuration options.
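If you'd rather pin these than re-check about:config after every update, Firefox also reads a user.js file from your profile directory at startup. A sketch covering a few of the prefs above (same caveat: the names may differ or disappear in your version):

```js
// user.js in the Firefox profile directory; re-applied at each startup.
// Pref names are the ones listed above and may be renamed by Mozilla.
user_pref("browser.ml.enable", false);
user_pref("browser.aiwindow.enabled", false);
user_pref("browser.ml.chat.enabled", false);
user_pref("browser.tabs.groups.smart.enabled", false);
user_pref("extensions.ml.enabled", false);
```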
Check your smartphone browsers too!
#firefox #mozilla #AI #GenAI #GenerativeAI #SmartIsSurveillance #tech #dev #web #NoAI #AICruft #antifeatures
@abucci You can use the Betterfox template to deactivate everything in one go 👌 https://github.com/yokoffing/BetterFox
@abucci worth mentioning how to toggle these from Android Firefox: https://android.stackexchange.com/a/257769
Tl;Dr: go to chrome://geckoview/content/config.xhtml
@abucci eh, nah. I'm just trying out different browsers until I find an alternative I like. I'm done with Firefox.
@abucci It's possible to use Firefox's 'enterprise' policy system to hard-set known preferences in a way that sticks. I've resorted to doing it for my setup, with increasingly gritted teeth. Some documentation is at https://mozilla.github.io/policy-templates/
I learned about it from https://electric.marf.space/@trysdyn/statuses/01JN7H07704Z9ZJY197A9AM33K
I use arkenfox's user.js to harden the browser. Arkenfox has a user-override.js file where you put the settings you want to stick between updates; I imagine Betterfox has something similar but I haven't looked that much into it yet. You could put these AI settings in there. I hesitate to publicly suggest such things till I've had a chance to check them out so I haven't. It's good to see there are options, though.

@abucci these solutions all have the problem that Mozilla likes to replace those about:config settings with settings that are the exact same but are named differently. So either the project you're using, like arkenfox, not only agrees with your choices and uses them as defaults but also reliably provides updates to reflect Mozilla's changes, or you'll have to keep track of that yourself. There is no warning whatsoever from Firefox when one of the settings you applied suddenly no longer exists after an update; you'll probably only notice it when Firefox misbehaves.
For example, arkenfox by default deletes all history and closes all tabs when you quit Firefox, so in my override file I changed that back to the Firefox default behaviour, only for mozilla to change the names of those settings and then when I updated arkenfox, it applied its defaults to those new settings and suddenly, the next time I reopened my Firefox, everything was gone. That was real fun. Luckily I had a backup.
@abucci I hope @torproject / #TorBrowser patches that shit out of their codebase.
@abucci I didn't verify, but I imagine they are hierarchical, e.g. disabling browser.ml.enable also disables everything under browser.ml.
What annoys me though is:
- why is it enabled by default?
- (arguably even worse) why do the preferences (the normal ones, with buttons, that most people can use) show nothing related?
Antifeatures indeed.
Will all of these be controlled by the browser.ml.enable boolean? Maybe not--why else would they be in different namespaces? Looking forward, what stops Mozilla from adding new branches of this stuff (browser.llm.enable, browser.perplexity.ai.enable, ....) toggled on by default? Seemingly nothing stops them.

Mozilla lost my trust with behavior like this, so now I will check regularly.
@abucci my understanding is that the global switch is enough to disable it, and we treat that as a bug if it doesn't. Also I've been told there will be a switch in the preferences eventually.
@abucci you can be skeptical, but what I said is that if this doesn't work it's handled as a bug and you can file it and it will be fixed.
@abucci I did, in addition I work at mozilla and I personally know the folks working on AI functionalities.
Does setting browser.ml.enable to false override browser.ml.linkPreview.enabled, browser.ml.pageAssist.enabled and browser.ml.smartAssist.enabled? Does it set them to false? If so, this behavior is not obvious, and the naming of these options in this way is ripe for confusion and misinterpretation. If these settings are left alone, then they have to be checked separately.

Does setting browser.ml.enable to false also set extensions.ml.enabled to false, or override it? If so, why? That is unexpected and confusing behavior. If not, then these settings have to be checked and changed separately.

Does setting browser.ml.enable to false also set browser.tabs.groups.smart.enabled and browser.tabs.groups.smart.userEnabled to false, or override them? If so, why the heck? This is unexpected and confusing behavior. If not, then these have to be checked and changed separately.
Does setting browser.ml.enable to false set browser.ml.chat.sidebar to false, or override it? If so, why? If not, this is another setting that has to be checked and changed separately.
What about browser.ml.chat.shortcuts and browser.ml.chat.shortcuts.custom?
Does setting browser.ml.enable to false also set browser.aiwindow.enabled to false, or override it? If so, what the hell? If not, this is another setting that has to be checked and changed separately.
I don't have the numbers in front of me but Firefox used to have maybe 5 settings like this. Now it has 16. How many more are going to be added? 32 more? 100 more? Will they all be controlled, ultimately, by browser.ml.enable, regardless of how they're named? If not, how am I to know when I need to scour through these settings again to see if any new ones have popped up? This feels user hostile if you're a user who does not want AI cruft in your web browser.
How many bug reports do you reckon I should file about the above?
@abucci it doesn't set all the other prefs automatically, it's being used and is obeyed directly by all of the code that deals with AI.
My understanding is that they'll all be controlled ultimately by `browser.ml.enable` and that there will be a checkbox in the preferences too.
It's easy to check: https://searchfox.org/firefox-main/search?q=browser.ml.enable&path=&case=false&regexp=false
If you find an AI-related feature not controlled by this pref, please file a bug. A quick check by myself showed that it's the case, but I didn't do an exhaustive check.
@abucci I understand the scrutiny, and that you all want Mozilla to follow high standards, and that's OK in my opinion.
But maybe, just maybe, sometimes you folks could check before sighing and ranting at Mozilla. Because sometimes we're doing the right thing and a little bit of support would be nice when that happens.
@abucci the only one of Firefox's "AI" "features" I have ever willingly used is the local translation thing, which I would rather use than Google Translate, and that's about all that can be said about it
@abucci Quite a few of those features can be turned off without needing to go into about:config. Much easier and much safer.
Right now watching about:config is the only viable way to satisfy those requirements.
"Much safer"? AI/LLMs are not safe. I'll make safety judgments for myself, thanks. "Much easier"? I'll be the judge of that, thanks.
A strange reply, I have to say--it's unclear what your motivations are for posting this.
@abucci Saving you and others from having to poke around in about:config and risking data loss and damaging your own Fx setup was the aim.
The settings screen is much easier to navigate and safer to use for most people and with good reason it does not have the same warning screen as about:config.
> Saving you
Check that savior complex talk, please.
I'm intensely curious what percentage of Firefox users are tech-savvy enough to use Firefox in 2025 but excited to use AI? I would think it's less than half.
"You know all of that unethical, privacy-invasive, environmentally damaging, frequently inaccurate technology you refuse to use? Well, have I got good news for you!"
What's next, built in NFTs?
Mozilla's new CEO is all-in on AI, though, regardless of what its users want: https://lwn.net/Articles/1050826/
> Third: Firefox will grow from a browser into a broader ecosystem of trusted software. Firefox will remain our anchor. It will evolve into a modern AI browser and support a portfolio of new and trusted software additions.
He says the word "trust" a whole bunch of times yet intends to turn an otherwise nice web browser into a slop-slinging platform. I don't expect this will work out very well for anyone.
Here's a post from an official Firefox Mastodon account suggesting such a master kill switch does not exist yet, but will be added in a future release:
https://mastodon.social/@firefoxwebdevs/115740500373677782
That's not as bad as it could be. It's bad they're stuffing AI into a perfectly good web browser for no apparent reason other than vibes or desperation. It's very bad if it's on by default; their dissembling post about it aside, opt-in has a reasonably clear meaning here: if there's a kill switch, then that kill switch should be off by default. But at least there will be a kill switch.
In any case, please stop responding to my post saying there's a master kill switch for Firefox's AI slop features. From the horse's mouth, and from user experience, there is not yet.
Furthermore, when there is a master kill switch, we don't know whether flipping it will preserve the previous state of all the features it controls. In other words, it's possible they'll have the master switch turn on all AI features when it's flipped to "on" or "true", rather than leaving them in whatever state you'd set them to previously. Perhaps you flip the switch because there are a handful of features you're comfortable with and you want to try them; will doing so mean that now all the AI features are on? We won't know till it's released and people try this. So, in the meantime, it's still good practice to keep an eye on all these configuration options if you want the AI off.
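If you want to pin these prefs down across updates, one approach (a sketch, not a guarantee) is a user.js file in your Firefox profile directory, which re-asserts prefs at every startup. The pref names below are just the ones discussed above; whether that list is complete is exactly the open question:

```
// user.js sketch: force off the AI-related prefs named in this thread.
// Assumption: these names and types match your Firefox release. Not
// guaranteed exhaustive -- check about:config again after each update.
user_pref("browser.ml.enable", false);
user_pref("browser.ml.linkPreview.enabled", false);
user_pref("browser.ml.pageAssist.enabled", false);
user_pref("browser.ml.smartAssist.enabled", false);
user_pref("browser.ml.chat.sidebar", false);
user_pref("browser.ml.chat.shortcuts", false);
// browser.ml.chat.shortcuts.custom may be a string pref in some
// releases; check its type in about:config before forcing it.
user_pref("browser.tabs.groups.smart.enabled", false);
user_pref("browser.tabs.groups.smart.userEnabled", false);
user_pref("browser.aiwindow.enabled", false);
user_pref("extensions.ml.enabled", false);
```

Anything not listed here, or renamed in a future release, will of course slip through, which is why about:config still needs watching.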
#AI #GenAI #GenerativeAI #LLMs #web #tech #dev #Firefox #Mozilla #AISlop #NoAI #NoLLMs #NoAIBrowsers
@abucci How do you know that the account you quote is official? It was created just 2 weeks ago and had no link that makes it verifiable.
That account is run by Jake Archibald, who joined Mozilla in August working on Firefox. The kill switch coming in an upcoming release is confirmed here: https://9to5linux.com/firefox-will-ship-with-an-ai-kill-switch-to-completely-disable-all-ai-features
as well as by the new Mozilla CEO on reddit here: https://www.reddit.com/user/anthony-firefox/
If this is being faked then whoever's doing it sure is going to a lot of trouble.
@abucci I re-floored my apartment a couple years ago (marmoleum tiles), and ended up spending an afternoon hauling just over a quarter ton up the stairs to the fourth floor.
Kinda galling talking to my brother and realising he's already written his own son off due to his gender.
"Yeah, he's just destructive, he can't help it. That's just how boys are. You know, because of the testosterone."
"He's four years old. His body hasn't started producing testosterone yet."
"No, I'm pretty sure boys always have testosterone, throughout childhood. You can see it in the way they act."
@Tattie AFAIK (and I may be wrong), boys *do* have testosterone throughout childhood ... but so do girls, and the levels are basically the same. It's produced in the adrenal glands.
(Personally, I've always been weirded out about how some people talk about boys that age, because it is *completely* alien to my own memories of being a boy. And, well, a lot of how people talk about testosterone in adults too.)
@datarama you are technically correct, the best sort of correct! /ref 😉
Yeah, testosterone isn't completely absent at that age, just extremely low compared to the levels you'll see going into puberty.
@Tattie (I sort of had that exact Futurama quote in my head when I hit "Reply". :-D )
I've heard people claim that adult women don't have testosterone at all too, and ... well, I'm not a biologist, but I like to think that at least I'm not a total dumbass, and there's *very* little that works that way.
Tomorrow, I will try to draw either Sputnik or a Commodore 64.
What would you prefer to see from the hand of an amateur learning to use coloured pencils?
| Sputnik: | 7 |
| Commodore 64: | 4 |
| Booo! Go back to pixel art!: | 0 |
Closed
The World Health Network’s open letter to WHO, signed by over 2,300, calls for respirator use as the standard in healthcare to protect patients and staff. Read here: https://whn.global/a-call-for-the-universal-use-of-respirators-in-healthcare/
The letter emerged from work initiated through the Unpolitics meeting last year.
Covered by The Guardian: https://theguardian.com/global-development/2026/jan/09/health-professionals-respirator-grade-masks-who-advise
The World Health Network’s open letter calling for respirators as the default protection in healthcare is also covered by BMJ.
Reporting by Elisabeth Mahase situates respirator use as an occupational health and patient safety issue.
https://bmj.com/content/392/bmj.s52
@whn I'm glad the Guardian covered it, but it's really unfortunate they chose a photo of someone wearing a poorly fitted respirator with obvious gaps around the nose.
> Imre Lakatos, who originally planned to write For Method in contrast to Against Method but then died
I read that and laugh out loud about the (fictional) idea that even thinking about writing this article did Lakatos in.
I read Lakatos's Proofs and Refutations as a young person. It was suggested to me by a high school teacher along with Halmos's I Want to Be a Mathematician. I did want to be a mathematician at the time. I've read Proofs and Refutations once or twice again since. I'm pretty sure that's the one with the pancake theorem, which is a nice name for an unexpectedly deep observation.
I have a copy of Feyerabend's Against Method but haven't read it. It's in the very tall and growing pile of books I at one time or another intended to read. The subtitle Outline of an Anarchistic Theory of Knowledge totally worked on me and I bought it without hesitating. I'll read it.
I've tried dozens of different ways of handling this pile. I have other piles, such as the hundreds of browser tabs I leave open or stash in bookmarks, or the folder on my computer that syncs to my phone where I'm constantly dropping PDFs. I've organized the files and folders and bookmarks and tabs in various ways: tab stacks in Vivaldi; named tab groups in Firefox; folders and tags with a systematized color and icon scheme in the filesystem explorer; Zotero; Obsidian; various desktop indexers and search engines like Recoll. I've written more than one computer program about it. There are so many things to organize that the list of ways I've tried to organize the things could itself stand to be organized.
In graduate school it was still common to print out papers, at a time when it was still uncommon for printers to work when you hit Ctrl-P->Print. When I cleaned up my desk after graduating, I guesstimated that I had roughly 10,000 papers and 300 books/dissertations on the desk or in the drawers. The amount of printer reconfiguration necessary to accomplish this! I used to keep the ones I was actively working with on the desk arranged in ever-shifting overlapping piles that indicated relationships I thought might exist between them. It was crude but served me pretty well, and I was irritated when anyone disturbed the system because part of my mind was in it and how dare they. Dust collected on some of the printouts, which says something about what "actively working with" can look like. I learned to use the dust as a hint that I should re-read something. Dust largely comes from your skin and hair, right?
I haven't read it, like I said, but I take it that one of the arguments in Against Method is that having a plurality of scientific theories---even bad ones---about a phenomenon enhances our ability to test any one of them. Maybe we can think of a pile of theories as generative. Not generative as in generative-probability-distribution generative, but a generativity that includes meaningful piles of dust in the process. I am contemplating an anarchist theory of knowledge management to apply to my piles of books and files and folders and bookmarks and tabs. But I'd better read the Outline, I guess.
Anyway, being for method really can be deadly!
Recently, I spent a lot of time reading & writing about LLM benchmark construct validity for a forthcoming article. I also interviewed LLM researchers in academia & industry. The piece is more descriptive than interpretive, but if I’d had the freedom to take it where I wanted it to go, I would’ve addressed the possibility that mental capabilities (like those that benchmarks test for) are never completely innate; they’re always a function of the tests we use to measure them ...
(1/2)
More radically, there’s the possibility that capabilities don’t preexist their expression in observable forms, which has implications for the conceptual premises of “construct validity.” These are standard STS/philosophy ideas, but most of the computer-scientific literature doesn’t want to go near them.
I wish that the outlet would publish the article so that I can reference it when I self-publish writing about these ideas (also, so I can get paid 🙂)
Not to go off in your mentions, and this is absolutely not meant as a criticism, but the little niche I work in, coevolutionary algorithms, arguably does grapple with this. At least, I feel like my own work has, such as it is. I didn't use this language when writing about it, but I conceived of a "capability" as a structured set of tests, together with a set of individuals that allows you to see that each test is picking up on something different. I used the phrase "emergent geometric objectives" to point at the phenomenon that all three aspects (the set of tests, how they're structured, and the set of individuals) change, which in turn changes both what's being measured and how you're measuring it. For a lot of ML these subtleties are smashed into an aggregate such as average test score and disappear from view.
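To make that aggregation point concrete, here's a toy sketch (hypothetical pass/fail outcomes, invented for illustration, not from any real benchmark): two learners with identical average scores that the per-test structure separates completely.

```python
import numpy as np

# Hypothetical outcome matrix: rows are learners, columns are tests
# (1 = pass, 0 = fail).
outcomes = np.array([
    [1, 1, 1, 0, 0, 0],  # learner A passes the first three tests
    [0, 0, 0, 1, 1, 1],  # learner B passes the last three
])

# Aggregate view: the two learners are indistinguishable.
print(outcomes.mean(axis=1))  # [0.5 0.5]

# Structured view: every test separates them, i.e. each test is
# picking up on something the average score erases.
print(np.flatnonzero(outcomes[0] != outcomes[1]))  # [0 1 2 3 4 5]
```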
@abucci Thanks for pointing this out -- as with so many scholars who come from the humanities & then try to grapple with "hard" sciences, I get a mainstream view. I'm rarely exposed to niche agendas & perspectives. I am sure that plenty of scientists see capabilities and tests as inextricably linked, & I'm concerned that this view has less traction than it should, perhaps because it's inconvenient for those who want to promote a certain (pseudoscientific) vision of AI.
@abucci Absolutely. I know that there are plenty of people in the ML community who understand the technical and social aspects of these issues and are very concerned, but the tech right / Big AI drowns out those voices.
We're an AI first company. Our mission is to streamline your experience and let you turn ideas into execution at the speed of thought. To join our team make PDFs out of your CV and cover letter, upload the PDFs, then retype all the text in the PDFs into more textboxes. Copy/paste is disabled. A question that is illegal to ask is mandatory. <>[]{},%#$ characters forbidden. Accents forbidden. The back button breaks everything. You have been logged out for inactivity. Click here to restart.
I need some #math / #machinelearning / #AI / #physics people to confirm for me that the topology of latent space shows non-Euclidean characteristics. This is not for a technical project; I'm trying to understand just how well cultural theorists are using their mathy metaphors. Thanks in advance!
The tl;dr answer: taking the question to mean that you're given a dataset with n dimensions (attributes/characteristics/features); and what's meant by "latent space" is the submanifold of n-dimensional Euclidean space on which the data actually lives; then it almost surely has non-zero curvature--what you called having non-Euclidean features--if it's of interest to ML researchers. Datasets without curvature can usually be treated with simpler, faster, easier to understand linear algebraic methods, so the ones that end up being of interest in ML tend to have non-trivial curvature.
For a simple illustration, imagine you're given a bunch of data points with two real-valued features. This data is embedded in 2-dimensional Euclidean space. But say the data are such that they always lie on the unit circle. The unit circle is a 1-dimensional, curved (non-Euclidean) submanifold of the 2-dimensional data space. The zero-curvature submanifolds are things like rays, lines, and rectangles, which are "boring" in the sense that you don't need fancy algorithms to get a handle on what they're like.
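If a sketch helps (assuming numpy and scikit-learn are available; purely illustrative), sample points on the unit circle and ask a linear method like PCA for a low-dimensional description. It can't discard either ambient dimension, even though the circle is intrinsically 1-dimensional--that gap is one symptom of the curvature:

```python
import numpy as np
from sklearn.decomposition import PCA

# Data with two real-valued features that happens to lie on the unit
# circle: a 1-dimensional curved submanifold of 2-dimensional space.
theta = np.random.uniform(0.0, 2.0 * np.pi, size=1000)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# PCA, a linear method, finds no direction it can safely throw away:
# the variance splits roughly 50/50 between the two components.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # roughly [0.5, 0.5]

# Yet a single curved coordinate (the angle theta) describes the data
# exactly -- the curvature is what linear algebra alone can't see.
```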
I hope that made some kind of sense.
@abucci It makes enough sense, although I wouldn't be able to rephrase this for people who know even less about ML than I do. But that's not what I'm trying to do anyway -- it works for my own learning purposes, thanks so much!
Firefox uses on-device downloaded-on-demand ML models for privacy-preserving translation.
They're not LLMs. They're trained on open data.
Should translation be disabled if the AI 'kill switch' is active?
| Yes: | 65 |
| Yes, but let me re-enable just translations: | 248 |
| No: | 149 |
| 🤷: | 21 |
@firefoxwebdevs also, I just gotta ask: was the prompt for this quiz “hey ChatGPT come up with an ai use case that’ll stump the haters! do not hallucinate do not use emojis” or did this ooze out of your human brain after the LLM psychosis fried it?
@zzt I posted this poll after a meeting where we discussed the design of the kill switch, and there was uncertainty around translations. I want to make sure the community's voice is represented in these discussions.
@firefoxwebdevs @zzt
" I want to make sure the community's voice is represented in these discussions."
Then KILL ALL the stupid non-browser functions.
Remove ALL AI code.
Make Firefox work.
Fix printing.
Make it follow the system GUI / theme.
Stop copying Chrome or Windows.
@raymaccarthy @firefoxwebdevs @zzt I don't want a "browser experience". If it's doing its job, I won't be aware of it at all. I only use a browser as a viewer of content, period.
A browser should make websites viewable and allow the user to store locations in a way that makes sense to the *user*. Not a designer, not a bonehead CEO who thinks AI is really spiffy.
That's all it should do. It's very clear that browser execs never use tools. They have no idea what "tool" means.
@firefoxwebdevs jonah, I hate to break it to you and the LLM shaped like a product manager that's setting the agenda for your meetings, but the only time I hear about Firefox translations in any context is when Mozilla PMs try to hold it up as an example of an ethical, low-resource, useful AI feature so they can convince you to be a fan of the worthless LLM shit they're actually there to push
the reason why I don’t hear about translations otherwise is simple: it’s shit
@firefoxwebdevs neither translations nor any LLM feature have any business being built into Firefox. they should all be add-ons, at best. preferably add-ons developed by any other company than Mozilla. nobody wanted their donations to go to this crap.
like with translations, anyone who feels like they need LLM horseshit in their browser is very likely already using an implementation other than the one built into Firefox.
@zzt @firefoxwebdevs Why would Mozilla translations be built into the browser but other developers have to make them as add-ons? Or will Mozilla accept PRs for third-party translators to be built into the Firefox browser?
@zzt @firefoxwebdevs please don't call it the "design" of the kill switch when you have to ask *us* what it should kill—as some kind of transparency/openness posturing.
@zzt @firefoxwebdevs You'd never have to say "consent", "opt in", "opt out", or "kill switch" again if you put design energy into overcoming whatever (WHATEVER) barriers are preventing all of these things being add-ons.
@zzt @firefoxwebdevs there are plenty of users who want to translate stuff in browsers, I am one. To me it seems super reasonable for a product rendering content to also offer translation of it.
fyi you're coming across as very cynical (and I say that as someone who's pretty cynical towards AI hype and the current tech industry myself… I *get* it…)
@hdv @firefoxwebdevs thanks for telling me about some software you use and then insulting me!
@zzt @[email protected] @firefoxwebdevs god forbid someone talk about a technology from an industry of liars, responding to a survey from an organisation that's already lying to its users, with a survey carefully missing the option the majority of respondents actually want (make it an extension), and come across as *cynical*
@zzt @firefoxwebdevs I've used it numerous times this week and it looks good to me.
@tasket @firefoxwebdevs holy shit Josh you’ve done it you’ve found the user!
quick ask them if the LLM kill switch should also turn off manifest v2 they might go for it
@zzt That would be funny.
But look at the Firefox forks... some had to bring back translation after (mistakenly) disabling it. I don't think any of the local ML API should be suppressed. The discussion should be about shoving LLMs into places where they don't belong.
@tasket if you want a serious discussion about the role translations should or shouldn’t have in a browser, let me refer you to steve: https://hci.social/@fasterandworse/115849566354469222
I don’t really feel anything about the translations feature other than disappointment, a bit of concern over how the data was sourced, and a strong feeling that it shouldn’t be a core browser feature
@zzt @firefoxwebdevs OK, now make the same argument for the spell-checker, sync, and the set of CAs, etc. etc. supplied with the browser. It's as if y'all were trained by Microsoft PR to take the arguments Mozilla used against tying IE to Windows and extend them ad-absurd-um to features in Mozilla's own browser ("just turn it around back in their faces" said the Armani suit).
Meanwhile, Red Hat is quietly undermining any legal basis for copyleft and leaning into the idea that gratis products (Fedora) shouldn't have robust & transparent system update tools. Oh, and the umpteen other for-profit controlled (opposite of Mozilla) FOSS projects that get plugged in these spaces pretty much constantly. The Linux Foundation being controlled by Microsoft and Google...? Crickets chirping.
This is what makes me tired of IT and geek culture. It's become like everything else, just kneejerk crap with zero reflection and sense of proportion. As I hinted above, it morphs into this shadow of corporate PR. Consider: if people spent their time criticizing actual badness in Firefox, like ad tracking and DoH, that would be inconvenient for certain interests from Brave on up to Apple and Google. I think the style and quality of venting we usually see about Mozilla serves those interests, much of it probably fed by sock puppets.
@firefoxwebdevs an important addendum regarding Firefox translate: by my math (N = my replies), 25% of its users are fucking unhinged
I told them not to use the necronomicon to train the base model but here we fucking are
@zzt @firefoxwebdevs i'll have you know i'm at least 75%* hinged
* vibe estimate, but we carefully graphed it so it's data now
@davidgerard @firefoxwebdevs I know you, you’ve read much worse things than the necronomicon
you’ve written wiki articles about most of them
@firefoxwebdevs @zzt This doesn't feel honest. Maybe from you personally, sure. But not from Mozilla or the Firefox team.
That is like, I decide the car you get. The brand, the model, the color. But hey, don't worry, your voice is important too, so you are allowed to decide what bumper-sticker I will put on your car.
Seriously, this fake inclusion is kinda insulting.
Again, nothing personal against you. But where else should I share my opinion, considering Mozilla ignores even its own feedback platform 🤷
@pixel @firefoxwebdevs @zzt I hope our friend resists the urge to put a pretty face on all of our blunt feedback before the next meeting where the objective will be further redefining what level of damage control will magically turn Mozilla's bad choices into good ones.
@liquor_american @pixel @firefoxwebdevs I suspect the only feedback that’ll get relayed is the feedback from the posters who still kneejerk defend Mozilla as an institution. the rest of us are just too rude to be counted as part of the community (because we use and care deeply about Firefox and hate the entirely avoidable path it’s gone down)
@zzt @pixel @firefoxwebdevs "Nobody likes our product any longer, but at least we never had to entertain any *shudder* critical feedback."
@liquor_american @zzt @pixel @firefoxwebdevs As the only remaining cross-platform browser that is not chromium, Mozilla deserves nothing but pressure to do better. Defending Mozilla about anything other than making Gecko better is giving them permission to eventually be just another chromium skin
> I want to make sure the community's voice is represented in these discussions.
if that were true, the poll would have had a "remove all LLM functionality" option.
@firefoxwebdevs @zzt How about making a poll "Should Firefox include AI/LLM by default?"
@firefoxwebdevs @zzt You ignored the firefox userbase's voice when it came to adding AI in the first place, don't pretend you're listening now when you're really just trying to get the users to come up with justifications for what you have already decided to do. Firefox users have repeatedly said we do not want AI features installed by default, you chose not to listen and now you're trying to find ways you can feel less bad about that by pretending you gave people options when it comes to AI usage, rather than taking one away.
If you cared about what 'the community' wants, you would have asked people when the AI notion was first pitched and taken no for an answer, but yet again, AI enthusiasts have acted without consent.
@Rycochet @firefoxwebdevs @zzt I did not follow all what happened around Firefox and the community. Did Mozilla made a public consultation regarding AI integration in Firefox ?
Do we have some reliable datas about the opinion of the Firefox's users ?
I would be interested to know if the critical views (that I mostly share) expressed here are largely shared or not.
@fmasy @Rycochet @firefoxwebdevs @zzt You can look at the discussions on Mozilla Connect if you want commentary from community members.
Mozilla does occasionally run surveys, but results are never public.
@firefoxwebdevs @yoasif @fmasy @Rycochet @zzt a self-selecting survey with push-poll questions that deliberately leave out the "no LLMs in Firefox" option is unlikely to be statistically valid
(also we know this is just noise and Mozilla will do whatever was planned in the meeting anyway)
@davidgerard @yoasif @fmasy @Rycochet @zzt I realise your position is immutable, but I've already used the results of this survey to push for a change to the design of the kill switch. I'm grateful to everyone who responded.
@firefoxwebdevs @yoasif @fmasy @Rycochet is the change to the design of the kill switch that it doesn’t exist because all of Firefox’s AI features will be moved into add-ons that aren’t installed by default?
if not, you’ve used the results of the poll to misrepresent community opinion and @davidgerard’s quote unquote “immutable position”, whatever that means to people who don’t speak passive aggressive post-it note, is absolutely correct
@zzt @yoasif @fmasy @Rycochet @davidgerard My interpretation of the poll results is that the vast majority of people feel that the translation engine should be disabled as part of an AI kill switch, but there should be a way to re-enable the translation engine whilst leaving the kill switch otherwise active.
@firefoxwebdevs @zzt @yoasif @fmasy @Rycochet you've had multiple people call out the missing options in the poll, please don't pretend this wasn't a slanted promotional survey
if you claim this statistically bogus poll represents the will of the users, you are lying and you know you're lying
@firefoxwebdevs @zzt @yoasif @fmasy @Rycochet @davidgerard you do realise that social media polls should not be used as an input for any serious actions because they're inherently flawed, yes?
@davidgerard @firefoxwebdevs @zzt @yoasif @fmasy @Rycochet fwiw i'm still waiting for the dude to tell me why i should start trusting mozilla to do the right thing now.
@firefoxwebdevs @mawhrin @zzt @yoasif @fmasy @Rycochet @davidgerard
Voting in that poll legitimises a kill switch as a viable solution. Hence the flood of responses from people who don't want to say "yes" or "no" about how a thing they don't want works.
@firefoxwebdevs @mawhrin @zzt @yoasif @fmasy @Rycochet @davidgerard Since you have not provided options for the opinions that people actually hold here, and included one for 🤦, the results cannot be correct.
@firefoxwebdevs @mawhrin @zzt @yoasif @fmasy @Rycochet @davidgerard
You ignored the real question to comment on something inconsequential; this is a well-known propaganda and domination technique. Answer the question I asked instead...
@firefoxwebdevs @mawhrin @zzt @yoasif @fmasy @Rycochet @davidgerard
Well, yes, the poll output is inaccurate.
There was no „Don't put any AI in Firefox“ or „Make AI an Add-On only“ option. Your framing of the poll presupposes there being AI in the product, implying an incorrect necessity.
Why are you ruining this browser with this crap?
@firefoxwebdevs @zzt @yoasif @fmasy @Rycochet @davidgerard the poll was misleading and i am sure i am not the only one who voted to re-enable the translation because it wasn't fully clear what that meant. if i could revoke my vote i would.
@angelfeast @zzt @yoasif @fmasy @Rycochet @davidgerard as in, you don't think there should be an option to re-enable it, or that it should be enabled by default?
@firefoxwebdevs @angelfeast @zzt @yoasif @fmasy @Rycochet
failure to address this bit:
> the poll was misleading
@davidgerard @firefoxwebdevs @zzt @yoasif @fmasy @Rycochet yeah i would like to see some acknowledgement that the language was misleading.
@angelfeast it's not clear to me how the language was misleading. I clearly stated that the translation feature exists, and asked what should happen with it.
However, 'it should be disabled by the kill switch' is still the winner in the poll, so it seems like your view is well represented, no?
@firefoxwebdevs @davidgerard "trained on open data" what kind of open data? and why does this need to be in the browser in the first place instead of being an extension?
your last sentence feels condescending. "it should be disabled" is not the winner, "it should be disabled with the option of enabling" is the winner. why wasn't there an option for "make firefox's machine translation an extension instead"?
if you think i am misunderstanding, consider that i am a very average user of firefox who does not work in anything remotely resembling tech and if you feel like i don't understand something then that means either you are doing a shit job explaining or you are being stubborn and do not want to accept that even average users don't want this forced on them. or both.
@firefoxwebdevs @davidgerard @yoasif @fmasy @Rycochet @zzt
I didn’t see the poll before this post, but my number one request to Mozilla remains the same:
Stop using the term ‘AI’ anywhere.
It is a meaningless marketing term pushed by the worst parts of the tech industry. Don’t use a catch all for a bunch of unrelated things, name them individually and explain to users why they should care (if you can’t, don’t ship them at all). And make all of them off by default.
Feel free to pop up a dialog saying ‘This page is in a language that you haven’t said you speak. Firefox has optional on-device translation models trained ethically (see here for more information), would you like to install them? (If you decide not to, you can change this decision later in settings) [ Never install translation models ] [ Never install translation models for this language ] [ Install translation model for this language ] [ Automatically install translation models for any language ]’.
Similarly, if a user hovers over an image with no alt text, feel free to pop up a dialog saying ‘This image has no text description. Firefox has an on-device image-recognition model that is ethically trained (see here for more information) that can attempt to provide one automatically. Would you like to install it? If you do not, you can later install it from settings. [ Do not install image-recognition model ] [ Install image-recognition model ]’.
And, in both of these cases, pop up that dialog at most once.
See how neither of these needed to say ‘AI’? Because they were explaining what the model did and why. This is how you communicate with users if you care about users more than you care about investors and hype trains.
@david_chisnall @davidgerard @yoasif @fmasy @Rycochet @zzt I agree that the term 'AI' is kinda meaningless, and it results in the ambiguities mentioned in the poll. However, people are asking for 'no AI' or a way to disable 'AI'. Even tech folks.
@firefoxwebdevs @davidgerard @yoasif @fmasy @Rycochet @zzt
Which is happening because you are shipping features that you call AI and your new CEO has called Firefox an ‘AI first browser’, because he is completely and totally unqualified for his job.
Stop doing that. And then you can have a useful discussion about any ML models that you are shipping (which, I agree, should be plugins, but so should a lot of things Firefox bundles).
@david_chisnall @firefoxwebdevs @davidgerard @yoasif @fmasy @Rycochet @zzt
May I repeat David's (and others') point, and politely request a response: what is the thinking behind this being on-by-default?
If it were off-by-default you'd have an easy argument to fend off the majority of criticism. If Mozilla management and devs sincerely think this is the future of browsers, add it in in all the ways you think it might be useful, but have it all off and very easily addable (as David outlined).
If it is really useful to people, users will be clamouring for it, and you can go from there.
I can think of no way it could make sense to have it on-by-default, unless you count the fact that in that scenario lots of less technical people will then simply put up with it, and be added to the stats of "AI users" on Firefox.
Am I missing something? How does it being on-by-default serve anyone, and in what specific ways does it serve them?
@firefoxwebdevs @davidgerard @yoasif @fmasy @Rycochet @zzt
I'm not sure an official firefox account is allowed to make passive-aggressive snipes about "immutable positions", especially when it comes to AI.
@TheEntity @firefoxwebdevs @yoasif @fmasy @Rycochet @zzt yeah this accusation turned out to be a confession, when he admitted the "don't include it" option wasn't in because AI was his immutable position
@davidgerard @firefoxwebdevs @yoasif @fmasy @Rycochet @zzt
Also that reply's worth reading very carefully.
You said we all know this is noise and Mozilla will do whatever.
His reply was he's using the results "to push for a change". Not that said change will actually happen.
So nothing you said was ruled out or denied.
@TheEntity @firefoxwebdevs @yoasif @fmasy @Rycochet @zzt given the best predictor of future behaviour is past behaviour, and mozilla's spent the past year racking up some doozies, let's say i'm not sanguine
@firefoxwebdevs @fmasy @Rycochet @zzt I was talking about the surveys pushed within Firefox: https://www.askvg.com/tip-disable-surveys-rate-your-experience-out-of-date-notifications-in-firefox/
@firefoxwebdevs the translation model should replace all the words on the page with this badge
@firefoxwebdevs this is my issue with calling everything “AI”.
I’m happy to rely (dependent, even!) on local ML-powered computational photography eking out great shots from a tiny camera. But I don’t want my camera completely inventing details with generative AI when I zoom in.
I’m happy for my phone to “know” what I am typing even if I didn’t hit the keys perfectly, based on federated learning. But I don’t want my phone to rephrase things with an LLM.
All of this is considered “AI” now. 🙃
@cassidy @firefoxwebdevs this is because it's an AI marketing lie. "ha, you say you hate slop, so does that mean you hate *xrays* now? Checkmate, AI hater!"
@cassidy @firefoxwebdevs The term "AI" has existed since 1956 so of course it's going to have a very broad definition.
Things don't just stop being "AI" when AI researchers invent newer "more AI" stuff.
@mage_of_dragons @cassidy @firefoxwebdevs We also know exactly what lies in store for the current slate of AI: 20 years of funding drought, just like all its ancestors
@firefoxwebdevs it would be nice if the "AI kill switch" had:
a list of each of the models used, what for, and whether they're trained on open data, each having a "disable this" switch
a thing right at the top of the list which says "I don't care, kill all this AI stuff"
but that would require putting a list of all the different things that Firefox is now using AI for and whether each is using fair models or not, which I suspect a lot of management won't want to document clearly to users
@sil @firefoxwebdevs I suspect they can't, even if they wanted to.
I think the challenge with everything going on here is one of clarity.
@sil, you are asking them about disclosure of models and sourcing. But that is far from the only AI that is in the system.
The tool that does grammar checking and language identification does not leverage an LLM, and while there may be some type of model underneath, the context is very different. Tools that detect spam pages or faulty JavaScript that locks up the page--that's another type of AI hard at work.
Is the browser allowed to support speech to text?
@jmax You're calling out that Firefox may not be able to do this, but I think that mischaracterizes the scope of what's happening here.
The browser has several types of non-deterministic, probabilistic tools in it that provide useful services. Now there's a backlash against one very specific version of those non-deterministic, probabilistic tools. But the backlash is vociferous, often unsolvable, and incredibly broad.
It's hard to engage with non-specific anger.
@gatesvp @firefoxwebdevs @sil @jmax this is the sort of obfuscatory claim I see from AI marketers. "You say you hate slop, so that means you must hate X-ray scanning! Checkmate, AI hater!" It's not convincing.
Let's assume you're correct.
People only care about AI slop.
Why is Firefox even running this survey? Like who cares? Translations aren't "AI slop", they don't need to be covered by the "AI Kill Switch"... why are they even asking this question?
Now take that assumption and read the rest of the comments. From what I'm reading, people care about more than just the AI slop. People are asking questions about the models being used for ML systems, systems that are incapable of generating AI slop.
So we're at a weird spot here. You believe that people care only about AI slop. Firefox obviously believes that people care about more than that, because they're running this survey. And people responding are asking questions that also indicate they care about more than AI slop.
So how do we square this?
What do you think is a better outcome for Firefox and the community?
> Why is Firefox even running this survey?
Because the people in charge genuinely believe that AI slop is The Future™ and believe that, in order to stay relevant, Firefox must become an AI Browser™.
But somehow users inexplicably dislike AI slop?! How can this be?!
Embedding AI slop in Firefox as deeply and pervasively as possible is thus a critical goal. But this risks reputational damage with its actual users! To mitigate the risk, bundle features that were not controversial into the discussion of the controversial features; this serves to average the controversy across the (previously uncontroversial, existing) translation feature and highly controversial new slop features, hopefully reducing it below an ignorable threshold.
@davidgerard @RAOF If your core belief is that Mozilla is failing to serve the interests of its members, then what are you even doing on this thread? You just hoping to harass the dev account until they block you out of spite?
What evidence could any of us provide that would change your mind and cause you to become a Mozilla booster instead?
@gatesvp @firefoxwebdevs @sil @jmax They could engage with the nonspecific anger by removing the VERY SPECIFIC technologies at issue
Instead they want to make us argue about "well, if we haul away the shit, do you want us to haul away the bark dust, too? Some people need bark dust, so you have to let us smear shit all over everything"
@firefoxwebdevs it would be compelling to hear someone at Mozilla recognise that you don't have this "kill switch" yet but you're already facing the problem of what to kill.
It would also be compelling to hear someone at Mozilla recognise that there is a browser and there are browser extensions and that "translation" has nothing to do with the operating system of a web browser.
@firefoxwebdevs It would also be compelling if a team at Mozilla were dedicated to building the best browser translation add-on on the market, for all browsers. To promote the power of add-ons and, at the same time, the Mozilla brand.
@firefoxwebdevs it would be compelling if the future of Firefox was to be the fastest, most secure, most efficient browser engine with the best add-on ecosystem.
It would also be compelling to hear someone from Mozilla talk about their design effort to improve add-on adoption rather than forcing add-on-level-functionality into the core product
@fasterandworse there are no such interfaces for extensions to intercept input boxes, I guess. And also, why should Firefox improve other browsers?
@eckes @fasterandworse To further the charitable mission, pretty obviously.
q1 - design one
q2 - see the post you responded to
@fasterandworse @davidgerard the browsers by design don’t want that extension, I don’t see a point in Mozilla forking chrome
@eckes @fasterandworse I don't see a point in the AI shit and the CEO has already floated blocking adblockers, so here we are
@firefoxwebdevs How are the models not LLMs, if they are trained on large datasets and generate text?
@Fnordinger https://www.neuralconcept.com/post/ml-vs-llm-key-differences-applications-engineering-impact seems like a good overview
@jaffathecake @Fnordinger that really reads like chatbot text. are you *sure* it is not?
@davidgerard @Fnordinger hah, I'm not sure. Do you know a better source? I just found one pretty quickly
@firefoxwebdevs What do you mean "open data"? https://firefox-source-docs.mozilla.org/toolkit/components/translations/resources/01_overview.html points to https://browser.mt/ points to https://paracrawl.eu/index.php which says "We do not own any of the text from which these data has been extracted."
@firefoxwebdevs I mean realistically, we have about:config at home, and y'all are already not respecting that
why the future "KILL SWITCH" carrot? it just comes across like a Musk promise
@firefoxwebdevs going through all the other replies and your lack of response to any of them..
“why are there flaming bags of poop on my porch, and why do they all have different postmarks”
@firefoxwebdevs Can you clarify the distinction you’re making between LLMs and open data? Was the latter collected with consent?
@knowler @firefoxwebdevs it absolutely was not! he means "open data" as in "we found it lying around, bugger the license" https://mas.to/@twifkak/115849848003348176
@firefoxwebdevs Poll is missing a radio button for "fuck you and the horse you rode in on"
@firefoxwebdevs The translation feature was unnecessary to begin with. I suspect y'all know this.
@firefoxwebdevs "AI" isn't a real thing. When we use the word "AI", we (and you) mean something completely different from "Artificial Intelligence", basically referring to "things that we wouldn't have used machine learning for before 2018, because before 2018 we recognized it does not work for those purposes".
However, translation should still be a removable extension, for a variety of reasons, one being that the Simple Translate plugin is actually better than your builtin translation support.
@firefoxwebdevs As worded, and if we can trust Mozilla, then the acceptable answer should be No for these reasons: ML is not AI, and on-device means nothing is sent out of the device. In exchange you get free translation. Win.
BUT… there’s the trust issue now.
And what we REALLY need is not an AI kill switch but more of a “data transfer/phone-home kill switch”, almost like a firewall, where we know the browser is not taking any data and sending it to a device we don’t control ourselves.
@mdavis folks want to disable 'AI' for more reasons than privacy. Privacy is important of course, but folks are also concerned about the training data, and energy used for the training.
@firefoxwebdevs But if the ML/AI training work is processed on the device and is not shared off device, and it is in support of a feature like translating a page (which should be prompted/selectable), then what’s the issue? You can say no and nothing happens. Or you can say yes and the worst that happens is you chew up some local power on your laptop or PC. Or are you saying that even though the translation happens on the device, the RESULT of that training data is sent back out?
@mdavis I believe it's a moral stance due to how the models were produced.
@firefoxwebdevs Hookay… then this is less about a local feature or data sharing and more about an overall “Made with AI” concern where nothing related to AI *at*all*ever* taints the user’s browser, in or out. In that case, if the user turns on the AI kill switch, it should totally kill anything having to do with AI for those who take that position.
That’s an issue with these polls — too much undisclosed nuance to be able to answer properly.
@firefoxwebdevs But wait… what if the developers used AI to help develop the code in the browser itself? Does that mean AI kill switch purists should then rather not even use the product at all?
@mdavis it's definitely a complicated topic! I guess it's down to us to figure out a model that best serves most people, while providing options to cover the rest.
@firefoxwebdevs I don’t think you can make any assumptions then without granular switches that let the user control every facet. In which case, this kill switch is probably less a binary checkbox and more a slider or a series of discrete options. And as a Firefox and Thunderbird user, we are used to lots of toggles and switches under the hood, so I’m fine with that kind of control.
The Firefox AI "kill switch" is not "complicated" except insofar as it's incoherent. it's not "undisclosed nuance" except insofar as it's incoherent.
the "kill switch" doesn't exist.
this is important to keep in mind. once you remember that NONE OF THIS EXISTS, you will realise that every one of the dilemmas you posit is an imaginary problem that follows from incoherent postulates.
e.g. "AI kill switch purists" is not a coherent postulation because the "kill switch" does not exist.
the "kill switch" is a hypothetical proposed in this post:
https://mastodon.social/@firefoxwebdevs/115740500373677782
the "kill switch" is a proposal to satisfy the demand for an opt-in by providing an opt-out. you might think that's a failure to respect the question, and you might even begin to suspect the proposal was in bad faith.
note that Jake, in presenting the kill switch and calling it a kill switch and getting it into all the papers as a kill switch, says he's uncomfortable with the name he's publicised it as. you might think that's oddly incompetent for literally a PR (devrel) person.
the concept as presented imposes multiple false dilemmas.
the LLM stuff should *incredibly obviously* be an extension. this is the purest possible opt-in, despite jake's past attempts to muddy the meaning of "opt-in".
making it an extension is also eminently feasible. There is literally no technical reason it needs to be a browser built-in.
this suggests the reasons are not in any way technical. some person with a name, who has yet to be named, dictated that it would be a built-in. so that's what Mozilla is going with.
why Mozilla went hard AI is entirely unclear. this would have been late 2024? we have no idea who was inspired with this bad idea nor why they were so incredibly keen to force it into the browser.
nor is it clear what Mozilla will do for external LLM services when the AI bubble runs out of venture capital and pops in a year or so, most of the chatbot APIs shut down and whatever remains is 10x the cost at least. but that's a problem for 2027's bonus, not 2026's.
note how the poll provides no option for "no LLM functions built-in to Firefox", in a pathetically transparent attempt to synthesize consent. jake wants to use this poll as evidence of what the user base wants, deliberately leaving out the option he knows directly a lot of them want.
and in conclusion:
1. solve the "kill switch" naming problem by branding it the "brutal and bloody robot murder switch with an option on the executives responsible".
2. make all this shit an extension like they should have a year ago.
3. and your little translator too.
@davidgerard @[email protected] @firefoxwebdevs “but wait just let me explain the AI kill switch”, Mozilla continues to insist, as they slowly expand and transform into an SBF
@zzt @firefoxwebdevs this would involve them one day standing before Congress and solemnly declaring "I fucked up", which is why we had to jail them first.
@zzt @davidgerard @firefoxwebdevs What Mozilla needs now is an "AI kill switch" that can actually kill.
@jwz @zzt @firefoxwebdevs we added an extension to send 440 volts through the other guy's chair
1M+ installs first week, 0 users remaining second week
@davidgerard @jwz @zzt @firefoxwebdevs
Finally, someone is getting rich and/or famous by stabbing people over the internet.
@zzt @davidgerard @firefoxwebdevs Mozilla spent 25 years being unable to get the "don't use tabs" preference to work and I'm supposed to believe their "turn off AI" preference will work?
@davidgerard @mdavis @firefoxwebdevs
In my admittedly limited experience with exceptionally dubious features that the users don't want, but the executives do, it's also not truly an 'AI kill switch' until it also fires the people responsible for putting 'AI' into the thing in the first place.
@theogrin @mdavis @firefoxwebdevs that's the other missing poll option, yes
@davidgerard @theogrin @mdavis @firefoxwebdevs "No AI, and Anthony Enzor-DeMeo resigns in disgrace."
@theorangetheme @theogrin @mdavis @firefoxwebdevs also the new AI CMO. also whichever person started this ball rolling and got Anthony in.
@davidgerard @theogrin @mdavis @firefoxwebdevs I fixed it.
Do you want AI slop in Firefox?
| No.: | 0 |
| Hell no.: | 0 |
| Fuck no.: | 0 |
| Fuck no, and also Anthony Enzor-DeMeo resigns.: | 0 |
@theorangetheme @davidgerard @theogrin @mdavis @firefoxwebdevs If Firefox isn't willing to cut out AI (fuck levers and knobs), then stop calling it Firefox.
Let the legacy of user trust and privacy end and stop lying to people. Mozilla is a company, the browser is a product, and you (Moz Org and Foundation) have no interest in consumer rights.
Mozilla's battlecry should be "Shut up, download the free browser, and let us watch everything you do." Because we're not your customers anymore. Stockholders and advertisers are.
Your next poll: "How can (browser, not Firefox) bring you ads and save you money when shopping?" That is what you want to ask, so do it.
@Tock @theorangetheme @theogrin @mdavis @firefoxwebdevs
countdown to:
1. more AI in Firefox
2. Mozilla drops Gecko in favour of Chromium
3. with all possibility of ad blocking disabled
4. certainty the massive international user base of people *just like them* will show up any day now! just you wait!!
@davidgerard @theorangetheme @theogrin @mdavis @[email protected] Such a waste, too. Years of standards fighting, differentiation with Gecko, then Quantum (see? I WAS a follower all along!) and being a model of what Open Source stewardship could mean for the larger Internet.
RIP Mozilla, if you thought you were floundering as a Not for Profit Corp, you're worse than useless as a Marketing Agency.
@theorangetheme @davidgerard @theogrin @mdavis @firefoxwebdevs the whole c-level of mozilla looks sketchy af
thinking of it, firing every cis man and building up again from there probably wouldn't be the worst move 🤷‍♀️
@davidgerard @firefoxwebdevs I appreciate the time and effort you put into this thoughtful response, emphasizing points that are an important part of the discussion.
@davidgerard @mdavis @firefoxwebdevs where did I say I'm uncomfortable with the name "kill switch"?
@jaffathecake @mdavis @firefoxwebdevs in the quoted post included as the reference
@davidgerard @mdavis @firefoxwebdevs but I didn't say that. Get your words out of my mouth.
@jaffathecake @mdavis @firefoxwebdevs
> I'm sure it'll ship with a less murderous name
those don't appear to be words of comfort
what is this "how dare you take 2+2 and get 4 I am outraged at your calumnies" shit
@davidgerard @mdavis @firefoxwebdevs I'm sure it'll ship with a less murderous name because folks internally have said that. I expressed no discomfort with the name personally. I used it again in the poll post. I clearly have no personal issue with using it.
@davidgerard @mdavis @firefoxwebdevs in fact, someone internally questioned me using "kill switch" in the post above, and I defended it, saying that a lot of folks I chatted to liked and understood the name.
Whilst it might not make it into Release Firefox, I think it's the right term to use in these discussions in the meantime.
@davidgerard @mdavis @firefoxwebdevs I did not say I was personally uncomfortable with the term, because I am not personally uncomfortable with the term.
Please do not let your imagination run wild with this.
@jaffathecake @davidgerard @[email protected] @firefoxwebdevs this is a real weird hill for you to die on
if you’re representing your employer and they’re uncomfortable with the kill switch naming, to the point where you keep encasing the term in scare quotes every time it’s used, then we can’t tell and frankly don’t care if you personally love the term. nobody’s here for Jake. do you understand that? we’re here because we’re dedicated Firefox users angry at the direction your employer has taken.
@zzt @jaffathecake @firefoxwebdevs this account will be shut down by the incoming AI CMO anyway as unauthorised marketing
@davidgerard @zzt @jaffathecake @firefoxwebdevs He sounds like a politician talking to the press. “I never said that.” “Yes you did.” “Well what I meant to say was…” “It was implied.” “It's the official position of the government….”
@zzt @davidgerard @firefoxwebdevs "note that Jake, in presenting the kill switch and calling it a kill switch and getting it into all the papers as a kill switch, says he's uncomfortable with the name" - I was called out by name directly and deliberately.
@jaffathecake @zzt @firefoxwebdevs i'm sure you'll reveal the culprit who hacked the account and posted it forthwith
@davidgerard @zzt @firefoxwebdevs It's telling that you can't admit that you (deliberately?) misrepresented me to stir outrage.
@jaffathecake @davidgerard @firefoxwebdevs it’s telling that you can’t admit that you deliberately misrepresented community consensus to push horseshit into the browser I use
@zzt @jaffathecake @davidgerard @firefoxwebdevs Fatality. Objective C... WINS
@jaffathecake @zzt @firefoxwebdevs your outrage is entirely performative and I don't believe you
@davidgerard @jaffathecake @firefoxwebdevs nah just look at the outrage we’ve stirred:
- many posts constructively outlining a path for Firefox (all non-core browser features as add-ons that aren’t installed by default), more than sufficient to establish consensus in a good faith environment
- lots of passionately negative responses to the poll as presented, more than sufficient to discard the poll in a good faith environment
- plenty of emotional posts because Firefox matters
how horrid!
@zzt @jaffathecake @davidgerard @firefoxwebdevs
I'm not here because I'm a dedicated Firefox user. I'm here because:
by the way, here's the Twitter version of the poll, posted the same time as the masto version. the screenshot is the *entire* responses to the poll, because Twitter is a plague graveyard.
https://xcancel.com/FirefoxWebDevs/status/2008586590998983153
note also the claim about "open data", which turns out to mean "we took the data cos someone found it lying around; fell off the back of a truck, honest" and not "open" in any other sense.
but the weird thing is, it has one less option for no good reason (Twitter allows four options too).
@firefoxwebdevs @mdavis small clarification
@firefoxwebdevs introduced the concept of an "AI kill switch"
the "AI kill switch purists" you're talking about don't exist.
No serious person would think this is a good idea because it doesn't make sense, as is evident from this "design" stumble at the starting line.
@fasterandworse @firefoxwebdevs @mdavis it is less likely to be a stumble and more likely introduced in bad faith by a PM to derail the process
Btw, there's meaningful discussion to be had about the biases encoded in ML-based translation -- try translating "the scientist" and "the teacher" into a language with gendered nouns. But that is separate from the widespread opposition to LLMs and everyone knows it.
@firefoxwebdevs The frame of this question is risible.
I am begging you to just make a web browser.
Make it the best browser for the open web. Make it a browser that empowers individuals. Make it a browser that defends users against threats.
Do not make a search engine. Do not make a translation engine. Do not make a webpage summariser. Do not make a front-end for an LLM. Do not make a client-side LLM.
Just. Make. A. Web. Browser.
Please.
Let's ask the real question:
Firefox users,
do you want any AI directly built into Firefox, or separated out into extensions?
@firefoxwebdevs
@davidgerard
@tante
| I want AI built into Firefox: | 75 |
| I want AI separated into extensions: | 1208 |
| Mozilla should not focus on AI features at all: | 6338 |
Closed
@duke_of_germany @firefoxwebdevs @davidgerard @tante
I don't care as long as it doesn't interfere with proper browsing.
@sibrosan @duke_of_germany @firefoxwebdevs @[email protected] @tante
It's spying on you. All Gen AI is an espionage attack surface for the technofascist state.
You might not notice that ICE and Trump's buddies can look at your web surfing whenever they want.
But you won't see it, so that's okay! 🙃
@duke_of_germany @firefoxwebdevs @davidgerard @tante
you might add: I want an "opt-in" button
@Tacitus @duke_of_germany @firefoxwebdevs @tante jake's already tried "well what even *is* opt in, it's so impossibly complicated you know"
(it isn't, make it an extension)
@duke_of_germany @firefoxwebdevs @davidgerard @tante
Missing options there.
Not built in, but alternatives should be allowed, without making them a focus.
I quite agree with this reflection https://www.anildash.com/2025/11/14/wanting-not-to-want-ai/
@rsn @duke_of_germany @firefoxwebdevs @tante I asked him for a week where all these AI-wanting Firefox users were, several other people did, and he couldn't show a one. So I would treat that essay as AI shilling by criti-hype.
@davidgerard @duke_of_germany @firefoxwebdevs @tante
We *might* miss the point thinking like this and be condemned to never see the FF market share rise again, since everyone who wants AI has probably already moved over to an AI browser. Hence not finding them using FF.
Don't get me wrong. I don't want AI. But I don't live alone.
EDIT. I *taught undergraduate students for 2 years* and I can tell you the very large majority of the youth use AI, for many, many use cases. Even though it's prohibited.
@rsn @duke_of_germany @firefoxwebdevs @tante do any of these uses survive the bubble popping and the VC-subsidised APIs shutting down? I can't think of one.
anyone who thinks any form of "AI is here to stay" hasn't examined how this works.
@davidgerard @duke_of_germany @firefoxwebdevs @tante
What about CSS prefixes? They were not intended to last yet they've been implemented and supported.
I'm not trying to be right but to discuss the stakes.
@davidgerard @rsn @duke_of_germany @firefoxwebdevs @tante I think I broadly agree with this.
The pop-up analogy is interesting enough, but the problem with it is that tabs and pop-up blockers are generically useful browser features — they might have been built for the kind of person who uses pop-up laden trash websites, but we all benefit.
I don't really know what Firefox's planning board looks like but I'd bet money there are tickets knocking about at the bottom of some forgotten backlog with names like "improve custom search engines" and "finally do something with that 'sideview' experiment" — dig those out and do them. I bet AI users would love to be able to easily make ChatGPT their default search engine and have it open in a sidebar by default — and the rest of us would doubtless get some utility from those things as well. You can refocus your priorities around a class of user without pivoting the entire company. Putting AI right in the browser is like a headphones company realising people listen to music on the train and instead of improving noise cancellation, building a set of headphones *with an integrated train*.
We're not going to combat Bad AI with Good AI. What on Earth makes anyone at Mozilla think they can make a better AI than Apple managed? It's utter nonsense.
Firefox is a web browser. ChatGPT is a website. They work together just fine out of the box.
For documentation:
This is the progress of the (still running) alternative Firefox poll after almost 24 hours:
◉ I want AI built into Firefox
◇ 1% (54 votes)
◉ I want AI separated into extensions
◇ 16% (874 votes)
◉ Mozilla should not focus on AI features at all
◇ 84% (4710 votes)
Total votes: 5,638
Link to poll:
https://mastodon.gamedev.place/@duke_of_germany/115853330852766984
@duke_of_germany @firefoxwebdevs @davidgerard @tante what do you consider AI? Features like translation are something completely different from something that does bad summaries of web pages.
@duke_of_germany @firefoxwebdevs @davidgerard @tante
I feel I can’t answer this because my instinct is 3, but I have no idea whether that is organisationally realistic.
People want all sorts of outcomes from things they put no work into and take for granted
@urlyman @duke_of_germany @firefoxwebdevs @tante well we know it isn't, they've already declared they're going AI no matter what and they're hiring lots of new AI guys
the purpose of this is so they can't lie later about what the users *actually* want
@duke_of_germany @firefoxwebdevs @davidgerard @tante
We were longtime users of Firefox.
AI is crap.
Nobody wants AI.
All of us are Librewolf users now.
@Compassionatecrab @duke_of_germany @davidgerard @tante fwiw, Librewolf includes the same AI translation engine as Firefox.
@firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante
Firefoxwebdevs, would you please stop muddying the waters by conflating machine translation with generative AI? You know they're not the same, you pointed it out in your poll.
@cryptica @firefoxwebdevs @Compassionatecrab @duke_of_germany @tante Jake is the sort of person who says "wellll what does opt-in really *mean*" before offering you a literal opt-out and claiming it's an opt-in
@davidgerard @firefoxwebdevs @Compassionatecrab @duke_of_germany @tante
Oh i know jake is faking ignorance. His paycheck depends on it. At least until his capitalist masters are done extracting the last vestiges of value, then he'll be discarded along with the smoking cratered ruins of everything mozilla.
Jake, unless you have enough money invested to be considered capital, stop giving cover to the people destroying everything just to make number go up.
@cryptica @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante I'm happy you felt it was clear from the text of the poll, yet look at the results
@jaffathecake @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante
The poll you made specifically to muddy the water?
@cryptica @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante you said I clearly made the case for them being different, and yet respondents disagreed.
Not sure how I could have made the point clearly, yet been misleading.
@jaffathecake @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante
🤷
By making a flawed poll based on falsely tying all machine learning to GenAI/LLMs.
Do you seriously think this poll shows that only 1% want translation features? Of course not, you know better.
@cryptica @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante see this, and the following post in the thread https://mastodon.social/@firefoxwebdevs/115859958902197087
Thank you everyone who responded to this.
For context: I saw mocks of the kill switch where translation was included, but they lacked the ability to enable the kill switch while still keeping particular features (such as translation) on.
The results of this poll helped me successfully push for more granular control in addition to the single AI kill switch. So again, thank you for that.
@jaffathecake @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante
Thank you for pushing for, and getting, more controls for the end user.
I know you are not directly involved in that part of the organization.
Please stop conflating all machine learning with generative AI/LLMs.
@cryptica @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante I agree they're different. I feel I laid out the difference clearly in the poll, and from your messages, it seems like you agree.
And yet, the vast majority considers that to be the type of thing that should be covered by the kill switch.
So, if the kill switch didn't include translation, folks would have accused us of cheating/dishonesty. I'm glad we avoided that.
@jaffathecake @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante
I still think the poll results are skewed based on the construction of the response options tying translation features that people do want to the genAI features that no one wants.
Hence why I asked for you (and whoever was responsible for the mockup in the first place) to stop tying them together.
@cryptica @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante I really agree with you in terms of translation, but look at some of the replies to the poll posts. I think if Mozilla tried to say "no, this isn't the type of AI you hate, it's different", the response would be furious.
@jaffathecake @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante
I just reread an entire third of the responses.
I saw two who wanted translations to count as AI for the kill switch.
I did see scores of folks, however, pointing out the things I pointed out, namely that your poll is flawed, you are intentionally muddying the "ai" waters, you are intentionally choosing to not understand that you cannot point to the results of a flawed poll as an indicator of anything in reality, you are cherrypicking comments, and you are inventing a fictional future response based on your biased flawed poll.
Tell your bosses that there is zero trust and goodwill left for anything Mozilla, much less for hot garbage takes meant to boiling-frog/nazi-bar us into accepting the inevitable creep of techbro number-go-up enshittification that is blatantly being forced on us even as we speak.
That is why no one trusts your "AI kill switch," Mozilla. You already proved you can't be trusted.
@cryptica @jaffathecake @firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante
It would have been less disingenuous if Firefox had gone with this as their poll:
Do you want AI in your browser?
☐ Yes
☐ Ask me later
@firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante when did you start calling the translation engine “AI”?
I don’t think I’ve seen it referred to that way until after people started objecting to more recent things more consistently referred to as “AI”
@ShadSterling @firefoxwebdevs @Compassionatecrab @duke_of_germany @tante around the same time crypto bros started calling git a "blockchain", or at least for the same sort of reasons
@firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante at first I thought lumping the translation models with the rest of genAI shit in your poll was just incompetence from execs/board members without enough knowledge on this matter. Now I think it's intentional, and pathetic.
I mean, I have never seen anyone complaining about it (why would anyone? it's a non-commercial AND ACTUALLY USEFUL thing, running locally without sharing anything outside, made in collaboration with public unis, and from what I've seen all the training data is public domain, open source or similar; that's how these things should be done).
But now I remember how aibros insist on lumping LLMs in with classic neural networks etc. as if everything were the same, to give credibility to LLMs. So now people opposing their shitty LLM integration are opposing translations too, of course, and that shows how unreasonable they are.
@firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante fwiw people who used Firefox for decades are telling you they won’t use it anymore and you’re fwiw-ing them as if you think they’re just too stupid to make their own decisions about what they do and don’t support rather than taking their VALID CONCERNS and treating them with respect.
FWIW I also am no longer using Firefox and have used it for decades. When people tell you what they want and you tell them to eat shit they’re not going to want to come back. This is customer service 101 but no one seems to know how to make products worthwhile anymore. You are “innovating” to get more cash flow but not worrying about the stability of the customers you have. You guys will regret that very quickly - just like Target and Spotify and allllll these other companies that do this same exact thing.
Cheers. No one wants Ai and no one cares about it.
@firefoxwebdevs @Compassionatecrab @duke_of_germany @davidgerard @tante
The translation engine wouldn't even be an issue if it weren't for your attempt to tie it into the kill switch for so-called "AI" software, a switch that shouldn't even be necessary, because the Eliza-style chatbot software has negative value.
It's a distraction from the real issue, and you know it's a distraction from the real issue.
@duke_of_germany @firefoxwebdevs This is a much better way to present the options. In the survey above you have to read the question, understand the negation, stop, pause, think, process the different options, ...
The second one just asks what should be done (positive) and gives understandable choices.
Please, let's all make surveys as easy as The Duke did. 🙂
@duke_of_germany @firefoxwebdevs Oh, and please use clear choices with unambiguous answers. What does "🤷‍♀️" even mean in the first survey?
- I don't know.
- I don't care.
- I lack sufficient deep knowledge about what AI actually is, though I do get the feeling that it's more than just LLMs, but how would I know? The privacy intrusions and energy sources both used to create LLMs in the first place do *really* bother me, especially in the shadows of Metallica v. Napster and the social inequity that is now used by companies like OpenAI. Also, I want to crawl to bed, eat pizza and pet my cat.
- Something else...
So, dear Firefoxians, what does 🤷‍♀️ mean in your survey?
@jesterchen @duke_of_germany ah, I thought 🤷 was well understood to mean "I don't know/care, I just want to see the results". It's pretty commonly used, but clearly not commonly enough. Sorry!
@firefoxwebdevs @jesterchen @duke_of_germany
Immediately restore the work of the Japanese language translators that you paved over with AI slop
https://linuxiac.com/ai-controversy-forces-end-of-mozilla-japanese-sumo-community/
@firefoxwebdevs Where's the option for "I do not want this bullshit toy anywhere near my browser"? Is someone forcing you at gunpoint to be pro-slop? Why are all the executives so into this crap? Can't we just let them have their cocaine daydreams without subjecting the rest of us to it?
@firefoxwebdevs How about "don't put pointless AI bullshit into your browser in the first place so you don't have to ask asinine loaded questions like this to try to con people into not turning all that shit off."
@firefoxwebdevs Is it an off-switch, or isn't it?
"Off-switch except for this PM's pet project" is not an off-switch.
donate to servo if you can
https://opencollective.com/servo
they have a roadmap that is dedicated to making an actual browser engine, not a collection of browser features on top of one
@firefoxwebdevs My closest answer would be "no", but I think the question is kind of mis-phrased here, and that's probably going to lead to a confusing and potentially misleading outcome.
The problem that people have is not with "AI" as a generalized category, but with the current generation of thieving, climate-destroying, grifting systems that are marketed as AI to an overwhelming degree - notably LLMs and "generative AI", but really anything with those inconsiderate properties.
If your kill switch is presented as an "AI kill switch", then depending on the person they're either going to understand that as "exploitative tech", or as "machine learning", and so make different assumptions as to whether local translation is included in that.
So I think you'll have to be a lot more explicit about what you mean; either by describing clearly what the kill-switch includes, or what it excludes, right in the place where the option is offered. Otherwise it's damned if you do, damned if you don't; depending on whether you include translations, either one or another group is going to be upset with the unexpected behaviour.
So, ethically, if the translation feature is built on ethically collected data, and it has no outsized climate impact, then I would not consider it something that needs to be included in a "get rid of all of it" kill switch. But to convey this clearly to users, both that and why it isn't included should be explained right there with the button, with potentially a second-step option to disable it anyway if someone still feels uncomfortable with it.
That way you've transparently communicated to users and shown that you have nothing up your sleeve by immediately and proactively offering them an option to disable that, too, if they have already shown interest in removing "AI" features.
@firefoxwebdevs Here's a concrete example of what I mean, that should be pretty consistent with the Firefox UI design:
@joepie91 @firefoxwebdevs turns out the translator includes mass-collected data too, it's not "open data" at all but whatever they found lying about
@joepie91 I think a lot of people in the replies would consider this sneaky. It's a tricky UX problem. But yes, granular control needs to be part of the solution, along with a kill switch.
@firefoxwebdevs I can only speak for myself of course, but I'm someone who is strongly opposed to sneaky approaches, like hiding things in submenus or requiring people to go back later to disable new things, for example. And I'm also strongly opposed to basically everything in the current generation of "AI" (LLMs, GenAI, etc.) - but personally I wouldn't consider this sneaky, as it's immediately visible that there's a second choice to make, at the exact moment you disable "AI".
Of course if that stops being the case and the second option gets hidden behind an "Advanced..." button or foldout for example, it would be sneaky. But in the way it's shown in my mockup, I would consider it fine as it's both proactively presented and immediately actionable.
(I do still think that exploitative "AI" things should be opt-in rather than opt-out, but it doesn't seem like that's within the scope of options that will be considered by Mozilla, so I'm reasoning within the assumption of an opt-out mechanism here)
@joepie91 they will be opt-in, but different people have different opinions about what that means. For us, it means models won't be downloaded or data sent to models without the user's request.
However, some folks have said the only meaningful opt-in would be a separate binary for the browser-with-AI, or even having to compile it manually.
@firefoxwebdevs "Without the user's request" is quite ambiguous, though. I'm reminded here of Google, which put the AI tab before the Web/All tab, displacing it so that people would unintentionally hit the AI button and "request" it. It's a small and plausibly-deniable change that nevertheless violates the user's boundaries, and difficult to call out and stop even internally within a company or team. I've seen many companies and software do the same thing.
A genuine opt-in would, in my opinion, look something like a single "hey do you want such-and-such features? these are the implications" question, presented in a non-misleading way, and if that is not answered affirmatively then the various UI elements for "AI" features should not even appear in the UI unless the user goes and changes this setting. It's much harder for that to get modified in questionable ways down the line, and reduces the 'opportunities for misclick' to a single one instead of "every time someone wants to click a button". It also means users aren't constantly pestered with whatever that week's new "AI" thing is if they've shown no interest.
Such a dialog could still specify something like "if you choose Yes, Firefox will still only download models once you try to use a feature", to make it clear to users that it's not an all-or-nothing, and they can still pick-and-choose after selecting 'Yes'.
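For what it's worth, the "ask once, render nothing until Yes" flow described above is simple to express in code. Here's a minimal sketch, in Python as a neutral stand-in (a real Firefox implementation would live in prefs and front-end code; every name below is hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIPrefs:
    # Single master opt-in; None means the user has never been asked.
    opted_in: Optional[bool] = None
    # Per-feature toggles, consulted only after an affirmative opt-in.
    translation: bool = True
    summarizer: bool = False

def show_ai_ui(prefs: AIPrefs, feature: str) -> bool:
    """No AI UI is rendered at all unless the user once answered Yes."""
    if prefs.opted_in is not True:   # covers both None (never asked) and False
        return False
    return getattr(prefs, feature, False)

prefs = AIPrefs()
assert not show_ai_ui(prefs, "summarizer")   # never asked: nothing appears
prefs.opted_in = True                        # explicit Yes in a one-time dialog
assert show_ai_ui(prefs, "translation")      # now per-feature toggles apply
```

The value of the single gate is that consent is checked in exactly one place, instead of one misclickable button per feature.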
@joepie91 @firefoxwebdevs Mozilla's tortured definition of opt-in seems to predict that Mozilla will invent features to nag you into enabling AI, as they have already done with Link Previews: https://www.quippd.com/writing/2026/01/06/architecting-consent-for-ai-deceptive-patterns-in-firefox-link-previews.html
@firefoxwebdevs I'm trying to phrase this using as few expletives as possible: About 18 years ago, I installed Firefox because I needed a tool to look at webpages written in the hypertext markup language, transferred from their servers via the hypertext transfer protocol. That's arguably the only sensible use case for an internet browser that we could come up with so far. Firefox was actually really good at that. It was fast. It worked decently well on my linux machine. Over the years it got even better. The extension system allowed for proper ad and script blockers and other privacy-preserving add-ons.
That niche of "good browser" got emptier and emptier until only Firefox remained. And for some bizarre reason the strategy right now is to yeet itself out of that niche? Because it totally makes sense to devote resources to some GenAI gimmicks, and then devote even more resources to implementing a "kill switch" to disable them?
Firefox has one job and one job only: Download and display websites. I don't see many resources devoted to that these days.
@firefoxwebdevs Also as a side note: The org I work for has banned genAI tools for projects above a certain level of confidentiality. Guess what? Firefox is banned as well and probably stays banned regardless of any kill switch.
@sebastian which feature resulted in the ban? Given that you can access eg chatgpt in any browser, shouldn't your company ban all browsers?
@jaffathecake @sebastian this is the sort of posting that sartre wrote about. never believe that corporate marketers are completely unaware of the absurdity of their replies.
@jaffathecake ChatGPT (and many other web based things) are firewalled.
Also you are looking at a compliance issue from a technical viewpoint. As the implications of genAI generated content wrt. copyright and things like patent applications are still somewhat unclear in many jurisdictions, the simplest solution is to stay well clear of any tool that claims to do anything "AI".
If the contract with the customer says "no AI because it exposes us to legal risks", then the work has to be done in a clean environment where there is nothing that could be considered AI.
I appear to have discovered my new favourite epithet for tech billionaires.
The bony-eared assfish has been distinguished, by some sources, as having the smallest brain-to-body weight ratio of any vertebrate.
This looks promising!
To all of the people worried about being 'left behind' by AI and who believe that AI is as transformational as the web:
I was an early adopter of the web. I had a 9,600 modem. I wrote web pages that had graceful fallback for browsers that predated the img tag. I dealt with the differences between Netscape Navigator 2.0 and Internet Explorer 2.0. I was even guilty of writing web pages that were 'Best viewed in Netscape Navigator 2.0'. I learned about the 10-10-10 rule (a web page should load in 10 seconds on a 14.4 Kb/s MODEM, have all of the key details visible on a 10" monitor, and have no more than 10 points per page).
A lot of the best web developers never experienced this era. We're almost at the point where the majority of them were not even born while I was doing this. And yet, being very late to the party hasn't been a problem for them.
If anything coming out of the bubble is as transformative as the web, you have plenty of time to catch up once it's stabilised into something of long-term value.
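As an aside, the 10-second, 14.4 Kb/s part of that rule pins down a concrete page-weight budget. A quick back-of-the-envelope check:

```python
# Page budget implied by the 10-10-10 rule, ignoring modem overhead,
# compression, and latency: 14.4 kilobits/second sustained for 10 seconds.
bits = 14_400 * 10            # 144,000 bits
kilobytes = bits / 8 / 1000   # 18.0 KB for the whole page, images included
print(kilobytes)
```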
indeed, my personal home page has never progressed much beyond that era...
@simon_brooke @david_chisnall This is a work of beauty.
I never stopped using evolutionary computation. I'm even weirder and use coevolutionary algorithms. Unlike EC, the latter have a bad reputation as being difficult to apply, but if you know what you're doing (e.g. by reading my publications 😉) they're quite powerful in certain application areas. I've successfully applied them to designing resilient physical systems, discovering novel game-playing strategies, and driving online tutoring systems, among other areas. They can inform more conventional multi-objective optimization.
I started to put up notes about (my way of conceiving) coevolutionary algorithms on my web site, here. I stopped because it's a ton of work and nobody reads these as far as I can tell. Sound off if you read anything there!
Many challenging problems are not easily "vectorized" or "numericized", but might have straightforward representations in discrete data structures. Combinatorial optimization problems can fall under this umbrella. Techniques that work directly with those representations can be orders of magnitude faster/smaller/cheaper than techniques requiring another layer of representation (natural language for LLMs, vectors of real values for neural networks). Sure, given enough time and resources clever people can work out a good numerical re-representation that allows a deep neural network to solve a problem, or prompt engineer an LLM. But why whack at your problem with a hammer when you have a precision instrument?
#AI #GenAI #GenerativeAI #LLMs #EvolutionaryComputation #GeneticAlgorithms #GeneticProgramming #EvolutionaryAlgorithms #CoevolutionaryAlgorithms #Cooptimization #CombinatorialOptimization #optimization
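To make the "work directly with the discrete representation" point concrete, here's a toy sketch: a plain genetic algorithm on bitstrings, with one-max standing in for a real combinatorial objective. This is generic EC, not the coevolutionary setups from the post (those involve interacting populations), and every parameter choice here is arbitrary:

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 64, 50, 200
MUT_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    # One-max: count the 1s. A real application would score a candidate
    # schedule, circuit, or game strategy directly.
    return sum(genome)

def tournament(pop, k=3):
    # Select the fittest of k randomly drawn individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # One-point crossover on the raw bitstring.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [bit ^ (random.random() < MUT_RATE) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(g) for g in pop))
```

No embedding into vectors, no tokenization: variation and selection act on the combinatorial object itself.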
#AI #GenAI #GenerativeAI #AISlop #NoAI #Microsoft #Copilot #MicrosoftOffice #LibreOffice #foss
@jbz holy shit I thought you were joking. They've cashed in and binned 30+ years of brand identity and familiarity to sell copilot. That's bonkers
@jamesravey @jbz Copilot is just a shiny layer on top of ChatGPT (quite literally). Microsoft has already committed to paying OpenAI $13 billion and their ownership stake in OpenAI is estimated to be worth $135 billion. I think that they just really *REALLY* want to make Copilot work out for them.
@joe @jamesravey they can also claim Office profits as AI profits, sidestepping the annoying problem of Copilot bleeding money all across MS products.
@jbz @joe @jamesravey destroying a huge external brand to solve short-term internal politics
@davidgerard @jbz @joe @jamesravey Which is good tho. I want to see them die. I want to see Office be destroyed forever.
RE: https://mas.to/@carnage4life/115832534415373032
keeping up to date with "AI" skills is extremely easy actually, the most sophisticated setup i've seen is just a bunch of markdown files begging the necrotic grooves left in the statistical skin of language to pretend like they are different character archetypes wrapped by a(n assemblage of) IDE and CLI tools.
the "skill issue" FOMO trap is just a trap and the question was never "whether they work or not" but rather "how they reconfigure the dynamics of informational power by dramatically consolidating it in the owners of informational capital."
Jaana was a distinguished engineer at GitHub and is now a principal engineer at Google. I expect to see more testimonials from accomplished software engineers in 2026 about how AI agents are making them more productive.
Engineers posting about how AI tools don’t work will increasingly look more like a skill issue than a problem with AI tools.
so it's clear this is not goalpost shifting, i am on the record on the Good Blockchain (git) arguing that whether or not the "AI" works or not is irrelevant as of May 2023: https://github.com/sneakers-the-rat/surveillance-graphs/blame/main/_sections/_ideology/vulgarld.md#L17
I regularly work in parallel by making good faith efforts to have the latest models (and the fancy fuckin multi-agent setup with whatever the most hyped markdown lore is on the cursor forums and git tag rankings) do what I do, and the answer remains the same: if i already know what i want to do and have a well worked out spec, and most of the surrounding code is already well scaffolded, then it can do most of the raw work writing the code on the page, but the places where it fails to adapt to local idiom, or when there is something that isn't already pretty much completely done, the process of reading, undoing, and redoing shit code is always more work than just writing the code. Because unless you're totally outside your element in a new language or framework writing the code is not the hard part, and if you are outside your element that's when you specifically become unable to evaluate the LLM output.
If you've read any of these fancy markdown commands in any "framework" or in any live repository, you would know that prompt engineering is not a thing and the only stable strategy is repeated begging, because there is a fundamental fucking god-blessed information theory-guaranteed injective relationship between the prompt and output domains where you can't have enough intuition about the slot machine to use it reliably in practice.
i have burned thousands of hours training, fine tuning, and direct prompting language models, and there is a subtle indescribable art in understanding what they "know" to fashion inputs accordingly, but the combinatorics of language make it absolutely laughable for the sweet spot of "human brain can gain unconscious intuition" to lie exactly in the ungodly chasms of these brazillion parameter models and their outputs.
the hard parts of code are fucking TALKING TO PEOPLE and the LLMs make that problem A MILLION TIMES WORSE
when you get near the buzzing hive sound of an LLM-driven project you'll know that half the words you see are completely not real and you need to now navigate around like 10 god bots saying incomprehensible nonsense wrapped in overformatted markdown, and the remaining servile human thralls are totally beyond communication by conversation. it's a dead space, a depeopled space, an exclusion zone, a radioactive cesspit soaked in rocket fuel and run on dogshit.
the old way of "approach the throne of the benevolent dictator" sucked, but i never for one second thought that people would flock to "approach the hall of mirrors where everything is real, nothing matters, nobody is in charge, and the firmament mocks you in an always more distant echo" as an alternative.
The problem with "AI" is that it's fundamentally anti-human and that translates very neatly into the most toxic social spaces you can imagine.
I can count a dozen conversations with people in very different domains where half the team is high on AI that are like "we don't actually know what is going on at all over there anymore, we say that there are problems and then they allegedly solve 10 different ones but never actually address anything we say." Like it turns out leaving the realm of human care entirely has some consequences and those consequences look a lot like "rupturing the social fabric needed to sustain any cooperatively shared pursuit"
@jonny ☝️💯 yep, this. Concluded.
It is very telling that people who have a hard time talking to people find these tools appealing.
@peteriskrisjanis i've now seen FOSS projects struggle around AI tools used by contributors when the contributions are tiny, which is the largest scale collaboration with AI i've seen. as always, i would sincerely love to be linked to a well functioning ai-centric collaboration to be proven wrong and to see how that works. by far the most common success story from amateur to professional AI-centric coding is the sole genius story - and particularly i'll be waiting on tenterhooks to see the first sole genius -> collaborative maintenance story, should one ever happen.
@jonny the confabulation machine promoters always forget to mention that the person randomly mentioning the great results of the specific confabulation machine is, by complete happenstance, a person who is the principal engineer tasked with development of said confabulation machine or a person tasked with advertising that autoconfabulator.
i've yet to see stories that aren't promotional materials.
There's a lot of talk in fedi about giving advice when it's not asked for
I think this is a golden rule situation. I think people give what they would like to receive, and don't understand that other people might want to receive something different
People who like receiving unsolicited advice, are generously giving the same thing to others
And people who don't like receiving it, are baffled by why someone would do something so unkind
At least among folks I know, the latter are much more vocal
There doesn't seem to be much recognition that people might differ in this area
I'm wondering what the percentages actually are
Please boost for reach!
--
When you post your frustration about a problem you are having, but you don't explicitly ask for suggestions...
| I like to receive suggestions, if they are reasonably relevant: | 3 |
| I definitely don't want to hear suggestions: | 0 |
| Some third thing, in the comments: | 1 |
@NilaJones third mixed with second - my 'frustration' is a slice-of-life posted for amateur anthropological amusement, and also as such cannot possibly contain more than a fragmentary context, rendering all 'help' off topic except from those who know me far too well to offer it.
@NilaJones (I am still trying, after thirty years of this, to remember that apparently slice-of-life that amuses me isn't to anyone else's enjoyment and that I should stfu if I don't want to play their way. But. At least when I stick to clear questions or fluff I'm not mistaken in my meaning.)
@feonixrift @NilaJones I always appreciate your posts and the opportunity to see your life over your shoulder (for what it's worth). xo
Someone asked for advice and I suggested an alternative option. They responded very rudely, so I blocked them immediately.
@lionelb @NilaJones There's a fine line between suggesting an alternative and, in effect, telling them their question is wrong. I'm not saying you did this, but for the sake of discussion, a very typical tech conversation goes:
Post: "I need to do X function in Y program on Z operating system."
Reply: "No you don't. That's a bad way to use Y, which is why X was removed last version, and you shouldn't use Z at all."
@lionelb @NilaJones It's possible for that reply to ultimately constitute good advice, as in, "There's a better way to do that. I could show you if you're interested." But it mostly comes off as, "You're a stupid noob who doesn't know what you're doing."
@OrionKidder @lionelb @NilaJones I've started referring to this as a Y-X solution. People will assume there's an X-Y problem, not ask for details to verify, and give a solution that is wildly off base.
It's greeeeat when it's a lot of people cluttering up a thread with Y-X responses.
@NegativeK @lionelb @NilaJones Yup, it can destroy the ability of the poster to even get a response because a whole conversation ensues that's effectively about how the question was "wrong."
The flip side of this is a flurry of questions about your system that have nothing to do with your problem, so you spend two days posting system reports, and then they never even offer any advice, even though you asked for it. [grumble]
@OrionKidder @NegativeK @lionelb @NilaJones I find it shocking how often people tell me in the Fediverse that my question is wrong. It feels like it happens more often here than any other space I've been in in my life, online or offline.
I cannot imagine the hubris required to do this.
@ShaulaEvans @NegativeK @lionelb @NilaJones That's awful. I haven't found that as much here nearly as much as old-school discussion boards. That said, I "read" as a white man, so that's a factor.
I said previously: "No tech from blockchain dates later than 2001. Crypto added nothing." https://circumstances.run/@davidgerard/115810916143765604
Both here and on bsky, I got crypto fans saying "ah ah ah zero knowledge proofs!" These are indeed used in Monero (money laundering coin) and assorted failed money laundering coins!
However, claiming *credit* to cryptocurrency for ZKPs seems implausible.
I'm actually looking into this and I can't find shit that isn't proposals for other failed money laundering coins.
Closest a proponent got was, well, cryptocurrency *funded* it. (No examples, just the bare statement.) EDIT: found! apparently some research funding from 0xPARC.org
So I'm asking those claiming cryptocurrency drove ZK proof research and uses: which papers, with which effects, and which resulting products, are you thinking of? With checkable details. Thanks.
EDIT: yes, I'm quite aware of the history of ZKPs thanks. I'm asking specifically about the claim by cryptocurrency fans.
Goldwasser and Micali won the Turing award in 2013 for their work on ZK: https://news.mit.edu/2013/goldwasser-and-micali-win-turing-award-0313
Goldwasser and Micali began collaborating as graduate students at the University of California at Berkeley in 1980 while working with Professor Manuel Blum, who received his bachelor’s, master’s and PhD degrees at MIT — and received the Turing Award in 1995. Blum would be the thesis advisor for both of them. While toying around with the idea of how to securely play a game of poker over the phone, they devised a scheme for encrypting and ensuring the security of single bits of data. From there, Goldwasser and Micali proved that their scheme could be scaled up to tackle much more complex problems, such as communications protocols and Internet transactions.
All this was being done long before cryptocurrency. Cryptocurrency fans claiming credit for ZKP is absurd.
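To ground what's being argued over: zero-knowledge protocols are old, simple cryptography. Here's a toy sketch of Schnorr's identification protocol (published in 1989, building directly on the Goldwasser–Micali–Rackoff line of work), in which a prover demonstrates knowledge of a discrete log without revealing it. The parameters are deliberately tiny; real uses need groups of roughly 256-bit order:

```python
import secrets

# Toy group: p = 2q + 1 with q prime; g generates the order-q subgroup of Z_p*.
p, q, g = 23, 11, 2

x = secrets.randbelow(q)        # prover's secret key
y = pow(g, x, p)                # public key y = g^x mod p

# 1. Commit: prover picks a random nonce r and sends t = g^r.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier sends a random c.
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c*x mod q.
s = (r + c * x) % q

# 4. Verify: g^s == t * y^c (mod p), since g^(r + cx) = g^r * (g^x)^c.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

None of this required, or came from, cryptocurrency.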
@abucci now you know that and i know that, also I went to that article looking for something that wasn't "Monero" and "failed Monero"
@davidgerard Zero knowledge proofs are in Chapter 5 of Applied Cryptography, which was published in 1996.
@davidgerard Tangent: I had the impression that cryptocurrency provided something of a jobs program for cryptographers that wasn't university research, industrial conglomerates or international espionage.
In the same way that, say, Theranos provided jobs for a dozen or two biochemists or whoever.
The results are something else.
@mnordhoff it also caused said cryptographers to decay into coiners
there really isn't a lot of cryptographic meat on the non-privacy coins, and Monero has enough to get by to the point where all your actual problems are (b) side-channel attacks (the low volume means Monero users have been caught via volumes moved to get/sell the Monero) but mostly (a) screwing up, because crypto users near universally have the opsec of a rock. A dumb rock.
Dispatch #26: General Purpose is an Accountability Loophole
If they don't say what it's for, we can't say if it works.
Video: https://www.youtube.com/watch?v=_zs6BXdesaw
Podcast: https://pnc.st/s/faster-and-worse/e884f2ff/general-purpose-is-an-accountability-loophole
Because a calculator is a device made for a specific purpose, our trust in it would be immediately corrupted if it were to give an incorrect answer. It would be useless.
If you take away the specific purpose you also take away the criteria to determine if something works or not.
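The argument can be restated in code: a tool with a stated purpose admits a falsifiable test, and a "general purpose" system doesn't. A minimal sketch:

```python
# A calculator has a spec, so every output is checkable against it.
def add(a: int, b: int) -> int:
    return a + b

# Known inputs have known expected outputs; one failure destroys trust.
assert add(2, 2) == 4
assert add(-1, 5) == 4

# For a "general purpose" generator there is no spec to assert against:
# no predicate works(prompt, output) can even be written down.
```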
I followed some of the same logic that your post and video do! (though I was taking it in a different direction than critiquing the claims of general purpose ability)
How do you cope with the anxieties of living in Interesting Times?
(not a rhetorical question. Feel free to reply with what you do.)
@datarama I read books.
@kejster I think one of the things I've done that has given me a small but consistent improvement of life quality is that I always carry a book with me for my bus trip to and from work. I never use my phone while sitting on the bus, I read a book. This gives me two guaranteed half-hour time slots for reading every day. I also try to read before I go to bed, instead of doomscrolling.
@datarama nice. I’ll try to do something similar this year.
I read too much on my phone, but since we can no longer tell which articles are slop, I think it’s better to just carry a book all the time.
@datarama starting the year with this.
@kejster Nice. I know next to nothing about photography. My brother is an *excellent* photographer, I never learned to do it very well.
But late last year I started taking a weekly evening school drawing class, and that has been great. I recently drew this pencil portrait of my pet lizard's adorable face:
@datarama I have stopped compulsively doing the rounds of our news media. In fact I have unplugged entirely from that vector of so called information - as I have with Twitter and Facebook some time back. I find that I am somehow not out of touch. Also I have renewed my focus on what makes sense to me - family, friends, games, music - that sort of stuff. I think this is the way for me.
@fideldonson That sounds very healthy.
The pandemic mostly destroyed my social life - I lost contact with many people in the 1½ years I had to spend in near-total isolation, and my social skills took a turn for the worse.
I never had Twitter and I finally got rid of the last Meta product I was using (Messenger) a couple years ago, but I do still read news daily (but only the newsfeed of a public-service TV and radio station, which tends to be very low on sensationalism) - I *also* frequent a tech link aggregator (lobsters, not the orange site - I cannot stand the orange site), but I'm thinking I should cut down my usage of that a bit too, since these days tech news mostly just make me depressed.
Actually I like my job (software dev) - but I don't like what most of my field seems to be turning itself into.
@datarama software dev here too. We set up a board game club at work - it's quite casual but I can recommend it. It was a good way for me to expand my social circle.
@fideldonson I work in a small department (7 people) and we all know each other well.
But there's a board game café very near where I live, I should go some day.
@datarama In general, I moved to rural Ireland and have a pretty good resilience/offgrid setup.
I'm also very picky about social media: nothing with an algorithm. Most of the stuff I read news-wise is RSS off major sites, aggregated locally
@EI3JDB I live in the middle of a city ... but since this is Denmark, it's smaller than what would be considered a "town" in many other places. I've generally loved living here; I grew up in a small rural community and didn't like it at all.
(Like most city dwellers, I'm screwed if/when the grid goes. Though I have an alcohol burner for emergency cooking, an emergency radio and power bank, and some other essentials for short-term emergencies.)
@datarama City dwellers will be prioritised by the relief efforts, so short-term is all you need. Living rural is better for prepping but also makes it necessary.
@EI3JDB I remember a couple of years ago, when the Danish government started making recommendations for prepping.
Most of the suggestions were things that most Swedes and Norwegians (who have a much lower population density) have been doing for decades already - not for disaster prepping, but simply because sparse-population life is like that.
As far as coping with "interesting times", all of the above keeps me busy. Having a child and consciously delighting in his upbringing has brought me a lot of joy and steeled my resolve.
But I believe there are two things that have helped me make sense of things, which I feel compelled to do: (1) reading and thinking about the history of computer science more closely; (2) reading and thinking about (non-analytic) philosophy. The wild claims about what LLMs can and can't do are straightforwardly exploded, from a technical perspective anyway, if you engage in (1); and I find the politics or lack thereof of today make more sense if you engage in (2). There are stories to tell about what's happening, and throughlines from the past to the future, that might not be obvious otherwise. One thing that (2) makes very clear is that there are many distinct and sometimes competing stories one might tell, and no one story is fully comprehensive. I find that's an important fact to be reminded of, given how the worldview being forced upon us in the present moment is totalizing.
@abucci @datarama There is a lot of talk about ham for preppers, but I'd say no. Not that I have anything against ham radio (that EI3JDB is my Irish call) but I don't think it has much value for resilience. After Storm Éowyn last year the only radio that proved useful was PMR446 (similar to FRS in USA) which replaced mobile phones as a way to talk within our smallholding.
HF worked, but there were no blue-light comms; we could call for help, but it couldn't come along roads closed by trees.
Not sure why this dross, dated Dec 1, seems to be circulating now (and why it didn't cross my feed a month ago), but wow what a terrible essay.
https://bigthink.com/the-present/the-rise-of-ai-denialism/
A few comments, in a short 🧵>>
First, it displays a certain kind of intellectual laziness of a type I've seen before: Arguing against "pundits", "influencers" and "voices", without naming a single person whose specific arguments the author is arguing against.
>>
This allows Rosenberg to caricature the position he is refuting while making it hard for the reader to dispel the caricature, since the original is not pointed to.
>>
Second, he cites Ayn fucking Rand with (apparently) a straight face.
>>
He also conflates the form of language (and art) with the actual thing: "produce content", as if output for which no one has accountability (and which represents no one's communicative intent) has any value in the world.
>>
Experience is central -- no art has any "qualitative value" without experience. Now, people can attribute meaning to synthetic images, but that is also an experience. But as UW's Gabriel Solis once put it so well: writing, art, performance -- these are ways of being human *together*.
>>
The words, the pixels, the sound waves aren't the art. The art is in the experience of the artist and the audience, together.
>>
Meanwhile, Rosenberg also consistently displaces accountability: "AI" hasn't done anything. Companies have amassed large collections of stolen art and used them to produce systems with which they and others can create synthetic images.
>>
Other companies, who would have previously hired artists, are taking advantage of cheaply produced synthetic images which are only cheap because they are based on stolen art + heavily subsidized by venture capitalists sustaining this bubble.
>>
And also unwittingly subsidized by ordinary people paying higher electricity prices when data centers move in:
>>
Meanwhile, note the logical leaps here: People creating art based on their experience of others' art is not equivalent to what happens with the large image models munging pixels together. And what is the evidence that "AI" systems will have "sparks of inspiration"?
>>
And finally, this argument about "denial" is really an argument for defeatism (in everyone else, Rosenberg seems to be welcoming the future he paints).
>>
If Rosenberg had actually cited the people he's arguing against, and made it possible to click through and see their words, I'm pretty sure what you'd find is not denialism, but resistance.
It is important to remember that the future is not yet written. The arts, journalism, science, education, medicine, and our chances to be human together are all worth fighting for --- and not surrendering to the maw of Big Tech.
/fin
"This is not an AI bubble. This is real."
Rosenberg has drunk the Kool-Aid.
He thinks hallucinations are reality.
He is going to have a bad trip.
@emilymbender thanks for the thread. The idea that AI may have an inner motivation is - among other claims - very strange. And it fits in with so many other misrepresentations of what AI is by other AI disciples, like Sam Altman. Their claims have all become more and more exaggerated over time, while falling short of the promises. So it seems to me that they are all desperate to keep the scheme going in the hope it never fails.
@prefec2
The evidence, such as it may exist, for self-motivation in AI finds potential for acts of self-preservation that are disquieting at best and inconsistent with the pretense that guardrails are in place.
@emilymbender as offensive as mansplaining is in the ICT world, I genuinely do hope you don’t move on from this place. My own visceral aversion to being ‘splained keeps me at times from sticking up my head in many of these exchanges but in many ways I find this a more comfortable place to be than most.
I'm glad you find it more comfortable. My experience is that Mastodon is far 'splainier than anything else.
This seems to be the trade-off in Mastodon being a more conversational place.
Those who are used to just blasting out a message on other platforms can find the engagement wearisome, especially when it’s more like sealioning or ‘splaining.
On the other hand, there are some genuinely interesting conversations and diverse perspectives, and sometimes the sense that you might suggest a different way of thinking.
@AlsoPaisleyCat I have no issue with conversation -- it's being talked at by people who don't have even the curiosity to read the full thread they are responding to.
I'm not sure if it's the instance I'm on but I don't see much mansplaining. I often see people I follow complaining about things, but I never see the actual posts, so it's a bit confusing. So either there are good mods on this instance or I'm part of the problem I guess 🙂
Sounds like my kind of content though so going to start backreading!
That sounds like the bots/troll-farms that I saw on Twitter.
It may be annoying, but the fact that they're being aimed at you is good evidence that you're having a positive effect that they don't want. :D
If you didn't have a good message, and your messages were not being taken seriously, then they wouldn't bother you, as you'd be irrelevant. :D
Luck to you, lassy! :D
No, it's not bots or troll farms -- I think you're imagining something different from what I see in my mentions.
But you speak with apparent confidence about my experience, while addressing me as "lassy". Perhaps you don't recognize mansplaining when you see it....
Sorry about that.
I'm Scottish, and "Luck to you, lassy!" is used as a colloquial phrase of feminist solidarity, and was meant as a compliment to your skill.
My apologies for the unintended offence.
As for the bots/troll-farms, I'm only talking about the patterns that I have seen used, as a way of diluting the impact of conversations.
On Twitter there was a lot of automation, but here it seems to be manual-systems/personal-trolling.
@emilymbender We call it slop because it’s slop. There is no added value, just discounted production of inferior-quality knock-offs. I’ll make a prediction: people will start to die because organisations have relied on “AI” to build critical systems without sufficient checking and testing by humans.
@emilymbender Anyone who uses Ayn (asshole) Rand as a justification for anything, should be shot out of a cannon into an alligator-infested swamp.
@emilymbender It's a bit of a desperate article isn't it?
The Ayn Rand... whooo boy, says it all really.
Who are these articles written for I wonder.
@emilymbender Ugh, yes. I hear this shite in my line of work more and more now e.g. it should be part of our professional training because 'it's going to be an essential part of [our field's] working life'.
And I guess it will be if we roll over and shoe-horn it into the curriculum, yes
@emilymbender
Yes! The future is in-person. The future is human. The future is crafted. The future is community. The future is potlucks. The future is nature. The future is soulful. The future is messy.
There's no reason to accept a soulless, online-only future. And there's certainly no way human consciousness will ever be digitised - we are our bodies. No computer, no matter how complex, will ever be human. Only humans can be human.
@emilymbender
It's sad that the author does this blend of silliness while raising some important points at the same time, but I guess that's how the AI hype works. The risks of AI being applied to so many aspects of our lives, the manipulation possibilities, and the 'identity crisis' (against which I have my own arguments) are mentioned, but then it feels like he's humanizing AI and obfuscating the companies, as the public has been brainwashed to do for decades... Apparently he's an advocate for regulation, so why promote this stereotyped thinking (well, he's a CEO, nvm...)? The whole article is very misleading in this sense. If he's worried about the huge amount of behavioral data that can be gathered for profit, why not mention who's doing the gathering and profiting? Also, I personally find it funny when men feel hurt by cognitive skills being better in software. Like, yeah, bro, calculators have beaten us at arithmetic a long time ago, and most of us are glad they did. You could have gotten over that by now. The fake antagonism of humans against machines is so tiresome. Thanks for this interesting thread
I apologize for this brain dump. It's just how I think about AI for my work.
I have my own interpretation of why AI can't produce 'art' or be 'creative', and perhaps it's complementary to the human model, where the artist fine-tunes their communication of experience to an audience: they can't know exactly how it will be consumed, but they can empathise about what experiential filters apply - something which, so far as I know, an AI couldn't do!
A generative AI, as far as I understand it, is merely directly derivative, driven by stolen 'pixels' in what would be virtually infinite combinations but for the simplifying constraints of pattern recognition and its metadata.
I suspect (or at least personally think that) human creativity has a couple of interesting and closely related creativity capabilities. I have no academic basis for any of this, it's just my way of making some sense of it.
Over time, a human moves slowly from having very limited raw sensory input; warmth, sound, light, &c. in the womb to post birth, where the senses pick up more detail. That sensory data is all we have, that's it! Everything thereafter is a house of cards. Light resolves itself into the conception of 'edge'. Different edges resolve to 'shape' and 'orientation' and different shapes together build to 'complex shapes'. Shapes, colour, texture, smell, etc resolve to objects. Much further down the track certain sound combinations can be associated with object conceptions and so the development of naming & abstraction begins. The house of cards starts to strengthen its foundations with experience. The longer this goes on, the firmer the base gets and the greater the complexity of the abstractions. Abstractions build on more abstractions, until we have complex views, like opinion, judgment, teenage moods, bigotry, religion, &c.
The second related idea I think is that our abstractions aren't rigidly tied to a particular input stack and our creativity could conceivably be related to the human capability to bridge between dramatically different experiences that have a pattern with some commonality.
The idea of an animal's eyes in the beam of headlights resolving into a ubiquitous design for a road safety device was an analogical leap.
Seeing The Fighting Temeraire being towed to the scrapyard led Turner to paint an oil painting that captured subject, mood, light, &c. and communicated effectively the pathos and beauty of the circumstance.
Abstraction allows us to find that commonality and use it as a derivative for a new idea. I don't think AI can do that. It could make something emulating Turner's art, but it couldn't itself recognize the opportunity from scratch, or the best means, or anticipate the character of the consumer.
Just to reiterate, this is a story, it's not fact or truth, it's just a speculative model I've constructed that hangs together for me. I'm just sharing it to see if it resonates with anyone else.
@emilymbender the "AI" narrators for blogs, articles, and even books really get to me as well.
While bad, at least the Stocking Frames initially hit one sector and a couple adjacent ones.
"AI" is Stocking Frames on both steroids and growth hormones, attempting to wipe out multiple sectors.
Glad to have you and others leading the charge for resistance.
@emilymbender Yeah, my nephew is a talented graphic artist who has built a career on doing book covers, etc., but I wonder how he's coping now that AI is taking over. Publishers will no doubt save a buck with AI slop if they can.
@emilymbender Not pointed out frequently enough! "Art" can be shorthand for "a work of art", emphasis on "work". Art requires the work that goes into the "content".
On that note, another thing that grinds my gears is when the word "content" is used to obfuscate when someone really means "copy". If the goal of producing text is to persuade (as opposed to connecting, something impossible with LLMs), then the thing is copy, not content.
@emilymbender It's hard to explain this to people who never experienced art in their lives (most AI enthusiasts)
@emilymbender This statement seems similar to Searle's Chinese Room thought experiment, substituting "experience" for Searle's "understanding". (I'm neither agreeing nor disagreeing, but trying to understand; Searle's thesis, like so many philosophical discussions, seems to depend on the definitions of terms; and I find myself unsure whether I agree or disagree with Searle.)
@emilymbender Decades of under-funding of arts education. A constant focus on the end result, not the journey that led to it.
As sad as I am about the general ignorance about what makes art art, I can't say I'm surprised.
When I type, I am limited by the keys on the keyboard and I am biased towards symbols that represent letters, then punctuation and digits. The number of space characters is high. In contrast, if I cat /dev/random to a file, the output will be far more varied (it will contain a roughly even distribution of all possible byte values) and much faster than I can type. If speed and variety are the things to optimise for, I have a machine that can generate such content far faster than any LLM.
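To make the point concrete, here's a minimal Python sketch (the sample sentence is an arbitrary stand-in for typed text, and os.urandom stands in for catting /dev/random):

```python
import os
from collections import Counter

# Arbitrary stand-in for typed English text: a handful of letter codes
# plus the space character, repeated.
typed = b"the quick brown fox jumps over the lazy dog " * 100

# The same number of bytes of OS randomness, playing the role of /dev/random.
random_bytes = os.urandom(len(typed))

def distinct_byte_values(data: bytes) -> int:
    """Count how many of the 256 possible byte values actually occur."""
    return len(Counter(data))

print(distinct_byte_values(typed))         # a few dozen, heavily skewed
print(distinct_byte_values(random_bytes))  # almost certainly all 256
```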
Does he really not understand what circular reasoning is? So many of these booster articles seem to be written in a way that suggests their authors have language comprehension challenges.
@emilymbender The part about this way of thinking that offends me as a musician and an artist is that it assumes the purpose of art is to create aesthetically "beautiful" things or output.
The purpose of art is to communicate the perspective of the artist, and machines do not and cannot have perspective or communicate.
I will concede that some people may believe they find value or meaning in machine generated output, but some people find patterns in random strings of numbers, too.
@emilymbender "we simply don’t know whether AI will ever experience intentions through an inner sense of self the way humans do" is such a ridiculous sentence.
@emilymbender
Rosenberg seems full of pompous crap; thanks for reading & responding to his crappy article so we don’t have to. 🙏🏻 Like others in this thread I too wonder who he’s writing for. 😐 No one we know, that’s for sure.
@emilymbender this thread overall and this post in particular will require some deep thinking. So thank you.
@emilymbender what's missing? Intuition. Insight. Inspiration.
https://berryvilleiml.com/2025/02/26/revisiting-letter-spirit-thirty-years-later/
@emilymbender I was trying to see his perspective until that happened. Then I laughed at him and myself and turned off my phone. 🤣
@emilymbender another aspect of this is it allows him to position the "origin" of the strawman argument he is constructing as a reaction to the apparent quality of the system outputs instead of the fundamental ethical, environmental, and commercial criticisms of LLMs in general.
"By any objective measure, AI continues to improve at a stunning pace."
No one ever said "The iPhone is improving all the time; any moment now everyone will want one".
I mean... WHEN will we get there? This "it's getting better" line is a huge tell for me.
I do think there is a danger of just hoping AI goes away; it won't. We will need regulations and to think about its place.
When you bring that up, this kind of guy says "nooo no... it will get better. Please no!"
@emilymbender
In other words, it's a perfect example of #EveryAccusationIsAConfession 😂 which is typical of all the AI Tech Bros - accuse the others of being "lazy" because you yourself can't come up with a valid argument for your own pro-AI position. i.e. resort to ad hominems to hide your own lack of a valid argument 🙄
@emilymbender Thank you! What do you think about the part that says "It is very likely that AI systems will soon be able to “read you” more accurately than any person could"?
People tell these systems everything.
@kaffeeringe On the one hand, "emotion recognition" is misframed from the start --- the "best" one could hope to do is to match how a person (ideally from the same cultural background) observing the subject would understand a facial expression.
On the other hand, companies will try anything to take advantage of emotionally vulnerable people.
>>
@kaffeeringe I think it was in the book _Careless People_ that I read about Facebook specifically targeting ads to teen girls who had just deleted a selfie, preying on hypothesized (and probably often right) moments of low self-esteem.
@emilymbender Thank you. I read the article when it was published. And I disagreed with most of it. This part seemed to have a point. But you are right: it's not real reading of people but again mimicking and 50 shades of sycophancy.
@emilymbender incredibly weak article, reads almost like a religious document. Kinda indicative of AI zealots.
@emilymbender It's weird to celebrate something so breathlessly while claiming that it has thrown a nation into stages of grief.
Like ... we don't have to do this. Why opt into something that immediately causes us to grieve
@emilymbender had to stop reading when I got here, admittedly not very far.
“the very scary possibility that we humans may soon lose cognitive supremacy to artificial systems”
My internal screaming would not let me continue.
@emilymbender I don’t trust an opinion piece that doesn’t acknowledge and address the weaknesses in its premise. It means the writer is in no way thinking critically.
@emilymbender "Big Think" as the name of an AI shill outlet is just a bit too on the nose, sorry. The showrunners are really getting lazy.
Hivemind:
I'm looking for something like Bitwarden (relatively secure online vault with browser plugins) to keep my online contacts.
Is there anything like that?
@yoginho I've made a quick check. Some didn't seem trustworthy or maintained enough, and the rest were too business-oriented.
If I didn't get your request wrong, ContactBook may help you. It also has a nearly identical interface to Bitwarden
http://contactbook.app
@abucci Google contacts support CardDAV but Proton doesn't. So, I'm already using it, but really want to move my contacts off Gmail...
@abucci Yes. I can access, import, and export contacts with .vcf files. But Proton doesn't seem to allow browser extensions to access its contacts direct via CardDAV, as far as I understand the (rather technical) discussions I've found.
@yoginho Murena is an EU-based company that creates privacy-respecting open source alternatives to Google Android and other services. They give you 1GB email and contacts for free.
You don't have to use their phones to use their services. https://murena.com/workspace/
Thanks! I actually *do* have a Murena email and storage account (I only recently swapped my Fairphone running /e/ with a Pixel running GrapheneOS), and I already have a Proton account as well, but it's not exactly what I'm looking for.
I'd like access to contacts across applications in a dynamic way, like BitWarden handles passwords with Browser extensions. Besides importing and exporting .vcf files, I don't think Proton or Murena support access outside their own domains?
@yoginho i would think that you should be able to use Murena contacts in any app that supports CardDAV, which is many, such as Thunderbird, Kmail, or Evolution.
It says on the site that you can use other apps.
Last I checked, Murena is based on Nextcloud, which uses CardDAV.
Prediction: the notion of sovereignty will be one of the key technopolitical pH tests of 2026. Proposed as a solution to AI's centrifugal accelerationism, technonationalism is actually the desired outcome of current restructuring, and a safe space for Western masculine subjectivity.
@danmcquillan do you anticipate this'll manifest in places like australia and the uk?
@oscarjiminy that technonationalism will manifest?
@danmcquillan yeah. both uk and aus are fixated on the ‘special friendship’ and are all in on embedding ai wherever they might be able to cram it, digital sovereignty seems like an unlikely priority in the immediate term
One of the tasks I use AI for is writing peer reviews during performance review season at work. The median peer review is “I worked with this person on project X and they did Y which led to Z. They did a good job.”
I use AI to both lay out the structure and dig up X, Y & Z from my document, chat & email history.
I obviously have to edit and review the output. That said, as someone who had to write 22 peer reviews this time around, this was a significant time savings for me.
@carnage4life jesus christ
@carnage4life you've just defined your performance review structure as a job that should not be done
generative AI is ideal for jobs where nobody cares about the outcome, but I strongly suspect your targets extremely much care about the outcome
@davidgerard it should surprise you none at all that this particularly "GenAI" booster also works at Meta. And has a long and storied history of aggressively attacking and shitting on anyone who dares question him.
https://techcrunch.com/2008/10/12/the-prickly-prince-strikes-again/
@davidgerard I feel somewhat sorry for people who are deliberately letting themselves be left behind by ignoring a technology that fundamentally reshapes how cognitive tasks are done in the workforce.
Generative AI is ideal for tasks that require generating well understood content that can be judged by a human. Software code, digital art and yes, peer reviews required as part of annual HR performance processes.
@carnage4life @davidgerard I dunno this use case seems like it would be served more simply and cheaply with a search feature and a template? I’m just not getting why using the next word predictor program is the ideal solution here. Yah these systems are neat, but it seems like the praise lauded on them should really just apply to, uh, general purpose computing?
I mean maybe the issue is that good interfaces still don’t exist.
@carnage4life @davidgerard So maybe granted that “generate something random following a probability distribution” is, in fact, a better user interface than anything we’ve bothered to come up with.
@rsdio @carnage4life @davidgerard
another poster offers a different view on perf: giving knowledge to managers so they can choose to help their reports not to sink, where drowning is the default [i took some liberties in reinterpreting what i'm quoting]
https://mastodon.social/@ambiguous0/115804699174339528
but i'll offer what i think it's an enlightening example: some tech bros tried to offer AI ministry to churches
i hope it's clear how that's a fucking idiotic thing to do
... 1/3
@carnage4life I did the same. Many ignorant replies here with bad assumptions about peer reviews and performance reviews in general. It's all performative bullshit at best, and at worst...
I had a VP of HR chew me out once because my department wasn't doing reviews properly : the raises were all predetermined and reviews weren't needed for that. They are merely a record of employee defects to be used as a case for termination if needed.
@rsdio @carnage4life @davidgerard
in my view it's equally idiotic to use genai as anything more than a search engine to support one's subjective eval of one's peers, and it brings w/ it the false-negatives problem: it could very well miss and fail to find important stuff
hence the subjective eval should've been the driver
my steel-man was above: hypothetically FB's perf process could be absolutely feckless shite; i'd worry that a genai-caused omission could affect someone
that's best-case
2/3
@rsdio @carnage4life @davidgerard
honestly the use of genai reads to me as an abject abdication of one's leadership responsibilities:
a manager may be concerned w/ collecting HR-actionable ammo, but a leader is concerned w/ developing their team -- and you cannot drive that using genai, it's a category error to even try
the AI ministry tech bros are in my view a clear example as to why you shouldn't try to use AI to drive a peer review
3/3
> I feel somewhat sorry for people who are deliberately letting themselves be left behind by ignoring a technology that fundamentally reshapes how cognitive tasks are done in the workforce.

This sentence is a seven-layer-burrito of wrong. It's quite amazing how nearly every clause is divorced from reality, and then wrapped in a tortilla of condescension.
@abucci @carnage4life @davidgerard I don't think any of us are actually ignoring AI, assuming that's even possible. I think we're all pretty well aware of AI in the same way we're all pretty aware of rabies without deliberately exposing ourselves to it.
The analogy with rabies is also wrong, but in that wrong-but-so-right kind of way.
@abucci @carnage4life @davidgerard Agreed it's a weak analogy. That we choose not to use LLMs is not out of ignorance; in our calculus, the cost and risks are not worth the benefit.
If you train models on your own data that you acquired with true consent and own the software, hardware, and power, water, and cooling requirements - pay for what you use, honor consent, and give credit where it is due _and_ the results are within the tolerance of error for your application/customers/victims and the results are checked and owned by a human, I don't have a lot of problem with LLMs. But that is never what is being promoted. In almost every case credit is stripped, consent is rejected, and every possible environmental, infrastructure, and economic cost is externalized and socialized. I'd love a cheap delicious seven layer burrito but I can't look at the supply chain, working conditions, and ingredients list of this particular burrito and still want to put it in my mouth. I know how this specific chorizo is made; I can't do it.
Personally I'd hesitate to use LLMs in the chatbotty/generative-y ways they tend to be used even in the idealized case you outlined. I agree you can probably eliminate most if not all of the myriad objections raised against say OpenAI and ChatGPT. What I think you can't easily eliminate is the de-skilling, psychological degradation, and other such harms that offloading thought to a machine will bring. Overuse of generative AI is a kind of sedentary lifestyle for the mind.
@abucci @carnage4life @davidgerard A massive brain haemorrhage would also reshape how cognitive tasks are done, but I'm in no great rush to have one.
David, when I get a real income, I am joining your patreon, just for this.
@dogfox @carnage4life 🍻 in the meantime just keep forwarding the links 🙂
@davidgerard @carnage4life
True story: I know a guy who was a manager for a large-ish IT team at a university department. Little over a year ago he was talking about this cool AI tool that he was using to write evals for the folks in his dept. I said something like, "that mebbe doesn't seem fair to those ppl". He dismissed my comment. About 6mos later he was fired. Not saying it was *bc* of his (lazy) use of GenAI. But perhaps that was merely a symptom of his overall approach to the job?
@davidgerard Honestly this is just another case of "we're solving the wrong problem. Again." which generative AI use seems exceptionally good at highlighting.
RE: https://esq.social/@SuffolkLITLab/115820186657607724
The dubious political framing of 'enshittification' cast into the light by celebrating a potential coalition with 'national security hawks'
TL;DR: Cory Doctorow argues for building a 'Post-American Internet' as a response to the failures of US tech monopolies and global anticircumvention laws. He highlights emerging coalitions that include activists and national security hawks, pushing for digital sovereignty and technological self-determination. https://pluralistic.net/2026/01/01/39c3/ #law #tech #legaltech ⚖️ 🤖 #autosum
> My thesis here is that this is an unstoppable coalition. Which is good news! For the first time in decades, victory is in our grasp.

I don't think the "victory" of this so-called "unstoppable" coalition would be good news at all. What he seems to be describing is taking the bougie white internet global. Laissez-faire justice that won't trickle down to anybody.
Both honoured and caffeinated to have made the Brooklyn Coffee Shop reading list of 2025 https://www.instagram.com/p/DS75lPmDdhl/ ☕️☕️☕️ ('Resisting AI' also makes a brief appearance in this reel https://www.instagram.com/p/DM0XtRcx5lN/)
As you Normal Types suffer through your 83rd bout of respiratory misery this year, just a Reminder:
We could've permanently eliminated influenza even with the half-assed measures we took for a little while.
Some entire lineages went extinct, just in that little joke of a precautions window we had.
(pic from @salamonsmd on the TwixBox)
@abucci John Ruskin, the 19th Century Political/Economic Philosopher had this great story about visiting a small village in the Italian Alps.
Every year, the snow would melt and flood out part of their terraced fields, utterly destroying some farmers.
He asked them why they wouldn't put in irrigation and sluiceways to redirect the winter melt and save them all.
The answer: Because the crops of those who were NOT wiped out sold for a higher price. So each farmer family was hoping it was their neighbor who would be obliterated by the melt, and that their own farm would profit.
"The Grand AI Disconnect"
> AI is running only slightly ahead of child molesters in the public imagination.
https://talkingpointsmemo.com/edblog/the-grand-ai-disconnect
I'm sure it'll be fine and there's no chance that the public will turn on everything related to tech and software once the house of cards tumbles down. Great industry to be associated with, etc. etc.
@baldur last year I dumped Microsoft office because of their AI policies and I know plenty of other authors looking to do the same. Nobody wants this shit.
> Of course, perhaps if AI can do all the things its boosters claim and corporate America buys in, maybe it doesn’t matter what anyone thinks? It will generate enough money to keep the politicians happy and winning elections. This vision of what we might call escape velocity is baked into almost all AI boosterism: the escape from the mundane stupidities of public accountability or public oversight.

To put this differently: "if AI can do all the things its boosters claim, the elite strata of (American) society can decouple from the public and start doing whatever they please, accountable to only one another". To me this sounds like a recipe for generating a mass revolt.
@abucci Yeah, it's exactly the sort of thing historians would flag as a precursor to massive unrest.
@baldur I don't think the tech dork/tech worker class has ever before had their heads this far up their asses. And I don't mean executives. I mean just the ordinary workers who are deluded by their managers' insistence that "everyone is doing it" when it's really just their own bubble and everyone outside hates it and hates them for believing it.
@baldur why is everyone so focused on "AI" and not crypto, which literally has no good use case other than crime, gambling, grifters and human trafficking? I can point to dozens of times when LLMs have saved myself, and my team, significant time. I can point to ZERO times crypto has been useful to me for anything at all.
@baldur
The Great Unwashed do not realise we don't have AI yet and all we have is PI - Pseudo Intelligence. Boy are they in for a surprise.
Personally, I’m pretty sure the core skills in software dev are perishable.
This would explain why many senior devs see diminishing returns from LLMs over time. The skills necessary to spot when LLMs fuck up might be deteriorating over time.
I suspect people who lean on LLMs too hard for too long start to lose their coding/syntax and writing/generating skills, while potentially continuing to reinforce their computational thinking and reading/recognizing skills. That might explain some of the observations that LLM use can make people less productive even as they believe they're being more productive.
1/ At bargaining yesterday, @ProPublica management said that they should have 100% discretion to replace workers with AI and would not commit to labeling future AI-generated content.
How to make AI profitable.
1. Create tech that needs massive amounts of high-end hardware (RAM, GPUs, SSDs etc)
2. Pre-order hardware in huge quantities at current pricing
3. Spread stories about hardware shortages caused by AI
4. Wait for hardware prices to spike
5. Receive pre-ordered hardware
6. Sell hardware into retail channels at new high price
Optionally:
7. Wait for prices to crash as supply catches up
8. Buy hardware for your AI projects at new low price using earnings from 6.
RE: https://mas.to/@carnage4life/115803128842716660
the right conclusion would be that the way you're doing performance reviews (and peer reviews) is wrong, that performance reviews (and peer reviews) became, or simply always were, a meaningless ritual, and that it would be better to perhaps consider something else.
the wrong conclusion is “hey, let's extrude some slop”.
@mawhrin LLMs have shown that most of testing in schools is done wrong. This didn't prevent almost all teachers to double down.
@makdaam to their and their pupils' detriment.
(and that even taking into account that the “news” about llms “acing” school tests are rather inaccurate, as marketing materials tend to be.)
@mawhrin that post is a "jesus christ what the" moment
unfortunately, i am in fact able rightly to apprehend etc
@davidgerard @mawhrin Yeah, I would definitely not trust a performance (or other) review where generative "AI" was used to get information about what work I had done or projects I had worked on, because ... surprise! ... it cannot do that task reliably.
Screen shot is from a ChatGPT interaction I had a few weeks ago, asking ChatGPT 5 to give me information about an astronomical database I use regularly in some personal projects. It attributes ownership of the database to "A. M. Jones". I know with certainty that "A. M. Jones", whoever they may be, did not compile the database. I know this with certainty because *I compiled the database*, and I've been updating it regularly for over 15 years.
"Just build more homeless shelters!" is the "just add more highway lanes!" of social policy.
@abucci shelters ≠ housing?
The “let’s use AI responsibly” enthusiast to pseudo-religious “AGI is coming and with it a human-machine fusion” zealot pathway seems to be all too real and all too common.
@baldur Oof. This sort of thing is frightening, but also fascinating. Humans clearly have a huge cognitive blind spot when it comes to language. We just can't help but project a mind onto these stupid text extruders. It seems the more exposure you get, the more vulnerable you are to the illusion.
You see a spider in your house, crawling along the floor. It's average-size, not a monstrous facehugger or anything like that.
What do you do?
Kill it: 8
Trap it and put it outside: 58
Nothing. It's one spider, it's not a big deal: 82
Scream and nearly pass out: 1
I saw this on BlueSky.
I don't want to call anybody out, and it's not really important that one person did this, as this is going to be a general problem going forward.
The increasing popularity of "editing" features in generative AIs is going to lead to a lot of people thinking it is harmless to use it to upscale or contrast-adjust historical media.
The original comic is on the left. The AI "upscale" is on the right.
At a casual glance it looks like the same thing. Then you notice the faces. And the changes in geometry, fonts, and other details.
This is now a false artifact. Expect the internet to become crawling with them. Everything you can think of that some well-meaning person might want to "clean up" before posting, be it comics, news clippings, album art, what have you, is now subject to being replaced with a very similar but entirely different version.
@abucci I can only imagine that image training data is enormously biased towards people smiling. We're always told to smile for photos, and models in advertisements are always smiling. An AI probably just considers a vapid smile the default human expression.
I've tried Claude Code twice in the past month and cannot reconcile the insane results people are reporting with its actual performance.
It's superhuman at specific things (my ability to join two CSVs is capped by my typing speed), but anything that's much more complex than that starts to get into weird territory where I'm typing so much English that it might have been less tiring to simply write the code.
@ludicity Yeah, I've got the same kind of experience with all chat daemons: when paraphrasing docs they're quite good (and even useful, mainly because the GCP documentation is incredibly unsearchable), but for tasks requiring even the lightest thought, they're worse than interns...
@Riduidel Yeah! And they keep like, designing things I don't want designed.
"Generate a dataset from this distribution with these parameters"
Five minutes later, it's like "I've created a 200 line YAML file that you can use to parameterize the script!"
You son of a bitch, I didn't ask you to do that
@ludicity @Riduidel i know of *one* study that bothered actually measuring actual time to do things with LLM coding, and that's the METR study and it's got a low n=16 and is full of caveats
this one: https://arxiv.org/pdf/2507.09089 (PDF)
but I see LLM advocates (a) pooh pooh it utterly for 1000 reasons (b) absolutely never run a single study themselves to even the same degree of rigour
must be a reason for (a) but never (b)
(and METR are AI doomsday nutters and they did a more rigorous study of AI coding than anyone else in the field! c'mon can nobody do a better job than the AI doomsday nutters)
@davidgerard @ludicity @Riduidel A lot of advocates I think are covering it by building out a huge "write spec docs" phase in front, and then when you ask about all the time they're spending writing spec docs, they go "well it's best practice to write gigantic detailed spec docs anyway so if you weren't doing that, that's a flaw in your process and not a time sink."
Which is very not true but *ok*...
@colincornaby @ludicity @Riduidel variants on "it can't be that stupid, you must be prompting it wrong"
@davidgerard @ludicity @Riduidel The other thing I'm realizing is that some places never left the 90s and are still writing giant spec docs, so they see speedups and think it's crazy no one else has.
There are places for spec docs - maybe a giant web API that's being coordinated across teams. But those still don't usually come with that sort of detail that would let you just hand it to an LLM.
@colincornaby @ludicity @Riduidel also, feeding the chatbot a pile of spec documents sounds like the Kai Lentit sketch https://www.youtube.com/watch?v=_2C2CNmK7dQ
@davidgerard @ludicity @Riduidel LLMs are constructed to reject rigor. that's why they're "large" and why they refuse to optimize for anything more specific than one token in front of the other. a death march to mediocrity
For instance, if you heard scurrying in the attic, you'd typically say something like "what's that?". But you could also say "who's that?" and, at least to my ears, it'd be humorous or wry (as if you're suggesting it might be a person or a ghost, or an extremely intelligent rodent, up there). It's also a bit of a "grandfather" thing to say.
Flattening the language so that such nuances are lost is a danger Orwell warned about, among others.
In hindsight I think that was the first time I noticed what has now been a several-decades-long dehumanization process. Referring to machines with people pronouns is the flip side of referring to people with machine pronouns. Linguistically we lose the distinction between organism and mechanism, which sets the stage for further dehumanization. I think it's safe to say that the vast majority of us will not benefit from such a transformation.
From a talk by Sherry Turkle:
> A self over the past 20 years that’s become starved of the give and take of conversation, that hasn’t learned to tolerate vulnerability and respect the vulnerability of others, is primed to look to technology for simpler fare.

> We treated programs as though they were people, but now we are trained to treat people as though they were programs. To me that’s the moment to mark. It’s not whether or not chatbots are fascinating, or smart, or pass the Turing test. It’s what it’s doing to our treating people as if they were only machines.
Fediverse heads up!
Try not to put hashtags in the middle of sentences, maybe one or two hashtags, but not more than that
Reasoning: Screen readers.
For you, that's just a small character and some colored text, for a blind person using a screen reader, that is the word "HASHTAG" being repeated aloud every 5 syllables
I don't know about you, but I think most people would go insane like that, at least, if you were to put all the hashtags at the bottom, the person using the screen reader can just skip the hashtag portion of the post and that would be an averted crisis
Hashtags also really won't actually improve your post's visibility when you overuse them, Mastodon (and most of the Fediverse as far as I am aware) doesn't actually give you an algorithm boost for your hashtags, because there IS no algorithm, people just decide to follow hashtags
So yeah, you might get a lot of exposure from hashtag cats because everybody loves cats, and they have gone out of their way just to follow that hashtag, but... will there really be anyone wanting to go check for... "hashtag constitution"? or "hashtag corruption"? Unless they are veeeery addicted to bad news and politics, then probably not.
Maybe use something more general, such as "USPOL", or "UKPOL", those are pretty common things to follow and want to keep track of.
@nelson I respectfully disagree. I find about half my interesting content here via hashtags.
However, you can safely put them at the end of your post. Inline has all the problems you mention.
The fediverse software I run (snac) puts the muted hashtags and the control thereof front and center. I fairly fluidly change these as the days go by and I change my mind about what I do and don't want to see. I like this feature quite a bit.
One of my favorite "security challenges" is the "verify your email" one. By this point my email has been verified so many times it should have top secret clearance.
@abucci So much security theater these days, even for really minor stuff that doesn't need it. But Slack is actually important, and its authentication system is insane. I'm sure someone made the decision back in the day that "signing up for a dedicated account is too much friction, anyone should be able to send anyone an invite by email to let them log in!" Seems sensible, until you think through the consequences...
1. Find your workspaces. The tab that used to have Slack open is now a signin form. I never remember the (usually long) name of the workspace I want, so I have to "find" whatever workspaces are associated with my email address and then pick the one I want
2. Validate email address. You have to "enter your email to sign in", and then click the "sign in with email" button, even though you clicked "find workspaces".
3. I'm not a robot. I constantly get captcha challenges.
4. We emailed you a code. They email an OTP code to the email address you used and you have to enter it in
5. Additional authentication required. Now I can pick my workspace from a list. But! You have to click the "authenticate" button
6. Enter your authentication code. This time they text me an authentication code, even though I have a perfectly functional OTP app and nearly all other webapps with 2fa use this
In isolation each step makes a kind of sense. 2. is meant to prevent someone with your email address from figuring out the workspaces you've used. 6. is 2fa. But, this entire flow is outrageous for logging into a webapp. Steps 1-4 are an authorization flow that they're sticking ahead of the authentication flow. That's backwards, and this needless complexity is the result. It also seems to have some needless security holes; for instance, someone who was able to snoop my email could figure out my Slack workspaces, something that could be prevented by having robust authentication first, or by using an OTP app instead of email.
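For contrast, here's a toy Python sketch of the auth-first ordering I'm describing. Every name in it is hypothetical and nothing here is Slack's actual API; it's just the shape of the flow:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    email: str
    password: str    # toy only; a real system stores salted hashes
    totp_code: str   # toy only; a real system verifies against a TOTP app
    workspaces: list = field(default_factory=list)

# Hypothetical user store.
USERS = {
    "me@example.com": User("me@example.com", "hunter2", "314159",
                           workspaces=["acme-corp", "book-club"]),
}

@dataclass
class Session:
    user: User

def sign_in(email: str, password: str, totp_code: str) -> Session:
    # Authentication comes first: credentials plus second factor, up front.
    user = USERS.get(email)
    if user is None or user.password != password or user.totp_code != totp_code:
        raise PermissionError("authentication failed")
    return Session(user)

def list_workspaces(session: Session) -> list:
    # Authorization only after authentication: someone who merely knows
    # (or snooped) an email address never reaches the workspace list.
    return session.user.workspaces

session = sign_in("me@example.com", "hunter2", "314159")
print(list_workspaces(session))  # ['acme-corp', 'book-club']
```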
Anyway sorry about the rant.
@abucci I mean, any excuse to vent about Slack, right? 😉
I have the added PITA of having my Slack workspaces spread across three different email addresses, even though they are all associated with my university! So, each time I set up a new device or something, I go through this whole song and dance three times, which somehow Slack makes work all in one browser. But that means I'm sorta logged into three accounts, but Slack somehow binds that all together into one cookie and one UX, despite not acknowledging that these email accounts are the same person. What a mess!
I can only imagine how much the engineers working on this must hate it, too, but I imagine changing it at this point would be an absolutely Herculean refactoring and migration effort. This is exactly the sort of tech debt that just gets kicked down the road forever...
> I can only imagine how much the engineers working on this must hate it, too, but I imagine changing it at this point would be an absolutely Herculean refactoring and migration effort. This is exactly the sort of tech debt that just gets kicked down the road forever...

My God yes, I can see it. I can hear the meetings about it in my head. I can see the hundreds of 5-year-old Jira tickets. I bet they have internal Slack channels just for griping about it!
Oof that really does sound like a PITA. I don't understand why they didn't have workspaces be something you're authorized to work with once you authenticate. It seems like a flow that would eliminate so much misery. You can still do magic links etc.
Rust with chatbots! (Everyone disliked that.) Klabnik:
> I have also been wondering for a few years if you can build a language without financial backing and a team. I think LLMs might change that equation.
oh boyyyy
@davidgerard It seems to me that it's always the same pattern: I want something, but I don't (want | have the time) to care about the code. So I chuck my ethics out of the window and let's go.
(Especially sad to me: Steve was someone who I always thought of as reliably antifascist and capital-sceptic. Not anymore 😕)
@chris_evelyn @davidgerard: Meanwhile, I got a sense of neophilia from him that could conceivably blind him to certain externalities. It does not entirely surprise me to see him become a slopjockey.
@davidgerard for fun, sure, but otherwise why?
The question behind a programming language is ”what is it for?”
@davidgerard the lesson of Javascript should not be "how can I make a language even worse than the one cranked out by a guy in a week."
AI image generators have just 12 generic templates
how many three-window rooms can one AI make
https://www.youtube.com/watch?v=khysGsyK9Qo&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20251222-ai-image-generators-have-just-12-templates - podcast
time: 6 min 46 sec
https://pivot-to-ai.com/2025/12/22/ai-image-generators-have-just-12-generic-templates/ - blog post
@davidgerard the "philosophical bit" that was in there is common in ai/data analytics papers. Almost everyone who isn't working on pure algorithms reaches to make their work say something deeper about intelligence or humanity, to increase the perceived relevance of their work.
And they missed a good insight that you caught- these models are trained on the internet. The internet is mostly advertising. What they produce has an advertising aesthetic
@davidgerard actually I thought about this more and thought of a possible flaw in their experiment that would lead to them missing some templates. The image generator->chat bot label -> image generator cycle is going to be forced within the guardrails the corporate overlords put on the models, so they won't be generating anything that is NSFW
The answer to the question I thought of after posting - "if its trained on the internet, where's the porn?"
Here's the tl;dr in case it's of interest:
The procedure they're using sounds like an iteration of what may well be a (pseudo)contractive transformation on a high-dimensional Euclidean space. Such things have fixed points--sometimes called attractors--which in this case manifest as specific images that always pop out. This has been known for a long time, and was even popularized in the 1980s. Michael Barnsley came to be known because of his work on iterated function systems (IFS), which he showed produce very cool fractal images such as what's now known as the Barnsley fern. He wrote a popular book about it, Fractals Everywhere. He also tried to turn this into a company called Iterated Systems, claiming that he had technology that could reverse engineer any image to find the coefficients of a very small IFS that would reproduce it, resulting in enormous compression ratios. According to my PhD advisor, it turns out he had a room full of graduate students doing this "automatic" compressing--prefiguring what we see today in some of the "AI" companies that are in fact complicated Mechanical Turks.
Anyway! You can put nearly any image into an IFS, iterate its transformations a bunch of times, and have the exact same fractal image (like the fern above) pop out at the end. You can start with a picture of yourself even, and you'll converge to that same image. The reason has to do with the math of the situation and has little to do with the input image. The same kinds of phenomena can occur when iterating neural networks. Section 3.2, p41 of Simon Levy's 2002 PhD dissertation, "RAAM as an IFS", goes into this for a specific recurrent neural network architecture, RAAM, that can be seen as an early ancestor of the transformer (disclaimer: I worked on this too; you'll see some code I wrote in Levy's dissertation if you poke around). Lots of neat fractals pop out of these things.
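If you want to see this convergence yourself, here's a minimal Python sketch of the single-point "chaos game" version, using the standard published Barnsley fern coefficients (the starting point is deliberately absurd, to show that it washes out):

```python
import random

# Each map is (a, b, c, d, e, f), applied as:
#   x' = a*x + b*y + e
#   y' = c*x + d*y + f
MAPS = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00),  # stem
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60),  # ever-smaller copies up the frond
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60),  # large left leaflet
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44),  # large right leaflet
]
WEIGHTS = [0.01, 0.85, 0.07, 0.07]

def chaos_game(x: float, y: float, steps: int = 100_000):
    """Iterate randomly chosen affine maps; the visited points trace the attractor."""
    points = []
    for _ in range(steps):
        a, b, c, d, e, f = random.choices(MAPS, weights=WEIGHTS)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

# Start anywhere -- even wildly outside the fern -- and after a few
# iterations the points land on the same fern shape. The input only
# supplies the transient; the attractor belongs to the maps.
points = chaos_game(x=1234.5, y=-6789.0)
```

Plot those points and you get the fern, no matter where you started.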
IFSs use affine transformations, which are linear. The non-linearities in neural networks make the story more complicated; there can in particular be several distinct attractors that depend on the initial image put in. The input image functions to give the math a little push in a particular direction. It's a bit like an egg crate: if you put a ball on one of the peaks, there are four different depressions it could roll into depending on which way it's pushed. An IFS with affine transformations is more like a bowl: there's one lowest point the ball always rolls down to. The 12 image clusters these authors observed could be like the 12 depressions in an egg crate for a dozen eggs.
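Here's a toy one-dimensional version of the egg-crate picture, iterating x -> tanh(3x) (an arbitrary nonlinearity I picked for illustration, nothing from the paper). Which stable fixed point you settle into depends only on which way the start "pushes" you:

```python
import math

def settle(x: float, steps: int = 50) -> float:
    """Iterate x -> tanh(3x) until it parks at a stable fixed point."""
    for _ in range(steps):
        x = math.tanh(3.0 * x)
    return x

print(settle( 0.1))  # converges to about +0.995: one basin
print(settle(-0.1))  # converges to about -0.995: the other basin
# x = 0 is also a fixed point, but an unstable one (the slope there is 3 > 1),
# like balancing the ball on a peak of the egg crate.
```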
To be 100% clear, I am not saying this is what's happening in this paper. I haven't done the work to see that (I haven't even read the paper). However, it's a plausible hypothesis and exactly what I'd expect to see, and it's definitely something I'd want to rule out if I were going to attempt to generalize what they observed. Again, if this is what's happening then the fact that they used chatbots and image generators is irrelevant; it's an epiphenomenon, so to speak, something that pops out of the math.
@abucci oh yeah it's totally an expected outcome, it's just fun to see what the specific attractors are
We may be heading toward a world in which it seems peculiar or even foolish to read an article or book in its entirety. Incredibly, schools - K-12 & higher ed. alike - seem reconciled to becoming the accomplices of Big Tech by pointing us there.
I tend to speak my mind because I'm of the opinion that the Abilene paradox should not be a lifestyle, and that's gotten me some IRL hate over the years. But given what I've experienced here and how I've seen others treated, I'd say Mastodon really is something special in this regard.
Rather, in every social media bubble, including Mastodon and the Fediverse. This was already the case in the days of IRC in the 1990s.
Unless this was meant as a demonstration, in which case bravo.
Sorry, perhaps I skipped over something. Yes, I believe we have a similar opinion; the Fediverse is something special and beneficial for people who wish to use social media. Democratic, decentralised, and social.
Whether it's beneficial and in what ways is not what I was talking about. I don't think it tracks to say we have a similar opinion because we are talking about different things.
@abucci Dunno, Usenet is the right analogy. A bunch of entitled specialists who are used to being unquestioned suddenly encounter people who don't tolerate bullshit.
For anyone who's reading along, and to clarify what might have been vague:
What was and still is at stake is the balance between technology and humanity struck by the US federal government. The same incoherence that led us to where we are with crypto is almost surely also at play with the apparent lack of movement around generative AI.
#USPol #democrats #DemocraticParty #crypto #cryptocurrency #StableCoins #ShadowBanking #regulation #AI #GenAI
@davidgerard @abucci @datarama @xgranade I don't know what makes these people into weird cult evangelists, but watching it unfold on LinkedIn from people who I worked with and *know* have to know better has made The Emperor Has No Clothes seem like less of an exaggerated satire than I ever imagined it could be.
> Earlier today I noticed a survey on LinkedIn about whether a significant journal in my field should list ChatGPT as coauthor on articles.

This was only two months after ChatGPT made its big splash in Nov 2022, yet people were losing their minds already.
@abucci @davidgerard @datarama @xgranade A manager who was formative early in my career and who I respected greatly started posting about how AI is making him 10x more productive now and I was just like "seriously?" and could not believe he would say that. Literally one of the most influential people in my professional life, and also helped open my mind about my ultra-conservative upbringing although I doubt he realizes that and now he's spouting AI crap...
im getting really sick of the primary voices i hear about "ai" being a problem coming from people who think learning enough about how it works to identify its use is some kind of sin and whose justifications for hating it range from vibes-based to mystical
im sorry but if your problem with LLMs is the plagiarism and someone uses one trained on exclusively their own work you have to either refine your perspective about the way the software is a problem to come up with a new reason or chill the fuck out
if you think the art is bad "because it lacks the touch of a human soul" that is a perfectly valid religious belief but not one anyone who does not share your faith-based ideas about the human soul needs to accept as any sort of a convincing argument
and if your arguments about "ai" are so broad they would also make web crawlers and dwarf fortress verboten then you need to learn enough to refine them to something reasonable
my reasons for this bothering me are personal, i have been making procgen art for a decade now and it sucks to be the baby thrown out with the bathwater
whyyyyyyy are people in my compiler trying to convince me LLMs are like the web in the year 2000
oh he works at AWS. wonder if his employer offering a paid service has anything to do with it
"i was teaching an engineer something, and i realized it was hard to teach them this concept" ok so instead of making the concept easier to learn and understand we instead say actually teaching people things is beneath me and not worth my time and would be better outsourced?
this reads like an internal design doc i would create if i wanted to be promoted at AWS and hated teaching people things not something i would genuinely tell other experts in the same field as me
the day the pants team at twitter outsourced tier 1 support was the beginning of the end because we immediately lost touch with which components of our build tool made sense or didn't. instead of using every question as an opportunity to improve docs or even simplify an API, suddenly we were all turned solely towards performance goals completely decoupled from user requests. this made it extremely easy for management to justify removing all of our work entirely and subsequently outsource the whole codebase to google's bazel
idk man i can't read more than a few sentences of this at a time it sets my brain on fire and i feel gross linking to this https://smallcultfollowing.com/babysteps/blog/2025/02/10/love-the-llm so incredibly fucking contemptuous of everything i thought rust cared about
I can't get past the title. "Stop worrying and love the LLM", are they too young to know that was satirical? It wasn't a good thing to love the bomb!
@hipsterelectron no this is great. they should definitely put llms in every orifice of rust specifically and nothing else
@hipsterelectron I saw this great article about how llms make you into an awesome programmer. so if we put LLMs into rust then it'll turn all of the rust programmers into super programmers and they'll finally stomp out the vile C++ programmers once and for all https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/
@aeva human cognition is a trophy to be won not a seed to nurture apparently
@aeva junyer the late RE2 maintainer and friend of mine saw googlers pushing rust because memory safety means they can avoid investing in security ever again. apparently rust also means never investing in junior engineers too
there's been this small part of me that has been like "I don't want shitty companies to invest in memory safety, I want them to remain eminently hackable" and so hearing they're also forcing their junior engineers to rely on stochastic code generators really balances that memory safety out. so, yay!
I'm not sure how Layla and May001 snuck past SpamAssassin but the assassin is being re-trained.
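(For the curious: re-training here typically means feeding the missed messages back to SpamAssassin's Bayes filter with something like `sa-learn --spam --mbox missed-spam.mbox`, where the mbox filename is hypothetical and would be wherever you saved the offenders, then sanity-checking the learned token counts with `sa-learn --dump magic`.)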
Read this from Ben Santer (very famous climate scientist for those who don’t know) about what NCAR means and what dismantling it will do.
@datarama i bought 64GBs for around 600 USD (it's ECC), the same package is now about 2000 USD. and yeah, i hope Altman will find out soon. after all this fucking around
I've noticed that hard drive prices have shot up too but I haven't found a satisfying explanation for that (I'm happy to blame OpenAI but it'd be nice to know who is really to blame). The price-per-terabyte seems to have increased by at least 3-fold, probably more, since I last checked prices.
@abucci @datarama ssd and hard drive prices tend to follow DRAM prices. and from what i've read they are also increasing at least in part because of AI (datacenter) demand
edit: https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram-deal
I don't know. The mooreslawisdead post mentions SSDs, but there still isn't an analysis of hard drives in particular. "AI companies are buying up the good SSDs, which drives up the price so much that consumers can't afford them and turn to HDDs, which then drives up the price of those" is a plausible story, but I'd like to see some evidence (and also it's an Econ 101 explanation, which I'm skeptical of in general). Another possible explanation is simple price gouging: just as the COVID pandemic gave companies an excuse to jack up their markups, which led to price increases across the economy, it's possible hardware component companies are jacking up their prices and blaming the AI industry when there might be some other dynamics at play (greed, tariffs, an anticipation of reduced demand leading to decreased production, who knows what else).
One reason I find price gouging a possibility is that refurbished or recertified consumer grade HDDs spinning at 5400 RPM are also doubling or tripling in price. Nobody's putting stuff like that in datacenters (I hope). It all just smells fishy.
I guess I won't be reading Al Jazeera any more. *sigh*
"Al Jazeera Media Network says initiative will shift role of AI ‘from passive tool to active partner in journalism’"
https://www.aljazeera.com/news/2025/12/21/al-jazeera-launches-new-integrative-ai-model-the-core
@dentangle
Idk if they have already started or not, but I hope that human journalists writing under the byline "Al Jazeera staff" wouldn't have written this paragraph and then never mentioned the "pillars" again
"Relying on six pillars, the initiative will integrate AI systems to help Al Jazeera journalists process complex data, produce immersive content, gain access to analytical context and automate internal workflows, among other things."
If those activities are some of the pillars, in what sense can they be "relied on"?
If those activities are the intended outcomes of the AI initiative, then what are the pillars?
@nyhan If their organisation has somehow missed the obvious harms, the environmental impact, the economic and societal harm, the psychological problems #LLMs can cause, and the inherent bias amplification then I think we can assume that their journalism has little value at this point.
This whole LLM mess is one of the biggest stories of our generation, and apparently Al Jazeera haven't noticed.
@dentangle As far as I understood, Al Jazeera was a very trustworthy source on all sorts of news, except anything oil related, since they are funded that way (to my understanding)
It seems logical that they follow the LLM path, since it's the latest fad for oil companies to shill their product (with LLMs' energy usage going through the roof, oil is kept relevant)
@minimoysmagician It's sad to see them go this way. They were one of my primary news sources until today.
LLMs are shaping up to be potent disinformation torrents. That feature would seem to run against the principles of good journalism.
RE: https://chaos.social/@librewolf/115716906957137196
This is also our position for #IronFox.
(Thankfully, Mozilla’s “generative AI” features haven’t really made their way to Android, so this hasn’t yet been an issue for us - but this is how we will proceed if/when they are introduced).
David Gerard boosted:
As there seems to have been recent confusion about this, just a quick "official" toot to then pin: we haven't and won't support "generative AI" related stuff in LibreWolf. If you see some features like that (like Perplexity search recently, or the link preview feature now) it is solely because it "slipped through". As soon as we become aware of something like this / it gets reported to us, we will remove/disable it ASAP.
> “Mozilla’s “generative AI” features haven’t really made their way to Android”

You would clearly know better than I would, but are you certain of this?
I have IronFox 146.0 on my phone and extensions.ml.enabled is set to true; I didn't touch it, so I assume this is the default setting. This is one of the settings that controls the Firefox AI Runtime, which as I understand it allows one to run transformers (generative AI) locally.
@abucci since the firefox ai runtime (/ml component) was implemented in gecko directly, you're right that it's technically present on android - but it's not actively being used/implemented anywhere in geckoview and fenix (firefox for android) itself - unlike firefox for desktop, which has been adding user-facing features like ai chatbot integration, link preview, smart tab grouping, etc. (with a separate "genai" component).
but, i took a look, and i will go ahead and disable `extensions.ml.enabled` for next release, thanks for pointing it out.
in general, i'm going to take a closer look into the ai runtime/ml component (that we're inheriting from gecko) for next release, and see what specifically we can disable/remove. i don't think we want to kill *all* local "machine learning" functionality (ex. firefox translations technically uses machine learning models - and we def want to support that), but we def won't support anything related to "generative" ai, like the examples i mentioned of firefox's recent features on desktop
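(For desktop Firefox users who want to opt out now rather than wait for a release: the pref named in this thread can be flipped to false in about:config, or pinned by adding the line `user_pref("extensions.ml.enabled", false);` to the profile's user.js override file, which Firefox reads at startup. That's a minimal sketch assuming only the pref name confirmed above; it disables the extensions-facing ML switch but doesn't by itself remove the runtime.)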
ACM is now showing an AI “summary” of a recent paper of mine on the DL instead of the abstract. As an author, I have not granted ACM the right to process my papers in this way, and will not. They should either roll back this (mis)feature or remove my papers from the DL.
> “As an author, I have not granted ACM the right to process my papers in this way”
Well, you published this paper with a CC BY license, so anyone (including ACM) can publish a “derivative work” like a summary (AI generated or not), as long as proper attribution is given. Or did you mean something else?
@avandeursen I disagree that a CC license grants the right to process a work with AI and to present the output under the authors’ byline. But, even if we set this aside, there are many papers in the DL that are not CC licensed. Will they be excluded from these features?
@hovav @avandeursen some CC licenses do allow derivatives, including the CC-BY license used for this paper. But the -ND (no derivatives) versions do not. Most of my recent papers (the ones where my students didn't choose otherwise) are licensed CC-BY-ND, so unambiguously do not allow this. But the AI summary is still available on those papers.
@csgordon @hovav @avandeursen Hi can you point me to an example of this by chance (on a CC-BY-ND paper)? I am taking this up with the ACM Publications Board.
@JonathanAldrich @hovav @avandeursen sure. Here are 3, all licensed CC-BY-ND right in the PDF:
https://dl.acm.org/doi/10.1145/3763134
https://dl.acm.org/doi/10.1145/3689493.3689980
https://dl.acm.org/doi/10.1145/3689492.3689806
@csgordon @hovav @avandeursen Thank you. I have raised the issue, suggesting they remove this feature (or at least pause it pending a robust internal discussion & proper policy & implementation).
@csgordon @hovav @avandeursen Update: the ACM DL is being updated so that author-provided abstracts are the default view again, and AI summaries are more explicitly labeled with "AI-Summary". These changes should be visible worldwide by sometime tomorrow.
@csgordon @hovav @avandeursen Also people are invited to report factual inaccuracies in AI Summaries using the blue Feedback button on the right.
@JonathanAldrich @csgordon @hovav @avandeursen "report factual inaccuracies" is on us? These systems can't guarantee accuracy; this seems to be just another way to get more work out of us...
Jonathan, did anyone state why these AI summaries are an "improvement" over the author-provided abstracts? Why do they consider this a useful feature?
@smarr @csgordon @hovav @avandeursen The rationale I was given for AI Summaries *alongside* ordinary abstracts:
* The AI-generated summaries are intended to be more accessible to non-experts.
* They are intended to describe the contents of the paper as presented, including scope and limitations. Author abstracts, understandably, often emphasize positive contributions. The authors are often writing for the reviewer and trying to “sell” their work in a constrained space.
(1/2)
@smarr @csgordon @hovav @avandeursen and
* Their uniform structure creates opportunities for comparative analysis and discovery across sets of papers.
(2/2)
@abucci @csgordon @hovav @avandeursen I don't really have insight. It was discussed mostly on the DL Board. We heard about it on the Pubs Board, but not a lot of detail (e.g. assessments of quality, or the idea that it would replace abstracts in the default view).
#Mozilla #Firefox #DarkPatterns #antifeatures #AISlop #NoAI #NoAIWebBrowsers #AICruft #AI #GenAI #GenerativeAI #LLMs #tech #dev #web