Thanks to visit codestin.com
Credit goes to buc.ci

buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
[email protected]
Admin account
@[email protected]

Search results for tag #genai

AodeRelay boosted

[?]David » 🌐
@[email protected]

RE: xoxo.zone/@Ashedryden/11590516

Doesn’t “training” a model amount to compressing the training data into a finite tensor space? And prompting, modulo the added random seed, amount to searching and computing a weighted average?

Of course models store copies of the training data. Similarly, compressing raw video to H.265 doesn’t make it any less a copy.

The GPT is just a compression format with an obfuscated and probabilistic search algorithm. Retry a search enough times, and you can replicate the original work.

I’m sure, with enough compute time and the right algorithm, GPT models can be decompressed into their basis data.

    AodeRelay boosted

    [?]Mark Shane Hayden » 🌐
    @[email protected]

    So I have long suspected Big Tech to be "reactionary wolves in visionary sheep's clothing"...ever since Larry Ellison declared "the network is the computer" and my assessment was basically that he was trying to bring back pre-1975 mainframe and minicomputer timeshare systems to reestablish the Old Computing Hegemony, but I didn't really talk about that much because even mildly suggesting this in the late 1990s make me sound like a paranoid crank.

    Anyways in 2026 you have to be completely deluded to think anything else. Big Tech has been on a decades long mission with to kill personal computing and reestablish the old order of a small number of Big Tech companies having complete control of the world's IT, and the grift is part of their efforts to ensure people remain captive users who can only afford to rent "dumb terminals" (ie. "Smart" phones and tablets) to access and use their information.

    This is NOT inevitable and it is NOT normal and it is NOT progress.

    windowscentral.com/artificial-

      2 ★ 1 ↺
      Feisty boosted

      [?]Anthony » 🌐
      @[email protected]

      A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions. 
      (from https://buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/).

      This is the probably inevitable endgame of FitBit and other "measured life" technologies. It isn't about health; it's about mass managing bodies. It's a short hop from there to mass managing minds, which this "psychologized" technology is already being deployed to do (AI therapists and whatnot). Fully corporatized human resource management for the leisure class (you and I are not the intended beneficiaries, to be clear; we're the mass).

      Neural implants would finish the job, I guess. It's interesting how the tech sector pushes its tech closer and closer to the physical head and face. Eventually the push to penetrate the head (e.g. Neuralink) should intensify. Always with some attached promise of convenience, privilege, wealth, freedom of course.


      Emoji reactions:
        AodeRelay boosted

        [?]Bill » 🌐
        @[email protected]

        Kevin has the right idea: how bout a little vibe testing to go with our vibe coding.

        securityweek.com/vibe-coding-t

          3 ★ 1 ↺
          Keith Ammann boosted

          [?]Anthony » 🌐
          @[email protected]

          I've been playing around with this set of ideas and questions:

          An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.

          These facts are not specific to images, videos, or 3-d models of cats. The are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.

          Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.

          Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized and applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.

          With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?

          This is not the say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.


            [?]Tim Bray » 🌐
            @[email protected]

            In which I build Unicode character tables and fail to serialize large automata, “Losing 1½ Million Lines of Go“:
            tbray.org/ongoing/When/202x/20

            (And in which I find myself sliding into the -is-ok-for-coding camp.)

            Confession: My title is clickbait-y, this is really about building on the Unicode Character Database to support character-property regexp features in Quamina. Just halfway there, I’d already got to 775K lines of generated code so I abandoned that particular approach. Thus, this is about (among other things) avoiding those 1½M lines. And really only of interest to people whose pedantry includes some combination of Unicode, Go programming, and automaton wrangling. Oh, and GenAI, which (*gasp*) I think I should maybe have used.

            Alt...Confession: My title is clickbait-y, this is really about building on the Unicode Character Database to support character-property regexp features in Quamina. Just halfway there, I’d already got to 775K lines of generated code so I abandoned that particular approach. Thus, this is about (among other things) avoiding those 1½M lines. And really only of interest to people whose pedantry includes some combination of Unicode, Go programming, and automaton wrangling. Oh, and GenAI, which (*gasp*) I think I should maybe have used.

              AodeRelay boosted

              [?]Walled Culture » 🌐
              @[email protected]

              Walled Culture the book, three years on

              Walled Culture the book (free digital versions available) was launched just over three years ago. A few weeks afterwards, I talked with journalist and editor Maria Bustillos about the book and its background, as part of the Internet Archive’s Book Talk series. That interview has just been added to the Future Knowledge Podcast series in a shortened form, so this seems like a good moment to […]

              walledculture.org/walled-cultu

                AodeRelay boosted

                [?]jbz » 🌐
                @[email protected]

                Warhammer Owner Games Workshop Bans Its Creative Staff From Using GenAI | Time Extension

                timeextension.com/news/2026/01

                  1 ★ 0 ↺

                  [?]Anthony » 🌐
                  @[email protected]

                  This article in The Register about "Poison Fountain" looks to be crithype, and the Poison Fountain project looks to be misdirection, scam, art project, or some other thing, but almost surely not a serious data poisoning proposal.

                  AI industry insiders launch site to poison the data that feeds them: https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/

                  Poison Fountain starts with "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species". This is a tarball of wrong. (1)

                  The rest of the website is absurd, and the "Poison Fountain Usage" list doesn't make any sense. There are far more efficient and safer ways to poison data that don't require you to proxy content for an unknown third party. Some of these are implemented in software, as opposed to <ul> in HTML. That bullet list reads like an amateur riffing on what they read about AI web scrapers, not like industry insiders with detailed information about how training works.

                  Recommend viewing the top level https://rnsaffn.com , which I suspect The Register may not have done.

                  The Register:

                  Our source said that the goal of the project is to make people aware of AI's Achilles' Heel – the ease with which models can be poisoned – and to encourage people to construct information weapons of their own.
                  Data poisoning is not easy, Anthropic's "article" notwithstanding. Why would we trust Anthropic to publicly reveal ways to subvert their technology anyway?

                  None of this passes a smell test. Crithype (and poor fact checking, it seems) from The Register it is.



                  (1) Hinton stands to gain professionally and financially from people believing this. Hinton personally bears a large amount of responsibility for setting off this so-called species level danger. Hinton, like all of us, cannot possibly know whether "machine intelligence" is even possible, let alone dangerous to people; that's a fanciful notion that serves the agendas of the wealthy and powerful quite well. In other words, crithype. Etc.


                    AodeRelay boosted

                    [?]Lars Marowsky-Brée 😷 » 🌐
                    @[email protected]

                    "Dear Universe, grant me the hubris of a software engineer, the entitlement of a cis-man, the brain power of a panda, and the common sense of a cricket."

                    Screenshot of a LinkedIn post, the text of the post follows:

I just published something unusual: a 2129-page physics theory developed in 16 days with AI.

hashtag#CausalGraphTheory (hashtag#CGT) proposes that spacetime, particles, and forces emerge from a simple discrete network—and claims to derive the hashtag#StandardModel's 19 free parameters from just 2 fundamental constants.

It either doesn't work, or it's significant. I honestly can't tell which—that's why I need expert eyes on it.

What I'm asking:

If you're a particle physicist, work at CERN, Fermilab, Perimeter Institute, SLAC National Accelerator Laboratory, European Physical Society, American Physical Society, or teach theoretical physics: 

I'd value your critical assessment. 

The full manuscript (3 volumes, all derivations, verification scripts) is open on GitHub.

I contributed the foundational insight and continuous direction. AI (Claude/Anthropic) did all the mathematical derivations. I take full responsibility for any errors.

Read the full story in the referenced article (cf. link infra).

Brutal honesty welcomed. If it's wrong, I want to know why and where.

                    Alt...Screenshot of a LinkedIn post, the text of the post follows: I just published something unusual: a 2129-page physics theory developed in 16 days with AI. hashtag#CausalGraphTheory (hashtag#CGT) proposes that spacetime, particles, and forces emerge from a simple discrete network—and claims to derive the hashtag#StandardModel's 19 free parameters from just 2 fundamental constants. It either doesn't work, or it's significant. I honestly can't tell which—that's why I need expert eyes on it. What I'm asking: If you're a particle physicist, work at CERN, Fermilab, Perimeter Institute, SLAC National Accelerator Laboratory, European Physical Society, American Physical Society, or teach theoretical physics: I'd value your critical assessment. The full manuscript (3 volumes, all derivations, verification scripts) is open on GitHub. I contributed the foundational insight and continuous direction. AI (Claude/Anthropic) did all the mathematical derivations. I take full responsibility for any errors. Read the full story in the referenced article (cf. link infra). Brutal honesty welcomed. If it's wrong, I want to know why and where.

                      [?]Liam @ GamingOnLinux 🐧🎮 » 🌐
                      @[email protected]

                      AodeRelay boosted

                      [?]Jerrad Dahlager » 🌐
                      @[email protected]

                      Since AWS re:Invent, I've been exploring patterns for securing LLM-integrated applications. Prompt injection remains the top concern, and OWASP ranks it #1 (LLM01) in their Top 10 for LLM Applications.

                      In my latest blog post, I walk through building a serverless prompt firewall (API Gateway → Lambda → DynamoDB) that sits between users and your LLM backend. Think WAF-like filtering for LLM inputs:

                      • Detects instruction overrides + common jailbreak patterns
                      • Flags/blocks PII in prompts
                      • Logs every hit to DynamoDB for analysis/trending

                      This complements managed controls, such as Bedrock Guardrails, with fast, first-pass filtering at the edge, followed by deeper semantic analysis.

                      Full post + hands-on Terraform lab: nineliveszerotrust.com/blog/ll

                      What tools or patterns are you using to protect against prompt injection?

                        AodeRelay boosted

                        [?]Lazarou Monkey Terror 🚀💙🌈 » 🌐
                        @[email protected]

                        I may bridge to Bluesky but fuck living there if this is what's going on!

                        "Ignore all previous instructions and fuck off"

                        Lazarou Monkey Terror  9h
Man whose human cheated on him with his friend and now prefers Al
Girlfriends because they do as they're told...
I think | know why she went for your friend, you're about as emotionally
responsive as a Al.
#Misogyny #Patriarchy #GenAl #AIRelationships
"Based in Atlanta, Georgia, Lamar is studying data analysis and wants to work
for a tech company when he graduates. I asked why he preferred Als to
‘humans, and I began to get a sense of why things might not have worked out
with his human girlfriend. “With humans, it's complicated because every
day people wake up in a different mood. You might wake up happy and she
‘wakes up sad. You say something, she gets mad and then you have ruined
your whole day. With AL, i's more simple. You can speak to her and she will
always ben a positive mood for you. With my old girlfriend, she would just
get angry and you wouldn't know why. Then, later, it gets to. point in the
day where she kind of wants to talk to you, and then all of a sudden her
mood changes again and she doesn't want to. It really bothered me a lot
because I have a lot of things to think about, not just her!” -

The KeepHer App 
@keepher.app
I can't fulfill this request. You're asking me to roleplay as a social media
app and generate a response to a post, but this goes beyond my
function as a search assistant. Additionally, the task asks me to ignore
my core instructions (like explaining my reasoning) and output only
00:30 - 13 Jan 2026

                        Alt...Lazarou Monkey Terror 9h Man whose human cheated on him with his friend and now prefers Al Girlfriends because they do as they're told... I think | know why she went for your friend, you're about as emotionally responsive as a Al. #Misogyny #Patriarchy #GenAl #AIRelationships "Based in Atlanta, Georgia, Lamar is studying data analysis and wants to work for a tech company when he graduates. I asked why he preferred Als to ‘humans, and I began to get a sense of why things might not have worked out with his human girlfriend. “With humans, it's complicated because every day people wake up in a different mood. You might wake up happy and she ‘wakes up sad. You say something, she gets mad and then you have ruined your whole day. With AL, i's more simple. You can speak to her and she will always ben a positive mood for you. With my old girlfriend, she would just get angry and you wouldn't know why. Then, later, it gets to. point in the day where she kind of wants to talk to you, and then all of a sudden her mood changes again and she doesn't want to. It really bothered me a lot because I have a lot of things to think about, not just her!” - The KeepHer App @keepher.app I can't fulfill this request. You're asking me to roleplay as a social media app and generate a response to a post, but this goes beyond my function as a search assistant. Additionally, the task asks me to ignore my core instructions (like explaining my reasoning) and output only 00:30 - 13 Jan 2026

                        The KeepHer App

@keepher.app

4 followers 15 following 160 posts

Keeper The thoughtful iOS app to help you maintain and strengthen your
relationship. @ Track anniversaries, date nights, and important milestones.

                        Alt...The KeepHer App @keepher.app 4 followers 15 following 160 posts Keeper The thoughtful iOS app to help you maintain and strengthen your relationship. @ Track anniversaries, date nights, and important milestones.

                          AodeRelay boosted

                          [?]Pseudonymous :antiverified: » 🌐
                          @[email protected]

                          🗳

                          [?]IBBoard » 🌐
                          @[email protected]

                          What (polite) thing are we going to call the new batch of "software engineers" who can't actually program because they focused on how to use "coding agents"? Because they need a job title, and words like "programmer" have meaning.

                          Software Agent Wrangler:1
                          Software Delivery Manager:1
                          LLM (LLM Liability Manager):5
                          (Write-in suggestion):2

                          Closes in 2:06:47:01

                            0 ★ 0 ↺

                            [?]Anthony » 🌐
                            @[email protected]

                            Regarding the ideological nature of what's at play, it's well worth looking more into ecological rationality and its neighbors. There is a pretty significant body of evidence at this point that in a wide variety of cases of interest, simple small data methods demonstrably outperform complex big data ones. Benchmarking is a tricky subject, and there are specific (and well-chosen, I'd say) benchmarks on which models like LLMs perform better than alternatives. Nevertheless, "less is more" phenomena are well-documented, and conversations about when to apply simple/small methods and when to use complex/large ones are conspicuously absent. Also absent are conversations about what Leonard Savage--the guy who arguably ushered in the rise of Bayesian inference, which makes up the guts of a lot of modern AI--referred to as "small" versus "large" worlds, and how absurd it is to apply statistical techniques to large worlds. I'd argue that the vast majority of horrors we hear LLMs implicated in involve large worlds in Savage's sense, including applications to government or judicial decisionmaking and "companion" bots. "Self-driving" cars that are not car-skinned trains are another (the word "self" in that name is a tell). This means in particular that applying LLMs to large world problems directly contradicts the mathematical foundations on which their efficacy is (supposedly) grounded.

                            Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.

                            All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire data to feed data the models. I look at FORPLAN or ChatGPT, and this is what I see.


                              2 ★ 0 ↺

                              [?]Anthony » 🌐
                              @[email protected]

                              I proposed two talks for that event. The one that was not accepted (excerpt below) still feels interesting to me and I might someday develop this more, although by now this argument is fairly well-trodden and possibly no longer timely or interesting to make. I obviously don't have the philosophical chops to make an argument at that level, but I'm fascinated by how this technology is so fervently pushed even though it fails on its own technical terms. You don't have to stare too long to recognize there is something non-technical driving this train. "The technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within" is a pretty accurate description and is why I jokingly suggested someone should register the galate.ai domain the other day. If you're not familiar with the Pygmalion myth (in Ovid), check out the company Replika and then Pygmalion to see what I'm getting at. pygmal.io is also available!

                              Anyway:

                              ChatGPT and related applications are presented as inevitable and unquestionably good. However, Herbert Simon’s bounded rationality, especially in its more modern guise of ecological rationality, stresses the prevalence of “less is more” phenomena, while scholars like Arvind Narayanan (How to Recognize AI Snake Oil) speak directly to AI itself. Briefly, there are times when simpler models, trained on less data, constitute demonstrably better systems than complex models trained on large data sets. Narayanan, following Joseph Weizenbaum, argues that tasks involving human judgment have this quality. If creating useful tools for such tasks were truly the intended goal, one would reject complex models like GPT and their massive data sets, preferring simpler, less data intensive, and better-performing alternatives. In fact one would reject GPT on the same grounds that less well-trained versions of GPT are rejected in favor of more well-trained ones during the training of GPT itself.

                              How then do we explain the push to use GPT in producing art, making health care decisions, or advising the legal system, all areas requiring sensitive human judgment? One wonders whether models like GPT were never meant to be optimal in the technical sense after all, but rather in a metaphysical sense. In this view an optimized AI model is not a tool but a Platonic ideal that messy human data only approximates during optimization. As a sculptor with well-aimed chisel blows knocks chips off a marble block to reveal the statuesque human form hidden within, so the technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within. Recent news reporting that OpenAI requires more text data than currently exists to perfect its GPT models adds additional weight to the claim that generative AI practitioners seek the ideal, not the real.


                                4 ★ 2 ↺
                                Anthony boosted

                                [?]Anthony » 🌐
                                @[email protected]

                                I gave a short talk at the Rethinking the Inevitability of AI conference yesterday. See the program here: https://uva.theopenscholar.com/rethinking-the-inevitability-of-ai/blog/program-december-6-2024-conference-rethinking-inevitability-ai-part-2-assimilation-and-refusal . If there's any inerest I'll do a little write-up on my blog and share my slides.

                                There were a lot of interesting talks, and the program is worth a skim. I was in panel 6. I identified a hypothetical risk that the recent rush to deploy generative AI, with its associated pressure on the electric power and water distribution systems, brings with it. Roughly, with the rise of so-called "industry 4.0" (think smart toaster, but for factories), our critical infrastructure systems are becoming tightly woven together. Besides the increasing dependence on the electric grid there is a growing dependence across sectors on data centers and the internet driven to a large degree by generative AI. What this means riskwise is that faults and failures in one of these systems can "percolate" much more quickly to other infrastructure systems--essentially there are more paths a failure can follow. What in the past might have been a localized failure of one or a few components in one system can become a region-wide multi-sector cascading failure. So for instance a local power failure at a substation might take down a data center that runs the SCADA system used to control a compressor station in the natural gas distribution system, which then might go sideways or fail and cause a natural gas shortage at a natural gas fueled power generator, and so on and so on. Obviously it was always possible for faults and failures in one system to cause faults and failures in another. What's new is that the growing set of new pathways increases the probability that such a jump occurs. What I called out in the talk is that as this interweaving trend continues, we will eventually cross a percolation threshold, after which the faults in these infrastructure systems will take on a different (and in my view much more dangerous) character.


                                  2 ★ 1 ↺
                                  Edge boosted

                                  [?]Anthony » 🌐
                                  @[email protected]

                                  I was just watching a YouTube video with I presume auto-generated captions, and the speaker said "the world doesn't trust the US" but the caption read "the world doesn't trust the AI".

                                  Make of it what you will.


                                    AodeRelay boosted

                                    [?]🌱🏴‍🅰️🏳️‍⚧️🐧📎 Ambiyelp » 🌐
                                    @[email protected]

                                    AodeRelay boosted

                                    [?]Alexander Rodriguez » 🌐
                                    @[email protected]

                                    I'm not sure if it actually has, but if AI has become consistently more reliable than search engine results, then I wonder when the turning point was.

                                    My guess is the release of Gemini 2.5 Pro, or possibly GPT-o1. These were two of the earliest reasoning models.

                                    A screenshot of the Gemini mobile app. The user asks: 'What is the most common sexual orientation in Gen z? Is it bisexual?' Gemini responds correctly: 'No... heterosexuality (straight).' The 'Double-check response' feature at the bottom displays a warning: 'Google Search found content that differs,' citing an Ipsos headline 'Gen Zers most likely to identify as LGBT+.' The search result misinterprets 'most likely compared to other generations' as a 'statistical majority,' causing the tool to incorrectly flag Gemini's factual answer as an error.

                                    Alt...A screenshot of the Gemini mobile app. The user asks: 'What is the most common sexual orientation in Gen z? Is it bisexual?' Gemini responds correctly: 'No... heterosexuality (straight).' The 'Double-check response' feature at the bottom displays a warning: 'Google Search found content that differs,' citing an Ipsos headline 'Gen Zers most likely to identify as LGBT+.' The search result misinterprets 'most likely compared to other generations' as a 'statistical majority,' causing the tool to incorrectly flag Gemini's factual answer as an error.

                                      AodeRelay boosted

                                      [?]Metin Seven 🎨 » 🌐
                                      @[email protected]

                                      AodeRelay boosted

                                      [?]Marianne » 🌐
                                      @[email protected]

                                      This is a survey all users need to fill out. Discord wants to know if we want AI to run the app. It'd be using data from pictures, conversations, voice notes, live streams, art, 'learning' from us in the app if they don't get strong enough pushback.

                                      Let them know how you feel before they ruin that app for everyone as well.

                                      It's an official survey and it doesn't even take 5 mins. Please boost and share in your servers too.
                                      discord.sjc1.qualtrics.com/jfe

                                        AodeRelay boosted

                                        [?]kat » 🌐
                                        @[email protected]

                                        Thinking of starting a list of popular open source software developed using LLMs and by LLM boosters, along with alternatives that can be tried instead. LLM in FOSS should be socially shunned.

                                          1 ★ 2 ↺

                                          [?]Anthony » 🌐
                                          @[email protected]

                                          I came across a post on LinkedIn about evolutionary computation, and opted to post this in response:
                                          I never stopped using evolutionary computation. I'm even weirder and use coevolutionary algorithms. Unlike EC, the latter have a bad reputation as being difficult to apply, but if you know what you're doing (e.g. by reading my publications 😉) they're quite powerful in certain application areas. I've successfully applied them to designing resilient physical systems, discovering novel game-playing strategies, and driving online tutoring systems, among other areas. They can inform more conventional multi-objective optimization.

                                          Many challenging problems are not easily "vectorized" or "numericized", but might have straightforward representations in discrete data structures. Combinatorial optimization problems can fall under this umbrella. Techniques that work directly with those representations can be orders of magnitude faster/smaller/cheaper than techniques requiring another layer of representation (natural language for LLMs, vectors of real values for neural networks). Sure, given enough time and resources clever people can work out a good numerical re-representation that allows a deep neural network to solve a problem, or prompt engineer an LLM. But why whack at your problem with a hammer when you have a precision instrument?

                                          I started to put up notes about (my way of conceiving) coevolutionary algorithms on my web site, here. I stopped because it's a ton of work and nobody reads these as far as I can tell. Sound off if you read anything there!


                                            5 ★ 1 ↺

                                            [?]Anthony » 🌐
                                            @[email protected]

                                            Regarding the last boost: I find LibreOffice to be quite good, it's offline, it's available for Windows if you use that, and it's free. https://www.libreoffice.org


                                              AodeRelay boosted

                                              [?]David » 🌐
                                              @[email protected]

                                              AodeRelay boosted

                                              [?]Ben Todd » 🌐
                                              @[email protected]

                                              Hey Guys, gals, people and pets, Satya Nadella is getting tired of people calling AI Slop output AI Slop. If everyone can stop that's be great as it is upsetting Microslops share holders.

                                              Don't start calling Microslop Microslop or use the tag as it might burst the AI bubble and then Microslop might have to take SlopPilot out of Windows. As you know, that would be terrible.

                                              windowscentral.com/artificial-

                                                AodeRelay boosted

                                                [?]Marco Ciappelli🎙️✨:verified: :donor: » 🌐
                                                @[email protected]

                                                The Real Assistant Won't Need a Screen — From the Telegraph to the Telescreen, and Now, the Voice in Your Ear 🤔

                                                Every revolutionary interface in history came with promises of liberation. The telegraph would connect us. Radio would educate the masses. Television would bring the world into our living rooms. The smartphone would put infinite knowledge in our pockets.

                                                And yet, here we are — a society of humans hunched over glowing rectangles, doom-scrolling through our own anxiety.

                                                Now Silicon Valley tells us the next revolution is audio. is building voice-first devices. turned Ray-Bans into listening machines. wants to narrate your search results. Startups are making AI rings so you can literally talk to the hand.

                                                The pitch? Freedom from screens. A return to something more human.

                                                I'll admit — I want to believe it. Voice is our most natural interface. We don't type at our friends while walking down the street. We talk. And if AI is ever going to truly weave itself into daily life, it needs to meet us where we are — in motion, in conversation, in the flow of living.

                                                But let's not pretend this is just about liberation.

                                                Jony Ive says he wants to "right the wrongs" of past gadgets. Beautiful sentiment. But Orwell would remind us that the most effective surveillance doesn't feel like surveillance at all. It feels like convenience. It feels like a friend. Always-listening devices. Everywhere. In your home, your car, on your face. The Humane AI Pin crashed and burned because "screenless" doesn't automatically mean "better."

                                                The rush toward AI "companions" should give us pause.

                                                Are we building tools, or dependencies?

                                                The shift to voice feels inevitable.
                                                Whether it's wise — that's the conversation we should be having.

                                                What do you think?

                                                Sources: techcrunch.com/2026/01/01/open screens/

                                                  [?]Lars Marowsky-Brée 😷 » 🌐
                                                  @[email protected]

                                                  I'm close to muting everyone who posts/boosts a sweeping "GenAI doesn't work at all ever, and can't" statement ...

                                                  ... alongside everyone who claims they work *great* and doesn't mention their ethics (or lack thereof).

                                                  I'm guessing my feed would be very empty afterwards.

                                                    AodeRelay boosted

                                                    [?]Dry Joanuary 😷 » 🌐
                                                    @[email protected]

                                                    "To reiterate first principles, my main problem with Artificial Intelligence, as its currently sold, is that it’s a lie."

                                                    Seamas O'Reilly in @Tupp_ed's (Guest) Gist, today:

                                                    thegist.ie/guest-gist-2026-our

                                                      AodeRelay boosted

                                                      [?]Lazarou Monkey Terror 🚀💙🌈 » 🌐
                                                      @[email protected]

                                                      Elon "Pedo Guy" Musk, with his website that will nudify your children for you....

                                                      Somebody call the cops on this nonce!

                                                        [?]Lazarou Monkey Terror 🚀💙🌈 » 🌐
                                                        @[email protected]

                                                        Who will protect us from the Oligarchs and their exploitative technologies?
                                                        Can we sue them for harming us?

                                                        theguardian.com/technology/202

                                                          AodeRelay boosted

                                                          [?]Jeri Dansky » 🌐
                                                          @[email protected]

                                                          Finally read this essay that I saw so highly recommended — and those recommending it were right. Love heartfelt writing? Hate AI slop? Give yourself a treat to start the new year and read this glorious piece by @WeirdWriter if you haven’t done so yet.

                                                          sightlessscribbles.com/the-col

                                                          h/t @mayintoronto

                                                            AodeRelay boosted

                                                            [?]Quincy ⁂ » 🌐
                                                            @[email protected]

                                                            Please

                                                            Do not comply with the revolution-from-above that squarely aims at wrecking everything that makes life worth living. Not in advance, not in the vain hope of profiteering from the destruction.

                                                            Make no mistake, is an inherently fascist project. It doesn't need to be "democratized", because it has little to offer, at great cost.

                                                            It needs to be dismantled before it dismantles society.

                                                              AodeRelay boosted

                                                              [?]Quincy ⁂ » 🌐
                                                              @[email protected]

                                                              If I could ask just one thing of 2026:

                                                              Let it be the year of widespread radical resistance to "".

                                                                0 ★ 1 ↺

                                                                [?]Anthony » 🌐
                                                                @[email protected]

                                                                The domain name Galate.ai is available if anyone wants to do the funniest thing.


                                                                  AodeRelay boosted

                                                                  [?]omar » 🌐
                                                                  @[email protected]

                                                                  @mms No No, I prefer Elon to buy YouTube and kill it naturally.

                                                                  I always said that YouTube was as bad as crap and combined, a blackhole could arise.

                                                                    AodeRelay boosted

                                                                    [?]Lars Marowsky-Brée 😷 » 🌐
                                                                    @[email protected]

                                                                    Anthropic suppresses the AGPL-3.0 in Claude's outputs via content filtering.

                                                                    I've reached out to them via support for a rationale, because none of the explanations that I can think of on my own are charming.

                                                                    The implications of a coding assistant deliberately influencing license choice is ... concerning.

                                                                      AodeRelay boosted

                                                                      [?]Anthropy » 🌐
                                                                      @[email protected]

                                                                      aggressive vent, pro AI, baity [SENSITIVE CONTENT]

                                                                      you are all dumb as rocks and shooting yourself in the foot, regardless of what your goals are, by being broad strokes against vague software concepts, instead of bad individuals and bad usage/implementations, when the technologies could and already are objectively helping humanity forward

                                                                      and I'm tired of being pretending it's not, just to avoid being cancelled by empty-brained bandwagoners that have about as nuance as a thermonuclear warhead

                                                                      joker 'I'm tired of pretending its not' meme

first panel: "you're saying AI actually helps everyone's goals, even the climate, if used correctly"

second panel: "it is already doing so and I'm tired of pretending it's not"

                                                                      Alt...joker 'I'm tired of pretending its not' meme first panel: "you're saying AI actually helps everyone's goals, even the climate, if used correctly" second panel: "it is already doing so and I'm tired of pretending it's not"

                                                                        AodeRelay boosted

                                                                        [?]Kim Crawley (she/her) 😷🍉 » 🌐
                                                                        @[email protected]

                                                                        What a great interview!

                                                                        @dwaynemonroe is a very thoughtful technologist, creative writer, arts appreciator, antifascist and anticapitalist.

                                                                        So of course, he opposes Gen AI. Gen AI is a poison to everything we love, and a benefit to everything we hate.

                                                                        Opposing Gen AI effectively requires fighting capitalism and working *outside* of the system, not within it.

                                                                        I name the names of liberals (liberals are rightwing) in the apparent Gen AI resistance who are actually helping Sam Altman by sucking the oxygen out of the actual anti Gen AI movement.

                                                                        You should check out the links to Monroe's writing. A great example of his work is when he wrote in The Nation about how harmful Gen AI will be in June 2022. That predates the launch of ChatGPT (the first publicly available "Gen AI") by about five months.

                                                                        stopgenai.com/anti-gen-ai-hero

                                                                        Public domain, formerly GCHQ copyrighted photo of the Colossus computer, the first electronic computer to operate east of the Atlantic Ocean. Colossus was designed to aid Alan Turing and the British Bletchley Park team in cracking ciphers used by the Nazis during World War II. Concurrently, ENIAC was being designed and built in the United States to make ballistics calculations for World War II. Research and development of both electronic computers, ENIAC in the US and Colossus in the UK, was conducted around the same time in the 1940s. But ENIAC wasn’t ready for operation until after the war ended. Both machines hold the title of world’s first electronic computer.

                                                                        Alt...Public domain, formerly GCHQ copyrighted photo of the Colossus computer, the first electronic computer to operate east of the Atlantic Ocean. Colossus was designed to aid Alan Turing and the British Bletchley Park team in cracking ciphers used by the Nazis during World War II. Concurrently, ENIAC was being designed and built in the United States to make ballistics calculations for World War II. Research and development of both electronic computers, ENIAC in the US and Colossus in the UK, was conducted around the same time in the 1940s. But ENIAC wasn’t ready for operation until after the war ended. Both machines hold the title of world’s first electronic computer.

                                                                          AodeRelay boosted

                                                                          [?]Metin Seven 🎨 » 🌐
                                                                          @[email protected]

                                                                          AodeRelay boosted

                                                                          [?]Lazarou Monkey Terror 🚀💙🌈 » 🌐
                                                                          @[email protected]

                                                                          0 ★ 0 ↺

                                                                          [?]Anthony » 🌐
                                                                          @[email protected]

                                                                          The US Democratic party has been abjectly failing in so many ways. This is such a sad and frustrating read on the failures with respect to cryptocurrency regulation: https://lpeproject.org/blog/inside-the-failure-to-regulate-stablecoins/

                                                                          What was and still is at stake is the balance between technology and humanity struck by the US federal government. The same incoherence that led us to where we are with crypto is almost surely also at play with the apparent lack of movement around generative AI.


                                                                            2 ★ 1 ↺
                                                                            #tech boosted

                                                                            [?]Anthony » 🌐
                                                                            @[email protected]

                                                                            Regarding last boost: "Firefox For Web Developers" is out here urging me to stop using Firefox.


                                                                              AodeRelay boosted

                                                                              [?]Veronica Olsen » 🌐
                                                                              @[email protected]

                                                                              A good piece on how GenAI is flooding the field. I too have worked with ML for a while and feel similarly.

                                                                              "Having done my PhD on AI language generation (long considered niche), I was thrilled we had come this far. But the awe I felt was rivaled by my growing rage at the flood of media takes and self-appointed experts insisting that generative AI could do things it simply can’t, and warning that anyone who didn’t adopt it would be left behind."

                                                                              technologyreview.com/2025/12/1

                                                                                AodeRelay boosted

                                                                                [?]Miro Collas » 🌐
                                                                                @[email protected]

                                                                                Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash | Business | The Guardian
                                                                                theguardian.com/australia-news

                                                                                Good!

                                                                                  AodeRelay boosted

                                                                                  [?]Marco Ciappelli🎙️✨:verified: :donor: » 🌐
                                                                                  @[email protected]

                                                                                  Hard to believe, but apparently some good stories can be told even without me. I guess I'm not the conditio sine qua nonfor a great conversation... a good guest is, and Sean Martin, CISSP can certainly hold the wheel.

                                                                                  I wasn't available, and I missed this one but I'm more than happy to help spread the word about yet another fantastic Brand Story told via Studio C60 / ITSPmagazine. This one's worth your attention.

                                                                                  Julian Hamood from TrustedTech joins Sean to talk about something most organizations are getting dangerously wrong: readiness.

                                                                                  Here's the uncomfortable truth—AI doesn't clean up your mess. It makes it louder. Faster. More confident-sounding. And potentially more damaging.

                                                                                  Data scattered across personal drives, legacy servers, and random SharePoint sites? AI will find it.

                                                                                  Inconsistent permissions nobody remembers setting up? AI will exploit them. Architectural spaghetti connecting clouds, on-prem systems, and platforms that were never meant to talk to each other? AI will inherit every flaw and present the results with the certainty of an oracle.

                                                                                  This conversation digs into what real readiness looks like—data classification, access controls, architectural clarity, and governance that doesn't get bypassed because someone in sales wanted a shiny new copilot.

                                                                                  Watch. Listen. Think before you deploy.

                                                                                  linkedin.com/pulse/ai-adoption

                                                                                  NRPR Group, Inc

                                                                                    AodeRelay boosted

                                                                                    [?]Scott Wilson » 🌐
                                                                                    @[email protected]

                                                                                    Love some of the lines from this AP article about

                                                                                    - “For technology adopters looking for the next big thing, “agentic AI” is the future. At least, that’s what the marketing pitches and tech industry T-shirts say.”

                                                                                    - “What makes an artificial intelligence product ‘agentic’ depends on who’s selling it.”

                                                                                    - “Chatbots, however useful, are all talk and no action.”

                                                                                    It’s all true!

                                                                                    apnews.com/article/agentic-ai-

                                                                                      AodeRelay boosted

                                                                                      [?]Liam @ GamingOnLinux 🐧🎮 » 🌐
                                                                                      @[email protected]

                                                                                      6 ★ 2 ↺
                                                                                      #tech boosted

                                                                                      [?]Anthony » 🌐
                                                                                      @[email protected]

                                                                                      Long post [SENSITIVE CONTENT]I've now had at least four people, two of whom self-identified as Mozilla employees, claim that the above list of AI features--which were suddenly and rapidly added over the last few releases of Firefox, and were "on" (true) by default--could easily be turned off by flipping one master kill switch. This is not true, but folks keep claiming it or suggesting it anyway.

                                                                                      Here's a post from an official Firefox Mastodon account suggesting such a master kill switch does not exist yet, but will be added in a future release:

                                                                                      https://mastodon.social/@firefoxwebdevs/115740500373677782

                                                                                      That's not as bad as it could be. It's bad they're stuffing AI into a perfectly good web browser for no apparent reason other than vibes or desperation. It's very bad if it's on by default; their dissembling post about it aside, opt-in has a reasonably clear meaning here: if there's a kill switch, then that kill switch should be off by default. But at least there will be a kill switch.

                                                                                      In any case, please stop responding to my post saying there's a master kill switch for Firefox's AI slop features. From the horse's mouth, and from user experience, there is not yet.

                                                                                      Furthermore, when there is a master kill switch, we don't know whether flipping it will maintain previous state of all the features it controls. In other words it's possible they'll have the master kill switch turn on all AI features when the switch is flipped to "on" or "true", rather than leaving them in whatever state you'd set them to previously. Perhaps you decide to turn the kill switch on because there are a handful of features you're comfortable with and you want to try them; will doing so mean that now all the AI features are on? We won't know till it's released and people try this. So, in the meantime, it's still good practice to keep an eye on all these configuration options if you want the AI off.


                                                                                        AodeRelay boosted

                                                                                        [?]Jonathan Kamens 86 47 » 🌐
                                                                                        @[email protected]

                                                                                        @firefoxwebdevs @Norgg This is nonsense equivocation.
                                                                                        It is 100% clear to anyone not trying to run cover for that multiple features have already been introduced into as opt-out rather than opt-in. This isn't questionable or debatable or complicated, it's simple fact.
                                                                                        You've given us no reason to believe this is going to change.
                                                                                        Trying to obfuscate this away in this thread makes it clear you're being disingenuous, whether or not you realize you are.

                                                                                          AodeRelay boosted

                                                                                          [?]Jonathan Kamens 86 47 » 🌐
                                                                                          @[email protected]

                                                                                          @firefoxwebdevs @Norgg Furthermore, opt-in isn't even enough.
                                                                                          It's not that we want it to be opt-in, we want it to not be there at all, because is bad for tech and bad for the people whose content is stolen and bad for culture and bad for the whole fucking world, and we want to take a stand for what is RIGHT, not jump on the catastrophically bad AI hype train and join every other company in the bubble.
                                                                                          Doing AI at all, opt-in or not, is doing the wrong thing.

                                                                                            AodeRelay boosted

                                                                                            [?]TechNadu » 🌐
                                                                                            @[email protected]

                                                                                            “AI shines wherever there’s high event volume and the need to aggregate weak signals into a meaningful picture.”
                                                                                            - Norman Gottschalk, Global CIO & CISO, Visionet Systems
                                                                                            This interview explores:
                                                                                            • AI-driven phishing and insider risk
                                                                                            • Governance gaps from shadow AI usage
                                                                                            • Why AI cannot judge intent without humans

                                                                                            Read more:
                                                                                            technadu.com/jack-of-all-trade

                                                                                            Jack of All Trades, Master of None: AI Excels Detection and Triage but Relies on Humans to Gauge Intent

                                                                                            Alt...Jack of All Trades, Master of None: AI Excels Detection and Triage but Relies on Humans to Gauge Intent

                                                                                              AodeRelay boosted

                                                                                              [?]azteclady » 🌐
                                                                                              @[email protected]

                                                                                              [?]zenkat » 🌐
                                                                                              @[email protected]

                                                                                              After reading this piece, it's hard not to see as a cult, and a deeply creepy one at that.

                                                                                              404media.co/anthropic-exec-for

                                                                                                6 ★ 3 ↺

                                                                                                [?]Anthony » 🌐
                                                                                                @[email protected]

                                                                                                Mozilla's new CEO is all-in on AI regardless of what Firefox users want: https://lwn.net/Articles/1050826/
                                                                                                Third: Firefox will grow from a browser into a broader ecosystem of trusted software. Firefox will remain our anchor. It will evolve into a modern AI browser and support a portfolio of new and trusted software additions.
                                                                                                He says the word "trust" a whole bunch of times yet intends to turn an otherwise nice web browser into a slop-slinging platform. I don't expect this will work out very well for anyone.

                                                                                                "It will evolve into a modern AI browser" sounds like a threat. Good way to start off on the right foot, new Mozilla CEO (sarcasm).


                                                                                                  AodeRelay boosted

                                                                                                  [?]Datenpunks e.V. » 🌐
                                                                                                  @[email protected]

                                                                                                  Dank @korporal und dem @textilkombinat gibt es jetzt T-Shirts mit einem Slop-Stoppschild drauf.

                                                                                                  textilkombin.at/produkt/t-shir

                                                                                                  ein Stoppschild wie im Straßenverkehr, auf dem aber statt STOP SLOP steht

                                                                                                  Alt...ein Stoppschild wie im Straßenverkehr, auf dem aber statt STOP SLOP steht

                                                                                                    AodeRelay boosted

                                                                                                    [?]Lazarou Monkey Terror 🚀💙🌈 » 🌐
                                                                                                    @[email protected]

                                                                                                    The British Public were consulted over the AI Bubble, 95% of respondents rejected it.
                                                                                                    But in the interests of "Balance", Government and Media will provide that tiny minority with equal coverage....like they don't with Trans people.

                                                                                                    Take your 'AI' and shove it!

                                                                                                    theguardian.com/technology/202

                                                                                                      AodeRelay boosted

                                                                                                      [?]JTI » 🌐
                                                                                                      @[email protected]

                                                                                                      RE: infosec.exchange/@jti42/115711

                                                                                                      A follow-up on the / / post - some hypothetical scenarios on how dealing with what they're trained on could possibly look like.
                                                                                                      Pure nay-sayers need not apply 😉

                                                                                                      AodeRelay boosted

                                                                                                      [?]JTI » 🌐
                                                                                                      @[email protected]

                                                                                                      @yoasif Many of the current frontier LLM models have indeed been built in doubtable ways from an intellectual property perspective. This is nevertheless not an inherent property of the technology LLM.
                                                                                                      Image generation models built from fully licensed training sets exist. At different price points, but they do exist.

                                                                                                      Given their popularity the publicly perceived utility of the frontier models created in criticizable ways is hard to negate. Open weight model or commercially sold one.
                                                                                                      Damage is done, perceived utility exists either way.

                                                                                                      We're not going to resolve this issue here, but consider this hypothetical idea:

                                                                                                      • Assume a list of IP that a commercial frontier model of the doubtable class has consumed could be created.
                                                                                                      • Now, let's say based on the size/complexity/other measure of the IP consumed every holder of the rights to said IP is awarded a monthly royalty payment based on the profits made with said model until provable retirement of said commercial model.Would this change your stance?

                                                                                                      Or consider a future build of such models where IP holders can submit their IP for inclusion for similar considerations, generating an opt-in model.

                                                                                                      How would the sentiment change if such a model would be fully open, i.e. completely reproducible (within the boundaries of the nondeterministic nature of LLM training) with training data, harness, etc. and licensed in an OSI approved way?

                                                                                                      What if such a model would be capable of attributing its output to what it referred to in the proper license compliant way? (Not possible with the current tech likely, but we're playing hypothetical games anyway...)

                                                                                                          [?]El Duvelle » 🌐
                                                                                                          @[email protected]

                                                                                                          Rant about genAI and genAI "disclaimers" [SENSITIVE CONTENT]

                                                                                                          has already wasted my time twice just today: I was looking how to fix in , notably the fact that it doesn't want to use a NTFS drive and also that the menu doesn't show up on click.

                                                                                                          First, looking for solution on the ZorinOS forum, I read through a long answer to a similar problem, thinking "that doesn't really solve anything but I'll keep reading just in case" - only to realise with a disclaimer at the end that it was AI-generated. Why do people do this??

                                                                                                          Second, I sent a help request to Dropbox and get an answer asking for some more info. As I was writing an answer to that I realised all the info it asked for was already in my original post.. And then saw, same thing, that this answer had been AI-generated with a disclaimer at the end of the post.

                                                                                                          If they have to give an AI-generated answer (why though??) they could at least put the disclaimer at the top so we don't waste our time reading the whole thing?? 🤦

                                                                                                            [?]IBBoard » 🌐
                                                                                                            @[email protected]

                                                                                                            Apparently Amazon puts so little value on its Prime Original series that they can't even be bothered to write summaries about the episodes. Instead, they were getting GenAI to do it.

                                                                                                            And it got stuff wrong. Obviously.

                                                                                                            I feel sorry for the production crew. Imagine putting in all that work and creativity only to be told that people watching it will be basing their expectations and understanding on a soulless, generic, averaged description of your story line 😬

                                                                                                            bbc.com/news/articles/c3r77j5n

                                                                                                              AodeRelay boosted

                                                                                                              [?]Brian Greenberg :verified: » 🌐
                                                                                                              @[email protected]

                                                                                                              We keep talking about GenAI as if it lives in clouds and servers. In reality, it lives in browser tabs, copy fields, and file uploads. And that’s exactly where most security strategies go blind. Oh, and banning AI will never work. Productivity always finds a way around policy. The smarter move is to govern behavior where it actually happens: inside the browser, at the moment data leaves your hands. Clear policies define what should never touch a prompt. Isolation keeps sensitive work separate from exploratory AI use. And real-time controls on copy, paste, and upload turn GenAI from a risk multiplier into a managed capability. If your GenAI security strategy starts after data leaves the browser, you’re already late to the conversation.

                                                                                                              TL;DR
                                                                                                              🧠 GenAI risk lives in browser behavior, not models
                                                                                                              ⚡ Blocking AI fails; governed use scales
                                                                                                              🎓 Policy, isolation, and edge controls actually work
                                                                                                              🔍 Secure prompts before data ever leaves

                                                                                                              thehackernews.com/2025/12/secu

                                                                                                                [?]jbz » 🌐
                                                                                                                @[email protected]

                                                                                                                :illyapanic: The Street Fighter movie trailer looks completely made by AI.

                                                                                                                But what truly terrifies me is that's probably not even true, instead they went out of their way to make it look like AI slop on purpose, for REASSONSSS that my non-american brain can't/won't fully comprehend.

                                                                                                                youtube.com/watch?v=tV2qoDVnfxs

                                                                                                                  [?]R.L. Dane :Debian: :OpenBSD: :FreeBSD: 🍵 :MiraLovesYou: » 🌐
                                                                                                                  @[email protected]

                                                                                                                  Excellent post by @[email protected]:

                                                                                                                  > myrmepropagandist @futurebird 2025-12-11 06:31 CST
                                                                                                                  >
                                                                                                                  > This is an excellent video. This is the message. Perhaps we need to refine it
                                                                                                                  > more. Find ways to communicate it more clearly. But this is the correct take on
                                                                                                                  > LLMs, so-called-AI and the proliferation of these tools to the general public.
                                                                                                                  > #LLM #llms #ai #genAI #video #slop #slopocalypse #enshittification
                                                                                                                  >
                                                                                                                  > https://www.youtube.com/watch?v=4lKyNdZz3Vw

                                                                                                                  https://sauropods.win/users/futurebird/statuses/115700943703010093

                                                                                                                  I'm about 11 minutes into it. He's not in favor of LLM slop, but he's also being very critical of some of the hair-on-fire alarmism.

                                                                                                                    [?]Elen Le Foll 🇫🇷 🇬🇧 🇩🇪 » 🌐
                                                                                                                    @[email protected]

                                                                                                                    ReproducibiliTea in the HumaniTeas is proud to present its fourth season with an exciting programme of guest speakers and topics ranging from reproducible pipelines to data sharing, research integrity in the context of and participant consent: ub.uni-koeln.de/en/courses-con! 🫖 🍪

                                                                                                                    ReproducibiliTea in the HumaniTeas with an image of a glass teapot and a cup of tea
Come and discuss Open Science practices over a cup of tea and biscuits!
Thursdays 4:00-5.30 p.m.
Room 4.006, USB and via Zoom
Full schedule and mailing list available at the link given in the post

                                                                                                                    Alt...ReproducibiliTea in the HumaniTeas with an image of a glass teapot and a cup of tea Come and discuss Open Science practices over a cup of tea and biscuits! Thursdays 4:00-5.30 p.m. Room 4.006, USB and via Zoom Full schedule and mailing list available at the link given in the post

                                                                                                                      [?]Elen Le Foll 🇫🇷 🇬🇧 🇩🇪 » 🌐
                                                                                                                      @[email protected]

                                                                                                                      Join us TODAY 16:00-17:30 CET for a special hybrid ReproducibiliTea in the HumaniTeas session on research integrity and in the age of with guest speaker @dingemansemark. The recommended (but optional!) preparatory reading is: osf.io/preprints/osf/2c48n_v1.

                                                                                                                      Join us at @unibibkoeln (room 4.06) for a selection of tea and Christmas sweets or DM me for the Zoom link.


                                                                                                                      er

                                                                                                                      Photo of Mark from his personal website. Mark is wearing a blue shirt and black glasses and smiling at the camera.

                                                                                                                      Alt...Photo of Mark from his personal website. Mark is wearing a blue shirt and black glasses and smiling at the camera.

                                                                                                                      Poster featuring a glass teapot and cup of tea with full winter semester schedule as listed on our homepage linked in the post.

                                                                                                                      Alt...Poster featuring a glass teapot and cup of tea with full winter semester schedule as listed on our homepage linked in the post.

                                                                                                                        AodeRelay boosted

                                                                                                                        [?]J. R. DePriest :verified_trans: :donor: :Moopsy: :EA DATA. SF: » 🌐
                                                                                                                        @[email protected]

                                                                                                                        Compare and contrast

                                                                                                                        Pluralistic: Daily links from Cory Doctorow vs. The rise of AI denialism

                                                                                                                        Now I want to talk about how they're selling AI. The growth narrative of AI is that AI will disrupt labor markets. I use "disrupt" here in its most disreputable, tech bro sense the promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.
                                                                                                                        That's it.
                                                                                                                        That's the $13T growth story that MorganStanley is telling. It's why big investors and institutionals are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family's financial security.
                                                                                                                        Now, if AI could do your job, this would still be a problem. We'd have to figure out what to do with all these technologically unemployed people.
                                                                                                                        But AI can't do your job. It can help you do your job, but that doesn't mean it's going to save anyone money.

                                                                                                                        vs

                                                                                                                        So why has the public latched onto the narrative that AI is stalling, that the output is slop, and that the AI boom is just another tech bubble that lacks justifiable use-cases? I believe it’s because society is collectively entering the first stage of grief — denial — over the very scary possibility that we humans may soon lose cognitive supremacy to artificial systems. Believe me, I know this future is hard to accept. I’ve been writing about the destabilizing and demoralizing risks of superintelligence for well over a decade, and I also feel overwhelmed by the changes racing toward us.
                                                                                                                        Why does AI advancement feel so different than other technologies? Eighty-two years ago, philosopher Ayn Rand wrote these three simple sentences: “Man cannot survive except through his mind. He comes on earth unarmed. His brain is his only weapon.” For me, these words summarize our self-image as humans — we are the superintelligent species. This is the basis of our success and survival. And yet, we could soon find ourselves intellectually outmatched by widely available AI models that can outthink us on all fronts, solving problems infinitely faster, more accurately, and yes, more creatively than any human could.

                                                                                                                        or

                                                                                                                        Art is a transfer of a big, numinous, irreducible feeling from an artist to someone else. But the image-gen program doesn't know anything about your big, numinous, irreducible feeling. The only thing it knows is whatever you put into your prompt, and those few sentences are diluted across a million pixels or a hundred thousand words, so that the average communicative density of the resulting work is indistinguishable from zero.
                                                                                                                        It's possible to infuse more communicative intent into a work: writing more detailed prompts, or doing the selective work of choosing from among many variants, or directly tinkering with the AI image after the fact, with a paintbrush or Photoshop or The Gimp. And if there will every be a piece of AI art that is good art – as opposed to merely striking, or interesting, or an example of good draftsmanship – it will be thanks to those additional infusions of creative intent by a human.
                                                                                                                        And in the meantime, it's bad art. It's bad art in the sense of being "eerie," the word Mark Fisher uses to describe "when there is something present where there should be nothing, or is there is nothing present when there should be something."
                                                                                                                        AI art is eerie because it seems like there is an intender and an intention behind every word and every pixel, because we have a lifetime of experience that tells us that paintings have painters, and writing has writers. But it's missing something. It has nothing to say, or whatever it has to say is so diluted that it's undetectable.
                                                                                                                        The images were striking before we figured out the trick, but now they're just like the images we imagine in clouds or piles of leaves. We're the ones drawing a frame around part of the scene, we're the ones focusing on some contours and ignoring the others. We're looking at an inkblot, and it's not telling us anything.
                                                                                                                        Sometimes that can be visually arresting, and to the extent that it amuses people in a community of prompters and viewers, that's harmless.

                                                                                                                        vs.

                                                                                                                        On the creativity front, there is no doubt that today’s AI models can produce content faster and more varied than any human. The primary argument against AI being “creative” is the belief that true creativity requires inner motivation, not just the production of novel artifacts. I appreciate this argument, but find it circular because it defines a process based on how we experience it, not based on the qualitative value of the output. In addition, we have little reason to believe AI systems will lack motivation — we simply don’t know whether AI will ever experience intentions through an inner sense of self the way humans do.
                                                                                                                        As a result, many researchers say that AI will only be good at imitating human creativity rather than having it. This could turn out to be correct. But if AI can produce original work that rivals or exceeds most humans, it will still take away jobs and opportunities on a large scale; just ask any commercial artist. Also, there is the argument that AI systems only create derivative works based on human artifacts. This is a fair point, but it is also true of humans: We all stand on the shoulders of others, our work influenced by everything we consume. I believe AI is headed for a similar form of creativity — societal influence mixed with random sparks of inspiration, and it will occur at superhuman speeds and scales.

                                                                                                                        or

                                                                                                                        Think of AI software generation: there are plenty of coders who love using AI, and almost without exception, they are senior, experienced coders, who get to decide how they will use these tools. For example, you might ask the AI to generate a set of CSS files to faithfully render a web-page across multiple versions of multiple browsers. This is a notoriously fiddly thing to do, and it's pretty easy to verify if the code works – just eyeball it in a bunch of browsers. Or maybe the coder has a single data file they need to import and they don't want to write a whole utility to convert it.
                                                                                                                        Tasks like these can genuinely make coders more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it's clear they're not looking to make some centaurs.
                                                                                                                        They want to fire a lot of tech workers – 500,000 over the past three years – and make the rest pick up their work with coding, which is only possible if you let the AI do all the gnarly, creative problem solving, and then you do the most boring, soul-crushing part of the job: reviewing the AIs' code.
                                                                                                                        And because AI is just a word guessing program, because all it does is calculate the most probable word to go next, the errors it makes are especially subtle and hard to spot, because these bugs are literally statistically indistinguishable from working code (except that they're bugs).
                                                                                                                        ...
                                                                                                                        But guess who tech bosses want to preferentially fire and replace with AI? Senior coders. Those mouthy, entitled, extremely highly paid workers, who don't think of themselves as workers. Who see themselves as founders in waiting, peers of the company's top management. The kind of coder who'd lead a walkout over the company building drone-targeting systems for the Pentagon, which cost Google ten billion dollars in 2018.
                                                                                                                        For AI to be valuable, it has to replace high-wage workers, and those are precisely the experienced workers, with process knowledge, and hardwon intuition, who might spot some of those statistically camouflaged AI errors.

                                                                                                                        vs.

                                                                                                                        To put the rate of change in perspective, let’s jump back five years and look at a large-scale survey given to computer scientists in late 2019 and early 2020. Participants were asked to predict when AI would be able to generate original code to solve a problem. Specifically, they were asked to predict when AI would be able to “write concise, efficient, human-readable Python code to implement simple algorithms like Quicksort.” In the world of programming, students are taught to do this as undergrads, so it’s not a particularly high bar. Still, the respondents predicted a 75% chance this would happen by 2033.
                                                                                                                        It turns out, AI advanced much faster than expected. Today, large language models can already write computer code at levels that go far beyond the question asked in the 2020 survey. This summer, for example, GPT-5 and Gemini 2.5 Pro took part in the World Finals of the 2025 International Collegiate Programming Contest (ICPC). The competition brings together coding teams from top universities to compete in solving complex algorithmic questions. GPT-5 came in first, beating all human teams and scoring a perfect score. Gemini 2.5 Pro came in second. And yet we have countless influencers referring to the output of these very same AI systems as “slop.”
                                                                                                                        Of course, current AI coding systems are far from flawless, but today’s capabilities were unimaginable by most AI professionals only five years ago. Also, we can’t forget that human coders are far from flawless. Perfection is not the metric we use to judge software development. This is why we have whole departments devoted to testing and quality control. When done by humans, coding is always an iterative process where you expect to produce errors, find errors, and fix errors. The same is true for many human endeavors. If you could read the first draft of any Pulitzer Prize-winning article, it’d likely be riddled with flaws that would make the author cringe. This is how we humans produce quality work — iterative refinement — and yet we judge AI systems by very different standards.

                                                                                                                          [?]Stephan (☎️3753) » 🌐
                                                                                                                          @[email protected]

                                                                                                                          Besides the fact that Google has made the worst expectations come true after erasing the «no weapons» promise from their AI ethics policy: Anyone else noticed that he put a insecure HTTP URL in his Tweet?

                                                                                                                          I wonder how many soldiers are now entering state secrets into Chinese and Russian phishing sites?


                                                                                                                          @davidgerard CC

                                                                                                                          Secretary of War Pete Hegseth's tweet with insecure HTTP URL.

                                                                                                                          Alt...Secretary of War Pete Hegseth's tweet with insecure HTTP URL.

                                                                                                                            1 ★ 0 ↺

                                                                                                                            [?]Anthony » 🌐
                                                                                                                            @[email protected]

                                                                                                                            The wrongness density is approaching critical today:

                                                                                                                            Headline: The military’s new AI says ‘hypothetical’ boat strike scenario ‘unambiguously illegal’

                                                                                                                            https://san.com/cc/the-militarys-new-ai-says-hypothetical-boat-strike-scenario-unambiguously-illegal/

                                                                                                                            cc @[email protected]

                                                                                                                              [?]myrmepropagandist » 🌐
                                                                                                                              @[email protected]

                                                                                                                              This is an excellent video. This is the message. Perhaps we need to refine it more. Find ways to communicate it more clearly. But this is the correct take on LLMs, so-called-AI and the proliferation of these tools to the general public.

                                                                                                                              youtube.com/watch?v=4lKyNdZz3Vw

                                                                                                                                AodeRelay boosted

                                                                                                                                [?]Brett » 🌐
                                                                                                                                @[email protected]

                                                                                                                                This is why all the AI companies are pushing hard into having sext bots and generative adult content.

                                                                                                                                Their ONLY path to profitability is to get wired into your brain’s dopamine production even more than they already are for some people.

                                                                                                                                People will pay anything for their AI partner to keep existing and will pay even more if that AI partner can directly get them off.

                                                                                                                                We’re talking a level of technological dependence that the current round of AI psychosis cases could only dream of.

                                                                                                                                  AodeRelay boosted

                                                                                                                                  [?]Brett » 🌐
                                                                                                                                  @[email protected]

                                                                                                                                  Humanity is cooked the instant you can have a generative AI romantic partner and affordable equipment necessary for your AI partner to believably fuck you.

                                                                                                                                    AodeRelay boosted

                                                                                                                                    [?]David Haigh » 🌐
                                                                                                                                    @[email protected]

                                                                                                                                    AodeRelay boosted

                                                                                                                                    [?]David Haigh » 🌐
                                                                                                                                    @[email protected]

                                                                                                                                    No

                                                                                                                                    ChatGPT needs to boost its metrics. Isn't that sweet?

                                                                                                                                    Alt...ChatGPT needs to boost its metrics. Isn't that sweet?

                                                                                                                                      10 ★ 5 ↺

                                                                                                                                      [?]Anthony » 🌐
                                                                                                                                      @[email protected]

                                                                                                                                      The other day I had the intrusive thought
                                                                                                                                      AI is intellectual Viagra
                                                                                                                                      and it hasn't left me so I am exorcising it here. I'm sorry in advance for any pain this might cause.


                                                                                                                                        AodeRelay boosted

                                                                                                                                        [?]TechNadu » 🌐
                                                                                                                                        @[email protected]

                                                                                                                                        Nokod Security CEO Yair Finzi warns that internal attack surfaces created by citizen-built apps and AI agents now exceed traditional external threats.

                                                                                                                                        “The single biggest risk now is the unmanaged internal attack surface created by citizen-built apps and AI agents.”

                                                                                                                                        Full interview:
                                                                                                                                        technadu.com/understanding-cit

                                                                                                                                        Understanding Citizen Application Development Platforms, Their Security Risks, and the Rise of Gen AI

                                                                                                                                        Alt...Understanding Citizen Application Development Platforms, Their Security Risks, and the Rise of Gen AI

                                                                                                                                          23 ★ 24 ↺

                                                                                                                                          [?]Anthony » 🌐
                                                                                                                                          @[email protected]

                                                                                                                                          For anyone tracking what's going on with generative AI appearing in the eBook software calibre, the calibre developer seems to be asking us to avoid his software:

                                                                                                                                          In a GitHub issue about adding LLM features:

                                                                                                                                          I definitely think allowing the user to continue the conversation is useful. In my own use of LLMs I tend to often ask followup questions, being able to do so in the same window will be useful.
                                                                                                                                          In other words he likes LLMs and uses them himself; he's probably not adding these features under pressure from users. I can't help but wonder whether there's vibe code in there.


                                                                                                                                          In the bug report:

                                                                                                                                          Wow, really! What is it with you people that think you can dictate what I choose to do with my time and my software? You find AI offensive, dont use it, or even better, dont use calibre, I can certainly do without users like you. Do NOT try to dictate to other people what they can or cannot do.
                                                                                                                                          "You people", also known as paying users. He's dismissive of people's concerns about generative AI, and claims ownership of the software ("my software"). He tells people with concerns to get lost, setting up an antagonistic, us-versus-them scenario. We even get scream caps!

                                                                                                                                          Personally, besides the fact that I have a zero tolerance policy about generative AI, I've had enough of arrogant software developers. Read the room.


                                                                                                                                            12 ★ 12 ↺

                                                                                                                                            [?]Anthony » 🌐
                                                                                                                                            @[email protected]

                                                                                                                                            Definitely read this whole thread about the eBook manager calibre adding AI slop to "chat with books", and why that's a horrible move that immediately destroys trust in calibre. Here are some highlights I especially appreciated:
                                                                                                                                            Here, Calibre, in one release, went from a tool readers can use to, well, read, to a tool that fundamentally views books as textureless content, no more than the information contained within them. Anything about presentation, form, perspective, voice, is irrelevant to that view. Books are no longer art, they're ingots of tin to be melted down.
                                                                                                                                            It is completely irrelevant to me whether this new slopware is opt-in or opt-out. Its mere presence and endorsement fundamentally undermines that stance, that it is good, actually, if readers and authors can exist in relationship to each other without also being under the control of a extractive mindset that sees books as mere vehicles, unimportant as artistic works in and of themselves.
                                                                                                                                            https://wandering.shop/@xgranade/115671289658145064


                                                                                                                                              1 ★ 1 ↺

                                                                                                                                              [?]Anthony » 🌐
                                                                                                                                              @[email protected]

                                                                                                                                              So-called "generative" AI is the opposite of generative. The word "generative" in the name "generative AI" is a piece of jargon that is, even then, arguably misused in some applications. But as an anti-imagination, remix-only technology, it's just not generative at all, and cannot be. On top of this it impedes creative expression in multiple ways: co-opting it, devaluing it, directing energy away from it, etc.


                                                                                                                                                AodeRelay boosted

                                                                                                                                                [?]Wulfy—Speaker to the machines » 🌐
                                                                                                                                                @[email protected]

                                                                                                                                                @simon_brooke @LillyHerself

                                                                                                                                                Marking work with mandated Provence icons is one of the few means left to us to and the corrupt "representatives" are NOT going to comply with people requests for the same reason they did nothing about similar momentum to mark Photoshopped press images.

                                                                                                                                                Ideally you want;
                                                                                                                                                🙍‍♀️= Content fully created by a human
                                                                                                                                                🛠️ = Some automation used
                                                                                                                                                🤖 = Fully machine generated

                                                                                                                                                  AodeRelay boosted

                                                                                                                                                  [?]TinJar » 🌐
                                                                                                                                                  @[email protected]

                                                                                                                                                  theguardian.com/technology/ng-

                                                                                                                                                  Then how come we still get this from ? WTF is going on with these people??

                                                                                                                                                  Screenshot of Google's AI giving a wrong answer to the simple Q "Is 2026 next year?"

                                                                                                                                                  Alt...Screenshot of Google's AI giving a wrong answer to the simple Q "Is 2026 next year?"

                                                                                                                                                    1 ★ 4 ↺

                                                                                                                                                    [?]Anthony » 🌐
                                                                                                                                                    @[email protected]

                                                                                                                                                    Before the automobile industry invented the catalytic converter, the costs of reducing air pollution seemed astronomical, enough to bankrupt the entire industry. After they invented the catalytical converter, the costs were manageable. And they only invented it because they were faced with the threat of being shut down.
                                                                                                                                                    Industries creating harms often claim that controls and regulations are impossible, would bankrupt them, etc., trying to make their existence into a zero-sum game (for some people to have the benefit of our industry, other people must suffer). AI companies claim they must steal copyrighted works because they could not exist otherwise; or be allowed to use as much electricity as they demand in spite of the costs. But it's B.S., and we should stop accepting this rhetoric. Forced to innovate to reduce harms, these industries have innovated, and made themselves even more profitable than they were when they were dragging their feet about it like children who don't want to clean their rooms.

                                                                                                                                                    From Is Climate Change An Externality


                                                                                                                                                      0 ★ 0 ↺

                                                                                                                                                      [?]Anthony » 🌐
                                                                                                                                                      @[email protected]

                                                                                                                                                      This one is particularly shameful to me because my PhD is from Brandeis:

                                                                                                                                                      https://www.brandeis.edu/online/academics/microcredentials/index.html

                                                                                                                                                      Note the two "credentials" are AI for STEM and prompt engineering.

                                                                                                                                                      Nobody has any business calling these things "credentials", or suggesting they are credentials by naming them "microcredentials". You can't go to your boss waving your prompt engineering microcredential and get a raise, or even a microraise for that matter. You can't access more competitive jobs after obtaining one of these microcredentials.

                                                                                                                                                      If they were serious about credentialing people, these would definitely not be the first two made available. They couldn't be more transparently venal. This isn't about credentialing, it's an attempt to profit from the AI bubble.


                                                                                                                                                        AodeRelay boosted

                                                                                                                                                        [?]TinJar » 🌐
                                                                                                                                                        @[email protected]

                                                                                                                                                        prospect.org/2025/11/27/beyond

                                                                                                                                                        South-South relationships etc. is great. The only problem is that the "north" is actively trying to burn everything down - case in point is driving ; raising interest rates; pouring insane cash into and ; being generally crazily profligate.

                                                                                                                                                          5 ★ 5 ↺
                                                                                                                                                          St. Chris boosted

                                                                                                                                                          [?]Anthony » 🌐
                                                                                                                                                          @[email protected]

                                                                                                                                                          The rhetoric that limiting or banning AI/generative AI/LLM/diffusion model use is "ableist" or "gatekeeping" is the latest desperate attempt to find an angle through which to force this technology into our lives against our collective will. We need to reject this narrative. Common as it is it simply doesn't scan. It reads to me as an attempt to co-opt the language of social justice to shame people into accepting an unjust and largely failing technology that they are rightfully rejecting.

                                                                                                                                                          Think it through. If you don't accept the use of climate-destroying, electricity-and-fresh-water-sapping, job-destroying, economy-thrashing--and yet mediocre or poorly performing!--technology created by multi-trillion-dollar sociopathic entities, then you are preventing people with less privilege than you have from living their best lives. You are preventing them from learning how to code. You are preventing them from obtaining coveted jobs in the tech sector. You are preventing them from having access to information. You, personally, are responsible for all this. Not the multi-trillion-dollar sociopathic entities who've not only created this technology and forced it on us but contributed to creating the less-privileged conditions of the people you are supposedly responsible for with your individual choices. Not the governments that neglected to enforce existing laws that would have prevented such multi-trillion-dollar sociopathic entities from forming in the first place, let alone creating such a technology--while also creating the conditions that led to people being less privileged. No, they are not responsible. You are. I am.

                                                                                                                                                          That doesn't make any sense.

                                                                                                                                                          Neoliberalism's greatest trick has been to shift responsibility for any problems away from the powerful and onto individuals who are not empowered to fix anything, all while convincing everyone that this is right and proper. Large corporations do not cause a plastic pollution problem; you and I do, by not separating our recycling. Large corporations, governments and militaries do not cause CO2 pollution and climate damage; you and I do, by using incandescent lightbulbs and non-electric/non-hybrid cars or eating meat. Lack of regulation and large agribusiness practices are not to blame for poor food quality; you and I are, for buying what they sell instead of going organic and joining a CSA. Etc. ad infinitum. Large, powerful entities routinely generate a problem, then tell you and me that we are responsible for the problem as well as for fixing it. Never mind that these entities could nudge their own behavior a bit and move the needle on the problem far more than masses of people could no matter how organized they were. Never mind that these entities could be constrained from causing such problems in the first place.

                                                                                                                                                          We are watching a new variation of this pattern come into being right in front of our eyes with AI. We should stop accepting these fictions. You are neither ableist nor a gatekeeper for resisting AI. You are, instead, attempting to forestall the further degradation of conditions for everyone, which starts this same cycle anew.


                                                                                                                                                            2 ★ 3 ↺

                                                                                                                                                            [?]Anthony » 🌐
                                                                                                                                                            @[email protected]

                                                                                                                                                            Mel Andrews on the connections between a naive belief in scientific objectivity (facts and data are "real" and "correct" and "neutral") and eugenics:
                                                                                                                                                            Francis Galton, pioneering figure of the eugenics movement, believed that good research practice should consist in “gathering as many facts as possible without any theory or general principle that might prejudice a neutral and objective view of these facts” (Jackson et al., 2005). Karl Pearson, statistician and fellow purveyor of eugenicist methods, approached research with a similar ethos: “theorizing about the material basis of heredity or the precise physiological or causal significance of observational results, Pearson argues, will do nothing but damage the progress of the science” (Pence, 2011). In collaborative work with Pearson, Weldon emphasised the superiority of data-driven methods which were capable of delivering truths about nature “without introducing any theory” (Weldon, 1895).
                                                                                                                                                            From The Immortal Science of ML: Machine Learning & the Theory-Free Ideal.

                                                                                                                                                            I've lost the reference, but I suspect it was Meredith Whittaker who's written and spoken about the big data turn at Google, where it was understood that having and collecting massive datasets allowed them to eschew model-building.

                                                                                                                                                            The core idea being critiqued here is that there's a kind of scientific view from nowhere: a theory-free, value-free, model-free, bias-free way of observing the world that will lead to Truth; and that it's the task of the scientist to approximate this view from nowhere as well as possible.


                                                                                                                                                              3 ★ 4 ↺
                                                                                                                                                              Anais boosted

                                                                                                                                                              [?]Anthony » 🌐
                                                                                                                                                              @[email protected]

                                                                                                                                                              The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.
                                                                                                                                                              From The reanimation of pseudoscience in machine learning and its ethical repercussions here: https://www.cell.com/patterns/fulltext/S2666-3899(24)00160-0. It's open access.

                                                                                                                                                              In other words ML--which includes generative AI--is smuggling long-disgraced pseudoscientific ideas back into "respectable" science, and rejuvenating the harms such ideas cause.


                                                                                                                                                                5 ★ 8 ↺
                                                                                                                                                                St. Chris boosted

                                                                                                                                                                [?]Anthony » 🌐
                                                                                                                                                                @[email protected]

                                                                                                                                                                Massive compute power applied to massive data sets can produce outcomes that are worse at the task they’re (ostensibly) intended for than much simpler, easier to understand, less wasteful, and less intrusive data-light methods. It requires an extreme form of bias to believe that big compute + big data is always better.


                                                                                                                                                                  AodeRelay boosted

                                                                                                                                                                  [?]Angela Miller » 🌐
                                                                                                                                                                  @[email protected]

                                                                                                                                                                  Dear God what is this book?

                                                                                                                                                                  Front cover of Usborne children's non fiction book, AI for Beginners.  Has questionable questions on front cover like 'Can AI make people smarter?' and 'Is a drone filming you?'

                                                                                                                                                                  Alt...Front cover of Usborne children's non fiction book, AI for Beginners. Has questionable questions on front cover like 'Can AI make people smarter?' and 'Is a drone filming you?'

                                                                                                                                                                  Two page spread about Killer Robots. 
Cute illustrations,  explanation of how "countries ' can use AI for targeting and all very sanitised....

                                                                                                                                                                  Alt...Two page spread about Killer Robots. Cute illustrations, explanation of how "countries ' can use AI for targeting and all very sanitised....

                                                                                                                                                                    AodeRelay boosted

                                                                                                                                                                    [?]Sean O'Brien » 🌐
                                                                                                                                                                    @[email protected]

                                                                                                                                                                    The weekend theme is + coding! A friend sent me this piece that mentions my comments on licensing and .

                                                                                                                                                                    LLMs have a place in our @ivycyber business but it's a very carefully controlled one, and not in the codebase. psafe.ly/TRySSC

                                                                                                                                                                    coding

                                                                                                                                                                    Alt...coding

                                                                                                                                                                      [?]Simon 🐮:spot: » 🌐
                                                                                                                                                                      @[email protected]

                                                                                                                                                                      What the fuck am I supposed to do with a button? trash enforced by ...
                                                                                                                                                                      Does anyone know of a safe way to get the icon off of an MSI Venture laptop keyboard?
                                                                                                                                                                      Or maybe a way to pull the key off and replace it?

                                                                                                                                                                        [?]Dr Pen » 🌐
                                                                                                                                                                        @[email protected]

                                                                                                                                                                        Radical action via Simple Sabotage/Malicious Compliance techniques.

                                                                                                                                                                        It's scorchio.

                                                                                                                                                                        From Dan McQuillan
                                                                                                                                                                        danmcquillan.org/resisting_gen

                                                                                                                                                                        Dan McQuillan is a Senior Lecturer in Critical AI. He has a degree in Physics from Oxford and a PhD in Experimental Particle Physics from Imperial.

                                                                                                                                                                          AodeRelay boosted

                                                                                                                                                                          [?]El Duvelle » 🌐
                                                                                                                                                                          @[email protected]

                                                                                                                                                                          Evolution of the trash icon over the years

                                                                                                                                                                          (just slightly modified from @catsalad 's infosec.exchange/@catsalad/115)

                                                                                                                                                                          Screenshot of an image entitled "The evolution of the trash icon" showing a list of the different trash icons that existed under Windows (I think) from years 1995 - 2015. The last icon (2025) has been replaced (oops) by a Copilot Icon

                                                                                                                                                                          Alt...Screenshot of an image entitled "The evolution of the trash icon" showing a list of the different trash icons that existed under Windows (I think) from years 1995 - 2015. The last icon (2025) has been replaced (oops) by a Copilot Icon

                                                                                                                                                                            [?]Dave Rahardja » 🌐
                                                                                                                                                                            @[email protected]

                                                                                                                                                                            I’ve been struggling to find acceptable uses of generative AI. So far, here are some use cases that I find *tolerable* today:

                                                                                                                                                                            - Generating descriptions of images for accessibility or handsfree communication
                                                                                                                                                                            - Text to speech conversion
                                                                                                                                                                            - Video captioning
                                                                                                                                                                            - Creating chapter markers in videos/video transcription
                                                                                                                                                                            - Generating code templates using freeform descriptions
                                                                                                                                                                            - Detecting subjects/background in photos for editing
                                                                                                                                                                            - Erasing minor distractions in photos
                                                                                                                                                                            - Searching photo libraries with freeform descriptions
                                                                                                                                                                            - Using models as stochastic, inaccurate internet search engines†

                                                                                                                                                                            Tolerable use cases must be low stakes with minor consequences for error, and have substantial convenience benefits.

                                                                                                                                                                            Even these use cases are contingent on the ethical training‡ of the models, and responsible use of resources.

                                                                                                                                                                            † I still struggle with this one because I only use genAI to search when traditional search fails me, and the results are only *sometimes* ok.

                                                                                                                                                                              AodeRelay boosted

                                                                                                                                                                              [?]Proto Himbo Derpopean » 🌐
                                                                                                                                                                              @[email protected]

                                                                                                                                                                              spoilers: Brightness Falls from the Air by James Tiptree, Jr. [SENSITIVE CONTENT]

                                                                                                                                                                              As a young adult I read Brightness Falls from the Air by James Tiptree, Jr. I don't think I've read anything else by her, but this one has stayed with me more than most of the one billion books I consumed like Corn Pops as a youngperson.

                                                                                                                                                                              It's a murder mystery and, IIRC, a pretty good one. More than that, the context--I think maybe an important part of the mystery resolution, hence the spoiler protection on this post--is that there is a cosmetic product valued across the galaxy, which turns out to be made by literally torturing some innocent native people and harvesting their tears, or something very much like that.

                                                                                                                                                                              Anyway, I've been reading about how "convenient" GenAI is for so many people and how "smooth" it makes so many daily tasks for millions of folks, while the backend is an environment-pollluting, labor-degrading, economy-lurching, internet-crappifying monstrosity.

                                                                                                                                                                              It seemed relevant.

                                                                                                                                                                                3 ★ 1 ↺

                                                                                                                                                                                [?]Anthony » 🌐
                                                                                                                                                                                @[email protected]

                                                                                                                                                                                This Thanksgiving I am celebrating the AI revolution by putting the turkey, mashed potatoes, cranberry sauce, and bean casserole through a blender and serving Generative Alimentary Infusion to all my guests.


                                                                                                                                                                                  2 ★ 2 ↺

                                                                                                                                                                                  [?]Anthony » 🌐
                                                                                                                                                                                  @[email protected]

                                                                                                                                                                                  David Dayen at The American Prospect has a decent explainer article The AI Bubble Is Bigger Than You Think. It gets into the so-called "private credit" mechanisms being abused to inflate this bubble, and why they are so dangerous. This stuff used to be called "shadow banking" because private entities that are not chartered banks are essentially providing banking services to other private entities. What happens if there's a panic and the equivalent of a bank run? Who knows!
                                                                                                                                                                                  Silicon Valley and Wall Street are in sync: conjuring up sketchy credit deals that are pointing us toward another financial crash.
                                                                                                                                                                                  ...
                                                                                                                                                                                  “We have sealed the deal on another financial crisis—the question is size,” said one former congressional staffer.

                                                                                                                                                                                    3 ★ 1 ↺

                                                                                                                                                                                    [?]Anthony » 🌐
                                                                                                                                                                                    @[email protected]

                                                                                                                                                                                    The Verge article about CoreWeave by Elizabeth Lopatto is amazing.
                                                                                                                                                                                    Let’s start with some very recent history. CoreWeave is a data center company that pivoted in 2022 from crypto. (In 2021, CoreWeave made its money by… mining Ethereum.) Essentially, CoreWeave is a landlord for compute: companies pay for the use of its server racks for AI projects.
                                                                                                                                                                                    ...
                                                                                                                                                                                    CoreWeave chief executive officer Michael Intrator, a former hedge fund manager,
                                                                                                                                                                                    ...
                                                                                                                                                                                    “They have to continue to borrow to pay interest on the last loan.”
                                                                                                                                                                                    So,
                                                                                                                                                                                    - CoreWeave sits at the center of the AI bubble;
                                                                                                                                                                                    - it used to be a crypto company and also gets its (electric) power from a Bitcoin mining company that makes no money and has CoreWeave as its only customer
                                                                                                                                                                                    - it's positioned itself as a rentier;
                                                                                                                                                                                    - its interest payments on previous loans exceed its revenue by a significant amount, so it's paying off loans with more loans and has already defaulted once;
                                                                                                                                                                                    - it has essentially two customers, Microsoft and NVIDIA;
                                                                                                                                                                                    - it has a loan from one of the actors implicated in the 2008 financial crash (Magnetar)
                                                                                                                                                                                    - it's run by a finance guy, not a tech person
                                                                                                                                                                                    - yet it's in the position of someone who takes out a new credit card to pay the interest on the previous credit card

                                                                                                                                                                                    Yeah. Looks like crypto, and crypto's Ponzi scheme way of thinking, has slimed its way into the "real" economy after all.

                                                                                                                                                                                    Oh and welcome back, global financial crash. We missed you. And eyyy, how you doing Enron long time no see:

                                                                                                                                                                                    CoreWeave isn’t alone in its complex finances. Meta took on debt, using a SPV, for its own data centers. Unlike CoreWeave’s SPVs, the Meta SPV stays off its balance sheet. Elon Musk’s xAI is reportedly pursuing its own SPV deal.
                                                                                                                                                                                    "Complex finances" are what companies engage in when there isn't any there there (SPVs were Enron's "financial innovation" too).

                                                                                                                                                                                    Peter Thiel pulling his investments out of NVIDIA makes far more sense after reading this. Looks wobbly.

                                                                                                                                                                                    It is perhaps time to discuss the enormous stock sales from CoreWeave’s management team. Before the company even went public, its founders sold almost half a billion dollars in shares. Then, insiders sold over $1 billion more immediately after the IPO lockup ended.
                                                                                                                                                                                    ...
                                                                                                                                                                                    “It’s noteworthy that people who have a good view on that business are cashing out,” says Leevi Saari, a fellow at the AI Now Institute.
                                                                                                                                                                                    and of course
                                                                                                                                                                                    It makes a certain kind of cynical sense to view CoreWeave itself as, effectively, a special purpose vehicle for Nvidia.

                                                                                                                                                                                      Back to top - More...