
Bio field too short. Ask me about my person/beliefs/etc if you want to know. Or just look at my post history.

  • 0 Posts
  • 178 Comments
Joined 2 years ago
Cake day: August 3, 2023

  • You may not realize that AI evangelism is not confined to the scientific community. Articles like this, claiming AI is doing something amazing, something that humans have been unable to do, are propaganda.

    Peer review is great. The average reader of that article assumes the peer review is complete by the time they read it, even if that isn’t true. The takeaway for them is not “this annotation was likely made by a German scribe at XYZ date”; the point of the article is that “Gemini figured out something that stumped human researchers”.

    And I refute the original position: the notes are not inscrutable, and I doubt no human has ever translated the numerals or analyzed the script to estimate its age. It just wasn’t important enough to write an article about until it made AI look good.

    The article is propaganda. It’s neat, but if you read it as “look at this cool thing LLMs can do…” then you fell for it.


  • I simply pointed out that nobody is expecting LLMs to validate the solutions they come up with on their own, or to trust them to come up with a correct solution independently.

    I want to point you towards (gestures broadly at everything) where LLMs are being sold as panacea and things like hallucinations and overconfidence are being minimized. The AI industry is claiming these tools are a way to remove the human in the loop, and are trustworthy.

    You may understand that an LLM is a starting point, not an end, but the way most people are being sold them is different and dangerous. Articles like the one you posted, which also downplayed the mistakes the model made (“Gemini made some minor numerical errors…”) while suggesting that it made a novel discovery about the source material, are problematic. How many people just read the headline, or the whole article, and now assume that the data presented is fact, when the data presented is an unconfirmed opinion at best and made up at worst?

    LLMs are really good at producing “smart-sounding” text that reads like someone intelligent wrote it, but we have tons of examples where smart-sounding and factual are a Venn diagram that doesn’t overlap.



  • This “grand chessboard” is a game played by people who are too “powerful” to matter. fuck 'em and shoot 'em.

    As an American, I want the US to be no more of a superpower than the EU, China, or Zimbabwe. We did this posturing in middle school. Has the whole world not evolved past that yet?!

    We’ll be better served when we can be cooperative instead of competitive.

    I approve of a strong EU. The more “players”, the less power each has, but also the less each has to gain or lose by being an asshole.

    We just need a plan to unite without leaving room for a few idiots to get elected and wreck the whole thing.



  • I keep asking, and have never seen one:

    Someone find me a recording of writing a real application – not a “vibe coding demo” – using one of these magic tools. Let’s ignore the complexities of scale or data integrity, or even error handling. Write a simple asteroids clone or calculator app. Something that doesn’t need to reach outside its tiny box.

    Build that in only an AI coding tool and see how long it takes to get a real product. (I get stuck here. So far, all demos end up halting before they are an MVP, though they do get something working faster than I could.)

    Afterwards, get a code review and fix any findings without breaking anything.

    And here’s a catch: If you want to say AI codes better than humans, then a human is never allowed to modify the code directly. They can only prompt.

    Show me that, and I might change my tune, but we’re nowhere close to it AFAIK, and I don’t see it ever happening. Until then, “AI coding” is at best a way to reduce time spent writing small atomic functions that can be easily verified by a human, or maybe starting a function that a human then fixes.

    Things that took an experienced team months to build can now be done by a single guy in a few days…

    I have yet to see this materialize outside of CEO townhalls or industry self-aggrandizement. If you’re an experienced programmer, can you go make me the demo I want and then review the code with a critical eye?

    I’m not saying that a noob can ask an AI to build Google and it will be done, but you’d better believe that an experienced programmer using AI will deliver weeks of high-quality work in a single day.

    And, I wonder, how does an intern or novice programmer become this experienced programmer if they never have to touch code‽ We’re headed to Idiocracy here, where the people who knew how to get shit done and why it worked eventually died off and there was no pipeline to replace them.

    Until then… “vibe coding” is just smoke and will result in terrible things happening in a few years as banks and companies start inflicting terrible, cheap AI code on us all.

    Want to make tons of money? Go learn to be a security researcher. I’d be happy to be proved wrong.




  • I really like this comment. It covers a variety of use cases where an LLM/AI could help with the mundane tasks and calls out some of the issues.

    The ‘accuracy’ aspect is my second-greatest concern: an LLM agent that I told to find me a nearby Indian restaurant, and which then hallucinated one, is not going to kill me. I’ll deal, but be hungry and cranky. When that LLM (they are notoriously bad at numbers) updates my spending spreadsheet with a 500 instead of a 5000, that could have a real impact on my long-term planning, especially if it’s somehow tied into my actual bank account and makes up numbers. As we/they embed AI into everything, the number of people who think they have money because the AI agent queried their bank balance, saw 15, and turned it into 1500 will be too damn high. I don’t ever foresee trusting an AI agent to do anything important for me.

    “Trust”/“privacy” is my greatest fear, though. There’s documentation from the major players that prompts are used to train the models. I can’t immediately find an article link because ‘chatgpt prompt train’ finds me a ton of slop about the various “super” prompts I could use. Here’s OpenAI’s ToS about how they will use your input to train their model unless you specifically opt out: https://openai.com/policies/how-your-data-is-used-to-improve-model-performance/

    Note that this means when you ask for an Indian restaurant near your home address, OpenAI now has that address in its data set and may hallucinate that address as an Indian restaurant in the future. The result being that some hungry, cranky dude may show up at your doorstep asking, “where’s my tikka masala?” This could be a net gain, though; new bestie.

    The real risk, though, is that your daily life is now collected, collated, harvested, and added to the model’s data set, all without your clear, explicit action: using these tools requires accepting a ToS that most people will not really read or understand. Maaaaaany people will expose otherwise sensitive information to these tools without understanding that their data becomes visible as part of that action.

    To get a little political, I think there’s a huge downside on the trust aspect: these companies have your queries (prompts), and I don’t trust them to maintain my privacy. If I ask something like “where to get an abortion in Texas”, I can fully see OpenAI selling that prompt to law enforcement. That’s an egregious example for impact, but imagine someone could query the prompts (using an AI which might make shit up) and ask “who asked about topics anti-X” or “pro-Y”.


    My personal use of AI: I like the natural-language paradigm for turning a verbose search query into other search queries that are more likely to find results. I run a local 8B model that has, for example, helped me find a movie from my childhood that I couldn’t get Google to identify.

    There’s a use case here, but I can’t accept this as a SaaS-style offering. Any modern gaming machine can run one of these LLMs and get the value without the privacy tradeoff.

    Adding agent power just opens you up to having your tool make stupid mistakes on your behalf. These kinds of tools need oversight at all times. They may work 90% of the time, but they will eventually send an offensive email to your boss, delete your whole database, wire money to someone you didn’t intend, or otherwise make a mistake.


    I kind of fear the day that you have a crucial confrontation with your boss and the dialog goes something like:

    Why did you call me an asshole?

    I didn’t; the AI did, and I didn’t read the response as closely as I should have.

    Oh, OK.


    Edit: Adding my use case: I’ve heard LLMs described as a blurry JPEG of the internet, and to me this is their true value.

    We don’t need an 800B model; we need an easy 8B model that anyone can run and that helps turn “I have a question” into a pile of relevant, actual searches.
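    That local query-expansion workflow can be sketched in a few lines. This is a minimal, hypothetical example, assuming an Ollama-style server on localhost:11434 and a placeholder 8B model name (both are assumptions, not an endorsement of any specific setup):

    ```python
    import json
    import urllib.request

    def build_expansion_prompt(question: str) -> str:
        """Wrap a verbose natural-language question in instructions that
        ask the model for short keyword search queries, one per line."""
        return (
            "Rewrite the following question as 3 short web search queries, "
            "one per line, keywords only, no numbering:\n\n"
            f"{question}"
        )

    def expand_query(question: str, model: str = "llama3:8b") -> list[str]:
        """POST to a local Ollama-compatible /api/generate endpoint and
        return the model's suggested queries. Assumes a server is running
        locally, so no prompt ever leaves your machine."""
        payload = json.dumps({
            "model": model,
            "prompt": build_expansion_prompt(question),
            "stream": False,  # ask for one complete JSON response
        }).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        # one query per non-empty line of the model's reply
        return [line.strip() for line in body["response"].splitlines() if line.strip()]
    ```

    Usage would look like `expand_query("movie from my childhood with a talking dog and a time machine")`, then feeding each returned line into a normal search engine.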



  • Similarly, my fantasy is that if I won the lottery, or otherwise became independently wealthy, I’d do a ton of different entry-level jobs to find one that hit as a passion.

    Construction worker, stagehand (I’ve already done retail), food service, intern for anything that requires a degree I don’t have, etc.

    I like my current job but if I didn’t need the paycheck then I’m not sure I’d stay. I might stick around if I could negotiate terms and only do the parts I liked, though.

    I wish I could learn a little about everything, but our culture pushes us to commit and be deep instead, and then we get stuck in a job that used to be a fun hobby.


  • While you are partly correct that the RAM used in AI datacenters is not the same shape as DDR5, they share a supply chain: the silicon that makes that RAM is being diverted from “consumer”, or even most “on-prem enterprise”, RAM (DDR4/5/+) to datacenter RAM.

    Memory manufacturers have been reallocating production capacity away from consumer products toward these more profitable enterprise components, and Micron has presold its entire HBM output through 2026. [emphasis added]

    Because of that change in allocation, there’s lower supply without (much) lower demand, and the prices of consumer RAM will eventually rise to meet what the AI datacenters are willing to pay per unit of silicon. The shape doesn’t matter very much. This change in supply has already had very visible effects in the consumer DDRx markets, like 100%-increase-in-a-month visible.

    That smartphones and other devices with RAM haven’t felt this yet is reasonable: they locked in their manufacturing prices before the spike, so there is a lag. That price increase will be felt by the consumer, though. We’re going to be hard pressed to compete for memory against companies willing to throw billions of dollars around. Next year’s flagship phone price will be a gunshot.

    The really sad, annoying, rage-inspiring part is that modern consumer goods never drop in price. When DDR6 hits $1k/8GB, or something like that, “they” will know some of us are willing to pay that price. This line only goes up, and AI datacenters are doing irreparable harm to consumer electronics as an industry.

    Think about this: What is NAND flash made of? We’re already seeing SSD prices rise too, especially in the NVMe flavor. SD cards for your camera, cartridges for your Switch, your fucking fridge. They all have silicon in them, and this AI supply chain is guzzling silicon the way it guzzles power and water.


  • This is my issue with NMS.

    It’s fun for a while, but it’s a pretty shallow sandbox and after you’ve played in the sand for a bit, it’s all just sand.

    If you’re not setting yourself a complex and/or grindy goal, like building a neat base, finding the perfect weapon or ship, filling out your reputations or lexicon, or learning all the crafting recipes to make the ultimate mcGuffin, then there is really not much to do. And, for me, once that goal is accomplished, I’m done for a while.

    Each planet is just a collection of random tree/bush/rock/animal/color combinations that are mechanically identical (unless something’s changed; I haven’t played since they added VR). I’m also a gamer who likes mechanical complexity and interactions; I don’t tend to play a game for the actual ‘role playing’.

    The hand-written “quests” were fun to do most of the time, but that content runs out quickly.

    I have the same problems with Elite Dangerous (I have an explorer somewhere out a solid few hours away from civilized space) and unmodded Minecraft (I can only build so many houses/castles). I’ll pick all of these up every now and then, but the fun wears off more quickly each time.



  • In the nicest possible way, and only judging from this post, you are part of the problem. Hear me out:

    They don’t actually need you. Either party. There’s a solid base of voters who are going to vote blue or stay home, or vote red or stay home. If you require being courted, then you’re either effectively random, staying home, or leaning towards one side over the other.

    You’re possibly upset that none of your choices are good. That’s pretty true. ‘both sides’ have reasons to not vote for them. You need to help fix that: pick a side, whichever one you lean towards, and go make the choices better.

    Local politics (at the precinct, county, and state levels) decide how we choose our candidates in the larger races, by deciding who represents us on those larger stages internally to the party. Example: the general public was not polled for the DNC chair election; it was only people put into DNC leadership, who were voted for, several steps down, by people at the precinct level. https://en.wikipedia.org/wiki/2025_Democratic_National_Committee_chairmanship_election

    Is there corporate bullshit here? Almost certainly. Can it be overcome? Only if people are paying attention and care to get involved. Voting only in November elections and expecting the candidates to cater to you specifically will not resolve the problems.

    The candidates don’t need to work for your vote. You need to work for better candidates. Or shut up and vote for the least harm.