Large Language Models are not suitable for decision-making roles. The majority of the work in software development involves making decisions.
Language translation is ALL decision making though?
“every perceived metamorphosis of a word or phrase within or between languages, every decipherment and interpretation of that logo on the panel, every act of reading, writing and interpretation of a text, every role by each actor in the cast, every adaptation of a script by a director of opera, theater, film, ballet, pantomime, indeed every perception of movement and change, in the street or on our tongues, on the page or in our ears, leads us directly to the art and activity of translation”
https://www.paideiainstitute.org/the_creative_art_of_translation
https://www.catranslation.org/feature/6-great-introductions-to-the-art-of-translation/
The most questionable effect of Dryden’s assertion, to my mind, is that it winds up collapsing the translator’s labor into the foreign author’s, giving us no way to understand (let alone judge) how the translator has performed the crucial role of cultural go-between. To read a translation as a translation, as a work in its own right, we need a more practical sense of what a translator does. I would describe it as an attempt to compensate for an irreparable loss by controlling an exorbitant gain.
https://wordswithoutborders.org/read/article/2004-07/how-to-read-a-translation/
Arguably, sure. I assert that LLMs are a terrible choice for translating anything that matters, though, largely for that reason.
No, it is not.
What is it then?
Translation.
Is a cashier in a decision-making role when they “decide” what buttons to press on the cash register, given an existing basket of products?
This is not how translation works; you can’t reduce it to a simple table lookup for similar words, replace them, and call it done.
That is a poor example to compare to.
No, it is how translation works. You didn’t answer the question. Is the cashier “making decisions”? The analogy is apt.
No, tallying the price of items is a process of looking up each item’s price in a table and retrieving it.
There is always only ever one possible, perfect answer in this process, and thus it is utterly unlike language translation. Honestly, it is alarming that you can’t see the difference.
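To make the table-lookup framing concrete, here is a minimal sketch (item names and prices are invented for illustration; prices are in cents to keep the arithmetic exact):

```typescript
// A cashier's tally as a pure table lookup: for a given basket there is
// exactly one correct total, unlike translation, which has many defensible outputs.
const priceTable = new Map<string, number>([ // prices in cents, made up for illustration
  ["bread", 250],
  ["milk", 120],
  ["eggs", 300],
]);

function tally(basket: string[]): number {
  return basket.reduce((total, item) => {
    const price = priceTable.get(item);
    if (price === undefined) throw new Error(`unknown item: ${item}`);
    return total + price;
  }, 0);
}

console.log(tally(["bread", "milk", "milk"])); // 490 — the only correct answer
```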
Bad news for this research team in that case, I wonder if they’ve seen your whitepaper yet?
Bad comparison; CEOs are also not suitable for decision-making roles
Hah, fair enough
And yet LLMs (or what people call AI) can’t run a vending machine business.
https://www.anthropic.com/research/project-vend-1
“Can’t” is a strong word, it ran a business - some might say better than some CEOs heh
I think it will make getting into coding easier. For me, I always wanted a module for Foundry that:
- allowed GMs to select 2 or more tokens
- picked and targeted one token randomly
The idea is to not have to roll every time the enemy picks one of the targets in range.
I tried starting once or twice, but life always got in the way and I never really knew where to start.
I spent 2 days with my Kagi AI assistant and Rando was ready to be used with my Foundry setting (core logic sketched below).
No, I am not a programmer, but with the LLM I was able to turn an idea into reality - and that felt really good.
So, it could open the way to programming for people, like Thingiverse opened up 3D printing. You can simply start with smaller stuff and learn as you go.
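For anyone curious, here is a minimal sketch of what the core of such a macro might look like. This is a guess, not the actual Rando code; the Foundry API names (canvas.tokens.controlled, Token#setTarget) are from memory and worth checking against your Foundry version:

```typescript
// Sketch of a Foundry VTT script macro: pick one of the currently selected
// tokens at random and target it. canvas and ui are Foundry globals
// (declared as any here since this is a sketch without Foundry's type defs).
declare const canvas: any;
declare const ui: any;

const selected = canvas.tokens.controlled; // tokens the GM has selected
if (selected.length < 2) {
  ui.notifications.warn("Select two or more tokens first.");
} else {
  const pick = selected[Math.floor(Math.random() * selected.length)];
  pick.setTarget(true, { releaseOthers: true }); // clear any previous target
  ui.notifications.info(`Randomly targeted: ${pick.name}`);
}
```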
This is objectively false. Here’s why.
A Formal Language is one that is rigorously defined in logic. It has a grammar which rigidly defines the set of strings (or patterns of symbols) that are in the language. Math and computing are built on formal languages. It’s not intuitive, but a sufficiently well-defined problem statement is a grammar, and the set of solutions to the problem is the language. So in a very literal sense, being able to “speak” a formal language is the same as calculating the solution to a problem.
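A toy example of that point (mine, not the commenter’s): the grammar S → "" | "(" S ")" | S S generates the formal language of balanced parentheses, and deciding membership in that language is exactly computing the answer to a well-defined problem:

```typescript
// Deciding membership in a formal language IS solving a problem:
// every input string has exactly one correct answer.
function isBalanced(s: string): boolean {
  let depth = 0;
  for (const ch of s) {
    if (ch === "(") depth++;
    else if (ch === ")") {
      depth--;
      if (depth < 0) return false; // closed a paren that was never opened
    } else return false; // symbol outside the language's alphabet
  }
  return depth === 0; // balanced iff every open paren was closed
}

console.log(isBalanced("(()())")); // true  -> in the language
console.log(isBalanced("(()"));    // false -> not in the language
```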
A Natural Language on the other hand is contextual. It evolves over time. It’s not random, but it is arbitrary. There is no hard line between what is a valid sentence in a natural language and what’s not. Me speak with broke worded, but still can understand you. There is a pattern there; if it were random there would be no meaning, but it is impossible to formally define.
LLMs and AI in general are specifically suited to this last part: approximating solutions to problems where there exists a pattern, but the pattern is impossible to rigorously define. Looking at a photo and knowing whether it’s a cat, creating artwork with a specific aesthetic, or speaking a natural language are what AI excels at. But calculating specific solutions to well-defined problems is not; for that we built calculators.
All that said, human brains are better at natural language than at being calculators (that’s why we eventually invented calculators), so I think there may come a day when we make an AI that is capable of designing and building a calculator. And at THAT point, AI will handily replace programmers. But that is a much harder problem, it would seem.
Ok
Thanks for posting to cmv, wish we saw more activity here.
When we talk about these types of topics on the Internet we are usually all speaking about slightly different things. For example “Coding will be replaced by AI” can be interpreted as 100% (every programmer) or partial (X%).
When we talk in the 100% sense the bar AI must achieve is MUCH higher than replacing some percentage. To replace 100% of programmers the AI needs to not only be on par with principal engineers but also be able to understand domain, real world implications, stakeholder input and a bunch of other goodies engineers do behind the scenes other than writing code.
When we talk about the partial percentage, the bar is low. Companies already take shortcuts such as outsourcing or greenlighting a proof of concept for production without proper design. There are MANY terrible programmers employed today who produce code slower and worse than even the hallucinatory mess that is today’s modern LLMs.
The budget for replacing these subpar programmers is proportional to their salary. If we choose the arbitrary pay of 75k for these programmers, that means we could spend up to 75k on AI compute costs per year and still break even. This doesn’t even need to be fully autonomous as the remaining senior programmers will be expected to pick up the slack whether they want to or not.
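To make the break-even arithmetic concrete (only the 75k figure comes from the paragraph above; the breakdown is mine): 75,000 a year is 75,000 / 12 ≈ 6,250 a month, or roughly 75,000 / 365 ≈ 205 a day, of compute spend per replaced seat before the swap stops paying for itself.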
TL;DR:
AI will definitely replace some programmers but not all programmers.
Exactly like how Duolingo replaced some language teachers, but not all language teachers.
I learned Spanish from Duolingo with no teachers so….
Now I’m not saying ChatGPT is going to replace programmers, but Duo probably could have replaced most language teachers. You will still need one for upper levels of fluency unless you’re immersed in the language, and the newest versions suck since they use AI, but the older versions were solid.
AI is producing content that could replace writers or translators. It’s not supposed to teach anything.

