London-based software development consultant
- 378 Posts
- 52 Comments
codeinabox OP to Programming • The Efficiency Paradox: Why Making Software Easier to Write Means We’ll Write Exponentially More
25 · 4 days ago
My understanding of how this relates to Jevons paradox is that it had been believed that advances in tooling would let companies lower their headcount, because developers would become more efficient; instead, those advances have had the opposite effect:
Every abstraction layer - from assembly to C to Python to frameworks to low-code - followed the same pattern. Each one was supposed to mean we’d need fewer developers. Each one instead enabled us to build more software.
The meta-point here is that we keep making the same prediction error. Every time we make something more efficient, we predict it will mean less of that thing. But efficiency improvements don’t reduce demand - they reveal latent demand that was previously uneconomic to address. Coal. Computing. Cloud infrastructure. And now, knowledge work.
How far back are you talking? JavaScript became a standard in 1997, and IMHO Ajax really improved the browsing experience.
Kent Beck does mention CodeRabbit, however he also highlights the benefits of pairing with humans, as he later goes on to say:
It’s not pairing. Pairing is a conversation with someone who pushes back, who has their own ideas, who brings experience I don’t have. CodeRabbit is more like… a very thorough checklist that can read code.
I’d rather be pairing.
I miss the back-and-forth with another human who cares about the code. I miss being surprised by someone else’s solution. I miss the social pressure to explain my thinking out loud, which always makes the thinking better.
I’m open to a conversation discussing the pros and cons of large language models. Whilst I use AI coding tools myself, I also consider myself quite a sceptic, and often share articles critical of these tools.
Even when I share these articles in the AI community, they get voted down. 🫤 I know these articles aren’t popular, because there is quite a lot of prejudice against AI coding tools. However, I do find them interesting, which is why I share them.
Based on my own experience of using Claude for AI coding, and using the Whisper model on my phone for dictation, for the most part AI tools can be very useful. Yet there are nearly always mistakes, even if they are quite minor at times, which is why I am sceptical of AI taking my job.
Perhaps the biggest reason AI won’t take my job is that it has no accountability. For example, if an AI coding tool introduces a major bug into the codebase, I doubt you’d be able to hold OpenAI or Anthropic accountable. However, if you have a human developer supervising it, that person is very much accountable. This is something that Cory Doctorow talks about in his reverse-centaur article.
“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”
This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.
This really sums up Beck’s argument, that now is the perfect time to invest in junior developers, because AI allows them to learn and skill up faster:
The juniors working this way compress their ramp dramatically. Tasks that used to take days take hours. Not because the AI does the work, but because the AI collapses the search space. Instead of spending three hours figuring out which API to use, they spend twenty minutes evaluating options the AI surfaced. The time freed this way isn’t invested in another unprofitable feature, though, it’s invested in learning.
That sounds like Léon – The URL Cleaner, which I use on a daily basis.
codeinabox OP to Programming • AI Is still making code worse: A new CMU study confirms
341 · 30 days ago
This quote from the article very much sums up my own experience of Claude:
In my recent experience at least, these improvements mean you can generate good quality code, with the right guardrails in place. However without them (or when it ignores them, which is another matter) the output still trends towards the same issues: long functions, heavy nesting of conditional logic, unnecessary comments, repeated logic – code that is far more complex than it needs to be.
AI coding tools are definitely helpful with boilerplate code, but they still require a lot of supervision. I am interested to see whether these tools can be used to tackle tech debt, as the argument for not addressing tech debt is often a lack of time, or whether they would just contribute to it, even with thorough instructions and guardrails.
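To make the pattern concrete, here is a small, hypothetical TypeScript sketch (not taken from the article or the study) of the kind of output described above: a deeply nested validation function of the sort unguarded AI tools tend to produce, alongside the flatter guard-clause version a reviewer or guardrail would normally push for.

```typescript
// Hypothetical illustration of the quoted complaint: the same validation
// written with heavy nesting versus with early returns.

interface User {
  email?: string;
  age?: number;
}

// Deeply nested version: harder to read, more complex than it needs to be.
function validateUserNested(user: User | null): string {
  if (user !== null) {
    if (user.email !== undefined && user.email.includes("@")) {
      if (user.age !== undefined) {
        if (user.age >= 18) {
          return "ok";
        } else {
          return "too young";
        }
      } else {
        return "missing age";
      }
    } else {
      return "invalid email";
    }
  } else {
    return "missing user";
  }
}

// Flattened version with guard clauses: same behaviour, far less nesting.
function validateUser(user: User | null): string {
  if (user === null) return "missing user";
  if (user.email === undefined || !user.email.includes("@")) return "invalid email";
  if (user.age === undefined) return "missing age";
  return user.age >= 18 ? "ok" : "too young";
}
```

Both functions behave the same; the point of the quote is that, left to its own devices, the model tends to produce the first shape rather than the second.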
Thank you! When I stumble across the Neovim posts, I try to share them here if I think someone will find them useful.
This quote really sums up the situation:
This is a technical and business challenge, but also an ethical crisis. Anyone who cares to look can see the tragic consequences for those who most need the help technology can offer. Meanwhile, the lies, half-truths, and excuses made by frontend’s influencer class are in defence of these approaches are, if anything, getting worse.
Through no action of their own, frontend developers have been blessed with more compute and bandwidth every year. Instead of converting that bounty into delightful experiences and positive business results, the dominant culture of frontend has leant into self-aggrandising narratives that venerate failure as success. The result is a web that increasingly punishes the poor for their bad luck while paying developers huge salaries to deliver business-undermining results.
The developer community really needs to be building websites that work on all devices and connections, and not just for those who can afford the latest technology and high-speed internet connections.
codeinabox OP to AI - Artificial intelligence • Major insurers move to avoid liability for AI lawsuits as multi-billion dollar risks emerge — Recent public incidents have led to costly repercussions
2 · 1 month ago
I’m curious what this means for business insurance across the board. Generative AI is so pervasive, and businesses of all sizes are using it, but will their insurance policies cover any incidents that happen as a result?
On a related note, if I subcontracted someone to perform a task and there were defects, they would be contractually liable for them. However, if someone used ChatGPT to help with a task, they would have a tough time trying to take OpenAI to court.
The way the author described programming in 2025 did make me chuckle, and I do think he makes some excellent points in the process.
It’s 2025. We write JavaScript with types now. It runs not just in a browser, but on Linux. It has a dependency manager, and in true JavaScript style, there’s a central repository which anyone can push anything to. Nowadays it’s mostly used to inject Bitcoin miners or ransomware onto unsuspecting servers, but you might find a useful utility to pad a string if you need it.
In order to test our application, we build it regularly. On a modern computer, with approximately 16 cores, each running at 3 GHz, TypeScript only takes a few seconds to compile and run.
As the author notes, it is very impressive what generative AI can produce these days.
The frontier of what the LLMs can do has moved since the last time I tried to vibe-code something. I didn’t expect to have a working interpreter the same day I dreamt of a new programming language. It now seems possible.
However, as they point out, there’s definitely downsides to this approach.
The downside of vibe coding the whole interpreter is that I have zero knowledge of the code. I only interacted with the agent by telling it to implement a thing and write tests for it, and I only really reviewed the tests. I reckon this would be an issue in the future when I want to manually make some change in the actual code, because I have no familiarity with it.
What about developers who are required to use AI as part of their job?
I stumbled upon this article as I was having issues with my LSP setup for TypeScript projects. However, in my case, it appears the bug is in the plug-in nvim-lspconfig.
His experiences reveal a pattern: AI can rapidly produce 70% of a solution, but that final 30% – edge cases, security, production integration – remains as challenging as ever. Meanwhile, trust in AI-generated code is declining even as adoption increases.
I’m very much intrigued by this contradiction, where adoption of AI is increasing but trust in the code it generates is declining. Is it a case of the more developers use AI coding tools, the more they become aware of the shortcomings and problems?
I’m guessing that the author said this to warn people not to rely on this technique, as it’s not part of the specification. Does it behave consistently across all browsers?
codeinabox OP to Terminal Emulators@lemm.ee • State of Terminal Emulators in 2025: The Errant Champions
1 · 2 months ago
I’ve been using WezTerm because of its built-in Nerd Font patching. I see Ghostty also offers this, so I’ll have to give it a try, given how well it scored.
I use AI coding tools, and I often find them quite useful, but I completely agree with this statement:
At first I found AI coding tools to be like a junior developer, in that they will keep trying to solve the problem and never give up or grow frustrated. However, I can’t teach an LLM: yes, I can give it guard rails and detailed prompts, but it can’t learn in the same way a teammate can, and it will always require supervision and review of its output. By contrast, I can teach a teammate new or different ways to do things, and over time their skills and knowledge will grow, as will my trust in them.