Coder
Software Development
Austin, Texas 13,492 followers
Self-hosted environments for agentic software development
About us
Coder is an AI software development company leading the future of autonomous coding. We empower teams to build software faster, more securely, and at scale through the collaboration of AI coding agents and human developers. Our mission is to make agentic AI a safe, trusted, and integral part of every software development lifecycle.
- Website
- https://coder.com
- Industry
- Software Development
- Company size
- 51-200 employees
- Headquarters
- Austin, Texas
- Type
- Privately Held
- Founded
- 2017
- Specialties
- terraform, cloud platforms, open source software, linux, golang, CDE, self-hosted software, software, github, enterprise, typescript, react, culture, startup, agentic ai, and ai software development
Locations
-
Primary: Austin, Texas, US
Updates
-
Coder reposted this
Everyone's focused on what AI agents can do. Not enough people are talking about what happens when they go wrong. Our CEO Rob Whiteley is speaking at our next Coder meetup in SF on May 13 and that's exactly what he's getting into. Why AI workflows stall after the demo. Where governance gets bolted on too late. And what teams that are actually shipping at scale are doing differently. If you're building with agents and wondering why production feels messier than the prototype, this one's worth showing up for. Link to register in the comments.
-
Running AI coding agents on OpenShift raises a question most platform teams haven't answered yet: where does the agent's environment end and the cluster's shared infrastructure begin? We updated the Coder on Red Hat OpenShift deployment guide to cover exactly that. Governed workspaces, workspace isolation, full audit trails, and the control plane architecture that keeps agents scoped to their own environment. We're at Red Hat Summit this week if you want to talk through it in person. The full deployment guide is here: http://cdr.co/redhatli
-
This week we launched Coder Agents in beta, a native agent built to run AI development workflows entirely on self-hosted infrastructure. No source code, prompts, or model interactions leave your network perimeter. It's model-agnostic, so platform teams get centralized control over models, prompts, and policies while developers use any approved provider, from Anthropic to OpenAI to self-hosted endpoints. We celebrated the launch with the team that built it. Amazing product, even better people. Thanks to everyone involved in the journey.
-
Most teams plan for AI adoption. Almost nobody plans for what breaks after it works. CI/CD pipelines buckle under the volume of AI-generated code. Team structures that made sense six months ago don't map to agent-driven workflows. Senior engineers go from writing code to directing agents, and no one has a playbook for that transition. On May 20, Coder PM David Fraley maps the failure modes that only appear once AI adoption actually succeeds, and the infrastructure decisions that determine whether the transition stalls or scales. http://cdr.co/airightli
-
Our CEO Rob Whiteley has been in enough conversations with engineering leaders to know where AI initiatives actually stall. And it's usually not the models or the tooling. It's everything that comes after the demo: the part where it has to actually work in production, not just impress in a sandbox. That's what Rob is getting into at our Coder meetup on May 13 in San Francisco. He's going to get honest about why most AI workflows don't survive contact with production and what teams are actually doing when they get it right. If you're working through that right now, we'd love to have you join us. Registration link in the comments.
-
Today we're launching Coder Agents in beta. A new way to run AI development workflows on the self-hosted infrastructure you already control. Platform teams get centralized control over models, prompts, MCPs, and skills. Developers get a conversational interface that turns ideas into executed code changes, plus an API to trigger work from CI/CD, GitHub Actions, or Slack. The agent runs natively in the Coder control plane and only provisions workspaces when compute is actually needed. No wrapper around third-party agents. No lock-in to a single model provider. Teams standardize how agents work across the org instead of leaving every developer to figure it out alone. Here's what's in the beta, how the architecture works, and how it fits alongside tools like Claude Code and Codex: http://cdr.co/agents
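For teams wondering what triggering agent work from CI/CD might look like, here is a minimal sketch of a script a pipeline step could run. The endpoint path, header name, and payload fields below are illustrative assumptions for this sketch, not documented Coder API surface; check the actual API reference before wiring anything up.

```python
import json
import urllib.request

# Hypothetical sketch: endpoint path, header name, and payload fields
# are assumptions, not documented Coder API surface.
CODER_URL = "https://coder.example.com"  # your self-hosted deployment

def build_agent_task(prompt: str, repo: str, model: str = "approved-default") -> dict:
    """Assemble the JSON body for a hypothetical agent-task request."""
    return {
        "prompt": prompt,    # the work the agent should perform
        "repository": repo,  # repo the provisioned workspace would clone
        "model": model,      # any provider approved by the platform team
    }

def trigger_agent_task(task: dict, token: str) -> urllib.request.Request:
    """Prepare (but do not send) the request a CI job would fire."""
    return urllib.request.Request(
        f"{CODER_URL}/api/v2/agents/tasks",  # assumed path
        data=json.dumps(task).encode(),
        headers={"Coder-Session-Token": token, "Content-Type": "application/json"},
        method="POST",
    )

task = build_agent_task("Fix flaky test in auth package", "org/service")
req = trigger_agent_task(task, token="ci-secret")
print(req.get_method(), req.full_url)
```

The same request shape could be fired from a GitHub Actions step or a Slack slash-command handler; only the token storage and trigger plumbing differ.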
-
Three AI agents in production. Three different model providers. Three sets of workflows that don't talk to each other. That's the reality at most enterprise engineering orgs right now. Agent sprawl, zero visibility, and every tool locked to a different model provider. Today we're launching Coder Agents in beta. One control plane for models, prompts, and agent execution across your entire dev org. The agent runs natively in the Coder control plane, only spinning up workspaces when compute is actually needed. Trigger work from CI/CD, GitHub Actions, or Slack. Self-hosted. Model-agnostic. Open source. No lock-in. Full Premium features with zero usage limits through September. Try it on your infrastructure: http://cdr.co/agentli
-
"Token usage is obviously one. You'll see that fluctuate over time depending on the learning capacity of the models themselves. We're also seeing model switching: when does a certain model run out of steam, and at what point does it start to switch?" Simon Gregory at the UST AI Innovation Forum on the metrics enterprises are tracking as AI agents become part of the development workflow. Beyond tokens and model behavior, teams are monitoring task completion velocity, API usage patterns, and cost-to-value ratios. The question every platform team should be asking: how quickly are we accomplishing standard tasks, and how much is that costing us?
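The two ratios named above are simple to compute once per-task data is collected. Here is a minimal sketch; the field names and the batch data are illustrative assumptions, not a real Coder or UST schema.

```python
from dataclasses import dataclass

# Illustrative sketch of the metrics named above; field names and the
# sample numbers are assumptions, not a real schema or real data.
@dataclass
class AgentTask:
    tokens_used: int
    duration_min: float  # wall-clock time to complete the task
    cost_usd: float      # provider spend attributed to the task
    value_usd: float     # estimated value of the completed work

def velocity(tasks: list) -> float:
    """Task completion velocity: tasks finished per hour of agent time."""
    total_hours = sum(t.duration_min for t in tasks) / 60
    return len(tasks) / total_hours

def cost_to_value(tasks: list) -> float:
    """Aggregate cost-to-value ratio across a batch of tasks."""
    return sum(t.cost_usd for t in tasks) / sum(t.value_usd for t in tasks)

batch = [
    AgentTask(tokens_used=12_000, duration_min=15, cost_usd=0.60, value_usd=40),
    AgentTask(tokens_used=30_000, duration_min=45, cost_usd=1.50, value_usd=40),
]
print(round(velocity(batch), 2), round(cost_to_value(batch), 3))  # → 2.0 0.026
```

Tracking these per model, as Gregory suggests, is what makes "when does a certain model run out of steam" an answerable question rather than a hunch.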