The Self-Driving Software Revolution: A Look Ahead to 2026
At the end of every year, I like to reflect on what I accomplished and, more importantly, on the significant transformations that took place. The biggest trend this year for me has been (obviously!) AI, and specifically AI in the context of software development.
Calling it a "big trend" probably understates the scale of the change.
But the biggest breakthrough hasn't happened yet. I'm fairly certain that during the first half of 2026, we are going to see more and more self-driving SaaS and product operations.
What is Self-Driving Software?
The concept of self-driving software represents the most critical strategic inflection point in modern software engineering, moving beyond mere AI assistance (like code suggestions) to genuine AI autonomy (self-directed issue resolution).
Self-driving software describes an environment where customer requests are automatically analyzed, business ideas are automatically prioritized, and those ideas are built, tested, and evaluated by virtual users. The overarching vision is one where the human user retains control, deciding which projects to pursue and which resources to allocate, while the system moves everything forward autonomously. This dramatic increase in the speed of innovation will inevitably result in the demise of many existing software businesses.
I believe businesses wanting to build self-driving software need to develop three distinct capabilities. Each is probably going to create its own ecosystem of specialized startups.
Capability 1: The Intake
This initial capability is all about establishing the autonomous feedback loop: transforming unstructured feedback into formalized development tasks. It enables agents, tools, and software to ingest requirements, bugs, unusual user behaviors, and other patterns into a central knowledge base. The input is gathered from various channels, such as inbound support tickets, customer research, sales calls, and surveys.
By using AI to analyze feedback, organizations can review a far larger volume of data (up to 100 percent of customer interactions, for example) and quickly identify trending customer needs and pain points. Agents then consume all this knowledge to create a prioritized backlog and evaluate what is most important for the business and its users.
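To make this concrete, here is a minimal Python sketch of what an intake pipeline could look like. The `classify_with_llm` helper is a hypothetical stand-in for whatever model or provider you would actually call; the rest is plain data plumbing.

```python
# Minimal intake sketch: unstructured feedback in, prioritized backlog out.
from dataclasses import dataclass, field

@dataclass
class FeedbackItem:
    source: str   # e.g. "support_ticket", "sales_call", "survey"
    text: str     # raw, unstructured feedback

@dataclass
class BacklogItem:
    title: str
    category: str                                  # "bug", "feature_request", ...
    severity: int                                  # 1 (low) to 5 (critical)
    evidence: list = field(default_factory=list)   # raw quotes backing the item

def classify_with_llm(text: str) -> dict:
    """Hypothetical stand-in for an LLM call that extracts a title,
    category, and severity from a raw piece of feedback."""
    lowered = text.lower()
    category = "bug" if ("error" in lowered or "crash" in lowered) else "feature_request"
    return {"title": text[:60], "category": category, "severity": 3}

def intake(feedback: list) -> list:
    """Turn feedback from every channel into prioritized, deduplicated
    backlog items for the central knowledge base."""
    backlog = {}
    for item in feedback:
        parsed = classify_with_llm(item.text)
        key = parsed["title"].lower()
        if key in backlog:
            # The same need reported again: keep the evidence, raise the severity.
            backlog[key].evidence.append(item.text)
            backlog[key].severity = max(backlog[key].severity, parsed["severity"])
        else:
            backlog[key] = BacklogItem(parsed["title"], parsed["category"],
                                       parsed["severity"], [item.text])
    # Most severe and most frequently reported items first.
    return sorted(backlog.values(),
                  key=lambda b: (b.severity, len(b.evidence)), reverse=True)

tickets = [
    FeedbackItem("support_ticket", "Checkout crashes when I apply a coupon"),
    FeedbackItem("survey", "Would love a dark mode for the dashboard"),
    FeedbackItem("sales_call", "Checkout crashes when I apply a coupon"),
]
for item in intake(tickets):
    print(item.severity, len(item.evidence), item.title)
```

The useful design choice here is deduplication: a repeated report strengthens the evidence behind an existing backlog item instead of creating a new one, which is what lets the system prioritize rather than merely transcribe.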
Capability 2: The Coding
This capability is centered on orchestrating specialized components via a multi-agent architecture. The requirements identified during the Intake will go directly to coding agents so they can work on building the functionality.
Currently, most coding flows require a "human in the loop". However, the current latency of LLM-driven coding creates friction by forcing the developer to switch contexts repeatedly. Moving forward, agents are going to become more autonomous, and developers are going to build teams of agents that work together. This shift eliminates the constant need for a human-in-the-loop flow. These specialized agents collaborate efficiently, directly sharing feedback, improving code, reviewing code, testing, and finding bugs. For complex objectives, the underlying system will handle task decomposition (breaking the high-level goal into smaller sub-tasks) and hierarchical planning (organizing those sub-tasks logically).
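As a rough illustration of that architecture, the sketch below wires a planner, a coder, a reviewer, and a tester into one loop. All four helpers are hypothetical stand-ins; in a real system each would be backed by an LLM or an actual toolchain, but the control flow (decompose, implement, review, test, retry) is the part that matters.

```python
# A simplified sketch of a multi-agent coding flow with placeholder agents.
from dataclasses import dataclass

@dataclass
class SubTask:
    description: str
    code: str = ""
    approved: bool = False

def plan(goal: str) -> list:
    """Planner agent: task decomposition and hierarchical planning.
    Here it just splits the goal into ordered placeholder steps."""
    return [SubTask(f"Step {i + 1} of: {goal}") for i in range(3)]

def write_code(task: SubTask) -> str:
    """Coding agent (stand-in): would generate an implementation."""
    return f"# implementation for {task.description!r}\n"

def review(task: SubTask) -> str:
    """Review agent (stand-in): would return change requests or 'ok'."""
    return "ok"

def run_tests(task: SubTask) -> bool:
    """Testing agent (stand-in): would execute the relevant test suite."""
    return True

def build(goal: str, max_rounds: int = 3) -> list:
    """Orchestrator: the agents pass feedback to each other directly and
    retry until review and tests pass, with no human in the loop."""
    tasks = plan(goal)
    for task in tasks:
        for _ in range(max_rounds):
            task.code = write_code(task)
            if review(task) == "ok" and run_tests(task):
                task.approved = True
                break
    return tasks

for task in build("Add CSV export to the reporting page"):
    print(task.approved, task.description)
```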
Capability 3: The Evaluation
For me, this layer is crucial because it focuses on autonomous quality assurance and validation. It involves creating agents designed to mimic how specific customers behave on your platform.
An e-commerce company, for instance, might set up two distinct evaluation agents. One could model a budget-conscious Gen Z shopper using a mobile device, while the other represents an older individual browsing the website on an iPad. Those agents will consume vast amounts of usage data to predict user behavior patterns and surface actionable insights.
Instead of running traditional A/B tests or expensive user testing panels on real people, the firm could first automatically run every new feature on these virtual agents, asking them to complete specific actions and provide feedback simultaneously. Now, think of testing every new feature not with two, but with hundreds of different agents, each designed to mimic a specific segment of your customers.
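A rough sketch of that idea in Python, with a hypothetical `simulate_session` helper standing in for an agent that would actually drive the app while acting out a persona:

```python
# Persona-based evaluation sketch; simulate_session returns canned results
# here so the example runs end to end.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    device: str
    traits: str        # free-text description the agent acts out

@dataclass
class EvalResult:
    persona: str
    completed_task: bool
    feedback: str

def simulate_session(persona: Persona, feature: str, task: str) -> EvalResult:
    """Stand-in for an agent session: load the feature, attempt the task
    as the persona, and report back what was experienced."""
    return EvalResult(persona.name, True,
                      f"Completed '{task}' in {feature} on {persona.device}.")

def evaluate_feature(feature: str, task: str, personas: list) -> list:
    """Run the same task against every persona before any real user sees it."""
    return [simulate_session(p, feature, task) for p in personas]

personas = [
    Persona("Budget-conscious Gen Z shopper", "mobile", "price-sensitive, scrolls fast"),
    Persona("Older shopper on an iPad", "tablet", "prefers large text, browses slowly"),
    # ...extend to hundreds of personas, one per customer segment
]

for result in evaluate_feature("new-checkout-flow", "buy a discounted item", personas):
    print(result.persona, "->", result.completed_task, "-", result.feedback)
```

Adding another customer segment then means adding another `Persona` entry, not building more test infrastructure.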
These three capabilities collectively form the complete self-driving software system. Additionally, the system will need robust orchestration and governance components to ensure all elements work cohesively. In that sense, Linear has been, for me, the true innovator of 2025. I deeply value Linear's decision to openly discuss their vision, sharing significant research and establishing core UX principles for designing effective human-agent interaction within software development. I had the privilege of being an early builder of an agent on the Linear platform, and I was genuinely impressed by how powerful yet simple the system is.
The evolution of this space is captivating. I dedicated a considerable amount of time and energy to getting up to speed on all of this in 2025, and I'm looking forward to what's coming.
