Written before the AI explosion, this piece predicted platform engineering and GitOps trends that are now table stakes. What it didn't see coming: LLM-powered pipelines.
In 2020, the DevOps tooling landscape felt like it was stabilizing. Kubernetes had won the container orchestration wars. GitHub Actions was making Jenkins feel old. Terraform was becoming the default language of infrastructure. The question wasn't which tools would survive — it was who would build the best integrations between them.
Five years later, most of those predictions held. But nobody predicted that the most disruptive force in DevOps tooling wouldn't be a new CI/CD platform or a smarter observability tool — it would be a large language model.
Let's start with what actually happened. Here's an honest look at the tools that cemented their place in the modern DevOps stack:
In 2020, "internal developer platforms" were something Google and Netflix did. By 2024, platform engineering was a job title at 50-person companies. The idea that DevOps teams should build products for their internal customers, not just pipelines, has become mainstream.
Tools like Backstage (developer portals), Port, and Cortex didn't exist in most DevOps conversations in 2020. Now they're increasingly central to how mature engineering orgs operate.
This is the big one. In 2020, "AI in DevOps" meant anomaly detection in monitoring tools. In 2025, it means assistants drafting and reviewing code in pull requests, agents generating and maintaining pipeline configuration, and responders triaging production incidents.
The pipeline itself hasn't changed structurally — build, test, deploy, monitor is still the loop. But every step in that loop now has an AI co-pilot.
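That unchanged loop can be sketched as a simple stage runner. The stage functions below are illustrative stand-ins, not real build or deploy commands; the point is that the loop's shape stays the same while what happens inside each stage changes:

```python
# A minimal sketch of the build-test-deploy-monitor loop.
# Each stage function is a placeholder for the real work: compiling
# artifacts, running the test suite, rolling out a release, and
# checking post-deploy health.

def build():   return True   # stand-in: compile and package artifacts
def test():    return True   # stand-in: run the test suite
def deploy():  return True   # stand-in: roll out the release
def monitor(): return True   # stand-in: check error rates after deploy

STAGES = [("build", build), ("test", test), ("deploy", deploy), ("monitor", monitor)]

def run_pipeline(stages):
    """Run stages in order, stopping at the first failure.

    Returns (completed_stage_names, failed_stage_name_or_None).
    """
    completed = []
    for name, step in stages:
        if not step():
            return completed, name
        completed.append(name)
    return completed, None
```

The co-pilot lives inside the stages (drafting the build config, suggesting tests, summarizing monitor output), not in the loop itself.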
DevSecOps went from a buzzword to a requirement. Supply chain attacks (SolarWinds, Log4Shell) changed how seriously organizations treat dependency scanning, SBOM generation, and secrets management. Tools like Snyk, Trivy, and HashiCorp Vault moved from "nice to have" to "required by compliance."
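As a toy illustration of one of those practices, here is a minimal secrets scanner in Python. The patterns are a simplified sample invented for this sketch; real scanners such as Trivy ship far larger, maintained rule sets:

```python
import re

# Illustrative secret-pattern scanner. The three rules below are a
# simplified sample, not a production rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token":  re.compile(r"(?i)(?:api|secret)[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_text(text):
    """Return a list of (rule_name, line_number) hits found in text."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((rule, lineno))
    return hits
```

Wired into CI as a required check, even a crude version of this blocks the most common leak: a credential pasted into a config file and committed.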
The next generation of CI/CD tools won't just run your pipeline — they'll help you design it. Imagine describing your deployment requirements in plain English and having the pipeline generated, optimized, and maintained by an AI agent. Early versions of this already exist. In three years, it'll be normal.
The on-call engineer who gets paged at 3am is going to be replaced — not by eliminating incidents, but by AI agents that triage, diagnose, and often resolve them before a human needs to wake up. The engineer's role shifts to reviewing what the agent did and improving the playbooks that guide it.
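To make the triage step concrete, here is a deliberately naive sketch. The alert fields, signal names, and remediations are invented for illustration; a real agent would draw on telemetry, runbooks, and a language model rather than a lookup table:

```python
# Hypothetical remediation playbook: signal name -> known fix.
KNOWN_REMEDIATIONS = {
    "OOMKilled": "restart pod with higher memory limit",
    "DiskPressure": "rotate and compress logs",
}

def triage(alert):
    """Decide what to do with an alert dict holding 'severity' and 'signal'.

    Returns (action, detail): auto-remediate known signals, page a human
    only for critical unknowns, and file a ticket for everything else.
    """
    signal = alert.get("signal")
    if signal in KNOWN_REMEDIATIONS:
        return "auto_remediate", KNOWN_REMEDIATIONS[signal]
    if alert.get("severity") == "critical":
        return "page_human", "no known playbook; waking the on-call"
    return "ticket", "non-critical and unrecognized; file for morning review"
```

The human only gets woken for the case the playbook cannot handle, which is exactly the review-and-improve role described above.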
OPA (Open Policy Agent) and tools like Kyverno are becoming the standard for enforcing compliance guardrails at the infrastructure level. As regulatory requirements around AI systems grow (EU AI Act, HIPAA for AI-generated insights), policy as code will expand from "security best practice" to legal requirement.
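The policy-as-code idea can be shown with a toy check in plain Python; OPA expresses this kind of rule declaratively in Rego, and Kyverno in YAML. The "every container must set resource limits" policy below is a common guardrail, written here only as an example:

```python
def check_resource_limits(manifest):
    """Return violation messages for a Kubernetes-style Deployment dict.

    Policy (illustrative): every container must declare both cpu and
    memory limits, so one workload can't starve the node.
    """
    violations = []
    containers = (
        manifest.get("spec", {}).get("template", {})
                .get("spec", {}).get("containers", [])
    )
    for c in containers:
        limits = c.get("resources", {}).get("limits", {})
        if "cpu" not in limits or "memory" not in limits:
            violations.append(
                f"container {c.get('name', '?')} is missing cpu/memory limits"
            )
    return violations
```

The value of OPA and Kyverno over a script like this is that the policy lives in version control as declarative data, is enforced at admission time, and produces an audit trail — which is what turns a best practice into something you can show a regulator.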
🔭 The pattern: Every five years, a new abstraction layer makes the previous one invisible. In 2010 it was VMs. In 2015 it was containers. In 2020 it was Kubernetes. In 2025, it's AI agents. The engineers who thrive are the ones who understand what's happening under the abstraction — and when to trust it.
The tools will keep changing. The fundamentals won't. Networking, storage, compute, security, observability — these pillars remain constant even as the tooling around them evolves.
What I'd add in 2025: learn to work with AI tools as a force multiplier, not a replacement. The engineers generating 10x output right now aren't smarter — they're better at directing AI tools, verifying their output, and owning the results.