If the video doesn’t load, you can watch it directly on YouTube.

Ask Your Questions

If you have any questions, please submit them anytime via Slido. We’ll answer them during the Q&A session.

AI can generate code fast, but reviews remain the bottleneck. In this talk, we’ll show how agentic workflows in the CLI can automate code generation and code reviews end-to-end. Learn how context from repo history, dependency graphs, and tests powers LLM-driven reviews that catch real bugs, unsafe refactors, and race conditions. You’ll walk away with patterns and demos you can apply to speed up PRs without leaving your terminal.

🎙️ Speaker:
Sahil M Bansal has worked in both PM and PMM roles in the cloud infrastructure and agentic AI space. Sahil has helped shape product narratives and drive adoption of AI-powered code review solutions and multi-cloud platforms. These days he works at the intersection of product, marketing, and developer experience, bringing customer stories to life and championing context-aware AI code reviews. Previously, Sahil spent over six years at Nutanix, leading product management and marketing for its Hybrid Multicloud Platform. Earlier, he was the first marketing hire at Netsil, a Kubernetes monitoring app, helping build its marketing strategy from the ground up.

Webinar partner

CodeRabbit is an AI code review platform that helps developers catch 50% more bugs and merge PRs 50% faster. CodeRabbit enriches the context behind code changes and uses the advanced reasoning capabilities of foundation LLMs to deliver high-quality AI code reviews. Catch bugs that other tools miss and ship features faster. CodeRabbit has committed $1 million to open source.