01%.
That’s the approximate reduction in friction a new integration between GitLab’s CLI (<a href="/tag/glab/">glab</a>) and AI agents could achieve, if early indicators hold. Forget the clunky copy-paste dance or AI hallucinating project details. GitLab’s glab command-line tool is now directly interfacing with AI models through the Model Context Protocol (MCP), offering a structured, reliable pipeline for machine intelligence to operate on your code. It’s less about AI assistants gaining access and more about them actually understanding your work without needing a human translator.
For too long, the promise of AI in the developer workflow has been hampered by a fundamental disconnect: the AI’s “brain” lives in a chat window or a separate tool, while the actual code and project context reside within platforms like GitLab. This disconnect forces developers to act as manual data conduits, copying snippets, summarizing issues, and manually feeding information back into the AI. It’s an absurdly inefficient bottleneck, especially when AI models are supposedly designed to reduce manual effort. The statistics on developer time spent on administrative tasks are already grim; adding AI-assisted friction to that is simply counterproductive.
The glab MCP Play: Cutting Out the Middleman
The core of this new functionality lies in MCP, an open standard designed to let AI tools discover and utilize external capabilities at runtime. When glab is hooked up to an MCP client, your AI assistant — be it Claude Code, Cursor, or another compatible tool — can bypass the UI entirely. It can read issues, comment on merge requests, check pipeline statuses, and write information back to GitLab, all programmatically. No more scraping web pages, no more stale training data. The AI gets real-time, structured access.
Running glab mcp serve is the surprisingly simple first step. Once your MCP client is configured, the AI can start asking meaningful questions instead of playing guessing games. Think: “What’s the status of my open MRs?” or “Are there any failing pipelines on main?” This isn’t just a minor convenience; it’s a fundamental shift in how AI interacts with development infrastructure. The AI now operates with direct knowledge of the project’s current state, not a fuzzy approximation.
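To make that concrete, here is a minimal sketch of what wiring this up looks like from the client side. The "mcpServers" key follows the convention used by clients such as Claude Desktop; the exact configuration schema varies between MCP clients, so treat the key names here as an assumption and check your client’s docs.

```python
import json

# Sketch of an MCP client configuration that launches glab as a
# stdio-based MCP server. The "mcpServers" layout follows the
# convention used by clients like Claude Desktop; exact schemas
# vary by client, so this is illustrative, not authoritative.
config = {
    "mcpServers": {
        "gitlab": {
            "command": "glab",
            "args": ["mcp", "serve"],
        }
    }
}

# Serialize it the way you would write it into the client's config file.
print(json.dumps(config, indent=2))
```

Once the client has this entry, it spawns `glab mcp serve` itself and discovers the exposed GitLab capabilities at runtime.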
Structured Data is King (Especially for Machines)
One of the smarter design choices here is glab’s automatic addition of --output json when invoked via MCP. This ensures that the AI receives clean, machine-readable data without any extra fuss. For commands that support it, this structured output is key. Developers know that dealing with poorly formatted or inconsistent data is a quick way to derail any automated process. By guaranteeing JSON output, GitLab is providing a reliable data stream that AI can parse and act upon with confidence.
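To see why guaranteed JSON matters, consider a fragment shaped like what a `--output json` invocation might return. The field names below are assumptions for the sketch, not a documented glab schema, but the point stands: with structured output, answering “what’s the status of my open MRs?” is a filter, not a scraping exercise.

```python
import json

# Illustrative payload shaped like JSON output from a glab MR listing.
# Field names here are assumptions for the sketch, not a documented schema.
raw = """
[
  {"iid": 2677, "title": "Add MCP server", "state": "opened", "draft": false},
  {"iid": 2680, "title": "Fix flaky pipeline", "state": "opened", "draft": true}
]
"""

mrs = json.loads(raw)

# Structured data makes the AI's query a one-liner instead of a
# fragile parse of terminal text or rendered HTML.
open_ready = [mr["iid"] for mr in mrs if mr["state"] == "opened" and not mr["draft"]]
print(open_ready)  # [2677]
```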
Furthermore, the team has been deliberate about which glab commands are exposed. Commands requiring interactive terminal input — the kind where the system waits for you to type something specific — are intentionally excluded. This prevents AI agents from getting stuck in an endless loop, waiting for human intervention that will never arrive. The exposed functionality is curated for reliability in an automated, agent-driven context.
Streamlining the Review Gauntlet
Merge request (MR) reviews are a notorious time sink. Developers often face a backlog of MRs, each with potentially multiple unresolved discussions. This is precisely where glab and AI can offer significant value. Instead of manually sifting through comments, developers can hand over the MR details to their AI agent.
Consider a command like glab mr view 2677 --comments --unresolved --output json. This single command returns the full MR, including metadata, description, and every unresolved discussion thread, as one structured JSON payload. Hand that to your AI and it has everything it needs: which threads are open, what each reviewer asked for, and the code context the feedback applies to. No tab-switching, no copy-pasting individual comments.
This structured data then allows the AI to provide a prioritized summary of necessary fixes, directly answering questions like “what do I still need to fix in MR 2677?” This isn’t just about speeding up reviews; it’s about making them more effective by ensuring critical feedback isn’t missed amidst the noise.
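The prioritization step itself is simple once the payload is structured. Here is a minimal sketch, assuming an illustrative discussion shape (the field names are not GitLab’s documented schema, and the “bug reports first” heuristic is just one possible ranking):

```python
import json

# Illustrative slice of the kind of payload an unresolved-comments
# query on an MR might emit. Field names are assumptions for this
# sketch, not GitLab's documented schema.
payload = json.loads("""
{
  "iid": 2677,
  "title": "Add MCP server",
  "discussions": [
    {"resolved": false, "author": "reviewer_a", "file": "cmd/mcp.go",
     "body": "nit: rename serveMCP to runServer"},
    {"resolved": false, "author": "reviewer_b", "file": "cmd/mcp.go",
     "body": "BUG: server never closes the stdio stream on exit"},
    {"resolved": true,  "author": "reviewer_a", "file": "README.md",
     "body": "typo fixed, thanks"}
  ]
}
""")

def open_threads(mr):
    """Return unresolved discussions, bug reports before nitpicks."""
    unresolved = [d for d in mr["discussions"] if not d["resolved"]]
    return sorted(unresolved, key=lambda d: "BUG" not in d["body"])

todo = open_threads(payload)
print(f"{len(todo)} threads left to fix in MR {payload['iid']}")
for d in todo:
    print(f"- {d['file']}: {d['body']}")
```

An agent wired up over MCP would do essentially this, then phrase the result as the prioritized summary the developer asked for.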
Once the feedback is addressed, the AI can also help manage the discussion threads. Commands like glab mr note list 456 --output json can list all discussions, and glab mr note resolve 456 3107030349 can mark them as resolved. This creates a clean, closed loop for issue resolution, all managed through programmatic interaction.
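The listing and resolving steps pair naturally. A minimal sketch of the closing half of the loop: given the discussion note IDs an agent has decided are addressed, emit the matching resolve commands (the second note ID below is illustrative; the first comes from the example above).

```python
# Closing the loop: given note IDs pulled from a JSON listing of an
# MR's discussions, build the matching resolve commands. The second
# note ID is illustrative, invented for this sketch.
mr_iid = 456
addressed_note_ids = [3107030349, 3107030412]

commands = [f"glab mr note resolve {mr_iid} {note_id}" for note_id in addressed_note_ids]
for cmd in commands:
    print(cmd)
# An agent would issue each of these (e.g. as an MCP tool call) only
# after the corresponding feedback has actually been addressed.
```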
A Look Ahead: Beyond Triage
While code review and issue triage are immediate beneficiaries, the implications extend further. The ability for AI agents to reliably query pipeline statuses, check project configurations, or even draft documentation based on live code context opens up a vast landscape of potential automations. We’re moving from AI as a chatbot that talks about code to AI as a system that can act on code, thanks to these foundational integrations.
This move by GitLab signals a broader trend: the increasing maturity of tools designed to bridge the gap between human developers and artificial intelligence. By providing structured, reliable interfaces like MCP, platforms are empowering AI to become a more integrated and effective partner, rather than just a glorified search engine. The data-driven analyst in me sees this as a logical, albeit significant, step in the evolution of developer tooling, reducing the friction that has historically held back truly intelligent automation.