{"id":5310,"date":"2025-07-21T04:21:52","date_gmt":"2025-07-21T04:21:52","guid":{"rendered":"https:\/\/lockitsoft.com\/?p=5310"},"modified":"2025-07-21T04:21:52","modified_gmt":"2025-07-21T04:21:52","slug":"two-kubernetes-creators-are-applying-their-expertise-to-agentic-ai-helping-it-become-governable-portable-observable-and-boring","status":"publish","type":"post","link":"https:\/\/lockitsoft.com\/?p=5310","title":{"rendered":"Two Kubernetes creators are applying their expertise to agentic AI, helping it become governable, portable, observable, and &quot;boring.&quot;"},"content":{"rendered":"<p>The burgeoning field of agentic artificial intelligence, poised to revolutionize enterprise operations, faces a critical hurdle: its inherent complexity and lack of robust governance are preventing widespread adoption. While the allure of highly capable AI agents that can perform complex tasks is undeniable, the path to integrating them safely and reliably into existing business infrastructures remains fraught with challenges. This is precisely the problem Stacklok, a new venture founded by Craig McLuckie and Joe Beda, the original architects of Kubernetes, aims to solve. Their mission is to bring the same principles of operational excellence, predictability, and control that made Kubernetes a cornerstone of cloud-native computing to the nascent world of agentic AI, ultimately aiming to make it &quot;boring&quot; for enterprises to deploy and manage.<\/p>\n<p>The concept of &quot;boring&quot; in this context is not a dismissal of innovation but rather an endorsement of reliability and predictability. Enterprises, particularly those in regulated industries like finance, healthcare, and government, require AI systems that can be trusted, governed, observed, and easily integrated into daily workflows without introducing undue risk. 
The current landscape is dominated by a proliferation of well-funded startups focused on showcasing the cutting-edge capabilities of AI models, often at the expense of the foundational work needed for enterprise-grade deployment. Stacklok positions itself as a counterpoint to this trend, focusing on the essential, often unglamorous, work of building a stable and manageable AI infrastructure.<\/p>\n<p><strong>The Genesis of Stacklok: From Container Orchestration to Agentic Governance<\/strong><\/p>\n<p>The formation of Stacklok in early 2023 by Craig McLuckie marks a significant strategic move. Joe Beda, McLuckie&#8217;s long-time collaborator from their days at Google and later at Heptio, had initially taken a semi-retired role in 2022. However, Beda was drawn back into active industry engagement by what he describes as an &quot;extraordinary moment in the industry.&quot; He recognized a profound opportunity to leverage their deep expertise in developer platforms and enterprise-grade infrastructure to address what he views as fundamental challenges in AI adoption.<\/p>\n<p>&quot;The biggest problem is accountability,&quot; states McLuckie, articulating the core thesis behind Stacklok. He elaborates, &quot;An agent, no matter how sophisticated, no matter how capable, no matter how useful, cannot be held accountable for the work it undertakes.&quot; This statement cuts to the heart of enterprise risk. While a sophisticated AI model can generate code, analyze documents, or initiate automated processes, the ultimate responsibility for its actions, particularly in cases of data breaches, unauthorized access, or operational failures, lies squarely with the enterprise. The notion of blaming the model itself is a non-starter in a business environment where liability and compliance are paramount.<\/p>\n<p>This realization mirrors a broader industry shift. 
Even leading AI labs like OpenAI, initially perceived as prioritizing raw model prowess, are increasingly acknowledging the enterprise imperative for AI to seamlessly integrate within existing operational frameworks, security controls, and deployment models. As Tom Krazit of Runtime.news has observed, the market is evolving beyond mere model capabilities to encompass the practicalities of enterprise deployment. This underscores a fundamental truth in IT: while businesses may purchase cutting-edge capabilities, they ultimately deploy and manage those capabilities through robust control mechanisms.<\/p>\n<p>Joe Beda further highlights the disruptive potential of AI&#8217;s speed. Tasks that once took days or weeks of human effort can now be accomplished in minutes by AI agents. This dramatic increase in velocity, while a boon for productivity, also amplifies the potential for errors. What might be considered minor sloppiness in a human-driven process can escalate into an operational catastrophe when scaled by AI. As Beda vividly puts it, &quot;The volume dial is going to 11 across the board.&quot; This rapid acceleration necessitates a renewed focus on fundamental security principles. Previously manageable issues related to identity, authorization, and auditability are no longer solely the domain of security teams; they are now architectural imperatives that must be addressed from the outset.<\/p>\n<p><strong>The Kubernetes Analogy: A Blueprint for Agentic AI Governance<\/strong><\/p>\n<p>The comparison of Stacklok&#8217;s mission to the impact of Kubernetes is not mere founder mythmaking; it serves as a powerful analogy for the transformative potential of bringing structured infrastructure principles to a nascent, chaotic technology domain. While many remember Kubernetes primarily as a container orchestration solution, its true enterprise value lay in its ability to provide a consistent operating model across diverse environments \u2013 on-premises, in the cloud, and at the edge. 
This common substrate facilitated the development of a rich ecosystem of tools for policy enforcement, security, observability, and workflow management. The Cloud Native Computing Foundation (CNCF) now reports that 82% of container users deploy Kubernetes in production, and critically, the organization itself frames Kubernetes as the de facto operating system for AI.<\/p>\n<p>McLuckie describes Kubernetes&#8217; core contribution as enabling &quot;self-determination&quot; for enterprises, offering a predictable and consistent platform regardless of deployment location. This consistency fostered trust and enabled the flourishing of an entire ecosystem.<\/p>\n<p>Beda elaborates on this by referencing a core tenet of Kubernetes: the declarative approach. &quot;One of the core ideas in Kubernetes is that you describe what you want to happen, and then you have the system go make it happen,&quot; he explains. This translates into what Beda terms &quot;control theory rendered into software.&quot; Over time, an enterprise&#8217;s desired operational state is codified, version-controlled, and integrated into systems traceable back to accountable human operators. While this process may sound &quot;nerdy and sort of dull,&quot; it is precisely the point. Enterprise AI does not simply require more intelligent models; it demands systems where human intent is clearly declared, machine execution is reliable, and the entire process remains transparent and auditable.<\/p>\n<p>This strategic focus on the &quot;control plane&quot; is what makes Stacklok&#8217;s approach particularly relevant. The company is explicitly targeting the layer responsible for managing and governing AI agents, rather than focusing solely on the models themselves. Their core proposition is that enterprises, already invested in Kubernetes infrastructure, will find it far more efficient and secure to manage agentic AI within that familiar environment. 
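<\/p>\n<p>Beda&#8217;s declarative model is easy to make concrete with an ordinary Kubernetes manifest. The sketch below is generic and purely illustrative (the workload name and image are invented, and it is not a Stacklok artifact): an operator declares the desired state in version-controlled YAML, and Kubernetes&#8217; controllers continuously reconcile the cluster toward that state.<\/p>\n
```yaml
# Illustrative manifest: declare what you want; the control loop makes it so.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agent-gateway            # hypothetical workload name
  labels:
    app: agent-gateway
spec:
  replicas: 3                    # desired state: three running copies
  selector:
    matchLabels:
      app: agent-gateway
  template:
    metadata:
      labels:
        app: agent-gateway
    spec:
      containers:
        - name: gateway
          image: example.org/agent-gateway:1.0   # placeholder image
          ports:
            - containerPort: 8080
```
\n<p>If a pod crashes or a node disappears, the controller notices the divergence from the declared state and restores it &#8211; the same reconciliation pattern the founders argue should underpin agent governance.<\/p>\n<p>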
This means integrating essential capabilities such as policy management, identity verification, resource isolation, and robust observability directly into the Kubernetes framework, rather than as afterthoughts.<\/p>\n<p><strong>Addressing the Limitations of Protocols: From MCP to a Full-Fledged Platform<\/strong><\/p>\n<p>The recent emergence of the Model Context Protocol (MCP) by Anthropic, later donated to the Linux Foundation&#8217;s Agentic AI Foundation, represents a significant step towards interoperability in agentic AI. MCP provides an open standard for connecting AI systems with tools and data, fostering wider adoption. Anthropic reports a substantial uptake, with over 10,000 active public MCP servers and integration across major AI platforms like ChatGPT, Gemini, and Microsoft Copilot.<\/p>\n<p>While MCP is a commendable advancement, Stacklok&#8217;s founders emphasize that a protocol alone is insufficient for enterprise needs. A protocol can enable communication between an agent and a tool, but it does not inherently address critical enterprise concerns such as:<\/p>\n<ul>\n<li><strong>Authorization:<\/strong> Who approved this agent to perform this action?<\/li>\n<li><strong>Data Governance:<\/strong> What sensitive data can this agent access, and under what conditions?<\/li>\n<li><strong>Auditability:<\/strong> How are the agent&#8217;s actions logged and tracked for compliance and forensic purposes?<\/li>\n<li><strong>Lifecycle Management:<\/strong> How can an agent be safely deactivated or controlled when its purpose is complete or when its human overseer is no longer involved?<\/li>\n<\/ul>\n<p>This is where Stacklok&#8217;s emphasis on a Kubernetes-native, self-hosted solution becomes strategically advantageous. For enterprises dealing with sensitive data, the prospect of that data egressing their secure environment or being sent to a third-party SaaS endpoint controlled by a vendor is a significant concern. 
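<\/p>\n<p>The four gaps listed above are concrete enough to sketch in code. The following minimal example is purely illustrative &#8211; every identifier is invented, and it is not Stacklok&#8217;s or MCP&#8217;s API &#8211; but it shows the shape of the authorization and audit checks a governance layer would wrap around each agent tool call:<\/p>\n
```python
# Hypothetical sketch of control-plane checks around an agent's tool call.
# All names (POLICY, authorize_and_audit, ...) are invented for illustration.
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an append-only, tamper-evident audit store

# Policy: which tools each agent may use, and the accountable human owner.
POLICY = {
    'report-agent': {'allowed_tools': {'read_sales_db'}, 'owner': 'alice'},
}

def authorize_and_audit(agent_id, tool, requested_by):
    entry = POLICY.get(agent_id)
    allowed = entry is not None and tool in entry['allowed_tools']
    # Auditability: record every decision with a traceable human owner.
    AUDIT_LOG.append({
        'time': datetime.now(timezone.utc).isoformat(),
        'agent': agent_id,
        'tool': tool,
        'requested_by': requested_by,
        'owner': entry['owner'] if entry else None,
        'decision': 'allow' if allowed else 'deny',
    })
    return allowed

# Authorization: only pre-approved tools pass; anything else is denied and logged.
print(authorize_and_audit('report-agent', 'read_sales_db', 'alice'))  # True
print(authorize_and_audit('report-agent', 'delete_table', 'alice'))   # False
```
\n<p>Lifecycle management would layer on top of the same policy store: deleting an agent&#8217;s entry revokes every tool at once, while the audit log preserves the record of what it did.<\/p>\n<p>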
As McLuckie bluntly states, &quot;If you&#8217;re an enterprise connecting agents to sensitive data, you are almost certainly not comfortable with that data egressing your security domain or being sent to a SaaS endpoint that a vendor controls.&quot; The historical precedent of vendor lock-in, where reliance on a single provider for hosting, identity, tool integration, and policy management limits future options, is a scenario enterprises actively seek to avoid.<\/p>\n<p><strong>Embracing Open Source for Enterprise Leverage and Neutrality<\/strong><\/p>\n<p>Stacklok&#8217;s approach also incorporates a strategic use of open source, not as an ideological banner, but as a pragmatic tool for providing enterprises with leverage and choice in a rapidly evolving market. While open source doesn&#8217;t automatically guarantee market power, it can empower customers by offering options and a degree of control over their technological destiny. In the AI landscape, where the costs and complexities of switching models are still relatively low, this optionality is a critical factor for businesses.<\/p>\n<p>McLuckie and Beda are described as &quot;open source true believers,&quot; but their approach is nuanced, recognizing that enterprises prioritize simplicity and practicality over ideological pronouncements. The goal is to provide enough neutrality to prevent organizations from becoming trapped as the market continues to mature and shift.<\/p>\n<p>The company&#8217;s strategy is centered on &quot;meeting enterprises where they are.&quot; This means enabling them to incrementally adopt AI capabilities using familiar technologies such as containers, isolation techniques, OpenTelemetry for observability, Kubernetes, existing identity management systems, and established observability stacks. 
The vision is not for enterprises to implement a grand, theoretical framework for fully autonomous operations overnight, but rather to provide an &quot;accretive (golden) path&quot; to AI integration.<\/p>\n<p><strong>The &quot;Boring&quot; Path to Enterprise AI Adoption<\/strong><\/p>\n<p>The deliberate pursuit of &quot;boring&quot; in enterprise AI is a strategic imperative, not a compromise. The opposite of &quot;boring&quot; in this context is not innovation, but rather &quot;slideware&quot; or &quot;demoware&quot; \u2013 solutions that appear impressive in presentations but falter when confronted with the realities of enterprise procurement, security reviews, compliance mandates, and the messy nature of real-world data. McLuckie encapsulates this distinction: &quot;Vibe-coding a platform for two weeks can produce something plausible. It won\u2019t produce something accurate, hardened, or enterprise-grade.&quot;<\/p>\n<p>While it is premature to definitively state whether Stacklok will become the definitive provider for this critical AI governance layer, their focus on the right problem is a significant advantage. The current AI industry is replete with talented individuals who may have had accurate insights but struggled with commercial execution. Stacklok&#8217;s strategic alignment with core enterprise needs positions them favorably compared to a considerable portion of the AI sector.<\/p>\n<p>The next era of enterprise AI adoption will likely be defined by the entities that can make AI agents governable, portable, observable, and, crucially, &quot;boring&quot; enough to foster trust. Just as Kubernetes provided the foundational infrastructure for cloud-native computing, Stacklok is betting that a similar playbook can be applied to agentic infrastructure. 
This is not merely a nostalgic echo of Kubernetes&#8217; success; it is a recognition that enterprises consistently require not more technological magic, but rather a reliable and controllable framework to harness AI&#8217;s power. The demand remains for control, not just capability.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The burgeoning field of agentic artificial intelligence, poised to revolutionize enterprise operations, faces a critical hurdle: its inherent complexity and lack of robust governance are preventing widespread adoption. While the allure of highly capable AI agents that can perform complex tasks is undeniable, the path to integrating them safely and reliably into existing business infrastructures &hellip;<\/p>\n","protected":false},"author":3,"featured_media":5309,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[71],"tags":[292,570,572,474,72,467,74,469,471,571,73,466,473,472],"class_list":["post-5310","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cloud-computing","tag-agentic","tag-applying","tag-become","tag-boring","tag-cloud","tag-creators","tag-devops","tag-expertise","tag-governable","tag-helping","tag-infrastructure","tag-kubernetes","tag-observable","tag-portable"],"_links":{"self":[{"href":"https:\/\/lockitsoft.com\/index.php?rest_route=\/wp\/v2\/posts\/5310","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lockitsoft.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lockitsoft.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lockitsoft.com\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/lockitsoft.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5310"}],"version-history":[{"count":0,"href":"https:\/\/lockitsoft.com\/index.php?rest_route=\/wp\/v2\/posts\/5310\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lockitsoft.com\/index.php?rest_route=\/wp\/v2\/media\/5309"}],"wp:attachment":[{"href":"https:\/\/lockitsoft.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5310"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lockitsoft.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5310"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lockitsoft.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5310"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}