
KubeCon EU 2026: what actually mattered

KubeCon had no shortage of announcements, but the interesting part was not the volume. It was the convergence.

Across keynotes, maintainer updates, end-user sessions, and lightning talks, the same themes kept reappearing from different angles: platform teams acting more like product teams, AI moving into operational workflows, policy and governance becoming more automated but also more bounded, and internal platforms shifting away from ticket queues toward self-service APIs and reusable capability marketplaces.

If I strip away the noise, this is what I would actually take back to a platform team.

AI-driven GitOps visual

Crossplane v2.0 visual

Agentic Backstage visual

The biggest takeaways

1. AI moved from novelty to operations

The strongest examples were not "AI writes code" demos. They were operational workflows where AI reduced real toil.

In the MCP + Argo CD talk, the standout point was this: teams got better adoption when they put AI in existing support channels, not in yet another portal tab. That sounds obvious in hindsight, but it is easy to miss when teams are excited by shiny UI work.

The same pattern showed up elsewhere too:

  • Backstage is moving toward a multi-surface operating model across UI, CLI, and MCP tooling.
  • Kyverno and Kagent showed policy operations becoming agent-assisted and closed-loop.
  • TAG DevEx is explicitly studying AI-assisted development based on real usage data rather than hype.
  • Multiple sessions treated agent experience as a real platform design concern, not just a gimmick.

If your engineers already troubleshoot in Slack, keep them there. Bring the capability to their workflow.

2. Platform APIs beat ticket queues

The Crossplane session made this painfully clear. Most teams still lose days waiting for routine infra changes because requests are encoded as tickets, not APIs.

Crossplane v2.0’s project model is a pragmatic step forward:

  • one project for API definitions, composition logic, and tests
  • a local development loop that is fast enough to use daily
  • a clearer path from platform intent to production behaviour
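To make that concrete, here is a minimal sketch of the kind of platform API a team might publish. The group and kind names are hypothetical, not from the session; the syntax is Crossplane's standard CompositeResourceDefinition.

# illustrative XRD; platform.example.org and XDatabase are hypothetical names
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xdatabases.platform.example.org
spec:
  group: platform.example.org
  names:
    kind: XDatabase
    plural: xdatabases
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: string
                  enum: [small, medium, large]
              required: [size]

Developers request a database by size; the composition behind the API decides what that means per environment.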

The big shift is cultural as much as technical: platform teams stop shipping "internal services" and start shipping products.

That same lesson was reinforced even more directly by the self-service platform session from DigitalOcean. Their core point was sharp: Kubernetes was not the bottleneck; the operating model was. Once environment creation moved from a ticket queue to Backstage + Argo Events + Argo Workflows + virtual clusters + Kyverno guardrails, provisioning dropped from days to minutes.

This was one of the clearest signals of the conference: self-service is no longer a vague aspiration. It is a concrete architecture pattern.

3. Developer portals are becoming conversational and multi-surface

Backstage sessions pointed in one direction: discovery and self-service are better when they feel like a conversation, not a form wizard.

That does not mean deleting structure. It means using structure behind the scenes while exposing a simpler interface to engineers.

The practical takeaway is to improve catalogue quality first. If metadata is stale, any agent layer will surface the mess faster.
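As a concrete baseline, healthy catalogue metadata looks something like the entry below. The component and team names are hypothetical; the shape is Backstage's standard catalog-info.yaml.

# catalog-info.yaml - illustrative entry; component and owner names are hypothetical
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-api
  description: Handles payment processing for checkout
  annotations:
    github.com/project-slug: myorg/payments-api
spec:
  type: service
  lifecycle: production
  owner: team-payments
  system: payments

If owner and lifecycle are wrong here, every surface built on top, human or agent, inherits the error.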

The more mature version of this idea came from the Backstage maintainer update: platform teams should stop thinking in terms of a single portal UI and start thinking in terms of one platform model exposed through multiple operating surfaces. That is a much stronger design direction than just adding chat to an existing screen.

4. Governance is becoming a workflow, not just a rule set

One of the stronger late-conference themes was that policy is no longer just admission control.

Kyverno’s direction, especially in the multi-cluster governance talk, was toward full policy lifecycle management:

  • validation
  • mutation
  • generation
  • image verification
  • reporting
  • exemptions
  • cleanup
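For the validation stage, the building block is familiar. A minimal sketch, assuming a hypothetical rule that every Deployment must carry a team label:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label    # hypothetical policy, for illustration only
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Every Deployment must declare an owning team."
        pattern:
          metadata:
            labels:
              team: "?*"    # any non-empty value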

What made the session useful was the framing around operator workflow. The real problem was not only writing policies. It was checking them, testing them, troubleshooting them, and doing that across many clusters without drowning in manual work.

That is where agentic tooling becomes genuinely useful: not replacing deterministic policy enforcement, but reducing the human overhead around it.

5. Observability is getting more contextual

The Crossplane metrics discussion landed well because it moved beyond aggregate counts.

"15 clusters unhealthy" is not enough. Teams need: - which clusters - which team owns them - how long they have been degraded - what changed around the same time

This is where label strategy and cardinality discipline matter. Better questions, not just more dashboards.
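One way to encode that discipline is to make ownership a label that rides along with the alert, rather than something a human looks up mid-incident. A sketch using the prometheus-operator PrometheusRule CRD; the metric name is a stand-in for whatever your control plane actually exports:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cluster-health-by-owner
  namespace: monitoring
spec:
  groups:
    - name: cluster-health
      rules:
        - alert: ClusterDegraded
          # cluster_health_status is hypothetical; substitute your real health metric
          expr: 'max by (cluster, team) (cluster_health_status{phase="degraded"}) == 1'
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "{{ $labels.cluster }} (owned by {{ $labels.team }}) degraded for 15m+"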

6. Platform engineering has clearly become a measurement problem

The Norwegian public-sector platform maturity talk was one of the better reality checks of the week.

The adoption battle has clearly been won. Internal developer platforms and Kubernetes are widespread, and tooling is converging even without central mandates. But measurement still lags. Teams know they are building platforms; they are less clear on how to prove those platforms are successful.

That gap matters. If teams cannot define what success looks like, they cannot prioritise the right improvements, justify investment, or distinguish real self-service from internal process theater.

7. The edge story is no longer theoretical

The CERN electric glider keynote reminded everyone that cloud native tooling now runs in places where power, network and weather are all hostile constraints.

That is relevant even if you do not run aircraft projects. The same patterns apply to factories, field devices and remote operations where assumptions from datacentre life break down quickly.

The telecom keynote made the same point from another angle: cloud native is no longer confined to datacentres and web apps. It is becoming part of how large, geographically distributed, highly specialised infrastructure is operated.

Durable trends

These were the trends that looked durable rather than fashionable.

1. Governed autonomy is replacing both central control and platform anarchy

The strongest sessions were not advocating fully autonomous systems. They were describing bounded autonomy:

  • AI in support channels with approval gates
  • action registries and standardised execution layers
  • policy engines underneath intelligent orchestration
  • self-service with declarative guardrails

This is a much more credible model than either “humans must approve everything manually forever” or “agents will just run the platform.”

2. Internal platforms are becoming capability marketplaces

Abby Bangser’s keynote was especially clear on this point. Platform teams cannot scale simply by adding more platform engineers. The longer-term answer is to let domain experts publish reusable platform capabilities while the platform team owns the standards, interfaces, and operating model.

That is a significant evolution from the older internal-platform-as-a-central-team model.

3. Kubernetes is expanding from container runtime substrate to broader systems operating layer

This showed up in multiple places:

  • accelerated and AI workloads
  • multiplayer gaming with Agones
  • edge and telecom environments
  • virtual clusters for developer environments

Kubernetes is no longer interesting only because it schedules containers. It is interesting because it provides a stable operational contract across increasingly different workload types.

4. The quality of metadata and interfaces is becoming a first-order platform concern

Backstage, agentic tooling, software catalog discussions, and even policy skills all pointed to the same issue: if metadata is stale, interfaces are inconsistent, or documentation is only designed for humans, the platform becomes harder to automate safely.

That is why catalog quality, policy reports, ownership labels, and machine-readable contracts kept resurfacing.

Exciting developments worth watching

These were the developments that felt most likely to compound over the next 12-18 months.

1. Backstage as a true multi-surface control plane

This is more interesting than “Backstage with AI.” If the action registry, catalog, permissions, CLI, and MCP surfaces keep converging, Backstage becomes a much more serious operational layer for internal platforms.

2. Closed-loop policy governance

Kyverno plus agentic orchestration is still early, but the direction is right: operators asking for reports, checks, installs, and remediation in one bounded flow rather than moving manually across ten tools.

3. Ticketless self-service environments with hard guardrails

The DigitalOcean session was one of the most practically useful examples at the conference because it showed a clear before-and-after operating model. It is easy to imagine many teams copying that architecture directly.

4. Platform engineering standards work getting more operational

TAG DevEx and the wider platform engineering community are moving from broad theory into scoped, measurable initiatives and reusable models. That is a healthier sign than another year of generic “platforms are products” slogans.

5. AI infrastructure normalising as a platform concern

The accelerator-native keynote, GPU fragmentation lightning talk, and agent-experience keynote all reinforced the same thing: AI is no longer a special side track. It is becoming another platform domain that needs scheduling, governance, observability, and usable interfaces.

What to do on Monday

If I had to boil this down into an action plan:

  1. Pick one high-friction operational workflow and reduce it with bounded automation rather than adding a new portal surface.
  2. Define one platform capability your developers request repeatedly and expose it as a productised API or template instead of a ticket.
  3. Tighten service catalog metadata, ownership fields, and machine-readable platform contracts before layering in more agents.
  4. Measure platform success explicitly: time to environment, ticket volume, adoption, failure rate, and developer satisfaction.
  5. Package repeated troubleshooting and governance work into reusable workflows, skills, or paved roads.
  6. Audit where your current platform still depends on specialist memory rather than sharable operational capability.
  7. Treat AI platform integration as a governance and interface design problem, not only a tooling problem.

None of this needs a big-bang rewrite. Small, boring improvements compound fast.

Where to go deeper

Platform operating models

AI, agents, and governance

Platform APIs and delivery systems

Developer surfaces and control planes

Official repos and resources

These are the primary project links that map most directly to the talks and posts above.

Platform APIs, GitOps, and self-service

Policy, agents, and governed autonomy

Observability, performance, and runtime operations

Broader cloud-native operating patterns

Full source notes

AI-driven GitOps with MCP and Argo CD


This was one of those talks where the demo felt uncomfortably close to real life. You could see the practical value straight away: fewer support bottlenecks, faster incident triage, and less context-switching for developers.

Quick takeaways

  • Put AI where engineers already work (Slack/Teams), not in a new interface.
  • MCP makes Argo CD actions discoverable and reusable for different AI clients.
  • Start with operational tasks where feedback is fast: deploy, check health, roll back.
  • Capture troubleshooting logic once and reuse it through Agent Skills.

What was getting in the way

At KubeCon EU 2026, Alexander Matyushentsev (Argo CD co-founder) and Leonardo Luz Almeida (Intuit) laid out the problem clearly: managing 350+ Kubernetes clusters with 3,000+ production services across 50,000+ namespaces creates massive support bottlenecks. Developers wait in Slack channels for troubleshooting help. Expert DevOps engineers drown in repetitive diagnostic work. Traditional UI extensions fail because they don't meet users where they already are.

What we actually wanted

An AI-powered GitOps experience where developers use natural language to deploy applications, troubleshoot production issues, and automatically roll back failures - all through the tools they already use daily (Slack, Claude, Copilot).

Architecture in one view

MCP + Argo CD container view

Model Context Protocol (MCP): the universal connector

MCP is a standardised bridge between AI clients and services like Argo CD. Instead of building custom integrations for every LLM, MCP provides:

  • JSON-RPC protocol over stdio or HTTP (Server-Sent Events or polling)
  • Discovery API exposing tools (functions), resources (read-only data), and prompts (guidance text)
  • Token passthrough for authentication - MCP delegates auth to underlying services
  • One-to-one tool mapping with Argo CD CLI/UI capabilities (sync, inspect, logs, manifests)
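Under the hood, a tool invocation is plain JSON-RPC. tools/call is the standard MCP method; the tool name and arguments below are hypothetical, for illustration only:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "sync_application",
    "arguments": { "name": "frontend", "revision": "main" }
  }
}

Any MCP-aware client - Slack bot, Claude, an IDE agent - can discover that tool and call it the same way, which is what makes the integration reusable.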

Open Source MCP for Argo CD

Available at github.com/argoproj-labs/argocd-mcp with growing community adoption (#mcp-for-argocd on CNCF Slack with 16 members at conference time).

Three use cases that worked in practice

1. Natural Language Application Creation

Before: Fill out complex forms, specify manifest repos, branches, namespaces, sync policies
After: "Create an app called 'frontend' using the manifests from github.com/myorg/apps, main branch, deploy to production namespace"

The AI agent discovers available MCP tools, constructs the Argo CD Application resource, and deploys - faster than manual creation.

2. Batch Deployment from Git Directories

Prompt: "Connect to my GitOps repo and create an Argo CD application for each directory under /apps"

The agent:

  • Scans the Git repository structure
  • Identifies manifest directories
  • Creates multiple applications automatically
  • Implements app-of-apps patterns without manual intervention

This is genuinely faster than human assembly for repetitive structures.

3. Automated Deployment with Intelligent Rollback

Prompt: "Deploy version 2.0 of my service, monitor its health, and roll back to the previous version if it degrades"

The agent:

  1. Syncs the new version
  2. Continuously checks application health via Argo CD API
  3. Analyzes degradation patterns from status conditions
  4. Automatically reverts manifests on failure detection

This reduces Mean Time To Recovery (MTTR) from hours to minutes.

Intuit's production journey

Failed Experiment: Argo CD UI Extension

Intuit initially built an AI-powered troubleshooting extension directly in the Argo CD UI:

  • Extracted logs, Kubernetes events, live state, desired state via Argo CD API
  • Provided LLM-powered root cause analysis
  • Result: poor adoption - experts went straight to logs, novices never opened Argo CD

Key Lesson: Don't build new interfaces; integrate where users already are.

Breakthrough: Slack Bot Integration

Moving AI troubleshooting into existing Slack support channels achieved dramatically higher engagement:

Example 1: Stack Trace Analysis
Developer: "I'm seeing warnings in argo logs, not sure if critical"
Bot: "Not critical. The class is attempting to cast byte array during schema validation. Check implementation at line 76, ensure serialization order is: byte array → deserialized logic → schema validation."

Example 2: Multi-Failure Triage
Developer: "Production deployment issue"
Bot: "I see TWO failures: (1) Recent deployment can't reach config server, (2) Current running version is degraded. Which should I investigate?"
Developer: "Current failure"
Bot (2.5 min later): "Application cannot retrieve configuration from Spring config server due to connection issue. Root cause: App configured with e2e environment URL while this is production. Update URL from config-e2e.company.com to config-prod.company.com"

Reverse Proxy Pattern

A single MCP server acts as a facade for 40+ Argo CD instances, simplifying:

  • developer access management
  • service-to-service communication
  • security boundaries and token distribution

Agent Skills: The Next Evolution

To avoid duplicating troubleshooting logic across UI extensions, bots, and CLI tools, Intuit is experimenting with Agent Skills - reusable markdown-based diagnostic recipes.

Skills define:

  • how to extract Argo CD base URLs and application names
  • step-by-step troubleshooting procedures for degraded apps
  • which API calls to make and how to interpret responses

This creates a library of operational knowledge that any agent can consume.
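The session did not publish a full skill, but based on that description, a diagnostic recipe might look roughly like this; the frontmatter fields and steps are illustrative:

---
name: argocd-degraded-app
description: Diagnose a degraded Argo CD application and suggest a fix
---

1. Extract the Argo CD base URL and application name from the user's message.
2. Fetch application status from the Argo CD API and note sync and health conditions.
3. If the app is OutOfSync, diff the desired manifests against live state.
4. Pull recent Kubernetes events for the target namespace.
5. Summarise the likely root cause and propose a fix or rollback.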

Implementation Guide

For platform teams

  1. Deploy MCP for Argo CD from argoproj-labs
  2. Configure HTTP transport for remote access (OAuth 2.0 or token passthrough)
  3. Integrate with existing support channels (Slack, Teams) rather than building standalone UIs
  4. Create prompts that instruct AI on common failure patterns:
     • image pull errors from Kubernetes events
     • resource limit violations from quota checks
     • config server misconfigurations from environment mismatches

For developers

  1. Test free-form application creation with natural language
  2. Experiment with batch operations via Git directory scanning
  3. Implement automated health monitoring with rollback conditions
  4. Measure engagement: track before/after metrics for bot interactions vs UI usage

For organisations

  1. Document troubleshooting workflows as Agent Skills in markdown format
  2. Share diagnostic logic across tools to eliminate duplication
  3. Consider MCP for other GitOps ecosystem tools (Flux, Argo Rollouts, External Secrets Operator)

What changed in practice

Before: Weeks-long support backlogs, expert knowledge bottlenecks, manual deployments
After: Conversational GitOps operations, automated diagnostics, self-service at scale

The convergence of GitOps and AI through standardised protocols like MCP changes how platform teams operate. By meeting users where they are and encoding operational knowledge in reusable formats, teams move from "Platform as Code" to "Platform as Conversation".

Presented at KubeCon + CloudNativeCon Europe 2026 by Alexander Matyushentsev (Akuity) and Leonardo Luz Almeida (Intuit)

Setting Up OpenClaw: Skills, Tailscale, GitHub Config Sync, and Copilot


A few months ago we onboarded a new environment “the fast way.” It worked… until it didn’t. No clear rollback, no consistent skills, and no shared way to answer the basic question: what’s running, where, and why?

This guide fixes that. It’s a clean Day‑1 setup that gets OpenClaw running securely, reproducibly, and in a way you can scale.


Principles (the why)

  • Secure access by default (no exposed ports)
  • Reproducible configuration (GitHub‑backed)
  • Productive workflows (skills + Copilot)

Architecture at a glance

OpenClaw system context and container view


1) Install OpenClaw (quick path)

Use the official docs for your platform. If you want a guided setup with scripts, use the companion repo:

https://github.com/polarpoint-io/ai-capabilities

Ubuntu (reference setup)

sudo apt-get update
sudo apt-get install -y git curl ca-certificates

Install OpenClaw via the official docs, then verify:

openclaw status
openclaw gateway status
openclaw gateway start

2) Bring up Tailscale (secure access)

Tailscale gives you private, encrypted access without opening ports publicly.

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

Tip: tag the host and lock it down with ACLs.


3) Use the setup scripts (fast, repeatable)

The ai‑capabilities repo includes setup scripts for both Linux and macOS:

# Linux
./scripts/setup-linux.sh

# macOS
./scripts/setup-macos.sh

These scripts:

  • ensure Tailscale is installed and online
  • start the OpenClaw gateway

Snippet (Linux):

openclaw gateway start
openclaw gateway status

4) Enable your first skills

Installed skills right now:

  • self-improving-agent - captures learnings from failures and feedback
  • ddg-web-search - web search without an API key
  • github - GitHub ops via gh
  • kubernetes - cluster operations and manifests

List skills:

clawhub list

Install a new skill:

clawhub install <skill-name>

5) Sync config with GitHub

Store non‑secret config in GitHub for reproducibility.

Suggested layout:

openclaw-config/
  README.md
  openclaw.json (sanitized)
  skills/
  notes/

Never commit secrets. Use env vars or a secret manager instead.


6) Pair with GitHub Copilot

Copilot helps with:

  • docs/blog drafts
  • refactors and cleanup
  • repetitive config work

Workflow: 1) OpenClaw automates 2) Copilot speeds edits 3) You review + ship


7) Validate the setup

Run these checks:

openclaw status
tailscale status
clawhub list

Quick checklist

  • ✅ OpenClaw running
  • ✅ Tailscale connected
  • ✅ Skills installed
  • ✅ Config synced (sanitized)
  • ✅ Copilot ready

Common pitfalls (and fixes)

  • Gateway not running → openclaw gateway start
  • Tailscale offline → tailscale up
  • Skills missing → clawhub list, reinstall
  • Secrets in Git → move to env vars/secret manager
  • No rollback plan → add a rollback note to openclaw-config/README.md

Next steps

Explore runnable examples and scripts in the companion repo: https://github.com/polarpoint-io/ai-capabilities

Secrets & GitOps: ArgoCD + External Secrets Done Right


What was getting in the way

GitOps worked - until secrets showed up. Teams either committed secrets or blocked releases.

What we actually wanted

A GitOps flow that keeps secrets outside of Git and still automates deployments.

How we approached it

External secrets context and guardrails flow

The Secure Pattern

  • Git holds ExternalSecret manifests only
  • ESO pulls from Vault/SSM/Secrets Manager
  • ArgoCD syncs manifests, ESO resolves secrets
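A minimal sketch of the pattern using the External Secrets Operator API; the store name and Vault path are hypothetical:

# Git holds this manifest; the secret value itself never leaves Vault
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend         # hypothetical ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: app-credentials       # the Kubernetes Secret ESO creates
  data:
    - secretKey: password
      remoteRef:
        key: prod/app           # hypothetical Vault path
        property: password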

Files worth opening

  • repo/gitops/secrets/secretstore.yaml
  • repo/gitops/secrets/externalsecret.yaml

What changed in practice

Teams can ship GitOps changes without ever touching sensitive data. Security teams gain control without blocking delivery.

Multi‑Cluster GitOps with ArgoCD: The Operational Blueprint


One cluster is manageable. Two is fine. Five is where things start getting interesting. Somewhere around ten, you're not managing Kubernetes anymore — you're managing the management of Kubernetes, and that's a different job entirely.

The pattern most teams fall into is depressingly predictable. You start with one ArgoCD instance, one cluster, everything works great. Then a new environment lands — staging needs to be production-like, a regional cluster gets spun up, a client needs isolation. Each new cluster gets a copy of the configuration from the last one, tweaked by hand. Promotions happen by copying YAML between directories and hoping nothing was missed. Six months in, you've got seven clusters and no real confidence they're running the same thing.

Config drift isn't a discipline problem. It's an architectural one. You need a model that makes consistency the default, not something you have to manually enforce.
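ApplicationSets are one way to make consistency the default. A sketch using the cluster generator, so every cluster registered with Argo CD receives the same baseline; the repo URL is reused from the walkthrough later in these notes and the path is hypothetical:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-baseline
  namespace: argocd
spec:
  generators:
    - clusters: {}              # one Application per registered cluster
  template:
    metadata:
      name: 'baseline-{{name}}'
    spec:
      project: platform
      source:
        repoURL: https://github.com/polarpoint-io/platform-apps
        targetRevision: main
        path: baseline          # hypothetical shared baseline directory
      destination:
        server: '{{server}}'
        namespace: platform-system
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

Adding cluster number eight then means registering it, not copying and tweaking YAML from cluster seven.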

GitOps as a Product: Building Self‑Service with ArgoCD


Every platform team reaches the same inflection point. The Kubernetes clusters are stable, GitOps is working, ArgoCD is syncing everything cleanly — and then a developer submits a ticket asking the platform team to onboard their new service. Then another. Then a queue forms.

You've built good infrastructure. But it's not a product yet. It's a service that requires the platform team to be in the loop for every new workload. That doesn't scale, and it creates exactly the kind of toil that drains platform engineers and slows down everyone else.

The fix isn't to work faster. It's to make onboarding something developers can do themselves — within guardrails the platform team controls.
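One well-trodden shape for this is an ApplicationSet with a Git directory generator: a developer onboards a service by opening a PR that adds a directory, while the project, destination, and sync policy stay fixed in a template the platform team owns. A sketch, reusing the repo layout from the standardisation walkthrough below:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: team-onboarding
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/polarpoint-io/platform-apps
        revision: main
        directories:
          - path: apps/workloads/*    # a new directory here means a new onboarded service
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: platform               # guardrails live here, not in the developer's PR
      source:
        repoURL: https://github.com/polarpoint-io/platform-apps
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'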

How I Standardised Kubernetes Deployments with ArgoCD


Why this exists

We had multiple teams shipping to multiple clusters, and every team invented their own deployment pattern. It worked - until it didn’t. Rollbacks were manual, promotions were inconsistent, and there was no single place to reason about “what is actually running.”

This post shows the approach we use to make deployments boring, repeatable, and traceable. It mixes a short narrative with a practical guide you can apply.

What we standardised on

  • Single interface: everything deploys through ArgoCD Applications
  • Environment overlays: dev → staging → prod are explicit
  • Project + RBAC boundaries: no cross‑team bleed
  • Promotion in Git: PRs and tags, not click‑ops
  • Consistent health + rollouts: same signals everywhere

Standardised context and flow

Architecture (repo + layout)

We keep a platform apps repo with a root App‑of‑Apps and a consistent layout:

platform-apps/
  apps/
    root/                # app-of-apps
    workloads/           # team apps
  overlays/
    dev/
    staging/
    prod/

Root app example:

# repo/gitops/app-of-apps/root-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-root
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://github.com/polarpoint-io/platform-apps
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

1) ArgoCD Projects + RBAC (don’t skip)

Projects define where apps can deploy and who can touch them.

# repo/gitops/standardised/projects/platform-project.yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform
  namespace: argocd
spec:
  sourceRepos:
    - https://github.com/polarpoint-io/platform-apps
  destinations:
    - namespace: platform-*
      server: https://kubernetes.default.svc
  roles:
    - name: team-readonly
      policies:
        - p, proj:platform:team-readonly, applications, get, platform/*, allow
    - name: team-admin
      policies:
        - p, proj:platform:team-admin, applications, *, platform/*, allow

Why it matters: without Projects, every app can deploy anywhere. That’s how drift becomes security incidents.

2) Environment overlays (dev/staging/prod)

Keep environment diffs explicit and small. Example values:

# repo/gitops/standardised/overlays/values-dev.yaml
replicaCount: 1
resources:
  limits:
    cpu: 200m
    memory: 256Mi

# repo/gitops/standardised/overlays/values-prod.yaml
replicaCount: 3
resources:
  limits:
    cpu: 1000m
    memory: 1Gi

Rule: if it can’t be expressed as an overlay, it doesn’t get deployed.

3) Promotion flow (Git, not clicks)

We promote with branches/tags and ArgoCD Applications that point to them:

  • main → dev
  • release/* → staging
  • prod → production (protected)

Example dev/staging/prod apps:

# repo/gitops/standardised/promotion/argocd-app-dev.yaml
spec:
  source:
    targetRevision: main
    helm:
      valueFiles: [values-dev.yaml]

# repo/gitops/standardised/promotion/argocd-app-staging.yaml
spec:
  source:
    targetRevision: release/v1.2.3
    helm:
      valueFiles: [values-staging.yaml]

# repo/gitops/standardised/promotion/argocd-app-prod.yaml
spec:
  source:
    targetRevision: prod
    helm:
      valueFiles: [values-prod.yaml]

4) Health, rollouts, and sync order

We standardise sync waves and health checks so dependencies come up in order.

# repo/gitops/standardised/health/sync-waves.yaml
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"

Standard rules:

  • readiness/liveness probes on all deployments
  • consistent timeouts + retries
  • health overrides for CRDs if needed

Operational impact

  • Deployments are auditable Git changes
  • Rollbacks are a single commit revert
  • Drift is visible instead of hidden
  • Teams still ship fast - but on rails

Walkthrough files

  • repo/gitops/app-of-apps/root-app.yaml
  • repo/gitops/standardised/projects/platform-project.yaml
  • repo/gitops/standardised/overlays/values-dev.yaml
  • repo/gitops/standardised/overlays/values-staging.yaml
  • repo/gitops/standardised/overlays/values-prod.yaml
  • repo/gitops/standardised/promotion/argocd-app-dev.yaml
  • repo/gitops/standardised/promotion/argocd-app-staging.yaml
  • repo/gitops/standardised/promotion/argocd-app-prod.yaml
  • repo/gitops/standardised/health/sync-waves.yaml

Demystifying Model Context Protocol (MCP): AI Gets Smarter About Context


Ask an LLM to help you debug a Kubernetes incident. It does a decent job. Then, five minutes later, ask it a follow-up question — and watch it answer as if the previous conversation never happened.

That's not a bug in the model. That's the fundamental architectural problem with how most AI tools are wired up. Every prompt starts fresh. Decisions evaporate. Context has to be re-established from scratch every single time. At some point you stop thinking of the AI as a collaborator and start treating it like a very fast search engine that needs to be told the same things repeatedly.

Model Context Protocol (MCP) is the attempt to fix that. It's an open standard — originally proposed by Anthropic, now gaining adoption across the industry — that defines a common way for AI models and the tools around them to share, persist, and prioritise context. The goal is simple: make AI assistants behave less like stateless chatbots and more like reliable operators that actually remember what's going on.

Cloud‑native patterns: Why you should use External Secrets Operator with ArgoCD

There's a moment every GitOps team hits where someone asks the obvious question: "where do the secrets go?"

The rest of the configuration is in Git. It's version-controlled, reviewable, auditable. But you can't put secrets there. So they end up... somewhere else. Sometimes it's a manual kubectl create secret that nobody documents. Sometimes it's base64-encoded values tucked into Helm values files with a comment that says "TODO: fix this properly". Sometimes it's a mix of both, spread across three clusters, maintained by different people who've all developed their own workarounds.

External Secrets Operator (ESO) is the "fix this properly" solution. It connects ArgoCD's GitOps workflow to your secret manager of choice — AWS Secrets Manager, Google Secret Manager, HashiCorp Vault, Azure Key Vault — so secrets are pulled automatically at deploy time, never stored in Git, and rotated without touching your GitOps configuration.
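On the cluster side, the wiring is a SecretStore (or ClusterSecretStore) telling ESO how to authenticate to the backend. A sketch for AWS Secrets Manager; the names, namespace, and region are illustrative:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets             # hypothetical store name
  namespace: payments
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa   # IRSA-enabled service account

ExternalSecret resources in the same namespace then reference this store, and a rotation in Secrets Manager propagates on the next refresh interval without any Git change.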