
Blog

Welcome to our technical blog! Here you'll find insights, tutorials, and best practices for cloud-native technologies and DevOps.

Latest Posts

External Secrets Operator with ArgoCD Best Practices

Published: November 30, 2023 | Author: Surjit Bains

Learn why using External Secrets Operator with ArgoCD is a cloud-native security best practice and how to implement it effectively in your Kubernetes infrastructure.

Topics covered:

  • Security best practices for GitOps
  • Enterprise secret management integration
  • Step-by-step implementation guide
  • Common patterns and troubleshooting


More posts coming soon...

Demystifying Model Context Protocol (MCP): AI Gets Smarter About Context


Artificial Intelligence (AI) has made remarkable leaps in the last decade. From early rule-based systems to modern large language models (LLMs), the goal has always been to create machines that understand, reason, and interact meaningfully with humans. However, one persistent challenge remains: context retention. Without proper context, AI models often produce inconsistent, irrelevant, or even misleading outputs.

Enter Model Context Protocol (MCP) — a framework designed to allow AI systems to better track, store, and utilize contextual information across sessions, domains, and applications. In this post, we’ll explore MCP in depth, its significance, real-world applications, and how it’s shaping the next generation of AI systems.

Table of Contents

  1. Introduction to MCP
  2. Why Context Matters in AI
  3. Core Principles of MCP
  4. MCP Architecture and Flow
  5. Integrating MCP with Large Language Models
  6. Enterprise Applications of MCP
  7. Challenges and Considerations
  8. Future of MCP
  9. Conclusion

Introduction to MCP

The Model Context Protocol (MCP) is a standard that allows AI systems to:

  • Persist context across multiple interactions
  • Share contextual information between AI models and applications
  • Ensure consistent and coherent responses in dynamic environments

Unlike traditional AI pipelines where models are often “stateless” per interaction, MCP introduces state-aware intelligence, allowing models to act as if they “remember” prior conversations, decisions, or environmental inputs.
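To make the stateless vs. state-aware distinction concrete, here is a minimal sketch of an in-memory context store. The `ContextStore` class and its method names are hypothetical illustrations, not part of any MCP specification:

```python
class ContextStore:
    """Hypothetical in-memory context store, keyed by session ID."""

    def __init__(self):
        self.sessions = {}

    def retrieve(self, session_id):
        """Return all prior entries for a session (empty for new sessions)."""
        return self.sessions.get(session_id, [])

    def update(self, session_id, entry):
        """Append a new entry to a session's history."""
        self.sessions.setdefault(session_id, []).append(entry)


store = ContextStore()
store.update("user-42", "User asked about billing")
store.update("user-42", "Agent explained the refund policy")

# A stateless model sees only the current message; a state-aware
# pipeline can pass this accumulated history along with it.
print(store.retrieve("user-42"))
```

A production store would persist to a database and enforce access controls, but the retrieve/update contract stays the same.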

Why Context Matters in AI

Context is crucial for several reasons:

  1. Conversational AI: Chatbots and virtual assistants must remember prior interactions to provide meaningful answers.
  2. Recommendation Systems: Understanding user history and preferences enhances personalization.
  3. Automation Workflows: AI-driven workflows depend on prior steps and conditions for accuracy.
  4. Error Reduction: Context-aware systems reduce hallucinations and irrelevant outputs from LLMs.

Consider a customer support chatbot that fails to remember the last question — it would require users to repeat themselves constantly, creating frustration. MCP ensures context continuity across sessions and channels.

Core Principles of MCP

MCP revolves around several core principles:

  • Context Persistence: Store session and historical data in a structured format.
  • Context Sharing: Enable multiple models or services to access shared context securely.
  • Context Prioritization: Decide which context elements are most relevant to the current interaction.
  • Scalability: Support high-throughput applications without bottlenecks.
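Context prioritization, for instance, can start as simply as scoring stored items by relevance to the current query. The sketch below uses naive word overlap purely for illustration; a real implementation would more likely use embeddings or recency weighting:

```python
def prioritize(context_items, query, k=2):
    """Rank stored context items by word overlap with the query (naive)."""
    query_terms = set(query.lower().split())

    def score(item):
        return len(set(item.lower().split()) & query_terms)

    return sorted(context_items, key=score, reverse=True)[:k]


history = [
    "User prefers email notifications",
    "User reported a billing error last week",
    "User upgraded to the premium plan",
]
# The billing-related item ranks first for a billing question.
print(prioritize(history, "question about billing"))
```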

MCP Architecture and Flow

The typical architecture of MCP can be visualized as a context-aware pipeline.

@startuml
title MCP Architecture Overview
actor User
participant "Application Layer" as App
participant "MCP Context Store" as Store
participant "AI Model" as Model

User -> App: Sends request / message
App -> Store: Retrieve context
Store --> App: Return relevant context
App -> Model: Provide input + context
Model --> App: Response with context-aware output
App -> Store: Update context
App --> User: Deliver response
@enduml

Explanation:

  1. The user sends a message or request.
  2. The application layer queries the MCP context store for relevant context.
  3. The AI model receives input enriched with contextual information.
  4. The response is generated and the context store is updated for future interactions.
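The same flow can be expressed in a few lines of code. Here `fake_model` stands in for a real LLM call and the store is a plain dictionary; both are illustrative assumptions, not MCP APIs:

```python
store = {}  # session_id -> list of past turns

def retrieve(session_id):
    return store.get(session_id, [])

def update(session_id, entry):
    store.setdefault(session_id, []).append(entry)

def fake_model(message, context):
    # Stand-in for an LLM call; reports how much context it received.
    return f"reply to {message!r} ({len(context)} context turns)"

def handle_request(session_id, message):
    context = retrieve(session_id)        # steps 1-2: query the context store
    reply = fake_model(message, context)  # step 3: model gets input + context
    update(session_id, message)           # step 4: persist for future turns
    update(session_id, reply)
    return reply

print(handle_request("s1", "hello"))  # sees 0 context turns
print(handle_request("s1", "again"))  # sees 2 context turns
```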

Integrating MCP with Large Language Models

MCP complements modern LLMs by supplying:

  • Session context: Keep track of ongoing conversations.
  • Knowledge context: Provide domain-specific data relevant to queries.
  • Cross-application context: Share context between different tools or platforms.

Example workflow:

@startuml
title MCP + LLM Interaction Flow
actor User
participant "LLM Model" as LLM
participant "MCP Context Store" as Store

User -> LLM: Ask question
LLM -> Store: Query previous context
Store --> LLM: Return relevant context
LLM -> LLM: Generate context-aware response
LLM --> User: Provide answer
@enduml

This architecture allows chatbots, AI assistants, and recommendation systems to act as if they have memory, improving coherence and reliability.
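One common pattern is to merge these layers into a single enriched model input. The function below is a simplified illustration; the section headings and layering order are assumptions, not part of any specification:

```python
def build_model_input(question, session_ctx, knowledge_ctx, cross_app_ctx):
    """Assemble the three context layers plus the question into one prompt."""
    sections = [
        ("Session context", session_ctx),
        ("Knowledge context", knowledge_ctx),
        ("Cross-application context", cross_app_ctx),
    ]
    lines = []
    for title, items in sections:
        if items:  # skip empty layers entirely
            lines.append(f"## {title}")
            lines.extend(f"- {item}" for item in items)
    lines.append("## Question")
    lines.append(question)
    return "\n".join(lines)


prompt = build_model_input(
    "What is the status of my order?",
    session_ctx=["User asked about shipping times yesterday"],
    knowledge_ctx=["Standard orders ship within 2 business days"],
    cross_app_ctx=["CRM shows the order was dispatched this morning"],
)
print(prompt)
```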

Enterprise Applications of MCP

MCP isn’t just theoretical — enterprises are already using it in:

  1. Customer Support Automation: AI remembers user issues across tickets and channels.
  2. Personalized Recommendations: Streaming platforms and e-commerce use MCP to track preferences over time.
  3. Decision Support Systems: Financial or medical AI applications can factor in historical data to advise decisions.
  4. Workflow Automation: Context-aware AI triggers actions in DevOps pipelines or marketing automation.

@startuml
title Enterprise MCP Workflow Example
actor User
participant "CRM System" as CRM
participant "MCP Context Store" as Store
participant "AI Assistant" as AI

User -> CRM: Open support ticket
CRM -> Store: Retrieve previous interactions
Store --> AI: Provide context
AI -> CRM: Suggest solution
CRM --> User: Deliver AI-assisted resolution
@enduml

Challenges and Considerations

Implementing MCP comes with challenges:

  • Data Privacy: Context may include sensitive information; ensure secure storage and access controls.
  • Scalability: Context stores must handle large volumes of concurrent interactions.
  • Model Alignment: AI models need to interpret context correctly; otherwise, irrelevant context may degrade output.
  • Consistency Across Platforms: Synchronizing context between multiple apps and services can be complex.

Future of MCP

The next evolution of MCP could involve:

  • Federated Context Stores: Decentralized context sharing without centralizing data.
  • AI-Assisted Context Prioritization: Automatically determine the most relevant context per interaction.
  • Cross-Domain Intelligence: MCP enabling AI to leverage context from different business units or applications seamlessly.

MCP is poised to become a critical foundation for enterprise AI, conversational intelligence, and advanced workflow automation.

Conclusion

Model Context Protocol (MCP) is a game-changer for AI systems seeking coherence, memory, and reliability. By persisting, sharing, and prioritizing context, MCP enables AI models to act intelligently across sessions, applications, and domains.

For enterprises looking to leverage AI effectively, understanding and implementing MCP is no longer optional — it’s essential.

Start experimenting with MCP in your AI workflows today and unlock the next level of context-aware intelligence.

Next Steps / CTA:

  • Explore Polarpoint’s AI consulting services for implementing MCP.
  • Subscribe to the blog for more deep dives on AI and platform engineering trends.

Cloud-native patterns: Why you should use External Secrets Operator with ArgoCD

ArgoCD is a great tool for managing deployments and keeping your applications up to date. However, one challenge with using ArgoCD is secrets management: ArgoCD does not natively manage secrets, so you need an external solution. In this blog post, we'll show you why you should use the External Secrets Operator with ArgoCD.

External Secrets Operator

The External Secrets Operator is a Kubernetes operator that gives you the ability to manage secrets outside of ArgoCD. This means that you can use your existing secrets management solution with ArgoCD. The External Secrets Operator provides a way to synchronize secrets from an external secrets management system into Kubernetes.

External Secrets Operator (ESO) works with secret backends such as AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault, and many more, pulling the credentials, keys, and secrets your workloads need in a GitOps fashion. It provides a secure, automated way to populate secrets in Kubernetes while maintaining a strong security posture, and with ArgoCD we can synchronise these across clusters and applications.

Why use External Secrets Operator with ArgoCD?

1. Security

The External Secrets Operator provides a secure way to manage secrets. It allows you to store secrets in an external secrets management system and synchronize them into Kubernetes. This means that you don't have to store secrets in your Git repository, which is a security risk.

2. Scalability

The External Secrets Operator is designed to scale. It can handle a large number of secrets and can be used across multiple clusters. This makes it a great choice for large organizations that need to manage secrets across multiple environments.

3. Flexibility

The External Secrets Operator is flexible. It supports a wide range of external secrets management systems, including AWS Secrets Manager, GCP Secret Manager, and HashiCorp Vault. This means that you can use your existing secrets management solution with ArgoCD.

How to use External Secrets Operator with ArgoCD

To use the External Secrets Operator with ArgoCD, you need to install the External Secrets Operator on your Kubernetes cluster and configure it to synchronize secrets from your external secrets management system.
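For example, the operator can be installed with Helm (chart repository and names as published by the External Secrets project; verify against the current docs for your version):

```shell
# Add the External Secrets Helm repository and install the operator
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
  --namespace external-secrets \
  --create-namespace
```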

Here's an example of how to configure the External Secrets Operator to synchronize secrets from AWS Secrets Manager:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      # IAM role assumed by the operator to read secrets
      role: arn:aws:iam::123456789012:role/external-secrets-role
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: example-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: example-secret
    creationPolicy: Owner
  data:
  - secretKey: password
    remoteRef:
      key: /path/to/secret
      property: password

This configuration creates a SecretStore that connects to AWS Secrets Manager and an ExternalSecret that synchronizes a secret from AWS Secrets Manager into a Kubernetes secret.
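Because the SecretStore and ExternalSecret are plain Kubernetes manifests, they can live in Git and be synced by ArgoCD like any other resource. Here is a sketch of an ArgoCD Application pointing at such a manifests directory (the repository URL and path below are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-secrets
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-config.git  # placeholder repo
    targetRevision: main
    path: secrets/  # directory holding the SecretStore/ExternalSecret manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With automated sync enabled, any change to the ExternalSecret definitions in Git is applied by ArgoCD, while the secret values themselves never touch the repository.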

Conclusion

The External Secrets Operator is a great tool for managing secrets with ArgoCD. It provides a secure, scalable, and flexible way to synchronize secrets from external secrets management systems into Kubernetes. By using the External Secrets Operator with ArgoCD, you can improve the security of your applications and simplify the management of secrets across multiple environments.

If you're using ArgoCD and need to manage secrets, we highly recommend using the External Secrets Operator. It's a best practice that will help you build more secure and scalable applications.