DevOps

Advanced Deployment Patterns: Building Reliable Deployment Pipelines in 2025

Wang Yinneng
6 min read
deployment, kubernetes, ci-cd, infrastructure, cloud-native, gitops


Introduction

Deployment practices continue to evolve, and in 2025 the tooling for infrastructure as code, GitOps, and progressive delivery is mature enough to combine into genuinely reliable pipelines. This guide walks through the core patterns: infrastructure as code, GitOps-driven continuous deployment, blue-green releases, and canary releases, with working configuration for each.

Infrastructure as Code

1. Terraform Modules

# main.tf (root module): calls the local kubernetes module
module "kubernetes" {
  source = "./modules/kubernetes"

  cluster_name = var.cluster_name
  region      = var.region
  node_count  = var.node_count

  node_pools = {
    general = {
      machine_type = "e2-standard-4"
      node_count   = 3
      autoscaling  = true
      min_count    = 2
      max_count    = 5
    }
    gpu = {
      machine_type = "n1-standard-4"
      node_count   = 1
      gpu_type     = "nvidia-tesla-t4"
      autoscaling  = true
      min_count    = 1
      max_count    = 3
    }
  }

  network_policy = {
    enabled = true
    rules = [
      {
        name        = "allow-internal"
        description = "Allow internal traffic"
        source      = "10.0.0.0/8"
        ports       = ["80", "443"]
      }
    ]
  }
}

# modules/kubernetes/variables.tf
variable "cluster_name" {
  type        = string
  description = "Name of the Kubernetes cluster"
}

variable "region" {
  type        = string
  description = "Region where the cluster will be deployed"
}

variable "node_count" {
  type        = number
  description = "Number of nodes in the cluster"
  default     = 3
}

# modules/kubernetes/outputs.tf
# (references the module's google_container_cluster.primary resource, not shown)
output "cluster_endpoint" {
  value       = google_container_cluster.primary.endpoint
  description = "Kubernetes cluster endpoint"
}

output "cluster_ca_certificate" {
  value       = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
  description = "Kubernetes cluster CA certificate"
  sensitive   = true
}
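With the module in place, the usual plan-and-apply workflow drives changes through review. A minimal wrapper sketch (the `prod.tfvars` file name is an assumption; `DRY_RUN=1`, the default here, prints each command instead of executing it, so the sketch runs even without Terraform installed):

```shell
#!/bin/sh
# run: execute the given command, or just print it when DRY_RUN=1 (the default)
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

# Standard Terraform workflow: init, plan to a file, apply exactly that plan
run terraform init -input=false
run terraform plan -var-file=prod.tfvars -out=tfplan
run terraform apply -input=false tfplan
```

Applying a saved plan file guarantees that what was reviewed is exactly what gets applied.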

2. Pulumi Infrastructure

// infrastructure/index.ts
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";
import * as aws from "@pulumi/aws";

// Create EKS cluster (clusterRole, nodeRole, and vpc are assumed to be defined
// earlier in this program; their definitions are omitted for brevity)
const cluster = new aws.eks.Cluster("my-cluster", {
    roleArn: clusterRole.arn,
    vpcConfig: {
        subnetIds: vpc.privateSubnetIds,
    },
});

// Create node group
const nodeGroup = new aws.eks.NodeGroup("my-node-group", {
    clusterName: cluster.name,
    nodeGroupName: "my-node-group",
    nodeRoleArn: nodeRole.arn,
    subnetIds: vpc.privateSubnetIds,
    scalingConfig: {
        desiredSize: 3,
        minSize: 1,
        maxSize: 5,
    },
    instanceTypes: ["t3.medium"],
});

// Create Kubernetes provider. Note: with @pulumi/aws, an eks.Cluster has no
// `kubeconfig` output; either build one from cluster.endpoint and
// cluster.certificateAuthority, or create the cluster with @pulumi/eks,
// whose Cluster resource exposes `kubeconfig` directly as used here.
const k8sProvider = new k8s.Provider("k8s", {
    kubeconfig: cluster.kubeconfig,
});

// Deploy application
const app = new k8s.apps.v1.Deployment("my-app", {
    spec: {
        replicas: 3,
        selector: {
            matchLabels: {
                app: "my-app",
            },
        },
        template: {
            metadata: {
                labels: {
                    app: "my-app",
                },
            },
            spec: {
                containers: [{
                    name: "my-app",
                    image: "my-app:latest",
                    ports: [{
                        containerPort: 80,
                    }],
                    resources: {
                        requests: {
                            cpu: "100m",
                            memory: "128Mi",
                        },
                        limits: {
                            cpu: "500m",
                            memory: "512Mi",
                        },
                    },
                }],
            },
        },
    },
}, { provider: k8sProvider });

Continuous Deployment

1. GitHub Actions Workflow

# .github/workflows/deploy.yml
name: Deploy Application

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}  # on PRs: build only, don't push
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

      - name: Set Kubernetes context
        if: github.event_name == 'push'
        uses: azure/k8s-set-context@v3
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG }}

      - name: Deploy to Kubernetes
        if: github.event_name == 'push'
        uses: azure/k8s-deploy@v1
        with:
          manifests: |
            k8s/deployment.yaml
            k8s/service.yaml
          images: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

2. ArgoCD GitOps

# applications/my-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/repo.git
    targetRevision: HEAD
    path: k8s
    helm:
      releaseName: my-app
      valueFiles:
        - values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m

# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5

Blue-Green Deployment

1. Kubernetes Implementation

# blue-green-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: blue
  template:
    metadata:
      labels:
        app: my-app
        version: blue
    spec:
      containers:
        - name: my-app
          image: my-app:blue
          ports:
            - containerPort: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: green
  template:
    metadata:
      labels:
        app: my-app
        version: green
    spec:
      containers:
        - name: my-app
          image: my-app:green
          ports:
            - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue  # Initially points to blue
  ports:
    - port: 80
      targetPort: 80

2. Deployment Script

#!/bin/bash
set -euo pipefail

# Deploy new version
kubectl apply -f blue-green-deployment.yaml

# Wait for new deployment to be ready
kubectl rollout status deployment/my-app-green

# Run tests against the new pods directly; there is no my-app-green Service,
# so port-forward to the Deployment instead
kubectl port-forward deploy/my-app-green 8080:80 &
PORT_FORWARD_PID=$!
trap 'kill "$PORT_FORWARD_PID" 2>/dev/null || true' EXIT
sleep 2  # give the port-forward a moment to establish

# If tests pass, switch traffic
if ./run-tests.sh; then
    # Update service to point to new version
    kubectl patch service my-app -p '{"spec":{"selector":{"version":"green"}}}'

    # Scale down old deployment once traffic has shifted
    kubectl scale deployment my-app-blue --replicas=0
else
    # Roll back by removing the green pods; the service never left blue
    kubectl scale deployment my-app-green --replicas=0
fi
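The script hard-codes green as the target. A reusable version can derive the idle color from whatever the Service currently points to; in practice the live color would come from `kubectl get service my-app -o jsonpath='{.spec.selector.version}'`, but the helper below is the pure logic, sketched so it runs anywhere:

```shell
#!/bin/sh
# other_color: given the live color, return the idle color to deploy to
other_color() {
    if [ "$1" = "blue" ]; then
        echo "green"
    else
        echo "blue"
    fi
}

LIVE="blue"                  # in practice: read from the Service selector
TARGET=$(other_color "$LIVE")
echo "deploying to $TARGET"  # → deploying to green
```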

Canary Deployment

1. Istio Implementation

# canary-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: stable
  template:
    metadata:
      labels:
        app: my-app
        version: stable
    spec:
      containers:
        - name: my-app
          image: my-app:stable
          ports:
            - containerPort: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: canary
  template:
    metadata:
      labels:
        app: my-app
        version: canary
    spec:
      containers:
        - name: my-app
          image: my-app:canary
          ports:
            - containerPort: 80

---
# The VirtualService below routes to a Service and to named subsets, so both
# must exist: a Service selecting all my-app pods, and a DestinationRule
# mapping each subset to a version label.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80

---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app
  subsets:
    - name: stable
      labels:
        version: stable
    - name: canary
      labels:
        version: canary

---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    - route:
        - destination:
            host: my-app
            subset: stable
          weight: 90
        - destination:
            host: my-app
            subset: canary
          weight: 10

2. Deployment Script

#!/bin/bash
set -euo pipefail

# Deploy canary and wait for it to come up
kubectl apply -f canary-deployment.yaml
kubectl rollout status deployment/my-app-canary

# Progressively shift traffic to the canary, verifying metrics at each step
for CANARY_WEIGHT in 10 25 50 100; do
    STABLE_WEIGHT=$((100 - CANARY_WEIGHT))

    # Custom resources need a JSON merge patch; the default strategic merge does not apply
    kubectl patch virtualservice my-app --type=merge -p "{\"spec\":{\"http\":[{\"route\":[{\"destination\":{\"host\":\"my-app\",\"subset\":\"stable\"},\"weight\":${STABLE_WEIGHT}},{\"destination\":{\"host\":\"my-app\",\"subset\":\"canary\"},\"weight\":${CANARY_WEIGHT}}]}]}}"

    sleep 300  # Let traffic settle for 5 minutes before checking metrics

    # Scrape the canary's metrics endpoint (no -it: there is no TTY in a script).
    # Parsing raw Prometheus text like this is fragile; querying Prometheus
    # directly is more robust in practice.
    ERROR_RATE=$(kubectl exec deploy/my-app-canary -- curl -s localhost:9090/metrics \
        | awk '/^http_requests_total.*status="5/ {sum += $2} END {print sum + 0}')
    LATENCY=$(kubectl exec deploy/my-app-canary -- curl -s localhost:9090/metrics \
        | awk '/^http_request_duration_seconds_sum/ {print $2; exit}')

    # Compare in awk, since the shell's [ -lt ] only handles integers
    if ! awk -v e="$ERROR_RATE" -v l="$LATENCY" 'BEGIN { exit !(e < 1 && l < 0.5) }'; then
        # Roll back: send all traffic to stable and stop the promotion
        kubectl patch virtualservice my-app --type=merge -p '{"spec":{"http":[{"route":[{"destination":{"host":"my-app","subset":"stable"},"weight":100},{"destination":{"host":"my-app","subset":"canary"},"weight":0}]}]}}'
        exit 1
    fi
done
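Rather than hard-coding each traffic step, the promotion schedule can be computed. A minimal sketch that doubles the canary share after each healthy interval, capped at 100 percent:

```shell
#!/bin/sh
# next_weight: double the canary weight, capping at 100 percent
next_weight() {
    w=$(( $1 * 2 ))
    if [ "$w" -gt 100 ]; then
        w=100
    fi
    echo "$w"
}

# Walk the schedule from an initial 10% share: 20, 40, 80, then 100
WEIGHT=10
while [ "$WEIGHT" -lt 100 ]; do
    WEIGHT=$(next_weight "$WEIGHT")
    echo "promoting canary to ${WEIGHT}%"
done
```

Each echo would be replaced by the `kubectl patch` call from the script above, with the metric check deciding whether to continue or roll back.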

Best Practices

1. Infrastructure as Code

  • Keep every infrastructure definition in version control
  • Split configuration into reusable, well-scoped modules
  • Document module inputs, outputs, and intended usage
  • Update provider and module versions on a regular cadence
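These practices are easiest to enforce in CI. A sketch of a GitHub Actions job (the workflow and job names are illustrative) that rejects unformatted or invalid Terraform before it reaches review:

```yaml
# .github/workflows/terraform-checks.yml (illustrative)
name: Terraform Checks
on: [pull_request]

jobs:
  terraform-checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform fmt -check -recursive   # fail on unformatted code
      - run: terraform init -backend=false     # init without touching state
      - run: terraform validate
```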

2. Continuous Deployment

  • Automate the full build, test, and deploy path
  • Define and rehearse a rollback strategy before you need it
  • Monitor every deployment while it rolls out
  • Test the pipeline itself, not just the application

3. Deployment Strategies

  • Choose the strategy that matches the service's risk tolerance
  • Implement liveness and readiness checks for every workload
  • Watch error rates and latency throughout a rollout
  • Review and tune each strategy regularly

4. Security

  • Enforce least-privilege access control on clusters and pipelines
  • Keep credentials in a secrets manager, never in manifests
  • Run security audits on the pipeline and cluster regularly
  • Automate compliance checks
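For the secrets point in particular: credentials should reach containers by reference, not by value. A sketch of the standard Kubernetes pattern (the `my-app-secrets` Secret name and `database-url` key are assumptions):

```yaml
# Fragment of a Deployment pod spec: the secret value never appears in the manifest
containers:
  - name: my-app
    image: my-app:latest
    env:
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: my-app-secrets
            key: database-url
```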

Conclusion

Advanced deployment patterns make infrastructure both scalable and reliable. Modular infrastructure as code, GitOps-driven continuous deployment, and progressive strategies such as blue-green and canary releases give teams deployments that are repeatable, observable, and easy to roll back.



Wang Yinneng

Senior Golang Backend & Web3 Developer with 10+ years of experience building scalable systems and blockchain solutions.
