Streamline Application Deployments with Argo Rollouts
Argo Rollouts is a powerful Kubernetes tool that offers a wide range of deployment capabilities, with useful features like blue-green, canary, and progressive rollouts. It has ultimately simplified the way we manage our Kubernetes clusters. Argo Rollouts can run automated analysis of the software you’re about to release, checking the success or failure of the changes you are promoting to your customers. In today’s demo we will be performing a canary rollout with Istio:
- The Canary Rollout Strategy we demo today deploys a new version of our software to only a subset of users, keeping the majority of our users on a stable version of our application. By directing a small percentage of the traffic to the canary version, we can identify any issues or performance problems before they impact a larger audience, allowing us to roll back if necessary.
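To make that "small percentage" concrete, here is a tiny stand-alone sketch (not part of the workshop, and not how Istio actually routes requests) that splits 100 simulated requests at a fixed 1-in-4 ratio, the deterministic equivalent of a 25% canary weight:

```shell
# Illustrative only: with a 25% canary weight, roughly 1 in 4 requests
# goes to the canary. Simulate 100 requests deterministically.
canary=0; stable=0
for i in $(seq 1 100); do
  if [ $((i % 4)) -eq 0 ]; then
    canary=$((canary + 1))   # every 4th request -> canary version
  else
    stable=$((stable + 1))   # the rest stay on the stable version
  fi
done
echo "canary=$canary stable=$stable"   # canary=25 stable=75
```

In the real setup, Istio makes this split probabilistically per request, so only about 25% of users ever see the canary at the first step.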
Setting up your Kubernetes Cluster
We’re going to be deploying this locally on Docker Desktop. Let’s go over some of the things you will need so you can follow along with this workshop (all the steps and instructions are available in this link; clone this repository to get started).
Prerequisites
Make sure you go to Settings on Docker Desktop and Enable Kubernetes:
We’re going to start by creating a namespace for argocd:
$ kubectl create namespace argocd

The output should confirm that the command was successful and that the new namespace has been created in the Kubernetes cluster. Now we’re going to install Argo CD in our new namespace; that’s where all of the Argo CD services and application resources will live:
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Here’s a brief synopsis of what was actually installed:
- Custom resource definitions (CRDs): the custom resources Argo CD needs.
- Service accounts: the various ServiceAccounts for our Argo CD components, including the ApplicationSet controller, the Notifications controller, and more.
- Roles and Cluster Roles: these provide the cluster-level permissions needed for Argo CD to interact with various Kubernetes resources and perform operations within the cluster.
- Role Bindings and Cluster Role Bindings: these associate each ServiceAccount with its respective Role or Cluster Role.
- Config Maps: the config data for the various Argo CD components.
- Secrets: these store sensitive information required by the Argo CD components and resources, ensuring safe handling of the credentials, API keys, and other confidential data necessary to operate our cluster.
- Deployments and Stateful Sets: the running instances of the specific Argo CD components we will use for this workshop.
- Network policies: these establish specific network connectivity rules for our Argo CD components, ensuring secure communication within the Kubernetes cluster.
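If you want a rough inventory of what a manifest like install.yaml contains, counting its `kind:` lines works well. Here is a sketch against a tiny inline sample (the file path and its contents are made up for illustration; on the real file you would grep install.yaml instead):

```shell
# Sketch: inventory what a manifest installs by counting "kind:" lines.
# The heredoc below is a tiny stand-in for the real install.yaml.
cat <<'EOF' > /tmp/sample-manifest.yaml
kind: ServiceAccount
---
kind: Role
---
kind: RoleBinding
---
kind: ServiceAccount
EOF
grep '^kind:' /tmp/sample-manifest.yaml | sort | uniq -c
```

This prints one line per resource kind with its count (here: 1 Role, 1 RoleBinding, 2 ServiceAccounts), which is a quick way to sanity-check what a large manifest is about to create.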
Now that we have Argo CD up and running, let’s check the status of the pods in our cluster, run the command:
$ kubectl get pods -A

This is going to give us insight into the health and availability of our cluster:
NAMESPACE NAME READY STATUS RESTARTS AGE
argocd argocd-application-controller-0 1/1 Running 0 23s
argocd argocd-applicationset-controller-7786cb7547-5zf5w 1/1 Running 0 23s
argocd argocd-dex-server-58574dff5f-vc7vg 1/1 Running 0 23s
argocd argocd-notifications-controller-7764bb774d-wvhcv 1/1 Running 0 23s
argocd argocd-redis-77bf5b886-sl2hf 1/1 Running 0 23s
argocd argocd-repo-server-5b9977b575-5czsr 0/1 Running 0 23s
argocd argocd-server-6485ccb9c9-plk7s 0/1 Running 0 23s
kube-system coredns-5d78c9869d-qqx6f 1/1 Running 0 109s
kube-system coredns-5d78c9869d-zhh6d 1/1 Running 0 109s
kube-system etcd-docker-desktop 1/1 Running 0 106s
kube-system kube-apiserver-docker-desktop 1/1 Running 0 116s
kube-system kube-controller-manager-docker-desktop 1/1 Running 0 115s
kube-system kube-proxy-k4ffh 1/1 Running 0 110s
kube-system kube-scheduler-docker-desktop 1/1 Running 0 109s
kube-system storage-provisioner 1/1 Running 0 108s
kube-system vpnkit-controller 1/1 Running 0 108s

In the argocd namespace we have the following pods up and running:
- argocd-application-controller
- argocd-applicationset-controller
- argocd-dex-server
- argocd-notifications-controller
- argocd-redis
- argocd-repo-server
- argocd-server
And in the kube-system namespace we have:
- coredns
- etcd-docker-desktop
- kube-apiserver
- kube-controller-manager
- kube-proxy
- kube-scheduler
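Note that two of the argocd pods above still show 0/1 in the READY column; they simply need a few more seconds. A hedged sketch of how you might spot not-ready pods by parsing that column (the heredoc stands in for real `kubectl get pods -A` output):

```shell
# Sketch: flag pods whose READY column (e.g. 0/1) shows missing containers.
# The heredoc mimics a few lines of the pod listing above.
cat <<'EOF' > /tmp/pods.txt
NAMESPACE   NAME                              READY   STATUS    RESTARTS   AGE
argocd      argocd-repo-server-5b9977b575     0/1     Running   0          23s
argocd      argocd-server-6485ccb9c9          0/1     Running   0          23s
argocd      argocd-redis-77bf5b886            1/1     Running   0          23s
EOF
# Split READY into ready/total and print pods where the two differ.
awk 'NR > 1 { split($3, r, "/"); if (r[1] != r[2]) print $2 }' /tmp/pods.txt
```

Against the sample above this prints the repo-server and server pods. In practice you would pipe the live `kubectl get pods -A` output into the same awk filter, or simply re-run the command until everything reads 1/1.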
The next few commands we’re going to run will use the argocd CLI. I’ve already installed the CLI; it’s part of the workshop prerequisites linked above. Before we get into the CLI commands, we need to retrieve the admin password for Argo CD. The command below fetches the admin password stored in the argocd-initial-admin-secret, which lives in the argocd namespace, and decodes it, allowing us to access and manage Argo CD via the user interface:
$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Once you get the password, store it in a text file or notepad somewhere, because we’re going to need it to log into the user interface. I’m going to use the Argo CD CLI to enable port forwarding and log in to the UI (note: we’re going to add the --port-forward-namespace flag to every CLI command we run):
$ argocd --port-forward --port-forward-namespace argocd login

The above CLI command will prompt you for your credentials:
Username: admin
Password:
'admin:login' logged in successfully
Context 'port-forward' updated

This is where you will need the admin password we decoded in the steps above. Once we have logged in successfully, we’re going to add the workshop repository, whose application will become part of our Argo CD list of managed applications:
$ argocd --port-forward-namespace argocd repo add "https://github.com/argocon22Workshop/ArgoCDRollouts"

This is the same repo we specified as our workshop project in the Prerequisites section above. The next thing we need to do is create the app with the argocd app create command, and then sync it:
$ argocd --port-forward-namespace argocd app sync argo-rollouts

This command asks the API server to sync our argo-rollouts application. The application controller scans the state of the resources and then applies them to our cluster. The command then watches the sync operation and reconciles the states, whether it fails or succeeds. We have now synced and applied the following:
- The namespace
- A ServiceAccount
- The argo-rollouts-notification-secret
- A ClusterRole, so argo-rollouts can view, edit, and modify the resources in the argo-rollouts app
- The CRDs for analysistemplates.argoproj.io, experiments.argoproj.io, and rollouts.argoproj.io
and a lot more.
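To build intuition for what a sync actually does, think of it as "diff the desired state in Git against the live state in the cluster, then apply the difference." The following is a toy model of that idea, not Argo CD’s actual implementation; the file paths and manifest contents are invented for illustration:

```shell
# Toy illustration of drift detection: compare desired vs live manifests.
printf 'replicas: 4\nimage: blue\n'  > /tmp/desired.yaml   # what Git says
printf 'replicas: 4\nimage: green\n' > /tmp/live.yaml      # what the cluster runs
# diff exits non-zero when the files differ, i.e. the app is OutOfSync.
if ! diff /tmp/live.yaml /tmp/desired.yaml > /tmp/drift.txt; then
  echo "OutOfSync: a sync would apply the desired state"
fi
```

Argo CD performs this comparison continuously; `argocd app sync` is the step that pushes the cluster back to the desired state when drift is found.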
Istio
We’re going to be using Istio for enabling traffic-routed canary deployments. It will provide us with granular control over pod counts and traffic percentages. Furthermore, it will allow us to safely test out new versions of our applications by gradually diverting a portion of the traffic to the canary pods while monitoring their performance with Prometheus. With Istio, we can easily implement canary deployments and minimize the impact of potential issues by controlling the distribution of traffic between different versions of our services.
First install the istioctl command line tool:
$ brew install istioctl

Then initiate the default install of Istio via their CLI tool:
$ istioctl install --set profile=demo -y --set values.global.tag=1.15.0
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete Making this installation the default for injection and validation.

This command initiates the installation process of Istio. The output indicates the completion of the installation and shows that we have installed its core components:
- Istiod (the Istio control plane)
- Egress gateways
- Ingress gateways
The default installation configures Istio to automatically inject its sidecar proxies into application pods for enhanced observability, security, and control. Now that we’ve got Istio installed, we’re going to create a new namespace called argo-rollouts-istio with the kubectl create namespace command:
$ kubectl create namespace argo-rollouts-istio

We need to enable Istio’s automatic sidecar injection in our new namespace so it can be used by our application. Please note, you must add the istio-injection=enabled label to your namespace to enable Istio’s sidecar injection:
$ kubectl label namespace argo-rollouts-istio istio-injection=enabled

Prometheus
We’re also using Prometheus to monitor our cluster. The command below sets up Prometheus and its relevant CRDs in our cluster; the CRDs will be installed in a separate namespace called monitoring:
$ kubectl apply --server-side -f manifests/prometheus/upstream/setup

This command applies a set of resources and configurations for setting up monitoring in our Kubernetes cluster. Here’s a breakdown of what was configured:
- Service accounts were created for alertmanager-main, blackbox-exporter, grafana, kube-state-metrics, node-exporter, prometheus-adapter, prometheus-k8s, and prometheus-operator
- Various Roles and Cluster Roles were created, which define the permissions for all of the ServiceAccounts listed above
- Role Bindings and Cluster Role Bindings were created to associate each role with its appropriate ServiceAccount
- Multiple ConfigMaps were created, which contain the config for different components in our cluster and the dashboards
- Secrets were created for alertmanager-main and grafana, which house sensitive information
- Services were created for alertmanager-main, blackbox-exporter, grafana, kube-state-metrics, node-exporter, prometheus-adapter, prometheus-k8s, and prometheus-operator
- Deployments were created for blackbox-exporter, grafana, kube-state-metrics, prometheus-adapter, and prometheus-operator
- Pod disruption budgets were configured for alertmanager-main, prometheus-adapter, and prometheus-k8s
- An API service was created for v1beta1.metrics.k8s.io
- A DaemonSet was created for the node-exporter
- All of the various CRDs were created for our different monitoring components
- Service monitors were created for various components, including alertmanager-main, blackbox-exporter, grafana, kube-apiserver, kube-controller-manager, kube-scheduler, kube-state-metrics, kubelet, node-exporter, prometheus-adapter, prometheus-k8s, and prometheus-operator
- Network policies were created for alertmanager-main, blackbox-exporter, grafana, kube-state-metrics, node-exporter, prometheus-adapter, prometheus-k8s, and prometheus-operator
- And last but not least, a gateway and virtual service were created for the Istio networking layer for prometheus
Argo Rollouts: Canary Rollout Strategy
In this section we’re going to use kustomize to build the Kubernetes manifest files located in the manifests/ArgoCD201-RolloutsDemoCanaryIstio directory. The resources that are being created are:
service/istio-host-split-canary: We will create a service for the canary version of the application:
apiVersion: v1
kind: Service
metadata:
  name: istio-host-split-canary
  labels:
    app: istio-host-split
spec:
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: istio-host-split

service/istio-host-split-stable: We will create a service for the stable version of the application:
apiVersion: v1
kind: Service
metadata:
  name: istio-host-split-stable
spec:
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: istio-host-split

rollout.argoproj.io/istio-host-split: We’re going to create an Argo Rollout called istio-host-split, which defines the canary rollout strategy for the application:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: istio-host-split
spec:
  replicas: 4
  strategy:
    canary:
      canaryService: istio-host-split-canary
      stableService: istio-host-split-stable
      trafficRouting:
        managedRoutes:
        - name: mirror-route
        istio:
          virtualService:
            name: istio-host-split-vsvc
            routes:
            - primary
      steps:
      - setWeight: 25
      - pause: {}
      - setWeight: 50
      - pause: {}
      - setWeight: 75
      - pause: {}
  selector:
    matchLabels:
      app: istio-host-split
  template:
    metadata:
      labels:
        app: istio-host-split
    spec:
      containers:
      - name: istio-host-split
        image: ghcr.io/argocon22workshop/rollouts-demo:blue
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        resources:
          requests:
            memory: 16Mi
            cpu: 5m

gateway.networking.istio.io/istio-host-split-gateway: We’re also going to create an Istio Gateway resource for routing traffic to the canary and stable versions of our application:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-host-split-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

virtualservice.networking.istio.io/istio-host-split-vsvc: Once we’ve created the Gateway resource, we’re going to create an Istio VirtualService resource for configuring all of the traffic splitting and routing rules between the canary and stable versions of our application:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-host-split-vsvc
spec:
  hosts:
  - "*"
  gateways:
  - istio-host-split-gateway
  http:
  - name: primary
    route:
    - destination:
        host: istio-host-split-stable
      weight: 100
    - destination:
        host: istio-host-split-canary
      weight: 0

In this next step, we will apply the manifests for our Argo Rollouts demo with our canary rollout strategy using Istio:
$ kustomize build manifests/ArgoCD201-RolloutsDemoCanaryIstio/ | kubectl apply -f -

We have now enabled the canary rollout strategy for our application using the traffic-routed canary strategy with Istio, which will allow us to have better control over pod counts and traffic percentages for a smooth and controlled deployment process:
service/istio-host-split-canary created
service/istio-host-split-stable created
rollout.argoproj.io/istio-host-split created
gateway.networking.istio.io/istio-host-split-gateway created
virtualservice.networking.istio.io/istio-host-split-vsvc created

Navigate to http://localhost/ and you should see our demo application up and running.
Argo Rollouts Dashboard
We are now ready to look at the argo-rollouts dashboard:

$ kubectl-argo-rollouts dashboard

Go ahead and navigate to http://localhost:3100/rollouts and you should see that the argo-rollouts dashboard is up and running.
Now we’re going to patch the istio-host-split rollout in the argo-rollouts-istio namespace. For context, we’re updating the image of the container within the rollout’s pod template to use the ghcr.io/argocon22workshop/rollouts-demo:red image. This will trigger the rollout process to update the pods with the new image:
$ kubectl -n argo-rollouts-istio patch rollout istio-host-split --type json --patch '[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "ghcr.io/argocon22workshop/rollouts-demo:red" }]'

Once we have successfully patched our rollout, the dashboard will display two revisions:
In the rollout declared above, the weight is set to 25% for the first canary step; we then gradually increase it to 50% and 75% in subsequent steps. The pauses in between allow time for the canary to stabilize before proceeding to the next weight increase. This gradual rollout strategy helps minimize the potential impact to users when a new software version is released. Click the PROMOTE button (notice our demo application should update in real time, showing us that our changes are being deployed successfully).
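As a rough sanity check on those numbers, here is the arithmetic for our 4-replica rollout. This assumes the default behavior where the canary ReplicaSet is scaled roughly in proportion to the traffic weight; the exact scaling depends on the rollout configuration (for example, setCanaryScale can decouple pod counts from traffic):

```shell
# Back-of-the-envelope: map each canary weight step to approximate pod counts,
# given the replicas: 4 declared in the Rollout above.
replicas=4
for weight in 25 50 75 100; do
  echo "weight=${weight}% -> ~$(( replicas * weight / 100 )) canary pod(s)"
done
```

So at the first pause we expect roughly one canary pod serving about a quarter of the traffic, growing to all four pods once the rollout is fully promoted.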
Our argo-rollouts dashboard should also update and indicate that the rollout is in a paused state, and the updated Weight should now show 50%.
We’re going to perform the same steps until we’re at 75%. Having now successfully performed the gradual rollout of our canary deployment, we are ready to proceed with the final step of promoting the remaining changes. By increasing the weight to 100%, we ensure that the new version is fully exposed to the production environment, allowing us to take advantage of the latest enhancements and features while being confident in its stability. The Argo Rollouts CRDs help minimize any potential disruptions to users and provide a reliable and efficient deployment process:
Rollbacks
The rollback button in Argo Rollouts provides you with a powerful and essential mechanism to revert to a previous, known, and stable version of your deployment. It allows for quick and efficient recovery in case the new rollout introduces unexpected issues, bugs, or performance problems. Utilizing the rollback button can help maintain application reliability and user satisfaction by swiftly reverting to a reliable state without the need for a time-consuming and error-prone manual rollback process. It serves as a valuable safety net during the deployment lifecycle, enabling us to respond promptly to any unforeseen issues and keep our apps running smoothly.
Once we click the rollback button, it will roll back to the previous version with the weight set to 25%.
Once we validate that our demo app is functioning as designed after being rolled back to the previous version, we can click the PROMOTE-FULL button:
And now we have successfully rolled back to a previous version of our application. Thanks for taking the time to read this article. Just a reminder: all of the commands for this workshop are listed in this link.
