Argo Events and Argo Workflows

Zubair Haque
5 min read · Aug 12, 2021

In my previous position as a Site Reliability Engineer at an organization developing ETL data pipelines in Python, my primary objective was to improve our workflow efficiency. One major pain point was the time-consuming process of building applications and running unit tests every time a Pull Request (PR) was submitted or updated with new changes. I’m going to solve this problem with the Webhook Event-Source in Argo Events, an event-driven workflow automation framework for Kubernetes. With it, we can automatically trigger Argo Workflows, an open-source tool for parallel job orchestration on Kubernetes, to build our applications and run unit tests whenever a PR is submitted or updated. In this post, I’ll provide a step-by-step guide to setting up Argo Events and triggering Argo Workflows locally on your machine, ultimately increasing productivity and improving workflow efficiency.

Getting Started

Run the minikube start command to start your cluster:

$ minikube start

This command creates and configures a local virtual machine (or container, depending on your driver) that runs a single-node Kubernetes cluster.
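
Once the cluster is up, you can optionally confirm that the node is ready before moving on:

$ kubectl get nodes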

Set Up Argo Events

Argo Events reacts to external events and is Kubernetes-native, which makes it the perfect candidate for driving Argo Workflows. There are four key concepts to know (a minimal EventSource sketch follows the list):

  • EventSource: sets up the configuration for consuming events from an external source.
  • Sensor: defines a set of events that need to occur before a particular action can be started.
  • EventBus: internal communication bus that handles message exchange between the various components of the system.
  • Trigger: defines a set of actions that will be executed when all of its dependencies, such as events and conditions, have been met.
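
To make the first of these concrete, here is a minimal sketch of a webhook EventSource, loosely based on the upstream webhook example (the event name, port, and endpoint are illustrative and not used later in this post):

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: webhook
spec:
  webhook:
    example:              # arbitrary event name
      port: "12000"       # port the HTTP server listens on
      endpoint: /example  # path that receives the webhook payload
      method: POST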

Now that we have a clearer understanding of Argo Events, let’s get started by creating the argo-events namespace on your cluster:

$ kubectl create ns argo-events

Once the namespace has been created, we then need to install the Argo Events operator in our argo-events namespace:

$ kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/install.yaml

The following CRDs and RBAC resources should be created:

customresourcedefinition.apiextensions.k8s.io/eventbus.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/eventsources.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/sensors.argoproj.io created
clusterrole.rbac.authorization.k8s.io/argo-events-aggregate-to-admin created
clusterrole.rbac.authorization.k8s.io/argo-events-aggregate-to-edit created
clusterrole.rbac.authorization.k8s.io/argo-events-aggregate-to-view created
clusterrole.rbac.authorization.k8s.io/argo-events-role created
clusterrolebinding.rbac.authorization.k8s.io/argo-events-binding created

These resources are the custom objects required by the Argo Events operator, along with the permissions it needs to function correctly.

Let’s verify that the installation was successful by running the following command:

$ kubectl get crds | grep argoproj.io

This command lists all of the Argo Events CRDs installed on the cluster by name.

The next thing we’re going to do is deploy the event bus. The EventBus is the component that carries events from event sources to the appropriate sensors, which then evaluate their rules and fire their triggers:

$ kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/eventbus/native.yaml

You should get a message stating that your event bus was created:

eventbus.argoproj.io/default created
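
For reference, the manifest we just applied defines an EventBus backed by the bundled NATS streaming implementation; stripped down, it looks roughly like this:

apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
spec:
  nats:
    native:
      replicas: 3   # the three NATS streaming pods we check for below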

We also want to make sure that our event bus pods are running:

$ kubectl get pods -n argo-events

NAME                      READY   STATUS    RESTARTS   AGE
eventbus-default-stan-0   2/2     Running   0          57m
eventbus-default-stan-1   2/2     Running   0          57m
eventbus-default-stan-2   2/2     Running   0          57m

Now that our event bus is running, let’s create our webhook sensor. The webhook sensor listens for incoming webhook events on a specified endpoint and fires its triggers, such as creating a workflow, based on the rules defined in its YAML configuration:

$ kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/sensors/webhook.yaml

You should get a message stating that your webhook sensor was created:

sensor.argoproj.io/webhook created
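
Under the hood, a webhook sensor pairs a dependency on a webhook event with one or more triggers. A trimmed-down sketch of that shape looks like the following; the event source and template names are illustrative, and ci-pipeline refers to the WorkflowTemplate sketched at the end of this post:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  dependencies:
    - name: webhook-dep          # wait for this event before firing triggers
      eventSourceName: webhook
      eventName: example
  triggers:
    - template:
        name: run-ci-workflow
        k8s:                     # the action: create a Kubernetes resource
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: ci-
              spec:
                workflowTemplateRef:
                  name: ci-pipeline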

If you run the get pods command again, you’ll see the webhook sensor pod you just created:

$ kubectl get pods -n argo-events

NAME                                   READY   STATUS    RESTARTS   AGE
eventbus-default-stan-0                2/2     Running   0          76m
eventbus-default-stan-1                2/2     Running   0          76m
eventbus-default-stan-2                2/2     Running   0          76m
webhook-sensor-t78r9-66fc5f74c-j7tr6   1/1     Running   0          28s

Next, run the port-forward command so we can access the webhook sensor’s endpoint from our local machine for testing purposes:

$ kubectl port-forward deployment/webhook-sensor-t78r9 9300:9300 -n argo-events
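
With the port-forward in place, you can send a test request from your machine. The exact path depends on the endpoint configured in the manifest you applied, so treat this as a template rather than a copy-paste command:

$ curl -X POST -H "Content-Type: application/json" -d '{"message":"PR updated"}' http://localhost:9300/example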

Install Argo Workflows

Argo Workflows provides a powerful way to define and manage workflows using CRDs. These CRDs are made up of three key components:

  • Workflows: Defines the overall flow of the workflow to be executed.
  • Templates: Defines the container and actions to be executed within the workflow.
  • Template Invocators: Triggers the Templates to start running at the appropriate time within the workflow.

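To see how these pieces fit together, here is a minimal Workflow sketch: the entrypoint points at a steps template (a template invocator), which invokes a container template that does the actual work. The image and command are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-ci-
spec:
  entrypoint: main               # where execution starts
  templates:
    - name: main                 # template invocator: runs steps in order
      steps:
        - - name: run-tests
            template: run-tests
    - name: run-tests            # container template: the actual work
      container:
        image: python:3.9
        command: [sh, -c]
        args: ["echo 'running unit tests (placeholder)'"]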

Building a CI/CD Pipeline

To build our CI/CD pipeline with Argo Workflows, we’ll use a WorkflowTemplate to define the pipeline as a directed acyclic graph (DAG). This ensures that steps execute in order and that dependencies are met before moving on to the next step (a simplified sketch of such a template appears at the end of this post). Let’s create an argo namespace so we can define our pipeline and run our workflow on the minikube cluster:

Create the argo workflows namespace:

$ kubectl create namespace argo

Navigate to this page and install Argo on your machine. Now that we have Argo Workflows installed and Argo Events set up in its own namespace on the minikube cluster, let’s apply the following Kubernetes manifest to create the resources Argo Workflows needs to do its magic:

$ kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.4.5/install.yaml

Once we’ve done that, we should have:

  • The necessary CRDs, including Workflows, WorkflowTemplates, and WorkflowEventBindings.
  • Service accounts and RBAC resources for secure, controlled access to Kubernetes resources.
  • A ConfigMap that configures the Argo workflow controller, the component responsible for executing workflows.
  • A Service and Deployment for the Argo server, which provides a UI and API for interacting with workflows.
  • A PriorityClass that gives the workflow controller Deployment higher priority than other Deployments in the cluster, so it is not preempted by other workloads.
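
Finally, here is a simplified sketch of the WorkflowTemplate described above: a DAG with a build task followed by a test task. The template name, images, and commands are placeholders for your own build and unit-test steps:

apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: ci-pipeline
  namespace: argo
spec:
  entrypoint: ci
  templates:
    - name: ci
      dag:
        tasks:
          - name: build
            template: build
          - name: test
            dependencies: [build]   # run only after build succeeds
            template: test
    - name: build
      container:
        image: python:3.9           # placeholder build image
        command: [sh, -c]
        args: ["pip install -r requirements.txt"]
    - name: test
      container:
        image: python:3.9           # placeholder test image
        command: [sh, -c]
        args: ["pytest"]

Apply it with kubectl apply -n argo -f ci-pipeline.yaml, and the webhook sensor’s trigger (or a manual argo submit -n argo --from workflowtemplate/ci-pipeline) can then create Workflows from it whenever a PR event comes in.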
