Visit our sponsors in their Zoom rooms: chat, ask questions, get information about their solutions.

Sponsor session 2 - 15:30-16:30

CloudOps, room 2

Citrix, room 2

Dynatrace, room 2

Elastic, room 2

Hashicorp, room 2

StackRox, room 2


Tracy Miranda,

Director of Open Source


Advancing the Future of CI/CD Together

Delivering software is increasingly complex due to cloud native environments and tool fragmentation. This talk outlines how the Continuous Delivery Foundation drives open initiatives so we can all work together to accelerate CI/CD adoption in a rapidly changing tech landscape.

How the Continuous Delivery Foundation (CDF) is working to advance CI/CD.

The Continuous Delivery Foundation was launched in 2019 as the new home of the FOSS projects Jenkins, Jenkins X, Spinnaker, and Tekton. The foundation is also a community working to advance adoption of CI/CD best practices and tools. This talk outlines its initiatives and ways to get involved so we can all work together to accelerate CI/CD adoption.

The Continuous Delivery Foundation hosts key CI/CD projects. This talk gives a brief overview of those projects and how we are working toward interoperability between them. We also look at the goals of the CDF and key initiatives such as the CI/CD landscape, security, diversity, and MLOps. This talk will share how you can get involved so we can all work together in open source to drive forward the direction of CI/CD and make software delivery better for everyone.

Kevin Crawley,

Developer Relations


Untangling the Maesh - Thinking Outside the Net

When the team at Containous decided to build a service mesh, we had to choose the most effective approach to its design, implementation, and compatibility. We also needed a solution that was both fast and quick to build. In this talk, I’ll cover why we decided to leverage our existing open-source project Traefik, how we accomplished this, the benefits, and how we were able to move away from the sidecar proxy model associated with most service meshes. In addition, I will briefly discuss some of the advantages and disadvantages of using a service mesh, and the situations where you may or may not want to use one.

I will also demonstrate how this all works by deploying a realistic microservice application on Kubernetes, using features such as canary testing backed by the latest Service Mesh Interface (SMI) specification, along with back-offs and retries. Finally, I’ll also cover some of the other advantages of the SMI, how we’ve been involved with that project, and discuss the future of Maesh.
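As a rough illustration of the canary mechanism mentioned above, an SMI TrafficSplit resource shifts a weighted fraction of traffic to a canary backend. A minimal sketch, with hypothetical service names (not taken from the talk's demo):

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: demo-canary          # hypothetical name
spec:
  service: demo              # root service that clients address
  backends:
    - service: demo-v1       # stable version keeps 90% of traffic
      weight: 90
    - service: demo-v2       # canary version receives 10%
      weight: 10
```

Shifting the canary forward is then just a matter of updating the weights, which the mesh applies without touching application code.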

Alex Menezes,

Service Reliability Engineer

Red Hat

A Case Study on The Value of Operators

What is all this talk about operators? How can a cloud native application leverage this new standard to empower developers to do what they do best? How can the operator-sdk be a powerful tool for quickly creating operators? What value do we extract from it? That’s what we try to answer in this talk.

Have you ever wanted to give your platform users a seamless App Store experience with your application? Is it even possible? Our answer is yes! We use operators for that! Starting from a case study, we run down the types of architectures that led us to where we are right now with operators. We discuss a bit of operator design, the operator-sdk, OLM (Operator Lifecycle Manager), the OpenShift embedded OperatorHub, operator capability levels, and how it all impacts the open source Cloud Native community and accelerates the adoption of new technologies. If you have a basic understanding of Kubernetes and containers, join us for this talk and take a look at how your Cloud Native application can be empowered by using operators.
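The core idea an operator encodes, regardless of SDK, is a reconcile loop that converges actual state toward desired state. A minimal, language-agnostic sketch in Python (plain dicts stand in for cluster state; this is the pattern, not the operator-sdk API):

```python
# Illustrative reconcile loop: compare desired vs. actual state and
# emit the actions needed to converge them. Real operators run this
# continuously in response to cluster events.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions that would bring `actual` in line with `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")      # missing object
        elif actual[name] != spec:
            actions.append(f"update {name}")      # drifted object
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")      # no longer declared
    return actions

desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
print(reconcile(desired, actual))  # -> ['update web', 'create db', 'delete cache']
```

An operator-sdk operator wraps exactly this loop in Kubernetes machinery: watches trigger the reconcile, and the actions become API calls.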

Steve Tene,

Cloud Native Engineer

Container Solutions

GitOps made simple with Flux

Ever wondered what GitOps is? Not really sure how to do it? We are going to demystify the concept behind it and make it understandable to everyone. Rest assured, it’s not about operations teams learning git commands, but about leveraging Git’s advantages to reach a desired cluster state.

The concept of GitOps originated at Weaveworks and stands for having a Git repository as the single source of truth for application code, infrastructure, and configuration. Coming from a CI/CD world where pipelines and deployment keys are usually mashed up together, we are almost locked into a world where we need a script for everything, especially to deploy our code to environments. But what if I’m not good at scripting/Python/Bash?!

After this talk you’ll be able to understand GitOps and how to use a simple tool like Flux to achieve a better result than spending a whole week writing deployment scripts.
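The behavior Flux automates can be summarized as a reconciliation pass that converges cluster state onto what Git declares. A minimal sketch in Python, with plain dicts standing in for the Git repository and the cluster (illustrative only, not Flux’s actual API):

```python
# Illustrative GitOps sync: Git is the single source of truth, so after
# one pass the cluster contains exactly what the repository declares.

def sync(git_manifests: dict, cluster_state: dict) -> dict:
    """One reconciliation pass over the cluster."""
    for name in list(cluster_state):
        if name not in git_manifests:
            del cluster_state[name]           # prune objects removed from Git
    for name, manifest in git_manifests.items():
        cluster_state[name] = manifest        # create or update from Git
    return cluster_state

repo = {"deployment/web": {"image": "web:1.2"}}
cluster = {"deployment/web": {"image": "web:1.1"}, "deployment/old": {}}
print(sync(repo, cluster))  # -> {'deployment/web': {'image': 'web:1.2'}}
```

Flux runs this loop for you against real manifests, which is why no deployment scripts are needed: changing the cluster means committing to Git.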

Dima Kassab,

Customer Engineer


Muneeb Master,

Hybrid Cloud Specialist


Google’s Approach to Configuration Management in Multi-Cluster Environments

  • Configuration management challenges in multi-cluster and hybrid Kubernetes deployments
  • Configuration as Code, GitOps-style
  • Config Sync: Git-syncing functionality for distributing configurations in multi-tenant, multi-cluster environments
  • Anthos Config Management: automate policy and security at scale across all of your Kubernetes deployments
  • Config Connector
  • Policy Controller

The talk will discuss configuration management and policy controllers in general, visiting other approaches and then showing the approach we adopted at Google.

Filipe Santos,

Container Solution Architect


Kubernetes Dream

We have all dreamed of having a piece of code that creates a Kubernetes cluster end to end, regardless of the underlying infrastructure. We all need to bring the application closer to our end users. We were told “the cloud will fix it all”, just like magic! Let’s dream together.

I hope I am not destroying anyone’s dreams but today we all know that magic doesn’t really exist.

Today we will discuss how we implemented Kubernetes clusters around the world on various cloud and private providers with the end goal of bringing the application closer to our customers. We will also review how to access and manage various clusters using a single view and how we upgrade our environments.

Karen Bruner,

Technical Evangelist


Trust No 8: Kubernetes Needs the Zero-Trust Model

As Kubernetes adoption continues to grow, so does the need for creating and following strong principles for securing Kubernetes clusters and their workloads. The multi-tenant use cases and deployment patterns found in Kubernetes clusters make these clusters an ideal breeding ground for escalating attacks if an intruder gets a foothold. By basing their Kubernetes security practices on the Zero-Trust Model, Kubernetes users can prevent and contain many serious incursions.

We will discuss how Kubernetes calls for a zero-trust architecture, noting what those principles would look like as they apply to different cluster components and resources. We will also discuss some of the tools in the Kubernetes ecosystem that can help you with your zero-trust goals.
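One concrete starting point for zero-trust networking in a cluster is a default-deny Kubernetes NetworkPolicy, so that every allowed flow must be declared explicitly. A minimal sketch (the namespace name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace   # hypothetical namespace
spec:
  podSelector: {}           # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

With this in place, additional NetworkPolicies then whitelist only the specific pod-to-pod flows each workload actually needs.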

By the end of the talk, you should have an understanding of why a zero-trust model for Kubernetes security is so important, what that ideal cluster might look like, and how to get started.

Kyle J. Davis,

Head of Developer Advocacy

Redis Labs

Declarative vs Imperative Caching

STOP! Don’t overcomplicate caching. Throw away the custom application logic and learn to approach caching declaratively.

The concept of caching is not complicated: store the results of a query for a short period of time in a fast storage engine so you touch the slow and/or expensive database less. This simple concept often hides application-level complexity, as not all queries should be cached. Deciding what to cache and what not to cache depends on many factors:
  • Will there be a performance benefit? Already-short-running queries may see negligible or even negative performance impact when cached.
  • Are slightly stale results acceptable? Your application or users may tolerate an out-of-date response for non-critical pieces but require near real-time results for others.
  • Does the query pollute your cache? A highly varied query pattern may result in your cache being overrun by data that provides little utility.

These issues lead to two conclusions: 1) you cannot universally cache all queries, and 2) you need to be able to toggle caching of queries for testing and profiling purposes.

Unfortunately, most application-level caching libraries take an imperative approach to caching: the application developer needs to implement some sort of flow control to determine whether a given query should be cached. This has to be implemented separately for each query that needs caching. Even with a simple, abstracted library, it requires thought and creates a surface area for bugs and mistakes.

An alternative approach is to work declaratively: your actual application logic is unaffected by the caching (or not) of a query; you only need to annotate the query itself to dictate the caching.

Ahmed Belgana,

Solutions Engineer


Vault in a multi-region setup to ensure resiliency and provide HA

It is essential for organizations to have a disaster recovery (DR) strategy to protect their Vault deployment against catastrophic failure of an entire cluster. Vault Enterprise supports multi-datacenter deployments where you can replicate data across datacenters for performance as well as disaster recovery. In this talk we will discuss the two replication methods, how to set them up, and when to use one or the other.

Mathieu Benoit,

Cloud Solution Architect


At scale Kubernetes Clusters and Apps management with Azure Arc

In this presentation we will take a quick look at Azure Arc, which allows us to manage Kubernetes clusters and apps at scale across on-premises, edge, and multi-cloud environments by centrally organizing and governing them from a single place. We will demonstrate DevOps best practices with a security-first approach.

George Kobar,


Observa-BLT, a Delicious Practice That Should Be More than just Tools for Logs, Metrics, Traces

In most organizations, the monitoring tool landscape is extremely fragmented. The application team has a tool for traces, while operations has a tool to view system metrics and a different tool for logs. Rarely does any tool provide a single unified view across these departments and systems. To add to this problem, our application and system infrastructure is becoming increasingly complex. Do you need even more tools? In this talk we will discuss observability across all states of your system, functional and dysfunctional alike, including your sociotechnical systems and exploring the unknown unknowns. We will explore the ingredients and foundational concepts needed to assemble a great single unified Observa-BLT with a single tool, along with different thoughts and methods on how to avoid getting burned in the kitchen.

Yan Lafrance,


How could the Citrix ADC help you navigate through a Kubernetes world

IT transformation, cloud, and application development technologies are key to delivering services throughout the industry. Come learn how Citrix ADCs can help you better manage your cloud-native infrastructures.

Steve Caron,

CoE Solutions Engineer


Embracing service-level objectives for your CI/CD pipelines

Organizations are investing effort in delivering code changes more rapidly and in automating testing and delivery. But for many, up to 80% of pipeline execution time is spent in manual build validation steps. How can this be reduced? Moreover, a CI/CD pipeline with well-defined tests does not guarantee a failure-safe application in production. It is therefore crucial to define the objectives and quality metrics of individual services as service-level objectives, expressed as code.

In this talk, we will discuss how service level indicators (SLIs) and service level objectives (SLOs) can help you set up automated quality gates in a CI/CD system to increase the pace of delivery while preventing bad code changes from ever reaching production. We will also provide an introduction to Keptn, an open-source pluggable control plane for autonomous software delivery, and see specifically how it helps implement such SLI/SLO-based quality gates.