Director of Open Source
Advancing the Future of CI/CD Together
Delivering software is increasingly complex due to cloud native environments and tool fragmentation. This talk outlines how the Continuous Delivery Foundation drives open initiatives so we can all work together to accelerate CI/CD adoption in a rapidly changing tech landscape.
How the Continuous Delivery Foundation (CDF) is working to advance CI/CD.
The Continuous Delivery Foundation was launched in 2019 as the new home for the FOSS projects Jenkins, Jenkins X, Spinnaker, and Tekton. The foundation is also a community working to advance the adoption of CI/CD best practices and tools. This talk outlines the initiatives and ways to get involved so we can all work together to accelerate CI/CD adoption.
The Continuous Delivery Foundation hosts key CI/CD projects. This talk gives a brief overview of those projects and how we are working toward interoperability between them. We also look at the goals of the CDF and key initiatives such as the CI/CD landscape, security, diversity, and MLOps. This talk will share how you can get involved so we can all work together in open source to drive forward the direction of CI/CD and make software delivery better for everyone.
Data Science Intern
Kubeflow: Machine Learning in Kubernetes
Kubeflow is an open source, production-ready platform for data scientists and DevOps engineers. This talk will demo the various components of Kubeflow with a focus on Kubeflow Pipelines. Check out kubeflow.org/docs/started/kubeflow-overview/ for more information.
The talk will focus mainly on Kubeflow Pipelines. In a demo, a typical ML workflow created in a Jupyter notebook will be transformed into an automated pipeline through Kubeflow Kale. I will demonstrate how to create custom pipeline steps and Docker images, and also show some of the functionality of Kubeflow Katib, with a focus on neural architecture search.
Service Reliability Engineer
A Case Study on The Value of Operators
What is all the fuss about operators? How can a cloud native application leverage this new standard, empowering developers to do what they do best? How can the operator-sdk be a powerful tool for quickly creating operators? What value do we extract from all this? Those are the questions this talk tries to answer.
Have you ever wanted to give your platform users a seamless, App Store-like experience with your application? Is it even possible? Our answer is yes: we use operators for that! Starting from a case study, we run down the types of architectures that led us to where we are today with operators. We discuss a bit of operator design, the operator-sdk, OLM (Operator Lifecycle Manager), operatorhub.io, the OpenShift embedded OperatorHub, and operator capability levels, and how all of this impacts the open source cloud native community and accelerates the adoption of new technologies. If you have a basic understanding of Kubernetes and containers, join us for this talk and take a look at how your cloud native application can be empowered by using operators.
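At the heart of every operator is a custom resource that users create to declare intent. As a rough illustration, here is a minimal sketch of the kind of CustomResourceDefinition an operator scaffolded with the operator-sdk might watch; the group, kind, and fields are hypothetical, not from the case study:

```yaml
# Hypothetical CRD an operator would reconcile against.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: appdeployments.example.com
spec:
  group: example.com          # placeholder API group
  scope: Namespaced
  names:
    kind: AppDeployment
    plural: appdeployments
    singular: appdeployment
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:      # the "knob" a user turns; the operator does the rest
                  type: integer
```

The operator's controller watches `AppDeployment` objects and drives the cluster toward the declared state, which is what gives users that one-click, App Store-like experience.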
Cloud Native Engineer
GitOps made simple with Flux
Ever wondered what GitOps is? Not really sure how to do it? We are going to demystify the concept behind it and make it understandable to everyone. Rest assured, it's not about operations teams learning Git commands, but rather about leveraging Git's advantages to reach a desired cluster state.
The concept of GitOps originated at Weaveworks and stands for having a Git repository as the single source of truth for application code, infrastructure, and configuration. Coming from a CI/CD world where pipelines and deployment keys are usually mashed together, we are almost locked into a world where we need a script for everything, especially to deploy our code to environments. But what if I'm not good at scripting, Python, or Bash?!
After this talk you’ll be able to understand GitOps and how to use a simple tool like Flux to achieve a better result than spending a whole week writing deployment scripts.
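To make the idea concrete, here is a minimal sketch of pointing Flux at a repository, assuming the Flux v2 (GitOps Toolkit) CRDs; the repository URL, path, and names are placeholders, not from the talk:

```yaml
# Flux polls this Git repository for changes...
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/my-app-config   # hypothetical repo
  ref:
    branch: main
---
# ...and continuously applies the manifests found at the given path,
# pruning anything that was removed from Git.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 5m
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-app
```

No deployment scripts: merging a change to `main` is the deployment, and the cluster converges to whatever the repository declares.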
Hybrid Cloud Specialist
Google’s Approach to Configuration Management in Multi-Cluster Environments
- Configuration management challenges in multi-cluster and hybrid Kubernetes deployments
- Configuration as Code, GitOps-style
- Config Sync: Git syncing functionality for distributing configurations in multi-tenant, multi-cluster environments
- Anthos Config Management: automating policy and security at scale across all of your Kubernetes deployments
- Config Connector
- Policy Controller
- A demo of Anthos Config Management (the plan is to include GKE On-Prem, GKE on AWS, and GKE clusters)
The talk will discuss configuration management and policy controllers in general, first visiting other approaches and then showing the approach we adopted at Google.
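As a rough illustration (not the exact demo configuration), enabling Git-based config syncing on a cluster via the ConfigManagement resource might look like the sketch below; the cluster name, repository, and directory are placeholders:

```yaml
# Hypothetical Anthos Config Management setup: every enrolled cluster
# pulls its configuration and policies from the same Git repository.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  clusterName: gke-us-east                              # placeholder cluster name
  git:
    syncRepo: https://github.com/example/policy-repo    # placeholder repo
    syncBranch: main
    policyDir: "."
    secretType: none
```

Applying the same resource to GKE, GKE On-Prem, and GKE on AWS clusters is what keeps a multi-cluster fleet converged on one source of truth.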
Container Solution Architect
We all dreamed of having a piece of code that creates a Kubernetes cluster end to end, regardless of the underlying infrastructure. We all need to bring the application closer to our end users. We were told "the cloud will fix it all", just like magic! Let's dream together.
I hope I am not destroying anyone’s dreams but today we all know that magic doesn’t really exist.
Today we will discuss how we implemented Kubernetes clusters around the world on various cloud and private providers with the end goal of bringing the application closer to our customers. We will also review how to access and manage various clusters using a single view and how we upgrade our environments.
Lead Container Developer Advocate
Cloud Native Security: Slaying the Insecure by Default Perception
While there is a lot of FUD around cloud native and container security in general, and Kubernetes security in particular (often described as "insecure by default"), there are ways to harden the security of a deployed cluster today by taking a shift-left approach.
With the advent of Kubernetes and microservices, the attack surface has increased, which necessitates a more holistic and disciplined approach to security. While there is a lot of FUD around container security in general, and Kubernetes security in particular (often described as "insecure by default"), there are ways to harden the security of a deployed cluster today by taking a shift-left approach.
Attend this session to understand practices for secure development and deployment, including a discussion of configuration parameters and how to incorporate security principles such as least privilege and authorization via runAsUser, readOnlyRootFilesystem, and allowPrivilegeEscalation, as well as scanning images for known vulnerabilities. Time permitting, we will also look at platforms built atop Kubernetes, such as Helm and Istio, from a security perspective.
This session is intended for developers, admins, and DevOps audiences alike. Attendees will come away with a good understanding of the challenges of securing Kubernetes and related platforms, how the shift-left approach helps minimize the attack surface, and how to incorporate best practices, configuration parameters, and more into their pipelines to secure their Kubernetes clusters and workloads.
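The hardening parameters mentioned in the abstract map onto a container's securityContext (note that the actual Pod API field is allowPrivilegeEscalation, set to false). A minimal sketch, with a placeholder image name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: example/app:1.0               # placeholder image
      securityContext:
        runAsUser: 1000                    # run as an unprivileged UID, not root
        runAsNonRoot: true                 # refuse to start if the image runs as root
        readOnlyRootFilesystem: true       # immutable container filesystem
        allowPrivilegeEscalation: false    # blocks setuid-style privilege gains
        capabilities:
          drop: ["ALL"]                    # least privilege: drop every capability
```

Shifting left means baking settings like these into manifests and validating them in CI, rather than auditing running clusters after the fact.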
Trust No 8: Kubernetes Needs the Zero-Trust Model
As Kubernetes adoption continues to grow, so does the need for creating and following strong principles for securing Kubernetes clusters and their workloads. The multi-tenant use cases and deployment patterns found in Kubernetes clusters make these clusters an ideal breeding ground for escalating attacks if an intruder gets a foothold. By basing their Kubernetes security practices on the Zero-Trust Model, Kubernetes users can prevent and contain many serious incursions.
We will discuss how Kubernetes calls for a zero-trust architecture, noting what those principles would look like as they apply to different cluster components and resources. We will also discuss some of the tools in the Kubernetes ecosystem that can help you with your zero-trust goals.
By the end of the talk, you should have an understanding of why a zero-trust model for Kubernetes security is so important, what that ideal cluster might look like, and how to get started.
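One common zero-trust starting point inside a cluster is to deny all pod traffic by default and then explicitly allow only what each workload needs. A minimal sketch of such a default-deny NetworkPolicy (the namespace is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace   # placeholder namespace
spec:
  podSelector: {}           # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress               # no inbound traffic allowed...
    - Egress                # ...and no outbound traffic, until other policies permit it
```

With this in place, an intruder who compromises one pod cannot freely pivot to its neighbors, which is exactly the containment the zero-trust model is after.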
Kyle J. Davis
Head of Developer Advocacy
Declarative vs Imperative Caching
STOP! Don’t overcomplicate caching. Throw away the custom application logic and learn to approach caching declaratively.
The concept of caching is not complicated: store the results of a query for a short period of time in a fast storage engine so that you touch the slow and/or expensive database less often. This simple concept often hides application-level complexity, because not all queries should be cached. Deciding what to cache and what not to cache depends on many factors:
- Will there be a performance benefit? Queries that already run quickly may see negligible or even negative performance impact when cached.
- Are slightly stale results acceptable? Your application or users may tolerate an out-of-date response for non-critical pieces but require near-real-time results for others.
- Does the query pollute your cache? A highly varied query pattern may result in your cache being overrun by data that provides little utility.
These issues lead to two conclusions: 1) you cannot universally cache all queries, and 2) you need to be able to toggle caching of queries for testing and profiling purposes.
Unfortunately, most application-level caching libraries take an imperative approach to caching: the application developer needs to implement some sort of flow control to determine whether a given query should be cached, and this has to be implemented uniquely for each query that needs caching. Even with a simple, abstracted library, it requires thought and creates surface area for bugs and mistakes.
An alternative approach is to work declaratively: your actual application logic is unaffected by whether a query is cached or not; you only annotate the query itself to dictate the caching.
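A minimal sketch of what that declarative style can look like in Python, using a decorator as the annotation and a plain dict standing in for the fast cache store; all names here are illustrative, not a specific library's API:

```python
import time
from functools import wraps

# Stand-in for a real database call; records invocations so the
# caching effect is visible. (Hypothetical helper, not from the talk.)
db_calls = []

def run_query(sql, param):
    db_calls.append(param)
    return f"rows for {param}"

# A plain dict plays the role of a fast store like Redis.
_cache = {}

def cached(ttl=30.0, enabled=True):
    """Declarative caching: the policy lives on the query, not in call sites."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args):
            if not enabled:
                return fn(*args)          # caching toggled off, e.g. for profiling
            key = (fn.__name__, args)
            hit = _cache.get(key)
            if hit is not None:
                value, expires = hit
                if time.monotonic() < expires:
                    return value          # fresh cache hit, database untouched
            value = fn(*args)             # miss: touch the slow database
            _cache[key] = (value, time.monotonic() + ttl)
            return value
        return wrapper
    return decorator

@cached(ttl=60)                 # slightly stale results are acceptable here
def popular_products(region):
    return run_query("SELECT ... FROM products", region)

@cached(enabled=False)          # near-real-time required: never cached
def account_balance(user_id):
    return run_query("SELECT balance FROM accounts", user_id)
```

Call sites never change: flipping `enabled` or adjusting `ttl` at the declaration site is all it takes to toggle caching for testing and profiling.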