Description
With this training, you will learn how to deploy and manage containerized applications on Google Kubernetes Engine (GKE), and how to use other Google Cloud tools that interact with GKE deployments. Through extensive hands-on work, you will deploy solution elements, including infrastructure components such as pods, containers, deployments, and services, as well as networks and application services, and you will practice security and access management, resource management, and resource monitoring.
Who is this training for?
Cloud Architects, Cloud Administrators, and SysOps/DevOps professionals, as well as anyone who uses Google Cloud to create new solutions or to integrate systems, application environments, and infrastructure with Google Cloud.
Prerequisites
Have completed the "Google Cloud Platform - Core infrastructure - Fundamentals" course or have equivalent knowledge.
Training program
- Introduction to GCP (Google Cloud Platform)
- Use the Google Cloud Console
- Use Cloud Shell
- Define Cloud Computing
- Identify Google Cloud Compute Services
- Understand regions and zones
- Understand the hierarchy of Cloud resources
- Administer Google Cloud resources
- Containers and Kubernetes in GCP
- Create a container using Cloud Build
- Store a container in Container Registry
- Understand the relationship between Kubernetes and GKE (Google Kubernetes Engine)
- Understand how to choose among Google Cloud Compute platforms
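As an illustration of the build-and-store workflow covered in this module, a minimal Cloud Build configuration might look like the sketch below (the image name `hello-app` and the `v1` tag are placeholders for this example):

```yaml
# cloudbuild.yaml -- builds a container image and pushes it to Container Registry.
# $PROJECT_ID is substituted automatically by Cloud Build.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/hello-app:v1', '.']
# Images listed here are pushed to Container Registry when the build succeeds.
images:
  - 'gcr.io/$PROJECT_ID/hello-app:v1'
```

You would run it with `gcloud builds submit` from the directory containing the Dockerfile.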
- Kubernetes architecture
- Understand Kubernetes architecture: pods, namespaces
- Know the components of the Kubernetes control plane
- Create container images using Cloud Build
- Store container images in Container Registry
- Create a Kubernetes Engine cluster
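To make the pod and namespace concepts from this module concrete, here is a minimal sketch of the two objects (the `demo` namespace and `nginx-pod` names are illustrative):

```yaml
# A namespace partitions cluster resources; a pod is the smallest
# deployable unit and wraps one or more containers.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: demo
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
```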
- Kubernetes Operations
- Work with the kubectl command
- Inspect the cluster and pods
- View console output from a pod
- Connect to a pod interactively
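The operations listed above map to a handful of kubectl commands; a sketch, assuming a pod named `my-pod` in the current namespace:

```shell
# Inspect the cluster and its pods
kubectl get nodes
kubectl get pods --all-namespaces
kubectl describe pod my-pod          # detailed state and recent events
kubectl logs my-pod                  # view console output from the pod
kubectl exec -it my-pod -- /bin/sh   # connect to the pod interactively
```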
- Deployments, jobs and scaling
- Deployments
- Ways to create deployments
- Services and scalability
- Updating deployments
- Rolling updates
- Deployment management
- Jobs and CronJobs
- Parallel Jobs
- CronJobs
- Cluster scaling
- Scaling down
- Node pools
- Pod placement control
- Affinity and anti-affinity
- Example of pod placement
- Taints and tolerations
- Implementing software in your cluster
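The deployment and CronJob topics above can be sketched in two manifests; the names, replica count, and schedule are illustrative (the `hello-app` image is a public Google sample):

```yaml
# Deployment with an explicit rolling-update strategy: at most one extra
# pod during the update, and no pod unavailable at any time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: gcr.io/google-samples/hello-app:1.0
---
# CronJob: runs a job on a schedule (here, every night at 02:00).
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-job
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: task
              image: busybox:1.36
              command: ["sh", "-c", "echo nightly task"]
```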
- GKE Network
- Introduction
- Pod Network
- Services
- Service discovery
- Service Types and Load Balancers
- How load balancers work
- Ingress resources
- Container-native load balancing
- Network security
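To tie the service, load-balancer, and network-security topics together, a minimal sketch (the `hello` and `frontend` labels are placeholders): a `LoadBalancer` Service gives pods an external IP through a Google Cloud load balancer, and a NetworkPolicy restricts which pods may reach them.

```yaml
# External load balancer in front of pods labeled app=hello.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080
---
# Only pods labeled app=frontend may connect to the hello pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: hello
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```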
- Persistent data and storage
- Volumes
- Volume types
- Persistent volume abstraction
- Learn more about persistent volumes
- StatefulSets
- ConfigMaps
- Secrets
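The storage abstractions above fit together as in this sketch (names and sizes are illustrative): a PersistentVolumeClaim requests durable storage, a ConfigMap carries configuration, and a pod consumes both.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      envFrom:
        - configMapRef:        # ConfigMap keys become environment variables
            name: app-config
      volumeMounts:
        - name: data
          mountPath: /var/data
  volumes:
    - name: data
      persistentVolumeClaim:   # pod mounts the claimed volume
        claimName: data-pvc
```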
- Access control and security in Kubernetes and Kubernetes Engine
- Understand Kubernetes authentication and authorization
- Define RBAC roles and Kubernetes role bindings to access resources in namespaces
- Determine the Kubernetes RBAC cluster roles and ClusterRole bindings to access resources cluster-wide
- Define Kubernetes pod security policies
- Understand the IAM framework
- Define IAM roles and policies for Kubernetes Engine cluster administration
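As a sketch of the RBAC topics above (the `demo` namespace and `dev@example.com` user are placeholders), a Role grants read access to pods in one namespace and a RoleBinding assigns it to a user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo
rules:
  - apiGroups: [""]            # "" is the core API group (pods live there)
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: User
    name: dev@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding follow the same shape but apply cluster-wide rather than per namespace.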
- Logging and Monitoring
- Use Cloud Monitoring to monitor and manage availability and performance
- Locate and inspect Kubernetes logs
- Create probes for health checks on applications in production
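The health-check probes covered here are declared on the container spec; a minimal sketch (paths, ports, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:           # restart the container if this check fails
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:          # remove the pod from service endpoints while failing
        httpGet:
          path: /
          port: 80
        periodSeconds: 5
```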
- Using GCP-managed storage services from Kubernetes applications
- Understand the pros and cons of using a managed storage service versus self-managed containerized storage
- Allow applications running in GKE to access GCP storage services
- Understand the use cases of Cloud Storage, Cloud SQL, Cloud Spanner, Cloud BigTable, Cloud Firestore and BigQuery, from a Kubernetes application
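One common way for GKE workloads to reach managed storage services is Workload Identity, where a Kubernetes service account is mapped to a Google service account; a sketch (the `storage-access` name, `app-gsa` account, and `my-project` project ID are placeholders):

```yaml
# Pods using this Kubernetes service account authenticate to GCP APIs
# (Cloud Storage, Cloud SQL, BigQuery, ...) as the annotated Google
# service account -- no exported key files required.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-access
  annotations:
    iam.gke.io/gcp-service-account: app-gsa@my-project.iam.gserviceaccount.com
```

A pod opts in by setting `serviceAccountName: storage-access` in its spec.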
- CI/CD (Continuous Integration and Continuous Delivery)
- CI/CD concepts
- CI/CD for Google Kubernetes Engine
- CI/CD examples
- Manage application code in a source repository where code changes can trigger a continuous delivery pipeline
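As a sketch of such a pipeline (cluster name, zone, and image name are placeholders; `$SHORT_SHA` is a built-in Cloud Build substitution), a repository trigger could build, push, and roll out a new image on every commit:

```yaml
# cloudbuild.yaml for a simple CI/CD pipeline targeting GKE.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/hello-app:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/hello-app:$SHORT_SHA']
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/hello-deployment',
           'hello=gcr.io/$PROJECT_ID/hello-app:$SHORT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```

The final step triggers a rolling update of the deployment, so each commit flows from source to the cluster without manual intervention.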