Important Stuff
Who We Are
Why We Love OpenStack
Where all the Cool Stuff is on GitHub
Assumptions
You are OpenStack Savvy, so...
You know a little about Heat
You are either devs or ops or both
You know what GitHub is
You've heard of Docker ;0)
Optional Skillz:
git
Docker
Kubernetes
Golang
Part 1: PaaS On OpenStack?
Cloud Services
Why Put a PaaS on OpenStack?
Improve IT's productivity
Build and Deploy Applications Faster
Maintain Flexibility
Drive Down Cost of IT
Meet Developer Expectations
Automation, Automation, Automation
Infrastructure-as-a-Service is Not Enough
Servers in the Cloud
You build and manage everything (OS, app servers, DB, application, etc.)
Software-as-a-Service
Someone else's app, hosted in the cloud
You are restricted to the features of the application: you get what they give you.
SalesForce.com, Google Apps, iCloud
Platform-as-a-Service
Quickly build (or try out) the applications that you need.
Code applications that can live on a hybrid cloud.
Leverage the ease, scale, and power of the Cloud.
PaaS is the ideal layer for interfacing with the platform: source code is the input. IaaS still requires weeks of setup time to get everything running, and something like an MBaaS is too intrusive (with all its code generation and such). PaaS is really the sweet spot for developers and those tasked with deploying applications.
The 3 Flavors of OpenShift
What can you do with OpenShift?
How Does It Work?
It starts with multi-tenancy via Linux containers...
How Does It Work?
...and adds central management with easily scaled deployments
Heat: Putting the PaaS in OpenStack
Cross Community Collaboration
Heat Overview
Provides AWS CloudFormation-compatible and native REST APIs
Abstract configuration of services into a simple template
HA/auto-scaling/monitoring features
OpenShift Origin Heat Templates
OpenShift Enterprise Heat Templates
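As a sketch of how these templates get used: with the standard heat CLI, launching a stack from one of them might look like the following (the file and parameter names here are illustrative, not taken from the actual templates).
$ heat stack-create openshift-origin -f openshift-origin.yaml -P "key_name=my_keypair"
$ heat stack-list
Heat reads the template, provisions the instances and networking it describes, and tracks them as a single stack.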
So Why a New PaaS?
Live and learn
New tools
Sweet user experience + awesome technologies = Happiness
What Red Hat does is build on and foster the best open source technologies. When OpenShift began, those technologies were RHEL, SELinux, cgroups, etc. Three years later we have a lot of additional choices, and we are again going to pick the best ones available.
First PaaS Component: Docker
Ten minute break and then we'll start talking about Docker, which is the first piece of the new OpenShift architecture.
What is a Container?
In the Docker world, a container is a running instance of an image
Based on Linux containers (namespaces, control groups)
A file system layer cake, aka a "Union File System"
Includes all of the components necessary to run a process, store persistent data, or both
Images: More like git than tar
Pull and Push
Versions and Tags
diff a container
Versioning / Tagging
Find the image ID:
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
danehans/nodesrv v1 fe809d275af3 18 hours ago 864.9 MB
node latest 32b8e915efd9 3 weeks ago 864.9 MB
centos centos6 b1bd49907d55 5 weeks ago 212.5 MB
centos centos7 b157b77b1a65 5 weeks ago 243.7 MB
centos latest b157b77b1a65 5 weeks ago 243.7 MB
Create the tag:
$ docker tag fe809d275af3 danehans/nodesrv:latest
$ docker images danehans/nodesrv
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
danehans/nodesrv v1 fe809d275af3 18 hours ago 864.9 MB
danehans/nodesrv latest fe809d275af3 18 hours ago 864.9 MB
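With both tags pointing at the same image ID, the image can be shared. Assuming you are logged in to a registry account (danehans on the Docker Hub here), a push uploads only the layers the registry doesn't already have:
$ docker push danehans/nodesrv:latest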
Container Ops
Instantiate a Docker container with docker run:
$ docker run -i -t danehans/centos /bin/bash
bash-4.1# exit
exit
List running and exited containers with docker ps:
$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
7c4ef3596fa5 danehans/centos:latest "/bin/bash" 49 seconds ago Exited (0) grave_newton
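Note that an exited container keeps its filesystem state. As a sketch, you can reattach to the container above with docker start, using the ID from the docker ps output:
$ docker start -a -i 7c4ef3596fa5
bash-4.1# exit
exit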
"Diffing" a Container
Run a Docker image and perform some actions:
$ docker run -i -t --name="add_wget" danehans/centos /bin/bash
bash-4.1# yum install -y wget
...
bash-4.1# exit
Run a diff on the container after it has run:
$ docker diff add_wget
C /.bash_history
C /etc
A /etc/wgetrc
C /tmp
C /usr
C /usr/bin
A /usr/bin/wget
C /usr/share
C /usr/share/doc
A /usr/share/doc/wget-1.12
...
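Because those changes are just another filesystem layer, they can be captured as a new image with docker commit (the wget tag here is illustrative):
$ docker commit add_wget danehans/centos:wget
The resulting image can then be run, tagged, and pushed like any other.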
Docker Containers as Daemons
Start a container as a detached process with docker run -d:
$ docker run -d danehans/nginx:base
1aa9f0bd1418f951a590c12ad717ea8af639dd29969ee3f59dfd87da1da23c4e
$ docker ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
1aa9f0bd1418 danehans/nginx:base "/bin/sh -c '/usr/sb 80/tcp elegant_bell
Use the -P flag to automatically map container ports to ports on the Docker host:
$ docker run -d -P danehans/nginx:base
1c2e06d8f85e6e034dfd1f7e822b32ed3f4ddf1d5760011d1e84a88a589f50f3
$ docker ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
1c2e06d8f85e danehans/nginx:base "/bin/sh -c '/usr/sb 0.0.0.0:49153->80/tcp loving_mclean
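To find out which host port the -P flag picked, ask docker port for the mapping (the container ID and port come from the docker ps output above):
$ docker port 1c2e06d8f85e 80
0.0.0.0:49153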
Docker Does:
Portability
Workflow
Ease of use
Speed
Docker Doesn't
See beyond a single host
Provision related containers as a unit
Have capacity for handling mass configuration & deployment.
The entire Docker lab could be done with a single host instance. That's a by-product of Docker's chief limitation as a host-centric technology. This wasn't an oversight so much as a scope boundary.
Even within the single-host scope, Docker doesn't provide for managing related containers as a group. If you want to run a web server container alongside a database container, you have to start them one at a time.
Another problem related to the host-level focus is that Docker itself doesn't have any sort of mass provisioning for containers. This includes configuration, deployment, and ongoing management.
To Docker's credit, they have specifically determined that this is out of scope for their project. However, it left a big opening for other technologies to come along and fill. The solution? Kubernetes.
PaaS Component #2: Kubernetes
What is Kubernetes? When I made this slide I had only a vague concept of what
Google's Kubernetes project was about, but I knew it was doing something at
a higher level than Docker. As of yet there are no cool logos for Kubernetes,
so I am going to make some proposals as we go through.
Kubernetes Terminology
Pod:
One or more inter-related Docker containers.
Service:
A configuration unit for the kube-proxy.
Label:
Used with pods to specify identifying metadata.
Master:
Runs cluster-level control plane services.
Minion/Node:
A Docker host running the kubelet and the proxy service.
Now we can start to get into what Kubernetes does for Docker deployments. Let's begin with some terminology...
Notice also the services that are running on this minion. Docker is the obvious one but the other three are new.
etcd
First, etcd. This is a highly available key/value store that provides the de facto messaging layer between each minion and a central controller. Strictly speaking, etcd doesn't need to live on each minion, or on any minion. As long as there are one or more reachable instances of etcd that a minion can communicate with, Kubernetes keeps working.
etcd instances handle their own clustering. If you really want to get into the weeds, they use the Raft consensus algorithm to agree on the true current value for a given key. It makes for a cool graphic, but we don't need to go deeper than that to use etcd with Kubernetes.
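As a quick sketch of what that key/value layer looks like, the etcdctl client can write and read a key directly (the key and value here are illustrative):
$ etcdctl set /message "hello kubernetes"
hello kubernetes
$ etcdctl get /message
hello kubernetes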
Minion Daemon: kubelet
Pod management
Take instructions from the cluster master
Now, the other minion-based process is called kubelet. Its main job is pod management, which really means that its main job is talking to Docker. The kubelet daemon interprets pod definitions in Docker terms and issues the right commands to get the desired behavior. All Docker functionality is available through kubelet and pods.
In addition to managing docker, the kubelet has another job, which is to keep track of
pod running states. The kubelet is periodically polled by cluster management processes
to know how the cluster is doing.
Finally, another feature of the kubelet that is in the works is the ability for the
kubelet to register its minion with the cluster. Right now minions have to be introduced
manually, but when this feature is in place, a kubelet with the right credentials can join
a cluster automatically.
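To make pod management concrete, here is a minimal sketch of a pod definition in the current v1beta1 API, reusing the nginx image from the Docker lab (the file name and label are illustrative):
$ cat nginx-pod.json
{
  "id": "nginx",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "nginx",
      "containers": [{
        "name": "nginx",
        "image": "danehans/nginx:base",
        "ports": [{ "containerPort": 80 }]
      }]
    }
  },
  "labels": { "name": "nginx" }
}
$ kubecfg -c nginx-pod.json create pods
The kubelet on whichever minion the scheduler picks will see this definition and issue the equivalent docker run.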
Minion Daemon: kube-proxy
The proxy service maps a common port on every minion to relevant pods across the entire cluster
Relevant pods are chosen by comparing a label on the proxy definition to labels on the running pods
The interesting thing about the service proxy is that every minion gets all of the service proxy rules, regardless of which pods are actually running on them. The job of the proxy starts with the question: "does this minion have any pods that match each of the service labels that I know about?"
In this diagram, the proxy knows about three services. But looking over the pods that
are running, the proxy sees that it can't handle requests for the mongo service. On the
other hand, it's got two running pods that match the 'nginx' label, so it will handle the
routing and traffic management to those pods.
The proxy service is not responsible for managing pods. All it does is indicate whether or not the minion can handle a given service request, and if there's more than one pod that can satisfy the request, it does the traffic management.
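A service definition is comparatively tiny. As a sketch, this one matches the nginx pods above by label and tells every kube-proxy to answer for them on port 8000 (the names and port are illustrative):
$ cat nginx-service.json
{
  "id": "nginx",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 8000,
  "selector": { "name": "nginx" }
}
$ kubecfg -c nginx-service.json create services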
Cluster Management
Kubernetes API
RESTful API for Kubernetes
Scheduler
Choose minions for pods
Controller Manager
Monitoring service for deployed pods
kubecfg
CLI for working with a Kubernetes cluster
A cluster master is a host that acts as both a front end and health monitor for a Kubernetes cluster. The master itself does not run Docker and therefore does not host pods. Instead, it provides a web API and runs a simple monitoring service that checks for the existence of specified pod deployments.
Finally, the cluster master also provides a utility called kubecfg that enables users to interact with the
cluster.
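As a sketch, querying the cluster from the master looks like this:
$ kubecfg list pods
$ kubecfg list services
$ kubecfg list replicationControllers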
Kubernetes Doesn't
Have a concept of a complete application.
Have capacity for building and deploying Docker images from source code.
Have a focus on a user or admin experience.
Bringing it All Together:
So far today, we've introduced Docker for running containerized applications, and we've
introduced Kubernetes as a cluster management layer over the top. But when we compare that
to a complete Platform-as-a-Service system, we can see a number of gaps. So just as we started
describing Kubernetes in terms of what Docker is missing, let's start talking about the new
OpenShift architecture in terms of what Kubernetes is missing... aside from a logo, of course.
Applications = Distinct Interconnected Services
Distinct: App components must be abstracted so that they can evolve independently
Interconnected: Every component should be easy to build, manage, and deploy in concert
So now we're thinking about applications differently. Pieces aren't just modular; they're actually completely distinct components
separated by network APIs.
Applications in OpenShift 3
config, n. A collection of objects describing a combination of pods, services, replicationControllers, and environment variables.
template, n. A parameterized version of a config for generalized re-use.
Now let's take a more concrete look at applications. Given that we are already thinking differently about them, what
do they actually look like in a Kubernetes and Docker based system?
Builds in OpenShift v3
buildConfig, n. An object containing three key pieces of information about an application that will be automatically built and rebuilt by OpenShift:
The source code URI
The build type (Docker or STI)
The authentication code for change notifications (webhooks)
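The schema is still in flux, so treat the following purely as an illustrative sketch of those three pieces; the repo URI, field names, and secret are made up for this example:
$ cat nodesrv-build.json
{
  "kind": "BuildConfig",
  "apiVersion": "v1beta1",
  "source": { "type": "Git", "git": { "uri": "https://github.com/danehans/nodesrv.git" } },
  "strategy": { "type": "Docker" },
  "secret": "webhook-secret"
}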
Application Lifecycle:
Integrating with CI and CD through "triggers"
Make a platform that is aware of changes:
In source code
On a CI system
In an image repository
...so that the entire product lifecycle is repeatable, fault-tolerant, and automated.
When we talk about DevOps, we are talking about an environment where we continuously iterate and deploy application code based on a series of gating operations.
Lifecycle in OpenShift 3: The Deployment
deployment, n. The combination of:
A replicationController that describes a desired running state.
One or more trigger policies for driving the deployment.
A deployment strategy for performing the deployment.
Deployment Trigger Policies
Manual
Image change
config change
Manual: you tell OpenShift to run the deployment.
Image change: OpenShift 3 watches the state of images in the cluster's private Docker repo. When an image is updated, redeployment is triggered.
config change: if you change the definition of your config, OpenShift redeploys the pieces covered in the deployment configuration.
New Concepts Summary
Configurations
Collections of Kubernetes and OpenShift 3 objects
Parameterized templates
Post-processed configs
Builds
Where is the code coming from?
How do we turn it into a Docker image?
Deployments
When do we deploy?
How do we deploy?
What should the deployment look like?
Enough Talking... Let's See It!
VIDEO
Getting Started