This module provides a brief overview of the OpenShift Web Console.
The OpenShift cluster web console URL for our workshop is {{ CONSOLE_URL }}. The login page prompts users for their Username and Password. Your username is {{ USER_ID }} and your password is {{ OPENSHIFT_USER_PASSWORD }}.
Proceed to create a new project with the name {{ USER_ID }}-myproject by selecting Create Project.
Upon creating a project, you will be brought to the overview page for the new project.
If you want to see a list of all the projects that are available, you can select Home→Projects from the side menu on the left. If you do not see the side menu, click on the hamburger menu button in the top left corner of the web console.
Proceed to select From Catalog. This will bring us to the Developer Catalog.
In this example, we will deploy a web application which is implemented using Python.
Click on Languages and then select Python. We will see the options for deploying applications related to Python. Select the Python tile for the generic Python Source-to-Image (S2I) builder.
This will bring up a dialog with the details of the builder image. Click on Create Application in the dialog.
Under the Git settings, we will insert the Git Repo URL that we will be using for this example, i.e. https://github.com/openshift-katacoda/blog-django-py. This repository contains a sample implementation of a blog application, designed to show the various features of OpenShift. The blog application is implemented using Python and Django.
Next, we will scroll down to the General settings. We can see that the Application Name field has been pre-populated with a value based on the Git repository name. We will need to select Deployment Config under the Resources section. Other changes can be made as needed.
Proceed to hit the Create button at the bottom of the page. Upon hitting the Create button, we will be redirected to the Topology overview of the project.
The topology overview provides a visual representation of the application you have deployed.
The Git icon shown to the lower right of the ring can be clicked on to take us to the hosted Git repository from which the source code for the application was built.
The icon shown to the lower left represents the build of the application. The icon will change from showing an hourglass, indicating the build is starting, to a sync icon indicating the build is in progress, and finally to a tick or a cross depending on whether the build was successful or failed. Clicking on this icon will take you to the details of the current build.
The ring itself will progress from being white, indicating the deployment is pending, to light blue indicating the deployment is starting, and blue to indicate the application is running. The ring can also turn dark blue if the application is stopping.
Once the build has started, click on the View Logs link shown on the Resources panel. This will allow you to monitor the progress of the build as it runs. The build will have completed successfully when you see a final message of Push successful. This indicates that the container image for the application was pushed to the OpenShift internal image registry.
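Behind the scenes, the console creates build resources on our behalf. The following is a minimal sketch of the kind of BuildConfig it generates for an S2I build; the metadata, builder image tag (python:latest) and output tag are assumptions rather than the literal objects the console produces:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: blog-django-py
spec:
  source:
    git:
      uri: https://github.com/openshift-katacoda/blog-django-py
  strategy:
    sourceStrategy:                # Source-to-Image (S2I) build
      from:
        kind: ImageStreamTag
        namespace: openshift
        name: python:latest        # builder image; the exact tag is an assumption
  output:
    to:
      kind: ImageStreamTag
      name: blog-django-py:latest  # pushed to the internal image registry
```

A build created from this configuration clones the repository, runs the S2I builder against it, and pushes the resulting image to the internal registry, which is what the Push successful message in the logs indicates.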
Once the build of the application image has completed, it will be deployed.
A Route is also automatically created for the application, exposing it outside of the cluster. The URL which can be used to access the application from a web browser will be visible.
Once the application is running, the icon shown to the upper right can be clicked to open the URL for the application route which was created.
Clicking on the URL link will bring us to the Django application that was just deployed.
Select the Administrator perspective for the project from the left hand side menu.
OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed.
Pods are the rough equivalent of a machine instance (physical or virtual) to a container. Each pod is allocated its own internal IP address, therefore owning its entire port space, and containers within pods can share their local storage and networking.
Pods have a lifecycle; they are defined, then they are assigned to run on a node, then they run until their container(s) exit or they are removed for some other reason. Pods, depending on policy and exit code, may be removed after exiting, or may be retained in order to enable access to the logs of their containers.
OpenShift Container Platform treats pods as largely immutable; changes cannot be made to a pod definition while it is running. OpenShift Container Platform implements changes by terminating an existing pod and recreating it with modified configuration, base image(s), or both. Pods are also treated as expendable, and do not maintain state when recreated. Therefore pods should usually be managed by higher-level controllers, rather than directly by users.
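As a minimal illustration, a pod is defined by a small YAML object like the sketch below; the pod name, image reference, and port are placeholders rather than values taken from the workshop application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: blog-django-py-example   # hypothetical pod name
  labels:
    app: blog-django-py
spec:
  containers:
  - name: blog-django-py
    image: blog-django-py:latest # assumed image reference
    ports:
    - containerPort: 8080        # port the application listens on (assumption)
```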
Go to the Workloads tab and select Pods to view the pods in this project.
A Kubernetes service serves as an internal load balancer. It identifies a set of replicated pods in order to proxy the connections it receives to them. Backing pods can be added to or removed from a service arbitrarily while the service remains consistently available, enabling anything that depends on the service to refer to it at a consistent address. The default service clusterIP addresses are from the OpenShift Container Platform internal network and they are used to permit pods to access each other.
Services are assigned an IP address and port pair that, when accessed, proxy to an appropriate backing pod. A service uses a label selector to find all the containers running that provide a certain network service on a certain port.
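A sketch of what a Service for the blog application might look like; the label selector and port numbers are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blog-django-py
spec:
  selector:
    app: blog-django-py          # label selector matching the backing pods
  ports:
  - port: 8080                   # port exposed on the service's cluster IP
    targetPort: 8080             # port the container listens on
```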
Like pods, services are REST objects. Go to the Networking tab and select Services to view the services in this project.
An OpenShift route is a way to expose a service by giving it an externally-reachable hostname like www.example.com. A defined route and the endpoints identified by its service can be consumed by a router to provide named connectivity that allows external clients to reach your applications.
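A sketch of a Route that exposes the service above; the hostname is an assumption, and in practice the cluster usually generates one for you:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: blog-django-py
spec:
  host: blog-django-py-myproject.apps.example.com  # assumed; normally generated
  to:
    kind: Service
    name: blog-django-py         # the service this route exposes
  port:
    targetPort: 8080             # service port to route traffic to
```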
Deployments and DeploymentConfigs in OpenShift Container Platform are API objects that provide two similar but different methods for fine-grained management over common user applications. A DeploymentConfig or a Deployment describes the desired state of a particular component of the application as a Pod template.
DeploymentConfigs involve one or more ReplicationControllers, which contain a point-in-time record of the state of a DeploymentConfig as a Pod template. Similarly, Deployments involve one or more ReplicaSets, a successor of ReplicationControllers.
The DeploymentConfig deployment system provides the following capabilities, several of which appear in the sketch after this list:

- A DeploymentConfig, which is a template for running applications
- Triggers that drive automated deployments in response to events
- User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a Pod, commonly referred to as the deployment process.
- A set of hooks (lifecycle hooks) for executing custom behavior at different points during the lifecycle of a deployment
- Versioning of your application in order to support rollbacks either manually or automatically in case of deployment failure
- Manual replication scaling and autoscaling
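A minimal DeploymentConfig sketch showing the Pod template, triggers, and strategy described above; the names and image stream tag are assumptions based on the workshop application:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: blog-django-py
spec:
  replicas: 1                    # manual scaling adjusts this value
  selector:
    app: blog-django-py
  template:                      # Pod template describing the desired state
    metadata:
      labels:
        app: blog-django-py
    spec:
      containers:
      - name: blog-django-py
        image: blog-django-py:latest
  triggers:                      # drive automated deployments in response to events
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - blog-django-py
      from:
        kind: ImageStreamTag
        name: blog-django-py:latest
  strategy:
    type: Rolling                # deployment strategy run by the deployment process
```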
Go to the Workloads tab and select Deployment Configs to view the DeploymentConfig in this project.
A ReplicationController ensures that a specified number of replicas of a Pod are running at all times. If Pods exit or are deleted, the ReplicationController acts to instantiate more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount.
A ReplicationController configuration consists of (see the sketch after this list):

- The number of replicas desired (which can be adjusted at runtime)
- A Pod definition to use when creating a replicated Pod
- A selector for identifying managed Pods
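A minimal ReplicationController sketch with those three pieces; the name and image are placeholders:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: blog-django-py-1         # hypothetical name
spec:
  replicas: 2                    # desired number of replicas, adjustable at runtime
  selector:
    app: blog-django-py          # selector identifying the managed Pods
  template:                      # Pod definition used when creating replicated Pods
    metadata:
      labels:
        app: blog-django-py
    spec:
      containers:
      - name: blog-django-py
        image: blog-django-py:latest
```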
Go to the Workloads tab and select Replication Controllers to view the ReplicationController for this project.
The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plug-in, or the system can use secrets to perform actions on behalf of a pod.
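A minimal Secret sketch; the name, keys, and values below are hypothetical placeholders, not credentials used by the workshop application:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: blog-credentials         # hypothetical secret name
type: Opaque
stringData:                      # plain-text input; stored base64-encoded
  database-user: blog
  database-password: changeme    # placeholder value
```

A secret like this could then be mounted into a container as a volume, or its keys exposed as environment variables.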
Many applications require configuration using some combination of configuration files, command line arguments, and environment variables. These configuration artifacts should be decoupled from image content in order to keep containerized applications portable.
The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs.

The ConfigMap API object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. ConfigMap is similar to secrets, but designed to more conveniently support working with strings that do not contain sensitive information.
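A sketch illustrating both fine-grained and coarse-grained configuration data in one ConfigMap; the name and keys are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: blog-settings            # hypothetical name
data:
  log-level: info                # fine-grained: an individual property
  settings.json: |               # coarse-grained: an entire configuration file
    {
      "theme": "default"
    }
```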
Go to the Workloads tab and select Config Maps to view the ConfigMaps for this project. In this case, we can see the CA certificates as config maps.
A PersistentVolume object is a storage resource in an OpenShift Container Platform cluster. Storage is provisioned by the cluster administrator by creating PersistentVolume objects from sources such as GCE Persistent Disk, AWS Elastic Block Store (EBS), and NFS mounts. Storage can be made available by laying claims to the resource. We can make a request for storage resources using a PersistentVolumeClaim object; the claim is paired with a volume that generally matches our request.
A PersistentVolume is a specific resource. A PersistentVolumeClaim is a request for a resource with specific attributes, such as storage size. In between the two is a process that matches a claim to an available volume and binds them together. This allows the claim to be used as a volume in a pod. OpenShift Container Platform finds the volume backing the claim and mounts it into the pod.
A PersistentVolumeClaim is used by a pod as a volume. OpenShift Container Platform finds the claim with the given name in the same namespace as the pod, then uses the claim to find the corresponding volume to mount.
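A sketch of a claim and a pod that consumes it as a volume; the claim name, storage size, and mount path are assumptions for illustration:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-data                # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # requested size, matched against available volumes
---
apiVersion: v1
kind: Pod
metadata:
  name: blog-django-py-example
spec:
  containers:
  - name: blog-django-py
    image: blog-django-py:latest
    volumeMounts:
    - name: data
      mountPath: /opt/app-root/data  # assumed mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: blog-data       # the pod consumes the claim as a volume
```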
In this chapter, we learnt about deploying an application from source code using a Source-to-Image (S2I) builder. We deployed the application from the web console using the Developer perspective and looked at the different tabs under the Administrator perspective.
The web application was implemented using the Python programming language. OpenShift provides S2I builders for a number of different programming languages and frameworks in addition to Python. These include Java, Node.js, Perl, PHP and Ruby.