OpenShift Container Platform (OCP) is Red Hat's enterprise Kubernetes distribution, combining Kubernetes with enterprise-grade security and support. Kubernetes is a platform for deploying, scaling, and managing (collectively, orchestrating) container workloads on-premises and in the cloud.
In this series of articles, we explore the OpenShift Container Platform in depth. This introduction covers OpenShift basics to help you get started before the series moves on to advanced topics like OpenShift Architecture, OpenShift Service Mesh, and OpenShift Route.
Server virtualization helped the software industry abstract underlying computer hardware from workloads. Since then, containerization has risen in popularity, further abstracting away underlying system components, and becoming the cornerstone of microservices architecture.
The microservices architecture design pattern allows services to be developed, maintained, and scaled individually while communicating with other microservices via an API. With this application architecture, developers can create a microservice without dependencies on other microservices. In this paradigm, the container hosting a microservice is terminated and replaced each time developers release a new version, allowing the teams that develop microservices to iterate and innovate faster using short-lived containers.
As microservices architecture grew in popularity, so did the need for container orchestration at scale. Container orchestration is the problem that platforms like Kubernetes (a Google project released in June 2014 and later donated to the Cloud Native Computing Foundation, or CNCF) and OpenShift help solve.
To understand OpenShift, you need to understand Kubernetes and container orchestration.
As microservices-based applications grow, efficient container orchestration becomes crucial. Orchestration streamlines and automates many tasks an administrator or developer must otherwise do manually, including managing networking, storage, and essentially any other computing resource.
It might be helpful to think of OpenShift as the operating system for a data center: much as Linux manages an individual system's computing resources, OpenShift manages the CPUs, RAM, storage, and communication between all functions through APIs.
At a high level, Kubernetes groups workloads into pods: groups of one or more closely related containers. A pod represents a workload in an OpenShift environment. Pods are placed on machines in the cluster (physical or virtual) called worker nodes. A production cluster also requires at least three control nodes, collectively called the control plane, which manage the cluster.
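As a sketch of what a pod looks like in practice, here is a minimal single-container pod manifest; the pod name and container image are illustrative examples, not from this article:

```yaml
# A single-container pod; the scheduler places it onto a worker node.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod                 # illustrative name
spec:
  containers:
    - name: hello
      image: registry.access.redhat.com/ubi9/ubi-minimal   # example image
      command: ["sleep", "3600"]  # keep the container running
```

In real deployments, pods are rarely created directly; higher-level objects such as Deployments manage them so that terminated pods are replaced automatically.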
For more on how all the components work, check out our OpenShift Architecture article.
Because OpenShift is a distribution of Kubernetes, there are many similarities between the two platforms. For example, much like the kubectl command, OpenShift has the oc command that allows interaction with the OpenShift API via the command line.
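To illustrate the overlap, the same core commands work with either tool, while oc adds OpenShift-specific conveniences; the cluster URL, username, and project name below are illustrative placeholders:

```shell
# kubectl and oc are largely interchangeable for core Kubernetes objects:
kubectl get pods
oc get pods

# oc adds OpenShift-specific commands, e.g. logging in and switching
# projects (OpenShift's extension of Kubernetes namespaces):
oc login https://api.example-cluster:6443 --username=developer
oc project my-project
```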
However, they are not the same. OpenShift takes opinionated stances, provides more structure around specific topics, and adds value for end users. Many OpenShift value-adds are developer tools, such as the Source-to-Image (S2I) workflow, where a container image can be built directly from a source code repository. Here are several other areas where Red Hat OpenShift and Kubernetes differ:
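As a sketch of the S2I workflow, oc new-app can build and deploy an image straight from a repository; the builder image, sample repository, and application name here are illustrative:

```shell
# Build a container image from source using the Node.js builder image.
oc new-app nodejs~https://github.com/sclorg/nodejs-ex --name=my-nodejs-app

# Follow the resulting S2I build as it runs:
oc logs -f buildconfig/my-nodejs-app
```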
One of the benefits of OpenShift is that it can be deployed in the cloud, on-premises, or in a mixture of both, letting developers and administrators match the right resources to the right workload.
OpenShift deployment models
| Deployment model | Description |
| --- | --- |
| OpenShift Dedicated | A fully managed instance of OpenShift on AWS or GCP |
| Azure Red Hat OpenShift | A managed instance of OpenShift on the Azure cloud |
| Red Hat OpenShift on IBM Cloud | A fully managed OCP service on the IBM cloud |
| Red Hat OpenShift Container Platform | A self-managed installation of OCP in the cloud or on-premises |
Additionally, Red Hat CodeReady Containers (CRC) is a way to deploy a local instance of OpenShift to your laptop or test machine. As noted before, a production cluster needs at least three control nodes plus some number of worker nodes; CRC instead runs OpenShift on a single machine.
Be aware that CodeReady Containers is not viable for production deployment: many high-availability and production-level features are not available.
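A sketch of standing up a local cluster with CodeReady Containers, assuming the crc binary is installed and a pull secret has been downloaded from the Red Hat console:

```shell
crc setup                  # prepare the host (virtualization, networking)
crc start                  # create and start the single-node cluster
crc console --credentials  # print the console URL and login credentials
```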
If you’d like to learn more about Red Hat OpenShift, check out these detailed technical articles:
OpenShift Architecture: Learn the core concepts and components that are the building blocks of OpenShift.
OpenShift Route: Routes are the way OpenShift exposes services to the public.
OpenShift Service Mesh: A service mesh is a networking design pattern that gives the cluster more control over network functions.
OpenShift Alternatives: Understand the pros and cons of OpenShift alternatives such as Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS).
Understanding OpenShift Container Storage: Deep-dive into OpenShift container storage to understand its benefits, such as environment independence, scalability, customizability, built-in monitoring, and more.
Using Azure OpenShift: Explore the benefits of using OpenShift on Azure, including built-in security, cloud-native integrations, quick startup, flexible instance types, and more.
Rancher vs. OpenShift: The Guide: Learn how Rancher and OpenShift offer similar features, and how their differences influence when to use each tool, with recommendations and examples.
OpenShift Serverless: Guide & Tutorial: Learn the benefits and potential pitfalls of an event-driven OpenShift serverless deployment model and follow instructions and best practices to be successful with it.
Anti-Affinity OpenShift: Tutorial & Instructions: Learn about OpenShift pod anti-affinity use cases and configuration steps to schedule pods based on labels, and know what to do when a pod fails because of an anti-affinity rule.
OpenShift Operators: Tutorial & Instructions: Learn how to use OpenShift operators to automate the lifecycle management of applications deployed on OpenShift clusters, and follow installation and configuration instructions.
More chapters to come soon.