Red Hat, Inc., the world’s leading provider of open source solutions, today announced the release of Red Hat Enterprise Linux AI (RHEL AI), a foundation model platform that makes it easier for users to develop, test, and deploy generative artificial intelligence (GenAI) models. RHEL AI combines the open source-licensed Granite large language model (LLM) family from IBM Research with InstructLab model alignment tools, based on the LAB (Large-scale Alignment for chatBots) methodology, and a community-driven approach to model development through the InstructLab project. The entire solution is packaged as an optimized, bootable RHEL image for individual server deployments across the hybrid cloud, and it is also available in OpenShift AI, Red Hat’s hybrid machine learning operations (MLOps) platform, for running models and InstructLab at scale.
IDC anticipates that, as businesses worldwide recognize the benefits of GenAI and integrate it into their operations, they will invest over $40 billion in the technology in 2024 and over $150 billion by 2027.
Stefanie Chiras, Senior Vice President, Partner Ecosystem Success, Red Hat
“Red Hat Enterprise Linux is backed by a skilled ecosystem of certified hardware providers, OEMs, software and application vendors to deliver enhanced value and capabilities for customers, wherever they choose to deploy. With image mode for Red Hat Enterprise Linux, we are further enabling the Red Hat partner ecosystem with a flexible and reliable containerized operating system, equipped with the added security capabilities and application development models that customers expect from the world’s leading enterprise Linux platform.”
GenAI Innovation and Collaboration
The launch of ChatGPT generated tremendous interest in GenAI, with the pace of innovation only accelerating since then. Enterprises have begun moving from early evaluations of GenAI services to building out AI-enabled applications. A rapidly growing ecosystem of open model options has spurred further AI innovation and illustrated that there won’t be “one model to rule them all.” Customers will benefit from an array of choices to address specific requirements, all of which stands to be further accelerated by an open approach to innovation.
Implementing an AI strategy requires more than simply selecting a model; technology organizations need the expertise to tune a given model for their specific use case, as well as to deal with the significant costs of AI implementation. The scarcity of data science skills is compounded by substantial financial requirements, including:
- Procuring AI infrastructure or consuming AI services
- The complex process of tuning AI models for specific business needs
- Integrating AI into enterprise applications
- Managing both the application and model lifecycle
Building AI in the open with InstructLab
IBM Research created the Large-scale Alignment for chatBots (LAB) technique, an approach for model alignment that uses taxonomy-guided synthetic data generation and a novel multi-phase tuning framework. This approach makes AI model development more open and accessible to all users by reducing reliance on expensive human annotations and proprietary models. Using the LAB method, models can be improved by specifying skills and knowledge attached to a taxonomy, generating synthetic data from that information at scale to influence the model and using the generated data for model training.
After seeing that the LAB method could help significantly improve model performance, IBM and Red Hat decided to launch InstructLab, an open-source community built around the LAB method and the open-source Granite models from IBM. The InstructLab project aims to put LLM development into the hands of developers by making, building, and contributing to an LLM as simple as contributing to any other open-source project.
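In practice, a contribution to the project takes the shape of a small YAML file added to the community taxonomy tree. As a rough illustration only (the file path, schema version, and field names below follow the public InstructLab taxonomy conventions at the time of writing and may differ from the current schema), a skill contribution might look like:

```yaml
# Hypothetical InstructLab skill contribution (illustrative, not from
# the official taxonomy). A contributor would place a file like this at,
# for example: compositional_skills/writing/summarization/qna.yaml
version: 2
task_description: Summarize a short passage in a single sentence.
created_by: example-contributor
seed_examples:
  - question: |
      Summarize: "The LAB method uses taxonomy-guided synthetic data
      generation to reduce reliance on human annotation."
    answer: |
      LAB generates synthetic training data from a taxonomy instead of
      depending on large volumes of human-annotated examples.
  - question: |
      Summarize: "RHEL AI packages Granite models and InstructLab
      tooling as a bootable image for hybrid cloud deployment."
    answer: |
      RHEL AI bundles Granite models and InstructLab tools into a
      deployable image for the hybrid cloud.
```

The InstructLab tooling uses seed examples like these to generate synthetic training data at scale, which then feeds the multi-phase tuning that the LAB method describes.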
As part of the InstructLab launch, IBM has also released a family of select Granite English language and code models in the open. These models are released under an Apache license, with transparency on the datasets used to train them. The Granite 7B English language model has been integrated into the InstructLab community, where end users can contribute skills and knowledge to collectively enhance the model, just as they would when contributing to any other open-source project. Similar support for Granite code models within InstructLab will be available soon.
Open source AI innovation on a trusted Linux backbone
RHEL AI builds on this open approach to AI innovation, incorporating an enterprise-ready version of the InstructLab project and the Granite language and code models along with the world’s leading enterprise Linux platform to simplify deployment across a hybrid infrastructure environment. This creates a foundation model platform for bringing open-source-licensed GenAI models into the enterprise. RHEL AI includes:
- Open source-licensed Granite language and code models that are supported and indemnified by Red Hat.
- A supported, lifecycled distribution of InstructLab that provides a scalable, cost-effective solution for enhancing LLM capabilities and making knowledge and skills contributions accessible to a much wider range of users.
- Optimized bootable model runtime instances with Granite models and InstructLab tooling packages, delivered as bootable RHEL images via RHEL image mode, including optimized PyTorch runtime libraries and accelerators for AMD Instinct™ MI300X, Intel, and NVIDIA GPUs, and NeMo frameworks.
- Red Hat’s complete enterprise support and lifecycle promise, starting with a trusted enterprise product distribution, 24×7 production support, and extended lifecycle support.
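Image mode builds the operating system the same way a container image is built. As a minimal sketch, assuming a bootc-style base image (the registry path, package names, and file paths here are illustrative, not taken from RHEL AI’s actual images):

```dockerfile
# Containerfile: sketch of a bootable OS image built via RHEL image mode.
# The base image reference below is illustrative; consult Red Hat's
# registry documentation for the actual bootc base images.
FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Layer additional packages into the OS image like any container build.
RUN dnf -y install nginx && dnf clean all

# Bake configuration directly into the image.
COPY nginx.conf /etc/nginx/nginx.conf
```

The result is built and pushed with standard container tooling (for example, podman build and podman push) and then installed or updated as a bootable host image, which is what allows Granite models and InstructLab tooling to ship as a single versioned RHEL image.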
As organizations experiment and tune new AI models on RHEL AI, they have a ready on-ramp for scaling these workflows with Red Hat OpenShift AI, which will include RHEL AI, and where they can leverage OpenShift’s Kubernetes engine to train and serve AI models at scale and OpenShift AI’s integrated MLOps capabilities to manage the model lifecycle. IBM’s watsonx.ai enterprise studio, which is built on Red Hat OpenShift AI today, will benefit from the inclusion of RHEL AI in OpenShift AI upon availability, bringing additional capabilities for enterprise AI development, data management, model governance, and improved price performance.
The cloud is hybrid. So is AI.
For over 30 years, open source technologies have paired rapid innovation with greatly reduced IT costs and lowered barriers to innovation. Since the early 2000s, Red Hat has been at the forefront of this movement, first with RHEL, which delivered open enterprise Linux platforms, and then with Red Hat OpenShift, which established containers and Kubernetes as the cornerstones of open hybrid cloud and cloud-native computing.
Red Hat continues this effort, driving AI/ML strategies across the open hybrid cloud and enabling AI workloads to run wherever data resides, whether in the data center, across multiple public clouds, or at the edge. Red Hat’s vision for AI goes beyond workloads; it brings model training and tuning down this same path to better manage constraints related to data sovereignty, compliance, and operational integrity. The consistency that Red Hat’s platforms deliver across these varied environments, regardless of where they run, is essential to keeping AI innovation flowing.
RHEL AI and the InstructLab community further this goal by removing many of the obstacles to experimenting with and building AI models, and by providing the tools, data, and concepts needed to power the next generation of intelligent workloads.
Availability
Red Hat Enterprise Linux AI is now available as a developer preview. Building on the GPU infrastructure already available on IBM Cloud, which is used to train the Granite models and to support InstructLab, IBM Cloud will add support for RHEL AI and OpenShift AI. This integration will allow enterprises to deploy generative AI into their mission-critical applications more easily.