Liqid Unveils Next-Generation Composable Infrastructure Solutions to Scale Enterprise AI for On-Premises Datacenter and Edge Environments
Innovative solutions deliver high-performance, agile, and efficient GPU, memory, and storage scale-up and scale-out to support AI inference and other power-hungry, compute-intensive workloads.
WESTMINSTER, Colo.--(BUSINESS WIRE)--
Liqid (www.liqid.com), the global leader in software-defined composable infrastructure for on-premises datacenters and edge environments, today announced new portfolio additions that are purpose-built to deliver unmatched performance and agility for scale-up and scale-out required for enterprise AI workloads, while minimizing costs from underutilized infrastructure as well as power and cooling demands.
Deliver 2x More Tokens per Watt + 50% Higher Tokens per Dollar
As AI becomes a strategic business driver, Liqid’s software-defined composable infrastructure platforms give enterprises a clear edge. Liqid uniquely enables granular scale-up and seamless scale-out to optimize for the new AI metrics: tokens per watt and tokens per dollar. By eliminating static inefficiencies and shifting to precise, on-demand resource allocation, Liqid boosts throughput while cutting power consumption, delivering up to 2x more tokens per watt and maximizing ROI on AI infrastructure.
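For illustration, both metrics are simple ratios of inference throughput to resources consumed. The sketch below shows how they can be computed; all figures in it are hypothetical examples for demonstration only, not Liqid benchmark results.

```python
# Illustrative only: the numbers below are hypothetical, not Liqid benchmark data.
# Tokens per watt and tokens per dollar are simple throughput-to-resource ratios.

def tokens_per_watt(tokens_per_second: float, avg_power_watts: float) -> float:
    """Sustained inference throughput divided by average power draw."""
    return tokens_per_second / avg_power_watts

def tokens_per_dollar(total_tokens: float, total_cost_dollars: float) -> float:
    """Total tokens served divided by total amortized infrastructure and energy cost."""
    return total_tokens / total_cost_dollars

# Hypothetical example: a composed 8-GPU pool serving 40,000 tokens/s at 6,000 W.
print(tokens_per_watt(40_000, 6_000))            # ~6.7 tokens/s per watt
# Hypothetical example: 1 billion tokens served for $2,500 in amortized cost.
print(tokens_per_dollar(1_000_000_000, 2_500))   # 400,000 tokens per dollar
```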
To help enterprises maximize AI initiatives and support compute-hungry applications such as VDI, HPC, and rendering, Liqid is releasing:
- Liqid Matrix® 3.6: Powerful software that delivers a unified interface for managing composable GPU, memory, and storage resources in real time, providing the agility to meet the demands of diverse and dynamic workloads and achieve balanced, 100% utilization.
- A new PCIe Gen5 10-slot composable GPU platform: Liqid EX-5410P is capable of supporting modern 600W GPUs as well as other accelerators, FPGAs, NVMe drives, and more. The EX-5410P is part of Liqid’s Gen5 PCIe fabric, which features Liqid Matrix software, a dedicated PCIe Gen5 switch, and host bus adapters (HBAs). The solution delivers GPU composability via ultra-low-latency, high-bandwidth interconnects, providing the performance, agility, and efficiency to optimize every workload and every dollar spent on infrastructure.
- A breakthrough composable memory solution: Liqid EX-5410C is built on the CXL 2.0 standard and is capable of powering memory-hungry applications such as LLMs and in-memory databases. The EX-5410C is part of Liqid’s CXL 2.0 fabric, which features Liqid Matrix software, a dedicated CXL switch, and HBAs. The solution delivers memory composability via ultra-low-latency, high-bandwidth interconnects, meeting the demands of memory-bound AI workloads and in-memory databases.
- Liqid LQD-5500: Updated Gen5 IOA drives for the fastest NVMe cache storage available, with up to 128TB of capacity and 50GB/s of bandwidth per device.
“With generative AI moving on-premises for inference, reasoning, and agentic use cases, it’s pushing datacenter and edge infrastructure to its limits. Enterprises need a new approach to meet the demands and be future-ready in terms of supporting new GPUs, new LLMs, and workload uncertainty, without blowing past power budgets,” said Edgar Masri, CEO of Liqid. “With today’s announcement, Liqid advances its software-defined composable infrastructure leadership in delivering the performance, agility, and efficiency needed to maximize every watt and dollar as enterprises scale up and scale out to meet unprecedented demand.”
Unified Interface for Composable GPU, Memory, and Storage
Liqid Matrix 3.6 delivers the industry’s first and only unified software interface for real-time deployment, management, and orchestration of GPU, memory, and storage resources. This intuitive platform empowers IT teams to rapidly adapt to evolving AI workloads, simplify operations, and achieve balanced, 100% resource utilization across datacenter and edge environments.
With built-in northbound APIs, Liqid Matrix seamlessly integrates with orchestration platforms such as Kubernetes, VMware, and OpenShift; job schedulers like Slurm; and automation tools such as Ansible, enabling resource pooling and right-sized AI Factory creation across the entire infrastructure.
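As a rough illustration of how such northbound automation might be driven, the sketch below composes GPUs to a host over a generic REST interface before a scheduler places work on it. The endpoint paths, payload fields, and host names are assumptions for demonstration purposes and do not represent Liqid Matrix’s documented API.

```python
# Hypothetical sketch: attach pooled GPUs to a server via a composability REST API
# before a scheduler (e.g., Slurm or Kubernetes) places a job on that node.
# Endpoint paths, payload fields, and host names are illustrative assumptions,
# not Liqid Matrix's documented interface.
import requests

MATRIX_API = "https://matrix.example.internal/api/v1"  # assumed base URL

def attach_gpus(host: str, gpu_count: int) -> dict:
    """Request that `gpu_count` GPUs from the fabric pool be composed to `host`."""
    resp = requests.post(
        f"{MATRIX_API}/compose",
        json={"host": host, "resource": "gpu", "count": gpu_count},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def release_gpus(host: str) -> None:
    """Return the host's composed GPUs to the shared pool when the job finishes."""
    resp = requests.post(
        f"{MATRIX_API}/decompose",
        json={"host": host, "resource": "gpu"},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Example flow an Ansible task or a Slurm prolog/epilog script might drive.
    attach_gpus("gpu-node-07", gpu_count=4)
    # ... run the inference or training job ...
    release_gpus("gpu-node-07")
```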
Next-Gen Scale-Up with PCIe Gen5 Composable GPU Solution
Liqid’s new EX-5410P, a 10-slot PCIe Gen5 composable GPU chassis, supports the latest high-power 600W GPUs, including NVIDIA H200, RTX Pro 6000, and Intel Gaudi 3. With orchestration from Liqid Matrix software, Liqid’s composable GPU solution enables higher density with greater performance per rack unit while lowering power and cooling costs. Organizations can also mix and match accelerators (GPUs, FPGAs, DPUs, TPUs, etc.) to tailor performance to specific workloads.
Liqid offers two composable GPU solutions:
- UltraStack: Delivers peak performance by dedicating up to 30 GPUs to a single server.
- SmartStack: Offers flexible resource sharing by pooling up to 30 GPUs across as many as 20 server nodes.
Composable CXL 2.0 Memory Solution: Unleashing New Levels of Performance
Liqid’s new composable memory solution leverages CXL 2.0 to disaggregate and pool DRAM, making it possible to allocate memory across servers based on workload demands. Liqid Matrix software powers the composable memory solution, ensuring better utilization, reducing memory overprovisioning, and accelerating performance for memory-bound AI workloads and in-memory databases.
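As a rough illustration of how pooled memory is typically consumed, memory composed to a host over CXL generally surfaces to Linux as an additional, CPU-less NUMA node. The sketch below assumes a Linux host with standard sysfs paths and simply enumerates NUMA nodes and their sizes so an operator can verify that newly composed capacity is visible before binding a memory-hungry workload to it.

```python
# Rough illustration (assumes a Linux host): memory composed to a server over CXL
# typically appears as an extra, CPU-less NUMA node. This sketch lists each NUMA
# node and its total memory so an operator can confirm newly composed capacity
# before binding a workload to it (e.g., with numactl --membind or libnuma).
import glob
import re

def numa_node_sizes_gib() -> dict:
    sizes = {}
    for meminfo in sorted(glob.glob("/sys/devices/system/node/node*/meminfo")):
        node = re.search(r"node(\d+)", meminfo).group(1)
        with open(meminfo) as f:
            for line in f:
                if "MemTotal" in line:
                    kib = int(line.split()[3])       # value is reported in kB
                    sizes[f"node{node}"] = kib / (1024 ** 2)
                    break
    return sizes

if __name__ == "__main__":
    for node, gib in numa_node_sizes_gib().items():
        print(f"{node}: {gib:.1f} GiB")
```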
Liqid offers the industry’s first and only fully disaggregated, software-defined composable memory solution, supporting up to 100TB of memory. Mirroring the flexibility of Liqid’s GPU offerings, Liqid offers two composable memory solutions:
- UltraStack delivers uncompromised performance by dedicating up to 100TB of memory to a single server.
- SmartStack enables dynamic pooling and sharing of up to 100TB of memory across as many as 32 server nodes.
Ultra-Performance NVMe for Unmatched Bandwidth, IOPS, and Capacity
The new Liqid LQD-5500 NVMe storage device offers 128TB capacity, 50GB/s bandwidth, and over 6M IOPS, combining ultra-low latency and high performance in a standard NVMe form factor. Ideal for AI, HPC, and real-time analytics, it offers enterprise-grade speed, scalability, and reliability.
Liqid’s solutions create disaggregated pools of GPUs, memory, and storage, enabling high-performance, agile, and efficient on-demand resource allocation. Liqid outperforms traditional GPU-enabled servers in scale-up performance and simplicity, while delivering unmatched agility and flexibility for scale-out demands through its open, standards-based foundation. Additionally, Liqid reduces the complexity, space, and power overhead typically associated with scaling across multiple high-end servers, avoiding the excessive power consumption of traditional AI factories.
About Liqid
Liqid is the leader in software-defined composable infrastructure, delivering flexible, high-performance, and efficient on-premises datacenter and edge solutions for AI inferencing, VDI, and HPC, as well as solutions for financial services, higher education, healthcare, telecommunications service providers, media & entertainment, and government organizations.
Liqid enables customers to manage, configure, reconfigure, and scale essential compute, accelerators (GPU, DPU, TPU, FPGA), memory, storage, and networking into physical bare metal server systems in seconds. Liqid customers can optimize their IT infrastructure and achieve up to 100% GPU and memory utilization for maximum tokens per watt and dollar. Learn more at www.liqid.com.
Copyright © 2025 Liqid, Inc. All Rights Reserved. Liqid and Liqid Matrix are registered trademarks of Liqid, Inc. Other trademarks may be trademarks of their respective owners.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250715123138/en/
Media Contact:
Joe Vukson, Liqid
joe.vukson@liqid.com
303-500-1551
Copyright Business Wire 2025