As AI transforms enterprise operations across diverse industries, critical challenges continue to surface around data storage—no matter how advanced the model, its performance hinges on the ability to access vast amounts of data quickly, securely, and reliably. Without the right data storage infrastructure, even the most powerful AI systems can be brought to a crawl by slow, fragmented, or inefficient data pipelines.
This topic took center stage on Day One of VB Transform, in a session focused on medical imaging AI innovations spearheaded by PEAK:AIO and Solidigm. Together, alongside the Medical Open Network for AI (MONAI) project—an open-source framework for developing and deploying medical imaging AI—they are redefining how data infrastructure supports real-time inference and training in hospitals, from enhancing diagnostics to powering advanced research and operational use cases.
Innovating storage at the edge of clinical AI
Moderated by Michael Stewart, managing partner at M12 (Microsoft’s venture fund), the session featured insights from Roger Cummings, CEO of PEAK:AIO, and Greg Matson, head of products and marketing at Solidigm. The conversation explored how next-generation, high-capacity storage architectures are opening new doors for medical AI by delivering the speed, security and scalability needed to handle massive datasets in clinical environments.
Crucially, both companies have been deeply involved with MONAI since its early days. Developed in collaboration with King’s College London and others, MONAI is purpose-built to develop and deploy AI models in medical imaging. The open-source framework’s toolset—tailored to the unique demands of healthcare—includes libraries and tools for DICOM support, 3D image processing, and model pre-training, enabling researchers and clinicians to build high-performance models for tasks like tumor segmentation and organ classification.
A crucial design goal of MONAI was to support on-premises deployment, allowing hospitals to maintain full control over sensitive patient data while leveraging standard GPU servers for training and inference. This ties the framework’s performance closely to the data infrastructure beneath it, requiring fast, scalable storage systems to fully support the demands of real-time clinical AI. This is where Solidigm and PEAK:AIO come into play: Solidigm brings high-density flash storage to the table, while PEAK:AIO specializes in storage systems purpose-built for AI workloads.
“We were very fortunate to be working early on with King’s College in London and Professor Sébastien Ourselin to develop MONAI,” Cummings explained. “Working with Ourselin, we developed the underlying infrastructure that allows researchers, doctors, and biologists in the life sciences to build on top of this framework very quickly.”
Meeting dual storage demands in healthcare AI
Matson pointed out that he’s seeing a clear bifurcation in storage hardware, with different solutions optimized for specific stages of the AI data pipeline. For use cases like MONAI and similar edge AI deployments—as well as scenarios involving feeding training clusters—ultra-high-capacity solid-state storage plays a critical role, as these environments are often space- and power-constrained, yet require local access to massive datasets.
For instance, MONAI was able to store more than two million full-body CT scans on a single node within a hospital’s existing IT infrastructure. “Very space-constrained, power-constrained, and very high-capacity storage enabled some fairly remarkable results,” Matson said. This kind of efficiency is a game-changer for edge AI in healthcare, allowing institutions to run advanced AI models on-premises without compromising performance, scalability, or data security.
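A quick back-of-envelope calculation shows why ultra-high-capacity drives make a single-node archive like this plausible. The per-study size below is an illustrative assumption, not a figure from the session:

```python
# Back-of-envelope sizing for a single-node CT archive like the one described.
# All figures here are illustrative assumptions, not numbers from the session.

def node_capacity_tb(num_studies: int, avg_study_gb: float) -> float:
    """Raw capacity (decimal TB, as drive vendors quote) for num_studies studies."""
    return num_studies * avg_study_gb / 1000

# Assume ~0.5 GB per compressed full-body CT study (hypothetical average).
capacity = node_capacity_tb(num_studies=2_000_000, avg_study_gb=0.5)
print(f"~{capacity:,.0f} TB (~{capacity / 1000:.0f} PB) of raw flash")
```

At those assumed sizes, two million studies lands around a petabyte of raw flash—well within reach of a single node built on today's high-density SSDs, but far beyond what a comparable footprint of spinning disk could serve at AI speeds.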
In contrast, workloads involving real-time inference and active model training place very different demands on the system. These tasks require storage solutions that can deliver exceptionally high input/output operations per second (IOPS) to keep up with the data throughput needed by high-bandwidth memory (HBM) and ensure GPUs remain fully utilized. PEAK:AIO’s software-defined storage layer, combined with Solidigm’s high-performance solid-state drives (SSDs), addresses both ends of this spectrum—delivering the capacity, efficiency, and speed required across the entire AI pipeline.
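To make the inference-side demand concrete, here is a rough sketch of the aggregate read rate needed to keep a multi-GPU node busy, and what that implies in IOPS at a small random-read size. The GPU count, per-GPU ingest rate, and I/O size are all hypothetical:

```python
# Rough illustration of why inference/training nodes need high-IOPS storage:
# aggregate read bandwidth to keep every GPU fed, expressed as random-read IOPS.
# All parameters are assumptions chosen for illustration.

def required_iops(num_gpus: int, gb_per_s_per_gpu: float, io_size_kb: int) -> float:
    """IOPS needed to sustain the given per-GPU ingest rate at a fixed I/O size."""
    total_bytes_per_s = num_gpus * gb_per_s_per_gpu * 1e9
    return total_bytes_per_s / (io_size_kb * 1024)

# Hypothetical node: 8 GPUs each ingesting 2 GB/s, served as 4 KiB random reads.
iops = required_iops(num_gpus=8, gb_per_s_per_gpu=2.0, io_size_kb=4)
print(f"{iops / 1e6:.1f} million random-read IOPS")
```

Even under these modest assumptions the node needs several million random-read IOPS—a load that only flash can sustain, which is why the storage layer, not just the GPUs, determines whether the pipeline stays saturated.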
A software-defined layer for clinical AI workloads at the edge
Cummings explained that PEAK:AIO’s software-defined AI storage technology, when paired with Solidigm’s high-performance SSDs, enables MONAI to read, write, and archive massive datasets at the speed clinical AI demands. This combination accelerates model training and enhances accuracy in medical imaging while operating within an open-source framework tailored to healthcare environments.
“We provide a software-defined layer that can be deployed on any commodity server, transforming it into a high-performance system for AI or HPC workloads,” Cummings said. “In edge environments, we take that same capability and scale it down to a single node, bringing inference closer to where the data lives.”
A key capability is how PEAK:AIO helps eliminate traditional memory bottlenecks by integrating memory more directly into the AI infrastructure. “We treat memory as part of the infrastructure itself—something that’s often overlooked. Our solution scales not just storage, but also the memory workspace and the metadata associated with it,” Cummings said. This makes a significant difference for customers who can’t afford—either in terms of space or cost—to re-run large models repeatedly. By keeping memory-resident tokens alive and accessible, PEAK:AIO enables efficient, localized inference without needing constant recomputation.
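The recomputation-avoidance idea Cummings describes can be sketched, very loosely, as a memory-resident cache: expensive per-prefix work is computed once and then reused. This is an illustrative analogy in plain Python, not PEAK:AIO's implementation; the function name and cache size are invented:

```python
# Loose analogy for keeping computed state memory-resident instead of
# recomputing it: cache per-prefix "token state" so repeat requests reuse work.
# Illustrative only -- not PEAK:AIO's actual mechanism.

from functools import lru_cache

@lru_cache(maxsize=4096)  # the memory-resident workspace; size is an assumption
def token_state(prefix: str) -> int:
    # Stand-in for an expensive computation (e.g., encoding a prompt prefix).
    return sum(ord(c) for c in prefix)

token_state("patient scan report")  # computed once
token_state("patient scan report")  # served from cache, no recomputation
print(token_state.cache_info().hits)
```

The design point the analogy captures: once state is resident and addressable, a second request for the same work costs a lookup rather than a full recompute—which is what makes small, localized inference nodes economical.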
Bringing intelligence closer to the data
Cummings emphasized that enterprises will need to take a more strategic approach to managing AI workloads. “You can’t be just a destination. You have to understand the workloads. We do some incredible technology with Solidigm and their infrastructure to be smarter on how that data is processed, starting with how to get performance out of a single node,” Cummings explained. “So with inference being such a large push, we’re seeing generalists becoming more specialized. And we’re now taking work that we’ve done from a single node and pushing it closer to the data to be more efficient. We want more intelligent data, right? The only way to do that is to get closer to that data.”
Some clear trends are emerging from large-scale AI deployments, particularly in newly built greenfield data centers. These facilities are designed with highly specialized hardware architectures that bring data as close as possible to the GPUs. To achieve this, they rely heavily on all solid-state storage—specifically ultra-high-capacity SSDs—designed to deliver petabyte-scale storage with the speed and accessibility needed to keep GPUs continuously fed with data at high throughput.
“Now that same technology is basically happening at a microcosm, at the edge, in the enterprise,” Cummings explained. “So it’s becoming critical to purchasers of AI systems to determine how you select your hardware and system vendor, even to make sure that if you want to get the most performance out of your system, that you’re running on all solid-state. This allows you to bring huge amounts of data, like the MONAI example—it was 15 million-plus images, in a single system. This enables incredible processing power, right there in a small system at the edge.”