Cirrascale Cloud Services® is excited to announce an expansion of its Graphcloud service following Graphcore's launch of the IPU-POD128 and IPU-POD256, the latest and largest systems in Graphcore's ongoing effort to scale its AI compute platform. The new systems demonstrate the strengths of an architecture designed from the ground up for scale-out, extending Graphcore's reach into supercomputer territory with 32 petaFLOPS of AI compute for the IPU-POD128 and 64 petaFLOPS for the IPU-POD256. Cirrascale is offering the new systems to customers immediately as part of Graphcloud, a collaboration between Cirrascale and Graphcore® that gives customers access to Graphcore's tightly integrated hardware and software IPU-POD scale-out platform, built on second-generation GC200 Intelligence Processing Units (IPUs) and the Poplar® software stack.
As Graphcore noted in their recent blog post, initial results running popular models show impressive training performance and highly efficient scaling, with future software refinements expected to boost performance further. Early results show up to 97% scaling efficiency when moving from an IPU-POD64 to an IPU-POD128 on BERT-Large Phase 1 (SL128) pre-training in TensorFlow. For us here at Cirrascale, that's an extremely impressive scaling efficiency.
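As a rough illustration of what a figure like 97% means (this is a back-of-the-envelope definition for readers, not Graphcore's published benchmarking methodology, and the throughput numbers below are hypothetical), scaling efficiency can be computed as observed speedup divided by the ideal linear speedup:

```python
def scaling_efficiency(throughput_small, throughput_large, size_ratio):
    """Observed speedup divided by ideal (linear) speedup.

    size_ratio is the factor by which the system grew,
    e.g. 2.0 when going from an IPU-POD64 to an IPU-POD128.
    """
    observed_speedup = throughput_large / throughput_small
    return observed_speedup / size_ratio

# Hypothetical numbers: if doubling the POD yields a 1.94x throughput gain,
# the scaling efficiency is 1.94 / 2.0 = 0.97, i.e. 97%.
print(scaling_efficiency(1.0, 1.94, 2.0))
```

In other words, a 97% efficiency means that doubling the system delivers nearly double the throughput, with only a small loss to communication overhead.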
The IPU-POD128 and IPU-POD256 are the first systems to use the new IPU-Gateway Links, the horizontal, rack-to-rack connections that extend IPU connectivity across multiple PODs. The hardware and software have been architected together as a whole system, and both the IPU-POD128 and IPU-POD256 support standard frameworks and protocols for smooth integration. Innovators can focus on deploying their AI workloads at scale, using familiar tools while benefiting from cutting-edge performance in the cloud.
While the IPU-POD16 remains an ideal platform for exploration and the IPU-POD64 is aimed at those looking to build out their AI compute capacity, the addition of the IPU-POD128 and IPU-POD256 to Graphcloud gives customers a new, more streamlined way to grow further, faster. These systems can be optimized for maximum performance on different AI workloads, delivering the best possible total cost of ownership (TCO).
IPU-POD systems support industry-standard software tools. Developers can work with frameworks such as TensorFlow, PyTorch, PyTorch Lightning, and Keras, as well as open standards like ONNX and model libraries like Hugging Face. For deeper control and maximum performance, the Poplar framework enables direct IPU programming in Python and C++. Poplar allows effortless scaling of models across many IPUs without adding development complexity, so developers can focus on the accuracy and performance of their applications.
IPU-POD128 and IPU-POD256 systems are available now on Graphcloud in weekly or monthly flat-rate allotments, with discounts for longer-term use, predictable pricing, and no extra charges. Additionally, the Graphcloud platform integrates easily with a customer's existing cloud-based workflows at various hyperscalers, creating a secure, multi-cloud solution.
Customers can begin using the Graphcloud IPU-POD instances, or can learn more about purchasing their own IPU-POD for on-premises use, by visiting graphcloud.ai.