Network Types Offered by Hyperstack

Explore Hyperstack's virtual machine networking solutions designed to meet various workload demands and bandwidth requirements. These options include entry-level Ethernet, standard Ethernet, high-performance Ethernet with SR-IOV, and InfiniBand technologies. This article also details the network type associated with each of Hyperstack's GPU offerings.

In this article

  • Entry-level Ethernet network
  • Ethernet and high-performance Ethernet networks
    • How SR-IOV improves network performance
    • Features of Hyperstack VMs with high-performance Ethernet
  • InfiniBand (exclusive to Supercloud offerings)

Entry-level Ethernet network

For our CPU-only and RTX GPU virtual machines, entry-level Ethernet networking provides a simple, reliable way to transfer data between connected devices. This networking standard is ideal for CPU-only and basic GPU-accelerated workloads, enabling efficient communication between virtual machines and external networks. Reliable and affordable, entry-level Ethernet lets virtual machines exchange data, collaborate on tasks, and access remote resources, catering to a variety of computational needs.

Use cases

  • AI/ML, deep learning, virtual desktop infrastructure (VDI), 3D rendering, simulation, and visualisation.

Virtual Machine GPUs


Ethernet and high-performance Ethernet networks

For workloads requiring faster data transfer speeds, Hyperstack offers the L40, A100, and H100 PCI-e GPU virtual machines, capable of achieving speeds of up to approximately 10 gigabits per second (Gbps). Additionally, the A100 with NVLink, H100, and H100 with NVLink GPU virtual machines offer optional high-performance Ethernet with SR-IOV (Single Root I/O Virtualization) technology. SR-IOV assigns hardware resources directly to individual virtual machines, increasing bandwidth, reducing latency, and cutting the CPU overhead of packet processing.
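
To check the bandwidth a VM actually achieves, a common approach is a point-to-point iperf3 test between two machines. The sketch below is a minimal illustration rather than Hyperstack-specific tooling: it assumes iperf3 is installed on both VMs, that a server has already been started on the peer with `iperf3 -s`, and that SERVER_HOST is a placeholder address.

```python
import json
import subprocess

SERVER_HOST = "10.0.0.2"  # placeholder: peer VM running `iperf3 -s`

def measure_throughput_gbps(host: str, seconds: int = 10) -> float:
    """Run an iperf3 TCP test against `host` and return throughput in Gbps.

    Assumes iperf3 is installed locally and an iperf3 server is already
    listening on the peer.
    """
    result = subprocess.run(
        ["iperf3", "-c", host, "-t", str(seconds), "-J"],  # -J = JSON output
        capture_output=True,
        text=True,
        check=True,
    )
    report = json.loads(result.stdout)
    # For TCP tests, the receiver-side summary reflects delivered bandwidth.
    bits_per_second = report["end"]["sum_received"]["bits_per_second"]
    return bits_per_second / 1e9

if __name__ == "__main__":
    print(f"Measured throughput: {measure_throughput_gbps(SERVER_HOST):.2f} Gbps")
```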

How SR-IOV improves network performance

Single Root I/O Virtualization (SR-IOV) is a specification that extends the PCI Express (PCIe) standard to allow a single physical PCIe device, such as a network interface card (NIC), to present itself as multiple separate virtual devices to the host system. This capability is particularly useful in virtualized environments, where it improves the performance and efficiency of network traffic handling between virtual machines (VMs) and the physical network.
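
On a Linux system, you can confirm whether a NIC exposes SR-IOV virtual functions by reading its sysfs attributes. The snippet below is a minimal sketch assuming a Linux guest or host with an SR-IOV-capable NIC; the interface name eth0 is a placeholder.

```python
from pathlib import Path

def sriov_info(iface: str = "eth0") -> dict:
    """Report SR-IOV virtual-function counts for a NIC via sysfs.

    Assumes a Linux system; "eth0" is a placeholder interface name.
    Returns zeros when the device does not expose SR-IOV attributes.
    """
    device = Path(f"/sys/class/net/{iface}/device")
    info = {}
    for attr in ("sriov_numvfs", "sriov_totalvfs"):
        path = device / attr
        # The attribute is absent when the NIC or its driver lacks SR-IOV.
        info[attr] = int(path.read_text()) if path.exists() else 0
    return info

if __name__ == "__main__":
    counts = sriov_info()
    print(f"VFs enabled: {counts['sriov_numvfs']} of {counts['sriov_totalvfs']} supported")
```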

NOTE:

SR-IOV is available for contracted users upon request through our technical support team at [email protected].

Features of Hyperstack VMs with high-performance Ethernet

  • Increased throughput and reduced latency resulting from data bypassing the host operating system's software-based network stack.

  • Reduced CPU cycles required for processing network packets, especially advantageous in high-throughput environments where the overhead of emulating network access can be substantial.

  • Fine-grained control over network traffic facilitated by hardware isolation, allowing each virtual function to be independently secured and managed.

Use cases

  • LLM inference, LLM fine-tuning, data analytics, high-performance computing (HPC) for simulation, and scientific computing.

Virtual Machine GPUs

PCI-e cards are available in the CANADA-1 region.

GPUs with Ethernet networks

GPUs with high-performance Ethernet (SR-IOV) networks


InfiniBand

NOTE:

InfiniBand is exclusively available for Supercloud offerings.

InfiniBand (IB) enables the clustering of multiple hosts with GPUDirect RDMA, offering low latency and advanced QoS features. A standout protocol within this framework is SHARP (Scalable Hierarchical Aggregation and Reduction Protocol), which optimizes collective communications, especially in extensive parallel computing tasks, by delegating them to network hardware. By minimizing data movement during such operations, SHARP significantly enhances efficiency. Additionally, InfiniBand incorporates Self-Healing Networks, providing rapid recovery mechanisms from network failures. This capability is vital for upholding the stability and reliability of large GPU clusters, ensuring minimal downtime and consistent performance, even amidst network disruptions.
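
For a sense of how workloads exercise this fabric, the sketch below runs an all-reduce, the collective operation that SHARP offloads to the network hardware, using PyTorch's NCCL backend. It is a minimal illustration rather than Hyperstack-specific code: it assumes a CUDA cluster launched with torchrun, and that SHARP offload, where supported, is enabled through NCCL's CollNet setting (NCCL_COLLNET_ENABLE=1), which we note here as an assumption about the cluster's configuration.

```python
import os

import torch
import torch.distributed as dist

def main() -> None:
    # Launch with, e.g.: torchrun --nproc-per-node=8 allreduce_demo.py
    # Over InfiniBand, NCCL uses GPUDirect RDMA automatically where
    # available; SHARP in-network reduction is typically toggled with
    # NCCL_COLLNET_ENABLE=1 (assumed cluster configuration).
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    torch.cuda.set_device(local_rank)
    rank = dist.get_rank()

    # Each rank contributes rank + 1; all_reduce sums across every GPU,
    # so with W ranks each element ends up as W * (W + 1) / 2.
    tensor = torch.full((1024 * 1024,), float(rank + 1), device="cuda")
    dist.all_reduce(tensor, op=dist.ReduceOp.SUM)

    if rank == 0:
        print(f"all_reduce done; first element = {tensor[0].item():.0f}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```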

Use cases

  • LLM inference, LLM fine-tuning, LLM foundational model training, data analytics, high-performance computing (HPC) for simulation, and scientific computing.

Virtual Machine GPUs


For further assistance, don't hesitate to reach out to us at:

Support Email: [email protected]

Sales Contact: [email protected]

Phone: +44 (0) 203 475 3402

