Key Features

Accelerate Every Workload, Everywhere

The NVIDIA H100 is an integral part of the NVIDIA data center platform. Built for AI, HPC, and data analytics, the platform accelerates over 3,000 applications and is available everywhere from the data center to the edge, delivering both dramatic performance gains and cost-saving opportunities.

Take an Order-of-Magnitude Leap for Accelerated Computing

The NVIDIA H100 Tensor Core GPU delivers unprecedented performance, scalability, and security for every workload. With the NVIDIA® NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads, while the dedicated Transformer Engine supports trillion-parameter language models. H100 uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI, speeding up large language models by 30X over the previous generation.

Ready for Enterprise AI?

NVIDIA H100 GPUs for mainstream servers come with a five-year software subscription, including enterprise support, to the NVIDIA AI Enterprise software suite, simplifying AI adoption while delivering the highest performance. This ensures organizations have access to the AI frameworks and tools they need to build H100-accelerated AI workflows such as AI chatbots, recommendation engines, vision AI, and more. Access the NVIDIA AI Enterprise software subscription and related support benefits for the NVIDIA H100.

Securely Accelerate Workloads From Enterprise to Exascale

NVIDIA H100 GPUs feature fourth-generation Tensor Cores and the Transformer Engine with FP8 precision, extending NVIDIA’s AI leadership with up to 4X faster training and an incredible 30X inference speedup on large language models. For high-performance computing (HPC) applications, H100 triples the FP64 floating-point operations per second (FLOPS) and adds dynamic programming (DPX) instructions to deliver up to 7X higher performance. With second-generation Multi-Instance GPU (MIG), built-in NVIDIA confidential computing, and the NVIDIA NVLink Switch System, H100 securely accelerates all workloads for every data center, from enterprise to exascale.


|                                | H100 PCIe                 | H100 NVL                   |
|--------------------------------|---------------------------|----------------------------|
| FP64                           | 26 teraFLOPS              | 68 teraFLOPS               |
| FP64 Tensor Core               | 51 teraFLOPS              | 134 teraFLOPS              |
| FP32                           | 51 teraFLOPS              | 134 teraFLOPS              |
| TF32 Tensor Core               | 756 teraFLOPS             | 1,979 teraFLOPS            |
| BFLOAT16 Tensor Core           | 1,513 teraFLOPS           | 3,958 teraFLOPS            |
| FP16 Tensor Core               | 1,513 teraFLOPS           | 3,958 teraFLOPS            |
| FP8 Tensor Core                | 3,026 teraFLOPS           | 7,916 teraFLOPS            |
| INT8 Tensor Core               | 3,026 TOPS                | 7,916 TOPS                 |
| GPU memory                     | 80GB                      | 188GB                      |
| Memory bandwidth               | 2TB/s                     | 7.8TB/s                    |
| Decoders                       | 7 NVDEC; 7 JPEG           | 7 NVDEC; 7 JPEG            |
| Max thermal design power (TDP) | 300–350W (configurable)   | 2x 350–400W (configurable) |
| Multi-Instance GPU             | Up to 7 MIGs @ 10GB each  | Up to 14 MIGs @ 12GB each  |
| Form factor                    | PCIe, dual-slot, air-cooled | 2x PCIe, dual-slot, air-cooled |
| Interconnect                   | NVLink: 600GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options                 | Partner and NVIDIA-Certified Systems with 1–8 GPUs | Partner and NVIDIA-Certified Systems with 2–4 pairs |
| NVIDIA AI Enterprise           | Included                  | Included                   |
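As a quick sanity check on the peak-throughput figures above, the columns follow a simple pattern: each halving of Tensor Core precision roughly doubles peak throughput. A minimal sketch, with the numbers transcribed from the H100 PCIe column (the dictionary and variable names here are illustrative, not an NVIDIA API):

```python
# Peak throughput figures transcribed from the H100 PCIe column above.
# Units: teraFLOPS for floating-point rows, TOPS for INT8.
h100_pcie_peaks = {
    "FP64": 26,
    "FP64 Tensor Core": 51,
    "FP32": 51,
    "TF32 Tensor Core": 756,
    "BFLOAT16 Tensor Core": 1513,
    "FP16 Tensor Core": 1513,
    "FP8 Tensor Core": 3026,
    "INT8 Tensor Core": 3026,
}

# Halving the floating-point width on the Tensor Cores doubles peak
# throughput: FP16 -> FP8 is a clean 2x.
fp8_vs_fp16 = (
    h100_pcie_peaks["FP8 Tensor Core"] / h100_pcie_peaks["FP16 Tensor Core"]
)
print(f"FP8 over FP16 peak: {fp8_vs_fp16:.0f}x")  # prints "FP8 over FP16 peak: 2x"
```

These are published peak rates; sustained throughput on real workloads depends on the model, memory traffic, and software stack.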

Speak with an expert to learn more.