The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance than the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands. Available in 40GB and 80GB memory versions, the A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) in the SXM form factor (1,935 GB/s for the PCIe card listed below) to run the largest models and datasets.
- FP64: 9.7 TFLOPS
- FP64 Tensor Core: 19.5 TFLOPS
- FP32: 19.5 TFLOPS
- Tensor Float 32 (TF32): 156 TFLOPS
- BFLOAT16 Tensor Core: 312 TFLOPS
- FP16 Tensor Core: 312 TFLOPS
- INT8 Tensor Core: 624 TOPS
- GPU Memory: 80GB HBM2e
- GPU Memory Bandwidth: 1,935 GB/s
- Max Thermal Design Power (TDP): 300W
- Multi-Instance GPU: Up to 7 MIGs @ 10GB
- Form Factor: PCIe, Dual-slot air-cooled or single-slot liquid-cooled
- Interconnect: NVIDIA NVLink Bridge for 2 GPUs – 600 GB/s, PCIe Gen4 – 64 GB/s
- Server Options: Partner and NVIDIA-Certified Systems with 1-8 GPUs
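The spec figures above imply a few useful rules of thumb: TF32 Tensor Core throughput is 8× standard FP32, FP16/BF16 doubles TF32, each of the seven MIG instances is provisioned a 10GB memory slice, and a full sweep of the 80GB of HBM2e at peak bandwidth takes roughly 40 ms. A small illustrative sketch, using only the datasheet numbers:

```python
# Back-of-the-envelope checks on the A100 80GB PCIe spec sheet above.
# All input figures come from the datasheet; the derived ratios are illustrative.

FP32_TFLOPS = 19.5
TF32_TFLOPS = 156.0
FP16_TFLOPS = 312.0
MEM_GB = 80
BANDWIDTH_GBS = 1935
MIG_SLICES = 7

# TF32 Tensor Cores deliver 8x the throughput of standard FP32.
tf32_speedup = TF32_TFLOPS / FP32_TFLOPS          # 8.0

# FP16/BF16 Tensor Cores double TF32 throughput.
fp16_speedup = FP16_TFLOPS / TF32_TFLOPS          # 2.0

# Splitting 80 GB seven ways gives ~11.4 GB raw; MIG provisions 10 GB per instance.
mig_slice_gb = MEM_GB / MIG_SLICES

# Time to read the entire 80 GB of HBM2e once at peak bandwidth.
full_sweep_ms = MEM_GB / BANDWIDTH_GBS * 1000     # ~41 ms

print(f"TF32 vs FP32: {tf32_speedup:.1f}x")
print(f"FP16 vs TF32: {fp16_speedup:.1f}x")
print(f"MIG slice: {mig_slice_gb:.1f} GB raw, 10 GB provisioned")
print(f"Full-memory sweep: {full_sweep_ms:.1f} ms")
```

Note these are peak theoretical rates; sustained throughput in real workloads is lower and depends on kernel efficiency and memory access patterns.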