Amazon EC2 has the cloud’s broadest and most capable portfolio of hardware-accelerated instances featuring GPUs, FPGAs, and our own custom ML inference chip, AWS Inferentia. G4dn instances offer the best price/performance for GPU-based ML inference, training less complex ML models, graphics applications, and other workloads that need access to NVIDIA libraries such as CUDA, cuDNN, and NVENC.
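If you want to try a G4dn instance yourself, here is a minimal sketch (not from the original post) that launches one with boto3. The AMI ID, key pair name, and region are placeholders you would replace with your own values, for example a GPU-enabled AMI that ships with the NVIDIA drivers, CUDA, and cuDNN preinstalled.

```python
import boto3

# Sketch only: launch a single g4dn.xlarge instance (1x NVIDIA T4 GPU).
# ImageId and KeyName below are placeholders, not real resources.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: use a GPU-enabled AMI
    InstanceType="g4dn.xlarge",       # smallest G4dn size
    KeyName="my-key-pair",            # placeholder: your EC2 key pair
    MinCount=1,
    MaxCount=1,
)

print("Launched:", response["Instances"][0]["InstanceId"])
```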