Lambda is a cloud computing service for AI developers that lets you provision on-demand NVIDIA GPU instances and clusters. It charges by the hour, so it's a good fit for projects with fluctuating demand. Lambda supports a range of GPUs and offers preconfigured ML environments, scalable file systems and one-click Jupyter access, making it a solid all-purpose option for training AI models and running inference.
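Provisioning on Lambda can also be scripted. Below is a minimal sketch of launching an instance through Lambda's Cloud API; the endpoint path, field names, and the `gpu_1x_a100` / region / SSH key values are assumptions based on the public v1 API, so check the current API docs before relying on them. The request is only sent when an API key is configured in the environment.

```python
import json
import os
import urllib.request

# Assumed Lambda Cloud API v1 launch endpoint -- verify against current docs.
API_URL = "https://cloud.lambdalabs.com/api/v1/instance-operations/launch"

def build_launch_payload(instance_type, region, ssh_key_names):
    """Build the JSON body for a launch request (field names assumed)."""
    return {
        "instance_type_name": instance_type,
        "region_name": region,
        "ssh_key_names": ssh_key_names,
    }

# Hypothetical instance type, region, and SSH key name for illustration.
payload = build_launch_payload("gpu_1x_a100", "us-east-1", ["my-key"])

api_key = os.environ.get("LAMBDA_API_KEY")
if api_key:
    # Send the launch request only when credentials are available.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
else:
    # No key configured: just show the payload we would have sent.
    print(json.dumps(payload))
```

Because billing is hourly, a script like this pairs naturally with a matching terminate call once a training run finishes, so instances don't idle.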
Another good option is RunPod, a globally distributed GPU cloud that lets you spin up GPU pods immediately. It charges by the minute with no ingress or egress fees, and it offers more than 50 preconfigured templates for frameworks like PyTorch and TensorFlow. RunPod suits developers who need to stand up and manage AI workloads quickly with a minimum of setup hassle.
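Those preconfigured templates can be launched programmatically with RunPod's Python SDK. The sketch below assumes the `runpod` package's `create_pod` call; the image tag and GPU identifier shown are illustrative assumptions, so substitute the template and GPU types actually available to your account. As above, the API call only fires when a key is present.

```python
import os

# Hypothetical pod spec: the image tag and gpu_type_id below are
# assumptions for illustration -- pick real values from RunPod's console.
pod_spec = {
    "name": "pytorch-train",
    "image_name": "runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04",
    "gpu_type_id": "NVIDIA GeForce RTX 4090",
    "gpu_count": 1,
}

api_key = os.environ.get("RUNPOD_API_KEY")
if api_key:
    import runpod  # pip install runpod

    runpod.api_key = api_key
    # create_pod returns pod metadata, including its id, once scheduled.
    pod = runpod.create_pod(**pod_spec)
    print(pod)
else:
    # No key configured: just show the spec we would have launched.
    print(pod_spec)
```

Per-minute billing means a pod spun up this way for a short experiment costs only for the minutes it actually runs, which is where RunPod's model differs most from hourly providers.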