Lambda is a cloud computing service built for AI developers, offering on-demand and reserved NVIDIA GPU instances and clusters for training and inference. The service provides a range of GPUs, including the NVIDIA H100, H200 and GH200 Tensor Core GPUs, so customers can train and fine-tune large AI models.
Lambda's on-demand cloud lets customers provision GPU instances by the hour with no long-term commitment, a good fit for projects whose compute needs fluctuate. For customers who need many GPUs for months at a time, Lambda also offers reserved cloud options.
Among its features:
- On-demand instances billed by the hour, with pricing that varies by GPU type and configuration.
- Reserved cloud capacity for longer-term commitments, available in one-, two- and three-year terms.
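The choice between on-demand and reserved pricing comes down to utilization: reserved capacity bills for every hour whether or not the GPU is busy, so it only pays off above a certain usage level. A minimal sketch of that break-even calculation, using hypothetical hourly rates (the figures below are illustrative placeholders, not Lambda's published prices):

```python
HOURS_PER_MONTH = 730  # average hours in a month


def monthly_cost_on_demand(hourly_rate: float, hours_used: float) -> float:
    """On-demand billing: pay only for the hours actually used."""
    return hourly_rate * hours_used


def monthly_cost_reserved(reserved_hourly_rate: float) -> float:
    """Reserved billing: pay for every hour in the month, busy or idle."""
    return reserved_hourly_rate * HOURS_PER_MONTH


def breakeven_hours(on_demand_rate: float, reserved_rate: float) -> float:
    """Monthly usage above which reserving is cheaper than on-demand."""
    return reserved_rate * HOURS_PER_MONTH / on_demand_rate


# Hypothetical rates: $2.50/hr on-demand vs $1.50/hr reserved.
threshold = breakeven_hours(2.5, 1.5)
print(f"Reserved wins above {threshold:.0f} GPU-hours/month")
```

With these placeholder rates the crossover lands at 438 hours per month, about 60% utilization; workloads that wax and wane below that line are better served by the on-demand tier.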
Developers and researchers will benefit from Lambda's ML-first user experience, which lets them quickly provision and manage GPU instances that meet their project needs. Thousands of AI engineers and researchers already use Lambda, and the company says they appreciate its ease of use and performance.
Published on July 10, 2024