Zero-Emission AI Cloud Launched with Integrated MLOps Technology Stack Optimized for NVIDIA Architectures

The San Francisco-based developer of cutting-edge AI infrastructure today announced the launch of the AI Cloud, its first zero-emission AI cloud, integrated with a complete stack of MLOps tooling covering the entire machine learning lifecycle.


In partnership with sustainable data center leader atNorth, the AI Cloud has been built from the ground up to simplify and accelerate AI development and deployment. Leveraging the latest AI-optimized NVIDIA GPU architectures, including A100-powered DGX and HGX systems, the AI Cloud provides a market-leading developer experience and unified resource management for deep learning and inference workloads.


It also provides AI developers with native integration of the company's complete MLOps interoperability platform, which facilitates all stages of AI pipeline creation, data management, training and monitoring, while offering seamless integration with the best open-source and proprietary AI development and deployment tools currently available, including Pachyderm, W&B, DVC, Seldon, MLflow, NNI and many more.


Located in atNorth’s Tier 3, ISO 27001-certified data center with over 80MW of power capacity, the AI Cloud runs entirely on renewable geothermal and hydroelectric energy and uses free-air cooling thanks to its near-Arctic location in Iceland. The company will begin utilizing available capacity for its own client work immediately, including AI transformation and development engagements in the telecommunications, retail, healthcare and consumer finance industries.


“The AI Cloud was created for enterprise AI teams who are dedicated both to high-impact AI solutions and to the responsibility we all share for ethical and sustainable AI. Our green cloud is a zero-emission path forward for increasingly large models and datasets,” said CEO Constantine Goltsev.

“This state-of-the-art MLOps platform helps to solve some of the most complex challenges that enterprises and scientists face today, and atNorth is thrilled to be a part of delivering that innovation at zero carbon cost,” said Sebastian Holtslag, Vice President International at atNorth. “Our high-density data centers across the Nordics are built with sustainability, scalability and security at the forefront, and we look forward to supporting the company’s future growth.”


The company remains committed to complete infrastructure portability: MLOps environments run seamlessly on virtually any compute infrastructure, with all resources, processes, permissions and artifacts managed through a central dashboard, and with pre-existing integrations for the major cloud providers (AWS, Azure, GCP), on-premise environments and the AI Cloud.

The company is an MLOps technology company whose interoperability platform supports the full lifecycle of AI development and deployment, including modular custom pipeline creation, resource orchestration and automation, and instrumentation at each step of ML system construction and deployment. The platform ships with out-of-the-box integrations that provide seamless interoperability across a broad ecosystem of best-in-breed open-source and proprietary ML tools from innovative providers such as W&B, DVC, Pachyderm, Seldon, MLflow and more. It also solves for infrastructure portability and lock-in, including optimizations for the latest NVIDIA GPUs and inference servers, such as those used in its own zero-emission AI Cloud. All resources, processes, permissions and artifacts can be managed through a central dashboard and installed and run on virtually any compute infrastructure, whether on-premise or in the cloud of your choice.
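To illustrate the modular pipeline model described above, here is a minimal sketch in plain Python. The `Pipeline` class, step names and stages below are purely illustrative assumptions for explanation, not the platform's actual API:

```python
# Hypothetical sketch of a modular ML pipeline: each stage is a plain
# function, and the Pipeline runs them in order, passing artifacts along.
from typing import Any, Callable


class Pipeline:
    """Minimal illustrative pipeline runner (not a real vendor API)."""

    def __init__(self) -> None:
        self.steps: list[tuple[str, Callable[[Any], Any]]] = []

    def step(self, name: str, fn: Callable[[Any], Any]) -> "Pipeline":
        self.steps.append((name, fn))
        return self

    def run(self, data: Any) -> Any:
        for name, fn in self.steps:
            data = fn(data)          # each stage transforms the artifact
            print(f"[{name}] done")  # stand-in for monitoring/instrumentation hooks
        return data


# Illustrative stages: ingest -> preprocess -> "train" (a toy average).
result = (
    Pipeline()
    .step("ingest", lambda _: [1.0, 2.0, 3.0, 4.0])
    .step("preprocess", lambda xs: [x / max(xs) for x in xs])
    .step("train", lambda xs: sum(xs) / len(xs))
    .run(None)
)
print(result)  # mean of the normalized values
```

In a real deployment, each stage would typically delegate to one of the integrated tools (e.g. DVC for data versioning, MLflow for tracking), with the dashboard supplying the orchestration and monitoring that the `print` calls stand in for here.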


atNorth is a leading Nordic data center services company offering environmentally responsible, power-efficient, cost-optimized data center hosting facilities and high-performance computing services. atNorth provides sustainable and highly scalable HPC resources, delivered fully as a service, enabling customers to focus on their simulation applications and calculations without having to worry about the underlying HPC infrastructure.