Cloud Migration Challenges Hinder Enterprise AI Progress

For many companies, migrating to the cloud is a critical step in their path toward digital transformation and AI adoption.
“We’ll see an immense surge in data growth as AI progresses over the coming years,” said Ivo Ivanov, CEO of DE-CIX, one of the largest internet exchange operators globally. “Building a strong hybrid or multi-cloud environment with effective connectivity will become essential.”
However, many enterprises are finding cloud migration to be complex. An MIT survey revealed that 34% of respondents identified incomplete cloud migration as a key barrier to AI deployment speed. This issue is especially prevalent among businesses with revenues exceeding $1 billion, as they often face greater challenges and costs due to large data stores and outdated IT infrastructure.
Ivanov highlighted infrastructure, cost management, and a lack of cloud expertise as key concerns, all of which significantly impact companies striving to integrate AI. One challenge slowing migration is hidden expenses, such as cloud egress fees, which can unexpectedly inflate costs. Transitioning to the cloud also demands hiring cloud architects or upskilling existing teams, adding further complexity.
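To make the egress point concrete, here is a back-of-the-envelope sketch of how tiered per-gigabyte fees add up. The rates below are illustrative placeholders, not any provider’s published pricing.

```python
# Rough egress-cost estimator. The tiered rates are illustrative
# placeholders, not actual pricing from any cloud provider.
TIERS = [
    (10 * 1024, 0.09),    # first 10 TB/month (in GB) at $0.09/GB (assumed)
    (40 * 1024, 0.085),   # next 40 TB/month at $0.085/GB (assumed)
    (float("inf"), 0.07), # everything beyond at $0.07/GB (assumed)
]

def egress_cost(total_gb: float) -> float:
    """Return the estimated monthly egress bill for total_gb of transfer."""
    cost, remaining = 0.0, total_gb
    for tier_size, rate in TIERS:
        billed = min(remaining, tier_size)
        cost += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return cost

# Moving 300 TB out once costs far more than most teams budget for:
print(f"${egress_cost(300 * 1024):,.0f}")  # about $22,300 at the assumed rates
```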
Another common issue is vendor lock-in, which can be mitigated with a multi-cloud strategy, but connectivity problems can still arise. A lack of comprehensive cloud connectivity strategies leads to performance bottlenecks, hampering the data flow necessary for cloud and AI applications. Ivanov noted that the interoperability issues between hybrid or multi-cloud infrastructures create additional layers of complexity. Using software-defined cloud or AI routing services on an exchange platform can help mitigate these risks.
Without a well-defined connectivity plan, companies risk losing control. A recent IDC survey found that 64% of organizations in Europe consider network latency and performance to be significant technical challenges in cloud migration. “Ensuring the long-term success of AI initiatives requires a thoroughly planned infrastructure strategy,” Ivanov emphasized, one that includes optimizing connectivity, using cloud resources efficiently, and controlling costs.
Modern data environments add to this complexity, with many organizations still relying on outdated data pipelines that are difficult to maintain and slow the cloud migration process, said Jonathan Whitney, Chief Product Officer at Airbyte. Fragmented data systems and a lack of standardized tools for moving large volumes of both structured and unstructured data complicate the process further.
“Legacy systems aren’t scalable, leading to bottlenecks and obstructing seamless data transfer,” Whitney noted. Since AI depends on data, any disruption in accessing diverse, clean data sources negatively impacts the accuracy and effectiveness of AI models. This is particularly important for large language models (LLMs), which require massive datasets for training, typically hosted in cloud environments.
Without efficient cloud access, AI models struggle with insufficient data, reducing their ability to generate insights. This limitation hinders enterprises from fully leveraging AI’s potential.
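As one illustration of the standardized-tooling point, Airbyte publishes PyAirbyte, a Python wrapper around its connectors. The sketch below, based on the PyAirbyte quickstart, shows the general shape of a sync; it uses the built-in source-faker demo connector so it stays self-contained, whereas a real pipeline would swap in a production source and destination.

```python
# Minimal PyAirbyte sketch: pip install airbyte
import airbyte as ab

# source-faker is Airbyte's demo connector; a real migration would use
# a production source (a database or SaaS connector) instead.
source = ab.get_source(
    "source-faker",
    config={"count": 5_000},
    install_if_missing=True,
)
source.check()               # validate the connector configuration
source.select_all_streams()  # sync every stream the source exposes
result = source.read()       # extract into PyAirbyte's local cache

for name, dataset in result.streams.items():
    print(name, sum(1 for _ in dataset))
```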
Ivanov stressed the importance of establishing direct cloud connectivity and utilizing cloud exchange platforms to minimize latency and costs. “Data transmission, exchange, and network connections must be seamless, secure, and high-performing,” he said.
To achieve this, Ivanov outlined three essential components: direct cloud connections through private solutions at Cloud Exchanges, scalable cloud routing services, and direct peering with AI-as-a-Service providers. These private connections outperform public internet routes, as they are covered by SLAs ensuring reliability.
Whitney recommended open-source platforms that handle both structured and unstructured data across multiple clouds, along with scalable, automated workflows to keep data pipelines seamless. Tooling optimized for AI use cases, such as vector databases and retrieval-augmented generation (RAG) pipelines, is critical for enhancing AI and LLM workflows.
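To make the vector-database point concrete, the following is a minimal sketch of the retrieval step that underlies RAG, using a toy hashing “embedding” and an in-memory index. A production pipeline would use a trained embedding model and a dedicated vector store instead.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: hash tokens into a fixed-size unit vector.
    A real pipeline would call a trained embedding model instead."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Cloud egress fees can dominate migration budgets.",
    "Vector databases store embeddings for similarity search.",
    "Legacy pipelines slow down cloud migrations.",
]
index = np.stack([embed(d) for d in documents])  # toy in-memory "vector DB"

query = embed("how do vector databases help AI?")
scores = index @ query  # cosine similarity, since all vectors are unit-norm
best = documents[int(np.argmax(scores))]
print(best)  # retrieved context that would be prepended to the LLM prompt
```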
Accelerate Petabyte-Scale Migrations with Cloud Write on Amazon FSx for NetApp ONTAP

The International Data Corporation (IDC) predicts that global data generation and consumption will reach 175 zettabytes (ZB) by 2025. Consequently, organizations are seeking fast, reliable, and scalable cloud migration solutions to move their growing on-premises data into the cloud. Whether driven by an upcoming lease renewal, the closure of a data center, or the early phases of a technological transformation, your organization might be preparing for a significant migration project.
To highlight the challenges of large-scale data migrations, take the example of a prominent automotive manufacturer that needed to transfer over 300 TB of data from their legacy file systems to AWS. Initially, the migration took nearly two weeks because they had to repeatedly pause the process to let file system tiering catch up or risk exceeding capacity limits, which led to considerable delays. After implementing Amazon FSx for NetApp ONTAP’s Cloud Write feature, they reduced the migration time to just two days, an 85% reduction. This not only sped up the process but also enabled the company to benefit from AWS’s scalability, agility, and cost efficiency much sooner.
This article outlines how you can use FSx for ONTAP’s Cloud Write feature to quickly migrate data and optimize storage costs, providing step-by-step instructions to enable and disable Cloud Write, along with key considerations. It is intended for users planning to migrate non-ONTAP source file systems into FSx for ONTAP.
Solution Overview
Amazon FSx for ONTAP is a fully managed storage service that allows you to deploy and operate NetApp ONTAP file systems within AWS. It combines the features, performance, and APIs of NetApp file systems with the scalability and ease of AWS’s managed services.
FSx for ONTAP supports high-performance file storage accessible from Linux, Windows, and macOS through standard protocols like Network File System (NFS), Server Message Block (SMB), and Internet Small Computer System Interface (iSCSI). With features like snapshots, cloning, and replication, it provides flexible and cost-effective storage options, which are elastic and can scale to large sizes. Additionally, it supports data compression and deduplication, further reducing storage expenses.
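For orientation, here is a minimal sketch of provisioning a file system through the AWS SDK for Python (boto3). The subnet ID and sizing values are placeholders to replace with your own.

```python
import boto3

fsx = boto3.client("fsx")

# Placeholder subnet ID and sizing values; substitute your own.
response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB of provisioned SSD storage (the minimum)
    SubnetIds=["subnet-0123456789abcdef0"],
    OntapConfiguration={
        "DeploymentType": "SINGLE_AZ_1",
        "ThroughputCapacity": 128,  # MB/s
    },
)
print(response["FileSystem"]["FileSystemId"])
```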
What is Cloud Write?
As of November 2023, FSx for ONTAP includes a new volume-level feature called Cloud Write, which allows users to bypass the solid-state drive (SSD) storage tier and write data directly to the capacity pool tier. This feature enhances migration speed, streamlines processes, and cuts costs when transferring data to AWS.
FSx for ONTAP Storage Tiers
An FSx for ONTAP file system has two storage tiers: SSD storage, which is high-performance and provisioned for active datasets, and capacity pool storage, which is fully elastic and cost-optimized for infrequently accessed data. Depending on the configured tiering policy, a volume’s data can reside entirely on the SSD tier, entirely on the capacity pool tier, or span both. The SSD storage tier is billed based on provisioned capacity, while the capacity pool tier is elastic and billed based on usage.
FSx for ONTAP Tiering Policies
You can configure FSx for ONTAP volumes to automatically tier data between the SSD storage and capacity pool tiers. The tiering process operates in the background, with timeframes varying depending on the selected tiering policy and the volume of data. However, if the SSD tier fills faster than the tiering process can transfer data to the capacity pool, the migration pauses, as ONTAP prioritizes front-end traffic over tiering. When SSD space exceeds 98% capacity, tiering stops until space is freed up.
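Because a near-full SSD tier stalls tiering and, with it, the migration, it helps to watch utilization against that 98% threshold. The sketch below does so with CloudWatch; it assumes the FSx for ONTAP StorageUsed and StorageCapacity metrics with the StorageTier dimension, so verify the names against the current FSx documentation.

```python
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch")
FS_ID = "fs-0123456789abcdef0"  # placeholder file system ID

def ssd_average(metric: str) -> float:
    """Fetch the latest 5-minute average for an SSD-tier FSx metric."""
    now = datetime.now(timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/FSx",
        MetricName=metric,  # assumed: StorageUsed / StorageCapacity
        Dimensions=[
            {"Name": "FileSystemId", "Value": FS_ID},
            {"Name": "StorageTier", "Value": "SSD"},
            {"Name": "DataType", "Value": "All"},
        ],
        StartTime=now - timedelta(minutes=10),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    return points[-1]["Average"] if points else 0.0

used = ssd_average("StorageUsed")
capacity = ssd_average("StorageCapacity")
if capacity and used / capacity > 0.98:
    print("SSD tier above 98%: tiering has stopped; pause the copy or add SSD")
```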
FSx for ONTAP offers four tiering policies:
Snapshot-only (default): Moves cold snapshots to the capacity pool storage tier with a minimum cooling period of two days, adjustable up to 183 days.
Auto: Transfers cold user data and snapshots to the capacity pool based on access patterns, with a default cooling period of 31 days.
All (required for Cloud Write): Moves all user data and snapshots to the capacity pool as soon as the tiering scan runs (see the boto3 sketch after this list).
None: Keeps all data in the SSD storage tier without moving it to the capacity pool.
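Tiering policy is set per volume. As a minimal sketch, the boto3 call below switches an existing volume to the All policy, the Cloud Write prerequisite; the volume ID is a placeholder.

```python
import boto3

fsx = boto3.client("fsx")

# Placeholder volume ID; substitute your own.
fsx.update_volume(
    VolumeId="fsvol-0123456789abcdef0",
    OntapConfiguration={
        # ALL is required before enabling Cloud Write on the volume.
        "TieringPolicy": {"Name": "ALL"}
    },
)
```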
Challenges of Migrating Large Datasets into FSx for ONTAP
Large data migrations are challenging due to the complexities of moving massive volumes while maintaining data integrity and minimizing downtime. Migrating data quickly is crucial for operational efficiency, but by default, data is initially written to the SSD storage tier before being tiered to the capacity pool. This can cause the SSD tier to fill up, halting the migration process.
Key Benefits of Cloud Write
Cloud Write offers several advantages:
It allows data to be written directly to the capacity pool tier, preventing migration halts due to filled SSD storage.
It helps optimize costs by eliminating the need to expand SSD storage during migrations.
Cloud Write can be disabled post-migration, with active data automatically tiered back to the SSD storage tier as needed; a sketch of toggling the setting follows this list.
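Cloud Write is toggled at the volume level from the ONTAP CLI rather than the AWS console. As a rough sketch under that assumption, the script below sends the relevant commands over SSH to the file system’s management endpoint. The endpoint, credentials, SVM, and volume names are placeholders, and you should confirm the -is-cloud-write-enabled flag against the current FSx documentation.

```python
# Rough sketch: toggling Cloud Write over SSH (pip install paramiko).
import time
import paramiko

# Placeholder management endpoint and credentials; substitute your own.
HOST = "management.fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com"
USER, PASSWORD = "fsxadmin", "<password>"

COMMANDS = [
    "set -privilege advanced",  # Cloud Write is an advanced-privilege setting
    "y",                        # confirm the advanced-mode warning prompt
    # Enable for the migration; flip true -> false after cutover so new
    # writes land on the SSD tier again:
    "volume modify -vserver svm1 -volume migvol -is-cloud-write-enabled true",
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)
shell = client.invoke_shell()  # ONTAP's CLI is interactive, so use one shell
for cmd in COMMANDS:
    shell.send((cmd + "\n").encode())
    time.sleep(2)  # crude pacing for a sketch; real code should read prompts
print(shell.recv(65535).decode(errors="ignore"))
client.close()
```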