Network latency is proving to be a bottleneck for AI-heavy workloads, with a third or more of elapsed time spent waiting for the network, implying that the network is the computer for AI workloads. It does not matter how well AI is deployed; it will only ever be as good as the network that supports it. Ivo Ivanov at DE-CIX explains more.
According to a 2024 MIT Technology Review report on AI, a whopping 95% of companies are already using AI in some form, and half expect to deploy AI across all business areas within two years. So, where are the cracks?
While the AI boom continues and businesses are going all-in, important questions are being raised about the AI-readiness of businesses, particularly regarding connectivity and the infrastructure supporting it.
The same MIT report, while optimistic on the whole, reveals that of those businesses claiming to have deployed AI, three-quarters have not yet gone beyond pilot projects, and many report challenges relating to data quality (49%), data infrastructure and pipelines (44%), and integration tools (40%).
Network latency is also proving to be something of a bottleneck, particularly when it comes to AI-heavy workloads. According to Meta, more than 33% of elapsed time in AI/ML workloads is spent waiting for the network, leading the Dell’Oro Group to conclude that “the network is the computer” for AI workloads.
In other words, it does not matter how well AI is deployed or how ready a business thinks it is for AI implementation; it will always be limited by its network setup.
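A back-of-the-envelope calculation makes the point. The sketch below is illustrative only (the fractions and speedup figures are assumptions, not taken from the Meta or Dell’Oro data): it applies an Amdahl’s-law-style bound showing that if roughly a third of a job’s elapsed time is spent waiting on the network, faster accelerators alone cannot deliver more than about a 3x end-to-end speedup.

```python
# Illustrative sketch: how the network-wait share caps overall speedup.
# Figures are assumptions for demonstration, not data from the article.

def max_speedup(network_wait_fraction: float, compute_speedup: float) -> float:
    """Overall job speedup when only the compute portion gets faster."""
    compute_fraction = 1.0 - network_wait_fraction
    return 1.0 / (network_wait_fraction + compute_fraction / compute_speedup)

if __name__ == "__main__":
    wait = 0.33  # share of elapsed time spent waiting for the network
    for accel in (2, 4, 8, float("inf")):
        print(f"{accel}x faster compute -> {max_speedup(wait, accel):.2f}x overall")
    # With a 33% network share, overall speedup never exceeds roughly 3x,
    # no matter how much faster the accelerators become.
```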
One path businesses are taking is to ride the tide of two complementary megatrends: cloud computing and AI. Cloud plays an important role in AI implementation, not least because migrating data lakes and warehouses to the cloud helps enterprises keep up with the accelerating growth of data. As many in the partner ecosystem have observed, AI success goes hand in hand with cloud migration and expansion.
According to Deloitte, four in five companies that have experienced the most success with AI also expanded their cloud investments as part of their AI strategy. Market developments from the ecosystem’s largest players also point to this deepening intersection between cloud and AI services, such as AWS delivering generative AI capabilities through its platform or Microsoft integrating OpenAI’s models into Azure.
Cloud-based data lakes are appealing because they enable businesses to store far more data than they ever could on-premises. But those data lakes must also integrate seamlessly with AI models, which are themselves often hosted in the cloud, and that is where issues surrounding connectivity, bandwidth, and latency can arise.
It is an unfortunate fact that many companies continue to rely on the public Internet or third-party IP transit to connect their data and AI systems, with few or no security or performance guarantees. As a result, critical company data may be compromised, and the speed and bandwidth of the connection are unreliable.
For AI to work optimally, CIOs need to ensure dedicated, secure, and direct connectivity to their on-premises hardware, data warehouses, AI clouds, and AI-as-a-service solutions.
Moreover, the network needs to be able to handle both the heavy bandwidth requirements typical of building and training AI models and the low-latency needs of real-time AI applications such as automated driving, financial trading, cyber security, or predictive analytics.
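To illustrate how different these two demands are, the rough sketch below uses hypothetical figures (the dataset size, link speeds, and latency budgets are assumptions, not from the article) to show that bulk training traffic is bandwidth-bound while real-time inference is latency-bound.

```python
# Illustrative sketch: bandwidth-bound bulk transfers vs latency-bound
# real-time calls. All figures are assumptions for demonstration.

def transfer_time_hours(dataset_tb: float, link_gbps: float) -> float:
    """Hours to move `dataset_tb` terabytes over a link of `link_gbps` Gbit/s."""
    bits = dataset_tb * 8e12
    return bits / (link_gbps * 1e9) / 3600

def round_trips_in_budget(budget_ms: float, rtt_ms: float) -> int:
    """How many network round trips fit inside a real-time response budget."""
    return int(budget_ms // rtt_ms)

if __name__ == "__main__":
    # Bulk training data: bandwidth dominates.
    print(f"100 TB over 10 Gbit/s : ~{transfer_time_hours(100, 10):.0f} h")
    print(f"100 TB over 100 Gbit/s: ~{transfer_time_hours(100, 100):.0f} h")
    # Real-time inference: round-trip latency dominates.
    print(f"100 ms budget, 5 ms RTT : {round_trips_in_budget(100, 5)} round trips")
    print(f"100 ms budget, 40 ms RTT: {round_trips_in_budget(100, 40)} round trips")
```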
Businesses that neglect to address the twin demands of low latency and high bandwidth will significantly limit their ability to gain real-time AI-derived insights.
The public Internet and third-party IP transit offer enterprises little control over how their data is routed, over network performance, or over security.
Offering network interconnection services to businesses provides them with secure, dedicated connections between their on-premises systems and their AI cloud services, over which they can set performance levels and manage latency. An Internet Exchange can bring together on one platform all relevant networks and players, from the company’s own network to IoT networks, customer networks, and partner networks all along the AI value chain.
Cloud Exchanges with cloud routing capabilities are crucial for building responsive multi-cloud environments, and AI exchanges enable efficient data paths to AI as a Service providers, allowing for the outsourcing of AI development while maintaining high performance. Interconnection platforms with integrated Cloud and AI Exchange capabilities offer enormous potential here.
Ecosystem partners that can offer direct, high-performance links between corporate networks and cloud platforms can help client organisations create responsive, interoperable hybrid cloud or multi-cloud environments that ensure reliable AI performance.
Increasingly, CIOs are working in close collaboration with systems integrators who can deliver these high-performance interconnection solutions, not only to solve their current connectivity challenges, but to future-proof their networks as they prepare for a more data-intensive, AI-driven future.
In closing, as businesses embrace AI to achieve productivity gains, automate workflows, and cut costs, the channel needs to help them think beyond the tools themselves and ensure their infrastructure is primed for the demands of AI.
Strategic partnerships with Cloud Exchange, Internet Exchange, and AI Exchange operators will be pivotal for businesses seeking to unlock AI’s full potential.