
Article originally published on Forbes.com.
The AI race is in full swing. Digital companies are pushing quickly into new frontiers to gain competitive advantages.
Between 2023 and 2033, the global AI market is projected to jump from $189 billion to $4.8 trillion. AI is becoming an integral and valuable part of day-to-day life. Autonomous vehicles are now providing 150,000 rides in U.S. cities every week. The FDA approved 223 AI-enabled medical devices in 2023, a jump from six devices in 2015. Last year, 78% of organizations reported using AI, and U.S. private investment in AI reached $109.1 billion.
AI is helping organizations solve real-world problems, but financial, geographic and technical limitations are creating roadblocks. Business leaders need to understand and prepare for these challenges. The AI race won’t be won with a single advantage: LLM size, amount of source data, VC investment or access to GPUs. I believe that true success will depend on your ability to build systems that can operate across borders and respond nimbly—under pressure and across many different constraints.
Deploying AI infrastructure at scale is a monumental undertaking. I see so many deployments fail because of a lack of human knowledge around how to construct and operate these complex systems and processes. I estimate that fewer than 10,000 people in the world truly understand AI infrastructure at this level. If you don’t have one of them on your team, it’s critical that you educate yourself on common challenges and find an infrastructure partner to protect your business interests.
There’s a popular myth that on-demand GPUs exist: that for a modest hourly fee, you can get elastic, cloud-style access to the latest and greatest GPUs.
That’s not how it works. I've found that most on-demand GPU offers are smoke and mirrors. I recently looked into a provider that promised low hourly pricing on its website, but there was no way to sign up. I had to schedule a meeting with a sales rep to discover that, to get on-demand GPU access, I would have to order a minimum of 200 GPUs and commit to a 36-month contract at around $100,000 a month.
Be skeptical of on-demand GPU offers that seem too good to be true. They probably are. Read the fine print, and make sure you understand the contract terms and the real costs. Don't overcommit to a long, expensive contract without having a clear return on investment. I recommend developing a hybrid strategy: owning a core set of GPUs and working with a boutique provider or aggregator that aligns with your business.
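To see why the fine print matters, run the numbers. The sketch below uses the contract terms quoted to me above and assumes, for simplicity, 30-day months; the utilization scenarios at the end are hypothetical, and the point is simply that an "hourly" rate means little once it's attached to a multi-year commitment.

```python
# Back-of-the-envelope sketch, not a pricing model. The contract figures are
# the ones quoted above; everything else is a simplifying assumption.

CONTRACT_GPUS = 200
CONTRACT_MONTHS = 36
CONTRACT_MONTHLY_COST = 100_000  # roughly $100k/month from the sales quote

total_contract_cost = CONTRACT_MONTHS * CONTRACT_MONTHLY_COST      # $3.6M committed
hours = CONTRACT_MONTHS * 30 * 24                                  # ~30-day months
effective_rate = total_contract_cost / (CONTRACT_GPUS * hours)     # $/GPU-hour if fully used

print(f"Total commitment: ${total_contract_cost:,}")
print(f"Effective rate at full utilization: ${effective_rate:.2f}/GPU-hour")

# The catch: the effective rate doubles at 50% utilization and quadruples at 25%.
for utilization in (1.0, 0.5, 0.25):
    print(f"  at {utilization:.0%} utilization: ${effective_rate / utilization:.2f}/GPU-hour")
```

If the effective rate at your realistic utilization is higher than what a smaller owned cluster plus burst capacity would cost, the "on-demand" deal isn't one.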
AI is geopolitically constrained. Your ability to scale depends on where your data is allowed to live. Data sovereignty is not a compliance box to check. It’s a system design challenge. AI models are being regulated at the border. Governments are enforcing where training and inference take place, and laws and regulations are becoming more stringent.
Take a strategic, proactive approach to prepare for data sovereignty barriers. Build compliant zones into your architecture intentionally, not as an afterthought. Your technical executives—CTO, chief data officer, chief AI officer, etc.—need to consider factors such as federated learning, inference localization and metadata fencing as you develop your tech stack.
Avoid a "lift-and-shift" strategy. Build local, within borders, first; scale second. A good rule of thumb is: If your AI system doesn't know which country it's in, then it's already out of compliance.
Integration complexity is killing velocity and creativity, and AI teams today are stuck. Too often, they’re piecing together compute, storage, networking, orchestration and compliance across vendors. In my experience, AI stacks today are 60% to 70% glue code. Projects take months to stand up, run up six- to seven-figure bills and sometimes still lack the right software stack to manage them.
I’ve talked to leaders in large-scale organizations or government agencies whose teams are manually writing scripts and running software after investing $100 million in an AI infrastructure deployment. And the consulting companies they're partnering with require a new statement of work (SOW) every few months to take advantage of the latest technologies. When tech is changing almost day by day, this is an expensive and inefficient model.
Before you work with a consultant, ask: How does this plan serve our organization? How does it serve the consultant? What’s the ROI? If you get stuck working with a partner more interested in their billable hours than your business goals, you're more likely to get left behind by your competitors. Companies that have faster, leaner approaches will end up the winners.
AI training and inference cycles are bottlenecked by limited bandwidth and inefficient network routing. An edge-first network is crucial in sectors where even milliseconds matter.
In healthcare, for example, AI is being used to improve medical imaging, disease detection and diagnosis, administrative tasks and decision-making. A five-second delay in information going back and forth can have serious, even life-or-death, consequences.
Rural hospitals across the U.S., from Washington and Michigan to Missouri and Florida, are expanding their use of AI to provide faster interventions, greater healthcare access and better patient outcomes. With one person in five living in a rural area, and rural residents at a higher risk of death than those in urban areas, this technology is saving lives.
Prioritize edge-first networking to combat latency and bandwidth issues. Edge-first networking isn’t just a CDN for AI; inference fails without proximity, adequate bandwidth and intelligent routing. Use smart routing technologies, such as anycast, to make your AI models as fast as possible for end users.
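As an illustration, here's a minimal client-side sketch of latency-aware endpoint selection. The regional endpoints are hypothetical placeholders; with true anycast, a single address routes to the nearest point of presence automatically, so treat this as a crude stand-in for what the network layer should be doing for you.

```python
# Client-side sketch: measure connect latency to candidate regions and route
# inference to the fastest. Endpoints are hypothetical placeholders.
import socket
import time

ENDPOINTS = {
    "us-east": "inference-us-east.example.com",
    "eu-west": "inference-eu-west.example.com",
    "ap-south": "inference-ap-south.example.com",
}

def measure_latency(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the TCP connect time to a host in milliseconds (inf on failure)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return float("inf")

def pick_nearest() -> str:
    """Route inference traffic to whichever region answers fastest."""
    latencies = {region: measure_latency(host) for region, host in ENDPOINTS.items()}
    print({region: f"{ms:.0f} ms" for region, ms in latencies.items()})
    return min(latencies, key=latencies.get)

if __name__ == "__main__":
    print("Routing inference to:", pick_nearest())
```

Even a crude check like this makes the cost of distance visible; the real fix is putting inference capacity close to users in the first place.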
The future of AI infrastructure is modular, distributed and sovereign by design. To start preparing for it, ask this question: If you had to relocate your AI infrastructure tomorrow to a new country, region or provider, could you do it?
Reach out to learn how our global platform can power your next deployment. Fast, secure, and built for scale.