A CTO’s Playbook for the AI Era: Insights from Hudson IX’s Arthur Valhuerdi

AI has rewritten the rules of infrastructure almost overnight. Power scarcity, gigawatt campuses, liquid cooling, and the resurgence of on‑site generation are no longer niche—they’re becoming standard requirements.  

We sat down with Arthur Valhuerdi, CTO of Hudson Interxchange, whose extensive experience building and operating data centers has put him at the center of today’s most pressing challenges. 

In this interview, Arthur breaks down the industry shifts that matter most, blind spots CTOs should avoid, and the operational disciplines that define industry leaders. 

Question: What are the biggest shifts you're seeing in the industry right now?

There are numerous trends shaping the industry: AI, power demand, and cooling innovation. 

AI and high-density workloads have created a new category of data center: the AI factory, with per-rack densities and power and cooling requirements far greater than anything the industry has seen.

Those densities, along with the short distances required between AI nodes, are driving huge power demands and spawning gigawatt campuses.

This surge in power use on the U.S.'s aging electrical infrastructure has created a challenge in providing and distributing enough power. Because of that, locations that would not have been considered before this AI boom are now being selected if they are close to a power plant with spare generation and, importantly, transmission capacity.

You need power lines with spare capacity to carry power to where it needs to be delivered. Indeed, some sites and operators are working behind the meter, installing their own power generation equipment. If they are near high-pressure natural gas lines, they can deploy gas-fired turbines or reciprocating engines. These can be purchased or leased, and they can be permanent or a stopgap until the utility can provide power to a site. We even have some nuclear projects starting. While natural gas is a relatively clean fuel, nuclear plants do not pollute, apart from the spent nuclear fuel, which remains radioactive for 10,000 years.

As for cooling, it has become even more critical. In a traditional data center, the cool air in the room acts as a thermal reservoir: if you lose cooling, the air in the room heats up slowly, and that gives you critical minutes to correct the issue. If you have 300 kW in a cabinet and it loses cooling, even if the servers shut down, the heat can build enough to damage the servers and chips. Building an equivalent liquid thermal reservoir carries huge costs, from the volume of liquid to insulation to backup pumping, and so on.
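The ride-through point above can be made concrete with a back-of-envelope calculation. The sketch below uses purely illustrative numbers (room volume, allowable temperature rise) rather than data from any real facility, and deliberately ignores the thermal mass of walls and equipment, so it is a rough worst-case estimate, not an engineering model.

```python
# Back-of-envelope thermal ride-through: how long can room air absorb
# an IT load before temperature rises past an allowable margin?
# All inputs are illustrative assumptions, not site data.

AIR_DENSITY = 1.2          # kg/m^3, roomy air at ~20 C
AIR_SPECIFIC_HEAT = 1005.0 # J/(kg*K)

def ride_through_seconds(room_volume_m3: float,
                         it_load_kw: float,
                         allowed_rise_k: float) -> float:
    """Seconds until the room air warms by allowed_rise_k, ignoring
    walls, equipment mass, and airflow (worst-case estimate)."""
    air_mass_kg = room_volume_m3 * AIR_DENSITY
    heat_capacity_j_per_k = air_mass_kg * AIR_SPECIFIC_HEAT
    return heat_capacity_j_per_k * allowed_rise_k / (it_load_kw * 1000.0)

# Same hypothetical 500 m^3 room, 10 K allowable rise:
legacy = ride_through_seconds(500, 10, 10)   # ~10 kW legacy row
ai_row = ride_through_seconds(500, 300, 10)  # one 300 kW AI cabinet
print(f"10 kW load:  {legacy / 60:.1f} minutes of buffer")
print(f"300 kW load: {ai_row:.0f} seconds of buffer")
```

With these assumed numbers, a legacy 10 kW load gets roughly ten minutes of air buffer, while a single 300 kW cabinet gets on the order of twenty seconds, which is why air alone stops being a useful reservoir at AI densities.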

Again, this is a huge shift for the industry. When computing started, we had vacuum tubes being cooled by water. Now, liquid cooling is back, but at the chip level. The densities are driving innovations in cooling:

  • We have single cabinets that have become tiny data centers unto themselves, with cooling coils and high-speed fans enabling 300 kW/rack 

  • Cooling to the chip, where the cooling is running to a cold plate on the GPU and removing the heat directly and efficiently 

  • And we’re immersing equipment in dielectric oil, allowing even higher densities, with the immersion fluid providing a very quick and efficient medium for heat rejection

Question: What are operators who are executing well doing differently than operators who are constantly playing catch‑up?

Operators who are consistently ahead tend to be disciplined and highly data-driven. They treat infrastructure as a process, not a project.

Standardize: repeatable designs, modular builds, automated systems, and monitoring. The DCIM is your eyes and ears, but that data then needs to be analyzed and acted on. This is what allows you to scale.

They also need clear Service Level Agreements, and they need to meet or exceed them. Maintenance has to be performed, but your customers need advance notice, incidents need to be minimized, and service availability needs to be maintained.

And after developing relationships with the utility companies and your core vendors, co-design the facility with them. Long lead times are a fact, but they can be minimized through partnership and planning, and minimizing them is key to ensuring cash flow and customer satisfaction.

Question: What would you advise a CTO entering the industry to prioritize early that most people overlook?

I would tell a new CTO that their first priorities should be power, standardization, and data. Do those three things first and then you can see which technologies and vendors will help accomplish your plan. 

To do that, establish relationships with utility companies and secure power. Without secured power, you will not have a sellable story.

Next, standardize your architecture by creating repeatable modules, standard designs, and preapproved specifications. If your business is building data centers, you want building them to be a process rather than a unique project each time. The base building may be unique, but if your pods are similar, you can size your workforce in a standard way, your salespeople will understand how to sell a pod or module because it is similar across the company, and your customers will feel more comfortable.

And last, data is key. Leverage your BMS/DCIM and a data intelligence platform that pulls in your OSS, BSS, CRM, fiber and equipment inventory, sales information, and so on.

There is a reason everyone is AI crazy. AI is not just something data center companies provide; they also consume it. With information from the DCIM, you can optimize cooling and power consumption, saving electricity, which is the number one cost of a data center. Using your data intelligence platform, you not only see what your customers need, you can start predicting those needs, using the platform to plan expansions and fiber augments based on customer trends. And you can use your DCIM to anticipate repair needs for critical equipment before it fails.
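The predictive-maintenance idea can be sketched in a few lines. The example below is a minimal illustration, not any real DCIM API: it assumes a hypothetical telemetry feed of hourly temperature samples for a cooling-unit fan bearing, fits a simple least-squares trend, and projects when the reading would cross an alarm threshold so maintenance can be scheduled before failure.

```python
# Minimal sketch of DCIM-driven predictive maintenance. The telemetry
# feed, sensor name, and thresholds are hypothetical assumptions.

def linear_trend(samples):
    """Least-squares slope and intercept over (x, y) samples."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def hours_until_alarm(samples, alarm_c):
    """Projected hours until the trend crosses alarm_c, or None when
    the reading is stable or falling (no action needed)."""
    slope, intercept = linear_trend(samples)
    if slope <= 0:
        return None
    last_hour = samples[-1][0]
    current_estimate = intercept + slope * last_hour
    return (alarm_c - current_estimate) / slope

# Hypothetical bearing temperatures drifting up ~0.5 C per hour:
readings = [(h, 60 + 0.5 * h) for h in range(24)]
eta = hours_until_alarm(readings, alarm_c=85)
print(f"Projected alarm in ~{eta:.0f} hours; schedule maintenance now")
```

A production system would of course use richer signals (vibration, current draw, delta-T) and proper anomaly detection, but the principle is the same: act on the trend, not the failure.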

Question: What will define success for infrastructure‑focused organizations going forward—and what should CTOs be doing today to prepare?

Success will belong to infrastructure organizations that can deliver high-density, AI-ready capacity quickly, reliably, and sustainably, while staying flexible as technology shifts. 

It means turning capacity into revenue faster, with shorter build cycles, higher utilization, and the ability to phase the work without overcommitting capital. Anyone can do it with enough money.

The secret is having clean data that unlocks capacity forecasting, power and cooling efficiency, and faster time to repair, which prevents service outages.
