Generative AI is a technology that is a significant tailwind for most sectors, especially data centres. One question we often get asked by our clients is: how do we think about the demand generated by generative AI across APAC? I'm going to cover this in a three-part video series. In this first part, I'll walk you through a framework for how to think about generative AI demand.
In the second part, I'm going to talk about how we've thought about that demand in Australia and New Zealand, and in the third part, I'll talk about how it's likely to affect Southeast Asia. Generative AI demand can broadly be broken into two categories: the first is training and the second is inferencing. Training large language models involves using billions of parameters.
Examples include GPT-4, Claude, and Gemini. Inferencing is all about generating responses based on patterns and information learned during the training of these large language models. For training large language models, latency sensitivity is low, but for inferencing that same sensitivity is very high. The technology requirements for training large language models and for inferencing vary dramatically.
There are four geographical characteristics that will determine which demand profile is applicable to which geography. The first is land availability, the second is power availability and affordability, the third is local DC operator capability, and the fourth is geopolitical security. As an investor, the important thing to understand is that training large language model workloads is not something all geographies are going to have.
This is a very specific workload that only certain geographies are going to benefit from. Inferencing, on the other hand, is something that every geography with an existing cloud footprint is going to benefit from.