One of the issues with artificial intelligence (AI) that often goes overlooked is just how expensive it is to build and train a large language model (LLM). With rumors earlier this year that OpenAI was burning through $700,000 a day to run ChatGPT, few companies are equipped to create their own models from the ground up.
That’s why one of the most influential players in the AI space, on both the hardware and software fronts, has come up with a different approach. This week at Microsoft Ignite, NVIDIA announced its new AI foundry service, combining its own foundation models, the NeMo framework and tools, and the company’s DGX Cloud AI supercomputing services into one package. Together, these let users build their own custom generative AI models for enterprise applications via Microsoft’s Azure platform.
Read more here.