Partner Content by IBM
This content was paid for by IBM and produced in partnership with the Financial Times Commercial department.

How smaller, industry-tailored AI models can offer greater benefits

Not all artificial intelligence tools are created equal, and one size cannot fit all. Businesses navigating sector-specific concerns, compliance and security are turning to more focused models

Supply chain volatility – and its impact on businesses and consumers alike – is nothing new. What has changed is that companies can now access new tools to help them cope, thanks to advances in artificial intelligence (AI).

“In 2024, Samsung SDS was dealing with the normalisation of high freight rates and decreases in cargo volume,” says HaeGoo Song, Executive Vice President at Samsung SDS, an information technology services company that helps others solve logistics challenges. “To address these circumstances, we began investing more in cloud and generative AI and promoting hyper-automation in business practices.”

The AI-driven technologies that companies such as Samsung SDS are using have applications far beyond supply chains. However, for many organisations, the path to adoption can be complicated by privacy concerns, regulatory compliance requirements and the high costs associated with running large AI models. As businesses grapple with these challenges, many are discovering that one-size-fits-all solutions can fall short.

“CFOs are realising how expensive this can be,” says Manish Goyal, Vice President and Senior Partner of AI and Analytics at IBM Consulting. “Over the past 18 months, we’ve seen a lot of experimentation. Experimentation is good but, when it’s not directed experimentation, teams can end up doing some of the same stuff, which can be wasteful.”

The challenge of enterprise-ready AI

Challenges can become acute when organisations need to process sensitive data or require industry-specific applications. Large, general-purpose AI models – often called “frontier” models – are trained on massive datasets that can help companies identify insights in their data, generate content and more. But such models also require substantial computing resources, making it a challenge to scale their use affordably. Those using them may end up with solutions that turn out to be less than ideal for their specific needs.


Part of the answer, according to Song and Goyal, lies in a new class of open-source, purpose-built AI models that can run lean, with fewer parameters than large frontier models. These open-source models are not only less costly to run, but can also be designed to operate on just about any infrastructure, including on-premises servers and private clouds. That enables businesses to use them in more scenarios. Such models also lend themselves well to fine-tuning, helping organisations get even more accurate results.
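To make that concrete, the snippet below is a minimal sketch, not an IBM reference implementation, of how a compact open-source model can be downloaded once and then run entirely on hardware a business controls, using the widely adopted Hugging Face Transformers library in Python. The model identifier and the prompt are illustrative assumptions rather than recommendations.

```python
# Minimal sketch (not an IBM reference implementation): download a compact
# open-source model once, then run it entirely on local, on-premises hardware.
# The model identifier below is an assumed example; swap in whichever small
# model your licence and infrastructure allow.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-2b-instruct"  # assumed example ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt for a logistics-style task
prompt = "Summarise the main delivery risks described in the shipment notes below:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because both the model weights and the prompts stay on infrastructure the organisation controls, the same pattern extends naturally to the fine-tuning and private-cloud deployments described above.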

As another bonus, these smaller, task-specific models are proving particularly valuable for businesses dealing with time-sensitive data. Compact time series AI models, for example, have the potential to extract patterns over longer periods and across more variables than traditional statistical forecasting methods. AI models for time series forecasting can be a powerful tool across a variety of industries, such as logistics, manufacturing and financial services, to help manage uncertainty. They allow companies to predict a range of scenarios, from sales and demand to revenue and capacity requirements, with potentially significant operational savings for large enterprises that forecast accurately.
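As a rough illustration of what “compact” means in this context, and not a description of any IBM time series model, the toy PyTorch sketch below maps a window of past observations across several related variables to a multi-step forecast. The window length, variable count and layer sizes are arbitrary assumptions.

```python
# Toy sketch of a compact multivariate forecaster (illustrative only).
# It maps a history window of several related variables - demand, price,
# lead time and so on - to a multi-step forecast with a small network.
import torch
import torch.nn as nn

CONTEXT, HORIZON, N_VARS = 96, 24, 8  # hypothetical window sizes

class CompactForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                          # (batch, CONTEXT * N_VARS)
            nn.Linear(CONTEXT * N_VARS, 256),
            nn.ReLU(),
            nn.Linear(256, HORIZON * N_VARS),
        )

    def forward(self, history):                   # history: (batch, CONTEXT, N_VARS)
        return self.net(history).view(-1, HORIZON, N_VARS)

model = CompactForecaster()
history = torch.randn(4, CONTEXT, N_VARS)          # a batch of past windows
forecast = model(history)                          # shape: (4, HORIZON, N_VARS)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```

The point of the sketch is scale: the network above has roughly 250,000 parameters, compared with the hundreds of billions found in frontier models.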

Data security, too, can get a boost with smaller, more focused models. Because they run effectively on private infrastructure, these models can be fine-tuned with proprietary data without weakening security or privacy, according to Goyal. This approach allows businesses to leverage their unique data assets while mitigating the risks associated with sharing sensitive information with larger, cloud-based models.

Case in point: when a major telecommunications company needed to analyse hundreds of thousands of daily call transcripts, managers initially used a frontier model with hundreds of billions of parameters. Switching to a smaller, 7bn-parameter IBM Granite model tuned with InstructLab reduced costs by more than 90 per cent while maintaining performance levels, according to information provided by IBM.

Building trust through governance

Getting AI right requires more than choosing the right-size models, according to Song. It’s also about effectively managing the models and the data they use and produce. “Having a good AI governance structure in place is critical,” Song says. “Governance is more than just part of the AI stack – it’s the foundation of trust.”

According to Goyal, an effective governance framework should encompass both organisational and technical aspects. He believes it starts with declaring clear principles for AI usage and then implementing them through well-defined governance policies and processes. The framework should also determine who is ultimately accountable for governing AI applications, and give them the funding and mandate to back it up.


As global businesses navigate increasingly complex operational environments, the emergence of smaller, industry-tailored AI models is transforming how they approach artificial intelligence. These smaller models have shown they can match or even exceed the performance of their larger cousins. And they may do so while reducing costs, strengthening data security and simplifying governance – giving organisations a sustainable path to innovation at scale. “As the ROI becomes clearer,” says Goyal, “the scale and value will come from being able to apply AI more broadly.”

To learn more about selecting the right AI models, visit
