But most organizations don’t have the financial and human resources to build and train their own domain-specific models from the ground up. Fine-tuning existing LLMs needs less compute power and data than building from scratch, but it still requires considerable time and skills beyond the capabilities of most mid-size enterprises. Prompt tuning and prompt engineering are the most common and straightforward approaches. Rather than modifying model parameters, these techniques consume far fewer resources and, although some specialist skills are required, can be adopted relatively easily.
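In practice, the simplest form of prompt engineering is grounding: internal documents are placed directly into the prompt so the model answers from the company’s own material rather than its general training data, with no change to the model itself. The sketch below illustrates the idea in Python; the function name, instructions, and sample snippets are illustrative assumptions, not any vendor’s API.

```python
def build_grounded_prompt(question: str, snippets: list[str]) -> str:
    """Assemble a prompt that grounds the model in internal documents.

    No model parameters are touched: the customization lives entirely
    in the prompt text, which is what makes this approach cheap to adopt.
    """
    # Number each snippet so the model (and reviewers) can trace answers
    # back to a specific internal source.
    context = "\n\n".join(
        f"[Doc {i}] {text}" for i, text in enumerate(snippets, start=1)
    )
    return (
        "You are an assistant for our company's advisers. Answer using "
        "ONLY the context below. If the context does not contain the "
        "answer, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


# Hypothetical internal snippets, e.g. retrieved from a knowledge base.
snippets = [
    "Premium lawn feed is applied in early spring and again in autumn.",
    "Field associates can check regional stock levels in the sales app.",
]
prompt = build_grounded_prompt("When should lawn feed be applied?", snippets)
print(prompt)
```

The resulting string would be sent as the input to whichever hosted LLM the firm uses; swapping in better retrieval or tighter instructions changes behavior without retraining anything.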
In the real world
Some early deployments of LLMs customized with internal data have come from the larger banks and consulting firms. Morgan Stanley, for instance, used prompt tuning to train GPT-4 on a set of 100,000 documents relating to its investment banking workflows, with the objective of helping its financial advisers provide more accurate and timely advice to clients. BCG has adopted a similar approach to help its consultants generate insights and client advice, alongside an iterative process that fine-tunes its models based on user feedback. This has improved outputs and reduced the chance of the hallucinations that are more common in consumer-facing GPTs.
We’re now starting to see less technology-intensive, service-oriented firms customizing LLMs with internal data. Garden-care company ScottsMiracle-Gro has collaborated with Google Cloud to create an AI-powered “gardening sommelier” that provides customers with gardening advice and product recommendations. It has been trained on the firm’s product catalogues and internal knowledge base, and will soon be rolled out to its 1,000 field sales associates to help them advise retail and market garden clients on prices and availability. It’s anticipated that, depending on results, it’ll then be made available to consumers, with the aim of driving sales and customer satisfaction.