The evolution of AI depends on the synergy of LLMs and SLMs

January 25, 2024

RavenPack's Chief Data Scientist discusses a shift to a hybrid AI approach, combining LLMs and SLMs, to enhance trust and transparency.

Peter Hafez

Chief Data Scientist

RavenPack

The rapid ascent of large language models (LLMs) in 2023 sparked a wave of innovation and intense competition among tech giants. This Eureka moment was the culmination of several years of development: BERT's launch in 2018 paved the way, and ChatGPT, powered by GPT-3.5, continued the trajectory. The shift is palpable: conversations have moved from kitchen tables to boardrooms, and FOMO is propelling companies to invest substantially in new technologies and internal processes for rapid AI adoption. The question is: how can we ensure the continued evolution of AI without giving in to the pitfalls of hype and disillusionment?


ChatGPT's remarkable capabilities in specific domains can lead to overconfidence in its overall abilities. On occasion, it can even exhibit "hallucinations" — a phenomenon where the language model generates information that lacks any factual basis. This can blur the lines between what AI can and cannot achieve, set unrealistic expectations, and ultimately hinder true progress.

To avoid this, companies must maintain a spirit of experimentation, embracing setbacks as opportunities to learn and improve.

Seeking feedback from innovators and early adopters before a mass-market rollout is often a safer approach. The key is finding the right balance between experimentation and widespread implementation for a successful business venture.

Training massive LLMs also incurs significant costs, making them accessible primarily to major players. Deploying these models on one's own content, especially in high-volume textual content scenarios like financial investing, proves expensive. This presents an opportunity for industry leaders to provide LLMs as a service, with the potential for cost reductions as the industry continues to mature.

LLMs and SLMs: Balancing Generality with Specificity

  • While LLMs excel in general-purpose use cases, they are not always the ideal choice for specific scenarios, such as those requiring domain-specific knowledge, handling sensitive or private data, or real-time processing.
  • Smaller language models (SLMs), designed for particular tasks, often yield superior results.
  • However, employing a hybrid approach — combining SLMs and LLMs — often makes better business and engineering sense.
  • It provides greater control, ensures robustness in solutions, makes for easier maintenance, and potentially reduces the content fed into larger models if needed.

In finance and beyond, future intellectual property is developed through experimentation in process development. Knowing when to apply a smaller, fine-tuned model versus opting for a larger one becomes crucial. Breaking processes into smaller model containers allows easy optimization or replacement when better models emerge, ensuring transparency at each step instead of relying on a black-box system.
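The container idea above can be sketched as a simple router: a cheap, fine-tuned SLM handles routine inputs, and only open-ended ones escalate to an LLM. Everything below (model names, the escalation heuristic, the toy model logic) is a hypothetical stand-in for illustration, not a description of any production system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelStage:
    """One container in the pipeline: a named model behind a uniform interface."""
    name: str
    run: Callable[[str], str]

def route(text: str, slm: ModelStage, llm: ModelStage,
          needs_llm: Callable[[str], bool]) -> str:
    """Send routine, domain-specific text to the SLM; escalate the rest to the LLM."""
    stage = llm if needs_llm(text) else slm
    return stage.run(text)

# Hypothetical stand-ins: a fine-tuned sentiment SLM and a general-purpose LLM.
slm = ModelStage("finance-sentiment-slm",
                 lambda t: "positive" if "beat" in t else "negative")
llm = ModelStage("general-llm", lambda t: f"[LLM analysis of: {t}]")

# Escalate only long, open-ended inputs; short headlines stay with the cheap SLM.
print(route("Acme beat earnings estimates", slm, llm,
            needs_llm=lambda t: len(t.split()) > 12))  # → positive
```

Because each stage sits behind the same interface, a container can be optimized or replaced in isolation when a better model emerges, which is the transparency benefit the paragraph above describes.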

As LLMs continue to steer conversations, propelled by the upcoming release of the Llama 3 and GPT-5 models, 2024 seems poised to introduce the era of SLMs or, more probably, of a hybrid approach. This marks a positive evolution in our understanding and application of AI to solve an increasing array of use cases.

Trust in LLMs will stem from widespread experimentation and market feedback, addressing the inherent opacity in the modeling process itself. Transparency, in this context, extends beyond modeling intricacies to encompass the entirety of the workflow. The lack of transparency can be effectively addressed by implementing retrieval-augmented generation (RAG) techniques. RAG applications offer users access to the underlying sources and content that influence the LLM's responses, which boosts transparency and accountability.
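The source-attribution property of RAG can be shown in a minimal skeleton: retrieved passages feed the prompt, and their identifiers travel with the answer so users can audit what shaped the response. The retriever, corpus, and generator below are toy assumptions, not any particular RAG framework.

```python
def retrieve(query: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda doc_id: -len(q & set(corpus[doc_id].lower().split())))
    return scored[:top_k]

def answer_with_sources(query: str, corpus: dict[str, str], generate) -> dict:
    """RAG skeleton: the answer is returned together with the IDs of the
    documents that were placed in the prompt, for transparency."""
    doc_ids = retrieve(query, corpus)
    context = "\n".join(corpus[d] for d in doc_ids)
    return {"answer": generate(query, context), "sources": doc_ids}

# Hypothetical corpus and generator.
corpus = {
    "doc1": "Acme reported record quarterly revenue growth",
    "doc2": "Central bank held interest rates steady",
    "doc3": "Acme revenue beat analyst expectations",
}
result = answer_with_sources(
    "Acme revenue news",
    corpus,
    generate=lambda q, ctx: f"Based on retrieved filings: {ctx.splitlines()[0]}",
)
print(result["sources"])  # the user sees which documents informed the answer
```

In a real deployment the retriever would be a vector or keyword index and the generator an LLM call, but the accountability mechanism is the same: the sources field is what lets a user verify the response rather than trust a black box.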


Applied AI for Real-World Financial Impact

  • At RavenPack, our focus revolves around applied AI, particularly within the financial domain.
  • The feedback loop is immediate and measurable — success is gauged by the ability to craft profitable investment strategies from derived analytics.
  • Unlike processes driven by hype, our emphasis lies in contributing to our clients' bottom line.

We’re currently building next-gen sentiment and thematic pipelines following this approach. We leverage a range of components, including entity detection, sentiment scoring, trained embeddings, theme detection, novelty and relevance scoring, and verification layers. Not all components are efficiently addressed by an LLM.
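The component list above lends itself to small, swappable stages composed into one pipeline. The sketch below is purely illustrative: every component's internal logic is invented for the example and says nothing about how RavenPack's actual models work.

```python
# Each stage is a small, replaceable function over a shared record dict,
# so any single component can be swapped for a better model independently.

def detect_entities(record: dict) -> dict:
    """Toy entity detection: treat capitalized words as entity candidates."""
    record["entities"] = [w for w in record["text"].split() if w.istitle()]
    return record

def score_sentiment(record: dict) -> dict:
    """Toy lexicon-based sentiment: positive hits minus negative hits."""
    words = set(record["text"].lower().split())
    record["sentiment"] = (len(words & {"beat", "record", "growth"})
                           - len(words & {"miss", "loss"}))
    return record

def verify(record: dict) -> dict:
    """Verification layer: a trivial sanity check that entities were found."""
    record["verified"] = bool(record["entities"])
    return record

PIPELINE = [detect_entities, score_sentiment, verify]

def process(text: str) -> dict:
    record = {"text": text}
    for stage in PIPELINE:
        record = stage(record)
    return record

print(process("Acme posts record growth"))
```

The point of the structure, rather than the toy logic, is the argument from the article: components such as sentiment scoring may be served best by a fine-tuned SLM, while others may warrant an LLM call, and the list-of-stages layout keeps each choice inspectable and reversible.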

Guided by innovation and pragmatism, quant investors navigate a delicate balance between staying on the cutting edge and applying a natural vetting process that prioritizes profitability through continuous experimentation. For most firms outside the fortunate few, the wisdom of awaiting market validation can outweigh the benefits of early innovation; in either case, trust remains a cornerstone of AI-driven analytics.

Our methodology places a premium on trust through transparency, enabling clients to shape more effective signals and impactful use cases with robust source control. This type of transparency proves indispensable not only for client comprehension but also for seamless communication with regulators and stakeholders across the financial industry.
