Artificial Intelligence: Composition of Experts (CoE) – A Breakthrough in Large Language Models?
In the dynamic realm of artificial intelligence, a new methodology called Composition of Experts (CoE) has emerged that has the potential to reshape the landscape of Large Language Models (LLMs). CoE diverges from the conventional monolithic model, offering a modular and cost-efficient alternative.
CoE operates by assembling existing expert models into a cohesive whole. It achieves this through two critical steps: identifying the experts and constructing a router. Each expert excels at specific tasks, while the router dynamically selects the most suitable expert for a given query. The approach resembles an orchestra in which each instrument contributes a unique sound to a harmonious composition; CoE assembles a similar ensemble of models, resulting in a powerful yet adaptable LLM.
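To make the idea concrete, here is a minimal, purely illustrative sketch of query routing. The keyword-based router, the two expert functions, and answer() are hypothetical placeholders; in a real CoE system the router is a learned model and each expert is a full LLM.

```python
# Illustrative sketch of the CoE idea: a router picks one expert per query
# instead of running a single monolithic LLM. Everything here is a toy
# stand-in, not SambaNova's implementation.

from typing import Callable, Dict

# Each "expert" is represented by a simple function; in practice each would
# be a separately trained LLM specialised for one domain.
def math_expert(prompt: str) -> str:
    return f"[math expert] solving: {prompt}"

def reasoning_expert(prompt: str) -> str:
    return f"[common-sense expert] reasoning about: {prompt}"

EXPERTS: Dict[str, Callable[[str], str]] = {
    "math": math_expert,
    "reasoning": reasoning_expert,
}

def route(prompt: str) -> str:
    """Pick the expert best suited to the prompt (toy keyword heuristic)."""
    if any(token in prompt.lower() for token in ("integral", "solve", "sum")):
        return "math"
    return "reasoning"

def answer(prompt: str) -> str:
    """Route the query, then run only the selected expert."""
    return EXPERTS[route(prompt)](prompt)

if __name__ == "__main__":
    print(answer("Solve the integral of x^2"))
    print(answer("Why do people carry umbrellas when it rains?"))
```

Because only the selected expert runs, inference cost stays close to that of a single small model even though the ensemble as a whole covers many domains.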
SambaNova, a company known for its AI hardware, is the primary driver of CoE research. Its recent release, Samba-CoE-v0.1, demonstrates the potential of this approach. By ensembling five expert models, spanning domains from mathematics to common-sense reasoning, Samba-CoE-v0.1 surpasses Mixtral 8x7B, Gemma-7B, Llama2-70B, Qwen-72B, and Falcon-180B across various benchmark tasks. Moreover, it achieves this at an inference cost equivalent to just two calls to 7-billion-parameter LLMs.
Beyond performance gains, CoE offers agility. Its modular design allows organizations to fine-tune specific components without retraining the entire model (see the sketch after this list). The following areas are expected to improve as the technology matures:
- Scalability: Scaling CoE to even larger models could unlock unprecedented capabilities.
- Robust Routing: Enhancing the router’s ability to handle diverse prompts and multi-turn conversations.
- Broader Adoption: Uptake of CoE beyond SambaNova as other players explore the methodology.
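As a rough illustration of that modularity, the PyTorch sketch below freezes the router and all but one expert, then updates only the chosen expert's weights. The tiny Linear "experts", the router stand-in, and the dummy data are assumptions made for the example, not SambaNova's architecture.

```python
# Sketch of improving one expert in isolation: freeze everything,
# unfreeze a single expert, and train only its parameters.

import torch
import torch.nn as nn

experts = nn.ModuleDict({
    "math": nn.Linear(16, 16),
    "code": nn.Linear(16, 16),
    "chat": nn.Linear(16, 16),
})
router = nn.Linear(16, len(experts))   # stand-in for a learned router

# Freeze the router and all experts, then unfreeze only the one being improved.
for module in [router, *experts.values()]:
    module.requires_grad_(False)
experts["math"].requires_grad_(True)

trainable = [p for p in experts["math"].parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

# One illustrative update step on dummy data: only the math expert changes.
x = torch.randn(8, 16)
target = torch.randn(8, 16)
loss = nn.functional.mse_loss(experts["math"](x), target)
loss.backward()
optimizer.step()
```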
While SambaNova leads the CoE charge, other companies are likely to follow suit. The field remains open for innovation, fostering healthy competition.
Widespread adoption of CoE-based models will take time. As hardware advances and research matures, we can expect CoE to become more prevalent within the next few years. Could hybrid models emerge, blending CoE with other techniques? Perhaps CoE will inspire novel architectures that combine expert models and end-to-end training.
Training trillion-parameter models can cost over $100 million. CoE disrupts this by achieving comparable performance at approximately 1/10th the cost. Organizations can now explore cutting-edge AI without breaking the bank.