Model Routers
Related topics:
- Adapter-based Selective Knowledge Distillation for Federated Multi-domain Meeting Summarization. Xiachong Feng, Xiaocheng Feng, Xiyuan Du, Min-Yen Kan, Bing Qin. [https://arxiv.org/abs/2308.03275](https://arxiv.org/abs/2308.03275) [[Routers]] [[Arxiv/Agents Multi|Agents Multi]] [[Reading Summari…
- AgentsNet: Coordination and Collaborative Reasoning in Multi-Agent LLMs. Large language models (LLMs) have demonstrated powerful problem-solving capabilities, in particular when organized in multi-agent systems. However, the advent of such systems also raises several quest…
- Beyond GPT-5: Making LLMs Cheaper and Better via Performance-Efficiency Optimized Routing. Balancing performance and efficiency is a central challenge in large language model (LLM) advancement. GPT-5 addresses this with test-time routing, dynamically assigning queries to either an efficient…
- Externalization in LLM Agents: A Unified Review of Memory, Skills, Protocols and Harness Engineering. Large language model (LLM) agents are increasingly built less by changing model weights than by reorganizing the runtime around them. Capabilities that earlier systems expected the model to recover in…
- Fast, Slow, and Tool-augmented Thinking for LLMs: A Review. Large Language Models (LLMs) have demonstrated remarkable progress in reasoning across diverse domains. However, effective reasoning in real-world tasks requires adapting the reasoning strategy to the…
- Federation of Agents: A Semantics-Aware Communication Fabric for Large-Scale Agentic AI. We present Federation of Agents (FoA), a distributed orchestration framework that transforms static multi-agent coordination into dynamic, capability-driven collaboration. FoA introduces Versioned Cap…
- Guidance is All You Need: Temperature-Guided Reasoning in Large Language Models. We present Quasar-1, a novel architecture that introduces temperature-guided reasoning to large language models through the Token Temperature Mechanism (TTM) and Guided Sequence of Thought (GSoT). Our…
- Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing. Large language models (LLMs) excel in most NLP tasks but also require expensive cloud servers for deployment due to their size, while smaller models that can be deployed on lower cost (e.g., edge) dev…
- MasRouter: Learning to Route LLMs for Multi-Agent Systems. Multi-agent systems (MAS) powered by Large Language Models (LLMs) have been demonstrated to push the boundaries of LLM capabilities, yet they often incur significant costs and face challenges in dynam…
- Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence. We propose MODEL SWARMS, a collaborative search algorithm to adapt LLMs via swarm intelligence, the collective behavior guiding individual systems. Specifically, MODEL SWARMS starts with a pool of LLM…
- RouteLLM: Learning to Route LLMs with Preference Data. Large language models (LLMs) excel at a wide range of tasks, but choosing the right model often involves balancing performance and cost. Powerful models offer better results but are expensive, while s…
- StructRAG: Boosting Knowledge Intensive Reasoning of LLMs via Inference-time Hybrid Information Structurization. Retrieval-augmented generation (RAG) is a key means to effectively enhance large language models (LLMs) in many knowledge-based tasks. However, existing RAG methods struggle with knowledge-intensive r…
- Why Do Multi-agent LLM Systems Fail? [[Routers]] Despite growing enthusiasm for Multi-Agent LLM Systems (MAS), their performance gains across popular benchmarks often remain minimal compared to single-agent frameworks. This gap highlig…
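The common pattern across several of these papers (Hybrid LLM, RouteLLM, the GPT-5 test-time routing work) is routing each query to a cheap or a strong model based on an estimated difficulty score. A minimal sketch of that pattern, with all names and the toy scorer being illustrative assumptions rather than any paper's actual API:

```python
# Minimal sketch of cost-aware query routing: send a query to the
# strong (expensive) model only when its estimated difficulty clears
# a threshold. All names and the scorer are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Model:
    name: str
    cost_per_query: float  # arbitrary cost units


def make_router(score_fn: Callable[[str], float], threshold: float,
                cheap: Model, strong: Model) -> Callable[[str], Model]:
    """Build a router closure over a difficulty scorer and a threshold."""
    def route(query: str) -> Model:
        # Route hard queries to the strong model, the rest to the cheap one.
        return strong if score_fn(query) > threshold else cheap
    return route


def toy_score(query: str) -> float:
    # Hypothetical difficulty proxy: longer, multi-step queries look harder.
    # Real routers (e.g. RouteLLM) learn this from preference data instead.
    return min(1.0, len(query.split()) / 50)


router = make_router(toy_score, threshold=0.5,
                     cheap=Model("small-llm", 0.1),
                     strong=Model("large-llm", 1.0))

print(router("What is 2 + 2?").name)  # short query -> small-llm
```

The design choice that matters is the scorer: the listed papers replace `toy_score` with a learned predictor (trained on preference or correctness data) so the threshold trades off cost against answer quality.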