QoS-Efficient Serving of Multiple Mixture-of-Expert LLMs Using Partial Runtime Reconfiguration

Paper · arXiv 2505.06481 · Published May 10, 2025
LLM Architecture

The deployment of mixture-of-experts (MoE) large language models (LLMs) presents significant challenges due to their high memory demands. These challenges become even more pronounced in multi-tenant environments, where shared resources must accommodate multiple models, limiting the effectiveness of conventional virtualization techniques. This paper addresses the problem of efficiently serving multiple fine-tuned MoE-LLMs on a single GPU. We propose a serving system that employs similarity-based expert consolidation to reduce the overall memory footprint by sharing similar experts across models. To preserve output quality, we introduce runtime partial reconfiguration, which dynamically replaces non-expert layers when processing requests from different models. As a result, our approach achieves competitive output quality while maintaining throughput comparable to serving a single model, incurring only a negligible increase in time-to-first-token (TTFT).
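To make the two ideas concrete, the following is a minimal sketch (not the paper's implementation) of similarity-based expert consolidation and per-request partial reconfiguration. It assumes two fine-tuned MoE checkpoints expose expert weights as flat tensors keyed by `(layer, expert)` and their non-expert layers as separate state dicts; names such as `expert_weights_a`, `SIM_THRESHOLD`, and `swap_non_expert_layers` are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn.functional as F

SIM_THRESHOLD = 0.95  # assumed cutoff above which two experts are treated as shareable


def consolidate_experts(expert_weights_a, expert_weights_b):
    """Share expert tensors from model A whenever model B's expert is similar enough.

    expert_weights_*: dict mapping (layer_idx, expert_idx) -> 1-D weight tensor.
    Returns model B's expert table with sufficiently similar experts replaced by
    references to model A's tensors, so only one copy stays resident on the GPU.
    """
    consolidated_b = {}
    for key, w_b in expert_weights_b.items():
        w_a = expert_weights_a.get(key)
        if w_a is not None and F.cosine_similarity(w_a, w_b, dim=0) >= SIM_THRESHOLD:
            consolidated_b[key] = w_a   # share A's expert: no additional memory
        else:
            consolidated_b[key] = w_b   # keep B's distinct expert
    return consolidated_b


def swap_non_expert_layers(model, non_expert_state):
    """Partial reconfiguration: load only the non-expert parameters (attention,
    norms, routers) of the requested model; consolidated experts stay in place."""
    model.load_state_dict(non_expert_state, strict=False)


# Toy usage: two models whose layer-0, expert-0 weights are nearly identical.
if __name__ == "__main__":
    a = {(0, 0): torch.randn(16)}
    b = {(0, 0): a[(0, 0)] + 0.01 * torch.randn(16)}
    merged = consolidate_experts(a, b)
    print(merged[(0, 0)] is a[(0, 0)])  # True -> the expert is shared, not duplicated
```

In this sketch, memory savings come from the shared expert tensors, while correctness for each tenant is preserved by swapping only the (much smaller) non-expert parameters at request time, which is why the TTFT overhead stays small.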