SpecEdge

A scalable edge-assisted LLM serving framework that improves cost efficiency and reduces latency

Figure: abstract diagram of SpecEdge, with draft (edge-side) and verify (server-side) inference.

Summary

Large language models (LLMs) power many modern applications, but serving them at scale remains costly and resource-intensive. Current server-centric systems overlook consumer-grade GPUs at the edge. We introduce SpecEdge, an edge-assisted inference framework that splits LLM workloads between edge and server GPUs using a speculative decoding scheme, exchanging only token outputs over the network. SpecEdge employs proactive edge drafting to overlap edge token creation with server verification, and pipeline-aware scheduling that interleaves multiple user requests to increase server-side throughput. Experiments show that SpecEdge improves overall cost efficiency by 1.91x by raising server throughput 2.22x, and reduces inter-token latency by 11.24% compared to a server-only baseline, introducing a scalable, cost-effective paradigm for LLM serving.
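
The draft/verify split described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not SpecEdge's actual implementation: the models are stubbed with random tokens, and the names edge_draft and server_verify, the draft length GAMMA, and the 0.7 per-token acceptance rate are all hypothetical. Only token lists cross the edge-server boundary, and the proactive draft of the optimistic continuation overlaps with server verification; pipeline-aware scheduling across users is omitted.

    # Minimal sketch of SpecEdge-style speculative decoding with proactive
    # edge drafting. Models are stubbed with random tokens; edge_draft,
    # server_verify, GAMMA, and the 0.7 acceptance rate are illustrative
    # assumptions, not the paper's actual API or numbers.
    import random
    from concurrent.futures import ThreadPoolExecutor

    GAMMA = 4  # draft tokens proposed per round (assumed)

    def edge_draft(prefix):
        """Small edge model proposes GAMMA candidate tokens (stub)."""
        return [random.randint(0, 99) for _ in range(GAMMA)]

    def server_verify(prefix, draft):
        """Large server model checks the draft in one forward pass.
        Accepts a prefix of the draft; on the first rejection it
        substitutes its own token and stops (stubbed acceptance rule)."""
        n = 0
        while n < len(draft) and random.random() < 0.7:
            n += 1
        if n == len(draft):
            return draft                               # fully accepted
        return draft[:n] + [random.randint(0, 99)]     # server corrects

    def generate(prompt, max_new=32):
        out = list(prompt)
        with ThreadPoolExecutor(max_workers=2) as pool:
            draft = edge_draft(out)
            while len(out) - len(prompt) < max_new:
                # Send the draft for verification and, in parallel,
                # proactively draft the optimistic continuation so the
                # edge GPU is not idle while the server verifies.
                verify_fut = pool.submit(server_verify, out, draft)
                next_fut = pool.submit(edge_draft, out + draft)
                result = verify_fut.result()
                out += result
                if result == draft:
                    draft = next_fut.result()  # proactive draft is valid
                else:
                    next_fut.result()          # stale; discard and redraft
                    draft = edge_draft(out)
        return out

    print(generate([1, 2, 3]))

When the server accepts the full draft, the proactively drafted chunk is used immediately, hiding the edge's drafting latency behind the server's verification pass; on a rejection, the stale draft is discarded and drafting restarts from the corrected prefix.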

Publications

  1. NeurIPS
    SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs
    In Proceedings of the 39th Annual Conference on Neural Information Processing Systems (NeurIPS), December 2025 (Spotlight, top 3.2% of submissions)

Members