
PERSEUS: Characterizing Performance and Cost of Multi-Tenant Serving for CNN Models

In this paper, we present PERSEUS, a measurement framework that provides the basis for understanding the performance and cost trade-offs of multi-tenant model serving.

Published on Sep 11, 2020

ABSTRACT

Deep learning models are increasingly used in end-user applications, supporting both novel features such as facial recognition and traditional features such as web search. To accommodate high inference throughput, it is common to host a single pre-trained Convolutional Neural Network (CNN) on dedicated cloud-based servers with hardware accelerators such as Graphics Processing Units (GPUs). However, GPUs can be orders of magnitude more expensive than traditional Central Processing Unit (CPU) servers. These resources can also be under-utilized under dynamic workloads, which may result in inflated serving costs. One potential way to alleviate this problem is to let hosted models share the underlying resources, which we refer to as multi-tenant inference serving. A key challenge is maximizing resource efficiency for multi-tenant serving given hardware with diverse characteristics, models with unique response-time Service Level Agreements (SLAs), and dynamic inference workloads. In this paper, we present PERSEUS, a measurement framework that provides the basis for understanding the performance and cost trade-offs of multi-tenant model serving. We implemented PERSEUS in Python atop a popular cloud inference server, Nvidia TensorRT Inference Server. Leveraging PERSEUS, we evaluated the inference throughput and cost of serving various models and demonstrated that multi-tenant model serving can lead to up to 12% cost reduction.
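To make the cost trade-off in the abstract concrete, here is a minimal back-of-the-envelope sketch (not from the paper) comparing dedicated and multi-tenant serving for two models. All prices and throughput numbers are made-up illustrative assumptions; the actual PERSEUS measurements depend on the hardware and models evaluated in the paper.

```python
PRICE = 3.06  # hypothetical hourly price (USD) of one GPU server

def serve_cost(hourly_price, requests, throughput_qps):
    """USD for one server to work through `requests` at a sustained rate."""
    return hourly_price * (requests / throughput_qps) / 3600

# Dedicated serving: each model on its own GPU server, at its
# standalone throughput (illustrative numbers).
dedicated = serve_cost(PRICE, 1_000_000, 800) + serve_cost(PRICE, 1_000_000, 600)

# Multi-tenant serving: both models share one server. Co-location
# typically reduces each model's throughput due to interference, and
# the server is billed until the slower model's workload finishes.
shared = PRICE * max(1_000_000 / 700, 1_000_000 / 520) / 3600

savings = 1 - shared / dedicated
print(f"dedicated=${dedicated:.2f}  shared=${shared:.2f}  savings={savings:.0%}")
```

Under these assumed numbers, consolidation saves money because one server's billed hours replace two servers' billed hours, even after the interference penalty. Whether that holds in practice, and whether response-time SLAs are still met, is exactly what a measurement framework like PERSEUS is meant to determine.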


M. LeMay, S. Li and T. Guo, "PERSEUS: Characterizing Performance and Cost of Multi-Tenant Serving for CNN Models," in 2020 IEEE International Conference on Cloud Engineering (IC2E), Sydney, Australia, 2020, pp. 66-72. doi: 10.1109/IC2E48712.2020.00014.

*denotes a WPI undergraduate student author
