
Scalable and Cost-Efficient ML Inference: Parallel Batch Processing with Serverless Functions

2025-01-30

Amine Barrak, Emna Ksontini


Abstract

As data-intensive applications grow, batch processing in resource-constrained environments faces scalability and resource-management challenges. Serverless computing offers a flexible alternative, enabling dynamic resource allocation and automatic scaling. This paper explores how serverless architectures can make large-scale ML inference tasks faster and more cost-effective by decomposing monolithic processes into parallel functions. Through a case study on sentiment analysis using the DistilBERT model and the IMDb dataset, we demonstrate that serverless parallel processing can reduce execution time by over 95% compared to a monolithic approach, at the same cost.
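The decomposition the abstract describes can be sketched locally: split the dataset into batches and dispatch each batch as one concurrent function invocation. The sketch below is an assumption-laden illustration, not the paper's implementation; `invoke_function` is a hypothetical stand-in for a real serverless call (e.g. an HTTP request to a function hosting DistilBERT), and the keyword-based labels are placeholders for actual model output.

```python
# Sketch of parallel batch inference, assuming a hypothetical
# invoke_function() in place of a real serverless (e.g. AWS Lambda) call.
from concurrent.futures import ThreadPoolExecutor


def chunk(items, size):
    """Split the dataset into fixed-size batches."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def invoke_function(batch):
    # Placeholder for one serverless invocation; a real deployment would
    # send the batch to a function running the DistilBERT model.
    return [{"text": t, "label": "positive" if "good" in t else "negative"}
            for t in batch]


def parallel_inference(texts, batch_size=4, max_workers=8):
    batches = chunk(texts, batch_size)
    # Each batch maps to one (simulated) invocation; invocations run
    # concurrently, mirroring the fan-out of parallel serverless functions.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(invoke_function, batches)
    # Flatten per-batch predictions back into one ordered result list.
    return [pred for batch in results for pred in batch]


if __name__ == "__main__":
    reviews = ["a good movie", "terrible plot", "good acting", "dull"]
    print(parallel_inference(reviews, batch_size=2))
```

In a real deployment the fan-out is handled by the platform's auto-scaling rather than a local thread pool, which is what drives the cost-neutral speedup: many short-lived invocations bill roughly the same as one long monolithic run.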
