
Non-monotone DR-submodular Maximization: Approximation and Regret Guarantees

2019-05-23

Christoph Dürr, Nguyen Kim Thang, Abhinav Srivastav, Léo Tible


Abstract

Diminishing-returns (DR) submodular optimization is an important field with many real-world applications in machine learning, economics, and communication systems. It captures a subclass of non-convex optimization that admits both practical and theoretical guarantees. In this paper, we study the fundamental problem of maximizing non-monotone DR-submodular functions over down-closed and general convex sets, in both offline and online settings. First, we show that for the offline problem of maximizing non-monotone DR-submodular functions over a general convex set, the Frank-Wolfe algorithm achieves an approximation guarantee that depends on the convex set. Next, we show that the Stochastic Gradient Ascent algorithm achieves a 1/4-approximation ratio with a regret of O(1/√T) for the problem of maximizing non-monotone DR-submodular functions over down-closed convex sets. These are the first approximation guarantees in the corresponding settings. Finally, we benchmark these algorithms on problems arising in the machine learning domain, using real-world datasets.
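To make the two algorithmic templates concrete, the sketch below shows a generic continuous Frank-Wolfe loop over a down-closed polytope, followed by a projected Stochastic Gradient Ascent loop. Everything here is a minimal illustration under assumed oracles: the toy quadratic objective, the LP-based linear maximization oracle, the step sizes, and the box projection are our assumptions, not the exact variants analyzed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def frank_wolfe(grad_f, A_ub, b_ub, dim, T=100):
    # Generic Frank-Wolfe template over K = {x in [0,1]^dim : A_ub @ x <= b_ub}.
    # Sketch only: the paper's variant adapts the direction-finding step to
    # handle non-monotonicity; just the overall loop structure is shown here.
    x = np.zeros(dim)  # the origin is feasible for any down-closed set
    for _ in range(T):
        g = grad_f(x)
        # Linear maximization oracle: v = argmax_{v in K} <g, v>.
        # linprog minimizes, so negate the objective.
        v = linprog(-g, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * dim).x
        x = x + (v - x) / T  # convex-combination step keeps x feasible
    return x

def sga(stoch_grad, project, x0, T=1000, eta0=0.5):
    # Projected stochastic gradient ascent with a diminishing step size;
    # averaging the iterates is a standard device assumed here.
    x = np.array(x0, dtype=float)
    total = np.zeros_like(x)
    for t in range(1, T + 1):
        x = project(x + eta0 / np.sqrt(t) * stoch_grad(x))
        total += x
    return total / T

# Toy non-monotone DR-submodular objective (an assumption for illustration):
# f(x) = <a, x> - 0.5 * x^T H x with H >= 0 entrywise, so every cross
# second derivative is nonpositive (the diminishing-returns property).
rng = np.random.default_rng(0)
dim = 5
H = rng.uniform(0.0, 1.0, (dim, dim))
H = (H + H.T) / 2
a = rng.uniform(0.0, 1.0, dim)
grad_f = lambda x: a - H @ x

# Down-closed polytope for Frank-Wolfe: {x in [0,1]^5 : sum(x) <= 2}.
x_fw = frank_wolfe(grad_f, A_ub=np.ones((1, dim)), b_ub=[2.0], dim=dim)

# Noisy gradient oracle and box projection (also down-closed) for SGA.
stoch_grad = lambda x: grad_f(x) + rng.normal(0.0, 0.1, dim)
project = lambda y: np.clip(y, 0.0, 1.0)
x_sga = sga(stoch_grad, project, x0=np.zeros(dim))
```

The toy objective is non-monotone (its gradient turns negative as x grows) yet DR-submodular, which is the regime the paper targets; any real use would swap in the actual objective, feasible set, and the paper's own step rules.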
