UBC Theses and Dissertations
Beyond submodular maximization: one-sided smoothness and meta-submodularity
Ghadiri, Mehrdad
While there are well-developed tools for maximizing a submodular function f(S) subject to a matroid constraint S ∈ ℳ, there is much less work on the corresponding supermodular maximization problems. We develop new techniques for attacking these problems, inspired by the continuous greedy method applied to the multilinear extension of a submodular function. We first adapt the continuous greedy algorithm to work for general twice-continuously differentiable functions. Our results are based on a new notion of one-sided smoothness of an objective. Reminiscent of how Lipschitz smoothness bounds convergence rates in convex optimization, one-sided smoothness controls the approximability of maximizing a monotone, non-linear function: if F : [0,1]^n → ℝ≥₀ is one-sided σ-smooth, then the adapted continuous greedy yields an approximation factor depending only on σ.

We apply the new algorithm to a broad class of quadratic supermodular functions arising in diversity maximization. We also develop new methods for rounding quadratics over a matroid polytope, based on extensions of swap rounding and approximate integer decomposition. Together with the adapted continuous greedy, this leads to an O(σ³/²)-approximation. This is the best asymptotic approximation known for this class of diversity maximization problems, and we give some evidence for why we believe it may be tight.

We then consider general (non-quadratic) functions. We give a broad parameterized family of monotone functions which includes submodular functions and the just-discussed supermodular family of discrete quadratics. The new family is defined by restricting the one-sided smoothness condition to the boolean hypercube; such set functions are called γ-meta-submodular. We develop local search algorithms with approximation factors that depend only on γ.
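To make the continuous greedy idea concrete, here is a minimal sketch under simplifying assumptions: a uniform (cardinality-k) matroid rather than a general matroid polytope, and Monte Carlo estimates of the multilinear extension F(x) = E[f(R(x))] and its coordinate-wise marginal gains. All function names and parameters here are illustrative, not the algorithm as implemented in the thesis.

```python
import random

def multilinear_estimate(f, x, samples=200):
    """Estimate the multilinear extension F(x) = E[f(R(x))], where R(x)
    includes each element i independently with probability x[i]."""
    n = len(x)
    total = 0.0
    for _ in range(samples):
        S = {i for i in range(n) if random.random() < x[i]}
        total += f(S)
    return total / samples

def continuous_greedy(f, n, k, steps=50, samples=100):
    """Continuous greedy sketch for max f(S) subject to |S| <= k
    (a uniform matroid). Each step estimates the marginal gains
    dF/dx_i by sampling, then moves mass 1/steps toward the k
    coordinates with the largest estimated gains."""
    x = [0.0] * n
    for _ in range(steps):
        grads = []
        for i in range(n):
            # sampled marginal gain of element i at the current point x
            gain = 0.0
            for _ in range(samples):
                S = {j for j in range(n) if random.random() < x[j]}
                gain += f(S | {i}) - f(S - {i})
            grads.append(gain / samples)
        # best response over the uniform-matroid polytope: top-k gains
        best = sorted(range(n), key=lambda i: -grads[i])[:k]
        for i in best:
            x[i] = min(1.0, x[i] + 1.0 / steps)
    return x
```

The returned fractional point x then needs a rounding step (e.g. swap rounding, as discussed for quadratics over a matroid polytope) to recover an integral solution.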
We show that the γ-meta-submodular families include well-known function classes, including meta-submodular functions (γ = 0), proportionally submodular functions (γ = 1), diversity functions based on negative-type distances or the Jensen-Shannon divergence (both γ = 2), and (semi-)metric diversity functions. We then focus on maximizing a specific 1-meta-submodular function in a distributed setting, which has applications in machine learning and recommender systems. As an application, we model the multi-label feature selection problem as such an optimization problem. Combined with our optimization algorithm, this leads to the first distributed multi-label feature selection method.
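A common template for distributed set-function maximization, which gives a flavor of the distributed setting above, is a two-round partition-and-merge scheme (in the spirit of the GreeDi framework): partition the ground set across machines, run greedy on each part, then run greedy again on the pooled partial solutions. This is a generic hedged sketch, not the thesis's specific algorithm for multi-label feature selection.

```python
import random

def greedy(f, ground, k):
    """Standard greedy: repeatedly add the element with the
    largest marginal gain f(S + e) - f(S), up to k elements."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S), default=None)
        if best is None:
            break
        S.add(best)
    return S

def distributed_greedy(f, ground, k, machines=4):
    """Two-round distributed scheme: each 'machine' runs greedy on
    its own random partition of the ground set, then a final greedy
    pass selects k elements from the union of the partial solutions."""
    items = list(ground)
    random.shuffle(items)
    parts = [items[i::machines] for i in range(machines)]
    pooled = set()
    for part in parts:
        pooled |= greedy(f, part, k)   # round 1: local solves
    return greedy(f, pooled, k)        # round 2: merge
```

In a feature-selection reading, the ground set would be candidate features and f a relevance-and-diversity score over feature subsets; only the pooled candidates (at most machines × k elements) need to be communicated.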
Attribution-NonCommercial-NoDerivatives 4.0 International