Class imbalance learning is the study of problems in which some classes appear more frequently than others. Most existing works on this problem assume the data to be dense and do not exploit its rich structure. One such structure is sparsity. In the present work, we focus on solving the class imbalance problem under a sparsity assumption. More specifically, we maximize the well-known Gmean metric for class imbalance learning in the binary classification setting, which results in a non-convex loss function. Convex relaxation techniques are used to convert this non-convex problem into a convex one. The problem formulation in the present work uses an L1-regularized proximal learning framework and is solved via an accelerated stochastic proximal gradient descent algorithm. Our aim in this paper is to show: (i) the application of proximal algorithms to solve real-world problems (class imbalance); (ii) how they scale to Big data; and (iii) how they outperform some recently proposed algorithms in terms of Gmean, F-measure, and mistake rate on several benchmark data sets.
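Since the abstract only names the optimization machinery, the following is a minimal illustrative sketch of an accelerated stochastic proximal gradient step for a generic L1-regularized objective of the form (1/n) Σ_i loss(w; x_i, y_i) + λ‖w‖₁, where the proximal operator of the L1 term is elementwise soft-thresholding. A logistic loss stands in for the paper's convex-relaxed Gmean surrogate, whose exact form is not reproduced here; all function names, parameters, and the fixed step size are hypothetical choices for illustration, not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_stochastic_proximal_gd(X, y, lam=0.01, step=0.1,
                                       batch_size=32, epochs=20, seed=0):
    """Minimize (1/n) * sum_i loss(w; x_i, y_i) + lam * ||w||_1 with
    FISTA-style acceleration and minibatch (stochastic) gradients.
    The smooth part here is a logistic loss; it is a stand-in for the
    paper's convex relaxation of the Gmean objective (an assumption)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)        # current iterate
    w_prev = np.zeros(d)   # previous iterate (needed for momentum)
    t = 1.0                # FISTA momentum scalar
    for _ in range(epochs):
        for _ in range(n // batch_size):
            idx = rng.choice(n, size=batch_size, replace=False)
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            # Extrapolated (look-ahead) point from the last two iterates.
            z = w + ((t - 1.0) / t_next) * (w - w_prev)
            # Stochastic gradient of the smooth logistic loss at z:
            # grad = -(1/B) * sum_i y_i * x_i / (1 + exp(y_i * <z, x_i>)).
            margin = y[idx] * (X[idx] @ z)
            grad = -(X[idx].T @ (y[idx] / (1.0 + np.exp(margin)))) / batch_size
            w_prev = w
            # Forward step on the smooth part, backward (prox) step on L1.
            w = soft_threshold(z - step * grad, step * lam)
            t = t_next
    return w
```

The interplay shown here is the essential point: the smooth (relaxed) loss is handled by a gradient step at an extrapolated point, while the non-smooth L1 regularizer is handled exactly and cheaply through its proximal map, which also produces the sparse solutions the abstract emphasizes.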