Regularization oversampling for classification tasks: To exploit what you do not know
Publication type: Journal article with impact factor
Publication year: 2023
Journal: Information Sciences
Volume: 635
Issue: July
Pages: 169–194
Abstract
In numerous binary classification tasks, the two groups of instances are not equally represented, which often implies that the training data lack sufficient information to model the minority class correctly. Furthermore, many traditional classification models make arbitrarily overconfident predictions outside the range of the training data. These issues severely impact the deployment and usefulness of these models in real life. In this paper, we propose the boundary regularizing out-of-distribution (BROOD) sampler, which adds artificial data points on the edge of the training data. By exploiting these artificial samples, we are able to regularize the decision surface of discriminative machine learning models and make more prudent predictions. Next, it is crucial to correctly classify many positive instances in a limited pool of instances that can be investigated with the available resources. By smartly assigning predetermined nonuniform class probabilities outside the training data, we can emphasize certain data regions and improve classifier performance on various material classification metrics. The good performance of the proposed methodology is illustrated in a case study that consists of both benchmark balanced and imbalanced classification data sets.

Knowledge Domain/Industry: Accounting & Finance
DOI: 10.1016/j.ins.2023.03.146
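The record stops at the abstract, but the idea it describes can be sketched loosely: place artificial samples just outside the training data and give them a predetermined class probability, so a discriminative model no longer extrapolates overconfidently. The snippet below is an assumption-laden stand-in, not the paper's BROOD algorithm: it pushes the outermost points away from the data centroid as a crude edge sampler, and encodes a soft label by duplicating each artificial point under both classes with complementary sample weights.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy imbalanced data (illustrative only; not the paper's data).
X_maj = rng.normal(0.0, 1.0, size=(200, 2))   # majority class
X_min = rng.normal(2.5, 0.5, size=(20, 2))    # minority class
X = np.vstack([X_maj, X_min])
y = np.array([0] * 200 + [1] * 20)

# Crude edge sampler (an assumption, not the paper's method):
# push the 30 outermost points a fixed step away from the centroid.
centroid = X.mean(axis=0)
dist = np.linalg.norm(X - centroid, axis=1)
edge_idx = np.argsort(dist)[-30:]
edge = X[edge_idx]
X_art = edge + 0.5 * (edge - centroid) / dist[edge_idx, None]

# Encode a predetermined class probability p_pos for the artificial
# region: each artificial point appears once per class, weighted.
# A nonuniform choice of p_pos per region would emphasize it.
p_pos = 0.5
X_aug = np.vstack([X, X_art, X_art])
y_aug = np.concatenate([y, np.ones(len(X_art)), np.zeros(len(X_art))])
w_aug = np.concatenate([np.ones(len(X)),
                        np.full(len(X_art), p_pos),
                        np.full(len(X_art), 1.0 - p_pos)])

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_aug, y_aug, sample_weight=w_aug)

# On the artificial edge points the model now outputs the chosen
# soft probability instead of an overconfident 0 or 1.
print(clf.predict_proba(X_art)[:, 1].round(2))
```

A fully grown tree cannot split the two weighted copies of an artificial point apart, so its leaf probability there is exactly `p_pos`, while the real training points still receive their hard labels; any probabilistic classifier accepting `sample_weight` could be substituted.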