Abstracts of the IEICE General Conference
D-20-6
Classification Using Single-view or Multi-views: Which One is Better?
○Sang-Woon Kim, Thanh-Binh Le (Myongji Univ.)
In multi-view learning approaches, multiple classifiers, one per view, are trained, and the most confident predictions each classifier makes on unlabeled data (U) are used to teach the others. In co-training, for example, a single-view data set (X) is first divided into two mutually exclusive subsets (X1 and X2) through reshaping or decomposition. Two classifiers, trained on X1 and X2 separately, then supply their most confident predictions on U so that each classifier's training data can be reinforced by the other. However, when traditional feature selection methods are used for this division, the discriminative information inherent in X tends to be concentrated into X1 or X2, meaning that the other subset is noisy. This observation raises an interesting question to be investigated: is the accuracy of a multi-view classifier, designed using X1 and X2 separately, superior or inferior to that of a conventional single-view classifier designed using X1 and X2 together?
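The co-training procedure described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact experimental setup: the data set, the even split of features into X1 and X2, the choice of Gaussian naive Bayes classifiers, and the number of rounds are all assumptions made for the example.

```python
# Minimal co-training sketch (assumed setup, not the paper's exact method):
# two classifiers, one per feature view, take turns labeling the unlabeled
# sample they are most confident about and feed it into a shared training pool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Divide the single-view data X into two mutually exclusive subsets X1, X2
# (here simply the first and second halves of the feature vector).
X1, X2 = X[:, :10], X[:, 10:]

# Small labeled pool L and a larger unlabeled pool U.
labeled = rng.choice(len(y), size=30, replace=False)
L_idx = list(labeled)
U_idx = list(np.setdiff1d(np.arange(len(y)), labeled))
y_train = {i: int(y[i]) for i in L_idx}   # labels known to the learners

clf1, clf2 = GaussianNB(), GaussianNB()

for _ in range(5):                        # a few co-training rounds
    idx = np.array(L_idx)
    labels = np.array([y_train[i] for i in L_idx])
    clf1.fit(X1[idx], labels)
    clf2.fit(X2[idx], labels)
    for clf, view in ((clf1, X1), (clf2, X2)):
        if not U_idx:
            break
        U = np.array(U_idx)
        proba = clf.predict_proba(view[U])
        # Most confident unlabeled sample for this view's classifier.
        best = int(U[np.argmax(proba.max(axis=1))])
        y_train[best] = int(clf.predict(view[[best]])[0])
        U_idx.remove(best)
        L_idx.append(best)                # reinforce the shared training pool
```

Each round moves two pseudo-labeled samples from U into the training pool, one chosen by each view's classifier; the question posed in the abstract is whether this two-view design beats a single classifier trained on the undivided X.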