Show simple item record

dc.contributor.author	Kim, Sang-Woon
dc.contributor.author	Oommen, B. John
dc.date.accessioned	2011-02-15T13:34:37Z
dc.date.available	2011-02-15T13:34:37Z
dc.date.issued	2010
dc.identifier.citation	Kim, S.-W., & Oommen, B. J. (2010). On Optimizing Locally Linear Nearest Neighbour Reconstructions Using Prototype Reduction Schemes. In J. Li (Ed.), AI 2010: Advances in Artificial Intelligence (Vol. 6464, pp. 153-163). Springer Berlin / Heidelberg.	en_US
dc.identifier.uri	http://hdl.handle.net/11250/137856
dc.description	Published version of a chapter from the book AI 2010: Advances in Artificial Intelligence, Springer. Also available on SpringerLink: http://dx.doi.org/10.1007/978-3-642-17432-2_16	en_US
dc.description.abstract	This paper concerns the use of Prototype Reduction Schemes (PRS) to optimize the computations involved in typical k-Nearest Neighbor (k-NN) rules. These rules have been successfully used for decades in statistical Pattern Recognition (PR) applications, and have numerous applications because of their known error bounds. For a given data point of unknown identity, the k-NN rule combines the information about the a priori target classes (values) of the selected neighbors to, for example, predict the target class of the tested sample. Recently, an implementation of the k-NN, named the Locally Linear Reconstruction (LLR) [11], has been proposed. The salient feature of the latter is that by invoking a quadratic optimization process, it is capable of systematically setting model parameters, such as the number of neighbors (specified by the parameter k) and the weights. However, the LLR takes more time than other conventional methods when it is applied to classification tasks. To overcome this problem, we propose a strategy of using a PRS to solve the optimization problem efficiently. In this paper, we demonstrate, first of all, that by completely discarding the points not included by the PRS, we obtain a reduced set of sample points, using which, in turn, the quadratic optimization problem can be solved far more expediently. The values of the corresponding indices are comparable to those obtained with the original training set (i.e., the one which considers all the data points), even though the computations required to obtain the prototypes and the corresponding classification accuracies are noticeably less. The proposed method has been tested on artificial and real-life data sets, and the results obtained are very promising and show that the approach has potential in PR applications.	en_US
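The quadratic optimization step mentioned in the abstract can be illustrated with a minimal NumPy sketch of the sum-to-one constrained least-squares weight solve that underlies locally linear reconstruction. The function name, the Tikhonov regularization constant, and the closed-form Gram-matrix formulation are illustrative assumptions here; the published LLR formulation may differ in detail (for instance, it may impose additional constraints on the weights).

```python
import numpy as np

def llr_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights for a query point x from its k nearest
    neighbours: minimise ||x - sum_i w_i * neighbors[i]||^2 subject to
    sum_i w_i = 1 (closed-form sketch of the quadratic step; the name
    and regularisation scheme are illustrative, not from the paper)."""
    Z = neighbors - x                       # (k, d): neighbour differences
    G = Z @ Z.T                             # (k, k) local Gram matrix
    # Tikhonov regularisation keeps G invertible when k > d or the
    # neighbours are (nearly) affinely dependent
    G = G + reg * np.trace(G) * np.eye(len(G))
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                      # enforce the sum-to-one constraint
```

In a classification setting, the resulting weights would then be combined with the neighbors' class labels (e.g., a weighted vote) to predict the target class of the tested sample; the paper's point is that running this solve over a PRS-reduced prototype set is far cheaper than over the full training set.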
dc.language.iso	eng	en_US
dc.publisher	Springer Berlin/Heidelberg	en_US
dc.relation.ispartofseries	Lecture Notes in Computer Science;
dc.title	On Optimizing Locally Linear Nearest Neighbour Reconstructions Using Prototype Reduction Schemes	en_US
dc.type	Chapter	en_US
dc.type	Peer reviewed	en_US
dc.subject.nsi	VDP::Mathematics and natural science: 400::Mathematics: 410::Statistics: 412	en_US
dc.subject.nsi	VDP::Mathematics and natural science: 400::Information and communication science: 420::Algorithms and computability theory: 422	en_US
dc.source.pagenumber	153-163	en_US


Associated file(s)


This item appears in the following collection(s)
