A review of kernel Fisher discriminant analysis for statistical classification
Discriminant analysis is a well-known and frequently used class of statistical classification procedures. A recent addition to this class is kernel Fisher discriminant analysis (KFDA). This procedure, originally proposed in the machine learning literature, has so far been applied mainly in areas such as artificial intelligence, machine learning and pattern recognition, with little exposure in statistics. Given its excellent performance in difficult classification problems, it is important for statisticians working in classification to become acquainted with KFDA. In this paper we discuss the rationale behind KFDA, comment on practical aspects that must be addressed in its implementation, describe a new proposal for determining the value of the Gaussian kernel parameter, and report the results of a simulation study comparing KFDA and linear discriminant analysis.
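To fix ideas, the two-class KFDA procedure summarised above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the Gaussian kernel width `sigma` and the ridge regularisation `reg` are placeholder choices (the paper proposes its own method for choosing the kernel parameter), and all function names are ours. The dual coefficients alpha solve N alpha = M0 - M1, where M0, M1 are the class mean kernel vectors and N is the within-class scatter in the kernel-induced feature space.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kfda_fit(X, y, sigma=1.0, reg=1e-3):
    """Two-class kernel Fisher discriminant (dual form).

    Returns the dual coefficient vector alpha and a classification
    threshold placed midway between the projected class means.
    reg is an illustrative ridge term that keeps N invertible.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    M0 = K[:, idx0].mean(axis=1)          # class-0 mean kernel vector
    M1 = K[:, idx1].mean(axis=1)          # class-1 mean kernel vector
    # Within-class scatter in feature space: sum_j K_j (I - 1/n_j) K_j^T
    N = np.zeros((n, n))
    for idx in (idx0, idx1):
        Kj, nj = K[:, idx], len(idx)
        N += Kj @ (np.eye(nj) - np.full((nj, nj), 1.0 / nj)) @ Kj.T
    alpha = np.linalg.solve(N + reg * np.eye(n), M0 - M1)
    proj = K @ alpha                      # projections of the training points
    thresh = 0.5 * (proj[idx0].mean() + proj[idx1].mean())
    return alpha, thresh

def kfda_predict(X_train, alpha, thresh, X_new, sigma=1.0):
    # Class 0 projects above the threshold since alpha points from M1 to M0
    proj = rbf_kernel(X_new, X_train, sigma) @ alpha
    return (proj < thresh).astype(int)

# Toy example: concentric classes, where LDA fails but KFDA succeeds
rng = np.random.default_rng(0)
n = 60
r0, t0 = rng.uniform(0.0, 0.8, n), rng.uniform(0, 2 * np.pi, n)
r1, t1 = rng.uniform(1.8, 2.5, n), rng.uniform(0, 2 * np.pi, n)
X = np.vstack([np.c_[r0 * np.cos(t0), r0 * np.sin(t0)],
               np.c_[r1 * np.cos(t1), r1 * np.sin(t1)]])
y = np.r_[np.zeros(n, dtype=int), np.ones(n, dtype=int)]

alpha, thresh = kfda_fit(X, y, sigma=1.0)
accuracy = (kfda_predict(X, alpha, thresh, X) == y).mean()
```

Because the classes are concentric, no linear discriminant separates them in the input space, but the Gaussian kernel makes them linearly separable in the feature space, so the training accuracy is essentially perfect; this is the kind of "difficult classification problem" on which KFDA excels.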