ITEM VIEW

Projected naive Bayes

dc.contributor.advisor: Hofmeyr, David [en_ZA]
dc.contributor.author: Melonas, Michail C. [en_ZA]
dc.contributor.other: Stellenbosch University. Faculty of Economic and Management Sciences. Dept. of Statistics and Actuarial Science. [en_ZA]
dc.date.accessioned: 2020-02-18T06:26:12Z
dc.date.accessioned: 2020-04-28T12:06:42Z
dc.date.available: 2020-02-18T06:26:12Z
dc.date.available: 2020-04-28T12:06:42Z
dc.date.issued: 2020-03
dc.identifier.uri: http://hdl.handle.net/10019.1/107862
dc.description: Thesis (MCom)--Stellenbosch University, 2020. [en_ZA]
dc.description.abstract: ENGLISH SUMMARY : Naïve Bayes is a well-known statistical model that is recognised by the Institute of Electrical and Electronics Engineers (IEEE) as being among the top ten data mining algorithms. It performs classification by making the strong assumption of class-conditional mutual statistical independence. Although this assumption is unlikely to be an accurate representation of the true statistical dependencies, naïve Bayes nevertheless delivers accurate classification in many domains. This success can be likened to that of linear regression, which provides reliable estimation in problems where exact linearity is not realistic. There is a rich body of literature on improving naïve Bayes. This dissertation does so via a projection matrix that provides an alternative representation of the data of interest. We introduce Projected Gaussian naïve Bayes and Projected Kernel naïve Bayes as naïve-Bayes-type classifiers that rely on Gaussianity and kernel density estimation, respectively. The proposed method extends the flexibility of standard naïve Bayes, maintaining its simplicity and efficiency while improving its accuracy. Our method is shown to be competitive with several popular classifiers on real-world data. In particular, its classification accuracy is compared to that of linear and quadratic discriminant analysis, the support vector machine and the random forest. There is a close connection between our proposal and the application of naïve Bayes to a class-conditionally conducted independent component analysis. In addition to improving classification accuracy, the proposed method also provides a tool for visually representing data in a low-dimensional space. This visualisation aspect of our method is discussed with respect to the connection to independent component analysis. Our method is shown to give a better visual representation than linear discriminant analysis does on a number of real-world data sets. [en_ZA]
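To make the idea in the abstract concrete, the following is a minimal sketch of a "projected Gaussian naive Bayes" classifier: data are mapped through a projection matrix and Gaussian naive Bayes (class-conditional independence of the projected features) is applied in the projected space. This is an illustration only, not the thesis's method: the class name `ProjectedGaussianNB` is hypothetical, and the projection here is a random orthonormal matrix, whereas the dissertation is concerned with choosing the projection to improve classification.

```python
import numpy as np

class ProjectedGaussianNB:
    """Illustrative sketch: random orthonormal projection + Gaussian naive Bayes.

    The thesis optimises the projection matrix; here W is a fixed random
    orthonormal matrix, used only to show the structure of the model.
    """

    def __init__(self, n_components, random_state=0):
        self.n_components = n_components
        self.random_state = random_state

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        d = X.shape[1]
        # Random orthonormal projection via QR decomposition of a Gaussian
        # matrix -- a stand-in for an optimised projection.
        A = rng.standard_normal((d, self.n_components))
        self.W_, _ = np.linalg.qr(A)
        Z = X @ self.W_  # projected data, shape (n, n_components)
        self.classes_ = np.unique(y)
        self.priors_ = np.array([(y == c).mean() for c in self.classes_])
        # Per-class, per-feature Gaussian parameters in the projected space
        # (the naive Bayes independence assumption is made on these features).
        self.means_ = np.array([Z[y == c].mean(axis=0) for c in self.classes_])
        self.vars_ = np.array([Z[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict(self, X):
        Z = X @ self.W_
        # Log joint per class: log prior + sum of univariate Gaussian log-densities
        log_post = np.log(self.priors_)[None, :] + np.stack(
            [
                -0.5 * (np.log(2 * np.pi * self.vars_[k])
                        + (Z - self.means_[k]) ** 2 / self.vars_[k]).sum(axis=1)
                for k in range(len(self.classes_))
            ],
            axis=1,
        )
        return self.classes_[np.argmax(log_post, axis=1)]
```

The kernel variant described in the abstract would replace the per-feature Gaussian log-densities with univariate kernel density estimates on the projected features, leaving the rest of the structure unchanged.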
dc.description.abstract: AFRIKAANS SUMMARY : Naïve Bayes is a well-known statistical model that is recognised by the Institute of Electrical and Electronics Engineers (IEEE) as one of the top ten data mining algorithms. It performs classification by making the strong assumption of class-conditional mutual statistical independence. Although this assumption is unlikely to be an accurate representation of the true statistical dependencies, naïve Bayes nevertheless delivers accurate classification in many domains. This success is related to that of linear regression, which gives reliable estimation in problems where exact linearity is not realistic. There is a rich literature on improving naïve Bayes. This dissertation addresses this via a projection matrix that offers an alternative representation of the data of interest. We propose "Projected Gaussian naïve Bayes" and "Projected Kernel naïve Bayes" as naïve-Bayes-type classifiers that make use of the Normal distribution and kernel density estimation, respectively. The proposed method extends the flexibility of the standard naïve Bayes. The approach maintains the simplicity and efficiency of naïve Bayes while improving its accuracy. Our method is shown to be competitive with several popular classifiers on real-world data. In particular, the classification accuracy of our method is compared with that of linear and quadratic discriminant analysis, the "support vector machine" and the "random forest". There is a close connection between our proposal and the application of naïve Bayes to a class-conditionally conducted independent component analysis. Besides improving classification accuracy, the proposed method also provides a tool for visually representing data in a low-dimensional space. This visualisation aspect of our method is discussed with reference to the connection to independent component analysis. Our method is shown to give a better visual representation than linear discriminant analysis does on a number of real-world data sets. [af_ZA]
dc.format.extent: xvii, 75 pages : illustrations, includes annexures
dc.language.iso: en_ZA [en_ZA]
dc.publisher: Stellenbosch : Stellenbosch University [en_ZA]
dc.subject: Bayesian statistical decision theory [en_ZA]
dc.subject: Correspondence analysis (Statistics) [en_ZA]
dc.subject: Gaussian distribution [en_ZA]
dc.subject: Kernel functions [en_ZA]
dc.subject: Kernel density estimation [en_ZA]
dc.subject: Data mining [en_ZA]
dc.subject: Computer algorithms [en_ZA]
dc.subject: UCTD
dc.title: Projected naive Bayes [en_ZA]
dc.type: Thesis [en_ZA]
dc.description.version: Masters [en_ZA]
dc.rights.holder: Stellenbosch University [en_ZA]

