Approximate message passing algorithms for latent Dirichlet allocation

dc.contributor.advisor: Du Preez, Johan [en_ZA]
dc.contributor.author: Taylor, Rebecca [en_ZA]
dc.contributor.other: Stellenbosch University. Faculty of Engineering. Dept. of Electrical and Electronic Engineering. [en_ZA]
dc.date.accessioned: 2023-03-06T08:21:25Z [en_ZA]
dc.date.accessioned: 2023-05-18T07:17:05Z [en_ZA]
dc.date.available: 2023-03-06T08:21:25Z [en_ZA]
dc.date.available: 2023-05-18T07:17:05Z [en_ZA]
dc.date.issued: 2023-03 [en_ZA]
dc.description: Thesis (PhD)--Stellenbosch University, 2023. [en_ZA]
dc.description.abstract: ENGLISH ABSTRACT: Latent Dirichlet allocation (LDA) is a hierarchical Bayesian model that is best known for its ability to extract latent semantic topics from text corpora. Since exact inference of the posterior distribution is intractable, a variety of approximate inference techniques can be employed for LDA. The traditional variational Bayesian inference (VB) approach is sufficient for most text topic modelling problems. VB, however, does not perform topic extraction well where the corpora of documents are small, where document sizes are limited, or where topic overlap is large. In these cases, collapsed Gibbs sampling, a slower, computationally more expensive inference method, is typically preferred. In this dissertation, we present two variational message passing algorithms for inference in LDA. Our first algorithm, approximate LBU (designated by the acronym ALBU), is derived by applying loopy belief update (LBU) (also known as the Lauritzen-Spiegelhalter algorithm) where possible, and using ideas from hierarchical sampling to derive messages where conjugacy does not apply. The second algorithm, which we designate variational message passing plus (VMP+), is based on variational message passing (VMP), the message passing variant of VB. In VMP+, instead of following the VMP algorithm exactly, we relax the mean-field assumption between parent and child nodes that are in the same cluster of the cluster graph representation of LDA. This results in an algorithm that is similar to VB but performs better at topic extraction. To evaluate ALBU and VMP+, we use four text corpora (20 Newsgroups, 6 Newsgroups, Covid tweets, and Bible verses), all of which contain short texts, to compare the performance of our two algorithms with VB and collapsed Gibbs sampling using topic coherence scores.
Because VB typically performs well on large data sets, we apply these two new variants specifically to smaller data sets where topic extraction is expected to be difficult. In addition to the real-life text documents described above, we perform topic extraction on a variety of simulated corpora obtained using different hyperparameter settings, to enable comparison between the estimated results and the true distributions. Here, Kullback-Leibler divergence (KLD) is used to evaluate and compare inference algorithm performance. Based on results obtained for both the real-life and synthetic data, we show that both algorithms outperform VB, and that the performance of ALBU is similar to that of collapsed Gibbs sampling only when the sampler is given an extended period of time to converge. ALBU appears to be an efficient approach to inference in LDA in certain situations where topic extraction is expected to be difficult. VMP+ is an alternative inference algorithm for LDA that is simpler than VB and also typically extracts more coherent topics than VB. On the basis of our experiments, we summarise the differences between the variational algorithms and propose reasons for these differences. We conclude by providing recommendations regarding the methods that we anticipate would be most suitable for inference on particular corpus types. [en_ZA]
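The simulated corpora described in the abstract are drawn from the LDA generative model under chosen hyperparameters. A minimal sketch of that generative process (the function name and the parameter values in the example call are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def generate_lda_corpus(n_docs, doc_len, n_topics, vocab_size, alpha, beta, seed=0):
    """Sample a synthetic corpus from the LDA generative model."""
    rng = np.random.default_rng(seed)
    # Topic-word distributions: one Dirichlet(beta) draw per topic.
    phi = rng.dirichlet(np.full(vocab_size, beta), size=n_topics)
    docs = []
    for _ in range(n_docs):
        # Per-document topic mixture from Dirichlet(alpha).
        theta = rng.dirichlet(np.full(n_topics, alpha))
        # For each word slot: pick a topic, then a word from that topic.
        topics = rng.choice(n_topics, size=doc_len, p=theta)
        words = [int(rng.choice(vocab_size, p=phi[z])) for z in topics]
        docs.append(words)
    return docs, phi

# Small alpha/beta give sparse, well-separated topics; larger values
# increase topic overlap, the regime where VB is reported to struggle.
docs, phi = generate_lda_corpus(n_docs=5, doc_len=20, n_topics=3,
                                vocab_size=50, alpha=0.1, beta=0.01)
```

Varying `alpha` and `beta` in such a generator is one way to produce corpora of differing difficulty while retaining access to the true distributions for evaluation.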
dc.description.abstract: AFRIKAANS OPSOMMING: Latent Dirichlet allocation (LDA) is a hierarchical Bayesian model known for its ability to extract latent semantic topics from text corpora. Since exact LDA computations are mathematically intractable, various inference techniques are applied as alternatives. The traditional variational Bayes (VB) approach is sufficient for most topic estimation problems in text. VB is, however, less suitable for topic estimation when the corpus of documents is small, when documents are of limited size, or when topic overlap is large. In these cases the slower inference method, namely collapsed Gibbs sampling (CGS), is preferable. In this dissertation we propose two new variational message passing (VMP) algorithms for inference in LDA. The first algorithm, which we call approximate loopy belief update (ALBU), is obtained by applying loopy belief update (LBU), also known as the Lauritzen-Spiegelhalter algorithm, together with hierarchical sampling to approximate messages in cases where summation over probability distributions is not practical. The second algorithm, which we call variational message passing plus (VMP+), is based on VMP but uses concepts from ALBU. To evaluate ALBU and VMP+, we use four text corpora as test data (20 Newsgroups, 6 Newsgroups, Covid tweets, and Bible verses, all short texts) and topic coherence as a metric. The two algorithms are compared specifically with VB and CGS. Because VB performs well on large data sets, we instead use smaller data sets for which topic extraction is likely to be difficult. In addition to the real-world text documents described above, we also performed topic estimation on various synthetic corpora of varying complexity.
This enables us to draw comparisons between the algorithms' results and the true underlying probability distributions. Here the Kullback-Leibler divergence (KLD) is used as the comparison metric. Based on the results obtained from the real and synthetic data, we show that both our algorithms outperform VB, and that ALBU performs as well as CGS when CGS is given more time. ALBU appears to be a simple yet efficient approach to LDA inference in cases where topic extraction can be expected to be difficult. VMP+ is an alternative algorithm that is simpler and more accurate than VB. Based on our experiments, we provide a summary of the differences between the respective variational algorithms and propose possible explanations. We conclude with recommendations on the inference techniques that would likely be most appropriate to apply to various types of text corpora. [af_ZA]
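Both abstracts use the Kullback-Leibler divergence to compare estimated distributions against the true ones on synthetic data. A minimal sketch of the discrete-case computation (the function name and smoothing constant are illustrative, not from the thesis):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, with smoothing to avoid log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```

Note that KLD is asymmetric: `kl_divergence(p, q)` generally differs from `kl_divergence(q, p)`, so the direction of comparison (true vs. estimated) matters when reporting results.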
dc.description.version: Doctorate [en_ZA]
dc.format.extent: xxxi, 213 pages : illustrations [en_ZA]
dc.identifier.uri: http://hdl.handle.net/10019.1/127341 [en_ZA]
dc.language.iso: en_ZA [en_ZA]
dc.subject.lcsh: Latent semantic indexing [en_ZA]
dc.subject.lcsh: Natural language processing (Computer science) [en_ZA]
dc.subject.lcsh: Machine learning [en_ZA]
dc.subject.lcsh: Latent Dirichlet Allocation [en_ZA]
dc.title: Approximate message passing algorithms for latent Dirichlet allocation [en_ZA]
dc.type: Thesis [en_ZA]
Files
Original bundle (1 of 1)
Name: taylor_approximate_2023.pdf
Size: 7.69 MB
Format: Adobe Portable Document Format