Semi-supervised learning in computer vision

dc.contributor.advisor: Brink, Willie
dc.contributor.author: Louw, Christiaan
dc.contributor.other: Stellenbosch University. Faculty of Science. Dept. of Applied Mathematics.
dc.date.accessioned: 2022-11-22T09:04:01Z
dc.date.accessioned: 2023-01-16T12:51:09Z
dc.date.available: 2022-11-22T09:04:01Z
dc.date.available: 2023-01-16T12:51:09Z
dc.date.issued: 2022-12
dc.description: Thesis (MSc) -- Stellenbosch University, 2022.
dc.description.abstract: ENGLISH ABSTRACT: Deep learning models have proven successful at tasks such as image classification. A major drawback of supervised learning is the need for large labelled datasets to obtain good classification accuracy. This can be a barrier to those in resource-constrained environments wanting to implement a classification model in a previously unexplored field. Recent advances in unsupervised learning methods, such as contrastive learning, have made it viable to perform representation learning without labels, which, when combined with supervised learning on relatively small labelled datasets, can lead to state-of-the-art performance on image classification tasks. We study this technique, called semi-supervised learning, and investigate three semi-supervised learning frameworks. Our work starts by discussing the implementations of the SimCLR, SimSiam and FixMatch frameworks. We compare the results of each framework on the CIFAR-10 and STL-10 datasets in label-scarce scenarios and show that: (1) all frameworks outperform a purely supervised learning baseline when the number of labels is reduced, (2) the improvement of the frameworks over the supervised baseline grows as the number of available labels decreases, and (3) in most cases, the semi-supervised learning frameworks match or outperform the supervised baseline with only 10% as many labels. We also investigate the performance of the SimCLR and SimSiam frameworks on class-imbalanced versions of the CIFAR-10 and STL-10 datasets, and find that: (1) the improvements over the supervised learning baseline are less substantial than in the results with fewer overall, but balanced, class labels, and (2) with basic oversampling implemented the results improve significantly, with the semi-supervised learning frameworks benefiting the most. The results in this thesis indicate that unsupervised representation learning can indeed substantially lower the number of labelled images required for successful image classification. We also show that each of the frameworks considered in this work serves this function well. (An illustrative sketch of the contrastive loss underlying SimCLR follows the record below.)
dc.description.abstract"Geen opsomming beskikbaar"en_ZA
dc.description.version: Masters
dc.format.extent: iv, 61 pages : illustrations
dc.identifier.uri: http://hdl.handle.net/10019.1/126121
dc.language.iso: en_ZA
dc.publisher: Stellenbosch : Stellenbosch University
dc.rights.holder: Stellenbosch University
dc.subject: Supervised learning (Machine learning)
dc.subject: Computer vision
dc.subject: Deep learning
dc.subject: Image classification
dc.subject: UCTD
dc.title: Semi-supervised learning in computer vision
dc.type: Thesis
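
Because the abstract leans on contrastive learning as the label-free representation learning step, the following is a minimal sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss that SimCLR optimizes. This is an illustrative PyTorch reconstruction, not the thesis author's code: the function name, batch layout and temperature value are assumptions.

```python
# Illustrative sketch of the SimCLR NT-Xent contrastive loss.
# Names and the temperature value are assumptions, not the thesis code.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) projection-head outputs for two augmentations
    of the same N images.
    """
    N = z1.shape[0]
    # Stack both views and normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.T / temperature                          # (2N, 2N) similarity logits
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # For row i the positive is the other view of the same image:
    # rows 0..N-1 pair with N..2N-1 and vice versa.
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)])
    return F.cross_entropy(sim, targets)

# Example usage with random tensors standing in for encoder outputs:
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2))
```

The loss pulls the two views of each image together while pushing them away from the other 2N - 2 samples in the batch, which is what lets the encoder learn useful representations without any labels.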
Files
Original bundle
Name: louw_learning_2022.pdf
Size: 6.37 MB
Format: Adobe Portable Document Format