Semi-supervised learning in computer vision
dc.contributor.advisor | Brink, Willie | en_ZA |
dc.contributor.author | Louw, Christiaan | en_ZA |
dc.contributor.other | Stellenbosch University. Faculty of Science. Dept. of Applied Mathematics. | en_ZA |
dc.date.accessioned | 2022-11-22T09:04:01Z | en_ZA |
dc.date.accessioned | 2023-01-16T12:51:09Z | en_ZA |
dc.date.available | 2022-11-22T09:04:01Z | en_ZA |
dc.date.available | 2023-01-16T12:51:09Z | en_ZA |
dc.date.issued | 2022-12 | en_ZA |
dc.description | Thesis (MSc) -- Stellenbosch University, 2022. | en_ZA |
dc.description.abstract | ENGLISH ABSTRACT: Deep learning models have proven to be successful at tasks such as image classification. A major drawback of supervised learning is the need for large labelled datasets to obtain good classification accuracy. This can be a barrier to those in resource-constrained environments wanting to implement a classification model in a previously unexplored field. Recent advancements in unsupervised learning methods, such as contrastive learning, have made it viable to perform representation learning without labels, which, when combined with supervised learning on relatively small labelled datasets, can lead to state-of-the-art performance on image classification tasks. We study this technique, called semi-supervised learning, and provide an investigation into three semi-supervised learning frameworks. Our work starts by discussing the implementations of the SimCLR, SimSiam and FixMatch frameworks. We compare the results of each framework on the CIFAR-10 and STL-10 datasets in label-scarce scenarios and show that: (1) all frameworks outperform a purely supervised learning baseline when the number of labels is reduced, (2) the improvement in performance of the frameworks over the supervised baseline increases as the number of available labels is decreased, and (3) in most cases, the semi-supervised learning frameworks are able to match or outperform the supervised baseline with 10% as many labels. We also investigate the performance of the SimCLR and SimSiam frameworks on class-imbalanced versions of the CIFAR-10 and STL-10 datasets, and find that: (1) the improvements over the supervised learning baseline are less substantial than in the results with fewer overall, but balanced, class labels, and (2) with basic oversampling implemented the results are significantly improved, with the semi-supervised learning frameworks benefiting the most.
The results in this thesis indicate that unsupervised representation learning can indeed lower the number of labelled images required for successful image classification by a significant degree. We also show that each of the frameworks considered in this work serves this function well. | en_ZA |
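The abstract describes unsupervised representation learning via contrastive objectives, as in SimCLR. As a minimal illustrative sketch (not the thesis's implementation), the NT-Xent loss used by SimCLR can be written in NumPy as follows; the function name, batch shapes and temperature value here are assumptions for illustration only:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1[i] and z2[i] are embeddings of two augmented views of the
    same image; every other embedding in the batch is a negative.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # the positive partner of row i sits n positions away (mod 2n)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row's softmax against its positive index
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Minimising this loss pulls the two views of each image together while pushing apart all other images in the batch, which is how representations can be learned without any labels before the supervised fine-tuning stage.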
dc.description.abstract | "No summary available" | en_ZA |
dc.description.version | Masters | en_ZA |
dc.format.extent | iv, 61 pages : illustrations | en_ZA |
dc.identifier.uri | http://hdl.handle.net/10019.1/126121 | en_ZA |
dc.language.iso | en_ZA | en_ZA |
dc.publisher | Stellenbosch : Stellenbosch University | en_ZA |
dc.rights.holder | Stellenbosch University | en_ZA |
dc.subject | Supervised learning (Machine learning) | en_ZA |
dc.subject | Computer vision | en_ZA |
dc.subject | Deep learning | en_ZA |
dc.subject | Image classification | en_ZA |
dc.subject | UCTD | en_ZA |
dc.title | Semi-supervised learning in computer vision | en_ZA |
dc.type | Thesis | en_ZA |
Files
Original bundle
- Name: louw_learning_2022.pdf
- Size: 6.37 MB
- Format: Adobe Portable Document Format