Automatic video captioning using spatiotemporal convolutions on temporally sampled frames
dc.contributor.advisor | Brink, Willie | en_ZA |
dc.contributor.author | Nyatsanga, Simbarashe Linval | en_ZA |
dc.contributor.other | Stellenbosch University. Faculty of Science. Department of Mathematical Sciences (Applied Mathematics). | en_ZA |
dc.date.accessioned | 2020-02-03T10:54:01Z | |
dc.date.accessioned | 2020-04-28T12:04:21Z | |
dc.date.available | 2020-02-03T10:54:01Z | |
dc.date.available | 2020-04-28T12:04:21Z | |
dc.date.issued | 2020-03 | |
dc.description | Thesis (MSc)--Stellenbosch University, 2020. | en_ZA |
dc.description.abstract | ENGLISH ABSTRACT: Being able to concisely describe content in a video has tremendous potential to enable better categorisation, index-based search and fast content-based retrieval from large video databases. Automatic video captioning requires the simultaneous detection of local and global motion dynamics of objects, scenes and events, to summarise them into a single coherent natural language description. Given the size and complexity of video data, it is important to understand how much temporally coherent visual information is required to adequately describe the video. In order to understand the association between video frames and sentence descriptions, we carry out a systematic study to determine how the quality of generated captions changes with respect to densely or sparsely sampling video frames in the temporal dimension. We conduct a detailed literature review to better understand the background work in image and video captioning. We describe our methodology for building a video caption generator, which is based on deep neural networks called encoder-decoders. We then outline the implementation details of our video caption generator and our experimental setup. In our experimental setup, we explore the role of word embeddings for generating sensible captions with pretrained, jointly trained and finetuned embeddings. We train and evaluate our caption generator on the Microsoft Video Description (MSVD) dataset. Using the standard caption generation evaluation metrics, namely BLEU, METEOR, CIDEr and ROUGE, our experimental results show that sparsely sampling video frames with either finetuned or jointly trained embeddings results in the best caption quality. Our results are promising in the sense that high-quality videos with a large memory footprint could be categorised through a sensible description obtained by sampling a few frames. Finally, our method can be extended such that the sampling rate adapts according to the quality of the video. | en_ZA |
dc.description.abstract | AFRIKAANS SUMMARY: The ability to concisely describe a video's content has tremendous potential for better categorisation, index-based searching, and fast content-based retrieval from large video databases. The automatic generation of video captions requires the simultaneous detection of the local and global motion dynamics of objects, scenes and events, to be summarised into a single, coherent, natural-language description. Owing to the size and complexity of video data, it is important to understand how much temporally coherent visual information is needed to describe the video adequately. In order to understand the relationship between video frames and sentence descriptions, we carry out a systematic study to determine how the quality of generated captions changes as video frames are sampled more densely or more sparsely in the temporal dimension. We conduct a detailed literature study to better understand existing work in the generation of image and video captions. We describe our methodology for building a video caption generator, which is based on deep neural networks called encoder-decoders. We then give an outline of the implementation details of our video caption generator and our experimental setup. In our experimental setup we investigate the role of word embeddings for generating sensible captions with pretrained, jointly trained, and finetuned embeddings. Our caption generator is trained and evaluated on the Microsoft Video Description (MSVD) dataset. Using standard evaluation metrics, namely BLEU, METEOR, CIDEr and ROUGE, our experimental results show that sparsely sampled video frames, with finetuned or jointly trained embeddings, deliver the best caption quality. Our results are promising in the sense that high-quality videos with large memory requirements can be categorised by means of sensible descriptions obtained from a few frames. Our method can also be extended by adapting the sampling rate according to the quality of the video. | af_ZA |
dc.description.version | Masters | en_ZA |
dc.format.extent | 104 pages : illustrations | en_ZA |
dc.identifier.uri | http://hdl.handle.net/10019.1/107805 | |
dc.language.iso | en_ZA | en_ZA |
dc.publisher | Stellenbosch : Stellenbosch University. | en_ZA |
dc.rights.holder | Stellenbosch University. | en_ZA |
dc.subject | Machine learning | en_ZA |
dc.subject | Video captioning | en_ZA |
dc.subject | Neural networks (Computer science) | en_ZA |
dc.subject | Closed captioning | en_ZA |
dc.subject | Convolutions (Mathematics) | en_ZA |
dc.subject | Motion detectors | en_ZA |
dc.subject | Embeddings (Mathematics) | en_ZA |
dc.subject | UCTD | |
dc.title | Automatic video captioning using spatiotemporal convolutions on temporally sampled frames | en_ZA |
dc.type | Thesis | en_ZA |