Department of Applied Mathematics
Browsing Department of Applied Mathematics by Title
Now showing 1 - 20 of 87
- 3D reconstruction of naturally fragmenting warhead fragments (Stellenbosch : Stellenbosch University, 2024-03). Sequeira, Jose; Smit, Francois; Coetzer, Johannes; Stellenbosch University. Faculty of Science. Dept. of Applied Mathematics.
  ENGLISH ABSTRACT: This study starts with a brief introduction to the South African armaments and ammunition technology industry, highlighting its historical alignment with NATO standards. The focal point is an investigation into the potential of an existing NATO-compliant icosahedral imaging system to ascertain additional geometric features of fragments, such as mass and volume. Building on prior work, the author proposes leveraging the extensive image data sets acquired through the icosahedral imaging system to determine these features. The literature study explores two key approaches: stereo vision and shape-from-silhouette 3D reconstruction. The latter emerges as the favoured method, particularly because of how well the technique complements the icosahedral camera arrangement. Subsequently, attention is directed toward the electro-mechanical design of the icosahedral imaging instrument and the creation of shape-from-silhouette reconstruction software. Challenges in calibrating the multi-imaging system are addressed through hardware upgrades. The study advances to experimental results, involving the analysis of fragments recovered from a warhead arena test. Average presented areas are determined, and 3D reconstructed models are obtained using the shape-from-silhouette technique, with errors ranging from 2% to 54%. A detailed discussion follows, highlighting the similar average presented area measurements obtained for different icosahedral imaging systems. The inclusion of shadow regions is noted to significantly impact the accuracy of the 3D reconstruction process. Furthermore, slender fragments exhibit smaller errors than non-slender ones. The study concludes by affirming the achievement of the primary objectives, namely the ability to use fragment silhouettes obtained during average presented area measurements to produce close-fit 3D models of fragments. Recommendations, improvements, and suggestions for future research are provided, emphasising the potential for enhanced reconstruction accuracy, particularly for non-slender fragments, with increased camera deployment.
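  A minimal sketch of the visual-hull carving that underlies shape-from-silhouette reconstruction, assuming calibrated 3x4 projection matrices and binary silhouette masks are available (all names and the voxel-grid setup are illustrative, not the thesis implementation):

```python
import numpy as np

def carve_visual_hull(silhouettes, proj_mats, grid_pts):
    """Keep the voxel centres whose projections land inside every silhouette.

    silhouettes : list of HxW binary masks, one per camera
    proj_mats   : list of 3x4 camera projection matrices
    grid_pts    : Nx3 array of candidate voxel centres in world coordinates
    """
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])  # Nx4
    inside = np.ones(len(grid_pts), dtype=bool)
    for mask, P in zip(silhouettes, proj_mats):
        uvw = homog @ P.T                                # Nx3 homogeneous pixels
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)  # pixel columns
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)  # pixel rows
        h, w = mask.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]] > 0
        inside &= hit                                    # carve away misses
    return grid_pts[inside]
```

  Volume then follows as the number of retained voxels times the voxel volume, and mass as volume times an assumed material density.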
- Accurate camera position determination by means of moiré pattern analysis (Stellenbosch : Stellenbosch University, 2015-03). Zuurmond, Gideon Joubert; Brink, Willie; Herbst, B. M.; Stellenbosch University. Faculty of Science. Department of Applied Mathematics.
  ENGLISH ABSTRACT: We introduce a method for determining the position of a camera with accuracy beyond that which is obtainable through conventional methods, using a single image of a specially constructed calibration object. This is achieved by analysing the moiré pattern that emerges when two high spatial frequency patterns are superimposed, such that one pattern on a plane is observed through another pattern on a second, semi-transparent parallel plane, with the geometry of both the patterns and the planes known. Such an object can be created by suspending printed glass over printed paper or by suspending printed glass over a high resolution video display such as an OLED display or LCD. We show how the camera’s coordinate along the axis perpendicular to the planes can be estimated directly from frequency analysis of the moiré pattern relative to a set of guide points in one of the planes. This method does not require any prior camera knowledge. We further show how the choice of the patterns allows, within limits, arbitrary accuracy of this coordinate estimate at the cost of a stricter limit on the span along that coordinate for which the technique is usable. This improved accuracy is illustrated in simulation. With a sufficiently accurate estimate of the camera’s full set of 3D coordinates, obtained by conventional methods, we show how phase analysis of the moiré pattern in relation to the guides allows calculation of a new estimate of position in the two axes parallel to the planes. This new estimate is shown in simulation to offer significant improvement in accuracy.
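  The depth sensitivity of such a method comes from the classic beat identity for two superimposed gratings; the notation below (frequencies f1, f2) is ours, as a sketch of the principle rather than the thesis's derivation:

```latex
\cos(2\pi f_1 x)\,\cos(2\pi f_2 x)
  = \tfrac{1}{2}\Big[\cos\big(2\pi(f_1 - f_2)x\big) + \cos\big(2\pi(f_1 + f_2)x\big)\Big]
```

  The low-frequency term at |f1 - f2| is the visible moiré. Because the two patterns sit on planes at different depths, their apparent frequencies scale differently with camera position under perspective projection, so a small change in depth produces a proportionally large change in the beat frequency; measuring that beat therefore recovers the depth coordinate with amplified sensitivity.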
- Analysing retinal fundus images with deep learning models (Stellenbosch : Stellenbosch University, 2023-12). Ofosu Mensah, Samuel; Bah, Bubacarr; Brink, Willie; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences. Applied Mathematics Division.
  ENGLISH ABSTRACT: Convolutional neural networks (CNNs) have successfully been used to classify diabetic retinopathy, but they do not provide immediate explanations for their decisions. Explainability is relevant, especially for clinicians. To make results explainable, we use a post-attention technique called gradient-weighted class activation mapping (Grad-CAM) on the penultimate layer of deep learning models to produce localisation maps on retinal fundus images after using them to classify diabetic retinopathy. Moreover, the models were initialised using pre-trained weights obtained from training on the ImageNet dataset, resulting in fewer training epochs and improved performance. Next, we predict cardiovascular risk factors (CVFs) using retinal fundus images. In detail, we use a multi-task learning (MTL) model, since there are several CVFs; the advantage is that several CVFs can be trained for and predicted simultaneously rather than individually. We also investigate the performance of the fundus cameras used to capture the retinal fundus images, and observe superior performance from the desktop fundus cameras compared to the handheld fundus camera. Finally, we propose a hybrid model that fuses convolutions and Transformer encoders, to harness the benefits of both. We compare the performance of the proposed model with other attention-based models and observe on-par performance.
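  A minimal PyTorch sketch of Grad-CAM as described (weight the target convolutional layer's activations by their pooled gradients, apply ReLU, upsample); the model and layer handles are placeholders, not the thesis code:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Grad-CAM localisation map for a 1xCxHxW input tensor."""
    feats, grads = [], []
    fh = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    bh = target_layer.register_full_backward_hook(
        lambda m, gin, gout: grads.append(gout[0]))
    score = model(image)[0, class_idx]            # logit for the chosen class
    model.zero_grad()
    score.backward()
    fh.remove(); bh.remove()
    A, dA = feats[0], grads[0]                    # activations and gradients
    w = dA.mean(dim=(2, 3), keepdim=True)         # global-average-pooled grads
    cam = F.relu((w * A).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:],
                        mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()   # normalised heat map
```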
- Analysis of Extreme Events in the Coastal Engineering Environment (Stellenbosch : Stellenbosch University, 2015-12). Stander, Cornel; Diedericks, Gerhardus Petrus Jacobus; Fidder-Woudberg, Sonia; Stellenbosch University. Faculty of Science. Department of Mathematical Sciences (Applied Mathematics).
  ENGLISH ABSTRACT: Coastal zones are subject to storm events and extreme waves with certain return periods. The return period of such an event is defined as the average time between two independent, consecutive events of similar nature, i.e., with the same return level. Coastal structures have to be designed to provide sufficient protection against flooding or erosion to a desired return level associated with a particular return period, for example 100 years. Statistical analyses of measured wave data over a time series are used for these estimations. In this study, wave data measured by a Datawell Waverider buoy is analysed by means of extreme value analysis. This dataset covers only approximately 18 years. Extreme value theory provides a framework that enables extrapolation in order to estimate the probability of events that are more extreme than any that have already been observed. It can, for example, be used to estimate wave return levels over the next 100 years given only an 18-year history. Different methods for making these estimations are implemented and evaluated. Datasets containing periods where data values are absent (i.e., gaps in a dataset), as well as the effects these missing values have on the estimation of extreme values, are also investigated. Methods for the treatment of gaps are evaluated by using NCEP (National Centers for Environmental Prediction) hindcast data, which contains no missing values, and creating incomplete datasets from this data. Estimations are then made based on these incomplete sets, and the results are compared to the estimations made from the complete NCEP dataset. Finally, recommendations are made for conducting optimal extreme value analyses, based on this study.
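  As a concrete instance of the extrapolation that extreme value theory enables, a block-maxima fit with a generalised extreme value (GEV) distribution gives a 100-year return level directly from its quantile function; the wave heights below are invented for illustration:

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual maxima of significant wave height (m), ~18 years of data
annual_maxima = np.array([6.1, 7.3, 5.8, 8.0, 6.9, 7.7, 6.4, 8.4, 7.1,
                          6.6, 7.9, 8.8, 6.2, 7.0, 7.5, 8.1, 6.8, 7.4])

c, loc, scale = genextreme.fit(annual_maxima)       # fit GEV to block maxima
T = 100.0                                           # return period in years
z_T = genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)  # T-year return level
print(f"estimated {T:.0f}-year return level: {z_T:.2f} m")
```

  The peaks-over-threshold approach with a generalised Pareto distribution is the usual alternative when, as here, the record is short relative to the target return period.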
- Application of convolutional neural networks to building segmentation in aerial images (Stellenbosch : Stellenbosch University, 2018-12). Olaleye, Kayode Kolawole; Fantaye, Yabebal; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences. Division Applied Mathematics.
  ENGLISH ABSTRACT: Aerial image labelling has found relevance in diverse areas including urban management, agriculture, climate, mining, and cartography. As a result, research efforts have been intensified to find fast and accurate algorithms. The current state-of-the-art results in this context have been achieved by deep convolutional neural networks (CNNs). This has been possible because of advances in computing technologies such as fast GPUs and the discovery of optimal architectures. One of the main challenges in using deep CNNs is the need for a large set of ground truth labels during the training phase. Moreover, one has to choose optimal values for the many hyperparameters involved in the model construction to get a good result. In this thesis we focus on building segmentation from aerial images, and study the effect of different hyperparameter values, paying particular attention to the generalisation ability of the resulting models. For all our experiments we use the same architecture and performance metric as Mnih & Hinton (2012). Our investigation found the following main results: 1) small CNN filters perform as well as or better than large ones; 2) the LeakyReLU activation function leads to a better precision-recall curve than the ReLU (rectified linear unit) and tanh activation functions; 3) batch-normalisation leads to a slightly poorer breakeven point than training without it, contrary to what has been found in other studies with different architectures. In addition, we investigate how well our models generalise to the task of interpreting contexts that differ from the training sets. Drawing from our findings, we give recommendations on how to make deep CNN models more robust to variations in aerial images from other continents, such as Africa, where annotations are either unavailable or in short supply.
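  The breakeven point used as the performance metric is the point on the precision-recall curve where precision equals recall; a sketch of how it might be computed over per-pixel predictions (scikit-learn assumed, names illustrative):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def breakeven_point(y_true, y_score):
    """Precision-recall breakeven over flattened per-pixel labels/scores."""
    precision, recall, _ = precision_recall_curve(y_true.ravel(),
                                                  y_score.ravel())
    return precision[np.argmin(np.abs(precision - recall))]
```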
- Applications of natural language processing for low-resource languages in the healthcare domain (Stellenbosch : Stellenbosch University, 2020-03). Daniel, Jeanne Elizabeth; Brink, Willie; Stellenbosch University. Faculty of Science. Department of Mathematical Sciences (Applied Mathematics).
  ENGLISH ABSTRACT: Since 2014 MomConnect has provided healthcare information and emotional support in all 11 official languages of South Africa to over 2.6 million pregnant and breastfeeding women, via SMS and WhatsApp. However, the service has struggled to scale efficiently with the growing user base and increase in incoming questions, resulting in a current median response time of 20 hours. The aim of our study is to investigate the feasibility of automating the manual answering process. This study consists of two parts: i) answer selection, a form of information retrieval, and ii) natural language processing (NLP), where computers are taught to interpret human language. Our problem is unique in the NLP space, as we work with a closed-domain question-answering dataset, with questions in 11 languages, many of which are low-resource, with English template answers, unreliable language labels, code-mixing, shorthand, typos, spelling errors and inconsistencies in the answering process. The shared English template answers and code-mixing in the questions can be used as cross-lingual signals to learn cross-lingual embedding spaces. We combine these embeddings with various machine learning models to perform answer selection, and find that the Transformer architecture performs best, achieving a top-1 test accuracy of 61.75% and a top-5 test accuracy of 91.16%. It also exhibits improved performance on low-resource languages when compared to the long short-term memory (LSTM) networks investigated. Additionally, we evaluate the quality of the cross-lingual embeddings using parallel English-Zulu question pairs, obtained using Google Translate. Here we show that the Transformer model produces embeddings of parallel questions that are very close to one another, as measured using cosine distance. This indicates that the shared template answer serves as an effective cross-lingual signal, and demonstrates that our method is capable of producing high quality cross-lingual embeddings for low-resource languages like Zulu. Further, the experimental results demonstrate that automation using a top-5 recommendation system is feasible.
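  A sketch of the answer-selection step once questions and template answers share a cross-lingual embedding space: rank the templates by cosine similarity and return the top five, matching the top-5 recommendation setting (the encoder and all names here are assumptions):

```python
import numpy as np

def rank_answers(q_emb, answer_embs, k=5):
    """Top-k template answers for one question embedding.

    q_emb       : (d,) embedding of the incoming question
    answer_embs : (n, d) embeddings of the English template answers
    """
    q = q_emb / np.linalg.norm(q_emb)
    A = answer_embs / np.linalg.norm(answer_embs, axis=1, keepdims=True)
    return np.argsort(-(A @ q))[:k]   # indices of the most similar templates
```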
- Automated elephant detection and classification from aerial infrared and colour images using deep learning (Stellenbosch : Stellenbosch University, 2018-03). Marais, Jacques Charles; Brink, Willie; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences (Applied Mathematics).
  ENGLISH ABSTRACT: In this study we attempt to detect and classify elephants in aerial images using deep learning. This is not a trivial task even for a human, since elephants naturally blend in with their surroundings, making it a challenging and meaningful problem to solve. Possible applications of this work extend into general animal conservation and search-and-rescue operations, with natural extension to satellite imagery as input source. We create a region proposal algorithm that relies on digital image processing techniques and morphological operations on infrared images that correspond to the RGB images. The goal is a fast and computationally cheap algorithm that reduces the work that needs to be done by our deep learning classification models. The algorithm reaches our accuracy goal, detecting 98% of all ground truth elephants in the dataset. The resulting regions are mapped onto the corresponding RGB images using a plane-to-plane homography, along with adjustment heuristics to overcome alignment issues caused by sensor vibration. We train multiple convolutional neural network models, using various network architectures and weight initialisation techniques, including transfer learning. Two sets of models were trained, in 2015 and 2017 respectively, using different techniques, software, and hardware. The best performing model reduces the manual verification workload by 97% while missing only 1% of the elephants detected by the region proposal algorithm. We find that convolutional neural networks, and the advancements in deep learning generally, hold significant promise in detecting elephants from aerial images for real-world applications.
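  A rough OpenCV sketch of the kind of pipeline described: threshold the infrared frame, clean it morphologically, take connected components, and transfer centroids to the RGB frame through the homography (the thresholding choice, kernel size and area cut-off are assumptions, not the thesis's heuristics):

```python
import cv2
import numpy as np

def propose_regions(ir_image, H, min_area=50):
    """Candidate warm-body regions from an 8-bit infrared frame, with
    centroids mapped onto the RGB frame through homography H (3x3)."""
    _, mask = cv2.threshold(ir_image, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    keep = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    pts = np.float32([centroids[i] for i in keep]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H) if len(keep) else pts
```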
- Automatic video captioning using spatiotemporal convolutions on temporally sampled frames (Stellenbosch : Stellenbosch University, 2020-03). Nyatsanga, Simbarashe Linval; Brink, Willie; Stellenbosch University. Faculty of Science. Department of Mathematical Sciences (Applied Mathematics).
  ENGLISH ABSTRACT: Being able to concisely describe the content of a video has tremendous potential to enable better categorisation, index-based search and fast content-based retrieval from large video databases. Automatic video captioning requires the simultaneous detection of local and global motion dynamics of objects, scenes and events, to summarise them into a single coherent natural language description. Given the size and complexity of video data, it is important to understand how much temporally coherent visual information is required to adequately describe a video. In order to understand the association between video frames and sentence descriptions, we carry out a systematic study to determine how the quality of generated captions changes with respect to densely or sparsely sampling video frames in the temporal dimension. We conduct a detailed literature review to better understand the background work in image and video captioning. We describe our methodology for building a video caption generator, based on deep neural networks called encoder-decoders, and then outline the implementation details of our caption generator and our experimental setup. In our experimental setup, we explore the role of word embeddings for generating sensible captions, with pretrained, jointly trained and finetuned embeddings. We train and evaluate our caption generator on the Microsoft Video Description (MSVD) dataset. Using the standard caption generation evaluation metrics, namely BLEU, METEOR, CIDEr and ROUGE, our experimental results show that sparsely sampling video frames, with either finetuned or jointly trained embeddings, results in the best caption quality. Our results are promising in the sense that high quality videos with a large memory footprint could be categorised through a sensible description obtained by sampling only a few frames. Finally, our method can be extended such that the sampling rate adapts to the quality of the video.
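  The dense-versus-sparse variable in these experiments reduces to how many temporally coherent frames are kept per clip; a trivial uniform sampler makes the knob explicit (a sketch, not the thesis code):

```python
import numpy as np

def sample_frames(video, n_frames):
    """Uniformly sample n_frames from a (T, H, W, C) video array; small
    n_frames gives the sparse regime, large n_frames the dense one."""
    idx = np.linspace(0, len(video) - 1, num=n_frames).round().astype(int)
    return video[idx]
```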
- Bayesian forecasting of stock returns using simultaneous graphical dynamic linear models (Stellenbosch : Stellenbosch University, 2022-12). Kyakutwika, Nelson; Bartlett, Bruce; Becker, Ronnie; Stellenbosch University. Faculty of Science. Dept. of Applied Mathematics.
  ENGLISH ABSTRACT: Cross-series dependencies are crucial in obtaining accurate forecasts when forecasting a multivariate time series. Simultaneous Graphical Dynamic Linear Models (SGDLMs) are Bayesian models that elegantly capture cross-series dependencies. This study aims to forecast returns of a 40-dimensional time series of stock data using SGDLMs. The SGDLM approach involves constructing a customised dynamic linear model (DLM) for each univariate time series. Every day, the DLMs are recoupled using importance sampling and decoupled using mean-field variational Bayes. We summarise the standard theory on DLMs to set the foundation for studying SGDLMs. We discuss the structure of SGDLMs in detail and give detailed explanations of the proofs of the formulae involved. Our analyses are run on a CPU-based computer; an illustration of the intensity of the computations is given. We give an insight into the efficacy of the recoupling/decoupling techniques. Our results suggest that SGDLMs forecast the stock data accurately and respond to market gyrations nicely.
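  For reference, the standard univariate DLM recursions that each series's customised model builds on, in the usual West-Harrison notation (the SGDLM adds simultaneous-parent terms and the recoupling/decoupling steps on top of this):

```latex
% Observation and evolution equations
y_t = F_t^{\top}\theta_t + \nu_t, \quad \nu_t \sim N(0, v_t), \qquad
\theta_t = G_t\,\theta_{t-1} + \omega_t, \quad \omega_t \sim N(0, W_t)
% One-step prior and forecast
a_t = G_t m_{t-1}, \quad R_t = G_t C_{t-1} G_t^{\top} + W_t, \qquad
f_t = F_t^{\top} a_t, \quad q_t = F_t^{\top} R_t F_t + v_t
% Posterior update
A_t = R_t F_t / q_t, \qquad m_t = a_t + A_t (y_t - f_t), \qquad
C_t = R_t - q_t A_t A_t^{\top}
```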
- A bio-economic application to the Cape Rock Lobster resource using a delay difference modelling approach (Operations Research Society of South Africa, 2004). Roos, E.
  In many species, like the Cape Rock Lobster (Jasus lalandii), the life cycles of males and females differ. This may motivate the use of two-sex models in a stock-assessment analysis. It is also true for this resource that juveniles do not reach sexual maturity immediately. Therefore a delay-difference model is appropriate. In this study we follow a bio-economic approach and use a two-sex delay-difference model to determine a maximum economic yield strategy. Thus we determine an economic optimum steady state solution at which to harvest this resource, subject to the biological constraints of the species.
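  As a sketch of the model class, one common single-sex delayed-recruitment form is shown below; the paper's two-sex model tracks males and females separately with sex-specific survival, but the structure is analogous (notation ours):

```latex
N_{t+1} = s\,(N_t - C_t) + R\big(N_{t+1-\tau}\big)
```

  Here s is natural survival, C_t the harvest in year t, tau the delay until sexual maturity, and R the recruitment produced by the stock tau years earlier; the maximum economic yield strategy then optimises the discounted net revenue from C_t subject to these dynamics at steady state.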
- A catalytic model for SARS-CoV-2 reinfections: performing simulation-based validation and extending the model to include nth infections (Stellenbosch : Stellenbosch University, 2023-12). Lombard, Belinda; Van Schalkwyk, Cari; Pulliam, Juliet; Stellenbosch University. Faculty of Science. Dept. of Applied Mathematics.
  ENGLISH SUMMARY: Background: A global pandemic of COVID-19, caused by SARS-CoV-2, was declared in March 2020. Subsequently, studies have revealed a high seroprevalence of SARS-CoV-2 in both South African and global populations, along with instances of multiple reinfections. Among various models, a catalytic model has been developed for detecting population-level increases in the risk of reinfection following primary infection. This thesis aims to assess how potential biases from imperfect data observation processes affect the catalytic model's ability to detect increases in reinfection risk. Furthermore, the thesis extends the catalytic model to detect increases in the risk of multiple reinfections.
  Methods: Simulation-based validation involved creating different reinfection scenarios representing real-life data, which were then used in the model's fitting and projection procedure. Observed reinfections were simulated using a time series of primary infections representative of South African data. Scenarios considered both imperfect observation (with a constant observation probability or a probability dependent on the primary infection count) and mortality. The method's ability to detect increases in the reinfection risk was measured by determining both the clusters of reinfections and the proportion of points that fell above the projection interval. Following simulation-based validation, the method was extended to detect population-level increases in the risk of nth infections. This extended method was applied to observed third infections in South Africa, with an additional model parameter representing increased reinfections during the Omicron wave. Simulation-based validation was conducted on the extended method to assess its ability to detect increases of varying magnitudes in the risk of third infection.
  Results: During the simulation-based validation of the original catalytic model, model parameters converged in most scenarios. Failure to converge was mostly related to insufficient cases to properly inform the model parameters during the fitting procedure. Scenarios where the model parameters did not converge, or where the simulated data did not accurately fit the model, were excluded from interpretation. Introducing an increase in the reinfection risk resulted in successful detection of the increase (even for small increments), although with delayed timing under lower observed infection numbers. Mortality from first infections, unaccounted for in the model, did not impact the method's ability to detect increases in the reinfection risk. The method demonstrated high specificity, reliably distinguishing true increases in the reinfection risk from noise. The catalytic model was extended to detect increases in the risk of nth infections, and the extended method's ability to detect increases in the risk of third infections was validated. The additional third-infection hazard representing the increased reinfection risk observed during the Omicron wave was successfully fitted to the data, and the method effectively detected increases in the risk of third infections.
  Conclusion: The findings highlight the need for sufficient infection data and the importance of convergence as a prerequisite for result interpretation. The extended model reliably detected increases in the risk of two or more reinfections and demonstrated robustness under different observation processes and increases in reinfection risk scenarios.
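  A stripped-down sketch of the catalytic projection step: reinfections are projected by applying a constant hazard to everyone whose primary infection occurred at least a cutoff period earlier (this omits the removal of prior reinfections, the observation probabilities, and the observation noise the full method includes; all names and defaults are ours):

```python
import numpy as np

def expected_reinfections(primary, lam, cutoff=90):
    """Expected daily second infections under a constant reinfection hazard.

    primary : array of daily observed primary-infection counts
    lam     : reinfection hazard per eligible person per day
    cutoff  : days after first infection before a person becomes eligible
    """
    expected = np.zeros(len(primary))
    for t in range(len(primary)):
        eligible = primary[:max(t - cutoff, 0)].sum()  # infected past cutoff
        expected[t] = lam * eligible
    return expected
```

  An increase in reinfection risk is then flagged when observed reinfections persistently exceed the projection interval built around such expectations.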
- The class imbalance problem in computer vision (Stellenbosch : Stellenbosch University, 2022-04). Crous, Willem Hendrik; Brink, Willie; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences (Applied Mathematics).
  ENGLISH ABSTRACT: Class imbalance is a naturally occurring phenomenon, typically characterised as a dataset consisting of classes with varying numbers of samples. When trained on class imbalanced data, networks tend to favour frequently occurring (majority) classes over the less frequent (minority) classes. This poses challenges for tasks reliant upon accurate recognition of the less frequent classes. The aim of this thesis is to investigate general methods towards addressing this problem. First we establish why a network may favour majority classes. We contend that as less frequent classes are likely to under-represent the required underlying distribution for a given task, training may produce a decision boundary that transgresses the feature space of minority classes. Additionally we find that the weight norms of the classification layer in a neural network may tend towards the distribution of the training data, thus affecting the decision boundary. We determine that this decision boundary shift impacts both the accuracy and confidence calibration of neural networks. We investigate several approaches to shift the decision boundary. The first approach acquires additional data and increases the representation of minority classes. This is achieved through either creating synthetic samples following a distribution-aware regularisation method, or utilising additional unlabelled data in a semi-supervised setting. The second approach aims to adjust the classifier weight norms by separately training the classifier and feature extractor. We find that implementing an effective regularisation method with a simple decoupled sampling scheme can provide considerable improvements over standard sampling methods. Furthermore we find that utilising additional unlabelled data may lead to additional gains given certain dataset characteristics are taken into consideration.
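  A sketch of the decoupled scheme discussed, using PyTorch's WeightedRandomSampler for the class-balanced classifier-retraining stage (the two-stage recipe is summarised in the comments; dataset and label handles are placeholders):

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(dataset, labels, batch_size=128):
    """DataLoader that draws every class with equal probability."""
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels)
    weights = 1.0 / counts[labels].float()       # rare classes weigh more
    sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

# Decoupled training, in outline:
#   1. train feature extractor + classifier with standard instance sampling;
#   2. freeze the feature extractor and reinitialise the linear classifier;
#   3. retrain only the classifier on balanced_loader(...), correcting the
#      skew in the classifier weight norms described above.
```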
- Community OR and OR for development: a South African perspective (Operations Research Society of South Africa, 1999). Fourie, Philip.
  An overview is given of Community Operations Research and of the connection between OR and development. The RDP (Reconstruction and Development Programme) is the main framework for development in South Africa, and its present state is described. Some suggestions are made as to ways in which ORSSA could support the RDP and development in South Africa.
- Comparison of methods for solving Sylvester systems (Stellenbosch : Stellenbosch University, 2018-12). Kirsten, Gerhardus Petrus; Hale, Nicholas Peter; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences. Division Applied Mathematics.
  ENGLISH ABSTRACT: This thesis serves as a comparative study of numerical methods for solving Sylvester equations, which are linear matrix equations of the form AX + XB + C = 0. These equations have important applications in many areas of science and engineering, such as signal processing, control theory, and systems engineering, and their efficient solution is therefore of practical significance. As with standard linear systems (i.e., those of the form Ax = b), algorithms for the efficient solution of Sylvester equations typically fall into two categories, namely direct and iterative methods. As a naive approach, one can convert a Sylvester equation to a standard linear system (of larger size) using Kronecker operations, and then apply standard methods from numerical linear algebra. We shall see, however, that unless the matrix is very sparse and structured, this approach is usually inefficient. Instead, modern algorithms for solving Sylvester equations are applied directly to the equation in Sylvester form. When the matrices A and B are small and dense, direct methods such as Bartels–Stewart and Hessenberg–Schur, which are based on suitable factorisations of A and B, are efficient. As the matrices become larger, however, one typically switches to a projection-based or some other iterative method. The projection methods considered in this thesis use Krylov subspace techniques to project the system onto a much smaller subspace, which can be solved efficiently using one of the direct methods mentioned above as an internal solver. In this thesis we consider two different subspaces for the comparison of projection methods, namely the standard Krylov subspace and an enriched approximation space known as the extended Krylov subspace. We shall see that when the matrix C is of low rank, the extended Krylov subspace method is competitive with direct methods, even when the system size is relatively small. Each of the methods discussed above is compared, both theoretically by consideration of floating point operation counts and numerically by computational efficiency and accuracy, when used to solve several example problems arising in applications. Based on the results of these experiments, it is concluded that a method based on the eigenvalue decompositions of A and B is the most efficient direct method, although to some degree at the expense of numerical stability. In the class of projection methods, we find the extended Krylov subspace to be the most efficient approximation space.
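  The naive Kronecker route and the direct Sylvester-form solve can be compared in a few lines; SciPy's solve_sylvester implements Bartels-Stewart for AX + XB = Q, so AX + XB + C = 0 is solved with Q = -C (matrix sizes below are arbitrary):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, p = 50, 40
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p))
C = rng.standard_normal((n, p))

# Naive approach: vec(AX + XB) = (I_p ⊗ A + Bᵀ ⊗ I_n) vec(X), an np x np system
K = np.kron(np.eye(p), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(K, -C.flatten(order="F"))   # column-stacked vec(X)
X_kron = x.reshape((n, p), order="F")

# Direct approach in Sylvester form (Bartels-Stewart)
X_bs = solve_sylvester(A, B, -C)
print(np.allclose(X_kron, X_bs))                # True: the solutions agree
```

  The Kronecker system has size np x np, which is why this route quickly becomes impractical as the matrices grow.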
- Computational modelling and optimal control of Ebola virus disease with non-linear incidence rate (IOP Publishing, 2017). Takaidza, I.; Makinde, O. D.; Okosun, O. K.
  The 2014 Ebola outbreak in West Africa has exposed the need to connect modellers and those with relevant data, as pivotal to better understanding how the disease spreads and quantifying the effects of possible interventions. In this paper, we model and analyse the Ebola virus disease with a non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R0 to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time-dependent control interventions (education, vaccination, quarantine and safe handling) and provide the cost-effective combination of the interventions.
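  A minimal sketch of an SIR-type model with one common saturated (non-linear) incidence form, beta*S*I/(1 + alpha*I); the incidence function and all parameter values here are assumptions for illustration, not those fitted in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, alpha, gamma = 0.3, 0.02, 0.1   # illustrative parameter values

def rhs(t, y):
    S, I, R = y
    incidence = beta * S * I / (1.0 + alpha * I)   # saturated incidence
    return [-incidence, incidence - gamma * I, gamma * I]

sol = solve_ivp(rhs, (0.0, 400.0), [0.99, 0.01, 0.0], max_step=1.0)
final_size = sol.y[2, -1]   # recovered fraction ~ final epidemic size
print(f"final epidemic size: {final_size:.3f}")
```

  Sensitivity of R0 or of the final size can then be assessed by sweeping the parameters and controls over plausible ranges, for instance with Latin hypercube sampling (a common choice, not necessarily the paper's).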
- A conceptual framework for the development of intelligent, learning style- and computer-based educational software for topics from operations research (Stellenbosch : Stellenbosch University, 1999-11). Du Plessis, S. A. (Sameul Altenstadt); De Kock, H. C.; Stellenbosch University. Faculty of Science. Department of Mathematical Sciences.
  ENGLISH ABSTRACT: The purpose of this study was to construct a conceptual framework for the development of intelligent, learning style- and computer-based educational software and to apply it to linear programming (LP). A secondary goal was to extend the framework to also include other topics from Operations Research. The system that resulted from this study was named GEORGE, in honor of the inventor of the simplex method, George Dantzig. GEORGE utilizes fuzzy interpretations of learning style inventories and models of teaching and learning to determine a student's learning and teaching style preferences. An individualized tutoring strategy is then computed and used to present the course material to the student. A whole range of modes of presentation can be included in such a strategy, from drill-and-practice exercises, demonstrations and step-by-step tutorials to flow- and step charts and point-and-query interfaces. GEORGE keeps a practical and effective student model and controls the tutoring with a domain- and motivation-based planner. The models of teaching and learning mentioned above are based on the results of fuzzy interpretations of Kolb's learning style inventory (experiential learning), a Myers-Briggs Type Indicator test (personality), La Haye's temperament test, a visualizer-verbalizer questionnaire, a study preference guide (sequential/global preferences), the model of teaching and learning of Felder and Silverman (for engineering education), Neethling's Brain Profile Test, a model of teaching and learning based on left and right brain preferences, and Sternberg's model of thinking styles. GEORGE consists of six modules, namely a problem solving or domain expert module, a generic questioning module, a presentation module, an "artificial psychologist" module, a student model module and a tutorial module. The generic questioning module is used to generate tutoring and testing material for GEORGE, and the "artificial psychologist" module is used primarily to supply students with individualized psychological help, from study techniques and emotional matters to motivation and goal setting. The remaining four modules correspond more or less with the four modules of a traditional intelligent tutoring system. A number of artificial intelligence techniques, i.e. natural language understanding, fuzzy expert and fuzzy decision making systems, induction and neural networks, are used in the implementation of different components of GEORGE. Applications of De Bono's thinking skills also play an important role in a number of components (teaching students how to think), the presentation of various personal development or self-improvement techniques is very prominent (in the "artificial psychologist" module), and the accommodation of differences among users (especially learning style preferences) receives high priority. The implementation of the various components of GEORGE resulted in a number of useable computer-based learning modules. These demonstration programs illustrate the various concepts within the suggested general framework. The system was developed in Turbo Pascal and integrated within the "Windows" environment with the help of the authoring system EasyTutor. GEORGE will eventually be extended to become not only a computer-based tutor of LP topics, but also a Resourceful Environment for the Clever Tutoring of other Operations Research techniques (RECTOR). Guidelines regarding the transformation of GEORGE into RECTOR are provided. RECTOR, and parts thereof, should be used in much the same way as GEORGE: to supply computer support of lectures, to provide computer-assisted learning, to conduct computer-based learning, to create a computer environment for calculations, and as a source of self-paced and open-access material.
- Contributions to the theory of near-vector spaces, their geometry, and hyperstructures (Stellenbosch : Stellenbosch University, 2022-12). Rabie, Jacques; Howell, Karin-Therese; Stellenbosch University. Faculty of Science. Dept. of Applied Mathematics.
  ENGLISH ABSTRACT: This thesis expands on the theory and application of near-vector spaces; in particular, the underlying geometry of near-vector spaces is studied, and the theory of near-vector spaces is applied to hyperstructures. More specifically, a near-linear space is defined and some properties of these spaces are proved. It is shown that by adding some axioms, the nearaffine space, as defined by André, is obtained. A correspondence is shown between subspaces of nearaffine spaces generated by near-vector spaces, and the cosets of subspaces of the corresponding near-vector space. As a highlight, some of the geometric results are used to prove an open problem in near-vector space theory, namely that a non-empty subset of a near-vector space that is closed under addition and scalar multiplication is a subspace of the near-vector space. The geometric work of this thesis is concluded with a first look into the projections of nearaffine spaces, a branch of the geometry that contains interesting avenues for future research. Next, the theory of hyper near-vector spaces is developed. Hyper near-vector spaces are defined having similar properties to André's near-vector space. Important concepts, including independence, the notion of a basis, regularity, and subhyperspaces are defined, and an analogue of the Decomposition Theorem, an important theorem in the study of near-vector spaces, is proved for these spaces.
- Convolutional and fully convolutional neural networks for the detection of landmarks in tsetse fly wing images (Stellenbosch : Stellenbosch University, 2021-12). Makhubele, Mulanga; Brink, Willie; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences. Division Applied Mathematics.
  ENGLISH ABSTRACT: Tsetse flies are a species of bloodsucking flies in the house fly family that are only found in Africa. They cause animal and human African trypanosomiasis (AAT and HAT), commonly referred to as nagana and sleeping sickness. Effective tsetse fly eradication requires area-wide control, which means understanding the population dynamics of the tsetse flies in an area. Among the factors that entomologists believe to be critical to this understanding, fly size and fly wing shape are considered most important. Fly size can be deduced by calculating the distance between specific landmarks on a wing. The South African Centre for Epidemiological Modelling and Analysis (SACEMA) conducts research into tsetse fly population management and has a database of wings. To use landmarks on the wings for biological deductions about the tsetse flies in an area, researchers need to manually annotate individual images of the wings by marking the important landmarks by hand, which is slow and error-prone. The purpose of this research is to assess the feasibility of automating the process of landmark detection in tsetse fly wing images using machine learning algorithms with a limited dataset. Extensive research has been done into automatic landmark detection. Particular focus has been given to detection of human body parts, but there are a number of notable cases of animal landmark detection. Convolutional neural networks (CNNs) have been used as backbone architectures for most state-of-the-art detection systems. We compare the performance of fully convolutional networks (FCNs) against conventional LeNet-style CNNs for the regression task of landmark detection in a fly wing image. The FCN accepts an image input and returns a segmentation mask as output. A Gaussian function is used to convert the response coordinate pairs into heat maps, which are combined to form a segmentation mask. After model training, the heat maps produced by the FCN model are converted back to coordinate pairs using a weighted average method. Three types of models were trained: a baseline artificial neural network (ANN), LeNet-style CNNs and FCNs. The ANN model had a root mean square error (RMSE) of 282.62 pixels and a mean absolute error (MAE) of 181.33 pixels. The best LeNet model, LeNet3 with dropout, had an RMSE of 53.58 and MAE of 41.05. The best FCN model, FCN8 with batch size 32 and Adam optimisation, had an RMSE of 1.12 and MAE of 0.88. All trained models were best at predicting landmark points 5, 8 and 10 and struggled to predict landmark points 1, 4 and 6. The results indicate that machine learning models can be used to automatically and accurately detect landmark points in tsetse fly wing images. Furthermore, for our limited dataset FCNs outperform conventional LeNet-style CNNs.
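  The Gaussian heat-map encoding and weighted-average decoding described translate directly into a few lines of NumPy (a sketch; sigma and shapes are arbitrary here):

```python
import numpy as np

def make_heatmap(h, w, cx, cy, sigma=3.0):
    """Gaussian heat map centred on a landmark at pixel (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def decode_heatmap(hm):
    """Weighted-average decoding of a heat map back to (x, y) coordinates."""
    ys, xs = np.mgrid[0:hm.shape[0], 0:hm.shape[1]]
    total = hm.sum()
    return float((xs * hm).sum() / total), float((ys * hm).sum() / total)
```

  One heat map per landmark is stacked into the FCN's target mask during training, and decode_heatmap recovers coordinate pairs from the predicted maps afterwards.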
- Data-driven river flow routing using deep learning: predicting flow along the lower Orange River, Southern Africa (Stellenbosch : Stellenbosch University, 2019-04). Briers, C. J.; Brink, Willie; Smit, G. J. F.; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences. Division Applied Mathematics.
  ENGLISH ABSTRACT: The Vanderkloof Dam, located on the Orange River, is responsible for the water supply to consumers along its 1 400 km reach up to where it flows into the Atlantic Ocean. The Vaal River, which joins the Orange River approximately 200 km downstream of the dam, contributes significant volumes of water to the flow in the Orange River. These contributions are, however, not taken into account when planning releases from the Vanderkloof Dam. In this thesis we aimed to develop an accurate and robust flow routing model of the Orange and Vaal River system to predict the effects of releases from the Vanderkloof Dam and anticipate inflows from the Vaal River. Since the factors that impact flow rate and volume along the river are hard to quantify over long distances, a data-driven approach is followed which uses machine learning to predict the flow rate at downstream flow gauging stations based on flow rates recorded at upstream gauging stations. We restrict the model input to data that would be readily available in an operational setting, making the model practically implementable. A variety of neural network architectures, including fully-connected networks, convolutional neural networks (CNNs) and recurrent neural networks (RNNs), were investigated. It was found that fully-connected networks produce results with accuracy comparable to a simple linear regression model, but display a superior ability to predict the timing of peaks and troughs in flow rate trends. CNNs and RNNs displayed the same ability, as well as improvements in accuracy. The best-performing CNN model had a mean absolute percentage error (MAPE) of 14.5%, compared to 16.9% for a linear regression model. To anticipate contributions from the Vaal River we investigated including inflows recorded at stations on the Vaal River and two of its tributaries, the Modder and Riet Rivers. Both approaches investigated, i.e. incorporating these inflows as part of multi-dimensional input into a CNN, and using a parallel CNN model architecture, showed promise, with MAPEs of 21.6% and 23.5% respectively. Although these models did not achieve a high level of accuracy, they did display the ability to anticipate contributions from the Vaal River system. It is believed that they could, with additional refinement or using appropriate safety factors, be practically applied in an operational setting. We further investigated including seasonal data as input to our models. Including the time of year, and including evaporation data recorded at meteorological stations in the recent past, both resulted in improved MAPE accuracy (14.4% and 14.8% respectively, compared to 18.4% for a model including no seasonal data). Observations of errors staying relatively constant over time prompted us to include errors made in the recent past as input into subsequent predictions. A model trained with this additional data achieved a MAPE of 10.2%, a significant improvement over the other applied methods.
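  For reference, the reported MAPE metric in a short sketch, with the residual-feedback idea noted in the comment (names ours):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, as reported for the flow models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# The best model additionally feeds errors made in the recent past back in
# as inputs, i.e. features like y_true[t-k] - y_pred[t-k] for small k.
```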
- Dental implant recognition (Stellenbosch : Stellenbosch University, 2023-09). Kohlakala, Aviwe; Coetzer, Johannes; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences. Applied Mathematics Division.
  ENGLISH ABSTRACT: Deep learning-based frameworks have recently been steadily outperforming existing state-of-the-art systems in a number of computer vision applications, but these models require a large number of training samples in order to effectively train the model parameters. Within the medical field, the limited availability of training data is one of the main challenges faced when using deep learning to create practical clinical applications in medical imaging. In this dissertation a novel algorithm for generating artificial training samples from triangulated three-dimensional (3D) surface models is proposed, within the context of dental implant recognition. The proposed algorithm is based on the calculation of two-dimensional (2D) parallel projections, from a number of different angles, of 3D volumetric representations of computer-aided design (CAD) surface models. A fully convolutional network (FCN) is subsequently trained on the artificially generated X-ray images for the purpose of automatically identifying the connection type associated with a specific dental implant in an actual X-ray image. An ensemble of image processing and deep learning-based techniques capable of distinguishing pixels that belong to an implant from those belonging to the background in an actual X-ray image is developed. Normalisation and preprocessing techniques are subsequently applied to the segmented dental implants within the questioned actual X-ray image. The normalised dental implants are presented to the trained FCN for classification purposes. Experiments are conducted on two data sets that contain the simulated and actual X-ray images in order to gauge the proficiency of the proposed systems. Given that the proposed systems utilise an ensemble of techniques that has not previously been employed for dental implant classification/recognition, the results achieved in this study are encouraging and constitute a significant contribution to the current state of the art, especially in scenarios where the proposed systems are combined with existing systems.
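  A sketch of the artificial training-sample idea: voxelise the CAD surface model, rotate the volume, and integrate along one axis to obtain a 2D parallel projection per viewing angle (voxelisation itself, e.g. from an STL mesh, is assumed done upstream; this is an illustration, not the dissertation's algorithm):

```python
import numpy as np
from scipy.ndimage import rotate

def parallel_projection(volume, angle_deg, axes=(0, 1)):
    """Simulated X-ray of a voxel volume: rotate, then sum along one axis
    (an orthographic, i.e. parallel, projection)."""
    rotated = rotate(volume.astype(float), angle_deg, axes=axes,
                     reshape=False, order=1)
    return rotated.sum(axis=0)

# Sweeping angle_deg (and the rotation axes) over many views produces a set
# of artificial X-ray images on which the FCN classifier can be trained.
```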