- Item: Economic capital allocation to market and survival risk for pure endowment products (Stellenbosch : Stellenbosch University, 2023-03) Pretorius, Jaco Harm; Louw, Simon; Stellenbosch University. Faculty of Economic and Management Sciences. Dept. of Statistics and Actuarial Science. ENGLISH ABSTRACT: Economic capital allocation to interest rate and longevity risks is a topic of interest for life insurers. This study provides approaches for decomposing the overall economic capital amount into market and survival risk components for a pure endowment product. The economic capital figure can be calculated using either analytical or simulation-based methods. An allocation approach from the literature is applied to the simulation-based capital quantification, and a new allocation approach is proposed for the analytical method. The case study is a pure endowment contract, backed by a six-month fixed-interest risk-free asset, whose risks are calibrated to a one-year regulatory shock environment. Both methods deliver comparable results, and both conclude that interest rate risk is far more important than the longevity component: depending on the method, between 97% and 99.97% of economic capital is allocated to interest rate risk. Sensitivity analysis proved particularly insightful; the analytical approach is more sensitive to the methodology choice in the decomposition step, but both methods behave sensibly across different parameter values. The main advantage of the simulation-based approach is flexibility, while the analytical approach delivers a closed-form solution that reduces computing time and eases implementation. This research demonstrates that capital allocation is a valuable tool for understanding the behaviour of capital relative to various risks, enabling insurers to manage their risk exposure better because they can see the drivers of risk.
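The simulation-based allocation described above can be illustrated with a toy two-risk example. Everything in this sketch is an assumption: the normal loss distributions, the 99.5% VaR definition of economic capital, and the expected-shortfall-style (Euler) allocation rule are chosen for illustration and do not reproduce the study's actual model or calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical one-year loss distributions for the two risk components
# of a pure endowment book; the dominance of the market component is
# an assumption made for this sketch.
market_loss = rng.normal(0.0, 10.0, n)    # interest rate risk
survival_loss = rng.normal(0.0, 0.5, n)   # longevity risk
total_loss = market_loss + survival_loss

# Economic capital taken as the 99.5% quantile (VaR) of total losses.
alpha = 0.995
var_total = np.quantile(total_loss, alpha)

# Expected-shortfall-style (Euler) allocation: average each component's
# loss over the tail scenarios in which total losses breach the VaR.
tail = total_loss >= var_total
alloc_market = market_loss[tail].mean()
alloc_survival = survival_loss[tail].mean()

# The two allocations sum to the tail expectation of the total loss,
# so each risk's share of economic capital can be read off directly.
share_market = alloc_market / (alloc_market + alloc_survival)
```

Because the component allocations add up to the tail expectation by construction, this kind of decomposition sums to the overall capital figure without any residual term.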
- Item: A comparison of value at risk and expected shortfall models in cryptocurrencies (Stellenbosch : Stellenbosch University, 2023-03) Bosman, Lisa-Marie; Perrang, Justin; Stellenbosch University. Faculty of Economic and Management Sciences. Dept. of Statistics and Actuarial Science. ENGLISH SUMMARY: The key objective of this study is to examine the application of traditional market risk measures to the cryptocurrency market, and to investigate their efficiency and accuracy through value at risk (VaR) and expected shortfall (ES) models. A further objective is to provide an extensive literature review of important topics relating to cryptocurrencies and their risk management. Numerous studies and applications related to cryptocurrencies have already been conducted; this study clarifies certain aspects of the cryptocurrency market, such as blockchains, cryptocurrency bubbles and the impending regulation of the asset class. As the volatile cryptocurrency market has become more prominent in the financial sector, the modelling and management of market risk have become key areas for the asset class. VaR and ES are well-known measures of market risk, and are applied here to daily return data for Bitcoin, Ethereum, Ripple and Dogecoin over the period 8 August 2015 to 31 August 2022. The historical simulation model, as well as independent and identically distributed (i.i.d.) models assuming both normally and Student t distributed returns, are applied. The exponentially weighted moving average (EWMA) model improves on the i.i.d. models. The asymmetric volatility of the data is then accounted for with an adjusted EWMA model, the asymmetric exponentially weighted moving average (AEWMA). The final model applied is filtered historical simulation (FHS), which combines the benefits of the historical simulation and AEWMA models using bootstrapping. The daily VaR and ES forecasts are extensively backtested to enable comparison among the models. The results differ among the coins and across significance levels. However, it is clear that the asymmetric volatility of the data significantly impacts the results and must be accounted for in modelling. It can be concluded that traditional market risk management has an important place in the cryptocurrency market.
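The historical simulation and EWMA building blocks mentioned above can be sketched as follows. The returns below are synthetic stand-ins (the study uses actual coin data), and the RiskMetrics-style decay factor lam = 0.94 is an assumed value, not an estimate from the thesis.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
# Synthetic heavy-tailed stand-in for daily crypto log-returns.
returns = rng.standard_t(df=4, size=2000) * 0.03
losses = -returns
alpha = 0.99  # confidence level

# Historical simulation: VaR is the empirical loss quantile, and ES is
# the average loss beyond the VaR.
var_hs = np.quantile(losses, alpha)
es_hs = losses[losses >= var_hs].mean()

# EWMA (RiskMetrics-style) volatility recursion:
#   sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2
lam = 0.94
sigma2 = returns[:30].var()  # seed with an initial sample variance
for r in returns:
    sigma2 = lam * sigma2 + (1 - lam) * r ** 2

# Parametric VaR/ES under normality, scaled by the EWMA volatility.
z = NormalDist().inv_cdf(alpha)
sigma = float(np.sqrt(sigma2))
var_ewma = sigma * z
es_ewma = sigma * NormalDist().pdf(z) / (1 - alpha)
```

ES always sits beyond VaR at the same confidence level, which is visible in both the empirical and the parametric versions above; the asymmetric (AEWMA) and filtered historical simulation extensions modify this recursion rather than replace it.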
- Item: Analysing GARCH models across different sample sizes (Stellenbosch : Stellenbosch University, 2023-03) Purchase, Michael Andrew; Conradie, Willie; Viljoen, Helena; Stellenbosch University. Faculty of Economic and Management Sciences. Dept. of Statistics and Actuarial Science. ENGLISH SUMMARY: Building on the ARCH model of Robert Engle, the GARCH model of his student Tim Bollerslev has the desired ability to model the changing variance (heteroskedasticity) of a time series. The primary goal of this study is to investigate changes in volatility, parameter estimates, forecasting error and excess kurtosis across different window lengths, as this may indicate an appropriate sample size to use when fitting a GARCH model to a set of data. After examining the T = 6489 one-day log-returns on the FTSE/JSE-ALSI between 27 December 1995 and 15 December 2021, an average volatility estimate of 0.193 670 was calculated. A rolling window methodology was applied across 20 different window lengths under both the S-GARCH(1,1) and E-GARCH(1,1) models, so that a total of 180 000 GARCH models were fitted, with parameter and volatility estimates, information criteria and volatility forecasts being extracted. Given the construction of its asymmetric response function, the E-GARCH model is better able to account for the 'leverage effect', where negative market returns are greater drivers of higher volatility than positive returns of equal magnitude. Key results include volatility estimates across most window lengths taking longer to settle after the Global Financial Crisis (GFC) than after the COVID-19 pandemic. This was interesting because volatility reached higher levels during the latter, indicating that the South African market reacted more severely to the COVID-19 pandemic but also adjusted to new market conditions more quickly than after the GFC. In terms of parameter estimates under the S-GARCH(1,1) model, the estimates of a and b under a window length of 100 trading days were often extremely close to zero and one respectively, indicating a strong possibility of the optimising algorithm arriving at local maxima of the likelihood function. The exceptionally low p-values under the Jarque-Bera and Kolmogorov-Smirnov tests, together with all excess kurtosis values being greater than zero, provide substantial motivation for the use of the Student's t-distribution when fitting GARCH models. Given the various results obtained for volatility, parameter estimates, RMSE and information criteria, it was concluded that a window length of 600 is perhaps the most appropriate when modelling GARCH volatility.
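The S-GARCH(1,1) recursion and the rolling-window comparison described above can be sketched on simulated data. The parameter values (w, a, b) below are illustrative choices, not the study's estimates, and the simple sample-standard-deviation rolling estimator stands in for the full GARCH refits performed in the thesis.

```python
import numpy as np

# Simulate an S-GARCH(1,1) process with normal innovations:
#   sigma2_t = w + a * r_{t-1}^2 + b * sigma2_{t-1}
w, a, b = 1e-6, 0.08, 0.90  # illustrative parameters
rng = np.random.default_rng(2)
T = 6489  # matches the number of FTSE/JSE-ALSI log-returns in the study
r = np.empty(T)
sigma2 = w / (1 - a - b)  # start at the unconditional variance
for t in range(T):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = w + a * r[t] ** 2 + b * sigma2

def rolling_vol(returns, window):
    """Annualised sample volatility over a rolling window."""
    return np.array([returns[i - window:i].std() * np.sqrt(252)
                     for i in range(window, len(returns) + 1)])

# Compare volatility estimates across several window lengths, mimicking
# the study's rolling-window methodology.
vols = {win: rolling_vol(r, win) for win in (100, 600, 1000)}
```

Shorter windows react faster to volatility shocks but produce noisier estimates, which is the trade-off behind the study's search for an appropriate window length.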
- Item: The use of Bayesian neural networks in thyroid cancer classification (Stellenbosch : Stellenbosch University, 2023-03) Du Preez, Tiana; Bierman, Surette; Stellenbosch University. Faculty of Economic and Management Sciences. Dept. of Statistics and Actuarial Science. ENGLISH SUMMARY: Artificial neural networks form a class of machine learning models that can model complex relationships between variables, and they are used in countless practical applications towards solving real-world problems. However, one limitation of conventional neural networks is that they are not designed to accurately quantify the uncertainty in their predictions. A possible solution is to use Bayesian inference to introduce stochasticity into neural networks: Bayesian neural networks assign a prior distribution to the network weight parameters, and the posterior distribution is then approximated by means of variational inference algorithms. Bayesian neural networks are currently used successfully in a wide variety of applications. Since they are particularly useful in settings where quantifying predictive uncertainty is important, a key application area is the medical field. We investigate the use of Bayesian neural networks in thyroid cancer classification. Thyroid cancer diagnosis is a difficult task, so developing models that yield predictions of cancer stages with accurate associated risks can be a worthwhile contribution. In our empirical work, we thus focus on the classification of thyroid cancer by means of Bayesian neural networks. More specifically, since the data consist of ultrasound images, we fit Bayesian convolutional neural networks for image classification, considering modified versions of the LeNet-5, AlexNet and GoogLeNet network architectures. Most of the architectures, adapted for Bayesian inference, are found to perform slightly better than the corresponding conventional network architectures. In addition, Bayesian aleatoric and epistemic uncertainties are reported for each model. This uncertainty quantification may be considered a sensible contribution.
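The split into aleatoric and epistemic uncertainty reported above can be illustrated with a common entropy decomposition over Monte Carlo samples from a network's posterior predictive distribution. This is a generic sketch with made-up probability samples; the study's actual uncertainty estimates come from its fitted Bayesian convolutional networks.

```python
import numpy as np

def uncertainty_decomposition(probs):
    """Decompose predictive entropy for one input.

    probs: array of shape (S, C), where S is the number of Monte Carlo
    posterior samples and C the number of classes.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    # Total uncertainty: entropy of the averaged prediction.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Aleatoric: average entropy of the individual sampled predictions
    # (irreducible noise in the data itself).
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Epistemic: the remainder, i.e. disagreement between posterior
    # samples (uncertainty about the model weights).
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Posterior samples that agree give low epistemic uncertainty; samples
# that disagree give high epistemic uncertainty.
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.91, 0.09]])
disagree = np.array([[0.95, 0.05], [0.05, 0.95]])
```

When the samples disagree, the averaged prediction is near-uniform even though each individual sample is confident, so almost all of the predictive entropy is epistemic; this is exactly the signal a conventional (non-Bayesian) network cannot provide.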
- Item: Exploring the class imbalance problem in text classification (Stellenbosch : Stellenbosch University, 2023-03) Bezuidenhout, Jean-Pierre; Lamont, Morne; Stellenbosch University. Faculty of Economic and Management Sciences. Dept. of Statistics and Actuarial Science. ENGLISH SUMMARY: Natural Language Processing (NLP) is a subfield of computer science focused on leveraging computers to learn from human language. Over the years, the field has been used to perform a wide variety of tasks with many interesting real-world applications. One of these tasks is text classification, which focuses on developing models able to predict the class label of a textual input from a set of pre-defined categories. Text classification has previously been applied in the development of automatic spam detection systems and in the analysis of consumer sentiment. Unfortunately, many real-world text data sets have an imbalanced class label distribution. This is often the case for spam data sets, where the majority of observations are labelled as non-spam. In an automatic spam detection system, we want the system to correctly identify spam instances; however, traditional machine learning (ML) models are usually overwhelmed by instances in the majority class, which hinders their ability to correctly identify instances in the minority class. The field of imbalanced learning focuses on manipulating data and algorithms to address this problem, yet these methods have not been thoroughly explored in the text classification literature. Our objective in this thesis is thus to contribute new knowledge on the problem of imbalanced class label distributions in the context of text classification. The problem is approached by reviewing the literature to identify ML models previously applied to text classification tasks. Furthermore, data- and algorithm-level methods well suited to imbalanced learning are identified from the literature. The performance of these techniques is investigated by means of an empirical study focusing on real-world movie review data. Simulated scenarios with varying degrees of class imbalance are investigated in order to study the robustness of classifiers to imbalanced data, and to analyse the performance of imbalanced learning techniques. For the data set analysed, our findings suggest that some classifiers are more robust to class imbalance than others, and that performance gains are possible when imbalanced learning techniques are included in the learning process.
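One simple example of the data-level manipulation referred to above is random oversampling of the minority class. The sketch below uses toy numeric features and a 90/10 class split as stand-ins for the vectorised movie review data; the technique itself is generic.

```python
import numpy as np

def random_oversample(X, y, rng=None):
    """Duplicate minority-class rows (sampling with replacement) until
    every class matches the majority-class count."""
    if rng is None:
        rng = np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        c_idx = np.flatnonzero(y == c)
        # Sample each class up to the majority-class count.
        idx.append(rng.choice(c_idx, size=n_max, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

# Toy feature matrix with an imbalanced 90/10 split
# (e.g. non-spam vs spam, or majority vs minority sentiment).
X = np.arange(100).reshape(100, 1).astype(float)
y = np.array([0] * 90 + [1] * 10)
X_bal, y_bal = random_oversample(X, y)
```

Oversampling rebalances the training distribution without discarding majority-class information, which is why it is often preferred over undersampling when the minority class is very small; algorithm-level alternatives such as class weighting address the same problem inside the loss function instead.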