Faculty of Engineering
The Faculty of Engineering at Stellenbosch University is one of South Africa's major producers of top-quality engineers. Established in 1944, it currently comprises five engineering departments.
Browsing Faculty of Engineering by type: Masters
- Crafting asset allocation for a re-insurer via portfolio optimisation (Stellenbosch : Stellenbosch University, 2022-04). Abdulla, Mubeen; Von Leipzig, Konrad. ENGLISH SUMMARY: One of the most challenging tasks faced by financial advisors and consultants relates to the phenomenon of portfolio selection. This process typically entails selecting asset classes based on their risk and reward attributes. Striking an optimal balance between risk and reward is no easy task, given their conflicting nature. This problem is referred to as portfolio optimisation and is commonly formulated and solved via the well-known mean-variance optimisation procedure, based on the pioneering work of Harry Markowitz. The objective function is formulated as a quadratic programming problem that seeks to maximise expected return whilst minimising risk. While this approach presents an auspicious foundation for solving a portfolio optimisation problem, it does not incorporate the unique liabilities (such as future payments or claims) inherent to most institutional investors. The aim of the study is therefore to provide a roadmap outlining how assets and liabilities are dovetailed to enhance the decision-making process around portfolio optimisation. To achieve this, the notions of asset-liability management (ALM) and liability-driven investing (LDI) are introduced to manage both assets and liabilities coherently. This would ultimately ensure an institutional investor's long-term financial sustainability. To add a practical ingredient to this thesis, a real-world case study for a re-insurer is examined. Essentially, the roadmap is applied to the case study to solve a complete portfolio optimisation problem from an LDI perspective. The results of the unconstrained asset allocation reveal the optimiser's preference to allocate chiefly to a small range of asset classes.
While this outcome may be theoretically appropriate, it presents a practical challenge given potential concentration risks and a lack of portfolio diversification opportunities. For this reason, constraints are imposed within the optimisation procedure, resulting in a more diversified and larger array of asset classes to include within a portfolio. To aid the model validation component and to lend credence to the results, subject matter experts were consulted. The outcome of this validation was that the process embarked upon, as well as the results produced, are reasonable and resonate with industry standards. To supplement the model validation and serve as a reasonableness check, a comprehensive sensitivity analysis was undertaken on key input parameters, such as expected return, to assess the impact these have on the optimal portfolio of assets.
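The mean-variance formulation described in this abstract can be sketched in a few lines. The following is an illustrative toy with made-up returns, covariances and a generic solver, not the re-insurer's actual model or data:

```python
import numpy as np
from scipy.optimize import minimize

# Toy mean-variance (Markowitz) optimisation over three hypothetical
# asset classes; all numbers are invented for illustration.
mu = np.array([0.06, 0.09, 0.12])              # expected annual returns
cov = np.array([[0.020, 0.004, 0.002],
                [0.004, 0.050, 0.010],
                [0.002, 0.010, 0.090]])        # covariance of returns
risk_aversion = 3.0

def objective(w):
    # Quadratic programme: penalised portfolio variance minus expected return.
    return risk_aversion * w @ cov @ w - mu @ w

n = len(mu)
res = minimize(objective, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,                          # long-only
               constraints=[{"type": "eq",
                             "fun": lambda w: w.sum() - 1.0}])   # fully invested
weights = res.x
print(weights.round(3), "expected return:", round(float(mu @ weights), 4))
```

Tightening the bounds (e.g. per-asset caps) mirrors the constrained run the abstract describes, which forces a more diversified allocation.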
- Design and implementation of a two-element interferometer (Stellenbosch : Stellenbosch University, 2022-04). Schleich, Stella; De Villiers, Dirk; Stellenbosch University. Faculty of Engineering. Dept. of Electrical and Electronic Engineering. ENGLISH ABSTRACT: This thesis describes the design and implementation of a two-element interferometer, intended to exhibit the basics of radio astronomy and act as a demonstrator of interferometry. The instrument consists of two identical channels, each comprising three subsystems. The first is an offset reflector antenna system, for which axially corrugated horn antennae are designed as the feed antennae. The second subsystem is a receiver chain consisting of off-the-shelf components; a dual-conversion superheterodyne receiver is designed to process the radio frequency signal before digitisation. The last subsystem is a digital correlator, consisting of digitisation of the signal and correlation between the two receiver channels, implemented using a Red Pitaya and MATLAB. The system has an operating frequency of 12 GHz with a 60 MHz bandwidth. The chosen source for observation is the Sun, and the instrument is oriented in an east-west direction while taking meridian drift scan observations. The observations show that it is possible to detect the Sun.
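The digital correlation step can be illustrated with a toy simulation: two channels observe the same noise-like sky signal with a relative geometric delay, and the cross-correlation peak recovers that delay. The sample counts, noise level and delay below are invented; the thesis's correlator is implemented on a Red Pitaya with MATLAB, not in Python:

```python
import numpy as np

# Toy sketch of two-element interferometer correlation: a common
# noise-like source arrives at channel 2 a few samples later than
# channel 1; correlating the channels recovers the delay.
rng = np.random.default_rng(0)
n = 4096
sky = rng.standard_normal(n)                               # common source signal
delay = 5                                                  # geometric delay, samples
ch1 = sky + 0.3 * rng.standard_normal(n)                   # receiver channel 1
ch2 = np.roll(sky, delay) + 0.3 * rng.standard_normal(n)   # delayed channel 2

# Cross-correlate over a range of trial lags; the peak marks the delay.
lags = np.arange(-32, 33)
xcorr = np.array([np.dot(ch1, np.roll(ch2, -lag)) for lag in lags])
best_lag = int(lags[np.argmax(xcorr)])
print("recovered delay (samples):", best_lag)
```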
- Design of a soluble P-selectin biosensor for detecting platelet activation (Stellenbosch : Stellenbosch University, 2019-12). Laubscher, Riaan Willem; Perold, W. J.; Stellenbosch University. Faculty of Engineering. Dept. of Electrical and Electronic Engineering. ENGLISH ABSTRACT: The main causes of death are shifting from communicable diseases, such as tuberculosis, to non-communicable diseases, such as cardiovascular disease. This disease, along with many others, has been associated with chronic inflammation, which is in turn interrelated with activated blood platelets. When platelets activate, they release biomolecules that can be used as biomarkers for diagnosing or monitoring diseases. Currently, there are no highly accurate tests that can measure the level of chronic inflammation, and conventional tests are either costly, time-consuming, or impractical in remote areas. Therefore, there is room for improvement in the diagnostics of chronic inflammation. This project set out to find a potential biomarker that is a good indicator of platelet activation and is upregulated in an individual with inflammation. Soluble P-selectin was identified as a platelet membrane receptor that is shed into the bloodstream upon activation. A study of the different biosensing techniques was conducted, and a label-free electrochemical biosensor approach was chosen for this work. Electrochemical biosensors have been shown to be robust and can be manufactured with relatively low-cost materials. Commercial graphene oxide and carbon nanofiber screen-printed electrodes were acquired to establish a working proof of concept. The sensors were electrografted with 4-carboxyphenyl diazonium salt to create support groups for the subsequent attachment of human soluble P-selectin antibodies through EDC/NHS crosslinking chemistry. Square wave voltammetry was identified as a very sensitive diagnostic technique that can be used to quantify the concentration of biomolecules in a sample.
Ferricyanide/ferrocyanide and hexaammineruthenium(III) chloride were used as the two redox probes for the voltammetry measurements. In parallel, an application-specific potentiostat device was developed to apply the square wave potential and measure the sensor response. The portable device had a potential range of ±1.65 V and a current range of ±244 µA. The project used three different approaches for the detection tests over a range of five full-scale experiments. Even though none of the approaches were successful in realising a linear detection range, one approach showed a difference in response between sP-selectin and non-specific CRP. Due to an unfortunate event with the sensors at the close of the project, the home-built potentiostat could not be used to perform the final detection measurements of the immunosensor. However, it showed excellent performance against a conventional potentiostat device for detecting ferricyanide/ferrocyanide concentrations in the range of 0.5 to 5 mM. The work done in this thesis showed that, had the immunosensor been optimised with the necessary equipment and materials, it would have been possible to use the home-built potentiostat to quantify the level of soluble P-selectin in a sample.
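The square-wave excitation such a potentiostat must synthesise is conventionally a staircase ramp with a superimposed square wave; the current is sampled at the end of each forward and reverse half-cycle, and the difference is reported. All parameter values in this sketch are illustrative placeholders, not the thesis device's settings:

```python
import numpy as np

# Sketch of a square-wave voltammetry excitation waveform: a staircase
# ramp with a superimposed square wave (illustrative values only).
e_start = -0.2          # starting potential, V
step = 0.004            # staircase step height, V
amplitude = 0.025       # square-wave half-amplitude, V
n_steps = 200           # number of staircase levels
samples_per_half = 10   # samples held per half-cycle

levels = e_start + step * np.arange(n_steps)
half_cycles = []
for e in levels:
    half_cycles.append(np.full(samples_per_half, e + amplitude))  # forward pulse
    half_cycles.append(np.full(samples_per_half, e - amplitude))  # reverse pulse
waveform = np.concatenate(half_cycles)
# The current would be sampled at the end of each half-cycle; the reported
# signal is the difference i_forward - i_reverse at every staircase step.
print(waveform.size, round(waveform.max(), 3), round(waveform.min(), 3))
```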
- Development of a digital rapid training course for improving the additive manufacturing adoption rate - fused filament fabrication (Stellenbosch : Stellenbosch University, 2022-04). van Wageningen, Roelof Pienaar; Hagedorn-Hansen, Devon; Von Leipzig, Konrad. ENGLISH SUMMARY: Additive Manufacturing (AM) technologies, such as Fused Filament Fabrication (FFF), have a slow adoption rate. Training on these AM technologies is typically not included in primary to tertiary education curricula, and studies have shown that this lack of education negatively affects the adoption rate. This issue was addressed in this study by developing a digital rapid training course on FFF. A literature study was first performed to gain a better understanding of the different AM technologies and their adoption. The focus was then shifted to the learning methods and platforms used in the educational sphere. After completing the literature study, it was concluded that training users in FFF can help improve the adoption rate of the technology. The knowledge gained through the literature study was then used to develop a cross-platform digital training course (Web, iOS, and Android) aimed at introducing users to, and educating them in, FFF. The course consists of teaching sessions, tests, and questionnaires. The course was made available to the general public (free of charge) for a year with no specific target group, allowing users with and without FFF experience to participate. The training course automatically gathered quantitative and qualitative data by recording users' answers during tests and questionnaires respectively. The course was completed by 198 participants. This data was then analysed to determine whether the training course increased the users' knowledge of, confidence to engage with, and likelihood to adopt the FFF technology.
Of the participants, 87% claimed that their level of knowledge and understanding of FFF increased by participating in the course. The majority (94%) of the participants stated that they were more likely to interact with the technology after participating. The users with no prior knowledge of or experience with the technology were found to have benefited the most from the course. Such individuals can be targeted during the development and deployment of AM courses to have the biggest impact on the adoption rate. It was concluded that the training course increased the majority of users' knowledge of, confidence to engage with, and likelihood to adopt the FFF technology.
- Development of a maintenance possession scheduler for a railway (Stellenbosch : Stellenbosch University, 2022-04). Cillie, Dewald; Bekker, James F. ENGLISH SUMMARY: Maintenance of rail infrastructure is an important element of rail operations in order to keep traffic moving. However, maintenance causes infrastructure to be taken out of service, which impacts traffic flow. In this study, the requirements of a maintenance possession scheduler for a South African application were investigated, and a proposed solution was subsequently developed. The main objective of the scheduler was to minimise the deviation of the train service on a subset of rail infrastructure while ensuring that the required maintenance is done. To achieve this, a literature study was done on a number of themes, including an overview of the local railway operator, with a look at the role of industrial engineering as a function in the railway operator's business, railway infrastructure and operations, planning of railway operations, and maintenance in the context of rail operations. The topic of possession scheduling was then studied; the previous themes helped the researcher to see the bigger picture, while an understanding of possession scheduling is critical for this study. Past and recent works were studied, and research areas and trends were synthesised, including the time span of possession scheduling in optimisation models and whether scheduling was done on a microscopic, mesoscopic or macroscopic level. The various optimisation objectives formulated by researchers were also noted, among other subthemes. An application case was identified as the railway infrastructure between Bellville and Wellington in the Western Cape province of South Africa. A novel mixed-integer linear programming model was formulated for this case and implemented in CPLEX, after which it was validated. The model can do possession scheduling for 24 hours on a microscopic level.
Finally, several experiments were conducted to investigate the performance and results of the model. It was found that the model delivered optimal results in less than eight minutes, which makes it a feasible maintenance possession scheduler for day-to-day work in the immediate planning horizon.
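The core trade-off of possession scheduling, disrupting as few trains as possible while still fitting every maintenance job in, can be illustrated with a deliberately tiny brute-force example. The data below are hypothetical, and the thesis's model is a far larger microscopic MILP solved in CPLEX rather than an enumeration:

```python
from itertools import product

# Toy possession-scheduling illustration: choose one exclusive 1-hour
# slot per maintenance job so that the number of disrupted trains is
# minimised (hypothetical traffic counts).
trains_per_slot = [6, 2, 1, 1, 3, 8]   # trains using the section each hour
jobs = 2                               # jobs, each needing one exclusive slot

best_cost, best_plan = None, None
for plan in product(range(len(trains_per_slot)), repeat=jobs):
    if len(set(plan)) < jobs:          # jobs may not share a slot
        continue
    cost = sum(trains_per_slot[s] for s in plan)   # trains disrupted
    if best_cost is None or cost < best_cost:
        best_cost, best_plan = cost, plan
print(best_plan, best_cost)            # the two quietest hours win
```

A MILP solver reaches the same answer with binary slot-assignment variables instead of enumeration, which is what makes day-sized microscopic instances tractable.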
- Development of a smart trap for the surveillance of invasive fruit flies using internet of things and artificial intelligence (Stellenbosch : Stellenbosch University, 2022-04). Deacon, Quintus; Louw, Louis; Palm, Daniel. ENGLISH SUMMARY: Invasive fruit flies are of major concern to the agricultural industry, causing millions of rands in losses due to harvest damage, trade bans, and surveillance costs. Current surveillance methods for invasive fruit flies consist of entomologists manually inspecting fruit fly traps to determine the species of fruit flies captured. This process is time intensive, expensive, and inaccurate. This study proposes a smart trap approach based on vision system technology to automate the fruit fly species classification aspect of the surveillance process. The goal of the smart trap is to serve as an early warning system for invasive fruit fly outbreaks in pest-free areas. A design science methodology was followed to design a smart trap that uses a camera embedded in a traditional fruit fly bucket trap to take images of new fruit fly captures and send them to a central server. Otsu's thresholding image segmentation was compared to the EfficientDet D0 object detector for segmenting fruit fly instances from the image provided by the smart trap camera. EfficientDet D0 had the highest precision, recall, and Intersection over Union, at 92%, 96.88% and 90.5%, respectively. Thereafter, pretrained EfficientNet B0, MobileNet V2, and MobileNet V3 Large models were trained to differentiate between the Ceratitis capitata and Ceratitis quilici fruit fly segments provided by EfficientDet D0. MobileNet V3 Large had the highest accuracy and F1-score, at 96.55% and 96.57% respectively. The object detection and image classification algorithms were trained on Google Colab using transfer learning and image augmentation. These were then executed on a Raspberry Pi 4 Model B microcomputer.
The smart trap system was accurate in distinguishing between the two fruit fly species, and capable of execution on a resource-constrained device. The smart trap system shows promise for low-cost, easily deployed smart traps, but has some issues regarding connectivity in remote areas.
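The Intersection over Union (IoU) metric reported for the detector scores the overlap between a predicted box and a ground-truth box; a minimal implementation, with hypothetical box coordinates, looks like this:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection counts as a true positive when its IoU with a ground-truth
# box exceeds a threshold (0.5 is a common choice).
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ≈ 0.143
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))   # identical boxes: 1.0
```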
- Fault identification utilizing hybrid modelling based feature extraction models (Stellenbosch : Stellenbosch University, 2022-04). Ferreira, Fabian Ethan; Cripwell, Jamie Theo; Louw, Tobias Muller; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. Fault detection and identification models are critical in process monitoring and control, as these models are essential in maintaining normal operating conditions. Fault identification models identify the type of fault occurring in a process and its cause, which allows corrective measures to be applied. Many fault identification models operate by identifying a process fault once a fault detection model has detected the presence of a fault. Fault identification is posed as a multiclass classification problem, with each class corresponding to a fault case and a normal-operation class introduced to account for the fault detection aspect. A one-vs-one multiclass support vector machine (SVM) classifier is proposed as the fault identification model. A model parameter estimation method was proposed to improve the performance of the fault identification model; the parameter estimation behaves as a feature extraction method. Hybrid modelling is identified as a possible model parameter estimation method. Hybrid modelling combines first-principles models and data-based models, where the data-based models are trained to estimate the model parameters from incoming process data. The data-based models considered are partial least squares regression (PLS), dynamic PLS, and recursive PLS models. A non-isothermal jacketed continuous stirred tank reactor model is developed as a test case, with a catalyst deactivation fault, an inlet concentration fault and a heat transfer fault applied to the model. The fault identification models are trained using process data corresponding to a catalyst deactivation-inlet concentration fault pair and a catalyst deactivation-heat transfer fault pair.
The performance of the fault identification models is compared using the sensitivity and specificity measures. The performance of fault identification models using a standard SVM and a kernel SVM with a radial basis function kernel was compared. The kernel SVM showed similar performance to the standard SVM for the catalyst deactivation fault and heat transfer fault, with sensitivity values of 0.684±0.044 and 0.752±0.067, and a shorter training time than the standard SVM model. When the performance of the classifiers incorporating non-linearly regressed model parameters was evaluated by identifying the catalyst deactivation fault and heat transfer fault, it was found that the standard SVM model using the regressed parameters had higher sensitivities (0.686±0.042, 0.811±0.031) and specificities (0.989±0.005, 0.968±0.027) than the kernel SVM using the regressed parameters, which achieved sensitivities of (0.633±0.058, 0.734±0.033) and specificities of (0.974±0.005, 0.924±0.038). When the performance of the hybrid fault identification models was evaluated, the standard SVM using dynamic PLS showed better performance than the other models, with higher sensitivities (0.695±0.041, 0.761±0.056) and specificities (0.980±0.004, 0.949±0.049). When the performance of all the models was compared, the standard SVM using non-linearly regressed parameters was found to be the best performing model. The multiclass SVM approach has been shown to be a viable fault identification method, and implementing the model-based feature extraction method was shown to improve the performance of fault identification models. It is recommended that in future work the PLS models be replaced with another data-based model, such as artificial neural networks.
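The sensitivity and specificity figures quoted above are per-class quantities derived from a multiclass confusion matrix in one-vs-rest fashion; a small sketch with made-up counts:

```python
import numpy as np

def class_sensitivity_specificity(cm, k):
    """Sensitivity (recall) and specificity for class k of a confusion
    matrix cm, where cm[i, j] counts true class i predicted as class j."""
    tp = cm[k, k]
    fn = cm[k].sum() - tp            # class-k samples predicted elsewhere
    fp = cm[:, k].sum() - tp         # other classes predicted as class k
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 3-class case: normal operation, fault A, fault B.
cm = np.array([[50,  2,  3],
               [ 4, 40,  6],
               [ 1,  5, 44]])
sens, spec = class_sensitivity_specificity(cm, 1)   # fault A
print(round(sens, 3), round(spec, 3))
```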
- Fractional condensation of pyrolysis volatiles produced from desulphurised waste tyre feedstock (Stellenbosch : Stellenbosch University, 2022-04). Stander, Adam Johannes; Görgens, Johann Ferdinand; Knoetze, Johannes Hendrik; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH SUMMARY: The world has always struggled with pollution and with finding sustainable ways to manage waste disposal. Waste valorisation strategies have therefore become of the utmost importance in the engineering industry. Millions of tons of waste tyres are currently produced each year. Depolymerisation of these tyres can produce valuable products, which provides a promising, sustainable waste valorisation strategy. One of the most prominent depolymerisation strategies currently employed is pyrolysis. The pyrolysis of waste tyres produces lumped oil, char and non-condensable gas. The lumped oil is commonly referred to as tyre-derived oil (TDO). TDO has been investigated as a potential fuel replacement; however, several problems have been identified. TDO has a large boiling point range, high sulphur content, high aromatic content, low flashpoint and a large concentration of heavy-boiling-point (> 350 ˚C) molecules. Several studies have aimed to improve the quality of TDO through fractionation by distillation. Through these studies, it became apparent that TDO possesses three distinctive fractions (light, medium and heavy). These fractions share some similarities with crude-derived fuel products such as gasoline, diesel and marine bunker oil (MBO). This study had two distinct goals: (i) to achieve significant desulphurisation of the waste tyre feedstock and subsequently reduce the sulphur content of the produced oils; and (ii) to fractionate the waste tyre pyrolysis vapours to produce three distinctive pyrolysis oil fractions comparable to commercial crude-derived fuels.
The desulphurisation was achieved by treating the waste tyre feedstock through low-temperature (180 – 220 ˚C) pyrolysis. The maximum desulphurisation was achieved at a treatment temperature of 220 ˚C, producing a feedstock with a 54.1 wt% reduction in elemental sulphur. Feedstock for the pyrolysis experiments was desulphurised at 190 ˚C (35.6 wt% elemental sulphur reduction, from 2.342 wt% to 1.508 wt%) to avoid the significant rubber degradation that occurs at temperatures above 200 ˚C. The oil yield was optimised for reactor temperature and volatile residence time (nitrogen gas flowrate) through a two-factor central composite inscribed design. The reactor temperature was found to significantly influence the oil yield over the parameter ranges investigated, while the nitrogen gas flowrate did not. The optimal oil yield (42.50 wt%) was achieved at set points of 524 ˚C for reactor temperature and 1.9 l/min (volatile residence time of 3 min) for nitrogen gas flowrate. The fractional condensation system consisted of three controlled-temperature condensers, which operated at 200 ˚C, 160 ˚C and 10 ˚C. Each condenser produced a distinctive fuel fraction in terms of boiling point range. Significant fractionation was achieved, as the boiling point range of the lumped pyrolysis oil (54.41 – 246.23 ˚C) nearly covered the combined boiling point ranges of the obtained light-cut (48.99 – 77.32 ˚C), medium-cut (74.98 – 225.25 ˚C) and heavy-cut (133.12 – 288.75 ˚C) pyrolysis oil fractions. Overlap of the boiling point ranges of the produced fractions was ascribed to insufficient volatile residence times within condensers 1 and 2 of the system. The obtained fractions were analysed to determine their respective fuel properties (density, viscosity, flashpoint, ash content, moisture content and sulphur content), and were compared to commercial crude-derived products and to other pyrolysis fractions obtained through distillation of lumped pyrolysis oil.
The heavy-cut fraction was comparable to commercial MBO products (Engen 180, Engen 150) for all of the fuel properties tested. The sulphur content (0.79 wt%) did not, however, comply with the IMO regulations published in January 2020, which reduced the allowable sulphur content of marine fuels to 0.5 wt%. The medium-cut fraction was comparable to Chevron No. 4 diesel for viscosity, moisture content, ash content and sulphur content. The light-cut fraction compared favourably with regard to viscosity, ash content and moisture content, while it was unfavourable in terms of density and sulphur content. It was concluded that the fractional condensation of waste tyre pyrolysis volatiles is possible without significant degradation of the produced oil fractions. The TDO fractions' fuel characteristics can be manipulated by varying condenser operating temperatures, similar to distillation. The TDO fractions will, however, require extensive processing to reduce the sulphur content to within automotive fuel regulations.
- High-throughput biomethane potential (BMP) tests as predictors for commercial-scale anaerobic digester performance (Stellenbosch : Stellenbosch University, 2022-04). van der Berg, David John; Görgens, Johann F.; Van Rensburg, Eugene; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH SUMMARY: Under steady-state conditions, full-scale anaerobic digestion (AD) plants generate methane gas by utilising industrial organic wastes such as food and beverage processing wastes. The selection and monitoring of AD operating parameters provides insight into the process dynamics of the system, which presents opportunities to enhance the digester's performance. However, predicting full-scale performance comes with several challenges, including the limitations of using bench-/lab-scale AD tests to accurately estimate industrial-scale AD performance, the impacts of environmental effects on biogas plant operations, and the differences in process conditions between AD test scales. Bench-scale tests such as biomethane potential (BMP) assays are used to estimate the methane potential and degradability of a particular feedstock. BMP tests serve as indicators of AD performance and as the basis for designing full-scale plants according to an expected biogas output. However, these methods are not always reliable due to the differences in bench- and full-scale AD process conditions, for example different reactor feeding modes. Furthermore, there is a lack of standardisation in BMP test protocols, which impacts the reproducibility of BMP results. Very few studies have attempted to predict the performance parameters of full-scale AD plants based on bench-scale AD experimental data. Performance parameters such as biogas and methane yields have been estimated for full-scale processes, but not over long durations where variations in feedstock composition are accounted for.
This study aimed to utilise bench-scale BMP tests that were efficient in estimating the performance of full-scale AD plants over an operational period spanning three years. The term "high-throughput BMP tests" refers to performing numerous BMP tests simultaneously. Three full-scale AD plants were included in this study, namely a co-digestion plant of 3200 m³ working volume treating mixed food and agricultural wastes (Plant 1), a full-scale plant of 60 m³ working volume treating tomato wastes (Plant 2) and a liquid-based plant of 2200 m³ working volume treating distillery wastes (Plant 3). BMP tests (500 mL) were performed using a defined, standardised BMP protocol on feedstock samples collected over a period of 6 to 8 months. Pilot-scale studies (50 L) were performed under operating conditions replicating those at full scale, to assess whether pilot-scale data could predict full-scale performance parameters more accurately than BMP tests. Full-scale performance was predicted using two methods identified from the literature: (1) an extrapolation method and (2) a continuous stirred-tank dynamic model. Estimated full-scale performance parameters were then compared to real-time full-scale data to assess the deviation between ideal, bench-scale conditions and full-scale conditions. These deviations, termed "scale factors" throughout this dissertation, were defined as the ratio between real-time and estimated full-scale performance parameters. The three full-scale AD plants encountered various process disturbances, for example variations in feedstock composition, which were observed in their full-scale operational datasets. It was found that BMP tests could be used to estimate the performance of full-scale AD processes using the two aforementioned methods. Pilot-scale data could not be used to estimate full-scale AD process performance due to errors encountered during the experimental procedures.
For biogas production and yield estimations, the extrapolation method yielded scale factors of 0.42 for Plant 1 (mixed food wastes), 1.05 for Plant 2 (tomato wastes) and 0.69 for Plant 3 (distillery wastes). The dynamic model provided more accurate estimations of full-scale performance, with scale factors of 0.86, 3.10 and 0.92 calculated for Plants 1, 2 and 3, respectively. These scale factors have the potential to estimate the energy production potential of downstream power units for full-scale AD plants by accounting for changes in feedstock composition. Recommendations from this study included obtaining more reliable full-scale operational datasets by ensuring sufficient process monitoring instrumentation (e.g. gas flow meters) is installed at the plant, obtaining operational data spanning a longer time frame (Plants 1 and 2), and ensuring that the design of pilot-scale AD reactors better suits the conditions of full-scale AD plants, e.g. by establishing the same feeding mode.
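The scale-factor arithmetic above is a simple ratio, and cumulative BMP data are commonly fitted with a first-order kinetic curve, B(t) = B0 * (1 - exp(-k*t)). Both are sketched below with made-up numbers; the kinetic form is a generic assumption for illustration, not necessarily the extrapolation model used in the study:

```python
import math

# Scale factor: ratio of observed full-scale output to the output
# estimated from bench-scale BMP data (all values here are invented).
def scale_factor(real_full_scale, estimated_full_scale):
    return real_full_scale / estimated_full_scale

def bmp_first_order(b0, k, t):
    """Cumulative methane yield at day t from a first-order kinetic fit,
    given ultimate yield b0 (e.g. mL CH4 per g VS) and rate k (1/day)."""
    return b0 * (1.0 - math.exp(-k * t))

estimated = bmp_first_order(b0=350.0, k=0.15, t=30)   # hypothetical BMP curve
print(round(estimated, 1))
print(round(scale_factor(real_full_scale=300.0,
                         estimated_full_scale=estimated), 2))
```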
- Homogenised continuum model for structural analysis of unreinforced alternative masonry walls (Stellenbosch : Stellenbosch University, 2021-12). Thompson, Lemuel Yaw; De Villiers, Wibke; Stellenbosch University. Faculty of Engineering. Dept. of Civil Engineering. ENGLISH ABSTRACT: The construction of 40 m², low-cost, single-storey, state-subsidised houses with conventional masonry units (CMUs) to address the South African housing shortage of more than 2 million units poses an environmental sustainability problem. The use of concrete and burnt clay in these large volumes has a significant negative impact on the environment in the form of CO2 emissions and the use of non-renewable natural resources. This, coupled with the introduction of a carbon tax by the South African parliament, incentivises the use of alternative masonry units (AMUs). South African design guidelines and codes for CMUs are in widespread use; however, to apply these directly to AMUs would be inappropriate. Investigating the suitability of the national building regulations (NBR) as applied to AMUs would require either large-scale experimental testing or finite element (FE) analyses of masonry walls. The latter is preferable due to the high material and procedural costs involved in the former. A macro-modelling strategy using the FE approach is proposed in this study to facilitate the study of AMU buildings. The masonry materials of focus are concrete (CON), geopolymer (GEO), compressed stabilised earth (CSE) and adobe (ADB), with the last three being AMUs. The elastic and inelastic homogenised properties of masonry are determined via empirical and analytical homogenisation strategies applied to the unit, mortar and interface properties from Fourie (2017), Shiso (2019), Jooste (2020) and Schmidt (2020). These homogenised properties are used in the nonlinear FE validation of the in-plane (IP) and uni-axial out-of-plane (OP) loaded masonry walls of Shiso (2019) and Jooste (2020), respectively.
The models showed good reproduction of the IP load-displacement behaviour and satisfactory results for the uni-axial OP behaviour. Since masonry walls in buildings are typically under bi-axial loading, the simplified micro-modelling bi-axial results on masonry walls from De Villiers (2019) are used as a baseline for validating the homogenised bi-axial properties, due to the lack of bi-axial experimental loading tests on the materials of this study. This exercise showed that the macro-modelling strategy provides a good estimate of CON and GEO behaviour and a satisfactory estimate of CSE and ADB behaviour. Single-storey, fully detached masonry buildings of 40 m² and 80 m², with 90 mm, 110 mm and 140 mm unit sizes, developed by Rabie (unpublished), are then modelled and analysed using the macro-modelling strategy. The wall IP and OP capacities under the ultimate limit state for wind (ULS-W), the serviceability limit state (SLS) and the ultimate limit state for seismic loading (ULS-S) are chosen as the focus. The results showed that the deemed-to-satisfy provisions of the NBR do not sufficiently cover CMUs under ULS-W/SLS for wind speeds of 44 m/s, 40 m/s and 36 m/s, as most walls failed to achieve the required loads. Likewise, the NBR provisions proved inadequate when applied to the AMUs under ULS-W/SLS for these wind speeds. Under ULS-S, it was found that CON walls with enough clearance between the wall edge and an adjacent opening had adequate capacity to resist the seismic loads. Similar walls in GEO and CSE also showed promising performance under ULS-S. This demonstrates the inadequacy of the geometry limits of the NBR regarding ULS-S, since there were walls that met the deemed-to-satisfy limits of the NBR but failed under ULS-S due to inadequate opening clearance. These findings further confirm those of De Villiers (2019) and call for a revision of the deemed-to-satisfy wall geometry provisions of the NBR to cater for CMUs properly and allow for the inclusion of AMUs.
- ItemThe Influence of neck muscle characteristics on head kinematics during lateral impacts : a simulation based analysis(2023-03) Bergh, Oloff Charles Wessel; Van der Merwe, Johan; De Jongh, Cornel; Derman, Wayne; Stellenbosch University. Faculty of Engineering. Institute of Biomedical Engineering.ENGLISH SUMMARY: The skull contains the most critical component of the human body, the brain. Large changes in the velocity and acceleration of the skull, specifically in an angular manner, have been associated with an increased risk of concussion or mild traumatic brain injuries. Modifiable risk factors can be defined as intrinsic characteristics that can be altered to decrease the risk of head injury. Previous studies have investigated neck muscle strength as a potential modifiable risk factor in sports research. However, literature appears to be divided regarding the influence of neck muscle strength on head kinematics and injury risk. Additionally, research associated with individuals who demonstrate a decline in neck muscle strength compared to control subjects appears to be scarce, potentially due to ethical concerns. This project aims to contribute to current literature and evaluate the influence of neck muscle characteristics, such as the maximum isometric and eccentric strength, on the kinematics of the skull during laterally induced head collisions through a simulation-based approach. Multibody dynamic computer models were used to determine the influence of neck muscle characteristics on head kinematics and subsequent head injury risks. The models were based on the original Hyoid model in OpenSim by Mortensen, Vasavada and Merryweather (2018), which has been verified and validated against experimental responses with similar total neck muscle strength values. The Normal model in this project demonstrated the same muscle characteristics as the original Hyoid model. 
The two stronger models, referred to as the Intermediate and Max models, have increased maximum isometric and eccentric muscle strength compared to the Normal model. The Intermediate model has realistically achievable neck muscle characteristics of an individual who has undergone specific neck training, while the Max model represents a highly trained athlete with significantly strengthened neck musculature. The Decreased model has lower total neck muscle strength than the Normal model and is based on the reductions in muscle characteristics of elderly individuals. The static optimization tool within the OpenSim environment was used to determine the optimal muscular activations of the different models. These activations were subsequently used in the forward dynamics tool to determine the influence of the neck muscle characteristics on head kinematics during increasing lateral impacts. The head kinematics were then used to calculate the head injury criterion (HIC15), a commonly used metric for estimating the extent of head injuries based on empirical data. The stronger models consistently showed lower head kinematic and HIC15 values compared to the Normal model, while the Decreased model consistently demonstrated higher kinematic values and a greater risk of injury. At low external forces there was a considerable influence of the neck muscle characteristics on head kinematics and injury risk. However, a non-linear trend indicated that the influence of the neck muscles declined as the external force increased. This could indicate that the influence of the neck muscle characteristics is overshadowed by large external forces, but could still play a role in reducing head kinematics and injury risk at lower forces.
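The abstract above reports HIC15 values derived from simulated head kinematics. As a rough sketch of how the head injury criterion is conventionally computed from a resultant acceleration trace (the function name, sampling, and tolerance handling are illustrative assumptions, not taken from the study): HIC is the maximum over all windows no longer than 15 ms of the window duration times the mean acceleration (in g) raised to the power 2.5.

```python
import numpy as np

def hic15(t, a):
    """Head Injury Criterion over a 15 ms window.
    t: sample times in seconds; a: resultant (non-negative) head
    acceleration in g. Returns max over windows (t2 - t1 <= 0.015 s) of
    (t2 - t1) * (mean acceleration over the window) ** 2.5."""
    t = np.asarray(t, dtype=float)
    a = np.asarray(a, dtype=float)
    # Cumulative integral of a(t) by the trapezoidal rule.
    cum = np.concatenate(([0.0], np.cumsum(np.diff(t) * (a[:-1] + a[1:]) / 2.0)))
    best = 0.0
    n = len(t)
    for i in range(n):
        for j in range(i + 1, n):
            dt = t[j] - t[i]
            if dt > 0.015 + 1e-12:  # 15 ms limit, small floating-point tolerance
                break
            avg = (cum[j] - cum[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best
```

For a constant 100 g pulse lasting longer than 15 ms, the worst window is the full 15 ms, giving HIC15 = 0.015 × 100^2.5 = 1500, which is a useful sanity check for any implementation.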
- ItemSimulating the South African forestry supply chain(Stellenbosch : Stellenbosch University, 2022-04) Laubscher, Jennifer Mignonne; Bekker, James F.; Ackerman, SimonENGLISH SUMMARY: The purpose of this study was to provide base simulation models that demonstrated some capabilities of discrete-event stochastic simulation when applied to the South African pulp- and saw timber supply chains from nursery- to mill-gate. The simulation models were designed to support strategic decision-making by allowing scenario analysis through experimentation and bi-objective optimisation. A literature study was performed on various topics relating to the South African forestry industry, supply chain management and simulation modelling. This literature study included a discussion of various simulation software packages, from which one, namely Siemens Tecnomatix Plant Simulation, was selected for use in this study. To simplify the simulation modelling process, two case studies were developed through stakeholder and subject-matter expert consultation. The case studies were converted into process flow diagrams, which could further be expanded into concept models. Thereafter, the simulation input data requirements were identified and the concept models could be translated into computerised versions in the chosen simulation software. The simulation models were built following a systematic and iterative approach. Final verification was done through structured model walk-throughs, error elimination and entity tracing tests. Validation was done in consultation with project stakeholders and subject-matter experts, as well as through validation experimentation. After the simulation models were verified, validated and proven to be credible representations of the real-world pulpwood- and saw timber supply chains from nursery- to mill-gate, their experimental capabilities were demonstrated.
The models were found to have a large combinatorial nature with regard to the number of experiments that can be performed, and exhibited the capability to perform "what-if" analysis and bi-objective optimisation.
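Bi-objective optimisation of simulation scenarios, as mentioned above, typically reduces a set of experimental results to the non-dominated (Pareto) set. A minimal sketch of that filtering step follows; the objective pair (cost to minimise, throughput to maximise) and the function name are hypothetical examples, not the objectives used in the thesis.

```python
def pareto_front(points):
    """Keep only non-dominated (cost, throughput) observations, where cost
    is minimised and throughput is maximised. A point is dominated if some
    other point is at least as good in both objectives and differs in at
    least one."""
    front = []
    for c, tp in points:
        dominated = any(
            c2 <= c and tp2 >= tp and (c2, tp2) != (c, tp)
            for c2, tp2 in points
        )
        if not dominated:
            front.append((c, tp))
    return front

# Example: of four scenario outcomes, only two are non-dominated.
front = pareto_front([(1, 5), (2, 7), (3, 6), (2, 4)])
```

The resulting front is what a decision-maker would inspect to trade one objective off against the other, rather than a single "optimal" scenario.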
- ItemTowards a framework to guide the development of ICT4D: A South African Perspective(Stellenbosch : Stellenbosch University, 2019-12) Coetzee, Lauren Lize; Grobbelaar, Sara; Bam, Wouter; Schutte, C. S. L.; Stellenbosch University. Faculty of Engineering. Dept. of Industrial Engineering.ENGLISH ABSTRACT: Information and Communication Technologies for Development (ICT4D) have the capability of facilitating the flow of knowledge, which offers developing countries opportunities to enhance systems that may assist in poverty alleviation and other developmental initiatives. Even though ICT4D holds the potential of harnessing ICTs for social development, a significant percentage of ICT4D projects still fail to deliver the results for which they were developed. Many ICT4D projects have made no difference, and some have even caused harm in the communities in which they have been implemented. To determine how ICTs can be harnessed to serve as a catalyst and not a hindrance for social transformation, one needs to consider the literature surrounding the topic of ICT4D. There is, however, an overall lack of consistent theory surrounding the process of developing ICT4D, and a lack of empirical evidence of appropriate methods for the development of ICT4D. To address this lack of evidence, this study explores best practices for the development of ICT4D in developing countries, specifically in South Africa (SA). The findings are used to develop a framework to guide the formation of ICT4D, specifically within the Analysis and Design phases of development. An iterative seven-step process by Jabareen (2011), the Conceptual Framework Analysis (CFA) process, was adopted to develop the ICT4D framework. These steps, and thus the formation of the framework, were accomplished within two overarching research phases: 1) a theoretical study and 2) an empirical study.
The study is situated within three fields of literature, namely 1) Information System Development, 2) Human Development and 3) Information Communication Technologies for Development. The theoretical study investigated these three fields to understand the functioning of ICT4D from the perspective of each and to develop a theoretical base of knowledge within each field. The exploration of these three fields formed part of the overview literature study and the Systematic Literature Review. The findings resulted in various analytical, design and functional concepts that were integrated to develop the preliminary Analytical and Design Framework. The empirical study followed, adopting a mixed-methods approach comprising two stages: 1) qualitative semi-structured interviews and 2) a quantitative framework-ranking exercise. The findings within this phase were applied to improve and validate the framework, yielding the final framework. A positive response resulted from the empirical study, and the framework was validated as needed, reliable, relevant and useful within the ICT4D domain. Even though the validity of the framework was established, further study is required to map the issues that may arise through implementation, and to confirm its usefulness in real-life situations. Since no single coherent approach exists for developing ICT4D, but rather a combination of frameworks and tools is needed, the final framework is a contribution to the available tools and approaches that can guide the development of ICT4D within the Analysis and Design phases.