Masters Degrees (Chemical Engineering)
Browsing Masters Degrees (Chemical Engineering) by Title
Now showing 1 - 20 of 440
- ItemAcoustic monitoring of DC plasma arcs(Stellenbosch : Stellenbosch University, 2008-03) Burchell, John James; Eksteen, J. J.; Niesler, T. R.; Aldrich, C.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The arc, generated between the cathode and slag in a dc electric arc furnace (EAF), constitutes the principal source of thermal energy in the furnace. Steady state melting conditions rely on efficient control of the arc's power. This is achieved by keeping the arc's length constant, which is currently not directly measured in the industry, but relies on an external voltage measurement. This voltage measurement is often subject to inaccuracies since it may be influenced by voltage fluctuations that are not necessarily related to the arc itself, such as the variable impedance of the molten bath and the degradation of the graphite electrode. This study investigated whether or not it is possible to develop a sensor for the detection of arc length from the sound that is generated by the arc during operation. Acoustic signals were recorded at different arc lengths using a 60 kW dc electric arc furnace and 600 g of mild steel as melt. Using a filterbank (FB) based kernel Fisher discriminant (KFD) analysis method, nonlinear features were extracted from these signals. The features were then used to train and test a k-nearest neighbour (kNN) classifier. Two methods were used to evaluate the performance of the kNN classifier. In the first, both test and training features were extracted from acoustic signals recorded during the same experimental run, and a ten-fold bootstrap method was used to ensure the integrity of the results. The second method tested the generalized performance of the classifier. This involved training the kNN classifier with features extracted from the acoustic recordings made during a single or multiple experimental runs and then testing it with features drawn from the remaining experimental runs. The results from this study show that there exists a relationship between arc length and arc acoustics which can be exploited to develop a sensor for the detection of arc length from arc acoustics in the dc EAF. Indications are that the performance of such a sensor would rely strongly on how statistically representative the acoustic data used to develop the sensor are of the acoustics generated by industrial dc EAFs during operation.
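As a rough illustration of the classification step described in the abstract above (not the thesis code, whose features come from a kernel Fisher discriminant analysis of furnace recordings), the sketch below trains a k-nearest neighbour classifier on synthetic feature vectors for three arc-length classes and estimates its accuracy with a ten-repetition bootstrap; all data, class counts and kNN settings are illustrative assumptions.

```python
# Minimal sketch: classify arc-length classes from acoustic features with kNN,
# and estimate accuracy with a bootstrap resampling loop (synthetic data, not the
# thesis dataset; the kernel Fisher feature extraction step is omitted).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 100, 5, 3

# Synthetic "features" for three arc-length classes.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

accuracies = []
for _ in range(10):                              # ten bootstrap repetitions
    idx = rng.integers(0, len(y), len(y))        # sample with replacement
    oob = np.setdiff1d(np.arange(len(y)), idx)   # out-of-bag samples for testing
    clf = KNeighborsClassifier(n_neighbors=5).fit(X[idx], y[idx])
    accuracies.append(clf.score(X[oob], y[oob]))

print(f"bootstrap accuracy: {np.mean(accuracies):.2f} +/- {np.std(accuracies):.2f}")
```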
- ItemAdaptive process monitoring using principal component analysis and Gaussian Mixture Models(Stellenbosch : Stellenbosch University, 2019-04) Addo, Prince; Auret, Lidia; Kroon, R. S. (Steve); Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Principal component analysis (PCA) is a well-known technique used in combination with monitoring statistics for fault detection. Moving window PCA and recursive PCA are adaptive extensions of PCA that operate by periodically updating the monitoring model to incorporate new observations. This allows the monitoring model to cope with process behaviours that change slowly over time, such as equipment aging, catalyst deactivation and reaction kinetics drift, thereby improving monitoring performance. Recent demands and advancements in process industries, however, may result in multimodal operations, where distinct clusters are present in measurement data. The performance of the aforementioned PCA-based monitoring techniques is hindered due to the violation of the implicit assumption that all the observed process data belong to the same Gaussian distribution. To improve monitoring performance, multimodal techniques are required. The Gaussian mixture model (GMM) is a probabilistic model that can account for the observed modes in the process data and can therefore be used in the monitoring of multimode processes. However, multimodal processes also exhibit behaviours that change slowly over time, which such models find challenging. This work develops a monitoring approach that extends adaptive PCA techniques to GMM, which effectively addresses the aforementioned challenge. This is done by continuously refreshing the model parameters and monitoring statistics for the PCA and GMM. Other key areas of focus are improving the specification of the adaptive PCA protocol (taking into consideration the various model update methods) and of the Gaussian mixture model methods (taking into consideration the monitoring model types and data types). The performance of unimodal and multimodal process monitoring approaches was also assessed. The performance of the developed approach and the improved implementations of the pre-existing methods were assessed using various case studies including unimodal and multimodal processes, both with and without drift, as well as various fault types. The Tennessee Eastman process and the non-isothermal continuously stirred tank reactor process are the two main simulators considered. Results for the considered cases show improved fault detection performance for the developed approach (adaptive PCA-based GMM) compared to PCA, adaptive PCA, and traditional GMM. The GMM, as expected, performed better for multimodal cases than the PCA approaches. Also, the adaptive PCA approach performed better than PCA when there is process drift.
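The moving-window idea behind adaptive PCA monitoring can be sketched as follows: refit a PCA model on the most recent window of data and flag new samples whose Hotelling T² statistic exceeds a control limit. This is a minimal illustration on synthetic drifting data, not the thesis implementation; the window length, number of components and chi-square-based limit are assumptions.

```python
# Minimal sketch of moving-window PCA monitoring with a Hotelling T^2 statistic.
# Synthetic two-variable process with slow drift; window size, component count
# and the chi-square-based limit are illustrative choices.
import numpy as np
from numpy.linalg import svd
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, window, n_comp = 500, 100, 2
drift = np.linspace(0, 2, n)[:, None]
X = rng.normal(size=(n, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]]) + drift

t2 = np.full(n, np.nan)
for k in range(window, n):
    ref = X[k - window:k]                      # most recent window = reference model
    mu, sd = ref.mean(0), ref.std(0)
    Z = (ref - mu) / sd
    U, s, Vt = svd(Z, full_matrices=False)
    P = Vt[:n_comp].T                          # loadings
    lam = (s[:n_comp] ** 2) / (window - 1)     # component variances
    z_new = (X[k] - mu) / sd
    scores = z_new @ P
    t2[k] = np.sum(scores ** 2 / lam)          # Hotelling T^2 for the new sample

limit = chi2.ppf(0.99, df=n_comp)              # approximate 99% control limit
print("alarms:", np.sum(t2 > limit))
```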
- ItemADM1 Parameter Calibration Method based on partial least squares regression framework for industrial-scale anaerobic digestion modelling(Stellenbosch : Stellenbosch University, 2019-12) Xu, Zhehua; Burger, A. J.; Louw, Tobias M.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Anaerobic Digestion Model 1 (ADM1) is the mainstay modelling tool for anaerobic digestion research and development. Its growing popularity is attributed to its sophisticated yet expandable structure. Not only does ADM1 encompass a broad range of biochemical, physicochemical and inhibition reactions, it also provides the modeller a structured framework to add or remove reactions per application requirements. Two major challenges that ADM1 faces are the difficulty in translating common quality indicators into ADM1’s 26 state variables, and the complication of calibrating a large number of model parameters – 58 by default. There is currently no consensus with regard to the parameter calibration approach. Researchers utilise various sensitivity analysis techniques to identify sensitive parameters, but the selection of parameters to be calibrated relies largely on the modeller’s discretion. In some cases, decisions are simply made based on prior or expert knowledge. Since the installation, operation and maintenance of advanced instrumentation are often expensive, most industrial digesters are inadequately monitored and thus intentionally over-designed. A model that can be used on-site with acceptable accuracy could serve as a soft sensor to forecast inhibition risks and automate preventive actions. Therefore, this study aimed to develop a standardised way to calibrate parameters when optimising ADM1 models built for industrial-scale digesters. The proposed method, the Partial Least Squares (PLS) Method, consists of four steps. In Step 1, a series of Monte Carlo simulations is carried out. For each Monte Carlo run, ADM1 is executed with all its model parameters sampled from independent probability distributions. These probability distributions were obtained by conducting a literature survey across 62 publications, with all published parameters compiled into a domain which represents the uncertainty range of each parameter. In Step 2, a multivariate regression technique called PLS Regression (PLSR) is applied to the Monte Carlo results. The motives for employing PLSR are to reduce parameter dimensionality and to identify the underlying relationships between the model parameters and the model outputs. In Step 3, these relationships, which are mathematically described as PLS weights, loadings and latent variables, are utilised to guide parameter calibration. Lastly, the calibrated parameter set is validated against unseen data. This method successfully improved, in the absence of any modeller’s bias, the overall accuracy of a model based on data from an industrial-scale digester. The model was tasked with fitting six typical plant measurements: volatile fatty acids (VFA), ammonia, volatile suspended solids (VSS), pH, methane gas flow and carbon dioxide gas flow. A configuration consisting of at least 500 Monte Carlo runs and two latent variables is required to produce a reasonably accurate fit. Although the use of more latent variables could enable PLSR to capture interactions of lesser weighted output variables, the model becomes increasingly prone to overfitting. However, it is envisaged that more latent variables would be necessary if more outputs are modelled.
It is recommended to start the PLSR algorithm with one latent variable and only introduce more if necessary. Different parameter calibration methods produce different model outcomes. The PLS Method was benchmarked against two other methods, namely the Group Method and the “Brute Force” Method. In the former method, kinetic parameters were grouped into three sensitivity groups (high, medium, low) as suggested in the ADM1 Scientific and Technical Report. The three groups were then calibrated sequentially in order of decreasing sensitivity. The “Brute Force” Method involved calibrating all 58 parameters without any particular sequence, prioritisation or expert inputs. Lower and upper limits were, however, set as per the minimum and maximum values identified from the literature. Besides proving to be a suitable method for industrial-scale digester modelling, the PLS Method was found to exhibit several unique traits:
• It is the only method that did not show signs of overfitting.
• It is the only method that concluded the model optimisation with all calibrated parameter values within the surveyed minimum and maximum range.
• It converges on the objective function 30-60% faster than the Group Method and 14 times quicker than the “Brute Force” Method.
The success is attributed to the fundamentals of PLS regression. Unlike other regression methods where parameters are adjusted independently, PLS enables parameters to be manipulated collectively in a manner that ensures maximum impact on the outputs while considering collinearities among the parameters. This guided approach effectively mitigates the so-called “curse of dimensionality” and, potentially, overfitting, and thereby speeds up the calibration process.
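A minimal sketch of the core idea of the PLS Method described above: sample model parameters from prior ranges (Monte Carlo), run the model, fit a PLS regression from parameters to outputs, and use the first latent-variable weights to indicate which parameter combinations to adjust collectively. The "model" below is a toy stand-in, not ADM1, and all numbers are illustrative.

```python
# Minimal sketch: Monte Carlo sampling of parameters, PLS regression from
# parameters to outputs, and inspection of the first latent-variable weights
# that would guide a collective parameter adjustment (toy model, not ADM1).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_runs, n_params = 500, 10

theta = rng.uniform(0.1, 2.0, size=(n_runs, n_params))     # sampled parameters

def toy_model(t):
    # Toy surrogate for a digester model: two "outputs" driven mainly by a few parameters.
    y1 = t[:, 0] * np.exp(-t[:, 1]) + 0.05 * t[:, 2]
    y2 = t[:, 3] / (1.0 + t[:, 4]) + 0.1 * t[:, 0]
    return np.column_stack([y1, y2])

Y = toy_model(theta)

pls = PLSRegression(n_components=2).fit(theta, Y)
# x_weights_ gives the parameter combinations most correlated with the outputs;
# the calibration step moves parameters collectively along these directions.
print("first latent-variable weights:", np.round(pls.x_weights_[:, 0], 2))
```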
- ItemAdsorption of 3,7-dimethyl-1-octanol in single and binary mixtures using Selexsorb CD®(Stellenbosch : Stellenbosch University, 2023-03) Louw, Anke; Schwarz, Cara Elsbeth; Stellenbosch University. Faculty of Engineering. Dept. of Chemical Engineering.ENGLISH ABSTRACT: The petrochemical industry is considered to be one of the largest contributors to the global economy. The hydrocarbons produced by it are readily used as fuels. The product streams engendered by hydrocarbon production can contain low concentrations of by-products such as alcohols, which have inherent industrial value. Adsorption is a favoured method of separating 1-alcohols from an n-decane stream, as it is the most versatile, economic, and environmentally friendly among separation methods. It is a three-step process consisting of an external mass transfer, an internal mass transfer and adsorption onto active sites. The process typically occurs by means of physisorption or chemisorption. The aim of this study is to expand the limited 1-alcohol adsorption database by investigating the adsorption of 3,7-dimethyl-1-octanol (3,7-DMO), 1-octanol & 3,7-DMO and 1-decanol & 3,7-DMO from n-decane while using Selexsorb CD® (SCD). The project scope includes the investigation of these systems at different temperatures and initial concentrations, and adsorbate ratios for the binary component systems, through experimental work and kinetic and equilibrium modelling. The experimental work was conducted using a bench-scale water bath batch-adsorption system. Mesh baskets were filled with 10 g adsorbent and fully submerged in beakers containing a solution of 0.5-3.3 mass% alcohol and the remainder n-decane. Kinetic and equilibrium studies were conducted along with displacement tests for the two binary systems. Kinetic and equilibrium isotherm models were fitted to the datasets by using nonlinear regression. Certain project shortcomings were identified when it was consistently seen that the kinetic data generated for 3,7-DMO, 1-decanol & 3,7-DMO and 1-octanol & 3,7-DMO adsorption onto SCD were accompanied by a drop in adsorbate loading from 7 h to 24 h. The major project shortcoming was the primary batch experimental setup, which had beakers open to atmosphere that facilitated evaporation of the water in the water bath and subsequent condensation of said water vapour dripping into the solution. Additionally, solution evaporation also took place, which meant that the assumption of constant volume of solution throughout the 24 h experiment was incorrect. A secondary batch experimental setup, where the feed stock was submerged in a sealed Schott bottle, minimised the potential of evaporation or condensate droplets forming in the solution, and the kinetic profiles generated at 45 °C approached equilibrium with no drop in adsorbate loading between 7 and 24 h. For the adsorption of 3,7-dimethyl-1-octanol, it was found that an increase in initial concentration increased the equilibrium adsorbent loading achieved, but that it plateaued beyond 1.5 mass%. An increase in temperature increased the adsorbent loading achievable for the first 7 h. The maximum adsorbate loading achievable for 3,7-DMO onto SCD was found to be approximately 114 mg.g-1. The pseudo-second-order model (R2 = 0.98) was the best fitting kinetic model, and the Langmuir and Redlich-Peterson equilibrium isotherm models (R2 = 0.9) fitted the single component equilibrium data best.
In the case of the two binary component systems, it was found that temperature had no discernible effect on the adsorbent loading. An increase in overall initial concentration increased the adsorbent loading for the first 7 h, but the equilibrium adsorbent loading appeared to fluctuate with no discernible trend. The binary component systems both indicated that the Elovich model fitted the data well at 25 °C and the pseudo-second-order model fitted best at 45 °C. The rate constant and maximum loading were higher at 45 °C as compared to 25 °C. The binary component isotherm models that fitted the two binary component systems best were the extended Freundlich and modified competitive Langmuir models, indicating that both systems were heterogeneous in nature and that interaction does occur between adsorbate molecules. These binary models could not, however, predict the binary component adsorbate loadings as accurately as the single component models. Displacement tests showed that linear, smaller molecules are preferentially adsorbed when compared to branched, larger molecules. The displacement potential ranked from largest to smallest was 1-octanol, 1-decanol and 3,7-DMO. Recommendations for future studies include quantifying through experimental work some thermodynamic properties such as Gibbs free energy and entropy of adsorption. All future batch adsorption tests are recommended to be performed on experimental setups that seal the adsorption system and prevent water ingress or solution evaporation. Lastly, semi-continuous experiments can be performed to obtain 3,7-DMO adsorption data when the solution is allowed to flow through a packed bed rather than stirred.
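For reference, a minimal sketch of fitting the pseudo-second-order kinetic model q(t) = k·qe²·t / (1 + k·qe·t) to loading-versus-time data by nonlinear least squares, as done in the kinetic modelling described above; the data points below are illustrative placeholders, not the thesis measurements.

```python
# Minimal sketch: fit the pseudo-second-order kinetic model
#   q(t) = k * qe^2 * t / (1 + k * qe * t)
# to adsorbate-loading data by nonlinear least squares (synthetic placeholder data).
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    return k * qe**2 * t / (1.0 + k * qe * t)

t_h = np.array([0.25, 0.5, 1, 2, 4, 7, 24])               # time, h
q_obs = np.array([30, 50, 72, 90, 103, 110, 114])          # loading, mg/g (illustrative)

popt, _ = curve_fit(pso, t_h, q_obs, p0=[110, 0.05])
qe_fit, k_fit = popt
ss_res = np.sum((q_obs - pso(t_h, *popt))**2)
ss_tot = np.sum((q_obs - q_obs.mean())**2)
print(f"qe = {qe_fit:.1f} mg/g, k = {k_fit:.3f} g/(mg.h), R2 = {1 - ss_res/ss_tot:.3f}")
```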
- ItemAdvantages associated with the implementation and integration of environmental management systems in small manufacturing businesses(Stellenbosch : Stellenbosch University, 2003-12) Bezuidenhout, Sol; Lorenzen, L.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The South African economy is largely dependent on small enterprises as a valuable source of job creation, gross domestic product, as well as product development and innovation. Unfortunately, however, there exists an extremely high failure rate among small businesses, with 20% of start-up businesses failing within the first year of operation, and an almost 60% failure rate within the first 6 years of existence. These statistics have initiated several research studies, and have been the focus of many business books, in itself creating a vast industry of small business success tools and quick-fix solutions. When considering the high failure rates of small businesses, the concepts surrounding sustainable development come into question by pure method of association. Sustainable development issues have become a top priority globally and have moved up the corporate agenda in recent years. When trying to "marry" these two concepts, questions arise regarding the effect of integrating sustainability principles and management systems with contemporary small business strategy. The aim of this study is to investigate existing critical success models and to integrate some simple initial stages of sustainable development business strategy within these models. Expectantly, some of the principles contained in the formalisation of management systems that address sustainability issues could be incorporated in traditional management models, in an attempt to identify possible interventions and tools that might positively impact on the success rate of small business enterprises. These concepts were tested by means of implementing a formal environmental management system (based on the ISO 14001 standard), as an initial approach to addressing sustainability goals, as a case study. The successful implementation of an ISO 14001 environmental management system at this small business enterprise realised several advantages for the company, and has been used to adapt traditional management models to include some of the simple concepts of sustainable development.
- ItemAerosol synthesis of ceramic particles by seed growth : analysis of process constraints(Stellenbosch : Stellenbosch University, 2002-04) Human, Chris; Bradshaw, S. M.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Aerosol synthesis involves the formation of condensable product species by gas-phase reaction, and the simultaneous growth of particles by coagulation. For the production of ceramic particles, reaction temperatures higher than 700 K are commonly used, and a maximum fusible particle size is observed. Coagulation-controlled growth yields spherical particles up to the maximum fusible size (approximately < 50 nm). Such particles coalesce rapidly and completely upon collision with other particles, whereas larger particles reach a meta-stable equilibrium for solid-state coalescence. Agglomerates with weak van der Waals bonds between particles inevitably form in the cooling/collection process. Coagulation of particles larger than the maximum fusible particle size yields agglomerates with significant neck growth between the primary particles. Spherical ceramic particles in the order of 1 μm are favourable precursors for bulk electronic applications that require high purity. Such large spherical particles may possibly be produced in conditions of seed growth, which involves the deposition of small newly formed clusters onto larger existing particles. The central focus of the present work is to evaluate whether spherical ceramic particles significantly larger than the maximum fusible size may be produced by seed growth. The evaluation is done by modelling of process constraints and interpretation of published results. The modelling of constraints is based on a mathematical framework for comparison of different values of reactor design parameters. This framework comprises a simplified model system, a typology of quantities, and isolation of a set of independent design parameters. Comparison is done on the basis of fixed initial (seed) and final (product) particle sizes. The reactor design framework is used to evaluate the hypothesis on spherical seed growth, by assessing whether a reactor can be designed that satisfies all the process constraints. Future extension of the framework may allow optimisation for seed growth in general. The model system assumes laminar flow and isothermal conditions, and neglects the effect of reactor diameter on wall deposition. The constraints are graphically represented in terms of the design parameters of initial reactant concentration and seed concentration. The effects of different temperatures and pressures on the constraints are also investigated. In a separate analysis, the suitability of turbulent flow for seed growth is assessed by calculating Brownian and turbulent collision coefficients for different colliding species. As turbulent intensity is increased, the seed coagulation rate is the first coagulation rate to be significantly enhanced by turbulence, resulting in a lowering of the maximum seed concentration allowed by the constraint for negligible seed coagulation. This tightening of a constraint by turbulence is the justification for considering only laminar flow for evaluating the hypothesis on spherical seed growth. Quantitative application of the model of constraints, as well as experimental and modelling results from the literature, did not demonstrate that significant spherical seed growth is possible without seed coagulation (agglomeration).
As part of the conceptual effort in becoming familiar with aerosol reactor engineering, a simple two-mode plug-flow aerosol reactor model was developed, and verified with published results. This model has some novel value in that it translates the equations for aerosol dynamics into the terminology of reactor engineering.
- ItemAlkaline polyol fractionation of sugarcane bagasse and eucalyptus grandis into feedstock for value added chemicals and materials(Stellenbosch : Stellenbosch University, 2017-03) Pius, Moses Tuutaleni; Gorgens, Johann F.; Chimphango, Annie F. A.; Tyhoda, Luvuyo; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: The main components of lignocellulosic biomass, namely cellulose, hemicellulose and lignin, are feedstocks for chemical and material manufacturing processes. Integrated biorefinery processes incorporate the production of these valuable components from lignocellulose feedstock in good yield and quality. The nature and complexity of lignocellulose materials and their components require a well-designed process to fractionate these components into individual streams, while special attention is paid to the easily hydrolysed component, the hemicelluloses. In the present study, a novel process for fractionating sugarcane (Saccharum officinarum) bagasse (SCB) and Eucalyptus grandis (EC) biomass into their main constituents (cellulose pulp, aqueous hemicellulose and lignin) was designed. Research focused on obtaining hemicelluloses in polymeric form or as biopolymers, while maintaining high yields and quality of the cellulose and lignin polymers. This was achieved by following an organosolv technique using the high boiling point alcohols xylitol and ethylene glycol as the fractionating solvents, at concentrations between 20-30% (w/w) and 50-70% (v/v) respectively. The fractionation process’ central composite design incorporated mild conditions, i.e. fractionation times between 2-4 hours and temperatures of 140-180 ºC, catalysed by sodium hydroxide at 1-2 wt.%. The option of pre-extracting hemicelluloses from the feedstock at previously established conditions, prior to further fractionation with ethylene glycol, was also investigated, given the hemicellulose-degrading nature of ethylene glycol reported in literature studies. Results show alkaline hemicellulose pre-extraction to provide higher dissolutions and recoveries of hemicelluloses as compared to those extracted by direct fractionation with the two solvents. At optimum conditions xylitol fractionations achieved higher component recoveries as compared to ethylene glycol. However, ethylene glycol fractionations are more severe, dissolving not only hemicellulose and lignin from both materials but also cellulose. Ethylene glycol fractionations were accompanied by a high degree of cellulose dissolution, in some runs up to 39% of the initial cellulose, mostly at extreme conditions. Hemicelluloses from all processes were recovered as biopolymers, with weight-average molecular weight (Mw) evaluation revealing that alkaline pre-extracted hemicelluloses had the highest weight-average molecular weights, 33 638 and 61 644 gmol-1 for sugarcane bagasse and Eucalyptus grandis respectively, as compared to direct raw material fractionation processes, which all gave below 23 000 gmol-1, with xylitol processes giving higher molecular weights than ethylene glycol processes. Enzymatic hydrolysis of cellulose revealed ethylene glycol residues to be more digestible (≥60%) than xylitol derived residues (≤60%). Digestibility is further improved with fractionation of hemicellulose pre-extraction solids (≥80%). In terms of cellulose crystallinity, a general increase after fractionation was observed. Residual solids from ethylene glycol treatments displayed higher crystallinity (50.08% EC, 48.44% SCB) as compared to xylitol processes (32.44% EC, 43.98% SCB).
Residual solids from the NaOH hemicellulose pre-extraction step also had higher crystallinities (43.58% EC and 47.81% SCB) than the xylitol process, but just lower than the EG derived residual solids (≥48%). There is a major decline in the amount of syringyl and guaiacyl groups in the lignin residues after treatment for all processes, supported by low intensity bands in Fourier transform infrared (FTIR) spectra. Minimal degradation of the lignin fraction by both processes was observed, with a low fixed carbon content of the lignin rich solids, below 20%. In conclusion, xylitol fractionations outperformed ethylene glycol in hemicellulose, lignin and cellulose recoveries, and in lignin and hemicellulose quality, while ethylene glycol produced good quality cellulose. When compared to conventional organosolv fractionations (i.e. ethanol), these two polyols outperform organosolv in aspects such as quality of cellulose, hemicellulose and lignin, but fall short in terms of component recoveries, particularly with ethylene glycol fractionations.
- ItemAmmonium thiosulphate leaching of gold from printed circuit board waste(Stellenbosch : Stellenbosch University, 2017-03) Albertyn, Pierre Wouter; Dorfling, Christie; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: Technological innovation leads to a reduced lifespan of older electrical and electronic equipment, which in turn leads to the generation of vast quantities of electronic waste (e-waste). The recycling of e-waste is becoming increasingly important as it provides certain economic benefits apart from the obvious environmental benefits. Printed circuit boards (PCBs) are found in most forms of e-waste and contain especially high concentrations of base and precious metals. Hydrometallurgy is one of the major processing routes for the recovery of valuable metals from e-waste. This processing route normally implements several leaching stages to selectively recover certain metals. A two-step base metal leaching stage was implemented that utilized two different lixiviants. The first step used nitric acid to mainly recover Pb and Fe, while the second step used sulphuric acid in combination with hydrogen peroxide to mainly recover Cu, Zn and Ni. The Au and Ag were subsequently recovered in an additional leaching stage with ammonium thiosulphate in the presence of copper(II) sulphate. This study focused on the use of a less environmentally hazardous lixiviant than the traditional alternative, cyanide, to promote the development of a more sustainable recovery process. The primary objective of this study was to determine how the variation of copper in the first stage residue will affect the gold leaching in the second stage. The extent of interactions between process conditions was also studied. These process conditions included temperature, thiosulphate concentration, ammonium concentration, cupric ion concentration, pH and pulp density. The secondary objective of this study was to determine how the degradation of thiosulphate was affected by the change in certain process conditions. The screening phase determined that only a change in S2O3^2- concentration, pH range and pulp density had a statistically significant effect on the Au extraction. Statistically significant interactions existed between the Cu on the PCBs and Cu(II) concentration; and the Cu on the PCBs and pulp density. These results were used together with recommendations from literature to determine what factors to include in the full factorial design. The S2O3^2- concentration (0.1 and 0.2 M), NH3 concentration (0.2 and 0.4 M), pH range (9 – 9.5 and 10 – 10.5) and pulp density (25 and 50 g/L) were chosen. The investigation of the S2O3^2- and NH3 concentrations determined that Au leaching was dependent on the S2O3^2-/NH3 ratio. S2O3^2- concentrations that were too high relative to NH3 resulted in the Cu(S2O3)3^5- complex becoming more prominent, which hindered Au dissolution. NH3 concentrations that were too high resulted in a decrease in the oxidation potential of the Cu(II)-Cu(I) couple, which in turn reduced the driving force for the Au leaching reaction. NH3 concentrations that were too low reduced the amount of Cu(NH3)4^2+ (oxidizing agent for gold) that was available, which in turn also reduced Au leaching. The optimum S2O3^2-/NH3 ratio for the range of parameters that were investigated was found to be 0.5. A change in NH3 concentration was found to have a more significant effect on Au extraction at the lower pH range of 9 – 9.5.
This was believed to be due to a higher concentration of NH4^+ relative to NH3 being present at lower pH values, which caused faster Au leaching. The lower pH range of 9 – 9.5 also generally produced better Au leaching. An increase in pulp density from 25 to 50 g/L resulted in a decrease in Au extraction, which could be attributed to the fact that the amount of reagent per unit weight of PCB decreased. The importance of the interactions between S2O3^2- and NH3, and between pH range and NH3, was confirmed in the statistical analysis of the full factorial design. The statistical analysis produced a model with an R2 value of 0.94 that predicted an optimum Au extraction of 78.04 % at the same conditions that produced the optimum Au extraction during testing. The predicted optimum compared well with the actual value of 78.47 %, which was obtained at 0.2 M S2O3^2-, 0.4 M NH3, 0.02 M Cu(II), 25 g/L, 25°C, 1 – 10 % leftover Cu and a pH range of 9 – 9.5. The optimum conditions were used to determine the effect of a variation in Cu in the first stage residue, temperature and Cu(II) concentration. Au extraction decreased with an increase in Cu leftover content, temperature and Cu(II) concentration. Increased amounts of Cu inhibited Au leaching through the dissolution of Cu to Cu(NH3)2^+ with the consumption of Cu(NH3)4^2+. Increased rates of thiosulphate consumption/degradation were encountered at higher temperatures, Cu(II) concentrations and leftover Cu.
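A minimal sketch of how effects are estimated from a two-level full factorial design like the one described above: build a design matrix in coded (-1/+1) levels with main effects and two-factor interactions, then fit it by least squares. The responses are random placeholders, not the experimental gold extractions.

```python
# Minimal sketch: effect estimation for a 2^4 full factorial design in coded
# (-1/+1) levels, with main effects and all two-factor interactions
# (placeholder responses, not the thesis data).
import numpy as np
from itertools import product, combinations

rng = np.random.default_rng(3)
levels = np.array(list(product([-1, 1], repeat=4)), dtype=float)   # 16 runs
y = rng.normal(60, 10, size=len(levels))                           # placeholder response (% Au)

# Design matrix: intercept, main effects, and all two-factor interactions.
cols = [np.ones(len(levels))] + [levels[:, i] for i in range(4)]
cols += [levels[:, i] * levels[:, j] for i, j in combinations(range(4), 2)]
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("effect estimates (coefficient x 2):", np.round(2 * beta[1:], 2))
```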
- ItemAnaerobic co-digestion of fish sludge originating from a recirculating aquaculture system(Stellenbosch : Stellenbosch University, 2022-12) Netshivhumbe, Rudzani; Goosen, Neill Jurgens; Faloye, F.; Gorgens, Johann F.; Mamphweli, N.S.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Recirculating aquaculture systems (RAS) are considered sustainable and environmentally friendly aquaculture systems capable of meeting the growing demand for seafood for human consumption. However, RAS produce large quantities of waste sludge from uneaten feed and fish faecal matter, which need to be removed from the recirculating water and treated to prevent adverse environmental impacts. Anaerobic digestion (AD) has been considered as an alternative method to stabilize the amount of organic waste in the environment before its disposal, with the simultaneous production of bio-methane that can serve as a source of energy within RAS. However, there are some drawbacks in the mono-digestion process of fish sludge (FS), such as process inhibition, unbalanced nutrient contents, and low methane yields. The biomethane production from FS, food waste (FW), and fruit & vegetable waste (FVW) was optimized during anaerobic co-digestion using a mixture design. The synergistic and antagonistic interaction effects of the three substrates on specific methane yield, volatile solids reduction, and process stability were evaluated in both batch and semi-continuous mode. A mixture design was used to determine the best mixture compositions of FS, FW, and FVW for specific methane yield and volatile solids removal during the anaerobic co-digestion process, based on biomethane potential (BMP) measurements. The results showed that the optimum mixture proportions of FS, FW, and FVW were 63 %, 18 %, and 19 %, respectively. Under this optimum mixture, the maximum methane production and VS removal were 401 mL CH4/gVS and 64%, respectively. Anaerobic co-digestion of FS with FW and FVW enhanced the methane yields 8-fold compared with mono-digestion of FS. The optimum mixture proportions obtained from batch BMP tests were further evaluated in 50 L batch and 30 L semi-continuous pilot-scale digesters to evaluate the effect of organic loading rate (OLR) on biogas production and process performance stability. The methane yield obtained from the batch pilot-scale digester was 272 NmL CH4/gVS. This was 71 % of the methane yield obtained from the BMP test under the same optimum mixture condition. The batch digester showed no substantial inhibition of the system due to its strong buffering capacity. In semi-continuous mode, the digester was operated under different OLRs of 1, 2, and 3 gVS/L/day to investigate the impacts of OLR on biogas and methane production, and on the process performance stability of the anaerobic co-digestion of FS, FW, and FVW. The highest total biogas and methane production of 388 L/gVS and 67 L/gVS, with a methane content of 66.8%, was obtained at an OLR of 2 gVS/L/day compared to OLRs of 1 and 3 gVS/L/day. The digester showed instabilities or failure at an OLR of 3 gVS/L/day due to acid crash and accumulation of VFA of 11 g/L. An OLR of 1-2 gVS/L/day is recommended for anaerobic co-digestion of FS, FW, and FVW in semi-continuous digesters because fewer inhibition indicators were observed.
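As an illustration of the mixture-design optimisation described above, the sketch below fits a Scheffé quadratic mixture model to methane yields for blends of three substrates and searches the simplex for the best blend; the blend points and yields are synthetic placeholders, not the BMP results.

```python
# Minimal sketch: Scheffe quadratic mixture model for methane yield over blends
# of three substrates (fractions summing to 1), with a simplex grid search for
# the best blend (placeholder yields, not the thesis data).
import numpy as np
from itertools import combinations

# Example blend fractions (FS, FW, FVW) and placeholder methane yields (mL CH4/gVS).
x = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5], [1/3, 1/3, 1/3]], dtype=float)
y = np.array([50, 300, 280, 330, 320, 350, 380], dtype=float)

def scheffe(x):
    quad = np.column_stack([x[:, i] * x[:, j] for i, j in combinations(range(3), 2)])
    return np.hstack([x, quad])          # no intercept in Scheffe polynomials

beta, *_ = np.linalg.lstsq(scheffe(x), y, rcond=None)

# Grid search over the simplex for the blend that maximises predicted yield.
grid = np.array([[a, b, 1 - a - b] for a in np.linspace(0, 1, 101)
                 for b in np.linspace(0, 1 - a, int(round((1 - a) * 100)) + 1)])
pred = scheffe(grid) @ beta
best = grid[np.argmax(pred)]
print("best blend (FS, FW, FVW):", np.round(best, 2), "predicted yield:", round(pred.max(), 1))
```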
- ItemAnaerobic co-digestion of fruit juice industry wastes with lignocellulosic biomass(Stellenbosch : Stellenbosch University, 2019-04) Kell, Carissa Jordan Kayla; Gorgens, Johann F.; Louw, Tobias M.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The fruit juice industry in South Africa forms an important part of the South African economy; however, it generates large quantities of liquid and solid organic wastes. Landfilling is typically used to dispose of these wastes, resulting in uncontrolled greenhouse gas (GHG) emissions. Anaerobic digestion (AD) offers an alternative waste disposal method and produces two valuable by-products: biogas (a renewable energy source) and a liquid fertiliser. The high sugar content of fruit waste alone often results in AD failure due to acidification, resulting in poor quality biogas. Consequently, there is relatively little information available on the AD of apple fruit juice process wastes (FJPW). Identification of substrate combinations that improve the energy value of the resultant biogas may mitigate GHG emissions and generate valuable by-products which provide additional revenue streams to industry. This study thus aimed to identify optimal substrate combinations to aid the disposal of FJPW and improve the energy value of biogas from fruit juice industry waste, based on the seasonal availability of waste streams. Five waste streams, namely manure, food waste, retentate, pomace and waste apples, were incorporated into a five-factor mixture design to assess food waste and manure as co-substrates of FJPW. This design was carried out in a series of biomethane potential (BMP) tests performed in 100 mL serum bottles. A second mixture design was performed using BMP tests in 100 mL bottles to evaluate lignocellulosic biomass (LCB) as a potential co-substrate of FJPW. A biogas and methane optimisation substrate mixture (50% manure, 30% LCB, 20% retentate) and a manure minimisation mixture (30% manure, 30% LCB, 30% retentate, 10% waste apples) were selected and scaled up in 50 L CSTR reactors in batch process for 32 days with intermittent mixing. Two substrate combinations based on biogas optimisation and manure minimisation were scaled up in 50 L reactors in semi-continuous process and fed increasing organic loading rates (OLRs) from 1-4 gVS/L/day over the course of 32 days to identify the maximum OLR at which each could be stably operated. The results indicated food waste was highly variable and behaved similarly to FJPW when digested; thus food waste was deemed unsuitable as a co-substrate for FJPW. An ANOVA was performed on the results of the LCB mixture design, revealing both biogas and methane production to be significant (p < 0.05). The standardised effect estimates of all five feedstocks revealed manure, LCB and retentate to have a significant (p < 0.05) effect on biogas and methane production. LCB addition was found to significantly improve biogas production and prevent acid crash; however, it mainly did so when compensating for the fruit waste fraction rather than the manure fraction, except for two mixtures: (i) 20% manure, 30% LCB, 30% pomace and 20% retentate; and (ii) 20% manure, 30% LCB, 30% waste apples and 20% retentate. The highest yields obtained from the LCB supplementation experiment were 410.01 mL.gVS-1 fed biogas and 167.10 mL.gVS-1 fed methane for the fruit-juice producing season and 325.69 mL.gVS-1 fed and 131.95 mL.gVS-1 fed for the non-juice producing season.
The improved biogas and methane yields in the batch experiment compared to lab-scale were as a result of slow intermittent mixing at 125 rpm for 5-10 minutes twice daily. The biogas optimisation point gave the highest yields at an OLR of 4 gVS/L/day. The manure minimisation point demonstrated the highest biogas and methane production at an OLR of 3.5 gVS/L/day, with the system showing signs of organic overloading at a higher OLR. To conclude, this study found a 30% LCB addition to improve digestibility of fruit process waste mixture for certain combinations of pomace and retentate, and waste apples and retentate with 20% manure. As this study only investigated 0%, 20% and 30% LCB supplementation, future research should focus on a broader array of supplementation levels in order to further maximise fruit waste disposal via AD.
- ItemAnalysis and modelling of mining induced seismicity(Stellenbosch : University of Stellenbosch, 2006-12) Bredenkamp, Ben; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering.Earthquakes and other seismic events are known to have catastrophic effects on people and property. These large-scale events are almost always preceded by smaller-scale seismic events called precursors, such as tremors or other vibrations. The use of precursor data to predict the realization of seismic hazards has been a long-standing technical problem in different disciplines. For example, blasting or other mining activities have the potential to induce the collapse of rock surfaces, or the occurrence of other dangerous seismic events in large volumes of rock. In this study, seismic data (T4) obtained from a mining concern in South Africa were considered using a nonlinear time series approach. In particular, the method of surrogate analysis was used to characterize the deterministic structure in the data, prior to fitting a predictive model. The seismic data set (T4) is a set of seismic events for a small volume of rock in a mine observed over a period of 12 days. The surrogate data were generated to have structure similar to that of T4 according to some basic seismic laws. In particular, the surrogate data sets were generated to have the same autocorrelation structure and amplitude distributions as the underlying data set T4. The surrogate data derived from T4 allow for the assessment of some basic hypotheses regarding both types of data sets. The structure in both types of data (i.e. the relationship between the past behavior and the future realization of components) was investigated by means of three test statistics, each of which provided partial information on the structure in the data. The first is the average mutual information between the reconstructed past and future states of T4. The second is a correlation dimension estimate, Dc, which gives an indication of the deterministic structure (predictability) of the reconstructed states of T4. The final statistic is the correlation coefficient, which gives an indication of the predictability of the future behavior of T4 based on the past states of T4. The past states of T4 were reconstructed by reducing the dimension of a delay coordinate embedding of the components of T4. The map from past states to future realization of T4 values was estimated using Long Short-Term Memory (LSTM) recurrent neural networks. The application of LSTM recurrent neural networks to point processes has not been reported before in the literature. Comparison of the stochastic surrogate data with the measured structure in the T4 data set showed that the structure in T4 differed significantly from that of the surrogate data sets. However, the relationship between the past states and the future realization of components for both T4 and surrogate data did not appear to be deterministic. The application of LSTM in the modelling of T4 shows that the approach could model point processes at least as well as or even better than previously reported applications on time series data.
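A minimal sketch of the surrogate-testing idea described above: generate phase-randomised surrogates (same power spectrum, and hence autocorrelation, as the original series), compute a simple statistic on the original and on each surrogate, and compare. The series and statistic below are illustrative and unrelated to the T4 catalogue.

```python
# Minimal sketch: phase-randomised surrogates preserve the power spectrum of a
# series; a statistic computed on the original is compared with its surrogate
# distribution (synthetic series, not the thesis data).
import numpy as np

rng = np.random.default_rng(5)
x = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.3 * rng.normal(size=1024)

def phase_randomised(x, rng):
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                                   # keep the mean component
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def stat(x):                                          # crude time-asymmetry statistic
    return np.mean(x[:-1] * x[1:] ** 2)

surrogate_stats = [stat(phase_randomised(x, rng)) for _ in range(99)]
print("original:", round(stat(x), 4),
      "surrogate 5-95%:", np.round(np.percentile(surrogate_stats, [5, 95]), 4))
```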
- ItemAnalysis of process data with singular spectrum methods(Stellenbosch : University of Stellenbosch, 2003-12) Barkhuizen, Marlize; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The analysis of process data obtained from chemical and metallurgical engineering systems is a crucial aspect of the operating of any process, as information extracted from the data is used for control purposes, decision making and forecasting. Singular spectrum analysis (SSA) is a relatively new technique that can be used to decompose time series into their constituent components, after which a variety of further analyses can be applied to the data. The objectives of this study were to investigate the abilities of SSA regarding the filtering of data and the subsequent modelling of the filtered data, to explore the methods available to perform nonlinear SSA and finally to explore the possibilities of Monte Carlo SSA to characterize and identify process systems from observed time series data. Although the literature indicated the widespread application of SSA in other research fields, no previous application of singular spectrum analysis to time series obtained from chemical engineering processes could be found. SSA appeared to have a multitude of applications that could be of great benefit in the analysis of data from process systems. The first indication of this was in the filtering or noise-removal abilities of SSA. A number of case studies were filtered by various techniques related to SSA, after which a number of neural network modelling strategies were applied to the data. It was consistently found that the models built on data that have been prefiltered with SSA outperformed the other models. The effectiveness of localized SSA and auto-associative neural networks in performing nonlinear SSA were compared. Both techniques succeeded in extracting a number of nonlinear components from the data that could not be identified from linear SSA. However, it was found that localized SSA was a more reliable approach, as the auto-associative neural networks would not train for some of the data or extracted nonsensical components for other series. Lastly a number of time series were analysed using Monte Carlo SSA. It was found that, as is the case with all other characterization techniques, Monte Carlo SSA could not succeed in correctly classifying all the series investigated. For this reason several tests were used for the classification of the real process data. In the light of these findings, it was concluded that singular spectrum analysis could be a valuable tool in the analysis of chemical and metallurgical process data.
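For reference, basic singular spectrum analysis can be sketched in a few lines: embed the series in a trajectory matrix, take its singular value decomposition, and reconstruct a filtered series from the leading components by diagonal averaging. The window length and number of retained components below are illustrative choices, not those used in the thesis.

```python
# Minimal sketch of basic SSA: trajectory matrix, SVD, rank truncation, and
# diagonal (Hankel) averaging back to a filtered series (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
n, L, keep = 400, 60, 2
t = np.arange(n)
x = np.sin(2 * np.pi * t / 50) + 0.4 * rng.normal(size=n)     # signal + noise

K = n - L + 1
traj = np.column_stack([x[i:i + L] for i in range(K)])        # L x K trajectory matrix
U, s, Vt = np.linalg.svd(traj, full_matrices=False)
approx = (U[:, :keep] * s[:keep]) @ Vt[:keep]                 # rank-'keep' approximation

# Diagonal averaging back to a series of length n.
recon = np.zeros(n)
count = np.zeros(n)
for j in range(K):
    recon[j:j + L] += approx[:, j]
    count[j:j + L] += 1
recon /= count

print("variance of removed residual:", round(np.var(x - recon), 3))
```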
- ItemAntimicrobial lipopeptide production by Bacillus spp. for post-harvest biocontrol(Stellenbosch : Stellenbosch University, 2014-12) Pretorius, Danielle; Clarke, Kim G.; Stellenbosch University. Faculty of Engineering. Department of Process Engineering.ENGLISH ABSTRACT: As overpopulation threatens the world’s ability to feed itself, food has become an invaluable resource. Unfortunately, almost a third of the food produced for human consumption is lost annually. Pests including insects, phytopathogens and weeds are responsible for more than a third of the annual major crop losses suffered around the world. The majority of current post-harvest control strategies employ synthetic agents. These compounds, however, have been found to be detrimental to the environment as well as human health, which has led researchers to investigate alternative strategies. Biocontrol agents are environmentally compatible, have a lower toxicity and are biodegradable, making them an attractive alternative to the synthetic control agents. The lipopeptides produced by Bacillus spp. in particular have shown great potential as biocontrol agents against various post-harvest phytopathogens. Most biocontrol strategies apply the biocontrol organism, for example Bacillus, directly, whereas this study focused on the use of the lipopeptide itself as an antifungal agent. This is advantageous as the lipopeptides are less sensitive to their surroundings, such as temperature and pH, compared to living organisms, allowing for the production of a standardized product. This study investigated the production of the Bacillus lipopeptides surfactin, fengycin and iturin under controlled batch conditions. Parameters increasing lipopeptide production were quantified, focussing on the antifungal lipopeptides (iturin and fengycin), and lipopeptide production was optimized. Experiments were performed in a fully instrumented 1.3 L bench-top bioreactor and lipopeptide analyses were performed via high pressure liquid chromatography (HPLC) and liquid chromatography-mass spectroscopy (LC-MS). After screening four Bacillus spp., Bacillus amyloliquefaciens DSM 23117 was found to be the best antifungal candidate. This was based on it outperforming the other candidates in terms of maximum antifungals produced, Yp/x,antifungals (antifungal yield per unit biomass), and antifungal productivity. Nitrate, in the form of NH4NO3, was critical for lipopeptide production, and an optimum concentration was observed above which the CDW (cell dry weight) no longer increased significantly and both μmax (maximum specific growth rate, h-1) and lipopeptide production decreased. For μmax, the optimum NH4NO3 concentration was 10 g/L and for lipopeptides it was 8 g/L. At these respective NH4NO3 concentrations, μmax = 0.58 h-1, the maximum antifungals (fengycin and iturin) were 285.7 mAU*min and the maximum surfactin concentration was 302 mg/L. The lipopeptides produced by B. amyloliquefaciens, the antifungals (fengycin and iturin) and surfactin, are secondary metabolites, regardless of the optimization treatment, i.e. increased NH4NO3 concentrations. Using 30% enriched air extended the nitrate utilization period, suggesting that, at an increased oxygen supply concentration, more oxygen was available to act as electron acceptor, allowing nitrate to be used for lipopeptide production. The number of iturin and fengycin homologues generally increased with an increase in nitrate concentration.
This suggested that process conditions, such as nitrate concentration, can be used to manipulate homologue ratios, allowing for the possibility to tailor-make biocontrol agents upstream, during the production process, and possibly increase the efficacy of the biocontrol strategy. The lipopeptides produced by B. amyloliquefaciens showed complete inhibition against Botryotinia fuckeliana and diminished the growth capabilities of Botrytis cinerea. No inhibition was observed against Penicillium digitatum. These results indicate the potential of the biocontrol strategy, although scale-up and fed-batch studies are recommended, especially when considering commercial implementation. The lipopeptide application method, i.e. a single application versus multiple applications, should also be investigated, as this will influence the efficacy of the lipopeptides against the target organisms.
- ItemApplication of membrane technology for purifying tyre derived oil(Stellenbosch : Stellenbosch University, 2018-03) Tshindane, Pfano; Van der Gryp, Percy; Gorgens, Johann F.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: Tyre derived oil (TDO) is an abundant liquid product obtained after pyrolysis of waste tyres. It contains a complex mixture of C6–C24 organic compounds of various classes such as paraffins, olefins, aromatics, nitrogen and sulphur compounds as well as oxygenated compounds. TDO is a potential source of high market value compounds such as dl-limonene, 4-vinylcyclohexene, toluene, ethylbenzene, xylenes and many others. dl-limonene is the most abundant valuable chemical in TDO. Valuable chemicals in TDO are only marketable at purities greater than 90% v/v. dl-limonene, together with p-cymene, indane and 1,2,4-trimethylbenzene, has similar physical properties such as boiling point and viscosity. Conventional distillation procedures fail to purify limonene from TDO because of these limonene-like impurities. TDO is also a potential fuel for diesel engines. The calorific value of TDO has been reported to be approximately equal to that of commercial fuels. Other commercial fuel properties that match those of TDO include flash point, density and viscosity. For fuels, the South African National Standards (SANS) specify that fuels must contain a sulphur content of less than 500 ppm. Benzothiazole is reportedly the most abundant sulphur species in TDO. The purification of limonene and the reduction of benzothiazole from TDO are vital in the field of waste tyre valorisation. The aim of this study is to investigate the purification (recovery of limonene and reduction of benzothiazole) of TDO using a novel green separation technology, namely organic solvent nanofiltration (OSN). OSN allows size-exclusion based separation in the absence of phase transitions, ensuring much lower energy consumption and therefore a favourable economic and carbon footprint compared to conventional separation methods such as distillation. Three different commercial OSN membranes, Puramem®-280 (PM-280), STARMEM™-228 (ST-228) and Duramem®-200 (DM-200), were employed for the experimental work of this study. TDO was allowed to permeate through a membrane installed in a dead-end OSN set-up. The transmembrane pressure (10-40 bar), species concentration (50-150 ppm) and feed dilution (toluene, 1-octene and ethanol) were all varied so as to investigate the effect on membrane performance (flux and rejection). It was found that pure species (limonene, p-cymene, 1,2,4-trimethylbenzene and toluene) permeate at widely differing rates through PM-280 relative to ST-228 and DM-200. These distinct permeation rates imply that PM-280 is more selective than ST-228 and DM-200. Flux of pure species through PM-280 (30 bar) ranged from 75 L.m-2.h-1 to 297 L.m-2.h-1. Pure species flux was found to be highly dependent on transmembrane pressure, molecular weight and parameters describing the interaction between the membrane and the pure species. It was found that pure benzothiazole destroys the membrane surface of both PM-280 and ST-228. The purification of TDO through PM-280 and ST-228 resulted in high TDO rejections, 88% and 100% respectively. Concentration polarization was deduced as a possible explanation for the high TDO rejections. Negative rejections were recorded with PM-280 for limonene and benzothiazole, -6% and -7% respectively.
Negative rejections imply that the species is more concentrated in the permeate than in the feed solution. For this study, a negative rejection represents good performance by the membrane, since it implies that the targeted compounds are being drawn out of the crude TDO. In an effort to enhance membrane performance, TDO was diluted with different solvents (toluene, 1-octene and ethanol). TDO/toluene dilution enhanced the membrane performance by resulting in higher negative rejections through ST-228, -10% and -98% for limonene and benzothiazole respectively. The membrane performance was still not adequate, however, since the benzothiazole percentage change was only 6.3%. It was nonetheless found that the transport of diluted TDO species across the membrane is highly influenced by the interaction between the membrane and the species. A species having a strong affinity for the membrane recorded a low rejection compared to a species having a weak affinity for the membrane. It was also found that the membrane performance is unaffected by the concentration of TDO species. The technical viability of OSN in purifying or fractionating crude TDO was not demonstrated in this study. Through comparison, it was noted that the breakthrough for TDO sulphur reduction and limonene recovery is more likely to happen through distillation procedures.
- ItemThe application of the lipopeptide Surfactin in heavy metal extraction from mine wastewater(2023-12) Schlebusch, Izak David; Tadie, Margreth; Pott, Robert William McClelland; Stellenbosch University. Faculty of Engineering. Dept. of Chemical Engineering. Process Engineering.ENGLISH ABSTRACT: Heavy metals (HMs) are a common contaminant present in wastewater generated by mining operations. HMs can be toxic and carcinogenic, and do not naturally degrade. They therefore tend to accumulate in environments where they are discharged. Conventional HM separation processes can be effective; however, several drawbacks, including poor selectivity, high process costs, or generation of secondary pollutants, can limit their efficacy in industry. Surfactin is a lipopeptide biosurfactant which shows great promise for application in HM separation processes. Surfactin is capable of coordinating HM cations into stable complexes, and the mechanism of cation coordination is hypothesized to render the complex insoluble in aqueous solutions. These properties, in tandem with the environmentally benign nature of biosurfactants, make surfactin an attractive alternative to the synthetic reagents used in conventional separation processes. The aim of this dissertation is to determine which HM extraction methods can successfully utilise surfactin to extract HMs from aqueous solution and to investigate the efficacy of these processes. Based on the hypothesis that surfactin forms insoluble complexes with HM ions, a chelating precipitation process was identified as one potential mode of HM extraction. The ability of surfactin to bind HMs into a hydrophobic complex also suggests that it may be functional as a collector in an ion flotation process. To test the potential of these processes, single-ion copper, nickel, and cobalt solutions were used as simple model contaminated wastewaters. The ability of surfactin to coordinate and precipitate the metal ions was first confirmed by mixing equimolar concentrations of surfactin with each respective copper, nickel, and cobalt solution. The precipitates that spontaneously formed were dried and analysed by FTIR spectroscopy to determine the binding sites which coordinated the HM ions. It was found that the carboxylate groups and the amide groups present in the hydrophilic cyclic heptapeptide moiety were both capable of coordination, and coordination of the HMs at these sites would decrease the aqueous solubility of the complex. The extent of precipitation of copper, nickel, and cobalt by surfactin was then quantified to determine the efficacy of the precipitation process with surfactin as a precipitant. Up to 84% and 88% of nickel and cobalt respectively were extracted by the surfactin precipitation process, and up to 100% of copper was extracted by surfactin precipitation in conjunction with alkaline precipitation. Initial relative surfactin concentration and pH were shown to be key operating parameters that should be controlled in the surfactin-aided precipitation process. The value of the ion flotation process utilising a surfactin collector was investigated by determining how far the ion flotation process could lower the concentration of HMs in the residual solution. The reduction in copper, nickel, and cobalt concentrations in the residual solution was 67%, 82%, and 96% respectively. It was further found that the extent of ion extraction could be improved by optimisation of the flotation pH, air flowrate, and initial concentration of surfactin.
Based on these results, it appears that precipitation and ion flotation have the potential to effectively utilise the promising properties of surfactin to treat HM-contaminated wastewater.
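The extraction and residual-reduction percentages above are, in general, computed from initial and residual metal concentrations; the sketch below shows that routine calculation with hypothetical concentrations, not the measured data from this work.

```python
def percent_removed(c_initial: float, c_residual: float) -> float:
    """Percentage of metal removed from solution after treatment."""
    return (c_initial - c_residual) / c_initial * 100.0


# Hypothetical concentrations (mg/L), not measured values from the study:
# 100 mg/L Cu reduced to 33 mg/L by ion flotation is a 67% reduction,
# the same form of result as the residual-concentration reductions above.
print(round(percent_removed(100.0, 33.0)))  # 67
```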
- ItemApplying dynamic Bayesian Networks to process monitoring(Stellenbosch : Stellenbosch University, 2018-12) Wakefield, Brandon Jason; Auret, Lidia; Kroon, R. S. (Steve); Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: In efforts to reduce the impact of human error on the operation of chemical and mineral processing plants, reliable process monitoring solutions attempt to assist plant operators and engineers in detecting and diagnosing process faults before significant loss is incurred. An existing solution, the traditional multivariate statistical process monitoring (MSPM) approach, is able to reliably detect abnormal process behaviour but struggles to unambiguously identify the root cause of the abnormal behaviour. It was identified that this is caused by a lack of incorporation of existing process knowledge into the MSPM framework. It was proposed to investigate a different fault diagnosis approach which directly incorporates process knowledge into its framework. Lerner et al. (2000) and Lerner (2002) present such an approach, using probabilistic methods to infer process behaviour given a particular process model. This model takes the form of a dynamic Bayesian network (DBN) and contains sub-models which each describe particular process behaviour given information about the operational status of various process components. In particular, these DBN models were able to describe normal process behaviour in addition to highly specific abnormal process behaviour caused by, for instance, a sensor fault or a blocked pipe. Using optimised methods, the authors could then use a DBN model to make predictions about process behaviour and infer, given an observation of actual process behaviour, which combination of component statuses best describes that observation. With this approach, solving the fault diagnosis problem therefore reduces to performing inference in a DBN. A probabilistic fault diagnosis (PD) approach based on Lerner et al. (2000) and Lerner (2002) was therefore implemented and investigated in this thesis. A survey of recent DBN-based PD approaches was also performed, and it was determined that relatively little research had been done on the topic. Furthermore, published results presenting fault diagnosis performance for DBN-based PD approaches were typically found to be unsuitable for meaningful comparison with a traditional MSPM approach. In this regard, this thesis aimed to investigate the usefulness of the PD approach in comparison to the MSPM approach, while providing useful fault diagnosis performance metrics to facilitate comparison with other fault diagnosis approaches. The PD approach tested in this research also extended the work of Lerner et al. (2000) and Lerner (2002) by including models for regulatory control systems and recycle streams based on the work of Yu and Rashid (2013). Additionally, the concept of an abnormality likelihood index (ALI) from the same paper was implemented in the PD approach. This enabled the PD approach to function more similarly to the MSPM approach, facilitating direct comparison. Generally, it was found that the PD approach could provide competitive fault detection when compared with the MSPM approach. However, this came at the cost of real-time fault detection, as well as a longer detection delay for incipient faults. On the other hand, it was found that the PD approach performed better at root cause analysis than the MSPM approach.
In particular, the PD approach typically provided better isolation of the root cause of fault conditions. Despite some issues, similar results were observed for the PD approach when scaling up to larger processes, and these issues may be addressed with additional research, further improving the capabilities of the PD approach. It was therefore concluded that the PD approach is useful for fault diagnosis and should be investigated further in future research.
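The DBN formulation from the thesis is not reproduced here; as a purely illustrative sketch of the underlying idea, scoring candidate component statuses by how well they explain an observation, the toy single-time-step Bayes calculation below uses assumed priors and noise models that are not taken from the study.

```python
import math


def gaussian_pdf(x: float, mu: float, sigma: float) -> float:
    """Probability density of a Gaussian N(mu, sigma**2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))


# Candidate statuses for one sensor, with illustrative prior probabilities.
priors = {"ok": 0.95, "drifting": 0.04, "stuck": 0.01}

# Assumed model prediction of the true process value, the observed reading,
# and the previous reading (all numbers hypothetical).
predicted, observed, last_reading = 50.0, 58.0, 49.0

# Likelihood of the observation under each hypothesis (assumed noise models).
likelihoods = {
    "ok": gaussian_pdf(observed, predicted, 2.0),               # reading tracks the process
    "drifting": gaussian_pdf(observed, predicted + 8.0, 4.0),   # reading offset by a drift
    "stuck": gaussian_pdf(observed, last_reading, 0.5),         # reading frozen at last value
}

# Posterior by Bayes' rule: P(status | obs) is proportional to P(obs | status) * P(status).
unnorm = {s: priors[s] * likelihoods[s] for s in priors}
total = sum(unnorm.values())
posterior = {s: p / total for s, p in unnorm.items()}
print(max(posterior, key=posterior.get), posterior)
```

With these assumed numbers the "drifting" hypothesis explains the reading best, which is the single-step analogue of the status inference performed over time in a DBN.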
- ItemAqueous two-phase systems for the extraction of polyphenols from wine solid waste(Stellenbosch : Stellenbosch University, 2019-12) Herbst, Jacqueline; Pott, Robert William M.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The South African wine industry produces large amounts of solid waste, which is left over after the wine making process is complete. This solid waste, called pomace, accounts for approximately 25% of the fresh grape mass used for wine making. The pomace includes the parts of the grape not converted to wine: the skins, seeds and stems. These contain valuable compounds which could be used, for instance, for nutraceuticals, thus allowing for the valorisation of the wine solid waste. Included in these compounds are polyphenols. Polyphenols are compounds containing aromatic rings with hydroxyl groups, many of which have interesting or useful properties, such as strong antioxidant activity, and are therefore sought after for therapeutics and cosmetics. Polyphenols have been extracted from various plant sources using conventional extraction methods, such as solvent extraction using either methanol or ethanol, or supercritical CO2 extraction. These solvents or processes are often expensive, and the high volumes needed drive up processing costs. Alternative methods are needed to provide more cost-effective extraction processes, while also supporting a Green Chemistry approach to extraction by taking into consideration the environmental impacts of the process and reducing harmful solvent use. One such alternative is the use of Aqueous Two-Phase Systems (ATPS), which have been used as a process alternative for biomolecule extractions, including polyphenols. ATPS are composed of two immiscible aqueous solutions, often created with polyethylene glycol (PEG) and salt. Two phases form when these components are within specific concentration bounds. The biomolecules are extracted from the plant material using the ATPS and then concentrated into one phase of the ATPS. Many different PEG and salt combinations exist and have been studied in the literature, examining the phase behaviour as well as the ability of the ATPS to extract and concentrate the biomolecules. In this study, ATPS of PEG 6000, PEG 8000 and PEG 10 000 with potassium sodium tartrate were investigated. In the first set of experiments, the phase behaviour was examined at different temperatures by constructing phase diagrams, which included binodal curves and tie-line information (which together define the two-phase region). The binodal data were fitted with a non-linear model, called the Merchuck equation, and the tie-line data were validated using the Othmer-Tobias and Bancroft equations. The phase diagrams indicated that higher temperatures and larger PEG molecular weights favoured phase formation, producing ATPS over a wider range of PEG/salt compositions. These results have been published in The Journal of Chemical and Engineering Data. In the next set of experiments, these ATPS were evaluated for their ability to extract and concentrate polyphenols from wine solid waste, compared to a solvent extraction using ethanol/water (80:20 v/v). Various parameters were investigated, including PEG molecular weight, salt type, tie-line length (TLL), extraction temperature, extraction time, pH, biomass loading and phase separation temperature.
It was found that temperature, PEG composition (TLL) and biomass loading were the biggest drivers in improving the extraction and concentrating ability of the ATPS. ATPS performance was judged using the yield and the partition coefficient (K) of the polyphenols, measured as gallic acid equivalents (GAE). Yields upwards of 85% were achieved, with K varying between 2 and 4; the highest K of 7.2 was achieved by only one ATPS, which had the largest fraction of PEG in the total ATPS composition. The results show that ATPS can be successfully used as an extraction method for polyphenols from wine solid waste.
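As a sketch of the binodal-fitting step mentioned above: the Merchuck correlation expresses the PEG mass fraction Y as a function of the salt mass fraction X, Y = A·exp(B·X^0.5 − C·X^3). The example below fits this form to a handful of made-up (X, Y) points with SciPy; the data points and starting values are illustrative only and do not reproduce the published phase diagrams.

```python
import numpy as np
from scipy.optimize import curve_fit


def merchuck(x, a, b, c):
    """Merchuck equation for a PEG/salt binodal: Y = A * exp(B*X**0.5 - C*X**3)."""
    return a * np.exp(b * np.sqrt(x) - c * x**3)


# Illustrative binodal points (salt wt%, PEG wt%); NOT data from the study.
x_salt = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
y_peg = np.array([45.0, 32.0, 23.0, 16.0, 11.0, 7.0])

params, _ = curve_fit(merchuck, x_salt, y_peg, p0=[60.0, -0.3, 1e-4], maxfev=10000)
a, b, c = params
print(f"A={a:.2f}, B={b:.3f}, C={c:.2e}")
```

The partition coefficient K reported for the extractions is conventionally the ratio of polyphenol (GAE) concentration in the top, PEG-rich phase to that in the bottom, salt-rich phase.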
- ItemArabinoxylan as partial flour replacer: The effect on bread properties and economics of bread making(Stellenbosch : Stellenbosch University, 2016-03) Koegelenberg, Danika; Chimphango, Annie F. A.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Wheat bran, used for animal feed, is a good candidate for the production of higher value products such as arabinoxylan (AX). Extracted AX holds potential as a partial flour replacer in the bread making industry. The aim of this study was to maximise flour removal with the minimum possible AX addition, while maintaining the physical properties of the bread. The extraction of AX from wheat bran was accomplished under alkaline conditions. The purity of AX extracted at lab scale (275 ml) was 44.3% at the optimum extraction conditions (0.5 M NaOH, 240 min, 80°C). Large-scale extraction (27 l), with additional purification steps including ultrafiltration, anion-exchange chromatography and ethanol precipitation, resulted in an extract with 49.3% purity. The two extracts obtained on small scale (E1) and large scale (E2) both had high average molecular weights (620 000 and 470 000 Da, respectively) and arabinose to xylose (A/X) ratios of 0.7 and 0.6. With inclusion of the additional purification steps at large scale, the whiteness index of the final extract increased from 33 to 93. For the intended application, the lighter extract colour has a less prominent effect on bread colour and is therefore advantageous. The high water-binding capacity of AX allows for increased dough water absorption, which alters the final bread weight and volume. However, at optimal AX addition and flour removal levels, these product properties can be maintained. This was achieved with the inclusion of 0.8% crude AX extract and 2.5% flour removal, while increasing water absorption by nearly 2%. The only physical difference between the AX-containing loaves and the control was in colour, due to the darker colour of the extract. However, a decolourisation step included in the extraction of E2 resulted in a significantly lighter final product compared to loaves containing E1. Comparison of E1 and E2 to highly pure AX resulted in similar final product properties, indicating that the extracts’ performance was not affected by their purity. Furthermore, inclusion of an oxidative enzyme, laccase, resulted in a softer final product as determined using a texture analyser. The AX production cost was estimated at R110/kg, resulting in higher production costs for AX-containing loaves compared to commercial white bread. In order to maintain profit margins, the selling price of AX-containing loaves has to be increased by 9.6%. In conclusion, crude AX extracted from the animal feed co-product, wheat bran, is a feasible candidate for application in the bread making process as a partial flour replacer.
- ItemArray completion methods for thermodynamic data generation(Stellenbosch : Stellenbosch University, 2023-12) Middleton, Francesca; Cripwell, Jamie Theo; Stellenbosch University. Faculty of Engineering. Dept. of Chemical Engineering.ENGLISH ABSTRACT: This investigation considered the viability of array completion methods (ACMs), a class of machine learning method, for pseudo-data generation for thermodynamic properties. The purpose of the pseudo-data generation was to aid thermodynamic model development, such as that of complex equations of state used in the development and optimisation of processes in the chemical engineering industry. The excess enthalpy of binary liquid mixtures was used as the property for this investigation; it exhibits significant variations in behaviour that are difficult to predict accurately. Excess enthalpy data are expensive to produce with experimental methods, and the machine learning approach of array completion thus aims to reduce this expense. ACMs were proposed in preference to other machine learning methods because they are purely data-driven, therefore do not require descriptors, and work well with sparse datasets. ACMs operate solely on the data available within the array, making data quality a critical factor for optimal outcomes. A meticulous data collection effort was undertaken to achieve the overarching goal of pseudo-data generation, and reliable excess enthalpy data were collected for binary liquid mixtures encompassing various temperature conditions. The array of excess enthalpy data had four dimensions, or ways, with the mixtures’ two components on the first two ways and the mixtures’ composition and temperature conditions on the third and fourth ways, respectively. The study explored three ACMs, using singular value decomposition (SVD) for the 2-way or matrix completion method (MCM) and higher-order SVD (HOSVD) for 3- and 4-way completion. When used in conjunction with UNIFAC predictions, the MCM outperformed the standalone UNIFAC model. Notably, it was found that a rank of six for the decomposition suffices to complete the excess enthalpy data array. The research demonstrated, however, that the 3-way and 4-way ACMs were not applicable to the excess enthalpy data. The MCM was therefore applied to 2-way (matrix) slices of the array, formed at discrete temperature and composition conditions. The slices were related via a constraint when completing matrix slices in parallel, ensuring smooth predictions across composition. This adjustment significantly improved prediction quality and allowed the MCM to be successfully applied to matrices at constant temperature and composition conditions. The optimal pattern of missing entries for pseudo-data generation was found to be randomly missing entries, as opposed to systematically missing entries. The concept of targeted measurements is therefore proposed: directing thermodynamic experiments towards creating randomly missing patterns of entries in arrays. This also fills sparse areas of the arrays, allowing the MCM to generate better-quality pseudo-data at a lower cost than experimentation alone. This circumvention of the limitations imposed by data sparsity could enrich the training data for thermodynamic models and enhance their predictive capabilities. The efficacy of the MCM was also found to rely on the initial guesses for the missing entries in an array.
The research demonstrated the synergy of ACMs with UNIFAC, where the group contribution method provided initial guesses for the MCM, resulting in a hybrid thermodynamic-machine learning method. These informed initial guesses also aided the interpretation of the pseudo-data sets, as UNIFAC provides physically informed estimates of the dataset and can thus offer quick checks to users of the MCM. The efficacy of the MCM for varied thermodynamic complexity was also investigated, using the mathematical and thermodynamic descriptions of the data. This included investigating behaviour for the functional groups present in a mixture and other measures of the complexity of mixture behaviour. The MCM recognised underlying patterns inherent in thermodynamic theory and grouped systems based on their behaviour. The mixture complexity played a small role in prediction accuracy, as mixtures of varied complexity required the same rank for optimal completion. It was, instead, clear that the distribution of the data and the presence of similar mixtures played a more pivotal role in determining the accuracy of the generated pseudo-data. The implications of the study extend to future research. While effective, the MCM employed in this study warrants further refinement, possibly by incorporating fundamental knowledge and robust statistical motivations. This research contributes to understanding how ACMs can be used for pseudo-data generation for composition-dependent thermodynamic properties. The investigation used the excess enthalpy of binary liquid mixtures, a difficult-to-predict property, and succeeded, demonstrating the MCM's efficacy.
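A minimal sketch of the kind of SVD-based matrix completion described in this abstract is given below: missing entries are seeded with an initial guess (UNIFAC predictions in the thesis; plain column means here) and then refined by repeatedly truncating the SVD at a fixed rank. The matrix values, the rank, and the iteration count are illustrative assumptions, not the thesis's data or settings.

```python
import numpy as np


def svd_impute(matrix: np.ndarray, mask: np.ndarray, rank: int, n_iter: int = 200) -> np.ndarray:
    """Iterative hard-impute: refine missing entries via a rank-truncated SVD.

    matrix: array with placeholder values already filled into the missing positions
    mask:   boolean array, True where the entry was actually observed
    """
    filled = matrix.copy()
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
        # Keep observed entries fixed; update only the missing ones.
        filled = np.where(mask, matrix, low_rank)
    return filled


# Illustrative 4x4 excess-enthalpy-style matrix with two missing entries (np.nan).
data = np.array([[0.0, 1.2, 2.1, 0.8],
                 [1.2, 0.0, np.nan, 1.5],
                 [2.1, 1.9, 0.0, np.nan],
                 [0.8, 1.5, 2.4, 0.0]])
mask = ~np.isnan(data)

# Initial guess for missing entries: column means of the observed values
# (a stand-in for the UNIFAC-informed initial guesses used in the thesis).
col_means = np.nanmean(data, axis=0)
seeded = np.where(mask, data, np.broadcast_to(col_means, data.shape))

completed = svd_impute(seeded, mask, rank=2)
print(np.round(completed, 2))
```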
- ItemAssessing water-energy nexus dynamics for sustainable resource management in Cape Town : a system dynamics approach(Stellenbosch : Stellenbosch University, 2024-03) Ndlela, Thandekile Julianah; Goosen, Neill Jurgens; de Kock, Imke Hanlu; Stellenbosch University. Faculty of Engineering. Dept. of Chemical Engineering. Process Engineering.ENGLISH ABSTRACT: This study addresses the critical intersection of energy and water resources within an urban context, which has not been extensively explored in water-energy-food (WEF) nexus studies. The intricate interplay between these resources, particularly within cities where energy and water consumption are linked, can be explored using a system dynamics approach. Previous studies conducted in Cape Town have examined the energy-water nexus; however, none have used system dynamics to quantify the relationships between water and energy. This research fills this gap by developing a system dynamics model that simulates the energy-water relationship for Cape Town's metropolitan area. The model was rigorously tested across various scenarios, each providing a distinct approach to enhancing water resource management in Cape Town. The scenarios tested on the water submodel include measures focused on water conservation and water demand management (WC&WDM), encompassing initiatives such as leak repairs, pressure regulation, and extensive user education on water conservation. Strategies involving the development of groundwater resources for supply augmentation were analysed to enhance the water supply, and the potential of wastewater reuse as a sustainable water management solution was assessed, contributing to a more holistic approach. Considerable attention was given to evaluating the effects of temperature and rainfall changes, since these are crucial factors in understanding evolving water dynamics. The findings highlight increased water and electricity supply as key leverage points that could prevent future shortages, while emphasising behavioural changes that reduce wasted water to enhance sustainability efforts. By 2035, the model predicted a balance between supply and demand. Additionally, the model considered energy scenarios involving the construction of a 650 MW solar farm and the integration of independent energy producers. These interventions were predicted to significantly increase the energy supply within Cape Town, effectively mitigating the risk of energy shortages. Integrating independent energy producers contributes to a more diversified and resilient energy network while reducing dependency on centralised sources. This approach aligns with global trends towards decentralised and renewable energy systems, reinforcing Cape Town's position as a forward-thinking and sustainable city.
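System dynamics models of this kind reduce, numerically, to stocks integrated over time from their inflows and outflows. The toy sketch below tracks a single water-storage stock under assumed supply-augmentation and demand-management parameters; all values and time steps are illustrative and are not taken from the Cape Town model.

```python
# Toy stock-flow simulation of a water storage level (all numbers illustrative).
years = 15                 # simulation horizon in years
dt = 0.25                  # quarterly time step, in years
storage = 900.0            # initial storage stock, Ml
supply = 300.0             # baseline annual supply, Ml/year
demand = 320.0             # baseline annual demand, Ml/year
demand_growth = 0.01       # fractional demand growth per year (population, economy)
conservation = 0.015       # fractional demand reduction per year from WC&WDM measures
augmentation = 4.0         # extra annual supply added each year (groundwater, reuse), Ml/year

t = 0.0
while t < years:
    storage += (supply - demand) * dt                     # stock = integral of (inflow - outflow)
    supply += augmentation * dt                           # supply augmentation ramps up over time
    demand *= 1 + (demand_growth - conservation) * dt     # net demand change per time step
    t += dt

print(f"After {years} years: storage {storage:.0f} Ml, supply {supply:.0f} Ml/yr, demand {demand:.0f} Ml/yr")
```

Scenario testing in such a model amounts to rerunning this integration with different parameter values (for example, a larger augmentation rate or stronger conservation) and comparing the resulting supply-demand balance over time.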