Faculty of Engineering
The Faculty of Engineering at Stellenbosch University is one of South Africa's major producers of top quality engineers. Established in 1944, it currently has five Engineering Departments.
Browsing Faculty of Engineering by advisor "Auret, Lidia"
Now showing 1 - 20 of 24
- ItemAdaptive process monitoring using principal component analysis and Gaussian Mixture Models(Stellenbosch : Stellenbosch University, 2019-04) Addo, Prince; Auret, Lidia; Kroon, R. S. (Steve); Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Principal component analysis (PCA) is a well-known technique used in combination with monitoring statistics for fault detection. Moving window PCA and recursive PCA are adaptive extensions of PCA that operate by periodically updating the monitoring model to incorporate new observations. This allows the monitoring model to cope with process behaviours that change slowly over time such as equipment aging, catalyst deactivation, and reaction kinetics drift and thereby improving monitoring performance. Recent demands and advancements in process industries, however, may result in multimodal operations, where distinct clusters are present in measurement data. The performance of the aforementioned PCA-based monitoring techniques is hindered due to the violation of the implicit assumption that all the observed process data belong to the same Gaussian distribution. To improve monitoring performance, multimodal techniques are required. The Gaussian mixture model (GMM) is a probabilistic model that can account for the observed modes in the process data and therefore be used in the monitoring of multimode processes. However, multimodal processes also exhibit behaviours that change slowly over time, which is challenging. This work develops a monitoring approach that extends adaptive PCA techniques to GMM, which effectively addresses the aforementioned challenge. This is done by continuously refreshing the model parameters and monitoring statistics for the PCA and GMM. Other key areas that the work focuses on are in improving the specifications for adaptive PCA protocol (taking into consideration the various model update methods) and Gaussian mixture model methods (taking into consideration the monitoring model types and data types). Also, the performance of unimodal and multimodal process monitoring approaches was assessed. The performance of the developed approach and the improved implementations of the pre-existing methods were assessed using various case studies including unimodal and multimodal processes both with and without drift as well as various fault types. The Tennessee Eastman process and the non-isothermal continuously stirred tank reactor process are the two main simulators considered. Results for the considered cases show improved performance for the developed approach (adaptive PCA-based GMM) as compared to PCA, adaptive PCA, and traditional GMM, in fault detection. The GMM, as expected, performed better for multimodal cases than the PCA approaches. Also, the adaptive PCA approach performed better than PCA when there is process drift.
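The sketch below illustrates, in broad strokes, the kind of PCA-plus-GMM monitoring statistic described in the abstract above: a PCA model is fitted to normal operating data, a Gaussian mixture is fitted to the retained scores, and the negative log-likelihood of new scores is compared against an empirical control limit. The data, number of components and the 99% limit are illustrative assumptions, not the settings used in the thesis.

```python
# Minimal sketch of PCA-based monitoring with a Gaussian mixture model on the
# scores. Data, component counts and the control limit are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_noc = rng.normal(size=(1000, 10))          # normal operating condition (NOC) data
X_new = rng.normal(size=(200, 10)) + 0.5     # new observations to be monitored

# Fit PCA on NOC data and keep the dominant components.
pca = PCA(n_components=3).fit(X_noc)
scores_noc = pca.transform(X_noc)

# Fit a GMM to the retained scores to allow for multimodal operation.
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(scores_noc)

# Monitoring statistic: negative log-likelihood of each observation under the GMM.
nll_noc = -gmm.score_samples(scores_noc)
nll_new = -gmm.score_samples(pca.transform(X_new))

# Control limit from the empirical 99th percentile of the NOC statistic.
limit = np.percentile(nll_noc, 99)
alarms = nll_new > limit
print(f"fault alarms raised on {alarms.mean():.1%} of new samples")
```

An adaptive variant would periodically refit the PCA and GMM on a moving window of recent in-control data, which is the extension the work investigates.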
- ItemAdvanced control with semi-empirical and data based modelling for falling film evaporators(Stellenbosch : Stellenbosch University, 2013-03) Haasbroek, Adriaan Lodewicus; Steyn, W. H.; Auret, Lidia; Stellenbosch University. Faculty of Engineering. Dept. of Electrical and Electronic Engineering.ENGLISH ABSTRACT: This work focussed on a local multiple-chamber falling film evaporator (FFE). The FFE is currently under operator control and experiencing large amounts of lost production time due to excessive fouling. Furthermore, the product milk dry mass fraction (WP) is constantly off specification, negatively influencing product quality, while the first effect temperature (TE1) runs higher than the recommended 70°C (a main cause of fouling). A two-month period of historical data was received with the aim of developing a controller that could outperform the operators by keeping both controlled variables, WP and TE1, at desired set points while also increasing throughput and maintaining product quality. Access to the local plant was not possible, and as such the available process data were cleaned and used to identify two data-based models, transfer function and autoregressive with exogenous inputs (ARX) models, as well as a semi-empirical model. The ARX model proved inadequate to predict TE1 trends, with an average TE1 correlation to historical data of 0.36, compared to 0.59 and 0.74 for the transfer function and semi-empirical models respectively. Product dry mass correlations were similar between the models, with average correlations of 0.47, 0.53 and 0.51 for the semi-empirical, transfer function and ARX models respectively. Although the semi-empirical model showed the lowest WP correlation, this was offset by its TE1 prediction advantage. Therefore, the semi-empirical model was selected for controller development and comparisons. The success of the semi-empirical model was in accordance with previous research [1] [2] [3], yet other studies have concluded that ARX modelling was more suited to FFE modelling [4]. Three controllers were developed, namely: a proportional and integral (PI) controller as base case, a linear quadratic regulator (LQR) as an optimal state space alternative and finally, to make full use of process knowledge, a predictive fuzzy logic controller (PFC). The PI controller was able to offer zero-offset set point tracking, but could not adequately reject a feed dry mass (WF) disturbance (as proposed and reported by Winchester [5]). The LQR was combined with a Kalman estimator and used pre-delay states. In order to offer increased disturbance rejection, the feedback gains of the disturbance states were tuned individually. The altered LQR and PFC solutions proved to adequately reject all modelled disturbances and outperform a cascade controller designed by Bakker [6]. The maximum deviation in WP was a fractional increase of 0.007 for the LQR and 0.005 for the PFC, compared to 0.012 for PI and 0.0075 for the cascade controller [6] (WF disturbance fractional increase of 0.01). All the designed controllers managed to reduce the standard deviation of operator-controlled WP and TE1 by at least 700% and 450%, respectively. The same level of reduction was seen for maximum control variable deviations (370%), the integral of the absolute error (300%) and the mean squared error (900%). All these performance metrics point to the controllers performing better than the operator-based control.
In order to prevent manipulated variable saturation and optimise the feed flow rate (F1), a fuzzy feed optimiser (FFO) was developed. The FFO focussed on maximising the available evaporative capacity of the FFE by optimising the motive steam pressure (PS), which supplied heat to the effects. By using the FFO with each controller, the average feed flow rate was increased by 4.8% (±500 kg/h) compared to operator control. In addition to the flow rate gain, the controllers kept TE1 below 70°C and WP on specification. As such, overall product quality increased and downtime decreased due to reduced fouling.
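As a rough illustration of the data-based modelling step, the following sketch identifies a single-input ARX model from historical data by ordinary least squares and scores it by correlation with the measured output, in the spirit of the model comparison above. The model orders, signals and the synthetic "process" are placeholders, not the falling film evaporator models used in the work.

```python
# Sketch of identifying a SISO ARX(na, nb) model by least squares and scoring a
# free-run simulation against historical data. Orders and data are assumptions.
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Fit y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] by least squares."""
    n = max(na, nb)
    rows = [np.r_[y[k-na:k][::-1], u[k-nb:k][::-1]] for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.asarray(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]

def simulate_arx(u, y0, a, b):
    """Free-run simulation of the fitted ARX model from initial conditions y0."""
    na, nb = len(a), len(b)
    y = list(y0[:max(na, nb)])
    for k in range(len(y), len(u)):
        y.append(a @ np.array(y[k-na:k][::-1]) + b @ u[k-nb:k][::-1])
    return np.array(y)

rng = np.random.default_rng(1)
u = rng.normal(size=500)                 # e.g. a feed flow rate (input)
y = np.zeros(500)                        # e.g. product dry mass fraction (output)
for k in range(2, 500):                  # toy "true" process used only to make demo data
    y[k] = 0.7*y[k-1] - 0.1*y[k-2] + 0.5*u[k-1] + 0.05*rng.normal()

a, b = fit_arx(u, y)
y_hat = simulate_arx(u, y, a, b)
print(f"correlation with historical output: {np.corrcoef(y[10:], y_hat[10:])[0, 1]:.2f}")
```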
- ItemApplying dynamic Bayesian Networks to process monitoring(Stellenbosch : Stellenbosch University, 2018-12) Wakefield, Brandon Jason; Auret, Lidia; Kroon, R. S. (Steve); Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: In efforts to reduce the impact of human error on the operation of chemical and mineral processing plants, reliable process monitoring solutions attempt to assist plant operators and engineers to detect and diagnose process faults before significant loss is incurred. An existing solution, the traditional multivariate statistical process monitoring (MSPM) approach, is able to reliably detect abnormal process behaviour but struggles to unambiguously identify the root cause of the abnormal behaviour. It was identified that this is caused by a lack of incorporation of existing process knowledge into the framework of the MSPM approach. It was proposed to investigate a different fault diagnosis approach which directly incorporates process knowledge into its framework. Lerner et al. (2000) and Lerner (2002) present such an approach, using probabilistic methods to infer process behaviour given a particular process model. This model is in the form of a dynamic Bayesian network (DBN), and would contain various models which each describe particular process behaviour given information about the operational status of various process components. In particular, these DBN models were able to describe normal process behaviour in addition to highly specific abnormal process behaviour caused by, for instance, a sensor fault or a blocked pipe. Using optimised methods, the authors could then use a DBN model to make predictions about process behaviour and infer, given observation of actual process behaviour, which combination of component statuses best describe that observation. Therefore, solving the fault diagnosis problem could be reduced to performing inference in a DBN using this approach. A probabilistic fault diagnosis (PD) approach based on Lerner et al. (2000) and Lerner (2002) was therefore implemented and investigated in this thesis. A survey of recent DBN-based PD approaches was also performed, and it was determined that relatively little research had been done on the topic. Furthermore, published results presenting fault diagnosis performance for DBN-based PD approaches were typically found to be useless for meaningful comparison with a traditional MSPM approach. In this regard, this thesis aimed to investigate the usefulness of the PD approach in comparison to the MSPM approach, while providing useful fault diagnosis performance metrics to facilitate comparison with other fault diagnosis approaches. The PD approach tested in this research also extended upon Lerner et al. (2000) and Lerner (2002) by including models for regulatory control systems and recycle streams based on the work by Yu and Rashid (2013). Additionally, from the same paper, the concept of abnormality likelihood index (ALI) was implemented in the PD approach. This enabled the PD approach to function more similarly to the MSPM approach, facilitating direct comparison. Generally, it was found that the PD approach could provide competitive fault detection when compared with the MSPM approach. However, this was at the cost of real-time fault detection as well as longer detection delay for incipient faults. On the other hand, it was found that the PD approach performed better at root cause analysis than the MSPM approach. 
In particular, the PD approach typically provided better isolation for the root cause of fault conditions. Despite some issues, similar results were observed for the PD approach when scaling up to larger processes. Nonetheless, these issues may be addressed with additional research, further improving the capabilities of the PD approach. Therefore, it was concluded that the PD approach is useful for fault diagnosis and should be investigated further in future research.
- ItemAutomating the initialisation of relay autotuning using control performance monitoring(Stellenbosch : Stellenbosch University, 2020-03) Albertus, Grant John; Auret, Lidia; Dorfling, Christie; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Relay autotuning is an automated procedure that obtains accurate tuning parameters for the process controller when required. An automated controller tuning procedure would be desirable within industry due to the poor controller response resulting from incorrect manual tuning of control loops. However, it is not known in industry how relay autotuning should be implemented, and present literature does not address this problem extensively. Therefore, the aim of the research was to determine a control performance monitoring (CPM) technique that initiates relay autotuning when tuning parameters are incorrect. Furthermore, key parameters and factors associated with relay autotuning were evaluated such that a robust procedure could be proposed. To simulate realistic process conditions, a milling circuit simulation model with disturbances and sensor noise was used as a case study. Historical benchmarking was identified as a technique suitable to initiate the relay autotuning procedure. The variance of the product particle size (PSE) was selected as a benchmark against which the current control performance was assessed. Defined during a period of good controller performance without faults, the 90th percentile of the PSE variance at normal operating conditions (NOC), σ²_threshold, was utilised as the historical benchmark. Using the benchmark, the current controller performance was assessed as poor if the variance of the PSE (σ²_PSE) was persistently larger than σ²_threshold. As a result, the relay start time was defined as the allowable time σ²_PSE is above σ²_threshold before relay autotuning is initiated. In addition to σ²_threshold, the historical benchmarking method required selection of a moving variance length. For the project, a 1-hour sliding window was assessed as suitable, as it was able to detect the oscillations that occurred. Poor controller performance was detected as an increase in the moving variance over time. To simulate conditions of poor controller performance and incorrect tuning parameters, valve degradation was implemented on the PSE control loop. The valve was changed from linear to quick opening characteristics. Without retuning the controller, there was a persistent increase in variance because the current tuning parameters were too aggressive, and oscillations were produced within the milling circuit. The relay autotuning procedure proved beneficial in reducing variance when valve degradation was implemented. Therefore, relay autotuning can attenuate faults which introduce oscillations into the process if the original tuning parameters are too aggressive. In addition, key parameters were assessed for industrial application. The relay amplitude is suggested to be the smallest value possible to overcome the hysteresis band. Furthermore, a smaller relay amplitude reduces the inflated tuning parameters observed at lower sensor noise levels. With respect to the historical benchmarking technique, earlier initialisation of the relay autotuner resulted in better controller performance. Lastly, varying the extent of valve wear showed that retuning is not necessary for small degrees of valve wear.
Despite the improved controller performance, economic performance assessment of relay autotuning and key parameters was inconclusive.
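The historical benchmarking trigger described above can be pictured as a moving-window variance compared against a percentile threshold, with autotuning flagged once the variance stays above the threshold for a set time. The sketch below shows that idea; the window length, persistence time, percentile and data are illustrative assumptions rather than the thesis settings.

```python
# Sketch of a historical-benchmarking trigger: moving variance of the controlled
# variable versus a 90th-percentile NOC threshold, with a persistence check.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
pse_noc = pd.Series(70 + rng.normal(0, 0.5, 2000))            # NOC history of PSE
pse_now = pd.Series(70 + rng.normal(0, 0.5, 1000)
                    + np.r_[np.zeros(500), 2*np.sin(0.3*np.arange(500))])  # oscillating

window = 60                                                   # samples, e.g. 1 hour
var_noc = pse_noc.rolling(window).var().dropna()
threshold = np.percentile(var_noc, 90)                        # sigma^2_threshold

var_now = pse_now.rolling(window).var()
above = var_now > threshold
persist = above.rolling(30).sum() == 30                       # above limit for 30 samples
if persist.any():
    print(f"relay autotuning initiated at sample {persist.idxmax()}")
```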
- ItemBayesian parameter estimation for process monitoring(Stellenbosch : Stellenbosch University., 2020-03) Basson, Marno; Cripwell, Jamie T.; Auret, Lidia; Coetzer, R. L. J.; Kroon, R. S. (Steve); Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The underlying mechanism of many physical systems studied in engineering can be described by algebraic, ordinary differential and auxiliary equations. While these equations stem from engineering expertise, the principles underpinning the model development phase do not always provide sufficient insight into selecting suitable values for all the model parameters. Furthermore, it might not be possible to directly measure all the model parameters (which can be related to several physicochemical system properties) from the system under consideration due to physical, economic and time constraints. As a result, the engineer often has to estimate the model parameters from noise-corrupted, time series data obtained from the physical system, while simultaneously quantifying how reliable these parameter estimates are. The purpose of the current study is to investigate model parameter estimation, from both the frequentist and Bayesian statistical inference perspectives, and to evaluate the merit of applying Bayesian probabilistic techniques in the chemical engineering setting. Two Bayesian parameter estimation methodologies were developed. The first methodology applies to estimating the parameters of lumped system algebraic dynamic models, while the second methodology is focused on lumped system ordinary differential equation model parameter estimation. Both proposed Bayesian methodologies were benchmarked against the Gauss-Newton nonlinear least squares implementation for which the resulting estimated model parameters have a (frequentist) maximum likelihood interpretation. The results obtained from the proposed Bayesian methodologies were compared to the benchmark approach results based on several performance criteria for a single data set manifestation as well as for multiple independently generated data sets. It was found that the proposed Bayesian methodologies, as well as the benchmark approaches, provide consistent parameter estimation results when compared to the simulation ground truth parameter values, across the multiple independent data sets. Based on the parameter inference results obtained from the different case studies considered in the current work, it was determined that, from a pragmatic engineering perspective, there is no reason to favour the use of the proposed Bayesian methodologies over the frequentist benchmark approaches and vice versa as both approaches provide comparable results. However, the benefit of the Bayesian approach (which explicitly expresses the model parameter uncertainty) was illustrated by considering a simple cost-benefit analysis for several of the case studies where it was possible to make more informed engineering decisions under uncertainty compared to the traditional frequentist benchmark approach. In conclusion, even though there is no noteworthy difference between the parameter inference results obtained from the benchmark and proposed Bayesian approaches, the value of the Bayesian approach shows up when one considers the subsequent application of the inferred parameters in day-to-day engineering tasks. Consequently, it is worth further exploring the benefit of applying probabilistic techniques and explicitly modeling with uncertainty, i.e. 
Bayesian statistical inference, in chemical engineering applications.
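To make the frequentist/Bayesian contrast above concrete, the sketch below fits a simple two-parameter algebraic model with nonlinear least squares and then samples its posterior with a random-walk Metropolis algorithm. The model form, priors, noise level and sampler settings are all illustrative assumptions, not the case studies or methodologies of the thesis.

```python
# Minimal sketch: least-squares point estimate versus a Metropolis posterior sample
# for the parameters of y = theta0 * exp(-theta1 * t) observed with noise.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t = np.linspace(0, 5, 50)
true = np.array([2.0, 0.8])
y = true[0]*np.exp(-true[1]*t) + 0.05*rng.normal(size=t.size)   # noisy observations

def model(theta):
    return theta[0]*np.exp(-theta[1]*t)

# Frequentist benchmark: nonlinear least squares (maximum likelihood for Gaussian noise).
fit = least_squares(lambda th: model(th) - y, x0=[1.0, 1.0])

# Bayesian estimate: random-walk Metropolis with a flat positivity prior, known sigma.
def log_post(theta, sigma=0.05):
    if np.any(theta <= 0):
        return -np.inf
    return -0.5*np.sum((y - model(theta))**2)/sigma**2

theta, lp, samples = np.array([1.0, 1.0]), None, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.02*rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])                              # discard burn-in

print("least-squares estimate:", fit.x)
print("posterior mean:", samples.mean(axis=0), "posterior std:", samples.std(axis=0))
```

The posterior standard deviation is the explicit parameter uncertainty that the abstract argues is useful for downstream decisions under uncertainty.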
- ItemA comparitive evaluation of membrane bioreactor technology at Darvill Wastewater Works(Stellenbosch : Stellenbosch University, 2017-03) Metcalf, Graham James; Pillay, Lingam; Auret, Lidia; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: Water scarcity is one of the overriding concerns of the 21st century. Improving wastewater treatment is a relatively cost-effective solution that reduces strain on the available water supply. Reducing and improving the quality of wastewater discharges should be at the forefront of integrated water management. The aim of the research was to investigate the ability of different Membrane Bioreactor (MBR) configurations to treat municipal wastewater to a standard above that achieved by conventional processes. The research objective was to install two MBR pilot plants with different configurations to run parallel (using the same influent wastewater) to the Darvill Wastewater Works (WWW). The performance of the two MBR pilot plants and the Darvill WWW is compared in terms of their treatment efficacy and performance reliability. A number of MBR comparative studies have been undertaken internationally, but none in South Africa. The two MBRs tested (Toray and Norit) have previously been pilot tested on municipal sewage by other researchers and therefore the results from these studies have proved useful for comparing performance. The MBR pilot plants were operated for an extended period of one year in order to take into account seasonality and variability of influent quality. Samples of influent and effluent were taken and analysed on a daily basis. The Darvill WWW is currently operational so these samples were already taken on a routine basis. The performance of the MBR pilot plants and Darvill WWW were compared by analysing the effluent water quality data using statistical techniques (t-test and F-test). A reliability analysis was also undertaken to determine performance against set water quality discharge standards. Based on the operating experience at Darvill and recorded MBR performance the average flux for the submerged Toray MBR system was 17 lmh, whereas that for the sidestream Norit MBR system was 37.5 lmh. The predicted peak flux for the Toray membrane was 20 lmh whereas for the Norit sidestream membrane it was 45 lmh. The predicted cleaning frequency for the Toray MBR is 5-6 weeks and 7-8 weeks for the Norit MBR. The MBR pilot plants out-perform the conventional activated sludge and secondary clarification process that is operated at the Darvill WWW for all determinands measured with the exception of phosphate removal. The performance of the MBRs could not be separated in terms of treatment efficacy with regard to all determinands as both outperformed the other depending on the determinand measured. The results showed that MBRs produce an effluent water quality that exceeds the capability of the conventional activated sludge process (CASP) operated at the Darvill WWW. The reliability of the MBR pilot plants was also higher than that of the Darvill WWW. MBRs thus have an advantage if compliance with stricter discharge standards is required or if treatment of the effluent for reclamation is the goal.
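The statistical comparison mentioned above (t-test and F-test on effluent quality) can be sketched as follows; the determinand, sample sizes and values are placeholders for the daily effluent measurements, and scipy does not provide the variance-ratio F-test directly, so it is computed from the F distribution.

```python
# Sketch of comparing two effluent streams: Welch's t-test on the means and a
# two-sided F-test on the variances. Data are illustrative placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
cod_mbr = rng.normal(18, 3, 300)      # e.g. effluent COD from an MBR pilot plant
cod_casp = rng.normal(30, 8, 300)     # e.g. effluent COD from the conventional works

t_stat, t_p = stats.ttest_ind(cod_mbr, cod_casp, equal_var=False)

f_stat = np.var(cod_mbr, ddof=1) / np.var(cod_casp, ddof=1)
df1, df2 = len(cod_mbr) - 1, len(cod_casp) - 1
f_p = 2 * min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))

print(f"t-test p = {t_p:.3g}, F-test p = {f_p:.3g}")
```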
- ItemControl performance assessment for a high pressure leaching process by means of fault database creation and simulation(Stellenbosch : Stellenbosch University, 2016-03) Miskin, Jason John; Auret, Lidia; Dorfling, C.; Bradshaw, S. M.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Platinum group metal (PGM) producing companies typically extract PGMs from a nickel-copper ore through a combination of processes including comminution, flotation, smelting, converter treatment, and leaching (Dorfling, 2012; Lamya, 2007; Liddell et al., 1986). The latter processing step is a hydrometallurgical process which aims to dissolve base metals (i.e. copper and nickel sulphides) out of a converter matte and into the liquid phase by means of oxidative reaction, while limiting the dissolution of PGMs. Dorfling (2012) developed an open-loop dynamic process model within MATLAB™ comprising the second and third stage pressure leaching system and surrounding process units of Western Platinum base metal refinery (BMR). The dynamic process model was subsequently reprogrammed into Simulink™ by Haasbroek and Lindner (2015). The developed dynamic process model is a powerful tool which can be used to investigate and possibly improve several aspects of the Western Platinum BMR operation. This project aims to improve the dynamic process model to mimic the Western Platinum BMR operation, and to ultimately use the model to analyse the process performance during the occurrence of faults (i.e. abnormal events that potentially lead to failure or malfunction of equipment and cause significant process performance degradation). The updated dynamic process model will allow the possibility of developing and testing fault detection and diagnostic algorithms for Western Platinum BMR. The Simulink™ dynamic process model was firstly validated using an approach developed by Sargent (2005). This approach validates the entire model on four different levels, namely conceptual model validation, computerised model verification, operational validation and data validation. A total of 34 dynamic process model issues, divided into the four validation categories of Sargent (2005), were identified. It was concluded that the reaction kinetics used within the baseline dynamic process model might cause inaccurate leaching predictions. This is attributed to issues existing in both the rate expressions and the experimental data used to fit the kinetics. Most of the other issues which affect the model predictability were addressed. The dynamic process model is therefore valid for predicting general process behaviour, but invalid for exact leaching predictions. The effect which a variety of variable step-changes has on the direction of leaching behaviour is, however, as expected. Several control layers which exist at Western Platinum BMR were implemented on the Simulink™ open-loop dynamic process model. This includes regulatory control, supervisory control, alarm systems and safety interlock systems. The addition of control layers ensures that the dynamic process model mimics and acts in a similar manner to the actual process. A total of 35 sensors, 21 actuators, 30 regulatory controllers, 33 alarm systems, 37 safety interlocks and 4 supervisory controllers were implemented in the open-loop dynamic process model. These control layers correspond to those used at Western Platinum BMR.
The developed closed-loop dynamic process model is a useful tool which can be used to train operators and therefore assist in developing operator decision making. A fault database was developed which contains entries of faults that commonly occur at Western Platinum BMR. Valuable fault characteristics (Himmelblau, 1978; Isermann, 2005; Patton et al., 2013) such as transition rate, frequency of occurrence, fault type and symptoms were included for each fault present in the fault database. Faults were organised based on their point of origin (Venkatasubramanian et al., 2003). Several faults were modelled, which ultimately served as a tool to perturb the process so as to assess the process performance during fault occurrences. A total of 17 faults with the necessary fault characteristics were gathered during a site visit (McCulloch et al., 2014) and composed into a fault database. This includes common faults such as valve wear, valve stiction, pump impeller wear, and controller misuse. A total of 12 faults were subsequently modelled. The fault database can serve as a means of information transfer between several Western Platinum BMR operators and personnel. The control performance was expressed in terms of control and operational key performance indicators which were calculated at several locations within the dynamic process model. The control and operational key performance indicators (Gerry, 2005; Marlin, 1995; McCulloch et al., 2014; Zevenbergen et al., 2006) include integral absolute error, maximum deviation, time not at set-point, valve reversals and valve saturation (control); and throughput, extent of base metal leaching, extent of PGM leaching and spillage (operational). The process performance during the occurrence of faults was compared to a faultless baseline run. The control performance during the occurrence of 8 independent fault cases was investigated. The extent to which process performance degraded varied significantly between faults. Two faults, namely pump impeller wear and solid build-up in cooling coils, caused the largest process upset. This is attributed to significant autoclave pressure and temperature variations, and the activation of safety interlocks. These two faults also proved to have the least localised symptoms. This is attributed to the major effect they have early in the process, which results in a propagation of symptoms. Two faults, namely valve wear and level sensor blockage, on the other hand caused minimal deviation in process performance while also propagating through only a few of the measured key performance indicators. These faults occur in the latter part of the process, which explains their localised symptoms. The extent to which the process performance was degraded by the level sensor blockage corresponds with expert knowledge (McCulloch et al., 2014), while the model underpredicts the process performance degradation caused by valve wear. The updated closed-loop dynamic process model together with the modelled faults can be used to develop and test fault detection and diagnostic algorithms for Western Platinum BMR. Moreover, fault signatures produced in this project could possibly be used as a baseline at Western Platinum BMR in an attempt to detect and identify fault occurrences through expert interpretation.
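A few of the control key performance indicators listed above are straightforward to compute from a simulated run; the sketch below shows integral absolute error, maximum deviation and time not at set-point. The tolerance band, sampling interval and signals are illustrative assumptions, not the indicator definitions used in the project.

```python
# Sketch of simple control KPIs for comparing a fault case against a faultless baseline.
import numpy as np

def control_kpis(y, sp, dt=1.0, tol=0.02):
    err = y - sp
    iae = np.sum(np.abs(err)) * dt                               # integral absolute error
    max_dev = np.max(np.abs(err))                                # maximum deviation
    not_at_sp = np.sum(np.abs(err) > tol*np.abs(sp).max()) * dt  # time not at set-point
    return {"IAE": iae, "max_dev": max_dev, "time_not_at_sp": not_at_sp}

rng = np.random.default_rng(5)
sp = np.full(1000, 65.0)                              # e.g. PGM grade set point of 65%
y_base = sp + rng.normal(0, 0.4, 1000)                # faultless baseline run
y_fault = sp + rng.normal(0, 1.5, 1000)               # run with a fault present

print("baseline:", control_kpis(y_base, sp))
print("fault case:", control_kpis(y_fault, sp))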
- ItemEvaluation of statistical analyses for the identification of surrogates and indicators using historical plant data from a water reclamation plant(Stellenbosch : Stellenbosch University, 2017-03) Coomans, Cornelius Johannes; Auret, Lidia; Burger, A. J.; Swartz, C. D.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: The lag time associated with water quality monitoring at water reclamation plants (WRPs) is a major hurdle in the way of implementing potable water reclamation in areas suffering from water shortages. The application of advanced monitoring techniques, which rely in part on surrogate and indicator variables, are one way of reducing the lag time associated with water quality monitoring. The aim of this study was to evaluate statistical analyses that could be used to identify variable relationships, which in turn could be used for the development of surrogate and indicator variables, following the data-driven approach. The plant data used in this study were obtained from an existing WRP that has been operational for more than five years without undergoing any major changes to the treatment and operational procedures. An initial assessment of the data found that the data contained large amounts of missing values. The assessment also identified the data periods during which the plant was operating under ‘normal’ conditions. Several time periods were removed since abnormal events occurred during these time periods. Pre-processing the data consisted of outlier removal (three sigma rule and Hampel filter), noise reduction (moving average filter) and missing data replacement (linear interpolation). The statistical analyses, Pearson’s and Spearman’s correlation, principal component analysis (PCA), linear discriminant analysis (LDA) and partial least squares (PLS) regression, were then incorporated into models for identifying variable relationships. The performance of the different statistical analyses were measured using statistical metrics such as R2 for correlation, visualisation of separation for PCA, classification error for LDA and both R2 and mean squared error (MSE) for the PLS models. The bivariate correlations provided the most concise results, whilst the LDA models could not be effectively assessed due to a change in the behaviour of the training and testing data. The PLS models performed poorly and did not produce any significant results. Expert process knowledge was also used to determine which variable relationships, identified by the models, could be regarded as valuable contributions, and which ought to be regarded as trivial. Overall it was found that the bivariate correlations were effective for detecting relationships between variables. PCA was a valuable tool that provided insight into the potential use of multivariate analyses. LDA and PLS regression may require further testing before a definitive ruling can be made regarding their usefulness for identifying variable relationships from unprocessed historical plant data. Although historical data could be used to identify variable relationships using bivariate correlations, it is not recommended for multivariate statistical analyses. A planned sampling campaign could be much more effective for data collection than using historical data, although the cost associated with a planned sampling campaign must be taken into consideration.
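The bivariate and multivariate analyses mentioned above can be sketched as follows: Pearson and Spearman correlation between a candidate surrogate pair, and a PLS regression scored with R² and MSE. Variable names, data and the number of latent variables are placeholders, not the plant variables or settings from the study.

```python
# Sketch of correlation analysis and PLS regression for surrogate/indicator screening.
import numpy as np
from scipy import stats
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(6)
turbidity = rng.gamma(2.0, 1.0, 500)
uv254 = 0.4*turbidity + 0.1*rng.normal(size=500)          # candidate surrogate pair
X = rng.normal(size=(500, 8))                             # other online measurements
doc = X @ rng.normal(size=8) + 0.5*rng.normal(size=500)   # lab-measured target variable

print("Pearson r:", stats.pearsonr(turbidity, uv254)[0])
print("Spearman rho:", stats.spearmanr(turbidity, uv254)[0])

pls = PLSRegression(n_components=3).fit(X[:400], doc[:400])
pred = pls.predict(X[400:]).ravel()
print("PLS R^2:", r2_score(doc[400:], pred), "MSE:", mean_squared_error(doc[400:], pred))
```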
- ItemExploiting process topology for optimal process monitoring(Stellenbosch : Stellenbosch University, 2014-12) Lindner, Brian Siegfried; Auret, Lidia; Stellenbosch University. Faculty of Engineering. Department of Process Engineering.ENGLISH ABSTRACT: Modern mineral processing plants are characterised by a large number of measured variables, interacting through numerous processing units, control loops and often recycle streams. Consequentially, faults in these plants propagate throughout the system, causing significant degradation in performance. Fault diagnosis therefore forms an essential part of performance monitoring in such processes. The use of feature extraction methods for fault diagnosis has been proven in literature to be useful in application to chemical or minerals processes. However, the ability of these methods to identify the causes of the faults is limited to identifying variables that display symptoms of the fault. Since faults propagate throughout the system, these results can be misleading and further fault identification has to be applied. Faults propagate through the system along material, energy or information flow paths, therefore process topology information can be used to aid fault identification. Topology information can be used to separate the process into multiple blocks to be analysed separately for fault diagnosis; the change in topology caused by fault conditions can be exploited to identify symptom variables; a topology map of the process can be used to trace faults back from their symptoms to possible root causes. The aim of this project, therefore, was to develop a process monitoring strategy that exploits process topology for fault detection and identification. Three methods for extracting topology from historical process data were compared: linear cross-correlation (LC), partial cross-correlation (PC) and transfer entropy (TE). The connectivity graphs obtained from these methods were used to divide process into multiple blocks. Two feature extraction methods were then applied for fault detection: principal components analysis (PCA), a linear method, was compared with kernel PCA (KPCA), a nonlinear method. In addition, three types of monitoring chart methods were compared: Shewhart charts; exponentially weighted moving average (EWMA) charts; and cumulative sum (CUSUM) monitoring charts. Two methods for identifying symptom variables for fault identification were then compared: using contributions of individual variables to the PCA SPE; and considering the change in connectivity. The topology graphs were then used to trace faults to their root causes. It was found that topology information was useful for fault identification in most of the fault scenarios considered. However, the performance was inconsistent, being dependent on the accuracy of the topology extraction. It was also concluded that blocking using topology information substantially improved fault detection and fault identification performance. A recommended fault diagnosis strategy was presented based on the results obtained from application of all the fault diagnosis methods considered.
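One of the topology-extraction ideas above can be sketched with lagged cross-correlation: a directed connection x → y is proposed when the correlation between x and a future value of y exceeds a threshold. The lag range, threshold and data are illustrative assumptions, and this is a simplification of the linear cross-correlation, partial cross-correlation and transfer entropy methods compared in the thesis.

```python
# Sketch of extracting a directed connectivity graph from lagged cross-correlations.
import numpy as np

def lagged_xcorr_graph(data, max_lag=10, threshold=0.5):
    """Return adjacency matrix A[i, j] = True if variable i appears to drive j."""
    n_vars = data.shape[1]
    A = np.zeros((n_vars, n_vars), dtype=bool)
    for i in range(n_vars):
        for j in range(n_vars):
            if i == j:
                continue
            best = 0.0
            for lag in range(1, max_lag + 1):
                c = np.corrcoef(data[:-lag, i], data[lag:, j])[0, 1]
                best = max(best, abs(c))
            A[i, j] = best > threshold
    return A

rng = np.random.default_rng(7)
x = rng.normal(size=600)
y = np.r_[np.zeros(3), x[:-3]] + 0.2*rng.normal(size=600)   # y lags x by 3 samples
z = rng.normal(size=600)
print(lagged_xcorr_graph(np.c_[x, y, z]))
```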
- ItemFault detection, identification and economic impact assessment for a pressure leaching process(Stellenbosch : Stellenbosch University, 2017-12) Strydom, Johannes Jacobus; Auret, Lidia; Dorfling, Christie; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: Modern chemical and metallurgical processes consist of numerous process units with several complex interactions existing between them. The increased process complexity has in turn amplified the effect of faulty process conditions on the overall process performance. Fault diagnosis forms a critical part of a process monitoring strategy and is crucial for improved process performance. The increased amount of process measurements readily available in modern process plants allows for more complex data-driven fault diagnosis methods. Linear and nonlinear feature extraction methods are popular multivariate fault diagnosis procedures employed in literature. However, these methods are yet to find wide spread industrial application. The multivariate fault diagnosis methods are not often evaluated on real-world modern chemical processes. The lack of real world application has in turn led to the absence of economic performance assessments evaluating the potential profitability of these fault diagnosis methods. The aim of this study is to design and investigate the performance of a fault diagnosis strategy with both traditional fault diagnosis performance metrics and an economic impact assessment (EIA). A complex dynamic process model of the pressure leach at a base metal refinery (BMR) was developed by Dorfling (2012). The model was recently updated by Miskin (2015), who included the actual process control layers present at the BMR. A fault library was developed, through consultation of expert knowledge from the BMR, and incorporated into the dynamic model by Miskin (2015). The pressure leach dynamic model will form the basis for the investigation. Principal component analysis (PCA) and kernel PCA (KPCA) were employed as feature extraction methods. Traditional and reconstruction based contributions were employed as fault identification methods. Economic Performance Functions (EPFs) were developed from expert knowledge from the plant. The fault diagnosis performance was evaluated through the traditional performance metrics and the EPFs. Both PCA and KPCA provided improved fault detection results when compared to a simple univariate method. PCA provided significantly improved detection results for five of the eight faults evaluated, when compared to univariate detection. Fault identification results suffered from significant fault smearing. The significant fault detection results did not translate into a significant economic benefit. The EIA proved the process to be robust against faults, when implementing a basic univariate fault detection approach. Recommendations were made for possible industrial application and future work focusing on EIAs, training data selection and fault smearing.
- ItemFault pattern recognition in simulated furnace data(Stellenbosch : Stellenbosch University, 2021-03) Theunissen, Carl Daniel; Louw, Tobias M.; Bradshaw, S. M.; Auret, Lidia; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Modern submerged arc furnaces are plagued by blowbacks; hazardous occurrences where hot, toxic furnace freeboard gases are blown into the environment. While common occurrences, their causes are currently unknown, hence they cannot be predicted with mechanistic models. Data-driven models use data recorded from modern processes, like submerged arc furnaces, to recognize specific process conditions. This project aimed to identify and compare fault pattern recognition models that could be used for detecting and recognizing blowback-preceding conditions. A simple submerged arc furnace model that emulates blowbacks was developed with which to generate large volumes of data for model comparison. This submerged arc furnace model was developed from mass- and energy balances over distinct furnace zones, and yielded a large dataset with dynamic- and nonlinear characteristics. This dataset contained observations from multiple distinct operating modes, and was deemed suitable for fault pattern recognition model evaluation. A semi-supervised learning approach was selected as most suitable for recognizing blowback preceding conditions. Semi-supervised fault pattern recognition models are trained on a set of only blowback-preceding observations; this fits the typical constraints imposed by industrial datasets, where data is poorly defined and only a few observation of the target fault are labelled as such. Principal component analysis (PCA), kernel PCA and input-reconstructing neural networks called auto-encoders are established semi-supervised pattern recognition methods. One-dimensional convolutional auto-encoders are neural network architectures that effectively compress multivariate time series, but their application to on-line fault pattern recognition is relatively novel. This work applied these methods to on-line fault pattern recognition for blowback prediction, and presented algorithms for applying these methods for semi-supervised fault pattern recognition tasks. Feature engineering has the largest impact on fault pattern recognition performance, therefore feature engineering techniques were applied as part of an overall approach to data-driven fault pattern recognition. The investigation into the above fault pattern recognition models showed that kernel PCA’s superior performance over standard PCA is limited to smaller datasets, and that large datasets must be compressed significantly before kernel PCA can be applied. Consequently this investigation found linear PCA to be superior to nonlinear kernel PCA for modelling large datasets. Both auto-encoders and the developed convolutional auto-encoders outperformed linear PCA modelling, highlighting the improved fault pattern recognition capabilities of nonlinear models. This investigation found that one-dimensional convolutional auto-encoders were far more effective than the other presented models when applied to raw multivariate time series data, confirming that one-dimensional convolutional auto-encoders are effective at processing time series. However, the best performance was observed for auto-encoders models when applied to feature engineered data. 
This highlighted the guiding role that feature engineering should have in developing and implementing fault pattern recognition models.
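The one-dimensional convolutional auto-encoder idea above can be sketched as a network trained only on windows from the target condition, with the reconstruction error used as a semi-supervised detection statistic. The architecture sizes, window length and percentile limit below are illustrative assumptions, not the configurations used in the thesis.

```python
# Sketch of a 1D convolutional auto-encoder for semi-supervised pattern recognition:
# windows resembling the training pattern reconstruct well (low error).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window, n_vars = 64, 6
x_train = np.random.default_rng(8).normal(size=(500, window, n_vars)).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(window, n_vars)),
    layers.Conv1D(16, 5, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(8, 5, padding="same", activation="relu"),   # compressed representation
    layers.UpSampling1D(2),
    layers.Conv1D(n_vars, 5, padding="same"),                 # reconstruct the window
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, x_train, epochs=5, batch_size=32, verbose=0)

recon = model.predict(x_train, verbose=0)
errors = np.mean((x_train - recon) ** 2, axis=(1, 2))
limit = np.percentile(errors, 95)
print(f"detection limit (95th percentile of training error): {limit:.3f}")
```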
- ItemFroth texture extraction with deep learning(Stellenbosch : Stellenbosch University, 2018-03) Horn, Zander Christo; Auret, Lidia; Aldrich, C.; Herbst, B. M.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: Soft-sensors are of interest in mineral processing and can replace slower or more expensive sensors by using existing process sensors. Sensing process information from images has been demonstrated successfully, but performance is dependent on feature extractors used. Textural features which utilise spatial relationships within images are preferred due to greater resilience to changing imaging and process conditions. Traditional texture feature extractors require iterative design and are sensitive to changes in imaging conditions. They may have many hyperparameters, leading to slow optimisation. Robust and accurate sensing is a key requirement for mineral processing, making current methods of limited potential under realistic industrial conditions. A platinum froth flotation case study was used to compare traditional texture feature extractors with a proposed deep learning feature extractor: convolutional neural networks (CNNs). Deep learning applies artificial neural networks with many hidden layers and specialised architectures for powerful correlative performance through automated training. All information of the input data structure is determined inherently in training with only a limited number of hyperparameters. However, deep learning methods risk overfitting with small datasets, which must be mitigated. A CNN classifier and a framework for unbiased comparison between feature extractors were developed for predicting high to low grade classes of platinum in flotation froth images. CNNs can perform all the functions of a soft-sensor, but this may bias performance comparison. Instead, features were extracted from hidden layers in CNNs and fed into a traditional soft-sensor. This ensured performance measurements were unbiased across all feature extractors. With a full factorial experiment, the following CNN hyperparameters were evaluated: batch size, number of convolutional filters, and convolutional filter size. Accuracy of grade classification was used to score feature extractors. These reference texture feature extractors were compared to CNNs: Local Binary Patterns, Grey-Level Co-occurrence Matrices, and Wavelets. The impact of spectral features (bulk image features such as average colour) was also evaluated, as CNNs can also use spectral image properties to create features, unlike traditional texture extractors. Extractors were tested with input resolutions from 16x16 to 128x128 with two soft-sensor models: Linear Discriminant Analysis, and k-Nearest Neighbour classifiers. Optimal grade classification accuracies were: CNN – 96.5%, LBP – 100%, GLCM – 73.7%, Wavelets – 98.3%, and Spectral – 98.4% Training CNNs to extract features was successful with robust results regardless of hyperparameters selected. The only statistically significant differences obtained during training were that smaller batch size and smaller input resolution gave superior training performance. Results were found to be reproducible for all models. Analysing learned CNN features indicated both textural and spectral features were utilised. Overall results showed spectral features gave good classification performance, potentially adding to CNN performance. CNNs showed comparable performance to other texture feature extractors at all resolutions. 
This proof of concept implementation shows promise for deep learning methods in mineral processing applications. The resilience of CNNs to changes in imaging and process conditions could not be evaluated due to limited data in the case study. Future work with deep learning methods, while promising, will require larger datasets which are more representative of a variety of process conditions.
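The comparison framework described above, where CNN hidden-layer activations are fed to a separate soft-sensor classifier, can be sketched as follows. The image size, class count, architecture and k-nearest-neighbour settings are illustrative assumptions rather than the thesis configuration.

```python
# Sketch: train a small CNN, then use a hidden layer's activations as features
# for a k-nearest-neighbour classifier (the unbiased feature-comparison idea).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
x = rng.normal(size=(400, 64, 64, 1)).astype("float32")   # stand-in froth images
y = rng.integers(0, 4, 400)                               # four grade classes

cnn = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(name="feats"),
    layers.Dense(4, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(x[:300], y[:300], epochs=3, verbose=0)

extractor = keras.Model(cnn.inputs, cnn.get_layer("feats").output)
feats_train = extractor.predict(x[:300], verbose=0)
feats_test = extractor.predict(x[300:], verbose=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(feats_train, y[:300])
print("grade classification accuracy:", knn.score(feats_test, y[300:]))
```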
- ItemImage texture analysis for inferential sensing in the process industries(Stellenbosch : Stellenbosch University, 2013-12) Kistner, Melissa; Auret, Lidia; Aldrich, C.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The measurement of key process quality variables is important for the efficient and economical operation of many chemical and mineral processing systems, as these variables can be used in process monitoring and control systems to identify and maintain optimal process conditions. However, in many engineering processes the key quality variables cannot be measured directly with standard sensors. Inferential sensing is the real-time prediction of such variables from other, measurable process variables through some form of model. In vision-based inferential sensing, visual process data in the form of images or video frames are used as input variables to the inferential sensor. This is a suitable approach when the desired process quality variable is correlated with the visual appearance of the process. The inferential sensor model is then based on analysis of the image data. Texture feature extraction is an image analysis approach by which the texture or spatial organisation of pixels in an image can be described. Two texture feature extraction methods, namely the use of grey-level co-occurrence matrices (GLCMs) and wavelet analysis, have predominated in applications of texture analysis to engineering processes. While these two baseline methods are still widely considered to be the best available texture analysis methods, several newer and more advanced methods have since been developed, which have properties that should theoretically provide these methods with some advantages over the baseline methods. Specifically, three advanced texture analysis methods have received much attention in recent machine vision literature, but have not yet been applied extensively to process engineering applications: steerable pyramids, textons and local binary patterns (LBPs). The purpose of this study was to compare the use of advanced image texture analysis methods to baseline texture analysis methods for the prediction of key process quality variables in specific process engineering applications. Three case studies, in which texture is thought to play an important role, were considered: (i) the prediction of platinum grade classes from images of platinum flotation froths, (ii) the prediction of fines fraction classes from images of coal particles on a conveyor belt, and (iii) the prediction of mean particle size classes from images of hydrocyclone underflows. Each of the five texture feature sets were used as inputs to two different classifiers (K-nearest neighbours and discriminant analysis) to predict the output variable classes for each of the three case studies mentioned above. The quality of the features extracted with each method was assessed in a structured manner, based their classification performances after the optimisation of the hyperparameters associated with each method. In the platinum froth flotation case study, steerable pyramids and LBPs significantly outperformed the GLCM, wavelet and texton methods. In the case study of coal fines fractions, the GLCM method was significantly outperformed by all four other methods. Finally, in the hydrocyclone underflow case study, steerable pyramids and LBPs significantly outperformed GLCM and wavelet methods, while the result for textons was inconclusive. 
Considering all of these results together, the overall conclusion was drawn that two of the three advanced texture feature extraction methods, namely steerable pyramids and LBPs, can extract feature sets of superior quality, when compared to the baseline GLCM and wavelet methods in these three case studies. The application of steerable pyramids and LBPs to further image analysis data sets is therefore recommended as a viable alternative to the traditional GLCM and wavelet texture analysis methods.
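One of the texture feature extractors compared above, local binary patterns, is available in scikit-image; the sketch below converts an image into a normalised LBP histogram that could feed a classifier. The image source and the LBP radius, number of points and binning are assumptions.

```python
# Sketch of extracting a local binary pattern (LBP) histogram as a texture feature vector.
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(10)
image = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)   # stand-in grey image

radius, n_points = 2, 16
lbp = local_binary_pattern(image, n_points, radius, method="uniform")

n_bins = n_points + 2                      # "uniform" LBP yields P + 2 distinct codes
features, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
print(features)
```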
- ItemImproving energy and economic performances of a typical sugarcane factory through energy indicator development, set-point optimization, and optimal sensor placement(Stellenbosch : Stellenbosch University, 2021-03) Mkwananzi, Thobeka; Gorgens, Johann F.; Auret, Lidia; Louw, Tobias M.; Mandegari, Mohsen A.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The volatile sugar markets and the recent recognition of bagasse as a key feedstock to produce biofuels and bioproducts have prompted a desire in the sugarcane industry to correct energy inefficiencies thereby allowing for additional revenue from increased surplus bagasse availability. However, the desire for improved energy efficiency is often beset by the lack of adequate measurements, imprecise measurements, budget constraints, and random variations in external process disturbances and market prices. In this regard, this study seeks to evaluate optimal control solutions that can be used to enhance the plant-wide monitoring and control of existing process operations in a typical sugarcane mill that processes 250 tonnes of sugarcane per hour. Objective 1 sought to identify the controlled variables (CVs) whose steady-state set-point deviations are associated with excess energy demands through energy indicator definition, sensitivity, and statistical analysis. An established sugarcane mill model was used to simulate the steady-state deviations of the CVs and to quantify their effect on energy usage based on defined energy indicators. Objective 2 entailed the use of Monte Carlo analysis to investigate the effect of process disturbances and market price variations on the steady-state factory control and net- revenue. Six disturbances were considered for simulation using the sugarcane mill model while the net revenue was defined in terms of raw materials cost and product revenue. From the observed steady-state deviations, set-point optimizing control (objective 3) was investigated for use in maximizing the net revenue by finding the optimal set-points for the CVs when disturbances and market prices vary. Fourteen CVs identified from objective 1 to have a large influence on energy consumption were used for set-point optimization. From objective 1, massecuite recycling was identified to result in excess energy demands and with set-point optimization, recycling was reduced by 23%. Surplus bagasse was increased by 8.5% with an acceptable 0.43% reduction in sugar yield and a 2.4% increase in net revenue. Nine CVs were identified to have optimal steady-state set-points that are insensitive to disturbance variations, thus allowing for simplified implementation of set-point optimization by keeping these CVs at constant set-points while re-optimizing for the remaining 5 CVs. The availability of precise measurements is crucial for effective automated control. Hence, the self-optimizing control concept was used to find an optimal linear combination of 41 CVs and their optimal sensor placement for use as constant CVs while eliminating the need for frequent online re-optimization when disturbances occur (objective 4). Optimality is defined as maximizing the net revenue by minimizing the total cost of purchasing the measuring instruments and the average revenue loss due to implementing the constant set-point policy rather than continuous real-time optimization. The cost of purchasing the sensor is normalized based on its expected lifespan. 
The attained optimal sensor placement has an average revenue loss of US$61.93/hr while the base case sensor placement loss is US$157.72/hr. The reduction in average revenue loss is attributed to 19 CVs for which the optimal sensor placement allocated more precise sensors compared to the base case sensor placement. The cost of purchasing the more precise sensors for these 19 CVs is US$2.73/hr. Overall, this study was able to successfully formulate strategies for enhanced process monitoring and control in sugarcane mills while contributing to the available literature.
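Set-point optimisation of the kind described above amounts to choosing controlled-variable set points that maximise a net-revenue function, subject to bounds, for the current disturbance estimate. The sketch below shows the pattern with a toy revenue model; the revenue terms, set points, bounds and disturbance are purely illustrative and not the sugarcane mill model.

```python
# Sketch of steady-state set-point optimisation with a toy net-revenue objective.
import numpy as np
from scipy.optimize import minimize

def net_revenue(sp, disturbance):
    """Toy revenue: product value minus energy and recycling penalties (illustrative)."""
    sugar_yield = 100 - 0.5*(sp[0] - 65)**2 - 0.2*disturbance
    bagasse_surplus = 20 + 0.8*sp[1] - 0.3*sp[0]
    energy_cost = 0.05*sp[1]**2
    return 12*sugar_yield + 4*bagasse_surplus - energy_cost

disturbance = 5.0                               # e.g. a change in cane fibre content
x0 = np.array([65.0, 10.0])                     # current set points for two CVs
bounds = [(60.0, 70.0), (0.0, 20.0)]

res = minimize(lambda sp: -net_revenue(sp, disturbance), x0, bounds=bounds)
print("optimal set points:", res.x, "net revenue:", -res.fun)
```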
- ItemImproving the control structure of a high pressure leaching process(Stellenbosch : Stellenbosch University, 2015-03) Knoblauch, Pieter Daniel; Bradshaw, S. M.; Dorfling, Christie; Auret, Lidia; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The main purpose of the base metal refinery (BMR) as operated by Lonmin at their Western Platinum Ltd BMR, is to remove base metals – such as copper and nickel – from a platinum group metal (PGM) containing matte. The leaching processes in which this is done pose several challenges to the control of the process. The most significant of these is the slow dynamics of the process, due to large process units, as well as the continuously changing composition of the first stage leach residue, which is not measured on-line. This is aggravated by the fact that the exact leaching kinetics (and therefore the effect of the disturbances) are not understood well fundamentally. The slow process dynamics mean that controllers cannot be tuned aggressively, resulting in slow control action. The large residence times and off-line composition analyses of major controlled variables also mean that the effects of operator set point changes are visible only the following day, often by a different shift of operators. Dorfling (2012) recently developed a fundamental dynamic model of the pressure leach process at Lonmin‟s BMR. This dynamic model incorporates 21 chemical reactions, as well as mass and energy balances, into a system of 217 differential equations. The model provides a simulation framework within which improved control strategies can be investigated. The primary aims of this study are twofold. The first is to validate the model for the purpose of the investigation and development of control structure improvements. This is done by comparing the model to plant data, and adapting it if necessary. The second aim to reconsider the current control philosophy to the extent that is allowed by the model‟s determined validity. The current plant control philosophy aims to maintain a PGM grade of 65%, while the copper in the solids products of the second and third leaching stages should be below 25% and 3.5% by mass, respectively. Two areas of particular concern in this process that have been raised by Lonmin are the control of the temperature of the first compartment and the addition of pure sulphuric acid to control the acid concentration in the second stage leach. Dynamic plant data were used to calibrate the model, which was migrated from its received MATLAB platform to Simulink, to assist with control development. Flow rates were imported from the data, with some data values adapted for this purpose, due to mass balance inconsistencies. The outputs from the calibrated model were compared with corresponding data values. The model was found to be suitable for the investigation and development of the control structures of pressure, temperatures and inventories (termed basic regulatory control) and the acid concentration and solids fraction in the preparation tanks (termed compositional regulatory control). It was, however, found to be inadequate for the investigation and development of supervisory control, since it does not provide accurate compositional results. The leaching of copper is especially under-predicted, with the predicted copper concentration in the second stage product being approximately 46% lower than data values. The basic and compositional regulatory control structures were investigated. 
For each of these a base case was developed which aimed to represent the relevant current control structure, assuming optimal tuning. The variable pairings for the basic regulatory control were reconsidered using a method proposed by Luyben and Luyben (1997), since this part of the process does not permit the generation of a relative gain array (RGA) for variable pairing. The resulting pairing corresponds with Lonmin's current practice. Considering the temperature control of compartment 1, it was found that the addition of feed-forward control to the feedback control of the level of the flash tank improves the temperature control. More specifically, during an evaluation where the temperature's set point is varied by up to 1%, the IAE of the temperature of compartment 1 was decreased by 7.5% from the base case, without disturbing the flash tank. The addition of feed-forward control allows for more rapid control and more aggressive tuning of this temperature, removing the current limit on the ratio between the flash recycle stream and the autoclave feed. The compositional control was investigated for the second stage leach only, due to insufficient flow rate and compositional information around the third stage preparation tank. Variable pairing showed that three additive streams are available for the preparation tanks of the second and third stage leach to control the acid concentration and solids fraction in those tanks. Focussing on the second stage, the aim was to determine whether the acid concentration in the flash tank can be successfully controlled without the addition of pure acid to the tank. With four streams available around the second stage preparation tank to control its mass/level, acid concentration and solids fraction, three manipulated variables were derived from these streams. The resulting pairings were affirmed by an RGA. Control loops for the control of acid concentration and solids fraction in the flash tank were added as cascade controllers, using the preparation tank's control as secondary loops. The added compositional control was evaluated in two tests. The first of these entailed adding typical disturbances, namely the flash recycle rate, the solids and water in the feed to the second stage preparation tank, and the acid concentration in copper spent electrolyte. In the second test the control system was tested for tracking an acid concentration set point. It was found that the cascade structure controls the acid concentration in the flash tank less tightly than the base case (with an IAE that is 124% and 80.6% higher for the two tests), but that it decreases the variation of the solids fraction (lowering the IAE by 40.8% in the first test) in the same tank and of the temperature in the first compartment (lowering the IAE by 73.6% in the second test). It is recommended that the relative effects of these three variables on leaching behaviour should be investigated with an improved model that is proven to accurately predict leaching reactions in the autoclave.
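The abstract above notes that the preparation-tank pairings were affirmed by a relative gain array (RGA). The sketch below shows a generic RGA calculation for a hypothetical 3x3 steady-state gain matrix; the gain values are invented for illustration and are not the plant's actual gains.

```python
# Minimal sketch of a relative gain array (RGA) calculation for variable pairing,
# as used to affirm the preparation-tank pairings described above.
# The 3x3 gain matrix below is hypothetical; the study's actual gains are not given.
import numpy as np

G = np.array([[2.0, 0.3, 0.1],
              [0.4, 1.5, 0.2],
              [0.1, 0.2, 1.0]])   # steady-state gains: controlled variables (rows) vs MVs (columns)

# RGA = G elementwise-multiplied with the transpose of its inverse.
rga = G * np.linalg.inv(G).T

print(np.round(rga, 2))
# Pair each controlled variable with the manipulated variable whose relative gain
# is closest to 1; the diagonal dominance here supports the diagonal pairing.
```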
- ItemImproving the interpretability of causality maps for fault identification(Stellenbosch : Stellenbosch University, 2020-12) Van Zijl, Natali; Louw, Tobias M.; Bradshaw, S. M.; Auret, Lidia; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Worldwide competition forces modern mineral processing plants to operate at high productivity. This high productivity is achieved by implementing process monitoring to maintain the desired operating conditions. However, a fault originating in one section of a plant can propagate throughout the plant and so obscure its root cause. Causality analysis is a method that identifies the cause-effect relationships between process variables and presents these in a causality map which can be used to track the propagation path of a fault back to its root cause. A major obstacle to the wide acceptance of causality analysis as a tool for fault diagnosis in industry is the poor interpretability of causality maps. This study identified, proposed and assessed ways to improve the interpretability of causality maps for fault identification. All approaches were tested on a simulated case study and the resulting maps compared to a standard causality map or its transitive reduction. The ideal causality map was defined and all comparisons were performed based on its characteristics. Causality maps were produced using conditional Granger causality (GC), with a novel heuristic approach for selecting sampling period and time window. Conditional GC was found to be ill-suited to plant-wide causality analysis, due to large data requirements, poor model order selection using AIC, and inaccuracy in the presence of multiple different residence times and time delays. Methods to incorporate process knowledge to constrain connections and potential root causes were investigated and found to remove all spurious connections and decrease the pool of potential root cause variables respectively. Tools such as visually displaying node rankings on the causality map and incorporating sliders to manipulate connections and variables were also investigated. Furthermore, a novel hierarchical approach for plant-wide causality analysis was proposed, where causality maps were constructed in two subsequent stages. In the first stage, a less-detailed plant-wide map was constructed using representatives for groups of variables, and used to localise the fault to one of those groups of variables. Variables were grouped according to plant sections or modules identified in the data, and the first principal component (PC1) was used to represent each group (PS-PC1 and Mod-PC1 respectively). PS-PC1 was found to be the most promising approach, as its plant-wide map clearly identified the true root cause location, and the stage-wise application of conditional GC significantly reduced the required number of samples from 13 562 to 602. Lastly, a usability study in the form of a survey was performed to investigate the potential for industrial application of the tools and approaches presented in this study. Twenty responses were obtained, with participants consisting of Stellenbosch University final-year/postgraduate students, employees of an industrial IoT firm, and Anglo American Platinum employees. 
Main findings include that process knowledge is vital; grouping variables improves interpretability by decreasing the number of nodes; accuracy must be maintained during causality map simplification; and sliders add confusion by causing significant changes in the causality map. In addition, survey results found PS-PC1 to be the most user-friendly approach, further emphasizing its potential for application in industry.
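To illustrate the PS-PC1 idea described in this abstract, the sketch below represents one hypothetical plant section by the first principal component of its variables; repeating this per section would give the compact set of representatives used in the first-stage, less-detailed causality map. The data and section size are assumed for illustration.

```python
# Illustrative sketch of the PS-PC1 approach described above: represent each group of
# variables (e.g. a plant section) by its first principal component before building
# the less-detailed plant-wide causality map. Data and grouping are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples = 602                                   # sample count reported for the staged approach
section_data = rng.normal(size=(n_samples, 8))    # 8 measured variables in one plant section

pc1 = PCA(n_components=1).fit_transform(section_data).ravel()

# 'pc1' is a single time series standing in for the whole section; one such series per
# section feeds the first-stage plant-wide causality map.
print(pc1.shape)   # (602,)
```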
- ItemImproving the performance of causality analysis techniques for automated fault diagnosis in mineral processing plants(Stellenbosch : Stellenbosch University, 2019-04) Lindner, Brian Siegfried; Auret, Lidia; Bauer, Margret; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Modern mineral processing companies are driven towards improving productivity by leveraging existing processes optimally. This can be achieved by improving diagnosis of faults that degrade process performance to provide insightful and actionable information to process engineers. In mineral processing plants, units and variables are connected to each other through material flow, energy flow, and information flow. Faults propagate through a process along these interconnections, and can be traced back along their propagation paths to their root causes. Techniques have been developed for extracting these causal connections from historical process data. These techniques have proven successful for fault diagnosis in chemical processes. However, they have not been widely accepted by industry due to lack of automation of the techniques, complicated implementation, and complicated interpretation. This dissertation investigated the limitations of the causality analysis procedures currently available to process engineers as fault diagnosis tools and developed improvements on them. Improvements were developed and tested using a combination of simulated case studies and real world case studies of operational faults occurring in a mineral processing plant. Objective I was to investigate the factors that affect the performance of causality analysis techniques. The use of transfer entropy for fault diagnosis in a minerals processing concentrator plant was demonstrated. The desired performance criteria of causality analysis techniques were then defined in terms of: general applicability; automatability; interpretability; accuracy; precision; and computational complexity. The impact of process conditions on the performance of Granger causality and transfer entropy was then investigated. An analysis of variance (ANOVA) was performed to investigate the impact of process dynamics, fault dynamics, and the parameters on the accuracy of transfer entropy. Objective II was to design a systematic workflow for application of causality analysis for fault diagnosis. The ANOVA was used to develop a novel relationship between the optimal transfer entropy parameters and the process and fault dynamics. This relationship was then placed within a systematic workflow developed for the application of transfer entropy for oscillation diagnosis, addressing the need for clear procedures and guidelines for data selection and parameter selection. The workflow was applied to an oscillation diagnosis case study from a minerals concentrator plant, and shown to provide a systematic approach to accurately determining the fault propagation path. Objective III was to design a tool to aid the decision of which causality analysis method to select. A comparative analysis of Granger causality and transfer entropy for fault diagnosis based on the performance criteria defined was performed. The comparison showed that transfer entropy was more precise, generalisable, and visually interpretable. Granger causality was more automatable, less computationally expensive, and easier to interpret.
Guidelines were developed from these comparisons to aid users in deciding when to use Granger causality or transfer entropy. Objective IV was to present tools for the interpretation of causal maps for root cause analysis. Methods were presented for constructing causal maps from the results of the causality analysis calculations, and for interpreting these maps. The usefulness of these techniques for the diagnosis of real world case studies was demonstrated.
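For readers unfamiliar with the two techniques compared in this abstract, the following minimal sketch estimates a lag-1 Granger causality measure between two synthetic time series; model order selection, significance testing and the transfer entropy counterpart are omitted, and the data are invented.

```python
# Minimal sketch of a lag-1 Granger causality measure for the direction x -> y,
# one of the two techniques compared above. Data are synthetic; real applications
# would select the model order carefully (the related work notes AIC can do this poorly).
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.normal()  # x drives y

# Restricted model: y[t] from y[t-1] only. Full model: y[t] from y[t-1] and x[t-1].
Yr = np.column_stack([np.ones(n - 1), y[:-1]])
Yf = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])
target = y[1:]

res_r = target - Yr @ np.linalg.lstsq(Yr, target, rcond=None)[0]
res_f = target - Yf @ np.linalg.lstsq(Yf, target, rcond=None)[0]

# Granger causality strength: log ratio of residual variances (larger => x helps predict y).
gc = np.log(np.var(res_r) / np.var(res_f))
print(f"Granger causality x -> y: {gc:.3f}")
```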
- ItemMechanisms and kinetics of atmospheric sphalerite oxidative and non-oxidative leaching(Stellenbosch : Stellenbosch University, 2018-03) Henning, Adriaan Johannes; Auret, Lidia; Steyl, J. D. T.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: A kinetic study of the non-oxidative and oxidative leaching of sphalerite concentrates, under elevated temperatures (75 – 95 °C) and atmospheric pressure, is presented in this dissertation. Sphalerite, a zinc sulphide ore, is commonly associated with impurities and other sulphides (e.g. chalcopyrite, galena and pyrite). The mineralogical nature of sphalerite concentrates is complex and the chemistry of iron-containing reactive systems is generally poorly understood, especially under aggressive hydrometallurgical conditions. The aim of this work was the development of an engineering model capable of describing the rate and extent of sphalerite leaching in non-ferric and ferric containing systems. The mathematical framework presented in this thesis consists of various objectives, each addressing thermodynamic and kinetic aspects of the primary leach process. Comprehensive literature investigations are presented which constitute the mechanisms and rate models, supplemented by phenomenological data obtained from batch experimentation. The different objectives are each covered in a chapter of this dissertation, and include the following: i) a solution thermodynamic framework, ii) intrinsic oxidation mechanisms and rate expressions, and iii) quantification and validation of the intrinsic rate expression. Thermodynamic considerations provided a rigorous framework for the interpretation of the solution chemistry, with explicit recognition of the important solution species. Speciation measurements from various literature sources were utilised to construct the Pitzer model for the various subsystems of the ZnSO4 – Fe2(SO4)3 – FeSO4 – H2SO4 – H2O system. The model gave accurate speciation trends up to concentrations of 1.5 M ZnSO4, 1.5 M FeSO4, 1.5 M Fe2(SO4)3 and 2 M H2SO4. The model distinguishes between inner- and outer-sphere complexes, which was achieved through the inclusion of Raman spectroscopic stability constants. Contact ion pair (CIP) formation was predicted by the Pitzer model, with results of suitable accuracy for application in modelling the ionic aqueous solutions relevant to this metallurgical kinetic study. A detailed investigation into the electrochemical and mineralogical nature of natural sphalerite gave insights into the leaching mechanism. Iron impurity was found to be integral to sphalerite's dissolution mechanism, with the electron exchange at the mineral surface limiting the reaction rate. Polarisation of the sphalerite particle surface by the electrolytic solution caused surface states (barriers) that limit the rate of movement of charge carriers (i.e. electrons). A mechanism was proposed based on the assumption that the first electron or proton transfer step is the rate-limiting step of the non-oxidative and oxidative leaching mechanisms. The electrochemical half reactions resulting from the mechanism were used to define the activation polarisation relationships, the Butler-Volmer equations. Through application of the mixed potential theory of metallic corrosion, rate expressions for the non-oxidative and oxidative leaching of sphalerite were derived.
Experimental batch data obtained from Dr JDT Steyl (1996) were used to quantify and validate the rate parameters of the derived rate expressions. The shrinking core model was applied within a batch reactor model to predict the leaching extents of sphalerite under various initial conditions. The rate parameter regression followed a two-fold strategy: the model was first linearized and regressed using a linear regression technique to obtain preliminary kinetic constants at average solution compositions. The second strategy consisted of a detailed differential batch reactor model, including the solution speciation model and concentrate characteristics, which was used to quantify the intrinsic rate parameters using a non-linear regression technique. A sphalerite leaching mechanism and intrinsic reaction rate model were proposed in this study, and the model was quantified using phenomenological batch data. The model was found to be able to predict the leaching rate of sphalerite.
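The batch reactor predictions described above rest on the shrinking core model; a minimal sketch of the surface-reaction-controlled form of that model is given below, with an assumed rate constant rather than the regressed values from the study.

```python
# Illustrative sketch of the surface-reaction-controlled shrinking core model used in
# the batch reactor predictions above: 1 - (1 - X)^(1/3) = k*t for conversion X.
# The rate constant and time span below are hypothetical, not the regressed values.
import numpy as np

k = 0.05                         # apparent rate constant [1/h], assumed for illustration
t = np.linspace(0.0, 20.0, 11)   # leach time [h]

# Invert the shrinking core expression for conversion, capping at complete conversion.
X = 1.0 - (1.0 - np.minimum(k * t, 1.0)) ** 3

for ti, Xi in zip(t, X):
    print(f"t = {ti:5.1f} h   X = {Xi:.3f}")
```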
- ItemMonitoring, modelling and simulation of spiral concentrators(Stellenbosch : Stellenbosch University, 2018-12) Nienaber, Ernst Carel; Auret, Lidia; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Spiral concentrators are robust gravity separation devices often compactly implemented in industry, with large numbers of spirals per plant organised in banks. Current automated monitoring strategies at spiral concentrator plants involve quantifying overall feed and product stream states. However, spiral unit monitoring is performed by manual operator inspection, and control is mainly achieved by operators manually changing splitter settings of spirals across a plant. In large spiral plants, containing thousands of individual spiral concentrators, changing splitters can become tedious or is sometimes neglected. Automated monitoring and control of spirals can aid spiral plant operators in achieving optimal spiral plant performance. Computer vision orientated mineral interface detection has been proposed, in past studies, as a method to monitor spiral concentrators. This is due to the formation of different mineral bands within spiral troughs during heavy mineral separation. Particles separate based on density and size differences, usually creating three visually discernible mineral bands flowing down the spiral trough. These streams are known as the concentrate, middling and tailings streams. The concentrate band is often visually darker than the streams containing gangue, and the mineral interfaces can serve as a useful cue for setting splitters. However, interface tracking on industrial slurries has not yet been demonstrated, and due to the large number of spirals within spiral plants it is necessary to determine what sparse sensor implementation will look like (given the lack of appropriate sensor placement algorithms for metallurgical plants). This text follows a framework that spans from sensor development to sensor implementation strategy within spiral concentration plants, exploring possible stumbling blocks along the way. A spiral interface sensor is proposed as a spiral monitoring tool and demonstrated with experimental work, during which spiral modelling was also performed. Two image processing algorithms, CVI (edge detection based) and CVII (logistic regression based), were prepared to detect spiral interfaces. Experimental modelling of a Multotec SC21 spiral concentrator was performed by formulating and comparing response surface methodology (RSM) models with a proposed extended Holland-Batt model. Two sensor placement strategies, SPI (state estimation based) and SPII (metallurgical performance based), were prepared to help determine important monitoring positions based on steady state spiral plant simulations. Optimal monitoring locations minimise sensor network financial cost while maximising some proxy for monitoring benefit. Spiral concentrator and spiral plant modelling (including optimal sensor placement) is based on the case study of the Glencore Rowland spiral plant, which treats slurry containing UG2 ores to upgrade chromite content. Algorithm CVII proved to be the superior interface detection approach and can identify chromite concentrate interfaces in slurry representative of industrial conditions. Spiral splitter control should be further investigated; however, spiral unit monitoring will still provide operators with useful information on process changes (should control be infeasible or unprofitable).
RSM models were more precise than the extended Holland-Batt model; however, the latter showed superior extrapolation and plant simulation ability (emphasising that modelling should be done with plant simulation in mind). SPI and SPII were used to rank different sensor configurations. Optimal sensor configurations determined by SPI were ultimately controlled by sensor financial cost. SPII is accepted as the superior sensor placement algorithm, since sensor cost and metallurgical performance benefit were weighted in a way similar to a return on investment problem (suggesting a new perspective on this inherently multi-objective problem).
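As a rough illustration of the logistic-regression-based interface detection (CVII) mentioned in this abstract, the sketch below classifies pixels along one radial line of a trough as concentrate or gangue from a single intensity feature; the feature, data and decision behaviour are hypothetical stand-ins for the real image pipeline.

```python
# Minimal sketch of logistic-regression-based interface detection: classify trough
# pixels as concentrate vs gangue from a simple intensity feature, then take the
# class boundary along a radial line as the interface position.
# Training data, feature choice and geometry are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Hypothetical training set: darker pixels (low intensity) labelled as concentrate (1).
intensity = np.concatenate([rng.normal(60, 10, 500), rng.normal(160, 20, 500)])
labels = np.concatenate([np.ones(500), np.zeros(500)])

clf = LogisticRegression().fit(intensity.reshape(-1, 1), labels)

# Classify pixels along one radial line across the trough (inner edge -> outer edge).
radial_line = np.linspace(40, 200, 100)
is_concentrate = clf.predict(radial_line.reshape(-1, 1)).astype(bool)

# Interface index: first pixel no longer classified as concentrate.
interface_idx = int(np.argmax(~is_concentrate))
print(f"Estimated interface at pixel {interface_idx} along the radial line")
```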
- ItemOn-line fault detection and end-of-batch quality prediction for batch processes incorporating on-line synchronisation and phase identification(Stellenbosch : Stellenbosch University, 2016-12) Myburgh, Travis Louis; Auret, Lidia; Burger, A. J.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Batch processes are transient processes, which present a unique monitoring challenge. Whereas the expected output variables of a continuous process centre on specific target values that represent some steady state, batch process variables inherently change from an initial state to a final state. Abnormal events, or faults, that occur in batch processes can be identified with an appropriate monitoring scheme. Bilinear statistical modelling based on latent variable methods, such as Partial Least Squares (PLS), has been shown by Nomikos and MacGregor (1995) to be an effective way to detect faults and predict the end-of-batch quality. A batch process monitoring toolbox has previously been developed, which supports off-line modelling and monitoring of batch processes. The limitation of off-line monitoring is that product end-of-batch quality and fault detection can only be assessed once the batch has been completed. On-line fault detection and end-of-batch quality prediction in near-real time endeavours to address this limitation. The aim of this project was to implement an on-line monitoring platform which can detect faults and predict end-of-batch quality in a timely manner. Synchronisation of the batch trajectories is a computationally expensive step in off-line monitoring, and the ability of the on-line platform to produce computationally efficient results comparable to off-line synchronisation was assessed. The Relaxed Greedy Time Warping (RGTW) approach (González-Martínez et al., 2011) was used to synchronise the trajectories in an on-line fashion. The approach was able to produce comparable results in a computationally efficient way, but some inaccuracies were observed during flat regions of the batch trajectories. The nature of phases for accurate modelling of batch data was also investigated. Models were built for batch data partitioned in three ways: no partitions, partitions based on known process stages, and partitions based on the linear correlation structure between the variables; the end-of-batch quality prediction accuracy of each was compared. The multiphase partial least squares (MPPLS) algorithm (Camacho et al., 2007) was used to find the changes in linear correlation among the process variables and partition the data accordingly. Partitioning the data using the MPPLS algorithm showed comparable or statistically significant reductions in the overall end-of-batch prediction error compared with the other two data partitioning methods. The models were applied on-line to two case studies using the on-line monitoring platform, and the fault detection and end-of-batch quality prediction accuracy were assessed for the three data partitioning methods. The MPPLS algorithm provided the most accurate end-of-batch predictions. Improper on-line synchronisation caused false alarms in certain areas of the batch trajectory, but faults were detected with relative accuracy.
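To illustrate the bilinear PLS modelling that underpins this abstract, the sketch below batch-wise unfolds a synthetic (batches × variables × time) array and regresses an end-of-batch quality value on the unfolded trajectories; the dimensions and quality relation are invented, and synchronisation and phase partitioning are omitted.

```python
# Illustrative sketch of bilinear PLS for end-of-batch quality prediction: batch-wise
# unfold the (batches x variables x time) array, then regress quality on the unfolded
# trajectories. Data dimensions and the quality relation are hypothetical.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_batches, n_vars, n_time = 40, 5, 100
X = rng.normal(size=(n_batches, n_vars, n_time))
y = X[:, 0, :].mean(axis=1) + 0.1 * rng.normal(size=n_batches)  # synthetic quality

X_unfolded = X.reshape(n_batches, n_vars * n_time)   # batch-wise unfolding

pls = PLSRegression(n_components=3).fit(X_unfolded, y)
y_pred = pls.predict(X_unfolded).ravel()
print(f"Training RMSE: {np.sqrt(np.mean((y - y_pred) ** 2)):.3f}")
```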