Doctoral Degrees (Chemical Engineering)
Browsing Doctoral Degrees (Chemical Engineering) by browse.metadata.advisor "Aldrich, C."
Now showing 1 - 9 of 9
- The adsorption characteristics of precious and base metals on four different ion-exchange resins (Stellenbosch : Stellenbosch University, 2000-12) Els, Ellis Raymond; Lorenzen, L.; Aldrich, C.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: Adsorption tests were conducted with four different ion-exchange resins to determine the equilibrium adsorption of a range of precious and base metals. The adsorption characteristics were determined for synthetic single-metal solutions, as well as for multicomponent and base metal solutions. The effect of the Cl⁻ concentration on the equilibrium adsorption was established for three different HCl concentrations in the above solutions. From the ion-exchange characteristics determined, a selective adsorption sequence is proposed for the separation of precious and base metals. Pure platinum, palladium and gold were dissolved in aqua regia and diluted to 2000 ppm (as metal) in 4M HCl. Ruthenium, rhodium and iridium were dissolved from pure salts in HCl. A 2000 ppm base metal solution was prepared by dissolving all the required components, including precious metals, to match the composition of an in-plant industrial base metal solution. For each precious metal the equilibrium adsorption was determined for several solution concentrations. Data points for adsorption curves were established by varying the amount of resin added to the test solution of a specific concentration. The equilibrium solution concentrations were determined by ICP analysis after 24 hours of exposure, using the bottle-roll technique. The experimental results obtained indicate a possible process route for the separation of precious metals with ion-exchange resin. The XAD7 resin is highly selective for gold from mixed solutions containing precious and base metals. It is also evident that, with the gold removed from the solution, the A22 resin adsorbs only palladium. IR200 resin adsorbs only the base metals from the solution.
With all other precious metals removed from the solution (platinum and ruthenium must be extracted by other means), iridium can be adsorbed from the solution by IRA900 resin, which is highly selective for iridium over rhodium. For all of the anion resins, XAD7, IRA900 and A22, the chloride concentration of the solution had little effect on the adsorption capacity. However, the adsorption of base metals on IR200 is sensitive to chloride concentration, with a rapid reduction in adsorption at higher chloride concentrations. Statistical models were developed for the adsorption of each of the precious metals, as well as for the base metal solution. All adsorption data obtained for a resin (typically 250 equilibrium data points) were used in the development of the model. The SPSS statistical software package was used to develop linear regression models. The interaction between input parameters, e.g. the interaction of gold and chloride ions, was modelled by specifying the product of the gold and chloride concentrations as an input variable. The variables that determine the adsorption quantities were identified. High solution concentrations of the target adsorption component increase the adsorption quantity. It was established that a higher platinum concentration increases the adsorption quantity of gold on XAD7 resin. However, the adsorption quantity is reduced at higher ruthenium concentrations. The adsorption quantity of iridium on IRA900 is reduced with increased rhodium concentration. The adsorption quantity of palladium on A22 is increased by the presence of rhodium and decreased by larger concentrations of iridium and platinum. The adsorption of base metals on IR200 is decreased at higher acid concentrations. Higher concentrations of gold in the base metal solution also decrease the adsorption quantity of base metals. The model-predicted adsorption of each component compares well with the actual measured values.
In batch adsorption tests the counter ion is not removed from the resin. The resin capacity for a specific ion concentration could therefore not be determined. As such, the adsorption models are only valid for the initial part of the ion-exchange process. The effect of kinetics on the adsorption was not determined.
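The statistical models described in this abstract supplied products of concentrations as explicit input variables to capture interactions (e.g. gold × chloride). A minimal sketch of that modelling idea, with entirely made-up concentrations and loadings, and plain numpy least squares standing in for the SPSS package used in the study:

```python
import numpy as np

# Hypothetical equilibrium data (units and values are illustrative only):
au = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])       # gold concentration
cl = np.array([4.0, 4.0, 6.0, 6.0, 8.0, 8.0])       # chloride concentration
loading = np.array([0.9, 1.8, 2.6, 3.5, 4.1, 5.0])  # measured gold loading on resin

# Design matrix with an explicit interaction column (au * cl), mirroring the
# idea of specifying the product of two concentrations as an input variable.
X = np.column_stack([np.ones_like(au), au, cl, au * cl])
coef, *_ = np.linalg.lstsq(X, loading, rcond=None)

pred = X @ coef
print(coef)           # intercept, gold, chloride and interaction coefficients
print(pred.round(2))  # model-predicted loadings for the fitted points
```

The interaction coefficient quantifies how the effect of gold concentration on loading changes with chloride level, which is the kind of dependence the abstract reports.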
- The bio-disposal of lignocellulose substances with activated sludge (Stellenbosch : Stellenbosch University, 2001-03) Qi, Bing Cui; Lorenzen, L.; Aldrich, C.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: Lignocellulose is the principal form of biomass in the biosphere and therefore the predominant renewable source in the environment. However, owing to the chemical and structural complexity of lignocellulose substrates, the effective and sustainable utilization of lignocellulose wastes is limited. Many environments where lignocellulose residues are ordinarily stored can be highly acidic (e.g. landfills), and under these circumstances biodegradation of the lignocellulose is slow and unhygienic. Owing to the metabolic activities of the micro-organisms, the initially acidified habitats rapidly undergo self-neutralization. A number of pathogenic bacteria (coliforms and Salmonella sp.) are present during this slow degradation process, and it is therefore imperative to improve the efficiency and hygienic effects of the biodegradation of the lignocellulose. Although the fundamentals of biodegradation of lignocellulose have been widely investigated, many issues still need to be resolved in order to develop commercially viable technology for the exploitation of these waste products. For example, owing to the complex, heterogeneous structure of lignocellulose, the degree of solubilization, modification and conversion of the different components is not clear. Likewise, the overall anaerobic degradation of lignocellulose is not yet well understood. In this study, the emphasis was on the promotion of solid anaerobic digestion of lignocellulose wastes for environmental beneficiation and waste reutilization. The degradation of lignocellulose in landfill environments was first simulated experimentally.
Once the microbial populations and the degradation products of the system were characterized, the promotion of anaerobic digestion by use of activated sludge was studied. This included acidogenic fermentation, as well as recovery of the methanogenic phase. Moreover, special attention was given to the further disposal of humic acids or humic acid bearing leachates formed in the digestive system, since these acids pose a major problem in the digestion of the lignocellulose. With ultrasonication, approximately 50% of the lower molecular weight fraction of humic acids could be decomposed into volatile forms, but the higher molecular weight fraction tended to aggregate into a colloidal form, which could only be removed from the system by making use of ultrasonically assisted adsorption on preformed aluminium hydroxide flocs. This was followed by an investigation of the microbial degradation of humic acids and the toxicity of these acids to anaerobic consortia. Further experimental work was conducted to optimize the biological and abiological treatment of lignocellulose in an upflow anaerobic sludge blanket (UASB) reactor fed with glucose substrate. The humic acids could be partially hydrolysed and decomposed by the acid fermentative consortia of the granules in the UASB reactor. Finally, solid mesothermophilic lignocellulose anaerobic digestive sludge can be viewed as a humus-rich hygienic product that can improve the fertility and water-holding capacity of agricultural soil, nourish plants and immobilize heavy metals in the environment as a bioabsorbent.
- Induction of fuzzy rules from chemical process data using Growing Neural Gas and Reactive Tabu Search methods (Stellenbosch : Stellenbosch University, 1999-10) Gouws, Francois Stefan; Aldrich, C.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: The artificial intelligence community has developed a large body of algorithms that can be employed as powerful data analysis tools. However, such tools are not readily used in petrochemical plant operational decision support. This is primarily because the models generated by such tools are either too inaccurate or, if of acceptable accuracy, too difficult to understand. The Combinatorial Rule Assembler (CORA) algorithm is proposed to address these problems. The algorithm uses membership functions made by the Growing Neural Gas (GNG) radial basis function network training technique to assemble internally disjunctive, zeroth-order Sugeno fuzzy rules using the non-greedy Reactive Tabu Search (RTS) combinatorial search method. An evaluation of the influence of CORA training parameters revealed the following. First, for certain problems CORA models have an attribute space overlap that is one third of that of their GNG-generated counterparts. Second, the use of more fuzzy rules generally leads to better model accuracy. Third, decreased swap (or move) thresholds do not consistently lead to more accurate and/or simpler models. Fourth, utilisation of moves rather than swaps during rule antecedent assembly leads to better rule simplification. Fifth, consequent magnitude penalisation generally improves accuracy, especially if many rules are built; the variance of the results is also usually reduced. Sixth, employing Yu rather than Zadeh operators leads to improved accuracy. Seventh, use of the GNG adjacency matrix significantly reduces the combinatorial complexity of rule construction. Eighth, the AIC and BIC criteria used to find the "right-sized" model exhibited local optima.
Last, the CORA algorithm struggles to model problems that have a low exemplar-to-attribute ratio. On a chaotic time series problem the CORA algorithm builds models that are significantly (with at least 95% confidence) more accurate than those generated using multiple linear regression (MLR), CART regression trees and multivariate adaptive regression splines (MARS). However, only the RTS component models are significantly more accurate than those of the GNG and k-means (RBF) radial basis function network methods. In terms of complexity, the CORA models were significantly simpler than the CART and RBF models, but more complex than the MLR, MARS and multilayer perceptron models that were evaluated. Taking all results for this problem into account, it is the author's opinion that the drop in accuracy (at worst 0.42%) of the CORA models, caused by membership function merging and rule reduction, is justified by the increase in model simplicity (at least 22%). In addition, these results show that relatively intelligible "if ... then ..." fuzzy rule models can be built from chemical process data that are competitive (in terms of accuracy) with other, less intelligible, model types (e.g. multivariate spline models).
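The CORA models above are zeroth-order Sugeno fuzzy rule bases whose membership functions come from GNG training. A toy sketch of how such a rule base is evaluated, with invented rule centres, widths and constant consequents (real CORA rules are internally disjunctive over several attributes, which this one-input example omits):

```python
import numpy as np

def gaussian_mf(x, centre, width):
    """Gaussian membership function, as produced by RBF-style training."""
    return np.exp(-((x - centre) ** 2) / (2.0 * width ** 2))

def sugeno0(x, rules):
    """Evaluate zeroth-order Sugeno rules; each rule is a tuple
    (centre, width, constant_consequent). Output is the firing-strength
    weighted average of the constant consequents."""
    strengths = np.array([gaussian_mf(x, c, w) for c, w, _ in rules])
    consequents = np.array([q for _, _, q in rules])
    return float(strengths @ consequents / strengths.sum())

# Two illustrative rules (all numbers are made up):
rules = [(0.0, 1.0, 10.0), (5.0, 1.0, 20.0)]
print(sugeno0(0.0, rules))   # near the first rule -> output close to 10
print(sugeno0(5.0, rules))   # near the second rule -> output close to 20
```

Between the two centres the output blends smoothly, which is what makes such rule bases both intelligible (each rule is readable) and reasonably accurate.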
- An integrated approach to waste and energy minimization in the wine industry : a knowledge-based decision methodology (Stellenbosch : University of Stellenbosch, 2004-12) Musee, Ndeke; Lorenzen, L.; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: The importance of waste management is growing rapidly for several reasons. These reasons include the escalating cost of wastewater treatment and cleaning chemicals, an emerging trend of onerous government regulation of effluent disposal, rising public awareness of the adverse effects of industrial waste, as well as a drastic reduction in water resources in the winegrowing regions. In addition, owing to the large energy demand for refrigeration in high quality wine production and rapidly increasing energy costs, the challenges of energy management in the wine industry were also investigated. In order to address these challenges adequately, solutions were derived via the integration of two disciplines: environmental science (waste and energy management) and computer science (applications of artificial intelligence). Therefore, the findings reported from this study seek to advance knowledge through the construction of decision support systems for waste and energy management in circumstances where conventional mathematical formalisms are inadequate. In that sense, the dissertation constitutes interdisciplinary research on the application of integrated artificial intelligence technologies (expert systems and fuzzy logic) in designing and developing decision tools for waste and energy management in the wine industry. The dissertation first presents the domain of interest, where the scope and breadth of the problems it addresses are clearly defined.
Critical examination of the domain databases revealed that data, information and knowledge for waste and energy management in the wine industry are generally incomplete and lack structure overall. Owing to these characteristics, a hybrid system approach based on fuzzy logic was proposed for the development of decision support systems. The integrated decision support systems were developed on an object-oriented architecture. This approach facilitated the flexible design required for the complex problem-solving involved in waste and energy management. To illustrate the applicability of the off-line decision tools developed, several case studies mirroring actual industrial practices were considered. These systems were found to be robust and yielded results that were in accordance with actual industrial practice in the wine industry. Furthermore, they provided intelligent suggestions in scenarios where there was minimal information, and in certain instances they offered feasible suggestions in circumstances where a human novice could have problems in making the right decisions.
- Monitoring and diagnosis of process systems using kernel-based learning methods (Stellenbosch : University of Stellenbosch, 2007-12) Jemwa, Gorden Takawadiyi; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: The development of advanced methods of process monitoring, diagnosis and control has been identified as a major 21st century challenge in control systems research and application. This is particularly the case for chemical and metallurgical operations, owing to the lack of expressive fundamental models as well as the nonlinear nature of most process systems, which makes established linearization methods unsuitable. As a result, efforts have been directed at the search for alternative approaches that do not require fundamental or analytical models. Data-based methods provide a very promising alternative in this regard, given the huge volumes of data collected in modern process operations as well as advances in both the theoretical and practical aspects of extracting information from observations. In this thesis, the use of kernel-based learning methods in the fault detection and diagnosis of complex processes is considered. Kernel-based machine learning methods are a robust family of algorithms founded on insights from statistical learning theory. Instead of estimating a decision function by minimizing the training error, as other learning algorithms do, kernel methods use a criterion called large margin maximization to estimate a linear learning rule on data embedded in a suitable feature space. The embedding is implicitly defined by the choice of a kernel function and corresponds to inducing a nonlinear learning rule in the original measurement space. Large margin maximization corresponds to developing an algorithm with theoretical guarantees on how well it will perform on unseen data.
In the first contribution, the characterization of time series data from process plants is investigated. Whereas complex processes are difficult to model from first principles, they can be identified using historic process time series data and a suitable model structure. However, prior to fitting such a model, it is important to establish whether the time series data justify the selected model structure. Singular spectrum analysis (SSA) has been used for time series identification. A nonlinear extension of SSA is proposed for the classification of time series. Using benchmark systems, the proposed extension is shown to perform better than linear SSA. Moreover, the method is shown to be useful for filtering noise in time series data and therefore has potential applications in other tasks, such as data rectification and gross error detection. Multivariate statistical process monitoring methods are well-established techniques for efficient information extraction from multivariate data. Such information is usually compact and amenable to graphical representation in two- or three-dimensional plots. For process monitoring purposes, control limits are also plotted on these charts. These control limits are usually based on a hypothesized analytical distribution, typically the Gaussian normal distribution. A robust approach for estimating confidence bounds from the reference data is proposed. The method is based on one-class classification methods. The usefulness of using data to define a confidence bound in reducing fault detection errors is illustrated using plant data. The use of both linear and nonlinear supervised feature extraction is also investigated. The advantages of supervised feature extraction using kernel methods are highlighted via illustrative case studies. A general strategy for fault detection and diagnosis is proposed that integrates feature extraction methods, fault identification, and different methods to estimate confidence bounds.
For kernel-based approaches, the general framework allows for interpretation of the results in the input space instead of the feature space. An important step in process monitoring is identifying the variable responsible for a fault. Although not all faults that can occur at a plant can be known beforehand, it is possible to use knowledge of previous faults or simulations to anticipate their recurrence. A framework for fault diagnosis using one-class support vector machine (SVM) classification is proposed. Compared to other previously studied techniques, the one-class SVM approach is shown to have generally better robustness and performance characteristics. Most methods for process monitoring make little use of data collected under normal operating conditions, whereas most quality issues in process plants are known to occur when the process is in-control. In the final contribution, a methodology for the continuous optimization of process performance is proposed that combines support vector learning with decision trees. The methodology is based on a continuous search for quality improvements by challenging the normal operating condition regions established via statistical control. Simulated and plant data are used to illustrate the approach.
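As a rough illustration of the one-class SVM idea used for fault diagnosis in this thesis, the sketch below trains scikit-learn's OneClassSVM on simulated normal operating condition data and flags an observation far outside that region. The data, kernel settings and the outlier-fraction parameter nu are all illustrative, not taken from the study:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Simulated normal operating condition (NOC) data: two process variables.
noc = rng.normal(0.0, 1.0, size=(500, 2))

# Train on NOC data only; nu bounds the fraction of training points
# treated as outliers (0.05 here is an arbitrary illustrative choice).
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(noc)

# An observation far from the NOC region should be flagged as -1 (fault).
fault = np.array([[6.0, -6.0]])
print(detector.predict(fault))    # -1 indicates an out-of-control point
print(detector.predict(noc[:5]))  # mostly +1 for in-control points
```

The learned boundary plays the role of a data-driven confidence bound: no Gaussian distribution is hypothesized, which is the point made in the abstract.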
- A neurocontrol paradigm for intelligent process control using evolutionary reinforcement learning (Stellenbosch : University of Stellenbosch, 2004-12) Conradie, Alex van Eck; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: Balancing multiple business and operational objectives within a comprehensive control strategy is a complex configuration task. Non-linearities and complex multiple process interactions combine as formidable cause-effect interrelationships. A clear understanding of these relationships is often instrumental to meeting the process control objectives. However, such control system configurations are generally conceived in a qualitative manner and with pronounced reliance on past effective configurations (Foss, 1973). Thirty years after Foss' critique, control system configuration remains a largely heuristic affair. Biological methods of processing information are fundamentally different from the methods used in conventional control techniques. Biological neural mechanisms (i.e., intelligent systems) are based on partial models, largely devoid of the system's underlying natural laws. Neural control strategies are carried out without a pure mathematical formulation of the task or the environment. Rather, biological systems rely on knowledge of cause-effect interactions, creating robust control strategies from ill-defined dynamic systems. Dynamic modelling may be either phenomenological or empirical. Phenomenological models are derived from first principles and typically consist of algebraic and differential equations. First-principles modelling is both time consuming and expensive. Vast data warehouses of historical plant data make empirical modelling attractive.
Singular spectrum analysis (SSA) is a rapid model development technique for identifying dominant state variables from historical plant time series data. Since time series data invariably cover a limited region of the state space, SSA models are almost necessarily partial models. Interpreting and learning causal relationships from dynamic models requires sufficient feedback of the environment's state. Systemisation of the learning task is imperative. Reinforcement learning is a computational approach to understanding and automating goal-directed learning. This thesis aimed to establish a neurocontrol paradigm for non-linear, high dimensional processes within an evolutionary reinforcement learning (ERL) framework. Symbiotic memetic neuro-evolution (SMNE) is an ERL algorithm developed for the global tuning of neurocontroller weights. SMNE comprises a symbiotic evolutionary algorithm and local particle swarm optimisation. Implicit fitness sharing ensures a global search, and the synergy between global and local search speeds convergence. Several simulation studies have been undertaken, viz. a highly non-linear bioreactor, a rigorous ball mill grinding circuit and the Tennessee Eastman control challenge. Pseudo-empirical modelling of an industrial fed-batch fermentation shows the application of SSA for developing partial models. Using SSA, state estimation is forthcoming without resorting to fundamental models. A dynamic model of a multi-effect batch distillation (MEBAD) pilot plant was fashioned using SSA. Thereafter, SMNE developed a neurocontroller for on-line implementation using the SSA model of the MEBAD pilot plant. Both simulated and experimental studies confirmed the robust performance of ERL neurocontrollers. Coordinated flow sheet design, steady state optimisation and nonlinear controller development encompass a comprehensive methodology.
Effective selection of controlled variables and pairing of process and manipulated variables were implicit to the SMNE methodology. High economic performance was attained in highly non-linear regions of the state space. SMNE imparted significant generalisation in the face of process uncertainty. Nevertheless, changing process conditions may necessitate neurocontroller adaptation. Adaptive neural swarming (ANS) allows for adaptation to drifting process conditions and tracking of the economic optimum online. Additionally, SMNE allows for control strategy design beyond single unit operations. SMNE is equally applicable to processes with high dimensionality, developing plant-wide control strategies. Many of the difficulties in conventional plant-wide control may be circumvented in the biologically motivated approach of the SMNE algorithm. Future work will focus on refinements to both SMNE and SSA. SMNE and SSA thus offer a non-heuristic, quantitative approach that requires minimal engineering judgement or knowledge, making the methodology free of subjective design input. Evolutionary reinforcement learning offers significant advantages for developing high performance control strategies for the chemical, mineral and metallurgical industries. Symbiotic memetic neuro-evolution (SMNE), adaptive neural swarming (ANS) and singular spectrum analysis (SSA) present a response to Foss' critique.
- Nonlinear singular spectrum analysis and its application in multivariate statistical process monitoring (Stellenbosch : Stellenbosch University, 2016-03) Krishnannair, Syamala; Aldrich, C.; Bradshaw, S. M.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: Multivariate statistical process control (MSPC) approaches based on principal component analysis (PCA), partial least squares (PLS) and related extensions are now widely used for process monitoring and diagnosis in process systems where observed correlated measurements are readily available. However, highly nonlinear (dynamic) processes pose a challenge for MSPC methods, as a large set of nonlinear features is typically required to capture the underlying characteristic behaviour of the process in the absence of faults. Several extensions of basic PCA methods have previously been proposed to handle features such as autocorrelation in data, time-frequency localization and nonlinearity. In this study, multivariate statistical process monitoring methods based on nonlinear singular spectrum analysis (SSA), which use nonlinear principal component analysis, multidimensional scaling and kernel multidimensional scaling, are proposed. More specifically, singular spectrum analysis using covariance and dissimilarity scale structure is proposed to express multivariate time series as the sum of identifiable components whose basis functions are obtained from the process measurements. Such an approach is useful for extracting trends, harmonic patterns and noise in time series data. Using nonlinear SSA decomposition of time series data, a multimodal representation is obtained that can be used together with existing statistical process control methods to develop novel process monitoring schemes.
The advantages of these approaches are demonstrated on simulated multivariate nonlinear data and compared with those of classical PCA and multimodal SSA on base metal flotation plant data and the Tennessee Eastman process benchmark data. The nonlinear SSA methods better captured the nonlinearities in the observed data. Consequently, this yielded improved detection rates for various faults in nonlinear data over those obtainable by alternative competing multivariate methods.
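For orientation, basic (linear) SSA, which the nonlinear variants in this thesis extend, decomposes a series into additive components via an SVD of the trajectory matrix followed by diagonal averaging. A compact sketch on a synthetic noisy sine (the window length and data are arbitrary illustrative choices):

```python
import numpy as np

def ssa_decompose(series, window):
    """Basic (linear) SSA: embed the series into a Hankel trajectory matrix,
    take its SVD, and reconstruct one additive component per singular triple
    by diagonal averaging."""
    n = len(series)
    k = n - window + 1
    # Trajectory matrix of lagged windows; X[a, b] = series[a + b].
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # Diagonal averaging maps the rank-one matrix back to a series:
        # average each anti-diagonal (entries with a + b = j).
        comp = np.array([np.mean(Xi[::-1].diagonal(j - window + 1))
                         for j in range(n)])
        components.append(comp)
    return np.array(components)

# A noisy sine: the leading components capture the oscillation, the
# trailing ones mostly noise, and all components sum back to the series.
t = np.linspace(0, 4 * np.pi, 200)
series = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=200)
comps = ssa_decompose(series, window=20)
print(comps.shape)                           # (20, 200)
print(np.allclose(comps.sum(axis=0), series))  # exact additive decomposition
```

Grouping subsets of these components gives the trend, harmonic and noise separation described above; the nonlinear versions replace the covariance/SVD step with nonlinear PCA or (kernel) multidimensional scaling.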
- Process monitoring and fault diagnosis using random forests (Stellenbosch : University of Stellenbosch, 2010-12) Auret, Lidia; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: Fault diagnosis is an important component of process monitoring, relevant in the greater context of developing safer, cleaner and more cost-efficient processes. Data-driven unsupervised (or feature extractive) approaches to fault diagnosis exploit the many measurements available on modern plants. Certain current unsupervised approaches are hampered by their linearity assumptions, motivating the investigation of nonlinear methods. The diversity of data structures also motivates the investigation of novel feature extraction methodologies in process monitoring. Random forests are recently proposed statistical inference tools, deriving their predictive accuracy from the nonlinear nature of their constituent decision tree members and the power of ensembles. Random forest committees provide more than just predictions; model information on data proximities can be exploited to provide random forest features. Variable importance measures show which variables are closely associated with a chosen response variable, while partial dependencies indicate the relation of important variables to said response variable. The purpose of this study was therefore to investigate the feasibility of a new unsupervised method based on random forests as a potentially viable contender in the process monitoring statistical tool family. The hypothesis investigated was that unsupervised process monitoring and fault diagnosis can be improved by using features extracted from data with random forests, with further interpretation of fault conditions aided by random forest tools. The experimental results presented in this work support this hypothesis. An initial study was performed to assess the quality of random forest features.
Random forest features were shown to be generally difficult to interpret in terms of geometry present in the original variable space. Random forest mapping and demapping models were shown to be very accurate on training data, and to extrapolate weakly to unseen data that do not fall within regions populated by training data. Random forest feature extraction was applied to unsupervised fault diagnosis for process data, and compared to linear and nonlinear methods. Random forest results were comparable to existing techniques, with the majority of random forest detections due to variable reconstruction errors. Further investigation revealed that the residual detection success of random forests originates from the constrained responses and poor generalization artifacts of decision trees. Random forest variable importance measures and partial dependencies were incorporated in a visualization tool to allow for the interpretation of fault conditions. A dynamic change point detection application with random forests proved more successful than an existing principal component analysis-based approach, with the success of the random forest method again residing in reconstruction errors. The addition of random forest fault diagnosis and change point detection algorithms to a suite of abnormal event detection techniques is recommended. The distance-to-model diagnostic based on random forest mapping and demapping proved successful in this work, and the theoretical understanding gained supports the application of this method to further data sets.
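The random forest features discussed above rest on tree proximities: the fraction of trees in which two observations land in the same terminal node. A sketch of computing such a proximity matrix with scikit-learn, using the common unsupervised device of discriminating real data from a column-permuted synthetic copy (all data and parameter values here are made up, not from the study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Unsupervised trick: label real observations 1 and a synthetic copy,
# built by permuting each column independently, 0; the forest then
# learns the dependence structure of the real data.
real = rng.normal(size=(200, 4))
synth = np.column_stack([rng.permutation(real[:, j]) for j in range(4)])
X = np.vstack([real, synth])
y = np.array([1] * 200 + [0] * 200)

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Proximity of two observations = fraction of trees in which they share
# a terminal (leaf) node; forest.apply returns leaf indices per tree.
leaves = forest.apply(real)                                  # (200, 50)
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
print(prox.shape)   # (200, 200) proximity matrix
print(prox[0, 0])   # 1.0: each point is maximally proximate to itself
```

Embedding 1 - prox with a method such as multidimensional scaling yields the kind of random forest features whose quality the initial study assessed.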
- A process performance monitoring methodology for mineral processing plants (Stellenbosch : Stellenbosch University, 2014-04) Groenewald, Jacobus Willem De Villiers; Aldrich, C.; Bradshaw, S. M.; Akdogan, G.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: Key to remaining competitive within the mineral industry is ensuring that all processes are always operated optimally. Process performance monitoring is an ideal initiative with which to accomplish this. Not only can it be used to ensure fault-free process operation, but it can also be applied to plant performance improvement through a better understanding of the contributors to the success or failure of the process operation. Critical to the success of any proposed monitoring approach is its ability to cater for the fact that these mineral processes are typically highly complex, dynamic and non-linear. The purpose of this study was to propose and evaluate a methodical approach to plant-wide process performance monitoring for mineral processing plants. Crucial to this approach is the concept of integrating process causality maps with data-based systems for event detection and diagnosis. To this end, process causality maps were developed to provide a means of structuring process data through the use of fundamental process knowledge. Statistical data-based fault detection techniques, being especially powerful with regard to data compression and dimensionality reduction, were employed to allow huge data sets to be analysed more easily. Change point detection techniques allowed for the identification of stationary segments of data in otherwise non-stationary data sets. Variable importance analysis was used to identify and interpret the variable(s) responsible for the event conditions. Using simulated data sets, different techniques were evaluated in order to acquire an appreciation of their effectiveness and reliability.
While it was found that no single technique significantly outperformed any other, it was confirmed that, for data having different structures and characteristics, none of the techniques was effective in analysing all potential event conditions. It was suggested that all available techniques be run in parallel, with expert interpretation of the results, so that a more comprehensive analysis can be performed. Furthermore, given that only the process measurements being monitored could be used to detect events and be analysed for importance, the consequences of monitoring too few process measurements were highlighted. A generic analytical methodology for multivariate process performance monitoring was defined, ensuring the use of appropriate techniques and interpretations. The methodology was subsequently successfully applied to a mineral processing concentrator case study. The application of process causality maps was found to significantly simplify the challenge of monitoring the process, not only improving the performance of the techniques applied through a better focussed application, but also the interpretability of the results, owing to the reduction in complexity. The extreme learning machine, a robust and computationally inexpensive algorithm, was identified as a potential core algorithm for the data analysis techniques forming part of a process performance monitoring solution. With different drivers, at different times, having different effects on the process, visual representation of the data through canonical variate analysis biplots, combined with a sound understanding of the process under investigation, contributed significantly to a better understanding of the important variables for each event condition. From an implementation perspective, adoption of the methodology remains the biggest barrier to success, requiring the most attention in the immediate future.
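Since the extreme learning machine is named above as a potential core algorithm, a minimal sketch of its training recipe may help: hidden-layer weights are drawn at random and never trained, and only the output weights are obtained in closed form by least squares. The regression task, layer sizes and activation below are illustrative choices, not details from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=100):
    """Extreme learning machine: random input weights, closed-form
    least-squares solve for the output weights (no iterative training)."""
    W = rng.normal(size=(X.shape[1], hidden))  # random, fixed input weights
    b = rng.normal(size=hidden)                # random, fixed biases
    H = np.tanh(X @ W + b)                     # random hidden-layer features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative smooth regression target on two inputs.
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
W, b, beta = elm_fit(X, y, hidden=100)
pred = elm_predict(X, W, b, beta)
print(np.mean((pred - y) ** 2))  # training error; small for this smooth task
```

The single linear solve is what makes the method computationally inexpensive relative to iteratively trained networks, which is presumably why it suits a plant-wide monitoring setting with many models to fit.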