Masters Degrees (Chemical Engineering)
Browsing Masters Degrees (Chemical Engineering) by browse.metadata.advisor "Aldrich, C."
- Item: Acoustic monitoring of DC plasma arcs (Stellenbosch : Stellenbosch University, 2008-03) Burchell, John James; Eksteen, J. J.; Niesler, T. R.; Aldrich, C.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: The arc, generated between the cathode and slag in a dc electric arc furnace (EAF), constitutes the principal source of thermal energy in the furnace. Steady-state melting conditions rely on efficient control of the arc's power. This is achieved by keeping the arc's length constant; arc length is currently not measured directly in industry, but is inferred from an external voltage measurement. This voltage measurement is often subject to inaccuracies, since it may be influenced by voltage fluctuations that are not necessarily related to the arc itself, such as the variable impedance of the molten bath and the degradation of the graphite electrode. This study investigated whether it is possible to develop a sensor for the detection of arc length from the sound that is generated by the arc during operation. Acoustic signals were recorded at different arc lengths using a 60 kW dc electric arc furnace and 600 g of mild steel as melt. Using a filter bank (FB) combined with kernel-based Fisher discriminant analysis (KFD), nonlinear features were extracted from these signals. The features were then used to train and test a k-nearest neighbour (kNN) classifier. Two methods were used to evaluate the performance of the kNN classifier. In the first, both test and training features were extracted from acoustic signals recorded during the same experimental run, with a ten-fold bootstrap method used for integrity. The second method tested the generalized performance of the classifier: the kNN classifier was trained with features extracted from the acoustic recordings of one or more experimental runs and then tested with features drawn from the remaining runs.
The results from this study show that there exists a relationship between arc length and arc acoustics which can be exploited to develop a sensor for the detection of arc length from arc acoustics in the dc EAF. Indications are that the performance of such a sensor would depend strongly on how statistically representative the acoustic data used to develop it are of the acoustics generated by industrial dc EAFs during operation.
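The kNN classification step this abstract describes can be sketched in a few lines. This is a minimal illustration only: the two-dimensional feature vectors and arc-length class labels below are invented, and the thesis' actual FB/KFD features are not reproduced here.

```python
import math
from collections import Counter

def knn_classify(train_X, train_y, x, k=3):
    """Label x by majority vote among its k nearest training vectors
    (Euclidean distance) -- a basic k-nearest neighbour classifier."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D acoustic feature vectors labelled by arc-length class:
train_X = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25),
           (0.90, 0.80), (1.00, 0.90), (0.95, 0.85)]
train_y = ["short", "short", "short", "long", "long", "long"]

print(knn_classify(train_X, train_y, (0.12, 0.18)))  # → short
print(knn_classify(train_X, train_y, (0.92, 0.88)))  # → long
```

In practice the feature vectors would be the nonlinear KFD features extracted from the recorded arc sound, and k would be chosen by cross-validation.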
- Item: Analysis and modelling of mining-induced seismicity (Stellenbosch : University of Stellenbosch, 2006-12) Bredenkamp, Ben; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. Earthquakes and other seismic events are known to have catastrophic effects on people and property. These large-scale events are almost always preceded by smaller-scale seismic events called precursors, such as tremors or other vibrations. The use of precursor data to predict the realization of seismic hazards has been a long-standing technical problem in different disciplines. For example, blasting or other mining activities have the potential to induce the collapse of rock surfaces, or the occurrence of other dangerous seismic events in large volumes of rock. In this study, seismic data (T4) obtained from a mining concern in South Africa were considered using a nonlinear time series approach. In particular, the method of surrogate analysis was used to characterize the deterministic structure in the data, prior to fitting a predictive model. The seismic data set T4 is a set of seismic events for a small volume of rock in a mine observed over a period of 12 days. The surrogate data were generated to have structure similar to that of T4 according to some basic seismic laws. In particular, the surrogate data sets were generated to have the same autocorrelation structure and amplitude distributions as the underlying data set T4. The surrogate data derived from T4 allow for the assessment of some basic hypotheses regarding both types of data sets. The structure in both types of data (i.e. the relationship between past behavior and the future realization of components) was investigated by means of three test statistics, each of which provided partial information on the structure in the data. The first is the average mutual information between the reconstructed past and future states of T4.
The second is a correlation dimension estimate, Dc, which gives an indication of the deterministic structure (predictability) of the reconstructed states of T4. The final statistic is the correlation coefficient, which gives an indication of the predictability of the future behavior of T4 based on its past states. The past states of T4 were reconstructed by reducing the dimension of a delay coordinate embedding of the components of T4. The map from past states to future realizations of T4 values was estimated using Long Short-Term Memory (LSTM) recurrent neural networks. The application of LSTM recurrent neural networks to point processes has not been reported before in the literature. Comparison of the stochastic surrogate data with the measured structure in the T4 data set showed that the structure in T4 differed significantly from that of the surrogate data sets. However, the relationship between the past states and the future realization of components for both T4 and the surrogate data did not appear to be deterministic. The application of LSTM networks in the modeling of T4 showed that the approach could model point processes at least as well as, or better than, previously reported applications on time series data.
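Surrogates that preserve both the amplitude distribution and the autocorrelation structure of a series, as described above, are commonly built with the amplitude-adjusted Fourier-transform (AAFT) recipe. The sketch below shows that standard recipe, not the thesis' exact procedure; the input series is synthetic.

```python
import numpy as np

def aaft_surrogate(x, seed=None):
    """Amplitude-adjusted Fourier-transform (AAFT) surrogate: a
    randomised series that approximately preserves both the amplitude
    distribution and the autocorrelation (power spectrum) of x."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ranks = np.argsort(np.argsort(x))
    # 1. Rank-remap a Gaussian series to the ordering of x.
    gauss = np.sort(rng.standard_normal(n))[ranks]
    # 2. Phase-randomise the Gaussian series (preserves its spectrum).
    spec = np.fft.rfft(gauss)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0                      # keep the mean untouched
    shuffled = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)
    # 3. Remap the original amplitudes onto the new ordering.
    return np.sort(x)[np.argsort(np.argsort(shuffled))]

x = np.cumsum(np.random.default_rng(0).standard_normal(512))
s = aaft_surrogate(x, seed=1)
# s contains exactly the same values as x, reordered so that the
# linear correlation structure is (approximately) retained.
```

A test statistic computed on x that falls outside the distribution of the same statistic over many such surrogates is evidence of structure beyond the linear/stochastic null hypothesis.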
- Item: Analysis of process data with singular spectrum methods (Stellenbosch : University of Stellenbosch, 2003-12) Barkhuizen, Marlize; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: The analysis of process data obtained from chemical and metallurgical engineering systems is a crucial aspect of the operation of any process, as information extracted from the data is used for control purposes, decision making and forecasting. Singular spectrum analysis (SSA) is a relatively new technique that can be used to decompose time series into their constituent components, after which a variety of further analyses can be applied to the data. The objectives of this study were to investigate the abilities of SSA regarding the filtering of data and the subsequent modelling of the filtered data, to explore the methods available to perform nonlinear SSA, and finally to explore the possibilities of Monte Carlo SSA for characterizing and identifying process systems from observed time series data. Although the literature indicated the widespread application of SSA in other research fields, no previous application of singular spectrum analysis to time series obtained from chemical engineering processes could be found. SSA appeared to have a multitude of applications that could be of great benefit in the analysis of data from process systems. The first indication of this was in the filtering or noise-removal abilities of SSA. A number of case studies were filtered by various techniques related to SSA, after which a number of neural network modelling strategies were applied to the data. It was consistently found that the models built on data that had been prefiltered with SSA outperformed the other models. The effectiveness of localized SSA and auto-associative neural networks in performing nonlinear SSA was then compared.
Both techniques succeeded in extracting a number of nonlinear components from the data that could not be identified with linear SSA. However, it was found that localized SSA was a more reliable approach, as the auto-associative neural networks would not train for some of the data or extracted nonsensical components for other series. Lastly, a number of time series were analysed using Monte Carlo SSA. It was found that, as is the case with all other characterization techniques, Monte Carlo SSA could not correctly classify all the series investigated. For this reason, several tests were used for the classification of the real process data. In the light of these findings, it was concluded that singular spectrum analysis could be a valuable tool in the analysis of chemical and metallurgical process data.
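The decomposition at the heart of basic (linear) SSA can be sketched in a few lines: embed the series in a trajectory matrix, take its SVD, and map each rank-one term back to a series by diagonal averaging. The window length and the noisy test series below are illustrative only.

```python
import numpy as np

def ssa_components(x, window):
    """Basic SSA: trajectory-matrix embedding, SVD, and anti-diagonal
    averaging of each rank-one term. The returned components sum
    exactly back to the original series."""
    n, k = len(x), len(x) - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for m in range(len(s)):
        elem = s[m] * np.outer(u[:, m], vt[m])      # rank-one term
        comp, counts = np.zeros(n), np.zeros(n)
        for i in range(window):                     # diagonal averaging:
            for j in range(k):                      # entry (i, j) belongs
                comp[i + j] += elem[i, j]           # to time index i + j
                counts[i + j] += 1
        comps.append(comp / counts)
    return comps

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t) + 0.3 * rng.standard_normal(200)
comps = ssa_components(x, window=30)
denoised = comps[0] + comps[1]   # leading components carry the signal
```

Filtering, as used in the study above, amounts to keeping only the leading components and discarding the rest as noise.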
- Item: Change-point detection in dynamical systems using auto-associative neural networks (Stellenbosch : Stellenbosch University, 2012-03) Bulunga, Meshack Linda; Aldrich, C.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: In this research work, auto-associative neural networks have been used for change-point detection. This is a nonlinear technique that employs artificial neural networks, inspired among others by Frank Rosenblatt’s linear perceptron algorithm for classification. An auto-associative neural network was used successfully to detect change-points for various types of time series data. Its performance was compared to that of the singular spectrum analysis method developed by Moskvina and Zhigljavsky. The fraction of explained variance (FEV) was also used to compare the performance of the two methods. FEV indicators are similar to the eigenvalues of the covariance matrix in principal component analysis. Two types of time series data were used for change-point detection: Gaussian data series and nonlinear reaction data series. The Gaussian data had four series with different types of change-points, namely a change in the mean value of the time series (T1), a change in the variance of the time series (T2), a change in the autocorrelation of the time series (T3), and a change in the cross-correlation of two time series (T4). Both the linear and nonlinear methods were able to detect the changes for T1, T2 and T4; neither could detect the changes in T3. With the Gaussian data series, linear singular spectrum analysis (LSSA) performed as well as NLSSA for change-point detection, because the time series was linear and the nonlinearity of NLSSA was therefore not important. LSSA did even better than NLSSA when comparing FEV values, since it is not subject to the suboptimal solutions that can arise with auto-associative neural networks.
The nonlinear data consisted of the Belousov-Zhabotinsky (BZ) reaction, autocatalytic reaction time series data and data representing a predator-prey system. With the NLSSA methods, change-points could be detected accurately in all three systems, while LSSA only managed to detect the change-point in the BZ reactions and the predator-prey system. The NLSSA method also fared better than the LSSA method when comparing FEV values for the BZ reactions. The LSSA method was able to model the autocatalytic reactions fairly accurately, explaining 99% of the variance in the data with one component only, whereas NLSSA with two nodes in the bottleneck attained an FEV of 87%. The performance of NLSSA and LSSA was comparable for the predator-prey system, where both could attain FEV values of 92% with a single component. An auto-associative neural network is a good technique for change-point detection in nonlinear time series data. However, it offers no advantage over linear techniques when the time series data are linear.
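The FEV indicator used above can be computed, in its linear (PCA) form, directly from the eigenvalues of the covariance matrix. A small sketch with invented, strongly correlated data:

```python
import numpy as np

def fev(data, n_components):
    """Fraction of explained variance retained by the leading
    principal components: the sum of the largest eigenvalues of the
    covariance matrix divided by the sum of all eigenvalues."""
    centered = data - data.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(centered.T))[::-1]  # descending
    return eig[:n_components].sum() / eig.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
data = np.column_stack([x, x + 0.1 * rng.standard_normal(500)])
# Two highly correlated columns: one component explains nearly
# everything, so FEV for a single component is close to 1.
```

For NLSSA the same ratio is computed from the variance reconstructed through the bottleneck of the auto-associative network rather than from eigenvalues.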
- Item: Cleaning of micro- and ultrafiltration membranes with infrasonic backpulsing (Stellenbosch : University of Stellenbosch, 2009-12) Shugman, Emad Musbah; Aldrich, C.; Sanderson, R. D.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: Membrane fouling is universally considered to be one of the most critical obstacles to the wider application of membranes in filtration separation. Fouling is caused by the deposition of particles not only on the surface of the membrane, but also inside the membrane pores, which reduces permeate flux and leads to a reduction of the efficiency and longevity of the membrane. The backpulsing cleaning method can be used to remove deposited foulants from the surface of the membrane without having to shut down the plant. Ultrasonic time-domain reflectometry (UTDR) is a non-destructive technique used to detect and measure the growth of the fouling layer on the membrane surface during microfiltration and ultrafiltration. In this study, flat-sheet microfiltration (MF) and ultrafiltration (UF) membranes were fouled during a cross-flow filtration process using dextrin, yeast or alumina (feed pressure 100 kPa and feed flow rate 0.45 liter/minute) in a flat cell. Infrasound-frequency backpulsing in the permeate space was used to clean the membranes. Backpulsing was carried out using the permeate water or soap solutions. The peak pressure amplitude of the pulses used to clean the membranes was 140 kPa, and the pulsing was applied at a frequency of 6.7 Hz.
The main objectives of this research were: (1) to obtain a fundamental understanding of how foulants deposit on membrane surfaces and how the foulant deposits can be removed using the backpulsing cleaning technique during MF and UF; (2) to use the ultrasonic measurement technique for monitoring the growth and removal of the fouling layer on the membrane surface; and (3) to use scanning electron microscopy (SEM) as a direct measurement technique to analyze the structure of the foulant deposits on membrane surfaces before and after cleaning. Results showed that a flux of between 55% and 98% of the clean-water flux can be recovered by backpulsing cleaning. UTDR was successfully applied to monitor membrane cleaning and provided information about the growth and removal of fouling layers on the membrane surface.
- Item: Cluster analysis and classification of process data by use of principal curves (Stellenbosch : Stellenbosch University, 1999-12) Van Coller, Cornelia Susanna; Aldrich, C.; Lorenzen, L.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH SUMMARY: In this thesis a new method of clustering as well as a new method of classification is proposed. Cluster analysis is a statistical method used to search for natural groups in an unstructured multivariate data set. Clusters are obtained in such a way that the observations belonging to the same group are more alike than observations across groups. For instance, long data records are found in mineral processing plants, where the data can be reduced to clusters according to different ore types. Most of the existing clustering methods do not give reliable results when applied to engineering data, since these methods were mainly developed in the domains of psychology and biology. Classification analysis can be regarded as the natural continuation of cluster analysis. In order to classify objects, two types of observations are needed. The first are those observations whose group memberships are known a priori, which can be acquired through cluster analysis. The second kind are those whose group memberships are unidentified; by means of classification, these observations are allocated to one of the existing groups. Both of the proposed techniques are based on the use of a smooth one-dimensional curve passing through the middle of the data set. To formalise this idea, principal curves were developed by Hastie and Stuetzle (1989). A principal curve summarises the data in a non-linear fashion. For clustering, the principal curve of the entire unstructured data set is extracted; this one-dimensional representation of the data set is then used to search for different clusters. For classification, a principal curve is fitted to every known group in the data set.
New observations are then allocated to the group whose curve lies closest to them. Clustering with principal curves grouped engineering data better than most of the well-known clustering algorithms, although some shortcomings of this method were also established. Classification with principal curves gave results comparable to those of some existing classification methods, and unlike statistical classification techniques, it can be applied to data of any distribution.
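The classification rule above (assign a new observation to the group whose fitted curve passes closest to it) can be sketched by approximating each group's principal curve with a polyline. The curves and the test points below are invented for illustration; fitting the curves themselves (Hastie and Stuetzle's algorithm) is not shown.

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the line segment ab."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.dist(p, a)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def dist_to_curve(p, curve):
    """Distance from p to a polyline approximation of a curve."""
    return min(point_segment_dist(p, curve[i], curve[i + 1])
               for i in range(len(curve) - 1))

def classify(p, curves):
    """Allocate p to the group whose curve passes closest to it."""
    return min(curves, key=lambda group: dist_to_curve(p, curves[group]))

# Hypothetical one-dimensional curves summarising two known groups:
curves = {
    "group A": [(0, 0), (1, 1), (2, 2)],
    "group B": [(0, 3), (1, 4), (2, 5)],
}
print(classify((1.0, 1.2), curves))  # → group A
print(classify((1.0, 3.8), curves))  # → group B
```

A real principal curve would be a smooth self-consistent curve fitted to each group's data rather than a hand-placed polyline, but the assignment step is the same.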
- Item: The control of calcium and magnesium in a base metal sulphate leach solution (Stellenbosch : University of Stellenbosch, 2003-04) Pelser, Max; Eksteen, J. J.; Lorenzen, L.; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: This thesis investigates the control of calcium and magnesium in a base metal sulphate leach solution containing nickel and cobalt. The presence of calcium and magnesium in the hydrometallurgical processing of base metals results in a number of difficulties, ranging from contamination of the final product to high energy consumption and large bleed streams during electrowinning. Calcium poses a greater problem in sulphate solutions due to the low solubility of its sulphate salts. No conventional method exists for the control of calcium and magnesium. As part of this study a review of possible control methods was conducted, which is listed within; from this list the precipitation of fluorides was selected for further investigation. The results showed that it is possible to control calcium and magnesium through the precipitation of their respective fluorides, without the co-precipitation of nickel and cobalt. At 10% stoichiometric excess of fluoride, 96.5% of the calcium and 98.5% of the magnesium were removed during batch experiments. It is known that mixing and hydrodynamics play an important role in the characteristics of the formed precipitate, making these processes inherently difficult to scale up. To evaluate these effects in a continuous process, the three-zone model proposed by Gösele and Kind (1991) was used. A precipitate with consistent characteristics was produced while varying the mixing on the macro, meso and micro scales. Additionally, methods were investigated for the removal or possible recycling of the unreacted fluoride, for which activated alumina was identified.
It was observed that activated alumina could adsorb fluoride to low levels in the presence of the base metal solution, after which it could be regenerated. The activated alumina (AA) had a capacity of 8.65 g F/l AA at a 10 mg/l fluoride breakthrough level during column tests. Based on the experimental results a conceptual process was devised whereby only a portion of the leach stream is subjected to the fluoride precipitation process, after which it is returned to lower the overall calcium and magnesium concentrations. This method would reduce the effect of the observed dominance of magnesium precipitation in processes where the maximum removal of both elements is not required. The fluoride precipitation process consisted of three steps: precipitation, solid-liquid separation and adsorption of the unreacted fluoride. Sufficient information is provided on the process for a cost estimation to be carried out. Should this be found to be feasible, a continuation of the project is recommended. Different reactor configurations could be evaluated for precipitation, and the scaling observed during the continuous experiments should be investigated to minimise its effect. The investigation of activated alumina was only a secondary project and more work is required on optimisation, particularly of the desorption cycle, to enable the recycling of the unreacted fluoride.
- Item: Detecting change in complex process systems with phase space methods (Stellenbosch : University of Stellenbosch, 2006-12) Botha, Paul Jacobus; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. Model predictive control has become a standard for most control strategies in modern process plants. It relies heavily on process models, which might not always be available from fundamentals, but can be obtained from time series analysis. The first step in any control strategy is to identify or detect changes in the system, if present. The detection of such changes, known as dynamic changes, is the main objective of this study. In the literature a wide range of change detection methods has been developed and documented. Most of these methods assume some prior knowledge of the system, which is not the case in this study. Furthermore, a large number of change detection methods based on process history data assume a linear relationship between process variables with some stochastic influence from the environment. These methods are well developed, but fail when applied to the nonlinear dynamic systems on which this study focuses. A large number of the methods designed for nonlinear systems make use of statistics defined in phase space, which led to the method proposed in this study. The correlation dimension is an invariant measure defined in phase space that is sensitive to dynamic change in the system. The proposed method uses the correlation dimension as a test statistic with a moving-window approach to detect dynamic changes in nonlinear systems. The proposed method, together with two other dynamic change detection methods, was applied to simulated time series data. The first method considered was a change-point algorithm based on singular spectrum analysis. The second method applied to the data was mutual cross prediction, which utilises the prediction error from a multilayer perceptron network.
After the proposed method was applied to the data, the performance of the three methods was evaluated. Time series data were obtained by simulating three systems with mathematical equations and observing one real process, the electrochemical noise produced by a corroding system. The three simulated systems considered in this study are the Belousov-Zhabotinsky reaction, an autocatalytic process and a predator-prey model. The time series obtained from observing a single variable was considered the only information available from the systems. Before the change detection methods were applied to the time series data, the phase spaces of the systems were reconstructed with time delay embedding. All three methods were able to identify the change in dynamics of the time series data. The change-point detection algorithm did, however, produce haphazard behaviour in its detection statistic, which led to multiple false alarms. This behaviour was probably due to the distribution of the time series data not being normal. The haphazard behaviour reduces the ability of the method to detect changes, which is aggravated by the presence of chaos and instrumental or measurement noise. Mutual cross prediction is a very successful method of detecting dynamic changes and is quite robust against measurement noise. It did, however, require the training of a multilayer perceptron network and additional calculations that were time consuming. The proposed algorithm, using the correlation dimension as a test statistic with a moving-window approach, is very useful in detecting dynamic changes. It produced the best results on the systems considered in this study, with quick and reliable detection of dynamic changes, even in the presence of some instrumental noise. The proposed method with the correlation dimension as test statistic was the only method applied to the real time series data.
Here the method was successful in distinguishing between two different corrosion phenomena. The proposed method with the correlation dimension as test statistic appears to be a promising approach to the detection of dynamic change in nonlinear systems.
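The test statistic used above, the correlation dimension, is conventionally estimated from the Grassberger-Procaccia correlation sum of time-delay-embedded states. A minimal sketch on a synthetic periodic signal (the embedding parameters and radii are chosen for illustration, not taken from the thesis):

```python
import numpy as np

def embed(x, dim, tau):
    """Time-delay embedding: rows are delay vectors (x[t], x[t+tau], ...)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(states, r):
    """Fraction of distinct state pairs closer than r. The slope of
    log C(r) versus log r over small r estimates the correlation
    dimension Dc of the reconstructed attractor."""
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    iu = np.triu_indices(len(states), k=1)
    return np.mean(d[iu] < r)

# A sampled sine embeds as a closed loop, a one-dimensional object:
x = np.sin(np.linspace(0, 16 * np.pi, 800))
states = embed(x, dim=2, tau=25)     # tau ≈ a quarter period
c1 = correlation_sum(states, 0.1)
c2 = correlation_sum(states, 0.4)
slope = np.log(c2 / c1) / np.log(4.0)   # ≈ 1 for a limit cycle
```

In a moving-window scheme, Dc is estimated on successive windows of the series, and a sustained shift in the estimate signals a dynamic change.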
- Item: Detecting change in nonlinear dynamic process systems (Stellenbosch : University of Stellenbosch, 2004-04) Bezuidenhout, Leon Christo; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: As a result of today’s increasingly competitive industrial environment, it has become necessary for production facilities to increase their efficiency. An essential step towards increasing the efficiency of these production facilities is tighter process control. Process control is a monitoring and modelling problem, and improvements in these areas will also lead to better process control. Given the difficulties of obtaining theoretical process models, it has become important to identify models from process data. The irregular behaviour of many chemical processes, which do not seem to be inherently stochastic, can be explained by analysing time series data from these systems in terms of their nonlinear dynamics. Since the discovery of time delay embedding for state space analysis of time series, much effort has been devoted to the development of techniques that extract information by analysing the geometrical structure of the attractor underlying the time series. Nearly all of these techniques assume that the dynamical process in question is stationary, i.e. that the dynamics of the process did not change during the observation period. The ability to detect dynamic changes in processes from process data is crucial to the reliability of these state space techniques. Detecting dynamic changes in processes is also important when using advanced control systems. Process characteristics are always changing, so that model parameters have to be recalibrated, models have to be updated and control settings have to be maintained. More reliable detection of changes in processes will improve the performance and adaptability of process models used in these control systems.
This will lead to better automation and enormous cost savings. This work investigates and assesses techniques for detecting dynamic changes in processes from process data. These techniques include the use of multilayer perceptron (MLP) neural networks, nonlinear cross predictions and the correlation dimension statistic. The change detection techniques are evaluated by applying them to three case studies that exhibit (possibly) nonstationary behaviour. From the research, it is evident that the performance of process models suffers when there are nonstationarities in the data; this can serve as an indication of changes in the process parameters. The nonlinear cross prediction algorithm gives a better indication of possible nonstationarities in the process data, except for instances where the data series is very short. Exploiting the correlation dimension statistic proved to be the most accurate method of detecting dynamic changes. Apart from positively identifying nonstationarity in each of the case studies, it was also able to detect the parameter changes sooner than any other method tested. The way in which this technique is applied also makes it ideal for online detection of dynamic changes in chemical processes.
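The idea behind nonlinear cross prediction can be sketched as follows: fit a predictor on delay vectors from one segment of the series and measure its one-step prediction error on another segment; a jump in error suggests the dynamics changed between segments. Here a simple nearest-neighbour predictor stands in for the MLP models mentioned above, and both signals are invented.

```python
import numpy as np

def cross_prediction_error(train, test, dim=3):
    """One-step cross prediction: each delay vector in `test` is
    predicted by the successor of its nearest neighbour among the
    `train` delay vectors; returns the mean squared error."""
    def delay_vectors(x):
        n = len(x) - dim
        return np.column_stack([x[i:i + n] for i in range(dim)])
    tr, te = delay_vectors(train), delay_vectors(test)
    tr_next, te_next = train[dim:], test[dim:]
    errs = [(tr_next[np.argmin(np.linalg.norm(tr - v, axis=1))] - y) ** 2
            for v, y in zip(te, te_next)]
    return float(np.mean(errs))

rng = np.random.default_rng(0)
sine = np.sin(np.arange(400) * 0.2)
noise = rng.standard_normal(200)
e_same = cross_prediction_error(sine[:200], sine[200:])   # same dynamics
e_diff = cross_prediction_error(sine[:200], noise)        # changed dynamics
# e_same stays small; e_diff is much larger, flagging a change.
```

Computing this error over sliding window pairs turns the idea into a change detection statistic.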
- Item: The development of a simulative hybrid model for optimising the production of a high-carbon ferromanganese furnace (Stellenbosch : University of Stellenbosch, 2009-12) Sundstrom, Ashley William; Eksteen, J. J.; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: A project was initially commenced to optimise the production output of a specific high-carbon ferromanganese furnace. Since operational difficulties were experienced in this furnace and reliable data for the year 2007 were lacking, it was decided that data from a more stable high-carbon ferromanganese furnace should be analysed instead. Three key performance indicators (KPI’s) were selected to give an indication of overall process performance: (1) the total tonnes of high-carbon ferromanganese produced per tonne of feed material, (2) the percentage recovery of manganese to the alloy product, and (3) the alloy:slag ratio. Maximisation of each of these would contribute to the overall improvement of the process. To achieve the objectives of the project, a hybrid model was developed to characterise the production behaviour of the furnace and to optimise the proposed KPI’s. The hybrid model consisted of two modelling branches, viz. equilibrium and dynamic modelling. An equilibrium sub-model was created and its output was then used as input to a dynamic sub-model, which considered not only the effects of thermo-equilibrium interactions, but also the faster-changing electrical dynamics of furnace control. The final modelling step involved genetic optimisation, whereby model variables were manipulated to optimise the proposed KPI’s; in other words, operating conditions were established to improve furnace performance. It was determined that significant improvement in the values of the KPI’s may be expected if the optimised setpoints are implemented on-site.
The existing setpoints for electrical operation should be maintained while the power expended per tonne of alloy should be altered (by tapping more regularly). Specific adjustments to the proportions of the feed recipe should also be made.
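The genetic optimisation step can be sketched generically as follows. The KPI surrogate function, variable bounds and GA settings below are invented for illustration and are not the thesis' actual furnace model.

```python
import random

def genetic_optimise(fitness, bounds, pop_size=40, generations=60, seed=0):
    """Minimal real-valued genetic algorithm: tournament selection,
    blend crossover and Gaussian mutation, maximising `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            w = rng.random()
            child = [w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2)]  # crossover
            child = [min(hi, max(lo, g + rng.gauss(0, 0.1 * (hi - lo))))
                     for g, (lo, hi) in zip(child, bounds)]            # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Invented stand-in for a furnace KPI, peaking at (0.6, 150):
def kpi(v):
    return -((v[0] - 0.6) ** 2 + ((v[1] - 150.0) / 100.0) ** 2)

best = genetic_optimise(kpi, bounds=[(0.0, 1.0), (100.0, 200.0)])
# best approaches the optimum (0.6, 150) of the toy KPI surface.
```

In the study, the fitness would be evaluated through the hybrid (equilibrium plus dynamic) furnace model, and the decision variables would be the operating setpoints.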
- Item: Development of an acoustic classification system for predicting rock structural stability (Stellenbosch : Stellenbosch University, 2015-03) Brink, Stefan; Aldrich, C.; Dorfling, Christie; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering. ENGLISH ABSTRACT: Rock falls are the cause of the majority of mining-related injuries and fatalities in deep tabular South African mines. The standard process of entry examination is performed before working shifts and after blasting to detect structurally loose rocks. This process is performed by a miner using a pinch bar to ‘sound’ a rock by striking it and making a judgement based on the frequency response of the resultant sound. The Electronic Sounding Device (ESD) developed by the CSIR aims to assist in this process by performing a concurrent prediction of the structural state of the rock based on the acoustic waveform generated in the sounding process. This project aimed to identify, develop and deploy an effective classification model to be used on the ESD to perform this assessment. The project was undertaken in three main stages: the collection of labelled acoustic samples from working areas; the extraction of descriptive features from the waveforms; and the competitive evaluation of suitable classification models. Acoustic samples of the sounding process were recorded at the Driefontein mine operation by teams of Gold Fields employees. The samples were recorded in working areas on each of the four reefs that were covered by the shafts of the mine complex. Samples were labelled as ‘safe’ or ‘unsafe’ to indicate an expert’s judgement of the rock’s structural state. A laboratory-controlled environment was also created to provide a platform from which to collect acoustic samples with objective labelling.
Three sets of features were extracted from the acoustic waveforms to form a descriptive feature dataset: four statistical moments of the frequency distribution of the waveform; the average energy contained in 16 discrete frequency bands; and 12 Mel-frequency cepstral coefficients (MFCCs). Classification models from four model families were competitively evaluated for best accuracy in predicting structural states. The models evaluated were k-nearest neighbours, self-organising maps, decision trees, random forests, logistic regression, neural networks, and support vector machines with radial basis function and polynomial kernels. The sensitivity of the models, i.e. their ability to avoid predicting a ‘safe’ status when the rock mass was actually loose, was used as the critical performance measure. A single-hidden-layer feed-forward neural network with 15 nodes in the hidden layer and a sigmoid activation function was found to be best suited for acoustic classification on the ESD. Additional feature selection was performed to identify the optimised form of the model. The final model was successfully implemented on the ESD platform.
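The first feature set above (statistical moments of the waveform's frequency distribution) can be sketched by treating the power spectrum as a probability distribution over frequency. The test tone and sampling rate are invented; this is one common reading of "spectral moments", not necessarily the thesis' exact definition.

```python
import numpy as np

def spectral_moments(wave, fs):
    """First four moments of the power-spectrum distribution of a
    waveform: spectral centroid, spread, skewness and kurtosis."""
    power = np.abs(np.fft.rfft(wave)) ** 2
    freqs = np.fft.rfftfreq(len(wave), 1.0 / fs)
    p = power / power.sum()                 # spectrum as a distribution
    centroid = np.sum(freqs * p)
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * p))
    skew = np.sum(((freqs - centroid) / spread) ** 3 * p)
    kurt = np.sum(((freqs - centroid) / spread) ** 4 * p)
    return centroid, spread, skew, kurt

fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
rng = np.random.default_rng(0)
wave = np.sin(2 * np.pi * 440 * t) + 0.05 * rng.standard_normal(len(t))
centroid, spread, skew, kurt = spectral_moments(wave, fs)
# The centroid lies near the 440 Hz tone despite the added noise.
```

Band energies and MFCCs would extend this same frequency-domain representation with band integration and a Mel-scale filterbank respectively.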
- ItemDiagnostic monitoring of dynamic systems using artificial immune systems(Stellenbosch : University of Stellenbosch, 2006-12) Maree, Charl; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering.The natural immune system is an exceptional pattern recognition system based on memory and learning that is capable of detecting both known and unknown pathogens. Artificial immune systems (AIS) employ some of the functionalities of the natural immune system in detecting change in dynamic process systems. The emerging field of artificial immune systems has enormous potential in the application of fault detection systems in process engineering. This thesis aims firstly to familiarise the reader with the various current methods in the field of fault detection and identification. Secondly, the notion of artificial immune systems is introduced and explained. Finally, the thesis investigates the performance of AIS on data gathered from simulated case studies, both with and without noise. Three different methods of generating detectors are used to monitor various processes for anomalous events: (1) random generation of detectors, (2) convex hulls, and (3) the hypercube vertex approach. It is found that random generation provides a reasonable rate of detection, while convex hulls fail to achieve the required objectives. The hypercube vertex method achieved the highest detection rate and lowest false alarm rate in all case studies. The hypercube vertex method originates from this project and is the recommended method for use with real-valued systems, at least those with a small number of variables. It is found that, in some cases, AIS are capable of perfect classification, where 100% of anomalous events are identified and no false alarms are generated. Noise has, as expected, some effect on the detection capability in all case studies.
A comparison of the computational cost of the various methods showed that the hypercube vertex method was more expensive than the other methods researched. This increased computational cost does not, however, exceed reasonable confines, and the hypercube vertex method therefore remains the chosen method. The thesis concludes by considering the performance of AIS against the comparative criteria for diagnostic methods. It is found that AIS compare well with current methods and that, in certain cases, some of the limitations of those methods are overcome and their abilities surpassed. Recommendations are made for future study in the field of AIS. Furthermore, the use of the hypercube vertex method is highly recommended in real-valued scenarios such as process engineering.
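A minimal sketch of how vertex-based detectors might operate is given below; it is one plausible reading of the hypercube vertex approach, not the thesis's algorithm. Detectors sit at the 2^n vertices of the unit hypercube after the data have been scaled to [0, 1]^n, and an observation is flagged when it lies closer to a detector than to any known-normal (‘self’) point. The decision rule and the toy data are assumptions.

```python
import itertools
import numpy as np

def hypercube_detectors(n_vars):
    """Place detectors at the 2^n vertices of the unit hypercube
    (one plausible reading of the hypercube vertex approach)."""
    return np.array(list(itertools.product([0.0, 1.0], repeat=n_vars)))

def is_anomalous(x, self_data, detectors):
    """Flag x if it lies closer to a vertex detector than to any
    known-normal ('self') observation -- an assumed decision rule."""
    d_self = np.min(np.linalg.norm(self_data - x, axis=1))
    d_det = np.min(np.linalg.norm(detectors - x, axis=1))
    return d_det < d_self

rng = np.random.default_rng(0)
normal = rng.uniform(0.4, 0.6, size=(200, 2))   # healthy operation near centre
det = hypercube_detectors(2)                    # 4 detectors for 2 variables
```

The 2^n growth in detector count is consistent with the thesis's caveat that the method suits systems with a small number of variables.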
- ItemThe diagnostic monitoring of the acoustic emission from a laboratory ball mill(Stellenbosch : Stellenbosch University, 1999-12) Theron, Douglas Arnoldus; Aldrich, C.; Weber, O. M.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: The harsh interior environment of mills makes on-line monitoring of these grinding systems difficult. Not only are conventional contact sensors expensive, but the nature of the grinding process makes their application impractical. Unfortunately, few accurate quantitative measures are in place in industry to describe or assist in the operation and diagnosis of ball mills. In the South African context, operators learn to control the mill based on a priori knowledge of the system gathered from years of process experience. It is common knowledge in industry that these operators associate the sound emission from the system with certain process conditions, and adjust the mill set points to obtain optimal grinding conditions. Unfortunately, the high turnover of manpower in the mining industry has led to a drain of knowledge from many operations, leading to a loss of valuable control information. In this work the acoustic emission from a ball mill was studied using a laboratory ball mill, acoustic microphones and a personal computer equipped with a sound card. The mill signal was recorded for a series of batch experiments. These consisted of single-parameter experiments in which parameters such as percentage filling, mill speed, percentage water and percentage charge mass were varied, while keeping all other parameters constant. A second series of experiments was conducted with two platinum ore types, namely UG2 and Merensky, to study the influence of changing particle size on the acoustic emission from the mill. The acoustic signal was transformed from the time domain into the frequency domain using Welch's averaged periodogram method.
In this way the power spectral density function for each acoustic sample was obtained and used as the basis for further data analysis. The structure of the data was investigated with a Sammon map obtained from the power spectral density data. This method confirmed that specific conditions in the mill each had a unique fingerprint which enabled differentiation of the acoustic information. Feature vectors were obtained by principal component analysis of the power spectral density function extracted from the original mill signal. These feature vectors were used for the modelling of different data sets. Linear regression was applied to the single-parameter experiments, yielding modelling results with r² values above 0.95. With the platinum-ore data, both linear regression and feed-forward neural networks were used for modelling. However, the linear regression model was unable to predict the ore particle size from the acoustic data. The non-linear neural network models achieved accurate particle size predictions for both ore types on both known and unknown validation data sets, with r² values greater than 0.93 for the test data and 0.97 for the training data.
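Welch's averaged periodogram, used above to obtain the power spectral density, can be sketched in a few lines of NumPy: the signal is split into 50%-overlapping Hann-windowed segments whose periodograms are averaged. The segment length and scaling below are illustrative choices, not those used in the thesis.

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Welch's averaged periodogram: average the periodograms of
    50%-overlapping Hann-windowed segments. Density scaling is used."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * np.sum(win**2)
    segments = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psds = [np.abs(np.fft.rfft(win * s)) ** 2 / scale for s in segments]
    psd = np.mean(psds, axis=0)
    psd[1:-1] *= 2                      # fold in negative frequencies
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

fs = 1024
t = np.arange(4 * fs) / fs
f, p = welch_psd(np.sin(2 * np.pi * 100 * t), fs)   # peak expected at 100 Hz
```

In the study, the resulting PSD vectors were then compressed by principal component analysis before modelling.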
- ItemEstimated environmental risks of engineered nanomaterials in Gauteng.(Stellenbosch : University of Stellenbosch, 2011-02-28) Nota, Nomakhwezi Kumbuzile Constance; Musee, N.; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering.Please refer to full text for abstracts
- ItemEstimating particle size of hydrocyclone underflow discharge using image analysis(Stellenbosch : Stellenbosch University, 2014-04) Uahengo, Foibe Dimbulukwa Lawanifwa; Aldrich, C.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Hydrocyclones are stationary machines that separate materials by centrifugal action and are widely used in chemical engineering and mineral processing industries. Their design and operation, compact structure, low running costs and versatility all contribute to their applications in liquid clarification, slurry thickening, solid washing and classification. With any of these operations, the overall profitability of the process relies on the effective control of the process equipment. However, in practice, hydrocyclones are difficult to monitor and control, owing to the complexity and difficulty of measuring internal flows in the equipment. Several studies have indicated that hydrocyclone underflow images can be used to monitor process conditions. The research described in this thesis considers the use of image analysis to monitor particle size and solids concentration in the underflow discharge of a hydrocyclone. The experimental work consisted of laboratory and industrial-based case studies. The laboratory cyclone used was a 76 mm general laboratory cyclone. A Canon EOS 400D digital camera was used for the underflow imaging. Image features such as pixel intensity values, underflow discharge width and grey level co-occurrence matrix (GLCM) features were extracted from the images using MATLAB Toolbox software. Linear discriminant analysis (LDA) and neural network (NN) classification models were used to discriminate between different PGM ore types based on features extracted from the underflow of the hydrocyclone. Likewise, multiple linear regression and neural network models were used to estimate the underflow solids content and mean particle size in the hydrocyclone underflow.
The LDA model could predict the PGM ore types with 61% reliability, while the NN model could do so with a statistically similar 62% reliability. The multiple linear regression models could explain 56% and 40% of variance in the mean particle size and solids content respectively. In contrast, the neural network model could explain 67% and 45% of the variance of the mean particle size and solids content respectively. For the industrial system, a 100% correct classification was achieved with all methods. However, these results are regarded as unreliable, owing to the insufficient data used in the models.
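For reference, a grey-level co-occurrence matrix of the kind used among the image features above can be sketched as below. The single pixel offset, the level count and the toy image are illustrative; the thesis would have used the MATLAB implementation with its own settings and several derived statistics.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy),
    normalised to a joint probability table. Illustrative only."""
    g = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[image[y, x], image[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_contrast(g):
    """One common GLCM texture statistic: intensity contrast."""
    i, j = np.indices(g.shape)
    return np.sum(g * (i - j) ** 2)

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
g = glcm(img, levels=4)   # co-occurrence of horizontally adjacent levels
```

Statistics such as contrast, energy and homogeneity computed from such matrices would then feed the LDA, regression and neural network models described above.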
- ItemEstimation of concentrate grade in platinum flotation based on froth image analysis(Stellenbosch : University of Stellenbosch, 2010-12) Marais, Corne; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Flotation is an important processing step in the mineral processing industry wherein valuable minerals are extracted. Flotation is a difficult process to control owing to its complexity, meaning that the reversal of a series of changes will not necessarily bring the process back to its original state. Expert knowledge is incorporated in flotation control through operator experience and intervention, which is subject to many challenges, creating the need for improvement in control. The performance of a flotation cell is often determined by evaluating froth appearance. The application of image analysis to capture, evaluate and monitor froth appearance offers multiple benefits, such as consistent and reliable froth appearance evaluation. The objective of this study was to conduct a laboratory study for the collection of froth images with the purpose of evaluating the feasibility of using image information to predict platinum froth grade. Laboratory test work was performed according to a fractional factorial experimental design. Six variables were considered: air flowrate, pulp level and collector, activator, frother and depressant dosages. The laboratory study results were quantified by assay analysis. Analysis of variance revealed that only pulp height and collector addition had a significant effect on flotation performance. Data pre-processing revealed information regarding feature correlations and variance contributions. Data analysis from captured images achieved reliable froth grade predictions using random forest classification and artificial neural network (ANN) regression techniques.
Random forest classification accuracies of 86.8% and 75.5% were achieved for the following respective datasets: image data of each individual experiment (average of all experiments) and all image data. The applied ANN models achieved R² values of 0.943 and 0.828 for the same two datasets. An industrial case study was done wherein a series of step changes in air flowrate was made on a specific flotation cell. The limited industrial case study results supported the laboratory study results. Multiple linear regression performed very well, reaching R² values of up to 0.964. Neural networks performed slightly better, with R² values of up to 0.997. Based on the findings, the following main conclusions were drawn from this study: - Reliable predictions using classification and regression models on image data were proved possible in concept by the laboratory study, and supported by results from an industrial case study on a narrow system. The following main recommendations were made for further investigation: - Research over a larger range of operating conditions is needed to find a more comprehensive solution. - Investigations should be conducted to determine hardware requirements and specifications in terms of minimum resolution, lighting requirements, sampling frequency and data storage. Software requirements, specifications and maintenance challenges should also be investigated for implementation purposes once a more comprehensive solution has been found. - Strategies in terms of camera placement and model building will need to follow, giving special attention to a strategy to handle ore composition change.
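The R² statistic quoted for the regression models above can be computed from an ordinary least-squares fit as in the sketch below. The synthetic feature matrix merely stands in for image features and is purely illustrative.

```python
import numpy as np

def fit_r2(X, y):
    """Ordinary least squares with an intercept, returning the coefficient
    of determination R^2 -- the statistic quoted for the regression models."""
    A = np.column_stack([np.ones(len(X)), X])     # design matrix with bias
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares solution
    resid = y - A @ coef
    ss_res = np.sum(resid**2)                     # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)          # total sum of squares
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))   # stand-in for extracted froth image features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
r2 = fit_r2(X, y)               # close to 1 for this low-noise toy data
```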
- ItemAn experimental study of slag foaming(Stellenbosch : Stellenbosch University, 2002-12) Stadler, S. A. C. (Susanna Aletta Carolina); Aldrich, C.; Eksteen, J. J.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH ABSTRACT: Slag foaming occurs in several pyrometallurgical processes. These processes include steelmaking in basic oxygen steelmaking furnaces and electric arc furnaces (EAF) as well as various non-ferrous operations like sulphide smelting/converting and base metal slag cleaning. Although slag foaming in steelmaking processes has been extensively researched, little attention has been given to slag foaming in non-ferrous operations. Slag foaming phenomena are complex because the system often consists of three or more phases. The objectives of this study are to review the published work on slag foaming, to obtain through physical modelling an understanding of the principles governing foaming, and to investigate slag foaming phenomena through pyrometallurgical experiments. To achieve these objectives, experiments were carried out with aqueous mixtures at different column sizes, different pore sizes for gas injection and varying liquid depths, and also for high temperature metallurgical slags with varying composition and at different temperatures. Through gas injection, foaming conditions were simulated and the equilibrium foam height was measured for different gas velocities. The following conclusions were drawn: 1. For physical modelling of slag foaming in 3-phase systems the average foam index increases with increasing amounts of solids present in the system. The effect of additional solids in the system is independent of the system geometry. 2. The following conclusions were reached by determining coefficients for an empirical dimensional model fitted to aqueous mixtures: Higher liquid density leads to lower foam index values.
The influence of the liquid viscosity is dependent on the system investigated and may have a positive or negative effect on foaming. The empirical model should only be applied to the property range and geometric set-up for which it was derived, as coefficients may vary greatly for different systems. 3. The influence of solid precipitates on slag foaming can be summarised by noting that small amounts of magnetite stabilise slag foaming, while precipitates of wollastonite and anorthite decreased foaming. The influence of solid precipitates is thought to be related to the density, morphology and degree of surface activity of the solid precipitates. 4. The foam index decreases with increasing basicity due to the lowering of the slag viscosity. This continues until the precipitation of solids starts, after which the foam index once again increases. 5. For increasing "FeO" concentration the foam index will decrease due to lower viscosity, but stronger surface tension depression may lead to increased foam index values at high "FeO" concentrations. 6. Higher foam index values were obtained for slags with lower densities. The empirical relationship observed is Σ ∝ ρ^(-1/3). 7. Higher foam index values were obtained for slags with higher viscosity. The empirical relationship observed is Σ ∝ μ. 8. Higher foam index values were obtained for slags with lower surface tensions. The empirical relationship observed is Σ ∝ σ^(-1). 9. Models derived for the foaming of basic steelmaking slags do not satisfactorily describe the foaming behaviour of acidic slags. 10. The physical properties of the slag influence the foam stabilisation mechanism.
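Taken together, the proportionalities in points 6–8 suggest a combined scaling of the form Σ ∝ μ ρ^(-1/3) σ^(-1), sketched below. The prefactor k is a hypothetical placeholder for illustration, not a coefficient fitted in the thesis.

```python
def foam_index(viscosity, density, surface_tension, k=1.0):
    """Foam index combining the three empirical proportionalities:
    Sigma ∝ mu * rho^(-1/3) * sigma^(-1). The prefactor k is a
    hypothetical placeholder, not a value fitted in the study."""
    return k * viscosity / (density ** (1.0 / 3.0) * surface_tension)
```

Doubling the viscosity doubles the index, while raising the density eightfold halves it, matching the reported trends.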
- ItemExploratory data analysis and empirical modelling of stationary processes by use of genetic programming(Stellenbosch : Stellenbosch University, 1999-12) Chemaly, Timothy Paul; Aldrich, C.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: Enhancing the performance of any process requires a detailed knowledge of the unknown system, with a mathematical model being the most common means of representing this knowledge. The most frequently used statistical techniques assume that any relationships between input and output variables are linear and that the data itself is normally distributed. However, real world systems can be highly non-linear and linear approaches can therefore fail to predict the behaviour of the system accurately. Explicit specification of optimal structure in large non-linear models is often not practical and, as a result, non-parametric methods (kernel regression, artificial neural networks, etc.) are usually employed. Although these models allow accurate representation of complex systems, they can be very difficult to interpret. This research project explores a novel approach to this problem of mathematical modelling which attempts to evolve optimal parametric models, based on the Darwinian mechanism of evolution. This approach, referred to as genetic programming (GP), facilitates development of explicit or implicit models, or any mix of these two extremes, as dictated by the problem, and unlike other methods, it can handle a trade-off between accuracy and interpretability with great ease. During this research, a commercial application (a-GP) was developed, since very few commercial systems are currently available. Some techniques were developed which improved the performance of the original algorithm considerably. For instance, memory demands were decreased by a factor of 5 by utilizing a different implementation model.
Improved convergence and robustness were obtained by using a correlation-based fitness function in conjunction with a correction filter which reduced the sum of the squared errors, at the expense of a more complex model. The evaluation process was expedited by evaluating each tree-like structure as a reverse polish expression, as opposed to a branch-node reduction technique. Additional execution speed was obtained by implementing the algorithm in C++ (an object-oriented compiled language), which is significantly faster than the original LISP (an interpreted language) implementation. The newly improved algorithm, a-GP, was applied to four industrial data sets and the results were compared against other methods such as standard genetic programming, multilayer perceptron neural networks and linear regression. It was found that a-GP outperformed standard genetic programming on all four case studies, while improving on neural networks on half of the runs. The evolved models tended to be complex. This could be attributed to the lack of parameter estimation, which the genetic programming algorithm tried to compensate for by evolving complex tree structures that it used to approximate the parameters. As a data visualization tool, a-GP was applied to four benchmark data sets used extensively in the literature. The results acquired with a-GP compared favourably with those obtained by other methods, with the additional benefit that a-GP was able to evolve simple mapping functions which clearly indicated how the variables related to the structure. Additionally, the algorithm was applied to the mapping of two industrial processes. The results showed distinct clustering tendencies within the data, indicating the different operating regimes of the processes under investigation.
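The reverse-polish evaluation described above can be illustrated with a small stack-based interpreter: each GP individual is stored as a postfix token list and evaluated without walking a tree. The token format and operator set here are assumptions for illustration, not the thesis's encoding.

```python
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def eval_rpn(tokens, variables):
    """Evaluate a GP individual stored in reverse-polish (postfix) order
    with a stack, avoiding recursive tree traversal. Illustrative only."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()   # operands in original order
            stack.append(OPS[tok](a, b))
        elif isinstance(tok, str):
            stack.append(variables[tok])      # terminal: input variable
        else:
            stack.append(tok)                 # terminal: numeric constant
    return stack[0]

# (x + 2) * y  in postfix:  x 2 + y *
result = eval_rpn(['x', 2, '+', 'y', '*'], {'x': 3.0, 'y': 4.0})
```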
- ItemFlow behavior, mixing and mass transfer in a Peirce-Smith converter using physical model and computational fluid dynamics(Stellenbosch : University of Stellenbosch, 2011-03) Chibwe, Deside Kudzai; Akdogan, G.; Aldrich, C.; University of Stellenbosch. Faculty of Engineering. Dept. of Process Engineering.Please refer to full text to view abstract.
- ItemFroth texture extraction with deep learning(Stellenbosch : Stellenbosch University, 2018-03) Horn, Zander Christo; Auret, Lidia; Aldrich, C.; Herbst, B. M.; Stellenbosch University. Faculty of Engineering. Dept. of Process Engineering.ENGLISH SUMMARY: Soft-sensors are of interest in mineral processing and can replace slower or more expensive sensors by using existing process sensors. Sensing process information from images has been demonstrated successfully, but performance is dependent on the feature extractors used. Textural features which utilise spatial relationships within images are preferred due to greater resilience to changing imaging and process conditions. Traditional texture feature extractors require iterative design and are sensitive to changes in imaging conditions. They may have many hyperparameters, leading to slow optimisation. Robust and accurate sensing is a key requirement for mineral processing, making current methods of limited potential under realistic industrial conditions. A platinum froth flotation case study was used to compare traditional texture feature extractors with a proposed deep learning feature extractor: convolutional neural networks (CNNs). Deep learning applies artificial neural networks with many hidden layers and specialised architectures for powerful correlative performance through automated training. The structure of the input data is learned inherently during training, with only a limited number of hyperparameters. However, deep learning methods risk overfitting with small datasets, which must be mitigated. A CNN classifier and a framework for unbiased comparison between feature extractors were developed for predicting high to low grade classes of platinum in flotation froth images. CNNs can perform all the functions of a soft-sensor, but this may bias performance comparison. Instead, features were extracted from hidden layers in CNNs and fed into a traditional soft-sensor.
This ensured performance measurements were unbiased across all feature extractors. With a full factorial experiment, the following CNN hyperparameters were evaluated: batch size, number of convolutional filters, and convolutional filter size. Accuracy of grade classification was used to score feature extractors. The following reference texture feature extractors were compared to CNNs: Local Binary Patterns, Grey-Level Co-occurrence Matrices, and Wavelets. The impact of spectral features (bulk image features such as average colour) was also evaluated, as CNNs can also use spectral image properties to create features, unlike traditional texture extractors. Extractors were tested with input resolutions from 16x16 to 128x128 with two soft-sensor models: Linear Discriminant Analysis, and k-Nearest Neighbour classifiers. Optimal grade classification accuracies were: CNN – 96.5%, LBP – 100%, GLCM – 73.7%, Wavelets – 98.3%, and Spectral – 98.4%. Training CNNs to extract features was successful with robust results regardless of hyperparameters selected. The only statistically significant differences obtained during training were that smaller batch size and smaller input resolution gave superior training performance. Results were found to be reproducible for all models. Analysing learned CNN features indicated both textural and spectral features were utilised. Overall results showed spectral features gave good classification performance, potentially adding to CNN performance. CNNs showed comparable performance to other texture feature extractors at all resolutions. This proof of concept implementation shows promise for deep learning methods in mineral processing applications. The resilience of CNNs to changes in imaging and process conditions could not be evaluated due to limited data in the case study. Future work with deep learning methods, while promising, will require larger datasets which are more representative of a variety of process conditions.
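The soft-sensor stage downstream of each feature extractor can be as simple as the k-nearest-neighbour classifier sketched below. The two synthetic ‘grade’ clusters are stand-ins for real feature vectors, and k is an arbitrary illustrative choice.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Minimal k-nearest-neighbour classifier of the kind used as the
    common soft-sensor model downstream of each feature extractor."""
    d = np.linalg.norm(train_X - x, axis=1)      # distances to all samples
    nearest = train_y[np.argsort(d)[:k]]         # labels of k nearest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]               # majority vote

rng = np.random.default_rng(2)
low = rng.normal(0.0, 0.3, size=(30, 2))    # stand-in 'low grade' features
high = rng.normal(2.0, 0.3, size=(30, 2))   # stand-in 'high grade' features
X = np.vstack([low, high])
y = np.array([0] * 30 + [1] * 30)
pred = knn_predict(X, y, np.array([1.9, 2.1]))
```

Holding this downstream model fixed is what makes the accuracy comparison across feature extractors unbiased.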