Browsing by Author "Werle van der Merwe, Andreas"
- Item: Investigating the evolution of modularity in neural networks (Stellenbosch : Stellenbosch University, 2020-03)
  Authors: Werle van der Merwe, Andreas; Van den Heever, David Jacobus; Du Plessis, Stefan
  Affiliation: Stellenbosch University. Faculty of Engineering. Dept. of Mechanical and Mechatronic Engineering.
  ENGLISH ABSTRACT: Neural networks are not inherently interpretable, a direct consequence of their operating principle and the high-dimensional opacity of their internal computations. This interpretability problem is detrimental to reliability, to meaningful human-AI interaction and to the ethics of deployment. It can be approached from the perspective of neural modularity, which frames a modular network as one that contains any number of disjoint subnetworks, and identifies an interpretable modular network as one that groups its internal representations within such subnetworks in an explanatory, task-specific way. This study aims to investigate how neural modularity evolves and how it can benefit interpretability. HyperNEAT under connectivity constraints is the chosen neuroevolutionary method, and the following key points of research are investigated with respect to the evolution of neural modularity: general substrates; a variety of connection cost and novel input competition constraints; HyperNEAT modifications based on CPPN disjoints; and the interaction between lifetime and evolutionary learning with neuron nomination, given the inclusion of a training phase. The results indicate that the connectivity constraints successfully promote the evolution of neural modularity across a variety of tasks on a general substrate, and show that the novel input competition constraints are competitive with the established connection costs as a means of driving the evolution of neural modularity. The HyperNEAT modifications based on CPPN disjoints did not benefit the evolution of neural modularity.
Investigating the interaction between lifetime and evolutionary learning with neuron nomination links greater concurrency between the processes that determine a network’s form and function with higher levels of evolved neural modularity under the connection cost constraints. The interpretability assessment shows that while the evolved networks’ interpretable qualities are task-dependent, two of the connectivity constraints deliver statistically different functional module overlap distributions. This study highlights new possibilities for future research and contributes to the knowledge base on evolving neural modularity by showing that input competition constraints are competitive with connection cost constraints, by examining how the interaction between lifetime and evolutionary learning influences the evolution of neural modularity, and by assessing the interpretable qualities of the evolved networks.
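The connection cost idea referenced in the abstract can be illustrated with a minimal sketch: charge each expressed connection by its wiring length on the substrate and subtract that cost from task fitness, a pressure under which modular wiring tends to evolve. This is a hypothetical illustration only; the function names, the squared-distance cost, and the penalty weight `lam` are assumptions for exposition, not the thesis's implementation.

```python
# Hypothetical sketch of a connection-cost constraint in the spirit of the
# connectivity constraints described above. All names here are illustrative.

def connection_cost(connections):
    """Sum of squared Euclidean lengths of expressed substrate connections.

    `connections` is a list of ((x1, y1), (x2, y2), weight) tuples placing
    neuron pairs on a 2-D substrate, as in HyperNEAT-style encodings.
    """
    cost = 0.0
    for (x1, y1), (x2, y2), w in connections:
        if w != 0.0:  # only expressed (non-zero) connections are charged
            cost += (x2 - x1) ** 2 + (y2 - y1) ** 2
    return cost

def penalised_fitness(task_score, connections, lam=0.1):
    """Single-objective fitness: task performance minus a wiring-cost penalty."""
    return task_score - lam * connection_cost(connections)

# Example: a short local connection is penalised less than a long-range one,
# so selection favours spatially localised (and hence modular) wiring.
local = [((0.0, 0.0), (0.0, 1.0), 0.5)]
distant = [((0.0, 0.0), (3.0, 4.0), 0.5)]
print(penalised_fitness(1.0, local))    # small penalty
print(penalised_fitness(1.0, distant))  # large penalty
```

Under such a penalty, two genomes with equal task performance are ranked by wiring economy, which is the selection pressure the connection cost constraints exploit.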