Unsupervised pre-training for fully convolutional neural networks

Wiehman, Stiaan; Kroon, Steve; De Villiers, Hendrik (2016)

CITATION: Wiehman, S., Kroon, S. & De Villiers, H. 2016. Unsupervised pre-training for fully convolutional neural networks. Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech), 30 November-2 December 2016, Stellenbosch, South Africa.

The original publication is available at http://ieeexplore.ieee.org

Conference Paper

Unsupervised pre-training of neural networks has been shown to act as a regularization technique, improving performance and reducing model variance. Recently, fully convolutional networks (FCNs) have shown state-of-the-art results on various semantic segmentation tasks. Unfortunately, no efficient approach is available for FCNs to benefit from unsupervised pre-training. Given the unique property of FCNs to output segmentation maps, we explore a novel variation of unsupervised pre-training specifically designed for FCNs. We extend an existing FCN, called U-net, to facilitate end-to-end unsupervised pre-training and apply it to the ISBI 2012 EM segmentation challenge data set. We perform a battery of significance tests for both equality of means and equality of variances, and show that our results are consistent with previous work on unsupervised pre-training obtained with much smaller networks. We conclude that end-to-end unsupervised pre-training for FCNs adds robustness to random initialization, thus reducing model variance.
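The pre-train-then-fine-tune scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy network (TinyFCN), its layer sizes, the optimizer settings, and the random placeholder data are all illustrative assumptions. The idea is that the same encoder-decoder FCN is first trained end-to-end to reconstruct unlabelled images (the unsupervised stage), and its weights are then reused for supervised fine-tuning on pixel-wise segmentation labels.

# Minimal sketch (assumed PyTorch setup, not the paper's code) of
# end-to-end unsupervised pre-training for an FCN, followed by
# supervised fine-tuning for segmentation.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy encoder-decoder standing in for a U-net-style FCN."""
    def __init__(self, out_channels):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: unsupervised pre-training -- train the whole FCN end-to-end
# to reconstruct unlabelled input patches (an auto-encoder objective).
net = TinyFCN(out_channels=1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
recon_loss = nn.MSELoss()
unlabelled = torch.rand(8, 1, 64, 64)          # placeholder image patches
for _ in range(10):
    opt.zero_grad()
    loss = recon_loss(net(unlabelled), unlabelled)
    loss.backward()
    opt.step()

# Stage 2: supervised fine-tuning -- keep the pre-trained weights, but
# replace the final layer with a freshly initialized 2-class head.
net.decoder[-1] = nn.Conv2d(16, 2, 3, padding=1)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
seg_loss = nn.CrossEntropyLoss()
images = torch.rand(8, 1, 64, 64)              # placeholder labelled images
labels = torch.randint(0, 2, (8, 64, 64))      # placeholder pixel labels
for _ in range(10):
    opt.zero_grad()
    loss = seg_loss(net(images), labels)
    loss.backward()
    opt.step()

Replacing only the final layer means the fine-tuning stage starts from the pre-trained representation rather than a random initialization, which is the effect the paper's variance analysis probes.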

Please refer to this item in SUNScholar by using the following persistent URL: http://hdl.handle.net/10019.1/100503