Abstract
Training a neural network with the backpropagation algorithm is a systematic process for modelling a given set of data. Part of this process involves scaling the input and output datasets presented to the network. Scaling is required because real-world datasets often fall outside the range [0, 1], whereas the network's neurons operate only on values in [0, 1] (a neuron either fires or it does not). Linear scaling is typically used for this purpose, but for certain datasets it can make it difficult for the network to differentiate between values that lie close together. This project implements non-linear median scaling of the training datasets and compares its performance against the linear scaling methodology on a variety of datasets, measuring both speed of learning and subsequent ability to generalize. The results demonstrate that introducing non-linear scaling into backpropagation training yields a measurable, if modest, improvement in performance across the datasets tested.
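To illustrate the problem the abstract describes, the sketch below contrasts ordinary linear (min-max) scaling with one plausible piecewise interpretation of median scaling, in which the dataset's median is mapped to 0.5. This is a hypothetical sketch for illustration only; the `median_scale` function is an assumption, not the exact method used in this project.

```python
import statistics

def linear_scale(data):
    """Standard linear min-max scaling into [0, 1].
    Values that are close together stay compressed together."""
    lo, hi = min(data), max(data)
    return [(x - lo) / (hi - lo) for x in data]

def median_scale(data):
    """Hypothetical piecewise median scaling (an assumption, not the
    paper's exact method): the median maps to 0.5, so whichever half
    of the data is tightly clustered gets spread over half the range."""
    lo, hi = min(data), max(data)
    med = statistics.median(data)
    out = []
    for x in data:
        if x <= med:
            out.append(0.5 * (x - lo) / (med - lo))
        else:
            out.append(0.5 + 0.5 * (x - med) / (hi - med))
    return out

# A dataset with a cluster of close values plus one outlier:
data = [1.0, 1.1, 1.2, 1.3, 100.0]
print(linear_scale(data))  # the four clustered values are crammed near 0
print(median_scale(data))  # the same values are spread across [0, 0.5]
```

With linear scaling, the outlier forces the clustered values into a sliver near 0, which is exactly the differentiation problem the abstract mentions; the median-based mapping spreads the dense half of the data over a wider portion of [0, 1].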