The Multi-Layer Perceptron (MLP) is an ‘Artificial Neural Network’ (ANN) with one or more hidden layers of perceptrons, in addition to the input and output layers. It is a ‘Feedforward Model’, meaning that information flows from the input nodes in one direction only – forward – eventually reaching the output. In a fully connected network, every unit in one layer is connected to every unit in the next layer. In other words, the outputs of the nodes in one layer become the inputs of the nodes in the following layer. In the context of Deep Learning, a perceptron is usually referred to as a neuron, and a Multi-Layer Perceptron structure is referred to as a Neural Network.
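The forward flow described above can be sketched as a minimal fully connected network in NumPy. The layer sizes (3 inputs, 4 hidden neurons, 2 outputs) and the ReLU activation are illustrative choices, not prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: 3 inputs -> 4 hidden neurons -> 2 outputs.
W1 = rng.standard_normal((3, 4))  # input-to-hidden weights (fully connected)
b1 = np.zeros(4)
W2 = rng.standard_normal((4, 2))  # hidden-to-output weights (fully connected)
b2 = np.zeros(2)

def relu(z):
    return np.maximum(0.0, z)

def forward(x):
    # Information flows strictly forward: input -> hidden -> output.
    h = relu(x @ W1 + b1)  # every input feeds every hidden neuron
    return h @ W2 + b2     # every hidden output feeds every output neuron

x = np.array([1.0, -0.5, 2.0])
y = forward(x)
print(y.shape)
```

Each matrix multiplication realizes the "fully connected" property: every node's output in one layer contributes to every node's input in the next.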