Briefly describe the architecture of a Recurrent Neural Network (RNN) and how it addresses the shortcomings of traditional Neural Networks.

The Recurrent Neural Network (RNN) is designed to work with data that naturally exists as a sequence, such as time series or text. Because later values in a sequence are often highly correlated with earlier ones, traditional feedforward neural networks are poor choices for this type of data: they treat each input as independent and require a fixed input size, so they cannot exploit the ordering of the sequence.

In a Recurrent Neural Network, at each step of the sequence a neuron receives as input both the information from the current step and the hidden-state activation from the previous step. The network can therefore carry prior values forward in its memory and use them to help predict later values of the sequence. As with regular neural networks, outputs are generated through forward propagation and the weight parameters are updated through backpropagation (unrolled through time). The key difference is that an RNN shares the same parameters across all time steps, rather than initializing separate weights for each position in the sequence.
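The recurrence described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a full implementation: the variable names, sizes, and random initialization are my own choices, and the point is only that the same weight matrices (`W_xh`, `W_hh`) are reused at every step while the hidden state `h` carries information forward.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size, seq_len = 3, 4, 5

# Weights are created ONCE and shared across every step of the sequence.
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # current input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # previous hidden -> hidden
b_h = np.zeros(hidden_size)

xs = rng.normal(size=(seq_len, input_size))  # an example input sequence
h = np.zeros(hidden_size)                    # initial hidden state (the "memory")

hidden_states = []
for x_t in xs:
    # Each step combines the current input with the previous step's activation.
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    hidden_states.append(h)

print(len(hidden_states), hidden_states[-1].shape)  # one hidden state per step
```

Note that the final hidden state depends on every earlier input through the repeated application of `W_hh`, which is exactly how the network "remembers" prior values of the sequence.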