Both discriminative and generative models can learn parameters for a classification task, but they rest on entirely different mechanisms and serve different purposes. A discriminative model aims to correctly separate observations according to the classes they belong to: it concentrates on finding the decision boundary between classes, which is exactly what most hard, supervised classification settings require. Examples of discriminative models include SVMs, Decision Trees, and Logistic Regression.
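To make the idea of a decision boundary concrete, the following sketch trains a minimal logistic regression by gradient descent on hypothetical two-dimensional toy data (the data, learning rate, and iteration count are illustrative assumptions, not part of the original text). The model learns a weight vector and bias that define the separating line directly, without modeling how the data itself was generated.

```python
import numpy as np

# Hypothetical toy data: two roughly separable 2-D classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, size=(50, 2)),
               rng.normal(+2, 1, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Logistic regression models p(y | x) directly.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted p(y=1 | x)
    grad_w = X.T @ (p - y) / len(y)         # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# The learned decision boundary is the line w·x + b = 0;
# points are classified by which side of it they fall on.
preds = (X @ w + b > 0).astype(int)
accuracy = np.mean(preds == y)
```

Note that the fitted parameters describe only the boundary between the classes, not the classes themselves: nothing in `w` and `b` lets us sample a new point that "looks like" class 0 or class 1.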
Rather than just finding a decision boundary, generative models can actually generate new instances of data by sampling from the class distributions whose parameters were estimated during training. Because they model the patterns within the data, generative models describe the data generation process rather than merely classifying observations. They also have the advantage of handling missing data by marginalizing over the missing values, which most discriminative models cannot do. Examples of generative models used for classification include Naive Bayes and Latent Dirichlet Allocation.
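A Gaussian Naive Bayes classifier illustrates this contrast. The sketch below (hypothetical toy data; the Gaussian class-conditional assumption is the usual Naive Bayes choice, not something stated in the text) estimates a mean, variance, and prior per class, classifies by comparing log-posteriors, and then uses the same fitted parameters to generate a brand-new sample from one of the class distributions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical toy data drawn from two Gaussian classes.
X = np.vstack([rng.normal(0, 1, size=(60, 2)),
               rng.normal(4, 1, size=(60, 2))])
y = np.array([0] * 60 + [1] * 60)

# "Training" = estimating the parameters of each class distribution:
# per-class feature means, variances, and the class prior p(y=c).
params = {}
for c in (0, 1):
    Xc = X[y == c]
    params[c] = (Xc.mean(axis=0), Xc.var(axis=0), len(Xc) / len(X))

def log_posterior(x, c):
    mu, var, prior = params[c]
    # Naive Bayes: features are assumed conditionally
    # independent given the class, so log-likelihoods add up.
    log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return log_lik + np.log(prior)

def classify(x):
    return max((0, 1), key=lambda c: log_posterior(x, c))

preds = np.array([classify(x) for x in X])
accuracy = np.mean(preds == y)

# Because we modeled p(x | y), we can also *generate* a new
# instance of class 0 by sampling from its fitted distribution.
mu0, var0, _ = params[0]
new_sample = rng.normal(mu0, np.sqrt(var0))
```

The same `params` dictionary serves both purposes: comparing posteriors classifies an observation, while sampling from a class's fitted Gaussian produces new data, which is the capability a purely discriminative boundary cannot offer.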