**Related Questions:**

– *What is the loss function for Logistic Regression?*

– *What are the advantages and disadvantages of logistic regression?*

– *What are the major assumptions of logistic regression?*

Logistic regression is a supervised learning algorithm used to predict the probability of a categorical dependent variable *y* from a given set of independent variables. Unlike linear regression, where *y* is continuous, in logistic regression *y* can take only two discrete values: 1 or 0, True or False, Positive or Negative.

One can think of logistic regression as an extension of linear regression to binary classification problems. Let’s look at linear regression first. In linear regression, the response variable is modeled as a linear combination of the predictor variables, and the model output is a continuous numeric value that is unbounded: (-∞, ∞).

For logistic regression, this unbounded continuous value is converted to the bounded range (0, 1) using the logistic function (also known as the sigmoid function). The output of the logistic function represents the probability of the binary outcome.
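This squashing behavior is easy to verify numerically. Below is a minimal sketch of the sigmoid function; the sample inputs are arbitrary and chosen only to show that large negative scores map near 0 and large positive scores map near 1:

```python
import numpy as np

def sigmoid(z):
    """Map an unbounded linear score z to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Large negative scores approach 0, z = 0 maps to exactly 0.5,
# and large positive scores approach 1.
print(sigmoid(np.array([-10.0, 0.0, 10.0])))
```

Note that the output never actually reaches 0 or 1, which is what lets it be interpreted as a probability.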

The plot of the sigmoid function *g(z)* = 1 / (1 + e^(−z)) is shown below:

To estimate the regression coefficients θ, the maximum likelihood estimation (MLE) method is used. MLE involves finding the set of coefficients that maximizes the likelihood of the observed data. The likelihood function is based on the assumption that the outcomes are independent and identically distributed. The recommended video from Prof. Sudeshna Sarkar (included below) describes in detail the logistic loss function and how to estimate the regression coefficients for logistic regression using MLE.
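As a rough illustration of the idea, the sketch below fits coefficients by gradient ascent on the log-likelihood (whose gradient for logistic regression is Xᵀ(y − p)). The toy data, learning rate, and iteration count are all assumptions made for the example, not prescriptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: one feature; the label is mostly 1 when x > 0.
X = rng.normal(size=(200, 1))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)

# Add an intercept column so theta = [bias, weight].
Xb = np.hstack([np.ones((200, 1)), X])
theta = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Maximize the log-likelihood (equivalently, minimize log-loss) by
# gradient ascent; step size 0.1 and 500 iterations are arbitrary choices.
for _ in range(500):
    p = sigmoid(Xb @ theta)            # current predicted probabilities
    gradient = Xb.T @ (y - p)          # gradient of the log-likelihood
    theta += 0.1 * gradient / len(y)

print(theta)  # fitted [intercept, slope]
```

In practice one would use a library solver (e.g. iteratively reweighted least squares or L-BFGS), but the update above is the core of what MLE is doing here.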

Once the regression coefficients have been estimated, the logistic regression model can be used to predict the probability of the outcome (a value between 0 and 1) for new observations by plugging the values of the independent variables into the logistic function. In logistic regression, we use the concept of a *threshold*. If the predicted probability is above the threshold value, the outcome is categorized as 1 (positive class); if it is below the threshold, the outcome is predicted to be 0 (negative class).
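The prediction step can be sketched as follows. The coefficients and the two new observations here are hypothetical, chosen only to show one probability falling on each side of the default 0.5 threshold:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, theta, threshold=0.5):
    """Turn predicted probabilities into class labels via a threshold."""
    probs = sigmoid(X @ theta)
    return (probs >= threshold).astype(int), probs

# Hypothetical fitted coefficients [intercept, slope] and two new
# observations (first column is the intercept term).
theta = np.array([-1.0, 2.0])
X_new = np.array([[1.0, 0.2],
                  [1.0, 1.5]])

labels, probs = predict(X_new, theta)
print(probs, labels)  # first probability < 0.5 -> class 0; second > 0.5 -> class 1
```

The threshold need not be 0.5; it is often tuned to trade off false positives against false negatives for the problem at hand.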

Logistic regression uses regression as its underlying predictive-modeling concept, which is why it is called logistic *regression*; however, since it is used to classify samples, it falls under classification algorithms.

### Video Explanations

In order to build a good understanding of logistic regression, multiple videos are recommended below:

1. **[Recommended]** Start with this short introductory video from Cassie Kozyrkov. This video will help you build an intuition for what logistic regression is, how it works, and how it differs from linear regression (Runtime: 3:32)

2. **[Recommended]** For a thorough understanding of logistic regression, including the math used behind the scenes, watch this course lecture (embedded below) from Prof. Sudeshna Sarkar. The video covers the following (Runtime: 20 mins):

– why do we need the logistic function,

– using Maximum Likelihood as the cost function for logistic regression, and

– how to derive the coefficients for logistic regression using gradient descent

3. **[Recommended]** After you watch the video (embedded above) from Prof. Sudeshna Sarkar, watch this video from Prof. Dmitry Kobak to reinforce your understanding of logistic regression. Additionally, the video demonstrates how to generate non-linear decision boundaries using logistic regression (example starts at 12:00) (Runtime: first 29 mins)

4. [Optional] In this in-class video lecture from Stanford’s graduate course on Machine Learning (CS229), instructor Anand Avati describes logistic regression in detail, including the associated derivations (watch between timestamps 37:30 – 1:13:00)

### Recommended Lecture Series on Regression

Want to learn regression methods in detail? If so, we recommend the following two lecture series. These lectures are thorough, with clear explanations:

1. Regression Series by Prof. Justin Zeltzer from University of Sydney: https://www.youtube.com/playlist?list=PLTNMv857s9WUI1Nz4SssXDKAELESXz-bi

2. Regression Modeling for Health Research from Prof. Mike Marin at University of British Columbia: https://www.youtube.com/playlist?list=PLqzoL9-eJTNBDAG955KrzpduiPCj8-_3m