Logistic regression is the task of discriminatively classifying a set of inputs. That is, given a label set of size two, say bikes and cars, the logistic model tries to learn what separates the two classes; wheel count could be a feature here. The model knows about cars and bikes only through the features that distinguish them. Formally, a discriminative model, given a class $c$ and a document $d$, attempts to compute $p(c|d)$ directly.
Logistic regression is the basis of neural networks. A single-layer network with one neuron is exactly logistic regression, where the output is the probability of assigning the "positive" one of two labels to the input. Extending the model to more than two labels gives what is called multinomial logistic regression.
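As a preview of the PyTorch models in the later chapters, here is a minimal sketch of the "one-neuron network" view; the choice of 4 input features is an assumption, borrowed from the example further down:

```python
import torch
import torch.nn as nn

# One linear layer with a single output neuron, followed by a sigmoid,
# is exactly binary logistic regression: z = w . x + b, then sigma(z).
model = nn.Sequential(
    nn.Linear(4, 1),  # 4 input features -> 1 score z (assumed feature count)
    nn.Sigmoid(),     # squash the score into a probability in (0, 1)
)

x = torch.tensor([[1.0, 1.0, 1.0, 1.0]])  # one example with 4 features
prob_positive = model(x)                  # probability of the positive label
print(prob_positive.shape)                # torch.Size([1, 1])
```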
Logistic regression is a supervised learning model for classification. In general terms, the model works with the setup described below.
During training we need a loss function to measure performance; for that purpose we introduce the cross-entropy loss in the next chapter.
Using vector notation, logistic regression trains to optimize the weight vector $w$ and the bias term $b$ in the equation $$ z = w \cdot x + b $$ This setup is the same as for linear regression. As an example, let the weight vector consist of the 4 values $$ w = [2,1,-4,-2] $$ Features 0 and 1 have a positive impact on the score, with feature 0 the largest; features 2 and 3 have a negative impact, with feature 2 the most negative.
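To make this concrete, here is a small sketch of the score computation in NumPy, using the example weights above and the bias $b = 1$ from the worked example further down:

```python
import numpy as np

w = np.array([2.0, 1.0, -4.0, -2.0])  # one weight per feature
b = 1.0                               # bias term (value used in the example below)
x = np.array([1.0, 1.0, 1.0, 1.0])    # a single input with 4 features

# The score is the dot product of weights and features, plus the bias.
z = np.dot(w, x) + b
print(z)  # 2 + 1 - 4 - 2 + 1 = -2.0
```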
Note that $w$ can be viewed as a matrix with one row and as many columns as there are features. Such a matrix acts as what is called a linear transformation: by matrix multiplication it maps a vector of one dimension to a vector of another dimension. In the example above it maps a vector of dimension 4 to a vector of dimension 1, that is, a scalar. Hence for binary logistic regression the model complexity depends on the number of features. In the multinomial approach (described at the bottom of this page and in the last chapter) we have more than 2 labels, and hence the model complexity depends on two parameters: the number of features and the number of labels.
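A sketch of the dimension argument (the 3-label multinomial case is an arbitrary choice for illustration):

```python
import numpy as np

# Binary case: a 1 x 4 weight matrix maps 4 features to a single score.
W_binary = np.array([[2.0, 1.0, -4.0, -2.0]])  # shape (1, 4)
x = np.array([1.0, 1.0, 1.0, 1.0])             # shape (4,)
print(W_binary @ x)                            # shape (1,): one scalar score

# Multinomial case with, say, 3 labels: a 3 x 4 weight matrix maps the
# same 4 features to 3 scores, one per label. Model size now depends on
# both the number of features and the number of labels.
W_multi = np.random.randn(3, 4)
print((W_multi @ x).shape)                     # (3,)
```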
In order to turn the above setup into a classifier we activate it with the sigmoid function: $$ \sigma(z) = \frac{1}{1 + e^{-z}} $$ We have that $\sigma(z) \in (0,1)$. This function has the following important behavior: it is monotonically increasing, $\sigma(0) = 0.5$, large negative scores map close to 0, and large positive scores map close to 1. The value $0.5$ therefore acts as the natural decision threshold.
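A minimal sketch of the sigmoid and the behavior just described:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Monotone, 0.5 at z = 0, close to 0 for very negative z
# and close to 1 for very positive z.
for z in (-5.0, 0.0, 5.0):
    print(z, round(sigmoid(z), 4))
# -5.0 0.0067
#  0.0 0.5
#  5.0 0.9933
```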
So given the example $w$ above, $b = 1$, and $x_1 = [1,1,1,1]$, we have $$ z_1 = w \cdot x_1 + b = 2 + 1 - 4 - 2 + b = -3 + b = -2 $$ and hence $0 \lt \sigma(z_1) \lt 0.5$. So the model assigns the higher probability to the negative label.
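We can verify this arithmetic with a short sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

z1 = -2.0                     # the score computed above: w . x_1 + b
print(round(sigmoid(z1), 4))  # 0.1192, well below the 0.5 threshold,
                              # so the negative label gets the higher probability
```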
The above is an example of a binary logistic model. Binary here means that we have two possible outcomes, for example car or bike, 0 or 1, and so on. In the later chapters we will look at how to create models using PyTorch that are not binary; this approach is called multinomial logistic regression. The approach is the same as for the binary model, though we use softmax instead of sigmoid as the activation. In this way we still get a probability distribution as output.
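A minimal sketch of the softmax activation mentioned here, in plain NumPy (the PyTorch version comes in the later chapters); the 3-label scores are hypothetical values for illustration:

```python
import numpy as np

def softmax(z):
    # Subtracting the max is a standard trick for numerical stability;
    # it does not change the result.
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 0.5, -1.0])  # hypothetical scores for 3 labels
probs = softmax(scores)
print(probs)        # approx. [0.79, 0.18, 0.04]
print(probs.sum())  # 1.0: the outputs form a probability distribution
```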