Contents
- 1 Introduction
- 2 What is Multinomial Logistic Regression?
- 3 When to use Multinomial Logistic Regression?
- 4 Training Multinomial Logistic Regression
- 5 Binary vs Multinomial Logistic Regression
- 6 Assumptions for Multinomial Logistic Regression
- 7 Advantages of Multinomial Logistic Regression
- 8 Disadvantages of Multinomial Logistic Regression
- 9 Applications of Multinomial Logistic Regression
- 10 Multinomial Logistic Regression in Python
- 11 Conclusion
Introduction
One of the most common machine learning tasks is classification, which assigns data points to labels. The simplest form is binary classification, which distinguishes between two classes, but real-world problems often involve more than two. This is where Multinomial Logistic Regression (MLR) comes in: it extends binary logistic regression to predict three or more classes.
This article covers the formulation of multinomial logistic regression, when to use it, how the model is trained, its advantages and disadvantages, and its applications in machine learning.
What is Multinomial Logistic Regression?
Multinomial Logistic Regression extends binary logistic regression to classification problems with more than two classes. Whereas binary logistic regression uses the sigmoid function to produce a single probability between 0 and 1 for each sample, multinomial logistic regression uses the softmax function to produce a probability distribution over all classes.
In other words, binary logistic regression assigns an input to one of two groups based on a single probability, while multinomial logistic regression computes a probability for every class and assigns the data point to the class with the highest predicted probability.
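As a minimal illustration, here is a NumPy sketch of the softmax function (the function name and the example scores are hypothetical, chosen only for demonstration):
import numpy as np

def softmax(scores):
    # Subtract the max score for numerical stability, then exponentiate
    exp_scores = np.exp(scores - np.max(scores))
    # Normalize so the class probabilities sum to 1
    return exp_scores / exp_scores.sum()

# Example: linear scores for three classes
scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs)        # approximately [0.659 0.242 0.099]
print(probs.sum())  # 1.0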
When to use Multinomial Logistic Regression?
Multinomial Logistic Regression applies when:
- The target variable has multiple (three or more) categories.
- The log-odds of the outcome are approximately linear in the independent variables.
- There is little or no multicollinearity among the independent variables.
Common use cases are:
- Text Classification: Classify news articles, emails, and tweets into different categories.
- Image Classification: Classifying photographs, such as identifying animal species in wildlife images.
- Customer Segmentation: Segmenting clients by buying habits.
- Disease Diagnosis: Classifying illnesses by medical data or symptoms.
Training Multinomial Logistic Regression
To train a multinomial logistic regression model, the parameter vector 𝛽𝑗 must be estimated for each class 𝑗. Gradient descent iteratively adjusts these parameters to minimize the cost function (the cross-entropy loss).
Steps in the Training Process:
- Initialization: Set the weights 𝛽𝑗 for each class, typically to zeros or small random values.
- Compute Predictions: For each training sample, generate linear scores for all classes and use the softmax function to predict probabilities.
- Compute the Cost Function: Use predicted probability and class labels to calculate cross-entropy loss.
- Gradient Computation: Calculate the loss function gradients with respect to the model parameters. The gradient is calculated for each class 𝑗 and updates weights.
- Update Weights: To reduce loss, update weights using an optimization process like gradient descent.
- Repeat: Iterate until the weights converge, i.e., no longer change appreciably between updates (a minimal training sketch follows this list).
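The following is a minimal NumPy sketch of this training loop on synthetic data, assuming one-hot encoded labels and plain batch gradient descent; all names, the learning rate, and the iteration count are illustrative, not a production implementation:
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 100, 4, 3

# Synthetic data: feature matrix X and integer class labels y
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
Y = np.eye(n_classes)[y]  # one-hot encoded labels

# Step 1: initialize weights to zeros (one column per class)
W = np.zeros((n_features, n_classes))

lr = 0.1
for step in range(500):
    # Step 2: linear scores and softmax probabilities
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)

    # Step 3: cross-entropy loss, -mean of sum(Y * log(P)) per sample
    loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))

    # Step 4: gradient of the loss with respect to W
    grad = X.T @ (P - Y) / n_samples

    # Step 5: gradient descent update
    W -= lr * grad

print("final loss:", loss)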
Binary vs Multinomial Logistic Regression
Binary logistic regression predicts the probability of a single class using the sigmoid function, while multinomial logistic regression handles many classes and predicts a probability for every class, not just one.
Binary Logistic Regression: Using the sigmoid function, it predicts a single probability value:
$$P(y = 1 \mid x) = \frac{1}{1 + e^{-x^T \beta}}$$
Multinomial Logistic Regression: It predicts probabilities with the softmax function, which computes a score for each class and normalizes the scores so that all class probabilities sum to 1.
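Concretely, for $K$ classes with parameter vectors $\beta_1, \dots, \beta_K$, the softmax probability of class $j$ is:
$$P(y = j \mid x) = \frac{e^{x^T \beta_j}}{\sum_{k=1}^{K} e^{x^T \beta_k}}$$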
Assumptions for Multinomial Logistic Regression
Before applying multinomial logistic regression, validate its assumptions (a quick multicollinearity check is sketched after the list):
- Independence of Observations: Dataset samples must be independent.
- No Multicollinearity: The model assumes uncorrelated independent variables (features).
- Linearity in the Log-Odds: The log-odds of the outcome should be linearly related to the independent variables.
- Outcome Categories are Nominal: The dependent variable’s categories have no intrinsic order.
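One way to check the no-multicollinearity assumption is to compute variance inflation factors (VIF); a common rule of thumb flags values above about 5 to 10. This sketch assumes the statsmodels package and uses the Iris features from the example later in this article:
import statsmodels.api as sm
from sklearn.datasets import load_iris
from statsmodels.stats.outliers_influence import variance_inflation_factor

iris = load_iris()
X = sm.add_constant(iris.data)  # add an intercept column for a proper VIF calculation
# VIF for each feature: how well it is explained by the other features
for i, name in enumerate(iris.feature_names, start=1):
    print(f"{name}: {variance_inflation_factor(X, i):.2f}")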
Advantages of Multinomial Logistic Regression
- Interpretability: Like binary logistic regression, multinomial logistic regression is easy to interpret when the number of classes is small.
- Probabilistic Output: It gives probabilities for each class, which can aid confidence-based decision-making.
- Flexible and Efficient: It is flexible and efficient for multiclass classification problems with no inherent order.
- Scalability: It runs efficiently on large datasets with many classes.
Disadvantages of Multinomial Logistic Regression
- Assumptions on Linearity: A linear relationship between independent variables and outcome log-odds is assumed in the model. The model may perform poorly if this assumption is violated.
- Multicollinearity Issues: High feature correlations might cause coefficient instability.
- Requires Large Datasets: Models with more classes have more parameters to estimate, which may require a lot of data.
- Overfitting: Like other models, multinomial logistic regression can overfit if the model is too complex or the training dataset is too small; regularization helps (see the sketch below).
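Overfitting is typically addressed with regularization. In scikit-learn, LogisticRegression applies an L2 penalty by default, with its strength controlled by the inverse-regularization parameter C (smaller C means stronger regularization); the value below is purely illustrative:
from sklearn.linear_model import LogisticRegression

# Stronger L2 regularization than the default C=1.0 to curb overfitting
model = LogisticRegression(penalty='l2', C=0.1, solver='lbfgs', max_iter=200)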
Applications of Multinomial Logistic Regression
- Image Recognition: Classifying images into multiple categories (e.g., identifying different animals, vehicles, etc.).
- Text Classification: Categorizing emails and documents, such as labeling emails as spam or non-spam, or publications as sports, politics, or entertainment.
- Customer Segmentation: Customers are segmented by behavior, preferences, or purchasing history for focused marketing.
- Medical Diagnosis: Classifying patients into different categories based on medical tests or symptoms, such as categorizing diseases based on diagnostic features.
Multinomial Logistic Regression in Python
Here’s an example of implementing Multinomial Logistic Regression in Python using the scikit-learn library. The code performs multinomial logistic regression on a classification task with the well-known Iris dataset.
# Import necessary libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.datasets import load_iris
# Step 1: Load the dataset
iris = load_iris()
X = iris.data # Features (4 features: sepal length, sepal width, petal length, petal width)
y = iris.target # Target (3 classes: Setosa, Versicolor, Virginica)
# Step 2: Split the dataset into training and testing sets (80% train, 20% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Step 3: Feature Scaling (Standardizing the data)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Step 4: Train the Multinomial Logistic Regression model
# With the 'lbfgs' solver, scikit-learn fits a multinomial (softmax) model for
# multiclass targets. Older versions requested this explicitly with
# multi_class='multinomial'; that parameter is deprecated as of scikit-learn 1.5.
model = LogisticRegression(solver='lbfgs', max_iter=200)
model.fit(X_train_scaled, y_train)
# Step 5: Make predictions on the test set
y_pred = model.predict(X_test_scaled)
# Step 6: Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy * 100:.2f}%")
# Print detailed classification report
print("\nClassification Report:")
print(classification_report(y_test, y_pred, target_names=iris.target_names))
# Print confusion matrix
print("\nConfusion Matrix:")
print(confusion_matrix(y_test, y_pred))
# Step 7: Model coefficients (weights)
print("\nModel Coefficients (Weights):")
print(model.coef_)
# Step 8: Predicted probabilities
probabilities = model.predict_proba(X_test_scaled)
print("\nPredicted Probabilities for the First 5 Test Samples:")
print(probabilities[:5])
Output:
Accuracy: 100.00%
Classification Report:
              precision    recall  f1-score   support

      setosa       1.00      1.00      1.00        13
  versicolor       1.00      1.00      1.00        14
   virginica       1.00      1.00      1.00        13

    accuracy                           1.00        40
   macro avg       1.00      1.00      1.00        40
weighted avg       1.00      1.00      1.00        40
Confusion Matrix:
[[13 0 0]
[ 0 14 0]
[ 0 0 13]]
Model Coefficients (Weights):
[[-1.50453153 1.24621042 -2.70439698 2.17959273]
[ 0.05856643 -0.29111717 1.53019917 -0.49234323]
[ 1.4459651 -0.95509324 1.17419781 -1.6872495 ]]
Predicted Probabilities for the First 5 Test Samples:
[[2.18e-10 1.00e+00 0.00e+00]
[4.52e-11 1.00e+00 0.00e+00]
[1.11e-08 1.00e+00 0.00e+00]
[4.67e-13 1.00e+00 0.00e+00]
[4.71e-11 1.00e+00 0.00e+00]]
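As a quick sanity check, continuing the script above, each row returned by predict_proba is a probability distribution and therefore sums to 1:
print(np.allclose(probabilities.sum(axis=1), 1.0))  # True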
Conclusion
Multinomial logistic regression is a powerful and popular approach to multiclass classification. By extending binary logistic regression to handle additional categories, it becomes adaptable for modeling and predicting outcomes in many real-world situations. The method is simple and efficient, but it requires careful attention to its assumptions, to multicollinearity, and to having enough data to avoid overfitting. Thanks to its interpretability and probabilistic output, it remains a standard choice for multiclass classification.