6 Assumptions of Linear Regression: Plots and Solutions (2024)

Regression analysis is crucial in predictive modeling, but merely running a line of code or looking at R² and MSE values isn’t enough. In R, the plot() function generates four diagnostic plots that reveal valuable insights about the data. Unfortunately, many beginners fail to interpret these plots. This article explains the important linear regression assumptions, the fixes for violations, and the significance of these plots. Understanding these concepts can greatly enhance your regression models. Read on to learn all about the assumptions of linear regression.

In this article, you will explore the key assumptions of linear regression, such as linearity, independence, homoscedasticity, and normality, which are essential for valid regression analysis.

Table of contents

  • What are Assumptions in Regression?
  • What are Assumptions of Linear Regression?
  • Important Assumptions of Linear Regression Analysis
  • What Happens When You Violate the Assumptions of Linear Regression?
    • 1. Linear and Additive
    • 2. Autocorrelation
    • 3. Multicollinearity
    • 4. Heteroskedasticity
    • 5. Normal Distribution of error terms
  • Interpretation of Regression Plots
    • 1. Residual vs Fitted Values
    • 2. Normal Q-Q Plot
    • 3. Scale Location Plot
    • 4. Residuals vs Leverage Plot
  • How do you check the assumptions of linear regression before modeling?
  • Conclusion

What are Assumptions in Regression?

Regression is a parametric approach. ‘Parametric’ means it makes assumptions about the data for the purpose of analysis. Because of this parametric side, regression is restrictive in nature: it fails to deliver good results with data sets that don’t fulfill its assumptions. Therefore, for a successful regression analysis, it’s essential to validate these assumptions.

So, how would you check (validate) whether a data set follows all the regression assumptions? You check it using the regression plots (explained below) along with some statistical tests.

What are Assumptions of Linear Regression?

Violations of the assumptions of the linear regression model can lead to biased or inefficient estimates, so it is important to assess and address these violations for accurate and reliable regression results.

The 6 assumptions of linear regression are:

  1. Linearity: The relationship between the dependent and independent variables is linear.
  2. Independence: The observations are independent of each other.
  3. Homoscedasticity: The variance of the errors is constant across all levels of the independent variables.
  4. Normality: The errors follow a normal distribution.
  5. No multicollinearity: The independent variables are not highly correlated with each other.
  6. No endogeneity: There is no relationship between the errors and the independent variables.

Important Assumptions of Linear Regression Analysis

Let’s look at the important assumptions of Linear regression analysis:

  1. There should be a linear and additive relationship between the dependent (response) variable and the independent (predictor) variable(s). A linear relationship implies that the change in the response Y due to a one-unit change in X¹ is constant, regardless of the value of X¹. An additive relationship implies that the effect of X¹ on Y is independent of the other variables.
  2. There should be no correlation between the residual (error) terms. The presence of such correlation is known as autocorrelation.
  3. The independent variables should not be correlated with one another. The presence of such correlation is known as multicollinearity.
  4. The error terms must have constant variance. This is known as homoskedasticity; the presence of non-constant variance is referred to as heteroskedasticity.
  5. The error terms must be normally distributed.

What Happens When You Violate the Assumptions of Linear Regression?

Let’s dive into specific assumptions of linear regression and learn about their outcomes (if violated):

1. Linear and Additive

If you fit a linear model to a non-linear, non-additive data set, the regression algorithm will fail to capture the trend mathematically, resulting in an inefficient model. It will also produce erroneous predictions on an unseen data set.

How to check: Look at the residual vs fitted values plot (explained below). You can also include polynomial terms (X, X², X³) in your model to capture the non-linear effect.
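For illustration, here is a minimal R sketch of this check, assuming a hypothetical data frame df with a response y and a single predictor x; poly(x, 3) is just one way of adding the X, X², X³ terms.

    # Compare a purely linear fit with one that adds polynomial terms
    fit_linear <- lm(y ~ x, data = df)
    fit_poly   <- lm(y ~ poly(x, 3), data = df)   # adds X, X^2, X^3 terms

    # Residuals vs fitted plot for the linear fit; a curved pattern here
    # suggests the linear form is missing a non-linear effect
    plot(fit_linear, which = 1)

    # F-test comparing the two fits; a small p-value favours the polynomial terms
    anova(fit_linear, fit_poly)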

2. Autocorrelation

The presence of correlation in the error terms drastically reduces the model’s accuracy. This usually occurs in time series models, where the next instant depends on the previous one. If the error terms are correlated, the estimated standard errors tend to underestimate the true standard errors.

When this happens, confidence intervals and prediction intervals become narrower. A narrower confidence interval means that a nominal 95% confidence interval has less than a 0.95 probability of containing the actual value of the coefficient. Let’s understand narrow prediction intervals with an example:

For example, suppose the least squares coefficient of X¹ is 15.02 and its standard error is 2.08 (without autocorrelation). In the presence of autocorrelation, the standard error shrinks to 1.20. As a result, the prediction interval narrows from (12.94, 17.10) to (13.82, 16.22).

Also, lower standard errors cause the associated p-values to be lower than they should be. This can make us incorrectly conclude that a parameter is statistically significant.

How to check: Look at the Durbin-Watson (DW) statistic. It must lie between 0 and 4. DW = 2 implies no autocorrelation, 0 < DW < 2 implies positive autocorrelation, and 2 < DW < 4 indicates negative autocorrelation. You can also plot the residuals against time and look for a seasonal or correlated pattern in the residual values.
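As a rough sketch, the DW statistic can be computed in R with dwtest() from the lmtest package, assuming a fitted lm object named fit (a hypothetical model such as fit <- lm(y ~ x, data = df)).

    library(lmtest)                    # provides dwtest()

    dwtest(fit)                        # DW close to 2 suggests no first-order autocorrelation

    # For time-ordered data, a residuals vs time plot helps spot correlated errors
    plot(residuals(fit), type = "l",
         xlab = "Observation order", ylab = "Residual")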

3. Multicollinearity

Multicollinearity occurs when the independent variables show moderate to high correlation with one another. In a model with correlated variables, it becomes a tough task to figure out the true relationship of each predictor with the response variable. In other words, it becomes difficult to find out which variable is actually contributing to the prediction of the response variable.

Moreover, in the presence of correlated predictors, the standard errors tend to increase. With large standard errors, the confidence intervals become wider, leading to less precise estimates of the slope parameters.

Additionally, when predictors are correlated, the estimated regression coefficient of a correlated variable depends on which other predictors are present in the model. If this happens, you may end up incorrectly concluding that a variable strongly or weakly affects the target variable, since dropping one correlated variable from the model changes the estimated coefficients of those that remain. That’s not good!

How to check: You can use a scatter plot to visualize the correlation among variables. You can also use the Variance Inflation Factor (VIF): a VIF value <= 4 suggests no multicollinearity, whereas a value >= 10 implies serious multicollinearity. Above all, a correlation table also serves the purpose.
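A minimal R sketch of these checks, assuming a hypothetical multi-predictor model fit <- lm(y ~ x1 + x2 + x3, data = df) and using vif() from the car package:

    library(car)                       # provides vif()

    cor(df[, c("x1", "x2", "x3")])     # correlation table among the predictors
    vif(fit)                           # VIF >= 10 signals serious multicollinearity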

4. Heteroskedasticity

Heteroskedasticity is the presence of non-constant variance in the error terms. Generally, non-constant variance arises in the presence of outliers or extreme leverage values. These values get too much weight and thereby disproportionately influence the model’s performance. When this phenomenon occurs, the confidence intervals for out-of-sample predictions tend to be unrealistically wide or narrow.

How to check: Look at the residual vs fitted values plot. If heteroskedasticity exists, the plot will exhibit a funnel-shaped pattern (shown in the next section). You can also use the Breusch-Pagan / Cook-Weisberg test or White’s general test to detect this phenomenon.
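Here is a hedged R sketch of these checks, assuming a fitted lm object fit; bptest() is the Breusch-Pagan test from the lmtest package, and ncvTest() from the car package is a score test for non-constant variance in the Cook-Weisberg spirit.

    library(lmtest)
    library(car)

    plot(fit, which = 1)   # a funnel shape in residuals vs fitted hints at non-constant variance
    bptest(fit)            # Breusch-Pagan test
    ncvTest(fit)           # score test for non-constant error variance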

5. Normal Distribution of error terms

If the error terms are not normally distributed, confidence intervals may become too wide or too narrow. Once the confidence intervals become unstable, it is difficult to estimate coefficients based on the minimization of least squares. The presence of a non-normal distribution suggests that there are a few unusual data points that must be studied closely to build a better model.

How to check: You can look at the Q-Q plot (shown below). You can also perform statistical tests of normality such as the Kolmogorov-Smirnov test or the Shapiro-Wilk test.
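A minimal R sketch of these checks, assuming a fitted lm object fit:

    res <- residuals(fit)

    qqnorm(res); qqline(res)                  # points far from the line suggest non-normal errors
    shapiro.test(res)                         # Shapiro-Wilk test (for small to moderate samples)
    ks.test(as.numeric(scale(res)), "pnorm")  # Kolmogorov-Smirnov test against a standard normal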

Interpretation of Regression Plots

Now we know all the important linear regression assumptions and the methods to take care of them in case of violation.

But that’s not the end. You should also know the solutions for tackling violations of these assumptions. In this section, I’ve explained the 4 regression plots along with the methods for overcoming the limitations imposed by these assumptions. In R, all four plots come from calling plot() on a fitted lm object.
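As a minimal sketch, assuming a hypothetical data frame df with a response y and predictors x1 and x2, the four diagnostic plots discussed below can be produced in base R like this:

    fit <- lm(y ~ x1 + x2, data = df)

    par(mfrow = c(2, 2))   # arrange the four plots on one screen
    plot(fit)              # residuals vs fitted, normal Q-Q, scale-location, residuals vs leverage
    par(mfrow = c(1, 1))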

1. Residual vs Fitted Values

[Figure: Residuals vs Fitted values plot]

This scatter plot shows the distribution of residuals (errors) versus fitted values (predicted values). It is one of the most important plots, and everyone must learn to read it. It reveals various useful insights, including outliers. The outliers in this plot are labeled by their observation number, which makes them easy to detect.

There are two major things which you should learn:

  1. If there is any pattern (say, a parabolic shape) in this plot, consider it a sign of non-linearity in the data. It means that the model doesn’t capture non-linear effects.
  2. If a funnel shape is evident in the plot, consider it a sign of non-constant variance, i.e. heteroskedasticity.

Solution: To overcome the issue of non-linearity, you can apply a non-linear transformation to the predictors, such as log(X), √X, or X². To overcome heteroskedasticity, a possible way is to transform the response variable, for example to log(Y) or √Y. You can also use the weighted least squares method to tackle heteroskedasticity.
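Below is a rough R sketch of both remedies, assuming strictly positive y and x in a hypothetical data frame df; the 1/fitted² weights are only one of several reasonable choices for weighted least squares.

    # Non-linear transformations of the response and the predictor
    fit_logy <- lm(log(y) ~ x, data = df)            # log-transform the response
    fit_trx  <- lm(y ~ sqrt(x) + I(x^2), data = df)  # transformed predictors

    # Weighted least squares: down-weight observations with larger estimated variance
    fit_ols <- lm(y ~ x, data = df)
    w <- 1 / fitted(fit_ols)^2                       # a simple, assumption-laden weight choice
    fit_wls <- lm(y ~ x, data = df, weights = w)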

2. Normal Q-Q Plot

[Figure: Normal Q-Q plot of residuals]

This Q-Q (quantile-quantile) plot is a scatter plot that helps us validate the assumption of normally distributed errors. Using this plot, we can infer whether the errors come from a normal distribution. If they do, the points fall along a fairly straight line; marked deviations from the straight line indicate a lack of normality in the errors.

If you are wondering what a ‘quantile’ is, here’s a simple definition: think of quantiles as points in your data below which a certain proportion of the data falls. Quantiles are often referred to as percentiles. For example, when we say the value of the 50th percentile is 120, it means half of the data lies below 120.

Solution: If the errors are not normally distributed, a non-linear transformation of the variables (response or predictors) can bring improvement in the model.

3. Scale Location Plot

[Figure: Scale-Location plot]

This plot also helps detect violations of homoskedasticity (the assumption of equal variance). It shows how the residuals are spread along the range of predictors. It’s similar to the residual vs fitted values plot, except that it uses standardized residuals. Ideally, there should be no discernible pattern in the plot, which would imply that the errors have constant variance. If the plot shows a discernible pattern (probably a funnel shape), it implies non-constant variance in the errors.

Solution: Follow the solution for heteroskedasticity given in plot 1.

4. Residuals vs Leverage Plot

[Figure: Residuals vs Leverage plot]

Cook’s distance attempts to identify the points that have more influence than others. Such influential points tend to have a sizable impact on the regression line. In other words, adding or removing such points can completely change the model statistics.

But can these influential observations be treated as outliers? Only the data can answer this question. Therefore, in this plot, the observations with large Cook’s distance values might require further investigation.

Solution: For influential observations that are genuine outliers, you can remove those rows if there are not many of them. Alternatively, you can scale down the outlier observations to the maximum value in the data, or treat those values as missing values.
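A small R sketch of flagging influential observations with Cook’s distance, assuming a fitted lm object fit built from a hypothetical data frame df; the 4/n cutoff is a common rule of thumb, not a strict rule.

    cd <- cooks.distance(fit)

    plot(fit, which = 5)                       # residuals vs leverage plot with Cook's distance contours
    influential <- which(cd > 4 / nobs(fit))   # flag points above the 4/n rule of thumb
    df[influential, ]                          # inspect these rows before dropping or treating them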


How do you check the assumptions of linear regression before modeling?

Here is how you can check the assumptions of linear regression:

  1. Linearity: This denotes a nearly straight-line relationship between the independent and dependent variables in a multiple linear regression. Plotting your data in a scatter plot will help you see this. Look for points scattered randomly around a straight line, which suggests that the linearity assumption is met.
  2. Independence: The errors, or differences between the predicted and actual values, ought to be unrelated to one another. This means that no error term affects another, which is especially important when dealing with time series data. Plotting the residuals (errors) against the independent variables allows you to evaluate this. Ideally, there should be no trends or patterns.
  3. Homoscedasticity: For every level of the independent variables, the variance of the errors should remain constant. In other words, the spread of the residuals should be consistent across the entire range of the independent variables. To check for heteroscedasticity, make a residuals vs fitted plot; the red smoothing line should ideally be roughly horizontal and centered around zero.
  4. Normality: The errors (residuals) should follow a normal distribution. You can verify this using Q-Q plots or histograms of the residuals. Additionally, statistical tests such as the Kolmogorov-Smirnov test can be used to check the normality assumption.
  5. Lack of Multicollinearity: There should be little to no significant correlation between the independent variables. Multicollinearity can make it difficult to interpret your model’s coefficients. Correlation matrices (preferably with correlation coefficients less than 0.8) or the Variance Inflation Factor (VIF) can be used to test for this; values greater than or equal to 10 indicate serious multicollinearity. A compact sketch pulling these checks together follows this list.
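As a compact, hedged sketch, the checks above can be run together in R for a hypothetical model fit <- lm(y ~ x1 + x2, data = df):

    library(lmtest); library(car)

    pairs(df)                        # linearity: scatter plots of every variable pair
    dwtest(fit)                      # independence of errors (Durbin-Watson)
    bptest(fit)                      # homoscedasticity (Breusch-Pagan)
    shapiro.test(residuals(fit))     # normality of residuals
    vif(fit)                         # multicollinearity (VIF)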

Conclusion

In conclusion, understanding and acknowledging the assumptions of linear regression is vital for accurate and reliable analysis. By recognizing these regression assumptions, we can ensure the validity of our models and interpret the results effectively. It is essential to assess the assumptions and address any violations to enhance the reliability of our findings. Adhering to these assumptions allows us to make informed decisions and draw meaningful insights from linear regression analyses in various data science applications.

Hope you like the article! Understanding the assumptions of linear regression is essential for effective analysis. Key assumptions include linearity, independence, homoscedasticity, and normality, ensuring reliable results in regression analysis.

Q1. What are the assumptions of linear regression in data science?

A. The assumptions of linear regression in data science are linearity, independence, homoscedasticity, normality, no multicollinearity, and no endogeneity, ensuring valid and reliable regression results.

Q2. What are the 4 assumptions for regression analysis?

A. Regression analysis relies on the assumptions of linearity, independence, homoscedasticity, and normality to interpret and validate the model.

Q3. What are the 5 assumptions of linear regression?

Linearity: The relationship between variables is linear.
Independence: Observations are independent of each other.
Homoscedasticity: The variance of the errors is constant.
Normality of Residuals: Residuals follow a normal distribution.
No Multicollinearity: Predictor variables are not highly correlated.

Q4. What are the assumptions of regression error?

A. The assumptions of regression error include independence, zero mean, constant variance, and normality, ensuring adherence to the regression model’s assumptions.
