Multiple regression is an extension of simple linear regression. It is used when we want to predict the value of a continuous variable based on the values of two or more independent (predictor) variables. The variable we want to predict is called the dependent variable (or sometimes the outcome, target, or response variable) and is usually denoted by Y. The variables we use to predict the value of the dependent variable are called the independent variables (or sometimes the predictor, explanatory, or regressor variables) and are usually denoted by X1, X2, X3, and so on. With this modeling technique, we build a linear functional relationship between Y and X1, X2, X3, ..., i.e. Y = f(X), where X is a multidimensional vector.
For example, you could use multiple regression to understand:
- Whether exam performance can be predicted based on revision time, test anxiety, lecture attendance, and gender. (University data)
- Whether daily cigarette consumption can be predicted based on smoking duration, the age at which the person started smoking, smoker type, income, and gender. (Health care data)
- Whether the sales/revenue of a retail store is driven by advertising spend, the square footage of the store, the number of workers deployed, the experience of the store manager, and several demographic and macroeconomic factors. (Retail analytics)
- Whether a credit card limit depends on the customer's age, salary, CSAT score, credit score, level of education, job type, and number of other products held. (Retail banking)
- Whether the cholesterol level in blood depends on food habits, physical exercise, body weight, and certain genetic factors. (Hospital health data)
- Whether the production yield of a company depends on temperature, pressure, number of workers deployed, climate, rainfall, and local taxes. (Industrial production data)
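The relationship Y = f(X) described above can be sketched with ordinary least squares. The sketch below is a minimal, hypothetical illustration of the first example (exam score predicted from revision hours and attendance); all data values are invented, and it uses only NumPy's `lstsq` rather than any particular tool the article mentions.

```python
import numpy as np

# Hypothetical data: predict exam score (Y) from revision hours (X1)
# and lecture attendance percent (X2). Values are made up for illustration.
X = np.array([
    [10.0, 60.0],
    [15.0, 70.0],
    [20.0, 80.0],
    [25.0, 85.0],
    [30.0, 95.0],
])
y = np.array([55.0, 62.0, 70.0, 74.0, 83.0])

# Add an intercept column and solve the least squares problem
# Y = b0 + b1*X1 + b2*X2.
X_design = np.column_stack([np.ones(len(X)), X])
beta, _, _, _ = np.linalg.lstsq(X_design, y, rcond=None)
y_hat = X_design @ beta

print("coefficients (b0, b1, b2):", beta)
print("fitted values:", y_hat)
```

In practice you would use a statistics package (statsmodels, SAS, SPSS) that also reports standard errors and p-values alongside the coefficients.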
As the examples above show, this model is used across many industries, from education and health care to retail, banking, and manufacturing, to solve diverse interdisciplinary problems.
Multiple regression also allows you to determine the overall fit of the model (the variance of the dependent variable Y that it explains) and the relative contribution of each predictor variable. For example, you might want to know how much of the variation in production yield can be explained by production hours and breakdown hours "as a whole", but also the "relative contribution" of each independent variable in explaining that variance.
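One simple way to look at overall fit and relative contribution is to compute R-squared for the full model and then see how much R-squared falls when each predictor is dropped. The sketch below uses invented data for the production-yield example (production hours and breakdown hours are simulated, not real figures).

```python
import numpy as np

# Hypothetical data: production yield (Y) driven mainly by production
# hours (X1) and, to a lesser extent, breakdown hours (X2).
rng = np.random.default_rng(0)
X1 = rng.uniform(50, 100, 40)   # production hours
X2 = rng.uniform(0, 10, 40)     # breakdown hours
y = 2.0 * X1 - 3.0 * X2 + rng.normal(0, 5, 40)

def r_squared(predictors, y):
    """Fit OLS with an intercept on the given predictors; return R-squared."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

full = r_squared([X1, X2], y)
print(f"overall R^2: {full:.3f}")

# Relative contribution: how much does R^2 drop without each predictor?
for name, kept in [("X1", [X2]), ("X2", [X1])]:
    print(f"R^2 drop without {name}: {full - r_squared(kept, y):.3f}")
```

This drop-one comparison is a rough heuristic; formal approaches (partial F-tests, standardized coefficients) give more rigorous answers.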
There are various analytics tools available for fitting a linear regression model. SAS, R, SPSS, and Python are among the most popular in industry, and each has its pros and cons. Briefly, SPSS does not require any programming skill, whereas the others need some coding experience. Whichever tool you use, the key is knowing how to interpret the model output when making business decisions, and for that a little theoretical knowledge of the math behind the model is important.
In a nutshell, the following are the key criteria for deciding how good or bad a linear regression model is:
- R-squared: the higher the R-squared, the stronger the linear relationship between the dependent and independent variables.
- The variable selection process should be well described, and the selected variables should be statistically significant at a chosen significance level (e.g. alpha = 0.05 or 0.01).
- The sign of each beta estimate should be consistent with the underlying business hypothesis.
- MAPE (mean absolute percentage error) should be within an acceptable and comparable range on both the training and test samples.
- Residuals should be independent and approximately normally distributed. They will rarely be perfectly normal, but they should be close.
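The last two checks in the list above, comparing MAPE across a train/test split and inspecting the residuals, can be sketched as follows. The data here is simulated purely for illustration; a real diagnostic would also include residual plots and a formal normality test.

```python
import numpy as np

# Hypothetical data: one predictor plus an intercept, with Gaussian noise.
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(1, 10, n)])
y = 5 + 3 * X[:, 1] + rng.normal(0, 1, n)

# Hold out the last 30 observations as a test sample.
train, test = slice(0, 70), slice(70, n)
beta, _, _, _ = np.linalg.lstsq(X[train], y[train], rcond=None)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

mape_train = mape(y[train], X[train] @ beta)
mape_test = mape(y[test], X[test] @ beta)
print(f"MAPE train: {mape_train:.2f}%  test: {mape_test:.2f}%")

# Residuals should center on zero with no strong pattern; here we just
# check their mean and spread as a first sanity check.
resid = y[train] - X[train] @ beta
print(f"residual mean: {resid.mean():.3f}, std: {resid.std():.3f}")
```

A large gap between training and test MAPE would suggest overfitting; a residual mean far from zero or a strongly skewed residual distribution would suggest the model is misspecified.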