If you repeated this study thousands of times, the probability that the association between the independent variable, DIVM1, and the dependent variable, DIV1, in this model would be as large as or larger than the observed regression coefficient of 0.71799 is less than 0.0001 (that is, the probability of observing a t-statistic as large as 28.39 (=0.71799/0.02529) if the true coefficient were 0.00 is less than 0.0001). Murray_Court simply subtracted 0.0001 from 1.0000 to arrive at his "99.99% certainty" interpretation, which I don't endorse. Note that any conclusion about the association between DIVM1 and DIV1, and about its size, depends on the model: the other independent variables in the model, the form of those other independent variables (for example, the inclusion of squared terms or interaction terms), and the form of the model (multiple linear or ordinary least-squares regression in your example).

With respect to the other statistics listed in your output, the last (third) "Bedingungs-Index" (condition index) under "Collinearity Diagnostics" is less than 30 for all your models, indicating that your independent variables are not so highly correlated ("collinear") with one another that their joint presence makes it difficult to estimate precisely the size of their associations with the dependent variable.

In your first model, the Durbin-Watson statistic of 1.65 at a significance level of 0.05 and a sample size of 384 is inconclusive, and the estimated autocorrelation is not large enough to indicate serial correlation due to the order of the observations in your data set; thus, you do not need to account for serial correlation in this model.

With respect to the residual plots, the plots of the residuals or the studentized residuals on the Y-axis vs. the predicted values on the X-axis should resemble a horizontal band centered at Y=0.
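To make the arithmetic concrete, here is a minimal pure-Python sketch of the t-statistic reported in your output and of the Durbin-Watson formula. The coefficient and standard error are taken from the posted output; the residual series is invented purely for illustration and is not your data.

```python
# Coefficient and standard error from the posted regression output.
coef = 0.71799       # estimated coefficient for DIVM1
se = 0.02529         # its standard error
t_stat = coef / se   # t-statistic testing H0: true coefficient = 0
print(round(t_stat, 2))   # 28.39

# Durbin-Watson statistic from a residual series e_1..e_n:
#   DW = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2
# Values near 2 suggest no first-order serial correlation in the
# residuals taken in the order of the observations.
def durbin_watson(e):
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(r ** 2 for r in e)
    return num / den

residuals = [0.5, -0.3, 0.8, -0.6, 0.2, -0.1, 0.4, -0.7]  # made-up series
print(round(durbin_watson(residuals), 2))
```

Because this made-up series alternates in sign, its Durbin-Watson statistic comes out well above 2 (negative autocorrelation); your observed 1.65 sits between the inconclusive bounds for n=384.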
In your first model, these plots resemble more a fan opening to the right, indicating that the variance of the residuals may increase with the size of the predicted value (that is, heteroskedasticity). Two large predicted values have very large negative studentized residuals, and several other predicted values have studentized residuals more than about two standard errors from zero, indicating potential outliers. The quantile plot of the residuals shows deviations from a straight line in both tails, also indicating negative (left-tail) and positive (right-tail) outliers.

The plot of the studentized residuals on the Y-axis against the leverage values on the X-axis identifies two observations with leverage values exceeding 0.10, which may affect the results of the regression model substantially. The plot of Cook's D statistic (which detects observations with both high leverage and outlying values) against observation number on the X-axis identifies two observations with large D statistics near observation 270. Removing these observations, or determining why they are so outlying and influential, may affect the results and your interpretation of the model.

The distribution plot and histogram of the residuals show an approximately normal, though somewhat asymmetric, distribution. Compare the statistics and the plots for this model to those from your other two models to see whether the addition of other independent variables improves the model fit and "accommodates" the outlying and influential observations.
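To make the leverage and Cook's D diagnostics concrete, here is a minimal pure-Python sketch for a simple (one-predictor) regression. The data, the single-predictor setting, and all variable names are my assumptions for illustration, not your model; the last point is deliberately placed far from the other x-values and off the line the rest follow.

```python
import math

# Hypothetical (x, y) data invented to illustrate the diagnostics.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 15.0]
y = [1.1, 2.0, 2.9, 4.2, 5.1, 5.9, 7.2, 9.0]

n = len(x)
p = 2                                        # fitted parameters: intercept + slope
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
s2 = sum(e ** 2 for e in resid) / (n - p)    # residual variance estimate

leverage, rstud, cooks_d = [], [], []
for i in range(n):
    h = 1 / n + (x[i] - xbar) ** 2 / sxx     # leverage of observation i
    r = resid[i] / math.sqrt(s2 * (1 - h))   # internally studentized residual
    leverage.append(h)
    rstud.append(r)
    cooks_d.append(r ** 2 * h / (p * (1 - h)))  # Cook's D for observation i

for i in range(n):
    print(f"obs {i}: leverage={leverage[i]:.3f}  "
          f"r={rstud[i]:+.2f}  CooksD={cooks_d[i]:.3f}")
```

A common rule of thumb flags leverage above 2p/n and Cook's D near or above 1; in this made-up data the last observation fails both, which is the same pattern your leverage and Cook's D plots are flagging near observation 270.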