statsmodels.regression.linear_model.WLS

class statsmodels.regression.linear_model.WLS(endog, exog, weights=1.0, missing='none', hasconst=None, **kwargs)[source]

A regression model with diagonal but non-identity covariance structure.

The weights are presumed to be (proportional to) the inverse of the variance of the observations. That is, if the variables are to be transformed by 1/sqrt(W), you must supply weights = 1/W.
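
A minimal sketch of this relationship (the simulated data and variable names below are purely illustrative): WLS with weights = 1/W reproduces OLS run on the variables divided by sqrt(W).

>>> import numpy as np
>>> import statsmodels.api as sm
>>> rng = np.random.default_rng(0)
>>> X = sm.add_constant(np.linspace(0.0, 10.0, 50))
>>> W = np.linspace(1.0, 5.0, 50)  # per-observation variances, up to scale
>>> y = X @ np.array([1.0, 2.0]) + rng.normal(scale=np.sqrt(W))
>>> wls_res = sm.WLS(y, X, weights=1.0 / W).fit()
>>> ols_res = sm.OLS(y / np.sqrt(W), X / np.sqrt(W)[:, None]).fit()
>>> np.allclose(wls_res.params, ols_res.params)
True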

Parameters:

endog : array-like

1-d endogenous response variable. The dependent variable.

exog : array-like

A nobs x k array where nobs is the number of observations and k is the number of regressors. An intercept is not included by default and should be added by the user. See statsmodels.tools.add_constant.

weights : array-like, optional

1-d array of weights. If you supply 1/W, then the variables are pre-multiplied by 1/sqrt(W). If no weights are supplied, the default value is 1 and WLS results are the same as OLS.

missing : str

Available options are ‘none’, ‘drop’, and ‘raise’. If ‘none’, no nan checking is done. If ‘drop’, any observations with nans are dropped. If ‘raise’, an error is raised. Default is ‘none’ (a short sketch of ‘drop’ follows this parameter list).

hasconst : None or bool

Indicates whether the RHS includes a user-supplied constant. If True, a constant is not checked for, k_constant is set to 1, and all result statistics are calculated as if a constant is present. If False, a constant is not checked for and k_constant is set to 0.
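
A short sketch of the missing='drop' and constant-detection behaviour described above; the toy data are purely illustrative. With missing='drop' the nan row is removed before fitting, and with the default hasconst=None the constant column added by add_constant is detected automatically.

>>> import numpy as np
>>> import statsmodels.api as sm
>>> y = np.array([1.0, 3.0, np.nan, 5.0, 2.0])
>>> X = sm.add_constant(np.arange(1.0, 6.0))  # prepend a column of ones
>>> res = sm.WLS(y, X, missing='drop').fit()
>>> res.nobs  # the nan observation was dropped
4.0
>>> res.model.k_constant  # the constant column was detected automatically
1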

Notes

If the weights are a function of the data, then post-estimation statistics such as fvalue and mse_model might not be correct, as the package does not yet support no-constant regression.

Examples

>>> import numpy as np
>>> import statsmodels.api as sm
>>> Y = [1,3,4,5,2,3,4]
>>> X = range(1,8)
>>> X = sm.add_constant(X)
>>> wls_model = sm.WLS(Y, X, weights=list(range(1, 8)))
>>> results = wls_model.fit()
>>> results.params
array([ 2.91666667,  0.0952381 ])
>>> results.tvalues
array([ 2.0652652 ,  0.35684428])
>>> print(results.t_test([1, 0]))
<T test: effect=array([ 2.91666667]), sd=array([[ 1.41224801]]), t=array([[ 2.0652652]]), p=array([[ 0.04690139]]), df_denom=5>
>>> print(results.f_test([0, 1]))
<F test: F=array([[ 0.12733784]]), p=[[ 0.73577409]], df_denom=5, df_num=1>

Attributes

weights (array) The stored weights supplied as an argument.
See regression.GLS  

Methods

fit([method, cov_type, cov_kwds, use_t]) Full fit of the model.
from_formula(formula, data[, subset, drop_cols]) Create a Model from a formula and dataframe.
get_distribution(params, scale[, exog, ...]) Returns a random number generator for the predictive distribution.
hessian(params) The Hessian matrix of the model
information(params) Fisher information matrix of model
initialize()
loglike(params) Returns the value of the gaussian log-likelihood function at params.
predict(params[, exog]) Return linear predicted values from a design matrix.
score(params) Score vector of model.
whiten(X) Whitener for WLS model, multiplies each column by sqrt(self.weights)
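
As a rough illustration of whiten (reusing the data from the Examples section above), the whitened design matrix is each column multiplied elementwise by sqrt(self.weights):

>>> import numpy as np
>>> import statsmodels.api as sm
>>> Y = [1, 3, 4, 5, 2, 3, 4]
>>> X = sm.add_constant(range(1, 8))
>>> w = np.arange(1, 8)
>>> model = sm.WLS(Y, X, weights=w)
>>> np.allclose(model.whiten(X), np.sqrt(w)[:, None] * X)
True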

Attributes

df_model The model degree of freedom, defined as the rank of the regressor matrix minus 1 if a constant is included.
df_resid The residual degree of freedom, defined as the number of observations minus the rank of the regressor matrix.
endog_names Names of endogenous variables
exog_names Names of exogenous variables
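
A quick sanity check of the degrees-of-freedom definitions, reusing the model from the Examples section (illustrative only, not part of the API):

>>> import numpy as np
>>> import statsmodels.api as sm
>>> Y = [1, 3, 4, 5, 2, 3, 4]
>>> X = sm.add_constant(range(1, 8))
>>> model = sm.WLS(Y, X, weights=list(range(1, 8)))
>>> rank = np.linalg.matrix_rank(model.exog)
>>> model.df_model == rank - 1  # one regressor besides the constant
True
>>> model.df_resid == len(Y) - rank  # 7 observations minus rank 2
True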