statsmodels.discrete.discrete_model.DiscreteResults

class statsmodels.discrete.discrete_model.DiscreteResults(model, mlefit, cov_type='nonrobust', cov_kwds=None, use_t=None)[source]

A results class for the discrete dependent variable models.

Parameters:

model : A DiscreteModel instance

params : array-like

The parameters of a fitted model.

hessian : array-like

The hessian of the fitted model.

scale : float

A scale parameter for the covariance matrix.
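In practice a DiscreteResults instance is rarely constructed with these arguments directly; it is created and wrapped by the fit method of a discrete model such as Logit, Probit, or Poisson. A minimal sketch, assuming the Spector dataset that ships with statsmodels (the dataset and the Logit choice are illustrative only):

>>> import statsmodels.api as sm
>>> data = sm.datasets.spector.load_pandas()   # example dataset bundled with statsmodels
>>> X = sm.add_constant(data.exog)
>>> res = sm.Logit(data.endog, X).fit(disp=0)  # res wraps a DiscreteResults subclass (LogitResults)
>>> res.params                                 # fitted coefficients (output omitted)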

Attributes

aic : float

Akaike information criterion. -2*(llf - p) where p is the number of regressors including the intercept.

bic : float

Bayesian information criterion. -2*llf + ln(nobs)*p where p is the number of regressors including the intercept.

bse : array

The standard errors of the coefficients.

df_resid : float

See model definition.

df_model : float

See model definition.

fittedvalues : array

Linear predictor XB.

llf : float

Value of the log-likelihood of the fitted model.

llnull : float

Value of the log-likelihood of the model with only a constant (intercept).

llr : float

Likelihood ratio chi-squared statistic; -2*(llnull - llf).

llr_pvalue : float

The p-value of the likelihood ratio test: the probability that a chi-squared variate with df_model degrees of freedom exceeds llr.

prsquared : float

McFadden’s pseudo-R-squared. 1 - (llf / llnull)
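The fit statistics above can be read directly from any fitted discrete-model result. A brief sketch, again assuming a Logit fit on the Spector data (illustrative; printed values omitted):

>>> import statsmodels.api as sm
>>> data = sm.datasets.spector.load_pandas()
>>> res = sm.Logit(data.endog, sm.add_constant(data.exog)).fit(disp=0)
>>> res.llf, res.llnull          # log-likelihoods of the fitted and constant-only models
>>> res.llr, res.llr_pvalue      # likelihood ratio statistic -2*(llnull - llf) and its p-value
>>> res.prsquared                # McFadden's pseudo R-squared, 1 - llf/llnull
>>> res.aic, res.bic             # information criteria as defined above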

Methods

aic()
bic()
bse()
conf_int([alpha, cols, method]) Returns the confidence interval of the fitted parameters.
cov_params([r_matrix, column, scale, cov_p, ...]) Returns the variance/covariance matrix.
f_test(r_matrix[, cov_p, scale, invcov]) Compute the F-test for a joint linear hypothesis.
fittedvalues()
get_margeff([at, method, atexog, dummy, count]) Get marginal effects of the fitted model.
initialize(model, params, **kwd)
llf()
llnull()
llr()
llr_pvalue()
load(fname) Load a pickled results instance (class method).
normalized_cov_params()
predict([exog, transform]) Call self.model.predict with self.params as the first argument.
prsquared()
pvalues()
remove_data() Remove data arrays (all nobs-length arrays) from the result and the model.
save(fname[, remove_data]) Save a pickle of this instance.
summary([yname, xname, title, alpha, yname_list]) Summarize the Regression Results
summary2([yname, xname, title, alpha, ...]) Experimental function to summarize regression results
t_test(r_matrix[, cov_p, scale, use_t]) Compute a t-test for each linear hypothesis of the form Rb = q
tvalues() Return the t-statistic for a given parameter estimate.
wald_test(r_matrix[, cov_p, scale, invcov, ...]) Compute a Wald-test for a joint linear hypothesis.
wald_test_terms([skip_single, ...]) Compute a sequence of Wald tests for terms over multiple columns
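A short sketch of some of the methods above on the same illustrative Logit fit; the hypothesis string passed to t_test assumes the Spector column name GPA:

>>> import statsmodels.api as sm
>>> data = sm.datasets.spector.load_pandas()
>>> X = sm.add_constant(data.exog)
>>> res = sm.Logit(data.endog, X).fit(disp=0)
>>> ci = res.conf_int(alpha=0.05)              # 95% confidence intervals for the parameters
>>> tt = res.t_test("GPA = 0")                 # single linear hypothesis of the form Rb = q
>>> me = res.get_margeff(at="overall")         # average marginal effects; me.summary() prints a table
>>> probs = res.predict(X.iloc[:5])            # predicted probabilities for the first five rows
>>> print(res.summary())                       # full results table (output omitted)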

Attributes

use_t