jaxlogit._config_data.ConfigData#
- class jaxlogit._config_data.ConfigData(weights: ndarray | Series | Sequence[Any] | None = None, avail: ndarray | Series | Sequence[Any] | None = None, panels: ndarray | Series | Sequence[Any] | None = None, init_coeff: ndarray | Series | Sequence[Any] | None = None, maxiter: int = 2000, random_state: int | None = None, n_draws: int = 1000, halton: bool = True, halton_opts: dict | None = None, tol_opts: dict | None = None, num_hess: bool = False, set_vars: dict[str, float] | None = None, optim_method: str = 'L-BFGS-scipy', skip_std_errs: bool = False, include_correlations: bool = False, force_positive_chol_diag: bool = True, hessian_by_row: bool = True, finite_diff_hessian: bool = False, batch_size: int | None = None, setup_completed: bool = False)#
Configurations for the fit and predict functions with default values.
Parameters#
- weights : array-like, shape (n_samples,), optional
Sample weights in long format.
- avail : array-like, shape (n_samples*n_alts,), optional
Availability of alternatives for the choice situations. One when available, zero otherwise.
- panels : array-like, shape (n_samples*n_alts,), optional
Identifiers in long format used to create panels in combination with ids.
- init_coeff : numpy.ndarray, shape (n_variables,), optional
Initial coefficients for estimation.
- maxiter : int, default=2000
Maximum number of iterations.
- random_state : int, optional
Random seed for the numpy random generator.
- n_draws : int, default=1000
Number of random draws used to approximate the mixing distributions of the random coefficients.
- halton : bool, default=True
Whether the estimation uses Halton draws.
- halton_opts : dict, optional
Options for the generation of Halton draws; see the construction example after this parameter list. The dictionary accepts the following options (keys):
- shuffle : bool, default=False
Whether the Halton draws should be shuffled.
- drop : int, default=100
Number of initial Halton draws to discard in order to minimize correlations between Halton sequences.
- primes : list
List of primes to be used as bases for the generation of Halton sequences.
- tol_opts : dict, optional
Options for the tolerance of the optimization routine. The dictionary accepts the following options (keys):
- ftol : float, default=1e-10
Tolerance for the objective function (log-likelihood).
- gtol : float, default=1e-5
Tolerance for the gradient.
- num_hess : bool, default=False
Whether a numerical Hessian should be used for the estimation of standard errors.
- set_vars : dict, optional
Names (keys) of variables to be fixed to the given values (values).
- optim_method : {'BFGS-scipy', 'L-BFGS-scipy', 'L-BFGS-jax', 'BFGS-jax'}, default='L-BFGS-scipy'
Optimization method to use for model estimation.
- skip_std_errs : bool, default=False
Whether the estimation of standard errors should be skipped.
- include_correlations : bool, default=False
Whether correlations between variables should be included as explanatory variables.
- force_positive_chol_diag : bool, default=True
Whether to force positive diagonal elements in the Cholesky decomposition.
- hessian_by_row : bool, default=True
Whether to compute the Hessian row by row in a loop, saving memory at the expense of runtime.
- finite_diff_hessian : bool, default=False
Whether the Hessian should be computed using finite differences. If True, the computation stays within memory limits.
- batch_size : int, optional
Size of the batches used to avoid GPU memory overflow.
- setup_completed : bool, default=False
Whether the setup has already been completed.
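The following is a minimal construction sketch. The keyword names and defaults are taken from the signature above, but whether user code builds ConfigData directly or the fit and predict functions assemble it internally depends on how jaxlogit is used, so treat it as illustrative rather than as the prescribed workflow.

from jaxlogit._config_data import ConfigData

# Nested option dictionaries documented above.
halton_opts = {
    "shuffle": False,  # keep the Halton draws in their original order
    "drop": 100,       # discard the first 100 draws of each sequence
}
tol_opts = {
    "ftol": 1e-10,  # tolerance on the log-likelihood
    "gtol": 1e-5,   # tolerance on the gradient
}

# All arguments are optional; unspecified ones keep the defaults shown above.
config = ConfigData(
    n_draws=2000,
    halton=True,
    halton_opts=halton_opts,
    tol_opts=tol_opts,
    maxiter=5000,
    random_state=42,
    optim_method="L-BFGS-scipy",
)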
- __init__(weights: ndarray | Series | Sequence[Any] | None = None, avail: ndarray | Series | Sequence[Any] | None = None, panels: ndarray | Series | Sequence[Any] | None = None, init_coeff: ndarray | Series | Sequence[Any] | None = None, maxiter: int = 2000, random_state: int | None = None, n_draws: int = 1000, halton: bool = True, halton_opts: dict | None = None, tol_opts: dict | None = None, num_hess: bool = False, set_vars: dict[str, float] | None = None, optim_method: str = 'L-BFGS-scipy', skip_std_errs: bool = False, include_correlations: bool = False, force_positive_chol_diag: bool = True, hessian_by_row: bool = True, finite_diff_hessian: bool = False, batch_size: int | None = None, setup_completed: bool = False) None#
Methods

__init__([weights, avail, panels, ...])

Attributes

avail
batch_size
finite_diff_hessian
force_positive_chol_diag
halton
halton_opts
hessian_by_row
include_correlations
init_coeff
maxiter
n_draws
num_hess
optim_method
panels
random_state
set_vars
setup_completed
skip_std_errs
tol_opts
weights
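Continuing the sketch above, the constructor arguments are exposed as attributes of the same names (listed above), so the effective configuration can be inspected after construction:

print(config.n_draws)          # 2000, as set in the example above
print(config.optim_method)     # 'L-BFGS-scipy'
print(config.setup_completed)  # False until setup has been completed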