fouriax.analysis#

Fisher information, sensitivity analysis, and design optimality utilities.

Provides two tiers of Fisher information computation:

  1. Closed-form FIM for Gaussian or Poisson noise models, using a single Jacobian evaluation of the forward model.

  2. Score-based Monte Carlo FIM for arbitrary distributions, using outer products of the score function averaged over samples.

Functions

cramer_rao_bound(fim, *[, regularize])

Cramér–Rao lower bound: diagonal of the inverse FIM.

d_optimality(fim, *[, prior_covariance, ...])

D-optimality criterion, optionally with a Gaussian prior.

fisher_information(forward_fn, params, *[, ...])

Closed-form Fisher information matrix for analytic noise models.

jacobian_matrix(forward_fn, params, *[, mode])

Compute the Jacobian d(output)/d(params).

parameter_tolerance(forward_fn, params, *, ...)

Estimate allowable perturbation per parameter for a given output change.

score_fisher_information(log_prob_fn, ...)

Fisher information via score function outer products (Monte Carlo).

sensitivity_map(forward_fn, params, *[, ...])

Per-parameter sensitivity of an output metric.

jacobian_matrix(forward_fn, params, *, mode='reverse')#

Compute the Jacobian d(output)/d(params).

The parameter pytree is flattened internally, so the returned Jacobian has shape (n_outputs, n_params) regardless of pytree structure.

Parameters:
  • forward_fn (Callable[[Any], Array]) – Maps params → 1-D output array.

  • params (Any) – Parameter pytree to differentiate with respect to.

  • mode (Literal['forward', 'reverse']) – "reverse" (default) uses jax.jacobian; "forward" uses jax.jacfwd. Reverse mode is typically faster when n_params > n_outputs; forward mode is faster when n_outputs > n_params.

Returns:

Jacobian array of shape (n_outputs, n_params).

Return type:

Array
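The flatten-then-differentiate behaviour described above can be sketched as follows. This is a minimal illustration with jax.flatten_util, not the library's implementation; the forward model and parameter names are hypothetical.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def jacobian_matrix_sketch(forward_fn, params, *, mode="reverse"):
    # Flatten the parameter pytree into a single 1-D vector.
    flat, unravel = ravel_pytree(params)

    def flat_forward(flat_params):
        return forward_fn(unravel(flat_params))

    jac_fn = jax.jacrev if mode == "reverse" else jax.jacfwd
    # Result has shape (n_outputs, n_params) regardless of pytree structure.
    return jac_fn(flat_forward)(flat)

# Hypothetical two-output model over a two-leaf pytree.
params = {"a": jnp.array(2.0), "b": jnp.array(3.0)}
fwd = lambda p: jnp.stack([p["a"] * p["b"], p["a"] + p["b"]])
J = jacobian_matrix_sketch(fwd, params)
print(J.shape)  # (2, 2)
```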

fisher_information(forward_fn, params, *, noise_model=None)#

Closed-form Fisher information matrix for analytic noise models.

For a forward model mu = forward_fn(params), this computes FIM = J^T Lambda J where J = dmu/dparams and Lambda is the analytic noise precision matrix provided by noise_model.

If noise_model is omitted, unit-variance independent Gaussian noise is assumed, so Lambda = I and the result reduces to J^T J.

Parameters:
  • forward_fn (Callable[[Any], Array]) – Maps params → 1-D predicted measurement vector.

  • params (Any) – Parameter pytree.

  • noise_model (SensorNoiseModel | None) – Optional analytic noise model implementing the SensorNoiseModel interface, which provides a precision(expected) method.

Returns:

FIM of shape (n_params, n_params).

Return type:

Array
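The formula FIM = J^T Lambda J can be checked on a toy linear model, where the Jacobian is available in closed form. A NumPy sketch (the matrix A and variance sigma2 are made up for illustration):

```python
import numpy as np

# Linear model mu = A @ theta with i.i.d. Gaussian noise of variance sigma2:
# the Jacobian is J = A and the precision is Lambda = I / sigma2,
# so FIM = J.T @ Lambda @ J = A.T @ A / sigma2.
A = np.array([[1.0, 0.5],
              [0.0, 2.0],
              [1.0, 1.0]])
sigma2 = 0.25
Lam = np.eye(3) / sigma2        # analytic noise precision
fim = A.T @ Lam @ A             # closed-form Fisher information

# With unit-variance noise (noise_model omitted) this reduces to J.T @ J.
fim_unit = A.T @ A
print(np.allclose(fim, fim_unit / sigma2))  # True
```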

score_fisher_information(log_prob_fn, params, samples)#

Fisher information via score function outer products (Monte Carlo).

Estimates the FIM as:

FIM ≈ (1/N) Σᵢ ∇_θ log p(yᵢ|θ) ∇_θ log p(yᵢ|θ)^T

This is the general definition and applies to any distribution where the log-probability is differentiable w.r.t. params.

Parameters:
  • log_prob_fn (Callable[[Any, Array], Array]) – Maps (params, sample) → scalar log-probability.

  • params (Any) – Parameter pytree to differentiate with respect to.

  • samples (Array) – Array of shape (N, ...) drawn from p(y|params).

Returns:

FIM of shape (n_params, n_params).

Return type:

Array
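The Monte Carlo estimator above can be checked against a case with a known answer: for a Gaussian with unknown mean and variance sigma2, the score is (y - theta)/sigma2 and the exact Fisher information is 1/sigma2. A NumPy sketch (not the library's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma2 = 1.0, 0.5
samples = rng.normal(theta, np.sqrt(sigma2), size=100_000)

# Score of the Gaussian mean: d/dtheta log p(y|theta) = (y - theta) / sigma2.
scores = (samples - theta) / sigma2
# For a scalar parameter the outer-product average is just the mean square.
fim_mc = np.mean(scores ** 2)

print(fim_mc)  # close to the analytic value 1/sigma2 = 2.0
```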

cramer_rao_bound(fim, *, regularize=1e-10)#

Cramér–Rao lower bound: diagonal of the inverse FIM.

Each element gives the minimum achievable variance for the corresponding parameter under any unbiased estimator.

Parameters:
  • fim (Array) – Fisher information matrix of shape (n, n).

  • regularize (float) – Small constant added to the diagonal before inversion for numerical stability.

Returns:

Array of shape (n,) with per-parameter variance lower bounds.

Return type:

Array
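The computation is a regularized matrix inversion followed by taking the diagonal. A NumPy sketch of the bound described above (not the library's implementation):

```python
import numpy as np

def cramer_rao_sketch(fim, *, regularize=1e-10):
    # Add a small constant to the diagonal for numerical stability,
    # then invert and read off the per-parameter variances.
    n = fim.shape[0]
    cov = np.linalg.inv(fim + regularize * np.eye(n))
    return np.diag(cov)

fim = np.array([[4.0, 0.0],
                [0.0, 0.25]])
bounds = cramer_rao_sketch(fim)
print(bounds)  # [0.25, 4.0] (up to the tiny regularizer)
```

A well-constrained parameter (large FIM entry) gets a small variance bound, and vice versa.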

d_optimality(fim, *, prior_covariance=None, prior_precision=None, relative_to_prior=True)#

D-optimality criterion, optionally with a Gaussian prior.

Higher values indicate more information in the measurement. This is differentiable and commonly used as an optimization objective for experimental design.

With a Gaussian prior on the parameter vector, this computes the posterior-precision log-determinant. If relative_to_prior=True (default), the prior baseline is subtracted:

log det(FIM + Lambda_prior) - log det(Lambda_prior)

which is equivalent to log det(I + Sigma_prior FIM) and is proportional to the mutual information for a linear-Gaussian model.

Parameters:
  • fim (Array) – Fisher information matrix of shape (n, n).

  • prior_covariance (Array | float | None) – Optional prior covariance Sigma_prior. Scalar, diagonal vector, or full matrix.

  • prior_precision (Array | float | None) – Optional prior precision Lambda_prior. Scalar, diagonal vector, or full matrix.

  • relative_to_prior (bool) – When a prior is provided, subtract the prior log-determinant baseline so the result measures information gain.

Returns:

Scalar log-determinant.

Return type:

Array
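The equivalence log det(FIM + Lambda_prior) - log det(Lambda_prior) = log det(I + Sigma_prior FIM) stated above can be verified numerically (the matrices here are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 3))
fim = B.T @ B                        # a PSD Fisher information matrix
prior_var = 2.0
Lam_prior = np.eye(3) / prior_var    # prior precision
Sigma_prior = np.eye(3) * prior_var  # prior covariance

# Prior-relative posterior-precision log-determinant...
lhs = np.linalg.slogdet(fim + Lam_prior)[1] - np.linalg.slogdet(Lam_prior)[1]
# ...equals the information-gain form.
rhs = np.linalg.slogdet(np.eye(3) + Sigma_prior @ fim)[1]
print(np.isclose(lhs, rhs))  # True
```

The second form is often preferred as an optimization objective because it stays finite even when FIM is singular.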

sensitivity_map(forward_fn, params, *, metric_fn=None)#

Per-parameter sensitivity of an output metric.

Parameters:
  • forward_fn (Callable[[Any], Array])

  • params (Any)

  • metric_fn (Callable[[Array], Array] | None)

Return type:

Any
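The signature above does not pin down the exact semantics, but one plausible reading is the gradient of metric_fn(forward_fn(params)) with respect to each parameter, returned in the same pytree structure. A hypothetical sketch under that assumption (the default metric and all names here are guesses, not the library's behaviour):

```python
import jax
import jax.numpy as jnp

def sensitivity_map_sketch(forward_fn, params, *, metric_fn=None):
    # Assumed default metric: sum of the output vector.
    metric_fn = metric_fn if metric_fn is not None else jnp.sum
    # Gradient of the scalar metric w.r.t. every leaf of the pytree.
    return jax.grad(lambda p: metric_fn(forward_fn(p)))(params)

params = {"gain": jnp.array(2.0), "offset": jnp.array(1.0)}
fwd = lambda p: p["gain"] * jnp.arange(3.0) + p["offset"]
sens = sensitivity_map_sketch(fwd, params)
print(sens)  # {'gain': 3.0, 'offset': 3.0}
```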

parameter_tolerance(forward_fn, params, *, target_change, metric_fn=None)#

Estimate allowable perturbation per parameter for a given output change.

Parameters:
  • forward_fn (Callable[[Any], Array])

  • params (Any)

  • target_change (float)

  • metric_fn (Callable[[Array], Array] | None)

Return type:

Any
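Again the docstring leaves the method unspecified, but a natural first-order estimate is tol_i ≈ target_change / |∂metric/∂θ_i|: the perturbation of parameter i alone that changes the metric by target_change, to linear order. A hypothetical sketch under that assumption:

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def parameter_tolerance_sketch(forward_fn, params, *, target_change, metric_fn=None):
    # Assumed default metric: sum of the output vector.
    metric_fn = metric_fn if metric_fn is not None else jnp.sum
    flat, unravel = ravel_pytree(params)
    grad = jax.grad(lambda v: metric_fn(forward_fn(unravel(v))))(flat)
    # First-order tolerance per flattened parameter.
    return target_change / jnp.abs(grad)

params = {"gain": jnp.array(2.0), "offset": jnp.array(1.0)}
fwd = lambda p: p["gain"] * jnp.arange(3.0) + p["offset"]
tol = parameter_tolerance_sketch(fwd, params, target_change=0.3)
print(tol)  # [0.1, 0.1] -- both gradients are 3.0
```

Note this linearized estimate diverges for parameters the metric is insensitive to (gradient near zero), which is presumably why such a utility would need care around flat directions.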