boofun.analysis.invariance

Invariance principles for Boolean functions (O’Donnell Chapter 11).

The Invariance Principle connects the behavior of Boolean functions on the discrete hypercube {-1,1}^n to their behavior on Gaussian space. This is one of the most powerful tools in analysis of Boolean functions.

Key results:

  • Mossel’s Invariance Principle: low-influence functions have similar distributions on Boolean and Gaussian inputs

  • “Majority is Stablest” theorem: among low-influence functions, majority has the highest noise stability

  • Applications to hardness of approximation (Unique Games Conjecture)

The principle shows that:

E[Ψ(f(x))] ≈ E[Ψ(f̃(G))]

where x is uniform on {-1,1}^n, G is standard Gaussian on ℝ^n, f̃ is the multilinear extension, and Ψ is a “nice” test function.
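
For illustration (a sketch independent of boofun’s internals), take f = Maj_3 with Fourier expansion f(x) = (x1 + x2 + x3 − x1x2x3)/2 and the polynomial test function Ψ(t) = t². On the Boolean side E[f(x)²] = 1 because f is ±1-valued, and on the Gaussian side E[f̃(G)²] = Σ_S f̂(S)² = 1 as well, so the two estimates below should agree:

    import itertools
    import numpy as np

    # Multilinear (Fourier) polynomial of Maj_3; it agrees with Maj_3 on {-1,1}^3
    # and defines the extension f~ on all of R^3.
    def maj3_poly(z1, z2, z3):
        return 0.5 * (z1 + z2 + z3 - z1 * z2 * z3)

    def psi(t):
        return t ** 2  # polynomial test function; both sides equal 1 exactly

    # Boolean side: exact average over the 8 points of {-1,1}^3.
    boolean_side = np.mean([psi(maj3_poly(*x))
                            for x in itertools.product([-1, 1], repeat=3)])

    # Gaussian side: Monte Carlo over standard Gaussian inputs.
    rng = np.random.default_rng(0)
    g = rng.standard_normal((100_000, 3))
    gaussian_side = psi(maj3_poly(g[:, 0], g[:, 1], g[:, 2])).mean()

    print(boolean_side, gaussian_side)  # both ≈ 1.0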

References:

  • O’Donnell, “Analysis of Boolean Functions”, Chapter 11

  • Mossel, O’Donnell, Oleszkiewicz, “Noise stability of functions with low influences: Invariance and optimality”

  • Khot, Kindler, Mossel, O’Donnell, “Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?”

Functions

bivariate_gaussian_cdf(x, y, rho)

Compute the bivariate Gaussian CDF with correlation rho.

compute_test_function_expectation(f, test_fn)

Compute E[Ψ(f(x))] on either Boolean or Gaussian domain.

gaussian_cdf(x)

Compute the standard Gaussian CDF Φ(x).

invariance_distance(f[, test_fn])

Estimate the invariance distance between Boolean and Gaussian expectations.

is_stablest_candidate(f[, epsilon])

Check if f is a candidate for being "stablest" among its influence class.

majority_noise_stability(n, rho)

Compute the noise stability of the n-variable majority function at correlation rho.

max_cut_approximation_ratio(rho)

Compute the Goemans-Williamson MAX-CUT approximation ratio.

multilinear_extension_gaussian_expectation(f)

Estimate E[f̃(G)] and E[sign(f̃(G))] via Monte Carlo.

noise_stability_deficit(f, rho)

Compute how much less stable f is compared to majority at correlation rho.

unique_games_hardness_bound(f)

Estimate the UGC-hardness bound implied by f's noise stability.

Classes

InvarianceAnalyzer(f)

Comprehensive invariance principle analysis for Boolean functions.

boofun.analysis.invariance.invariance_distance(f: BooleanFunction, test_fn: Callable | None = None) → float[source]

Estimate the invariance distance between Boolean and Gaussian expectations.

The invariance principle states that for functions with low influences:

|E[Ψ(f(x))] - E[Ψ(f̃(G))]| ≤ ε(τ)

where τ = max_i Inf_i[f] and ε(τ) → 0 as τ → 0.

This function estimates that distance, using the test function Ψ(t) = sign(t) by default.

Parameters:
  • f – BooleanFunction to analyze

  • test_fn – Optional test function Ψ (default: sign)

Returns:

Estimated invariance distance
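
The quantity τ can be read directly off the Fourier expansion via Inf_i[f] = Σ_{S∋i} f̂(S)². A small sketch of that computation, operating on a plain coefficient dictionary rather than a BooleanFunction, purely for illustration:

    # Fourier expansion of Maj_3 as {frozenset of variables: coefficient}.
    MAJ3_FOURIER = {
        frozenset({1}): 0.5, frozenset({2}): 0.5, frozenset({3}): 0.5,
        frozenset({1, 2, 3}): -0.5,
    }

    def max_influence(fourier):
        """tau = max_i Inf_i[f], with Inf_i[f] = sum of f^(S)^2 over S containing i."""
        variables = set().union(*fourier.keys())
        return max(sum(c * c for S, c in fourier.items() if i in S)
                   for i in variables)

    print(max_influence(MAJ3_FOURIER))  # 0.5: each coordinate of Maj_3 has influence 1/2

Since τ = 1/2 is not small, the invariance bound for Maj_3 itself is weak; the gap is only provably small for functions whose influences are all small.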

boofun.analysis.invariance.multilinear_extension_gaussian_expectation(f: BooleanFunction, num_samples: int = 10000) → Tuple[float, float][source]

Estimate E[f̃(G)] and E[sign(f̃(G))] via Monte Carlo.

Samples standard Gaussian vectors and evaluates the multilinear extension.

Parameters:
  • f – BooleanFunction to analyze

  • num_samples – Number of Monte Carlo samples

Returns:

Tuple of (E[f̃(G)], E[sign(f̃(G))])
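
A sketch of this estimate, assuming the function is given by its Fourier expansion as a {frozenset: coefficient} dictionary (the library works with BooleanFunction objects instead):

    import numpy as np

    def gaussian_expectations(fourier, n, num_samples=10_000, seed=0):
        """Estimate (E[f~(G)], E[sign(f~(G))]) by sampling G ~ N(0, I_n)."""
        rng = np.random.default_rng(seed)
        g = rng.standard_normal((num_samples, n))
        vals = np.zeros(num_samples)
        for S, coeff in fourier.items():
            cols = [i - 1 for i in S]          # variables are numbered 1..n
            term = g[:, cols].prod(axis=1) if cols else np.ones(num_samples)
            vals += coeff * term
        return vals.mean(), np.sign(vals).mean()

    # E[f~(G)] equals f^(empty set) exactly (every nonempty monomial has mean 0),
    # so for the balanced Maj_3 both estimates should be close to 0.
    maj3 = {frozenset({1}): 0.5, frozenset({2}): 0.5, frozenset({3}): 0.5,
            frozenset({1, 2, 3}): -0.5}
    print(gaussian_expectations(maj3, n=3))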

boofun.analysis.invariance.compute_test_function_expectation(f: BooleanFunction, test_fn: Callable[[float], float], domain: str = 'boolean') → float[source]

Compute E[Ψ(f(x))] on either Boolean or Gaussian domain.

Here “test function” refers to a mathematical test function Ψ (as in the Invariance Principle), not a unit test.

Parameters:
  • f – BooleanFunction to analyze

  • test_fn – Test function Ψ (mathematical, e.g., smooth bounded function)

  • domain – “boolean” for {-1,1}^n or “gaussian” for ℝ^n

Returns:

Expectation E[Ψ(f(x))]

boofun.analysis.invariance.majority_noise_stability(n: int, rho: float) → float[source]

Compute the noise stability of the n-variable majority function at correlation rho.

For large n, this converges to:

Stab_ρ[Maj_n] → (2/π) arcsin(ρ)

This limit follows from Sheppard’s formula.

Parameters:
  • n – Number of variables (odd for majority)

  • rho – Noise correlation parameter

Returns:

Noise stability of majority
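
Noise stability can also be computed from the Fourier weights via Stab_ρ[f] = Σ_S f̂(S)² ρ^|S|. A sketch (not the library routine) comparing Maj_3 with the large-n limit:

    import math

    def stability_from_fourier(fourier, rho):
        """Stab_rho[f] = sum over S of f^(S)^2 * rho^|S|."""
        return sum(c * c * rho ** len(S) for S, c in fourier.items())

    maj3 = {frozenset({1}): 0.5, frozenset({2}): 0.5, frozenset({3}): 0.5,
            frozenset({1, 2, 3}): -0.5}
    rho = 0.5
    print(stability_from_fourier(maj3, rho))  # (3*rho + rho**3) / 4 = 0.40625
    print(2 / math.pi * math.asin(rho))       # large-n limit: 1/3 ≈ 0.3333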

boofun.analysis.invariance.noise_stability_deficit(f: BooleanFunction, rho: float) → float[source]

Compute how much less stable f is compared to majority at correlation rho.

The “Majority is Stablest” theorem states that among all functions with E[f] = 0 and low influences, majority has the highest noise stability.

This function computes:

Stab_ρ[Maj] - Stab_ρ[f]

Positive values indicate f is less stable than majority.

Parameters:
  • f – BooleanFunction to analyze

  • rho – Noise correlation parameter

Returns:

Noise stability deficit
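
For example, taking the large-n limit (2/π) arcsin(ρ) as the majority reference (the implementation may instead use Maj_n for a matching n), the 3-bit parity x1x2x3 has Stab_ρ[f] = ρ³, so at ρ = 0.5 the deficit is about 1/3 − 0.125 ≈ 0.21; the positive value reflects that parity is far less noise-stable than majority.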

boofun.analysis.invariance.is_stablest_candidate(f: BooleanFunction, epsilon: float = 0.01) → bool[source]

Check if f is a candidate for being “stablest” among its influence class.

A function is a stablest candidate if:

  1. It has low influences (max influence < epsilon)

  2. It is balanced (E[f] ≈ 0)

By the Majority is Stablest theorem, such functions have noise stability at most roughly (2/π) arcsin(ρ) for 0 ≤ ρ < 1.

Parameters:
  • f – BooleanFunction to analyze

  • epsilon – Influence threshold

Returns:

True if f is a stablest candidate
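
A minimal sketch of the two checks, working on a {frozenset: coefficient} Fourier dictionary rather than a BooleanFunction, and reusing a single epsilon for both thresholds purely for illustration:

    def is_stablest_candidate_sketch(fourier, epsilon=0.01):
        """Balanced (|E[f]| small) and low-influence (max_i Inf_i[f] < epsilon)."""
        mean = fourier.get(frozenset(), 0.0)      # E[f] = f^(empty set)
        variables = set().union(*fourier)         # every variable that appears
        max_inf = max(sum(c * c for S, c in fourier.items() if i in S)
                      for i in variables)
        return abs(mean) < epsilon and max_inf < epsilon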

boofun.analysis.invariance.max_cut_approximation_ratio(rho: float) → float[source]

Compute the Goemans-Williamson MAX-CUT approximation ratio.

The GW algorithm achieves approximation ratio:

α_GW = (2/π) min_{θ ∈ (0, π]} θ / (1 - cos(θ)) ≈ 0.87856

The “Majority is Stablest” theorem implies this is optimal assuming the Unique Games Conjecture.

Parameters:

rho – Correlation parameter (for generalized analysis)

Returns:

Approximation ratio
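
The constant can be reproduced with a direct grid minimization of the expression above (a standalone sketch):

    import numpy as np

    theta = np.linspace(1e-6, np.pi, 1_000_000)   # skip theta = 0, where the ratio is 0/0
    alpha_gw = np.min(2 / np.pi * theta / (1 - np.cos(theta)))
    print(alpha_gw)                               # ≈ 0.87856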

boofun.analysis.invariance.unique_games_hardness_bound(f: BooleanFunction) → float[source]

Estimate the UGC-hardness bound implied by f’s noise stability.

The Unique Games Conjecture implies that no polynomial-time algorithm can beat the “noise stability” bound for certain constraint satisfaction problems (CSPs).

For MAX-CUT-like problems, the hardness depends on the noise stability curve of the dictator function vs. the majority function.

Parameters:

f – BooleanFunction to analyze

Returns:

Estimated hardness bound
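
For MAX-CUT this comparison can be made concrete, following the Khot-Kindler-Mossel-O’Donnell analysis (an illustrative sketch, not necessarily what this function computes): on an edge with correlation ρ < 0, a dictator-style labelling cuts roughly a (1 − ρ)/2 fraction, while Sheppard’s formula gives arccos(ρ)/π for the majority/Gaussian side; minimizing the ratio over ρ recovers α_GW.

    import numpy as np

    rho = np.linspace(-0.999999, -1e-6, 1_000_000)
    ratio = (np.arccos(rho) / np.pi) / ((1 - rho) / 2)
    print(ratio.min())   # ≈ 0.87856, the Goemans-Williamson constant again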

class boofun.analysis.invariance.InvarianceAnalyzer(f: BooleanFunction)[source]

Comprehensive invariance principle analysis for Boolean functions.

Provides methods for analyzing how well a Boolean function’s behavior on the discrete hypercube matches its behavior on Gaussian space.

__init__(f: BooleanFunction)[source]

Initialize invariance analyzer.

Parameters:

f – BooleanFunction to analyze

invariance_bound() → float[source]

Compute theoretical bound on invariance distance.

Based on the max influence of f.

noise_stability_deficit(rho: float = 0.9) → float[source]

Compute how much less stable f is than majority.

is_stablest_candidate(epsilon: float = 0.01) → bool[source]

Check if f satisfies conditions for Majority is Stablest.

gaussian_expectation(num_samples: int = 10000) → Tuple[float, float][source]

Estimate E[f̃(G)] via Monte Carlo.

compare_domains() → Dict[str, float][source]

Compare function behavior on Boolean vs Gaussian domains.

Returns:

Dictionary of comparison metrics

summary() → str[source]

Return a summary of invariance properties.
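
A sketch of how the analyzer’s documented methods fit together; constructing the BooleanFunction f itself is outside the scope of this module, so it is taken as given here:

    from boofun.analysis.invariance import InvarianceAnalyzer

    def invariance_report(f):
        """f: a boofun BooleanFunction obtained elsewhere in the library."""
        analyzer = InvarianceAnalyzer(f)
        print("invariance bound:", analyzer.invariance_bound())
        print("stability deficit (rho=0.9):", analyzer.noise_stability_deficit(rho=0.9))
        print("stablest candidate:", analyzer.is_stablest_candidate(epsilon=0.01))
        print("Boolean vs Gaussian:", analyzer.compare_domains())
        print(analyzer.summary())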

boofun.analysis.invariance.gaussian_cdf(x: float) → float[source]

Compute the standard Gaussian CDF Φ(x).

Parameters:

x – Point to evaluate

Returns:

Φ(x) = Pr[N(0,1) ≤ x]
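
One standard way to evaluate Φ (a sketch, not necessarily the library’s implementation) is via the error function, Φ(x) = (1 + erf(x/√2))/2:

    import math

    def standard_gaussian_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    print(standard_gaussian_cdf(0.0))   # 0.5
    print(standard_gaussian_cdf(1.96))  # ≈ 0.975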

boofun.analysis.invariance.bivariate_gaussian_cdf(x: float, y: float, rho: float) → float[source]

Compute the bivariate Gaussian CDF with correlation rho.

This is Pr[(X, Y) ∈ (-∞, x] × (-∞, y]], where X and Y are standard Gaussians with correlation rho.

Uses a simple approximation for efficiency.

Parameters:
  • x – Upper bound for the first coordinate

  • y – Upper bound for the second coordinate

  • rho – Correlation coefficient

Returns:

Bivariate Gaussian CDF value
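
A Monte Carlo reference value to check the approximation against (a sketch, not the library’s method); at x = y = 0 the exact answer is 1/4 + arcsin(ρ)/(2π):

    import numpy as np

    def bivariate_cdf_mc(x, y, rho, num_samples=1_000_000, seed=0):
        """Pr[X <= x, Y <= y] for standard Gaussians with correlation rho, by sampling."""
        rng = np.random.default_rng(seed)
        g1 = rng.standard_normal(num_samples)
        g2 = rho * g1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(num_samples)
        return np.mean((g1 <= x) & (g2 <= y))

    rho = 0.5
    print(bivariate_cdf_mc(0.0, 0.0, rho))       # ≈ 1/3
    print(0.25 + np.arcsin(rho) / (2 * np.pi))   # exact value: 0.3333...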