Proposition: A Rawlsian Maximin Objective Is a Well-Defined, Optimizable Scalar Criterion
- Don Hilborn
- 3 days ago
- 2 min read
Proposition 1 (Rawlsian Maximin as a Scalar Optimization Objective)
Let G be a finite set of groups and let m_g: Θ → R be a real-valued performance function for group g ∈ G, where θ ∈ Θ denotes the parameters of a model or architecture.
Define the Rawlsian performance of θ as:
R(θ) = min_{g∈G} m_g(θ).
Then maximizing Rawlsian performance,
max_{θ∈Θ} R(θ),
is equivalent to the constrained optimization problem:
max_{θ,t} t subject to m_g(θ) ≥ t for all g ∈ G.
In particular, R(θ) is a single scalar objective that exactly captures the Rawlsian requirement that the least-advantaged group be made as well off as possible.
Proof
Fix any θ ∈ Θ. By definition,
R(θ) = min_{g∈G} m_g(θ).
Therefore, for every group g,
m_g(θ) ≥ R(θ).
It follows that the pair (θ,t) = (θ,R(θ)) satisfies the constraints of the optimization problem, with objective value t = R(θ).
Conversely, suppose (θ,t) is any feasible solution to the constrained problem. Feasibility implies:
m_g(θ) ≥ t for all g ∈ G.
Taking the minimum over all groups yields:
min_{g∈G} m_g(θ) ≥ t,
or equivalently t ≤ R(θ).
Thus, for any fixed θ, the maximum feasible value of t is exactly R(θ). Optimizing over θ in either formulation therefore yields the same optimal value, and the parameters θ that are optimal for one formulation are exactly those that are optimal for the other.
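The inner step of this argument can be checked numerically. The sketch below fixes θ, uses hypothetical metric values m_g(θ) (illustrative numbers, not from the text), and confirms that the largest t satisfying m_g(θ) ≥ t for all g is exactly min_g m_g(θ):

```python
# For a fixed theta, feasibility in the constrained problem means
# m_g(theta) >= t for every group g. The largest feasible t should
# therefore equal min_g m_g(theta). Hypothetical metric values:
m = [0.72, 0.65, 0.81]

# Scan candidate values of t on a fine grid over [0, 1].
grid = [i / 10000.0 for i in range(10001)]
feasible = [t for t in grid if all(mg >= t for mg in m)]
t_star = max(feasible)

assert abs(t_star - min(m)) < 1e-3  # largest feasible t = min_g m_g(theta)
```

With linear or concave m_g, the same constrained form can be handed directly to an off-the-shelf solver, which is the practical payoff of the equivalence.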
Interpretation
This proposition formalizes an intuition that is often stated but rarely proved explicitly in the AI governance literature: Rawlsian fairness is not an additional constraint layered onto optimization, but a particular choice of objective function.
Proposition 2 (Differentiable Approximation of the Maximin Objective)
Let x₁,…,xₙ ∈ R. For β > 0, define the soft minimum:
softmin_β(x₁,…,xₙ) = -(1/β) log(∑_{i=1}^{n} e^{-βxᵢ}).
Then softmin_β(x₁,…,xₙ) ≤ minᵢ xᵢ,
lim_{β→∞} softmin_β(x₁,…,xₙ) = minᵢ xᵢ,
and the approximation error is bounded above by log(n)/β. Both claims follow from the inequalities e^{-β minᵢ xᵢ} ≤ ∑_{i=1}^{n} e^{-βxᵢ} ≤ n e^{-β minᵢ xᵢ}: applying -(1/β) log(·), which is decreasing, yields minᵢ xᵢ - log(n)/β ≤ softmin_β(x₁,…,xₙ) ≤ minᵢ xᵢ, and letting β → ∞ gives the limit.
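A minimal sketch of the soft minimum and its error bound, using hypothetical group metrics (the values and β schedule are illustrative, not from the text). The shift by min(xs) before exponentiating is a standard numerical-stability trick and does not change the value:

```python
import math

def softmin(xs, beta):
    # Soft minimum, computed stably by subtracting min(xs)
    # before exponentiating to avoid underflow at large beta.
    m = min(xs)
    return m - (1.0 / beta) * math.log(sum(math.exp(-beta * (x - m)) for x in xs))

xs = [0.9, 0.4, 0.7]  # hypothetical group metrics
for beta in (1.0, 10.0, 100.0):
    s = softmin(xs, beta)
    # softmin never exceeds the true minimum and undershoots it
    # by at most log(n)/beta, tightening as beta grows.
    assert min(xs) - math.log(len(xs)) / beta <= s <= min(xs)
```

As β increases the surrogate approaches min(xs) from below, which is what makes it usable as a drop-in, differentiable stand-in for the hard minimum.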
Implications for Neural Architecture Search and Governance
The Rawlsian objective is thus both normatively principled and computationally tractable: Proposition 1 gives it an exact constrained form suited to standard solvers, and Proposition 2 supplies a smooth surrogate that permits gradient-based optimization, for example over architecture or model parameters in neural architecture search, while preserving fidelity to the underlying ethical commitment.
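The two propositions combine into a simple recipe: run gradient ascent on the soft minimum of the group metrics. The sketch below does this for two hypothetical smooth metrics of a scalar parameter θ; the metric functions, β, learning rate, and finite-difference gradient are all illustrative assumptions, with autodiff taking the place of the numerical gradient in practice:

```python
import math

def metrics(theta):
    # Hypothetical smooth per-group metrics; their maximin optimum
    # sits where the two curves cross, at theta = 0.5 (both equal 0.75).
    return [1.0 - theta**2, 0.5 + 0.5 * theta]

def softmin(xs, beta):
    # Numerically stable soft minimum (shift by the true min first).
    m = min(xs)
    return m - (1.0 / beta) * math.log(sum(math.exp(-beta * (x - m)) for x in xs))

def num_grad(f, theta, eps=1e-6):
    # Central-difference gradient; a stand-in for autodiff.
    return (f(theta + eps) - f(theta - eps)) / (2.0 * eps)

beta, lr, theta = 50.0, 0.02, 0.0
objective = lambda th: softmin(metrics(th), beta)
for _ in range(2000):
    theta += lr * num_grad(objective, theta)

# theta ends up near the maximin optimum theta = 0.5, where the
# worst-off group's metric is as high as it can be made.
```

The converged θ is slightly below 0.5 because the surrogate optimum is biased by at most log(n)/β from the exact maximin optimum; increasing β (or annealing it) shrinks that gap.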