I've been working on a non-scalar decision framework I call the Productive Value, Productive Power (PV-PP) Framework, and I'm curious how economists would react to this question:
**Can a non-scalar selection architecture strictly contain scalar aggregation?**
The starting intuition is straightforward. Scalar models work well when everything relevant can be reduced to a single common metric — and in a lot of ordinary decisions, that's fine. But there seem to be decision classes where a single metric isn't what's actually driving the choice. Choice in those cases seems to depend on preserving structured domains, meeting adequacy thresholds, and ordering losses that aren't naturally interchangeable.
My current view is that the right framing isn't "scalar versus irrationality." It's "scalar as a restricted internal case of a broader decision structure."
The project has two parts.
The first is a containment result: over a well-defined restricted class of environments, the non-scalar framework reproduces the same maximal set as scalar aggregation. If that holds, scalar reasoning isn't being rejected — it's being located inside a larger architecture.
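To make the containment claim concrete, here's a minimal sketch in Python. Everything in it is my own illustration, not the framework's actual machinery: I assume the "restricted class" is one where every option already clears an adequacy threshold on a governing domain, so a threshold-then-aggregate selector and plain scalar aggregation pick the same maximal set. The names (`THRESHOLD`, `scalar_score`, `select_scalar`, `select_structured`) are placeholders.

```python
# Hypothetical sketch of the containment claim. On a restricted class where
# every option already clears the governing-domain threshold, the non-scalar
# selector's filtering step is inert, so it reproduces scalar aggregation.

THRESHOLD = 0.0  # assumed adequacy threshold on the governing domain

def scalar_score(option):
    # Scalar aggregation: collapse all dimensions into one number.
    return sum(option.values())

def select_scalar(options):
    # Maximal set under scalar aggregation.
    best = max(scalar_score(o) for o in options)
    return [o for o in options if scalar_score(o) == best]

def select_structured(options):
    # Step 1: keep only options adequate on the governing domain.
    adequate = [o for o in options if o["governing"] >= THRESHOLD]
    # Step 2: aggregate only among the survivors.
    best = max(scalar_score(o) for o in adequate)
    return [o for o in adequate if scalar_score(o) == best]

# Restricted class: every option is adequate, so the filter does nothing.
restricted = [
    {"governing": 2.0, "other": 1.0},
    {"governing": 1.0, "other": 3.0},
]
assert select_scalar(restricted) == select_structured(restricted)
```

The point of the sketch is only that the coincidence is structural, not accidental: whenever the adequacy filter keeps everything, the two selectors compute the same argmax.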
The second is a stress-test: are there structured benchmark cases where scalar aggregation can't recover the same choice behavior without either losing essential structure or quietly smuggling it back in through ad hoc adjustments? The clearest cases I've found are sacrifice-style decisions, where preserving a governing domain appears to override compensating gains elsewhere — and scalar methods struggle to represent that cleanly.
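A toy version of the sacrifice-style benchmark, again with my own labels and numbers rather than anything from the framework itself: option A preserves the governing domain, option B trades it away for a large compensating gain. The structured selector picks A regardless of the size of the gain; a weighted sum picks whichever side the weight currently favors, and raising the weight only relocates the problem.

```python
# Hypothetical sacrifice-style benchmark (toy numbers, my own labels).
# A preserves the governing domain; B sacrifices it for a big gain elsewhere.

A = {"governing": 1.0, "other": 0.0}
B = {"governing": -1.0, "other": 100.0}

def scalar_choice(options, w_governing=1.0):
    # Weighted-sum aggregation with a tunable weight on the governing domain.
    return max(options, key=lambda o: w_governing * o["governing"] + o["other"])

def structured_choice(options, threshold=0.0):
    # Adequacy first: drop options that sacrifice the governing domain,
    # then maximize among what remains.
    adequate = [o for o in options if o["governing"] >= threshold]
    return max(adequate, key=lambda o: o["other"])

assert structured_choice([A, B]) is A   # preservation overrides the gain
assert scalar_choice([A, B]) is B       # aggregation takes the trade

# Cranking the weight is the "ad hoc adjustment": for any finite weight
# there is a compensating gain large enough to flip the scalar choice back.
w = 60.0
assert scalar_choice([A, B], w_governing=w) is A
C = {"governing": -1.0, "other": 2 * w + 1}  # gain just big enough to outbid w
assert scalar_choice([A, C], w_governing=w) is C
```

This is the sense in which the override looks non-scalar: the structured selector's behavior is invariant to how large the compensating gain gets, while any fixed weight can be outbid by a larger gain.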
So the question isn't whether scalar methods are useful. Obviously they are. The harder question is whether they're *fundamental*, or whether they're a special case of something more general.
My current read:
1. There are restricted domains where scalar and non-scalar approaches coincide.
2. There are also structured cases where scalar representation looks incomplete.
3. If both hold, then the right result is containment — not equivalence, not rejection.
The general theorem program isn't finished yet. But I think there's enough on the table now to ask whether this registers as a serious representational question.
The specific feedback I'd most want:
If a framework reproduces scalar results on a restricted class, but also handles benchmark cases that scalar aggregation appears unable to represent cleanly — is that best understood as:
- a genuine generalization of scalar aggregation,
- a decision-theoretic redescription with no real gain, or
- a benchmark artifact that disappears once the scalar model is properly specified?
Especially interested in reactions from anyone working in welfare theory, aggregation, decision theory, or behavioral economics.