TARGETING INTERVENTIONS IN NETWORKS

Date: 03/08/2026 Domain: social_sciences/economics Taxonomy: academic/research_paper Filter: Active comments


Overall Feedback

Here are some overall reactions to the document.

Property A Is Load-Bearing but Its Scope and Restrictions Are Understated in the Main Text

Theorem 1, Corollary 1, Propositions 1–4, and essentially all of the paper's central results are derived under Property A, which requires aggregate equilibrium welfare to be proportional to (a*)ᵀa*. The main text presents this as a 'technically convenient' simplification satisfied by two canonical examples, but never quantifies how restrictive it is for the broader class of linear-quadratic network games — in particular, it rules out welfare functions with linear terms in equilibrium actions. The Online Appendix (OA3.1) extends to general externalities, but only by imposing Assumption OA1 (constant row sums), which is itself an additional restriction not present in the main model and which rules out many natural network structures with heterogeneous degree distributions. Crucially, when Property A fails, the optimal intervention for the first principal component acquires a qualitatively different structure (Theorem OA1), including the possibility that the large-budget limit concentrates on the second rather than the first eigenvector — a reversal of the paper's headline result. It would be helpful to include in the main text a precise characterization of which externality structures satisfy Property A without Assumption OA1, to add a prominent caveat to Theorem 1 and Corollary 1 noting that the monotone ordering of similarity ratios can break down when Property A fails, and to clarify that OA3.1 substitutes one restriction for another rather than providing a full relaxation.

The Proof of Theorem 1 Is Incomplete When Status-Quo Projections Vanish, and the Large-Budget Results Inherit the Same Gap

The proof of Theorem 1 pivots on the change of variables x_ℓ = y_ℓ / b̂_ℓ, which is undefined whenever b̂_ℓ = 0 — that is, whenever the status-quo vector is orthogonal to some eigenvector of G. The paper acknowledges this only in passing ('we take a generic b̂ such that b̂_ℓ ≠ 0 for all ℓ'), but never establishes that this genericity condition holds under any natural distribution over status-quo vectors, nor does it characterize the boundary behavior. This matters because b̂ is a primitive of the policy problem: a planner facing a status quo orthogonal to some eigenvectors — for example, a uniform b̂ on a bipartite graph, which is orthogonal to all odd eigenvectors — would find equation (5) inapplicable, and the shadow-price formula in equation (6) assigns near-zero weight to those components regardless of their eigenvalue, potentially overturning the ordering in Corollary 1. The same gap propagates to Proposition 1: the large-budget argument that ρ(y*, u¹) → 1 requires ρ(b̂, u¹) ≠ 0, since the proof relies on the identity Σ_ℓ (‖b̂‖ ρ(b̂, u^ℓ) x_ℓ*/√C)² = 1 and the vanishing of all ℓ ≠ 1 terms — a step that fails if the ℓ = 1 term is itself zero. It would be helpful to add a formal lemma characterizing the limiting behavior of the optimal intervention as b̂_ℓ → 0, to add the condition ρ(b̂, u¹) ≠ 0 (resp. ρ(b̂, u^n) ≠ 0) explicitly to Proposition 1 and Proposition 2, and to discuss what the optimal intervention looks like when these conditions fail.
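The bipartite orthogonality scenario raised above is easy to exhibit concretely. The sketch below (our own construction, not from the paper) uses K₃,₃ as the bipartite graph and a uniform status quo, and checks that ρ(b̂, uⁿ) = 0, so the change of variables x_ℓ = y_ℓ/b̂_ℓ is undefined for the bottom component:

```python
import numpy as np

A = np.zeros((6, 6))
A[:3, 3:] = 1  # K_{3,3}: every left node linked to every right node
A[3:, :3] = 1

eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues in ascending order
u_bottom = eigvecs[:, 0]              # eigenvector for lambda_n = -3

b_hat = np.ones(6) / np.sqrt(6)       # uniform status quo, normalized
rho = b_hat @ u_bottom                # cosine similarity rho(b_hat, u^n)

print(round(eigvals[0], 6))  # -3.0
print(abs(round(rho, 6)))    # 0.0: the status quo has no bottom component
```

For this network a planner with a uniform status quo cannot apply equation (5) to the ℓ = n component at all, which is exactly the boundary case the comment asks the authors to characterize.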

Assumption 2's Distinct-Eigenvalue Condition Silently Excludes Economically Important Networks and Is Invoked Without Full Justification

Assumption 2 requires all eigenvalues of G to be distinct, justified as holding 'generically.' However, many of the most economically studied network structures — complete graphs, regular graphs, bipartite graphs, and graphs with any non-trivial symmetry group — have repeated eigenvalues as a structural feature rather than a knife-edge case. For instance, the complete graph K_n has eigenvalue −1 with multiplicity n−1, and the circle network used in Figure 1 and Example 3 of the paper has repeated eigenvalues for every n ≥ 3 (λ_k = λ_{n−k}), meaning the paper's own illustrative examples may fall outside the formal scope of Theorem 1. When eigenvalues are repeated, the eigenvectors u^ℓ(G) in Definition 1 are not uniquely determined — only the eigenspace is — so the cosine similarity ρ(y*, u^ℓ(G)) in equation (5) becomes basis-dependent, and the formula for the similarity ratio r_ℓ/r_ℓ' = α_ℓ/α_ℓ' = 1 provides no guidance on how to split the budget between degenerate components. It would be helpful to add a remark after Assumption 2 proving that the welfare value W* and the set of optimal interventions are well-defined even when eigenvalues are repeated, to explain whether the budget is split arbitrarily among degenerate components or whether the status-quo vector b̂ pins down the allocation, and to verify that the circle network example can be perturbed to satisfy Assumption 2.
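These spectral facts can be checked directly. A small numpy sketch (the sizes n = 6 and n = 14 are arbitrary choices of ours) confirms the multiplicity of −1 on the complete graph and the repeated pairs on the 14-node circle:

```python
import numpy as np

n = 6
K = np.ones((n, n)) - np.eye(n)        # complete graph K_6
ev_K = np.round(np.linalg.eigvalsh(K), 8)
mult_minus1 = int(np.sum(ev_K == -1))  # multiplicity of eigenvalue -1

m = 14
C14 = np.zeros((m, m))
for i in range(m):                     # the 14-node circle network
    C14[i, (i + 1) % m] = 1
    C14[i, (i - 1) % m] = 1
ev_C = np.sort(np.linalg.eigvalsh(C14))
repeated = int(np.sum(np.diff(ev_C) < 1e-8))  # pairs of repeated eigenvalues

print(mult_minus1)  # 5 = n - 1
print(repeated)     # 6 repeated pairs on the circle
```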

The Budget Threshold in Proposition 2 Diverges as the Spectral Gap Closes, and the Proof's Logical Chain Between the Welfare and Cosine Bounds Is Not Made Explicit

Proposition 2 gives a sufficient budget level C > (2‖b̂‖²/ε)(α₂/(α₁ − α₂))² for the simple intervention to be ε-optimal in both welfare and cosine similarity. The ratio α₂/(α₁ − α₂) diverges as the spectral gap λ₁ − λ₂ → 0 or as β → 1/λ₁, making the bound vacuous for networks with small spectral gaps — a regime where the paper's headline claim that 'large-budget interventions are simple' is technically correct but practically uninformative, since 'large' may be astronomically large. The paper discusses the spectral gap qualitatively in Section 4.2 and Figure 3 but does not characterize the welfare loss from using the simple intervention when the gap is small, nor the rate at which the optimal intervention mixes the top two components as a function of the gap. Additionally, the proof establishes the welfare ratio and cosine similarity bounds via separate arguments, and the logical step verifying that the single threshold C* is sufficient for both claims simultaneously — specifically, that C > (2‖b̂‖²/ε)(α₂/(α₁−α₂))² implies the required lower bound on (wα₁/(μ−wα₁))²b̂²₁ needed for Lemma 1 — is left implicit. It would be helpful to complement Proposition 2 with a lower bound on the welfare loss from the simple intervention as a function of the spectral gap, and to add a sentence in the proof of Lemma 1 explicitly verifying the sufficiency of the stated threshold for the cosine bound.
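The divergence of the budget threshold as the spectral gap closes can be illustrated numerically. In this sketch, λ₁ = 3 and β = 0.1 are stand-in values of ours, and `threshold_factor` is our shorthand for the (α₂/(α₁ − α₂))² factor in Proposition 2's bound:

```python
def alpha(beta, lam):
    # alpha_l = 1 / (1 - beta * lambda_l)^2, as in Theorem 1
    return 1.0 / (1.0 - beta * lam) ** 2

def threshold_factor(beta, lam1, lam2):
    # the (alpha_2 / (alpha_1 - alpha_2))^2 factor in Proposition 2's budget bound
    a1, a2 = alpha(beta, lam1), alpha(beta, lam2)
    return (a2 / (a1 - a2)) ** 2

beta = 0.1
for lam2 in (2.0, 2.9, 2.99):  # shrink the spectral gap lam1 - lam2 toward zero
    print(lam2, threshold_factor(beta, 3.0, lam2))
```

The factor grows by several orders of magnitude as λ₂ approaches λ₁, which is the sense in which 'large budget' can become astronomically large.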

The Incomplete Information Section Delivers No Qualitatively New Characterization Beyond the Complete-Information Benchmark

Section 5 is framed as a substantive extension to settings where the planner lacks knowledge of agents' standalone marginal returns, but neither of its main results delivers a qualitatively new insight about how informational frictions alter optimal targeting. Proposition 3 (mean shifts) shows that the optimal policy under incomplete information is identical to the complete-information optimum evaluated at E[b̂] — a direct consequence of Theorem 1 that adds no new economic content. Proposition 4 (variance control) characterizes the ordering of variances across principal components, but this ordering follows almost immediately from the same eigenvalue-monotonicity argument as Corollary 1, with variance replacing cosine similarity. Neither result characterizes the welfare loss from incomplete information as a function of network structure, addresses robustness to uncertainty about G itself, or delivers a policy prescription that differs qualitatively from the complete-information case. The paper explicitly declines (footnote 23) to analyze incomplete information among agents, which would be the more substantively novel direction. It would be helpful to either derive a sharp bound on the value of information as a function of spectral properties of G, or to reframe Section 5 explicitly as a robustness check rather than a standalone contribution, so that its relationship to the main results is accurately represented.

Two Gaps in the Online Appendix Extensions Require Correction or Clarification: A Likely Typographical Error in OA3.2 and an Uncharacterized Constrained Regime

Two issues in the paper's extensions warrant attention. First, in Online Appendix OA3.2, the equilibrium condition for the non-symmetric SVD extension is stated as ā_ℓ* = (1/s_ℓ) b̄_ℓ², where the squaring appears to be a typographical error: the correct expression from the SVD decomposition M ā = b̄ in the rotated basis gives s_ℓ ā_ℓ = b̄_ℓ, so ā_ℓ* = b̄_ℓ/s_ℓ without squaring. If taken literally, the stated formula would make welfare a quartic function of b̄_ℓ and would not support the claimed analogy α_ℓ = 1/s²_ℓ with the symmetric case. It would be helpful to correct this expression and verify that α_ℓ = 1/s²_ℓ is consistent with the corrected equilibrium formula. Second, Section 4 notes that the nonnegativity constraint on actions is satisfied automatically for budgets below some threshold Ĉ, but provides no characterization of the optimal intervention when C > Ĉ and the constraint binds — a regime that is particularly relevant under strategic substitutes, where the optimal intervention u^n(G) assigns opposite signs to neighboring nodes and necessarily prescribes negative standalone returns for some agents. It would be helpful to derive an explicit expression for Ĉ in terms of b̂ and the spectral properties of G, and to characterize whether the qualitative ordering result of Corollary 1 survives when corner solutions are present.

Assumption 2's spectral-radius condition may be incompatible with the strategic-substitutes regime analyzed in Section 4

Assumption 2 requires that the spectral radius of βG is strictly less than 1, i.e., |β|·ρ(G) < 1, where ρ(G) = max{|λ₁|, |λₙ|} is the spectral radius of G. Under strategic substitutes (β < 0), the binding constraint is |β|·|λₙ| < 1. However, the large-budget analysis in Proposition 1 and Proposition 2 derives its sharpest results precisely when |λₙ| is large — specifically, when the 'bottom gap' |λₙ| − |λₙ₋₁| is large (Section 4.2 and Figure 4C). The paper's own illustration uses a bipartite graph with λₙ = −3 and β = −0.1, giving |β|·|λₙ| = 0.3, which satisfies Assumption 2. But the theoretical results are stated for general networks and general β < 0, and the condition for simplicity to kick in at moderate budgets (Proposition 2) involves the ratio αₙ₋₁/(αₙ − αₙ₋₁), which grows large as |β|·|λₙ| → 1. It is not clear how the paper's comparative-statics claims about 'large bottom gap' networks remain valid uniformly over the parameter space without an explicit joint restriction on (β, G) that keeps the spectral-radius condition comfortably satisfied. It would be helpful to state explicitly how the admissible range of β shrinks as |λₙ| increases, and to verify that the illustrative examples in Figures 3–4 are representative of the regime where Assumption 2 holds with meaningful slack.

Property A is assumed rather than derived for the welfare analysis, and its failure changes the qualitative targeting prescription

The main results — Theorem 1, Corollary 1, and Propositions 1–2 — all require Property A (W ∝ (a*)ᵀa*). The paper acknowledges in Online Appendix OA3.1 that Property A fails in natural settings, such as the social-interaction/peer-effects example (Example OA1), where W = ½(a*)ᵀa* − nγΣᵢaᵢ*. In that extension, Theorem OA1 shows that the optimal x₁* (the intervention along the first principal component) acquires an additional corrective term involving γ, and the large-budget limit can converge to either the first or second principal component depending on whether w₁α₂ ≷ (w₁+w₂)α₁. This is a qualitatively different prescription from the main text. Readers might note that the scope of the headline results is therefore narrower than the general model of Section 2 suggests: the payoff function in equation (1) allows arbitrary pure externalities Pᵢ(a₋ᵢ, G, b), but the characterization theorems apply only to the subset satisfying Property A. It would be helpful to include a brief discussion in the main text — rather than only in the appendix — of which economic environments satisfy Property A and which do not, so that readers can assess whether the canonical results apply to their application of interest.

Status: [Pending]


Detailed Comments (25)

1. Assumption 3 Uses Unsquared Norm Instead of Squared Norm

Status: [Pending]

Quote:

Either w < 0 and C < ‖b̂‖, or w > 0.

Feedback: The motivating paragraph states that the first-best is achievable when C ≥ ‖b̂‖², since the cost of moving from b̂ to 0 under the budget constraint Σᵢ(bᵢ − b̂ᵢ)² ≤ C is exactly ‖b̂‖². Assumption 3 should therefore require C < ‖b̂‖² (squared norm) to rule out the first-best. The Appendix proof of Theorem 1 confirms this, stating 'Assumption 3 says that either w > 0, or w < 0 and Σ_ℓ b̂²_ℓ > C,' and Σ_ℓ b̂²_ℓ = ‖b̂‖² by Parseval's identity. The main text's condition C < ‖b̂‖ is inconsistent with both the motivating paragraph and the appendix proof. It would be helpful to rewrite as 'Either w < 0 and C < ‖b̂‖², or w > 0.'


2. α_ℓ Definition Written Without Squaring in Discussion Paragraph

Status: [Pending]

Quote:

The second factor, wα_ℓ/(μ − wα_ℓ), is determined by two quantities: the eigenvalue corresponding to u^ℓ(G) (via α_ℓ = 1/(1 − βλ_ℓ)), and the budget C (via the shadow price μ).

Feedback: The definition given just before Theorem 1 states α_ℓ = 1/(1−βλ_ℓ)², and the welfare decomposition W = Σ_ℓ wα_ℓ(1+x_ℓ)²b̂²_ℓ requires the squared denominator to match a*_ℓ = √α_ℓ b̲_ℓ. The discussion paragraph's inline formula α_ℓ = 1/(1−βλ_ℓ) (unsquared) contradicts both the theorem statement and the appendix proof. It would be helpful to rewrite the inline formula as α_ℓ = 1/(1−βλ_ℓ)².


3. Lagrangian Missing Squared Factor on Status-Quo Projection

Status: [Pending]

Quote:

L = w Σ_ℓ α_ℓ(1 + x_ℓ)² b̲̂_ℓ + μ[C − Σ_ℓ b̲̂²_ℓ x²_ℓ].

Feedback: After the change of variables x_ℓ = y̲_ℓ/b̲̂_ℓ, the objective becomes w Σ_ℓ α_ℓ(1+x_ℓ)²b̲̂²_ℓ (squared), as confirmed by the displayed problem (IT-PC) immediately above. The Lagrangian as written carries only a single power b̲̂_ℓ in the first sum rather than b̲̂²_ℓ. The first-order condition (10) is derived correctly with b̲̂²_ℓ, so this is a typographical error that does not propagate, but it should be corrected to b̲̂²_ℓ for consistency.


4. Sign Error in Bottom-Gap Term of Proposition 2 Discussion

Status: [Pending]

Quote:

If β < 0, then the term α_{n−1}/(α_{n−1} − α_n) is small when the “bottom gap” of the graph, the difference λ_{n−1} − λ_n, is small.

We now examine what network features affect these gaps, and illustrate with exam

Feedback: Under β < 0, α_n > α_{n−1} (since |λ_n| > |λ_{n−1}|), so α_{n−1} − α_n < 0, making the denominator negative and the ratio meaningless. Proposition 2 part 2 correctly writes the threshold as (α_{n−1}/(α_n − α_{n−1}))², with α_n − α_{n−1} > 0. Moreover, a small bottom gap makes this ratio large (not small), consistent with the budget bound being harder to satisfy. It would be helpful to rewrite as 'the term α_{n−1}/(α_n − α_{n−1}) is large when the bottom gap λ_{n−1} − λ_n is small.'
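The sign claim is immediate to verify numerically, using the paper's bipartite illustration β = −0.1 with λₙ = −3 and λₙ₋₁ = −2 as stand-in values:

```python
def alpha(beta, lam):
    # alpha_l = 1 / (1 - beta * lambda_l)^2
    return 1.0 / (1.0 - beta * lam) ** 2

beta = -0.1                  # strategic substitutes
lam_n, lam_n1 = -3.0, -2.0   # bottom two eigenvalues, |lam_n| > |lam_n1|
a_n, a_n1 = alpha(beta, lam_n), alpha(beta, lam_n1)

print(a_n > a_n1)            # True: the quoted denominator alpha_{n-1} - alpha_n is negative

ratio = a_n1 / (a_n - a_n1)  # the correct ratio from Proposition 2 part 2
ratio_small_gap = alpha(beta, -2.9) / (alpha(beta, -3.0) - alpha(beta, -2.9))
print(ratio_small_gap > ratio)  # True: a smaller bottom gap makes the ratio larger
```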


5. Small-Budget Ratio Limit Contains Typographical Error in Subscript

Status: [Pending]

Quote:

As C → 0, in the optimal intervention, r*_t/r*_{ℓ′} → α_ℓ/α_{ℓ′}.

Feedback: Readers might note that the subscript 't' in r_t appears to be a typographical error for r_ℓ, creating an undefined symbol in the limit statement. The limit itself is consistent with the Appendix proof of Proposition 1 part 1, which uses the similarity ratio definition r_ℓ = ρ(y, u^ℓ)/ρ(b̂, u^ℓ) to cancel the projection factor. It would be helpful to replace r_t with r_ℓ throughout.


6. Large-Budget Convergence y* → √C u¹ Conflates Directional and Norm Convergence

Status: [Pending]

Quote:

a < 0 (by equation (6)). Plugging this into equation (5), we find that in the case of strategic complements, the optimal intervention shifts individuals’ standalone marginal returns (very nearly) in proportion to the first principal component of G, so that y* → √C u¹(G). In the case of strategic substitutes, on the other hand, the planner changes individuals’ standalone marginal returns (very nearly) in proportion to the last principal component, namely y* → √C uⁿ(G).

Figure 2 presents optimal targets when the budget is large – in particular, for C = 500. We consider an

Feedback: As C → ∞, both y* and √C u¹ grow without bound, so the notation y* → √C u¹ cannot mean norm convergence ‖y* − √C u¹‖ → 0. Proposition 1 part 2a states the correct result: ρ(y*, u¹(G)) → 1, i.e., directional convergence. Writing y* → √C u¹ without qualification implies a stronger statement than what is proved. It would be helpful to rewrite as 'y*/‖y*‖ → u¹(G), equivalently ρ(y*, u¹(G)) → 1' to match Proposition 1.


7. Large-Budget Proof Gap: ρ(b̂, u¹) ≠ 0 Required but Not Stated

Status: [Pending]

Quote:

lim_{C→∞} Σ_ℓ (‖b̂‖ ρ(b̂, u^ℓ(G)) x*_ℓ/√C)² = lim_{C→∞} (‖b̂‖ ρ(b̂, u¹(G)) x*₁/√C)² = 1,

Feedback: The second equality — that the limit equals 1 — requires x₁/√C to remain bounded away from zero, which follows from the binding budget constraint Σ_ℓ b̲̂²_ℓ x²_ℓ = C only if b̲̂₁ = ‖b̂‖ρ(b̂, u¹) ≠ 0. If ρ(b̂, u¹) = 0, the ℓ = 1 term is identically zero and the argument collapses. This condition is not stated in Proposition 1 or its proof. It would be helpful to add the explicit condition ρ(b̂, u¹(G)) ≠ 0 (resp. ρ(b̂, uⁿ(G)) ≠ 0) to Proposition 1 parts 2a and 2b, and to add a sentence in the proof: 'The second equality uses b̲̂₁ ≠ 0, i.e., ρ(b̂, u¹(G)) ≠ 0, which ensures x*₁/√C → 1/|b̲̂₁| from the binding budget constraint.'


8. Welfare Ratio Notation W*/W² Is a Typographical Error for W*/Wˢ

Status: [Pending]

Quote:

is sufficient to establish that W*/W² < 1 + ε.

Feedback: Throughout the proof of Proposition 2, the simple-intervention welfare is denoted W^s. The conclusion writes W*/W², where the superscript '2' is undefined and is clearly a typographical corruption of 's'. The bound derived is W*/W^s ≤ 1 + (2‖b̂‖²/C)(α₂/(α₁−α₂))², so the stated sufficient condition correctly implies W*/W^s < 1+ε. It would be helpful to rewrite as W*/W^s < 1+ε.


9. Proof of Proposition 2 Cites Corollary 1 for a Step That Follows from Theorem 1 and Monotonicity of f

Status: [Pending]

Quote:

≤ α₂ x₂*(x₂* + 2) Σ_{ℓ≠1} b̲̂²_ℓ (Corollary 1)

Feedback: This step requires x_ℓ(x_ℓ+2) ≤ x₂(x₂+2) for all ℓ > 1. From Theorem 1, x_ℓ = wα_ℓ/(μ−wα_ℓ) is decreasing in ℓ for β > 0 (since α_ℓ is decreasing). The function f(x) = x(x+2) = (x+1)²−1 is increasing for x ≥ 0, and x_ℓ ≥ 0 for w > 0 (established earlier in the proof). So f(x_ℓ) ≤ f(x₂) follows from monotonicity of x_ℓ and f, not from Corollary 1 (which concerns similarity ratios, not x_ℓ values). It would be helpful to replace the citation 'Corollary 1' with 'monotonicity of x*_ℓ in ℓ from Theorem 1 and monotonicity of f(x) = x(x+2) for x ≥ 0.'


10. Lemma 1 Invokes μ > wα₁ Without Establishing It in the Proof

Status: [Pending]

Quote:

The final inequality follows because, from the facts that μ > wα₁ and that α₁ > α₂ > ⋯ > αₙ, we can deduce that for each ℓ > 1

wα_ℓ/(μ − wα_ℓ) < wα_ℓ/(wα₁ − wα_ℓ) = α_ℓ/(α₁ − α_ℓ) < α₂/(α₁ − α₂)

Feedback: The inequality μ > wα₁ is used as a fact but is not derived in the Lemma 1 proof. It follows from x₁ = wα₁/(μ−wα₁) ≥ 0 (established earlier in the proof of Theorem 1 for w > 0), which requires μ − wα₁ > 0. It would be helpful to add a sentence before the final inequality: 'Since w > 0 and x₁ ≥ 0, the expression x*₁ = wα₁/(μ−wα₁) requires μ > wα₁.'


11. Variance Swap Equation in Proof of Proposition 4 Is Self-Contradictory

Status: [Pending]

Quote:

and so Var(b̲**_k) = Var(b̲*_k) for all k ∉ {ℓ, ℓ′} and Var(b̲*_ℓ) = Var(b̲*_{ℓ′}) > Var(b̲*_{ℓ′}) = Var(b̲*_ℓ).

Feedback: The permutation P swaps indices ℓ and ℓ′, so B** = OB* has Var(b̲**_ℓ) = Var(b̲*_{ℓ′}) and Var(b̲**_{ℓ′}) = Var(b̲*_ℓ). The chain as written — Var(b̲*_ℓ) = Var(b̲*_{ℓ′}) > Var(b̲*_{ℓ′}) = Var(b̲*_ℓ) — asserts A = B > B = A, which is impossible. The correct statement is Var(b̲**_ℓ) = Var(b̲*_{ℓ′}) > Var(b̲*_ℓ) = Var(b̲**_{ℓ′}), reflecting the hypothesis Var(b̲*_ℓ) < Var(b̲*_{ℓ′}). The welfare gain from the swap is w(α_ℓ − α_{ℓ′})(Var(b̲*_{ℓ′}) − Var(b̲*_ℓ)) > 0, which is the intended contradiction.


12. Variance-Covariance Formula for B** Conflates Original and Rotated Bases

Status: [Pending]

Quote:

Σ_{B**} = P Σ_{B*} P^⊤

Feedback: B** = OB* where O = UPU^⊤. The covariance of B** in the original basis is Σ_{B**} = OΣ_{B*}O^⊤ = U P (U^⊤Σ_{B*}U) P^⊤ U^⊤ = U P Σ_{B̲*} P^⊤ U^⊤, not PΣ_{B*}P^⊤. The formula PΣ_{B*}P^⊤ would be correct only if U = I. The correct statement is that the covariance of the rotated variable B̲** = U^⊤B** satisfies Σ_{B̲**} = PΣ_{B̲*}P^⊤, where B̲* = U^⊤B*. The downstream conclusion about Var(b̲**_k) is correct but follows from the rotated covariance, not the original one. It would be helpful to rewrite the displayed equation as Σ_{B̲**} = PΣ_{B̲*}P^⊤.
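The basis bookkeeping can be confirmed with a random orthogonal U. This is a self-contained numpy check (the dimension and the choice of permutation are arbitrary stand-ins of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random orthonormal eigenbasis
Sigma = rng.standard_normal((n, n))
Sigma = Sigma @ Sigma.T                           # covariance of B* in the original basis

P = np.eye(n)[[1, 0, 2, 3, 4]]                    # permutation swapping two components
O = U @ P @ U.T                                   # the rotation with B** = O B*

Sigma_new = O @ Sigma @ O.T                       # covariance of B** in the original basis
claimed = P @ Sigma @ P.T                         # the appendix's displayed formula
correct = U @ P @ (U.T @ Sigma @ U) @ P.T @ U.T   # permute in the rotated basis, map back

print(np.allclose(Sigma_new, correct))  # True
print(np.allclose(Sigma_new, claimed))  # False: the displayed formula needs U = I
```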


13. Circle Network Example Violates Assumption 2 (Repeated Eigenvalues)

Status: [Pending]

Quote:

Example 3 (Illustration in the case of the circle).

Figure 1 depicts six of the eigenvectors/principal components of a circle network with 14 nodes.

Feedback: The eigenvalues of the n-node cycle graph are λ_k = 2cos(2πk/n). For n = 14, λ_k = λ_{14−k} for k = 1, …, 6, giving 6 pairs of repeated eigenvalues. This directly violates Assumption 2 (distinct eigenvalues), which is explicitly invoked in Proposition 4 that Example 3 is meant to illustrate. With repeated eigenvalues, the eigenvectors u^ℓ(G) are not uniquely determined and the variances Var(u^ℓ(G)·b*) are basis-dependent, making the proposition's statement ambiguous for this example. Note that the problem is not specific to even n: since cos(2πk/n) = cos(2π(n−k)/n), every cycle with n ≥ 3 has paired eigenvalues, so no choice of node count restores distinctness. It would be helpful to either perturb the circle network slightly so that Assumption 2 holds, or to state explicitly how Proposition 4 should be read when eigenspaces are two-dimensional.
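The multiplicity pattern is easy to confirm numerically. The helpers below are our own sketch; note that the n = 13 cycle also has paired eigenvalues (cos(2πk/n) = cos(2π(n−k)/n) for any n), so an odd node count alone does not restore distinctness:

```python
import numpy as np

def cycle_eigs(n):
    # eigenvalues of the n-node cycle: 2*cos(2*pi*k/n), k = 0..n-1
    return np.sort(2 * np.cos(2 * np.pi * np.arange(n) / n))

def num_repeated_pairs(vals, tol=1e-9):
    # count adjacent (numerically) equal eigenvalues in the sorted list
    return int(np.sum(np.diff(vals) < tol))

print(num_repeated_pairs(cycle_eigs(14)))  # 6 pairs: violates Assumption 2
print(num_repeated_pairs(cycle_eigs(13)))  # also 6 pairs: odd n does not help
```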


14. Equilibrium Condition in SVD Extension Incorrectly Squares b̄_ℓ

Status: [Pending]

Quote:

Let ā = Vᵀa and b̄ = Uᵀb; then the equilibrium condition implies that:

ā*_ℓ = (1/s_ℓ) b̄²_ℓ,

Feedback: From the SVD M = USV^T, the equilibrium a* = M^{−1}b = VS^{−1}U^Tb. Defining ā = V^Ta and b̄ = U^Tb gives Sā* = b̄, so ā_ℓ = b̄_ℓ/s_ℓ — without squaring. The stated formula ā_ℓ = b̄²_ℓ/s_ℓ is inconsistent with the linear equilibrium condition. With the correct formula, W = wā*^Tā* = w Σ_ℓ b̄²_ℓ/s²_ℓ, which is quadratic in b̄_ℓ and yields α_ℓ = 1/s²_ℓ as claimed. The squaring of b̄_ℓ in the stated formula would make W quartic and would not support the analogy with the symmetric case. It would be helpful to rewrite as ā*_ℓ = b̄_ℓ/s_ℓ.
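The corrected equilibrium formula is straightforward to verify against a random invertible M (a numpy sketch; the matrix and vector are arbitrary stand-ins of ours). The second check confirms that the welfare weights α_ℓ = 1/s²_ℓ follow from the unsquared formula:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n)) + 3 * np.eye(n)  # generic invertible interaction matrix
b = rng.standard_normal(n)

U, s, Vt = np.linalg.svd(M)     # M = U @ diag(s) @ Vt
a_star = np.linalg.solve(M, b)  # equilibrium a* = M^{-1} b

a_rot = Vt @ a_star             # a-bar = V^T a*
b_rot = U.T @ b                 # b-bar = U^T b

print(np.allclose(a_rot, b_rot / s))                          # True: no squaring
print(np.allclose(a_star @ a_star, np.sum(b_rot**2 / s**2)))  # True: alpha_l = 1/s_l^2
```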


15. Domain of x₁ Written as C/b̂₁ Instead of √C/b̂₁

Status: [Pending]

Quote:

Lemma OA2 implies that μ as a function of x₁ ∈ [−C/b̂₁, Cb̂₁] is U-shaped; the slope is −∞ at x₁ = −C/b̂₁ and +∞ at x₁ = C/b̂₁; and it reaches a minimum at x₁ = 0.

Feedback: The feasibility constraint C(x₁) = C − b̲̂²₁x²₁ ≥ 0 gives |x₁| ≤ √C/b̲̂₁. Lemma OA2 itself uses the limits x₁ → ±√C/b̂₁ in its statement, confirming the correct domain. The passage writes [−C/b̂₁, Cb̂₁], which has two errors: both endpoints should be ±√C/b̲̂₁. The same error appears in the First Step of the proof. It would be helpful to rewrite as x₁ ∈ [−√C/b̲̂₁, √C/b̲̂₁].


16. w₂ Coefficient Formula Uses m₅ Where m₄ Is Expected

Status: [Pending]

Quote:

w₁ = 1 + m₂ + m₅ + (n−1)m₄
w₂ = nm₅(n−2)
w₃ = √n[m₁ + (n−1)m₃].

Feedback: The squared-aggregate term (Σᵢaᵢ*)² in W arises from the externality m₄(Σ_{j≠i}aⱼ)². Summing over i: Σᵢ m₄(Σ_{j≠i}aⱼ)² = m₄[(n−2)(Σⱼaⱼ)² + Σᵢaᵢ²], giving a coefficient m₄(n−2) on (Σᵢaᵢ*)², so w₂/n = m₄(n−2) and w₂ = nm₄(n−2). The m₅Σ_{j≠i}aⱼ² term contributes to (a*)^Ta* (the w₁ term), not to the squared aggregate. The stated formula w₂ = nm₅(n−2) has m₅ where m₄ is expected. It would be helpful to rewrite as w₂ = nm₄(n−2).
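The summation identity used in this derivation, Σᵢ(Σ_{j≠i}aⱼ)² = (n−2)(Σⱼaⱼ)² + Σᵢaᵢ², can be spot-checked on a random vector (a one-off numerical check of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal(7)
n, S = len(a), a.sum()

lhs = sum((S - a[i]) ** 2 for i in range(n))  # sum_i (sum_{j != i} a_j)^2
rhs = (n - 2) * S ** 2 + np.sum(a ** 2)

print(bool(np.isclose(lhs, rhs)))  # True
```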


17. Lemma OA1 Part 2 States u_i^1(G) = √n But Eigenvectors Must Be Unit Vectors

Status: [Pending]

Quote:

2. λ₁(G) = 1 and uᵢ¹(G) = √n for all i

Feedback: Under Assumption OA1, G·1 = 1 confirms λ₁ = 1 and that 1 is an eigenvector. The spectral decomposition G = UΛU^T requires U to be orthogonal, so ‖u¹‖ = 1. The normalized eigenvector is u¹ = (1/√n)·1, giving u_i^1 = 1/√n for all i, not √n. The value √n is the norm of the unnormalized eigenvector. Part 3 of Lemma OA1 uses this normalization to derive Σᵢaᵢ* = √nα₁^{1/2}b̲₁, which is consistent with u_i^1 = 1/√n. It would be helpful to rewrite as u_i^1(G) = 1/√n for all i.
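The normalization point can be checked on any symmetric matrix with unit row sums. The 4-node example below is our own (a rescaled 4-cycle satisfying Assumption OA1's constant-row-sum condition):

```python
import numpy as np

# symmetric, nonnegative, each row sums to 1 (Assumption OA1)
G = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.5, 0.5, 0.0]])

eigvals, eigvecs = np.linalg.eigh(G)
u1 = eigvecs[:, -1]            # top eigenvector (eigenvalues ascend, so last column)
u1 = u1 * np.sign(u1[0])       # fix the sign convention

n = G.shape[0]
print(bool(np.isclose(eigvals[-1], 1.0)))              # True: lambda_1 = 1
print(bool(np.allclose(u1, np.ones(n) / np.sqrt(n))))  # True: entries 1/sqrt(n), not sqrt(n)
```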


18. Beauty Contest FOC Has β̃ Replaced by b̃ᵢ in the Interaction Coefficient

Status: [Pending]

Quote:

aᵢ = b̃ᵢ/(1 + γ) + ((b̃ᵢ + γ)/(1 + γ)) Σ gᵢⱼaⱼ.

Feedback: Differentiating Uᵢ with respect to aᵢ and using Σⱼgᵢⱼ = 1 gives aᵢ(1+γ) = b̃ᵢ + (β̃+γ)Σⱼgᵢⱼaⱼ, so aᵢ = b̃ᵢ/(1+γ) + (β̃+γ)/(1+γ)·Σⱼgᵢⱼaⱼ. The coefficient on Σgᵢⱼaⱼ should be (β̃+γ)/(1+γ), not (b̃ᵢ+γ)/(1+γ). The printed expression has the individual-specific standalone return b̃ᵢ in the numerator of the second term, which is dimensionally inconsistent and makes the coefficient agent-specific, destroying the uniform best-response structure needed for the mapping to condition (2). This is consistent with the subsequent definition β = (β̃+γ)/(1+γ). It would be helpful to rewrite as aᵢ = b̃ᵢ/(1+γ) + (β̃+γ)/(1+γ)·Σⱼgᵢⱼaⱼ.


19. Cross-Effect Sign Condition in Beauty Contest Is Stated Incorrectly

Status: [Pending]

Quote:

an increase in j's action has a positive effect on individual i's utility if and only if aⱼ < aᵢ.

Feedback: Computing ∂Uᵢ/∂aⱼ = gᵢⱼ[β̃aᵢ − γ(aⱼ − aᵢ)] = gᵢⱼ[(β̃+γ)aᵢ − γaⱼ]. This is positive when aⱼ < (β̃+γ)/γ · aᵢ, not simply when aⱼ < aᵢ (which would require β̃ = 0, contradicting β̃ > 0). For β̃ > 0, the threshold is strictly above aᵢ, so the cross-effect is positive over a wider range than claimed. It would be helpful to rewrite as 'an increase in j's action has a positive effect on individual i's utility if and only if aⱼ < ((β̃+γ)/γ)aᵢ.'
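The corrected threshold is easy to sanity-check with concrete numbers, assuming the cross-partial ∂Uᵢ/∂aⱼ = gᵢⱼ[(β̃+γ)aᵢ − γaⱼ] computed above (the parameter values here are stand-ins of ours):

```python
beta_t, gamma, g_ij = 0.5, 1.0, 0.25  # assumed beta-tilde, gamma, g_ij values

def cross_effect(a_i, a_j):
    # dU_i/da_j = g_ij * [(beta_t + gamma) * a_i - gamma * a_j]
    return g_ij * ((beta_t + gamma) * a_i - gamma * a_j)

a_i = 1.0
threshold = (beta_t + gamma) / gamma * a_i  # 1.5, strictly above a_i = 1.0

print(cross_effect(a_i, 1.2) > 0)  # True even though a_j > a_i
print(cross_effect(a_i, 1.6) < 0)  # True: beyond the corrected threshold
```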


20. Cost Formula in IT-P Has a Spurious Factor of 1/2 on the First Sum

Status: [Pending]

Quote:

K(y) = ½ Σᵢ 1_{yᵢ>0} ∫_{aᵢ(y)−yᵢ}^{aᵢ(y)} sᵢ¹(τᵢ) dτᵢ + Σᵢ (1 − 1_{yᵢ>0}) ∫_{aᵢ(y)}^{aᵢ(y)+|yᵢ|} sᵢ⁰(τᵢ) dτᵢ = ½ Σᵢ yᵢ²

Feedback: For the action-1 subsidy case (yᵢ > 0): ∫_{aᵢ−yᵢ}^{aᵢ} (τᵢ − (aᵢ−yᵢ)) dτᵢ = yᵢ²/2. For the action-0 subsidy case (yᵢ < 0): ∫_{aᵢ}^{aᵢ+|yᵢ|} ((aᵢ+|yᵢ|) − τᵢ) dτᵢ = yᵢ²/2. Both integrals already yield yᵢ²/2, so the total cost is K(y) = Σᵢ yᵢ²/2, matching the stated result. The leading 1/2 on the first sum is therefore spurious: it would give (1/2)·yᵢ²/2 = yᵢ²/4 for the action-1 case, inconsistent with the final expression. It would be helpful to remove the leading 1/2 from the first sum.
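Both integrals can be checked numerically, assuming the linear marginal-subsidy schedules implied by the computation above (the midpoint rule used here is exact for linear integrands; the values of aᵢ and y are arbitrary):

```python
import numpy as np

def midpoint_integral(f, lo, hi, m=1000):
    # midpoint rule; exact for linear integrands
    t = np.linspace(lo, hi, m, endpoint=False) + (hi - lo) / (2 * m)
    return float(np.sum(f(t)) * (hi - lo) / m)

a_i, y = 2.0, 0.7

# action-1 subsidy (y_i > 0): s^1(tau) = tau - (a_i - y_i) on [a_i - y_i, a_i]
I1 = midpoint_integral(lambda t: t - (a_i - y), a_i - y, a_i)
# action-0 subsidy (y_i < 0): s^0(tau) = (a_i + |y_i|) - tau on [a_i, a_i + |y_i|]
I0 = midpoint_integral(lambda t: (a_i + abs(y)) - t, a_i, a_i + abs(y))

print(abs(I1 - y**2 / 2) < 1e-12)  # True: the integral alone already gives y^2/2
print(abs(I0 - y**2 / 2) < 1e-12)  # True
```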


21. Proof of Lemma OA3 Does Not Establish the Claimed Uniform Convergence

Status: [Pending]

Quote:

Consider the Taylor expansion of κ\kappa around 0\bm{0} (κ\kappa is defined by part (1) of the assumption). We will now study its properties under parts (2) to (5) of Assumption OA2. (5) ensures that the Taylor expansion exists. Local separability (4) says that there are no terms of the form yiyjy_{i}y_{j}. Non-negativity (3) (κ\kappa is nonnegative and κ(0)=0\kappa(\bm{0})=0) implies that all first-order terms are zero. Also, (5) says that terms of the form yi2y_{i}^{2} must have positive coefficients, and symmetry (2) says that their coefficients must all be the same. ∎

Feedback: The proof identifies the leading term of κ as k‖z‖² but never establishes the uniform convergence C⁻¹κ(C^{1/2}z) → k‖z‖² on compact sets, which is the operative claim invoked in the subsequent application of Berge's Theorem. Writing κ(C^{1/2}z) = kC‖z‖² + R(C^{1/2}z) where R(y) = O(‖y‖³), we get C⁻¹κ(C^{1/2}z) = k‖z‖² + C⁻¹R(C^{1/2}z). On any compact set K, sup_{z∈K}|C⁻¹R(C^{1/2}z)| = O(C^{1/2}sup_{z∈K}‖z‖³) → 0 uniformly. This argument is elementary but must be written out, since the proof of Proposition OA1 explicitly invokes 'the convergence of the objective is actually uniform on K by the Lemma.' It would be helpful to add this uniform convergence argument before the ∎.
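The missing step can be illustrated with a toy cost function. The κ below is our own choice (quadratic leading term plus a cubic remainder), a sketch of the argument rather than the paper's κ; the sup of the approximation error over a compact set shrinks like C^{1/2} as C → 0:

```python
import numpy as np

k = 2.0
def kappa(y):
    # assumed cost: quadratic leading term k*||y||^2 plus a cubic remainder
    return k * np.sum(y**2) + np.sum(y**3)

rng = np.random.default_rng(3)
Z = rng.uniform(-1, 1, size=(500, 3))
Z = Z[np.linalg.norm(Z, axis=1) <= 1.0]  # a compact set K of points with ||z|| <= 1

def sup_error(C):
    # sup over K of |C^{-1} kappa(C^{1/2} z) - k ||z||^2|
    return max(abs(kappa(np.sqrt(C) * z) / C - k * np.sum(z**2)) for z in Z)

errs = [sup_error(C) for C in (1.0, 0.1, 0.01)]
print(errs[0] > errs[1] > errs[2])  # True: the uniform error vanishes as C -> 0
```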


22. Proof of Proposition OA3 Uses an Incorrect Convexity Argument

Status: [Pending]

Quote:

Note that L is contained in a convex set

E = {b : W(b) ≤ W*}.

The point b* is contained in the interior of L; thus b* is in the interior of E. On the other hand, b* must be on the (elliptical) boundary of E because U is strictly increasing in each component (by irreducibility of the network) and continuous. This is a contradiction.

Feedback: Since b* maximizes W, W(b*) = W*, so b* lies on the boundary of E = {b : W(b) ≤ W*}, not in its interior. The proof claims b* is in the interior of E (because it is in the interior of L ⊆ E), then says b* is on the boundary of E — but there is no contradiction if b* is on the boundary in both steps. The correct argument uses strict convexity of W(b) = a(b)^Ta(b) (a positive-definite quadratic form in b, since [I−βG]^{−1} is positive definite under Assumption 2): if b* is in the interior of a line segment L ⊆ F, there exist points on L with strictly greater W, contradicting optimality. It would be helpful to rewrite the contradiction step accordingly.


23. Rescaled Problem IT-hat(C) Uses Wrong Maximization Variable

Status: [Pending]

Quote:

max_b C⁻¹Δ(C^{1/2}y̌)  (ÎT(C))  s.t. C⁻¹κ(C^{1/2}y̌) ≤ 1.

Feedback: After the change of variables ỹ = C^{−1/2}y (where y = b − b̂), the natural decision variable is ỹ, not b. Writing 'max_b' in the rescaled problem is inconsistent with the subsequent analysis, which treats ỹ as the argument of both the objective and the constraint. The same issue appears in the two subsequent optimization problems. It would be helpful to rewrite 'max_b' as 'max_{ỹ}' throughout the rescaled problems.


24. Change-of-Variables Formula in Example 2 Is Self-Referential

Status: [Pending]

Quote:

Performing the change of variables $b_{i}=[\tau-b_{i}]/2$ and $\beta=-\tilde{\beta}/2$ (with the status quo equal to $\hat{b}_{i}=[\tau-\tilde{b}_{i}]/2$) yields a best-response structure exactly as in condition (2).

Feedback: The formula bᵢ = [τ − bᵢ]/2 is self-referential (it implies bᵢ = τ/3 regardless of b̃ᵢ). Re-deriving from the FOC: 2aᵢ = τ − b̃ᵢ − β̃Σⱼgᵢⱼaⱼ, so setting bᵢ = (τ − b̃ᵢ)/2 and β = −β̃/2 gives aᵢ = bᵢ + βΣⱼgᵢⱼaⱼ, matching condition (2). The status-quo formula b̂ᵢ = [τ − b̃ᵢ]/2 is correctly stated. This is a typographical error where b̃ᵢ was written as bᵢ inside the bracket. It would be helpful to rewrite as bᵢ = [τ − b̃ᵢ]/2.
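The corrected substitution can be verified mechanically. The sketch below (with arbitrary, hypothetical parameter values) solves the original FOC system and checks that setting bᵢ = (τ − b̃ᵢ)/2 and β = −β̃/2 reproduces condition (2):

```python
import numpy as np

# Hypothetical small example. Original FOC:
#   2*a_i = tau - btilde_i - betatilde * sum_j g_ij a_j,
# i.e. (2I + betatilde*G) a = tau*1 - btilde.
rng = np.random.default_rng(0)
n = 4
G = rng.random((n, n))
G = (G + G.T) / 2          # symmetric interaction matrix
np.fill_diagonal(G, 0.0)
tau, betatilde = 1.0, 0.3
btilde = rng.random(n)

a = np.linalg.solve(2 * np.eye(n) + betatilde * G, tau * np.ones(n) - btilde)

# Corrected substitution: b_i = (tau - btilde_i)/2 and beta = -betatilde/2.
b = (tau - btilde) / 2
beta = -betatilde / 2

# Condition (2) form: a_i = b_i + beta * sum_j g_ij a_j.
assert np.allclose(a, b + beta * G @ a)
print("substitution b_i = (tau - btilde_i)/2 reproduces condition (2)")
```

Dividing the FOC by 2 gives aᵢ = (τ − b̃ᵢ)/2 − (β̃/2)Σⱼgᵢⱼaⱼ, which is exactly what the assertion checks.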


25. Concluding Remark Overstates the Scope of the Property A Relaxation

Status: [Pending]

Quote:

We also relax Property A, a technical condition which facilitated our basic analysis, and cover a more general class of externalities.

Feedback: The relaxation of Property A in OA3.1 is achieved by imposing Assumption OA1 (constant row sums on G), which is itself absent from the main model and rules out networks with heterogeneous degree distributions. As Theorem OA1 shows, when Property A fails the large-budget limit can converge to the second rather than the first principal component — a qualitatively different prescription. The conclusion's phrasing gives the impression of an unconditional generalization, whereas one restriction is substituted for another. It would be helpful to rewrite as 'We also partially relax Property A under an additional constant-row-sum condition on the interaction matrix (Assumption OA1 in the Online Appendix), covering a broader class of externalities subject to that restriction.'


Targeting Interventions in Networks

Date: 3/3/2026, 8:59:21 PM Domain: Example Taxonomy: Demo Filter: Active comments


Overall Feedback

Central Claim The paper characterizes optimal incentive targeting in network games by decomposing policies along the principal components of the interaction matrix, finding that optimal interventions load on top eigenvectors for strategic complements and bottom eigenvectors for substitutes, eventually converging to simple single-component policies as budgets increase.

Main Areas for Reflection

  • Generalizing the welfare specification The current analysis relies on Property A, where aggregate welfare takes the form $W(b,G) \propto a^{*\top} a^{*}$. To assist readers interested in broader applications, it may be helpful to briefly discuss how the principal-component logic extends to general quadratic welfare forms. Incorporating a result from the current appendices into the main text could concisely demonstrate that the high-versus-low spectral targeting insight remains robust beyond this specific scalar structure.

  • Asymmetries and directed networks Since the decomposition $G=U\Lambda U^{\top}$ assumes symmetry, readers might naturally wonder about the applicability to directed networks common in economic settings. A short derivation or proposition clarifying how the analysis adapts, perhaps by focusing on the symmetric component $(G+G^{\top})/2$ or singular vectors, would help clarify the scope. This could reinforce the qualitative findings regarding spectral modes without requiring a complete re-derivation of the model.

  • Instrument costs and heterogeneity The optimization currently exploits the rotational invariance of the cost function $K(b,\hat b)=\sum_i (b_i-\hat b_i)^2$. It might be beneficial to briefly address how the conceptual insights translate when costs are heterogeneous or when instruments operate on actions directly (e.g., subsidies). A short note mapping these more complex instruments into the existing $b$-space framework could clarify the conditions under which the eigenbasis decoupling remains a valid guide for policy.

  • Distinction from centrality measures Given the rich literature on spectral centrality, distinguishing the specific contributions of this decomposition approach is valuable. It could be illuminating to explicitly contrast the "bottom-eigenvector" targeting for substitutes against standard centrality heuristics found in prior work. A simple worked example where this method's recommendations diverge from traditional key-player policies would effectively highlight the distinct economic prescriptions offered by this framework.

Status: [Pending]


Detailed Comments (6)

1. Limiting direction of intervention in Proposition 1

Status: [Pending]

Quote:

If $\beta>0$ (the game features strategic complements), then the similarity of $\boldsymbol{y}^{*}$ and the first principal component of the network tends to 1: $\rho\left(\boldsymbol{y}^{*}, \boldsymbol{u}^{1}(\boldsymbol{G})\right) \rightarrow 1$. 2b. If $\beta<0$ (the game features strategic substitutes), then the similarity of $\boldsymbol{y}^{*}$ and the last principal component of the network tends to 1: $\rho\left(\boldsymbol{y}^{*}, \boldsymbol{u}^{n}(\boldsymbol{G})\right) \rightarrow 1$.

Feedback: In Proposition 1(2a–b) you state that, as $C\to\infty$, the cosine similarity between the optimal intervention and the relevant principal component converges to 1, and you later write that $\boldsymbol{y}^*\to\sqrt{C}\,\boldsymbol{u}^1(\boldsymbol{G})$ (or $\sqrt{C}\,\boldsymbol{u}^n(\boldsymbol{G})$).

Given Theorem 1 and the budget constraint, what can generally be shown under Assumptions 1–3 and Property A is that $\rho(\boldsymbol{y}^*,\boldsymbol{u}^1)^2\to 1$ when $\beta>0$ (and analogously for $\boldsymbol{u}^n$ when $\beta<0$). For $w>0$ we have $x_\ell^* = w\alpha_\ell/(\mu-w\alpha_\ell)\ge 0$ and $\underline{y}_\ell^* = \underline{\hat b}_\ell x_\ell^*$. As $C\to\infty$, the component $\underline{y}_1^*$ dominates the budget, with $(\underline{y}_1^*)^2\sim C$, so
$$\rho(\boldsymbol{y}^*,\boldsymbol{u}^1) = \frac{\underline{y}_1^*}{\sqrt{C}} \to \mathrm{sign}(\hat{\boldsymbol{b}}\cdot\boldsymbol{u}^1),$$
and similarly with $\boldsymbol{u}^n$ in the substitutes case. Thus, for a fixed orientation of the eigenvectors, the limit of the cosine similarity is $\pm 1$, with the sign determined by $\hat{\boldsymbol{b}}\cdot\boldsymbol{u}^1$ (or $\hat{\boldsymbol{b}}\cdot\boldsymbol{u}^n$).

Since eigenvectors are only defined up to sign, one can always reorient $\boldsymbol{u}^1$ (or $\boldsymbol{u}^n$) so that the limiting cosine similarity is $+1$, and the economic content of the result is that the intervention concentrates on the corresponding one-dimensional eigenspace. It would nonetheless be helpful to make this explicit, e.g., by formulating Proposition 1 in terms of $|\rho(\boldsymbol{y}^*,\boldsymbol{u}^1)|\to 1$ (and $|\rho(\boldsymbol{y}^*,\boldsymbol{u}^n)|\to 1$), or by stating any normalization of eigenvectors and/or restriction on the status quo vector that guarantees $\hat{\boldsymbol{b}}\cdot\boldsymbol{u}^1>0$ and $\hat{\boldsymbol{b}}\cdot\boldsymbol{u}^n>0$.


2. Comparative static claim in Footnote 16

Status: [Pending]

Quote:

It can be verified that, for every $\ell \in\{1, \ldots, n-1\}$, the ratio $x_{\ell} / x_{\ell+1}$ is increasing (decreasing) in $\beta$ for the case of strategic complements (substitutes): thus the intensity of the strategic interaction shapes the relative importance of different principal components.

Feedback: Footnote 16 states that, for each $\ell$, the ratio $x_\ell/x_{\ell+1}$ is increasing in $\beta$ when $\beta>0$ and decreasing in $\beta$ when $\beta<0$. Using the expression from Theorem 1, $x_\ell^* = w\alpha_\ell/(\mu - w\alpha_\ell)$ with $\alpha_\ell=(1-\beta\lambda_\ell)^{-2}$ and $\mu$ determined by (6), one can look at the small-budget limit $C\to 0$, where $\mu\to\infty$ and hence
$$\frac{x_\ell^*}{x_{\ell+1}^*}\to \frac{\alpha_\ell}{\alpha_{\ell+1}} = \left(\frac{1-\beta\lambda_{\ell+1}}{1-\beta\lambda_\ell}\right)^2.$$
Differentiating this limiting expression with respect to $\beta$ gives a strictly positive derivative for all admissible $\beta$, irrespective of the sign of $\beta$. Thus, at least for small budgets, the ratio $x_\ell^*/x_{\ell+1}^*$ is increasing in $\beta$ both for strategic complements and for strategic substitutes, contrary to the literal wording of the footnote.

This is a local, not a global, calculation, but by continuity it is hard to reconcile with the claim that the ratio is everywhere decreasing in β\beta when β<0\beta<0. It would be helpful to double‑check this comparative static and either (i) reverse the direction for the substitutes case, or (ii) clarify that the intended statement is about monotonicity in the intensity β|\beta| rather than in β\beta itself, or else drop the claim.
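The small-budget calculation is easy to check numerically. The snippet below uses a hypothetical eigenvalue pair (not taken from the paper) and confirms that the limiting ratio is increasing in β on both sides of zero:

```python
import numpy as np

# Hypothetical eigenvalue pair with lam1 > lam2, as in the footnote's setting.
lam1, lam2 = 0.8, -0.5

def limit_ratio(beta):
    # Small-budget limit of x_ell / x_{ell+1}: alpha_ell / alpha_{ell+1}
    # with alpha = (1 - beta*lambda)^(-2).
    return ((1 - beta * lam2) / (1 - beta * lam1)) ** 2

# Betas on both sides of zero, all keeping 1 - beta*lambda > 0 for both
# eigenvalues (so the expressions are well defined).
betas = np.linspace(-1.0, 1.0, 201)
vals = np.array([limit_ratio(b) for b in betas])

# The ratio is strictly increasing in beta throughout, including beta < 0,
# contrary to the footnote's "decreasing for substitutes" wording.
assert np.all(np.diff(vals) > 0)
print("limit ratio increasing in beta on [-1, 1]")
```

The monotonicity follows analytically too: the derivative of the inner ratio is proportional to $\lambda_\ell - \lambda_{\ell+1} > 0$, and the ratio itself is positive, so its square is increasing in β regardless of the sign of β.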


3. Sign error in discussion of Proposition 2 (substitutes case)

Status: [Pending]

Quote:

If the game has the strategic substitutes property, $\beta<0$, then for any $\epsilon>0$, if $C> \frac{2\|\hat{\boldsymbol{b}}\|^{2}}{\epsilon}\left(\frac{\alpha_{n-1}}{\alpha_{n}-\alpha_{n-1}}\right)^{2}$, then... ... If $\beta<0$, then the term $\alpha_{n-1} /\left(\alpha_{n-1}-\alpha_{n}\right)$ is large when the difference $\lambda_{n-1}-\lambda_{n}$, which we call the "bottom gap," is small.

Feedback: In the paragraph interpreting Proposition 2 for the substitutes case, the factor governing the bound on $C$ is described as $\alpha_{n-1}/(\alpha_{n-1}-\alpha_n)$, whereas in the formal statement of the proposition the factor is $\alpha_{n-1}/(\alpha_n-\alpha_{n-1})$ inside a square. Since for $\beta<0$ one has $\alpha_n>\alpha_{n-1}$, the latter uses a positive denominator and matches the expression used in the proof, while the former has the opposite sign. Because the ratio is squared in the bound, this does not affect the actual condition on $C$, but it introduces an avoidable notational inconsistency. It would be helpful to align the discussion with the proposition (using $\alpha_n-\alpha_{n-1}$) so that the "corresponding factor for bottom $\alpha$'s" is written with the same ordering throughout.


4. Unclear notation in proof of Proposition 2

Status: [Pending]

Quote:

Cosine similarity. We now turn to the cosine similarity result. We focus on the case of strategic complements. The proof for the case of strategic substitutes is analogous. We start by writing a useful explicit expression for $\rho\left(\Delta \boldsymbol{b}^{*}, \sqrt{C} \boldsymbol{u}^{1}\right)$:

$$\rho\left(\Delta \boldsymbol{b}^{*}, \sqrt{C} \boldsymbol{u}^{1}\right)=\frac{\left(\boldsymbol{b}^{*}-\hat{\boldsymbol{b}}\right) \cdot\left(\sqrt{C} \boldsymbol{u}^{1}\right)}{\left\|\boldsymbol{b}^{*}-\hat{\boldsymbol{b}}\right\|\left\|\sqrt{C} \boldsymbol{u}^{1}\right\|}=\frac{\left(\boldsymbol{b}^{*}-\hat{\boldsymbol{b}}\right) \cdot \boldsymbol{u}^{1}}{\sqrt{C}},$$

where the last equality follows because, at the optimum, $\left\|\boldsymbol{b}^{*}-\hat{\boldsymbol{b}}\right\|^{2}=C$. ... Hence, using this in equation (12), we can deduce that

$$\rho\left(\Delta \boldsymbol{b}^{*}, \boldsymbol{u}^{1}\right)=\frac{1}{\sqrt{C}} \frac{w \alpha_{1}}{\mu-w \alpha_{1}} \hat{b}_{1} \geq \sqrt{1-\epsilon} \quad \text{iff} \quad \left(\frac{w \alpha_{1}}{\mu-w \alpha_{1}}\right)^{2} \hat{b}_{1}^{2}-C(1-\epsilon) \geq 0.$$

Feedback: The cosine-similarity part of the proof of Proposition 2 is initially hard to parse because of notation. The vector $\Delta\boldsymbol{b}^*$ is not explicitly defined, and the text switches between $\rho(\Delta\boldsymbol{b}^*,\sqrt{C}\boldsymbol{u}^1)$ and $\rho(\Delta\boldsymbol{b}^*,\boldsymbol{u}^1)$. From the displayed equality

$$\rho(\Delta\boldsymbol{b}^*,\sqrt{C}\boldsymbol{u}^1) = \frac{(\boldsymbol{b}^*-\hat{\boldsymbol{b}})\cdot(\sqrt{C}\boldsymbol{u}^1)}{\|\boldsymbol{b}^*-\hat{\boldsymbol{b}}\|\,\|\sqrt{C}\boldsymbol{u}^1\|},$$

one can infer that $\Delta\boldsymbol{b}^*=\boldsymbol{b}^*-\hat{\boldsymbol{b}}$, and using $\|\boldsymbol{b}^*-\hat{\boldsymbol{b}}\|^2=C$ and $\|\boldsymbol{u}^1\|=1$ it is easy to verify that $\rho(\Delta\boldsymbol{b}^*,\sqrt{C}\boldsymbol{u}^1)=\rho(\Delta\boldsymbol{b}^*,\boldsymbol{u}^1)$. Still, defining $\Delta\boldsymbol{b}^*$ explicitly (or sticking to $\boldsymbol{y}^*$) and noting the scale invariance of cosine similarity would make this part of the proof smoother to follow.
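The scale invariance invoked here is elementary and can be confirmed in a few lines with arbitrary test data:

```python
import numpy as np

# Check that cosine similarity is invariant to positive rescaling of either
# argument, so rho(db, sqrt(C)*u) = rho(db, u).
def rho(x, y):
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

rng = np.random.default_rng(1)
n, C = 5, 7.3
db = rng.standard_normal(n)          # stands in for b* - bhat
u = rng.standard_normal(n)
u /= np.linalg.norm(u)               # unit-norm eigenvector stand-in

assert np.isclose(rho(db, np.sqrt(C) * u), rho(db, u))

# And when ||db||^2 = C, rho(db, u) = (db . u)/sqrt(C), as in the proof.
db = np.sqrt(C) * db / np.linalg.norm(db)
assert np.isclose(rho(db, u), db @ u / np.sqrt(C))
print("cosine similarity is scale-invariant")
```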


5. "Maximizer" vs. "minimizer" for smallest eigenvalues

Status: [Pending]

Quote:

Turning next to strategic substitutes, recall that the smallest two eigenvalues, $\lambda_{n}$ and $\lambda_{n-1}$, can be written as follows:

$$\lambda_{n}=\min_{\boldsymbol{u}:\|\boldsymbol{u}\|=1} \sum_{i, j \in \mathcal{N}} g_{i j} u_{i} u_{j}, \quad \lambda_{n-1}=\min_{\substack{\boldsymbol{u}:\|\boldsymbol{u}\|=1 \\ \boldsymbol{u} \cdot \boldsymbol{u}^{n}=0}} \sum_{i, j \in \mathcal{N}} g_{i j} u_{i} u_{j}.$$

Moreover, the eigenvector $\boldsymbol{u}^{n}$ is a maximizer of the first problem, while $\boldsymbol{u}^{n-1}$ is a maximizer of the second; these are uniquely determined under Assumption 2.

Feedback: In the substitutes discussion you correctly write the Rayleigh–Ritz characterizations for $\lambda_n$ and $\lambda_{n-1}$ as minimization problems, but then state that $\boldsymbol{u}^n$ and $\boldsymbol{u}^{n-1}$ are "maximizers" of these problems. Since the display uses $\min$ and the next sentence reintroduces $\boldsymbol{u}^n$ via an explicit $\arg\min$, these eigenvectors should be described as minimizers, not maximizers. This is clearly just a wording slip and does not affect any results, but it would be good to correct it for consistency with the complements case and with standard spectral terminology.


6. Typo in Lagrangian in proof of Theorem 1

Status: [Pending]

Quote:

Observe that the Lagrangian corresponding to the maximization problem is

$$\mathcal{L}=w \sum_{\ell=1}^{n} \alpha_{\ell}\left(1+x_{\ell}\right)^{2} \underline{\hat{b}}_{\ell}+\mu\left[C-\sum_{\ell=1}^{n} \hat{b}_{\ell}^{2} x_{\ell}^{2}\right].$$

Taking our observation above that the constraint is binding at $\boldsymbol{x}=\boldsymbol{x}^{*}$, together with the standard results on the Karush-Kuhn-Tucker conditions, the first-order conditions must hold exactly at the optimum with a positive $\mu$:

Feedback: The displayed Lagrangian in the proof of Theorem 1 appears to omit a square on the status-quo component. Just above, the problem in the $x_\ell$ variables is written as

$$\max_{\boldsymbol{x}}\; w\sum_{\ell=1}^n \alpha_\ell (1+x_\ell)^2 \underline{\hat b}_\ell^2 \quad \text{s.t. } \sum_{\ell=1}^n \underline{\hat b}_\ell^2 x_\ell^2 \le C,$$

so the corresponding Lagrangian should involve $\underline{\hat b}_\ell^2$ in the first term as well. The first-order condition you derive immediately afterward matches the derivative of this corrected Lagrangian, not the one as currently printed. This is a minor typographical issue, but correcting the exponent (and, if desired, harmonizing the underline notation for $\hat b_\ell$) would remove the internal inconsistency in this display.
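A quick numerical check (all parameter values hypothetical) confirms that Theorem 1's solution satisfies the first-order condition of the corrected Lagrangian, but not of the Lagrangian as printed:

```python
# Hypothetical parameter values; mu > w*alpha so that x* > 0.
w, alpha, bhat, mu = 0.7, 1.4, 0.9, 2.5

def dL_dx_corrected(x):
    # derivative in x of  w*alpha*(1+x)^2*bhat^2 + mu*(C - bhat^2*x^2)
    return 2 * w * alpha * (1 + x) * bhat**2 - 2 * mu * bhat**2 * x

def dL_dx_printed(x):
    # derivative of the Lagrangian as printed (first term missing the
    # square on bhat)
    return 2 * w * alpha * (1 + x) * bhat - 2 * mu * bhat**2 * x

# Theorem 1's solution: x* = w*alpha / (mu - w*alpha).
x_star = w * alpha / (mu - w * alpha)

assert abs(dL_dx_corrected(x_star)) < 1e-12   # FOC holds for corrected version
assert abs(dL_dx_printed(x_star)) > 1e-6      # ...but not as printed (bhat != 1)
print("x* satisfies only the corrected FOC")
```

Since the printed first term differs from the corrected one by a factor of bhat, the two first-order conditions coincide only when $\hat b_\ell = 1$, which is what the second assertion illustrates.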