Multi-agent Trust Expansion¶
Introduction¶
Booth and Hunter [BH18] consider how to weaken incoming information based on the trust placed in the information source. This is used as a pre-processing step before AGM belief revision. Specifically, it is assumed that each agent can only be trusted to distinguish between certain states, and if states \(s\) and \(t\) are indistinguishable to the source, then evidence for \(s\) should also be seen as evidence for \(t\).
At the end of the paper, they extend this model to handle conflicting information from multiple agents. This document looks further at the expansion method they introduce.
Framework¶
Consider a finite and non-empty set of states \(S\) and \(n\) agents \(A = \{1,\ldots,n\}\). A report is a subset \(U \subseteq S\). A profile is a vector \(\vec{U} = (U_1,\ldots,U_n)\) of reports from each agent. An expansion function \(f: (2^S)^n \to 2^S\) maps each profile \(\vec{U}\) to a set of states \(f(\vec{U})\).
Intuitively, a profile \(\vec{U}\) represents each agent \(i\) reporting that the true state of the world lies in \(U_i\). The set \(f(\vec{U})\) is the set of states that are intuitively compatible with all reports \(U_i\), after taking into account the trust in agent \(i\).
Metric-based expansion¶
Let \(D = (d_1,\ldots,d_n)\) be a collection of pseudo-ultrametrics on \(S\), i.e. each \(d_i\) is a function \(S \times S \to \mathbb{R}\) satisfying the following properties, for all \(x, y, z \in S\):
\(d_i(x, y) \ge 0\) and \(d_i(x, x) = 0\)
\(d_i(x, y) = d_i(y, x)\)
\(d_i(x, y) \le \max\{d_i(x, z), d_i(z, y)\}\)
Intuitively, \(d_i(x, y)\) represents how able agent \(i\) is to distinguish between states \(x\) and \(y\): the smaller \(d_i(x, y)\), the more readily agent \(i\) may confuse \(x\) and \(y\). In the extreme case \(d_i(x, y) = 0\), agent \(i\) cannot distinguish between \(x\) and \(y\) at all.
The idea of Booth and Hunter is that upon receiving a profile \(\vec{U}\), one can weaken each \(U_i\) by considering balls of increasing radius around each \(x \in U_i\), with respect to the distance \(d_i\). One can then consider the smallest radius such that the weakened \(U_i\)s intersect. In other words, the incoming information is weakened as little as possible so that there is consensus among all the agents. The formal definitions are as follows. Note that while these definitions should be taken with respect to \(D\), we omit \(D\) from the notation.
Definition 1
The (closed) ball of radius \(r \ge 0\) at a point \(x \in S\) with respect to \(d_i\) is
\[B^i_r(x) = \{y \in S \mid d_i(x, y) \le r\}\]
The \(r\)-expansion of a set \(U \subseteq S\) with respect to \(d_i\) is
\[E^i_r(U) = \bigcup_{x \in U}{B^i_r(x)}\]
Finally, the \(r\)-expansion of a profile \(\vec{U}\) is
\[E_r(\vec{U}) = \bigcap_{i \in A}{E^i_r(U_i)}\]
It can be shown that so long as each \(U_i\) is non-empty, there is a minimal radius \(r\) such that \(E_r(\vec{U})\) is non-empty.
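To make these definitions concrete, here is a small Python sketch (not from [BH18]; the two-agent state space, the partition-chain metrics, and all helper names are invented for illustration) computing balls, \(r\)-expansions, and the minimal consistent radius:

```python
# Illustrative sketch of Definition 1 and Proposition 1 on a toy state
# space. The metrics are built from coarsening partition chains, a
# construction which always yields a pseudo-ultrametric.
S = {0, 1, 2, 3}

def metric_from_chain(chain):
    """d(x, y) = first level of the chain putting x and y in one cell."""
    def d(x, y):
        return next(k for k, part in enumerate(chain)
                    if any(x in cell and y in cell for cell in part))
    return d

d1 = metric_from_chain([[{0}, {1}, {2}, {3}], [{0, 1}, {2, 3}], [S]])
d2 = metric_from_chain([[{0, 2}, {1}, {3}], [S]])
D = [d1, d2]

def ball(d, x, r):
    """Closed ball B_r(x) = {y in S : d(x, y) <= r}."""
    return {y for y in S if d(x, y) <= r}

def expand_set(d, U, r):
    """r-expansion E_r(U): union of radius-r balls around points of U."""
    return {y for x in U for y in ball(d, x, r)}

def expand_profile(D, profile, r):
    """E_r of a profile: intersection of the agents' r-expansions."""
    out = set(S)
    for d, U in zip(D, profile):
        out &= expand_set(d, U, r)
    return out

def rho(D, profile):
    """Minimal radius making the expanded reports consistent."""
    if any(not U for U in profile):
        return float("inf")
    radii = sorted({d(x, y) for d in D for x in S for y in S})
    return next(r for r in radii if expand_profile(D, profile, r))
```

For the profile \((\{0\}, \{1\})\) the raw reports conflict, but at radius \(1\) agent 1's report expands to \(\{0,1\}\) and agent 2's to the whole space, so \(\rho = 1\) and the expansion is \(\{0,1\}\).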
Proposition 1
Let \(\vec{U}\) be a profile such that \(U_i \ne \emptyset\) for each \(i\). Then the set \(\{r \ge 0 \mid E_r(\vec{U}) \ne \emptyset\}\) is non-empty and has a minimal element \(\rho\), given by
\[\rho = \min_{y \in S}\max_{i \in A}\min_{x \in U_i}{d_i(x, y)}\]
Proof.
Note that the maxima and minima in \(\rho\) are well-defined since we only quantify over finite (and non-empty) sets. We first prove the following claim.
Claim: For any \(r \ge 0\),
\[E_r(\vec{U}) = \{y \in S \mid \max_{i \in A} \min_{x \in U_i} d_i(x, y) \le r\}\]Proof of claim. Let \(y \in E_r(\vec{U})\). Then for each \(i \in A\) we have \(y \in E^i_r(U_i) = \bigcup_{x \in U_i}{B^i_r(x)}\), so there is \(\hat{x}_i \in U_i\) such that \(d_i(\hat{x}_i, y) \le r\). Consequently
\[\min_{x \in U_i}{d_i(x,y)} \le d_i(\hat{x}_i, y) \le r \tag{*}\]Since this holds for any \(i\), it in particular holds for the \(i\) for which the left hand side of \((*)\) is maximal, i.e. \(\max_{i \in A} \min_{x \in U_i} d_i(x,y) \le r\) as required.
Now suppose \(\max_{i \in A} \min_{x \in U_i} d_i(x,y) \le r\). For each \(i \in A\), let \(\hat{x}_i \in U_i\) be any state in \(U_i\) at which the minimum is achieved, i.e. \(\hat{x}_i \in \argmin_{x \in U_i}{d_i(x, y)}\). Then we have
\[d_i(\hat{x}_i, y) = \min_{x \in U_i}{d_i(x, y)} \le \max_{j \in A}\min_{x \in U_j}{d_j(x, y)} \le r\]so \(y \in B^i_r(\hat{x}_i) \subseteq E^i_r(U_i)\). Since this holds for all \(i \in A\), we have \(y \in \bigcap_{i \in A}{E^i_r(U_i)} = E_r(\vec{U})\) as required. This completes the proof of the claim.
Note that, by the claim, \(E_\rho(\vec{U})\) is exactly the set of \(y \in S\) at which the minimum in the definition of \(\rho\) is achieved. Since this occurs for some \(y\), \(E_\rho(\vec{U})\) is non-empty.
Finally, suppose \(r \ge 0\) and \(E_r(\vec{U}) \ne \emptyset\). Take any \(\hat{y} \in E_r(\vec{U})\). Then by definition of \(\rho\) and the claim,
\[\rho = \min_{y \in S}\max_{i \in A}\min_{x \in U_i}{d_i(x, y)} \le \max_{i \in A}\min_{x \in U_i}{d_i(x, \hat{y})} \le r\]so \(\rho\) is indeed minimal, and this completes the proof.
In light of Proposition 1, write \(\rho(\vec{U}) = \min\{r \ge 0 \mid E_r(\vec{U}) \ne \emptyset\} \in \mathbb{R} \cup \{\infty\}\) for any profile \(\vec{U}\), where \(\min\emptyset := \infty\). Note that \(\rho(\vec{U}) = \infty\) if and only if \(U_i = \emptyset\) for some \(i\).
We can now define expansion functions derived from (pseudo-ultra)metrics.
Definition 2 ([BH18])
The expansion function \(f_D\) is defined by
\[f_D(\vec{U}) = \begin{cases} E_{\rho(\vec{U})}(\vec{U}) & \text{if } \rho(\vec{U}) < \infty \\ \emptyset & \text{otherwise} \end{cases}\]
This definition can be re-stated as follows. For any profile \(\vec{U}\) with \(\emptyset \notin \vec{U}\) and any \(y \in S\), write \(M_{\vec{U}}(y) = \max_{i \in A} \min_{x \in U_i} d_i(x, y)\), and define a total preorder \(\preceq_{\vec{U}}\) by \(y \preceq_{\vec{U}} y'\) iff \(M_{\vec{U}}(y) \le M_{\vec{U}}(y')\). Then, by the claim in the proof of Proposition 1,
\[f_D(\vec{U}) = \{y \in S \mid y \preceq_{\vec{U}} y' \text{ for all } y' \in S\}\]
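This reformulation is easy to implement directly. The sketch below (toy ultrametrics and invented names, not the authors' code) computes the \(\preceq_{\vec{U}}\)-minimal states via \(M_{\vec{U}}\):

```python
# Sketch of the min-max reformulation: f_D selects the states minimising
# M(y) = max_i min_{x in U_i} d_i(x, y). The metrics are toy ultrametrics.
S = {0, 1, 2, 3}
d1 = lambda x, y: 0 if x == y else (1 if {x, y} <= {0, 1} or {x, y} <= {2, 3} else 2)
d2 = lambda x, y: 0 if x == y or {x, y} == {0, 2} else 1
D = [d1, d2]

def M(profile, y):
    """Worst-case (over agents) distance from y to the nearest report."""
    return max(min(d(x, y) for x in U) for d, U in zip(D, profile))

def f(profile):
    """States minimal w.r.t. the total preorder induced by M."""
    best = min(M(profile, y) for y in S)  # this minimum is exactly rho(U)
    return {y for y in S if M(profile, y) == best}
```

For example, \(f((\{0\},\{1\})) = \{0,1\}\): the best achievable worst-case distance is \(1\), attained at both \(0\) and \(1\).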
Remarks.
This reformulation is close in form to the distance-based belief merging operators, where the aggregation function is \(\max\). The difference is that in our setting the distance function depends on the agent.
Does this also bear similarity to minimax in game theory? Imagine a game where we choose a state \(y\), the opponent chooses some \(i \in A\), and the cost incurred is the distance from \(y\) to an element of \(U_i\). Then it seems \(f_D(\vec{U})\) consists of the states \(y\) with the lowest minimax cost.
A straightforward characterisation¶
This section gives a basic characterisation of metric-based expansion functions in terms of a coarsening sequence of state partitions for each agent.
Definition 3
An expansion function \(f\) is hierarchical if there is \(m \in \mathbb{N}_0\) such that for each \(i \in A\) there is a sequence \(\Pi_0^i,\ldots,\Pi_m^i\) of partitions of \(S\) such that
(1) \(\Pi_k^i\) is a refinement of \(\Pi_{k+1}^i\) (i.e. for all \(V \in \Pi_k^i\) there is \(V' \in \Pi_{k+1}^i\) such that \(V \subseteq V'\))
(2) for all profiles \(\vec{U}\) there is \(k \in \{0,\ldots,m\}\) such that \(f(\vec{U}) = \bigcap_{i \in A}{[U_i]_k^i}\), and \(\bigcap_{i \in A}{[U_i]_l^i} = \emptyset\) for \(l < k\)
(where \([x]_k^i\) denotes the cell of \(\Pi_k^i\) containing \(x\), and \([U]_k^i := \bigcup_{x \in U}[x]_k^i\)).
We also require the following property:
non-triviality: If \(U_i \ne \emptyset\) for all \(i \in A\), then \(f(\vec{U}) \ne \emptyset\)
Proposition 2
\(f\) is hierarchical and satisfies non-triviality if and only if \(f = f_D\) for some collection of pseudo-ultrametrics \(D = (d_1,\ldots,d_n)\).
Proof.
“if”: Suppose \(f = f_D\). Non-triviality is immediate from the definition. We show \(f\) is hierarchical. Write \(R = \{\rho(\vec{U}) \mid \vec{U} \in (2^S \setminus \{\emptyset\})^n\} \subseteq \mathbb{R}_{\ge 0}\). Then \(R\) is a finite set, which we can enumerate as \(R = \{r_0,\ldots,r_m\}\) with \(r_k < r_{k+1}\). For each \(i\) and \(k \in \{0,\ldots,m\}\), define the partition \(\Pi_k^i\) by
\[\Pi_k^i = \{B_{r_k}^i(x) \mid x \in S\}\]
We need to show \(\Pi_k^i\) is indeed a partition. Since \(d_i(x,x) = 0\) for any pseudo-ultrametric \(d_i\), we have \(x \in B_{r_k}^i(x)\) and thus \(\Pi_k^i\) covers \(S\). To show the distinct sets in \(\Pi_k^i\) are disjoint, suppose \(z \in B_{r_k}^i(x) \cap B_{r_k}^i(y)\). Let \(w \in B_{r_k}^i(x)\). By the ultrametric inequality, we have
\[d_i(y, w) \le \max\{d_i(y, z), d_i(z, w)\} \le \max\{d_i(y, z), \max\{d_i(z, x), d_i(x, w)\}\} \le r_k\]
i.e. \(w \in B_{r_k}^i(y)\), i.e. \(B_{r_k}^i(x) \subseteq B_{r_k}^i(y)\). By symmetry, \(B_{r_k}^i(y) \subseteq B_{r_k}^i(x)\) also, i.e. \(B_{r_k}^i(x) = B_{r_k}^i(y)\). This shows that \(\Pi_k^i\) is a partition of \(S\).
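The disjoint-or-equal step can also be checked mechanically: under an ultrametric, closed balls of a fixed radius never partially overlap. A brute-force sanity check on an assumed toy metric:

```python
# Check, on a toy ultrametric, that closed balls of a fixed radius
# partition the state space: any two balls are equal or disjoint.
S = {0, 1, 2, 3}

def d(x, y):  # toy ultrametric with nested groups {0,1} and {2,3}
    return 0 if x == y else (1 if {x, y} <= {0, 1} or {x, y} <= {2, 3} else 2)

def ball(x, r):
    return frozenset(y for y in S if d(x, y) <= r)

for r in (0, 1, 2):
    balls = {ball(x, r) for x in S}
    # distinct balls are pairwise disjoint, and together they cover S
    assert all(a == b or not (a & b) for a in balls for b in balls)
    assert set().union(*balls) == S
```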
We now show the two properties in the definition of a hierarchical expansion function.
Let \(V \in \Pi_k^i\), i.e. \(V = B_{r_k}^i(x)\) for some \(x \in S\). Since \(r_k < r_{k+1}\), \(V \subseteq B_{r_{k+1}}^i(x) \in \Pi_{k+1}^i\). Thus \(\Pi_k^i\) refines \(\Pi_{k+1}^i\).
Let \(\vec{U}\) be a profile. If \(\emptyset \in \vec{U}\) then \(f(\vec{U}) = \emptyset\). On the other hand, \([\emptyset]_k^i = \emptyset\) for any \(k\) and \(i\), so (2) holds (e.g. take \(k = 0\)).
Suppose \(\emptyset \notin \vec{U}\). Then \(\vec{U} \in (2^S \setminus \{\emptyset\})^n\), so \(\rho(\vec{U}) \in R\). Take \(k\) such that \(\rho(\vec{U}) = r_k\). Noting that \(E_{\rho(\vec{U})}^i(U_i) = \bigcup_{x \in U_i}{B_{r_k}^i(x)} = \bigcup_{x \in U_i}{[x]_k^i} = [U_i]_k^i\), we have
\[f(\vec{U}) = E_{\rho(\vec{U})}(\vec{U}) = \bigcap_{i \in A}{E_{\rho(\vec{U})}^i(U_i)} = \bigcap_{i \in A}{[U_i]_k^i}\]as required for (2). Finally, if \(l < k\), then \(r_l < r_k = \rho(\vec{U})\), so \(\bigcap_{i \in A}{[U_i]_l^i} = E_{r_l}(\vec{U}) = \emptyset\) by the defining property of \(\rho(\vec{U})\) from Proposition 1.
“only if”: Suppose \(f\) is a hierarchical expansion function satisfying non-triviality. Note that without loss of generality, we may assume \(\Pi_m^i\) is the coarsest partition \(\{S\}\) for all agents \(i\). For any \(i \in A\) and \(x, y \in S\), define
\[d_i(x, y) = \min\{k \in \{0,\ldots,m\} \mid y \in [x]_k^i\}\]
(this is well-defined by the wlog assumption, since we always have \(y \in [x]_m^i = S\)). Clearly \(d_i(x, y) \ge 0\) and \(d_i(x, x) = 0\), since \(x \in [x]_0^i\) for any choice of partition \(\Pi_0^i\). It is also easy to see that \(d_i(x,y) = d_i(y,x)\), since \(y \in [x]_k^i\) iff \(x \in [y]_k^i\). For the ultrametric inequality, let \(x, y, z \in S\) and write
\[k_1 = d_i(x, z), \qquad k_2 = d_i(z, y), \qquad k_3 = \max\{k_1, k_2\}\]
We need to show that \(d_i(x,y) \le k_3\). First, note that \(z \in [x]_{k_1}^i\). Since \(k_1 \le k_3\), \(\Pi_{k_1}^i\) is a refinement of \(\Pi_{k_3}^i\). It follows that \([x]_{k_1}^i \subseteq [x]_{k_3}^i\), and consequently \(z \in [x]_{k_3}^i\). Similarly, \(k_2 \le k_3\) implies \(z \in [y]_{k_3}^i\). This means \(z \in [x]_{k_3}^i \cap [y]_{k_3}^i\), i.e. \([x]_{k_3}^i = [y]_{k_3}^i\). In particular, \(y \in [x]_{k_3}^i\), so \(d_i(x,y) \le k_3\) as required.
So, each \(d_i\) is indeed a pseudo-ultrametric. It remains to show that \(f = f_D\). Let \(\vec{U}\) be a profile. If \(\emptyset \in \vec{U}\) then \(f(\vec{U}) = \emptyset = f_D(\vec{U})\) is clear.
Otherwise, take \(k \in \{0,\ldots,m\}\) from the definition of a hierarchical expansion function so that \(f(\vec{U}) = \bigcap_{i \in A}{[U_i]_k^i}\). First we claim that for any \(l \in \{0,\ldots,m\}\), \([x]_l^i = B_l^i(x)\). Indeed, if \(y \in [x]_l^i\) then \(d_i(x,y) \le l\) is apparent from the definition of \(d_i\), so \(y \in B_l^i(x)\). On the other hand if \(y \in B_l^i(x)\) then \(d_i(x,y) \le l\), so \(y \in [x]_{d_i(x,y)}^i \subseteq [x]_l^i\) since \(\Pi_{d_i(x,y)}^i\) is a refinement of \(\Pi_l^i\). Hence \([x]_l^i = B_l^i(x)\).
It follows that \(E_l(\vec{U}) = \bigcap_{i \in A}{[U_i]_l^i}\). Also note that since each \(d_i\) is integer-valued, \(E_r(\vec{U}) = E_{\lfloor r \rfloor}(\vec{U})\) for any \(r \ge 0\). Consequently,
\[\rho(\vec{U}) = \min\{l \in \{0,\ldots,m\} \mid \bigcap_{i \in A}{[U_i]_l^i} \ne \emptyset\} = k\]which means
\[f_D(\vec{U}) = E_{\rho(\vec{U})}(\vec{U}) = \bigcap_{i \in A}{[U_i]_k^i} = f(\vec{U})\]This shows that \(f=f_D\), and we are done.
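The “only if” construction is easy to sketch in code: from an assumed coarsening chain of partitions, define \(d(x,y)\) as the first level at which \(x\) and \(y\) share a cell, and verify the pseudo-ultrametric axioms by brute force (all names here are illustrative):

```python
from itertools import product

# Metric from a refinement chain: d(x, y) is the first level whose
# partition puts x and y in the same cell. Cells at level 0 may contain
# more than one state, so d is only a *pseudo*-ultrametric.
S = [0, 1, 2, 3]
chain = [[{0}, {1, 2}, {3}], [{0, 1, 2}, {3}], [set(S)]]

def d(x, y):
    return next(k for k, part in enumerate(chain)
                if any({x, y} <= cell for cell in part))

# brute-force check of the pseudo-ultrametric axioms
for x, y, z in product(S, repeat=3):
    assert d(x, y) >= 0 and d(x, x) == 0
    assert d(x, y) == d(y, x)
    assert d(x, y) <= max(d(x, z), d(z, y))  # ultrametric inequality
```

Note that \(d(1, 2) = 0\) here even though \(1 \ne 2\), illustrating why the construction gives a pseudo-ultrametric rather than an ultrametric.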
Some sound properties for metric-based expansion¶
Some sound properties for metric-based expansion are given in this section. In what follows, we write \(\vec{U} \sqsubseteq \vec{V}\) to mean \(U_i \subseteq V_i\) for each \(i \in A\).
Lemma 1
If \(\bigcap_{i \in A}{U_i} \ne \emptyset\) then \(\rho(\vec{U}) = 0\).
Proof.
Since \(x \in B_r^i(x)\) for any \(x \in S\), \(i \in A\) and \(r \ge 0\), we have \(U \subseteq E_r^i(U)\) for any \(U \subseteq S\). Consequently,
\[\emptyset \ne \bigcap_{i \in A}{U_i} \subseteq \bigcap_{i \in A}{E_0^i(U_i)} = E_0(\vec{U})\]
Since \(\rho(\vec{U})\) is the minimal radius \(r\) such that \(E_r(\vec{U}) \ne \emptyset\), we have \(\rho(\vec{U}) \le 0\). On the other hand \(\rho(\vec{U})\) cannot be negative, so \(\rho(\vec{U}) = 0\).
Proposition 3
For any collection of pseudo-ultrametrics \(D = (d_1,\ldots,d_n)\), the expansion function \(f=f_D\) satisfies the following properties, for all profiles \(\vec{U}\):
(1) \(\bigcap_{i \in A}{U_i} \subseteq f(\vec{U})\)
(2) There is \(\vec{V}\) such that \(\vec{U} \sqsubseteq \vec{V}\) and \(f(\vec{U}) = f(\vec{V}) = \bigcap_{i \in A}{V_i}\)
(3) If \(f(\vec{U}) = \bigcap_{i \in A}{U_i}\) and \(\vec{U} \sqsubseteq \vec{V}\), then \(f(\vec{U}) \subseteq f(\vec{V})\)
(4) \(f(f(\vec{U}),\ldots,f(\vec{U})) = f(\vec{U})\)
Intuitively: (1) says that \(f(\vec{U})\) is weaker than the conjunction of \(\vec{U}\); (2) says that each agent is able to weaken their report \(U_i\) to some \(V_i\) “outside” of \(f\), so that \(f(\vec{U})\) is just the conjunction of \(\vec{V}\); (3) says that if \(\vec{U}\) is already “trusted” by \(f\), then weakening \(\vec{U}\) weakens the result after applying \(f\), and (4) is an idempotency property.
Proof.
Each property is easy to see in the case where \(U_i = \emptyset\) for some \(i\). In what follows we assume \(U_i \ne \emptyset\) for each \(i\), and therefore \(f(\vec{U}) \ne \emptyset\) by non-triviality above.
(1): Since \(U_i \subseteq E_{\rho(\vec{U})}^i(U_i)\), we have \(\bigcap_{i \in A}{U_i} \subseteq \bigcap_{i \in A}{ E_{\rho(\vec{U})}^i(U_i) } = f(\vec{U})\).
(2): Take \(V_i = E_{\rho(\vec{U})}^i(U_i)\). Clearly \(U_i \subseteq V_i\), so \(\vec{U} \sqsubseteq \vec{V}\). Moreover, \(\rho(\vec{V}) = 0\) by Lemma 1.
We claim that \(E_0^i(V_i) = V_i\). That \(V_i \subseteq E_0^i(V_i)\) is clear. For the other inclusion, suppose \(y \in E_0^i(V_i)\). Then there is \(x \in V_i\) such that \(d_i(x, y) = 0\). But \(x \in V_i = E_{\rho(\vec{U})}^i(U_i)\) means there is \(z \in U_i\) such that \(d_i(z, x) \le \rho(\vec{U})\). By the ultrametric inequality,
\[d_i(z, y) \le \max\{d_i(z, x), \underbrace{d_i(x, y)}_{=0}\} = d_i(z, x) \le \rho(\vec{U})\]so \(y \in B_{\rho(\vec{U})}^i(z) \subseteq E_{\rho(\vec{U})}^i(U_i) = V_i\). This shows that \(E_0^i(V_i) \subseteq V_i\), and hence \(E_0^i(V_i) = V_i\) as claimed. 1
Recalling that \(\rho(\vec{V}) = 0\), we have
\[f(\vec{V}) = E_0(\vec{V}) = \bigcap_{i \in A}{E_0^i(V_i)} = \bigcap_{i \in A}{V_i} = \bigcap_{i \in A}{E_{\rho(\vec{U})}^i(U_i)} = f(\vec{U})\]This shows both \(f(\vec{U}) = f(\vec{V})\) and \(f(\vec{V}) = \bigcap_{i \in A}{V_i}\), as required.
(3): Suppose \(f(\vec{U}) = \bigcap_{i \in A}{U_i}\) and \(\vec{U} \sqsubseteq \vec{V}\). We have \(U_i \subseteq V_i \subseteq E_0^i(V_i)\), so
\[\emptyset \subset f(\vec{U}) = \bigcap_{i \in A}{ U_i } \subseteq \bigcap_{i \in A}{ E_0^i(V_i) } = E_0(\vec{V})\]i.e. \(E_0(\vec{V}) \ne \emptyset\). Hence \(\rho(\vec{V}) = 0\), so \(f(\vec{V}) = E_0(\vec{V}) \supseteq f(\vec{U})\) as required.
(4): To ease notation, write \(\vec{V} = (f(\vec{U}),\ldots,f(\vec{U}))\). By property (1),
\[f(\vec{V}) \supseteq \bigcap_{i \in A}{\underbrace{V_i}_{=f(\vec{U})}} = f(\vec{U})\]For the reverse inclusion, let \(y \in f(\vec{V})\). We have \(\rho(\vec{V}) = 0\) by Lemma 1, so \(y \in E_0^i(V_i)\) for each \(i \in A\). This means there is \(x_i \in V_i = f(\vec{U})\) such that \(d_i(x_i, y) \le \rho(\vec{V}) = 0\). But \(x_i \in f(\vec{U}) \subseteq E_{\rho(\vec{U})}^i(U_i)\) means there is \(z_i \in U_i\) such that \(d_i(z_i, x_i) \le \rho(\vec{U})\). Therefore
\[d_i(z_i, y) \le \max\{d_i(z_i, x_i), \underbrace{d_i(x_i, y)}_{=0}\} = d_i(z_i,x_i) \le \rho(\vec{U})\]i.e. \(y \in E_{\rho(\vec{U})}^i(U_i)\). Since \(i\) was arbitrary, we have \(y \in \bigcap_{i \in A}{E_{\rho(\vec{U})}^i(U_i)} = E_{\rho(\vec{U})}(\vec{U}) = f(\vec{U})\). This shows that \(f(\vec{V}) = f(\vec{U})\), as required.
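Properties (1) and (4) can be spot-checked by brute force over all profiles of a small example, using the min-max form of \(f_D\) from the claim in Proposition 1 (the metrics below are toy ultrametrics, and all names are invented):

```python
from itertools import combinations, product

S = {0, 1, 2, 3}
d1 = lambda x, y: 0 if x == y else (1 if {x, y} <= {0, 1} or {x, y} <= {2, 3} else 2)
d2 = lambda x, y: 0 if x == y or {x, y} == {0, 2} else 1
D = [d1, d2]

def f(profile):
    """f_D via the min-max form; empty result if any report is empty."""
    if any(not U for U in profile):
        return set()
    M = lambda y: max(min(d(x, y) for x in U) for d, U in zip(D, profile))
    best = min(M(y) for y in S)
    return {y for y in S if M(y) == best}

subsets = [set(c) for k in range(1, 5) for c in combinations(sorted(S), k)]
for U1, U2 in product(subsets, repeat=2):
    F = f([U1, U2])
    assert U1 & U2 <= F        # property (1): conjunction is preserved
    assert f([F, F]) == F      # property (4): idempotency
```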
We can also obtain \(n\) single-agent expansion functions from \(f_D\): for each \(i \in A\), define \(f_D^i: 2^S \to 2^S\) by
\[f_D^i(U) = f_D(S,\ldots,S,U,S,\ldots,S)\]where \(U\) appears in the \(i\)-th position,
i.e. have all agents \(j \ne i\) give the completely non-informative report \(S\), and apply \(f_D\). It can be shown that each \(f_D^i\) is a single-agent trust-sensitive expansion function in the sense of [BH18].
Proposition 4
For each \(i \in A\) there is a partition \(\Pi\) of \(S\) such that for all \(U \subseteq S\),
\[f_D^i(U) = \bigcup_{x \in U}{[x]_\Pi}\]
where \([x]_\Pi \in \Pi\) denotes the cell of \(\Pi\) containing \(x\).
Proof.
Fix \(i \in A\). By Proposition 2, \(f_D\) is hierarchical. Let \(\Pi^i_0,\ldots,\Pi^i_m\) be as in Definition 3. Set \(\Pi = \Pi_0^i\).
Now let \(U \subseteq S\), and set \(\vec{U} = (S,\ldots,U,\ldots,S)\) so that \(f_D^i(U) = f_D(\vec{U})\). If \(U = \emptyset\), then \(f_D^i(U) = \emptyset = \bigcup_{x \in \emptyset}{[x]_\Pi}\) as required.
Otherwise, observe that, using the notation of Definition 3,
\[\bigcap_{j \in A}{[U_j]_0^j} = [U]_0^i \cap \bigcap_{j \ne i}{[S]_0^j} = [U]_0^i \ne \emptyset\]This implies that the \(k\) from property (2) in Definition 3 must be \(0\), and thus \(f_D^i(U) = f_D(\vec{U}) = [U]_0^i = [U]_\Pi\), which completes the proof.
As a consequence of Proposition 4, each \(f_D^i\) satisfies the Kuratowski and Pawlak properties from [BH18].
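A sketch of Proposition 4 in action: feeding \(f_D\) a single-agent profile padded with the non-informative report \(S\) yields the closure of \(U\) under a fixed partition, which in this toy example is the partition of \(S\) into agent \(i\)'s zero-distance cells (metrics and names below are invented):

```python
from itertools import combinations

S = {0, 1, 2, 3}
d1 = lambda x, y: 0 if x == y or {x, y} == {0, 2} else 1  # cannot separate 0 and 2
d2 = lambda x, y: 0 if x == y else 1                      # fully discerning
D = [d1, d2]

def f(profile):
    M = lambda y: max(min(d(x, y) for x in U) for d, U in zip(D, profile))
    best = min(M(y) for y in S)
    return {y for y in S if M(y) == best}

def f_single(i, U):
    """f_D^i: all agents other than i report the whole state space."""
    profile = [set(S), set(S)]
    profile[i] = U
    return f(profile)

# agent 0's partition: the cells of the distance-0 equivalence for d1
cells = {frozenset(y for y in S if d1(x, y) == 0) for x in S}
for k in range(1, 5):
    for c in combinations(sorted(S), k):
        U = set(c)
        closure = set().union(*(cell for cell in cells if cell & U))
        assert f_single(0, U) == closure  # closure under partition cells
```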
Going to a higher dimensional state space¶
Consider aggregating the set of pseudo-ultrametrics \(D = (d_1,\ldots,d_n)\) to form a single function \(\hat{d}: S^n \times S^n \to \mathbb{R}\), where \(\hat{d}(\vec{x}, \vec{y}) = \max_{i \in A}{d_i(x_i, y_i)}\).
Proposition 5
\(\hat{d}\) is a pseudo-ultrametric on \(S^n\).
Proof. Each of the pseudo-ultrametric properties follows easily from the fact that each \(d_i\) is a pseudo-ultrametric.
We extend the definition of closed balls and \(r\)-expansions in the obvious way: for \(\vec{x} \in S^n\) and \(T \subseteq S^n\),
\[\hat{B}_r(\vec{x}) = \{\vec{y} \in S^n \mid \hat{d}(\vec{x}, \vec{y}) \le r\}, \qquad \hat{E}_r(T) = \bigcup_{\vec{x} \in T}{\hat{B}_r(\vec{x})}\]
We then have the following.
Proposition 6
For any \(\vec{x} \in S^n\), any profile \(\vec{U}\) and any \(r \ge 0\):
(1) \(\hat{B}_r(\vec{x}) = B^1_r(x_1) \times \cdots \times B^n_r(x_n)\)
(2) \(\hat{E}_r(U_1 \times \cdots \times U_n) = E_r^1(U_1) \times \cdots \times E_r^n(U_n)\)
(3) If \(\delta: S \to S^n\) is the diagonal function \(\delta(y) = (y,\ldots,y)\), then
\[E_r(\vec{U}) = \delta^{-1}(\hat{E}_r(U_1 \times \cdots \times U_n))\](4) If \(U_i \ne \emptyset\) for all \(i \in A\),
\[f_D(\vec{U}) = \argmin_{y \in S}\min_{\vec{x} \in U_1 \times \cdots \times U_n}{\hat{d}(\vec{x}, \delta(y))} \]
Proof.
We have the following chain of equivalences
\[\begin{aligned} \vec{y} \in \hat{B}_r(\vec{x}) &\iff \hat{d}(\vec{x}, \vec{y}) \le r \\ &\iff \max_{i \in A}d_i(x_i,y_i) \le r \\ &\iff \forall i \in A:\ d_i(x_i,y_i) \le r \\ &\iff \forall i \in A:\ y_i \in B_r^i(x_i) \\ &\iff \vec{y} \in B_r^1(x_1) \times \cdots \times B_r^n(x_n) \end{aligned}\]Using (1):
\[\begin{aligned} \vec{y} \in \hat{E}_r(U_1 \times \cdots \times U_n) &\iff \exists \vec{x} \in U_1 \times \cdots \times U_n: \vec{y} \in \hat{B}_r(\vec{x}) \\ &\iff \exists \vec{x} \in U_1 \times \cdots \times U_n: \vec{y} \in B_r^1(x_1) \times \cdots \times B_r^n(x_n) \\ &\iff \forall i \in A\ \exists x_i \in U_i: y_i \in B_r^i(x_i) \\ &\iff \forall i \in A: y_i \in E_r^i(U_i) \\ &\iff \vec{y} \in E_r^1(U_1) \times \cdots \times E_r^n(U_n) \end{aligned}\]Using (2):
\[\begin{aligned} y \in E_r(\vec{U}) &\iff \forall i \in A: y \in E_r^i(U_i) \\ &\iff \delta(y) \in \underbrace{ E_r^1(U_1) \times \cdots \times E_r^n(U_n) }_{=\hat{E}_r(U_1 \times \cdots \times U_n)} \\ &\iff y \in \delta^{-1}(\hat{E}_r(U_1 \times \cdots \times U_n)) \end{aligned}\]First note that by an argument similar to the one employed in the proof of Proposition 1, we have \(\hat{E}_r(T) = \{\vec{y} \in S^n \mid \min_{\vec{x} \in T}{\hat{d}(\vec{x}, \vec{y}) \le r}\}\) for any \(T \subseteq S^n\) and \(r \ge 0\). Applying (3), this implies
\[\begin{aligned} y \in E_r(\vec{U}) &\iff \delta(y) \in \hat{E}_r(U_1 \times \cdots \times U_n) \\ &\iff \min_{\vec{x} \in U_1 \times \cdots \times U_n}{ \hat{d}(\vec{x}, \delta(y)) } \le r \end{aligned}\]It follows that \(\rho(\vec{U}) = \min_{y \in S}\min_{\vec{x} \in U_1 \times \cdots \times U_n}{ \hat{d}(\vec{x}, \delta(y)) }\), and the result follows.
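Parts (1) and (4) of Proposition 6 can be verified by brute force on a toy two-agent example (the metrics and helper names below are assumptions for illustration):

```python
from itertools import combinations, product

S = [0, 1, 2]
d1 = lambda x, y: 0 if x == y else 1
d2 = lambda x, y: 0 if x == y else (1 if {x, y} <= {0, 1} else 2)
D = [d1, d2]

def d_hat(xs, ys):
    """Aggregated pseudo-ultrametric on S^n: worst per-agent distance."""
    return max(d(x, y) for d, x, y in zip(D, xs, ys))

def ball(d, x, r, space):
    return {y for y in space if d(x, y) <= r}

Sn = set(product(S, repeat=2))

# (1): a d_hat-ball is the product of the per-agent balls
for xs in Sn:
    for r in (0, 1, 2):
        prod_ball = set(product(*(ball(d, x, r, S) for d, x in zip(D, xs))))
        assert ball(d_hat, xs, r, Sn) == prod_ball

# (4): f_D via the diagonal and d_hat agrees with the min-max formula
def f_minmax(profile):
    M = lambda y: max(min(d(x, y) for x in U) for d, U in zip(D, profile))
    best = min(M(y) for y in S)
    return {y for y in S if M(y) == best}

def f_via_dhat(profile):
    cost = lambda y: min(d_hat(xs, (y, y)) for xs in product(*profile))
    best = min(cost(y) for y in S)
    return {y for y in S if cost(y) == best}

subsets = [set(c) for k in (1, 2, 3) for c in combinations(S, k)]
for U1, U2 in product(subsets, repeat=2):
    assert f_minmax([U1, U2]) == f_via_dhat([U1, U2])
```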
The potential usefulness of this result is that we now only have to worry about a single metric function. This (pseudo-ultra)metric induces a topology on \(S^n\): perhaps this can be used to understand the behaviour of \(f_D\). For example, the single-agent expansion functions from [BH18] are characterised by the Kuratowski and Pawlak properties, which have topological interpretations.
1. In fact, one can show more generally that \(E_{r_1}^i(E_{r_2}^i(U)) = E_{r_2}^i(U)\) for any \(U \subseteq S\) and \(r_1 \le r_2\).