Logic of Expertise: extended¶
Introduction¶
We propose a formal model of expertise in a modal logic setting.
The two main notions we study are expertise and soundness.
Intuitively, a source has expertise on \(\phi\) if they are able to correctly refute \(\phi\) whenever it is false. 1
It is sound for a source to report \(\phi\) if \(\phi\) is true up to lack of expertise: if \(\phi\) is logically weakened to a proposition \(\psi\) on which the source has expertise, then \(\psi\) must be true.
In terms of refutation, \(\phi\) is sound if the source cannot refute \(\neg\phi\). That is, either \(\phi\) is in fact true, or the source does not possess sufficient expertise to rule out \(\phi\).
Importantly, our notion of expertise does not depend on the “actual” state of affairs. This reflects the idea that expertise is a property of the source’s cognitive state, not the ground truth facts of the world.
Expertise and Soundness¶
[Motivating running example]
Syntax¶
Let \(\Prop\) be a countable set of atomic propositions. Our base language includes modal operators to express expertise and soundness statements for a single source. Formally, we define the language \(\cL\) by the following grammar:
\[\phi ::= p \mid \neg\phi \mid (\phi \land \phi) \mid \E\phi \mid \S\phi \mid \A\phi\]
for \(p \in \Prop\). We read \(\E\phi\) as “the source has expertise on \(\phi\)”, and \(\S\phi\) as “\(\phi\) is sound for the source to report”. We include the universal modality \(\A\) for technical convenience [GP92]; \(\A\phi\) is read as “\(\phi\) holds in all states”. Other logical connectives (\(\lor\), \(\limplies\), \(\liff\)) and constants (\(\top\), \(\bot\)) are introduced as abbreviations.
Semantics¶
On the semantic side, we use the notion of an expertise model.
Definition 1
An expertise model is a triple \(M = (X, P, V)\), where \(X\) is a set of states, \(P \subseteq 2^X\) is a collection of subsets of \(X\), and \(V: \Prop \to 2^X\) is a valuation function. An expertise frame is a pair \(F = (X, P)\). The class of all models is denoted by \(\M\).
The sets in \(P\) are termed expertise sets, and represent the propositions on which the source has expertise. Given the earlier informal description of expertise as refutation, we interpret \(A \in P\) as saying that whenever the “actual” state is outside \(A\), the source knows so.
For an expertise model \(M = (X, P, V)\), the satisfaction relation between states \(x \in X\) and formulas \(\phi \in \cL\) is defined recursively as follows:
\[\begin{aligned} M, x &\models p &&\iff x \in V(p) \\ M, x &\models \neg\phi &&\iff M, x \not\models \phi \\ M, x &\models \phi \land \psi &&\iff M, x \models \phi \text{ and } M, x \models \psi \\ M, x &\models \E\phi &&\iff \|\phi\|_M \in P \\ M, x &\models \S\phi &&\iff \text{for all } A \in P: \|\phi\|_M \subseteq A \text{ implies } x \in A \\ M, x &\models \A\phi &&\iff M, y \models \phi \text{ for all } y \in X \end{aligned}\]
where \(\|\phi\|_M = \{x \in X \mid M, x \models \phi\}\) is the truth set of \(\phi\). For an expertise frame \(F = (X, P)\), write \(F \models \phi\) iff \(M, x \models \phi\) for all models \(M\) based on \(F\) and all \(x \in X\). As usual, write \(M \models \phi\) iff \(M, x \models \phi\) for all \(x \in X\), and \(\models \phi\) iff \(M \models \phi\) for all models \(M\); we say \(\phi\) is valid in this case. Write \(\phi \equiv \psi\) iff \(\phi \liff \psi\) is valid. For a set \(\Gamma \subseteq \cL\), write \(\Gamma \models \phi\) iff for all models \(M\) and states \(x\), if \(M, x \models \psi\) for all \(\psi \in \Gamma\) then \(M, x \models \phi\).
The clauses for atomic propositions and propositional connectives are standard. For expertise formulas, we have that \(\E\phi\) holds exactly when the set of states where \(\phi\) is true is an element of \(P\). Expertise is thus a special case of the neighbourhood semantics [Pac17], where each point \(x \in X\) has the same set of neighbourhoods. The clause for soundness reflects the intuition that \(\phi\) is sound exactly when all logically weaker formulas on which the source has expertise must be true: if \(A \in P\) (i.e. the source has expertise on \(A\)) and \(A\) contains all \(\phi\) states, then \(x \in A\). In terms of refutation, \(\S\phi\) holds iff there is no expertise set \(A\), false at the actual state \(x\), which allows the source to rule out \(\phi\).
Our truth conditions for expertise and soundness also have topological interpretations, if one views \(P\) as the collection of closed sets of a topology on \(X\): 2 \(\E\phi\) holds iff \(\|\phi\|_M\) is closed, and \(\S\phi\) holds at \(x\) iff \(x\) lies in the closure of \(\|\phi\|_M\). In this case we can view the closure operation as expanding the set \(\|\phi\|_M\) along the lines of the source’s expertise; \(\phi\) is sound if the “actual” state \(x\) is included in this expansion.
Finally, the clause for the universal modality \(\A\) states that \(\A\phi\) holds iff \(\phi\) holds at all states \(y \in X\).
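To make the truth conditions concrete, here is a small model checker for finite expertise models, written in Python. It is only an illustrative sketch: the tuple encoding of formulas and all identifiers are our own choices, not part of the formal development.

```python
# Finite expertise model M = (X, P, V):
#   X: frozenset of states, P: set of frozensets (the expertise sets),
#   V: dict mapping atomic propositions (strings) to frozensets of states.
# Formulas: atoms are strings; ('not', f), ('and', f, g), ('E', f), ('S', f), ('A', f).

def truth_set(M, phi):
    """Return ||phi||_M, the set of states of M at which phi holds."""
    X, P, V = M
    return frozenset(x for x in X if satisfies(M, x, phi))

def satisfies(M, x, phi):
    """Decide M, x |= phi."""
    X, P, V = M
    if isinstance(phi, str):                      # atomic proposition
        return x in V.get(phi, frozenset())
    op = phi[0]
    if op == 'not':
        return not satisfies(M, x, phi[1])
    if op == 'and':
        return satisfies(M, x, phi[1]) and satisfies(M, x, phi[2])
    if op == 'E':                                 # E phi: ||phi||_M is an expertise set
        return truth_set(M, phi[1]) in P
    if op == 'S':                                 # S phi: every expertise set covering
        ts = truth_set(M, phi[1])                 # ||phi||_M contains the current state
        return all(x in A for A in P if ts <= A)
    if op == 'A':                                 # A phi: phi holds at every state
        return all(satisfies(M, y, phi[1]) for y in X)
    raise ValueError(f"unknown operator: {op!r}")

# Example: with X = {1,2,3}, P = {{1,2}} and V(p) = {1}, S p holds at state 2
# (p is false there, but no expertise set refutes it) and fails at state 3.
M = (frozenset({1, 2, 3}), {frozenset({1, 2})}, {'p': frozenset({1})})
print(satisfies(M, 2, ('S', 'p')), satisfies(M, 3, ('S', 'p')))   # True False
print(satisfies(M, 1, ('E', 'p')))                                # False: {1} is not in P
```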
[Illustrate semantics by formalising motivating example.]
We further illustrate the semantics by listing some valid formulas.
Proposition 1
The following formulas are valid:
\(\phi \limplies \S\phi\)
\(\E\phi \liff \A\E\phi\)
\(\A(\phi \limplies \psi) \limplies (\S\phi \land \E\psi \limplies \psi)\)
\(\E\phi \limplies \A(\S\phi \limplies \phi)\)
Proof.
Let \(M = (X, P, V)\) be a model and \(x \in X\).
Clear
Clear
Suppose \(M, x \models \A(\phi \limplies \psi)\). Then \(\|\phi\|_M \subseteq \|\psi\|_M\). Further, suppose \(M, x \models \S\phi \land \E\psi\). Then \(\|\phi\|_M \subseteq \|\psi\|_M \in P\); taking \(A = \|\psi\|_M\) in the definition of the semantics for \(\S\), we get by \(M, x \models \S\phi\) that \(x \in \|\psi\|_M\), i.e. \(M, x \models \psi\).
Suppose \(M, x \models \E\phi\). By (2), \(\E\phi\) holds at every state of \(M\). Now let \(y \in X\) with \(M, y \models \S\phi\). Since \(\A(\phi \limplies \phi)\) clearly holds at \(y\), (3) with \(\psi = \phi\) gives \(M, y \models \phi\). This shows \(M, x \models \A(\S\phi \limplies \phi)\).
Here (1) says that all truths are sound. (2) says that expertise is global. (3) says that if the source has expertise on \(\psi\), and \(\psi\) is logically weaker than some sound formula \(\phi\), then \(\psi\) is in fact true. This formalises the idea that if \(\phi\) is true up to lack of expertise, then weakening \(\phi\) until expertise holds (i.e. discarding parts of \(\phi\) on which the source does not have expertise) results in something true. (4) says that if the source has expertise on \(\phi\), then whenever \(\phi\) is sound it is also true.
Closure Properties¶
So far we have not imposed any constraints on the collection of expertise sets \(P\). But given our interpretation of \(P\), it may be natural to require that \(P\) is closed under certain set-theoretic operations. Say a frame \(F = (X, P)\) is
closed under intersections if \(\{A_i\}_{i \in I} \subseteq P\) implies \(\bigcap_{i \in I}{A_i} \in P\) 3
closed under unions if \(\{A_i\}_{i \in I} \subseteq P\) implies \(\bigcup_{i \in I}{A_i} \in P\)
closed under finite unions if \(A, B \in P\) implies \(A \cup B \in P\)
closed under complements if \(A \in P\) implies \(X \setminus A \in P\)
Say a model has any of the above properties if the underlying frame does. Write \(\M@int\), \(\M@unions\), \(\M@funions\) and \(\M@compl\) for the classes of models closed under intersections, unions, finite unions and complements, respectively.
What are the intuitive interpretations of these closure conditions? Consider again our interpretation of \(A \in P\): whenever the actual state is not in \(A\), the source knows so. With this in mind, closure under intersections is a natural property: if \(x \notin \bigcap_{i \in I}{A_i}\) then there is some \(i \in I\) such that \(x \notin A_i\); the source can then use this to refute \(A_i\) and therefore know that the actual state \(x\) does not lie in the intersection \(\bigcap_{i \in I}{A_i}\). A similar argument can be made for finite unions: if \(x \notin A \cup B\) then the source can use \(x \notin A\) and \(x \notin B\) to refute both \(A\) and \(B\). Closure under arbitrary unions is less clear-cut; determining that \(x \notin \bigcup_{i \in I}{A_i}\) requires the source to refute (potentially) infinitely many propositions \(A_i\). This is more demanding from a computational and cognitive perspective; we therefore view closure under (arbitrary) unions as an optional property which may or may not be appropriate depending on the situation one wishes to model. Finally, closure under complements removes the distinction between refutation and verification: if the agent can refute \(A\) whenever \(A\) is false, they can also verify \(A\) whenever \(A\) is true. We view this as another optional property, which is appropriate in situations where symmetric expertise is desirable (i.e. when expertise on \(\phi\) and \(\neg\phi\) should be considered equivalent).
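For finite frames these closure conditions can be checked mechanically. The following Python sketch (function names are our own, and we adopt the convention that the empty intersection is \(X\)) spells out the checks.

```python
from itertools import combinations

def closed_under_intersections(X, P):
    # For finite P, binary intersections plus X (the empty intersection) suffice.
    return X in P and all(A & B in P for A, B in combinations(P, 2))

def closed_under_finite_unions(P):
    # A, B in P implies A | B in P.
    return all(A | B in P for A, B in combinations(P, 2))

def closed_under_complements(X, P):
    return all(X - A in P for A in P)

# Example: P = {{1,2}, {2,3}} over X = {1,2,3} has none of the three properties.
X = frozenset({1, 2, 3})
P = {frozenset({1, 2}), frozenset({2, 3})}
print(closed_under_intersections(X, P))   # False: {1,2} & {2,3} = {2} is missing
print(closed_under_finite_unions(P))      # False: {1,2} | {2,3} = X is missing
print(closed_under_complements(X, P))     # False: X - {1,2} = {3} is missing
```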
Several of these properties can be formally captured in our language at the level of frames.
Proposition 2
Let \(F = (X, P)\) be a non-empty frame. Then
\(F\) is closed under intersections iff \(F \models \A(\S\phi \limplies \phi) \limplies \E\phi\) for all \(\phi \in \cL\).
\(F\) is closed under finite unions iff \(F \models \E\phi \land \E\psi \limplies \E(\phi \lor \psi)\) for all \(\phi, \psi \in \cL\).
\(F\) is closed under complements iff \(F \models \E\phi \liff \E\neg\phi\) for all \(\phi \in \cL\).
Proof.
We prove only the first claim; the others are straightforward.
“if”: We show the contrapositive. Suppose \(F\) is not closed under intersections. Then there is a collection \(\{A_i\}_{i \in I} \subseteq P\) such that \(B := \bigcap_{i \in I}A_i \notin P\). Let \(p\) be an arbitrary atomic proposition, and define a valuation \(V\) by \(V(p) = B\) and \(V(q) = \emptyset\) for \(q \ne p\). Let \(M = (X, P, V)\) be the corresponding model. Since \(X\) is assumed to be non-empty, we may take some \(x \in X\).
We claim that \(M, x \models \A(\S p \limplies p)\) but \(M, x \not\models \E p\). Clearly \(M, x \not\models \E p\) since \(\|p\|_M = B \notin P\). For \(M, x \models \A(\S p \limplies p)\), suppose \(y \in X\) and \(M, y \models \S p\). Let \(j \in I\). Then \(A_j \in P\), and
\[\|p\|_M = B = \bigcap_{i \in I}{A_i} \subseteq A_j\]
so by \(M, y \models \S p\) we have \(y \in A_j\). Hence \(y \in \bigcap_{i \in I}{A_i} = B = \|p\|_M\), so \(M, y \models p\). This shows that any \(y \in X\) has \(M, y \models \S p \limplies p\), and thus \(M, x \models \A(\S p \limplies p)\). Hence \(F \not\models \A(\S p \limplies p) \limplies \E p\).
“only if”: Suppose \(F\) is closed under intersections. Let \(M\) be a model based on \(F\) and take \(x \in X\). Let \(\phi \in \cL\). Suppose \(M, x \models \A(\S\phi \limplies \phi)\). Then \(\|\S\phi\|_M \subseteq \|\phi\|_M\). But since \(\models \phi \limplies \S\phi\), we have \(\|\phi\|_M \subseteq \|\S\phi\|_M\) too. Hence \(\|\phi\|_M = \|\S\phi\|_M\), i.e.
\[\|\phi\|_M = \|\S\phi\|_M = \bigcap\{A \in P \mid \|\phi\|_M \subseteq A\} \in P\]
where we use the fact that \(P\) is closed under intersections in the final step. Hence \(\|\phi\|_M \in P\), so \(M, x \models \E\phi\).
The question of whether closure under (arbitrary) unions can be expressed in the language is still open. By Proposition 2 (1) and Proposition 1 (4), the language fragment \(\cLSA\) containing only the \(\S\) and \(\A\) modalities is as expressive as the full language \(\cL\) with respect to \(\M@int\), since \(\E\phi\) is equivalent to \(\A(\S\phi \limplies \phi)\) in such models. In general \(\cLSA\) is strictly less expressive, since \(\cLSA\) cannot distinguish between a model and its closure under intersections.
Lemma 1
Let \(M = (X, P, V)\) be a model, and \(M' = (X, P', V)\) its closure under intersections, where \(A \in P'\) iff \(A = \bigcap_{i \in I}{A_i}\) for some \(\{A_i\}_{i \in I} \subseteq P\). Then for all \(\phi \in \cLSA\) and \(x \in X\), we have \(M, x \models \phi\) iff \(M', x \models \phi\).
Proof.
We proceed by induction on \(\cLSA\) formulas. The cases for atomic propositions, propositional connectives and \(\A\) are straightforward. We treat only the case for \(\S\).
That \(M', x \models \S\phi\) implies \(M, x \models \S\phi\) is clear using the induction hypothesis and the fact that \(P \subseteq P'\). Suppose \(M, x \models \S\phi\). Take \(A = \bigcap_{i \in I}{A_i} \in P'\), where each \(A_i\) is in \(P\), such that \(\|\phi\|_{M'} \subseteq A\). By the induction hypothesis, \(\|\phi\|_M \subseteq A\). For any \(i \in I\), \(\|\phi\|_M \subseteq A \subseteq A_i\) and \(M, x \models \S\phi\) gives \(x \in A_i\). Hence \(x \in \bigcap_{i \in I}{A_i} = A\). This shows \(M', x \models \S\phi\).
Corollary 1
\(\cLSA\) is strictly less expressive than \(\cL\).
Proof. Consider \(M = (X, P, V)\), where \(X = \{1, 2, 3\}\), \(P = \{\{1, 2\}, \{2, 3\}\}\) and \(V(p) = \{1, 2\}\), \(V(q) = \{2, 3\}\) for some fixed \(p, q \in \Prop\). Let \(M'\) be as in Lemma 1. Then \(M', 1 \models \E(p \land q)\) and \(M, 1 \not\models \E(p \land q)\), but \(M\) and \(M'\) agree on \(\cLSA\) formulas. Hence \(\E(p \land q)\) is not equivalent to any \(\cLSA\) formula.
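The counterexample can also be verified mechanically. The snippet below (an illustrative check, not part of the proof) closes \(P\) under intersections and confirms that the truth set of \(p \land q\) belongs to the closure but not to \(P\) itself.

```python
from itertools import combinations
from functools import reduce

# The model from the proof: X = {1,2,3}, P = {{1,2},{2,3}}, V(p) = {1,2}, V(q) = {2,3}.
P = [frozenset({1, 2}), frozenset({2, 3})]

# Close P under intersections of non-empty subfamilies (enough for this finite example).
P_closed = {reduce(frozenset.intersection, fam)
            for r in range(1, len(P) + 1)
            for fam in combinations(P, r)}

p_and_q = frozenset({1, 2}) & frozenset({2, 3})   # ||p and q|| = {2}
print(p_and_q in set(P))       # False: M, 1 does not satisfy E(p and q)
print(p_and_q in P_closed)     # True:  M', 1 satisfies E(p and q)
```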
To round off the discussion of closure properties, we note that within the class of frames closed under intersections, closure under finite unions is also captured by the well-known K axiom – \(\Box(\phi \limplies \psi) \limplies (\Box\phi \limplies \Box\psi)\) – for the dual soundness operator \(\hat{\S}\phi := \neg\S\neg\phi\):
Proposition 3
Suppose \(F = (X, P)\) is non-empty and closed under intersections. Then \(F\) is closed under finite unions if and only if \(F \models \hat{\S}(\phi \limplies \psi) \limplies (\hat{\S}\phi \limplies \hat{\S}\psi)\) for all \(\phi, \psi \in \cL\).
Proof.
“if”: We show the contrapositive. Suppose \(F\) is closed under intersections but not finite unions, so that there are \(B_1, B_2 \in P\) with \(B_1 \cup B_2 \notin P\). Set
\[C = \bigcap\{A \in P \mid B_1 \cup B_2 \subseteq A\}\]
By closure under intersections, \(C \in P\). Clearly \(B_1 \cup B_2 \subseteq C\). Since \(C \in P\) but \(B_1 \cup B_2 \notin P\), \(B_1 \cup B_2 \subset C\). Hence there is \(x \in C \setminus (B_1 \cup B_2)\).
Pick distinct atomic propositions \(p\) and \(q\), and define a valuation \(V\) by
\[V(p) = B_1 \cup B_2, \qquad V(q) = B_1, \qquad V(r) = \emptyset \text{ for } r \in \Prop \setminus \{p, q\}\]
Let \(M = (X, P, V)\) be the corresponding model. First, we make three claims:
\(M, x \models \S p\): Let \(A \in P\) such that \(\|p\|_M \subseteq A\). Then \(B_1 \cup B_2 \subseteq A\), so \(C \subseteq A\). Since \(x \in C\), we have \(x \in A\) as required.
\(M, x \not\models \S q\): This is clear since \(B_1 \in P\), \(\|q\|_M \subseteq B_1\), but \(x \notin B_1\).
\(M, x \not\models \S (p \land \neg q)\): Note that \(\|p \land \neg q\|_M = V(p) \setminus V(q) = B_2 \setminus B_1\). Therefore we have \(B_2 \in P\) and \(\|p \land \neg q\|_M \subseteq B_2\), but \(x \notin B_2\).
Now set \(\phi = \neg q\) and \(\psi = \neg p\). We have
\[\hat{\S}(\phi \limplies \psi) \equiv \neg\S(p \land \neg q), \qquad \hat{\S}\phi \equiv \neg\S q, \qquad \hat{\S}\psi \equiv \neg\S p\]
From the claims above we see that \(M, x \models \hat{\S}(\phi \limplies \psi)\) but \(M, x \not\models \hat{\S}\phi \limplies \hat{\S}\psi\). Since \(M\) is a model based on \(F\), we are done.
“only if”: Suppose \(F\) is closed under intersections and finite unions. Let \(M\) be a model based on \(F\) and \(x\) a state in \(M\). Suppose \(M, x \models \hat\S(\phi \limplies \psi)\) and \(M, x \models \hat\S\phi\). Then \(M, x \not\models \S\neg(\phi \limplies \psi)\) and \(M, x \not\models \S\neg\phi\). Hence there is \(A \in P\) such that \(\|\neg(\phi \limplies \psi)\|_M \subseteq A\) but \(x \notin A\), and \(B \in P\) such that \(\|\neg\phi\|_M \subseteq B\) but \(x \notin B\). Note that
\[\|\neg\psi\|_M \subseteq \|\neg(\phi \limplies \psi)\|_M \cup \|\neg\phi\|_M \subseteq A \cup B\]
Since \(x \notin A \cup B\) and \(A \cup B \in P\) by closure under finite unions, this shows \(M, x \not\models \S\neg\psi\), i.e. \(M, x \models \hat\S\psi\). This completes the proof of \(F \models \hat{\S}(\phi \limplies \psi) \limplies (\hat{\S}\phi \limplies \hat{\S}\psi)\).
Connection with Epistemic Logic¶
In this section we explore the connection between our logic and epistemic logic, for certain classes of expertise models. In particular, we provide a one-to-one mapping between classes of expertise models and S4 and S5 relational models, and a translation from \(\cL\) to the modal language with knowledge operator \(\K\) (and \(\A\)), which allows expertise and soundness to be expressed in terms of knowledge.
First, we introduce the syntax and (relational) semantics of epistemic logic. Let \(\cLKA\) be the language formed from \(\Prop\) with modal operators \(\K\) and \(\A\). We read \(\K\phi\) as the source knows \(\phi\).
Definition 2
A relational model is a triple \(M^* = (X, R, V)\), where \(X\) is a set of states, \(R \subseteq X \times X\) is a binary relation on \(X\), and \(V: \Prop \to 2^X\) is a valuation function. The class of all relational models is denoted by \(\M@rel\).
The satisfaction relation for \(\cLKA\) is defined recursively: the clauses for atomic propositions, propositional connectives and \(\A\) are the same as for expertise models, and
\[M^*, x \models \K\phi \iff M^*, y \models \phi \text{ for all } y \in X \text{ such that } xRy\]
As usual, \(R\) is interpreted as an epistemic accessibility relation: \(xRy\) means that the source considers \(y\) possible if the “actual” state of the world is \(x\). We will be interested in the logics S4 and S5, which are axiomatised by KT4 and KT5, respectively:
K: \(\K(\phi \limplies \psi) \limplies (\K\phi \limplies \K\psi)\)
T: \(\K\phi \limplies \phi\)
4: \(\K\phi \limplies \K\K\phi\)
5: \(\neg\K\phi \limplies \K\neg\K\phi\)
T says that all knowledge is true, 4 expresses positive introspection of knowledge, and 5 expresses negative introspection. [TODO: comments on suitability of S4 and S5 for logics of knowledge, and philosophical arguments against S5 (see highlight in p12 of evidence in epistemic logic)]
It is well known that S4 is sound and complete for the class of relational models where \(R\) is reflexive and transitive, and that S5 is sound and complete for the class of relational models where \(R\) is an equivalence relation. Accordingly, we write \(\M@sfour\) for the class of all \(M^*\) where \(R\) is reflexive and transitive, and \(\M@sfive\) for \(M^*\) where \(R\) is an equivalence relation.
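As a concrete aside, the clause for \(\K\) and the S4/S5 frame conditions are easy to check on finite relational models. The Python sketch below uses the same illustrative tuple encoding as before, with ('K', f) for the knowledge modality.

```python
def r_satisfies(Mstar, x, phi):
    """Decide Mstar, x |= phi for a finite relational model Mstar = (X, R, V),
    where R is a set of pairs (x, y)."""
    X, R, V = Mstar
    if isinstance(phi, str):
        return x in V.get(phi, frozenset())
    op = phi[0]
    if op == 'not':
        return not r_satisfies(Mstar, x, phi[1])
    if op == 'and':
        return r_satisfies(Mstar, x, phi[1]) and r_satisfies(Mstar, x, phi[2])
    if op == 'K':     # K phi: phi holds at every R-accessible state
        return all(r_satisfies(Mstar, y, phi[1]) for y in X if (x, y) in R)
    if op == 'A':     # universal modality
        return all(r_satisfies(Mstar, y, phi[1]) for y in X)
    raise ValueError(f"unknown operator: {op!r}")

def is_s4_frame(X, R):
    reflexive = all((x, x) in R for x in X)
    transitive = all((x, z) in R for (x, y) in R for (w, z) in R if y == w)
    return reflexive and transitive

def is_s5_frame(X, R):
    symmetric = all((y, x) in R for (x, y) in R)
    return is_s4_frame(X, R) and symmetric
```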
Our first result connecting expertise and knowledge is on the semantic side: we show there is a bijection between expertise models closed under intersections and unions, and S4 models. Moreover, there is a close connection between the collection of expertise sets \(P\) and the relation \(R\) in the image of any expertise model. Since expertise models closed under intersections and unions are Alexandrov topological spaces (where \(P\) is the set of closed sets), this is essentially a reformulation of a known result linking relational semantics over S4 frames and topological semantics over Alexandrov spaces [TODO: citations]. To keep the presentation self-contained, we prove it for our setting here. First, we show how to map a collection of sets \(P\) to a binary relation.
Definition 3
For a set \(X\) and \(P \subseteq 2^X\), let \(R_P\) be the binary relation on \(X\) given by
\[x {R_P} y \iff \text{for all } A \in P:\ y \in A \text{ implies } x \in A\]
In the case where \(P\) is the collection of closed sets of a topology on \(X\), \(R_P\) is the specialisation preorder. In what follows, say a set \(A \subseteq X\) is downwards closed with respect to a relation \(R\) if \(xRy\) and \(y \in A\) implies \(x \in A\).
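For finite \(X\) both directions of this correspondence can be computed directly. The Python sketch below (illustrative only) builds \(R_P\) from \(P\) and enumerates the downwards closed sets of a relation.

```python
from itertools import chain, combinations

def specialisation_preorder(X, P):
    """R_P: x R_P y iff every set in P containing y also contains x."""
    return {(x, y) for x in X for y in X
            if all(x in A for A in P if y in A)}

def downward_closed_sets(X, R):
    """All A subsets of X such that x R y and y in A imply x in A (finite X only)."""
    elems = list(X)
    subsets = chain.from_iterable(combinations(elems, r) for r in range(len(elems) + 1))
    return {frozenset(A) for A in subsets
            if all(x in A for (x, y) in R if y in A)}

# Example: with P = {{1,2}, {2,3}} on X = {1,2,3} we get 2 R_P 1 and 2 R_P 3,
# since state 2 lies in every expertise set containing 1, and in every one containing 3.
X = frozenset({1, 2, 3})
P = {frozenset({1, 2}), frozenset({2, 3})}
R = specialisation_preorder(X, P)
print((2, 1) in R, (1, 2) in R)   # True False
```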
Lemma 2
Let \(X\) be a set and \(R, S\) reflexive and transitive relations on \(X\). Then if \(R\) and \(S\) share the same downwards closed sets, \(R = S\).
Proof.
Suppose \(xRy\). Set \(A = \{z \in X \mid zSy\}\). By transitivity of \(S\), \(A\) is downwards closed wrt \(S\). By assumption, \(A\) must also be downwards closed wrt \(R\). By reflexivity of \(S\), \(y \in A\). Hence \(xRy\) implies \(x \in A\), i.e. \(xSy\). This shows \(R \subseteq S\), and the reverse inclusion holds by a symmetrical argument. Hence \(R = S\).
Lemma 3
Let \(X\) be a set.
For any \(P \subseteq 2^X\), \(R_P\) is reflexive and transitive.
If \(P \subseteq 2^X\) is closed under (possibly empty) unions and intersections, then for all \(A \subseteq X\):
\[A \in P \iff A \text{ is downwards closed wrt } R_P\]If \(R\) is a reflexive and transitive relation on \(X\), there is \(P \subseteq 2^X\) closed under unions and intersections such that \(R_P = R\).
Proof.
Reflexivity is clear. For transitivity, suppose \(x{R_P}y\) and \(y{R_P}z\). Take \(A \in P\) such that \(z \in A\). Then \(y{R_P}z\) gives \(y \in A\). But then \(x{R_P}y\) gives \(x \in A\). Hence \(x{R_P}z\).
Suppose \(P\) is closed under unions and intersections and let \(A \subseteq X\).
First suppose \(A \in P\). Then \(A\) is downwards closed with respect to \(R_P\): if \(y \in A\) and \(x{R_P}y\) then, by definition of \(R_P\), we have \(x \in A\).
Next suppose \(A\) is downwards closed with respect to \(R_P\). We claim
\[A = \bigcup_{y \in A}\bigcap\{B \in P \mid y \in B\} \]Since \(P\) is closed under intersections and unions, this will show \(A \in P\). The left-to-right inclusion is clear, since any \(y \in A\) lies in the term of the union corresponding to \(y\). For the right-to-left inclusion, take any \(x\) in the set on the RHS. Then there is \(y \in A\) such that \(x \in \bigcap\{B \in P \mid y \in B\}\). But this is just a rephrasing of \(x{R_P}y\). Since \(A\) is downwards closed, we get \(x \in A\), which completes the proof.
Take any reflexive and transitive relation \(R\). Set
\[P = \{A \subseteq X \mid A \text{ is downwards closed wrt } R\}\]It is easily seen that \(P\) is closed under unions and intersections. We need to show that \(R_P = R\).
By (1), \(R_P\) is reflexive and transitive. By Lemma 2, it is sufficient to show that \(R_P\) and \(R\) share the same downwards closed sets. Indeed, for any \(A \subseteq X\) we get by (2) and the definition of \(P\) that
\[\begin{aligned} A \text{ is downwards closed wrt } R_P &\iff A \in P \\ &\iff A \text{ is downwards closed wrt } R \end{aligned} \]Hence \(R = R_P\).
We can now state the correspondence between expertise models and S4 relational models.
Theorem 1
The mapping \(f: \M@int \cap \M@unions \to \M@sfour\) given by \((X, P, V) \mapsto (X, R_P, V)\) is bijective.
Proof.
Lemma 3 (1) shows that \(f\) is well-defined, i.e. that \(f(M)\) does indeed lie in \(\M@sfour\) for any expertise model \(M\). Injectivity follows from Lemma 3 (2), since \(P\) is fully determined by \(R_P\) for expertise models closed under unions and intersections. Finally, Lemma 3 (3) shows that \(f\) is surjective.
If we consider closure under complements together with intersections, an analogous result holds with S5 taking the place of S4.
Theorem 2
The mapping \(g: \M@int \cap \M@compl \to \M@sfive\) given by \((X, P, V) \mapsto (X, R_P, V)\) is bijective.
Proof.
First, note that \(\M@int \cap \M@compl \subseteq \M@int \cap \M@unions\), since any union of sets in \(P\) can be written as the complement of an intersection of complements of sets in \(P\). Therefore \(g\) is simply the restriction of \(f\) from Theorem 1 to \(\M@int \cap \M@compl\).
To show \(g\) is well-defined, we need to show that \(R_P\) is an equivalence relation whenever \(P\) is closed under intersections and complements. Reflexivity and transitivity were already shown in Lemma 3 (1). We show \(R_P\) is symmetric. Suppose \(x{R_P}y\). Let \(A \in P\) such that \(x \in A\). Write \(B = X \setminus A\). Then since \(P\) is closed under complements, \(B \in P\). Since \(x{R_P}y\) and \(x \notin B\), we cannot have \(y \in B\). Thus \(y \notin B = X \setminus A\), i.e. \(y \in A\). This shows \(y{R_P}x\). Hence \(R_P\) is an equivalence relation.
Injectivity of \(g\) is inherited from injectivity of \(f\) from Theorem 1. For surjectivity, it suffices to show that \(f^{-1}(M^*)\) is closed under complements when \(M^* = (X, R, V) \in \M@sfive\). Recall, from Lemma 3 (3), that \(f^{-1}(M^*) = (X, P, V)\), where \(A \in P\) iff \(A\) is downwards closed with respect to \(R\). Suppose \(A \in P\), i.e. \(A\) is downwards closed. To show \(X \setminus A\) is downwards closed, suppose \(y \in X \setminus A\) and \(xRy\). By symmetry of \(R\), \(yRx\). If \(x \in A\), then downwards closure of \(A\) would give \(y \in A\), but this is false. Hence \(x \notin A\), i.e. \(x \in X \setminus A\). Thus \(X \setminus A\) is downwards closed, so \(P\) is closed under complements. This completes the proof.
[Examples of expertise models and their corresponding relational models]
These mappings between expertise models and relational models also preserve the truth value of formulas, up to the following translation \(t: \cL \to \cLKA\), defined recursively as follows:
\[\begin{aligned} t(p) &= p \\ t(\neg\phi) &= \neg t(\phi) \\ t(\phi \land \psi) &= t(\phi) \land t(\psi) \\ t(\E\phi) &= \A(\neg t(\phi) \limplies \K\neg t(\phi)) \\ t(\S\phi) &= \neg\K\neg t(\phi) \\ t(\A\phi) &= \A t(\phi) \end{aligned}\]
The only interesting cases are for \(\E\phi\) and \(\S\phi\). The translation of \(\E\phi\) corresponds directly to the intuition of expertise as refutation: in all possible scenarios, if \(\phi\) is false the source knows so. The translation of \(\S\phi\) says that soundness is just the dual of knowledge: \(\phi\) is sound if the source does not know that \(\phi\) is false.
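The translation can also be written down as a short recursive function. The Python sketch below reuses the illustrative tuple encoding from the earlier sketches, adding an ('implies', f, g) node for readability in the target language.

```python
def t(phi):
    """Translate a formula of L (with E, S, A) into the language with K and A."""
    if isinstance(phi, str):                 # atomic propositions are unchanged
        return phi
    op = phi[0]
    if op == 'not':
        return ('not', t(phi[1]))
    if op == 'and':
        return ('and', t(phi[1]), t(phi[2]))
    if op == 'A':
        return ('A', t(phi[1]))
    if op == 'E':                            # t(E phi) = A(not t(phi) -> K not t(phi))
        body = ('not', t(phi[1]))
        return ('A', ('implies', body, ('K', body)))
    if op == 'S':                            # t(S phi) = not K not t(phi)
        return ('not', ('K', ('not', t(phi[1]))))
    raise ValueError(f"unknown operator: {op!r}")

print(t(('S', 'p')))   # ('not', ('K', ('not', 'p')))
print(t(('E', 'p')))   # ('A', ('implies', ('not', 'p'), ('K', ('not', 'p'))))
```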
Theorem 3
Let \(f: \M@int \cap \M@unions \to \M@sfour\) be the bijection from Theorem 1. Then for all \(M = (X, P, V) \in \M@int \cap \M@unions\), \(x \in X\) and \(\phi \in \cL\):
\[M, x \models \phi \iff f(M), x \models t(\phi) \qquad (1)\]
Moreover, if \(g: \M@int \cap \M@compl \to \M@sfive\) is the bijection from Theorem 2, then for all \(M = (X, P, V) \in \M@int \cap \M@compl\), \(x \in X\) and \(\phi \in \cL\):
\[M, x \models \phi \iff g(M), x \models t(\phi) \qquad (2)\]
Proof.
Note that since \(g\) is defined as the restriction of \(f\) to \(\M@int \cap \M@compl\), \((2)\) follows from \((1)\). We show \((1)\) only.
Let \(M = (X, P, V) \in \M@int \cap \M@unions\). Write \(f(M) = (X, R, V)\). From the definition of \(f\) and Lemma 3 (2), we have
\[A \in P \iff A \text{ is downwards closed wrt } R \qquad (*)\]
We show \((1)\) by induction. The only non-trivial cases are \(\E\) and \(\S\) formulas.
\(\E\): Suppose \(M, x \models \E\phi\). Then \(\|\phi\|_M \in P\). By the induction hypothesis and \((*)\), this means \(\|t(\phi)\|_{f(M)}\) is downwards closed with respect to \(R\). Now take \(y \in X\) such that \(f(M), y \models \neg t(\phi)\). Then \(y \notin \|t(\phi)\|_{f(M)}\). Since this set is downwards closed, it cannot contain any \(R\)-successor of \(y.\) Hence \(f(M), y \models \K\neg t(\phi)\). This shows that \(f(M), x \models \A(\neg t(\phi) \limplies \K\neg t(\phi))\), i.e. \(f(M), x \models t(\E\phi)\).
Now suppose \(f(M), x \models t(\E\phi)\), i.e. \(f(M), x \models \A(\neg t(\phi) \limplies \K\neg t(\phi))\). We show \(\|\phi\|_M\) is downwards closed. Suppose \(yRz\) and \(z \in \|\phi\|_M\). By the induction hypothesis, \(f(M), z \not\models \neg t(\phi)\). Hence \(f(M), y \not\models \K\neg t(\phi)\). Since \(\neg t(\phi) \limplies \K\neg t(\phi)\) holds everywhere in \(f(M)\), this means \(f(M), y \models t(\phi)\); by the induction hypothesis again we get \(M, y \models \phi\) and thus \(y \in \|\phi\|_M\). This shows that \(\|\phi\|_M\) is downwards closed, and by \((*)\) we have \(\|\phi\|_M \in P\). Hence \(M, x \models \E\phi\).
\(\S\): We show both directions by contraposition. Suppose \(M, x \not\models \S\phi\). Then there is \(A \in P\) such that \(\|\phi\|_M \subseteq A\) and \(x \notin A\). Since \(A\) is downwards closed (by \((*)\)), this means \(xRy\) implies \(y \notin A\) and hence \(y \notin \|\phi\|_M\), for any \(y \in X\). By the induction hypothesis, we get that \(xRy\) implies \(f(M), y \models \neg t(\phi)\), i.e. \(f(M), x \models \K\neg t(\phi)\). Hence \(f(M), x \not\models t(\S\phi)\).
Finally, suppose \(f(M), x \not\models t(\S\phi)\), i.e. \(f(M), x \models \K\neg t(\phi)\). Let \(A\) be the \(R\)-downwards closure of \(\|\phi\|_M\), i.e.
\[A = \{y \in X \mid \exists z \in \|\phi\|_M: yRz\}\]Then \(\|\phi\|_M \subseteq A\) by reflexivity of \(R\), and \(A\) is downwards closed by transitivity. Hence \(A \in P\). But \(x \notin A\), since for all \(z\) with \(xRz\) we have \(f(M), z \models \neg t(\phi)\), so \(z \notin \|t(\phi)\|_{f(M)} = \|\phi\|_M\). Hence \(M, x \not\models \S\phi\).
Taken together, the results of this section show that, when considering expertise models closed under intersections and unions, \(P\) uniquely determines an epistemic accessibility relation such that expertise and soundness have precise interpretations in terms of S4 knowledge. If we also impose closure under complements, the notion of knowledge is strengthened to S5. Moreover, every S4 and S5 model arises from some expertise model in this way.
[Address counterexamples for negative introspection of knowledge. Do these give arguments against symmetric expertise?]
Axiomatisation¶
In this section we give sound and complete logics with respect to various classes of expertise models. We start with the class of all expertise models \(\M\), and show how adding more axioms corresponds to closure conditions (the connections with S4 and S5 shown in the previous section will be useful here to make use of their well-known axiomatisations).
The General Case¶
Let \(\sL\) be the logic generated by all instances of propositional tautologies together with the following axioms and inference rules:
Note that we treat \(\A\) as a “box” and \(\S\) as a “diamond” modality. First, \(\sL\) is sound.
Lemma 4
\(\sL\) is sound with respect to \(\M\).
Proof. The inference rules are clearly sound. All axioms were either shown to be sound in Proposition 1 or are straightforward to see, with the possible exception of \(\foursoundness\) which we will show explicitly. Let \(M = (X, P, V)\) be an expertise model and \(x \in X\). Suppose \(M, x \models \S\S\phi\). We need to show \(M, x \models \S\phi\). Take \(A \in P\) such that \(\|\phi\|_M \subseteq A\). Now for any \(y \in X\), if \(M, y \models \S\phi\) then clearly \(y \in A\). Hence \(\|\S\phi\|_M \subseteq A\). But then \(M, x \models \S\S\phi\) gives \(x \in A\). Hence \(M, x \models \S\phi\).
For completeness, we use a variation of the standard canonical model method. In taking this approach, one constructs a model whose states are maximally \(\sL\)-consistent sets of formulas, and aims to prove the truth lemma: that a set \(\Gamma\) satisfies \(\phi\) in the canonical model if and only if \(\phi \in \Gamma\). However, the truth lemma poses some difficulties for our semantics. Roughly speaking, we find there is an obvious choice of \(P\) to ensure the truth lemma for \(\E\phi\) formulas, but that this may be too small for \(\S\phi\) to be refuted when \(\S\phi \notin \Gamma\) (recall that \(M, x \not\models \S\phi\) iff there exists some \(A \in P\) such that \(\|\phi\|_M \subseteq A\) and \(x \notin A\)). We therefore enlarge the set of states so we can add new expertise sets \(A\) – without affecting the truth value of expertise formulas – to obtain the truth lemma for soundness formulas.
First, we need some standard notation and terminology. Write \(\thm \phi\) iff \(\phi \in \sL\). For \(\Gamma \subseteq \cL\) and \(\phi \in \cL\), write \(\Gamma \entails \phi\) iff there are \(\phi_0, \ldots, \phi_n \in \Gamma\), \(n \ge 0\), such that \(\thm (\phi_0 \land \cdots \land \phi_n) \limplies \phi\). Say \(\Gamma \subseteq \cL\) is inconsistent if \(\Gamma \entails \bot\), and that \(\Gamma\) is consistent otherwise. \(\Gamma\) is maximally consistent iff \(\Gamma\) is consistent and \(\Gamma \subset \Delta\) implies that \(\Delta\) is inconsistent. We recall some standard facts about maximally consistent sets, which we may use without explicit reference in the sequel.
Lemma 5
Let \(\Gamma\) be a maximally consistent set and \(\phi, \psi \in \cL\).
\(\phi \in \Gamma\) iff \(\Gamma \entails \phi\)
If \(\phi \limplies \psi \in \Gamma\) and \(\phi \in \Gamma\), then \(\psi \in \Gamma\)
\(\neg\phi \in \Gamma\) iff \(\phi \notin \Gamma\)
\(\phi \land \psi \in \Gamma\) iff \(\phi \in \Gamma\) and \(\psi \in \Gamma\)
Proof.
First suppose \(\phi \in \Gamma\). Since \(\phi \limplies \phi\) is an instance of the propositional tautology \(p \limplies p\), we have \(\thm \phi \limplies \phi\). Since \(\phi \in \Gamma\), this gives \(\Gamma \entails \phi\).
Now suppose \(\Gamma \entails \phi\). Set \(\Delta = \Gamma \cup \{\phi\}\). We claim \(\Delta\) is consistent. If not, there are \(\psi_0,\ldots,\psi_n \in \Delta\) such that \(\thm (\psi_0 \land \cdots \land \psi_n) \limplies \bot\). Since \(\Gamma\) is consistent, at least one of the \(\psi_i\) must be equal to \(\phi\). Without loss of generality, \(\psi_0 = \phi\) and \(\psi_j \in \Gamma\) for \(j > 0\). Hence, by propositional logic and \(\modpon\), \(\thm (\psi_1 \land \cdots \land \psi_n) \limplies \neg \phi\). Thus \(\Gamma \entails \neg\phi\). But since \(\Gamma \entails \phi\) also, it follows that \(\Gamma \entails \bot\), and thus \(\Gamma\) is inconsistent: contradiction. So \(\Delta\) must be consistent after all. Clearly \(\Gamma \subseteq \Delta\), and by maximal consistency of \(\Gamma\), \(\Gamma \not\subset \Delta\). Hence \(\Delta = \Gamma\), so \(\phi \in \Gamma\) as required.
By propositional logic we have \(\thm ((\phi \limplies \psi) \land \phi) \limplies \psi\). Hence \(\Gamma \entails \psi\); by (1) we get \(\psi \in \Gamma\).
If \(\neg\phi \in \Gamma\) then clearly \(\phi \notin \Gamma\), since otherwise \(\Gamma\) would be inconsistent. If \(\phi \notin \Gamma\) then \(\Gamma \not\entails \phi\) by (1). Set \(\Delta = \Gamma \cup \{\neg\phi\}\). Then \(\Delta\) is consistent (one can show that assuming \(\Delta\) is inconsistent leads to \(\Gamma \entails \phi\); a contradiction). Again, since \(\Gamma \subseteq \Delta\) and \(\Gamma\) is maximally consistent, we must in fact have \(\Gamma = \Delta\), so \(\neg\phi \in \Gamma\).
If \(\phi \land \psi \in \Gamma\) then both \(\Gamma \entails \phi\) and \(\Gamma \entails \psi\), so \(\phi, \psi \in \Gamma\) by (1). Conversely, if \(\phi, \psi \in \Gamma\) then \(\Gamma \entails \phi \land \psi\), so \(\phi \land \psi \in \Gamma\) by (1) again.
Lemma 6 (Lindenbaum’s Lemma)
If \(\Gamma \subseteq \cL\) is consistent there is a maximally consistent set \(\Delta\) such that \(\Gamma \subseteq \Delta\).
Let \(X_{\sL}\) denote the set of maximally consistent sets. Define a relation \(R\) on \(X_{\sL}\) by
\[\Gamma R \Delta \iff \text{for all } \phi \in \cL:\ \A\phi \in \Gamma \text{ implies } \phi \in \Delta\]
The \(\Tuniv\) and \(\fiveuniv\) axioms for \(\A\) show that \(R\) is an equivalence relation (this is just part of the standard proof that S5 is complete for equivalence relations).
Lemma 7
\(R\) is an equivalence relation.
Proof.
We show that \(R\) is reflexive and has the Euclidean property (\(xRy\) and \(xRz\) implies \(yRz\)).
Reflexivity: Let \(\Gamma \in X_{\sL}\). Suppose \(\A\phi \in \Gamma\). By \(\Tuniv\) and closure of maximally consistent sets under modus ponens, \(\phi \in \Gamma\). Hence \(\Gamma R \Gamma\).
Euclidean: Suppose \(\Gamma R \Delta\) and \(\Gamma R \Lambda\). We show \(\Delta R \Lambda\) by contraposition. Suppose \(\phi \notin \Lambda\). Since \(\Gamma R \Lambda\), this means \(\A\phi \notin \Gamma\). Hence \(\neg\A\phi \in \Gamma\), and by \(\fiveuniv\) we get \(\A\neg\A\phi \in \Gamma\). Now \(\Gamma R \Delta\) gives \(\neg\A\phi \in \Delta\), so \(\A\phi \notin \Delta\).
To conclude we need to show \(R\) is symmetric and transitive. For symmetry, suppose \(\Gamma R \Delta\). By reflexivity, \(\Gamma R \Gamma\). The Euclidean property therefore gives \(\Delta R \Gamma\). For transitivity, suppose \(\Gamma R \Delta\) and \(\Delta R \Lambda\). By symmetry, \(\Delta R \Gamma\). The Euclidean property again gives \(\Gamma R \Lambda\).
For \(\phi \in \cL\), let \(|\phi| = \{\Gamma \in X_{\sL} \mid \phi \in \Gamma\}\) be the proof set of \(\phi\). For \(\Sigma \in X_{\sL}\), let \(X_\Sigma\) be the equivalence class of \(\Sigma\) in \(R\), and write \(|\phi|_\Sigma = |\phi| \cap X_\Sigma\).
Using what is essentially the standard proof of the truth lemma for the modal logic K with respect to relational semantics, \(\Kuniv\) gives the following lemma:
Lemma 8
Let \(\Sigma \in X_{\sL}\). Then
For any \(\phi \in \cL\), \(\A\phi \in \Sigma\) iff \(|\phi|_\Sigma = X_\Sigma\)
For any \(\phi, \psi \in \cL\), \(\A(\phi \limplies \psi) \in \Sigma\) iff \(|\phi|_\Sigma \subseteq |\psi|_\Sigma\)
For any \(\phi, \psi \in \cL\), \(\A(\phi \liff \psi) \in \Sigma\) iff \(|\phi|_\Sigma = |\psi|_\Sigma\)
Proof.
\(\implies\): Suppose \(\A\phi \in \Sigma\). Let \(\Gamma \in X_\Sigma\). Then \(\Sigma R \Gamma\), so clearly \(\phi \in \Gamma\).
\(\impliedby\): We show the contrapositive. Suppose \(\A\phi \notin \Sigma\). Set
\[\Gamma_0 = \{\psi \mid \A\psi \in \Sigma\} \cup \{\neg\phi\}\]We claim \(\Gamma_0\) is consistent. If not, wlog there are \(\psi_0, \ldots, \psi_n \in \Gamma_0\) such that \(\A\psi_i \in \Sigma\) for each \(i\), and
\[\thm \psi_0 \land \cdots \land \psi_n \limplies \phi\]By propositional logic, we get
\[\thm \psi_0 \limplies ( \cdots \limplies (\psi_n \limplies \phi) \cdots )\]and by \(\necuniv\),
\[\thm \A(\psi_0 \limplies ( \cdots \limplies (\psi_n \limplies \phi) \cdots ))\]Since \(\Kuniv\) together with \(\modpon\) says that \(\A\) distributes over implications, repeated application gives
\[\thm \A\psi_0 \limplies ( \cdots \limplies (\A\psi_n \limplies \A\phi) \cdots )\]and propositional logic again gives
\[\thm \A\psi_0 \land \cdots \land \A\psi_n \limplies \A\phi\]But recall that \(\A\psi_i \in \Sigma\). Hence \(\Sigma \entails \A\phi\). Since \(\Sigma\) is maximally consistent, this means \(\A\phi \in \Sigma\): contradiction.
Thus \(\Gamma_0\) is indeed consistent. By Lindenbaum’s lemma (Lemma 6), there is a maximally consistent set \(\Gamma \supseteq \Gamma_0\). Clearly \(\Sigma R \Gamma\), since if \(\A\psi \in \Sigma\) then \(\psi \in \Gamma_0 \subseteq \Gamma\). Moreover, \(\neg\phi \in \Gamma_0 \subseteq \Gamma\), so by consistency \(\phi \notin \Gamma\). Hence \(\Gamma \in X_\Sigma \setminus |\phi|_\Sigma\), and we are done.
By (1), we have
\[\begin{aligned} \A(\phi \limplies \psi) \in \Sigma &\iff |\phi \limplies \psi|_\Sigma = X_\Sigma \\ &\iff \forall \Gamma \in X_\Sigma: \phi \limplies \psi \in \Gamma \end{aligned}\]Suppose \(\A(\phi \limplies \psi) \in \Sigma\). Take \(\Gamma \in |\phi|_\Sigma\). Then we have \(\phi, \phi \limplies \psi \in \Gamma\), so \(\psi \in \Gamma\). This shows \(|\phi|_\Sigma \subseteq |\psi|_\Sigma\).
Conversely, suppose \(|\phi|_\Sigma \subseteq |\psi|_\Sigma\). Take \(\Gamma \in X_\Sigma\). If \(\phi \notin \Gamma\) then \(\neg\phi \in \Gamma\), so \(\neg\phi \lor \psi \in \Gamma\) and thus \(\phi \limplies \psi \in \Gamma\). If \(\phi \in \Gamma\) then \(\Gamma \in |\phi|_\Sigma \subseteq |\psi|_\Sigma\), so \(\psi \in \Gamma\). Thus \(\phi \limplies \psi \in \Gamma\) in this case too. Hence \(\A(\phi \limplies \psi) \in \Sigma\).
First note that \(\A(\alpha \land \beta) \in \Sigma\) iff both \(\A\alpha \in \Sigma\) and \(\A\beta \in \Sigma\). This can be shown using \(\Kuniv\), \(\modpon\) and instances of the propositional tautologies \((p \land q) \limplies p\) (for the left-to-right implication) and \(p \limplies (q \limplies (p \land q))\) (for the right-to-left implication).
Recalling that \(\phi \liff \psi\) is an abbreviation for \((\phi \limplies \psi) \land (\psi \limplies \phi)\), we get
\[\begin{aligned} \A(\phi \liff \psi) \in \Sigma &\iff \A(\phi \limplies \psi) \in \Sigma \text{ and } \A(\psi \limplies \phi) \in \Sigma \\ &\iff |\phi|_\Sigma \subseteq |\psi|_\Sigma \text { and } |\psi|_\Sigma \subseteq |\phi|_\Sigma \\ & \iff |\phi|_\Sigma = |\psi|_\Sigma \end{aligned}\]as required.
Corollary 2
Let \(\Sigma \in X_{\sL}\). For \(\Gamma, \Delta \in X_\Sigma\) and \(\phi \in \cL\),
\(\A\phi \in \Gamma\) iff \(\A\phi \in \Delta\)
\(\E\phi \in \Gamma\) iff \(\E\phi \in \Delta\)
Proof.
If \(\A\phi \in \Gamma\) then Lemma 8 gives \(|\phi|_\Gamma = X_\Gamma\). But since \(\Gamma\) and \(\Delta\) are in the same equivalence class of \(R\), \(|\phi|_\Gamma = |\phi|_\Delta\) and \(X_\Gamma = X_\Delta\). Hence \(|\phi|_\Delta = X_\Delta\), so \(\A\phi \in \Delta\) by Lemma 8. The converse holds by symmetry.
If \(\E\phi \in \Gamma\) then \(\A\E\phi \in \Gamma\) by \(\EA\). Since \(\Gamma R \Delta\), we get \(\E\phi \in \Delta\). Again, the converse holds by symmetry.
We are ready to define the “canonical” model (for each \(\Sigma\)). Set \(\widehat{X}_\Sigma = X_\Sigma \times \R\). This is the step described informally above: we enlarge \(X_\Sigma\) by considering uncountably many copies of each point (any uncountable set would do in place of \(\R\)). The valuation is straightforward: set \(\widehat{V}_\Sigma(p) = |p|_\Sigma \times \R\). For the expertise component of the model, say \(A \subseteq \widehat{X}_\Sigma\) is S-closed iff for all \(\phi \in \cL\):
\[|\phi|_\Sigma \times \R \subseteq A \text{ implies } |\S\phi|_\Sigma \times \R \subseteq A\]
Set \(\widehat{P}_\Sigma = \widehat{P}_\Sigma^0 \cup \widehat{P}_\Sigma^1\), where
\[\begin{aligned} \widehat{P}_\Sigma^0 &= \{|\psi|_\Sigma \times \R \mid \psi \in \cL \text{ and } \E\psi \in \Sigma\} \\ \widehat{P}_\Sigma^1 &= \{A \subseteq \widehat{X}_\Sigma \mid A \text{ is S-closed and } A \ne |\psi|_\Sigma \times \R \text{ for all } \psi \in \cL\} \end{aligned}\]
We have a version of the truth lemma for the model \(\widehat{M}_\Sigma = (\widehat{X}_\Sigma, \widehat{P}_\Sigma, \widehat{V}_\Sigma)\).
Lemma 9
For any \((\Gamma, t) \in \widehat{X}_\Sigma\) and \(\phi \in \cL\),
\[\widehat{M}_\Sigma, (\Gamma, t) \models \phi \iff \phi \in \Gamma\]
i.e. \(\|\phi\|_{\widehat{M}_\Sigma} = |\phi|_\Sigma \times \R\).
Proof.
We proceed by induction on formulas.
Atomic propositions: By definition of the valuation, \(\|p\|_{\widehat{M}_\Sigma} = \widehat{V}_\Sigma(p) = |p|_\Sigma \times \R\).
Conjunctions: By the induction hypothesis we have
\[\begin{aligned} \|\phi \land \psi \|_{\widehat{M}_\Sigma} &= \|\phi\|_{\widehat{M}_\Sigma} \cap \|\psi\|_{\widehat{M}_\Sigma} \\ &= (|\phi|_\Sigma \times \R) \cap (|\psi|_\Sigma \times \R) \\ &= |\phi \land \psi|_\Sigma \times \R \end{aligned}\]Negations: We have
\[\begin{aligned} \widehat{M}_\Sigma, (\Gamma, t) \models \neg\phi &\iff \widehat{M}_\Sigma, (\Gamma, t) \not\models \phi \\ &\iff \phi \notin \Gamma \\ &\iff \neg\phi \in \Gamma \end{aligned} \]\(\A\): We have
\[\begin{aligned} \widehat{M}_\Sigma, (\Gamma, t) \models \A\phi &\iff \forall (\Delta, s) \in \widehat{X}_\Sigma: \widehat{M}_\Sigma, (\Delta, s) \models \phi \\ &\iff \forall (\Delta, s) \in \widehat{X}_\Sigma: \phi \in \Delta \\ &\iff |\phi|_\Sigma = X_\Sigma \\ &\iff \A\phi \in \Sigma \\ &\iff \A\phi \in \Gamma \end{aligned}\]where we use Lemma 8 and Corollary 2 in the final two steps.
\(\E\): First suppose \(\E\phi \in \Gamma\). By Corollary 2, \(\E\phi \in \Sigma\). Hence \(|\phi|_\Sigma \times \R \in \widehat{P}_\Sigma^0\). By the induction hypothesis, \(\|\phi\|_{\widehat{M}_\Sigma} \in \widehat{P}_\Sigma^0\). Hence \(\widehat{M}_\Sigma, (\Gamma, t) \models \E\phi\).
Now suppose \(\widehat{M}_\Sigma, (\Gamma, t) \models \E\phi\). Then by the induction hypothesis, \(|\phi|_\Sigma \times \R \in \widehat{P}_\Sigma\). Since \(\widehat{P}_\Sigma^1\) does not contain any sets of this form, we must have \(|\phi|_\Sigma \times \R \in \widehat{P}_\Sigma^0\). Therefore there is some \(\psi\) such that \(\E\psi \in \Sigma\) and \(|\phi|_\Sigma \times \R = |\psi|_\Sigma \times \R\). It follows that \(|\phi|_\Sigma = |\psi|_\Sigma\), and Lemma 8 then gives \(\A(\phi \liff \psi) \in \Sigma\). By Corollary 2, we have \(\E\psi \in \Gamma\) and \(\A(\phi \liff \psi) \in \Gamma\) too. By \(\reE\) we get \(\E\phi \in \Gamma\) as required.
\(\S\): First suppose \(\S\phi \in \Gamma\). Take \(A \in \widehat{P}_\Sigma\) such that \(\|\phi\|_{\widehat{M}_\Sigma} \subseteq A\). By the induction hypothesis, \(|\phi|_\Sigma \times \R \subseteq A\). We consider two cases:
Case 1: \(A \in \widehat{P}_\Sigma^0\). Here there is \(\psi\) such that \(A = |\psi|_\Sigma \times \R\) and \(\E\psi \in \Sigma\). Since \(|\phi|_\Sigma \times \R \subseteq A\), we have \(|\phi|_\Sigma \subseteq |\psi|_\Sigma\). By Lemma 8, \(\A(\phi \limplies \psi) \in \Sigma\). By Corollary 2 we have \(\E\psi, \A(\phi \limplies \psi) \in \Gamma\) too. Applying \(\weakening@E\) gives \(\S\phi \land \E\psi \limplies \psi \in \Gamma\); since \(\S\phi, \E\psi \in \Gamma\) we have \(\S\phi \land \E\psi \in \Gamma\) and thus \(\psi \in \Gamma\). This means \((\Gamma, t) \in |\psi|_\Sigma \times \R = A\), as required.
Case 2: \(A \in \widehat{P}_\Sigma^1\). Here \(A\) is S-closed by definition of \(\widehat{P}_\Sigma^1\). Hence \(|\S\phi|_\Sigma \times \R \subseteq A\). Since \(\S\phi \in \Gamma\) we get \((\Gamma, t) \in A\) as required.
In either case we have \((\Gamma, t) \in A\). This shows \(\widehat{M}_\Sigma, (\Gamma, t) \models \S\phi\).
For the other direction we show the contrapositive. Take any \((\Gamma, t) \in \widehat{X}_\Sigma\) and suppose \(\S\phi \notin \Gamma\). We show that \(\widehat{M}_\Sigma, (\Gamma, t) \not\models \S\phi\), i.e. there is \(A \in \widehat{P}_\Sigma\) such that \(\|\phi\|_{\widehat{M}_\Sigma} \subseteq A\) but \((\Gamma, t) \notin A\).
First, set
\[\mathcal{U} = \{ |\psi|_\Sigma \times \R \mid \psi \in \cL \text{ and } |\psi|_\Sigma \times \R \not\subseteq |\S\phi|_\Sigma \times \R \}\]Since \(\cL\) is countable, \(\mathcal{U}\) is at most countable. Hence we may write \(\mathcal{U} = \{U_n\}_{n \in N}\) for some index set \(N \subseteq \N\). Since \(U_n \not\subseteq |\S\phi|_\Sigma \times \R\), we may choose some \((\Delta_n, t_n) \in U_n \setminus (|\S\phi|_\Sigma \times \R)\) for each \(n\). Now write
\[\mathcal{D} = \{(\Delta_n, t_n)\}_{n \in N} \cup \{(\Gamma, t)\}\]Since \(N\) is at most countable, so too is \(\mathcal{D}\). Since \(\R\) is uncountable, there is some \(s \in \R\) such that \((\Gamma, s) \notin \mathcal{D}\). 4 We necessarily have \(s \ne t\). We are ready to define \(A\): set
\[A = (|\S\phi|_\Sigma \times \R) \cup \{(\Gamma, s)\}\]Note that \((\Gamma, t) \notin A\) since \(\S\phi \notin \Gamma\) and \(s \ne t\).
Next we show \(\|\phi\|_{\widehat{M}_\Sigma} \subseteq A\). By the induction hypothesis, this is equivalent to \(|\phi|_\Sigma \times \R \subseteq A\). By \(\Tsoundness\) and \(\necuniv\), we have \(\A(\phi \limplies \S\phi) \in \Sigma\), and consequently \(|\phi|_\Sigma \subseteq |\S\phi|_\Sigma\) by Lemma 8. Hence \(|\phi|_\Sigma \times \R \subseteq |\S\phi|_\Sigma \times \R \subseteq A\) as required.
It only remains to show that \(A \in \widehat{P}_\Sigma\). We claim that \(A \in \widehat{P}_\Sigma^1\).
Claim 1: \(A\) is S-closed.
Proof. Suppose \(|\psi|_\Sigma \times \R \subseteq A\). We claim that, in fact, \(|\psi|_\Sigma \times \R \subseteq |\S\phi|_\Sigma \times \R\). If not, then by definition of \(\mathcal{U}\) there is \(n \in N\) such that \(|\psi|_\Sigma \times \R = U_n\). Hence \(U_n \subseteq A\). This means \((\Delta_n, t_n) \in A\). But \((\Delta_n, t_n) \notin |\S\phi|_\Sigma \times \R\), so we must have \((\Delta_n, t_n) = (\Gamma, s)\). But then \((\Gamma, s) \in \mathcal{D}\): contradiction. So we do indeed have \(|\psi|_\Sigma \times \R \subseteq |\S\phi|_\Sigma \times \R\), and thus \(|\psi|_\Sigma \subseteq |\S\phi|_\Sigma\). By Lemma 8, \(\A(\psi \limplies \S\phi) \in \Sigma\).
Now, take any \((\Lambda, u) \in |\S\psi|_\Sigma \times \R\). Since \(\Lambda \in X_\Sigma\), Corollary 2 gives \(\A(\psi \limplies \S\phi) \in \Lambda\). By \(\weakening@S\), \(\S\psi \limplies \S\S\phi \in \Lambda\). Since \(\Lambda \in |\S\psi|_\Sigma\), we get \(\S\S\phi \in \Lambda\). But then \(\foursoundness\) gives \(\S\phi \in \Lambda\). That is, \((\Lambda, u) \in |\S\phi|_\Sigma \times \R \subseteq A\). This shows \(|\S\psi|_\Sigma \times \R \subseteq A\), and thus \(A\) is S-closed.
Claim 2: \(\forall \psi \in \cL: A \ne |\psi|_\Sigma \times \R\).
Proof. For contradiction, suppose there is \(\psi\) such that \(A = |\psi|_\Sigma \times \R\). Then since \((\Gamma, s) \in A\), we have \(\Gamma \in |\psi|_\Sigma\). But then \((\Gamma, t) \in |\psi|_\Sigma \times \R = A\): contradiction.
This shows that \(A \in \widehat{P}_\Sigma^1\). Therefore \(\widehat{M}_\Sigma, (\Gamma, t) \not\models \S\phi\), and we are done.
Strong completeness now follows. 5
Theorem 4
\(\sL\) is strongly complete with respect to \(\M\).
Proof.
We show the contrapositive. Suppose \(\Gamma \not\entails \phi\). Then \(\Gamma \cup \{\neg\phi\}\) is consistent. By Lindenbaum’s Lemma, there is a maximally consistent set \(\Sigma \supseteq \Gamma \cup \{\neg\phi\}\). Consider the model \(\widehat{M}_\Sigma\). For any \(\psi \in \Gamma\) we have \(\psi \in \Sigma\), so Lemma 9 (with \(t = 0\) chosen arbitrarily) gives \(\widehat{M}_\Sigma, (\Sigma, 0) \models \psi\). Also, \(\neg\phi \in \Sigma\) gives \(\widehat{M}_\Sigma, (\Sigma, 0) \models \neg\phi\), so \(\widehat{M}_\Sigma, (\Sigma, 0) \not\models \phi\). This shows that \(\Gamma \not\models \phi\), and we are done.
Extensions of the Base Logic¶
In this section we extend \(\sL\) to obtain axiomatisations of sub-classes of \(\M\) corresponding to closure conditions.
We start by considering closure under intersections. It was shown in Proposition 2 that the formula \(\A(\S\phi \limplies \phi) \limplies \E\phi\) characterises frames closed under intersections. It is perhaps no surprise that adding this as an axiom results in a sound and complete axiomatisation of \(\M@int\). Formally, let \(\sL@int\) be the extension of \(\sL\) with the following axiom
\[(\red@E) \qquad \A(\S\phi \limplies \phi) \limplies \E\phi\]
so-named since together with \(\E\phi \limplies \A(\S\phi \limplies \phi)\) – which is derivable in \(\sL\) 6 – it allows expertise to be reduced to soundness (with the help of the universal modality).
Theorem 5
\(\sL@int\) is sound and strongly complete with respect to \(\M@int\).
Proof.
For soundness, we only need to check that \(\red@E\) is sound for \(\M@int\). But this follows from Proposition 2 (1).
For completeness, we adopt a roughly similar approach to the general case above. Let consistency, maximal consistency and other standard notions and notation be defined as before, but now for \(\sL@int\) instead of \(\sL\). Let \(X_{\sL@int}\) be the set of maximally \(\sL@int\)-consistent sets. Define the relation \(R\) on \(X_{\sL@int}\) in exactly the same way. Since \(\sL@int\) extends \(\sL\), \(R\) is again an equivalence relation, and we have the analogues of Lemma 8 and Corollary 2.
This time, however, the construction of the canonical model for a given \(\Sigma \in X_{\sL@int}\) is much more straightforward. The set of states is simply \(X_\Sigma\), i.e. the equivalence class of \(\Sigma\) in \(R\). Overriding earlier terminology, say \(A \subseteq X_\Sigma\) is S-closed iff \(|\phi|_\Sigma \subseteq A\) implies \(|\S\phi|_\Sigma \subseteq A\) for all \(\phi \in \cL\). Then set
\[P_\Sigma = \{A \subseteq X_\Sigma \mid A \text{ is S-closed}\}\]
Finally, set \(V_\Sigma(p) = |p|_\Sigma\), and write \(M_\Sigma = (X_\Sigma, P_\Sigma, V_\Sigma)\).
Claim 1: \(M_\Sigma \in \M@int\)
Proof. We only need to show that intersections of S-closed sets are S-closed. Indeed, suppose \(\{A_i\}_{i \in I}\) is a collection of S-closed sets, and suppose \(|\phi|_\Sigma \subseteq \bigcap_{i \in I}{A_i}\). Then \(|\phi|_\Sigma \subseteq A_i\) for each \(i\), so S-closure gives \(|\S\phi|_\Sigma \subseteq A_i\). Hence \(|\S\phi|_\Sigma \subseteq \bigcap_{i \in I}{A_i}\).
We have the truth lemma for \(M_\Sigma\).
Claim 2: For all \(\Gamma \in X_\Sigma\) and \(\phi \in \cL\),
\[M_\Sigma, \Gamma \models \phi \iff \phi \in \Gamma\]i.e. \(\|\phi\|_{M_\Sigma} = |\phi|_\Sigma\).
Proof. By induction on formulas. The case for atomic propositions follows from the definition of \(V_\Sigma\), the cases for conjunctions and negations hold by properties of maximally consistent sets, and the case for \(\A\phi\) holds by an argument identical to the one used in the general case (Lemma 9). The only interesting cases are therefore for \(\E\phi\) and \(\S\phi\) formulas:
\(\E\): First suppose \(\E\phi \in \Gamma\). We claim \(|\phi|_\Sigma\) is S-closed. This will give \(\|\phi\|_{M_\Sigma} \in P_\Sigma\) by the induction hypothesis and definition of \(P_\Sigma\), and therefore \(M_\Sigma, \Gamma \models \E\phi\).
So, suppose \(|\psi|_\Sigma \subseteq |\phi|_\Sigma\). Then \(\A(\psi \limplies \phi) \in \Sigma\). Let \(\Delta \in |\S\psi|_\Sigma\). Since \(\Delta, \Gamma, \Sigma \in X_\Sigma\), we have \(\E\phi \in \Delta\) and \(\A(\psi \limplies \phi) \in \Delta\) too. By \(\weakening@E\), \(\S\psi \land \E\phi \limplies \phi \in \Delta\). But \(\S\psi \in \Delta\), so \(\S\psi \land \E\phi \in \Delta\) and thus \(\phi \in \Delta\), i.e. \(\Delta \in |\phi|_\Sigma\). This shows \(|\S\psi|_\Sigma \subseteq |\phi|_\Sigma\), so \(|\phi|_\Sigma\) is S-closed as required.
Now suppose \(M_\Sigma, \Gamma \models \E\phi\). Then, by the induction hypothesis, \(|\phi|_\Sigma\) is S-closed. Since \(|\phi|_\Sigma \subseteq |\phi|_\Sigma\) clearly holds, we get \(|\S\phi|_\Sigma \subseteq |\phi|_\Sigma\). This implies \(\A(\S\phi \limplies \phi) \in \Sigma\), and \(\red@E\) gives \(\E\phi \in \Sigma\). Since \(\Gamma \in X_\Sigma\), we get \(\E\phi \in \Gamma\) as required.
\(\S\): Suppose \(\S\phi \in \Gamma\). Take any \(A \in P_\Sigma\) such that \(\|\phi\|_{M_\Sigma} \subseteq A\). By the induction hypothesis, \(|\phi|_\Sigma \subseteq A\). By S-closure of \(A\), \(|\S\phi|_\Sigma \subseteq A\). Hence \(\Gamma \in |\S\phi|_\Sigma \subseteq A\). This shows \(M_\Sigma, \Gamma \models \S\phi\).
For the other direction we show the contrapositive. Suppose \(\S\phi \notin \Gamma\). First, we claim \(|\S\phi|_\Sigma\) is S-closed. Indeed, suppose \(|\psi|_\Sigma \subseteq |\S\phi|_\Sigma\). Then \(\A(\psi \limplies \S\phi) \in \Sigma\). Take any \(\Delta \in |\S\psi|_\Sigma\). Since \(\Delta \in X_\Sigma\), \(\A(\psi \limplies \S\phi) \in \Delta\) also. By \(\weakening@S\), \(\S\psi \limplies \S\S\phi \in \Delta\). Now \(\S\psi \in \Delta\) implies \(\S\S\phi \in \Delta\), and \(\foursoundness\) gives \(\S\phi \in \Delta\), i.e. \(\Delta \in |\S\phi|_\Sigma\). This shows \(|\S\psi|_\Sigma \subseteq |\S\phi|_\Sigma\), and thus \(|\S\phi|_\Sigma\) is S-closed.
Hence \(|\S\phi|_\Sigma\) is a set in \(P_\Sigma\) not containing \(\Gamma\). Moreover, \(\|\phi\|_{M_\Sigma} \subseteq |\S\phi|_\Sigma\) by the induction hypothesis and \(\Tsoundness\). Hence \(M_\Sigma, \Gamma \not\models \S\phi\).
Strong completeness now follows. If \(\Gamma \not\entails_{\sL@int} \phi\), then \(\Gamma \cup \{\neg\phi\}\) is consistent, so by Lindenbaum’s Lemma there is \(\Sigma \in X_{\sL@int}\) with \(\Sigma \supseteq \Gamma \cup \{\neg\phi\}\). Considering the model \(M_\Sigma \in \M@int\), we have \(M_\Sigma, \Sigma \models \Gamma\) and \(M_\Sigma, \Sigma \not\models \phi\) by the truth lemma. Hence \(\Gamma \not\models_{\M@int} \phi\).
Now we add finite unions to the mix. It was shown in Proposition 3 that, within the class \(\M@int\), the K axiom for the dual operator \(\hat\S\) characterises closure under finite unions. Note that any frame \((X, P)\) closed under intersections and finite unions is a topological space, 7 where \(P\) is the set of closed sets. Write \(\M@top = \M@int \cap \M@funions\) for the class of models over such frames. We obtain an axiomatisation of \(\M@top\) by adding K for \(\hat\S\) and a bridge axiom which links \(\hat\S\) and \(\A\):
\[\begin{aligned} (\Kshat) \qquad & \hat\S(\phi \limplies \psi) \limplies (\hat\S\phi \limplies \hat\S\psi) \\ (\inc) \qquad & \A\phi \limplies \hat\S\phi \end{aligned}\]
Let \(\sL@top\) be the extension of \(\sL@int\) by \(\Kshat\) and \(\inc\). Note that \(\sL@top\) contains the KT4 axioms for \(\hat{\S}\) (recalling that \(\Tsoundness\) and \(\foursoundness\) are the “diamond” versions of T and 4). Since KT4 together with the bridge axiom \(\inc\) is complete for the class of relational models \(\M@sfour\), we can exploit Theorem 3 to obtain completeness of \(\sL@top\) with respect to \(\M@int \cap \M@unions\). Since this class is contained in \(\M@top\), we also get completeness with respect to \(\M@top\). [TODO: mention that S4 is also known to be sound and complete for topological semantics]
Theorem 6
\(\sL@top\) is sound and strongly complete with respect to \(\M@top\).
Proof.
Soundness of \(\Kshat\) for \(\M@top\) follows from Proposition 3. For \(\inc\), suppose \(M = (X, P, V) \in \M@top\), \(x \in X\) and \(M, x \models \A\phi\). Then \(\|\phi\|_M = X\), so \(\|\neg\phi\|_M = \emptyset\). By the convention that the empty set is the empty union \(\bigcup\emptyset\) (which is a finite union), we have \(\emptyset \in P\). Taking \(A = \emptyset\) in the definition of the semantics for \(\S\), we have \(\|\neg\phi\|_M \subseteq A\) but clearly \(x \notin A\). Hence \(M, x \not\models \S\neg\phi\), so \(M, x \models \hat\S\phi\).
For completeness, we go via relational semantics using the translation \(t: \cL \to \cLKA\) and Theorem 3. First, let \(\sL@sfoura\) be the logic of \(\cLKA\) formulas formed by all substitution instances of propositional tautologies together with the following axioms and inference rules:
\[\begin{aligned} (\Kk) \qquad & \K(\phi \limplies \psi) \limplies (\K\phi \limplies \K\psi) \\ (\Tk) \qquad & \K\phi \limplies \phi \\ (\fourk) \qquad & \K\phi \limplies \K\K\phi \\ (\Kuniv) \qquad & \A(\phi \limplies \psi) \limplies (\A\phi \limplies \A\psi) \\ (\Tuniv) \qquad & \A\phi \limplies \phi \\ (\fiveuniv) \qquad & \neg\A\phi \limplies \A\neg\A\phi \\ (\inck) \qquad & \A\phi \limplies \K\phi \\ (\necuniv) \qquad & \text{from } \phi \text{ infer } \A\phi \\ (\modpon) \qquad & \text{from } \phi \text{ and } \phi \limplies \psi \text{ infer } \psi \end{aligned}\]
It is well known that \(\sL@sfoura\) is strongly complete with respect to \(\M@sfour\). [TODO: some reference here]
Now, define a translation \(u: \cLKA \to \cL\) as follows.
\[\begin{aligned} u(p) &= p \\ u(\neg\phi) &= \neg u(\phi) \\ u(\phi \land \psi) &= u(\phi) \land u(\psi) \\ u(\K\phi) &= \neg\S\neg u(\phi) \\ u(\A\phi) &= \A u(\phi) \end{aligned}\]
Recall the translation \(t: \cL \to \cLKA\) from earlier. While \(u\) is not the inverse of \(t\) (e.g. there is no \(\psi \in \cLKA\) with \(u(\psi) = \E p\)), for any \(\phi \in \cL\) we have that \(\phi\) is \(\sL@top\)-provably equivalent to \(u(t(\phi))\).
Claim 1: Let \(\phi \in \cL\). Then \(\entails_{\sL@top} \phi \liff u(t(\phi))\).
Proof.
By induction on \(\cL\) formulas. The cases of atomic propositions and propositional connectives are straightforward. For the other cases, first note that the “replacement of equivalents” rule is derivable in \(\sL\) (and thus in \(\sL@top\)) for \(\S\), \(\E\) and \(\A\):
\[\text{From } \phi \liff \psi \text{ infer } \bigcirc\phi \liff \bigcirc\psi \quad (\bigcirc \in \{\S, \E, \A\})\]For \(\S\) this follows from \(\necuniv\) and \(\weakening@S\); for \(\E\) from \(\necuniv\) and \(\reE\), and for \(\A\) from \(\necuniv\) and \(\Kuniv\). Now for the inductive step, suppose \(\entails_{\sL@top} \phi \liff u(t(\phi))\).
\(\S\): Note that
\[u(t(\S\phi)) = u(\neg\K\neg t(\phi)) = \neg\neg\S\neg\neg u(t(\phi))\]By the inductive hypothesis, propositional logic and replacement of equivalents, \(\entails_{\sL@top} \S\phi \liff u(t(\S\phi))\).
\(\E\): We have
\[\begin{aligned} u(t(\E\phi)) &= u(\A(\neg t(\phi) \limplies \K\neg t(\phi))) \\ &= \A u(\neg t(\phi) \limplies \K\neg t(\phi)) \\ &= \A (u(\neg t(\phi)) \limplies u(\K\neg t(\phi))) \\ &= \A (\neg u(t(\phi)) \limplies \neg\S\neg u(\neg t(\phi))) \\ &= \A (\neg u(t(\phi)) \limplies \neg\S\neg \neg u(t(\phi))) \end{aligned}\]Taking the contrapositive of the implication, and using replacement of equivalents together with the inductive hypothesis, we get
\[\entails_{\sL@top} u(t(\E\phi)) \liff \A( \S\phi \limplies \phi )\]But we have already seen that \(\entails_{\sL@int} \E\phi \liff \A(\S\phi \limplies \phi)\); since \(\sL@top\) extends \(\sL@int\), we get \(\entails_{\sL@top} \E\phi \liff u(t(\E\phi))\).
\(\A\): This case is straightforward by the inductive hypothesis and replacement of equivalents, since \(u(t(\A\phi)) = \A u(t(\phi))\).
Next we show that if \(\phi \in \cLKA\) is a theorem of \(\sL@sfoura\), then \(u(\phi)\) is a theorem of \(\sL@top\).
Claim 2: Let \(\phi \in \cLKA\). Then \(\entails_{\sL@sfoura} \phi\) implies \(\entails_{\sL@top} u(\phi)\).
Proof.
By induction on the length of \(\sL@sfoura\) proofs. The base case consists of showing that if \(\phi\) is an instance of an \(\sL@sfoura\) axiom or a substitution instance of a propositional tautology, then \(\entails_{\sL@top} u(\phi)\). The case for instances of tautologies is straightforward, since \(u\) does not affect the structure of a propositional formula. We take the axioms of \(\sL@sfoura\) in turn.
\(\Kk\): We have
\[\begin{aligned} &u(\K(\phi \limplies \psi) \limplies (\K\phi \limplies \K\psi)) \\ &\qquad = \neg\S\neg (u(\phi) \limplies u(\psi)) \limplies (\neg\S\neg u(\phi) \limplies \neg\S\neg u(\psi)) \\ &\qquad = \hat\S (u(\phi) \limplies u(\psi)) \limplies (\hat\S u(\phi) \limplies \hat\S u(\psi)) \end{aligned}\]which is an instance of \(\Kshat\).
\(\Tk\): We have
\[u(\K\phi \limplies \phi) = \neg\S\neg u(\phi) \limplies u(\phi)\]Taking the contrapositive, this is \(\sL@top\)-provably equivalent to \(\neg u(\phi) \limplies \S\neg u(\phi)\), which is an instance of \(\Tsoundness\).
\(\fourk\): We have
\[u(\K\phi \limplies \K\K\phi) = \neg\S\neg u(\phi) \limplies \neg\S\neg\neg\S\neg u(\phi)\]This is provably equivalent to \(\S\S\neg u(\phi) \limplies \S\neg u(\phi)\), which is an instance of \(\foursoundness\).
\(\Kuniv\): We have
\[u(\A(\phi \limplies \psi) \limplies (\A\phi \limplies \A\psi)) = \A (u(\phi) \limplies u(\psi)) \limplies (\A u(\phi) \limplies \A u(\psi))\]which is an instance of \(\Kuniv\) in \(\sL@top\).
\(\Tuniv\): We have
\[u(\A\phi \limplies \phi) = \A u(\phi) \limplies u(\phi)\]which is an instance of \(\Tuniv\) in \(\sL@top\).
\(\fiveuniv\): We have
\[u(\neg\A\phi \limplies \A\neg\A\phi) = \neg\A u(\phi) \limplies \A \neg\A u(\phi) \]which is an instance of \(\fiveuniv\) in \(\sL@top\).
\(\inck\): We have
\[u(\A\phi \limplies \K\phi) = \A u(\phi) \limplies \neg\S\neg u(\phi) = \A u(\phi) \limplies \hat\S u(\phi)\]which is an instance of \(\inc\).
For the inductive step, we show that for each inference rule \(\frac{\psi_1,\ldots,\psi_n}{\phi}\), if \(\entails_{\sL@top}u(\psi_i)\) for each \(i\) then \(\entails_{\sL@top}u(\phi)\).
\(\necuniv\): If \(\entails_{\sL@top} u(\phi)\), then from \(\necuniv\) in \(\sL@top\) we get \(\entails_{\sL@top} \A u(\phi)\). But \(\A u(\phi) = u(\A\phi)\), so we are done.
\(\modpon\): Similarly, this is clear from \(\modpon\) for \(\sL@top\) and the fact that \(u(\phi \limplies \psi) = u(\phi) \limplies u(\psi)\).
Claims 1 and 2 easily imply the following.
Claim 3: Let \(\phi \in \cL\). Then \(\entails_{\sL@sfoura} t(\phi)\) implies \(\entails_{\sL@top} \phi\).
Proof.
Suppose \(\entails_{\sL@sfoura} t(\phi)\). By Claim 2, \(\entails_{\sL@top} u(t(\phi))\). By Claim 1, \(\entails_{\sL@top} \phi \liff u(t(\phi))\). By \(\modpon\), \(\entails_{\sL@top} \phi\).
We can now show strong completeness. Suppose \(\Gamma \subseteq \cL\), \(\phi \in \cL\) and \(\Gamma \models_{\M@top} \phi\). We claim \(t(\Gamma) \models_{\M@sfour} t(\phi)\). Indeed, if \(M^* \in \M@sfour\) and \(x\) is a state in \(M^*\) with \(M^*, x \models t(\psi)\) for all \(\psi \in \Gamma\), then with \(f\) as in Theorem 3 we have \(f^{-1}(M^*), x \models \psi\) for all \(\psi \in \Gamma\). Since \(f^{-1}(M^*) \in \M@int \cap \M@unions \subseteq \M@top\), \(\Gamma \models_{\M@top} \phi\) gives \(f^{-1}(M^*), x \models \phi\), and thus \(M^*, x \models t(\phi)\).
By (strong) completeness of \(\sL@sfoura\) for \(\M@sfour\), we get \(t(\Gamma) \entails_{\sL@sfoura} t(\phi)\). That is, there are \(\psi_0, \ldots, \psi_n \in \Gamma\) such that \(\entails_{\sL@sfoura} t(\psi_0) \land \cdots \land t(\psi_n) \limplies t(\phi)\). Since \(t\) commutes with conjunctions and implications, this means \(\entails_{\sL@sfoura} t(\psi_0 \land \cdots \land \psi_n \limplies \phi)\). By Claim 3, \(\entails_{\sL@top} \psi_0 \land \cdots \land \psi_n \limplies \phi\). Hence \(\Gamma \entails_{\sL@top} \phi\), and we are done.
Just as the connection between S4 and \(\M@int \cap \M@unions\) allowed us to obtain a complete axiomatisation, we can axiomatise \(\M@int \cap \M@compl\) by considering S5. For brevity, write \(\M@intcompl = \M@int \cap \M@compl\). Let \(\sL@intcompl\) be the extension of \(\sL@top\) with the 5 axiom for \(\hat\S\), which we present in the “diamond” form:
Theorem 7
\(\sL@intcompl\) is sound and strongly complete with respect to \(\M@intcompl\).
Proof.
For soundness, we need to check that \(\fiveshat\) is valid on \(\M@intcompl\). Let \(M = (X, P, V)\) be a model closed under intersections and complements, and suppose \(M, x \models \S\neg\S\phi\). Note that \(\|\S\phi\|_M = \bigcap\{A \in P \mid \|\phi\|_M \subseteq A\}\) is an intersection of sets in \(P\), so \(\|\S\phi\|_M \in P\). By closure under complements, \(\|\neg\S\phi\|_M \in P\) too. Hence \(M, x \models \S\neg\S\phi \land \E\neg\S\phi\). By Proposition 1 (4), we get \(M, x \models \neg\S\phi\).
The completeness proof goes in exactly the same way as Theorem 6. Letting \(\sL@sfivea\) be the extension of \(\sL@sfoura\) with the \(\fivek\) axiom \(\neg\K\phi \limplies \K\neg\K\phi\), it can be shown that \(\sL@sfivea\) is strongly complete with respect to \(\M@sfive\). With \(u\) as in the proof of Theorem 6, we have that \(\entails_{\sL@sfivea} \phi\) implies \(\entails_{\sL@intcompl} u(\phi)\), for \(\phi \in \cLKA\) (the only new part to check there is that \(u(\neg\K\phi \limplies \K\neg\K\phi)\) is a theorem of \(\sL@intcompl\), but this follows from \(\fiveshat\)). The remainder of the proof goes through as before, this time appealing to the bijection \(g: \M@intcompl \to \M@sfive\).
The Multi-source Case¶
So far we have focussed on the expertise of a single source. In this section we generalise the setting to handle multiple sources. This allows us to consider not only the expertise of different sources individually, but also notions of collective expertise. For example, how may sources combine their expertise so that the group becomes more expert than its individual members? Is there a suitable notion of common expertise? To answer these questions we take inspiration from the well-studied notions of distributed knowledge and common knowledge from epistemic logic (see e.g. [FMHV03]), and establish connections between collective expertise and collective knowledge.
Collective Knowledge¶
Let \(\J\) be a finite, non-empty set of sources. Turning briefly to epistemic logic interpreted under relational semantics, we define several notions of collective knowledge. First, a multi-source relational model is a triple \(M^* = (X, \{R_j\}_{j \in \J}, V)\), where \(R_j\) is a binary relation on \(X\) for each \(j\). Consider the following knowledge operators [FMHV03]:
\(\K_j\phi\) (individual knowledge): for \(j \in \J\) and a formula \(\phi\), set
\[M^*, x \models \K_j\phi \iff \forall y \in X: x{R_j}y \implies M^*, y \models \phi\]This is the straightforward adaptation of knowledge in the single-source case to the multi-source setting.
\(\Kdist_J\phi\) (distributed knowledge): for \(J \subseteq \J\) non-empty, set
\[M^*, x \models \Kdist_J\phi \iff \forall y \in X: (x, y) \in \bigcap_{j \in J}{R_j} \implies M^*, y \models \phi\]That is, knowledge of \(\phi\) is distributed among the sources in \(J\) if, by combining their accessibility relations \(R_j\), all states possible at \(x\) satisfy \(\phi\). Here the \(R_j\) are combined by taking their intersection: a state \(y\) is possible according to the group at \(x\) iff every source in \(J\) considers \(y\) possible at \(x\).
\(\Kshared_J\phi\) (shared knowledge): 8 for \(J \subseteq \J\) non-empty, set
\[M^*, x \models \Kshared_J\phi \iff \forall j \in J: M^*, x \models \K_j\phi\]That is, a group \(J\) has shared knowledge of \(\phi\) exactly when each source in \(J\) knows \(\phi\). Thus we have \(\Kshared_J\phi \equiv \bigland_{j \in J}\K_j\phi\).
\(\Kcommon_J\phi\) (common knowledge): write \(\K_J^1\phi\) for \(\Kshared_J\phi\), and for \(n \in \N\) write \(\K_J^{n + 1}\phi\) for \(\Kshared_J\K_J^n\phi\). Then
\[M^*, x \models \Kcommon_J\phi \iff \forall n \in \N: M^*, x \models \K_J^n\phi\]Here \(\K_J^1\phi\) says that everybody in \(J\) knows \(\phi\), \(\K_J^2\phi\) says that everybody in \(J\) knows that everybody in \(J\) knows \(\phi\), and so on. There is common knowledge of \(\phi\) among \(J\) if this nesting of “everybody knows” holds to every order \(n\).
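These clauses are directly computable on finite models. The following sketch is illustrative only: the model, the source names and the fixed-point computation of common knowledge are our own choices, not part of the formal development.

```python
# Invented finite relational model with two sources "a" and "b".
X = {0, 1, 2}
R = {
    "a": {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)},
    "b": {(0, 0), (1, 1), (2, 2), (1, 2), (2, 1)},
}
phi = {0, 1}  # truth set of some formula phi

def k(j, A):
    """Individual knowledge K_j: every R_j-successor lies in A."""
    return {x for x in X if all(y in A for (z, y) in R[j] if z == x)}

def d(J, A):
    """Distributed knowledge D_J: evaluate over the intersection of the R_j."""
    RJ = set.intersection(*(R[j] for j in J))
    return {x for x in X if all(y in A for (z, y) in RJ if z == x)}

def e(J, A):
    """Shared knowledge E_J: everybody in J knows A."""
    return set.intersection(*(k(j, A) for j in J))

def c(J, A):
    """Common knowledge C_J: intersect the layers K_J^n(A).  On a finite model
    the layer sequence K^{n+1} = E_J(K^n) is eventually periodic, so we stop
    once a layer repeats."""
    seen, result = [], set(X)
    layer = e(J, A)
    while layer not in seen:
        seen.append(layer)
        result &= layer
        layer = e(J, layer)
    return result

print(k("a", phi), k("b", phi))   # {0, 1} {0}
print(d({"a", "b"}, phi))         # {0, 1}
print(e({"a", "b"}, phi))         # {0}
print(c({"a", "b"}, phi))         # set()
```

On this model \(\phi\) is shared knowledge among the two sources exactly at state \(0\), but common knowledge nowhere, illustrating that \(\Kcommon_J\) is in general strictly stronger than \(\Kshared_J\).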
In what follows we write \(\cLKAJ\) for the language formed from \(\Prop\) with knowledge operators \(\K_j\), \(\Kdist_J\), \(\Kshared_J\) and \(\Kcommon_J\), for \(j \in \J\) and \(J \subseteq \J\) non-empty, and the universal modality \(\A\).
Collective Expertise¶
Returning to expertise semantics, define a multi-source expertise model as a triple \(M = (X, \{P_j\}_{j \in \J}, V)\), where \(P_j \subseteq 2^X\) is a collection of expertise sets for source \(j\). We will say \(M\) is closed under intersections, unions, complements, etc. if each \(P_j\) is. Since the connection between expertise and S4 knowledge (Theorem 3) holds for expertise models closed under unions and intersections, we will restrict attention to this class of (multi-source) models throughout this section.
The counterpart of individual knowledge – individual expertise – is straightforward: we may simply introduce expertise and soundness operators \(\E_j\) and \(\S_j\) for each source \(j \in \J\), and interpret \(\E_j\phi\) and \(\S_j\phi\) as in the single-source case using \(P_j\). For notions of collective expertise and soundness, we define new collections \(P_J\) by combining the \(P_j\) in an appropriate way.
Distributed Expertise¶
For distributed expertise, the intuition is clear: the sources in a group \(J\) should combine their expertise collections \(P_j\) to form a larger collection \(\Pdist_J\). A first candidate for \(\Pdist_J\) would therefore be \(\bigcup_{j \in J}{P_j}\). However, recall that we assume each \(P_j\) is closed under unions and intersections, reflecting the supposition that each source \(j\) has the cognitive or computational capacity to combine its expertise sets by taking unions and intersections. We argue that the same should be possible for the group \(J\) as a whole, and therefore let \(\Pdist_J\) be the closure of \(\bigcup_{j \in J}{P_j}\) under unions and intersections:
Note that \(\Pdist_J\) is closed under unions and intersections, and \(P_j \subseteq \Pdist_J\) for all \(j \in J\) (in fact, \(\Pdist_J\) is the smallest set with these properties). While \(\Pdist_J\) depends on the model \(M\), we suppress this from the notation.
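On a finite state space \(\Pdist_J\) can be computed by a simple fixed-point iteration, as in the following illustrative sketch (the two collections are invented; closing under binary unions and intersections suffices because the carrier is finite).

```python
def close_under_unions_and_intersections(family):
    """Smallest superset of `family` closed under binary unions and
    intersections (enough on a finite carrier)."""
    closed = set(family)
    changed = True
    while changed:
        changed = False
        for a in list(closed):
            for b in list(closed):
                for c in (a | b, a & b):
                    if c not in closed:
                        closed.add(c)
                        changed = True
    return closed

X = frozenset({0, 1, 2, 3})
P = {  # hypothetical expertise collections; each is already closed, as assumed above
    "a": {frozenset(), frozenset({0, 1}), X},
    "b": {frozenset(), frozenset({1, 2}), X},
}
P_dist = close_under_unions_and_intersections(P["a"] | P["b"])
# P_dist additionally contains {1} = {0,1} & {1,2} and {0,1,2} = {0,1} | {1,2}:
# the group has expertise on sets that neither source has individually.
print(sorted(tuple(sorted(A)) for A in P_dist))
```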
Now, recall from Theorem 3 that our semantics for expertise and soundness is connected to relational semantics via the mapping \(P \mapsto R_P\) (Definition 3). The following result shows that \(\Pdist_J\) corresponds to distributed knowledge under this mapping. For ease of notation, write \(\Rdist_J\) for \(R_{\Pdist_J}\) and \(R_j\) for \(R_{P_j}\).
Proposition 4
Let \(M\) be a multi-source expertise model. Then
Proof.
“\(\subseteq\)”: Suppose \(x{\Rdist_J}y\). Let \(j \in J\). We need to show \(x{R_j}y\). Take any \(A \in P_j\) such that \(y \in A\). Then \(A \in \Pdist_J\), so \(x{\Rdist_J}y\) gives \(x \in A\). Hence \(x{R_j}y\).
“\(\supseteq\)”: Suppose \((x, y) \in \bigcap_{j \in J}{R_j}\). Then \(x{R_j}y\) for all \(j \in J\). Set
and
We claim \(P'\) is closed under unions and intersections. Indeed, suppose \(\{A_i\}_{i \in I} \subseteq P'\) for some index set \(I\). Write \(A = \bigcup_{i \in I}{A_i}\). Since \(A_i \in P' \subseteq \Pdist_J\) and \(\Pdist_J\) is closed under unions, we have \(A \in \Pdist_J\). We need to show \(A \notin F\). If \(y \notin A\) this is clear. Suppose \(y \in A\). Then there is \(i \in I\) such that \(y \in A_i\). Since \(A_i \in P'\), we have \(A_i \notin F\) and thus \(x \in A_i \subseteq A\). Hence \(A \notin F\), so \(A \in P'\). This shows that \(P'\) is closed under unions. The case for intersections is similar.
Next we claim \(P' \supseteq \bigcup_{j \in J}{P_j}\). Take \(j \in J\) and \(A \in P_j\). Clearly \(A \in \Pdist_J\). We claim \(A \notin F\). Indeed, suppose \(y \in A\) (the case \(y \notin A\) is immediate). Then since \(A \in P_j\) and \(x{R_j}y\), we have \(x \in A\) by the definition of \(R_j\). Thus \(A \in \Pdist_J \setminus F = P'\).
We see from the definition of \(\Pdist_J\) that \(\Pdist_J \subseteq P'\). This implies \(F = \emptyset\). In other words, for all \(A \in \Pdist_J\), if \(y \in A\) then we must also have \(x \in A\). But this is exactly what it means for \(x{\Rdist_J}y\) to hold, and the proof is complete.
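Proposition 4 can also be checked mechanically on small models. The sketch below (with invented collections) builds \(R_P\) as in the proof above, namely \(x R_P y\) iff every \(A \in P\) containing \(y\) also contains \(x\), and compares \(\Rdist_J\) with \(\bigcap_{j \in J}{R_j}\).

```python
from itertools import product

X = frozenset({0, 1, 2, 3})

def r(P):
    """R_P as used above: x R_P y iff every A in P containing y also contains x."""
    return {(x, y) for x, y in product(X, repeat=2)
            if all(x in A for A in P if y in A)}

def close(family):
    """Closure under binary unions and intersections (finite carrier)."""
    out, changed = set(family), True
    while changed:
        changed = False
        for a in list(out):
            for b in list(out):
                for c in (a | b, a & b):
                    if c not in out:
                        out.add(c)
                        changed = True
    return out

P = {  # invented collections, each closed under unions and intersections
    "a": close({frozenset(), frozenset({0, 1}), frozenset({2}), X}),
    "b": close({frozenset(), frozenset({1, 2}), X}),
}
P_dist = close(P["a"] | P["b"])
assert r(P_dist) == r(P["a"]) & r(P["b"])   # Proposition 4 on this model
```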
Common Expertise¶
Common expertise admits a straightforward definition: simply take the expertise sets common to all the \(P_j\):
If each \(P_j\) is closed under unions and intersections, then so too is \(\Pcommon_J\).
At first this may appear too straightforward. The form of the definition is closer to shared knowledge than to common knowledge. But in fact, shared knowledge has no expertise counterpart which admits the type of connection established in Theorem 3. Indeed, shared knowledge may fail positive introspection (axiom 4: \(\K\phi \limplies \K\K\phi\)), but we have seen that the knowledge derived from expertise and soundness satisfies S4 (when the collection of expertise sets is closed under unions and intersections).
However, this obstacle only concerns the translation of \(\S\phi\) as \(\neg\K\neg\phi\). For our translation of \(\E\phi\) as \(\A(\neg\phi \limplies \K\neg\phi)\), the universal modality washes out the difference between shared and common knowledge.
Proposition 5
Let \(\phi \in \cLKAJ\) and let \(J \subseteq \J\) be non-empty. Then
Proof.
Let \(M^* = (X, \{R_j\}_{j \in \J}, V)\) be a multi-source relational model. Since \(\Kcommon_J\psi \limplies \Kshared_J\psi\) is valid for any \(\psi\), the left-to-right implication of the above equivalence is straightforward.
For the right-to-left implication, suppose \(M^*, x \models \A(\neg\phi \limplies \Kshared_J\neg\phi)\). We show by induction that \(M^*, x \models \A(\neg\phi \limplies \K_J^n\neg\phi)\) for all \(n \in \N\), from which the result follows.
The base case \(n = 1\) is given, since \(\K_J^1\neg\phi = \Kshared_J\neg\phi\). For the inductive step, suppose \(M^*, x \models \A(\neg\phi \limplies \K_J^n\neg\phi)\). Take \(y \in X\) such that \(M^*, y \models \neg\phi\). Let \(j \in J\). Take \(z \in X\) such that \(y{R_j}z\). From the initial assumption we have \(M^*, y \models \Kshared_J\neg\phi\), so \(M^*, y \models \K_j\neg\phi\) and thus \(M^*, z \models \neg\phi\). By the inductive hypothesis, \(M^*, z \models \K_J^n\neg\phi\). This shows that \(M^*, y \models \K_j\K_J^n\neg\phi\) for all \(j \in J\), and thus \(M^*, y \models \K_J^{n+1}\neg\phi\). Hence \(M^*, x \models \A(\neg\phi \limplies \K_J^{n+1}\neg\phi)\) as required.
Proposition 5 shows that when interpreting collective expertise on \(\phi\) as collective refutation of \(\phi\) whenever \(\phi\) is false, there is no difference between using common knowledge and just shared knowledge.
We now confirm that \(\Pcommon_J\) does indeed correspond to common knowledge. First we recall the following well-known result. In what follows, for a binary relation \(R\) we write \(R^\trcls = \bigcup_{n \in \N}{R^n}\) for the transitive closure of \(R\).
Lemma 10 ([FMHV03], Lemma 2.2.1)
Let \(M^* = (X, \{R_j\}_{j \in \J}, V)\) be a multi-source relational model and \(J \subseteq \J\) non-empty. Write \(R' = \left(\bigcup_{j \in J}{R_j}\right)^\trcls\). Then for all \(x \in X\) and \(\phi \in \cLKAJ\):
By Lemma 10, common knowledge has an interpretation in terms of the usual relational semantics for knowledge, where we use the transitive closure of the union of the accessibility relations of the sources in \(J\). We also require an auxiliary lemma.
Lemma 11
Let \(X\) be a set and \(R\) a binary relation on \(X\). Then \(A \subseteq X\) is downwards closed with respect to \(R\) if and only if \(A\) is downwards closed with respect to \(R^\trcls\).
Proof.
First suppose \(A\) is downwards closed with respect to \(R^\trcls\). Take \(y \in A\) and suppose \(xRy\). Then \(x{R^\trcls}y\) also, so \(x \in A\). Hence \(A\) is downwards closed with respect to \(R\).
Now suppose \(A\) is downwards closed with respect to \(R\). A straightforward induction shows that \(A\) is downwards closed with respect to \(R^n\) for all \(n \in \N\), and closure with respect to \(R^\trcls\) follows easily.
Writing \(\Rcommon_J\) for \(R_{\Pcommon_J}\) and \(R_j\) for \(R_{P_j}\), we have the following.
Proposition 6
Let \(M\) be a multi-source model closed under unions and intersections. Then
Proof.
Write \(R' = (\bigcup_{j \in J}{R_j})^{\trcls}\). Note that \(\Rcommon_J\) is reflexive and transitive by Lemma 3 (1). \(R'\) is transitive by its definition as a transitive closure, and reflexive since each \(R_j\) is (and \(J \ne \emptyset\)).
It is therefore sufficient by Lemma 2 to show that any \(A \subseteq X\) is downwards closed with respect to \(\Rcommon_J\) iff it is downwards closed with respect to \(R'\). Since each \(P_j\) is closed under unions and intersections, so too is \(\Pcommon_J\). Making use of Lemma 3 (2), we have
where the last step uses Lemma 11. This completes the proof.
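As a sanity check, the equality of Proposition 6 can be verified on small finite models. The sketch below is purely illustrative: the two collections are invented chains (hence closed under unions and intersections), and both sides of the equation are computed by brute force.

```python
from itertools import product

X = frozenset({0, 1, 2})

def r(P):
    """x R_P y iff every A in P containing y also contains x."""
    return {(x, y) for x, y in product(X, repeat=2)
            if all(x in A for A in P if y in A)}

def transitive_closure(R):
    out, changed = set(R), True
    while changed:
        changed = False
        for (x, y) in list(out):
            for (y2, z) in list(out):
                if y == y2 and (x, z) not in out:
                    out.add((x, z))
                    changed = True
    return out

P = {  # invented chains over X, each closed under unions and intersections
    "a": {frozenset(), frozenset({0}), frozenset({0, 1}), X},
    "b": {frozenset(), frozenset({2}), frozenset({1, 2}), X},
}
P_common = P["a"] & P["b"]                           # here just {frozenset(), X}
R_union = r(P["a"]) | r(P["b"])
assert r(P_common) == transitive_closure(R_union)    # Proposition 6 on this model
```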
Collective semantics¶
We now formally define the semantics of collective expertise. Let \(\cLJ\) be the language defined by the following grammar:
for \(p \in \Prop\), \(j \in \J\), \(g \in \{\dist, \common\}\) and \(J \subseteq \J\) non-empty. For a multi-source expertise model \(M = (X, \{P_j\}_{j \in \J}, V)\), we define the satisfaction relation as before for atomic propositions, propositional connectives and the universal modality \(\A\), and set
Note that expertise and soundness are interpreted as before, but with respect to different collections \(P\). Consequently, the interactions between the two shown in Proposition 1 still hold for individual and collective notions of expertise and soundness. The following validities express properties specific to collective expertise. In what follows, we write \(\M@J\) for the class of multi-source models, and adopt similar notation for sub-classes.
Proposition 7
The following formulas are valid (with respect to \(\M@J\), unless otherwise stated):
For \(j \in J\), \(\E_j\phi \limplies \E_J^\dist\phi\)
\(\E_J^\common\phi \liff \bigland_{j \in J}{\E_j\phi}\)
\(\S_J^\common\phi \liff \biglor_{j \in J}\S_j\S_J^\common\phi\)
\(\E_{\{j\}}^\dist\phi \liff \E_j\phi\) is valid on \(\M@intJ \cap \M@unionsJ\)
Proof.
Clear from the fact that \(P_j \subseteq P_J^\dist\) for \(j \in J\).
Clear from the definition of \(P_J^\common\).
The left-to-right implication is valid since \(\psi \limplies \S_j\psi\) is, with \(\psi\) set to \(\S_J^\common\phi\) and \(j \in J\) arbitrary (recall \(J\) is non-empty).
For the right-to-left implication, suppose there is \(j \in J\) with \(M, x \models \S_j\S_J^\common\phi\). Then \(x \in \bigcap\{A \in P_j \mid \|\S_J^\common\phi\|_M \subseteq A\}\). Now take \(B \in P_J^\common\) such that \(\|\phi\|_M \subseteq B\). Note that if \(y \in \|\S_J^\common\phi\|_M\) then \(y \in B\) by the definition of the semantics for \(\S_J^\common\), so \(\|\S_J^\common\phi\|_M \subseteq B\). Since \(B \in P_J^\common \subseteq P_j\), we get \(x \in B\). This shows \(M, x \models \S_J^\common\phi\).
Clear since if \(P_j\) is closed under unions and intersections we have \(P_{\{j\}}^\dist = P_j\).
Proposition 7 (3) comes from the fixed-point axiom for common knowledge: \(\Kcommon_J\phi \liff \Kshared_J(\phi \land \Kcommon_J\phi)\). In our case it says that \(\S_J^\common\phi\) is a fixed point of the map \(\theta \mapsto \biglor_{j \in J}{\S_j\theta}\). In words, \(\phi\) is true up to lack of common expertise exactly when there is some source \(j\) for whom \(\S_J^\common\phi\) itself is true up to lack of their individual expertise.
As promised, there is a tight link between our notions of collective expertise/soundness and knowledge. Define a translation \(t: \cLJ \to \cLKAJ\) inductively by
This is essentially the same translation as before, but now with the various types of expertise and soundness matched with their knowledge counterparts. We have an analogue of Theorem 3.
Theorem 8
The mapping \(f: \M@intJ \cap \M@unionsJ \to \M@sfourJ\) given by \((X, \{P_j\}_{j \in \J}, V) \mapsto (X, \{R_{P_j}\}_{j \in \J}, V)\) is bijective, and for \(x \in X\) and \(\phi \in \cLJ\):
Moreover, the restriction of this map to \(\M@intJ \cap \M@complJ\) is a bijection onto \(\M@sfiveJ\).
Proof (sketch).
That the map is bijective follows easily from Theorem 1 and Theorem 2. For (3) we proceed by induction on \(\cLJ\) formulas. As in Theorem 3, the cases for atomic propositions, propositional connectives and the universal modality are straightforward.
For expertise and soundness, the argument in the proof of Theorem 3 showed that \(\E\phi\) and \(\S\phi\), interpreted with respect to some collection \(P\), are equivalent to \(t(\E\phi)\) and \(t(\S\phi)\) respectively, interpreted with respect to the relational semantics via \(R_P\).
It is therefore sufficient to show that for each notion of individual and collective expertise interpreted in \(M\) via \(P\), its corresponding notion of individual or collective knowledge (used in the translation \(t\)) is interpreted in \(f(M)\) via \(R_P\). This is self-evident for individual expertise. For distributed expertise this was shown in Proposition 4. For common expertise this was shown in Lemma 10 and Proposition 6. This completes the proof.
One can use Theorem 8 to adapt any sound and complete axiomatisation for \(\M@sfourJ\) (resp., \(\M@sfiveJ\)) over the language \(\cLKAJ\) to obtain an axiomatisation for \(\M@intJ \cap \M@unionsJ\) (resp., \(\M@intJ \cap \M@complJ\)) over \(\cLJ\), in the same way as we did earlier when adapting S4 and S5 in Theorem 6 and Theorem 7.
Dynamic Extension¶
So far our picture has been entirely static. We cannot speak of expertise changing over time, nor of the information in a model changing via announcements from sources. In this section we extend the framework with two dynamic operators: one to account for increases in expertise – e.g. after a process of learning or acquisition of new evidence – and one to model sound announcements. For simplicity, we return to the single-source case.
Expertise Increase¶
As a source interacts with the world over time, they may learn to make more distinctions between possible states of the world, and thereby increase their expertise. Leaving the particulars of the learning mechanism unspecified, we study only the end result: the source’s expertise collection \(P\) is expanded to include a new set \(A\).
However, this may not be as simple as setting \(P' = P \cup \{A\}\), in light of the closure properties that may be imposed on \(P\). As remarked in section [TODO], closure conditions correspond to assumptions about the source’s cognitive or computational capabilities. It seems natural that if the source has the ability to combine sets in \(P\) by taking intersections, for example, then they should also be able to do so after the learning, i.e. \(P'\) should also be closed under intersections. Thus, the new collection \(P'\) should inherit any closure properties from \(P\), while extending \(P \cup \{A\}\). In principle, we could therefore consider an expertise increase operation for each combination of closure properties.
For concreteness we will not do this, and will instead focus on the class \(\M@int\) of models closed under intersections. Conceptually, this is a minimal requirement, since we argued in section [TODO] that closure under intersections is a natural property. There are also technical advantages: we will later see that closure under intersections yields reduction axioms by which formulas involving expertise increase can be equivalently expressed in the static language.
Definition 4
Given an expertise model \(M = (X, P, V)\) and a formula \(\phi\), define the model \(M^{\expinc\phi} = (X, P^{\expinc\phi}, V)\) by setting
That is, \(P^{\expinc\phi}\) is obtained from \(P\) by adding the set \(\|\phi\|_M\) and closing under intersections.
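On a finite model the update of Definition 4 can be computed directly. The following sketch is illustrative (the model and helper names are our own); closing under binary intersections suffices here because the carrier is finite, and the nullary intersection \(X\) is added to match the convention used in the text.

```python
def close_under_intersections(family, X):
    """Closure of `family` under binary intersections, including the nullary
    intersection X."""
    out = set(family) | {frozenset(X)}
    changed = True
    while changed:
        changed = False
        for a in list(out):
            for b in list(out):
                if a & b not in out:
                    out.add(a & b)
                    changed = True
    return out

def expertise_increase(X, P, truth_set_phi):
    """P after the increase: add ||phi||_M and close under intersections."""
    return close_under_intersections(set(P) | {frozenset(truth_set_phi)}, X)

X = frozenset({1, 2, 3, 4})
P = {frozenset(), frozenset({1, 2, 3}), X}
P_up = expertise_increase(X, P, {2, 4})
# P_up is P plus {2, 4} and the new intersection {2} = {2,4} & {1,2,3};
# in particular ||phi|| itself is now an expertise set, so expertise increase
# is successful when phi is propositional.
print(sorted(tuple(sorted(A)) for A in P_up))
```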
Syntactically, we introduce formulas of the form \([\expinc\phi]\psi\), which are to be read as “\(\psi\) holds after the source gains expertise on \(\phi\)”. The truth condition for \([\expinc\phi]\psi\) in a model \(M\) is defined in terms of \(M^{\expinc\phi}\):
If \(\cLzero\) denotes the propositional language built from \(\Prop\), then \([\expinc\alpha]\E\alpha\) is valid for all \(\alpha \in \cLzero\). That is, expertise increase is successful for any propositional formula. However, this is not the case for arbitrary formulas \(\phi \in \cL\). This comes from the fact that expertise is represented semantically via sets of states: the operator \([\expinc\phi]\) represents the source obtaining expertise on the set of \(\phi\) states, where \(\phi\) is interpreted before the increase takes place. If \(\phi\) refers to expertise (via \(\E\) or \(\S\)) then the meaning of \(\phi\) may change after the increase. For example, consider the model \(M = (X, P, V)\) with
Then, with \(\phi = p \lor (q \land \neg\S p)\) we have \(M, 1 \not\models [\expinc\phi]\E\phi\). 9 This counterexample is reminiscent of Moore sentences as formalised in Dynamic Epistemic Logic; e.g. an agent cannot know \(p \land \neg\K p\) (“\(p\) is true but I do not know it”) after this is truthfully announced [BS08].
Next we give reduction axioms to express any formula involving \([\expinc\phi]\) by an equivalent formula in the static language \(\cL\).
Proposition 8
The following formulas are valid on \(\M\):
Proof.
The cases for atomic propositions, propositional connectives and \(\A\) are straightforward. We show the reduction axiom for \(\S\). Let \(M = (X, P, V)\) be a model and \(x \in X\).
“\(\limplies\)”: Suppose \(M, x \models [\expinc\phi]\S\psi\). Then \(M^{\expinc\phi}, x \models \S\psi\). Hence
Note that \(\|\psi\|_{M^{\expinc\phi}} = \|[\expinc\phi]\psi\|_M\). Now take \(A \in P\) such that \(\|[\expinc\phi]\psi\|_M \subseteq A\). Since \(P \subseteq P^{\expinc\phi}\), we get \(x \in A\) from \((*)\). Hence \(M, x \models \S[\expinc\phi]\psi\).
Now suppose \(M, x \models \A([\expinc\phi]\psi \limplies \phi)\). Then \(\|[\expinc\phi]\psi\|_M \subseteq \|\phi\|_M\), so \(\|\psi\|_{M^{\expinc\phi}} \subseteq \|\phi\|_M\). Since \(\|\phi\|_M \in P^{\expinc\phi}\), we get \(x \in \|\phi\|_M\) from \((*)\), i.e. \(M, x \models \phi\) as required.
“\(\limpliedby\)”: Suppose \(M, x \models \S[\expinc\phi]\psi\) and \(M, x \models \A([\expinc\phi]\psi \limplies \phi) \limplies \phi\). Take \(A \in P^{\expinc\phi}\) such that \(\|\psi\|_{M^{\expinc\phi}} \subseteq A\). Then \(\|[\expinc\phi]\psi\|_M \subseteq A\). By definition of \(P^{\expinc\phi}\), there is a collection \(\mathcal{A} \subseteq P \cup \{\|\phi\|_M\}\) such that \(A = \bigcap\mathcal{A}\). Let \(B \in \mathcal{A}\). If \(B \in P\), then \(\|[\expinc\phi]\psi\|_M \subseteq A \subseteq B\) and \(M, x \models \S[\expinc\phi]\psi\) give \(x \in B\). Otherwise, \(B = \|\phi\|_M\); since \(\|[\expinc\phi]\psi\|_M \subseteq A \subseteq B\), we have \(\|[\expinc\phi]\psi\|_M \subseteq \|\phi\|_M\), so \(M, x \models \A([\expinc\phi]\psi \limplies \phi)\). By the second assumption, we get \(M, x \models \phi\), i.e. \(x \in \|\phi\|_M = B\). We have now shown that \(x \in \bigcap\mathcal{A} = A\), and thus \(M^{\expinc\phi}, x \models \S\psi\) and \(M, x \models [\expinc\phi]\S\psi\).
For the reduction axiom for \(\E\), note that since \(M^{\expinc\phi} \in \M@int\) we have \(M^{\expinc\phi}, x \models \E\psi\) iff \(M^{\expinc\phi}, x \models \A(\S\psi \limplies \psi)\). Using the reduction axioms for \(\A\) and \(\S\) (and the reduction axiom for the implication, derived from those for \(\neg\) and \(\land\)), we obtain the desired equivalence.
Note that only the reduction axiom for \([\expinc\phi]\E\psi\) requires \(M^{\expinc\phi}\) to be closed under intersections.
Sound Announcements¶
In logics of public announcement [Pla07][vDvdHK08], the dynamic operator \([!\phi]\) represents a public and truthful announcement of \(\phi\); the formula \([!\phi]\psi\) is read as “after \(\phi\) is announced, \(\psi\) holds”. Such an announcement changes the information available in a model: after the announcement, all \(\neg\phi\) states are eliminated.
Since the premise of our work is to deal with non-expert sources, the truthfulness requirement is too strong for an announcement operator in our setting. Instead, we consider sound announcements: the source may announce \(\phi\) whenever \(\phi\) is sound at the current state. That is, the source may announce any (possibly false) statement which is true up to their lack of expertise.
Such an announcement is denoted syntactically by \([\sndann\phi]\). As with the expertise increase operator, we define a model update operation \(M \mapsto M^{\sndann\phi}\).
It is clear how one should define the new set of states: since the announcement tells us \(\phi\) is sound, we eliminate the unsound states by setting \(X^{\sndann\phi} = \|\S\phi\|_M\). The valuation is also straightforward, since announcements should not change the meaning of atomic propositions.
What about the new expertise collection \(P^{\sndann\phi}\)? If we restrict attention to models closed under intersections, as we did for expertise increase, then a natural choice is to simply restrict each \(A \in P\) to \(X^{\sndann\phi}\) by intersection. Since \(X^{\sndann\phi} = \|\S\phi\|_M = \bigcap\{B \in P \mid \|\phi\|_M \subseteq B\}\), by the closure property we will have \(P^{\sndann\phi} \subseteq P\), so that announcements do not increase expertise. This assumption will also permit us to find reduction axioms later on.
Definition 5
Let \(M = (X, P, V)\) be an expertise model. For a formula \(\phi\), define the model \(M^{\sndann\phi} = (X^{\sndann\phi}, P^{\sndann\phi}, V^{\sndann\phi})\) by setting
Semantically, the truth condition for \([\sndann\phi]\psi\) is as follows.
Note the precondition that \(\S\phi\) is true: if \(\phi\) is unsound, \([\sndann\phi]\psi\) is true for any \(\psi\).
[TODO: example. Show how announcements can fail even for atomic propositions, if the source does not have expertise]
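As a small computational illustration of this point (the model and helper names below are invented; the sketch follows the description of the update given above), a sound announcement of an atomic proposition need not establish it when the source lacks expertise on it:

```python
def sound_states(X, P, truth_set_phi):
    """||S phi||_M: intersection of all expertise sets covering ||phi||_M
    (the nullary intersection being X, per the convention in the text)."""
    out = set(X)
    for A in P:
        if set(truth_set_phi) <= A:
            out &= A
    return out

def announce(X, P, V, truth_set_phi):
    """Model update M -> M after the announcement: keep the sound states,
    restrict the expertise sets and the valuation to them."""
    X_new = sound_states(X, P, truth_set_phi)
    P_new = {frozenset(A & X_new) for A in P}
    V_new = {p: V[p] & X_new for p in V}
    return X_new, P_new, V_new

X = {1, 2, 3}
P = {frozenset(), frozenset({1, 2}), frozenset(X)}   # no expertise on {1} = ||p||
V = {"p": {1}}
X_new, P_new, V_new = announce(X, P, V, V["p"])
print(X_new)   # {1, 2}: state 2 survives even though p is false there
```

Here \(\S p\) holds at states 1 and 2, so after a sound announcement of \(p\) the \(\neg p\) state 2 remains: the announcement does not establish \(p\).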
As with the expertise increase operator, sound announcements are successful for purely propositional formulas \(\alpha \in \cLzero\): \([\sndann\alpha]\S\alpha\) is valid on \(\M@int\). That is, \(\alpha\) remains sound after \(\alpha\) is soundly announced. This is not true for general formulas \(\phi \in \cL\) which may refer to expertise itself. For example, in the model \(M = (X, P, V) \in \M@int\) given by \(X = \{1,2,3,4\}\), \(P = \{\emptyset, X, \{1\}, \{2\}, \{1, 2, 3\}\}\), \(V(p) = \{1, 2\}\) and \(V(q) = \{2, 4\}\), with \(\phi = p \land \neg\E q\) we have \(M, 1 \not\models [\sndann\phi]\S\phi\).
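The counterexample above can be verified mechanically. In the sketch below (illustrative; formulas are encoded as nested tuples and the encoding is our own), the announcement update keeps the sound states and restricts \(P\) and \(V\) to them, and the clauses for \(\E\) and \(\S\) follow the semantics given earlier.

```python
def truth_set(X, P, V, f):
    """Truth set of a formula, encoded as a nested tuple."""
    kind = f[0]
    if kind == "atom":
        return set(V[f[1]])
    if kind == "not":
        return set(X) - truth_set(X, P, V, f[1])
    if kind == "and":
        return truth_set(X, P, V, f[1]) & truth_set(X, P, V, f[2])
    if kind == "E":    # E phi holds everywhere iff ||phi|| is an expertise set
        return set(X) if frozenset(truth_set(X, P, V, f[1])) in P else set()
    if kind == "S":    # S phi: intersection of the expertise sets covering ||phi||
        A = truth_set(X, P, V, f[1])
        out = set(X)
        for B in P:
            if A <= B:
                out &= B
        return out
    raise ValueError(kind)

def announce(X, P, V, phi):
    """Keep the sound states, restrict P and V to them."""
    X_new = truth_set(X, P, V, ("S", phi))
    P_new = {frozenset(B & X_new) for B in P}
    V_new = {p: V[p] & X_new for p in V}
    return X_new, P_new, V_new

X = {1, 2, 3, 4}
P = {frozenset(), frozenset(X), frozenset({1}), frozenset({2}), frozenset({1, 2, 3})}
V = {"p": {1, 2}, "q": {2, 4}}
phi = ("and", ("atom", "p"), ("not", ("E", ("atom", "q"))))   # p and not E q

assert 1 in truth_set(X, P, V, ("S", phi))         # phi is sound at state 1...
Xn, Pn, Vn = announce(X, P, V, phi)
assert 1 not in truth_set(Xn, Pn, Vn, ("S", phi))  # ...but S phi fails at 1 afterwards
```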
The following reduction axioms allow formulas involving announcements to be expressed in the static language.
Proposition 9
The following formulas are valid on \(\M\):
and the following is valid on \(\M@int\):
Proof.
The cases of atomic propositions, propositional connectives and the universal modality \(\A\) are straightforward.
For the reduction axiom for \(\S\), first note that \(\|\psi\|_{M^{\sndann\phi}} = \|\S\phi \land [\sndann\phi]\psi\|_M\). We need to show that \(M, x \models [\sndann\phi]\S\psi\) iff \(M, x \models \S\phi \limplies \S(\S\phi \land [\sndann\phi]\psi)\). If \(M, x \not\models \S\phi\) this is clear. Otherwise \(x \in \|\S\phi\|_M\), and we have
and the result follows.
For the \(\E\) reduction axiom, take \(M \in \M@int\). Again, the case \(x \notin \|\S\phi\|_M\) is immediate, so suppose \(x \in \|\S\phi\|_M\). Then we have
where the forwards direction of the penultimate equivalence holds since \(P^{\sndann\phi} \subseteq P\) when \(M\) is closed under intersections, and the backwards direction holds since \(\|\S\phi \land [\sndann\phi]\psi\|_M \subseteq \|\S\phi\|_M = X^{\sndann\phi}\). It follows that \(M, x \models [\sndann\phi]\E\psi\) iff \(M, x \models \S\phi \limplies \E(\S\phi \land [\sndann\phi]\psi)\), as required.
To conclude, we note some interesting validities involving the dynamic operators and their interaction.
Proposition 10
For any \(\alpha, \beta \in \cLzero\), the following formulas are valid on \(\M@int\):
\(\E\alpha \liff \A[\sndann\alpha]\alpha\)
\(\A(\alpha \limplies \beta) \limplies [\expinc\beta][\sndann\alpha]\beta\)
\([\expinc\alpha][\sndann\alpha]\alpha\)
Proof.
Using the reduction axioms for atomic propositions, conjunctions and negations, one can show by induction that \([\sndann\phi]\alpha\) is equivalent to \(\S\phi \limplies \alpha\). Applying this with \(\phi = \alpha\), we have that \(\A[\sndann\alpha]\alpha\) is equivalent to \(\A(\S\alpha \limplies \alpha)\), which is equivalent to \(\E\alpha\) for models closed under intersections.
We use the following fact, whose proof is straightforward by induction on \(\cLzero\) formulas.
For \(\alpha \in \cLzero\), \(\phi \in \cL\) and any model \(M\), \(\|\alpha\|_{M^{\expinc\phi}} = \|\alpha\|_M\) and \(\|\alpha\|_{M^{\sndann\phi}} = \|\alpha \land \S\phi\|_M\).
Now, take \(M = (X, P, V) \in \M@int\), \(x \in X\), and suppose \(M, x \models \A(\alpha \limplies \beta)\). Then \(\|\alpha\|_M \subseteq \|\beta\|_M\).
We need to show \(M, x \models [\expinc\beta][\sndann\alpha]\beta\), i.e. \(M^{\expinc\beta}, x \models [\sndann\alpha]\beta\). Suppose \(M^{\expinc\beta}, x \models \S\alpha\). To show \((M^{\expinc\beta})^{\sndann\alpha}, x \models \beta\), we need
\[x \in \|\beta\|_{(M^{\expinc\beta})^{\sndann\alpha}} = \|\beta \land \S\alpha\|_{M^{\expinc\beta}} \]where the equality follows from the claim above. By assumption \(M^{\expinc\beta}, x \models \S\alpha\), so we only need to show \(M^{\expinc\beta}, x \models \beta\).
Since \([\expinc\beta]\E\beta\) is valid (as \(\beta \in \cLzero\)), we have \(M^{\expinc\beta}, x \models \E\beta\). From Proposition 1 (3), \(M^{\expinc\beta}, x \models \A(\alpha \limplies \beta) \limplies (\S\alpha \land \E\beta \limplies \beta)\). But from the above claim and \(\|\alpha\|_M \subseteq \|\beta\|_M\) we have \(\|\alpha\|_{M^{\expinc\beta}} \subseteq \|\beta\|_{M^{\expinc\beta}}\), i.e. \(M^{\expinc\beta}, x \models \A(\alpha \limplies \beta)\). Hence \(M^{\expinc\beta}, x \models \beta\), and we are done.
This is the instance of (2) with \(\beta = \alpha\): since \(\A(\alpha \limplies \alpha)\) is valid, the antecedent in (2) always holds.
In words, (1) says that expertise on a propositional formula \(\alpha\) is equivalent to the guarantee that \(\alpha\) is true whenever it is soundly announced. (2) is essentially a reformulation of Proposition 1 (3); it says that if \(\beta\) is logically weaker than \(\alpha\), gaining expertise on \(\beta\) ensures that \(\beta\) is at least true after a sound announcement of the stronger formula \(\alpha\). (3) is the special case of (2) with \(\beta = \alpha\), which says that \(\alpha\) is true following a sound announcement of \(\alpha\) after the source gains expertise on \(\alpha\).
Conclusion¶
TODO
- 1
Note that we could instead consider the dual case: expertise means being able to verify when a proposition is true.
- 2
For this to be the case \(P\) must be closed under intersections and finite unions, and contain both the empty set and \(X\) itself. We will turn to these closure and inclusion properties later. [TODO: check]
- 3
Here we include the empty collection \(\emptyset \subseteq P\) and employ the nullary intersection convention that \(\bigcap \emptyset = X\). Closure under intersections therefore implies \(X \in P\).
- 4
If not, then \(s \mapsto (\Gamma, s)\) is an injective mapping \(\R \to \mathcal{D}\), which would imply \(\R\) is countable.
- 5
That \(\sL\) is strongly complete means that for all sets \(\Gamma \subseteq \cL\) and \(\phi \in \cL\), if \(\Gamma \models \phi\) then \(\Gamma \entails \phi\).
- 6
This follows from completeness of \(\sL\), since \(\E\phi \limplies \A(\S\phi \limplies \phi)\) was shown to be valid in Proposition 1. Nevertheless, we present an \(\sL\)-proof:
\(\phi \limplies \phi\) (PL)
\(\A(\phi \limplies \phi)\) \(\necuniv\), 1
\(\A(\phi \limplies \phi) \limplies (\S\phi \land \E\phi \limplies \phi)\) \(\weakening@E\)
\(\S\phi \land \E\phi \limplies \phi\) \(\modpon\), 2, 3
\(\E\phi \limplies (\S\phi \limplies \phi)\) (PL), 4
\(\A(\E\phi \limplies (\S\phi \limplies \phi))\) \(\necuniv\), 5
\(\A(\E\phi \limplies (\S\phi \limplies \phi)) \limplies (\A\E\phi \limplies \A(\S\phi \limplies \phi))\) \(\Kuniv\)
\(\A\E\phi \limplies \A(\S\phi \limplies \phi)\) \(\modpon\), 6, 7
\(\E\phi \liff \A\E\phi\) \(\EA\)
\(\E\phi \limplies \A\E\phi\) (PL), 9
\((p \limplies q) \limplies ((q \limplies r) \limplies (p \limplies r))\) (PL)
\((\E\phi \limplies \A\E\phi) \limplies ((\A\E\phi \limplies \A(\S\phi \limplies \phi)) \limplies (\E\phi \limplies \A(\S\phi \limplies \phi)))\) (instance of 11)
\((\A\E\phi \limplies \A(\S\phi \limplies \phi)) \limplies (\E\phi \limplies \A(\S\phi \limplies \phi))\) \(\modpon\), 10, 12
\(\E\phi \limplies \A(\S\phi \limplies \phi)\) \(\modpon\), 8, 13
- 7
By the convention that the empty intersection is the whole space \(X\) and the empty union is \(\emptyset\), we have \(X, \emptyset \in P\) too.
- 8
In [FMHV03], shared knowledge is denoted \(\mathsf{E}_J\phi\) for “everybody knows \(\phi\)”. We opt to use the term “shared” knowledge to avoid conflict with our notation for expertise.
- 9
In detail, we have \(\|\phi\|_M = \{1, 2\}\), so \(P^{\expinc\phi} = \{\emptyset, X, \{1, 3\}, \{1, 2\}, \{1\}\}\). Then \(\|\phi\|_{M^{\expinc\phi}} = \{1, 2, 3\} \notin P^{\expinc\phi}\), so \(M^{\expinc\phi}, 1 \not\models \E\phi\).