Formal learning, epistemic logic etc

On the Solvability of Inductive Problems: A Study in Epistemic Topology (2016)

(Baltag, Alexandru and Gierasimczuk, Nina and Smets, Sonja)

  • Summary:

    • Topological study of learnability in the framework of formal learning theory

    • Possible observations are viewed as open sets in a topology. Learning-theoretic notions of verifiability and learnability are characterised in terms of topological separation properties of the observations

    • In brief: the true state of the world is learnable if there are “enough” fine-grained observations to single out the state, i.e. if there are enough open sets to separate points

  • Framework:

    • Epistemic space: \(\mathcal{S} = (S, \mathcal{O})\). \(S\) is a set of states, and \(\mathcal{O} \subseteq 2^S\) is a set of observations

    • Problem: \(\mathcal{P} = (\mathcal{S}, \mathcal{Q})\), where \(\mathcal{Q}\) is a partition of \(S\)

    • (Standard) learner \(L\): for each \(\mathcal{S}\), \(L\) maps finite sequences of observations \(\sigma = (O_1,\ldots,O_n)\) to a belief set \(L_{\mathcal{S}}(\sigma) \subseteq S\).

      • Intuition: \(L\) believes \(P \subseteq S\) on input \(\sigma\) if \(L(\sigma) \subseteq P\) (i.e. every state \(L\) thinks possible lies in \(P\))

      • (This is the special “standard” case; see paper for general definition)

    • An AGM learner assigns to each epistemic space \(\mathcal{S}\) a total preorder \(\le_\mathcal{S}\) over \(S\), and satisfies

      \[L_\mathcal{S}(\sigma) = \min_{\le_\mathcal{S}}{\bigcap \sigma}\]

      whenever the minimum is non-empty (see paper for full definition)

    • A sequence \(\vec{O} = (O_n)_{n \in \mathbb{N}}\) is sound and complete for a state \(s \in S\) if for all observations \(O \in \mathcal{O}\),

      \[s \in O \iff \exists k \in \mathbb{N} : \bigcap \vec{O}[k] \subseteq O\]

      i.e. every observation in \(\vec{O}\) is true at \(s\) (soundness), and every true observation is a consequence of some initial segment of \(\vec{O}\) (completeness)

    • \(L\) verifies \(A \subseteq S\) in the limit iff for any \(s \in S\) and any sound and complete sequence \(\vec{O}\),

      \[s \in A \iff \exists k \in \mathbb{N} : \forall n \ge k : L(\vec{O}[n]) \subseteq A\]

      i.e. \(A\) is true (at \(s\)) iff there is some finite time after which \(L\) always believes \(A\) (when the information stream is sound and complete)

      • \(L\) falsifies \(A\) if it verifies the complement \(A^c\), and \(L\) decides \(A\) if it both verifies and falsifies \(A\)

    • \(L\) solves a problem \(\mathcal{P} = (\mathcal{S}, \mathcal{Q})\) if for any \(s \in S\) and any sound and complete sequence \(\vec{O}\),

      \[\exists k \in \mathbb{N} : \forall n \ge k : L(\vec{O}[n]) \subseteq A_s\]

      where \(A_s\) is the unique set \(A \in \mathcal{Q}\) containing \(s\)

      • \(L\) learns the epistemic space \(\mathcal{S}\) if it solves the problem \((\mathcal{S}, \mathcal{Q})\) with \(\mathcal{Q} = \{\{s\} \mid s \in S \}\)
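
    • A toy sketch of these definitions (example and code mine, not from the paper): on a three-state space whose observations are the two-element subsets, the learner that conjectures exactly the states consistent with its data learns every state in the limit:

```python
S = {1, 2, 3}
O = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]

def learner(sigma):
    """Standard learner: believe exactly the states consistent with all observations seen."""
    belief = set(S)
    for obs in sigma:
        belief &= obs
    return belief

def sound_complete_stream(s, length):
    """Cycle through every observation true at s: this stream is sound and complete for s."""
    true_obs = [obs for obs in O if s in obs]
    return [true_obs[i % len(true_obs)] for i in range(length)]

for s in S:
    # once both observations true at s have appeared, the belief is exactly {s}
    assert learner(sound_complete_stream(s, 5)) == {s}
```

      Here there are "enough" observations to separate the states, matching the intuition in the summary above.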

  • Topology preliminaries:

    • We equip \(S\) with the topology generated by \(\mathcal{O}\) (i.e. the smallest topology in which each \(O \in \mathcal{O}\) is open)

    • \(A \subset S\) is locally closed if \(A = U \cap \bar{A}\) for some open set \(U\) (where \(\bar{A}\) denotes the closure of \(A\))

    • \(A\) is \(\omega\)-constructible if it is a countable union of locally closed sets

    • A problem \(\mathcal{P} = (\mathcal{S}, \mathcal{Q})\) is linearly separated if there is a total order \(\trianglelefteq\) on \(\mathcal{Q}\) such that \(A \cap \overline{\bigcup_{B \triangleleft A}B} = \emptyset\) for every \(A \in \mathcal{Q}\)
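
    • A finite sketch of these notions (helper names mine): generate the topology from the observations, compute closures, and test local closedness. On the Sierpiński space both singletons are locally closed:

```python
from itertools import combinations

def generate_topology(space, subbasis):
    """Smallest topology on `space` containing every set in `subbasis` (finite case)."""
    opens = {frozenset(), frozenset(space)} | {frozenset(s) for s in subbasis}
    changed = True
    while changed:
        changed = False
        for U, V in list(combinations(opens, 2)):
            for W in (U & V, U | V):
                if W not in opens:
                    opens.add(W)
                    changed = True
    return opens

def closure(A, space, opens):
    """Complement of the union of all open sets disjoint from A."""
    disjoint = (U for U in opens if not (U & frozenset(A)))
    return frozenset(space) - frozenset().union(*disjoint)

def is_locally_closed(A, space, opens):
    """A is locally closed iff A = U ∩ cl(A) for some open U."""
    A = frozenset(A)
    return any(A == U & closure(A, space, opens) for U in opens)

# Sierpiński space: {0, 1} with {1} open
opens = generate_topology({0, 1}, [{1}])
assert is_locally_closed({1}, {0, 1}, opens)  # {1} is itself open
assert is_locally_closed({0}, {0, 1}, opens)  # {0} is closed
```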

  • Results:

    • Let \(A \subseteq S\). Say a finite sequence of observations \(\sigma\) is \(A\)-locking for a learner \(L\) and state \(s \in A\) if:

      1. \(s \in \bigcap \sigma\) (i.e. \(\sigma\) – interpreted as the conjunction of its observations – is true at \(s\))

      2. If \(\delta\) is any other finite sequence with \(s \in \bigcap \delta\), then \(L(\sigma\delta) \subseteq A\)

      i.e. \(\sigma\) is “strong enough” so that \(L\) always believes \(A\) after receiving any further information consistent with \(s\)

    • Write \(L_A^\sigma = \{s \in A \mid \sigma \text{ is } A\text{-locking for } s \text{ and } L\}\)

    • Lemma: If \(L\) verifies \(A\) then \(A = \bigcup_\sigma L_A^\sigma\)

    • Lemma: If \(L\) verifies \(A\) then \(L_A^\sigma\) is locally closed

    • Theorem: \(A\) is verifiable (by some \(L\)) if and only if \(A\) is \(\omega\)-constructible

    • Theorem: The following are equivalent:

      1. \(\mathcal{P} = (\mathcal{S}, \mathcal{Q})\) is solvable

      2. \(\mathcal{Q}\) is an at most countable collection of \(\omega\)-constructible sets

      3. \(\mathcal{Q}\) has an (at most countable) refinement \(\mathcal{Q}'\) such that each \(A \in \mathcal{Q}'\) is locally closed

    • Corollary: \(\mathcal{S}\) is learnable if and only if \(\{s\}\) is locally closed for each \(s \in S\)

      (\(\{s\}\) LC for each \(s\) is equivalent to \(S\) satisfying the \(T_d\) separation axiom)

    • On AGM learners…

      • Any total order \(\trianglelefteq\) on \(\mathcal{Q}\) induces a total preorder on \(S\) by \(s \le t\) iff \(A_s \trianglelefteq A_t\)

      • \(\mathcal{P}\) is directly solvable by AGM conditioning if there is \(\trianglelefteq\) and an AGM learner \(L\) such that \(L\) solves \(\mathcal{P}\) and \(\le_\mathcal{S}\) is induced from \(\trianglelefteq\) as above

      • Lemma: \(\mathcal{P}\) is directly solvable by AGM conditioning if and only if \(\mathcal{P}\) is linearly separated

      • Lemma: Every partition \(\mathcal{Q}\) with locally closed cells can be refined to some \(\mathcal{Q}'\) which is linearly separated

      • Theorem: If \(\mathcal{P}\) is solvable, it is solvable by an AGM agent

      • Corollary: Every learnable space is learnable by an AGM agent
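
      • A sketch of direct AGM conditioning (example and names mine): fix a linear order on the cells of \(\mathcal{Q}\); the induced preorder ranks states by their cell, and the learner conjectures the most plausible states consistent with the data:

```python
def agm_learner(cells_in_order):
    """AGM conditioning with the preorder induced by a linear order on the cells."""
    rank = {s: i for i, cell in enumerate(cells_in_order) for s in cell}
    def conjecture(sigma, space):
        consistent = set(space)
        for obs in sigma:
            consistent &= set(obs)
        if not consistent:
            return set(space)  # degenerate case; see the paper for the general clause
        best = min(rank[s] for s in consistent)
        return {s for s in consistent if rank[s] == best}
    return conjecture

S = {1, 2, 3}
L = agm_learner([{1}, {2}, {3}])  # prior: 1 more plausible than 2, than 3

stream = [{2, 3}, {1, 3}]  # a sound and complete stream for state 3
beliefs = [L(stream[:n], S) for n in range(len(stream) + 1)]
assert beliefs == [{1}, {2}, {3}]  # converges to (and locks onto) the true cell
```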

The topology of statistical verifiability (2017)

(Genin, Konstantin and Kelly, Kevin T)

  • Summary:

    • There exist topological accounts of verifiability in propositional settings, where incoming information explicitly rules out certain states of the world. This is used to (hopefully) find the true state of the world given enough information

    • In statistical applications, we aim to find the true probability distribution over some sample space \(\Omega\). But incoming information (a random sample \(\omega\)) may be compatible with all distributions, and does not rule out states as in the propositional case

    • The authors define new notions of verifiability for the statistical setting. They describe the unique topology on the set of probability distributions in which the open sets characterise the statistically verifiable propositions

  • Framework:

    • \(\mathfrak{G} = (\Omega, \mathcal{T})\) is a sample space \(\Omega\) equipped with a topology \(\mathcal{T}\)

    • \(\mathcal{B}\) denotes the Borel algebra on \(\Omega\) generated by \(\mathcal{T}\)

    • \(W\) – the set of worlds – consists of probability measures \(\mu: \mathcal{B} \to [0,1]\)

    • Intuition: it is verifiable that \(\omega\) lies in \(A \subseteq \Omega\) if \(\omega\) is in the interior of \(A\) (I think?). There could be problems at the boundary \(\mathsf{bdry}(A) = \overline{A} \cap \overline{A^c}\)

    • \(A\) is almost surely decidable in \(\mu \in W\) if \(\mu(\mathsf{bdry}(A)) = 0\), i.e. if the probability that a sample lands on the boundary is 0

    • A method is a measurable function \(\psi: \Omega \to 2^W\). \(\psi(\omega)\) is the conjectured set of possible probability distributions after seeing a random sample \(\omega\)

    • A test for \(H \subseteq W\) is a method \(\psi\) with \(\psi(\omega) \in \{W, H^c\}\): \(\psi\) either rules out \(H\) (if \(\psi(\omega) = H^c\)) or makes no inference (\(\psi(\omega) = W\)). I think this follows the ideas of hypothesis testing in statistics

    • \(\psi\) is feasible in \(\mu \in W\) if \(\psi^{-1}(W)\) (the region in \(\Omega\) in which \(\psi\) does not reject \(H\)) is almost surely decidable in \(\mu\)

    • (Authors give several versions of statistical verifiability in terms of sequences of \(H^c\)-tests \((\lambda_n)_{n \in \mathbb{N}}\), where each \(\lambda_n\) is a mapping \(\Omega^n \to \{W, H\}\) (I think). Omitted here since I don’t quite understand some of the concepts)

  • Topology on measures:

    • A sequence \((\mu_n)_{n \in \mathbb{N}}\) of measures in \(W\) converges weakly to \(\mu\) if \(\mu_n(A) \to \mu(A)\) for every \(A\) almost surely decidable in \(\mu\) (note that weak convergence depends on the topology on \(\Omega\))

    • \(W\) can be topologised such that weak convergence is exactly the same as convergence in the topology: this is called the weak topology
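
    • A numerical illustration (mine, not from the paper): on \(\mathbb{R}\) with the usual topology, the point masses \(\delta_{1/n}\) converge weakly to \(\delta_0\). Convergence \(\mu_n(A) \to \mu(A)\) holds when the boundary of \(A\) is \(\mu\)-null, and can fail otherwise:

```python
def delta(point):
    """Dirac measure at `point`, evaluated on open intervals (a, b)."""
    return lambda a, b: 1.0 if a < point < b else 0.0

mu = delta(0.0)
mu_n = [delta(1.0 / n) for n in range(2, 200)]

# A = (-1, 1): boundary {-1, 1} is mu-null, and mu_n(A) -> mu(A) = 1
assert all(m(-1, 1) == 1.0 for m in mu_n)

# A = (0, 1): the boundary contains 0 with mu({0}) = 1, and indeed
# mu_n(A) = 1 for every n while mu(A) = 0: convergence fails here
assert all(m(0, 1) == 1.0 for m in mu_n) and mu(0, 1) == 0.0
```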

  • Results:

    • A proposition \(H \subseteq W\) is verifiable (in a certain sense; see the paper) iff it is open in the weak topology

    • Similar characterisation for “limiting verifiability” and for empirical problems

A formal theory of inductive inference. Part I (1964)

(Solomonoff, Ray J)

Announcement as effort on topological spaces (2016)

(Van Ditmarsch, Hans and Knight, Sophia and Özgün, Aybüke)

Topological Evidence Logics: Multi-Agent Setting (2018)

(Baltag, Alexandru and Bezhanishvili, Nick and González, Saúl Fernández)

Evidence in Epistemic Logic : A Topological Perspective (2017)

(Özgün, Aybüke)

  • PhD thesis

  • I really like the typesetting

  • Overview:

    • Looks at a bunch of different modal logics for knowledge, belief, evidence and announcements. Looks at topological semantics for these logics. Gives epistemic interpretations of the modalities and the semantics. Proves completeness results for most of these.

    • Good exposition of existing work in the literature

  • Topics:

    • Topological semantics for modal logic

      • Idea is that we have a topology on the set of worlds, and \([\square \phi] = \text{int}([\phi])\)

      • Interior semantics for Alexandroff spaces (where arbitrary intersections of open sets are open) are in correspondence with reflexive and transitive Kripke frames

      • \(\mathsf{S4}\) is sound and complete for interior semantics for any topological space

      • \(\mathsf{S4}\) axioms look very similar to Kuratowski axioms for the interior operator

      • \(\mathsf{S4}\) is good for modelling knowledge, so interior semantics give a good way to interpret knowledge topologically

    • Topological semantics for belief:

      • Class of topological spaces and interpretation of the belief modality that characterise an existing modal system in the literature (I think?)

      • \([B\phi] = \text{cl}(\text{int}([\phi]))\) (“closure-interior belief semantics”). This captures the idea of belief as subjective certainty: if knowledge corresponds to the interior and closure represents “closeness” (i.e. \(x \in \text{cl}(A)\) means \(x\) is “close” to \(A\)), then worlds in \(\text{cl}(\text{int}([\phi]))\) are “close” to worlds in which the agent knows \(\phi\); consequently, such worlds are subjectively indistinguishable from knowledge.

      • Author extends this to semantics for conditional beliefs. Quite complicated, but apparently satisfies analogues of the AGM postulates, when rephrased in terms of conditional beliefs

    • Presentation of Justified Belief and the Topology of Evidence (2016)

    • Overview of subset space semantics

      • Here one evaluates the truth value of formulas at epistemic scenarios \((x, U)\), where \(x\) is the actual state of the world and \(U\) is the evidence obtained so far. This allows one to distinguish between the potential evidence that could be collected in the future, and the evidence which is currently in hand

      • In addition to a knowledge modality \(K\), one has the effort modality \(\square\), where \(\square \phi\) means that after any further “observational effort” (i.e. after more evidence is obtained), \(\phi\) will remain true.

      • In general the possible pieces of evidence \(U\) come from an arbitrary family \(\mathcal{O} \subseteq 2^W\). Topological spaces are a particular case of subset spaces where \(\mathcal{O}\) has certain closure properties.

      • Others have shown that there are topological interpretations of verifiability and falsifiability: if \(v: \mathsf{prop} \to 2^W\) is a valuation function, then \(v(p)\) is open iff \(p \to \diamond Kp\) is valid. Since open sets represent potential pieces of evidence, this means that \(p\) is “verifiable” (there is evidence for exactly the worlds \(v(p)\) in which \(p\) is true) iff it is possible to know \(p\). Very cool

      • Topological semantics for public announcement modalities in the style of subset space semantics

    • Other chapters which I have not read…

Keep changing your beliefs, aiming for the truth (2011)

(Baltag, Alexandru and Smets, Sonja)

  • Summary (sketch):

    • Authors study what happens to an agent’s epistemic state upon receiving a sequence of truthful information

    • Information is not just propositional but doxastic: it may include statements about the agent’s own beliefs as well as facts about the world

    • Questions: do beliefs converge to a fixed state? Converge to the truth? Converge to the truth so that the agent knows their beliefs are true?

    • Relevant research areas: belief revision, dynamic epistemic logic, formal learning theory, truth approximation

  • Basic framework:

    • Pointed plausibility model: \(\bm{S} = (S, {\le}, \|\cdot\|, s_0)\), where

      • \(S\) is a set of states

      • \(\le\) is a plausibility total preorder (tpo) over \(S\)

      • \(\|\cdot\|: \Phi \to 2^S\) is a valuation map, taking atomic formulas \(p \in \Phi\) (for a fixed object language \(\Phi\)) to a set of states \(\|p\| \subseteq S\)

      • \(s_0\) is the “actual” state of the world

    • For a proposition \(P \subseteq S\), \(\mathsf{best}(P)\) is the set of \(\le\)-minimal elements of \(P\)

    • \(BP = S\) if \(\mathsf{best}(S) \subseteq P\), and \(BP = \emptyset\) otherwise

      • \(BP\) represents the agent believing \(P\). Note that this holds either at all states or at no states: whether the agent believes \(P\) depends only on the agent’s doxastic state, not the actual state of the world

    • A doxastic proposition \(\bm{P}\) maps each pointed plausibility model \(\bm{S}\) to a set \(\bm{P}_\bm{S} \subseteq S\)

      • Write \(s \vDash_\bm{S} \bm{P}\) iff \(s \in \bm{P}_\bm{S}\)

      • Write \(\bm{S} \vDash \bm{P}\) iff \(s_0 \vDash_\bm{S} \bm{P}\)

      • Conjunction, negation etc are defined pointwise. E.g. \((\bm{P} \wedge \bm{Q})_\bm{S} = \bm{P}_\bm{S} \cap \bm{Q}_\bm{S}\)

    • A binary question \(\mathcal{Q}\) is a pair \(\{\bm{P}, \bm{\neg P}\}\) for some (doxastic) proposition \(\bm{P}\)

  • Learning new information:

    • An upgrade by a proposition \(\bm{P}\) maps a plausibility model to a “transformed” one, corresponding to receiving information \(\bm{P}\)

    • Different notions of upgrade may be used depending on the source of the information; \(\dagger\bm{P}\) denotes a general upgrade by \(\bm{P}\)

    • We require that \(\dagger\bm{P}\) does not change the actual world. If \(\dagger\bm{P}(\bm{S}) = \bm{S'} = (S', {\le'}, \|\cdot\|', s_0')\), we have:

      • \(S' \subseteq S\) (no new states are introduced)

      • \(\|p\|' = \|p\| \cap S'\)

      • \(s'_0 = s_0\)

    • Three types of upgrade:

      • Update (receiving absolutely certain information). Whenever \(\bm{S} \vDash \bm{P}\), \(!\bm{P}\) maps \(\bm{S}\) to \(\bm{S'}\), where

        • \(S' = \bm{P}_\bm{S}\)

        • \(s \le' t\) iff \(s \le t\)

        i.e. states outside \(\bm{P}_\bm{S}\) are now impossible, and the plausibility ordering is unchanged

      • Radical upgrade (receiving trusted but fallible information). For any \(\bm{S}\), \(\Uparrow\bm{P}\) maps \(\bm{S}\) to \(\bm{S'}\) where

        • \(S' = S\)

        • \(s \le' t\) iff (\(s \in \bm{P}_\bm{S}\) and \(t \notin \bm{P}_\bm{S}\)) or ((\(s \in \bm{P}_\bm{S}\) iff \(t \in \bm{P}_\bm{S}\)) and \(s \le t\))

        i.e. the possible states remain unchanged, and \(\le'\) ranks states lexicographically first on the basis of whether they cause \(\bm{P}\) to be true, and then according to the existing plausibility ordering \(\le\)

      • Conservative upgrade (receiving “barely believed” information). \(\uparrow \bm{P}\) maps \(\bm{S}\) to \(\bm{S'}\), where

        • \(S' = S\)

        • \(s \le' t\) iff \(s \in \mathsf{best}(\bm{P}_\bm{S})\) or (\(t \notin \mathsf{best}(\bm{P}_\bm{S})\) and \(s \le t\))

        i.e. the most plausible \(\bm{P}\) states become minimal, and the rest of the plausibility ordering remains the same
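
    • A finite sketch of the three upgrades (representation mine): plausibility as a rank function, lower rank = more plausible:

```python
def best(P, states, rank):
    """Most plausible states of P (lowest rank)."""
    m = min(rank[s] for s in P & states)
    return {s for s in P & states if rank[s] == m}

def update(P, states, rank):
    """!P: discard non-P states, keep the ordering."""
    return states & P, rank

def radical(P, states, rank):
    """Radical upgrade: every P-state becomes strictly more plausible than every non-P state."""
    offset = max(rank[s] for s in states) + 1
    return states, {s: rank[s] if s in P else rank[s] + offset for s in states}

def conservative(P, states, rank):
    """Conservative upgrade: only the best P-states are promoted; the rest keeps its order."""
    top = best(P, states, rank)
    return states, {s: 0 if s in top else rank[s] + 1 for s in states}

states, rank = {1, 2, 3}, {1: 0, 2: 1, 3: 2}          # 1 most plausible
_, r = radical({2, 3}, states, rank)
assert best(states, states, r) == {2}                  # all of P jumps above 1
_, r = conservative({2, 3}, states, rank)
assert best(states, states, r) == {2} and r[3] == 3    # only best(P) promoted
```

      An upgrade stream is then just iterated application of one of these maps.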

    • \(\dagger\bm{P}\) is truthful at \(\bm{S}\) if \(\bm{S} \vDash \bm{P}\) (i.e. \(\bm{P}\) is true at \(\bm{S}\))

    • An upgrade stream is a sequence \(\dagger \bm{\vec{P}} = (\dagger \bm{P}_n)_{n \in \mathbb{N}}\) of upgrades of the same type \({\dagger} \in \{{!}, {\Uparrow}, {\uparrow}\}\)

    • An upgrade stream can be applied recursively to a state \(\bm{S}\) to produce a sequence of states \((\bm{S}_n)_{n \in \mathbb{N}_0}\), where

      • \(\bm{S}_0 = \bm{S}\)

      • \(\bm{S}_{n+1} = \dagger \bm{P}_n(\bm{S}_n)\)

  • Results (rough overview):

    • Conditions for an upgrade stream to cause an agent’s beliefs to stabilise, in different senses for different kinds of upgrade:

      • Beliefs stay the same

      • Beliefs stay the same, and beliefs are true

      (I did not read in full)

Knowing one’s limits: logical analysis of inductive inference (2010)

(Gierasimczuk, Nina)

  • PhD thesis

  • Topics:

    • Learning theory

    • (Dynamic) epistemic logic

    • Learning and belief revision

    • Computational complexity

    • “Supervised” learning: a “teacher” provides information to the learner

Language identification in the limit (1967)

(E Mark Gold)

Modes of Convergence to the Truth: Steps Toward a Better Epistemology of Induction (2018)

(Lin, Hanti)

On the path to the truth: Logical & computational aspects of learning (2020)

(Vargas Sandoval, AL and others)

  • PhD thesis

  • Topics:

    • Similar to Nina Gierasimczuk’s thesis?

    • Dynamic logic for inductive learning from observations

    • Logic for AGM learning

    • Public announcement epistemic logic

    • Identification in the limit with positive and negative data

The Logic of AGM Learning from Partial Observations (2020)

(Alexandru Baltag and Aybüke Özgün and Ana Lucia Vargas-Sandoval)

  • (I think this is included in the PhD thesis of Vargas-Sandoval (above))

  • Dynamic logic for AGM learning

    • Language for observations: \(e := !\top \mid !o \mid e; e \mid e \sqcup e\). Here \(o\) represents an observation (think FLT), \(e_1; e_2\) denotes sequentially observing \(e_1\) then \(e_2\), and \(e_1 \sqcup e_2\) denotes observing one of \(e_1, e_2\), but the agent is not sure which one (these are the titular partial observations)

    • Dynamic language: \(\phi := p \mid o \mid \neg \phi \mid \phi \wedge \phi \mid L(e) \mid K\phi \mid [e]\phi \mid \square \phi\)

  • Semantics are given over tuples \((X, \mathscr{O}, {\le}, \|\cdot\|)\), where \(X\) is a set of states, \(\mathscr{O} \subseteq 2^X\) is a set of “information states” (read: observations?), \({\le}\) is a plausibility tpo over \(X\), and \(\|\cdot\|\) is a valuation mapping propositional and observational symbols (\(p\) and \(o\) in the above BNF) to subsets of \(X\)

  • Formulas are interpreted at “epistemic scenarios” \((x, U)\), where \(x \in U \in \mathscr{O}\). Here \(x\) represents the actual state of the world, and \(U\) represents the agent’s current evidence, based on previous observations. Consequently, the truth value of a formula depends not only on the actual state of the world, but also the information that has been observed. Interesting!

  • Each observational formula \(e\) defines a mapping \(\mathscr{O} \to \mathscr{O}\)

  • Formally:

    • Propositional formulas are interpreted as expected

    • \((x, U) \vDash L(e)\) iff \(x \in \min(e(U), {\le})\), i.e. an AGM learner with plausibility tpo \({\le}\) believes \(x\) when receiving the information which \(e\) represents

    • \((x, U) \vDash K\phi\) iff \(\forall y \in U:\ (y, U) \vDash \phi\), i.e. \(\phi\) is “known” if it is true at every point consistent with the information received so far (that is, \(U\))

    • \((x, U) \vDash [e]\phi\) iff \(x \in e(U)\) implies \((x, e(U)) \vDash \phi\), i.e. if the actual state \(x\) is consistent with the observation represented by \(e\), then \(\phi\) is true after observing \(e\)

    • \((x, U) \vDash \square \phi\) iff \(\forall O \in \mathscr{O}: x \in O \subseteq U \implies (x, O) \vDash \phi\), i.e. if \(\phi\) is true after receiving any more specific information \(O\). This seems similar to the “locking sequence” idea in FLT: no further information can cause belief in \(\phi\) to be retracted
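
  • A minimal model checker for the \(K\) and \(\square\) clauses (model and helper names mine), with formulas as Python predicates on scenarios \((x, U)\):

```python
O = [frozenset({1, 2, 3}), frozenset({1, 2}), frozenset({1})]

def holds_K(phi, x, U):
    """(x, U) |= K phi  iff  phi holds at (y, U) for every y in U."""
    return all(phi(y, U) for y in U)

def holds_box(phi, x, U):
    """(x, U) |= box phi  iff  phi holds at (x, O') for every O' in O with x in O' <= U."""
    return all(phi(x, Op) for Op in O if x in Op and Op <= U)

p = lambda x, U: x in {1, 2}            # valuation of an atom p
Kp = lambda x, U: holds_K(p, x, U)
not_Kp = lambda x, U: not Kp(x, U)

full = frozenset({1, 2, 3})
assert p(1, full) and not Kp(1, full)   # p is true at 1, but not yet known
assert not holds_box(not_Kp, 1, full)   # i.e. (1, full) |= <>Kp:
                                        # refining the evidence to {1, 2} makes p known
```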

  • Authors provide a sound and complete axiomatisation of this logic

  • Some more complicated stuff which I did not read properly

Uncertainty about evidence (2019)

(Bjorndahl, Adam and Özgün, Aybüke)

  • Summary:

    • Logical framework for reasoning about knowledge with uncertain evidence

    • The agent knows the evidence that it receives, but there may be uncertainty about what that evidence entails: the interpretation of evidence is not known to the agent

    • Specifically: the states of the world corresponding to a piece of evidence depend on the actual state of the world

  • Framework:

    • Evidence model: \((X, E, I, v)\), where:

      • \(X\) is a non-empty set of states

      • \(E\) is a non-empty set of evidence states (abstract elements)

      • \(I = (I_e)_{e \in E}\) is a collection of mappings \(I_e: X \to 2^X\)

      • \(v: \mathsf{prop} \to 2^X\) is a valuation function over a set of atomic propositions \(\mathsf{prop}\)

    • Intuition: \(I_e(x)\) is exactly the set of states compatible with evidence \(e\) if the actual state is \(x\). This is where the uncertainty about evidence comes in; if we do not know the actual state, we do not know \(I_e(x)\) and therefore do not know exactly what the evidence says about the world

    • This generalises other work where evidence is seen as a subset of the state space: just take \(I_e\) constant

    • A state \(x\) and evidence state \(e\) are coherent if \(x \in I_e(x)\). This means that evidence \(e\) could actually be seen if the actual world were \(x\)

    • Write \(U_e = \{x \in X \mid x \in I_e(x)\}\) for the set of states coherent with evidence \(e\), i.e. the set of states for which \(e\) might be seen as evidence. But note: \(e\) does not necessarily rule out all \(y \notin U_e\); we can have \(y \in I_e(x)\) for some \(x \in U_e\)

  • A modal logic:

    • \(\phi := p \mid \phi \wedge \phi \mid \neg \phi \mid E\phi \mid K\phi\)

    • Formulas are interpreted over evidence scenarios: \((x, e)\) where \(x\) and \(e\) are coherent. Intuitively, \(\phi\) is true at \((x, e)\) iff \(\phi\) is true when the agent has received evidence \(e\) and the actual state is \(x\)

    • Semantics: pre-emptively write \([\phi]_e = \{x \in X \mid (x, e) \vDash \phi\}\), and set

      \[\begin{aligned} (x, e) &\vDash p \iff x \in v(p) \\ (x, e) &\vDash \phi \wedge \psi \iff (x, e) \vDash \phi \text{ and } (x, e) \vDash \psi \\ (x, e) &\vDash \neg \phi \iff (x, e) \not\vDash \phi \\ (x, e) &\vDash E\phi \iff I_e(x) \subseteq [\phi]_e \\ (x, e) &\vDash K\phi \iff \bigcup_{y \in X}{I_e(y)} \subseteq [\phi]_e \\ \end{aligned}\]

      (the recursion is well-defined in the semantic clause for \(K\phi\) since \(\phi\) has strictly smaller depth than \(K\phi\))

    • Meaning:

      • \(E\phi\) is true at \((x, e)\) if the evidence \(e\) actually entails \(\phi\) when the actual state is \(x\): for any state \(y \in I_e(x)\) which the evidence entails, we have \(\phi\) true at \((y, e)\)

      • \(K\phi\) is true if the agent “knows” \(\phi\) upon receiving evidence \(e\), even if they do not know the actual state \(x\): for every \(y \in X\) and every state \(z \in I_e(y)\) entailed by that interpretation of \(e\), \(\phi\) is true at \((z, e)\)

      • Note that the truth value of \(K\phi\) is independent of the state \(x\) (apparently this is enough to see \(K\) is an \(\mathsf{S5}\) modality?)

    • An extra property:

      \[(E1)\quad y \in I_e(x) \implies y \in I_e(y)\]

      This says that if \(y\) is entailed by \(e\) at \(x\), then \(y\) and \(e\) are coherent. This makes sense: if \(y\) is entailed by \(e\) at some other state \(x\), then \(y\) should surely be entailed by \(e\) at \(y\) itself
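
    • A small model checker for the \(E\) and \(K\) clauses (model mine, chosen to satisfy \((E1)\)):

```python
X = {1, 2, 3}
I = {"e": {1: {1, 2}, 2: {1, 2}, 3: {3}}}  # I[e][x]: states e entails at x

def holds_E(phi, x, e):
    """(x, e) |= E phi  iff  phi holds at every state the evidence entails at x."""
    return all(phi(y, e) for y in I[e][x])

def holds_K(phi, x, e):
    """(x, e) |= K phi  iff  phi holds at every state entailed under ANY interpretation."""
    return all(phi(z, e) for y in X for z in I[e][y])

p = lambda y, e: y in {1, 2}
assert holds_E(p, 1, "e")        # at state 1, the evidence actually entails p
assert not holds_K(p, 1, "e")    # but the agent cannot rule out the interpretation at 3
assert holds_K(p, 1, "e") == holds_K(p, 2, "e") == holds_K(p, 3, "e")  # K ignores x
```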

  • Results:

    • Evidence model generalises Kripke relational semantics (for reflexive accessibility relations) and subset-space semantics

    • Sound and complete axiomatisation for models satisfying \((E1)\): \(\mathsf{S5}\) for the \(K\) modality plus \(\mathsf{KT}\) for \(E\) plus the axiom scheme \(K\phi \to E\phi\)

  • Other bits (just summarised here…)

    • Add belief modality to the language. Formulas are interpreted over doxastic evidence scenarios \((x, e, V)\), where \(V \subseteq X\) represents the beliefs of the agent

    • Add knowability modality and dynamics to the language. Authors consider a meet-semilattice \((E, \oplus)\), where \(\oplus\) intuitively allows the agent to combine evidence. Then \(\square \phi\) is true at \((x, e)\) (\(\phi\) is knowable) if there is evidence \(e'\) such that \(e \oplus e'\) would cause the agent to know \(\phi\)

A dynamic logic for learning theory (2019)

(Baltag, Alexandru and Gierasimczuk, Nina and Ozgun, Aybuke and Sandoval, Ana Lucia Vargas and Smets, Sonja)

Logic and topology for knowledge, knowability, and belief (2019)

(Bjorndahl, Adam and Ozgun, Aybuke)

Argument-based belief in topological structures (2017)

(Shi, Chenwei and Smets, Sonja and Velázquez-Quesada, Fernando R)

  • Summary:

    • Authors use topological semantics to represent evidence, and argumentation theory to combine (possibly conflicting) evidence

  • Evidence models:

    • (Similar setup to this paper: Justified Belief and the Topology of Evidence (2016))

    • Set of possible worlds \(X\)

    • Family of sets of “basic evidence” \(E_0 \subseteq 2^X\), with \(X \in E_0\) and \(\emptyset \notin E_0\)

    • A body of evidence is a subset \(F \subseteq E_0\) such that any finite intersection from \(F\) is non-empty

    • Authors consider the topology \(\tau\) on \(X\) generated by \(E_0\). Open sets then represent combinations of pieces of basic evidence

    • The agent believes a proposition \(P \subseteq X\) if for all \(U\) open and non-empty there is \(V\) open and non-empty such that \(V \subseteq U\) and \(V \subseteq P\)

      • That is, the agent believes \(P\) if any piece of (combined) evidence \(U\) can be strengthened to some evidence \(V\) which implies \(P\)
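
    • A finite sketch of this belief notion (example mine), checked directly against a small topology:

```python
def believes(P, opens):
    """Believe P iff every non-empty evidence can be strengthened to evidence implying P."""
    nonempty = [U for U in opens if U]
    return all(any(V <= U and V <= P for V in nonempty) for U in nonempty)

# topology on {1, 2, 3} generated by basic evidence {1, 2} and {2, 3}
opens = [set(), {2}, {1, 2}, {2, 3}, {1, 2, 3}]

assert believes({2}, opens)      # every evidence refines to {2}, which implies {2}
assert believes({2, 3}, opens)   # belief is monotone in P
assert not believes({1}, opens)  # the evidence {2} cannot be strengthened towards {1}
```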

  • Bringing in argumentation…

    • \(U, U' \in \tau\) conflict with each other if \(U \cap U' = \emptyset\)

    • A topological argumentation model extends an evidence model with an attack relation \(\to\) on \(\tau\), such that

      1. \(U \cap U' = \emptyset\) iff \(U \to U'\) or \(U' \to U\)

      2. if \(U \to U'\) and \(V \subseteq U'\), then \(U \to V\)

      3. for all \(U \ne \emptyset\), \(U \to \emptyset\) and \(\emptyset \not\to U\)

    • Given a topological argumentation model, we can consider usual argumentation semantics

    • The agent has grounded belief in a proposition \(P \subseteq X\) if there is an open set in the grounded extension which is a subset of \(P\)

    • Grounded belief is not in general closed under conjunction, but the authors give three separate conditions under which this does hold

  • Logic of grounded beliefs

    • Modal language with belief modality \(B\)

    • Semantics given by grounded belief over topological argumentation models

    • Sound and complete axiom system given:

      • Propositional tautologies and modus ponens

      • (4): \(B\phi \to BB\phi\)

      • (5): \(\neg B \phi \to B \neg B \phi\)

      • (D): \(B \phi \to \neg B \neg \phi\)

      • (N): \(B \top\)

      • (M): \(B(\phi \wedge \psi) \to B\phi \wedge B\psi\)

      • (RE): from \(\phi \leftrightarrow \psi\) infer \(B\phi \leftrightarrow B\psi\)

Justified Belief and the Topology of Evidence (2016)

(Alexandru Baltag and Nick Bezhanishvili and Aybüke Özgün and Sonja Smets)

A qualitative theory of dynamic interactive belief revision (2008)

(Baltag, Alexandru and Smets, Sonja)

  • Summary:

    • An account of belief revision from the point of view of dynamic epistemic logic

    • Authors first look at “static” belief revision: the agent’s beliefs change with new information, but the world stays the same (more or less). Key modalities are of knowledge, belief and conditional belief

    • They then turn to “dynamic” belief revision. Here one has dynamic modalities (e.g. announcements)

  • Static revision

    • Quote:

      “static” belief revision is about pre-encoding potential belief revisions as conditional beliefs

    • Believing \(Q\) conditioned on learning \(P\) is to say that the agent is prepared to believe \(Q\) was the case, if the agent were to learn that \(P\) was the case

    • Note the use of past tense “was”. Indeed, if \(Q\) is a higher-level doxastic proposition (i.e. it refers to the agent’s beliefs) then the act of learning \(P\) might change whether \(Q\) holds

    • In standard AGM theory one deals with purely “ontic” propositions (which do not refer to beliefs of agents) so this is a moot point

  • Formalism:

    • Epistemic plausibility frame over some set of agents \(\mathcal{A}\): \(\bm{S} = (S, {\sim_a}, {\le_a})_{a \in \mathcal{A}}\)

      • \(S\) is a set of possible worlds

      • Each \(\sim_a\) is an equivalence relation expressing the states between which agent \(a\) can distinguish (like in Richard’s JAIR paper)

      • Each \(\le_a\) is a plausibility relation over \(S\)

    • Constraints:

      1. \(s \le_a t\) implies \(s \sim_a t\) (plausibility comparisons only take place between epistemically indistinguishable states)

      2. \(\le_a\) restricted to each equivalence class of \(\sim_a\) is a well-preorder, i.e. reflexive, transitive, and any non-empty subset has at least one minimal element

    • The idea with (1) is that if \(s \not\sim_a t\), agent \(a\) knows how to distinguish between states \(s\) and \(t\), so the plausibility relation (which reflects beliefs) is not needed to decide between \(s\) and \(t\)

    • For (2), this condition allows us to speak of the minimal \(P\) states within each equivalence class (compare with AGM revision)

    • Many different modalities introduced:

      • Knowledge \(K_aP\) (corresponds to \(\sim_a\))

      • Simple belief \(B_aP\) (true at \(s\) if all minimal states in the equivalence class of \(s\) satisfy \(P\))

      • Conditional belief \(B_a^QP\) (similar, but only look at minimal states satisfying \(Q\))

      • Safe belief \(\square_a P\) (true if \(B^QP\) is true for all true propositions \(Q\))

      • Weak safe belief \(\square^{\text{weak}}_aP\) (true if \(\neg B^Q\neg P\) for all true propositions \(Q\), i.e. belief in \(P\) may be given up, but is never reversed)

      • Revision operator \(\star_a P\) (true at \(s\) iff \(s\) is a minimal model of \(P\) within the eq. class of \(s\) (?))

    • Relations between these modalities are discussed

    • Interesting example (2.3) in which a belief is true but not safe: some truthful information can come in and cause an agent to switch their belief to a false one. The trick is that the incoming information is a doxastic proposition which refers to the beliefs of other agents.

    • Complete axiomatisations given for a few logics built from these modal operators

  • Dynamic revision (not read yet)

Reliable belief revision (1997)

(Kelly, Kevin and Schulte, Oliver and Hendricks, Vincent)

The learning power of belief revision (1998)

(Kelly, Kevin T)

Scientific discovery based on belief revision (1997)

(Eric Martin and Daniel Osherson)

Iterated Belief Revision, Reliability, and Inductive Amnesia (1999)

(Kevin T. Kelly)

Topological Properties of Concept Spaces (2008)

(Matthew de Brecht and Akihiro Yamamoto)

Dynamic testimonial logic (2009)

(Holliday, Wesley H)

  • Summary:

    • Multi-agent dynamic logic formalism for belief and testimony

    • Dynamic operators for belief upgrade, belief suspension and testimony

    • Trust between agents on propositions can be defined as an abbreviation in the language

  • Models judgment-aggregation-like situations, where agents testify for/against propositions

  • Motivation: epistemic bandwagon effect

    • In JA one usually assumes judgements are made instantaneously

    • In reality agents may vote sequentially

    • The order in which judgements are revealed can influence the beliefs of later agents

    • E.g. consider agents 1, 2, 3 who are initially for, indifferent and against the proposition, respectively

    • Suppose an agent will go with the majority if they are outnumbered

    • Taking judgements in the order 1, 2, 3 results in unanimous for, but the reverse order 3, 2, 1 results in unanimous against
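    • A tiny simulation of this example. The “go with the majority when outnumbered” rule is my naive reading, with stances coded +1 (for), 0 (indifferent), -1 (against):

```python
def run(order, stance):
    """Agents testify in `order`; an agent adopts the majority on the record
    when their own camp (including themselves) is strictly outnumbered, and
    an indifferent agent follows any strict majority."""
    record = []  # published stances so far
    for agent in order:
        s = stance[agent]
        fors = sum(r == +1 for r in record)
        againsts = sum(r == -1 for r in record)
        if s == 0:
            s = +1 if fors > againsts else (-1 if againsts > fors else 0)
        elif s == +1 and againsts > fors + 1:
            s = -1
        elif s == -1 and fors > againsts + 1:
            s = +1
        record.append(s)
    return record
```

      With stances \(\{1: +1,\ 2: 0,\ 3: -1\}\), the order 1, 2, 3 yields `[+1, +1, +1]` and the order 3, 2, 1 yields `[-1, -1, -1]`, reproducing the bandwagon effect.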

  • Formalities:

    • Logic is based on \(\mathsf{CDL}\) of Baltag and Smets, which includes conditional belief modalities \(B_i^\phi\psi\) (agent \(i\) believes \(\psi\) conditional on \(\phi\))

    • Syntax:

      • Upgrade operator: \([\uparrow_i{\phi}]\psi\) (\(\psi\) is true after agent \(i\) upgrades beliefs by \(\phi\)). Here “upgrade” is conservative belief revision: most plausible \(\phi\) worlds are promoted to be minimal in the plausibility ordering, and the rest of the ordering stays the same

      • Suspension operator: \([\downarrow_i{\phi}]\psi\) (\(i\) suspends belief in \(\phi\)). Similar, but the most plausible \(\phi\) and \(\neg\phi\) worlds are promoted. The result is that the agent neither believes \(\phi\) nor believes \(\neg\phi\)

      • Public record: \(\mathsf{Rec}(i, \phi)\) (agent \(i\) is publicly on the record as a proponent of \(\phi\))

      • \(S \preceq_i^\phi S'\), for propositional \(\phi\) and \(S, S' \subseteq \mathsf{Agt}\) (according to agent \(i\), coalition \(S'\) is at least as testimonially authoritative on \(\phi\) as coalition \(S\))

        • Authors distinguish between testimonial reliability of an agent/coalition (inclination to believe \(\phi\) when the agent/coalition testifies \(\phi\)) and doxastic reliability (inclination to believe \(\phi\) when the agent/coalition believes \(\phi\)).

          Assuming agents are sincere in their testimony, doxastic reliability implies testimonial reliability. But the reverse may not hold in general. Consider an agent who, whilst they are not an expert on all topics, is very careful in public, and checks things thoroughly before going on the record about anything. Then they may be testimonially reliable. But their private beliefs may not be so well-researched, so there is no reason to expect that their beliefs should always be true

      • Testimony operator \(\langle i, !\phi\rangle\psi\) (\(i\) believes \(\phi\), publicly testifies as such, and \(\psi\) holds afterwards)
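      • The upgrade and suspension operators above might be sketched on a finite model where plausibility is a rank function (lower = more plausible). This representation is my own, not the paper's:

```python
def conservative_upgrade(rank, phi):
    """Promote the most plausible phi-worlds to be strictly most plausible;
    the relative order of all other worlds is unchanged."""
    best = min(rank[w] for w in phi)
    new_top = min(rank.values()) - 1
    return {w: new_top if w in phi and rank[w] == best else rank[w]
            for w in rank}

def suspend(rank, phi):
    """Promote the most plausible phi-worlds and not-phi-worlds together,
    so afterwards the agent believes neither phi nor not-phi."""
    not_phi = set(rank) - phi
    promoted = ({w for w in phi if rank[w] == min(rank[v] for v in phi)} |
                {w for w in not_phi
                 if rank[w] == min(rank[v] for v in not_phi)})
    new_top = min(rank.values()) - 1
    return {w: new_top if w in promoted else rank[w] for w in rank}
```

        E.g. with ranks \(\{u: 0, v: 1, w: 2\}\) and \(\phi = \{v, w\}\), upgrading promotes only \(v\) (the best \(\phi\)-world), while suspension promotes \(v\) and \(u\) together so the minimal worlds disagree on \(\phi\).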

    • Semantics:

      • Plausibility orderings over worlds for each agent

      • Public record and \(\preceq_i^\phi\) specified more or less directly in the definition of a model

      • Testimony model change: the only interesting changes are to the plausibility orderings (i.e. agent beliefs). The gist is the following. Suppose \(j\) testifies \(\phi\), and \(i \ne j\). Then

        • If \(i\) judges those on the record for \(\neg\phi\) to be strictly less authoritative than those for \(\phi\), and \(i\) considers it possible that \(j\) believes \(\phi\) and that \(\phi\) is true, then \(i\) upgrades by \(B_j\phi \wedge \phi\)

        • If \(i\) judges those for \(\neg\phi\) to be strictly more authoritative than those for \(\phi\), and \(i\) considers it possible for \(j\) to believe \(\phi\) but for \(\phi\) to be false, then \(i\) upgrades by \(B_j\phi \wedge \neg\phi\)

        • If \(i\) judges those for \(\phi\) and \(\neg\phi\) to be equally authoritative, and considers it possible both that \(j\) correctly believes \(\phi\) and that \(j\) incorrectly believes \(\phi\), then \(i\) upgrades by \(B_j{\phi}\) and suspends belief on \(\phi\)

        • In any other case, \(i\) simply upgrades by \(B_j\phi\)

      • Note the first three cases handle the extremes where it is easy to say something: case 1 expresses a clear win for \(\phi\), case 2 for \(\neg\phi\), and case 3 a “draw”. Case 4 is the fallback case

      • Note the agent always upgrades by \(B_j\phi\), i.e. agents believe that the testimony of others is sincere

    • Trust:

      • Authors introduce \(T_i(j, \phi)\) as an abbreviation for a complicated formula which represents “\(i\) trusts \(j\)’s testimony on \(\phi\)”.

      • This formula holds if:

        • \(i\) considers it possible that \(j\) believes \(\phi\) and that \(\phi\) could be true; and

        • Those on the record for \(\neg\phi\), excluding \(j\), are strictly less authoritative on \(\phi\) than those on the record for \(\phi\), plus \(j\)

      • Note that this depends not only on \(j\), but others who are currently on the record for \(\phi\) or \(\neg\phi\). So the testimony of some \(k \ne j\) can affect \(i\)’s trust in \(j\)
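      • Under a toy weight-based authority ordering (my simplification, not the paper's primitive \(\preceq_i^\phi\); `maybe_true` stands in for the first condition), the trust abbreviation might look like:

```python
def trusts(weights, j, pro, con, maybe_true):
    """T_i(j, phi): the possibility check holds, and those on the record for
    not-phi minus j are strictly less authoritative than those for phi plus j."""
    a_pro = sum(weights[a] for a in pro | {j})
    a_con = sum(weights[a] for a in con - {j})
    return maybe_true and a_con < a_pro
```

        Note how a later testimony against \(\phi\) by some \(k \ne j\) enlarges `con` and can destroy \(i\)'s trust in \(j\), matching the observation above.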

    • Formal modelling of the bandwagon effect mentioned in the introduction.

Trust, Belief and Honesty. (2015)

(Pearce, David and Uridia, Levan)

  • Essentially a small modification (and at the same time a simplification) of Liau’s BIT logic to add axioms expressing sincerity or honesty of agents

  • Criticism of Liau’s framework:

    • Authors’ main point is that there is not enough of a connection between trust and belief in Liau’s framework

    • The only axiom of Liau’s connecting the two goes via the information operator: \(B_i I_{i,j} \phi \wedge T_{i,j}\phi \rightarrow B_i\phi\). In words, if \(i\) believes they have been informed of \(\phi\) by agent \(j\), and \(i\) trusts \(j\) on \(\phi\), then \(i\) believes \(\phi\)

    • The authors simplify the framework by removing the \(I\) operators in their logic, so that no connection remains

    • Authors construct a BIT model which violates some sincerity ideas: \(i\) trusts \(j\) on \(\phi\) but does not believe that \(j\) believes \(\phi\)

    • I am not completely convinced, though, since the BIT logic they criticise is only the base logic of Liau’s work, which is extended in various ways (in the same paper!) to capture more nuanced interactions between trust, information and belief

  • Contribution:

    • Logic L1 adds a “weak sincerity” axiom:

      \[T_{i,j}\phi \rightarrow B_i \neg B_j \neg \phi\]

      i.e. if \(i\) trusts \(j\) on \(\phi\), then \(i\) believes that \(j\) does not believe \(\neg\phi\)

    • L1 also requires the “KS” modal logic axioms: the standard K and B axioms (where B is \(\phi \rightarrow \Box\Diamond\phi\)) plus “w4”:

      \[\Box \phi \wedge \phi \rightarrow \Box\Box\phi\]

      As far as I can see the authors do not really give any doxastic interpretation of w4, instead referring to another paper… In the conclusion of the paper they explain that KS is used to ensure that the sincerity and weak sincerity notions are independent. But this actually seems a bit counterintuitive…

    • L1 is sound and complete wrt Kripke + neighbourhood frames, with belief accessibility relations weakly transitive (\(xRy\), \(yRz\) and \(x \ne z\) implies \(xRz\)) and symmetric. Some conditions are imposed to relate the trust neighbourhood function and belief relations (one comes from Liau’s paper; the other reflects weak sincerity)

    • Logic L2 replaces weak sincerity with the following:

      \[T_{i,j}\phi \rightarrow B_i B_j \phi\]

      i.e. if \(i\) trusts \(j\) on \(\phi\), then \(i\) believes that \(j\) believes \(\phi\)

    • L2 frames are exactly as L1 frames but with a slightly modified condition relating trust neighbourhood functions and belief relations