Story
Core idea: how can we aggregate conflicting information from unreliable sources, and figure out who to trust?
We start by asking how to rank sources by their trustworthiness and pieces of information by their believability.
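One way to make this precise (the notation here is purely illustrative, not necessarily the formalism adopted later): a TD instance records which source claims which fact, and a TD operator \(T\) maps each instance to a pair of rankings,
\[
T : (S, F, R) \;\longmapsto\; \bigl(\sqsubseteq_S,\; \sqsubseteq_F\bigr), \qquad R \subseteq S \times F,
\]
where \(S\) is a set of sources, \(F\) a set of facts (grouped into mutually exclusive objects), and the two preorders rank sources by trustworthiness and facts by believability.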
Normative viewpoint: which “desirable” properties should aggregation methods satisfy?
We stay with the same problem, but look for methods based on argumentation.
Diversion: TD methods often alternate between two steps, ranking sources based on an estimate of the true information and then ranking information based on the assessment of the sources (sketched below).
We isolate one direction of this loop and study the more general problem of bipartite tournament ranking, which we claim is of independent interest in addition to its connection with truth discovery.
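As a concrete illustration of the two-step loop, here is a minimal Sums/HITS-style sketch; the function name, normalisation and stopping rule are assumptions made for illustration only, not a method analysed here.

```python
def iterate_td(claims, n_iters=20):
    """Toy truth-discovery iteration on a bipartite source-fact graph.

    `claims` maps each source to the set of facts it asserts.
    Returns (trust, belief): scores for sources and facts respectively.
    This is a Sums/HITS-style sketch; real TD methods differ in the update
    rules, normalisation and treatment of mutually exclusive facts.
    """
    facts = {f for fs in claims.values() for f in fs}
    trust = {s: 1.0 for s in claims}  # start with uniform trustworthiness

    for _ in range(n_iters):
        # Step 1: a fact is believable if trustworthy sources assert it.
        belief = {f: sum(trust[s] for s, fs in claims.items() if f in fs)
                  for f in facts}
        # Step 2: a source is trustworthy if it asserts believable facts.
        trust = {s: sum(belief[f] for f in fs) for s, fs in claims.items()}
        # Normalise to keep the scores bounded.
        b_max, t_max = max(belief.values()), max(trust.values())
        belief = {f: v / b_max for f, v in belief.items()}
        trust = {s: v / t_max for s, v in trust.items()}

    return trust, belief


# Example: sources s1 and s2 agree on fact a; s3 asserts a conflicting fact b.
trust, belief = iterate_td({"s1": {"a"}, "s2": {"a"}, "s3": {"b"}})
```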
Gap to fill: more results on bipartite tournaments in general, before we focus just on chain editing.
We then focus specifically on chain editing. [Why?]
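For reference, the standard graph-theoretic formulation (stated informally here; the variant actually studied, e.g. whether edits add/delete edges or reverse tournament arcs, may differ): a bipartite graph \(G = (S \cup T, E)\) is a chain graph if the neighbourhoods on one side are totally ordered by inclusion,
\[
N(s_1) \subseteq N(s_2) \subseteq \dots \subseteq N(s_n) \quad \text{for some enumeration } s_1, \dots, s_n \text{ of } S,
\]
and chain editing asks for a minimum-size set of edge edits turning \(G\) into a chain graph; the resulting inclusion order then induces a ranking of the vertices.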
Extra: Generalise the results to non-binary outcomes and/or abstentions.
Gap to fill: make the connection back to TD, i.e. come up with a TD operator based on chain editing and analyse it wrt the axioms.
Reflection: the work so far has used rankings for “trustworthiness”. This gives a relative notion of the trustworthiness of sources, but what does it actually mean?
We shift focus from trustworthiness as a vague concept to expertise, interpreted in a precise sense.
We first explore a notion of expertise through the lens of modal logic with neighbourhood semantics, making connections between expertise and the truthfulness of information (to be revisited in the belief revision work) and with epistemic logic.
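A purely illustrative semantic clause of the kind meant here (the actual language and models may differ): with a set of states \(X\) and a collection \(A \subseteq 2^X\) of propositions on which the source has expertise, one might evaluate an expertise modality \(E\) globally by
\[
M \models E\varphi \quad\Longleftrightarrow\quad \|\varphi\|_M \in A,
\]
a neighbourhood-style condition on the truth set of \(\varphi\) rather than a relational one; truthfulness/soundness of reported information can then be defined relative to \(A\).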
[Why logic? One answer: having logical foundations for expertise will allow us to later make connections with the large body of work on belief revision framed in propositional logic.]
Having explored expertise in some detail, we return to the problem of receiving unreliable information from non-expert sources.
Differences from earlier TD approach: we fix the notion of unreliability as lack of expertise; we formalise the input reports as propositional formulas; we interpret trust as belief in expertise.
As with the earlier work we take a more-or-less axiomatic approach, but this time in the style of belief revision as opposed to social choice.
To change: possibly develop the framework more along the lines of the earlier expertise work (e.g. expertise collections instead of partitions) to emphasise the connection.
Gap to fill: connections with the TD work. To what extent can we encode the TD problem as an instance of the belief change problem? Encoding the input is probably possible: objects \(\to\) cases; facts encoded as propositional variables. The output is trickier: how do we obtain rankings of formulas/sources from belief and knowledge sets? We should also compare the TD axioms with the belief change postulates.
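One possible shape for the input encoding mentioned above (illustrative only): for each object \(o\) and candidate value \(v\), introduce a propositional variable \(p_{o,v}\), so that a source claiming value \(v\) for object \(o\) reports
\[
p_{o,v} \;\land\; \bigwedge_{v' \neq v} \lnot p_{o,v'},
\]
with each object treated as a separate case; the harder question remains how to extract rankings of sources and formulas from the resulting belief and knowledge sets.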
Gap to fill: we have not yet addressed actually finding the truth. The aim is to use the learning-theory-based work (e.g. [BGS19][BGS16][KSH97][GK17]) to explore truth-tracking with unreliable sources.