Social science
Echo chambers on Facebook (2016)
(Quattrociocchi, Walter and Scala, Antonio and Sunstein, Cass R)
Authors look at Facebook communities around scientific news and conspiracy theories in Italy and the US
They find users are polarised: users interact with one or the other, but not both
Some similarities between the two communities:
Post lifetimes are similar
Authors plot the proportion of friends with the same polarisation status against engagement in the community (number of likes on posts). There is a similar positive correlation for both scientific news and conspiracy theories
Some differences:
For conspiracy theories, the lifetime of a post (the time between the first and last user sharing it) is positively correlated with its cascade size (the number of users sharing it); both quantities are made concrete in the sketch after this list
That is, the more users share the post, the longer it stays around (?)
For scientific news there is a peak in lifetime around a cascade size of 100-200 users
Authors write: “a longer lifetime does not correspond to a higher level of interest but most likely to a prolonged discussion within a specialized group of experts”
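To pin down the two quantities, here is a minimal sketch (my own illustration, not the authors' code) that computes lifetime and cascade size from a hypothetical log of (post_id, user_id, timestamp) share events:

```python
from collections import defaultdict

def lifetime_and_cascade(shares):
    """Compute per-post lifetime and cascade size from share events.

    `shares` is an iterable of (post_id, user_id, timestamp) tuples;
    this input format is hypothetical, chosen only for illustration.
    """
    times = defaultdict(list)  # post_id -> share timestamps
    users = defaultdict(set)   # post_id -> distinct sharing users
    for post_id, user_id, t in shares:
        times[post_id].append(t)
        users[post_id].add(user_id)
    return {
        post_id: {
            # lifetime: gap between the first and last share of the post
            "lifetime": max(ts) - min(ts),
            # cascade size: number of distinct users who shared it
            "cascade_size": len(users[post_id]),
        }
        for post_id, ts in times.items()
    }
```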
Authors run sentiment analysis on users' comments
They find that the more comments a user leaves, the more negative their comments are
However, it is not clear to me exactly how they characterise the sentiment of the corpus of a user's comments, and the correlation they show seems quite weak (one plausible reading is sketched below)
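Since the per-user aggregation is left ambiguous, here is one plausible reading, under my own assumptions rather than anything the paper confirms: score each comment with a lexicon-based classifier (VADER, my choice), average per user, and rank-correlate the mean score with comment count:

```python
from statistics import mean

from scipy.stats import spearmanr
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def user_sentiment_vs_activity(comments_by_user):
    """One assumed characterisation: mean VADER compound score per user,
    rank-correlated with how many comments the user left.

    `comments_by_user` maps user_id -> list of comment strings
    (a hypothetical input format, for illustration only).
    """
    analyzer = SentimentIntensityAnalyzer()
    counts, mean_scores = [], []
    for user, comments in comments_by_user.items():
        scores = [analyzer.polarity_scores(c)["compound"] for c in comments]
        counts.append(len(comments))
        mean_scores.append(mean(scores))
    # Spearman rho: does heavier commenting go with more negative tone?
    rho, p_value = spearmanr(counts, mean_scores)
    return rho, p_value
```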
Confirmation bias and debunking:
Few conspiracy theorists interact with debunking posts
Limitations?
Restricted to US and Italy
I imagine the results are domain-specific, i.e. they relate specifically to conspiracy theories and scientific news
Large networks of rational agents form persistent echo chambers (2018)
(Madsen, Jens Koed and Bailey, Richard M and Pilditch, Toby D)
Multi-agent model for formation of echo chambers
Key points:
Information modelled as a number in \([0, 1]\)
Agents are connected in a graph structure. At each time step, agents receive/transmit their beliefs to some of their peers (how many is controlled by a parameter \(\alpha\)). Agent \(i\) accepts incoming information only if it lies within \(\beta\sigma_i\) of \(\mu_i\), where \(\mu_i\) is agent \(i\)'s current belief, \(\sigma_i\) is its uncertainty, and \(\beta\) is a constant parameter.
Agents update their beliefs “in a Bayesian manner”. Even after reading the paper I am still not quite sure exactly how this is done (the sketch after this list assumes one standard conjugate update)
Edges may be added or removed as agents' beliefs diverge from or converge towards those of others. I don't think the mechanism by which this happens is fully explained, though.
Experiments performed using random and scale-free networks
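A minimal sketch of the dynamics as I read them. The acceptance rule is stated in the paper; the conjugate Gaussian update and the reading of \(\alpha\) as a per-step transmission fraction are my assumptions, filling in the parts the paper leaves underspecified:

```python
import random

class Agent:
    """Agent holding a Gaussian belief over information in [0, 1]."""

    def __init__(self, mu, sigma):
        self.mu = mu        # current belief (mean)
        self.sigma = sigma  # current uncertainty (std dev)

    def accepts(self, x, beta):
        # Acceptance rule from the paper: take x only if it lies
        # within beta * sigma_i of the agent's current belief mu_i.
        return abs(x - self.mu) <= beta * self.sigma

    def update(self, x, obs_sigma=0.1):
        # ASSUMPTION: the paper only says beliefs are updated "in a
        # Bayesian manner". A conjugate Gaussian update with known
        # observation noise obs_sigma is one standard choice.
        prior_prec = 1.0 / self.sigma**2
        obs_prec = 1.0 / obs_sigma**2
        post_prec = prior_prec + obs_prec
        self.mu = (prior_prec * self.mu + obs_prec * x) / post_prec
        self.sigma = (1.0 / post_prec) ** 0.5

def step(agents, neighbours, alpha, beta):
    """One time step: each agent transmits its belief to a random
    subset of its neighbours (a fraction alpha, per my reading of the
    transmission parameter), who each accept or reject it."""
    for i, sender in enumerate(agents):
        peers = neighbours[i]  # list of neighbour indices of agent i
        if not peers:
            continue
        k = max(1, round(alpha * len(peers)))
        for j in random.sample(peers, k):
            if agents[j].accepts(sender.mu, beta):
                agents[j].update(sender.mu)
```

One consequence of the assumed conjugate update: each accepted message shrinks \(\sigma_i\), so agents become progressively harder to move, which would be one mechanism by which like-minded clusters lock in.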