Works in progress

Here are abstracts for some of the papers that I'm either working on now or that are under review. I've omitted titles so as not to compromise blind review. Contact me if you'd like a copy of one of the papers. I'm always grateful to receive feedback.

Applied Epistemology

On bots and trolls

Many of our beliefs are acquired through testimony within an environment that includes fake persons—trolls and bots posting under fabricated identities. This paper investigates the epistemic consequences of this fact.

The most immediate epistemic effects of online fake persons concern internet users’ epistemic positions with respect to the identities of those with whom they interact. Trolls and bots might prevent users from knowing whom they are interacting with by tricking them into mistaking fake persons for real persons, by causing wary users to mistake real persons for fake persons, or by interfering with the warrant of users’ identifications of online accounts.

These epistemic effects bear on users’ ability to acquire knowledge from social media posts. Misidentifying trolls and bots as real persons renders a user prone to being further misinformed by fake persons’ posts. Additionally, a user’s ability to acquire knowledge through online testimony is plausibly compromised if she fails to know that her sources are real human persons. Finally, misidentifying real persons as fake persons may render a user hesitant to believe accurate claims.

Fake persons with convincing fabricated identities are likely, in the long run, to do further epistemic damage. The behavior of effectively disguised fake persons will be misconstrued as evidence of the unreliability of real persons. Because the acquisition of knowledge from testimony requires that one lack reason to believe one’s sources are unreliable, the bad epistemic behavior of fake persons might undermine the testimonial transmission of knowledge. Even if this effect is not general, fake persons might undermine knowledge transmission by particular individuals. For example, by posing as members of a political party, fake persons might interfere with party members’ ability to transmit warrant and make users reluctant to believe party members’ claims.

Finally, fake persons may compromise the reliability of real persons, both through the dissemination of misinformation and through less obvious mechanisms. Through the strategic use of likes, shares, and follows, fake persons can steer real human users toward posting certain kinds of content. For instance, fake persons might encourage users to post political misinformation by liking and sharing such content. Such feedback is likely to be perceived both as rewarding and as evidence of the accuracy of the content. Fake persons can thereby simultaneously misinform real users about the plausibility of their own posts and encourage the posting of further misinformation.

On conspiracy theories

Particularists maintain that conspiracy theories are to be assessed individually, while generalists hold that conspiracy theories may be assessed as a class. This paper seeks to clarify the nature and importance of the debate between particularism and generalism, while offering an argument for a version of generalism. I begin by considering three approaches to defining conspiracy theory and offer reasons to prefer an approach that defines conspiracy theories in opposition to the claims of epistemic authorities. I argue that particularists rely on an untenably broad definition of conspiracy theory. Then, I argue that particularism and generalism are best understood as constellations of theses, rather than as a pair of incompatible theses. While some particularist theses are highly plausible, I argue that one important particularist thesis is false. The argument for this conclusion draws on the history of false conspiracy theories. I then defend this conclusion against a pair of potential objections.

On deepfakes

A growing number of epistemologists and other commentators have expressed concerns about the likely epistemic effects of deepfakes. But while such fears are widespread, the precise conditions under which deepfakes threaten knowledge have not been established. This is an important oversight, because without understanding these conditions we cannot know how best to counter the epistemic threat of deepfakes. The purpose of this paper is to describe the conditions under which deepfakes threaten knowledge. I argue that deepfakes are pernicious, in part, because they need not exist in order to threaten knowledge.

I begin with a broad overview of the pathways, some more straightforward than others, by which deepfakes can threaten knowledge. I then consider the view that it is the actual existence of deepfakes that threatens knowledge, followed by the rival view that it is instead the ability to generate deepfakes that does so. Next, I argue against both views by drawing on considerations from the epistemology of testimony. In the final substantive section, I present a positive account of the conditions under which deepfakes threaten knowledge. According to this proposal, what threatens the acquisition of knowledge from video footage within a given environment is the propensity of deepfakes to exist in that environment.

On disinformation

Existing analyses of disinformation tend to embrace the view that disinformation is intended or otherwise functions to mislead its audience, that is, to produce false beliefs. I argue that this view is doubly mistaken. First, while paradigmatic disinformation campaigns aim to produce false beliefs in an audience, disinformation may in some cases be intended only to prevent its audience from forming true beliefs. Second, purveyors of disinformation need not intend to have any effect at all on their audience’s beliefs, aiming instead to manipulate an audience’s behavior through alteration of sub-doxastic states. Ultimately, I argue that attention to such non-paradigmatic forms of disinformation is essential to understanding the threat disinformation poses and why this threat is so difficult to counter.

On epistemic environments

A range of artificial entities—including but not limited to texts, maps, photographs, and videos—play important roles in shaping human beliefs. Together, these entities constitute an artificial epistemic environment that, to some degree, reflects reality. This paper emphasizes the importance of the artificial epistemic environment to human epistemic prospects and offers some reasons to expect that this environment systematically misrepresents reality. First, the artificial epistemic environment includes a great deal of misinformation. Second, both on social media and in science, the preference for novel and surprising content tends to distort the environment’s representation of reality. Finally, many important components of the artificial epistemic environment—especially communications that occur over social media—are accessed in the absence of the intonations, facial expressions, and gestures that facilitate the identification of reliable testimony in face-to-face contexts.

Skepticism

On the simulation argument

David Chalmers suggests that the probability that we are sims is around 25%. Nick Bostrom puts the odds at around 20%. Both reach this conclusion through the simulation argument, which can be roughly summarized as follows. Either highly technologically advanced “posthuman” civilizations will be unable to create many simulations involving conscious beings, such civilizations will be unwilling to do so, or it is extremely likely that we are sims. Because there is no reason to strongly favor either of the first two disjuncts over the third, it is fairly probable that we are sims. I argue that this conclusion is premature, on the grounds that even posthuman civilizations capable of producing many simulations involving conscious beings would be unwilling to do so.
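For concreteness, the indifference step can be reconstructed roughly as follows (this is my gloss, not a calculation either author endorses in this form). Writing D1, D2, and D3 for the three disjuncts and assigning each a credence of roughly one third:

% Rough reconstruction: indifference over the three disjuncts,
% with our being sims treated as near-certain conditional on D3.
\[
P(\text{we are sims}) \approx P(D_3) \cdot P(\text{we are sims} \mid D_3) \approx \tfrac{1}{3} \cdot 1 \approx 33\%.
\]

The published figures of 25% and 20% presumably reflect each author’s further adjustments to this baseline.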

As Bostrom notes, one might in principle argue that posthuman civilizations would be unwilling to create conscious sims for moral reasons. However, it is not clear that there are moral reasons not to create conscious sims or that posthuman civilizations would be compelled by such reasons. I instead argue that posthuman civilizations would be unwilling to create complex simulations involving conscious beings for self-interested reasons.

First, no posthuman civilization could be sure that it exists in base, non-simulated reality. The inability to rule out being in a simulation is inevitable, not contingent on technological sophistication. Second, for anyone uncertain of whether they exist in base reality, the creation of highly complex simulations poses a potential existential threat. The computational demands of running a complex simulation might cause the “crash” not only of that simulation but also of the simulation hosting it, if there is one. Thus, because no posthuman civilization will be able to rule out being simulated, and because any such civilization’s survival will attest to its caution regarding existential risks, posthuman civilizations are unlikely to run simulations involving conscious beings.