Is Authenticity an Effective Antidote to Misinformation?

Authors

  • Jaap-Henk Hoepman, Radboud University, Nijmegen, Netherlands; Karlstad University, Karlstad, Sweden

1 Introduction

False and misleading information (“misinformation” for short) is spreading more easily than ever (Nature editorial, 2024). A recent paper (Jacobs, 2024) argues that the growing impact of misinformation is caused by an “authenticity crisis”: we can no longer be certain of the source of a particular message 1 or determine whether it has been tampered with. In other words, we are also uncertain about the integrity of the message. Properly authenticating messages would solve both problems.

Jacobs (2024) rightly notes that authenticity, as defined in this information security setting, is not the same as veracity: a clear lie (“1+1=3”) may be authentic if someone chooses to utter it. Yet, he proposes to use authenticity as a proxy for veracity based on the belief that the ubiquitous use of (cryptographic) techniques to establish the authenticity of a message (e.g., via a signature) would help people make their own credibility judgments. For this, Jacobs proposes to let all users – and in particular institutions and authorities – sign their online messages and to give the users reading these messages the necessary means to verify them (see Section 3).

In this paper, we study this line of thought in greater detail. As we discuss in Section 4.1, the two main theories explaining people’s susceptibility to misinformation, namely, the inattention account and the motivated cognition account, offer little evidence that assurance about the source would be of much benefit. At a more fundamental level, and as we explain in detail in Section 4.2, identity is a poor proxy for veracity: misinformation is also spread through channels that are supposedly trusted or mimic reliable outlets. Moreover, both signing messages (Section 4.3) and verifying the signatures on messages (Section 4.4) are challenging in practice. Deep fakes pose different challenges, which we describe in Section 4.5. Lastly, as we discuss in Section 4.6, ubiquitous digital signatures have negative side effects, including diminishing trust in anonymous or pseudonymous messages, putting anonymity or pseudonymity itself in a bad light, and reducing the set of possible sources for news to those organizations that have the resources to reliably sign their publications. We conclude (Section 5) that authenticity is not effective in countering misinformation and that the notion of an authenticity crisis is exaggerated.

2 Terminology

We first introduce and clarify some of the terminology and concepts this paper builds on.

2.1 Misinformation and Deep Fakes

Misinformation is an elusive concept with numerous meanings (Altay et al., 2023). Following van der Linden (2022), we broadly define misinformation as false or misleading information masquerading as legitimate information, regardless of intent. We avoid the popular term “fake news” and instead favor this more scientifically established term, which encompasses disinformation (intentionally false or misleading statements) as well as proper misinformation (unintentionally false or misleading statements). Again, intent is considered irrelevant, and we are not solely interested in pure falsehoods.

The recent surge in artificial intelligence (AI) adds “deep fakes” to the mix, where an artificially manipulated picture, video, or audio message is used to create the illusion that something happened that did not actually happen, or that someone did or said something they did not do or say. Deep fakes are a particular type of disinformation, which deserves independent analysis (see Section 4.5).

An understanding of how and why misinformation propagates and how its circulation and impact could be curbed is developing slowly (Lazer et al., 2018; Righetti, 2021). Although there is every reason to be concerned, some argue that alarmist narratives about the “infodemic” (Simon & Camargo, 2023) exaggerate the issue (Altay et al., 2023). A recent survey (Altay et al., 2023) found reasonable agreement among 150 experts that factors like motivated reasoning and political partisanship or inattention, lack of cognitive reflection, and repeated exposure influence why people believe and share misinformation (see Section 4.1 for details). Interestingly enough, lack of education and lack of access to reliable news were considered the least likely determinants.

Following an epidemiological model (Kucharski, 2016), the impact of a particular piece of misinformation depends on three factors: the way it spreads (i.e., how many people it reaches, how fast it travels, and how long it keeps circulating), its persuasiveness (i.e., how many people initially believe it to be true), and recovery (i.e., the extent to which people can be convinced afterwards that the information was actually false; van der Linden, 2022). Persuasiveness of course influences spread because people are generally less likely to repost or forward things they do not believe. Thus, reducing the impact of misinformation requires limiting its spread, diminishing its persuasiveness, or increasing recovery. Jacobs’s (2024) proposal focuses on reducing persuasiveness.
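Purely as an illustration of this epidemiological framing – the mapping below is not a formula from van der Linden (2022) or Kucharski (2016) – the three factors line up with the standard basic reproduction number from epidemiology:

```latex
% Illustrative analogy only: a piece of misinformation keeps circulating while R_0 > 1.
R_0 \;=\;
  \underbrace{c}_{\text{spread: contacts reached per sharer}}
  \times
  \underbrace{p}_{\text{persuasiveness: probability a contact believes and reshares}}
  \times
  \underbrace{D}_{\text{recovery: time until the belief is corrected}}
```

In this picture, Jacobs’s (2024) proposal tries to lower p, the platform-level interventions discussed in Section 5 target c, and debunking targets D.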

2.2 Digital Signatures

Jacobs (2024) proposes using digital signatures as a cryptographic technique to establish the authenticity of a message. Although we will not go into great technical detail, understanding how they work and how they help to establish message authenticity is essential to the discussion that follows.

Digital signatures are based on public key cryptography. This distinguishes private keys, which belong to a particular user and are to be kept secret and known only to this user, from public keys, which are public and recorded as belonging to an identified user. Each public key belongs to one particular private key, and vice versa. In other words, they form pairs (Menezes et al., 1996).

A digital signature for a message is a small digital artefact computed using the contents of the message and the private key that belongs to the signer. It depends on both this private key and the message itself. As a result, the signature becomes invalid if any of these change even a little bit. The digital signature is attached to the message it signs. To verify a digital signature attached to a message, one needs to know the public key corresponding to the private key used to sign it. Given a message and a public key, the verification procedure will determine whether the signature is valid, that is, whether it was signed using the private key belonging to the public key. Without knowing the private key, one cannot create a valid signature over a message. Provided that private keys are indeed unique and kept private by their users, signatures authenticate the source of the message and preserve message integrity, given that changing the message invalidates the signature.
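As a minimal, purely illustrative sketch (not part of Jacobs’s proposal), the snippet below uses the Ed25519 signature scheme from the Python cryptography library; it exhibits exactly the properties just described: the signature depends on both the private key and the message, and verification fails as soon as either changes.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # published under the signer's identity

message = b"Polling stations close at 21:00."
signature = private_key.sign(message)        # depends on both the key and the message

public_key.verify(signature, message)        # passes: source and integrity check out

try:
    public_key.verify(signature, b"Polling stations close at 12:00.")
except InvalidSignature:
    print("Invalid: the message was altered or signed with a different key.")
```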

2.3 Digital Signatures in Practice

This basic theory of digital signatures depends on assumptions that are relevant in practice.

Signing is a complex mathematical operation that is performed in software running on a computing device controlled by the signer. We call this the signing device. The following assumption is key.

  1. Users securely store and use their private keys to ensure that they are indeed private.

If they do not, others can sign messages for them. Often, the signing device includes a secure area for storing the private key. To guarantee the authenticity of the message, the design and implementation of the signing device must ensure the following:

  2. The exact message that the user feeds into the signing device is indeed signed by it, that is, what you see is what you sign (WYSIWYS).
  3. The private key used to sign the message indeed belongs to the user operating the signing device to sign it. This is called holder binding.
  4. The signing device never signs messages by itself, except when explicitly instructed to do so by a user operating it.

If any of these assumptions do not hold, the signature cannot be trusted in practice.

Verification of a digital signature is, again, a complex mathematical operation that is performed in software on the verifying device. In theory, the verifying device is given the message, the signature, and the public key of the user assumed to have signed the message. In practice, however, this is not how it works: the message and/or the attached signature are assumed to contain the identity of the presumed signer 2 , and the verifying device is given access to a public registry of known public keys and their owners to look up the public key that must be used to verify the signature. In this setting, the verifying device returns the identity of the signer or an error if the signature failed to verify. To guarantee authenticity, the design and implementation of the verifying device must ensure the following:

  5. The exact message and signature that the user feeds into the verifying device are verified by it, that is, what you see is what you verify (WYSIWYV).
  6. The public key used to verify the signature over the message indeed belongs to the user who claims to have signed it.

The latter is a particularly important assumption and difficult to guarantee in practice. It very much depends on the integrity of the registries maintaining the link between user identifiers and their public keys, which are sometimes maintained through a hierarchy of certificates called the public key infrastructure (Chokhani, 1994). It also relies on users knowing and diligently verifying the identifier of the purported signer.
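The registry-based verification flow described above can be sketched as follows. The in-memory dictionary standing in for a public key registry, and the names used, are hypothetical simplifications; the integrity of that mapping is precisely what Assumption 6 demands.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Stand-in for a public key infrastructure: identity -> registered public key.
registry: dict[str, Ed25519PublicKey] = {}


def verify_message(claimed_signer: str, message: bytes, signature: bytes) -> str:
    """Return the signer's identity if the signature verifies under the registered key."""
    try:
        public_key = registry[claimed_signer]        # look up the presumed signer
    except KeyError:
        raise ValueError(f"no public key registered for {claimed_signer!r}")
    try:
        public_key.verify(signature, message)        # raises InvalidSignature on failure
    except InvalidSignature:
        raise ValueError("signature does not verify under the registered key")
    return claimed_signer


# Example: a registered news outlet signs an item; a reader's device verifies it.
outlet_key = Ed25519PrivateKey.generate()
registry["example-news.example"] = outlet_key.public_key()
item = b"Headline: ..."
print(verify_message("example-news.example", item, outlet_key.sign(item)))
```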

2.4 Attribute-Based Signatures

Attribute-based signatures (Maji et al., 2011) are a special kind of signature that are not linked to a particular individual, but only to a particular attribute that belongs to the signer (and that they may share with other people). Examples of attributes include being a medical doctor, a police officer, or a notary, having passed accreditation, or working as a journalist for The Guardian. Jacobs proposes using such signatures to allow users to sign their messages in different roles, for example, as a representative of a particular company or organization.

Attribute-based signatures derive from attribute-based credentials (Camenisch & Lysyanskaya, 2001; Chaum, 1985; Rannenberg et al., 2014), which are issued by entities that are trusted to vouch for their validity. These entities are called issuers. For instance, The Guardian could issue credentials for its journalists, and doctors could obtain theirs when enrolling in the medical register. A message signed “by a medical doctor” could be signed by any person holding a valid medical doctor credential at the time of signing, but not by any other entity not holding this credential. To verify an attribute-based signature, you need to know the public key of the issuer of the credential containing the attribute. The following additional assumptions are relevant in practice:

  7. Issuers only issue credentials with attributes they can vouch for, and only to users who qualify. Users know which attributes issuers can vouch for.
  8. Users securely store their attribute-based credentials and do not share them with others.
  9. The public key used to verify the credential indeed belongs to the issuer who claims to have signed it.
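There is no off-the-shelf library call for attribute-based signatures, so the sketch below is a hypothetical, certificate-style approximation built from ordinary signatures. It illustrates the two-step verification chain (the issuer vouches for the attribute, the holder signs the message), but unlike real attribute-based signatures (Maji et al., 2011) it does not hide which credential holder signed.

```python
from dataclasses import dataclass

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


@dataclass
class Credential:
    attribute: bytes          # e.g. b"medical doctor"
    holder_public: bytes      # raw public key of the credential holder
    issuer_signature: bytes   # issuer's signature over (attribute, holder key)


def issue(issuer_key: Ed25519PrivateKey, attribute: bytes,
          holder_public: Ed25519PublicKey) -> Credential:
    raw = holder_public.public_bytes(Encoding.Raw, PublicFormat.Raw)
    return Credential(attribute, raw, issuer_key.sign(attribute + raw))


def verify(issuer_public: Ed25519PublicKey, message: bytes,
           signature: bytes, credential: Credential) -> bytes:
    # 1. The issuer really vouched for this (attribute, holder key) pair.
    issuer_public.verify(credential.issuer_signature,
                         credential.attribute + credential.holder_public)
    # 2. The credential holder signed this exact message.
    holder = Ed25519PublicKey.from_public_bytes(credential.holder_public)
    holder.verify(signature, message)
    return credential.attribute   # what the verifier learns: "signed by a <attribute>"


# Example: a medical register issues a credential; a doctor signs in that role.
register_key = Ed25519PrivateKey.generate()
doctor_key = Ed25519PrivateKey.generate()
credential = issue(register_key, b"medical doctor", doctor_key.public_key())
message = b"Measles vaccination is safe and effective."
signature = doctor_key.sign(message)
print(verify(register_key.public_key(), message, signature, credential))
```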

2.5 Watermarks

A signature can be removed from a message. This is a problem if there are incentives to remove it, for example, if the signature signals a property the bearer wishes to deny. The classical case is that of the owner of a copy of some copyrighted digital artefact who wants to create illegal copies for distribution. Merely signing the digital movie or audio recording to allow the owner of the original copy to be traced would be useless, seeing as the signature can simply be omitted. To enforce such forms of digital rights management, so-called digital watermarks have been proposed (Cox et al., 2008). These are essentially digital signatures imperceptibly woven through the message. They can be detected by special software but do not interfere with the normal use of the picture, movie, or piece of music. Most importantly, it is difficult (though certainly not impossible, see Zhao et al., 2024) to remove them from the message because of the secret way they are embedded in it. In the context of this paper, watermarks have been proposed to flag AI-generated content (see Section 4.5).
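As a toy illustration of “weaving a mark through the content,” the sketch below hides bits in the least significant bits of an image’s pixels. Real watermarking schemes (Cox et al., 2008) are keyed, spread across the whole signal, and far more robust than this deliberately naive example.

```python
import numpy as np


def embed_lsb(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Embed a bit string into the least significant bits of the first len(bits) pixels."""
    marked = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit   # overwrite only the lowest bit
    return marked.reshape(pixels.shape)


def extract_lsb(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Read the watermark back out of the lowest bit of each pixel."""
    return [int(p & 1) for p in pixels.ravel()[:n_bits]]


image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in for a real image
mark = [1, 0, 1, 1, 0, 0, 1, 0]                                 # e.g. bits identifying the generator
assert extract_lsb(embed_lsb(image, mark), len(mark)) == mark   # imperceptible, yet detectable
```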

3 Summary of Jacobs’s Argument

Misinformation easily spreads on social media. A particularly sensitive issue is the use of AI to create deep fakes. Misinformation and deep fakes are challenging to identify because our increasing reliance on digital forms of communication makes it hard to establish the authenticity (i.e., the source and integrity) of a message. This is what Jacobs (2024) calls the authenticity crisis.

To overcome the authenticity crisis and curb the impact of misinformation, Jacobs (2024) advocates for the ubiquitous use of digital signatures. The idea is to let everyone sign all their online messages and to require all apps and browsers to verify the signatures on these messages, showing the status of the verification using a check mark in a protected area. Authorities could sign as legal entities, while people could either sign in a personal capacity or as members of a professional group, using attribute-based signatures. This could help users, for example, to determine whether a news story really was published by the New York Times or confirm that a warning was issued by the national health authorities. As Jacobs rightly claims, this allows them to identify fabricated messages from fabricated sources. On this basis, they can then judge the veracity of the messages they receive.

To be clear, Jacobs (2024) correctly observes that authenticity is not the same as veracity: a clear lie (“1+1=3”) may be authentic if someone chooses to utter it. Instead, he proposes using authenticity as a proxy for veracity, as a signal on which to base veracity decisions. The method he presents would make it much easier for users to reliably verify the authenticity (i.e., the source and integrity) of the messages they receive. This could be helpful because the ordinary cues we use to establish authenticity no longer work in the digital domain.

This approach makes sense if misinformation is considered to be “fabricated information that mimics news media content in form but not in organizational process or intent” (Lazer et al., 2018, p. 1). 3 That is, “misinformation producers do not adhere to editorial norms and… the defining attribution of ‘fake-ness’ happens at the level of the publisher and not at the level of the story” (van der Linden, 2022, p. 2). A clear sign authenticating a reputable news outlet that abides by a proper editorial process as the source of a message would definitely help.

For deep fakes, Jacobs (2024) suggests obliging AI generation tools to embed a watermark in their output as a marker of their synthetic (“fake”) origin. While the presence of a signature could increase confidence in the content of a message, a watermark would be a signal to distrust it. As watermarks cannot easily be detached from a message, this “stain of syntheticness” cannot easily be washed off. Watermarking may thus be a welcome tool for establishing whether content on social media is fake or not.

According to Jacobs (2024), this authenticity system also strengthens institutions and authorities if they systematically sign their messages online. Provided that these organizations tightly control who can sign on their behalf, citizens can check that these messages really came from them and, thus, carry a certain level of trustworthiness. As Jacobs notes, “this only works if people have a certain level of trust in institutions in the first place” (Jacobs, 2024, p. 3). Moreover, he claims, it allows institutions to deny that a specific fake message came from them by pointing to the absence of a valid signature. If organizations systematically sign all their messages, it follows that a message without their signature cannot have originated from them.

Jacobs (2024) observes that adding another digital signature to a message can signal endorsement. By signing a message instead of simply liking or reposting it, one can express agreement with its content. Again, this can be used by others to judge the veracity of the message. Intermediaries such as newspapers and TV stations can strip the original signatures from messages to protect their sources while adding their own to vouch for their content.

To be clear, Jacobs (2024) notes that misinformation is a complex issue and that his proposal to use digital signatures will by no means solve all these problems. However, he envisions a fundamental role for signatures in establishing trust within digital environments, in particular to make public institutions and their communications more recognizable and, hence, more trustworthy.

4 Critique

It is worth noting that in many ways, existing social networks already provide a certain level of authenticity. Many of them offer verified accounts, where the identity of the account holder has been established in more or less reliable ways. These accounts display a special badge or check mark in their profile. Moreover, the social media feed is typically accessed over a secure connection that also offers authenticity and integrity guarantees. When combined, these two measures already provide a decent guarantee of the immediate source of any message encountered on social media. However, this form of authenticity is somewhat implicit. Messages on these protected social media feeds are not individually signed and, therefore, do not retain their authenticity when they are taken out of the feed. Nonetheless, depending on how forwarding or reposting is implemented, sharing within the same social network preserves authenticity in most cases.

Social network companies typically are in a much better position to verify the identity of their accounts than individual users (as Jacobs proposes), but these trust badges have not proven very useful. Untrained users respond unpredictably to these identity cues, while trained users who have had the meaning of such badges explained to them make significant reasoning errors, like trusting any message from a verified account. This is also the case with the attribute-based signature approach advocated by Jacobs, where people sign in their professional capacity. As Geels et al. (2024) observed, “the prevalence of these two reasoning errors seem[s] especially harmful in the light of misinformation” (p. 21). Therefore, there are reasons to be skeptical of the proposed approach.

4.1 Does Authenticity Mitigate Susceptibility to Misinformation?

We consider whether the solution proposed by Jacobs (2024) would reduce the impact of misinformation and could help strengthen the status of public institutions, even in theory. Jacobs claims that digital signatures allow citizens to identify misinformation. But can they really? And will they?

Two essentially competing theories explain why people are susceptible to misinformation: the inattention account and the motivated cognition account (van der Linden, 2022). The former posits that social media users are bombarded with emotionally charged news content online and have little time to properly reflect on its veracity and make deliberate decisions about whether to believe it or not. The latter claims that information deficits or a lack of reflective reasoning are not the primary driver of susceptibility to misinformation. Instead, it argues that deeply held political, religious, or social beliefs determine which media content a user believes and endorses.

According to the inattention account, a clear signal that can be used instantly to judge the veracity of a message would help reduce the impact of fake news. This is especially the case if very little cognitive effort is necessary to make the (split-second) decision and if the signal is strong enough to compete with the emotional charge of the message. Evidence suggests that people do use information about the source of a piece of information to judge its credibility (Hilligoss & Rieh, 2008). In particular, people are known to use usernames as a heuristic to assess the credibility of social media posts (Morris et al., 2012). Accordingly, in theory, employing signatures to allow users to quickly determine the source and integrity of a message could help, provided that the identity of the source is a clear indicator of the veracity of the message. Unfortunately, this is not always the case. Moreover, the extra “indirection” of using identity to determine the veracity of the message might already be too heavy a cognitive burden.

For example, if the message is signed by an unknown person or institution, the signature does little to help determine its veracity. Hardly anybody outside the Netherlands would be able to judge the veracity or bias of a news item published by NRC, for example. Similarly, people outside the United States would struggle to say whether the New York Post is a reliable source of information. This is even truer when the signer is an unknown natural person. Even if the person or institution is known, some people deliberately make the wrong credibility decision. For instance, Andrew Tate (or any other high-profile influencer) cannot be considered a reliable source of information, yet many of his followers immediately believe anything he says.

According to the motivated cognition account, users are not necessarily interested in the veracity of a message as long as it confirms their beliefs. This applies especially to outrage-evoking misinformation, which “may be difficult to mitigate with interventions that assume users want to share accurate information” (McLoughlin et al., 2024, p. 1). Therefore, adding signatures to allow users to establish the source of a message is not helpful; users would simply ignore it unless the signature corresponds to a person (e.g., Trump, Harris) or institution (e.g., Russia Today, The Wall Street Journal) known to match their strongly held beliefs. The measure may even be counterproductive if the goal is to increase the trustworthiness of institutions: if users do not agree with or do not believe the messages these institutions sign, they might perceive these signatures as signals to distrust these institutions even more.

We conclude that the effect of the lack of authenticity on the impact of misinformation is minor and present only where the inattention account of misinformation holds. Moreover, although signatures might help increase the trustworthiness of institutions in some cases, a similar argument can be made that they could lead to a decrease in their trustworthiness.

4.2 Fundamental Issues

Independent of which theory of susceptibility to misinformation one subscribes to, using authenticity as a proxy for veracity is problematic on a more fundamental level as well. This is not an entirely new insight: in the study of online community behaviour, the assumption that not being anonymous mitigates harassment is known as the Real Name Fallacy (Matias, 2017). Where misinformation is concerned, additional factors are at work.

Misinformation is also spread through supposedly trusted channels, like newspapers or scientific journals, by more or less trustworthy people, like journalists or academics. Real events or scientific results can be framed in different ways, to the point of becoming misleading. If a few scientists oppose the scientific consensus on a specific topic (be it the climate crisis, the benefits of vaccination, or the dangers of smoking), this could be the start of a paradigm shift in science (Kuhn, 1962) or simply misinformation disguised as such. The point is that looking at their signatures offers no way to know.

Using authenticity as a proxy for veracity depends on what it means for someone to sign a message. Does it really always mean “I believe this message to be true”? Or does it mean “I endorse this message”? Or “I like it”? Or “I got paid handsomely for it”? Or “I clicked the repost button and this automatically signed it”? This cannot be ascertained purely from the signature and instead requires one to know the signing person or organization. It also depends on the extent to which someone signing a message can be held accountable. With higher accountability comes greater responsibility for the veracity of the message. Further, the more seamless and automated the act of signing is, the less the generated signature says about intent. This makes it harder to overcome practical hurdles (see Section 4.3).

It also indicates that to use authenticity as a proxy for veracity, one also needs to “know” the signer. As discussed in Section 4.4, users must be able to reliably establish the identity of the signer, and this identity must be meaningful to them. That is, it must allow them to make a valid veracity decision based on it. This is not necessarily the case. Often, users do not know the persons or organizations whose accounts they follow on social media. Or they may merely think they do because the accounts use the names of well-known people or organizations (or deceptively similar names). Even when they do know them, they may not be able to tell whether the signer can be trusted to make valid statements about the topic of the message.

4.3 Practical Issues with Signing Messages

The analysis above assumes that it is easy, in practice, to reliably sign messages (see the assumptions in Section 2). For institutions that typically have a professional communication department that maintains the official website and all online accounts, adding a digital signing step to an already standardized publication process should be feasible, although integrated tools to this end are currently lacking. The private signing key should be protected with the same vigilance as access to the web server or online accounts. For sufficiently large and professionally run institutions, this should be doable. However, security experts have observed that the more a signature key is used, the more difficult it is to prevent abuse (Abelson et al., 2015).

For individuals, this is not the case. Most people have never consciously signed a digital document before. If they have, it is generally through a web-based process like DocuSign, 4 which does not actually generate digital signatures involving the private key of the user. This means that individuals currently lack the infrastructure (signing keys, hardware, and software) to individually sign messages. Rolling out this kind of infrastructure would require a significant effort, especially considering that users typically use different platforms (e.g., their browser or mobile apps) to access their social media accounts. It would be especially challenging to integrate signing functionality securely into web browsers while ensuring that the private key stays private (Assumption 1) and that messages are signed only when instructed by the user (Assumption 4). The most promising approach (advocated by Jacobs, 2024) would involve an app on users’ mobile devices, like a digital identity wallet. Such wallets have been proposed, for example, in the EU (Regulation (EU) 2024/1183 (eIDAS2), 2024). Wallets could also be used to generate attribute-based signatures if the wallet supports attribute-based credentials. Even then, a considerable effort is needed to integrate such signatures into the current browser and social networking app ecosystem.

4.4 Practical Issues with Verifying Signatures on Messages

The biggest hurdle in practice is not the signing of a message but what happens with the message once it is signed. In particular, the reliable verification of the signatures poses practical problems in light of the assumptions stated in Section 2.

First, all means by which people consume media should allow the easy verification of signatures on any message, hence Jacobs’s (2024) suggestion to add support for this in apps and browsers. This requires a significant effort, notably a global trusted infrastructure for securely maintaining and distributing public keys for all users and organizations that wish to sign their messages (see Assumption 6). This public key infrastructure must properly verify the identity of users and organizations that submit their public keys and then securely maintain the link between identity and public key for as long as the public key is valid. The first part, especially, is a challenge: it assumes all users have a global and unique identity that (1) can be reliably established and (2) is meaningful to users who later want to verify signatures.

Such a Public Key Infrastructure (PKI) does not really exist at this scale, although the introduction of more modern digital identity wallets – some with signing capabilities, like the European identity wallet (Regulation (EU) 2024/1183 (eIDAS2), 2024) – will facilitate this in the future. If we limit the scope to only allow organizations to sign messages, the PKI for the web (used to authenticate websites) could be used. The additional advantage is that the issue of properly naming organizations is solved (somewhat) by using the web domain name as the organizational name. 5
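As an illustration of what the web PKI already provides, the sketch below (a hypothetical helper; it requires network access) fetches and validates the TLS certificate a domain serves. The subject and issuer fields show the domain-to-organization binding that such a scheme would lean on – with the caveats about look-alike domains discussed in the footnote.

```python
import socket
import ssl


def served_certificate(domain: str) -> dict:
    """Fetch and validate the TLS certificate the web PKI serves for a domain."""
    context = ssl.create_default_context()              # uses the system's trusted CAs
    with socket.create_connection((domain, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=domain) as tls:
            return tls.getpeercert()                     # returned only after validation


cert = served_certificate("theguardian.com")
print(cert["subject"])   # whom the certificate was issued to
print(cert["issuer"])    # which certificate authority vouches for that binding
```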

Moreover, Jacobs’s (2024) proposal does not engage with the reality of how (fake) news is spread on social media: it often involves excerpts, pictures, or screenshots of documents rather than the full original document. All these actions either remove the signature altogether or render it invalid. As a result, the signal to distinguish real news and fake news quickly evaporates as soon as genuine news hits the messy practice of news propagation through social media.

4.5 Deep Fakes

Deep fakes, like disinformation, are not authentic, but in a fundamentally different way. Whatever they portray or represent never actually happened, but the way the message (e.g., an image, a video, or an audio fragment) is synthesized abuses our innate mechanisms for establishing authenticity, such as relying on a familiar face, voice, or even document design.

Jacobs (2024) proposes using watermarks to flag deep fakes, making it mandatory for generative AI tools to attach such watermarks to the (potentially deep-fake) output they generate. The idea is that these seals of syntheticness cannot be removed easily, and, as a result, the corresponding output cannot be presented as “real.” 6

Although official generative AI tools can legally be compelled to add a watermark, this is much more challenging to achieve for open-source models. Those who maintain these open-source models may not be legal entities and may reside in jurisdictions that are out of reach. In any case, their open-source nature means it should not be too difficult to remove the watermarking process from the system or to force it to use the wrong key to evade detection.

Moreover, a more fundamental issue of efficacy is at stake. Kapoor and Narayanan’s (2024) study of the use of deep fakes during the 2024 US election revealed that “focusing on the demand for misinformation rather than the supply is a much more effective way to diagnose problems and identify interventions”. Even “cheap fakes,” that is, obviously fake photos and videos, are effective because many people share media as a form of social signaling: I send you a message just to show that we are on the same team. Whether it is true, misinformation, or actual propaganda is of secondary importance (Schneier, 2025).

Finally, it is important to note that it is certainly possible for a sufficiently determined adversary to remove watermarks from the output generated by AI without being noticed (Zhao et al., 2024). This limits the usefulness of this countermeasure, in particular as concerns the more pernicious spread of deep fakes initiated by state actors, which have the resources and motivation to expend the required effort.

4.6 Negative Side Effects

Using authenticity as a proxy for veracity to reduce the impact of misinformation also has negative side effects. First, if people are trained to verify the authenticity of messages and base their veracity decisions on the identity of the sender, this instills in them the reverse of this rule: the absence of a signature implies that the message is more likely false. As a result, anonymous or pseudonymous communications become (even) less relevant. In fact, it might put pseudonymous or fully anonymous communication in a bad light. This is problematic because research on online harassment has shown that forcing people to be identifiable online might actually put them at risk (Matias, 2017). This is especially the case for marginalized people and people with extreme opinions, who may therefore become reluctant to engage in the online debate. Whistleblowers will also suffer, given that they need to stay anonymous.

Second, using authenticity as a proxy for veracity reduces the set of possible sources of news to those that can sign and whose signatures can be reliably verified. This creates an uneven playing field: well-established governmental institutions and businesses are more likely to fall in this category and will therefore dominate “verifiable” news feeds. A perception could then arise that the only news that can be verified is state approved or comes from larger corporations, possibly leading people to feel that they are subjected to state propaganda. This would then trigger a vicious cycle: if the goal is that “authenticity-guarantees make institutions recognisable online and provide people with useful tools for making their own credibility judgements” (Jacobs, 2024, p. 1), the risk of a backfire is considerable, at least as concerns the perceived trustworthiness of said institutions. This can only be prevented by creating a level playing field where everyone can sign their messages and these signatures are easily verified by other users. However, as pointed out earlier, this is far from a trivial task.

Third, there appears to be no middle ground: a user either signs all of their messages or none. If some messages that are really from them and that they believe to be true are unsigned, while others are signed, the unsigned ones will be perceived as less trustworthy or even fake. This creates a push to sign all messages except the most insignificant ones, or to sign none. This is a particular concern for the social media accounts of individual users who act in a more or less official capacity (e.g., government officials, journalists, academics, doctors, etc.) because their online messages may be construed as the “official” position. Yet, individuals are much worse positioned to ensure that all their messages are signed than large organizations.

5 Discussion and Conclusions

Assuming for the moment that authenticity is a reasonable proxy for veracity – and we discussed in Section 4.2 why we believe that this is not the case – one can question whether adding a complex infrastructure to sign and verify messages is necessary. After all, signature verification is an opaque process that, from the user’s perspective, produces a green check mark somewhere in the user interface. Does that really add much value, given that users access their social media using an official app or while browsing over a secure web connection and that the social media platform adds account information (sometimes even including identity badges) to each message in its feed? Adding a signature proving that the source really is Donald Trump, The Wall Street Journal, the BBC, Al Jazeera, or Russia Today does not add much value if their account names already state this.

In the end, anybody can obtain a private key, register the associated public key, and sign; therefore, a valid signature in itself means nothing. As the discussion above made clear, the real decisions users must make concern who signed the message, whether this person can be held responsible for the content of the signed message, and whether they can be trusted to make valid statements about the topic. This is a highly complex judgment that few people are able or willing to make (see Section 4.1). In the end, users may simply follow the basic rule that if a message has a verifiable signature, it must be true (see Last et al., 2024) – and, conversely, that if it does not have one, then it is not true.

Should this happen, the situation will be even worse than at present. Only entities that are capable of reliably signing messages will be trusted, and anonymous communication will largely be ignored. This is problematic because, as argued in Section 4.3, it is not easy to reliably sign every message posted. This creates a barrier to entry into the public discourse shaped by social media. As a result, the voices of less-established institutions and marginalized communities will be heard less. This is only one of the negative side effects of this approach, as discussed in Section 4.6.

Instead of looking at persuasiveness and the individual actions of users as a cause of (and a potential solution to) the misinformation problem, we may instead examine systemic causes and consider ways to reduce the spread of misinformation. For example, initial research suggests that only a limited number of accounts actively spread misinformation (McCabe et al., 2024). Hence, social media platforms could play a much stronger role in preventing its spread by enforcing their terms of use. This could involve platform design changes, algorithmic alterations, content moderation, de-platforming prominent actors spreading misinformation, and crowdsourcing misinformation detection and removal (Altay et al., 2023). Underlying factors include the attention-based economy and the advertising-funded model of much of the web, which has boosted the production of “fake news”; this must be addressed as well (Lazer et al., 2018). This seems to suggest that to effectively fight the spread of misinformation, more attention must be paid to the role of social media platforms, in particular their business models and their use of algorithmic amplification.

We have argued that, both in theory and in practice, aiming for more authenticity is unlikely to reduce the impact of misinformation. It also comes at a cost. Therefore, the solutions proposed to mitigate the impact of misinformation should be carefully analyzed in terms of their effectiveness and potential ill effects and should not follow potentially alarmist narratives (Altay et al., 2023). Based on our analysis, we conclude that the ubiquitous use of digital signatures is not effective in countering misinformation and that the claim that there is an authenticity crisis is overblown.

Acknowledgements

I would like to express my sincere gratitude to Yana van de Sande for her guidance regarding the literature on misinformation, and to Hanna Schraffenberger and the anonymous referees for their very thorough and constructive feedback.

References

Abelson, H., Anderson, R., Bellovin, S. M., Benaloh, J., Blaze, M., Diffie, W., Gilmore, J., Green, M., Landau, S., Neumann, P. G., Rivest, R. L., Schiller, J. I., Schneier, B., Specter, M., & Weitzner, D. J. (2015, July 6). Keys under doormats: Mandating insecurity by requiring government access to all data and communications (Report No. MIT-CSAIL-TR-2015-026). MIT.

Altay, S., Berriche, M., & Acerbi, A. (2023). Misinformation on misinformation: Conceptual and methodological challenges. Social Media + Society, 9(1).

Altay, S., Berriche, M., Heuer, H., Farkas, J., & Rathje, S. (2023). A survey of expert views on misinformation: Definitions, determinants, solutions, and future of the field. Harvard Kennedy School (HKS) Misinformation Review, 4(4).

Camenisch, J., & Lysyanskaya, A. (2001). An efficient system for non-transferable anonymous credentials with optional anonymity revocation. In B. Pfitzmann (Ed.), Advances in Cryptology – EUROCRYPT 2001 (pp. 93 – 118). Springer.

Chaum, D. (1985). Security without identification: Transaction systems to make big brother obsolete. Communications of the ACM, 28(10), 1030 – 1044.

Chokhani, S. (1994). Toward a national public key infrastructure. IEEE Communications Magazine, 32(9), 70 – 74.

Cox, I. J., Miller, M. L., Bloom, J. A., Fridrich, J., & Kalker, T. (2008). Digital watermarking and steganography (2nd ed.). Morgan Kaufmann.

Geels, J., Graßl, P., Schraffenberger, H., Tanis, M., & Kleemans, M. (2024). Virtual lab coats: The effects of verified source information on social media post credibility. PLOS ONE, 19(5), 1 – 28.

Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing & Management, 44(4), 1467 – 1484.

Jacobs, B. (2024). The authenticity crisis. Computers, Law & Security Review, 53, Article 105962.

Kapoor, S., & Narayanan, A. (2024, December 13). We looked at 78 election deepfakes. Political misinformation is not an AI problem. Knight First Amendment Institute at Columbia University. https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem

Krebs, B. (2018). Look-alike domains and visual confusion. Krebs on Security. https://krebsonsecurity.com/2018/03/look-alike-domains-and-visual-confusion/

Kucharski, A. (2016). Study epidemiology of fake news. Nature, 540, 525.

Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.

Last, Y., Geels, J., & Schraffenberger, H. (2024). Digital dotted lines: Design and evaluation of a prototype for digitally signing documents using identity wallets. CHI EA ’24: Extended abstracts of the CHI Conference on Human Factors in Computing Systems, 108, 1 – 11.

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094 – 1096.

Maji, H. K., Prabhakaran, M., & Rosulek, M. (2011). Attribute-based signatures. In A. Kiayias (Ed.), Topics in Cryptology – CT-RSA 2011 (pp. 376 – 392). Springer.

Matias, J. N. (2017, January 3). The real name fallacy. Coral Project. https://coralproject.net/blog/the-real-name-fallacy/

McCabe, S. D., Ferrari, D., Green, J., Lazer, D. M. J., & Esterling, K. M. (2024). Post-January 6th deplatforming reduced the reach of misinformation on Twitter. Nature, 630, 132 – 140.

McLoughlin, K. L., Brady, W. J., Goolsbee, A., Kaiser, B., Klonick, K., & Crockett, M. J. (2024). Misinformation exploits outrage to spread online. Science, 386(6725), 991 – 996.

Menezes, A. J., van Oorschot, P. C., & Vanstone, S. A. (1996). Handbook of applied cryptography. CRC Press.

Morris, M. R., Counts, S., Roseway, A., Hoff, A., & Schwarz, J. (2012). Tweeting is believing? Understanding microblog credibility perceptions. Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, 441 – 450.

Nature editorial. (2024). Data access needed to tackle online misinformation. Nature, 630, 7 – 8.

Rannenberg, K., Camenisch, J., & Sabouri, A. (2014). Attribute-based credentials for trust: Identity in the information society. Springer.

Regulation (EU) 2024/1183 of the European Parliament and of the Council of 11 April 2024 amending Regulation (EU) No 910/2014 as regards establishing the European Digital Identity Framework, Official Journal of the EU L, 2024/1183. (2024).

Righetti, N. (2021). Four years of fake news: A quantitative analysis of the scientific literature. First Monday, 26(7).

Schneier, B. (2025). Deepfakes and the 2024 US election. Schneier on Security. https://www.schneier.com/blog/archives/2025/02/deepfakes-and-the-2024-us-election.html

Simon, F. M., & Camargo, C. Q. (2023). Autopsy of a metaphor: The origins, use and blind spots of the “infodemic.” New Media & Society, 25(8), 2219 – 2240.

van der Linden, S. (2022). Misinformation: Susceptibility, spread, and interventions to immunize the public. Nature Medicine, 28, 460 – 467.

Zhao, X., Zhang, K., Su, Z., Vasan, S., Grishchenko, I., Kruegel, C., Vigna, G., Wang, Y.-X., & Li, L. (2024). Invisible image watermarks are provably removable using generative AI. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, & C. Zhang (Eds.), Advances in neural information processing systems (pp. 8643 – 8672, Vol. 37). Curran Associates, Inc.

Date received: 16 April 2025

Date accepted: 6 November 2025


  1. We use the generic term “message” for any digital piece of information, be it an email, document, image, video, news item, or anything else.

  2. Or the identity of the signer can be derived from the context, for instance, the social media account on which the message is found.

  3. The paper itself refers to fake news, noting that it overlaps with both misinformation and disinformation. We consider it close enough to the definition of misinformation used in this paper.

  4. https://www.docusign.com

  5. This is not a proper solution, however, because domain names are confusing to users and prone to abuse. For instance, domain squatters may claim domain names for organizations on other top-level domains. As an example, fbi.org does not belong to the Federal Bureau of Investigation (fbi.gov) at the time of writing (October 2025). Similarly, guardian.com does not belong to the UK newspaper The Guardian (theguardian.com). In addition, internationalized domain names (which allow non-ASCII characters) can create confusion (Krebs, 2018).

  6. There is a whole discussion to be had about the reality of any media message, given that many are digitally post-processed. Even in the analogue world, the literal question of framing (from which position and angle to shoot and what to put in the frame or leave out) played an important role.


Date published: 25 November 2025
