
Beyond facts – a survey and conceptualisation of claims in online discourse analysis

Abstract

Analyzing statements of facts and claims in online discourse is the subject of a multitude of research areas. Methods from natural language processing and computational linguistics help investigate issues such as the spread of biased narratives and falsehoods on the Web. Related tasks include fact-checking, stance detection and argumentation mining. Knowledge-based approaches, in particular works in knowledge base construction and augmentation, are concerned with mining, verifying and representing factual knowledge. While all these fields deal with strongly related notions, such as claims, facts and evidence, the terminology and conceptualisations used across and within communities vary heavily, making it hard to assess commonalities and relations between related works and how research in one field may contribute to addressing problems in another. We survey the state of the art across a range of research tasks in this interdisciplinary area. We assess varying definitions and propose a conceptual model – Open Claims – for claims and related notions that takes into consideration their inherent complexity, distinguishing between their meaning, linguistic representation and context. We also introduce an implementation of this model using established vocabularies and discuss applications across various tasks related to online discourse analysis.

1. Introduction

The Web has evolved into a ubiquitous platform where many people have the opportunity to act as publishers, to express opinions and to interact with others. It has been widely explored as a source for mining and understanding online discourse and for extracting knowledge.

On the one hand, understanding and analyzing societal discourse on the Web are becoming increasingly important issues involving computational methods in natural language processing (NLP) or computational linguistics. Related tasks include fact or claim verification, discourse modeling, stance detection or argumentation mining. In this context, a wide range of interdisciplinary research directions have emerged involving a variety of scientific disciplines, including investigations into the spreading patterns of false claims on Twitter [197], pipelines for discovering and finding the stance of claim-relevant Web documents [20,70,203], approaches for classifying sources of news, such as Web pages, pay-level domains, users or posts [143], or research into fake news detection [190] and automatic fact-checking [75]. In addition, understanding discourse in scholarly and scientific works has been a long-standing research problem [1,57,58,61,64–66,79,83,87,94,95,120].

On the other hand, knowledge-based approaches, in particular works in knowledge base (KB) construction and augmentation, are often concerned with mining, verifying and representing factual knowledge from the Web. Research in these areas often deploys methods and conceptualisations strongly related to some of the aforementioned computational methods for claims, e.g. when aiming to verify facts from the Web for augmenting KBs [37,214]. Whereas the focus in knowledge base augmentation is on extracting and formally representing trustworthy factual statements as atomic assertions in the first-order-logic sense, research focused on interpreting claims expressed in natural language tends to put stronger emphasis on understanding the context of a claim, e.g. its source, timing, location or its role as an argument in (online) discourse. Capturing the meaning of claims requires preserving both the actual claim utterances as natural language texts and structured knowledge about the claims. Utterances often carry a range of assertions and sentiments embedded in complex sentence structures, which are easy for humans to process but hard for machines to interpret. Preserving structured knowledge about claims, including their contexts and constituents, enables machine interpretation, discoverability and reuse of claims, for instance, to facilitate research in the aforementioned areas.

Despite these differences, methods in various disparate fields, such as claim/fact verification or fact-checking as well as KB augmentation, tend to be based on similar intuitions and heuristics and are concerned with similar and related notions from different perspectives. Hence, achieving a shared understanding and terminology has become a crucial challenge.

However, both the terminology used and the underlying conceptual models still diverge strongly, within and across the academic literature and the involved applications [35,186]. For example, “Animals should have lawful rights” is considered a claim by Chen et al. [29] and according to many definitions from the argumentation mining community, which define claims as the conclusive parts of an argument. It does not constitute a claim according to the guidelines of the FEVER fact-checking challenge [184], where claims are defined as factoid statements. This claim would also not be eligible for inclusion in a fact-checking portal, as it does not contain factual content that can be checked and does not seem check-worthy (although this would depend on the context, such as who uttered the statement and when). The claim might be contained in the ground truth of a topic-independent claim extraction approach, but might only be used to evaluate a topic-dependent approach when it is connected to a given topic (more details in Section 3).

This heterogeneity poses challenges for the understanding of related works and data by both humans and machines and hinders the cross-fertilisation of research across various distinct, yet related, fields. Thus, our work aims at facilitating a shared understanding of claims and related terminology across diverse communities, as well as the representation of semi-structured knowledge about claims and their context, which is a crucial requirement for advancing, replicating and validating research in the aforementioned fields.

In order to address the aforementioned problems, this paper makes the following main contributions:

  • An extensive survey (Section 3) of related works concerned with defining, understanding and representing online discourse and related notions, most importantly claims and facts. The survey is the first of its kind, providing a comprehensive overview of the definitions and terminology used across various fields and communities.

  • A conceptual model (Section 4), which we call the Open Claims Model, and corresponding terminology for claims and their constituents and context, grounded both in the scientific literature of related fields such as argumentation mining and discourse analysis and in the actual practices of representing and sharing claims on the Web, for instance as part of fact-checking sites. To this end, we also provide an OWL (Web Ontology Language) implementation of the model as well as an RDF/S (Resource Description Framework Schema) implementation that uses state-of-the-art vocabularies, such as schema.org and PROV-O (the Provenance Ontology), in order to facilitate Web-scale sharing, discovery and reuse of claims and their context, for instance through semi-structured Web page markup or as part of dedicated knowledge graphs (KGs) such as ClaimsKG [176].

  • An introductory review of related information extraction and knowledge engineering tasks (Section 5), involved with the extraction, verification and (inter)linking of claim related data. Our aim is to provide an overview of related state-of-the-art works that may be used for populating a KB of claims and their context according to the proposed conceptual model. This also enables us to discover under-researched areas and challenging directions for future work.
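To make this style of representation concrete, the following is a minimal sketch of how a fact-checked claim might be exposed as schema.org ClaimReview markup in JSON-LD. The type and property names (ClaimReview, Claim, claimReviewed, itemReviewed, reviewRating) come from schema.org; all concrete values are invented for illustration and do not correspond to any real fact-check.

```python
import json

# Hedged sketch: a fact-checked claim as schema.org ClaimReview markup
# (JSON-LD), the kind of semi-structured Web page markup discussed above.
# Property names follow schema.org; all values are invented.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Violent video games can increase children's aggression.",
    "datePublished": "2020-01-01",
    "author": {"@type": "Organization", "name": "ExampleFactChecker"},
    "itemReviewed": {
        # The reviewed claim itself, with its own provenance context
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Jane Doe"},
        "datePublished": "2019-12-24",
    },
    "reviewRating": {"@type": "Rating", "alternateName": "Mostly true"},
}

print(json.dumps(claim_review, indent=2))
```

Note how the markup separates the claim utterance (claimReviewed), its original context (itemReviewed) and the fact-checker's verdict (reviewRating), mirroring the distinction between a claim, its context and its assessment.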

Note that while an earlier version of the conceptual model has been presented in Boland et al. [23], the novel contributions of this work include the actual survey of related works in the context of online discourse, a critical review of related tasks, as well as improvements to the model and its implementation facilitated by the substantial survey provided here.

This work is meant to facilitate a shared representation of claims across various communities, as is required for inter-disciplinary research. This includes works aimed at detecting and representing the inherent relations of uttered claims among each other or with represented factual knowledge and other resources, such as web pages or social media posts, e.g. as part of stance detection tasks. Assessing and modeling the similarity of claims, for example, is a challenging task. When two claims are similar to each other, what precisely does this mean? Do they have the same topic but have been uttered to express a different stance? Are they expressing a shared viewpoint but have been uttered by different agents? Do they talk about similar topics but with diverging specificity, i.e. the topic of one claim is a single aspect of the broader topic of the other? Or is one claim part of a more complex claim that includes multiple assertions? Even claims deemed equal with regard to their content may have to be differentiated: they may, for example, be repeated utterances with the same content by the same agent (but at different times), paraphrases (same content but different utterances, also at different times, possibly by different agents) or simply duplicates in the respective database. A fine-grained model that allows relating claims and individual claim components makes it possible to specify different dimensions of relatedness and similarity. This also enables more formal and clearer definitions of tasks related to the detection of claim similarity and relatedness. Use cases involve research into the detection of viewpoints and communities sharing related narratives and viewpoints on the Web [172], the analysis of quotation patterns involving varied sources and media types or the profiling of sources and references used in news media [126], and fact-checking applications, e.g. linking claims to previously fact-checked claims [109,162].

2. Methodology of the survey

In this section, we describe the publication selection and review process employed in this survey. An overview of the workflow is given in Fig. 1.

Fig. 1.

Publication selection and review workflow.


2.1. Selection of research fields

First, we identified application areas and research fields involved with claims, facts or relevant concepts.

Application domains include, on the one hand, areas dealing with natural language claims, which are of concern in fact-checking portals, computational journalism or scientific discourse analysis, for instance as part of scholarly publications, all involving claims of varying complexity. On the other hand, structured knowledge bases such as Wikidata are used in various applications such as Web search and involve factual statements bound to a predefined grammar: triples consisting of a subject (s), predicate (p) and object (o).
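As a minimal illustration of this triple grammar, a KB can be viewed as a set of (s, p, o) tuples that can be queried by pattern matching. The entity and predicate names below are invented for illustration and are not taken from any real KB.

```python
# A toy knowledge base as a set of (subject, predicate, object) triples.
# Entity and predicate names are illustrative, not from any real KB.
kb = {
    ("Paris", "capitalOf", "France"),
    ("Berlin", "capitalOf", "Germany"),
    ("Paris", "locatedIn", "France"),
}

def objects(kb, subject, predicate):
    """Return all objects asserted for a given subject/predicate pair."""
    return {o for s, p, o in kb if s == subject and p == predicate}

print(objects(kb, "Paris", "capitalOf"))  # {'France'}
```

Each tuple is an atomic assertion in the first-order-logic sense; natural language claims, by contrast, typically bundle several such assertions together with subjective content.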

It becomes apparent that a more explicit and clear definition of the concepts of facts vs. claims is needed as both are relevant to this survey. Works focusing on claims made in the context of discourse can be found in argumentation mining, argumentation theory, discourse modeling, and pragmatics.

Facts are central to knowledge representation and augmentation works. Since claims not only transport beliefs or knowledge about factual information but also convey subjective information such as opinions, stances or viewpoints, relevant definitions and concepts can also be found in works targeting stance detection, viewpoint extraction and opinion mining/sentiment analysis. Rumours can be considered a specific kind of claim; thus we include definitions from the rumour detection field. Finally, the retrieval of claims or facts about specific entities is central to question answering and information retrieval in general, for instance in the context of fact retrieval, entity summarisation or entity retrieval. Relevant works from these fields are also taken into account.

2.2. Search and review process

Works addressing the aforementioned fields and tasks can be found in a variety of different scientific communities, particularly NLP, Web Mining, Information Retrieval (IR), Knowledge-based Systems and Artificial Intelligence (AI). Based on an initial set of publications from these communities dealing with extraction, verification or linking of claims and facts, found using a keyword-based search, we selected venues from the most relevant papers for systematic screening. Table 1 gives an overview of the chosen core journals, conferences, workshops and events. For each of those, we screened the proceedings of the years 2015–2019 (incl. 2020 and 2021 to the extent possible at the time of writing and revision preparation) and widened the search beyond these venues using online search engines and databases, also considering pre-prints. Publications cited by relevant publications were also taken into account regardless of their venue. For each publication, we extracted formal and informal definitions and descriptions of the concepts of claims and facts which are the basis for the analysis in Section 3 and the development of the model introduced in Section 4. As part of the modeling process, we defined possible relations between the different classes and mapped the generation of information on classes and relations to knowledge engineering tasks (Section 5). We extended our search in the listed venues and beyond to also cover these tasks. The following set of keywords was used for both steps: fact-checking, fact checking, fake news, fact verification, argumentation, discourse, pragmatics, logic, knowledge representation, knowledge base augmentation, knowledge base construction, Knowledge-Base Augmentation, stance, viewpoint, claim, opinion mining, sentiment analysis, rumour detection, rumor detection, question answering, information extraction, relation extraction, ontology learning. 
This search procedure resulted in a set of 598 publications that we deemed potentially relevant to the topics covered. The distribution across venues and time periods is displayed in Figs 2 and 3. Note that not all of these publications contain relevant definitions or ended up being cited in this survey. To maintain readability, both figures only contain venues and years for which we collected at least 10 publications.

Table 1

Core venues analyzed systematically for the survey of fact and claim definitions and related concepts. Related events and workshops that were also considered: Workshop on Argument Mining (ArgMining), Fake News Challenge (FNC), CLEF Lab: CheckThat!, Fact Extraction and VERification (FEVER) Shared Task

Community | Journals | Conferences
NLP | Computer Speech and Language | ACL, EMNLP, COLING, NAACL-HLT
Web (Mining) | ACM TWEB | WWW, WSDM
IR | Information Retrieval Journal (Springer) | SIGIR, ECIR
AI | – | AAAI, IJCAI, ECAI
Knowledge-based Systems & Knowledge Graphs | SWJ, TKDE, JWS, Elsevier KBS | ISWC, ESWC, CIKM
Fig. 2.

Analyzed publications and distribution over venues for all venues with at least 10 publications.

Fig. 3.

Analyzed publications and distribution over years for all years with at least 10 publications.


2.3. Related surveys and conceptualizations

While this is, to the best of our knowledge, the first extensive survey on the conceptualization of facts and claims, several works have looked into specific aspects of the problem, providing overviews of related work in the corresponding areas.

Konstantinovskiy et al. [88] present a novel annotation schema and a benchmark for check-worthy claim detection, providing both an overview of claim definitions from other studies and a new definition of a claim that is constructed as a common denominator of existing ones. The novelty is that the definition is cast in the context of a claim being worthy of fact-checking – an important property of an utterance in view of verifying its veracity. The difficulty of identifying and defining fact-check worthiness of a claim is discussed with regard to the different perspectives that can be given to a single claim according to the human annotator’s background.

Daxenberger et al. [35] also take interest in the task of claim identification, but from an argumentation mining perspective, where this task is defined as recognizing argument components in argumentative discourse. The authors propose a qualitative analysis of claim conceptualization in argumentation mining data sets from six different domains (“different domains” here means different data distributions). They show that the ways in which claims are conceptualized in each of these data sets are largely diverging and discuss and analyze the presumed harmful impact of these divergences on the task of cross-domain claim identification.

Thorne et al. [180] take a holistic stance on the problem and task of automated fact-checking. They provide an overview of approaches, data sets and methods covering the various steps of the process. This is the first paper of its kind that formulates the ambition to unify the often diverging definitions presented in related works from the fact-checking field by identifying shared concepts, data sets and models. A particularity of the survey is the fact that the authors consider both text-like and structured definitions of claims (e.g. in the form of triples), covering works on knowledge graph building and completion.

Fake news detection is related to fact-checking, but remains a distinct problem. Zhou et al. [222] provide a definition of fake news and present relevant fundamental theories in various disciplines on human cognition and behaviour that are assumed useful for understanding fake news propagation and detection. Their survey on fake news detection methods is built along four categories of methods: (i) Knowledge-based methods, which verify if the knowledge within the news content matches certified facts; (ii) Style-based methods that look into the form of fake news (e.g., expressing extreme emotions); (iii) Propagation-based methods that are based on online spreading patterns; and (iv) Source-based methods investigating the credibility of sources.

Rumours are often seen as a specific kind of fake news. Zubiaga et al. [224] provide a survey on rumour identification and resolution in which conflicting and diverging definitions of rumours from related works are discussed, but without drawing parallels to related notions such as fake news or biased discourse. The main motivation is the assumed impact of social media on rumour generation and spread. The survey focuses on datasets for rumour detection, as well as existing tools for accessing, collecting and annotating social media data for the purposes of automated rumour detection. The authors analyse generic rumour detection systems by breaking them down into their different components and subsequently discussing the approaches addressing the challenges related to each of these components. In doing so, the paper presents rumour tracking systems, rumour stance classification and veracity classification approaches.

Both the lack of and the necessity for a shared understanding and conceptualization of claims surface from all of the above studies, and this is underlined as their main motivation. However, the fact that some of these surveys discuss the same notions and refer to overlapping sets of related work while using different terminology (e.g. [224]) shows that these works do not fully close the terminological and conceptual gap that exists within and across fields: they discuss narrower concepts of claims and facts used in specific domains rather than aiming to provide a shared view on the overlaps and differences between the terminologies used.

3. Facts and claims – a multidisciplinary survey of definitions

Fig. 4.

An overview of definitions and relations between facts and claims.


While the analysis of facts and claims plays a crucial role in a number of fields, the definitions of these concepts vary and are often left to the intuition of the reader. Existing definitions vary considerably not only across different fields but also within a single community. At the same time, different communities use the same terminology to refer to different concepts. In this section, we elaborate on the different concepts of facts and claims, explain commonalities and differences, and introduce a selected vocabulary to refer to these and related concepts throughout this paper. An overview is given in Fig. 4.

3.1. Facts

A fact in the everyday use of the term (depicted at the top of Fig. 4) refers to “A thing that is known or proved to be true”,1 “something that has actual existence”,2 “something that is known to have happened or to exist, especially something for which proof exists, or about which there is information”,3 “something that actually exists; reality; truth”,4 “an event known to have happened or something known to have existed”5 or “a concept whose truth can be proved”.6 Note that not everything that is a fact according to this definition can be observed directly; instead, beliefs about such facts can be formed by observing evidence.

3.1.1. Facts in knowledge bases

In the semantic web community and the fields of knowledge representation and knowledge base construction/augmentation, facts are seen as the knowledge that is represented in KGs or KBs [6,9,12,28,31,46,47,50,110,115,131,153,165,189,193,196,200,216,223]. More precisely, items in KGs or KBs are coined statements of facts or assertions or triples encoding/representing facts [28,31,115,165,193], with the facts being assumed to be true, provable, or likely to hold [31,131,142]. However, the use of terminology is not consistent: fact is often used as a synonym for RDF triple [50,82,131,218,223] or for the representation of a fact, i.e. an assertion, and there is often no distinction made between “fact” and “statement of a fact” [46,110,115,153]. The interchangeable use of “statement of fact” and “fact” leads to widespread phrasings such as “checking whether facts are true” [175], implying that facts may not be true. Depending on the precise definition of fact, this might be an oxymoron, i.e. when defining a fact as something that is known to be true. With the task of fact prediction in mind, some works coin the relations between entities or the paths in a knowledge base as facts [131,196]. As Gerber et al. [50] note, facts have a scope, e.g. a temporal one, that determines the context that has to be taken into account in order to judge their validity.
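The distinction between a fact and a statement of a fact, together with the scope that contextualises a statement's validity, can be sketched as follows. This is a hypothetical data structure for illustration only; the entity names, predicate name and scope keys are invented and do not come from any specific KB system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a "statement of fact" as an (s, p, o) assertion
# plus an explicit scope, e.g. a temporal one, that must be taken into
# account when judging its validity. All names are illustrative.
@dataclass
class Statement:
    subject: str
    predicate: str
    obj: str
    scope: dict = field(default_factory=dict)

stmt = Statement(
    "Angela_Merkel", "holdsOffice", "Chancellor_of_Germany",
    scope={"validFrom": 2005, "validUntil": 2021},
)

def holds_in(stmt, year):
    """Whether the statement's temporal scope covers the given year."""
    return stmt.scope.get("validFrom", year) <= year <= stmt.scope.get("validUntil", year)

print(holds_in(stmt, 2010))  # True
print(holds_in(stmt, 2023))  # False
```

The assertion itself is neither true nor false in isolation; only the statement together with its scope can be judged, which is precisely the point made about temporal scope above.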

3.1.2. Types of facts

Several more fine-grained distinctions of different types of facts can be found in the literature. Facts can refer to relations or attributes [218], or can be attributes of other facts [196]. They can pertain to numerical properties, quotes or other object properties [82]. They can be assessed according to their “check-worthiness” [82] or importance for the containing KB [196]. Another interesting distinction is made by Tsurel et al. [191] who aim at identifying facts that are suitable to be used as interesting trivia by developing a measure for trivia-worthiness that relies on surprise and cohesiveness of the contained information.

Throughout this paper, we will use the term fact referring to knowledge that is generally accepted to be true and refer to items in knowledge bases as statements of facts.

3.1.3. Facts vs. evidence

Related to the notion of fact is the notion of evidence. Evidence is seen as something that supports or contradicts a claim [3,150,171]. Some works give a narrower definition tied to their specific use cases; e.g. Zhan et al. [216, p. 1] define evidence as “text, e.g. web-pages and documents, that can be used to prove if news content is or is not true”. As Stahlhut [171] notes, the task of evidence detection is similar to premise detection in argumentation mining. A premise in argumentation mining is, as Stab et al. [169, p. 1] put it, “a reason given by an author for persuading the readers of the claim”. Evidence and premise correspond directly to each other, and both terms are often used interchangeably [101,166,188].

Evidence can be categorized into many different types, such as expert opinion, anecdote, or study data [171], or, with slightly different wording, study, expert or anecdotal [3]. Walker et al. [199] distinguish lay testimony, medical records, performance evaluations, other service records, other expert opinions, other records. Niculae et al. [125] include references such as URLs or citations as pointers to evidence. Premises can refer to logos, pathos or ethos [80]. For scientific articles, Mayer et al. [112] distinguish the classes comparative, significance, side-effect, other.

While some works refer to knowledge found in texts or other resources as evidence for a fact [9,46,127,153] and call it a fact only after its truthfulness has been determined and that knowledge is entered into a knowledge base, other works assume the truthfulness of the mentions and refer to them, or the knowledge they represent, as facts directly [32,71]. Closely related is the task of Truth Discovery: “Truth Discovery aims at identifying facts (true claims) when conflicting claims are made by several sources” [18]. In this domain, the terms data items and truths are used to refer to unvalidated mentions of knowledge and the true values, respectively [202,209].

3.2. Claims

A claim is commonly seen as “a statement or assertion that something is the case, typically without providing evidence or proof”.7

3.2.1. Claims in argumentation

In line with this definition, works in argumentation mining and argumentation theory focus on claims as the key components of arguments [35], as statements that are made to convince others or express someone’s views, evaluations or interpretations [80,102,107,152].

Claims denominate the conclusion of an argument, the assertion the argument aims to prove or the thesis to be justified [19,97,100–102,133,169]. Claims correspond to propositions in argumentation models and both terms are often used interchangeably: “The claim is a proposition, an idea which is either true or false, put forward by somebody as true” [133]. As Daxenberger et al. [35] point out, the exact definition of a claim, even inside the field of argumentation mining, depends on the domain or task and is somewhat arbitrary. Also, as Torsi et al. [186] show, related annotation categories are often not well defined.

With the use case of scientific articles in mind, Mayer et al. [111] define a claim as a concluding statement made by the author about the outcome of the study. Focusing on debates, Aharoni et al. [3, p. 2], but also Rinott et al. [150, p. 1], define a claim as a “general, concise statement that directly supports or contests the topic”. A topic here is defined as “a short, usually controversial statement that defines the subject of interest” or “a short phrase that frames the discussion”, respectively. Examples of such topics are “Use of performance enhancing drugs (PEDs) in professional sports” with a claim being “PEDs can be harmful to athletes health” [150, p. 2] or “The sale of violent video games to minors should be banned” with a claim being “Violent video games can increase children’s aggression” [3, p. 3]. Note that these definitions diverge from the common definition of a topic as the underlying semantic theme of a document, with a topic being a probability distribution over the terms in a vocabulary [22], as used in topic modelling and document classification. There, a topic may be represented by terms on a coarse-grained level such as Health or Computers & Internet [211]. This concept of a topic is also used by Chen et al. [29] in their work on discovering perspectives on claims. Also, the second example of a topic can be seen as a claim or stance itself. Durmus et al. [40] represent topics by tags of pre-defined categories, similar to the above-described semantic themes, plus what they call a thesis, corresponding to Aharoni et al. [3]’s claim-like topics, e.g. “free Press is necessary to democracy.”, “All drugs should be legalised.”.

In the following, topic will be used to refer to the frame of the discussion, as defined by Rinott et al. [150] while the underlying semantic theme will be referred to as the subject.

3.2.2. Types of claims

According to Lippi et al. [102], there are three different types of claims: 1) epistemic, i.e., claims about knowledge or beliefs, 2) practical, i.e., claims about actions, alternatives and consequences, and 3) moral, i.e., claims about values or preferences. For example, “our survival rate for cancer that used to be some of the worse in Europe now actually is one of the best in Europe, we are changing the NHS and we are improving it” [sic] is an epistemic, “cuts will have to come, but we can do it in a balanced way, we can do it in a fair way” a practical and “I don’t want Rebecca, I don’t want my own kids, I don’t want any of our children to pay the price for this generation’s mistake” a moral claim [102].

Similarly, Schiappa et al. [45,157] differentiate claims of fact, value and policy. Claims of fact state that something is true, i.e. they express a belief about a fact. This corresponds to the epistemic claims according to Lippi et al. [102]’s taxonomy with claims of value and policy corresponding to moral and practical claims, respectively. Epistemic claims are also referred to as factoid claims [183,184] or, more commonly, factual claims, e.g. [51,52,75,76,88,97,117,130]. However, assessing the factuality of a claim may refer to assessing a claim’s veracity [74] rather than assessing whether it is a factual or non-factual claim. Note that all types of claims can be used to express a stance in discourse but not all of them are verifiable.

Some works propose a more fine-grained differentiation of claims according to their use cases; e.g. Lauscher et al. [95] distinguish Own Claims vs. Background Claims vs. references to Data for argumentation mining of scientific texts. Hassan et al. [77] distinguish between the classes Non-statistical (e.g. quotes), Statistical, Media (e.g. photo or video) and Other, and Zhang et al. [217] between categorical vs. numerical claims. Park et al. [136] categorize claims according to their verifiability and distinguish between unverifiable, verifiable non-experiential and verifiable experiential claims, with experiential referring to whether or not the claim refers to the writer’s personal state or experience. Another notion that can be seen as a specific type of claim is a rumour. In an attempt to unify the various definitions found in works addressing the identification and veracity assessment of rumours, Zubiaga et al. [224, p. 1] define rumours as “items of information that are unverified at the time of posting”. The authors further distinguish between different types of rumours with respect to their currentness (emerging vs. longstanding rumours).

3.2.3. Claims vs. stances vs. viewpoints

Habernal et al. [68] explain that the term claim in the context of argumentation theory is a synonym for standpoint or point of view, referring to what is being argued about, i.e. the topic. This is in line with Liebeck et al.’s [98] and Aharoni et al.’s [3] debate-oriented definitions and with Hidey et al.’s [80, p. 4] definition of claims as a “proposition that expresses the speaker’s stance on a certain matter”. Standpoint, point of view and stance in these definitions do not mean the content of the claim has to be of an unverifiable or purely opinionated nature. Stab et al. [170] see a stance as an attribute of a claim.

Stances are usually defined as text fragments representing opinions, perspectives, points of view or attitudes with respect to a target [52,53,70,89,221]. They can be expressed explicitly or implicitly [146]. Fragments can be messages such as tweets or posts [55,86], paragraphs [144] or complete articles [70]. Joseph et al. [86] see stances as latent properties of users rather than of text fragments. Text fragments can, however, reveal a user’s stance. As Joseph et al. [86] point out, stance and sentiment are related, but not the same: a negative sentiment of a text can be paired with a positive stance towards a particular target and vice versa. The tasks of aspect-based sentiment analysis and stance detection also differ, even though both aim at detecting opinions towards a target. For example, a piece of text may express a positive sentiment towards a specific aspect of a person, e.g. their personality, but still argue against this person’s claim.

Stance detection has been used to determine opinions on the veracity of claims [43,108]. Stances in these works are similar to what is termed evidence in fact-checking works, as described above, although they do not necessarily contain factual information that can be used to verify information. Note that this may be the case for evidence as well, depending on the precise definition. The fact that a claim is supported by an entity other than the source can be seen as evidence for the claim’s truthfulness in itself (cf. expert-type evidence).

Stances have been classified into different categories such as for, against and observing [43], pro and con [14] and none [185], and agree, disagree, discuss, or unrelated [118]. There is also a hierarchical model that classifies the stance of web documents on three levels: first as related or unrelated, the related ones as taking a stance or being neutral, and those taking a stance as agree or disagree [154]. Another fine-grained distinction can be found in Hidey et al. [80], who distinguish interpretations, rational evaluations, emotional evaluations, agreement and disagreement. As Kotonya et al. [89] note, the task of stance classification is closely related to relation-based argumentation mining, which determines attack and support relations between argumentative units.
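The hierarchical scheme of [154] can be sketched as a cascade of binary decisions. In the toy sketch below, trivial keyword matchers stand in for the trained classifiers that would be used in practice; they are illustrative assumptions, not part of the cited approach:

```python
# Sketch of the three-level stance hierarchy of [154]: a document is first
# classified as related/unrelated to the claim topic, related documents as
# stance-taking/neutral, and stance-taking ones as agree/disagree.
# The three keyword-based functions are hypothetical stand-ins; each level
# would normally be a trained classifier.

def is_related(doc: str, claim_topic: str) -> bool:
    return claim_topic.lower() in doc.lower()

def takes_stance(doc: str) -> bool:
    return any(w in doc.lower() for w in ("true", "false", "agree", "disagree"))

def agrees(doc: str) -> bool:
    return not any(w in doc.lower() for w in ("false", "disagree", "not"))

def hierarchical_stance(doc: str, claim_topic: str) -> str:
    if not is_related(doc, claim_topic):
        return "unrelated"
    if not takes_stance(doc):
        return "neutral"
    return "agree" if agrees(doc) else "disagree"

print(hierarchical_stance("The Brexit bill claim is false.", "Brexit"))  # disagree
print(hierarchical_stance("Weather was nice today.", "Brexit"))          # unrelated
```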

Another related task is that of viewpoint discovery. Thonet et al. [177] define a viewpoint as “the standpoint of one or several authors on a set of topics”. A viewpoint goes beyond a person’s stance on a specific subject and represents their global standpoint or the side they are taking. As Thonet et al. [177] explain, a viewpoint in a debate about the building of Israeli communities on disputed lands can, for example, be summarized as “pro-Palestine” or “pro-Israel”. Consequently, Viewpoint Discovery is considered a sub-task of Opinion Mining [145,177].

Another closely related, but different notion is that of a perspective, which Chen et al. [29] describe as an argument that constitutes a particular attitude towards a given claim, i.e. an opinion in support of a given claim or against it. For example, for the claim “Animals should have lawful rights”, a perspective would be “Animals are equal to human beings”, which expresses support for the claim. A perspective corresponds to an opinion on a specific aspect within a viewpoint. Perspectives can be supported by evidence, connected to claims by supports or attacks relations, and can be seen as a specific type of claim connected to what Chen et al. term argue-worthy claims.

3.2.4.Claims in journalism and fact-checking

Works outside of the area of argumentation focus less on the role of the claim in the context of the discourse and more on the content of the claims.

A very general definition is given by Zhang et al. [217, p. 2] for their truth discovery approach: “A claim is defined as a piece of information provided by a source towards an entity”.

From a journalistic fact-checking perspective, dedicated platforms focus on statements supported by (a group of) people or organizations that appear news-worthy, check-worthy, significant and verifiable (cf. definitions from, e.g., politifact.com,8 truthorfiction.com,9 or checkyourfact.com10). Newsworthiness and significance are not only subjective; both can also vary depending on historical or political context [62].

For other use cases, different definitions or restrictions of what is considered a claim are employed.

Automatic fact-checking often constrains the problem by limiting the kinds of claims being checked, focusing on simple declarative statements (short factoid sentences [181]) or claims about statistical properties [62,179,195]. For the Fast & Furious Fact-Check Challenge, four primary types of claims were distinguished and further differentiated into more fine-grained sub-categories:11 1) numerical claims (involving numerical properties of entities and comparisons among them), 2) entity and event properties (such as professional qualifications and event participants), 3) position statements (such as whether a political entity supported a certain policy) and 4) quote verification (assessing whether the claim states precisely the author of a quote, its content, and the event at which it supposedly occurred). Note that fact-checking portals contain many quoted claims, but it is not always clearly marked whether the quote itself is verified (i.e. did the person indeed make the claim?) or the content of the quoted claim (i.e. is the claim allegedly made by the person correct?).

3.2.5.Claims in information retrieval and question answering

In the area of Information Retrieval and Question Answering, several works focus on retrieving scientific claims and claims in digital libraries. Here, a claim is defined as a statement formulating a problem together with a concrete solution [56], or as a sentence in a scientific document that relates two entities given in a query [57,58]. More generally, from a database-centric perspective, Wu et al. [207,208] represent a claim as a “parametrized query over a database”. This makes it possible to computationally study the impact of modifying a claim (i.e. its parameters) on the result of the query and thus to identify claim properties such as claim robustness, which may serve as evidence to detect potential misleadingness, e.g. due to cherry-picking. A related perspective has been proposed by Cohen et al. [33] in the field of Computational Journalism.
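The “parametrized query” view can be illustrated with a small sketch: a claim such as “the unemployment rate fell between year A and year B” becomes a query template, and perturbing its parameters reveals how robust the claim is. The table and figures below are invented purely for illustration; this is not the formalism of [207,208] itself:

```python
# Sketch of a claim as a parametrized query: perturbing the parameters
# and observing how often the claim still holds gives a simple robustness
# score (a claim that only holds for carefully chosen parameters may be
# cherry-picked). All data here is made up.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE unemployment (year INTEGER, rate REAL)")
con.executemany("INSERT INTO unemployment VALUES (?, ?)",
                [(2014, 6.2), (2015, 5.3), (2016, 4.9), (2017, 4.4), (2018, 3.9)])

def claim_holds(year_a: int, year_b: int) -> bool:
    """Parametrized claim: 'the rate in year_b is lower than in year_a'."""
    (rate_a,) = con.execute(
        "SELECT rate FROM unemployment WHERE year = ?", (year_a,)).fetchone()
    (rate_b,) = con.execute(
        "SELECT rate FROM unemployment WHERE year = ?", (year_b,)).fetchone()
    return rate_b < rate_a

# Robustness: fraction of parameter perturbations under which the claim holds.
perturbations = [(a, b) for a in range(2014, 2018) for b in range(a + 1, 2019)]
robustness = sum(claim_holds(a, b) for a, b in perturbations) / len(perturbations)
print(robustness)  # 1.0 for this toy table: the claim holds for every year pair
```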

3.3.Discussion and concluding remarks

In summary, works focusing on the argumentation domain investigate claims in the context of a discourse, i.e. taking their pragmatic role into account. Claims are uttered by the author or speaker to achieve an aim through a speech act [159]. In order to recognize the meaning of an utterance and draw conclusions about the intention of the author, the pragmatic context has to be taken into account. A claim often carries a variety of intended or unintended meanings, where subtle changes in wording or context can have significant effects on its validity [62]. Works in other areas, such as Knowledge Bases and Fact-checking, typically focus on the content of epistemic claims, i.e. rather than trying to analyze intended meanings or messages, they try to find and check evidence for assertions and to distinguish facts from false claims of fact. Works in the area of information retrieval focus more on the surface of claims, trying to retrieve relevant texts without necessarily analyzing their content or contexts. These differences are reflected in the claim definitions found in the respective works.

Note that due to these different foci, there is a difference in what is referred to as claim in argumentation mining vs. in the automatic fact-checking community: what is used as premise or evidence in an argument is often selected as a check-worthy claim by fact-checking sites, not the evaluative component of the argument that is termed claim in argumentation mining. Generally, the distinction of argumentative units such as claims and evidence in argumentation mining is based on the statement’s usage or its relations in an argument, while fact-checking classifies statements into claims, stances and other categories considering features inherent to the statement itself (such as its subjectivity), regardless of its connection to the discourse. Thus, what is identified as a claim in works of one research field or labelled claim in a ground truth corpus may or may not be called claim in the other, depending on the specific use case and context.

Likewise, some works focus on identifying claims (or other argumentative components) that belong to a pre-defined topic (called corpus-wide topic-dependent [166], context-dependent [3], or the information-seeking perspective [188]), while others aim at extracting any units that act as claims for any topic (closed-domain discourse-level [188] or context/topic-independent). Using topic-dependent annotations as ground truth for topic-independent extraction approaches leads to impaired precision values [103].

Lastly, another difference between statements of fact in knowledge bases and claims is that for the former, a certain level of consensus, at least within the respective community, can be assumed, while claims may only represent the beliefs of one person or be uttered by them to achieve a certain goal, such as spreading disinformation. Thus, it makes sense to model truth values for claims, while statements in knowledge bases are assumed to be true. Knowledge bases may nevertheless contain errors; hence, modeling uncertainty or confidence values is applicable to them.

The task of assessing the correctness of a statement of fact is called fact validation. The task of assessing the veracity of a claim is called fact-checking. Fact-checking has also been modeled as a specific stance detection task where the stance of a source or evidence unit towards an epistemic claim is used to assess the claim’s veracity. Finding the true values in case of conflicting evidence is the aim of truth discovery.
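Truth discovery can be illustrated with a minimal sketch: sources assert conflicting values for claims, and source trust and value confidence are estimated jointly. The fixed-point iteration below is a toy scheme in the spirit of weighted voting, not any specific published algorithm, and the data is invented:

```python
# Toy truth discovery: iteratively (1) score each claimed value by the total
# trust of the sources asserting it, (2) re-estimate each source's trust as
# the average normalized score of the values it asserts.
from collections import defaultdict

# source -> {claim: asserted value}  (made-up data with one wrong value each
# for site_b and site_c)
assertions = {
    "site_a": {"capital_fr": "Paris", "capital_de": "Berlin"},
    "site_b": {"capital_fr": "Paris", "capital_de": "Bonn"},
    "site_c": {"capital_fr": "Lyon",  "capital_de": "Berlin"},
}

trust = {s: 1.0 for s in assertions}  # initial uniform trust
for _ in range(10):
    conf = defaultdict(float)  # (claim, value) -> summed trust of supporters
    for src, claims in assertions.items():
        for claim, value in claims.items():
            conf[(claim, value)] += trust[src]
    total = sum(trust.values())
    trust = {src: sum(conf[(c, v)] / total for c, v in claims.items()) / len(claims)
             for src, claims in assertions.items()}

# resolve each claim to its highest-confidence value
resolved = {}
for (claim, value), score in conf.items():
    if claim not in resolved or score > conf[(claim, resolved[claim])]:
        resolved[claim] = value
print(resolved)  # {'capital_fr': 'Paris', 'capital_de': 'Berlin'}
```

Majority values win here because two of the three sources agree on each claim; with real data, the interesting cases are those where source trust overrides a naive majority.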

3.4.Naming conventions

To arrive at a more precise usage of terminology, we will, throughout this paper, refer to items in knowledge bases as statements of fact, while other mentions or assertions of knowledge will be referred to as claims about a fact that can act as evidence of some information being true and its content being a fact. An index of all naming conventions followed in this work is given in Table 2.

Table 2

Index of main notions and definitions as discussed in this paper

Claim: A statement or assertion that something is the case
Fact: A thing that is known or proved to be true
Statement of fact: A statement in a knowledge base
Evidence: Information that can be used to assess the truthfulness of a claim or claimed value or relation
Topic: Phrase describing the frame of the discussion
Subject: Keyword describing the semantic theme
Stance: Support or opposition expressed by a user or text fragment with respect to a given target
Viewpoint: Standpoint of one or several authors on a topic or set of topics

4.Conceptual modeling

In this section, we propose a conceptual model for representing claims and related data as well as an example of an implementation of this model in RDF using established vocabularies.

The conceptual model was informed by the survey described in the previous section. To derive it, we followed these steps: 1) identification of key concepts to be reflected in the model (e.g. claim proposition, claim utterance), 2) deriving definitions of these concepts by considering established definitions from the literature, 3) excluding definitions that are inconsistent with each other or do not reflect the required granularity (e.g. we argue that a distinction between proposition and utterance is important for many NLP and knowledge engineering tasks), and 4) identifying relations between all concepts which are consistent with and/or implied by our definitions. Through this process, we arrive at a conceptual model containing key concepts, relations and definitions, which is then implemented in OWL as well as through a dedicated RDF/S data model. We start by giving an overview of the key terminology.

4.1.Key terminology – from pragmatics to fact-checking

For our conceptual model, we follow notions from pragmatics to allow modeling not only a claim in isolation, but also its meaning in a given discourse and its role in communication.

As Green (1996) puts it, “(...) communication is not accomplished by the exchange of symbolic expressions. Communication is, rather, the successful interpretation by an addressee of a speaker’s intent in performing a linguistic act.” [63, p. 1] “Minimally the context required for the interpretation (...) includes the time, place, speaker, and topic of the utterance.” [63, p. 2] While this quote refers to the interpretation of indexical expressions (i.e. words like “here” and “now”), the same holds true for the interpretation of the meaning of an utterance in general.

A linguistic act, or speech act following Searle [159], includes an utterance, a proposition, an illocution and a perlocution. An utterance is a grammatically and syntactically meaningful statement. A proposition is the semantic content, i.e. meaning. An illocution is the intended effect, e.g. persuading the addressee or requesting a service, while a perlocution is the achieved effect.

For example, referring to the topic “Brexit”, i) British journalist David Dimbleby said during a topical debate in Dover “We are going to be paying until 2064, apparently”,12 and ii) a news article of The Independent on the same topic wrote “UK will be paying Brexit “divorce bill” until 2064”.13 While the surface forms of these utterances differ, they express the same proposition. At the same time, utterances with equivalent surface forms may be used to express different and even contradicting propositions or viewpoints when embedded in different contexts. Consider the two claims: (i) “The unemployment rate among Poles in Britain is lower than the unemployment rate among Brits”, uttered by British public policy analyst and former Labour Party politician David Miliband14 to argue that immigrants are not a drain on the British welfare system and thus not bad for British society; (ii) “EU migrants are MORE likely to have a job in the UK than British citizens”, written by MailOnline journalist Matt Dathan15 to make the point that immigrants are taking away British citizens’ jobs and are thus bad for society. The propositions are semantically similar and both utterances aim at persuading the audience (illocutionary act), but the expressed viewpoints are different.

This can only be recognized when taking the context into account, which is why we argue that the context should be modeled along with the claim utterance. The importance of contextual information has also been recognized for the task of fact-checking: “Who makes a claim, when they say it, where they say it, and who they say it to, can all affect the conclusion a fact-checker could reach. Whether it’s true to say unemployment is rising depends on what country or which part of a country a speaker is referring to, and when the speaker makes the claim. An open format for recording public debate should support metadata, including at least the time, the place, the venue or publication, and the speaker.” [11].

As outlined in the previous section, we see a fact as a conceptual object which represents the current consensual knowledge in a given community about something or someone. While this knowledge is relatively stable, a change of its truth value is possible, for example when flaws in scientific studies are discovered and findings have to be corrected [149].

Any verified information about a claim, like who uttered it, when and where, can be considered a fact. Facts explicitly uttered by an agent can be modeled as (factual) claims. Facts extracted from a knowledge base can be represented using the same model: provenance information about the knowledge base can be represented as source, that is, as part of the utterance. The statement of a fact is typically not embedded in a discourse. Thus, certain attributes of the context, like the topic of the discourse and the agent, would remain undefined. Likewise, non-factual claims (e.g. “animals should have lawful rights”) do not have universally accepted truth values, i.e. they are unverifiable, and hence, verdict would remain undefined for the respective proposition. Therefore, we argue that facts, factual claims and non-factual claims can be represented using the same model.

4.2.The open claims conceptual model

In line with the rationale outlined above, we introduce the Open Claims conceptual model, which distinguishes three main components of a claim represented by three central classes: (1) claim proposition, (2) claim utterance, and (3) claim context (Fig. 5).

A claim proposition is the meaning of a statement or assertion. In the context of fact-checking and argumentation mining, it is usually related to a controversial topic and supported by one person or a group of people. A claim proposition may have been expressed in many different ways and in different contexts, thus it has one or more claim utterances. For example, it might have been expressed in different languages, using different words in the same language, or uttered by different persons and/or at different points in time.

In contrast, a specific claim utterance is typically associated with only one claim proposition, i.e., it has a single meaning. However, the claim proposition can be represented in different ways, for example, by selecting a representative utterance with its context, or through a more formal model. Each claim utterance is related to a specific claim context, which includes the person who uttered the claim, the time point at which the claim was uttered, the location or the event of the utterance and the topic of the enclosing discourse. The claim context provides information to interpret the claim utterance and thus understand its proposition.

Since explicit information about the perlocution (achieved effect) and illocution (intended effect) of utterances is usually unavailable, we do not consider them in this model. They can, however, easily be added to the model as an extension.
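The three central classes and their cardinalities can be sketched as plain data structures. The class and field names below are an informal, illustrative reading of the model, not its normative OWL definition:

```python
# Illustrative sketch of the Open Claims core: one proposition has one or
# more utterances, each utterance has exactly one context. Field names are
# an informal reading of the model, chosen for this example.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ClaimContext:
    agent: Optional[str] = None      # who uttered the claim
    date: Optional[str] = None       # when it was uttered
    location: Optional[str] = None   # where it was uttered
    topic: Optional[str] = None      # topic of the enclosing discourse

@dataclass
class ClaimUtterance:
    text: str                        # linguistic representation
    source: Optional[str] = None     # medium reporting the utterance (e.g. URL)
    context: ClaimContext = field(default_factory=ClaimContext)

@dataclass
class ClaimProposition:
    representation: str              # textual (or formal) representation
    utterances: List[ClaimUtterance] = field(default_factory=list)

prop = ClaimProposition(
    representation="Britain will be paying its Brexit bill for 45 years after leaving the EU",
    utterances=[
        ClaimUtterance(
            text="We are going to be paying until 2064, apparently",
            context=ClaimContext(agent="David Dimbleby", date="2018-03-15",
                                 location="Dover", topic="Brexit")),
    ],
)
print(len(prop.utterances))  # 1
```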

Below, we provide details and the main properties of each of the three main classes (Claim Proposition, Claim Utterance, Claim Context).

An OWL implementation of the Open Claims model is available online.16 To facilitate data integration with existing relevant datasets, such as ClaimsKG [176], TweetsKB [42] and TweetsCOV19 [36], we also provide an RDF/S implementation of the model using existing vocabularies (more below in Section 4.3).

Fig. 5.

The open claims conceptual model.


4.2.1.Claim proposition

A claim proposition is the meaning of one or more claim utterances in their respective contexts. A claim proposition is associated with i) zero, one or more representations, ii) zero, one or more reviews, iii) zero, one or more attitudes, and iv) zero, one or more other claim propositions.

A representation can have the form of free text, e.g. a sentence that describes the proposition as precisely as possible, or be more formal, e.g. a first-order logic model, or the URI of a named graph pointing to a set of RDF statements.

A review is a resource (e.g. a document) that analyzes one or more check-worthy claim propositions and provides a verdict about their veracity or trustworthiness. An example of such a review is an article published by a fact-checking organization. Note here that not all factual claims have a clear verdict. For instance, the claim “the presence of a gun makes a conflict more likely to become violent” represents a hypothesis which can be linked to both supporting and contradicting evidence and is thus difficult to associate with a single overall correctness score. If a claim is associated with a review which gives a true verdict about its veracity, then the claim can be considered a fact (it represents the current knowledge about something). Non-factual claims are not linked to any reviews and have no verdicts.

An attitude is the general opinion (standpoint, support) on a given topic (e.g. a viewpoint), which often rests on a set of specific values, beliefs or principles. For instance, pro-Brexit and anti-Brexit are two different viewpoints on the Brexit topic. A claim proposition can be associated with several attitudes for different topics. For example, the proposition linked to the claim “immigrants are taking our jobs” can support both the against-immigration attitude (for the Immigration topic) and the pro-Brexit attitude (for the Brexit topic).

A claim proposition can also be associated with other claim propositions through some type of relation, e.g. same-as, opposite, part-of, etc.

4.2.2.Claim utterance

A claim utterance is the expression of a claim in a specific natural language and form, like text or speech. Among other things, it can be something said by a politician during an interview, a text within a news article written by a journalist, or a tweet posted by a celebrity about a controversial topic. It is associated with i) one or more linguistic representations (subclass of representation), ii) one or more sources, and iii) zero, one or more other claim utterances (through relations such as same-as, paraphrase, etc.).

A linguistic representation can be, for example, a text in a specific language that captures the claim as closely as possible to how it was said or written, or a sound excerpt from someone’s speech.

A source provides evidence of the claim’s existence. For instance, it can be the URL of an interview video, a news article, or a tweet, i.e. source here means the medium reporting the utterance, not the originating agent (the speaker or author, which is part of the context). For this distinction, see also Newel [122]. A linguistic representation can have one or more linguistic annotations which provide formal linguistic characteristics. For instance, an annotation can be an entity or date mentioned in the text of the claim utterance, the sentiment of the text (e.g. positive, negative, neutral), or the linguistic tone of a speech (like irony). These annotations can enable advanced exploration of the claims (e.g. based on mentioned entities) and can be manually provided by a domain expert or automatically produced using an NLP or speech processing tool (like an entity linking [164] tool for the case of entity annotations in text).
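As a toy illustration, a date mention in a claim utterance could be detected and stored with its character span; here a simple regex stands in for the NLP or speech processing tools mentioned above:

```python
# Toy sketch of producing a linguistic annotation (a date mention) for the
# text of a claim utterance. A regex is a placeholder for a real annotation
# tool; the annotation dict structure is invented for illustration.
import re

utterance = 'UK will be paying Brexit "divorce bill" until 2064'

annotations = []
for m in re.finditer(r"\b(1[89]\d{2}|2[01]\d{2})\b", utterance):  # crude year pattern
    annotations.append({"type": "date", "surface": m.group(), "span": m.span()})

print(annotations)
```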

Links between utterances can also be used to explicate their role in discourse, e.g. by using relations such as used-as-evidence-for or used-as-evidence-against to model premises, evidence, conclusions and other components and relations in argumentation. Likewise, supports and attacks relations may hold between utterances to connect stances and their targets. With this, we follow Carstens and Toni [25] and the discussion in Section 3, with the notion that whether a statement is of type evidence or another type, and whether it was uttered to express a stance, depends on its usage in the context of a discourse, e.g. its relations, rather than being an inherent property of the statement in isolation.

4.2.3.Claim context

The claim context provides background information about the claim utterance. It is associated with metadata information about the claim utterance and, together with the linguistic representation of the claim utterance, can provide an answer to the Five W’s: i) what was said (linguistic representation of claim utterance), ii) who said it (agent; person, group, organisation, etc., making the claim), iii) when it was said (date/time the claim was uttered), iv) where it was said (location where claim was uttered), and v) why it was said (event or activity in the context of which the claim was uttered, and/or the topic of the underlying discourse). The claim context provides the necessary information for interpreting the claim utterance (and thus understanding its proposition).

4.2.4.Instantiation example

Figure 6 depicts an instantiation example of the proposed conceptual model. The example shows information for two claim utterances (in pink background, in the centre of Fig. 6): i) the one by David Dimbleby (“We are going to be paying until 2064, apparently”), and ii) the one by The Independent (“UK will be paying Brexit “divorce bill” until 2064”). Both utterances correspond to the same claim proposition (in green background, left part of Fig. 6) and each one has its own context information (in yellow background, right parts of Fig. 6). The linguistic representation of the first claim utterance has been annotated with one date annotation (2064) and that of the second claim utterance with one entity annotation (United Kingdom).

The claim proposition has two representations, a textual one (“Britain will be paying its Brexit bill for 45 years after leaving the EU”) and a formal one (“cost = {of = ‘Brexit’, for = ‘UK’, amount = ?, until = 2064}”), and supports the against-Brexit viewpoint of the Brexit topic. In addition, there is a review of this claim proposition with verdict “true”, published by Full Fact (the UK’s independent fact-checking organisation). Moreover, we can see the URL of the review article as well as a reference to a document file which provides evidence for its correctness.

The context of each claim utterance provides additional metadata information about the claim. For example, we see that the first utterance was said by David Dimbleby on 15.03.2018, in the context of a debate about Brexit which took place in Dover. For the second claim utterance, the example only represents its agent (UK Office of Budget Responsibility) and date (13.03.2018).

Fig. 6.

Instantiation example of the conceptual model.


4.3.RDF/S implementation

In order to facilitate the use and operationalisation of our Open Claims Conceptual Model, we provide an RDF/S implementation using established vocabularies, depicted in Fig. 7. Vocabulary selection followed three directives: i) relying on stable term identifiers and persistent hosting, ii) being supported by a community, iii) being extensible.

As our base schema, we propose to use schema.org.17 For capturing provenance information of all generated annotations, we employ PROV.18 This includes information about the employed tools and confidence values. Claim review verdicts are part of the schema.org ClaimReview entity. Viewpoints are represented using the Marl ontology,19 which is designed to annotate and describe subjective opinions. Linguistic annotations are represented in the NLP Interchange Format (NIF).20 To cover other modalities such as video and images, we include the Web Annotation Vocabulary (OA).21
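To give a flavour of the schema.org part, the sketch below expresses a claim and its review as triples using the schema:Claim and schema:ClaimReview types. Triples are shown as plain Python tuples to keep the example dependency-free; in practice an RDF library and the additional vocabularies (PROV, Marl, NIF, OA) would be used. The example.org URIs are invented, and reviewRating is simplified to a plain literal verdict:

```python
# Sketch: a claim and its fact-check review with schema.org terms,
# represented as plain (subject, predicate, object) tuples.
SDO = "https://schema.org/"
EX = "http://example.org/"   # hypothetical namespace for this example

claim = EX + "claim/brexit-bill-2064"
review = EX + "review/1"

triples = [
    (claim, "rdf:type", SDO + "Claim"),
    (claim, SDO + "text", "UK will be paying Brexit 'divorce bill' until 2064"),
    (review, "rdf:type", SDO + "ClaimReview"),
    (review, SDO + "itemReviewed", claim),
    (review, SDO + "reviewRating", "true"),   # verdict, simplified to a literal
    (review, SDO + "author", "Full Fact"),
]

# e.g. look up the verdict(s) given for the review
verdicts = [o for s, p, o in triples
            if s == review and p == SDO + "reviewRating"]
print(verdicts)  # ['true']
```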

More details about the RDF/S implementation of the proposed conceptual model can be found in our previous work [23].

Fig. 7.

RDF/S implementation of the open claims conceptual model.


5.Related knowledge engineering tasks

In this section, we review different knowledge engineering and information extraction tasks pertaining to claim-related data, like utterances, claim verification scores, claim context information (e.g. who uttered the claim, when and where) and other claim metadata described in our Open Claims model. Figure 8 depicts how the knowledge engineering tasks discussed below map to the Open Claims model.

Fig. 8.

The open claims model annotated with related knowledge engineering tasks.


We identify three main (sometimes overlapping) categories of tasks: extraction, verification and interlinking, and position them within the context of our conceptual model. Note that we do not aim to provide an exhaustive overview of these tasks, but rather introduce examples of works from different relevant areas and show how they are positioned with respect to extracting or generating the information and relations suggested by our model. Extraction pertains to detecting statements, utterances and other components and attributes in a corpus of (mainly) textual modality. Verification pertains to the assignment of truth ratings or credibility scores to claims or other related components, such as information sources. Interlinking, finally, includes a range of tasks that aim at detecting various relations between claims or components thereof, such as same-as relations, stances or topic-relatedness.

5.1.Extraction

Given the complexity and varying definitions of what is or what constitutes a claim, a number of different knowledge extraction approaches can be associated with the tasks in each of the three groups outlined above. We follow the definition of a claim and its components as given by our model (Section 4) in order to review the existing techniques for knowledge extraction pertaining to each of these components and attributes. In parallel, we identify challenging problems that are underrepresented in the literature.

5.1.1.Extracting claim propositions

The task of extracting a claim proposition can be reformulated as assigning an identifier to a group of statements that are assumed to be semantically equivalent. Our model suggests that the meaning of a claim can be captured both by means of natural language and by formal knowledge representation frameworks, e.g. description logics.

Extracting formally represented claim propositions at different levels of formality is of main interest in the field of knowledge extraction, both from unstructured (web pages, social networks) and semi-structured (Wikipedia) sources. Populating and building KBs, and thus providing structured knowledge on the Web, has been of central interest in the NLP, web, data mining and semantic web communities over the past decades, focusing on a variety of tasks such as named entity recognition, entity linking, relation extraction or word sense disambiguation. The extensive research in this field has led to a very broad range of works. A comparison of generic information extraction tools and systems is provided by Gangemi et al. [49], while Martinez-Rodriguez et al. [110] and Ristoski and Paulheim [151] focus on semantic web approaches (aiming at the provision of structured knowledge for populating ontologies, linked data and knowledge graphs). The reader may also turn to the book on NLP methods for building the semantic web [113] as well as a recent survey on fact extraction from the web [204].

Relation extraction and ontology learning from text are surveyed by Kumar [92] and Wong et al. [206], respectively, while Atefeh and Khreich [8] dedicate their survey to the task of extracting event-related knowledge. Uren et al. [192] consider methods that take the inverse approach of annotating documents with entities or statements of fact based on existing knowledge bases. Very closely related to this work is a recent work by Al-Khatib et al. [7], who extract knowledge encapsulated in arguments to build a knowledge graph encoding positive and negative effects between concept instances, classifying the consequences as good or bad. For instance, from the claim “Nuclear energy leads to emission decline”, a positive effect of nuclear energy on emission decline would be extracted and the consequence, emission decline, rated as good. The proposed extraction framework uses a combination of supervised learning and pattern-based approaches.

If we look at textual representations of a claim, the task can be approached by first extracting textual utterances (see below), then grouping them together according to their meaning with the help of textual similarity methods (some of them described in Section 5.3.2), and finally identifying, in each cluster of semantically equivalent utterances, one that will serve as an identifier for the meaning of the claim. A formal approach to the assignment of textual identifiers to a set of equivalent claims has not been discussed in the literature, to the best of our knowledge, but the task relates closely to the text summarization task, which is surveyed by Lin and Ng [99].
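The grouping step can be sketched as a toy greedy clustering, with Jaccard token overlap standing in for the sentence-similarity models discussed later; the first member of each cluster serves as its identifier. The similarity measure, threshold and data are illustrative assumptions:

```python
# Toy sketch of grouping claim utterances into propositions by textual
# similarity: greedy clustering with Jaccard token overlap as a stand-in
# for proper semantic similarity models.
def tokens(text: str) -> set:
    return set(text.lower().replace('"', "").replace(",", "").split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def cluster(utterances, threshold=0.3):
    clusters = []  # each cluster is a list; its first member acts as identifier
    for u in utterances:
        for c in clusters:
            if jaccard(tokens(u), tokens(c[0])) >= threshold:
                c.append(u)
                break
        else:
            clusters.append([u])
    return clusters

utts = [
    "UK will be paying Brexit divorce bill until 2064",
    "We will be paying the Brexit bill until 2064",
    "Animals should have lawful rights",
]
print(len(cluster(utts)))  # 2 clusters: the two Brexit utterances are grouped
```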

Extracting viewpoints and stances Existing computational models [137] describe viewpoints via a summarization framework, able to find phrases that best reflect them. In Thonet et al. [177,178], unsupervised topic models are proposed to jointly discover viewpoints, aspects and opinions in text and social media. An unsupervised model for viewpoint detection in online debate forums, proposed in Trabelsi and Zaiane [187], favors “heterophily” over “homophily” when encoding the nature of the authors’ interactions in online debates. With respect to viewpoint detection in social media, the model by Barberá [15] groups Twitter users along a common ideological dimension based on who they follow. A graph partitioning method that exploits social interactions for the discovery of different user groups (representing distinct viewpoints) discussing a controversial topic in a social network is proposed in Quraishi et al. [145], also providing a method to explain the discovered viewpoints by detecting descriptive terms that characterize them.

Our model suggests, in line with current research, that viewpoints with respect to topics take the form of polarized opinions. Given a controversial topic, for example an issue like climate change, viewpoint discovery aims at finding the general viewpoint expressed in a piece of text or supported by a user. This task can indeed be considered a sub-task of opinion mining, which aims to analyze opinionated documents and to infer properties such as subjectivity or polarity. The survey in Pang and Lee [134] provides a general review of the opinion mining and sentiment analysis tasks. However, for some topics, there may be more than two viewpoints. So far, research studying such cases is limited.

Viewpoint extraction is closely connected to the stance detection problem, a supervised classification problem in NLP where the stance of a piece of text towards a particular target is explored. Stance detection has been applied in different contexts, including social media (stance of a tweet towards an entity or topic) [10,38,41,93,116,174,210], online debates (stance of a user post or argument/claim towards a controversial topic or statement) [13,67,167,198], and news media (stance of an article towards a claim) [20,70,141,203,216]. A recent work by Schiller et al. [158] details the different and varying task definitions found in previous works, diverging not only with regard to domains, but also classes and number and type of inputs, and introduces a benchmark for stance detection that allows the comparison of models against a variety of heterogeneous datasets. In contrast to the works on viewpoint extraction described previously, works on stance detection focus more on supervised models and textual features (like the sentiment expressed in the text, or the use of polarised words), and less on the structure of the underlying network of users or documents, which can be exploited by unsupervised approaches. For two recent surveys of stance detection works, we refer to Küçük and Can [91] and Ghosh et al. [54].
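To make the input/output of the stance detection task concrete, the following toy baseline classifies a text's stance towards a target by counting polarised cue words. The cue lists are invented for the example; the cited systems learn such features with supervised (often neural) models rather than fixed lexicons.

```python
# Toy lexicon-count baseline for stance detection: illustrates the task
# signature (text, target) -> {favor, against, none}, not a real model.
# Cue-word lists below are invented assumptions for this sketch.

PRO_CUES = {"support", "agree", "beneficial", "favor", "endorse"}
CON_CUES = {"oppose", "disagree", "harmful", "against", "reject"}

def stance(text: str, target: str) -> str:
    """Return 'favor', 'against', or 'none' for `text` towards `target`."""
    if target.lower() not in text.lower():
        return "none"                      # target not mentioned at all
    tokens = set(text.lower().split())
    score = len(tokens & PRO_CUES) - len(tokens & CON_CUES)
    return "favor" if score > 0 else "against" if score < 0 else "none"
```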

In recent work, Sen et al. [160] compare untargeted and targeted opinion mining methods (sentiment analysis, aspect-based sentiment analysis, stance detection) to infer approval of political actors in tweets. They show that the compared targeted approaches have low generalizability on unseen and unfamiliar targets and that indirectly expressed stances are hard to detect, and thus identify the need for further research in this area.

Chen et al. [29] propose the task of substantiated perspective discovery where the goal is to discover a set of perspectives and supporting evidence paragraphs that take a stance to a given input claim, and release a first dataset for this task.

5.1.2.Extracting claim utterances

Textual utterance extraction In this survey, we focus on methods for extracting information from language rather than other modalities such as speech or video. The methods discussed in the literature, with few exceptions, are tailored towards a particular context, topic or type of targeted utterances, usually referred to as claims in these works.

Identifying and extracting argumentative components such as claims (also called propositions in these works) or evidence units (also called premises) is a central task in the argumentation mining field [35,96]. The first survey on the topic by Peldszus and Stede [139] assumes the availability of an argumentative text and focuses on the problem of analyzing the underlying structure of the presented argument from two perspectives: (1) argument annotation schemes drawing from works in the classical AI field of argumentation and (2) automatic argumentation mining, discussing the first approaches that enhance the historical field with data-centered machine learning approaches. A more recent survey by Lippi and Torroni [104] provides a structured view on the existing models, methods, and applications in argumentation mining, attempting to draw a single unifying view over a plethora of related sub-tasks and dispersed efforts. The authors define the argumentation mining problem as a pipeline consisting of the detection of argument components in raw text and predicting the structure (or relations) between these components, where the former is of particular interest to the task that we consider in this section. Building on and completing these surveys, Cabrio and Villata [24] adopt a data-driven perspective of the existing work in argumentation mining with a focus on applications, algorithms, features, and resources for evaluation of state-of-the-art systems. Taking also a data-driven perspective, the difficulty of devising cross-domain claim identification approaches has been discussed and analyzed by Daxenberger et al. [35] using multiple domain-specific data sets. In doing so, the authors analyze the generalization properties of systems and features across heterogeneous domains and study their robustness across the underlying fields. Shnarch et al. [166] propose a methodology to combine smaller amounts of high-quality labeled data with noisy weakly labeled data to train neural networks for extracting evidence units for given topics.

The extraction of a claim is the first step in a computational fact-checking pipeline, where it is common to see fact verification as a three-step process: (i) detecting/extracting a check-worthy claim, (ii) reviewing the claim with respect to its veracity and (iii) publishing the reviewed claim [78,180].22 In Hassan et al. [78], the authors propose a first version of the ClaimBuster tool with a particular focus on the extraction of check-worthy claims. The claim-spotting problem is defined as a two-step task, comprising (1) classification of pieces of text as check-worthy or not and (2) their ranking with respect to their check-worthiness. An end-to-end fact-checking platform, including both steps (1) and (2), is presented in a follow-up work [76]. To overcome the limitations of using hand-crafted features for claim detection, Hansen et al. [72] propose a neural check-worthiness ranking model that represents a claim as a set of features, where each word is accounted for by its embedding (capturing its semantics) and its syntactic dependencies (capturing its relation to other terms in the sentence). The extraction of simple claims about statistical properties to be subjected to verification is addressed in Vlachos and Riedel [195]. The authors apply a distantly supervised claim identification approach that relies on approximate matching between numerical values in text and a knowledge base (Freebase). A relevant line of work has been followed in the field of subjectivity analysis of text, proposing approaches which aim at classifying sentences into objective and subjective categories, e.g., [21,205,213]. It has been shown in Hassan et al. [76] that subjectivity identifiers are limited in discerning factual claims as compared to the method presented by ClaimBuster.
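The two-step claim-spotting setup described above, scoring sentences for check-worthiness and then ranking them, can be sketched as follows. The scoring function here is a stand-in heuristic (numeric content and assertive verbs hint at verifiable claims); ClaimBuster and later neural rankers learn this scoring from labeled data.

```python
# Sketch of claim spotting: (1) score sentences for check-worthiness,
# (2) rank them by that score. The heuristic features are illustrative
# assumptions, not the learned features of the cited systems.
import re

def checkworthiness(sentence: str) -> float:
    """Heuristic score: statistics and assertive auxiliaries suggest a
    verifiable factual claim; questions score low."""
    score = 0.0
    if re.search(r"\d", sentence):
        score += 0.5                                   # contains a number
    if re.search(r"\b(is|are|was|were|has|have|will)\b", sentence.lower()):
        score += 0.3                                   # assertive auxiliary
    if sentence.strip().endswith("?"):
        score -= 0.5                                   # questions are rarely claims
    return score

def rank_checkworthy(sentences):
    """Step (2): order sentences by descending check-worthiness."""
    return sorted(sentences, key=checkworthiness, reverse=True)
```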

Annotating utterances In our model, we discuss an annotation of utterances based on (1) entities (such as names, dates, locations, etc.) and (2) lower-level linguistic features extracted from the text that can be useful for a number of tasks, such as bias detection or fake-news analysis, as discussed in Rashkin et al. [147]. For (1), one can turn to the literature surrounding (end-to-end) Entity Linking,23 particularly the exhaustive survey in Sevgili et al. [161]. The features in (2) include characteristics of the discourse, such as shades of irony or the overall polarity score of the expression, as well as linguistic or syntactic cues (part-of-speech (POS) tags, syntax, dependencies, semantic parsing, punctuation or capitalization) that can be indicative of a certain intention. For the identification of such cues, one can turn to NLP annotation pipelines (with standardized annotation type taxonomies). The industrial standard is UIMA (Unstructured Information Management Applications) [44], a comprehensive meta-framework for inter-operable linguistic annotation. Recent developments in deep approaches to NLP have led to ad-hoc annotation models such as spaCy.24
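As a minimal sketch of the lower-level feature annotation in (2), the function below extracts only surface cues (punctuation, capitalisation) of the kind listed above. Full pipelines such as UIMA or spaCy additionally provide POS tags, dependencies and entity spans; the feature names here are invented for illustration.

```python
# Sketch: surface-cue features of an utterance (punctuation and
# capitalisation signals). A real annotation pipeline would add POS
# tags, syntactic dependencies and linked entities.

def surface_features(utterance: str) -> dict:
    tokens = utterance.split()
    return {
        "n_tokens": len(tokens),
        "exclamation": utterance.count("!"),   # emphatic punctuation
        "question": utterance.count("?"),
        # shouting-style tokens can signal emotional or biased language
        "all_caps_tokens": sum(t.isupper() and len(t) > 1 for t in tokens),
        "capitalised_tokens": sum(t[:1].isupper() for t in tokens),
    }
```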

Claim utterance source extraction Sources are identified as the media that publish a claim. Their extraction can be straightforward in many cases (e.g. when the utterance itself is extracted directly from its source). In certain cases, identifying the original source may be more challenging and require tracking the claim down to its original publication by, e.g., following cascades of retweets or identifying and analysing quotations [121,126,173,197].

5.1.3.Extracting claim context

This group of approaches deals with annotating a claim with contextual information that helps answer the questions of who uttered the claim, when, and where. In order to extract a date or a location, one can rely on Entity Linking (EL) or Named Entity Recognition (NER) techniques outlined in the previous section. We focus in more detail on the tasks of event detection, topic detection, and author identification and attribution.

Event detection The event in which a claim was uttered is an important component of the context that defines a claim. An event can be seen as a complex entity defined by a set of attributes, such as a date, the persons involved and a location. Following this definition, one can apply the methods described in the previous paragraph to extract these components independently and populate an event. However, recent approaches consider an event as an atomic entity that can be detected from web corpora (often social networks) [30,73].

Topic detection Detecting what claims are about is a challenging issue. If available, context such as the source articles the claim was extracted from, a claim review article, or the discourse the utterance was embedded in, e.g. the given subject in a debating portal, can be considered for claim topic detection. Here, standard NLP methods of topic extraction, modeling or detection from text can be employed [110]. However, detecting the topic when only the textual content of a claim utterance can be considered, or when the textual context is sparse, is challenging.

Approaches developed for extracting topics from short text (like tweets and micro-blogs) can be adapted for claim topic modeling [168]. However, the complex structure and positioning in a context of elements (such as sources, authors and other entities) has to be taken into consideration when predicting topics of claims. Topics can be seen as groups of equivalent claims (e.g. all claims pertaining to “US immigration policies”) situated in a network of contextual entities (e.g. a knowledge graph such as the one given in our model implementation example in Fig. 7). Therefore, link prediction methods on knowledge graphs may be used, where a recent work by Beretta et al. [17] studies the effectiveness of neural graph embedding features for claim topic prediction as well as their complementarity with text embeddings. The authors show, however, that state-of-the-art link prediction models fail to capture equivalence structures and transfer poorly to downstream tasks such as claim topic prediction, which may also be connected to the lack of sufficiently large and reliable ground truth data (topic-labeled claims) for training neural embedding models. This calls for novel methods that go beyond the local link prediction objective of state-of-the-art graph embedding models, an objective that likely limits their ability to capture more complex relationships (e.g. equivalence cliques between claims, keywords and topic concepts), as well as for the generation of suitable ground truth data.

Author identification and extraction Identifying the author of an utterance is not trivial [11], yet authorship is crucial for interpreting its meaning. Moreover, claims are often quoted by distant sources, e.g. in news articles or other media. The attribution of content to an author25 is consequently gaining increased attention in the context of the analysis of news articles, e.g. by Newell et al. and Salway et al. [121,155] who build structured databases of claims with extracted quotes and author information. Approaches for quotation extraction and attribution from newspaper articles, for both direct and indirect speech, usually comprise three component identification steps: (1) cue phrases signalling the presence of a quotation (e.g. “say” or “criticize”) are identified using manually curated word lists [90] or classifiers trained on labelled data [121,135,156]. On this basis, (2) quotation content spans are identified using manually defined syntactic rules [90], conditional random fields (CRFs) [121,135] or semi-Markov models [156]. Finally, (3) author entities are identified, typically using sequence models such as CRFs [121,128,135]. Building on this, Newell et al. [121] extend O’Keefe et al.’s [128] sequence-based quote attribution to a two-stage approach using maximum entropy classifiers for connecting cue and content spans and cue and author spans, respectively, allowing multiple content and cue spans to take part in an attribution relation. A different approach is followed by Pavllo et al. [138] who employ pattern-based bootstrapping to extract quotation-speaker pairs. A recent paper by Jiang et al. [85] extracts structured information from fact-checking articles, including the “claimant”. This corresponds to either the source or the author of the claim, depending on which of the two is mentioned in the fact-check; usually, this distinction is not made there.
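A pattern-based variant of the three-step pipeline above can be sketched with regular expressions: a cue verb from a hand-curated list signals a quotation, a pattern captures the quoted content span, and the preceding capitalised phrase is (naively) taken as the author span. The cue list and pattern are illustrative; the cited systems replace each step with learned sequence models (CRFs, semi-Markov models).

```python
# Sketch of pattern-based quotation extraction and attribution:
# <Author> <cue>, "<content>". Cue verbs and the author heuristic
# (preceding capitalised words) are simplifying assumptions.
import re

CUE_VERBS = r"(?:said|says|claimed|criticized|argued)"

QUOTE_RE = re.compile(
    r"(?P<author>[A-Z][\w.]*(?:\s[A-Z][\w.]*)*)\s+" + CUE_VERBS +
    r'[,:]?\s+["“](?P<content>[^"”]+)["”]'
)

def extract_quotes(text: str):
    """Return (author, quoted content) pairs found in `text`."""
    return [(m.group("author"), m.group("content"))
            for m in QUOTE_RE.finditer(text)]
```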

5.2.Claim verification

A number of terms, such as fact-checking, truth discovery, claim or fact verification, pertain to a large degree to the process of the automatic assignment of a veracity score to a statement uttered by a particular person or a group of people [180]. Note that the analysis of misinformation spread, or fake-news detection,26 defined and surveyed in Sharma et al. [163], often deal with entire news articles or outlets and are, therefore, broader problems in which claim verification can be seen as one ingredient.

Claim truthfulness verification is reviewed in Cazalens et al. and Thorne et al. [26,180], where [180] in particular propose to unify diverging definitions of the task and its components from various disciplines, such as NLP, machine learning, knowledge representation, databases, and journalism. Indeed, most of the existing techniques rely on background knowledge sources (e.g. encyclopedic knowledge graphs, such as DBpedia or Freebase) that provide a “truthfulness context” [78,179] and on a combination of various computational methods in order to infer the veracity scores of a claim either from those background knowledge sources or, more rarely, in a self-contained manner. In addition, versatile features pertaining to all three main components of our model (meaning, utterance and context) are often considered in a combined manner, making it difficult to break down claim verification approaches along each of these three axes independently.

In certain cases, claims are given a structured form (e.g. triples or database queries), which allows for the verification of entity-centric information by calling on machine learning techniques [214]. In this setting, fact verification can be seen as a particular kind of link prediction or knowledge base augmentation task [31,165]. In contrast, certain methods apply symbolic inference approaches on KGs in order to infer the truth value of a statement [18], or to identify potential errors [48]. A multitude of features, machine learning models and inference techniques are combined in the KB construction approach presented in [37].
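A toy illustration of verifying a structured claim against a background knowledge graph: the mini-KB, the `capital_of` relation and the functional-property rule below are invented for the example. Real systems instead score unseen triples with link prediction models or symbolic inference over large KGs.

```python
# Sketch: verify a (subject, predicate, object) triple against a tiny
# in-memory KB. The KB contents and the single functional constraint
# are illustrative assumptions.

KB = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def verify_triple(s: str, p: str, o: str) -> str:
    if (s, p, o) in KB:
        return "supported"
    # Functional property: a city is capital of exactly one country,
    # so a conflicting stored value refutes the claim.
    if p == "capital_of" and any(s2 == s and p2 == p for s2, p2, _ in KB):
        return "refuted"
    return "unknown"   # the KB is incomplete: absence is not falsity
```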

In other cases, statements are taken in their textual form [143,201], with machine learning techniques again being largely applied in order to assess their veracity. Training data in the form of examples of true and false claims come either from archives of fact-checked statements [16,194,201] or from manually labelled (often crowdsourced) collections of claims [59,114,182]. Statistical (topic) models as well as standard NLP filters are used in order to construct a feature space. Note that the majority of approaches based on machine learning rely primarily on highly contextualized features on document/text level, such as words, n-grams, salient entities and topics [75]. Additional context- and aspect-related features such as provenance, time and sources are considered in Popat et al. and Vlachos and Riedel [143,194]. An analysis of news corpora is provided by Rashkin et al. [147] in an attempt to identify linguistic and stylistic cues that help discriminate facts from false information. In addition, certain approaches, like [212], look at how a claim spreads through a crowd or how sources and claims are connected, exploiting social/community-related features.

5.3.Interlinking

There exist a variety of types of relations between claims and in particular between their components as introduced in our conceptual model. We consider that the problem of claim relatedness depends on the particular perspective and application context – for example, two claims can be considered contextually similar because they have been uttered at the same event by the same person, but still differ in their meaning and textual expression. Following the main building components of our model, we identify a number of dimensions on which this problem can be studied. One could be interested in relating instances of propositions, utterances or contexts within each of these three groups. These are the kind of relations that will be discussed in this section. Alternatively, one can look into cross-class relations (e.g. establishing the association between an utterance and its author or viewpoint). Such relations result from knowledge extraction processes already discussed in Section 5.1. Although most of these problems are challenging and little existing work approaches them directly, we outline relevant works below.

5.3.1.Relating propositions

According to our model, the proposition, or meaning, of a claim is materialized via a particular representation (e.g. a natural language or a logical expression) and is further described by its topics to which we associate viewpoints. As discussed in Section 5.1, different extraction methods can be applied in order to derive those representations. Independently from the particular representation type, we outline three general types of relations that we can establish between proposition instances: equivalence (same-as), similarity and relatedness.

Same-as. The equivalence or identity relation binds together claims that have the exact same meaning. In the case of textual expression of the meaning of a claim, when two propositions are expressed differently although they convey the same message (have the same meaning), we talk of a relation of paraphrase. Paraphrase detection enables the discovery of equivalent text fragments that differ (to a given extent) in their wording; neural language models are currently largely applied to this task [106,220]. In the case of a symbolic or formal expression of a claim (or a fact), we outline works on relation alignment, such as Pereira et al. [140].

Similar. Two propositions can be similar to a given degree on a scale between “identical” (represented by the same-as relation) and “dissimilar”. This notion relates to that of semantic similarity discussed, for example, in Gracia and Mena [60] and tackled in the Semantic Textual Similarity task [2,27]. A first systematic study on finding similar claims is proposed by Dumani and Schenkel [39].

Related. Relatedness, as opposed to similarity, covers “any kind of lexical or functional association” [60] and is, hence, a more general concept than semantic similarity. Relatedness encompasses various relationships, such as meronymy (a part-of relation of composition, whereby the meaning of a complex expression can be expressed through the meanings of the parts from which it is constructed), antonymy (opposite meanings, including conflicting/contradicting claims), logical or textual entailment [34], same topic, or any kind of functional relationship or frequent association. A survey of semantic relatedness methods, evaluation and datasets is given by Zhang et al. [219] and Hadj-Taieb et al. [69]. As opposed to logical entailment, textual entailment is understood as a relationship between pairs of text fragments where one entails the other if a human reading the former would be able to infer that the latter holds.

5.3.2.Relating utterances

Several works address finding equivalent claims in the context of claim verification [76,78,109], where a claim matcher (or linker) is a component of a fact-checking system matching new claims to claims that have already been checked. Shaar et al. [162] recently proposed the task of detecting previously fact-checked claims defined as ranking a set of verified claims according to their potential to help verify an input claim. They propose a learning-to-rank approach and release a first dataset for the task. Clustering similar arguments is at the core of the work by Reimers et al. [148] who use contextualized word embeddings to classify arguments as pro or con and identify arguments that address the same aspect of a topic.

Concerning the matching of text fragments more generally, recent advances in neural NLP and the advent of deep contextualized language models for language understanding have renewed the state of the art: classical [132] and contextualized [81,105] word embeddings are pooled or aggregated into phrase, sentence or document embeddings [4,5], over which distance metrics are computed to find the closest matching utterances.
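The pooling-and-distance scheme just described can be sketched in a few lines: average the word vectors of each utterance into a sentence vector and retrieve the nearest candidate by cosine similarity. The 3-dimensional toy embedding table is invented for the example; real systems pool pretrained (contextual) embeddings of much higher dimension.

```python
# Sketch of utterance matching via pooled embeddings: mean-pool word
# vectors into a sentence vector, then rank candidates by cosine
# similarity. WORD_VEC is an invented toy embedding table.
import math

WORD_VEC = {
    "nuclear": [1.0, 0.1, 0.0], "atomic": [0.9, 0.2, 0.0],
    "energy":  [0.8, 0.0, 0.1], "power":  [0.7, 0.1, 0.1],
    "cats":    [0.0, 1.0, 0.9],
}

def embed(utterance: str):
    """Mean-pool known word vectors into an utterance vector."""
    vecs = [WORD_VEC[w] for w in utterance.lower().split() if w in WORD_VEC]
    return [sum(d) / len(vecs) for d in zip(*vecs)] if vecs else [0.0] * 3

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def closest(query: str, candidates):
    """Return the candidate utterance closest to the query."""
    return max(candidates, key=lambda c: cosine(embed(query), embed(c)))
```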

In the context of the Open Claims model, relations between utterances can further be derived from the relations between their constituents. For example, an utterance is a repetition of another utterance when proposition and representation are equal but at least one attribute of the context, such as the author or the date, differs. An utterance is a paraphrase of another utterance when the propositions are equal but the (linguistic) representations differ. Deriving relations when some of the constituents are similar or related, rather than equal, remains a question for further research.
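The constituent-based rules above can be stated directly in code. The utterance record below is deliberately simplified (a proposition identifier, the surface text, and author/date standing in for the full Open Claims context); the field and relation names are ours, for illustration.

```python
# Sketch of deriving utterance relations from their constituents,
# following the repetition/paraphrase rules of the Open Claims model.
from dataclasses import dataclass

@dataclass
class Utterance:
    proposition: str   # identifier of the claim's meaning
    text: str          # linguistic representation
    author: str        # context attribute (simplified)
    date: str          # context attribute (simplified)

def relation(u: Utterance, v: Utterance) -> str:
    if u.proposition != v.proposition:
        return "unrelated"
    if u.text == v.text and (u.author, u.date) != (v.author, v.date):
        return "repetition"    # same content, different context
    if u.text != v.text:
        return "paraphrase"    # same meaning, different wording
    return "identical"
```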

Other types of relations comprise support/attack relations or pro/con stances. Many works treat this as an extraction and classification task, e.g. classifying an argumentative unit as evidence (see Section 5.1), while others treat this as an argumentative relation extraction task, e.g. relating two units with a supports or attacks relation [25,124,129].

5.3.3.Relating contexts

A context is broken down into its constituents: events, entities, dates, etc. Establishing links among contexts comes down to linking their respective components. For that purpose, one may call upon state-of-the-art approaches to data linking, for which, after years of research and practice, a wealth of methodological approaches and tools is available [119]. Among those, property-centric approaches (e.g. [84,123]) can be of particular interest in order to establish relations (like identity or overlap) between different contexts, comparing their elements individually with the help of well-suited similarity measures (e.g. measures of similarity between proper names or dates).
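A property-centric comparison of contexts can be sketched as below: each attribute is compared with a type-appropriate measure (string-edit ratio for names, exact match for dates) and the scores are aggregated into a link decision. The attribute keys, weights and threshold are illustrative assumptions, not taken from the cited works.

```python
# Sketch of property-centric context linking: per-attribute similarity
# measures aggregated into an identity/overlap decision.
from difflib import SequenceMatcher

def name_sim(a: str, b: str) -> float:
    """Edit-based similarity, suitable for proper names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def contexts_match(c1: dict, c2: dict, threshold: float = 0.75) -> bool:
    """c1, c2: dicts with 'speaker', 'event' and 'date' keys (assumed)."""
    if c1["date"] != c2["date"]:            # dates compared strictly
        return False
    score = (0.5 * name_sim(c1["speaker"], c2["speaker"])
             + 0.5 * name_sim(c1["event"], c2["event"]))
    return score >= threshold
```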

6.Conclusion

This paper bridges the gap between various disciplines involved with online discourse analysis from a range of perspectives by (a) surveying definitions of claims, facts and related concepts across different research areas and communities, (b) establishing a shared conceptualisation and vocabulary in this context and (c) discussing a range of tasks involved with such notions, for instance, for extracting or interlinking related concepts through NLP techniques. We contribute to a shared understanding of a wide range of disparate yet strongly related research areas, facilitating a deeper understanding of shared methods, approaches and concepts and the potential for reuse and cross-fertilisation across communities. Below, we highlight under-researched areas and potential future directions.

Currently, a framework for claim relatedness and similarity is missing. Several works from different fields appear to deal with the problem from different perspectives, but an approach that takes into consideration the various aspects of a claim, as well as its various representations, as defined in our Open Claims model in order to discover claim relatedness or similarity of different kinds is yet to be proposed. While there are works addressing the extraction of structured information from claims [85], allowing for example the detection of nested claims, current fact-checking methods and sites largely ignore such issues, although a complex claim can have a different (or no) truth rating compared to its constituents. Consider, for instance, “Colin Kaepernick says Winston Churchill said, “A lie gets halfway around the world before the truth has a chance to get its pants on.””27 The claim that Kaepernick uttered this is true, while the claim within the claim is false, since Churchill never said that. Using the proposed model, such cases can be modeled precisely and unambiguously.
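As a minimal sketch of keeping the two ratings apart, nested claims can be represented as linked records, so that the attribution claim and the quoted inner claim each carry their own truth rating. The `Claim` record and its field names are invented for illustration; the Open Claims model itself is expressed with established vocabularies rather than code.

```python
# Sketch: a nested claim as linked records with independent ratings,
# modeling the Kaepernick/Churchill example from the text.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    text: str
    author: str
    rating: Optional[str] = None       # e.g. "true", "false"; None = unchecked
    quotes: Optional["Claim"] = None   # nested claim, if any

inner = Claim(
    text=("A lie gets halfway around the world before the truth "
          "has a chance to get its pants on."),
    author="Winston Churchill",        # misattributed in the example
    rating="false",                    # Churchill never said it
)
outer = Claim(
    text="Winston Churchill said: ...",
    author="Colin Kaepernick",
    rating="true",                     # Kaepernick did utter this
    quotes=inner,
)
```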

Given the subtle differences between claims, where meaning often derives from subtext and context, disambiguating claim representations, e.g. when mapping novel claims to knowledge bases of fact-checked claims, appears challenging. Even for humans, deciding on the type of relationship of two claims is a non-trivial task. For example, the claim “Interest on the debt will exceed defense spending by 2022” provides an exact date, while “Interest on debt will exceed security spending” does not provide a date. Can these two claims be considered the same, and, if not, what is their relation? Using the proposed model, such subtle differences can be made explicit. In addition, automated fact-checking has the potential to aggravate the problem given its lack of maturity at different steps: for instance, the classification of half-correct or poorly disambiguated claims as correct may introduce further false claims into the wild.

Similarly, the process of stance detection is challenging, as it has been shown to not work well for the minority class, i.e. documents disagreeing with the claim [70,154], and for unseen targets [160]. Little research in viewpoint discovery deals with extracting viewpoints for more than two polarized positions, a topic worth researching for the analysis of debates.

Detecting claim topics and linking those to a specific commonly shared vocabulary or thesaurus of topics (like, e.g., the TheSoZ [215] or the Unesco28 thesauri) appears to be a difficult and under-researched topic that promises to enhance claim retrieval, improve search and interoperability across sources, and facilitate access to currently existing or yet to be constructed structured resources of claims [176].

Generally speaking, considering the wide variety of methods and datasets involving claims and related notions, adopting a shared and well-defined vocabulary has the potential to significantly increase impact and reuse of research methods and data.

Notes

25 Coined “source” in the respective works; in order to not confuse different terminologies, we are referring to these entities as “authors” in the following text although this diverges from the naming used in the literature in this field.

26 “False and often sensational information disseminated under the guise of news reporting”, according to Collins English Dictionary.

References

[1] 

P. Accuosto and H. Saggion, Transferring knowledge from discourse to arguments: A case study with scientific abstracts, in: Proceedings of the 6th Workshop on Argument Mining, Association for Computational Linguistics, Florence, Italy, 2019, pp. 41–51. [Online]. Available at https://www.aclweb.org/anthology/W19-4505. doi:10.18653/v1/W19-4505.

[2] 

E. Agirre, C. Banea, D. Cer, M. Diab, A. Gonzalez-Agirre, R. Mihalcea, G. Rigau and J. Wiebe, SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation, in: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), Association for Computational Linguistics, San Diego, California, 2016, pp. 497–511. [Online]. Available at https://www.aclweb.org/anthology/S16-1081. doi:10.18653/v1/S16-1081.

[3] 

E. Aharoni, A. Polnarov, T. Lavee, D. Hershcovich, R. Levy, R. Rinott, D. Gutfreund and N. Slonim, A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics, in: Proceedings of the First Workshop on Argumentation Mining, Association for Computational Linguistics, Baltimore, Maryland, 2014, pp. 64–68. [Online]. Available at http://aclweb.org/anthology/W14-2109. doi:10.3115/v1/W14-2109.

[4] 

A. Akbik, T. Bergmann and R. Vollgraf, Pooled contextualized embeddings for named entity recognition, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, 2019, pp. 724–728. [Online]. Available at https://www.aclweb.org/anthology/N19-1078. doi:10.18653/v1/N19-1078.

[5] 

A. Akbik, D. Blythe and R. Vollgraf, Contextual string embeddings for sequence labeling, in: Proceedings of the 27th International Conference on Computational Linguistics, Association for Computational Linguistics, Santa Fe, New Mexico, USA, 2018, pp. 1638–1649, [Online]. Available at https://www.aclweb.org/anthology/C18-1139.

[6] 

M. Al-Bakri, M. Atencia, S. Lalande and M.-C. Rousset, Inferring same-as facts from linked data: An iterative import-by-query approach, in: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI’15, 2015, pp. 9–15, [Online]. Available at https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9508/9218.

[7] 

K. Al-Khatib, Y. Hou, H. Wachsmuth, C. Jochim, F. Bonin and B. Stein, End-to-end argumentation knowledge graph construction, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, 2020, pp. 7367–7374. [Online]. Available at https://ojs.aaai.org/index.php/AAAI/article/view/6231. doi:10.1609/aaai.v34i05.6231.

[8] 

F. Atefeh and W. Khreich, A survey of techniques for event detection in Twitter, Computational Intelligence 31(1) (2015), 132–164. doi:10.1111/coin.12017.

[9] 

I. Augenstein, D. Maynard and F. Ciravegna, Distantly supervised web relation extraction for knowledge base population, Semantic Web 7(4) (2016), 335–349. [Online]. Available at https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/SW-150180. doi:10.3233/SW-150180.

[10] 

I. Augenstein, T. Rocktäschel, A. Vlachos and K. Bontcheva, Stance detection with bidirectional conditional encoding, in: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Austin, Texas, 2016, pp. 876–885, [Online]. Available at https://www.aclweb.org/anthology/D16-1084. doi:10.18653/v1/D16-1084.

[11] 

M. Babakar and W. Moy, The State of Automated Factchecking – How to make factchecking dramatically more effective with technology we have now, 2016, Full Fact, Tech. Rep. [Online]. Available at https://fullfact.org/media/uploads/full_fact-the_state_of_automated_factchecking_aug_2016.pdf.

[12] 

I. Balazevic, C. Allen and T. Hospedales, TuckER: Tensor factorization for knowledge graph completion, in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics, Hong Kong, China, 2019, pp. 5184–5193. [Online]. Available at https://www.aclweb.org/anthology/D19-1522. doi:10.18653/v1/D19-1522.

[13] 

R. Bar-Haim, I. Bhattacharya, F. Dinuzzo, A. Saha and N. Slonim, Stance classification of context-dependent claims, in: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Association for Computational Linguistics, Valencia, Spain, 2017, pp. 251–261, [Online]. Available at https://www.aclweb.org/anthology/E17-1024.

[14] 

R. Bar-Haim, L. Edelstein, C. Jochim and N. Slonim, Improving claim stance classification with lexical knowledge expansion and context utilization, in: Proceedings of the 4th Workshop on Argument Mining, Association for Computational Linguistics, Copenhagen, Denmark, 2017, pp. 32–38, [Online]. Available at http://aclweb.org/anthology/W17-5104. doi:10.18653/v1/W17-5104.

[15] 

P. Barberá, Birds of the same feather tweet together: Bayesian ideal point estimation using Twitter data, Political Analysis 23(1) (2015), 76–91. doi:10.1093/pan/mpu011.

[16] 

A. Barrón-Cedeño, T. Elsayed, R. Suwaileh, L. Màrquez, P. Atanasova, W. Zaghouani, S. Kyuchukov, G. Da San Martino and P. Nakov, Overview of the CLEF-2018 CheckThat! Lab on automatic identification and verification of political claims. Task 2: Factuality, in: Working Notes of CLEF 2018 – Conference and Labs of the Evaluation Forum, CLEF 2018 Working Notes, 2018, p. 13, [Online]. Available at http://ceur-ws.org/Vol-2125/invited_paper_14.pdf.

[17] 

V. Beretta, S. Harispe, K. Boland, L. Lo Seen, K. Todorov and A. Tchechmedjiev, Can knowledge graph embeddings tell us what fact-checked claims are about? in: Proceedings of the First Workshop on Insights from Negative Results in NLP, Association for Computational Linguistics, 2020, pp. 71–75. [Online]. Available at https://www.aclweb.org/anthology/2020.insights-1.11. doi:10.18653/v1/2020.insights-1.11.

[18] 

V. Beretta, S. Harispe, S. Ranwez and I. Mougenot, Combining truth discovery and RDF knowledge bases to their mutual advantage, in: The Semantic Web – ISWC 2018, D. Vrandečić, K. Bontcheva, M.C. Suárez-Figueroa, V. Presutti, I. Celino, M. Sabou, L.-A. Kaffee and E. Simperl, eds, Lecture Notes in Computer Science, Springer International Publishing, Cham, 2018, pp. 652–668. doi:10.1007/978-3-030-00671-6_38.

[19] 

P. Besnard and A. Hunter, Elements of Argumentation, The MIT Press, 2008. doi:10.7551/mitpress/9780262026437.001.0001.

[20] 

G. Bhatt, A. Sharma, S. Sharma, A. Nagpal, B. Raman and A. Mittal, Combining neural, statistical and external features for fake news stance identification, in: Companion Proceedings of the Web Conference 2018, WWW’18, International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 2018, pp. 1353–1357. doi:10.1145/3184558.3191577.

[21] 

P. Biyani, S. Bhatia, C. Caragea and P. Mitra, Using non-lexical features for identifying factual and opinionative threads in online forums, Knowledge-Based Systems 69 (2014), 170–178. [Online]. Available at http://www.sciencedirect.com/science/article/pii/S0950705114001786. doi:10.1016/j.knosys.2014.04.048.

[22] 

D.M. Blei and J.D. McAuliffe, Supervised topic models, in: Proceedings of the 20th International Conference on Neural Information Processing Systems, NIPS’07, Curran Associates Inc., Red Hook, NY, USA, 2007, pp. 121–128. [Online]. Available at https://arxiv.org/abs/1003.0783.

[23] 

K. Boland, P. Fafalios, A. Tchechmedjiev, K. Todorov and S. Dietze, Modeling and contextualizing claims, in: Second International Workshop on Contextualized Knowledge Graphs (CKG2019) @ ISWC, 2019, [Online]. Available at http://ceur-ws.org/Vol-2599/CKG2019_paper_1.pdf.

[24] 

E. Cabrio and S. Villata, Five years of argument mining: A data-driven analysis, in: Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI’18, AAAI Press, Stockholm, Sweden, 2018, pp. 5427–5433. doi:10.24963/ijcai.2018/766.

[25] 

L. Carstens and F. Toni, Towards relation based argumentation mining, in: Proceedings of the 2nd Workshop on Argumentation Mining, Association for Computational Linguistics, Denver, CO, 2015, pp. 29–34. [Online]. Available at http://aclweb.org/anthology/W15-0504. doi:10.3115/v1/W15-0504.

[26] 

S. Cazalens, P. Lamarre, J. Leblay, I. Manolescu and X. Tannier, A content management perspective on fact-checking, in: Companion Proceedings of the Web Conference 2018, WWW’18, International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 2018, pp. 565–574. doi:10.1145/3184558.3188727.

[27] 

D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio and L. Specia, SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation, in: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Association for Computational Linguistics, Vancouver, Canada, 2017, pp. 1–14. [Online]. Available at https://www.aclweb.org/anthology/S17-2001. doi:10.18653/v1/S17-2001.

[28] 

M. Chen, W. Zhang, W. Zhang, Q. Chen and H. Chen, Meta relational learning for few-shot link prediction in knowledge graphs, in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics, Hong Kong, China, 2019, pp. 4216–4225. [Online]. Available at https://www.aclweb.org/anthology/D19-1431. doi:10.18653/v1/D19-1431.

[29] 

S. Chen, D. Khashabi, W. Yin, C. Callison-Burch and D. Roth, Seeing things from a different angle: Discovering diverse perspectives about claims, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, 2019, pp. 542–557. [Online]. Available at https://aclanthology.org/N19-1053. doi:10.18653/v1/N19-1053.

[30] 

X. Chen, S. Wang, Y. Tang and T. Hao, A bibliometric analysis of event detection in social media, Online Information Review 43(1) (2019), 29–52. doi:10.1108/OIR-03-2018-0068.

[31] 

G.L. Ciampaglia, P. Shiralkar, L.M. Rocha, J. Bollen, F. Menczer and A. Flammini, Computational fact checking from knowledge networks, PLOS ONE 10(6) (2015), e0128193. [Online]. Available at https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0128193. doi:10.1371/journal.pone.0128193.

[32] 

R. Clancy, I.F. Ilyas and J. Lin, Scalable knowledge graph construction from text collections, in: Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Hong Kong, China, 2019, pp. 39–46. [Online]. Available at https://www.aclweb.org/anthology/D19-6607. doi:10.18653/v1/D19-6607.

[33] 

S. Cohen, C. Li, J. Yang and C. Yu, Computational journalism: A call to arms to database researchers, in: 5th Biennial Conference on Innovative Data Systems Research (CIDR’11), Asilomar, California, USA, 2011, pp. 148–151.

[34] 

I. Dagan, O. Glickman and B. Magnini, The PASCAL recognising textual entailment challenge, in: Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, J. Quiñonero-Candela, I. Dagan, B. Magnini and F. d’Alché-Buc, eds, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2006, pp. 177–190. doi:10.1007/11736790_9.

[35] 

J. Daxenberger, S. Eger, I. Habernal, C. Stab and I. Gurevych, What is the essence of a claim? Cross-domain claim identification, in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Copenhagen, Denmark, 2017, pp. 2055–2066. [Online]. Available at http://aclweb.org/anthology/D17-1218. doi:10.18653/v1/D17-1218.

[36] 

D. Dimitrov, E. Baran, P. Fafalios, R. Yu, X. Zhu, M. Zloch and S. Dietze, TweetsCOV19 – a knowledge base of semantically annotated tweets about the Covid-19 pandemic, in: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, CIKM’20, Association for Computing Machinery, New York, NY, USA, 2020, pp. 2991–2998. doi:10.1145/3340531.3412765.

[37] 

X. Dong, E. Gabrilovich, G. Heitz, W. Horn, N. Lao, K. Murphy, T. Strohmann, S. Sun and W. Zhang, Knowledge vault: A web-scale approach to probabilistic knowledge fusion, in: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining – KDD’14, ACM Press, New York, New York, USA, 2014, pp. 601–610. [Online]. Available at http://dl.acm.org/citation.cfm?doid=2623330.2623623. doi:10.1145/2623330.2623623.

[38] 

J. Du, R. Xu, Y. He and L. Gui, Stance classification with target-specific neural attention networks, in: Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI’17, AAAI Press, Melbourne, Australia, 2017, pp. 3988–3994. doi:10.24963/ijcai.2017/557.

[39] 

L. Dumani and R. Schenkel, A systematic comparison of methods for finding good premises for claims, in: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR’19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 957–960. doi:10.1145/3331184.3331282.

[40] 

E. Durmus, F. Ladhak and C. Cardie, Determining relative argument specificity and stance for complex argumentative structures, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy, 2019, pp. 4630–4641. [Online]. Available at https://www.aclweb.org/anthology/P19-1456. doi:10.18653/v1/P19-1456.

[41] 

J. Ebrahimi, D. Dou and D. Lowd, Weakly supervised tweet stance classification by relational bootstrapping, in: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Austin, Texas, 2016, pp. 1012–1017, [Online]. Available at https://www.aclweb.org/anthology/D16-1105. doi:10.18653/v1/D16-1105.

[42] 

P. Fafalios, V. Iosifidis, E. Ntoutsi and S. Dietze, TweetsKB: A public and large-scale RDF corpus of annotated tweets, in: European Semantic Web Conference, Springer, 2018, pp. 177–190. doi:10.1007/978-3-319-93417-4_12.

[43] 

W. Ferreira and A. Vlachos, Emergent: A novel data-set for stance classification, in: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, San Diego, California, 2016, pp. 1163–1168. [Online]. Available at http://aclweb.org/anthology/N16-1138. doi:10.18653/v1/N16-1138.

[44] 

D. Ferrucci and A. Lally, UIMA: An architectural approach to unstructured information processing in the corporate research environment, Natural Language Engineering 10(3–4) (2004), 327–348. doi:10.1017/S1351324904003523.

[45] 

C. Fierro, C. Fuentes, J. Pérez and M. Quezada, 200K+ crowdsourced political arguments for a new Chilean constitution, in: Proceedings of the 4th Workshop on Argument Mining, Association for Computational Linguistics, Copenhagen, Denmark, 2017, pp. 1–10. [Online]. Available at http://aclweb.org/anthology/W17-5101. doi:10.18653/v1/W17-5101.

[46] 

V. Fionda and G. Pirrò, Fact checking via evidence patterns, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Stockholm, Sweden, 2018, pp. 3755–3761. [Online]. Available at https://www.ijcai.org/proceedings/2018/522. doi:10.24963/ijcai.2018/522.

[47] 

M.H. Gad-Elrab, D. Stepanova, J. Urbani and G. Weikum, ExFaKT: A framework for explaining facts over knowledge graphs and text, in: Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining – WSDM’19, ACM Press, Melbourne VIC, Australia, 2019, pp. 87–95. [Online]. Available at http://dl.acm.org/citation.cfm?doid=3289600.3290996. doi:10.1145/3289600.3290996.

[48] 

L.A. Galárraga, C. Teflioudi, K. Hose and F. Suchanek, AMIE: Association rule mining under incomplete evidence in ontological knowledge bases, in: Proceedings of the 22nd International Conference on World Wide Web, WWW’13, Association for Computing Machinery, New York, NY, USA, 2013, pp. 413–422. doi:10.1145/2488388.2488425.

[49] 

A. Gangemi, A comparison of knowledge extraction tools for the semantic web, in: The Semantic Web: Semantics and Big Data, P. Cimiano, O. Corcho, V. Presutti, L. Hollink and S. Rudolph, eds, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2013, pp. 351–366. doi:10.1007/978-3-642-38288-8_24.

[50] 

D. Gerber, D. Esteves, J. Lehmann, L. Bühmann, R. Usbeck, A.-C.N. Ngomo and R. Speck, DeFacto – temporal and multilingual deep fact validation, Web Semantics: Science, Services and Agents on the World Wide Web 35 (2015). doi:10.1016/j.websem.2015.08.001.

[51] 

B. Ghanem, G. Glavaš, A. Giachanou, S.P. Ponzetto, P. Rosso and F. Rangel, UPV-UMA at CheckThat! Lab: Verifying Arabic claims using a cross lingual approach, in: Working Notes of CLEF 2019 – Conference and Labs of the Evaluation Forum, CLEF 2019 Working Notes, Lugano, Switzerland, 2019, p. 10, [Online]. Available at http://ceur-ws.org/Vol-2380/paper_91.pdf.

[52] 

B. Ghanem, P. Rosso and F. Rangel, Stance detection in fake news a combined feature representation, in: Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 66–71. [Online]. Available at http://aclweb.org/anthology/W18-5510. doi:10.18653/v1/W18-5510.

[53] 

D. Ghosh, S. Muresan, N. Wacholder, M. Aakhus and M. Mitsui, Analyzing argumentative discourse units in online interactions, in: Proceedings of the First Workshop on Argumentation Mining, Association for Computational Linguistics, Baltimore, Maryland, 2014, pp. 39–48. [Online]. Available at http://aclweb.org/anthology/W14-2106. doi:10.3115/v1/W14-2106.

[54] 

S. Ghosh, P. Singhania, S. Singh, K. Rudra and S. Ghosh, Stance detection in web and social media: A comparative study, in: Lecture Notes in Computer Science, Vol. 11696, Springer, 2019, pp. 75–87. [Online]. Available at arXiv:2007.05976 [cs]. doi:10.1007/978-3-030-28577-7_4.

[55] 

G. Giasemidis, N. Kaplis, I. Agrafiotis and J.R.C. Nurse, A semi-supervised approach to message stance classification, IEEE Transactions on Knowledge and Data Engineering 32(1) (2020), 1–11. [Online]. Available at http://arxiv.org/abs/1902.03097. doi:10.1109/TKDE.2018.2880192.

[56] 

J.M. González Pinto and W.-T. Balke, Offering answers for claim-based queries: A new challenge for digital libraries, in: Digital Libraries: Data, Information, and Knowledge for Digital Lives, S. Choemprayong, F. Crestani and S.J. Cunningham, eds, Lecture Notes in Computer Science, Springer International Publishing, Cham, 2017, pp. 3–13. doi:10.1007/978-3-319-70232-2_1.

[57] 

J.M. González Pinto and W.-T. Balke, Scientific claims characterization for claim-based analysis in digital libraries, in: Digital Libraries for Open Knowledge, E. Méndez, F. Crestani, C. Ribeiro, G. David and J.C. Lopes, eds, Lecture Notes in Computer Science, Vol. 11057, Springer International Publishing, Cham, 2018, pp. 257–269. [Online]. Available at http://link.springer.com/10.1007/978-3-030-00066-0_22. doi:10.1007/978-3-030-00066-0_22.

[58] 

J.M. González Pinto, J. Wawrzinek and W.-T. Balke, What drives research efforts? Find scientific claims that count! in: 2019 ACM/IEEE Joint Conference on Digital Libraries (JCDL), IEEE, Champaign, IL, USA, 2019, pp. 217–226. [Online]. Available at https://ieeexplore.ieee.org/document/8791177/. doi:10.1109/JCDL.2019.00038.

[59] 

G. Gorrell, E. Kochkina, M. Liakata, A. Aker, A. Zubiaga, K. Bontcheva and L. Derczynski, SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours, in: Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval-2019), Association for Computational Linguistics, Minneapolis, Minnesota, USA, 2019, pp. 845–854. doi:10.18653/v1/S19-2147.

[60] 

J. Gracia and E. Mena, Web-based measure of semantic relatedness, in: Proceedings of the 9th International Conference on Web Information Systems Engineering, WISE’08, Springer-Verlag, Berlin, Heidelberg, 2008, pp. 136–150. doi:10.1007/978-3-540-85481-4_12.

[61] 

H. Graves, R. Graves, R. Mercer and M. Akter, Titles that announce argumentative claims in biomedical research articles, in: Proceedings of the First Workshop on Argumentation Mining, Association for Computational Linguistics, Baltimore, Maryland, 2014, pp. 98–99. [Online]. Available at http://aclweb.org/anthology/W14-2113. doi:10.3115/v1/W14-2113.

[62] 

L. Graves, Understanding the Promise and Limits of Automated Fact-Checking, Factsheet, Reuters Institute for the Study of Journalism, 2018.

[63] 

G.M. Green, Pragmatics and Natural Language Understanding, 2nd edn, Lawrence Erlbaum Associates, Hilldale, NJ, 1996.

[64] 

N. Green, Towards creation of a corpus for argumentation mining the biomedical genetics research literature, in: Proceedings of the First Workshop on Argumentation Mining, Association for Computational Linguistics, Baltimore, Maryland, 2014, pp. 11–18. [Online]. Available at http://aclweb.org/anthology/W14-2102. doi:10.3115/v1/W14-2102.

[65] 

N. Green, Identifying argumentation schemes in genetics research articles, in: Proceedings of the 2nd Workshop on Argumentation Mining, Association for Computational Linguistics, Denver, CO, 2015, pp. 12–21. [Online]. Available at http://aclweb.org/anthology/W15-0502. doi:10.3115/v1/W15-0502.

[66] 

N. Green, Proposed method for annotation of scientific arguments in terms of semantic relations and argument schemes, in: Proceedings of the 5th Workshop on Argument Mining, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 105–110. [Online]. Available at http://aclweb.org/anthology/W18-5213. doi:10.18653/v1/W18-5213.

[67] 

C. Guggilla, T. Miller and I. Gurevych, CNN- and LSTM-based claim classification in online user comments, in: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, The COLING 2016 Organizing Committee, Osaka, Japan, 2016, pp. 2740–2751, [Online]. Available at https://www.aclweb.org/anthology/C16-1258.

[68] 

I. Habernal and I. Gurevych, Argumentation mining in user-generated web discourse, Computational Linguistics 43(1) (2017), 125–179. [Online]. Available at https://www.aclweb.org/anthology/J17-1004. doi:10.1162/COLI_a_00276.

[69] 

M.A. Hadj Taieb, T. Zesch and M. Ben Aouicha, A survey of semantic relatedness evaluation datasets and procedures, Artificial Intelligence Review 53(6) (2020), 4407–4448. doi:10.1007/s10462-019-09796-3.

[70] 

A. Hanselowski, A. Pvs, B. Schiller, F. Caspelherr, D. Chaudhuri, C.M. Meyer and I. Gurevych, A retrospective analysis of the fake news challenge stance-detection task, in: Proceedings of the 27th International Conference on Computational Linguistics, Association for Computational Linguistics, Santa Fe, New Mexico, USA, 2018, p. 16.

[71] 

A. Hanselowski, H. Zhang, Z. Li, D. Sorokin, B. Schiller, C. Schulz and I. Gurevych, UKP-Athene: Multi-sentence textual entailment for claim verification, in: Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 103–108. [Online]. Available at http://aclweb.org/anthology/W18-5516. doi:10.18653/v1/W18-5516.

[72] 

C. Hansen, C. Hansen, S. Alstrup, J. Grue Simonsen and C. Lioma, Neural check-worthiness ranking with weak supervision: Finding sentences for fact-checking, in: Companion Proceedings of the 2019 World Wide Web Conference, WWW’19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 994–1000. doi:10.1145/3308560.3316736.

[73] 

M. Hasan, M.A. Orgun and R. Schwitter, A survey on real-time event detection from the Twitter data stream, Journal of Information Science 44(4) (2018), 443–463. doi:10.1177/0165551517698564.

[74] 

M. Hasanain, R. Suwaileh, T. Elsayed, A. Barron-Cedeno and P. Nakov, Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 2: Evidence and factuality, in: Working Notes of CLEF 2019 – Conference and Labs of the Evaluation Forum, CLEF 2019 Working Notes, 2019, p. 15, [Online]. Available at http://ceur-ws.org/Vol-2380/paper_270.pdf.

[75] 

N. Hassan, B. Adair, J.T. Hamilton, C. Li, M. Tremayne, J. Yang and C. Yu, The quest to automate fact-checking, in: Proceedings of the 2015 Computation+ Journalism Symposium, 2015, [Online]. Available at http://cj2015.brown.columbia.edu/papers/automate-fact-checking.pdf.

[76] 

N. Hassan, F. Arslan, C. Li and M. Tremayne, Toward automated fact-checking: Detecting check-worthy factual claims by ClaimBuster, in: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD’17, Association for Computing Machinery, New York, NY, USA, 2017, pp. 1803–1812. doi:10.1145/3097983.3098131.

[77] 

N. Hassan, M. Yousuf, M. Mahfuzul Haque, J.A. Suarez Rivas and M. Khadimul Islam, Examining the roles of automation, crowds and professionals towards sustainable fact-checking, in: Companion Proceedings of the 2019 World Wide Web Conference, WWW’19, Association for Computing Machinery, San Francisco, USA, 2019, pp. 1001–1006. doi:10.1145/3308560.3316734.

[78] 

N. Hassan, G. Zhang, F. Arslan, J. Caraballo, D. Jimenez, S. Gawsane, S. Hasan, M. Joseph, A. Kulkarni, A.K. Nayak, V. Sable, C. Li and M. Tremayne, ClaimBuster: The first-ever end-to-end fact-checking system, Proceedings of the VLDB Endowment 10(12) (2017), 1945–1948. doi:10.14778/3137765.3137815.

[79] 

M.A. Hernández and J.M. Gómez, Survey in sentiment, polarity and function analysis of citation, in: Proceedings of the First Workshop on Argumentation Mining, Association for Computational Linguistics, Baltimore, Maryland, 2014, pp. 102–103. [Online]. Available at http://aclweb.org/anthology/W14-2115. doi:10.3115/v1/W14-2115.

[80] 

C. Hidey, E. Musi, A. Hwang, S. Muresan and K. McKeown, Analyzing the semantic types of claims and premises in an online persuasive forum, in: Proceedings of the 4th Workshop on Argument Mining, Association for Computational Linguistics, Copenhagen, Denmark, 2017, pp. 11–21. [Online]. Available at http://aclweb.org/anthology/W17-5102. doi:10.18653/v1/W17-5102.

[81] 

J. Howard and S. Ruder, Universal language model fine-tuning for text classification, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Melbourne, Australia, 2018, pp. 328–339. [Online]. Available at https://www.aclweb.org/anthology/P18-1031. doi:10.18653/v1/P18-1031.

[82] 

V.-P. Huynh and P. Papotti, Towards a benchmark for fact checking with knowledge bases, in: Companion of the Web Conference 2018 on the Web Conference 2018 – WWW’18, ACM Press, Lyon, France, 2018, pp. 1595–1598. [Online]. Available at http://dl.acm.org/citation.cfm?doid=3184558.3191616. doi:10.1145/3184558.3191616.

[83] 

K. Hyland, Hedging in Scientific Research Articles, John Benjamins Publishing, 1998. doi:10.1075/pbns.54.

[84] 

A. Jentzsch, R. Isele and C. Bizer, Silk – generating RDF links while publishing or consuming linked data, in: 9th International Semantic Web Conference (ISWC’10), 2010, [Online]. Available at http://ceur-ws.org/Vol-658/paper519.pdf.

[85] 

S. Jiang, S. Baumgartner, A. Ittycheriah and C. Yu, Factoring fact-checks: Structured information extraction from fact-checking articles, in: Proceedings of the Web Conference 2020, ACM, Taipei Taiwan, 2020, pp. 1592–1603. [Online]. Available at https://dl.acm.org/doi/10.1145/3366423.3380231. doi:10.1145/3366423.3380231.

[86] 

K. Joseph, L. Friedland, W. Hobbs, D. Lazer and O. Tsur, ConStance: Modeling annotation contexts to improve stance classification, in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Copenhagen, Denmark, 2017, pp. 1115–1124. [Online]. Available at http://aclweb.org/anthology/D17-1116. doi:10.18653/v1/D17-1116.

[87] 

C. Kirschner, J. Eckle-Kohler and I. Gurevych, Linking the thoughts: Analysis of argumentation structures in scientific publications, in: Proceedings of the 2nd Workshop on Argumentation Mining, Association for Computational Linguistics, Denver, CO, 2015, pp. 1–11. [Online]. Available at http://aclweb.org/anthology/W15-0501. doi:10.3115/v1/W15-0501.

[88] 

L. Konstantinovskiy, O. Price, M. Babakar and A. Zubiaga, Toward automated factchecking: Developing an annotation schema and benchmark for consistent automated claim detection, Digital Threats: Research and Practice 2(2) (2021). doi:10.1145/3412869.

[89] 

N. Kotonya and F. Toni, Gradual argumentation evaluation for stance aggregation in automated fake news detection, in: Proceedings of the 6th Workshop on Argument Mining, Association for Computational Linguistics, Florence, Italy, 2019, pp. 156–166. [Online]. Available at https://www.aclweb.org/anthology/W19-4518. doi:10.18653/v1/W19-4518.

[90] 

R. Krestel, S. Bergler and R. Witte, Minding the source: Automatic tagging of reported speech in newspaper articles, in: Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), European Language Resources Association (ELRA), Marrakech, Morocco, 2008, [Online]. Available at http://www.lrec-conf.org/proceedings/lrec2008/pdf/718_paper.pdf.

[91] 

D. Küçük and F. Can, Stance detection: A survey, ACM Computing Surveys 53(1) (2020), 12:1–12:37. doi:10.1145/3369026.

[92] 

S. Kumar, A Survey of Deep Learning Methods for Relation Extraction, 2017, arXiv:1705.03645 [cs]. [Online]. Available at http://arxiv.org/abs/1705.03645.

[93] 

M. Lai, D.I. Hernández Farías, V. Patti and P. Rosso, Friends and enemies of Clinton and Trump: Using context for detecting stance in political tweets, in: Advances in Computational Intelligence, G. Sidorov and O. Herrera-Alcántara, eds, Lecture Notes in Computer Science, Springer International Publishing, Cham, 2017, pp. 155–168. doi:10.1007/978-3-319-62434-1_13.

[94] 

A. Lauscher, G. Glavaš and K. Eckert, ArguminSci: A tool for analyzing argumentation and rhetorical aspects in scientific writing, in: Proceedings of the 5th Workshop on Argument Mining, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 22–28. [Online]. Available at http://aclweb.org/anthology/W18-5203. doi:10.18653/v1/W18-5203.

[95] 

A. Lauscher, G. Glavaš and S.P. Ponzetto, An argument-annotated corpus of scientific publications, in: Proceedings of the 5th Workshop on Argument Mining, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 40–46. [Online]. Available at http://aclweb.org/anthology/W18-5206. doi:10.18653/v1/W18-5206.

[96] 

J. Lawrence and C. Reed, Argument mining: A survey, Computational Linguistics 45(4) (2019), 765–818. [Online]. Available at https://www.aclweb.org/anthology/J19-4006. doi:10.1162/coli_a_00364.

[97] 

R. Levy, B. Bogin, S. Gretz, R. Aharonov and N. Slonim, Towards an argumentative content search engine using weak supervision, in: Proceedings of the 27th International Conference on Computational Linguistics, Association for Computational Linguistics, Santa Fe, New Mexico, USA, 2018, pp. 2066–2081, [Online]. Available at https://www.aclweb.org/anthology/C18-1176.

[98] 

M. Liebeck, K. Esau and S. Conrad, What to do with an airport? Mining arguments in the German online participation project tempelhofer feld, in: Proceedings of the Third Workshop on Argument Mining (ArgMining2016), Association for Computational Linguistics, Berlin, Germany, 2016, pp. 144–153. [Online]. Available at http://aclweb.org/anthology/W16-2817. doi:10.18653/v1/W16-2817.

[99] 

H. Lin and V. Ng, Abstractive summarization: A survey of the state of the art, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, 2019, pp. 9815–9822. [Online]. Available at https://ojs.aaai.org/index.php/AAAI/article/view/5056. doi:10.1609/aaai.v33i01.3301981.

[100] 

M. Lippi, M. Mamei, S. Mariani and F. Zambonelli, An argumentation-based perspective over the social IoT, IEEE Internet of Things Journal 5(4) (2018), 2537–2547. doi:10.1109/JIOT.2017.2775047.

[101] 

M. Lippi and P. Torroni, Context-independent claim detection for argument mining, in: Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI’15, AAAI Press, Buenos Aires, Argentina, 2015, pp. 185–191, [Online]. Available at https://www.ijcai.org/Proceedings/15/Papers/033.pdf.

[102] 

M. Lippi and P. Torroni, Argument mining from speech: Detecting claims in political debates, in: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, AAAI Press, Phoenix, Arizona, 2016, pp. 2979–2985, [Online]. Available at https://ojs.aaai.org/index.php/AAAI/article/view/10384.

[103] 

M. Lippi and P. Torroni, MARGOT: A web server for argumentation mining, Expert Systems with Applications 65 (2016), 292–303. [Online]. Available at https://linkinghub.elsevier.com/retrieve/pii/S0957417416304493. doi:10.1016/j.eswa.2016.08.050.

[104] 

M. Lippi and P. Torroni, Argumentation mining: State of the art and emerging trends, ACM Transactions on Internet Technology 16(2) (2016), 10:1–10:25. doi:10.1145/2850417.

[105] 

L. Liu, X. Ren, J. Shang, X. Gu, J. Peng and J. Han, Efficient contextualized representation: Language model pruning for sequence labeling, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 1215–1225. [Online]. Available at https://www.aclweb.org/anthology/D18-1153. doi:10.18653/v1/D18-1153.

[106] 

X. Liu, P. He, W. Chen and J. Gao, Multi-task deep neural networks for natural language understanding, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy, 2019, pp. 4487–4496. [Online]. Available at https://www.aclweb.org/anthology/P19-1441. doi:10.18653/v1/P19-1441.

[107] 

L. Lugini and D. Litman, Argument component classification for classroom discussions, in: Proceedings of the 5th Workshop on Argument Mining, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 57–67. [Online]. Available at http://aclweb.org/anthology/W18-5208. doi:10.18653/v1/W18-5208.

[108] 

J. Ma, W. Gao and K.-F. Wong, Detect rumor and stance jointly by neural multi-task learning, in: Companion Proceedings of the Web Conference 2018 (WWW’18), ACM Press, Lyon, France, 2018, pp. 585–593. [Online]. Available at http://dl.acm.org/citation.cfm?doid=3184558.3188729. doi:10.1145/3184558.3188729.

[109] 

E. Maliaroudakis, K. Boland, S. Dietze, K. Todorov, Y. Tzitzikas and P. Fafalios, ClaimLinker: Linking text to a knowledge graph of fact-checked claims, in: Companion Proceedings of the Web Conference 2021 (WWW’21 Companion), ACM, 2021. doi:10.1145/3442442.3458601.

[110] 

J.L. Martinez-Rodriguez, A. Hogan and I. Lopez-Arevalo, Information extraction meets the semantic web: A survey, Semantic Web 11(2) (2020), 255–335. [Online]. Available at https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/SW-180333. doi:10.3233/SW-180333.

[111] 

T. Mayer, E. Cabrio, M. Lippi, P. Torroni and S. Villata, Argument mining on clinical trials, in: Computational Models of Argument – Proceedings of COMMA 2018, Frontiers in Artificial Intelligence and Applications, Vol. 305, Warsaw, Poland, 2018, pp. 137–148, [Online]. Available at https://hal.archives-ouvertes.fr/hal-01876462.

[112] 

T. Mayer, E. Cabrio and S. Villata, Evidence type classification in randomized controlled trials, in: Proceedings of the 5th Workshop on Argument Mining, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 29–34. [Online]. Available at http://aclweb.org/anthology/W18-5204. doi:10.18653/v1/W18-5204.

[113] 

D. Maynard, K. Bontcheva and I. Augenstein, Natural Language Processing for the Semantic Web, Morgan & Claypool Publishers, 2016. doi:10.2200/S00741ED1V01Y201611WBE015.

[114] 

T. Mihaylova, P. Nakov, L. Màrquez, A. Barrón-Cedeño, M. Mohtarami, G. Karadzhov and J. Glass, Fact checking in community forums, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI’18, 2018, pp. 5309–5316, arXiv:1803.03178 [cs].

[115] 

P. Minervini, V. Tresp, C. d’Amato and N. Fanizzi, Adaptive knowledge propagation in web ontologies, ACM Transactions on the Web 12(1) (2017), 2. [Online]. Available at https://www.researchgate.net/publication/319282592_Adaptive_Knowledge_Propagation_in_Web_Ontologies. doi:10.1145/3105961.

[116] 

S. Mohammad, S. Kiritchenko, P. Sobhani, X. Zhu and C. Cherry, SemEval-2016 task 6: Detecting stance in tweets, in: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), Association for Computational Linguistics, San Diego, California, 2016, pp. 31–41. [Online]. Available at https://www.aclweb.org/anthology/S16-1003. doi:10.18653/v1/S16-1003.

[117] 

S. Mohtaj, T. Himmelsbach, V. Woloszyn and S. Möller, Using external knowledge bases and coreference resolution for detecting check-worthy statements, in: Working Notes of CLEF 2019 – Conference and Labs of the Evaluation Forum, Lugano, Switzerland, 2019, [Online]. Available at http://ceur-ws.org/Vol-2380/paper_94.pdf.

[118] 

M. Nadeem, W. Fang, B. Xu, M. Mohtarami and J. Glass, FAKTA: An automatic end-to-end fact checking system, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), Association for Computational Linguistics, Minneapolis, Minnesota, 2019, pp. 78–83. [Online]. Available at https://www.aclweb.org/anthology/N19-4014. doi:10.18653/v1/N19-4014.

[119] 

M. Nentwig, M. Hartung, A.-C. Ngonga Ngomo and E. Rahm, A survey of current link discovery frameworks, Semantic Web 8(3) (2016), 419–436. [Online]. Available at https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/SW-150210. doi:10.3233/SW-150210.

[120] 

M. Neves, D. Butzke and B. Grune, Evaluation of scientific elements for text similarity in biomedical publications, in: Proceedings of the 6th Workshop on Argument Mining, Association for Computational Linguistics, Florence, Italy, 2019, pp. 124–135. [Online]. Available at https://www.aclweb.org/anthology/W19-4515. doi:10.18653/v1/W19-4515.

[121] 

C. Newell, T. Cowlishaw and D. Man, Quote extraction and analysis for news, in: Data Science, Journalism & Media @KDD 2018, International Conference on Knowledge Discovery and Data Mining, 2018, [Online]. Available at https://research.signal-ai.com/assets/RnD_at_the_BBC__and_quotes.pdf.

[122] 

E. Newell, D. Margolin and D. Ruths, An attribution relations corpus for political news, in: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), European Language Resources Association (ELRA), Miyazaki, Japan, 2018, p. 8, [Online]. Available at https://aclanthology.org/L18-1524/.

[123] 

A.-C. Ngonga Ngomo and S. Auer, LIMES: A time-efficient approach for large-scale link discovery on the web of data, in: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence – Volume Three, IJCAI’11, AAAI Press, Barcelona, Catalonia, Spain, 2011, pp. 2312–2317, [Online]. Available at https://www.ijcai.org/Proceedings/11/Papers/385.pdf.

[124] 

H. Nguyen and D. Litman, Context-aware argumentative relation mining, in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Berlin, Germany, 2016, pp. 1127–1137. [Online]. Available at http://aclweb.org/anthology/P16-1107. doi:10.18653/v1/P16-1107.

[125] 

V. Niculae, J. Park and C. Cardie, Argument mining with structured SVMs and RNNs, in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Vancouver, Canada, 2017, pp. 985–995. [Online]. Available at https://www.aclweb.org/anthology/P17-1091. doi:10.18653/v1/P17-1091.

[126] 

V. Niculae, C. Suen, J. Zhang, C. Danescu-Niculescu-Mizil and J. Leskovec, QUOTUS: The structure of political media coverage as revealed by quoting patterns, in: Proceedings of the 24th International Conference on World Wide Web, WWW’15, International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 2015, pp. 798–808. doi:10.1145/2736277.2741688.

[127] 

Y. Nie, H. Chen and M. Bansal, Combining fact extraction and verification with neural semantic matching networks, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, 2019, pp. 6859–6866. [Online]. Available at https://aaai.org/ojs/index.php/AAAI/article/view/4662. doi:10.1609/aaai.v33i01.33016859.

[128] 

T. O’Keefe, S. Pareti, J.R. Curran, I. Koprinska and M. Honnibal, A sequence labelling approach to quote attribution, in: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Association for Computational Linguistics, Jeju Island, Korea, 2012, pp. 790–799. [Online]. Available at https://www.aclweb.org/anthology/D12-1072.

[129] 

J. Opitz and A. Frank, Dissecting content and context in argumentative relation analysis, in: Proceedings of the 6th Workshop on Argument Mining, Association for Computational Linguistics, Florence, Italy, 2019, pp. 25–34. [Online]. Available at https://www.aclweb.org/anthology/W19-4503. doi:10.18653/v1/W19-4503.

[130] 

A. Padia, F. Ferraro and T. Finin, Team UMBC-FEVER: Claim verification using semantic lexical resources, in: Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 161–165. [Online]. Available at http://aclweb.org/anthology/W18-5527. doi:10.18653/v1/W18-5527.

[131] 

A. Padia, K. Kalpakis, F. Ferraro and T. Finin, Knowledge graph fact prediction via knowledge-enriched tensor factorization, Journal of Web Semantics (2019). [Online]. Available at http://arxiv.org/abs/1902.03077. doi:10.2139/ssrn.3331039.

[132] 

M. Pagliardini, P. Gupta and M. Jaggi, Unsupervised learning of sentence embeddings using compositional n-gram features, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Association for Computational Linguistics, New Orleans, Louisiana, 2018, pp. 528–540. [Online]. Available at https://www.aclweb.org/anthology/N18-1049. doi:10.18653/v1/N18-1049.

[133] 

R.M. Palau and M.-F. Moens, Argumentation mining: The detection, classification and structure of arguments in text, in: Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL’09, Association for Computing Machinery, New York, NY, USA, 2009, pp. 98–107. doi:10.1145/1568234.1568246.

[134] 

B. Pang and L. Lee, Opinion mining and sentiment analysis, Foundations and Trends in Information Retrieval 2(1–2) (2008), 1–135. doi:10.1561/1500000011.

[135] 

S. Pareti, Attribution: A Computational Approach, PhD Thesis, University of Edinburgh, 2015. [Online]. Available at http://hdl.handle.net/1842/14170.

[136] 

J. Park and C. Cardie, Identifying appropriate support for propositions in online user comments, in: Proceedings of the First Workshop on Argumentation Mining, Association for Computational Linguistics, Baltimore, Maryland, 2014, pp. 29–38. [Online]. Available at http://aclweb.org/anthology/W14-2105. doi:10.3115/v1/W14-2105.

[137] 

M.J. Paul, C. Zhai and R. Girju, Summarizing contrastive viewpoints in opinionated text, in: Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP’10, Association for Computational Linguistics, USA, 2010, pp. 66–76. [Online]. Available at https://aclanthology.org/D10-1007/.

[138] 

D. Pavllo, T. Piccardi and R. West, Quootstrap: Scalable unsupervised extraction of quotation-speaker pairs from large news corpora via bootstrapping, in: Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018), 2018, pp. 231–240, arXiv:1804.02525.

[139] 

A. Peldszus and M. Stede, From argument diagrams to argumentation mining in texts: A survey, International Journal of Cognitive Informatics and Natural Intelligence 7(1) (2013), 1–31. doi:10.4018/jcini.2013010101.

[140] 

B. Pereira Nunes, A. Mera, M.A. Casanova, B. Fetahu, L.A.P. Paes Leme and S. Dietze, Complex matching of RDF datatype properties, in: Database and Expert Systems Applications, H. Decker, L. Lhotská, S. Link, J. Basl and A.M. Tjoa, eds, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2013, pp. 195–208. doi:10.1007/978-3-642-40285-2_18.

[141] 

D. Pomerleau and D. Rao, Fake News Challenge Stage 1 (FNC-I): Stance Detection, 2017. [Online]. Available at http://www.fakenewschallenge.org/.

[142] 

K. Popat, Credibility Analysis of Textual Claims with Explainable Evidence, Doctoral Thesis, 2019. doi:10.22028/D291-30005.

[143] 

K. Popat, S. Mukherjee, J. Strötgen and G. Weikum, Where the truth lies: Explaining the credibility of emerging claims on the web and social media, in: Proceedings of the 26th International Conference on World Wide Web Companion – WWW’17 Companion, ACM Press, Perth, Australia, 2017, pp. 1003–1012. [Online]. Available at http://dl.acm.org/citation.cfm?doid=3041021.3055133. doi:10.1145/3041021.3055133.

[144] 

P. Potash, A. Ferguson and T.J. Hazen, Ranking passages for argument convincingness, in: Proceedings of the 6th Workshop on Argument Mining, Association for Computational Linguistics, Florence, Italy, 2019, pp. 146–155. [Online]. Available at https://www.aclweb.org/anthology/W19-4517. doi:10.18653/v1/W19-4517.

[145] 

M. Quraishi, P. Fafalios and E. Herder, Viewpoint discovery and understanding in social networks, in: Proceedings of the 10th ACM Conference on Web Science, 2018, pp. 47–56. [Online]. Available at http://arxiv.org/abs/1810.11047. doi:10.1145/3201064.3201076.

[146] 

P. Rajendran, D. Bollegala and S. Parsons, Contextual stance classification of opinions: A step towards enthymeme reconstruction in online reviews, in: Proceedings of the Third Workshop on Argument Mining (ArgMining2016), Association for Computational Linguistics, Berlin, Germany, 2016, pp. 31–39. [Online]. Available at http://aclweb.org/anthology/W16-2804. doi:10.18653/v1/W16-2804.

[147] 

H. Rashkin, E. Choi, J.Y. Jang, S. Volkova and Y. Choi, Truth of varying shades: Analyzing language in fake news and political fact-checking, in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Copenhagen, Denmark, 2017, pp. 2931–2937. [Online]. Available at http://aclweb.org/anthology/D17-1317. doi:10.18653/v1/D17-1317.

[148] 

N. Reimers, B. Schiller, T. Beck, J. Daxenberger, C. Stab and I. Gurevych, Classification and clustering of arguments with contextualized word embeddings, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy, 2019, pp. 567–578. [Online]. Available at https://aclanthology.org/P19-1054. doi:10.18653/v1/P19-1054.

[149] 

O.B. Rekdal, Academic urban legends, Social Studies of Science 44(4) (2014), 638–654. [Online]. Available at http://journals.sagepub.com/doi/10.1177/0306312714535679. doi:10.1177/0306312714535679.

[150] 

R. Rinott, L. Dankin, C. Alzate Perez, M.M. Khapra, E. Aharoni and N. Slonim, Show me your evidence – an automatic method for context dependent evidence detection, in: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Lisbon, Portugal, 2015, pp. 440–450. [Online]. Available at http://aclweb.org/anthology/D15-1050. doi:10.18653/v1/D15-1050.

[151] 

P. Ristoski and H. Paulheim, Semantic web in data mining and knowledge discovery: A comprehensive survey, Journal of Web Semantics 36 (2016), 1–22. [Online]. Available at https://linkinghub.elsevier.com/retrieve/pii/S1570826816000020. doi:10.1016/j.websem.2016.01.001.

[152] 

S. Rosenthal and K. McKeown, Detecting opinionated claims in online discussions, in: 2012 IEEE Sixth International Conference on Semantic Computing, IEEE, Palermo, Italy, 2012, pp. 30–37. [Online]. Available at http://ieeexplore.ieee.org/document/6337079/. doi:10.1109/ICSC.2012.59.

[153] 

M. Rospocher, M. van Erp, P. Vossen, A. Fokkens, I. Aldabe, A. Soroa, T. Ploeger and T. Bogaard, Building event-centric knowledge graphs from news, Journal of Web Semantics 37–38 (2016), 132–151. doi:10.1016/j.websem.2015.12.004.

[154] 

A. Roy, P. Fafalios, A. Ekbal, X. Zhu and S. Dietze, Exploiting stance hierarchies for cost-sensitive stance detection of web documents, Journal of Intelligent Information Systems (2021). doi:10.1007/s10844-021-00642-z.

[155] 

A. Salway, P. Meurer, K. Hofland and Ø. Reigem, Quote extraction and attribution from Norwegian newspapers, in: Proceedings of the 21st Nordic Conference on Computational Linguistics, Association for Computational Linguistics, Gothenburg, Sweden, 2017, pp. 293–297, [Online]. Available at https://www.aclweb.org/anthology/W17-0241.

[156] 

C. Scheible, R. Klinger and S. Padó, Model architectures for quotation detection, in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Berlin, Germany, 2016, pp. 1736–1745. [Online]. Available at http://aclweb.org/anthology/P16-1164. doi:10.18653/v1/P16-1164.

[157] 

E. Schiappa and J.P. Nordin, Argumentation: Keeping Faith with Reason, Pearson Education, 2013.

[158] 

B. Schiller, J. Daxenberger and I. Gurevych, Stance detection benchmark: How robust is your stance detection?, KI – Künstliche Intelligenz (2021). doi:10.1007/s13218-021-00714-w.

[159] 

J.R. Searle, Speech Acts: An Essay in the Philosophy of Language, Cambridge University Press, Cambridge, 1969.

[160] 

I. Sen, F. Flöck and C. Wagner, On the reliability and validity of detecting approval of political actors in tweets, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 1413–1426, [Online]. Available at https://www.aclweb.org/anthology/2020.emnlp-main.110. doi:10.18653/v1/2020.emnlp-main.110.

[161] 

O. Sevgili, A. Shelmanov, M. Arkhipov, A. Panchenko and C. Biemann, Neural entity linking: A survey of models based on deep learning, 2020, arXiv:2006.00575 [cs]. [Online]. Available at http://arxiv.org/abs/2006.00575.

[162] 

S. Shaar, N. Babulkov, G. Da San Martino and P. Nakov, That is a known lie: Detecting previously fact-checked claims, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 2020, pp. 3607–3618. [Online]. Available at https://aclanthology.org/2020.acl-main.332. doi:10.18653/v1/2020.acl-main.332.

[163] 

K. Sharma, F. Qian, H. Jiang, N. Ruchansky, M. Zhang and Y. Liu, Combating fake news: A survey on identification and mitigation techniques, ACM Transactions on Intelligent Systems and Technology 10(3) (2019), 21:1–21:42. doi:10.1145/3305260.

[164] 

W. Shen, J. Wang and J. Han, Entity linking with a knowledge base: Issues, techniques, and solutions, IEEE Transactions on Knowledge and Data Engineering 27(2) (2015), 443–460. [Online]. Available at https://www.doi.org/10.1109/TKDE.2014.2327028. doi:10.1109/TKDE.2014.2327028.

[165] 

B. Shi and T. Weninger, Discriminative predicate path mining for fact checking in knowledge graphs, Knowledge-Based Systems 104 (2016), 123–133, arXiv:1510.05911 [cs]. doi:10.1016/j.knosys.2016.04.015.

[166] 

E. Shnarch, C. Alzate, L. Dankin, M. Gleize, Y. Hou, L. Choshen, R. Aharonov and N. Slonim, Will it blend? Blending weak and strong labeled data in a neural network for argumentation mining, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Association for Computational Linguistics, Melbourne, Australia, 2018, pp. 599–605. [Online]. Available at https://aclanthology.org/P18-2095. doi:10.18653/v1/P18-2095.

[167] 

D. Sridhar, J. Foulds, B. Huang, L. Getoor and M. Walker, Joint models of disagreement and stance in online debate, in: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Association for Computational Linguistics, Beijing, China, 2015, pp. 116–125. [Online]. Available at http://aclweb.org/anthology/P15-1012. doi:10.3115/v1/P15-1012.

[168] 

B. Sriram, D. Fuhry, E. Demir, H. Ferhatosmanoglu and M. Demirbas, Short text classification in Twitter to improve information filtering, in: Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR’10, Association for Computing Machinery, New York, NY, USA, 2010, pp. 841–842. doi:10.1145/1835449.1835643.

[169] 

C. Stab and I. Gurevych, Identifying argumentative discourse structures in persuasive essays, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Doha, Qatar, 2014, pp. 46–56. [Online]. Available at http://aclweb.org/anthology/D14-1006. doi:10.3115/v1/D14-1006.

[170] 

C. Stab and I. Gurevych, Parsing argumentation structures in persuasive essays, Computational Linguistics 43(3) (2017), 619–659. [Online]. Available at https://www.mitpressjournals.org/doi/abs/10.1162/COLI_a_00295. doi:10.1162/COLI_a_00295.

[171] 

C. Stahlhut, Interactive evidence detection: Train state-of-the-art model out-of-domain or simple model interactively? in: Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Hong Kong, China, 2019, pp. 79–89. [Online]. Available at https://www.aclweb.org/anthology/D19-6613. doi:10.18653/v1/D19-6613.

[172] 

K. Starbird, Examining the alternative media ecosystem through the production of alternative narratives of mass shooting events on Twitter, in: Proceedings of the International AAAI Conference on Web and Social Media (ICWSM), Vol. 11, 2017, pp. 230–239. [Online]. Available at https://ojs.aaai.org/index.php/ICWSM/article/view/14878.

[173] 

K. Starbird, A. Arif, T. Wilson, K.V. Koevering, K. Yefimova and D. Scarnecchia, Ecosystem or echo-system? Exploring content sharing across alternative media domains, in: Proceedings of the International AAAI Conference on Web and Social Media, Association for the Advancement of Artificial Intelligence, 2018, [Online]. Available at https://ojs.aaai.org/index.php/ICWSM/article/view/15009.

[174] 

Q. Sun, Z. Wang, Q. Zhu and G. Zhou, Stance detection with hierarchical attention network, in: Proceedings of the 27th International Conference on Computational Linguistics, Association for Computational Linguistics, Santa Fe, New Mexico, USA, 2018, pp. 2399–2409, [Online]. Available at https://www.aclweb.org/anthology/C18-1203.

[175] 

Z.H. Syed, M. Röder and A.-C. Ngonga Ngomo, Unsupervised discovery of corroborative paths for fact validation, in: The Semantic Web – ISWC 2019, C. Ghidini, O. Hartig, M. Maleshkova, V. Svátek, I. Cruz, A. Hogan, J. Song, M. Lefrançois and F. Gandon, eds, Lecture Notes in Computer Science, Springer International Publishing, Cham, 2019, pp. 630–646. doi:10.1007/978-3-030-30793-6_36.

[176] 

A. Tchechmedjiev, P. Fafalios, K. Boland, M. Gasquet, M. Zloch, B. Zapilko, S. Dietze and K. Todorov, ClaimsKG: A knowledge graph of fact-checked claims, in: The Semantic Web – ISWC 2019, C. Ghidini, O. Hartig, M. Maleshkova, V. Svátek, I. Cruz, A. Hogan, J. Song, M. Lefrançois and F. Gandon, eds, Lecture Notes in Computer Science, Springer International Publishing, Cham, 2019, pp. 309–324. doi:10.1007/978-3-030-30796-7_20.

[177] 

T. Thonet, G. Cabanac, M. Boughanem and K. Pinel-Sauvagnat, VODUM: A topic model unifying viewpoint, topic and opinion discovery, in: Advances in Information Retrieval, N. Ferro, F. Crestani, M.-F. Moens, J. Mothe, F. Silvestri, G.M. Di Nunzio, C. Hauff and G. Silvello, eds, Lecture Notes in Computer Science, Vol. 9626, Springer International Publishing, Cham, 2016, pp. 533–545. [Online]. Available at http://link.springer.com/10.1007/978-3-319-30671-1_39. doi:10.1007/978-3-319-30671-1_39.

[178] 

T. Thonet, G. Cabanac, M. Boughanem and K. Pinel-Sauvagnat, Users are known by the company they keep: Topic models for viewpoint discovery in social networks, in: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM’17, Association for Computing Machinery, New York, NY, USA, 2017, pp. 87–96. doi:10.1145/3132847.3132897.

[179] 

J. Thorne and A. Vlachos, An extensible framework for verification of numerical claims, in: Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Association for Computational Linguistics, Valencia, Spain, 2017, pp. 37–40, [Online]. Available at https://www.aclweb.org/anthology/E17-3010. doi:10.18653/v1/E17-3010.

[180] 

J. Thorne and A. Vlachos, Automated fact checking: Task formulations, methods and future directions, in: Proceedings of the 27th International Conference on Computational Linguistics, Association for Computational Linguistics, Santa Fe, New Mexico, USA, 2018, pp. 3346–3359, [Online]. Available at https://www.aclweb.org/anthology/C18-1283.

[181] 

J. Thorne and A. Vlachos, Adversarial attacks against fact extraction and VERification, CoRR (2019), arXiv:1903.05543. [Online]. Available at http://arxiv.org/abs/1903.05543.

[182] 

J. Thorne, A. Vlachos, C. Christodoulopoulos and A. Mittal, FEVER: A large-scale dataset for fact extraction and VERification, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Association for Computational Linguistics, New Orleans, Louisiana, 2018, pp. 809–819. [Online]. Available at https://www.aclweb.org/anthology/N18-1074. doi:10.18653/v1/N18-1074.

[183] 

J. Thorne, A. Vlachos, O. Cocarascu, C. Christodoulopoulos and A. Mittal, The fact extraction and VERification (FEVER) shared task, in: Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 1–9. [Online]. Available at http://aclweb.org/anthology/W18-5501. doi:10.18653/v1/W18-5501.

[184] 

J. Thorne, A. Vlachos, O. Cocarascu, C. Christodoulopoulos and A. Mittal, The FEVER2.0 shared task, in: Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Hong Kong, China, 2019, pp. 1–6. [Online]. Available at https://www.aclweb.org/anthology/D19-6601. doi:10.18653/v1/D19-6601.

[185] 

O. Toledo-Ronen, R. Bar-Haim and N. Slonim, Expert stance graphs for computational argumentation, in: Proceedings of the Third Workshop on Argument Mining (ArgMining2016), Association for Computational Linguistics, Berlin, Germany, 2016, pp. 119–123. [Online]. Available at http://aclweb.org/anthology/W16-2814. doi:10.18653/v1/W16-2814.

[186] 

B. Torsi and R. Morante, Annotating claims in the vaccination debate, in: Proceedings of the 5th Workshop on Argument Mining, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 47–56. [Online]. Available at http://aclweb.org/anthology/W18-5207. doi:10.18653/v1/W18-5207.

[187] 

A. Trabelsi and O.R. Zaiane, Unsupervised model for topic viewpoint discovery in online debates leveraging author interactions, in: Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018), Association for the Advancement of Artificial Intelligence, 2018, pp. 425–433, [Online]. Available at https://ojs.aaai.org/index.php/ICWSM/article/view/15021.

[188] 

D. Trautmann, J. Daxenberger, C. Stab, H. Schütze and I. Gurevych, Fine-grained argument unit recognition and classification, in: Proceedings of the AAAI Conference on Artificial Intelligence, AAAI’20, 2020, [Online]. Available at http://arxiv.org/abs/1904.09688. doi:10.1609/aaai.v34i05.6438.

[189] 

B.D. Trisedya, G. Weikum, J. Qi and R. Zhang, Neural relation extraction for knowledge base enrichment, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy, 2019, pp. 229–240. [Online]. Available at https://www.aclweb.org/anthology/P19-1023. doi:10.18653/v1/P19-1023.

[190] 

S. Tschiatschek, A. Singla, M. Gomez Rodriguez, A. Merchant and A. Krause, Fake news detection in social networks via crowd signals, in: Companion Proceedings of the Web Conference 2018, WWW’18, International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 2018, pp. 517–524. doi:10.1145/3184558.3188722.

[191] 

D. Tsurel, D. Pelleg, I. Guy and D. Shahaf, Fun facts: Automatic trivia fact extraction from Wikipedia, in: Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, WSDM’17, 2017, pp. 345–354. [Online]. Available at http://arxiv.org/abs/1612.03896. doi:10.1145/3018661.3018709.

[192] 

V. Uren, P. Cimiano, J. Iria, S. Handschuh, M. Vargas-Vera, E. Motta and F. Ciravegna, Semantic annotation for knowledge management: Requirements and a survey of the state of the art, Journal of Web Semantics 4(1) (2006), 14–28. [Online]. Available at http://www.sciencedirect.com/science/article/pii/S1570826805000338. doi:10.1016/j.websem.2005.10.002.

[193] 

N. Veira, B. Keng, K. Padmanabhan and A. Veneris, Unsupervised embedding enhancements of knowledge graphs using textual associations, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Macao, China, 2019, pp. 5218–5225. [Online]. Available at https://www.ijcai.org/proceedings/2019/725. doi:10.24963/ijcai.2019/725.

[194] 

A. Vlachos and S. Riedel, Fact checking: Task definition and dataset construction, in: Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, Association for Computational Linguistics, Baltimore, MD, USA, 2014, pp. 18–22. [Online]. Available at https://www.aclweb.org/anthology/W14-2508. doi:10.3115/v1/W14-2508.

[195] 

A. Vlachos and S. Riedel, Identification and verification of simple claims about statistical properties, in: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Lisbon, Portugal, 2015, pp. 2596–2601. [Online]. Available at https://www.aclweb.org/anthology/D15-1312. doi:10.18653/v1/D15-1312.

[196] 

N. Voskarides, E. Meij, R. Reinanda, A. Khaitan, M. Osborne, G. Stefanoni, P. Kambadur and M. de Rijke, Weakly-supervised contextualization of knowledge graph facts, in: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval – SIGIR’18, 2018, pp. 765–774. [Online]. Available at http://arxiv.org/abs/1805.02393. doi:10.1145/3209978.3210031.

[197] 

S. Vosoughi, D. Roy and S. Aral, The spread of true and false news online, Science 359(6380) (2018), 1146–1151. [Online]. Available at https://science.sciencemag.org/content/359/6380/1146. doi:10.1126/science.aap9559.

[198] 

M.A. Walker, P. Anand, R. Abbott and R. Grant, Stance classification using dialogic properties of persuasion, in: Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT’12, Association for Computational Linguistics, USA, 2012, pp. 592–596, [Online]. Available at https://aclanthology.org/N12-1072/.

[199] 

V.R. Walker, D. Foerster, J.M. Ponce and M. Rosen, Evidence types, credibility factors, and patterns or soft rules for weighing conflicting evidence: Argument mining in the context of legal rules governing evidence assessment, in: Proceedings of the 5th Workshop on Argument Mining, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 68–78. [Online]. Available at http://aclweb.org/anthology/W18-5209. doi:10.18653/v1/W18-5209.

[200] 

C. Wang, M. Yan, C. Yi and Y. Sha, Capturing semantic and syntactic information for link prediction in knowledge graphs, in: The Semantic Web – ISWC 2019, C. Ghidini, O. Hartig, M. Maleshkova, V. Svátek, I. Cruz, A. Hogan, J. Song, M. Lefrançois and F. Gandon, eds, Lecture Notes in Computer Science, Springer International Publishing, Cham, 2019, pp. 664–679. doi:10.1007/978-3-030-30793-6_38.

[201] 

W.Y. Wang, “Liar, liar pants on fire”: A new benchmark dataset for fake news detection, in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Association for Computational Linguistics, Vancouver, Canada, 2017, pp. 422–426. [Online]. Available at https://www.aclweb.org/anthology/P17-2067.pdf. doi:10.18653/v1/P17-2067.

[202] 

X. Wang, Q.Z. Sheng, L. Yao, X. Li, X.S. Fang, X. Xu and B. Benatallah, Empowering truth discovery with multi-truth prediction, in: Proceedings of the 25th ACM International on Conference on Information and Knowledge Management – CIKM’16, ACM Press, Indianapolis, Indiana, USA, 2016, pp. 881–890. [Online]. Available at http://dl.acm.org/citation.cfm?doid=2983323.2983767. doi:10.1145/2983323.2983767.

[203] 

X. Wang, C. Yu, S. Baumgartner and F. Korn, Relevant document discovery for fact-checking articles, in: Companion Proceedings of the Web Conference 2018, WWW’18, International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 2018, pp. 525–533. doi:10.1145/3184558.3188723.

[204] 

G. Weikum, J. Hoffart and F. Suchanek, Knowledge harvesting: Achievements and challenges, in: Computing and Software Science: State of the Art and Perspectives, B. Steffen and G. Woeginger, eds, Lecture Notes in Computer Science, Springer International Publishing, Cham, 2019, pp. 217–235. doi:10.1007/978-3-319-91908-9_13.

[205] 

J. Wiebe and E. Riloff, Creating subjective and objective sentence classifiers from unannotated texts, in: Proceedings of the 6th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing’05, Springer-Verlag, Berlin, Heidelberg, 2005, pp. 486–497. doi:10.1007/978-3-540-30586-6_53.

[206] 

W. Wong, W. Liu and M. Bennamoun, Ontology learning from text: A look back and into the future, ACM Computing Surveys 44(4) (2012), 20:1–20:36. doi:10.1145/2333112.2333115.

[207] 

Y. Wu, P.K. Agarwal, C. Li, J. Yang and C. Yu, Toward computational fact-checking, Proceedings of the VLDB Endowment 7(7) (2014), 589–600. [Online]. Available at https://dl.acm.org/doi/10.14778/2732286.2732295. doi:10.14778/2732286.2732295.

[208] 

Y. Wu, P.K. Agarwal, C. Li, J. Yang and C. Yu, Computational fact checking through query perturbations, ACM Transactions on Database Systems 42(1) (2017), 1–41. doi:10.1145/2996453.

[209] 

H. Xiao, J. Gao, Q. Li, F. Ma, L. Su, Y. Feng and A. Zhang, Towards confidence interval estimation in truth discovery, IEEE Transactions on Knowledge and Data Engineering 31(3) (2019), 575–588. [Online]. Available at https://ieeexplore.ieee.org/document/8359426/. doi:10.1109/TKDE.2018.2837026.

[210] 

C. Xu, C. Paris, S. Nepal and R. Sparks, Cross-target stance classification with self-attention networks, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Association for Computational Linguistics, Melbourne, Australia, 2018, pp. 778–783. [Online]. Available at https://www.aclweb.org/anthology/P18-2123. doi:10.18653/v1/P18-2123.

[211] 

Z. Yang, D. Yang, C. Dyer, X. He, A. Smola and E. Hovy, Hierarchical attention networks for document classification, in: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, San Diego, California, 2016, pp. 1480–1489. [Online]. Available at https://www.aclweb.org/anthology/N16-1174. doi:10.18653/v1/N16-1174.

[212] 

X. Yin, J. Han and P.S. Yu, Truth discovery with multiple conflicting information providers on the web, IEEE Transactions on Knowledge and Data Engineering 20(6) (2008), 796–808. doi:10.1109/TKDE.2007.190745.

[213] 

H. Yu and V. Hatzivassiloglou, Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences, in: Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, 2003, pp. 129–136. doi:10.3115/1119355.1119372.

[214] 

R. Yu, U. Gadiraju, B. Fetahu, O. Lehmberg, D. Ritze and S. Dietze, KnowMore – knowledge base augmentation with structured web markup, Semantic Web 10(1) (2018), 159–180. [Online]. Available at https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/SW-180304. doi:10.3233/SW-180304.

[215] 

B. Zapilko, J. Schaible, P. Mayr and B. Mathiak, TheSoz: A SKOS representation of the thesaurus for the social sciences, Semantic Web 4(3) (2013), 257–263. doi:10.3233/SW-2012-0081.

[216] 

Q. Zhan, S. Liang, A. Lipani, Z. Ren and E. Yilmaz, From stances’ imbalance to their hierarchical representation and detection, in: The World Wide Web Conference, WWW’19, San Francisco, CA, USA, 2019, pp. 2323–2332. doi:10.1145/3308558.3313724.

[217] 

H. Zhang, Q. Li, F. Ma, H. Xiao, Y. Li, J. Gao and L. Su, Influence-aware truth discovery, in: Proceedings of the 25th ACM International on Conference on Information and Knowledge Management – CIKM’16, ACM Press, Indianapolis, Indiana, USA, 2016, pp. 851–860. [Online]. Available at http://dl.acm.org/citation.cfm?doid=2983323.2983785. doi:10.1145/2983323.2983785.

[218] 

Q. Zhang, Z. Sun, W. Hu, M. Chen, L. Guo and Y. Qu, Multi-view knowledge graph embedding for entity alignment, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Macao, China, 2019, pp. 5429–5435. [Online]. Available at https://www.ijcai.org/proceedings/2019/754. doi:10.24963/ijcai.2019/754.

[219] 

Z. Zhang, A.L. Gentile and F. Ciravegna, Recent advances in methods of lexical semantic relatedness – a survey, Natural Language Engineering 19(4) (2013), 411–479. doi:10.1017/S1351324912000125.

[220] 

Z. Zhang, X. Han, Z. Liu, X. Jiang, M. Sun and Q. Liu, ERNIE: Enhanced language representation with informative entities, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy, 2019, pp. 1441–1451. [Online]. Available at https://www.aclweb.org/anthology/P19-1139. doi:10.18653/v1/P19-1139.

[221] 

S. Zhi, Y. Sun, J. Liu, C. Zhang and J. Han, ClaimVerif: A real-time claim verification system using the web and fact databases, in: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management – CIKM’17, ACM Press, Singapore, Singapore, 2017, pp. 2555–2558. [Online]. Available at http://dl.acm.org/citation.cfm?doid=3132847.3133182. doi:10.1145/3132847.3133182.

[222] 

X. Zhou, R. Zafarani, K. Shu and H. Liu, Fake news: Fundamental theories, detection strategies and challenges, in: Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM’19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 836–837. doi:10.1145/3289600.3291382.

[223] 

H. Zhu, R. Xie, Z. Liu and M. Sun, Iterative entity alignment via joint knowledge embeddings, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, 2017, pp. 4258–4264. [Online]. Available at https://www.ijcai.org/proceedings/2017/595. doi:10.24963/ijcai.2017/595.

[224] 

A. Zubiaga, A. Aker, K. Bontcheva, M. Liakata and R. Procter, Detection and resolution of rumours in social media: A survey, ACM Computing Surveys 51(2) (2018), 32:1–32:36. doi:10.1145/3161603.