
LSQ 2.0: A linked dataset of SPARQL query logs

Abstract

We present the Linked SPARQL Queries (LSQ) dataset, which currently describes 43.95 million executions of 11.56 million unique SPARQL queries extracted from the logs of 27 different endpoints. The LSQ dataset provides RDF descriptions of each such query, which are indexed in a public LSQ endpoint, allowing interested parties to find queries with the characteristics they require. We begin by describing the use cases envisaged for the LSQ dataset, which include applications for research on common features of queries, for building custom benchmarks, and for designing user interfaces. We then discuss how LSQ has been used in practice since the release of four initial SPARQL logs in 2015. We discuss the model and vocabulary that we use to represent these queries in RDF. We then provide a brief overview of the 27 endpoints from which we extracted queries in terms of the domain to which they pertain and the data they contain. We provide statistics on the queries included from each log, including the number of query executions and unique queries, as well as distributions of queries for a variety of selected characteristics. We finally discuss how the LSQ dataset is hosted and how it can be accessed and leveraged by interested parties for their use cases.

1. Introduction

Since its initial recommendation in 2008 [70], the SPARQL query language for RDF has received considerable adoption, where it is used on hundreds of public query endpoints accessible over the Web [93]. The most prominent of these endpoints receive millions of queries per month [12], or even per day [57]. There is much to be learnt from queries received by such endpoints, where research on SPARQL would benefit – and has already benefited – from access to real-world queries to help focus both applied and theoretical research on commonly seen forms of queries [59].

To exemplify how access to real-world queries can directly benefit research on SPARQL, first consider the complexity results of SPARQL [67], which show that evaluation of SPARQL queries is intractable (PSPACE-hard). But do the worst cases predicted in theory actually occur in practice? Is it possible to define fragments of the language that avoid computationally difficult cases and lead the way to efficient algorithms dedicated to these common cases? The answer is yes, where a number of restricted fragments of SPARQL queries have been identified that are less computationally costly for important tasks. These fragments include well-designed queries that use the OPTIONAL clause in restricted ways [21,67], queries with low treewidth [21] whose structure is close to that of a tree, queries such as simple transitive expressions [58] or (certain fragments of) simple conjunctive regular path queries [36] where only restricted use of Kleene star (*) is allowed in path expressions, certain types of simple conjunctive regular path queries where disjunction (|) is not allowed inside Kleene star, and threshold queries that limit the number of results returned [20]. Studies of SPARQL query logs have shown that these fragments cover many of the queries seen in practice [24,58], where query logs help to bridge the theory and practice of SPARQL [59].

Another use case for a large collection of real-world queries pertains to benchmarking. For over a decade, the SPARQL community has relied on synthetic datasets and queries (e.g., LUBM [40], Berlin [19]), or real-world datasets and hand-crafted queries (e.g., BTC [63], FedBench [84]) to perform benchmarking. However, Aluç et al. [7] and Saleem et al. [83] find the queries of these benchmarks to often be too narrow and simplistic. Building benchmarks from real-world queries can help tune implementations and guide research towards better support for the types of queries most commonly encountered in practical settings [13,16,62,65,79,101]. Yet another use case is caching [50,54,100]. Here, real-world queries can be used to simulate practical workloads experienced by endpoints. The usability of SPARQL interfaces [24,25,52,73] can also benefit from query logs, as these logs can reveal patterns in how users incrementally build their queries, as has recently been studied by Bonifati et al. [24] in DBpedia logs. These use cases and others will be discussed in more detail in Section 2.

Recognising the value of query logs, a number of such collections have been published previously, including contributions from USEWOD [55],1 as well as Wikidata [57]. These logs have been widely used and analysed by a variety of authors (e.g., [12,21,23,57,68,72]). However, (i) these logs are provided in ad-hoc formats, varying in terms of syntax and information provided depending on the particular SPARQL implementation used to host the endpoint. (ii) Typically, queries are published as strings, meaning (for example) that a client would need to use a SPARQL query parser and some procedural code to find queries matching particular structures or characteristics. (iii) Moreover, runtime statistics – for example, the selectivity of individual query patterns with respect to the base dataset of the endpoint – are not provided. (iv) Furthermore, these datasets have generally been limited to publishing logs from a small number (1–4) of endpoints.

In this dataset description paper, we extend upon our previous work [77], which reported on the initial release of the Linked SPARQL Query Dataset (LSQ). The goal of LSQ is to publish queries from a variety of SPARQL logs in a consistent format and associate these queries with rich metadata, including both static metadata (i.e., considering only the query) and runtime metadata (i.e., considering the query and the dataset). In particular, we propose an RDF representation of queries that captures their source, structure, static metadata and runtime metadata. These RDF descriptions of queries are indexed in a SPARQL endpoint. Thus, they allow clients to retrieve the queries of interest to their use case declaratively, potentially sourced from several endpoints at once. In comparison to our previous work [77], which described the initial release of the dataset in 2015:

  • The LSQ dataset has grown considerably: LSQ 2.0 now features logs from 27 endpoints (22 of which are from Bio2RDF) compared with 4 initial endpoints. As a result, the number of query executions described by the LSQ 2.0 dataset has grown from 5.68 million to 43.95 million.

  • Based on the experiences gained from the first version of LSQ, we have improved the RDF model to provide better modularisation and more detailed metadata, facilitating new ways in which clients can select the queries of interest to them; we have likewise updated the LSQ vocabulary accordingly.

  • We have re-engineered the extraction framework, which takes as input raw logs produced by a variety of popular SPARQL engines and Web servers, producing an output RDF graph in the LSQ 2.0 data model describing the queries. The RDFization process can now be scaled as it leverages Apache Spark. The LSQ software framework has been released as open source.

  • We have evaluated the new queries locally in a Virtuoso instance in order to gain runtime statistics (including estimates of the number of results, the selectivity of patterns, overall runtimes, etc.), and have updated the statistical analysis of the queries featured by LSQ to include the additional data provided by the new endpoints.

  • Since the initial release, LSQ has been used by a variety of diverse research works on SPARQL [2,3,11,14,15,17,18,21,22,26,30–32,34,35,37,39,41,42,49,58,69,71,74–76,78–80,83,85–87,89–91,94,97–99,102]. To exemplify the value of LSQ, we discuss the various ways in which the dataset has been used in these past years.

LSQ 2.0 is available at http://aksw.github.io/LSQ/.

The rest of the paper is structured as follows:

  • Section 2 describes use cases envisaged for LSQ.

  • Section 3 details the model and vocabulary used by LSQ to represent and describe SPARQL queries.

  • Section 4 describes how LSQ is published following Linked Data principles and best practices.

  • Section 5 first describes the datasets for which LSQ indexes queries, and then provides details on the raw logs from which queries are extracted.

  • Section 6 provides an analysis of the LSQ dataset itself, as well as the queries it contains.

  • Section 7 describes how LSQ has been adopted for the past six years since its initial release.

  • Section 8 concludes and discusses future directions for the LSQ dataset.

2. Use cases

To help motivate the Linked SPARQL Queries dataset, we first discuss some potential use cases that we envisage. We then list some general requirements for LSQ that arise from these use cases.

UC1 Custom Benchmarks

A number of benchmarks have been proposed recently based on real-world queries observed in logs [16,62,79,101]. The LSQ dataset can support the creation of such benchmarks, allowing users to select queries from a diverse selection of logs based on custom criteria matching the metadata provided by LSQ. Queries may be selected so as to provide a general benchmark that is representative of real-world workloads, or a specialised benchmark focused on particular query characteristics, such as path expressions, multi-way joins, and aggregation queries.
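
As a rough sketch of how such a selection could be expressed against the public LSQ endpoint, the following query asks for query strings with at least five triple patterns. The terms lsqv:Query and lsqv:text belong to the LSQ vocabulary discussed in Section 3, while the namespace IRI and the property name lsqv:tps (triple pattern count) are assumptions made here purely for illustration and may differ from the actual vocabulary.

PREFIX lsqv: <http://lsq.aksw.org/vocab#>

SELECT ?query ?text ?tps WHERE {
  ?query a lsqv:Query ;
         lsqv:text ?text ;
         lsqv:tps  ?tps .      # hypothetical property name
  FILTER(?tps >= 5)
}
ORDER BY DESC(?tps)
LIMIT 100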

UC2 SPARQL Adoption

Various works have analysed SPARQL query logs in order to understand how features of the SPARQL standard are used “in the wild” as well as to extract structural properties of real-world queries [12,21,23,24,57,68,72]. In turn, this family of works has led to the definition of tractable fragments of queries that are common in practice [20,58]. LSQ can facilitate further research on the use of SPARQL in the wild as it compiles logs from different domains.

UC3 Caching

Techniques for SPARQL caching [50,60,66,100] aim to re-use solutions across multiple queries. Caching allows for reducing the computational requirements needed to evaluate a workload, particularly in cases where queries are often repeated and the underlying data do not change too frequently. The LSQ dataset can again provide a sequence of real-world queries for benchmarking caching systems in realistic settings.
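
For instance, a caching benchmark could replay a log in chronological order. The sketch below assumes a linking property (lsqv:hasRemoteExec) between a query and its remote executions, together with the PROV-O property prov:atTime for timestamps; neither should be read as the definitive LSQ modelling.

PREFIX lsqv: <http://lsq.aksw.org/vocab#>
PREFIX prov: <http://www.w3.org/ns/prov#>

SELECT ?text ?time WHERE {
  ?query a lsqv:Query ;
         lsqv:text ?text ;
         lsqv:hasRemoteExec ?exec .   # hypothetical property name
  ?exec  prov:atTime ?time .          # assumed PROV-O modelling of timestamps
}
ORDER BY ASC(?time)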

UC4 Usability

Aside from efficiency, a crucial aspect of SPARQL research and development is to explore techniques that allow non-expert users to express queries against endpoints more easily. A number of techniques have been proposed to enhance the usability of SPARQL endpoints, including works on auto-completion [25,52,73], query relaxation [38,43,96] and query builders [10,27,44,95]. Such works could use the LSQ dataset to investigate patterns in how users iteratively formulate more complex queries, causes for queries with empty results, as well as to detect the most important features that interfaces must support.

UC5 Optimisation

Understanding the most common cases encountered in real-world queries can allow for optimising implementations towards those cases. One such optimisation is to define workload-aware schemes for local [8,9] and distributed [4,28,45] indexing that attempt to group data commonly requested together in the same region of storage; other optimisations look at scheduling the execution of parallel query requests in an effective and fair manner [56], or propose efficient algorithms for frequently encountered patterns in queries [58]. The LSQ dataset can provide diverse examples of real workloads to help configure and evaluate such techniques.

UC6 Meta-Querying

The final use case is admittedly more speculative. By meta-querying, we refer to LSQ being used to query for queries of interest, for example, to find the (most common) queries that are asked about specific resources, such as finding out what queries are being asked involving dbr:Zika_virus, or what frequent co-occurrences of resources appear in queries. Meta-querying along these lines may help to understand the common information needs of users.
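
A minimal sketch of such a meta-query is shown below; the property lsqv:usesResource, linking a query to the resources it mentions, is a hypothetical name used only for illustration.

PREFIX lsqv: <http://lsq.aksw.org/vocab#>
PREFIX dbr:  <http://dbpedia.org/resource/>

SELECT ?query ?text WHERE {
  ?query a lsqv:Query ;
         lsqv:text ?text ;
         lsqv:usesResource dbr:Zika_virus .   # hypothetical property name
}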

These six use cases are intended to help motivate the dataset, to give ideas of potential applications, and also to help distil some key requirements for the design of the dataset. The list should not be considered complete, as other use cases will naturally arise in future. We identify the following facets of the dataset as relevant to support the aforementioned six use cases.

F1 Static Query Features

LSQ should describe the key features of each query independently of the dataset. These include SPARQL keywords (e.g., UNION, DISTINCT), syntactic features (e.g., property paths), and structural features (e.g., multi-way joins, number of projected variables, statistics relating to basic graph patterns (BGPs), etc.). Furthermore, the description should make explicit the resources that the query mentions. Static features are of key importance to UC1, UC2, UC4, UC5 and UC6.

F2 Provenance

LSQ should provide provenance meta-data about the execution of each query, including the endpoint it was issued to, a timestamp of when it was executed, and an anonymised identifier for the client. Timestamps are of particular importance to UC3 and UC4, while an anonymised identifier for the client is mostly of importance to UC4.

F3 Runtime Query Statistics

LSQ should include statistics of the evaluation of the query over the original dataset, including the number of results returned, the estimated runtime, and the selectivity of individual patterns in the query. Again, making such statistics available allows clients to select and analyse queries with regard to these features without having to execute them over the original dataset. Runtime statistics are of particular importance to UC1, UC3, UC4 and UC5.

These facets guide the design of the LSQ dataset in terms of what is included, and how the descriptions of individual queries are represented in RDF.

3. Data model & vocabulary

In this section, we describe the data model and vocabulary employed by LSQ for describing SPARQL queries. First, we identify a number of desiderata:

D1 Generality

The data model should facilitate a variety of use cases and cover at least the aforementioned facets (F1–F3) without the need for clients to parse the raw query strings.

D2 Conciseness

With logs containing millions of queries, the data model should be relatively concise – in terms of triples produced per query – to keep LSQ at a manageable volume of data.

D3 Usability

Core competency questions over the dataset (e.g., find all queries using a particular feature) should be expressible in terms of simple queries that are efficient to evaluate.

Fig. 1.

Core of the LSQ data model: dashed lines indicate sub-classes; datatype properties are embedded within their associated class nodes to simplify presentation; external classes are shown with dotted borders. For clarity, we do not show details of the SPIN representation, or the execution of query elements more fine-grained than BGPs (which follow a similar pattern).


D4 Linked Data Compatibility

URIs should be dereferenceable so as to abide by the Linked Data Principles. Terms from external well-known vocabularies should be re-used where appropriate. Links to other datasets should be provided.

It is important to note that some of these desiderata are incompatible. For example, D2 is in direct conflict with D1 as adding more meta-data for queries can increase generality, but decreases conciseness. D2 can also be seen as being in conflict with D3 and D4, as D3 can be achieved by adding “shortcut” representations for common needs, while D4 requires the addition of links to external datasets, both of which reduce conciseness. Consequently, the data model must find a balance between providing a detailed description of each query, being useful for various purposes, and keeping the overall dataset relatively concise and manageable.

In Fig. 1 we provide an overview of the model used to represent queries in RDF, while in Listing 1 we provide a snippet of the top-level data generated for a query found in the SWDF logs.3 We now discuss the groups of features described for each query.

Query instance We define a “query” to be uniquely identified by the syntactic query string (independently of the endpoint, the particular execution, etc.). We type these queries with lsqv:Query. Instances of this class are linked to the query string using lsqv:text, and to various instances of local and remote executions. Further links are provided to resources that capture the static features of the query and its structure, runtime statistics of its local execution (on our server), and information about its remote execution (on the original server).

Listing 1.

An example LSQ/RDF representation of a SPARQL query in Turtle syntax

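As an illustrative stand-in for the listing, the following Turtle fragment sketches the kind of description discussed below; apart from lsqv:Query, lsqv:text and lsqv:endpoint, all property names, IRIs and literal values are invented for illustration and need not match the actual LSQ vocabulary.

@prefix lsqv: <http://lsq.aksw.org/vocab#> .
@prefix lsqr: <http://lsq.aksw.org/> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

lsqr:lsqQuery-example a lsqv:Query ;
  lsqv:text "SELECT ?p WHERE { ?p a <http://xmlns.com/foaf/0.1/Person> } LIMIT 10" ;
  lsqv:hasStructuralFeatures lsqr:example-sf ;    # hypothetical property/IRI
  lsqv:hasSpin               lsqr:example-spin ;  # hypothetical property/IRI
  lsqv:hasLocalExec          lsqr:example-le1 ;   # hypothetical property/IRI
  lsqv:hasRemoteExec         lsqr:example-re1 .   # hypothetical property/IRI

lsqr:example-re1
  prov:atTime   "2015-06-01T10:20:30Z"^^xsd:dateTime ;  # assumed PROV-O usage
  lsqv:endpoint <http://example.org/sparql> .            # placeholder endpoint IRI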

Static features Next we define some static features of the query, independent of the dataset over which it is evaluated. These include links to its individual join variables, triple patterns, and basic graph patterns; the SPARQL features that it uses; its number of projected variables, basic graph patterns, join variables and triple patterns; the maximum, mean and median degree of its join variables; and the maximum and minimum size of its basic graph patterns. The triple patterns and basic graph patterns themselves link to the SPIN representation of the query included in the description (and discussed presently); the triple patterns, in turn, link to the resources used by the query. The join variables, on the other hand, are described separately, indicating the degree of the variable and the type of join [81] it induces.

SPIN representation While the static features aim to capture some high-level descriptions of the query that may be of interest to specific use cases, some details may be missing. In the interest of generality, we also include for each query a SPARQL Inferencing Notation (SPIN) [48] representation of the query, which essentially captures a fine-grained translation of the SPARQL query to RDF. This SPIN encoding can be translated back to a SPARQL query equivalent to the original.4
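
As a rough illustration of what such an encoding looks like (a sketch using the standard sp: vocabulary; the exact structure stored by LSQ may differ), the query SELECT ?p WHERE { ?p rdf:type foaf:Person } can be represented as:

@prefix sp:   <http://spinrdf.org/sp#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# SPIN encoding of: SELECT ?p WHERE { ?p rdf:type foaf:Person }
[] a sp:Select ;
   sp:resultVariables ( [ sp:varName "p" ] ) ;
   sp:where ( [ sp:subject   [ sp:varName "p" ] ;
                sp:predicate rdf:type ;
                sp:object    foaf:Person ] ) .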

Remote execution(s) Next, individual queries are associated with one or more executions on the original endpoint, including a timestamp of when the query was executed, as well as an anonymised ID for the client – based on their cryptographically hashed and salted IP address – to identify which queries are run by the same agent. The remote execution is also linked to the originating endpoint using lsqv:endpoint. Given that these meta-data constitute provenance for the query, we use the PROV Ontology (PROV-O) [51] for modelling the time, date and agent involved in the remote execution.

Local execution In most cases, the log of the remote executions will not provide statistics about the execution of the query in terms of how many results were returned, how long it took, how selective the individual patterns were, and so forth. Hence we re-execute the queries offline against the original dataset to generate runtime statistics about the query. Local executions were run on a machine with a 64-core Intel(R) Xeon(R) E5-2683 v4 CPU @ 2.10 GHz and 528 GB RAM, running Ubuntu 18.04.5 LTS and Virtuoso 7.2. Due to the large number of queries to evaluate, we set a query timeout of one minute. The statistics generated include the number of results and the runtime for the query, as well as the number of results and the selectivity for each individual triple pattern. Runtime statistics are computed in a controlled environment that abstracts away external factors such as the load on the endpoint server; however, due to the costs involved in evaluating such queries, we compute these only for one query engine, namely Virtuoso 7.2, and runtime estimates may thus vary for other engines.
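
For reference, we assume here the standard definition of triple pattern selectivity, namely the fraction of the dataset's triples that match the pattern (stated only for illustration; the published values may involve engine-specific details):

\mathrm{sel}(tp, D) \;=\; \frac{\lvert \{\, t \in D : t \text{ matches } tp \,\} \rvert}{\lvert D \rvert}

where D denotes the set of triples in the endpoint's dataset.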

Summary The meta-data described in this section aim to strike a balance between the four desiderata mentioned previously. In terms of Generality, we provide detailed meta-data covering static query features, provenance, and runtime query statistics. In terms of Conciseness, though these detailed meta-data may require many triples per query, we reduce this number by re-using resources where appropriate: each unique query string is encoded once per log, with one set of static features, one SPIN representation and one set of local executions, which are then linked to its different remote executions (rather than duplicating this meta-data each time the same query string appears in the log). In terms of Usability, we provide “shortcut triples” that allow for quickly finding queries of interest; the static features of the query are largely of this form, as such meta-data could in principle be computed from the SPIN representation, but only with rather complex SPARQL queries over LSQ; the static query features thus make it easier to find, for example, queries with a certain number of triple patterns, or queries using DISTINCT and GROUP BY. We will discuss Linked Data Compatibility in the section that follows.

4. Publication

The LSQ dataset is published as Linked Data. Before describing the current contents of LSQ, we discuss in more detail how LSQ has been published.

Access methods We provide a number of ways to access LSQ. Firstly, following Linked Data principles, all IRIs under the lsqr: namespace are made dereferenceable using a 303 Redirect; this is implemented with LodView and supports content negotiation. A SPARQL endpoint is provided for querying LSQ 2.0. Table 1 lists the locations for these access methods.

Table 1

Locations from which LSQ can be accessed including an example Linked Data IRI, the vocabulary, dumps, the SPARQL endpoint, as well as locations where LSQ is indexed, including DataHub, Linked Open Vocabularies (LOV) and prefix.cc

Method | Location
Linked Data IRIs | http://lsq.aksw.org/lsqQuery-3wBd2uKotB_-vUxnngs6ZNsGPhJmIDD9c7ig0UI24y8 (example)
Vocabulary | http://lsq.aksw.org/vocab
Dumps | http://lsq.aksw.org/downloads
SPARQL Endpoint | http://lsq.aksw.org/sparql

Catalogue | Location
DataHub | https://datahub.io/dataset/lsq
LOV | https://lov.linkeddata.es/dataset/lov/vocabs/lsq
prefix.cc | http://prefix.cc/lsqv

Vocabulary As seen in Fig. 1, we use a mixture of a custom vocabulary in the lsqv: namespace, as well as existing vocabulary where possible. The custom LSQ vocabulary dereferences (via 303 Redirect) to an RDFS/OWL definition of the corresponding terms in Turtle, which includes metadata about authors. The vocabulary meets four of the five stars of Linked Data vocabulary use [46]. With respect to external vocabulary, we re-use terms from the SPARQL Inferencing Notation (SPIN) ontology [48], as well as the Provenance Ontology (PROV-O) [51] where possible.

Discoverability The LSQ dataset has been registered in the DataHub catalogue, while the LSQ vocabulary has been listed on Linked Open Vocabularies (LOV) [92] as well as prefix.cc. We provide these locations in Table 1. We also compute and publish meta-data about the LSQ dataset using the Vocabulary of Interlinked Datasets (VoID) [5]. More specifically, we compute a separate VoID description for each log and make the resulting description accessible via both a downloadable file and a named graph of the SPARQL endpoint.

Availability The LSQ dataset has been hosted for over six years (at the time of writing) by the Agile Knowledge Engineering and Semantic Web (AKSW) group. As discussed in Section 7, it has been widely adopted in that time. The dataset is available to all under a CC-BY license. We further make the source code used for generating the LSQ dataset from the raw query logs available on GitHub at https://github.com/AKSW/LSQ.

Table 2

High-level statistics for queries in the LSQ dataset (QE = Query Executions, UQ = Unique Queries, RE = Runtime Error, ZR = Zero Results, SEL = SELECT, CON = CONSTRUCT, DES = DESCRIBE)

Dataset | QE | UQ | RE | ZR | SEL (%) | CON (%) | DES (%) | ASK (%)
Affymetrix | 1,229,339 | 311,096 | 277,983 | 31,659 | 16.47 | 83.21 | 0.02 | 0.30
BioModels | 1,238,375 | 435,232 | 412,984 | 21,692 | 41.18 | 58.75 | 0.00 | 0.06
BIOPORTAL | 1,337,804 | 89,664 | 85,273 | 3,389 | 64.88 | 34.78 | 0.00 | 0.34
CTD | 940,390 | 287,296 | 266,999 | 19,824 | 11.98 | 87.76 | 0.00 | 0.26
DBpedia | 6,535,500 | 4,258,941 | 1,259,972 | 1,755,338 | 69.90 | 3.59 | 25.23 | 1.28
dbSNP | 794,023 | 269,498 | 267,662 | 1,698 | 4.99 | 94.99 | 0.00 | 0.02
DrugBank | 1,613,951 | 379,233 | 372,022 | 6,186 | 46.67 | 52.80 | 0.05 | 0.48
GenAge | 589,211 | 265,067 | 263,205 | 1,661 | 5.55 | 94.43 | 0.00 | 0.02
GenDR | 690,864 | 270,697 | 262,776 | 7,726 | 7.53 | 92.45 | 0.00 | 0.02
GO | 1,839,991 | 121,542 | 88,743 | 30,082 | 98.31 | 0.03 | 0.35 | 1.31
GOA | 3,544,273 | 343,836 | 310,800 | 32,317 | 26.18 | 73.69 | 0.06 | 0.07
HGNC | 1,529,681 | 364,961 | 327,540 | 33,568 | 29.15 | 70.58 | 0.04 | 0.23
iRefIndex | 1,560,704 | 309,777 | 289,546 | 19,858 | 18.10 | 81.88 | 0.00 | 0.02
KEGG | 66,830 | 19,871 | 10,386 | 8,004 | 92.04 | 4.30 | 0.41 | 3.24
LinkedGeoData | 154,884 | 61,897 | 11,028 | 13,990 | 98.58 | 1.00 | 0.02 | 0.40
LinkedSQP | 337,001 | 204,112 | 203,534 | 310 | 0.28 | 99.69 | 0.00 | 0.03
MGI | 1,316,673 | 319,627 | 277,080 | 33,781 | 21.12 | 78.60 | 0.05 | 0.23
NCBIGene | 770,716 | 216,832 | 215,938 | 718 | 8.71 | 91.26 | 0.00 | 0.04
OMIM | 1,506,621 | 335,541 | 290,483 | 44,093 | 22.78 | 76.89 | 0.08 | 0.26
PharmGKB | 94,540 | 24,000 | 14,597 | 8,649 | 60.35 | 39.65 | 0.00 | 0.01
SABIORK | 922,407 | 274,098 | 253,733 | 19,938 | 7.91 | 92.07 | 0.00 | 0.02
SGD | 973,281 | 318,641 | 309,593 | 7,199 | 16.06 | 80.53 | 0.30 | 3.12
SIDER | 599,285 | 277,766 | 274,963 | 1,965 | 9.38 | 90.59 | 0.00 | 0.03
SWDF | 1,415,567 | 101,423 | 30,792 | 36,789 | 73.57 | 0.06 | 26.17 | 0.21
Taxonomy | 7,698,898 | 354,582 | 334,290 | 20,041 | 15.83 | 84.16 | 0.00 | 0.02
Wikidata | 3,298,254 | 844,256 | 520,976 | 150,395 | 95.03 | 0.13 | 0.08 | 4.77
Wormbase | 1,353,316 | 498,170 | 496,325 | 1,660 | 49.33 | 50.66 | 0.00 | 0.01
Overall | 43,952,379 | 11,557,656 | 7,729,223 | 2,312,530 | 36.14 | 57.8 | 1.89 | 0.60

5. LSQ 2.0 logs

We now describe the content of the LSQ 2.0 dataset. In order to collect raw SPARQL query logs, we sent mails both to the [email protected] mailing list and to individual providers of endpoints. We also incorporated logs from LSQ 1.0 [77] and a sample of queries from the Wikidata logs [57]. We thus acquired access to the logs of 27 endpoints, 22 of which are part of Bio2RDF release 3 [33]. Table 2 provides high-level statistics of the query logs from which we extract the LSQ dataset, including the query executions registered; the unique query strings; the number of queries producing a runtime error, or returning zero results; as well as the percentage of unique queries using SELECT, CONSTRUCT, DESCRIBE or ASK. Aside from the initial logs of LSQ, only one log is already publicly available, namely Wikidata [57], of which we include a subset described in our data model.

Affymetrix

is a biomedical Linked Dataset describing probesets found in DNA microarrays [33].

BioModels

is a biomedical Linked Dataset describing mathematical models of biological systems [33].

BioPortal

is a biomedical Linked Dataset cataloguing biomedical ontologies [33].

CTD: Comparative Toxicogenomics Database

is a biomedical Linked Dataset that describes how environmental chemicals relate to diseases [33].

DBpedia

is a cross-domain Linked Dataset that is primarily extracted from Wikipedia [53].

dbSNP: Single Nucleotide Polymorphism Database

is a biomedical Linked Dataset that describes single base nucleotide substitutions and short deletion and insertion polymorphisms [33].

DrugBank

is a biomedical Linked Dataset that describes drugs and drug targets [33].

GenAge

is a biomedical Linked Dataset that describes human and other genes linked with ageing [33].

GenDR: Dietary Restriction Gene Database

is a biomedical Linked Dataset that describes genes associated with dietary restrictions [33].

GO: Gene Ontology

is a biomedical ontology that describes genes, gene products, and their functions [33].

GOA: Gene Ontology Annotation

is a biomedical Linked Dataset that provides annotations on proteins, RNA and protein complexes [33].

HGNC: HUGO Gene Nomenclature Committee

is a biomedical Linked Dataset that describes human gene nomenclature [33].

iRefIndex

is a biomedical Linked Dataset that indexes interaction data for proteins [33].

KEGG: Kyoto Encyclopedia of Genes and Genomes

is a biomedical Linked Dataset that describes functions of genes and biological systems [33].

LinkedGeoData

is a geographical Linked Dataset extracted primarily from OpenStreetMap [88].

LinkedSQP: Linked Structured Product Labelling

is a biomedical Linked Dataset that contains meta-data about drug labels sourced from DailyMed [33].

MGI: Mouse Genome Informatics

is a biomedical Linked Dataset that describes mouse genes, alleles, and strains [33].

NCBI Gene

is a biomedical Linked Dataset that describes gene-related information given by the National Center for Biotechnology Information (NCBI) [33].

Online Mendelian Inheritance in Man (OMIM)

is a biomedical Linked Dataset that catalogues human genes as well as genetic traits and disorders [33].

PharmGKB

is a biomedical Linked Dataset describing how genetic variations impact drug responses [33].

SABIORK: System for the Analysis of Biochemical Pathways – Reaction Kinetics

is a biomedical Linked Dataset that describes biochemical reactions [33].

SGD: Saccharomyces Genome Database

is a biomedical Linked Dataset describing the biology and genetics of the yeast Saccharomyces cerevisiae [33].

SIDER: Side Effect Resource

is a biomedical Linked Dataset describing the side effects of drugs [33].

SWDF: Semantic Web Dog Food

is a bibliographical Linked Dataset describing papers, presentations and people participating in top Semantic Web related conferences and workshops [61].

Taxonomy: NCBI Taxonomy

is a biomedical Linked Dataset that describes all organisms found in genetic databases [33].

Wikidata

is a collaboratively edited knowledge graph hosted by the Wikimedia Foundation [57].

Wormbase

is a biomedical Linked Dataset that describes the biology and genome of worms [33].

6. LSQ 2.0 query statistics

We now look in more detail at the composition of the queries currently included in the LSQ dataset. In particular, we first look at some high-level statistics for queries in the dataset, before looking at the static features of the query, the agents making the queries, as well as runtime statistics computed against the corresponding dataset. Finally we discuss the composition of the LSQ dataset itself.

High-level statistics Table 2 provides a high-level analysis of the queries (both query executions and unique queries) appearing in each of the logs considered. From the overall row, we see that LSQ contains 43.95 million query executions and 11.56 million unique queries, implying that each query is executed, on average, 3.8 times within its log. Of the unique queries, 7.7 million (66.9%) have runtime errors, and 2.3 million (20.0%) have no errors but return empty results. A high ratio of the runtime errors comes from the Bio2RDF logs. The majority of queries are CONSTRUCT queries (60.0%), followed by SELECT (32.3%), DESCRIBE (7.1%) and ASK (0.5%). We find that CONSTRUCT queries are particularly prevalent on Bio2RDF endpoints, while DESCRIBE queries are particularly prevalent on the DBpedia and Wikidata endpoints, possibly due to the use of such queries for dereferencing Linked Data IRIs through the endpoint.

Static features Turning to static features, we first look at the percentages of unique queries without parse errors using different SPARQL features (note that we will analyse joins in BGPs and property paths later). Table 3 provides statistics for the usage of different features of SPARQL. We see that FILTER is among the most widely used features, along with SPARQL functions and expressions (note that almost all filters use such expressions). This feature is followed by DISTINCT and other solution modifiers, UNION, OPTIONAL, etc. Notably these are all SPARQL 1.0 features. The SERVICE keyword is commonly used on Wikidata since the Wikidata Query Service provides a custom service for retrieving multilingual labels as preferred/available.

Table 3

Percentage of unique queries without parse errors using the specified SPARQL feature (Sol. Mod. includes the solution modifiers ORDER BY, OFFSET, and LIMIT; Agg. includes the aggregation features GROUP BY, HAVING, AVG, SUM, COUNT, MAX, and MIN; Neg. includes MINUS, NOT EXISTS, and EXISTS; Bind. includes VALUES and BIND; Graph includes FROM, FROM NAMED, and GRAPH; Func. includes SPARQL functions and expressions)

Dataset | UNION | OPTIONAL | DISTINCT | FILTER | REGEX | SERVICE | Sub-Q. | Sol. M. | Agg. | Neg. | Bind. | Graph | Func.
Affymetrix | 3.68 | 0.02 | 7.64 | 83.30 | 0.15 | 0.01 | 0.06 | 4.85 | 0.36 | 0.00 | 0.01 | 0.69 | 83.30
BioModels | 2.64 | 0.01 | 0.18 | 94.32 | 0.06 | 0.00 | 0.01 | 0.12 | 0.10 | 0.00 | 0.00 | 0.03 | 94.32
BIOPORTAL | 1.50 | 0.06 | 0.05 | 37.95 | 2.23 | 0.01 | 0.01 | 0.21 | 34.10 | 0.00 | 0.00 | 34.26 | 37.95
CTD | 3.99 | 0.02 | 0.37 | 88.06 | 0.06 | 0.04 | 0.01 | 3.57 | 0.13 | 0.00 | 0.01 | 3.21 | 88.06
DBpedia | 28.68 | 19.97 | 22.22 | 29.87 | 4.10 | 0.00 | 2.22 | 8.92 | 9.98 | 0.00 | 1.11 | 0.01 | 29.87
dbSNP | 0.05 | 0.01 | 0.10 | 94.87 | 0.00 | 0.05 | 0.01 | 0.13 | 0.07 | 0.00 | 0.00 | 0.09 | 94.87
DrugBank | 2.58 | 15.55 | 12.37 | 54.67 | 1.81 | 0.10 | 0.02 | 9.31 | 2.59 | 0.00 | 0.01 | 2.73 | 54.67
GenAge | 0.00 | 0.01 | 0.08 | 94.37 | 0.00 | 0.00 | 0.01 | 0.06 | 0.07 | 0.00 | 0.00 | 0.02 | 94.37
GenDR | 0.01 | 0.01 | 0.07 | 96.55 | 0.00 | 0.01 | 0.01 | 0.06 | 0.07 | 0.00 | 0.00 | 0.02 | 96.55
GO | 9.08 | 0.16 | 20.98 | 18.82 | 5.92 | 0.89 | 0.07 | 3.86 | 0.08 | 0.00 | 0.01 | 0.02 | 18.82
GOA | 4.17 | 0.01 | 5.00 | 84.76 | 9.15 | 0.86 | 0.03 | 0.71 | 0.09 | 0.00 | 0.00 | 0.44 | 84.76
HGNC | 3.16 | 0.02 | 5.00 | 84.12 | 0.04 | 0.03 | 0.02 | 1.20 | 0.44 | 0.00 | 0.00 | 0.47 | 84.12
iRefIndex | 9.99 | 1.00 | 0.86 | 83.37 | 2.29 | 0.01 | 0.01 | 0.87 | 0.12 | 0.00 | 0.00 | 0.74 | 83.37
KEGG | 11.64 | 1.13 | 54.91 | 7.22 | 2.86 | 0.07 | 0.04 | 42.95 | 1.02 | 0.00 | 0.01 | 0.79 | 7.22
LinkedGeoData | 1.15 | 19.13 | 9.24 | 18.06 | 2.61 | 0.01 | 7.64 | 30.75 | 37.57 | 0.00 | 0.52 | 2.52 | 18.06
LinkedSQP | 0.00 | 0.01 | 0.00 | 99.76 | 0.00 | 0.00 | 0.01 | 0.05 | 0.07 | 0.00 | 0.00 | 0.03 | 99.76
MGI | 3.57 | 0.02 | 6.99 | 79.43 | 0.43 | 0.01 | 0.03 | 2.98 | 0.57 | 0.00 | 0.05 | 0.64 | 79.43
NCBIGene | 0.02 | 0.01 | 0.17 | 91.53 | 0.02 | 0.03 | 0.01 | 2.72 | 0.22 | 0.00 | 0.00 | 2.61 | 91.53
OMIM | 3.52 | 1.10 | 4.90 | 80.83 | 0.31 | 0.39 | 0.04 | 5.62 | 0.93 | 0.00 | 0.01 | 1.09 | 80.83
PharmGKB | 33.05 | 0.00 | 42.22 | 47.92 | 0.28 | 0.13 | 0.01 | 43.40 | 0.07 | 0.00 | 0.00 | 1.14 | 47.92
SABIORK | 4.15 | 0.01 | 0.12 | 92.00 | 0.00 | 0.00 | 0.01 | 0.17 | 0.09 | 0.00 | 0.00 | 0.05 | 92.00
SGD | 1.63 | 0.01 | 6.73 | 80.06 | 0.09 | 0.03 | 0.04 | 4.38 | 3.87 | 0.00 | 0.00 | 4.24 | 80.06
SIDER | 0.02 | 0.01 | 7.44 | 90.87 | 0.00 | 0.03 | 0.01 | 7.42 | 0.09 | 0.00 | 0.00 | 0.73 | 90.87
SWDF | 40.13 | 34.08 | 53.16 | 2.34 | 0.87 | 0.04 | 0.10 | 31.45 | 1.08 | 0.00 | 0.01 | 32.32 | 2.34
Taxonomy | 3.19 | 0.01 | 0.04 | 92.91 | 0.04 | 0.00 | 0.01 | 0.35 | 0.25 | 0.00 | 0.00 | 0.44 | 92.91
Wikidata | 9.27 | 29.21 | 15.32 | 26.48 | 1.13 | 54.38 | 7.44 | 40.72 | 7.99 | 0.00 | 8.99 | 0.00 | 26.48
Wormbase | 14.16 | 4.46 | 0.12 | 69.92 | 9.69 | 1.58 | 0.00 | 0.27 | 0.63 | 0.00 | 0.00 | 0.82 | 69.92
Overall | 7.22 | 4.67 | 10.23 | 67.57 | 1.63 | 2.17 | 0.66 | 9.14 | 3.77 | 0.00 | 0.34 | 3.34 | 67.57

Next, in Table 4, we provide three types of statistics about the basic graph patterns and property path features used. First, we present the number of unique subject, predicate and object terms used in the BGPs of the logs in order to characterise their diversity. We see that DBpedia, LinkedGeoData and Wikidata offer the most diversity, particularly in terms of predicates found in the queries. Second, we present the percentage of queries with different types of joins in the basic graph patterns [81]. Each join variable in a basic graph pattern is analysed in order to understand how it connects triple patterns. We say that a join vertex has an “outgoing link” if it appears as the subject of a triple pattern, and that it has an “incoming link” if it appears as a predicate or object. The join types are then defined as follows (an illustrative example is given after the definitions):

Star

has multiple outgoing but no incoming links.

Path

has one incoming and one outgoing link.

Hybrid

has at least one incoming and at least one outgoing link, and three or more links overall.

Sink

has multiple incoming but no outgoing links.
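
For example, consider the following basic graph pattern (an assumed illustrative query, not taken from the logs; prefix declarations omitted):

SELECT * WHERE {
  ?film  :director   ?dir .     # ?film: 2 outgoing, 0 incoming -> Star
  ?film  :starring   ?actor .
  ?actor :birthPlace ?city .    # ?actor: 1 incoming, 1 outgoing -> Path
  ?dir   :birthPlace ?city .    # ?dir: also Path; ?city: 2 incoming, 0 outgoing -> Sink
}

Adding a further triple pattern with ?actor in the object position would give ?actor three links, with at least one in each direction, making it a Hybrid join vertex.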

From Table 4, we see that the majority of queries have no joins but, where present, Star joins are the most frequent, followed by Hybrid and Sink joins. Third, we present the number of queries using different property path features, where we see that DBpedia and Wikidata contain the most use of property path queries, while the Bio2RDF logs exhibit little use of this feature. The most used such feature is / (concatenation).

These statistics may be helpful for consumers to choose which dataset/log to work with. For example, for the purposes of benchmarking joins, a dataset such as LinkedGeoData or Wikidata may be chosen as most queries feature joins; in order to benchmark or analyse property paths, DBpedia or Wikidata may be chosen as they use this feature more frequently; etc.

Table 4

Analysis of basic graph patterns and property paths, including the number of unique subject/predicate/object terms, the percentage of unique queries containing different types of joins (a query may contain multiple join types), and the number of queries using different types of property path expressions (/ denotes concatenation, ^ denotes inverse, * denotes zero-or-more, + denotes one-or-more, | denotes disjunction)

(Columns: BGP Terms: Subj., Pred., Obj.; Join Types (%): Star, Hyb., Path, Sink, None; Prop. Path Features: /, ^, *, +, |)
Dataset | Subj. | Pred. | Obj. | Star | Hyb. | Path | Sink | None | / | ^ | * | + | |
Affymetrix | 17,912 | 432 | 27,398 | 2.36 | 0.16 | 0.03 | 0.10 | 97.57 | 2 | 0 | 0 | 0 | 1
BioModels | 14,055 | 347 | 120,148 | 37.22 | 0.10 | 0.01 | 0.04 | 62.71 | 2 | 0 | 0 | 0 | 1
BIOPORTAL | 9,275 | 130 | 6,275 | 36.26 | 34.22 | 0.01 | 53.08 | 44.60 | 1 | 0 | 0 | 0 | 1
CTD | 14,927 | 276 | 22,320 | 1.72 | 0.19 | 0.04 | 0.16 | 98.21 | 3 | 1 | 0 | 0 | 1
DBpedia | 912,943 | 10,842 | 1,104,732 | 29.38 | 7.06 | 1.71 | 15.48 | 69.56 | 49,660 | 39,039 | 271 | 7,582 | 32,709
dbSNP | 12,825 | 112 | 6,069 | 2.10 | 0.06 | 0.01 | 0.04 | 97.86 | 2 | 0 | 0 | 0 | 1
DrugBank | 37,578 | 989 | 34,601 | 33.39 | 16.81 | 2.01 | 7.50 | 64.44 | 8 | 0 | 1 | 0 | 1
GenAge | 2,666 | 113 | 11,875 | 4.30 | 0.04 | 0.01 | 0.01 | 95.66 | 2 | 0 | 0 | 0 | 1
GenDR | 5,664 | 104 | 705 | 4.22 | 4.17 | 0.01 | 0.01 | 95.74 | 3 | 0 | 0 | 0 | 1
GO | 35,504 | 394 | 59,362 | 16.51 | 0.90 | 0.87 | 1.31 | 83.14 | 4 | 2 | 0 | 0 | 1
GOA | 33,593 | 204 | 22,044 | 8.06 | 0.05 | 0.02 | 0.02 | 91.89 | 5 | 0 | 0 | 0 | 1
HGNC | 23,430 | 414 | 36,857 | 15.72 | 1.53 | 0.02 | 4.30 | 84.21 | 2 | 0 | 0 | 0 | 1
iRefIndex | 20,067 | 171 | 28,069 | 9.09 | 0.35 | 0.01 | 1.50 | 90.85 | 2 | 0 | 0 | 0 | 1
KEGG | 5,620 | 251 | 8,964 | 7.24 | 1.67 | 0.51 | 0.93 | 92.08 | 3 | 0 | 0 | 0 | 1
LinkedGeoData | 13,498 | 5,991 | 2,628 | 49.51 | 24.15 | 0.04 | 34.27 | 41.28 | 672 | 78 | 0 | 0 | 9
LinkedSQP | 326 | 55 | 144 | 0.05 | 0.03 | 0.02 | 0.00 | 99.91 | 2 | 0 | 0 | 0 | 1
MGI | 28,702 | 391 | 23,867 | 2.13 | 1.36 | 0.15 | 0.56 | 97.79 | 5 | 0 | 0 | 0 | 1
NCBIGene | 11,753 | 254 | 4,427 | 2.16 | 0.20 | 0.02 | 0.18 | 97.79 | 3 | 0 | 1 | 0 | 1
OMIM | 23,504 | 623 | 50,229 | 7.00 | 4.57 | 0.34 | 3.95 | 92.52 | 10 | 0 | 0 | 0 | 3
PharmGKB | 1,099 | 83 | 13,548 | 8.03 | 50.69 | 0.82 | 1.83 | 47.97 | 0 | 0 | 0 | 0 | 1
SABIORK | 14,224 | 156 | 19,775 | 0.70 | 0.04 | 0.02 | 0.01 | 99.25 | 2 | 0 | 0 | 0 | 1
SGD | 7,228 | 508 | 13,460 | 6.83 | 5.65 | 0.03 | 4.02 | 93.06 | 2 | 0 | 0 | 0 | 1
SIDER | 8,792 | 152 | 3,589 | 0.53 | 0.08 | 0.02 | 0.04 | 99.43 | 6 | 0 | 0 | 0 | 1
SWDF | 25,640 | 420 | 10,823 | 32.05 | 7.27 | 3.34 | 0.95 | 58.62 | 94 | 22 | 0 | 0 | 17
Taxonomy | 16,201 | 207 | 97,298 | 22.54 | 0.23 | 0.01 | 0.21 | 77.41 | 6 | 0 | 0 | 0 | 1
Wikidata | 47,871 | 11,779 | 263,974 | 46.63 | 17.59 | 4.98 | 12.05 | 41.20 | 134,811 | 2,944 | 3,838 | 0 | 23,525
Wormbase | 53,807 | 148 | 24,083 | 39.40 | 5.13 | 4.47 | 5.07 | 60.55 | 2 | 0 | 0 | 0 | 1
Overall | 1,398,704 | 35,546 | 2,017,264 | 15.74 | 6.58 | 0.72 | 5.47 | 80.56 | 185,314 | 42,086 | 4,111 | 7,582 | 56,285

Provenance: Executions and agents Next we look at how many clients (anonymised IPs) and unique queries underlie the registered executions in order to compare the diversity of the different datasets. Note that client information is not available for Wikidata. In Fig. 2(a) and Fig. 2(b), we present Lorenz curves for the number of executions per client and per query, respectively. We present results for Bio2RDF together as one series to ensure better readability. In general, we see a skew in the graph away from the equality curve towards the bottom-left corner, meaning that a small number of clients/queries are involved in a large number of executions. The skew is more evident in the case of clients, and particularly for the SWDF and Bio2RDF datasets; thus consumers of LSQ 2.0 should be aware that a high ratio of queries from these datasets comes from a small number of clients (likely bots). DBpedia is the most diverse in terms of clients and queries.

Fig. 2.

Lorenz curves for the LSQ dataset


Static and runtime statistics Next, in order to characterise how complex the queries are to evaluate, in Table 5 we present some relevant static and runtime statistics, where static statistics can be computed from the query string, while runtime statistics require evaluating the query locally (only queries that were successfully run are counted; see Table 2 for statistics on runtime errors). Regarding runtimes, we recall that these were run with a one-minute timeout, which thus caps the maximum recorded runtime. We see that LinkedGeoData contains the most costly queries to run, which appears to correlate with larger result sizes and a larger mean join-vertex degree. Relatively high runtimes are also seen for the KEGG dataset. The simplest queries to run are found in the GenAge, GenDR and Taxonomy datasets. These results suggest, for example, that LinkedGeoData might be more suitable for consumers looking for a challenging benchmark.
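
As a sketch of how a consumer might retrieve such challenging queries, the following query selects the slowest local executions recorded for the LinkedGeoData log; the named graph IRI and the properties lsqv:hasLocalExec and lsqv:runtime are hypothetical names used only for illustration.

PREFIX lsqv: <http://lsq.aksw.org/vocab#>

SELECT ?text ?secs WHERE {
  GRAPH <http://lsq.aksw.org/datasets/LinkedGeoData> {   # hypothetical graph IRI
    ?query a lsqv:Query ;
           lsqv:text ?text ;
           lsqv:hasLocalExec ?exec .   # hypothetical property name
    ?exec  lsqv:runtime ?secs .        # hypothetical property name
  }
}
ORDER BY DESC(?secs)
LIMIT 50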

Table 5

Comparison of the mean values of runtime statistics across all query logs (PVs = Project Variables, BGPs = Basic Graph Patterns, TPs = Triple Patterns, JVs = Join Vertices, MJVD = Mean Join Vertex Degree, MTPS = Mean Triple Pattern Selectivity)

(Columns PVs through MTPS are static statistics (mean); Result Size and Runtime are runtime statistics (mean))
Dataset | PVs | BGPs | TPs | JVs | MJVD | MTPS | Result Size | Runtime (sec)
Affymetrix | 1.93 | 1.06 | 1.10 | 0.03 | 0.06 | 0.82 | 12708.39 | 0.084
BioModels | 1.24 | 1.04 | 1.42 | 0.37 | 0.75 | 0.57 | 4896.67 | 0.011
BIOPORTAL | 1.16 | 1.03 | 1.94 | 1.43 | 1.12 | 0.54 | 1699.48 | 0.004
CTD | 2.56 | 1.05 | 1.08 | 0.02 | 0.04 | 0.85 | 24354.24 | 0.102
DBpedia | 2.78 | 2.37 | 3.23 | 0.93 | 0.66 | 0.01 | 114038.38 | 0.164
dbSNP | 1.09 | 1.02 | 1.04 | 0.02 | 0.04 | 0.97 | 757108.37 | 0.009
DrugBank | 2.61 | 1.05 | 1.93 | 0.69 | 0.91 | 0.66 | 119759.38 | 0.007
GenAge | 1.88 | 1.00 | 1.09 | 0.04 | 0.13 | 0.99 | 1642.84 | 0.003
GenDR | 2.73 | 1.00 | 1.08 | 0.08 | 0.09 | 0.97 | 83.50 | 0.003
GO | 1.46 | 1.10 | 1.37 | 0.22 | 0.38 | 0.02 | 93806.20 | 0.046
GOA | 1.87 | 1.03 | 1.12 | 0.08 | 0.16 | 0.85 | 7692.26 | 0.016
HGNC | 1.91 | 1.05 | 1.29 | 0.23 | 0.35 | 0.80 | 2419.43 | 0.019
iRefIndex | 2.92 | 1.13 | 1.43 | 0.19 | 0.25 | 0.82 | 32200.76 | 0.077
KEGG | 2.27 | 1.15 | 1.31 | 0.13 | 0.18 | 0.33 | 175469.53 | 3.862
LinkedGeoData | 2.27 | 1.16 | 2.62 | 1.10 | 1.76 | 0.15 | 11055973.09 | 6.788
LinkedSQP | 2.01 | 1.00 | 1.00 | 0.00 | 0.00 | 1.00 | 9503.41 | 0.014
MGI | 2.04 | 1.04 | 1.11 | 0.05 | 0.06 | 0.84 | 2050.76 | 0.178
NCBIGene | 1.39 | 1.02 | 1.04 | 0.03 | 0.04 | 0.95 | 10731.33 | 0.021
OMIM | 1.83 | 1.07 | 1.26 | 0.17 | 0.18 | 0.77 | 3505.54 | 0.020
PharmGKB | 1.96 | 1.34 | 2.48 | 1.06 | 1.08 | 0.39 | 255.61 | 0.017
SABIORK | 2.96 | 1.05 | 1.06 | 0.01 | 0.02 | 0.88 | 1610.77 | 0.005
SGD | 1.45 | 1.12 | 1.96 | 0.35 | 0.18 | 0.58 | 108951.60 | 0.058
SIDER | 1.34 | 1.00 | 1.01 | 0.01 | 0.01 | 0.98 | 9703.86 | 0.010
SWDF | 4.04 | 3.37 | 3.97 | 0.45 | 0.92 | 0.03 | 37362.67 | 0.007
Taxonomy | 1.77 | 1.17 | 1.53 | 0.23 | 0.59 | 0.69 | 1928.75 | 0.004
Wikidata | 3.00 | 2.47 | 4.73 | 1.06 | 1.81 | 0.00 | 17817773.63 | 0.412
Wormbase | 1.56 | 1.25 | 2.05 | 0.65 | 0.87 | 0.98 | 9888.61 | 0.007
Overall | 2.07 | 1.26 | 1.71 | 0.35 | 0.47 | 0.65 | 1126559.96 | 0.440

LSQ dataset statistics The LSQ 2.0 dataset, describing 43.95 million executions of 11.56 million unique queries, contains 1.24 billion triples, split into 27 named graphs (one for each of the datasets listed).
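
As a simple usage sketch (assuming the vocabulary namespace IRI shown in the prefix declaration), the per-log composition can be inspected directly over the SPARQL endpoint by counting the described queries in each named graph:

PREFIX lsqv: <http://lsq.aksw.org/vocab#>

SELECT ?g (COUNT(?q) AS ?queries) WHERE {
  GRAPH ?g { ?q a lsqv:Query . }
}
GROUP BY ?g
ORDER BY DESC(?queries)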

7. LSQ adoption

In this section we present how LSQ has been adopted since its initial release with four logs in 2015. We organise this discussion following the motivational use cases we originally envisaged, as presented in Section 2. Table 6 provides a chronological overview of the research works that have used LSQ. We now discuss these works in more detail; note that in the case of works that relate to multiple use cases, we will discuss them once under what we identify to be the “primary” related use case. We further discuss some works that have used the LSQ dataset for use cases beyond the six we had originally envisaged.

Table 6

Research works making use of the LSQ dataset since its initial release, ordered by year and then alphabetically by author name; the use case(s) that each work targets (UC1: Custom Benchmarks; UC2: SPARQL Adoption; UC3: Caching; UC4: Usability; UC5: Optimisation; UC6: Meta-Querying) are discussed in the text of this section

Name | Year
Saleem et al. [79] | 2015
Arenas et al. [11] | 2016
Benedetti and Bergamaschi [17] | 2016
Georgala et al. [39] | 2016
Han et al. [41] | 2016
Hernandez et al. [42] | 2016
Knuth et al. [49] | 2016
Rico et al. [71] | 2016
Schoenfisch and Stuckenschmidt [85] | 2016
Song et al. [87] | 2016
Bonifati et al. [21] | 2017
Dellal et al. [31] | 2017
Fokou et al. [37] | 2017
Stegemann and Ziegler [89] | 2017
Thakkar et al. [90] | 2017
Akhtar et al. [2] | 2018
Bonifati et al. [22] | 2018
Darari et al. [29] | 2018
Martens and Trautner [58] | 2018
Salas and Hogan [76] | 2018
Saleem et al. [78] | 2018
Saleem et al. [80] | 2018
Varga et al. [94] | 2018
Viswanathan et al. [97] | 2018
Akhtar et al. [3] | 2019
Cheng and Hartig [26] | 2019
Fafalios and Tzitzikas [34] | 2019
Fernandez et al. [35] | 2019
Potoniec [69] | 2019
Saleem et al. [83] | 2019
Thost and Dolby [91] | 2019
Wang et al. [99] | 2019
Safavi et al. [75] | 2019
Singh et al. [86] | 2019
Azzam et al. [15] | 2020
Bigerl et al. [18] | 2020
Bonifati et al. [24] | 2020
Figueira et al. [36] | 2020
Jian et al. [47] | 2020
Zhang et al. [102] | 2020
Aebeloe et al. [1] | 2021
Almendros-Jimenez et al. [6] | 2021
Azzam et al. [14] | 2021
Davoudian et al. [30] | 2021
Desouki et al. [32] | 2021
Röder et al. [74] | 2021
Wang et al. [98] | 2021

UC1: Custom Benchmarks LSQ has been adopted in various works for creating custom benchmarks.

  • Saleem et al. [79] present a framework for generating benchmarks that can be used to evaluate SPARQL endpoints under typical workloads; the benchmarks generate query types depending on the features of the queries submitted to the endpoint, where LSQ is used for testing.

  • Later works by Saleem et al. further propose frameworks for generating benchmarks from LSQ for the purposes of evaluating query containment [80,82] and federated query evaluation [78], as well as comparing existing SPARQL benchmarks against LSQ in order to understand how representative they are of real workloads [83].

  • Hernández et al. [42] present an empirical study of the efficiency of graph database engines for answering SPARQL queries over Wikidata; they refer to LSQ to verify that the query shapes considered for evaluation correspond with other analyses of real-world SPARQL queries.

  • Fernández et al. [35] evaluate various archiving techniques and querying strategies for RDF archives that store historical data; in their evaluation, they select the 200 most frequent triple patterns from the DBpedia query set in LSQ.

  • Azzam et al. [15] use LSQ for retrieving highly-demanding queries from the dataset in order to evaluate their system for dividing the load processed by different SPARQL servers.

  • Bigerl et al. [18] develop a tensor-based triple store, where they used LSQ as input to the FEASIBLE framework to generate a custom benchmark.

  • Azzam et al. [14] present a system that dynamically delegates query processing load between clients and servers. The authors build on the Linked Data Fragments client/server approach, improving it with the aforementioned technique, and use 16 queries from LSQ to complement their evaluation.

  • Davoudian et al. [30] present a system that partitions graphs depending on the access frequency to their nodes. In this way the system implements workload-aware partitioning. The authors use LSQ for evaluating their approach.

  • Desouki et al. [32] propose a method to generate synthetic benchmark data. To generate these synthetic data, they use existing RDF graphs, such as SWDF and DBpedia 2016, as input. They benchmark their approach using queries from LSQ.

  • Röder et al. [74] develop a method to predict the performance of knowledge graph query engines; to do so the authors use a stochastic generation model that is able to generate graphs of arbitrary sizes similar to the input graph. They use LSQ as a benchmark of real-world queries.

UC2: SPARQL adoption Other works have used LSQ to understand how SPARQL is being used in practice.

  • Han et al. [41] provide a statistical analysis of the queries of LSQ, surveying both syntactic features, such as the number of triple patterns, the SPARQL features used, and the frequency of well-designed patterns, as well as semantic properties, such as monotonicity, weak monotonicity, non-monotonicity and satisfiability.

  • Bonifati et al. [21,22] conduct a detailed analysis of the queries in various logs, including LSQ; they study a variety of phenomena in these queries, including their shape, their (hyper)treewidth, common abstract patterns found in the property paths, “streaks” that represent a sequence of user reformulations from a seed query, and more besides.

UC3: Caching LSQ can also be used to simulate real workloads for systems that explore caching techniques.

  • Knuth et al. [49] propose a middleware component to which applications register and get notifications when the results of their SPARQL queries change; the authors study the problem of scheduling refresh queries for a large number of registered queries and use LSQ to validate their approach.

  • Akhtar et al. [2,3] propose an approach to capture changes in an RDF dataset and update a cache system in front of the SPARQL endpoint exposing that data; their approach consists of a change metric that quantifies the changes in an RDF dataset, and a weighting function that assigns importance to recent changes; they use LSQ to verify their approach for real workloads.

  • Salas and Hogan [76] propose a method for query canonicalisation, which consists in mapping congruent queries – i.e., queries that are equivalent modulo variable names – to the same query string; their main use case is to increase the hit rate of SPARQL caches, where they use LSQ to test efficiency on real-world queries and to see how many congruent queries can be found in real workloads.

  • Safavi et al. [75] study SPARQL adoption using LSQ in order to later provide queries for summarising knowledge graphs such that they can be more efficiently accessed from and stored on mobile devices with limited resources.

UC4: Usability LSQ also has applications for improving the usability of SPARQL endpoints.

  • Arenas et al. [11] propose a method for reverse-engineering SPARQL queries, which attempts to construct a query that will return a given set of positive examples as results, but not a second set of negative examples; the authors use LSQ to show that the approach scales well in the data size, number of examples, and in the size of the smallest query that fits the data.

  • Benedetti and Bergamaschi [17] present a system (LODeX) that allows users to explore SPARQL endpoints more easily through a formal model defined over the endpoint schema; they show that LODeX is able to generate 77.6% of the 5 million queries contained in the original LSQ dataset.

  • Dellal et al. [31] propose query relaxation methods for queries with empty results, based on finding minimal failing subqueries (generating empty results) and maximal succeeding subqueries (generating non-empty results) to aid the user [37]. The paper refers to LSQ to establish that queries with empty results are common in practice.

  • Stegemann and Ziegler [89] propose new operators for the SPARQL language that allow for composing path queries more easily; the authors evaluated their approach with a user study and analysis of the extent to which their language is able to express the real-world queries found in LSQ.

  • Viswanathan et al. [97] propose a different form of query relaxation, which generalises a specific resource to a variable on which specific restrictions are added that correspond to relevant characteristics of the resource; they use LSQ to understand how entities are queried in practice.

  • Potoniec [69] proposes an interactive system for learning SPARQL queries from positive and negative examples; he uses the DBpedia queries of LSQ for experiments.

  • Wang et al. [99] present an approach for explaining missing results for a SPARQL query – based on answering “why-not” questions that ask why a specific result is not included – to help users refine their initial queries; the authors search LSQ for queries useful for their approach.

  • Bonifati et al. [24] analyse “streaks” in the DBpedia query logs, where a streak is defined as a sequence of similar queries in chronological order, capturing the idea of a user refining and/or extending an initial query towards a final query.

  • Jian et al. [47] use LSQ to evaluate their approach for SPARQL query relaxation (to generalise users’ queries) and query restriction (to refine users’ queries) based on approximation and heuristics.

  • Zhang et al. [102] propose a method to model client behaviour when formulating SPARQL queries in order to predict their intent and optimise queries. They use LSQ for their evaluation.

  • Almendros-Jimenez et al. [6] present two methods for discovering and diagnosing “wrong” SPARQL queries based on ontology reasoning. They evaluate their approach using LSQ queries.

  • Wang et al. [98] focus on providing explanations for SPARQL query similarity measures. The authors provide similarity scores using several explainable models based on Linear Regression, Support Vector Regression, Ridge Regression, and Random Forest Regression. They use LSQ to evaluate their query classification.

UC5: Optimisation The LSQ dataset can also be used to identify and study fragments that are commonly used in practice and can be evaluated efficiently using dedicated algorithms.

  • The aforementioned analyses by Han et al. [41] and Bonifati et al. [21,22] suggest that well-designed patterns, queries of bounded treewidth, etc., make for promising fragments.

  • In the context of probabilistic Ontology-Based Data Access (OBDA), Schoenfisch and Stuckenschmidt [85] analyse the ratio of safe queries – whose evaluation is tractable in data complexity – versus unsafe queries – whose evaluation is #P-hard; they show that over 97.9% of the LSQ queries are safe, and can be efficiently evaluated.

  • Song et al. [87] use LSQ to analyse how nested OPTIONAL clauses affect query response times; they propose a way to approximate solutions for deeply-nested well-designed patterns.

  • Martens and Trautner [58] later take the property paths extracted by Bonifati et al. [21] from LSQ and other sources, defining simple transitive expressions that subsume almost all property path expressions seen in practice, while allowing more efficient evaluation than the general case.

  • Cheng and Hartig [26] introduce a monotonic version of the OPTIONAL operator to SPARQL called OPT+; a possible downside of the operator is an increase in query result sizes, where they use the LSQ dataset to study how OPTIONAL and OPT+ behave for real-world queries.

  • Building upon the work of Martens and Trautner [58], Figueira et al. [36] specifically study the containment problem for restricted classes of Conjunctive Regular Path Queries (CRPQs), which are akin to BGPs with property paths; aside from complexity results, they show the coverage of the different classes for logs that include LSQ [24].

UC6: Meta-querying A handful of works have also used LSQ in the context of meta-querying, where queries are found based on the resources they contain.

  • Rico et al. [71] observe that analogous DBpedia properties are often defined in two distinct namespaces – e.g., dbo:birthPlace and dbp:birthPlace – where they propose methods to automatically expand SPARQL queries to capture solutions involving analogous properties; they show that only 0.2% of the DBpedia queries in LSQ mention properties from both namespaces.

  • Varga et al. [94] provide an RDF-based metamodel for BI 2.0 systems, which allows for capturing the schema of a dataset, as well as previous queries that have been posed against that dataset by other users; the authors propose to re-use parts of the LSQ vocabulary in their model; they further instantiate their model using LSQ to retrieve queries asked about countries.

Other use cases A number of works have used LSQ (mostly for evaluation) in contexts that were not originally anticipated by the aforementioned use cases.

  • Georgala et al. [39] propose a method to predict temporal relations between events represented by RDF resources following Allen’s interval algebra; they use LSQ to validate their approach considering query executions as events.

  • Darari et al. [29] present a theoretical framework for augmenting RDF data sources with completeness statements, which allows for reasoning about the completeness of SPARQL query results; they evaluate their method using LSQ.

  • Fafalios and Tzitzikas [34] present a query evaluation strategy, called SPARQL-LD, that combines link traversal and query processing at SPARQL endpoints; they provide a method for checking if a SPARQL query can be answered through link traversal, and analyse a large corpus of real SPARQL query logs – including LSQ – for finding the frequency and distribution of answerable and non-answerable query patterns; they also use LSQ to evaluate their approach.

  • Singh et al. [86] use the LSQ vocabulary to represent SPARQL query-related features when generating a benchmark for Question Answering over Linked Data.

  • Thost and Dolby [91] present QED: a system for generating concise RDF graphs that are sufficient to produce solutions from a given query, which can be used for benchmarking, for compliance testing, for training query-by-example models, etc.; they apply their system over LSQ queries to generate datasets from DBpedia.

  • Aebeloe et al. [1] present a decentralised, blockchain-based architecture that allows users to propose updates to faulty or outdated data, to trace the origin of such data, and to query older versions of the data; they use LSQ queries for their evaluation.

Discussion Per Table 6, we see that the original version of LSQ has been used in a wide variety of research works for a range of purposes. Complementing other SPARQL query logs such as Wikidata’s [57], we believe that LSQ 2.0, with its extended set of queries, will likewise serve as a useful resource to help align the theory and practice of SPARQL research.

8.Conclusions and future directions

In this paper, we have described the Linked SPARQL Queries v.2 (LSQ 2.0) dataset, which represents queries in logs as RDF, allowing clients to quickly find real-world queries that may be of interest to them. We have described a number of use cases for LSQ, including the generation of custom benchmarks, the analysis of how SPARQL is used in practice, the evaluation of caching systems, the exploration of techniques to improve the usability of SPARQL services, the targeted optimisation of queries with characteristics commonly found in real workloads, as well as the ability to find queries relating to specific resources. We then described the model and vocabulary used to represent LSQ, including static features of queries, a SPIN representation, provenance encoding the agents and endpoints from which the queries originate, as well as runtime statistics generated through local executions of the queries against their corresponding dataset. We then discussed how LSQ is published, thereafter describing the datasets and queries featured in the current version of LSQ. Finally, we discussed how LSQ has been used for research purposes since its initial release in 2015.

As discussed in Section 7, since its initial release, LSQ has been adopted by a variety of research works for a range of purposes. In terms of future directions, we will look to continue adding further logs, and thus further queries, to the dataset. Examining how LSQ has been adopted in the literature has also revealed ways in which its metadata could be extended in a future version, for example by adding information about monotonicity and satisfiability [41], or about (hyper)treewidth [21,22]. It may also be useful to provide a canonical version of the query string [76]; this could perhaps be leveraged, for example, when evaluating caching methods. Another useful feature would be to add natural-language questions that verbalise each query, which could be used to create datasets for training and testing question answering systems, as well as to enable users to find relevant queries through keyword search; given the large number of queries in the dataset, an automated approach may be applicable [64].

As discussed by Martens and Trautner [59], query logs help to bridge the theory and practice of SPARQL. They play an important role in ensuring that the research conducted by the community is guided by the requirements and trends that emerge in practice. We thus believe that LSQ (2.0) will continue to serve an important role in SPARQL research in the coming years.

Notes

1 http://usewod.org/; retr. 2015/04/14.

3 Note that for the purposes of presentation, we abbreviate some of the details of the query, including the IRIs used to identify local query executions.

4 Given a query Q and dataset D, let Q(D) denote the result(s) of evaluating Q over D. Two queries Q1 and Q2 are then defined to be equivalent if and only if Q1(D)=Q2(D) for every dataset D.
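As a simple illustration (not drawn from the logs), the following two queries are equivalent in this sense, since the duplicated triple pattern in the first has no effect on the solutions over any dataset:

    PREFIX foaf: <http://xmlns.com/foaf/0.1/>

    # Q1: contains a duplicated triple pattern
    SELECT ?x WHERE { ?x a foaf:Person . ?x a foaf:Person }

    # Q2: equivalent to Q1 without the duplicate
    SELECT ?x WHERE { ?x a foaf:Person }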

5 A “salt” in cryptography is a privately-held, arbitrary string that is combined (e.g., concatenated) with the input being hashed in order to prevent attacks based on precomputed tables (e.g., of common values or, in this case, of a collection of IP addresses of interest).
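As a minimal sketch of the idea (not the actual anonymisation pipeline used for LSQ), a salted hash can be computed with SPARQL’s built-in hash functions, here assuming a hypothetical salt value and example addresses from the IPv4 documentation range:

    # Concatenate a (secret) salt with each IP address before hashing, so that
    # the hashes cannot be reversed using a precomputed table of known IPs.
    SELECT ?ip (SHA256(CONCAT("hypothetical-salt", ?ip)) AS ?clientHash)
    WHERE {
      VALUES ?ip { "192.0.2.1" "192.0.2.2" }
    }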

6 Although there exist properties called “endpoint” – such as void:sparqlEndpoint or sd:endpoint – the domains of these properties were not query executions, but rather VoID datasets (i.e., sets of RDF triples), or SPARQL services. Though it would be possible to define properties such as lsqv:dataset or lsqv:service and then link a query execution <x> to an endpoint URL <e> with <x> lsqv:dataset [void:sparqlEndpoint <e>], or alternatively <x> lsqv:service [sd:endpoint <e>], this would introduce O(n) additional triples to the LSQ 2.0 dataset, for n the number of remote query executions (in LSQ 2.0, n=43,952,379). (Please note that the dataset or service may change during the lifetime of the log, which we do not have information about; hence we cannot refer to one dataset/service at a given endpoint.) Thus we rather introduce lsqv:endpoint in the data and define property chain axioms in the LSQ 2.0 vocabulary to relate lsqv:endpoint to lsqv:dataset/void:sparqlEndpoint and lsqv:service/sd:endpoint.
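The intended correspondence can be sketched with a CONSTRUCT query that materialises the compact lsqv:endpoint triple from either verbose form, mirroring the direction of the property chain axioms (the lsqv namespace IRI is assumed here and should be checked against the LSQ vocabulary):

    PREFIX lsqv: <http://lsq.aksw.org/vocab#>  # assumed namespace
    PREFIX void: <http://rdfs.org/ns/void#>
    PREFIX sd:   <http://www.w3.org/ns/sparql-service-description#>

    # The chains lsqv:dataset/void:sparqlEndpoint and lsqv:service/sd:endpoint
    # both yield the compact lsqv:endpoint link used in the LSQ 2.0 data.
    CONSTRUCT { ?exec lsqv:endpoint ?endpoint }
    WHERE {
      { ?exec lsqv:dataset/void:sparqlEndpoint ?endpoint }
      UNION
      { ?exec lsqv:service/sd:endpoint ?endpoint }
    }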

7 The configuration used for Virtuoso was MaxQueryMem = 32G, NumberOfBuffers = 20050000, and MaxDirtyBuffers = 20000000.

8 The selectivity of the triple pattern is the ratio of triples from the dataset that it selects.
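Formally, for a triple pattern t and a dataset D, one can write sel(t, D) = |t(D)| / |D|, where t(D) denotes the set of triples of D matched by t and |D| the total number of triples in D.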

10 With respect to the fifth star, which requires that our LSQ vocabulary be linked to from external vocabularies, we are not aware of such links, though we do know, for example, that Varga et al. [94] incorporate elements of the LSQ vocabulary within their own Analytical Metadata (AM) model, while Singh et al. [86] also use the LSQ vocabulary within their benchmark.

11 We also acquired logs for the British Museum and UniProt endpoints, but decided to omit them due to having few unique queries.

12 The name of the external SPARQL endpoint is spelt biomedels, and thus the IRIs use this spelling in LSQ 2.0.

13 Lorenz curves visualise (in)equality in distributions for a given quantity over a given set of elements: a coordinate (x,y) indicates that ratio x of elements (given in ascending order by their quantity) are associated with ratio y of the total quantity. The solid black line indicates a hypothetical equality where each element is associated with the same quantity. For example, in Fig. 2(a) on the DBpedia curve, the point (0.80,0.29) denotes that the 80% of clients with the fewest executions invoke 29% of the executions (or, equivalently, that the remaining 20% of clients invoke 71% of the executions).

14 We exclude some named graphs created by Virtuoso.

15 Notably, the system is called Learning SPARQL Queries (LSQ).

16 In fact, these logs were gathered directly from OpenLink, though we include this discussion since similar analyses could have been applied to the LSQ logs, and LSQ logs were used in other analyses.

Acknowledgements

We thank the OpenLink Software team for hosting the DBpedia SPARQL endpoint and for making the logs available to us. Hogan was supported by Fondecyt Grant No. 1181896 and by ANID – Millennium Science Initiative Program – Code ICN17_002. Buil-Aranda was supported by Fondecyt Iniciación Grant No. 11170714 and by ANID – Millennium Science Initiative Program – Code ICN17_002. This work was also partially supported by the German Federal Ministry of Education and Research (BMBF) within the EuroStars project E!114681 3DFed under the grant no 01QE2114, project RAKI (01MD19012D) and project KnowGraphs (No 860801).

References

[1] C. Aebeloe, G. Montoya and K. Hose, ColChain: Collaborative linked data networks, in: The Web Conference (WWW), ACM/IW3C2, 2021, pp. 1385–1396. doi:10.1145/3442381.3450037.
[2] U. Akhtar, M.A. Razzaq, U.U. Rehman, M.B. Amin, W.A. Khan, E.-N. Huh and S. Lee, Change-aware scheduling for effectively updating linked open data caches, IEEE Access 6 (2018), 65862–65873. doi:10.1109/ACCESS.2018.2871511.
[3] U. Akhtar, A. Sant’Anna and S. Lee, A dynamic, cost-aware, optimized maintenance policy for interactive exploration of linked data, Applied Sciences 9(22) (2019), 4818. doi:10.3390/app9224818.
[4] R. Al-Harbi, I. Abdelaziz, P. Kalnis, N. Mamoulis, Y. Ebrahim and M. Sahli, Accelerating SPARQL queries by exploiting hash-based locality and adaptive partitioning, VLDB J. 25(3) (2016), 355–380. doi:10.1007/s00778-016-0420-y.
[5] K. Alexander, R. Cyganiak, M. Hausenblas and J. Zhao, Describing Linked Datasets with the VoID Vocabulary, W3C Interest Group Note, 2011, https://www.w3.org/TR/void/.
[6] J.M. Almendros-Jiménez and A. Becerra-Terón, Discovery and diagnosis of wrong SPARQL queries with ontology and constraint reasoning, Expert Systems with Applications 165 (2021), 113772. doi:10.1016/j.eswa.2020.113772.
[7] G. Aluç, O. Hartig, M.T. Özsu and K. Daudjee, Diversified stress testing of RDF data management systems, in: International Semantic Web Conference (ISWC), Springer, 2014, pp. 197–212. doi:10.1007/978-3-319-11964-9_13.
[8] G. Aluç, M.T. Özsu and K. Daudjee, Workload matters: Why RDF databases need a new design, PVLDB 7(10) (2014), 837–840. doi:10.14778/2732951.2732957.
[9] G. Aluç, M.T. Özsu and K. Daudjee, Building self-clustering RDF databases using tunable-LSH, VLDB J. 28(2) (2019), 173–195. doi:10.1007/s00778-018-0530-9.
[10] O. Ambrus, K. Möller and S. Handschuh, Konduit VQB: A visual query builder for SPARQL on the social semantic desktop, in: Visual Interfaces to the Social and Semantic Web (VISSW), ACM Press, 2010.
[11] M. Arenas, G.I. Diaz and E.V. Kostylev, Reverse engineering SPARQL queries, in: World Wide Web Conference (WWW), ACM, 2016, pp. 239–249. doi:10.1145/2872427.2882989.
[12] M. Arias-Gallego, J.D. Fernández, M.A. Martínez-Prieto and P. de la Fuente, An empirical study of real-world SPARQL queries, in: Usage Analysis and the Web of Data (USEWOD), CEUR-WS.org, 2011. doi:10.48550/arXiv.1103.5043.
[13] D. Arroyuelo, A. Hogan, G. Navarro, J.L. Reutter, J. Rojas-Ledesma and A. Soto, Worst-case optimal graph joins in almost no space, in: SIGMOD International Conference on Management of Data, ACM, 2021, pp. 102–114. doi:10.1145/3448016.3457256.
[14] A. Azzam, C. Aebeloe, G. Montoya, I. Keles, A. Polleres and K. Hose, WiseKG: Balanced access to web knowledge graphs, in: The Web Conference (WWW), ACM/IW3C2, 2021, pp. 1422–1434. doi:10.1145/3442381.3449911.
[15] A. Azzam, J.D. Fernández, M. Acosta, M. Beno and A. Polleres, SMART-KG: Hybrid shipping for SPARQL querying on the web, in: The Web Conference (WWW), 2020, pp. 984–994. doi:10.1145/3366423.3380177.
[16] S. Bail, S. Alkiviadous, B. Parsia, D. Workman, M. van Harmelen, R.S. Gonçalves and C. Garilao, FishMark: A linked data application benchmark, in: Joint Workshop on Scalable and High-Performance Semantic Web Systems (SSWS+HPCSW), 2012, pp. 1–15.
[17] F. Benedetti and S. Bergamaschi, A model for visual building SPARQL queries, in: Symposium on Advanced Database Systems (SEBD), 2016, pp. 19–30.
[18] A. Bigerl, F. Conrads, C. Behning, M.A. Sherif, M. Saleem and A.-C. Ngonga Ngomo, Tentris – a tensor-based triple store, in: International Semantic Web Conference (ISWC), Springer, 2020, pp. 56–73. doi:10.1007/978-3-030-62419-4_4.
[19] C. Bizer and A. Schultz, The Berlin SPARQL benchmark, IJSWIS 5(2) (2009), 1–24. doi:10.4018/978-1-60960-593-3.ch004.
[20] A. Bonifati, S. Dumbrava, G. Fletcher, J. Hidders, M. Hofer, W. Martens, F. Murlak, J. Shinavier, S. Staworko and D. Tomaszuk, Threshold Queries in Theory and in the Wild, 2021, CoRR arXiv:2106.15703. doi:10.14778/3510397.3510407.
[21] A. Bonifati, W. Martens and T. Timm, An analytical study of large SPARQL query logs, PVLDB 11(2) (2017), 149–161. doi:10.14778/3149193.3149196.
[22] A. Bonifati, W. Martens and T. Timm, DARQL: Deep analysis of SPARQL queries, in: WWW Posters & Demos, ACM, 2018, pp. 187–190. doi:10.1145/3184558.3186975.
[23] A. Bonifati, W. Martens and T. Timm, Navigating the maze of Wikidata query logs, in: World Wide Web Conference (WWW), ACM, 2019, pp. 127–138. doi:10.1145/3308558.3313472.
[24] A. Bonifati, W. Martens and T. Timm, An analytical study of large SPARQL query logs, VLDB J. 29(2–3) (2020), 655–679. doi:10.1007/s00778-019-00558-9.
[25] S. Campinas, Live SPARQL auto-completion, in: ISWC Posters & Demos, CEUR-WS.org, 2014, pp. 477–480.
[26] S. Cheng and O. Hartig, OPT+: A monotonic alternative to OPTIONAL in SPARQL, Journal of Web Engineering 18(1) (2019), 169–206. doi:10.13052/jwe1540-9589.18135.
[27] A. Clemmer and S. Davies, Smeagol: A “specific-to-general” semantic web query interface paradigm for novices, in: Database and Expert Systems Applications (DEXA), Springer, 2011, pp. 288–302. doi:10.1007/978-3-642-23088-2_21.
[28] O. Curé, H. Naacke, M.A. Baazizi and B. Amann, HAQWA: A hash-based and query workload aware distributed RDF store, in: ISWC Posters & Demos, CEUR-WS.org, 2015.
[29] F. Darari, W. Nutt, G. Pirrò and S. Razniewski, Completeness management for RDF data sources, ACM Transactions on the Web (TWEB) 12(3) (2018), 18. doi:10.1145/3196248.
[30] A. Davoudian, L. Chen, H. Tu and M. Liu, A workload-adaptive streaming partitioner for distributed graph stores, Data Science and Engineering 6(2) (2021), 163–179. doi:10.1007/s41019-021-00156-2.
[31] I. Dellal, S. Jean, A. Hadjali, B. Chardin and M. Baron, On addressing the empty answer problem in uncertain knowledge bases, in: International Conference on Database and Expert Systems Applications (DEXA), Springer, 2017, pp. 120–129. doi:10.1007/978-3-319-64468-4_9.
[32] A.A. Desouki, F. Conrads, M. Röder and A.-C.N. Ngomo, SYNTHG: Mimicking RDF graphs using tensor factorization, in: International Conference on Semantic Computing (ICSC), 2021, pp. 76–79. doi:10.1109/ICSC50631.2021.00017.
[33] M. Dumontier, A. Callahan, J. Cruz-Toledo, P. Ansell, V. Emonet, F. Belleau and A. Droit, Bio2RDF release 3: A larger, more connected network of linked data for the life sciences, in: ISWC Posters & Demos, CEUR-WS.org, 2014, pp. 401–404.
[34] P. Fafalios and Y. Tzitzikas, How many and what types of SPARQL queries can be answered through zero-knowledge link traversal?, in: ACM/SIGAPP Symposium on Applied Computing (SAC), ACM, 2019, pp. 2267–2274. doi:10.1145/3297280.3297505.
[35] J.D. Fernández, J. Umbrich, A. Polleres and M. Knuth, Evaluating query and storage strategies for RDF archives, Semantic Web 10(2) (2019), 247–291. doi:10.3233/SW-180309.
[36] D. Figueira, A. Godbole, S.N. Krishna, W. Martens, M. Niewerth and T. Trautner, Containment of simple conjunctive regular path queries, in: International Conference on Principles of Knowledge Representation and Reasoning (KR), 2020, pp. 371–380. doi:10.24963/kr.2020/38.
[37] G. Fokou, S. Jean, A. Hadjali and M. Baron, Handling failing RDF queries: From diagnosis to relaxation, Knowl. Inf. Syst. 50(1) (2017), 167–195. doi:10.1007/s10115-016-0941-0.
[38] R. Frosini, A. Calì, A. Poulovassilis and P.T. Wood, Flexible query processing for SPARQL, Semantic Web 8(4) (2017), 533–563. doi:10.3233/SW-150206.
[39] K. Georgala, M.A. Sherif and A.-C.N. Ngomo, An efficient approach for the generation of Allen relations, in: European Conference on Artificial Intelligence (ECAI), IOS Press, 2016, pp. 948–956. doi:10.3233/978-1-61499-672-9-948.
[40] Y. Guo, Z. Pan and J. Heflin, LUBM: A benchmark for OWL knowledge base systems, J. Web Semant. 3(2–3) (2005), 158–182. doi:10.1016/j.websem.2005.06.005.
[41] X. Han, Z. Feng, X. Zhang, X. Wang, G. Rao and S. Jiang, On the statistical analysis of practical SPARQL queries, in: International Workshop on Web and Databases (WebDB), ACM, 2016, p. 2. doi:10.1145/2932194.2932196.
[42] D. Hernández, A. Hogan, C. Riveros, C. Rojas and E. Zerega, Querying Wikidata: Comparing SPARQL, relational and graph databases, in: International Semantic Web Conference (ISWC), Springer, 2016, pp. 88–103. doi:10.1007/978-3-319-46547-0_10.
[43] A. Hogan, M. Mellotte, G. Powell and D. Stampouli, Towards fuzzy query-relaxation for RDF, in: European Semantic Web Conference (ESWC), Springer, 2012, pp. 687–702. doi:10.1007/978-3-642-30284-8_53.
[44] F. Hogenboom, V. Milea, F. Frasincar and U. Kaymak, RDF-GL: A SPARQL-based graphical query language for RDF, in: Emergent Web Intelligence: Advanced Information Retrieval, 2010, pp. 87–116. doi:10.1007/978-1-84996-074-8_4.
[45] K. Hose and R. Schenkel, WARP: Workload-aware replication and partitioning for RDF, in: Data Engineering Meets the Semantic Web (DESWEB@ICDE), IEEE Computer Society, 2013, pp. 1–6. doi:10.1109/ICDEW.2013.6547414.
[46] K. Janowicz, P. Hitzler, B. Adams, D. Kolas and C. Vardeman, Five stars of linked data vocabulary use, Semantic Web 5(3) (2014), 173–176. doi:10.3233/SW-140135.
[47] X. Jian, Y. Wang, X. Lei, L. Zheng and L. Chen, SPARQL rewriting: Towards desired results, in: SIGMOD International Conference on Management of Data, 2020, pp. 1979–1993. doi:10.1145/3318464.3389695.
[48] H. Knublauch, J.A. Hendler and K. Idehen, SPIN – Overview and Motivation, W3C Member Submission, 22 February 2011, http://www.w3.org/Submission/spin-overview/.
[49] M. Knuth, O. Hartig and H. Sack, Scheduling refresh queries for keeping results from a SPARQL endpoint up-to-date, in: On the Move to Meaningful Internet Systems (OTM), Springer, 2016, pp. 780–791. doi:10.1007/978-3-319-48472-3_49.
[50] T. Lampo, M. Vidal, J. Danilow and E. Ruckhaus, To cache or not to cache: The effects of warming cache in complex SPARQL queries, in: On the Move to Meaningful Internet Systems (OTM), Springer, 2011, pp. 716–733. doi:10.1007/978-3-642-25106-1_22.
[51] T. Lebo, S. Sahoo, D. McGuinness, K. Belhajjame, J. Cheney, D. Corsar, D. Garijo, S. Soiland-Reyes and S. Zednik, PROV-O: The PROV Ontology, W3C Recommendation, 2013, https://www.w3.org/TR/prov-o/.
[52] J. Lehmann and L. Bühmann, AutoSPARQL: Let users query your knowledge base, in: European Semantic Web Conference (ESWC), Springer, 2011, pp. 63–79. doi:10.1007/978-3-642-21034-1_5.
[53] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P.N. Mendes, S. Hellmann, M. Morsey, P. van Kleef, S. Auer and C. Bizer, DBpedia – a large-scale, multilingual knowledge base extracted from Wikipedia, Semantic Web 6(2) (2015), 167–195. doi:10.3233/SW-140134.
[54] A.M. Loustaunau and A. Hogan, Predicting SPARQL query dynamics, in: K-CAP ’21: Knowledge Capture Conference, Virtual Event, USA, December 2–3, 2021, A.L. Gentile and R. Gonçalves, eds, ACM, 2021, pp. 161–168. doi:10.1145/3460210.3493565.
[55] M. Luczak-Roesch, S. Aljaloud, B. Berendt and L. Hollink, USEWOD – Usage Analysis and the Web of Data, 2016. doi:10.5258/SOTON/385344.
[56] F. Maali, I.A. Hassan and S. Decker, Scheduling for SPARQL endpoints, in: Scalable Semantic Web Knowledge Base Systems (SWSS), CEUR-WS.org, 2014, pp. 19–28.
[57] S. Malyshev, M. Krötzsch, L. González, J. Gonsior and A. Bielefeldt, Getting the most out of Wikidata: Semantic technology usage in Wikipedia’s knowledge graph, in: International Semantic Web Conference (ISWC), Springer, 2018, pp. 376–394. doi:10.1007/978-3-030-00668-6_23.
[58] W. Martens and T. Trautner, Evaluation and enumeration problems for regular path queries, in: International Conference on Database Theory (ICDT), Schloss Dagstuhl – Leibniz-Zentrum fuer Informatik, 2018, pp. 19:1–19:21. doi:10.48550/arXiv.1710.02317.
[59] W. Martens and T. Trautner, Bridging theory and practice with query log analysis, SIGMOD Record 48(1) (2019), 6–13. doi:10.1145/3371316.3371319.
[60] M. Martin, J. Unbehauen and S. Auer, Improving the performance of semantic web applications with SPARQL query caching, in: Extended Semantic Web Conference, Springer, 2010, pp. 304–318. doi:10.1007/978-3-642-13489-0_21.
[61] K. Möller, T. Heath, S. Handschuh and J. Domingue, Recipes for semantic web dog food - the ESWC and ISWC metadata projects, in: International Semantic Web Conference (ISWC), Springer, 2007, pp. 802–815. doi:10.1007/978-3-540-76298-0_58.
[62] M. Morsey, J. Lehmann, S. Auer and A.-C. Ngonga Ngomo, DBpedia SPARQL benchmark – performance assessment with real queries on real data, in: International Semantic Web Conference (ISWC), Springer, 2011. doi:10.1007/978-3-642-25073-6_29.
[63] T. Neumann and G. Weikum, RDF-3X: A RISC-style engine for RDF, PVLDB 1(1) (2008), 647–659. doi:10.14778/1453856.1453927.
[64] A.N. Ngomo, L. Bühmann, C. Unger, J. Lehmann and D. Gerber, Sorry, I don’t speak SPARQL: Translating SPARQL queries into natural language, in: World Wide Web Conference (WWW), D. Schwabe, V.A.F. Almeida, H. Glaser, R. Baeza-Yates and S.B. Moon, eds, ACM, 2013, pp. 977–988. doi:10.1145/2488388.2488473.
[65] A. Pacaci, A. Bonifati and M.T. Özsu, Regular path query evaluation on streaming graphs, in: SIGMOD International Conference on Management of Data, ACM, 2020, pp. 1415–1430. doi:10.1145/3318464.3389733.
[66] N. Papailiou, D. Tsoumakos, P. Karras and N. Koziris, Graph-aware, workload-adaptive SPARQL query caching, in: SIGMOD International Conference on Management of Data, ACM, 2015, pp. 1777–1792. doi:10.1145/2723372.2723714.
[67] J. Pérez, M. Arenas and C. Gutiérrez, Semantics and complexity of SPARQL, ACM Trans. Database Syst. 34(3) (2009), 16:1–16:45. doi:10.1007/11926078_3.
[68] F. Picalausa and S. Vansummeren, What are real SPARQL queries like?, in: Semantic Web Information Management (SWIM), ACM, 2011, p. 7. doi:10.1145/1999299.1999306.
[69] J. Potoniec, Learning SPARQL queries from expected results, Computing and Informatics 38(3) (2019), 679–700. doi:10.31577/cai_2019_3_679.
[70] E. Prud’hommeaux and A. Seaborne, SPARQL 1.0 Query Language, W3C Recommendation, 15 January 2008, https://www.w3.org/TR/rdf-sparql-query/.
[71] M. Rico, N. Mihindukulasooriya and A. Gómez-Pérez, Data-driven RDF property semantic-equivalence detection using NLP techniques, in: International Conference on Knowledge Engineering and Knowledge Management (EKAW), Springer, 2016, pp. 797–804. doi:10.1007/978-3-319-49004-5_51.
[72] L. Rietveld and R. Hoekstra, Man vs. machine: Differences in SPARQL queries, in: Usage Analysis and the Web of Data (USEWOD), CEUR-WS.org, 2014, https://hdl.handle.net/11245/1.461475.
[73] L. Rietveld and R. Hoekstra, YASGUI: Feeling the pulse of linked data, in: Knowledge Engineering and Knowledge Management (EKAW), Springer, 2014, pp. 441–452. doi:10.1007/978-3-319-13704-9_34.
[74] M. Röder, P.T.S. Nguyen, F. Conrads, A.A.M. da Silva and A.-C.N. Ngomo, Lemming – example-based mimicking of knowledge graphs, in: International Conference on Semantic Computing (ICSC), 2021, pp. 62–69. doi:10.1109/ICSC50631.2021.00015.
[75] T. Safavi, C. Belth, L. Faber, D. Mottin, E. Müller and D. Koutra, Personalized knowledge graph summarization: From the cloud to your pocket, in: International Conference on Data Mining (ICDM), IEEE, 2019, pp. 528–537. doi:10.1109/ICDM.2019.00063.
[76] J. Salas and A. Hogan, Canonicalisation of monotone SPARQL queries, in: International Semantic Web Conference (ISWC), Springer, 2018, pp. 600–616. doi:10.1007/978-3-030-00671-6_35.
[77] M. Saleem, M.I. Ali, A. Hogan, Q. Mehmood and A.N. Ngomo, LSQ: The linked SPARQL queries dataset, in: International Semantic Web Conference (ISWC), Springer, 2015, pp. 261–269. doi:10.1007/978-3-319-25010-6_15.
[78] M. Saleem, A. Hasnain and A.-C.N. Ngomo, LargeRDFBench: A billion triples benchmark for SPARQL endpoint federation, Journal of Web Semantics 48 (2018), 85–125. doi:10.1016/j.websem.2017.12.005.
[79] M. Saleem, Q. Mehmood and A.N. Ngomo, FEASIBLE: A feature-based SPARQL benchmark generation framework, in: International Semantic Web Conference (ISWC), Springer, 2015, pp. 52–69. doi:10.1007/978-3-319-25007-6_4.
[80] M. Saleem, Q. Mehmood, C. Stadler, J. Lehmann and A.N. Ngomo, Generating SPARQL query containment benchmarks using the SQCFramework, in: ISWC Posters & Demos, CEUR-WS.org, 2018, http://ceur-ws.org/Vol-2180/paper-56.pdf.
[81] M. Saleem and A.N. Ngomo, HiBISCuS: Hypergraph-based source selection for SPARQL endpoint federation, in: European Semantic Web Conference (ESWC), Springer, 2014, pp. 176–191. doi:10.1007/978-3-319-07443-6_13.
[82] M. Saleem, C. Stadler, Q. Mehmood, J. Lehmann and A.-C.N. Ngomo, SQCFramework: SPARQL query containment benchmark generation framework, in: Proceedings of the Knowledge Capture Conference, 2017, pp. 1–8. doi:10.1145/3148011.3148017.
[83] M. Saleem, G. Szárnyas, F. Conrads, S.A.C. Bukhari, Q. Mehmood and A.N. Ngomo, How representative is a SPARQL benchmark? An analysis of RDF triplestore benchmarks, in: World Wide Web Conference (WWW), ACM, 2019, pp. 1623–1633. doi:10.1145/3308558.3313556.
[84] M. Schmidt, O. Görlitz, P. Haase, G. Ladwig, A. Schwarte and T. Tran, FedBench: A benchmark suite for federated semantic data query processing, in: International Semantic Web Conference (ISWC), Springer, 2011, pp. 585–600. doi:10.1007/978-3-642-25073-6_37.
[85] J. Schoenfisch and H. Stuckenschmidt, Analyzing real-world SPARQL queries and ontology-based data access in the context of probabilistic data, Int. J. Approx. Reasoning 90 (2017), 374–388. doi:10.1016/j.ijar.2017.08.005.
[86] K. Singh, M. Saleem, A. Nadgeri, F. Conrads, J.Z. Pan, A.-C.N. Ngomo and J. Lehmann, QaldGen: Towards microbenchmarking of question answering systems over knowledge graphs, in: International Semantic Web Conference (ISWC), Springer, 2019, pp. 277–292. doi:10.1007/978-3-030-30796-7_18.
[87] Z. Song, Z. Feng, X. Zhang, X. Wang and G. Rao, Efficient approximation of well-designed SPARQL queries, in: International Conference on Web-Age Information Management (WAIM), Springer, 2016, pp. 315–327. doi:10.1007/978-3-319-47121-1_27.
[88] C. Stadler, J. Lehmann, K. Höffner and S. Auer, LinkedGeoData: A core for a web of spatial open data, Semantic Web 3(4) (2012), 333–354. doi:10.3233/SW-2011-0052.
[89] T. Stegemann and J. Ziegler, Investigating learnability, user performance, and preferences of the path query language SemwidgQL compared to SPARQL, in: International Semantic Web Conference (ISWC), Springer, 2017, pp. 611–627. doi:10.1007/978-3-319-68288-4_36.
[90] H. Thakkar, Y. Keswani, M. Dubey, J. Lehmann and S. Auer, Trying not to die benchmarking: Orchestrating RDF and graph data management solution benchmarks using LITMUS, in: International Conference on Semantic Systems (SEMANTiCS), ACM, 2017, pp. 120–127. doi:10.1145/3132218.3132232.
[91] V. Thost and J. Dolby, QED: Out-of-the-box datasets for SPARQL query evaluation, in: European Semantic Web Conference (ESWC), Springer, 2019, pp. 491–506. doi:10.1007/978-3-030-21348-0_32.
[92] P. Vandenbussche, G. Atemezing, M. Poveda-Villalón and B. Vatant, Linked open vocabularies (LOV): A gateway to reusable semantic vocabularies on the web, Semantic Web 8(3) (2017), 437–452. doi:10.3233/SW-160213.
[93] P. Vandenbussche, J. Umbrich, L. Matteis, A. Hogan and C.B. Aranda, SPARQLES: Monitoring public SPARQL endpoints, Semantic Web 8(6) (2017), 1049–1065. doi:10.3233/SW-170254.
[94] J. Varga, O. Romero, T.B. Pedersen and C. Thomsen, Analytical metadata modeling for next generation BI systems, Journal of Systems and Software 144 (2018), 240–254. doi:10.1016/j.jss.2018.06.039.
[95] H. Vargas, C.B. Aranda, A. Hogan and C. López, RDF Explorer: A visual SPARQL query builder, in: International Semantic Web Conference (ISWC), Springer, 2019, pp. 647–663. doi:10.1007/978-3-030-30793-6_37.
[96] R.D. Virgilio, A. Maccioni and R. Torlone, Approximate querying of RDF graphs via path alignment, Distributed and Parallel Databases 33(4) (2015), 555–581. doi:10.1007/s10619-014-7142-1.
[97] A. Viswanathan, G. de Mel and J.A. Hendler, Feature-based reformulation of entities in triple pattern queries, 2018, CoRR arXiv:1807.01801, http://arxiv.org/abs/1807.01801.
[98] M. Wang, K. Chen, G. Xiao, X. Zhang, H. Chen and S. Wang, Explaining similarity for SPARQL queries, World Wide Web (2021), 1–23. doi:10.1007/s11280-021-00886-3.
[99] M. Wang, J. Liu, B. Wei, S. Yao, H. Zeng and L. Shi, Answering why-not questions on SPARQL queries, Knowledge and Information Systems (2019), 1–40. doi:10.1007/s10115-018-1155-4.
[100] G.T. Williams and J. Weaver, Enabling fine-grained HTTP caching of SPARQL query results, in: International Semantic Web Conference (ISWC), Springer, 2011, pp. 762–777. doi:10.1007/978-3-642-25073-6_48.
[101] H. Wu, T. Fujiwara, Y. Yamamoto, J.T. Bolleman and A. Yamaguchi, BioBenchmark Toyama 2012: An evaluation of the performance of triple stores on biological data, J. Biomedical Semantics 5 (2014), 32. doi:10.1186/2041-1480-5-32.
[102] X. Zhang, M. Wang, M. Saleem, A.-C.N. Ngomo, G. Qi and H. Wang, Revealing secrets in SPARQL session level, in: International Semantic Web Conference (ISWC), Springer, 2020, pp. 672–690. doi:10.1007/978-3-030-62419-4_38.