Merge pull request #26 from mjovanovik/develop/2
Merge of the major new version from develop/2
mjovanovik authored Feb 5, 2021
2 parents 2365f89 + 26eca8a commit 222e34b
Showing 623 changed files with 11,959 additions and 254 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -11,3 +11,6 @@ dependency-reduced-pom.xml
.classpath
.project
.settings/
.idea/
GeoSPARQLBenchmark.iml
src/main/resources/.DS_Store
17 changes: 14 additions & 3 deletions README.md
@@ -1,5 +1,16 @@
# GeoSPARQL Benchmark
# GeoSPARQL Compliance Benchmark

This is the GeoSPARQL Benchmark, integrated into the [HOBBIT Platform](https://github.com/hobbit-project/platform).
This is the GeoSPARQL Compliance Benchmark, integrated into the [HOBBIT Platform](https://github.com/hobbit-project/platform).

The GeoSPARQL Benchmark (GSB) aims to evaluate GeoSPARQL compliance of RDF storage systems. The current version of the benchmark is based on the [GeoSPARQL subtest](https://github.com/BorderCloud/TFT-tests/tree/master/geosparql) of [Tester-for-Triplestores](https://github.com/BorderCloud/TFT) and the [SPARQLScore](http://sparqlscore.com/) standard conformance test suite.
The GeoSPARQL Compliance Benchmark aims to evaluate the GeoSPARQL compliance of RDF storage systems. The benchmark uses
206 SPARQL queries to test the extent to which the benchmarked system supports the 30 requirements defined in the [GeoSPARQL standard](https://www.ogc.org/standards/geosparql).

As a result, the benchmark provides two metrics:
* **Correct answers**: The number of correct answers out of all GeoSPARQL queries, i.e. tests.
* **GeoSPARQL compliance percentage**: The percentage of compliance with the requirements of the GeoSPARQL standard.
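As a hedged sketch of how these two metrics could be computed (the actual scoring lives in GSBEvaluationModule, whose diff is too large to render on this page; the per-requirement averaging used for the compliance percentage is an assumption, not confirmed by the commit):

```java
// Illustrative sketch of the two benchmark metrics; names and the
// compliance formula are assumptions, not the actual GSBEvaluationModule code.
import java.util.List;

public class GsbMetricsSketch {
    // Correct answers: how many of the test queries returned the expected result.
    static long correctAnswers(List<Boolean> testPassed) {
        return testPassed.stream().filter(p -> p).count();
    }

    // Compliance percentage (assumed formula): average per-requirement pass
    // rate over the GeoSPARQL requirements, expressed as a percentage.
    static double compliancePercentage(List<List<Boolean>> testsPerRequirement) {
        double sum = 0.0;
        for (List<Boolean> tests : testsPerRequirement) {
            long passed = tests.stream().filter(p -> p).count();
            sum += (double) passed / tests.size();
        }
        return 100.0 * sum / testsPerRequirement.size();
    }
}
```

With this assumed weighting, a system can pass many easy queries yet score a low compliance percentage if whole requirements remain unsupported.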

You can find a set of results from the [latest experiments on the hosted instance of the HOBBIT Platform](https://master.project-hobbit.eu/experiments/1612476122572,1612477003063,1612476116049,1612477500164,1612477015896,1612477025778,1612477047489,1612477849872,1612478626265,1612479271411)
(log in as Guest).

If you want your RDF triplestore tested, you can [add it as a system to the HOBBIT Platform](https://hobbit-project.github.io/system_integration.html),
and then [run an experiment](https://hobbit-project.github.io/benchmarking.html) using the [hosted instance of the HOBBIT Platform](https://hobbit-project.github.io/master.html).
1,257 changes: 1,228 additions & 29 deletions hobbit-settings/benchmark.ttl

Large diffs are not rendered by default.

20 changes: 20 additions & 0 deletions pom.xml
@@ -29,6 +29,26 @@
</properties>

<dependencies>
<!-- https://mvnrepository.com/artifact/org.apache.xml.security/xml-security -->
<!-- https://mvnrepository.com/artifact/org.apache.santuario/xmlsec -->
<dependency>
<groupId>org.apache.santuario</groupId>
<artifactId>xmlsec</artifactId>
<version>2.2.1</version>
</dependency>


<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.11.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.json/json -->
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
<version>20201115</version>
</dependency>
<dependency>
<groupId>org.hobbit</groupId>
<artifactId>core</artifactId>
Binary file added src/main/.DS_Store
Binary file not shown.
209 changes: 206 additions & 3 deletions src/main/java/org/hobbit/geosparql/GSBBenchmarkController.java

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions src/main/java/org/hobbit/geosparql/GSBDataGenerator.java
@@ -57,8 +57,8 @@ public void close() throws IOException {
}

private void getFileAndSendData() {
String datasetFile = "gsb_dataset/example.rdf";
String datasetURI = "http://bordercloud.github.io/TFT-tests/geosparql/illustration/example.rdf";
String datasetFile = "gsb_dataset/dataset.rdf";
String datasetURI = "http://openlinksw.com/geosparql/dataset.rdf";

try {
LOGGER.info("Getting file " + datasetFile);
@@ -91,7 +91,7 @@ public void receiveCommand(byte command, byte[] data) {
for (int i=0; i < GSBConstants.GSB_QUERIES.length; i++) {
InputStream inputStream = new FileInputStream("gsb_queries/" + GSBConstants.GSB_QUERIES[i]);
String fileContent = IOUtils.toString(inputStream);
fileContent = "#Q0" + (i+1) + "\n" + fileContent; // add a comment line at the beginning of the query, to denote the query number (Q01, Q02, ...)
fileContent = "#Q-" + (i+1) + "\n" + fileContent; // add a comment line at the beginning of the query, to denote the query number (#Q-1, #Q-2, ...)
byte[] bytesArray = null;
bytesArray = RabbitMQUtils.writeString(fileContent);
sendDataToTaskGenerator(bytesArray);
1,579 changes: 1,548 additions & 31 deletions src/main/java/org/hobbit/geosparql/GSBEvaluationModule.java

Large diffs are not rendered by default.

22 changes: 19 additions & 3 deletions src/main/java/org/hobbit/geosparql/GSBSeqTaskGenerator.java
@@ -1,6 +1,7 @@
package org.hobbit.geosparql;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
@@ -50,6 +51,21 @@ private void internalInit() {
ResultSetFormatter.outputAsJSON(outputStream, rsf.fromXML(inputStream));
answers[i] = outputStream.toString();
inputStream.close();
for (int k=1; ; k++) {
String alternativeAnswerFileName = GSBConstants.GSB_ANSWERS[i].replace(".srx","") + "-alternative-" + k + ".srx";
LOGGER.info("Looking for an alternative file called: " + alternativeAnswerFileName);
File alternativeAnswerFile = new File("gsb_answers/" + alternativeAnswerFileName);
if (!alternativeAnswerFile.exists()) break;
else {
LOGGER.info("Alternative file found: " + alternativeAnswerFileName);
InputStream alternativeAnswerInputStream = new FileInputStream(alternativeAnswerFile);
ByteArrayOutputStream alternativeAnswerOutputStream = new ByteArrayOutputStream();
ResultSetFormatter.outputAsJSON(alternativeAnswerOutputStream, rsf.fromXML(alternativeAnswerInputStream));
answers[i] = answers[i] + "======" + alternativeAnswerOutputStream.toString(); // add the detected alternative expected answer with a corresponding delimiter
alternativeAnswerInputStream.close();
LOGGER.info("answers[" + i + "]: " + answers[i]);
}
}
}
} catch (IOException e) {
// TODO Auto-generated catch block
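The `======` delimiter added above implies that the evaluation side splits the expected-answer string and accepts the benchmarked system's answer if it matches any of the variants. That logic sits in GSBEvaluationModule, whose diff is not rendered here, so the following is an assumed sketch of the matching, not the project's actual code:

```java
// Assumed sketch: check an actual answer against an expected-answer string
// that may carry alternatives joined with "======" (as produced above).
// The real matching logic is in GSBEvaluationModule, not shown in this diff.
public class AlternativeAnswerCheck {
    static final String DELIMITER = "======";

    static boolean matchesAnyExpected(String actual, String expectedWithAlternatives) {
        for (String expected : expectedWithAlternatives.split(DELIMITER)) {
            if (actual.trim().equals(expected.trim())) {
                return true;
            }
        }
        return false;
    }
}
```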
@@ -79,10 +95,10 @@ protected void generateTask(byte[] data) throws Exception {
// Locate the corresponding query answer
// and send it to the Evaluation Store for evaluation
String [] parts = dataString.split("\n");
String answerIndexString = parts[0].trim().substring(2);
int answerIndex = Integer.parseInt(answerIndexString) - 1; // The first line is a comment denoting the query (#Q01, #Q02, ...)
String answerIndexString = parts[0].trim().substring(3);
int answerIndex = Integer.parseInt(answerIndexString) - 1; // The first line is a comment denoting the query (#Q-1, #Q-2, ...)
String ans = answers[answerIndex];
data = RabbitMQUtils.writeString("#Q0" + (answerIndex+1) + "\n\n" + ans);
data = RabbitMQUtils.writeString("#Q-" + (answerIndex+1) + "\n\n" + ans);
sendTaskToEvalStorage(taskIdString, timestamp, data);
}
}
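The `#Q-N` marker prepended by GSBDataGenerator and parsed back here round-trips as in the following minimal sketch, mirroring the `"#Q-" + (i+1)` prefixing and the `substring(3)` parse in the diff (method names are illustrative, not the project's API):

```java
// Minimal sketch of the "#Q-N" round trip between GSBDataGenerator
// (which prepends the marker) and GSBSeqTaskGenerator (which parses it).
public class QueryNumbering {
    // Prepend the marker, as the data generator does: "#Q-" + (i+1) + "\n" + query.
    static String tag(int queryNumber, String query) {
        return "#Q-" + queryNumber + "\n" + query;
    }

    // Recover the zero-based answer index: take the first line, drop the
    // three-character "#Q-" prefix, parse the number, and subtract one.
    static int answerIndex(String tagged) {
        String firstLine = tagged.split("\n")[0].trim();
        return Integer.parseInt(firstLine.substring(3)) - 1;
    }
}
```

Note how the switch from `#Q01`-style markers to `#Q-1` is what forced the parse offset to change from `substring(2)` to `substring(3)` in this commit.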
@@ -127,6 +127,11 @@ public void receiveGeneratedTask(String taskId, byte[] data) {
}
}
else {
if (queryString.contains("INFERENCE")) {
// Activate inference in Virtuoso
queryString = "DEFINE input:inference <myset>\n" + queryString;
LOGGER.info("Added an INFERENCE line to a query. This is the new query text:\n" + queryString);
}
QueryExecution qe = queryExecFactory.createQueryExecution(queryString);
ResultSet results = null;
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
@@ -201,7 +206,7 @@ public void receiveCommand(byte command, byte[] data) {
try {
allDataReceivedMutex.acquire();
} catch (InterruptedException e) {
LOGGER.error("Exception while waitting for all data for bulk load " + loadingNumber + " to be recieved.", e);
LOGGER.error("Exception while waiting for all data for bulk load " + loadingNumber + " to be received.", e);
}
LOGGER.info("All data for bulk load " + loadingNumber + " received. Proceed to the loading...");

