SPARQL2Flink: Evaluation of SPARQL Queries on Apache Flink

Oscar Ceballos, Carlos Alberto Ramírez Restrepo, María Constanza Pabón, Andres M. Castillo, Oscar Corcho

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

Existing SPARQL query engines and triple stores are continuously being improved to handle increasingly massive datasets. Several approaches in this context propose storing and querying RDF data in a distributed fashion, mainly using the MapReduce programming model and Hadoop-based ecosystems. New Big Data technologies have also emerged (e.g., Apache Spark, Apache Flink); they use distributed in-memory processing and promise higher data-processing performance. In this paper, we present a formal interpretation of some of the PACT transformations implemented in the Apache Flink DataSet API. We use this formalization to define a mapping that translates a SPARQL query into a Flink program. The mapping was implemented in a prototype used to assess the correctness and performance of the solution. The source code of the project is available on GitHub under the MIT license.
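To illustrate the kind of mapping the abstract describes, the following is a minimal sketch (plain Python, not the SPARQL2Flink code and not the Flink DataSet API) of the filter-then-join semantics underlying such a translation: each triple pattern of a basic graph pattern becomes a filter over the triple set, and shared variables become join keys, mirroring how patterns map onto dataflow transformations. The pattern, data, and helper names are illustrative assumptions.

```python
# Hypothetical illustration of evaluating the basic graph pattern
#   { ?s knows ?o . ?o name ?n }
# with filter-then-join semantics: one filter per triple pattern,
# then a join on the shared variable ?o.

triples = [
    ("alice", "knows", "bob"),
    ("bob",   "name",  "Bob"),
    ("alice", "name",  "Alice"),
]

def match(pattern, dataset):
    """Filter step: return variable bindings for one triple pattern."""
    rows = []
    for s, p, o in dataset:
        binding = {}
        ok = True
        for term, val in zip(pattern, (s, p, o)):
            if term.startswith("?"):      # variable: bind it
                binding[term] = val
            elif term != val:             # constant: must match
                ok = False
                break
        if ok:
            rows.append(binding)
    return rows

def join(left, right):
    """Join step: merge bindings that agree on shared variables."""
    out = []
    for l in left:
        for r in right:
            if all(l[k] == r[k] for k in l.keys() & r.keys()):
                out.append({**l, **r})
    return out

knows = match(("?s", "knows", "?o"), triples)  # filter for pattern 1
names = match(("?o", "name",  "?n"), triples)  # filter for pattern 2
result = join(knows, names)                    # join on shared ?o
```

In a DataSet-style program, the two `match` calls would correspond to filter/map transformations over the triple collection and `join` to a keyed join transformation; the paper's contribution is formalizing such transformations and the translation itself.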

Original language: English
Article number: 7033
Journal: Applied Sciences (Switzerland)
Volume: 11
Issue number: 15
DOIs
State: Published - 01 Aug 2021

Keywords

  • Apache Flink
  • Massive static RDF data
  • PACT programming model
  • SPARQL
