SPIDER: A System for Scalable, Parallel / Distributed Evaluation of Large-Scale RDF Data

This project aims to process large-scale RDF data. We developed a scalable RDF processing method based on MapReduce, a distributed processing framework, together with storage techniques for large RDF data sets. The project was demonstrated at the 18th ACM Conference on Information and Knowledge Management (CIKM) in November 2009.
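
As a rough illustration of the MapReduce style of RDF query evaluation (not the project's actual operators, which are described in the CIKM demonstration), the sketch below is a minimal, hypothetical Hadoop job: it scans N-Triples-style lines and emits the subjects that match a single fixed triple pattern. All class names and the hard-coded pattern are illustrative assumptions.

    // Illustrative sketch only: a minimal map-only Hadoop job that scans
    // triples ("subject predicate object .", one per line) and emits the
    // subjects matching a fixed pattern (?s rdf:type foaf:Person).
    // SPIDER's real operators are not reproduced here; names are invented.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class TriplePatternScan {
      public static class PatternMapper
          extends Mapper<Object, Text, Text, NullWritable> {
        @Override
        protected void map(Object key, Text line, Context ctx)
            throws IOException, InterruptedException {
          // Each input line is "subject predicate object ." (N-Triples style).
          String[] spo = line.toString().split("\\s+");
          if (spo.length < 3) return;
          // Match the fixed pattern: ?s rdf:type foaf:Person
          if (spo[1].equals("<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>")
              && spo[2].equals("<http://xmlns.com/foaf/0.1/Person>")) {
            ctx.write(new Text(spo[0]), NullWritable.get()); // binding for ?s
          }
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "triple-pattern-scan");
        job.setJarByClass(TriplePatternScan.class);
        job.setMapperClass(PatternMapper.class);
        job.setNumReduceTasks(0); // map-only: a single selection needs no join
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

SPARQL queries with several triple patterns would then be evaluated by joining such intermediate bindings in further MapReduce rounds, with reducers grouping bindings on their shared variables; how SPIDER schedules those rounds is described in the demonstration paper.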

Features:

  • Extensible storage for web-scale RDF data (see the storage sketch below)
  • Scalable RDF query processing using MapReduce
  • Support for importing large-scale RDF data
  • Support for a subset of SPARQL features
  • Based on HBase and Hadoop
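
SPIDER's actual HBase schema is not reproduced on this page. As one plausible sketch of triple storage over HBase, the hypothetical snippet below packs each triple into the row key of an "SPO" index table, so that a pattern with a bound subject becomes a row-key prefix scan. The table, column-family, and class names are invented for illustration.

    // Hypothetical sketch of one common HBase layout for RDF triples: the
    // whole triple is packed into the row key of an "SPO" index table, so a
    // pattern with a bound subject becomes a cheap prefix scan. This is not
    // necessarily SPIDER's schema; all names below are invented.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RdfSpoIndex {
      private static final byte[] CF = Bytes.toBytes("t"); // one column family

      // Row key: subject \0 predicate \0 object (the cell value stays empty).
      static byte[] rowKey(String s, String p, String o) {
        return Bytes.toBytes(s + '\0' + p + '\0' + o);
      }

      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table spo = conn.getTable(TableName.valueOf("rdf_spo"))) {

          // Insert one triple.
          Put put = new Put(rowKey("ex:alice", "foaf:knows", "ex:bob"));
          put.addColumn(CF, Bytes.toBytes("v"), Bytes.toBytes(""));
          spo.put(put);

          // Evaluate the pattern (ex:alice ?p ?o) as a row-key prefix scan.
          Scan scan = new Scan()
              .setRowPrefixFilter(Bytes.toBytes("ex:alice" + '\0'));
          try (ResultScanner rs = spo.getScanner(scan)) {
            for (Result r : rs) {
              String[] parts = Bytes.toString(r.getRow()).split("\0");
              System.out.printf("?p=%s ?o=%s%n", parts[1], parts[2]);
            }
          }
        }
      }
    }

Systems built this way often keep additional permutation tables (e.g. POS and OSP) so that every triple-pattern shape maps to some prefix scan; that is a common design choice for HBase-backed RDF stores, not a documented detail of SPIDER.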

Members:

  • Hyunsik Choi
  • Jihoon Son
  • YongHyun Cho
  • Min Kyoung Sung
  • Yon Dohn Chung

Publications:

  • Hyunsik Choi, Jihoon Son, YongHyun Cho, Min Kyoung Sung, and Yon Dohn Chung. "SPIDER: A System for Scalable, Parallel / Distributed Evaluation of Large-Scale RDF Data." In Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM), November 2009. (Demonstration)
