Media, Publishing & Advertising I

Session 1.2

Wednesday, September 16, 2015 - 11:00 to 12:00
LC Ceremonial hall 2
Chair: Tassilo Pellegrini


Head, XML Tools and Standards


A Content Model for Elsevier Optimized Learning Suite

The content model for the Elsevier Optimized Learning Suite is an extensible framework for the authoring, storage and delivery of content assets used in highly interactive, personalized learning experiences. The content standards are based on W3C XML, HTML5, RDF and RDFa, together with JSON (JavaScript Object Notation) and its extension JSON-LD (JSON for Linked Data). An API supports the full workflow of content structuring, authoring, learning orchestration and deployment of learning objects into a product framework.
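As a purely illustrative sketch of the JSON-LD format the abstract mentions (the actual Elsevier content model is not described here), a learning-object record might look like the following; the vocabulary mapping and all property names are hypothetical:

```python
import json

# Hypothetical JSON-LD record for a learning object. The @context maps
# plain JSON keys to linked-data terms; the vocabulary and properties
# below are illustrative stand-ins, not Elsevier's actual model.
learning_object = {
    "@context": {
        "schema": "http://schema.org/",
        "title": "schema:name",
        "format": "schema:encodingFormat",
        "prerequisite": {"@id": "schema:coursePrerequisites", "@type": "@id"},
    },
    "@id": "urn:example:learning-object:42",
    "title": "Cardiac cycle walkthrough",
    "format": "text/html",
    "prerequisite": "urn:example:learning-object:41",
}

# JSON-LD is plain JSON, so any standard parser round-trips it unchanged.
serialized = json.dumps(learning_object)
assert json.loads(serialized)["@id"] == "urn:example:learning-object:42"
```

Because the record stays valid JSON, ordinary web tooling can consume it, while JSON-LD-aware processors can expand the keys into full RDF properties.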

Crowdsourced Semantic Annotation of Scientific Publications and Tabular Data in PDF

Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that until now existed only in PDF documents would also finally become machine-understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input with simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. The evaluation shows that even a few annotations lead to good recall, and we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.
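To illustrate the kind of query the abstract envisions, here is a hedged sketch: the paper URL is a placeholder, and while the prefixes come from the W3C Web Annotation vocabulary, nothing here is claimed to be SemAnn's actual schema.

```python
# A sketch of a SPARQL query that retrieves annotations anchored in one
# paper. The target URL is hypothetical, and the exact annotation model
# SemAnn uses is not specified in the abstract.
query = """
PREFIX oa: <http://www.w3.org/ns/oa#>
SELECT ?annotation ?body
WHERE {
  ?annotation oa:hasTarget ?target ;
              oa:hasBody   ?body .
  ?target oa:hasSource <http://example.org/papers/some-paper.pdf> .
}
"""

# With any SPARQL client this string would be sent to the project's
# endpoint; here we only check that the sketch contains the expected
# clauses.
assert "SELECT" in query and "WHERE" in query
```

The point of the sketch is the workflow, not the vocabulary: once annotations live behind a SPARQL endpoint, a survey of claims or table cells across many PDFs reduces to queries like this one.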