My Blog

Linked Data 2019


Linked Data and other Semantic Web technologies are used in a variety of
organisations for projects large and small. They are a key feature in both
government open data initiatives and enterprise-wide data integration systems
for internal use. In this course, you’ll learn about the building blocks of the
Semantic Web and how to use them, including how to model your data in RDF,
integrate with third-party and Open Data sources, and how to enter and run
SPARQL queries. You’ll hear about how these technologies are used today, and
have a chance to try them out in the hands-on portions of the classes.

For the hands-on sessions students will need a laptop with a wi-fi connection and
a browser capable of running modern JavaScript applications (e.g. Chrome or a
recent version of Firefox).

This course is chaired by Kal Ahmed and taught by Dr Andy Seaborne, Jen Williams, Kal Ahmed, Peter Crocker, and Dr Stuart Williams.

Classes for 2019

The Linked Data course runs on

An Introduction to Linked Data

Taught by Jen Williams.

Linked Data is a set of best practices that defines how to structure data so that
it can be interlinked and published on the Web. By using the Web as the
publishing platform, data can be distributed across innumerable points, while
still allowing navigation and discovery through the use of URIs in the data
itself. Data models are embedded into the structure of the Linked Data itself,
and follow the same best practices by being defined as Linked Data Vocabularies.

The principles of Linked Data allow developers to publish data without the need
to develop custom APIs, and offer data publishers the ability to adopt data
standards to improve interoperability between disparate sources of data.

This “Introduction to Linked Data” course will cover the basics of the structure
of the data itself, the use of URIs, and popular vocabularies, and will show how
these principles are applied by some existing Linked Data sites.
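
To illustrate these principles, here is a small invented example of interlinked
data; the example.org URIs are our own, while the FOAF terms and the DBpedia
link show how shared vocabularies and URIs connect datasets:

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .

# A resource identified by a dereferenceable HTTP URI...
<http://example.org/people/alice>
    a foaf:Person ;                 # ...typed with a shared vocabulary (FOAF)
    foaf:name "Alice Example" ;
    foaf:based_near <http://dbpedia.org/resource/Bristol> .  # link into another dataset

# An explicit assertion that two URIs denote the same thing
<http://example.org/places/bristol>
    owl:sameAs <http://dbpedia.org/resource/Bristol> .
```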

Coffee break

Creating Linked Data

Taught by Kal Ahmed.

This course goes through some of the practical and technical aspects of creating
and delivering linked data. We will start with an overview of how HTTP can be
used to deliver data to both humans and machines from the same set of web
addresses, and then move on to look at the different syntaxes used for
encoding Linked Data.
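
As a sketch of that HTTP mechanism, the same address can answer a data client
with Turtle and a browser with HTML, depending on the Accept header the client
sends; the exchange below is illustrative, not a real endpoint:

```http
GET /people/alice HTTP/1.1
Host: example.org
Accept: text/turtle

HTTP/1.1 200 OK
Content-Type: text/turtle

<http://example.org/people/alice> a <http://xmlns.com/foaf/0.1/Person> .
```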

For the practical element of this course, we will work with Turtle, which is also
useful to understand for the later parts of the course that deal with SPARQL.
Attendees will have the chance to create their own mini knowledge graphs by hand
using Turtle syntax.
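
As a taste of that exercise, a hand-written mini knowledge graph in Turtle might
look like the following (the names and URIs are invented for illustration):

```turtle
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Two entities and a relationship between them
ex:TheHobbit
    a ex:Book ;
    rdfs:label "The Hobbit" ;
    ex:author ex:JRRTolkien .

ex:JRRTolkien
    a ex:Person ;
    rdfs:label "J. R. R. Tolkien" .
```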

Lunch break


SPARQL 101

Taught by Andy Seaborne,
Jen Williams,
Kal Ahmed,
and Stuart Williams.

SPARQL is the standard W3C query language for semantic web applications and has
been widely adopted across both open source and commercial triple stores. As
well as query capabilities, the SPARQL standards define the way to access triple
stores over HTTP and get back query results in JSON, XML, and other common data
formats.

In two sessions, attendees will get a solid grounding in SPARQL, including a
large component of practical exercises where attendees have the opportunity to
discuss tips and tricks with the instructors for being more effective with
SPARQL queries.

This first SPARQL session will cover the fundamentals of SPARQL queries. We
will then do practical work with both teaching data and an existing large RDF
dataset to ground the learnings.
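
To give a flavour of those fundamentals, a basic SELECT query matches triple
patterns against the data and binds variables; the FOAF vocabulary here is just
an example:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Find up to ten people and their names
SELECT ?person ?name
WHERE {
    ?person a foaf:Person ;
            foaf:name ?name .
}
LIMIT 10
```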

Attendees will need a laptop with a wi-fi connection and a browser capable of
running modern JavaScript applications.

Coffee break

RDF Schema

Taught by Kal Ahmed.

In this short session we introduce the basics of RDF Schema, a vocabulary that
allows us to specify how RDF types and properties interact. We will look at how
we can enhance our existing hand-crafted RDF data with RDF Schema to identify
types, properties and their relationships in our data. This provides the
foundation for the later parts of the course that deal with RDF modelling.
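
A minimal sketch of the kind of schema statements involved; the class and
property names are invented:

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

ex:Book   a rdfs:Class .
ex:Person a rdfs:Class .

# Declare a property and state which types it connects
ex:author a rdf:Property ;
    rdfs:domain ex:Book ;
    rdfs:range  ex:Person .

# Sub-class relationships let types build on one another
ex:Novel rdfs:subClassOf ex:Book .
```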


SHACL

Taught by Andy Seaborne.

RDF is a flexible format for any kind of data. But is the data what you expect
it to be? By ensuring the data has the right shape, applications that consume
the data don’t break as the data is updated, so long as the shape of the data
remains correct.

In this session, we will give an introduction to the W3C SHACL standard for RDF
data validation and discuss how it is used to build robust RDF data-driven
applications.
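
As a sketch of what a SHACL shape looks like, the invented shape below requires
every foaf:Person to have exactly one string-valued name:

```turtle
@prefix sh:   <http://www.w3.org/ns/shacl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/> .

ex:PersonShape
    a sh:NodeShape ;
    sh:targetClass foaf:Person ;    # applies to every foaf:Person in the data
    sh:property [
        sh:path foaf:name ;         # the foaf:name property...
        sh:minCount 1 ;             # ...must be present
        sh:maxCount 1 ;             # ...exactly once
        sh:datatype xsd:string ;    # ...with a string value
    ] .
```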

End of day


SPARQL 102

Taught by Andy Seaborne.

From the solid grounding in the building blocks of SPARQL, provided in the
SPARQL 101 session, this second session will introduce more of the SPARQL query
language as well as go into the data access mechanisms used over the web.
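
As one example of the additional constructs, OPTIONAL and FILTER extend basic
pattern matching; the vocabulary is illustrative:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# People and, where available, their homepages, restricted by name
SELECT ?name ?homepage
WHERE {
    ?person foaf:name ?name .
    OPTIONAL { ?person foaf:homepage ?homepage }   # keep people without one
    FILTER (STRSTARTS(?name, "A"))                 # only names starting with "A"
}
ORDER BY ?name
```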

Coffee break

RDF Modelling: Introspecting Models From Data

Taught by Stuart Williams.

Building on our knowledge of RDFS and SPARQL, this session will introduce
students to a ‘toolbox’ of SPARQL queries that can be used to introspect
‘unknown’ datasets to reveal their structure.

The techniques and tricks shown in this session can be incredibly helpful when
investigating an RDF data set. We will show how to use SPARQL queries and some
basic knowledge of RDF Schema to glean a lot of useful information about the
content of the data set and the structure of relationships contained in it.
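
Typical of that toolbox, a single aggregate query can reveal which classes a
dataset uses and how often:

```sparql
# List the classes used in the data, most frequent first
SELECT ?class (COUNT(?s) AS ?instances)
WHERE {
    ?s a ?class .
}
GROUP BY ?class
ORDER BY DESC(?instances)
```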

Lunch break

RDF Modelling: Building Models

Taught by Stuart Williams.

Modelling the world in RDF involves identifying the important entity types within
some domain of interest, their properties and relationships. In this session we
will workshop the development of an RDF data model for a familiar domain.

We’ll start with some tabular data and work at recognising the entities and
relationships within. We’ll draw some diagrams to illustrate our models and
sketch some instances from the source data. If time permits we’ll write down our
domain models in RDFS. We may also touch on the extra expressivity that the Web
Ontology Language (OWL) brings.
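
As a sketch of the direction of travel, one row of a hypothetical table of train
journeys might be re-expressed as an entity whose cells become links to other
entities or typed literals:

```turtle
@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Row: journey 42 | Bristol Temple Meads | London Paddington | 09:30
ex:journey42
    a ex:Journey ;
    ex:origin      ex:BristolTempleMeads ;   # cell values become linked entities
    ex:destination ex:LondonPaddington ;
    ex:departs     "09:30:00"^^xsd:time .    # or typed literals
```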

Coffee break

Reasoning About Knowledge

Taught by Peter Crocker.

Big data technologies have made significant progress in addressing problems
related to the volume and velocity of data. Graph technologies have enhanced our
ability to deal with the challenges in the variety of data. In this talk we will
look at how logic or reasoning can capture expertise and knowledge to create a
true knowledge graph. We will cover a number of applications of the technology
and work through a particular example that illustrates how this reasoning can
transform the way in which you find answers to your questions.
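
To give a flavour of what reasoning adds, declaring a property transitive lets
an engine infer facts that were never stated directly (the names are invented):

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix ex:  <http://example.org/> .

ex:partOf a owl:TransitiveProperty .

ex:Clifton ex:partOf ex:Bristol .
ex:Bristol ex:partOf ex:England .

# A reasoner can now conclude, without it being asserted:
#   ex:Clifton ex:partOf ex:England .
```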

Coffee break

Wrap-up questions with the panel

Taught by Andy Seaborne,
Jen Williams,
Kal Ahmed,
Peter Crocker,
and Stuart Williams.

For the final tutorial of the course we will take a brief spin through some of
the other related technologies that we didn’t have time to include in the
previous sessions! This session is intended to give students an overview of what
other technologies to investigate and what tools and services are available to
start their own Linked Data projects.