CORDIS: a SPARQL endpoint is born!
We're not talking about the exceptional Northern Lights expected in 2024, but this one is also good news for science!
➡️ In late 2023, the Publications Office of the European Union announced on social media the public release of the new CORDIS SPARQL endpoint.
CORDIS, a.k.a. "the Community Research and Development Information Service of the European Commission", is "the […] primary source of results from the projects funded by the EU's framework programmes for research and innovation, from FP1 to Horizon Europe". Described as a "rich and structured public repository with all project information held by the European Commission such as project factsheets, participants, reports, deliverables and links to open-access publications", the CORDIS catalog has also been made available in 6 European languages by the Publications Office's editorial team.
As the cherry on top of a whole process, the CORDIS SPARQL endpoint release crowns a long-term linked open data project, whose aim is to identify, acquire, preserve and provide access to knowledge, in a common will to share trustworthy, qualified and structured information with the widest possible public (see the Publications Office 2021 Annual Management Report).
In the context of the pandemic (and the recent opening of data.europa.eu, the official portal for European data, as defined in the 2017–2025 European Open Data Space strategy), the EuroSciVoc taxonomy of fields of science was released in April 2020, followed in December 2021 by the publication of the European Research Information Ontology (EURIO) on the EU Vocabularies website.
As presented at the ENDORSE conference in March 2021, the redesign of the CORDIS data model in accordance with Semantic Web standards helped the platform move "from acting as a data repository to finally playing an active role as data provider", with the EuroSciVoc taxonomy and the EURIO ontology both playing key roles in the creation of the future CORDIS knowledge graph and SPARQL endpoint:
"EuroSciVoc […] is a multilingual, SKOS-XL based taxonomy that represents all the main fields of science that were discovered from the CORDIS content, e.g., project abstracts. It was built starting from the hierarchy of the OECD's Fields of R&D classification (FoRD) as root and extended through a semi-automatic process based on NLP techniques. It contains almost 1 000 categories in 6 languages (English, French, German, Italian, Polish and Spanish) and each category is enriched with relevant keywords extracted from the textual description of CORDIS projects. It is constantly evolving and is available on EU Vocabularies website […].
In order to transform CORDIS data into Linked Open Data, thus aligning with Semantic Web standards, best practices and tools in industry and public organizations, the need for an ontology emerged. CORDIS created the EURIO (European Research Information Ontology) based on data about research projects funded by the EU's framework programmes for research and innovation. EURIO is aligned with EU ontologies such as DINGO and FRAPO and de facto standard ontologies such as schema.org and the Organization Ontology from W3C. It models projects, their results and actors such as people and organizations, and includes administrative information like funding schemes and grants.
EURIO, which is available on EU Vocabularies website, was the starting point to develop a Knowledge Graph of CORDIS data that will be publicly available via a dedicated SPARQL endpoint."
(Enrico Bignotti & Baya Remaoun, "EuroSciVoc taxonomy and EURIO ontology: CORDIS as (semantic) data provider", ENDORSE, March 16, 2021. PDF / VIDEO)
… a knowledge graph that was soon released over 2022-2023 (see Industry Track 1 on Tuesday, 25 October of the ISWC 2022 conference for more detail), until the final opening of the CORDIS SPARQL endpoint in late November 2023.
Now, fancy running a few SPARQL queries in there?
Follow the SPARQL
The CORDIS SPARQL endpoint is made available on the CORDIS Datalab (and is already referenced in the EU Knowledge Graph among other European SPARQL endpoints! see the query / see the results).
A quick documentation guide to CORDIS Linked Open Data is available here: https://cordis.europa.eu/about/sparql.
Let's have a look at the EURIO ontology first: we need to understand it in order to query the CORDIS knowledge graph.
As the guide tells us, the latest version can be downloaded from the EU Vocabularies website. When we unzip the archive, we get the whole documentation about the EURIO classes and properties that we need to write our SPARQL queries – and a diagram of the main classes and properties of the CORDIS data model:
At first sight, we can observe three main groups of entities in the diagram:
Let's open the CORDIS SPARQL endpoint – some easy queries can be run to begin exploring the CORDIS knowledge graph.
NB: the data on the SPARQL endpoint is a snapshot, but fresher dumps can be found on the European data portal!
Here is a simple one to find the list of FundingSchemes with their titles and IDs corresponding to the "Horizon 2020" programme:
FundingSchemes with their titles and IDs corresponding to the "Horizon 2020" programme
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
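Spelled out in full, the query could look something like the sketch below – note that the exact EURIO class and property names (eurio:FundingScheme, eurio:title, eurio:identifier) are assumptions to be checked against the ontology documentation:

PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX eurio: <http://data.europa.eu/s66#>

# List the FundingSchemes whose identifier corresponds to Horizon 2020
# (class and property names are assumptions, check the EURIO documentation)
SELECT ?fundingScheme ?title ?id
WHERE {
  ?fundingScheme a eurio:FundingScheme ;
                 eurio:title ?title ;
                 eurio:identifier ?id .
  FILTER REGEX(STR(?id), "^H2020")
}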
The FILTER REGEX enables us to display the IDs corresponding to H2020 Funding Schemes.
We can write another query to get the projects with the Funding Scheme Programme they are related to (note that, in EURIO, eurio:hasFundingSchemeProgramme is a sub-property of eurio:fundingScheme):
Projects with the Funding Scheme Programme they are related to
PREFIX eurio: <http://data.europa.eu/s66#>
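A sketch of that query – eurio:hasFundingSchemeProgramme is quoted above, while eurio:acronym and eurio:code are assumed names for the acronym and code properties:

PREFIX eurio: <http://data.europa.eu/s66#>

# Projects with the code of the Funding Scheme Programme they are related to
SELECT ?projectAcronym ?programmeCode
WHERE {
  ?project a eurio:Project ;
           eurio:acronym ?projectAcronym ;                               # assumed property name
           eurio:hasFundingSchemeProgramme/eurio:code ?programmeCode .   # "/" property path, eurio:code assumed
}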
(Here we used a property path with a "/" to shorten the query and get the acronyms of projects and the Funding Scheme Programme codes.)
… and, combining with the first query, we can find the projects depending on the H2020 Funding Scheme Programme in particular:
Projects depending on H2020 Funding Scheme Programme in particular
PREFIX eurio: <http://data.europa.eu/s66#>
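Under the same assumptions on property names, the combined query could read:

PREFIX eurio: <http://data.europa.eu/s66#>

# Projects funded under an H2020 Funding Scheme Programme
SELECT ?projectAcronym ?programmeCode
WHERE {
  ?project a eurio:Project ;
           eurio:acronym ?projectAcronym ;          # assumed property name
           eurio:hasFundingSchemeProgramme ?programme .
  ?programme eurio:code ?programmeCode .            # assumed property name
  FILTER REGEX(STR(?programmeCode), "^H2020")       # same REGEX trick as in the first query
}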
It is also possible to get the list of all the Funding Scheme Programmes that CORDIS projects have been funded by – we observe 27 of them here (from the SPARQL endpoint) – while adding a COUNT function to know how many projects there are per FundingSchemeProgramme:
All existing Funding Scheme Programmes CORDIS projects have been funded by
PREFIX eurio: <http://data.europa.eu/s66#>
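Since this one only relies on the eurio:hasFundingSchemeProgramme property mentioned above, the sketch below is close to something you can paste directly:

PREFIX eurio: <http://data.europa.eu/s66#>

# All Funding Scheme Programmes, with the number of projects funded by each
SELECT ?programme (COUNT(DISTINCT ?project) AS ?nbProjects)
WHERE {
  ?project eurio:hasFundingSchemeProgramme ?programme .
}
GROUP BY ?programme
ORDER BY DESC(?nbProjects)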
Querying the organisation properties will return other kinds of useful information about the geographical location of the project stakeholders. Let's say we want to find the projects whose coordinating organisations have sites located in France:
Projects whose coordinating organisations have sites located in France
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
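One possible shape for this query – the whole path from the project to the address (eurio:isCoordinatedBy, eurio:legalName, eurio:hasSite, eurio:hasAddress) is hypothetical and must be checked against the EURIO diagram; only eurio:addressCountry is documented below:

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX eurio: <http://data.europa.eu/s66#>

# Projects whose coordinating organisation has a site located in France
SELECT ?projectAcronym ?orgName
WHERE {
  ?project eurio:acronym ?projectAcronym ;   # assumed property name
           eurio:isCoordinatedBy ?org .      # hypothetical: check the real path to the coordinator
  ?org eurio:legalName ?orgName ;            # hypothetical
       eurio:hasSite ?site .                 # hypothetical
  ?site eurio:hasAddress ?address .          # hypothetical
  ?address eurio:addressCountry 'FR' .       # the PostalAddress route described below
}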
Depending on the available data, you can query either via the PostalAddress info (eurio:addressCountry 'FR') or via the AdministrativeArea (eurio:hasGeographicalLocation)… Here we're lucky, as both fields are mandatory.
Last but not least, we can also play with the CORDIS vocabularies: here you have the choice to investigate via plain keywords of Project or Publication items, querying titles, abstracts or other types of literals…
An example of projects with abstracts containing the string ❄ 'winter' ❄ – with the URL giving the exact link to the project online:
Looking for ❄ 'winter' ❄ in CORDIS project abstracts (with a nice URL to go)
PREFIX eurio: <http://data.europa.eu/s66#>
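A sketch – eurio:abstract, eurio:acronym and eurio:url are assumed names for the abstract, acronym and project-page properties:

PREFIX eurio: <http://data.europa.eu/s66#>

# Projects whose abstract contains 'winter', with the link to the project page
SELECT ?projectAcronym ?url
WHERE {
  ?project eurio:acronym ?projectAcronym ;    # assumed property name
           eurio:abstract ?abstract ;         # assumed property name
           eurio:url ?url .                   # assumed property name
  FILTER(CONTAINS(LCASE(?abstract), "winter"))
}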
But the funniest way is to use the EuroSciVoc taxonomy (and navigate through the thesaurus hierarchy): to do so, we follow the property eurio:hasEuroSciVocClassification to reach the concepts and their skosxl:prefLabel property… to finally obtain the thesaurus labels (don't forget to choose a preferred language with a FILTER on lang()):
Projects with their associated EuroSciVoc keywords (English prefLabels)
PREFIX skosxl: <http://www.w3.org/2008/05/skos-xl#>
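Which gives a query along these lines – skosxl:literalForm reaches the actual text of a SKOS-XL label, and eurio:acronym is an assumed property name:

PREFIX skosxl: <http://www.w3.org/2008/05/skos-xl#>
PREFIX eurio: <http://data.europa.eu/s66#>

# Projects with the English preferred labels of their EuroSciVoc concepts
SELECT ?projectAcronym ?keyword
WHERE {
  ?project eurio:acronym ?projectAcronym ;    # assumed property name
           eurio:hasEuroSciVocClassification ?concept .
  ?concept skosxl:prefLabel/skosxl:literalForm ?keyword .
  FILTER(lang(?keyword) = "en")               # pick a preferred language
}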
A slightly more complex one, using the first level of the taxonomy hierarchy: here we are searching for all skos:broader concepts "with no other broader concept" (the FILTER NOT EXISTS pattern), a.k.a. the top or root concepts of the vocabulary used to describe the projects, then counting the projects in each category:
All root categories of EuroSciVoc used to describe the projects
PREFIX skosxl: <http://www.w3.org/2008/05/skos-xl#>
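A possible formulation, with the same hedges on property names as before:

PREFIX skosxl: <http://www.w3.org/2008/05/skos-xl#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX eurio: <http://data.europa.eu/s66#>

# Root categories of EuroSciVoc, with the number of projects under each
SELECT ?rootLabel (COUNT(DISTINCT ?project) AS ?nbProjects)
WHERE {
  ?project eurio:hasEuroSciVocClassification ?concept .
  ?concept skos:broader* ?root .                          # climb up the hierarchy
  FILTER NOT EXISTS { ?root skos:broader ?anyBroader }    # keep only concepts with no broader
  ?root skosxl:prefLabel/skosxl:literalForm ?rootLabel .
  FILTER(lang(?rootLabel) = "en")
}
GROUP BY ?rootLabel
ORDER BY DESC(?nbProjects)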
… and the results are perhaps even more explicit if refined to level 2 of the hierarchy:
All 'level 2' categories of EuroSciVoc used to describe the projects
PREFIX skosxl: <http://www.w3.org/2008/05/skos-xl#>
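Same pattern, one step below the roots:

PREFIX skosxl: <http://www.w3.org/2008/05/skos-xl#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX eurio: <http://data.europa.eu/s66#>

# 'Level 2' categories: concepts whose broader concept is a root concept
SELECT ?level2Label (COUNT(DISTINCT ?project) AS ?nbProjects)
WHERE {
  ?project eurio:hasEuroSciVocClassification ?concept .
  ?concept skos:broader* ?level2 .
  ?level2 skos:broader ?root .                            # the parent...
  FILTER NOT EXISTS { ?root skos:broader ?anyBroader }    # ...is a root concept
  ?level2 skosxl:prefLabel/skosxl:literalForm ?level2Label .
  FILTER(lang(?level2Label) = "en")
}
GROUP BY ?level2Label
ORDER BY DESC(?nbProjects)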
And a little last one with a count, to enumerate the EuroSciVoc concepts most used for indexing projects:
Most used EuroSciVoc Concepts for indexing projects
PREFIX skosxl: <http://www.w3.org/2008/05/skos-xl#>
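For instance, with the same hedges as above:

PREFIX skosxl: <http://www.w3.org/2008/05/skos-xl#>
PREFIX eurio: <http://data.europa.eu/s66#>

# EuroSciVoc concepts most used for indexing projects
SELECT ?keyword (COUNT(DISTINCT ?project) AS ?nbProjects)
WHERE {
  ?project eurio:hasEuroSciVocClassification ?concept .
  ?concept skosxl:prefLabel/skosxl:literalForm ?keyword .
  FILTER(lang(?keyword) = "en")
}
GROUP BY ?keyword
ORDER BY DESC(?nbProjects)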
This one would be ideal to generate a word cloud, maybe?
What if we send the CSV data to some nice online word cloud generator, then?
(OMG they also have a shooting star shape in there 🤩)
As a conclusion…
According to Science (so says CORDIS!), New Year's resolutions appear difficult to keep… because most of the time they are too ambitious, too restrictive or imprecisely formulated: indeed, "the effectiveness of resolutions depends on how they are framed."
For Horizon 2024, let's suggest a well-framed (RDF?) one: may the CORDIS SPARQL endpoint initiative be an example for other organisations that want to share Linked Open Data!
Wishing you the Best Interoperability and a Very Merry ✨ Sparqling New Year! ✨
2013-2023: 'Tis SKOSPlay!'s Birthday!
To inaugurate my first article on Sparna's blog, let me share a little feedback of mine today about Sparna's well-known SKOSPlay!, whose 10th birthday we celebrate this year!
10 years old – quite a historic tool! But it is more relevant than ever, in a context where semantic technologies are coming to the front of the stage anew, due to the growing interest shown by the digital humanities movement in data interoperability projects via standardized knowledge structuring (the Wikipedia-Wikidata projects, e.g., as semantic wiki devices), and also due to the latest progress of artificial intelligence, now able to process large amounts of data and soon to fully leverage the potential of ontologies and knowledge graphs.
From asking for a taxonomy to querying RDF files with an API…
That said, in a more practical way, semantic web standards are not always easy to manipulate as a professional – if you are neither initiated to SPARQL nor a confirmed data scientist – even when all you have to deal with is a simple structured list of terms!
Either your data is already SKOS-standardized (great!), but there is sometimes a gap between the normalization step and the visualization step that requires somewhat more technical IT skills. Or – most of the time – the common muggle-born starts with a plain Excel spreadsheet, creates a list, adds some hierarchy, maybe some scope notes or definitions, and… ends up quite puzzled, wondering how to get a 5-star data vocabulary ⭐!
A SKOSPlay!-within-a-SKOSPlay!
Wink to @belett: anything is possible now with SKOSPlay!
Aimed from the very beginning at visualizing (and printing!) SKOS thesauri, taxonomies and vocabularies, SKOSPlay! is a fully online, free and open-source tool leveraging semantic technologies (RDF, SPARQL, inference, Linked Data) to generate downloadable HTML or PDF documents. More and more new features have been added since then: alignment display, OWL and SKOS-XL file processing, autocomplete fields, permuted index generation…
Hello @veronikaheim, maybe SKOSPlay! could match your need?
… among other nice and useful developments.
But as an Excel aficionada, the one that I prefer is the Excel-to-RDF converter tool.
One sheet. One import. One result. Easy-peasy, happy terminologist :))
(And you can even keep your custom colors templates and formats !!! 🦄 )
Come on & let’s SKOSPlay!
Let's say you want to display or construct a small vocabulary that you could quickly visualize in a standardized, SKOS-structured way:
To fit in the SKOS model, your data has to follow a particular template that you can download from the SKOSPlay! website and fill in.
First you have to define the header of the template: the global scheme of your vocabulary, its URI, title and description:
Then add the terms of your list (with their URIs)… Here with the "@en" language indication at the top of the column, as I am creating an English-French multilingual vocabulary:
Then recreate the tree structure through the Excel template (don't mind my color palette, I always like colouring my Excel sheets to better visualize the info at a glance!).
The hierarchical broader-narrower structure is recreated by adding a "skos:narrower" column (or skos:broader, as you prefer, with only one broader value per line) where you list the different specific values in front of the more generic one (separated by commas). Here I used a PREFIX too, in order to shorten my http:// URIs – SKOSPlay! can process them either way!
Then add a few notes and other information: multilingual values, skos:notation, any other default properties known by the converter (see the documentation), or custom elements of yours declared through other PREFIXes:
Your Excel template is ready to go! This is quite an easy configuration in my demo here, but SKOSPlay! can also deal with skos:Collections, SKOS-XL and other advanced RDF structures: blank nodes, RDF lists, named graphs. And it is now possible to generate OWL and SHACL files with the converter too!
Now it's time to turn your (finally-not-so-dirty) data into a SKOS-charming file. Take your favorite magic wand – the SKOSPlay! Excel-to-RDF converter tool – and load your Excel file into it (adding some optional parameters if needed).
Well done – it's a wonderful RDF-ized vocabulary file (here in Turtle format, but RDF/XML, N-Triples, N-Quads, N3 and TriG are also available):
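To give an idea, the output looks something like the following Turtle – the URIs and labels here are made up for the sake of the example, not the exact ones from my demo spreadsheet:

@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/vocabulary/> .    # made-up namespace for the example

# the concept scheme declared in the header of the template
ex:scheme a skos:ConceptScheme ;
  skos:prefLabel "My small demo vocabulary"@en .

# a generic concept with its narrower concept
ex:animals a skos:Concept ;
  skos:inScheme ex:scheme ;
  skos:prefLabel "Animals"@en, "Animaux"@fr ;
  skos:notation "01" ;
  skos:narrower ex:cats .

ex:cats a skos:Concept ;
  skos:inScheme ex:scheme ;
  skos:prefLabel "Cats"@en, "Chats"@fr ;
  skos:broader ex:animals .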
Wingardium Visualiza!
We're almost done. Go back to the website, tab "Play!", load your freshly RDF-serialized file and go to the next step to choose the kind of display you want to get; finally press (SKOS)Play! and… abracadataaaaaaa!
There are many different options to visualize your tree-structured data. Trees, static and dynamic, but also more "professional" and printable sorts of displays, like alphabetical, hierarchical or permuted views:
And KWIC (as in "KeyWord In Context"):
It is even possible to load an online Google spreadsheet (mine is shared here), just by adapting its URL a little for the converter's needs. An interesting feature for collaborative purposes, when you are team-building a vocabulary!
The whole pack is fully documented and findable on Sparna's website & Git. Some recent users even produced a short video tutorial to show what they managed to do with the different SKOSPlay! visualization tools.
Already knew about SKOSPlay!? Go see its little brother, SHACLPlay!, and feel free to give us some feedback in the comments.
Happy Birthday SKOSPlay! & Long live the Semantic Web!
A bit more Vouvray with your nougat de Tours ?
Dashboards from SPARQL knowledge graphs using Looker Studio (Google Data Studio)
This guide will describe every step you need to know in order to create a Looker Studio dashboard from SPARQL queries. All along, an example will illustrate all the steps, with screenshots, code and quotes.
Looker Studio does not provide any native connector for SPARQL. But a community connector exists, called SPARQL Connector, made by Datafabrics LLC, that can be used to create the data source. You can find it by searching for community connectors, or use this link. The code is available in this Github repository.
You have to grant SPARQL Connector access to your Google account before using it. You will then be able to find it in the connectors panel, in the Partner Connectors section, for your next queries.
From your report, click on "Add Data" at the bottom right of the screen to open the connector panel. Select the SPARQL Connector in the connector panel (you can also search for it by entering "sparql" in the search field).
Then, follow the steps to create your own data source:
https://dbpedia.org/sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?capital_city_label ?country_label ?population
WHERE {
?capital_city dbo:type dbr:Capital_city.
?capital_city rdfs:label ?capital_city_label.
?capital_city dbo:country ?country.
?country rdfs:label ?country_label.
OPTIONAL {?capital_city dbo:populationMetro ?population.}
FILTER (lang(?capital_city_label) = 'en')
FILTER (lang(?country_label) = 'en')
}
[{"name": "capital_city_label", "dataType": "STRING"},
{"name": "country_label", "dataType": "STRING"},
{"name": "population", "dataType": "NUMBER"}]
Be sure your "name" fields match the fields of your query, in the same order. You have to select the "dataType" you want for each of your fields, but you can change it later within Google Data Studio. Click here to learn more about data types.
Once every field is completed, you have to click twice on "Add". If everything goes well, the connector panel will disappear and your new data source will appear on the right of the window, ready to use. It is named "SPARQL Connector" by default.
If you made a mistake while creating your data source, the SPARQL Connector panel can:
If you click on "See details", Google Data Studio will show you the error type from the connector:
First, you can change the name of your data source by clicking on the icon to the left of the data source in Google Data Studio (the icon will change into a pencil) to open the data source edition panel.
Then, click on the top left of the new panel, where the name of your data source is, to modify it.
Change the name of the example data source to "Capital city Data (DBpedia)".
You can also change your data source by modifying your parameters in SPARQL Connector. To do so, click on “EDIT CONNECTION”. The SPARQL Connector panel will open with your current parameters and you can modify them.
In the data source edition panel, you can also change the type of your fields so it fits your needs (numbers can be changed as currency, text can be changed as geographic data, etc.).
Be careful with your field formats, or you may not be able to use your data anymore. For example, if you have a "," as a decimal separator, you can change your data type but you won't be able to use this field, as Google Data Studio uses "." as the decimal separator.
The connector will also apply default values in query results which don’t have a value for a requested field. The default values are 0 for numbers, “” for strings and false for booleans.
The population field on DBpedia has some null values, but the connector transformed all these values into default values (0 for numbers).
You may need to use calculated fields in order to obtain new fields or to transform data. To create one, click on “ADD A FIELD” on the right side of the same panel. Check the following page from the documentation to learn more about calculated fields.
By using a calculated field, the population data can be switched back to the original values.
In the new panel, choose the name of your new field and enter the formula. If your formula is correct, a green check appears at the bottom of the panel. If not, it will turn into a red cross.
Enter the new field name: "population_recalculated". Then enter the formula of the field: "NULLIF(population,0)". In this case, if any population value is equal to 0 in the population field, it will turn into a null value in the calculated field.
Once you have created all your calculated fields, you may have some useless fields left in your data source. Those fields may decrease the speed of your dashboard. You can use "Extract Data" to keep only the fields you need in another data source that you will use to build your report.
To use it, click on “Add Data” on the bottom right of the screen and select “Extract Data”.
Then, select your data source and the fields you want to keep in your report. You can make several extractions from one data source if you need to.
Choose the data source and keep only 3 fields: "capital_city_label", "country_label" and "population_recalculated".
You can also configure the auto-update tool to make sure your extracted data stays up to date with the latest version of your data source from SPARQL Connector. At the bottom right of the panel, switch on the auto-update button, then choose the frequency of the update (daily, weekly or monthly).
A data source named "Extract Data" by default appears, with the fields you selected from the previous data source.
This method only works for data sources, you won’t be able to use it on blended data. Make sure to do the extraction before blending to improve your performance. To learn more about blending, see this page from the Looker Studio documentation.
Here is a quick guide on how to create a chart in Google Data Studio. Check the chart reference documentation for more information about charts available by default.
To build a dashboard, you will need to select a widget first (pie chart, table, histograms, etc.). Click on “Add a chart” on the top of the screen and select the one you need.
Click on “Add a chart” and select a pie chart.
Select your chart on the report, it will open a panel on the right side of the screen where you can see the chart type and modify it. You can select the data to display in the “SETUP” panel. You can also customize the chart with the “STYLE” panel.
Place the chart on your dashboard anywhere you want to see it. Google Data Studio will automatically choose the data source and some fields which fit the charts, but you can choose to modify them in the “SETUP” panel on the right.
Choose "capital_city_label" as the dimension and "population_recalculated" as the metric.
Here is the result of this configuration :
In the “STYLE” panel, you can choose to modify some options in the chart to customize it.
Change the number of slices from 10 to 6 to see the top 5 values + the "others" value.
The chart will change automatically with your new parameters as you change them.
Congratulations, you have successfully made your first chart! Try to get your own data sources with SPARQL Connector, make your own dashboards with Looker Studio, and send us the links!
Clean JSON(-LD) from RDF using Framing
So what you need is to produce a clean JSON structure from your raw RDF triples. And by "clean", I mean:
You have 2 possibilities to do that:
There are 2 nice things about the solution with JSON-LD framing:
The principle of JSON-LD framing is that you provide a JSON-LD @context with an additional frame specification that defines how the JSON should be structured (indented), which entity to include at each level (entities can be filtered based on some criteria), and which properties to include in each entity.
To start with JSON-LD framing, what you need is JSON-LD. Any JSON-LD. Typically the raw JSON-LD serialization that any RDF library or triplestore will produce; that kind of ugly, messy, full-of-URIs-and-@language kind of JSON. So something like:
(Brrr, scary, no?)
And then what you need is the JSON-LD playground with the “Framed” tab. This will allow you to test your context and frame specification.
And when deployed in production, what you will need is a JSON-LD library capable of implementing the JSON-LD framing algorithm. Implementations are listed here; you need one compatible with JSON-LD 1.1.
As an example, I use a JSON-LD file from the French National Library, the one from Les Misérables here : https://data.bnf.fr/fr/13516296/victor_hugo_les_miserables/ (download link at the bottom of the page).
You can download the initial JSON example, the frame specification, and the result in a zip. The zip also contains intermediate frame specifications.
We’ll start by specifying the JSON-LD context part.
The average developer will wonder what those @type and @id keys are. Re-map them straight away to type and id:
"type" : "@type",
"id" : "@id",
Schema.org and lots of other specifications do that.
If you have a named graph at the top, introduced by @graph, my suggestion would be to simply remap it to a fixed key, like "data" or "entities":
"data" : "@graph",
Get rid of any trace of URIs or short URIs in JSON keys. Declare a term for every property in your graph. The simplest way to do this is to use the local part of the URI (after the last "#" or "/") as the term. Order the context by the alphabetical order of the terms. Terms for properties will usually start with a lowercase letter.
In corner cases you may end up with the same term (as with bnf-onto:subject and dcterms:subject in the example), in which case you need a different key; I chose "bnf-subject" here for bnf-onto:subject and kept "subject" for dcterms:subject.
"creator" : "dcterms:creator",
"date" : "dcterms:date",
"dateOfWork" : "rdagroup1elements:dateOfWork",
"depiction" : "foaf:depiction",
"description" : "dcterms:description",
Now you want to do the same thing to get rid of any trace of URIs in the "type" of entities. Declare a term for every class in your ontology/application profile. List the classes in a different section than the properties. Terms for classes will usually start with an uppercase letter.
"Concept" : "skos:Concept",
"Document" : "foaf:Document",
"ExpositionVirtuelle" : "bnf-onto:ExpositionVirtuelle",
Now you want to get rid of all those ugly "id"s: we are only interested in listing the values. To do that, modify the mapping of the property (here "depiction") to state that its values are URIs. You need to change the mapping from
"depiction" : "foaf:depiction",
to
"depiction" : { "@value" : "foaf:depiction", "@type":"@id" },
And so parts like this:
"depiction": [
{
"id": "https://gallica.bnf.fr/ark:/12148/btv1b8438568p.thumbnail"
},
{
"id": "https://gallica.bnf.fr/ark:/12148/btv1b9004781d.thumbnail"
},
{
"id": "https://gallica.bnf.fr/ark:/12148/bpt6k5545348q.thumbnail"
}
]
Will be turned into
"depiction": [
"https://gallica.bnf.fr/ark:/12148/btv1b8438568p.thumbnail",
"https://gallica.bnf.fr/ark:/12148/btv1b9004781d.thumbnail",
"https://gallica.bnf.fr/ark:/12148/bpt6k5545348q.thumbnail",
"https://gallica.bnf.fr/ark:/12148/btv1b8438570r.thumbnail"
]
Now you want to get rid of the @datatype information on literals. If the value of a property always uses the same datatype, which is the case 99.9% of the time, then you can change the mapping from
"property" : "http://myproperty",
to
"property" : { “@id”: "http://myproperty", “@type”:”xsd:date” }
(The example used does not have datatype properties.)
Now let's get rid of the @language. For this you have 2 choices: when the language is always the same for the value, you can indicate it in the context, the same way you would for the datatype, but with the @language key. So you change from
"description" : "dcterms:description",
to
"description" : { “@id” : "dcterms:description", “@language” : “fr” }
You could even have different terms for different languages, such as :
"title_fr" : { "@id" : "dcterms:title", "@language" : "fr" },
"title_en" : { "@id" : "dcterms:title", "@language" : "en" },
"title" : { "@id" : "dcterms:title" },
Or, when you have multiple multilingual values, you can make the property a language map by declaring it this way:
"editorialNote" : { "@id" : "skos:editorialNote", "@container" : "@language" },
This will turn the language code into a key in the JSON output:
"editorialNote": {
"fr": [
"BN Cat. gén. (sous : Hugo, comte Victor-Marie) : Les misérables. - . - BN Cat. gén. 1960-1969 (sous : Hugo, Victor) : idem. - . -",
"Laffont-Bompiani, Oeuvres, 1994. - . - GDEL. - . -"
] },
In that case, watch out for cases where there is a value without a language: it will generate a @none key.
By now you already get a much cleaner JSON, and almost all "unnecessary" URIs have disappeared. But we still have some URI references that we can clean up: the ones that are references to controlled lists with a finite number of values.
We can declare term mappings for those values, just like we did to map properties and classes. BUT – and this is the trick – we need to change the property declaration from "@id" to "@vocab" for the replacement to happen. This is documented in the "Type coercion" section of the spec.
In our example, the mappings to languages and subjects are good candidates to be mapped to JSON terms. So we change
"language" : { "@id" : "dcterms:language", "@type":"@id" },
"subject" : { "@id" : "dcterms:subject", "@type":"@id" },
to
"language" : { "@id" : "dcterms:language", "@type":"@vocab" },
"subject" : { "@id" : "dcterms:subject", "@type":"@vocab" },
and declare a term for each value of the controlled list:
"fre" : "http://id.loc.gov/vocabulary/iso639-2/fre",
"eng" : "http://id.loc.gov/vocabulary/iso639-2/eng",
Now the only URIs left are the ids of the main entities in our graph, and references to those ids. References to controlled vocabularies with a limited number of values have been mapped to JSON terms. Although we cannot turn all the remaining URIs into JSON terms (because we can't declare all possible entity URIs in the context), we can shorten them by adding a prefix mapping in the context, in our case:
"ark-https": "https://data.bnf.fr/ark:/12148/",
(I note that there are http:// and https:// URIs in the data, I don’t know why)
So now we have clean values, no URIs, no fancy JSON-LD keys. But we still don't have a structure indented the way the average developer would expect it; this is where the frame specification comes into play.
The frame specification acts both as a filter/selection mechanism and as a structure definition. At each level you indicate the criteria for an object to be included. In our example we have a skos:Concept (the entry in the library catalog) that points via foaf:focus to a Work (the book "in the real world"), and that skos:Concept is the subject of many virtual exhibits. We want to have the Concept and the Work at the first level, and the exhibits under the Concept. But there is a trick: it is the virtual exhibits that point to the concept with dcterms:subject, and we want it in the other direction – Concept is_subject_of Exhibit – so we need a @reverse property.
To do that, add the following reverse mapping declaration (don't modify the existing one):
"subject_of" : { "@reverse" : "dcterms:subject" },
Note the use of "@reverse" to indicate that the JSON key is to be interpreted from object to subject when turned into triples.
With that in place, we can write our frame specification, which goes right after the @context we designed before:
"type" : ["Concept", "Work"],
"subject_of" : {
"type" : "ExpositionVirtuelle"
}
Note how we use the terms defined previously in the context. This is to be understood the following way: "at the first level, take any entity with a type of either Concept or Work, then insert a subject_of key and put inside it any value that has the type ExpositionVirtuelle". This guarantees that the virtual exhibit objects will go under the Concept, and not above it or at the same level. But this is not sufficient: if you apply that framing, you will notice that the Work is repeated under the "focus" property of the Concept and at the root level. This is because of the default behavior of the JSON-LD playground regarding object embedding (objects are always embedded where they are referenced).
To avoid embedding where it is undesired, we can set the "@embed" option to "@never" on the "focus" property, like so:
"type" : ["Concept", "Work"],
"subject_of" : {
"type" : "ExpositionVirtuelle"
},
"focus" : {
"@embed" : "@never",
"@omitDefault": true
}
This tells the framing algorithm to never embed the complete entity inside the focus property, just reference the URI instead.
Also, you will notice the use of "@omitDefault" set to true; this tells the framing algorithm to omit the focus property when it has no value. Otherwise, since the Work does not have a foaf:focus property (only the Concept does), it would get a "focus" key set to null.
Well, I am sure this can be controlled, either by specifying explicitly all the keys you want, in the order you want them, in the frame specification, or by using an "ordered" parameter of the JSON-LD API, but that is not available in the playground.
If you list all keys explicitly in the frame specification, don't forget to use wildcards so that any value will match; wildcards are empty objects, "{}":
"myProperty" : {}
Much nicer, no? This is something you can put into the hands of an average developer.
Do you have a SHACL specification of the structure of your graph? Wouldn't it be nice to automate the generation of the JSON-LD context from SHACL? Maybe we could do that in SHACL-Play? Stay tuned!
What we can probably automate is the context part, which can be global and unique for your whole graph; the framing specification, however, should probably be different for each API you need, with each framing specification referencing the same context by its URL.
Image : [Encadrement ornemental] ([1er état]) / .Io. MIGon 1544. [Jean Mignon] ; [d’après Le Primatice] https://gallica.bnf.fr/ark:/12148/btv1b53230250h
SPARQL training materials available under CC-BY-SA
Fair Data Collective is doing cool things with SKOS Play and xls2rdf
Here is also a nice video showing how to visualize such a SKOS vocabulary in the SKOS Play visualization tools.
Thanks to Nikola Vasiljevic and John Graybeal from FAIR Data Collective for this nice integration!
You can check out the Fair Data Collective page on LinkedIn: "Making practical and easy-to-use FAIR data solutions".
Feeding Talend with SPARQL (on Wikidata)
The principle is simple: run a SPARQL query, then process the corresponding results to turn them into a data table. This data table can then be exported, combined or saved, however you like.
To illustrate this, we will query Wikidata through its SPARQL query service, using its first example query, which retrieves… cats!
The query is the following, and here is the direct link to run it on Wikidata:
SELECT ?item ?itemLabel
WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" . }
}
LIMIT 10
You can download the job presented here from this example Github repository, and import it directly into Talend.
We will use the following Talend components:
To start, you need to create a new job.
Warning! The SPARQL query is a Java string, so you must: 1/ surround it with double quotes, 2/ add the escape character \ before the quotes inside the query, and 3/ write the query on a single line. Here is the corresponding string:
"SELECT ?item ?itemLabel WHERE { ?item wdt:P31 wd:Q146. SERVICE wikibase:label { bd:serviceParam wikibase:language \"[AUTO_LANGUAGE],en\". } } LIMIT 10"
We will now configure the tExtractXMLField component:
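As a reminder, a SPARQL endpoint returns SELECT results in the standard SPARQL Query Results XML Format: this is the structure tExtractXMLField will typically loop over, with one row per result element. For our cats query, a result looks like:

<?xml version="1.0"?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
  <head>
    <variable name="item"/>
    <variable name="itemLabel"/>
  </head>
  <results>
    <result>
      <binding name="item">
        <uri>http://www.wikidata.org/entity/Q378619</uri>
      </binding>
      <binding name="itemLabel">
        <literal xml:lang="en">CC</literal>
      </binding>
    </result>
    <!-- one <result> element per row returned by the query -->
  </results>
</sparql>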
Browse to the file location to retrieve it.
And here is the result:
Uri;Label
http://www.wikidata.org/entity/Q378619;CC
http://www.wikidata.org/entity/Q498787;Muezza
http://www.wikidata.org/entity/Q677525;Orangey
http://www.wikidata.org/entity/Q851190;Mrs. Chippy
http://www.wikidata.org/entity/Q1050083;Catmando
http://www.wikidata.org/entity/Q1201902;Tama
http://www.wikidata.org/entity/Q1207136;Dewey Readmore Books
http://www.wikidata.org/entity/Q1371145;Socks
http://www.wikidata.org/entity/Q1386318;F. D. C. Willard
http://www.wikidata.org/entity/Q1413628;Nora
You now know how to feed Talend from any SPARQL-accessible database, in a few clicks and without code! This lets you get more value out of your knowledge graph by integrating it with the rest of the information system.
SHACL Play! free online SHACL validator for RDF data
Here is a screenshot of what the validation report looks like:
SHACL Play! features:
SHACL Play also features a command-line variant (see the documentation on the Github wiki).
Future features could include:
SHACL Play includes a catalog of online Shapes. The catalog is collaboratively editable by modifying the Shapes Catalog source file on Github, through pull requests.
Adding an entry to the catalog allows you to:
Another SHACL validator is the SHACL Playground, at https://shacl.org/playground/. However, the UI is too technical for newcomers, and it relies on a JavaScript validator running in the browser, so I am not sure it would be capable of validating large datasets.
You need to create your own set of SHACL rules? You don't have a tool to do that and you don't want to write Turtle files by hand? One technique I use is to edit them in an Excel spreadsheet that is converted to RDF using the SKOS Play Excel 2 RDF converter. Here is one example of such a file: the OpenArchaeo shapes in Excel.
You could start from this template and modify it to create your own Shapes Graph.
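For reference, the generated Shapes Graph follows the usual SHACL pattern; here is a minimal sketch, in which the ex: namespace and the constraints are purely illustrative:

@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/shapes/> .    # made-up namespace for the example

# A node shape validating all instances of ex:Person
ex:PersonShape a sh:NodeShape ;
  sh:targetClass ex:Person ;
  sh:property [
    sh:path ex:name ;          # every person must have exactly one string name
    sh:datatype xsd:string ;
    sh:minCount 1 ;
    sh:maxCount 1 ;
  ] .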
Now let’s SHACL Play!
Post illustration: « Calculateur », Bulletin des sciences mathématiques, 1922, found from https://www.bnf.fr/fr/mathematiques-informatique-et-sciences-du-numerique, from Gallica at https://gallica.bnf.fr/ark:/12148/bpt6k9620547z/f123.item
Semantic Markdown Specifications
I see a lot of potential in this, and already see some use-cases. Unfortunately I have neither the bandwidth nor the full skills to make this happen. So I am just writing this in the hope that the idea gets implemented by someone, or that someone tells me it is total nonsense…
Here are the semantic annotation use-cases I see with such a Semantic Markdown:
Note that I am not necessarily looking for a way to produce RDFa annotations on the generated HTML, although that would be nice for a schema.org use-case. Any conversion route from the original semantically annotated markdown to a set of triples would be fine.
My source of inspiration is essentially the "Span Inline Attribute Lists" from the Kramdown syntax.
This piece of Semantic Markdown:
Tomorrow I am travelling to _Berlin_ {.schema:Place}
When interpreted by a Semantic Markdown parser, it would produce this set of triples:
_:1 a <http://schema.org/Place> .
_:1 rdfs:label "Berlin" .
The span immediately preceding the "{.xxxx}" annotation is taken as the label of the entity. The use of rdfs:label to store the label of the entity could be subject to a parser configuration option.
One could imagine that a Semantic Markdown parser relies on the same RDFa Initial Context to interpret the "schema:" prefix without further declaration. But what about other ontologies? We would need some kind of prefix / vocab declaration somewhere in the document, just like in RDFa.
Note also that a Markdown parser supporting the "{.xxxxx}" syntax will also insert this value as a CSS class on the corresponding span, so we win on both the CSS level and the semantic level.
Similarly, we could annotate a title:
### European Semantic Web Conference {.schema:Event}

Lorem ipsum...
In that case, the full content of the title is interpreted as the label of the entity:
_:1 a <http://schema.org/Event> .
_:1 rdfs:label "European Semantic Web Conference" .
Tomorrow I am travelling to [Berlin](https://www.wikidata.org/wiki/Q64) {.schema:Place}
Would yield
<https://www.wikidata.org/wiki/Q64> a <http://schema.org/Place> .
<https://www.wikidata.org/wiki/Q64> rdfs:label "Berlin" .
If a list follows an annotated entity, then it should be interpreted as a set of predicates with this entity as the subject:
### Specifications Meeting {.schema:Event}

* Date : _11/10_{.schema:startDate}
* Place {.schema:location} : Our office, Street name, 75014 Paris
* Meeting participants : {.schema:attendee}
    * Thomas Francart{.schema:Person}
    * [Someone else](https://www.wikidata.org/wiki/Q80)
    * Tim Foo
* Description : Some information not annotated

### next heading

Lorem ipsum...

Should yield:

_:1 a <http://schema.org/Event> .
_:1 rdfs:label "Specifications Meeting" .
_:1 <http://schema.org/startDate> "11/10" .
_:1 <http://schema.org/location> "Our office, Street name, 75014 Paris" .
_:1 <http://schema.org/attendee> _:2 , <https://www.wikidata.org/wiki/Q80> , _:3 .
# attendee that is annotated : we know a type and a name
_:2 a <http://schema.org/Person> .
_:2 rdfs:label "Thomas Francart" .
# attendee that is annotated with a URI : we keep the URI and add a label to it (?)
<https://www.wikidata.org/wiki/Q80> rdfs:label "Someone else" .
# attendee that is not annotated - but we know he was an attendee
_:3 rdfs:label "Tim Foo" .
Metadata for Markdown, a Python extension to generate JSON-LD from a YAML section in a Markdown document.
EDIT: Pandoc divs and spans: https://pandoc.org/MANUAL.html#divs-and-spans
I like the <span> syntax:
[This is *some text*]{.class key="val"}
This is close! But it still would not produce triples, unless one explicitly writes RDFa:
My name is [Thomas Francart]{typeof="schema:Person"}
Sparnatural: writing SPARQL queries, quite naturally
UPDATE April 2021: Sparnatural has a new website at http://sparnatural.eu!
In the screenshot above, we ask for "all the works exhibited in a French museum that exhibits a work by Caravaggio, and whose author is Italian".
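For the record, the query generated behind the scenes corresponds roughly to the following SPARQL – the DBpedia classes and properties shown here (dbo:Artwork, dbo:museum, dbo:author, dbo:country, dbo:nationality) are indicative choices, not necessarily the exact ones produced by the demo:

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>

SELECT DISTINCT ?work
WHERE {
  # works exhibited in a museum, with an author
  ?work a dbo:Artwork ;
        dbo:museum ?museum ;
        dbo:author ?author .
  # the museum is located in France...
  ?museum dbo:country dbr:France .
  # ...and also exhibits a work by Caravaggio
  ?caravaggioWork dbo:museum ?museum ;
                  dbo:author dbr:Caravaggio .
  # the author of the work is Italian
  ?author dbo:nationality dbr:Italy .
}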
The screencast above is taken from the Sparnatural demo configured on DBpedia, which you can play with online.
This component was developed in the context of the OpenArchaeo project, where it is used to browse archaeological data. It is independent from that project and can be reused under the terms of its LGPL license: the source code is open and it is forbidden to "close" it again – any modification must be published under the same license, and ideally contributed back to the project's Github repository.
Sparnatural is largely inspired by the navigation offered by the British Museum's ResearchSpace interface.
The result, beyond a simple SPARQL editor, offers a true data exploration experience, with trial-and-error, backtracking, approaching the graph from another angle, and so on.
The goal is to offer a simple and understandable way of navigating data. As a consequence, Sparnatural can only build simple SPARQL graph patterns, and cannot handle UNION, OPTIONAL, sub-selects, BIND, etc.
Moreover, the component stops at selecting the URIs of the searched objects: it is not possible for a user to choose the columns displayed in the results table. The query has to be post-processed to inject the selection of column values.
If, as in the DBpedia demo, you integrate Sparnatural with YASGui and YASR and the HTML page sends the SPARQL query itself, be aware that the SPARQL service must support CORS (Cross-Origin Resource Sharing) requests, which is not the case for all SPARQL services… but it should be!
Head over to the Sparnatural Github repository if you want a bit more documentation, or to file a ticket or a bug, or to contribute code. More demos should follow, stay tuned!