Recently I've been helping Yaron do some SPARQL query optimization for his site Commons Walkabout.
It's a cool site. It lets you explore the media on Wikimedia Commons by filtering through various metadata fields.
For example - bodies of water located in a place that is in Ireland.
It's addictive too. The media with the best-quality metadata tend to be items donated by museums, which often means they are quite interesting.
[Image: the Arabah desert in 1952, photographed by Benno Rothenberg. An image the Commons Walkabout app showed me that I thought was pretty.]
Structured data on Commons
As a result of helping with the optimization, I've been exploring structured (Wikidata-style) data on commons.
The structured data project has been a bit controversial. I think there is a feeling in the community that the WMF abandoned the project at the 90% done point. It does mostly work, but there are still a lot of rough edges that make it less successful than it would otherwise be. Tools for authoring, interacting with, and maintaining the metadata are lacking, from what I understand. Most files are not described by metadata as thoroughly as they ought to be. Importantly, there is now a dual system of organization: the traditional free-text image description pages and category system alongside the newer structured metadata. Doing both means we're never fully committed to either.
The biggest criticism is the situation with the
commons query service (See
T297995 and
T376979). Right now the service requires you to log in first. In the beginning this sounded like it would be a temporary restriction, but it now appears permanent.
Logging in is a big issue because it makes it impossible to build client-side apps that act as a front-end to the query service (it's not theoretically impossible, but the WMF's implementation of logging in doesn't support that). The auth implementation is not very user friendly, which is a significant hindrance, especially since many people who want to run queries aren't professional programmers (for example, the official instructions suggest using the browser dev console to look up the value of certain cookies as one of the steps to use the system). Some users have described the authentication system as such a hindrance that it would make more sense to shut the whole thing down than to keep it behind auth.
SPARQL is designed for a linked ecosystem where you can query remote databases. The auth system means the commons query service can't be used in federation: it can talk to other servers, but other servers cannot talk to it.
It's a bit hard to understand why WMF is doing this. Wikidata is fully open, and that is a much larger dataset of interest to a much broader group of people. If blazegraph is hard to run (which, don't get me wrong, I am sure it is), the commons instance should be trivial compared to the Wikidata one. You can just look at the
usage graphs that clearly show almost nobody using the commons query service relative to the wikidata query service. The commons query service seems to be averaging about 0 requests per minute with occasional spikes up to 15-40 reqs/min. In comparison,
the wikidata query service seems to average about 7500 reqs/minute.
I've heard various theories: that this is the beginning of an
enshittification process so that Wikimedia can eventually sell access under the Wikimedia Enterprise banner, or that they don't want AI companies to scrape all the data (why an AI company would want to scrape this but not Wikidata, and why we would want to prevent them, I have no idea). These aren't really super convincing to me.
I suspect the real reason is that the WMF has largely cut funding to the structured data project. That just leaves a sysadmin team responsible for the blazegraph query endpoint. Normally, though, such a team would work in concert with another team more broadly responsible for SDC. With no such other team, the blazegraph sysadmin team is very scared of being sucked into a position where suddenly they are solely responsible for things that should be outside their team's remit. They really don't want that to happen (hard to blame them), so they are putting the brakes on moving things forward with the commons query service.
This is just a guess. I don't know for sure what is happening or why, but that is the theory that makes the most sense to me.
The data model
Regardless of the above, the structured data project and SPARQL are really cool. I actually really like them.
While playing with it though, some parts do seem kind of weird to me.
Blank nodes for creator
The creator property says who created the image. Ideally the value is supposed to be a Q-number, but many creators don't have one.
The solution is to use a blank node. This makes sense: blank nodes in RDF are placeholders. They aren't equal to any other node, but they allow you to specify properties.
If the relationship chain were:
<sdc:Some image> <wdt:P170 (creator)> <blank node> <wdt:P4174 Wikimedia Username> "Some username"
that would be fine. However, it's not. Instead, the creator statement is reified so that <wdt:P4174 Wikimedia Username> "Some username" modifies the creator statement as a qualifier instead of being a property of the blank node.
This feels ontologically weird to me. It's kind of weird that we have to resort to such a hack for what is undoubtedly the main use case of this system.
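To make that concrete, here is a rough sketch of what querying the actual structure looks like (untested, and assuming the p:/ps:/pq: prefixes the query service predefines): you go through the statement node and read the username off the P4174 qualifier, rather than off the creator value itself.

    # Sketch: files whose creator is recorded only as a Wikimedia username.
    SELECT ?file ?username WHERE {
      ?file p:P170 ?creatorStatement .        # the creator statement node
      ?creatorStatement ps:P170 ?creator .    # usually a blank node
      ?creatorStatement pq:P4174 ?username .  # Wikimedia username as a qualifier
    }
    LIMIT 10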
Functionally dependent predicates
Some predicates functionally depend on the file in question. I feel like these should be added by the system. They should not be controlled or edited by the user.
For example P3575 data size. That's technical metadata. Users should not be responsible for inserting it. Users should not be able to change it. The fact that they are is a poor design in my opinion. Similarly for P2048 height, P2049 width, P1163 media type, P4092 checksum.
I find P1163 media type (aka the mime type of the file) especially weird. Why is this a string? Surely it should be the Q number of the file format in question if we're going to be manually filling it out?
The especially weird part is that some of this data is already added to the system automatically in the schema namespace. The system automatically adds schema:contentSize, schema:encodingFormat, schema:height, schema:width, and schema:numberOfPages, which are equivalent to some of these properties. So why are we duplicating them by hand (or by bot)?
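As an illustration, here is an untested sketch that pulls both versions side by side (I'm assuming the schema: triples hang off the same file entity as the wdt: statements, which is how it appears to me, but treat that as an assumption):

    # Sketch: compare the automatic schema triples with the hand-maintained statements.
    SELECT ?file ?autoMime ?manualMime ?autoSize ?manualSize WHERE {
      ?file schema:encodingFormat ?autoMime .      # added automatically
      ?file schema:contentSize ?autoSize .         # added automatically
      OPTIONAL { ?file wdt:P1163 ?manualMime . }   # media type, filled in by hand/bot
      OPTIONAL { ?file wdt:P3575 ?manualSize . }   # data size, filled in by hand/bot
    }
    LIMIT 10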
At the same time, there seems to be a lot missing from schema. I don't see the sha1 hash (sha1 isn't the best hash since it is now broken, but it's the one MediaWiki uses internally for images). I'd love to see XMP file metadata included here as well, since it is already in RDF format.
The thing that's really missing (unless I missed it) is that it seems impossible to get the URL of the file description page, or even the MediaWiki page title, without string manipulation. This seems like a bizarre omission. I should be able to go from the canonical URL at commons to the SDC item.
Querying the system
Querying the system can be a bit tricky sometimes because the data is usually spread between commons and wikidata, so you have to make use of SERVICE clauses to query the remote data, which can be slow.
The main trick seems to be to try and minimize the cross database communication. If you have a big dataset, try and minimize it before communicating with the remote database (or the label service).
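As a sketch of the shape I mean (untested, assuming the standard predefined prefixes and Wikidata's public SPARQL endpoint): narrow the commons-side result set first, then ship only those bindings over to Wikidata.

    # Sketch: a small set of files and what they depict, with labels fetched
    # remotely only for that small set.
    SELECT ?file ?depicted ?depictedLabel WHERE {
      {
        SELECT ?file ?depicted WHERE {
          ?file wdt:P180 ?depicted .   # "depicts" statements stored on commons
        }
        LIMIT 100                      # keep the set small before going remote
      }
      SERVICE <https://query.wikidata.org/sparql> {
        ?depicted rdfs:label ?depictedLabel .
        FILTER(LANG(?depictedLabel) = "en")
      }
    }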
Copious use of named subqueries (due to them being isolated by the optimizer) can really help here.
If you are fetching distinct terms (or counts of distinct terms), ensuring that blazegraph can use the distinct term optimization is very helpful. It seems like the blazegraph query optimizer isn't very good and often cannot use this optimization even when it should. Making the group by very simple and putting it into a named subquery can help with this.
The distinct term optimization is critically important for running fast aggregation queries. Often it makes sense to first get the list of distinct terms you are interested in and their count (if applicable) in a named subquery, then go and fetch information for each item or filter them instead of doing it all in one group by.
If you have a slow query that involves any sort of group by, the first thing I would suggest is to extract a simple group by into a subquery (by simple I mean: only 1 basic graph pattern, no services, grouping by only 1 term, and either no aggregate functions or the only aggregate function being count(*)) and then use the results of that query as the basis of the rest of your query.
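Here is a rough sketch of that pattern (untested; the WITH ... AS %name / INCLUDE syntax is blazegraph's named subquery extension): the inner query is a deliberately simple group by, a second named subquery narrows it down, and only then do we go remote for labels.

    # Sketch: count files per depicted item, then label only the top results.
    SELECT ?depicted ?depictedLabel ?count
    WITH {
      # Keep this as simple as possible: one triple pattern, one grouping
      # term, count(*) as the only aggregate.
      SELECT ?depicted (COUNT(*) AS ?count) WHERE {
        ?file wdt:P180 ?depicted .
      }
      GROUP BY ?depicted
    } AS %counts
    WITH {
      # Cut the result set down before doing anything expensive with it.
      SELECT ?depicted ?count WHERE {
        INCLUDE %counts .
      }
      ORDER BY DESC(?count)
      LIMIT 25
    } AS %top
    WHERE {
      INCLUDE %top .
      SERVICE <https://query.wikidata.org/sparql> {
        ?depicted rdfs:label ?depictedLabel .
        FILTER(LANG(?depictedLabel) = "en")
      }
    }
    ORDER BY DESC(?count)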
If it's still too much, using the
bd:sample service can be really helpful. This runs the query over a random subset of results instead of the whole thing. If you just want to get the broad trends this can often be good enough.
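A rough sketch of what that looks like (untested; I'm only showing the sample limit parameter, and the property and numbers here are just for illustration):

    # Sketch: estimate which licenses (P275) are most common by sampling the
    # statements instead of scanning all of them.
    SELECT ?license (COUNT(*) AS ?count) WHERE {
      SERVICE bd:sample {
        ?file wdt:P275 ?license .
        bd:serviceParam bd:sample.limit 100000 .  # sample size, tune as needed
      }
    }
    GROUP BY ?license
    ORDER BY DESC(?count)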
The most complicated thing to query seems to be the P170 creator predicate. There are 72 million items with that predicate, and the vast majority of the values are blank nodes, so the number of distinct terms is in the millions (even for files with the same creator, the values are considered distinct if they are blank nodes). Queries involving it seem to almost always time out.
Initially I thought the best that could be done was sampling and interpolating. For example,
this query that gives you the top creators (who have Q numbers). The numbers aren't exactly right, but they seem to be within the right order of magnitude.
Unfortunately, filtering via wikibase:isSomeValue() is very slow, so we can't just filter out the blank nodes. I did find a hack though. In general blazegraph arranges blank nodes at the end of the result set (or at least, it seems to in this case). If you do a subquery of distinct terms with a limit of about 10,000 you can get all the non-blank nodes (since there are only about 7200 of them and they are at the beginning). This is hacky, since you can't use range queries with URI values and you can't even put an order by on the query or it will slow down, so you just have to trust that blazegraph consistently returns things in the order you want even though it is by no means required to. It seems to work. For example,
here is an example table using this method, counting the number of images created by creators (with Q numbers) grouped by their cause of death. A bit morbid, but it is fascinating that you can make such an arbitrary query.
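For what it's worth, the shape of that hack is roughly this (untested sketch; the 10,000 limit and the assumption that the blank nodes come last are exactly the fragile parts described above):

    # Sketch: get the distinct creators cheaply, keep only the Q-number ones,
    # then count files per creator.
    SELECT ?creator (COUNT(?file) AS ?count)
    WITH {
      SELECT DISTINCT ?creator WHERE {
        ?file wdt:P170 ?creator .
      }
      LIMIT 10000        # no ORDER BY on purpose: adding one slows it right down
    } AS %creators
    WHERE {
      INCLUDE %creators .
      FILTER(isIRI(?creator))      # cheap now that there are only ~10,000 rows
      ?file wdt:P170 ?creator .    # re-join to count files per creator
    }
    GROUP BY ?creator
    ORDER BY DESC(?count)
    LIMIT 100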
Conclusion
Anyways, I do find RDF and SPARQL really cool, so it's been fun poking around in commons' implementation of them. Check out Yaron's site, https://commonswalkabout.org/; it is really cool.