In economics, there’s the concept (phenomenon, if you will) of the impossible trinity. It’s a trilemma in international economics which states that it’s impossible to have all three of the following at the same time:
- A fixed foreign exchange rate
- Free capital movement (absence of capital controls)
- An independent monetary policy
Reading about it in an article the other day, I came to think that the same kind of impossible trinity exists in metadata. In that article, the author (Nikhil Sonnad) talked about the impossible trinity of encyclopedias, which I would argue holds for digital content in general.
As the article argues, the impossible trifecta in the case of encyclopedias is for the content to be (i) authoritative, (ii) comprehensive and (iii) up-to-date. By definition, the formal processes and peer-review mechanisms that ensure “authoritative” are usually slow, which contradicts “up-to-date”. In the same sense, being “comprehensive” requires so much effort that it is difficult to keep the content both authoritative and timely enough to be considered up-to-date.
But how does this apply to metadata? Metadata is content too, so thinking in similar terms, authoritative, comprehensive and up-to-date apply as well. And then again, not quite. Metadata is by nature specific to the learning object it describes: as long as the learning object stays up-to-date, its metadata will be fine, and when the learning object becomes irrelevant, its metadata most probably will too. On the other hand, some metadata can and should be updated, especially the elements related to technical compatibility or formatting, in case the digital resource itself is updated or preserved.
To choose my trinity, I will look into the metadata quality metrics proposed by Bruce & Hillmann. In their work, they propose the following quality metrics for metadata:
- Completeness,
- Accuracy,
- Provenance,
- Conformance to Expectations,
- Logical Consistency & Coherence,
- Timeliness,
- Accessibility.
Although all of them are really critical, for my impossible trifecta I would have to choose (i) Accuracy, (ii) Logical Consistency & Coherence and (iii) Timeliness. In the initiatives I have been involved in so far, it has been really difficult to keep all three of these metrics at an acceptable level at the same time.
- Accuracy: Have accepted methods been used for creation or extraction? What has been done to ensure valid values and structure? Are default values appropriate, and have they been appropriately used?
- Logical Consistency & Coherence: Is data in elements consistent throughout? How does it compare with other data within the community?
- Timeliness: Is metadata regularly updated as the resources change? Are controlled vocabularies updated when relevant?
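To make these three metrics a bit more tangible, here is a minimal sketch of what automated checks along these lines could look like, in Python. The record structure, the controlled vocabulary and the staleness threshold are all hypothetical choices of mine, not something prescribed by Bruce & Hillmann:

```python
from datetime import date, timedelta

# Hypothetical controlled vocabulary and freshness threshold, for illustration only.
SUBJECT_VOCABULARY = {"agriculture", "biology", "philosophy"}
STALENESS_THRESHOLD = timedelta(days=365)

def check_accuracy(record: dict) -> list[str]:
    """Accuracy: are values valid, e.g. drawn from the agreed vocabulary?"""
    issues = []
    if record.get("subject") not in SUBJECT_VOCABULARY:
        issues.append(f"subject {record.get('subject')!r} not in controlled vocabulary")
    if not record.get("title"):
        issues.append("title is empty or missing")
    return issues

def check_consistency(records: list[dict]) -> list[str]:
    """Logical Consistency & Coherence: is the same element used the same way throughout?"""
    issues = []
    # A crude proxy: every record should store 'updated' using the same type.
    types_used = {type(r.get("updated")).__name__ for r in records}
    if len(types_used) > 1:
        issues.append(f"'updated' stored inconsistently across records: {sorted(types_used)}")
    return issues

def check_timeliness(record: dict, resource_updated: date) -> list[str]:
    """Timeliness: has the metadata kept up with changes to the resource?"""
    issues = []
    if record["updated"] < resource_updated:
        issues.append("metadata is older than the resource it describes")
    elif date.today() - record["updated"] > STALENESS_THRESHOLD:
        issues.append("metadata has not been reviewed in over a year")
    return issues

if __name__ == "__main__":
    record = {"title": "Socrates", "subject": "philosophy", "updated": date(2023, 1, 10)}
    print(check_accuracy(record))   # [] — subject and title pass the checks
    print(check_timeliness(record, resource_updated=date(2024, 6, 1)))
```

A real repository would of course need much richer rules per application profile, but even this toy version hints at the tension: the stricter the accuracy and consistency checks, the slower it becomes to keep everything timely.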
No matter how much I try to wrap my head around this, I really find that scoring high on all three of these metrics is even more difficult than the work Blade had to carry out in his movies! Especially when we talk about repositories aggregating various collections from various sources, trying to serve the needs of diverse user communities. 😉
The most interesting thing about the article, though, was the way the Stanford Encyclopedia of Philosophy set up a process and a workflow to achieve its trifecta. Having subject editors and authors work hand-in-hand to develop and curate the content is a model that works. It works in this domain and case, even if it has proven not to work everywhere. Nevertheless, its successful introduction in the case of philosophy is an incentive and a starting point to tinker with the model and see if it can prove useful in other cases as well. This article on Socrates is enough to prove the high quality of the content produced. My humble opinion is that if we find a uniform way to accredit content creators and metadata annotators across platforms and systems, linking their expertise to their professional development, then maybe we will come one step closer to hitting our own trifecta, without the need for those cool vampire-killing blades! 😉