Approaches to estimating the universe of natural history collections data

Arturo H. Ariño

Abstract


This contribution explores the problem of recognizing and measuring the universe of specimen-level data existing in Natural History Collections around the world, in the absence of a complete, world-wide census or register. Size estimates seem necessary for planning resource allocation for digitization or data capture, and may help indicate how many vouchered primary biodiversity data (in terms of collections, specimens, or curatorial units) might remain to be mobilized.
Three general approaches are proposed for further development, and initial estimates are given. Probabilistic models cross-reference a set of biodiversity datasets, find their commonalities, and estimate the likelihood of entirely unrecorded data from the fraction of known data missing from specific datasets in the set. Distribution models aim to find the underlying distribution of collections' compositions and infer the hidden sector of those distributions. Finally, case studies compare digitized data from collections known to the world with the amount of data known to exist in a collection but not generally available or not digitized.
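The probabilistic approach described above resembles a capture-recapture estimate: if two independent registries each "capture" a fraction of the world's collections, the size of their overlap suggests how many collections remain unseen by both. A minimal sketch, using the classic Lincoln-Petersen estimator and purely hypothetical counts (the function name and numbers are illustrative assumptions, not figures from the paper):

```python
def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Estimate a total population size from two independent samples.

    n1, n2:  number of collections listed in each registry
    overlap: number of collections appearing in both registries
    """
    if overlap == 0:
        raise ValueError("estimator is undefined when the samples do not overlap")
    return n1 * n2 / overlap


# Hypothetical example: registry A lists 4000 collections, registry B lists
# 3000, and 2000 appear in both. The estimated universe is then ~6000
# collections, implying ~1000 collections absent from either registry.
estimated_total = lincoln_petersen(4000, 3000, 2000)
print(estimated_total)  # 6000.0
```

In practice the paper's models would need to account for non-independence between registries (well-known collections are more likely to appear in several datasets), which biases this simple estimator downward.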
Preliminary estimates range from 1.2 to 2.1 gigaunits, of which at most a mere 3% is currently web-accessible through GBIF's mobilization efforts. However, further data and analyses, along with other approaches relying more heavily on surveys, might change this picture and help narrow the estimate. In particular, unknown collections that have not surfaced in the literature are the major source of uncertainty.

Keywords


Natural history collections; size; estimates; primary biodiversity data



DOI: https://doi.org/10.17161/bi.v7i2.3991

Copyright (c) 2010 Arturo H. Ariño



Biodiversity Informatics. ISSN: 1546-9735
Hosted by the University of Kansas Libraries.