
Software Metapapers

DataDeps.jl: Repeatable Data Setup for Reproducible Data Science

Authors:

Lyndon White, The University of Western Australia, Crawley, Western Australia, AU

Roberto Togneri, The University of Western Australia, Crawley, Western Australia, AU

Wei Liu, The University of Western Australia, Crawley, Western Australia, AU

Mohammed Bennamoun, The University of Western Australia, Crawley, Western Australia, AU

Abstract

We present DataDeps.jl: a Julia package for the reproducible handling of static datasets to enhance the repeatability of scripts used in the data and computational sciences. It automates the data setup step of running software that accompanies a paper to replicate a result. This step is commonly done manually, which wastes time and invites confusion. The functionality is also useful for other packages which require data to function (e.g. a trained machine learning based model). DataDeps.jl simplifies extending research software by automatically managing the data dependencies, and makes it easier to run another author’s code, thus enhancing the reproducibility of data science research.

How to Cite: White, L., Togneri, R., Liu, W. and Bennamoun, M., 2019. DataDeps.jl: Repeatable Data Setup for Reproducible Data Science. Journal of Open Research Software, 7(1), p.33. DOI: http://doi.org/10.5334/jors.244
Submitted on 06 Aug 2018; Accepted on 03 Oct 2019; Published on 29 Oct 2019

1 Introduction

In the movement for reproducible science there have been two key requests of authors: 1. make your research code public; 2. make your data public [3]. In practice this alone is not enough to ensure that results can be replicated. Getting another author’s code running in your own computing environment is often non-trivial. One aspect of this is data setup: how to acquire the data, and how to connect it to the code.

DataDeps.jl simplifies the data setup step for software written in Julia [1]. DataDeps.jl follows the Unix philosophy of doing one job well. It allows code to depend on data, and to have that data automatically downloaded as required. It increases the replicability of any scientific code that uses static data (e.g. benchmark datasets). It provides simple methods to orchestrate the data setup, making it easy to create software that works on a new system without any user effort. While it has been argued that the direct replicability of executing the author’s code is a poor substitute for independent reproduction [2], we maintain that being able to run the original code is important for checking, for understanding, for extension, and for future comparisons.

Vandewalle et al. [5] distinguish six degrees of replicability for scientific code. The two highest levels require that “The results can be easily reproduced by an independent researcher with at most 15 min of user effort”. One can expend much of that time just on setting up the data: reading the instructions, locating the download link, transferring the data to the right location, extracting an archive, and identifying how to inform the script as to where the data is located. These tasks are automatable and therefore should be automated, as per the practice “Let the computer do the work” [7].

DataDeps.jl handles the data dependencies, while Pkg1 and BinDeps.jl2 (etc.) handle the software dependencies. This makes automated testing possible, e.g. using services such as TravisCI3 or AppVeyor.4 Automated testing is already ubiquitous amongst Julia users, but rarely for parts where data is involved. A particular advantage over manual data setup is that automation allows scheduled tests for URL decay [8]. If the full deployment process can be automated, then given the resources, research can be fully and automatically replicated in a clean continuous integration environment.
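For example, a continuous-integration script might run a package’s tests non-interactively roughly as in the following sketch. We assume here that the DATADEPS_ALWAYS_ACCEPT environment variable can be used to suppress the interactive download prompt; the package name is hypothetical:

    # Accept all data download prompts non-interactively, e.g. on a CI machine.
    ENV["DATADEPS_ALWAYS_ACCEPT"] = "true"

    using Pkg
    Pkg.test("MyResearchPackage")  # hypothetical package whose tests access data dependencies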

1.1 Three common issues about research data

DataDeps.jl is designed around solving common issues researchers have with their file-based data. The three key problems that it is particularly intended to address are:

Storage location: Where do I put it? Should it be on the local disk (small) or the network file-store (slow)? If I move it, am I going to have to reconfigure things?

Redistribution: I don’t own this data, am I allowed to redistribute it? How will I give credit, and ensure the users know who the original creator was?

Replication: How can I be sure that someone running my code has the same data? What if they download the wrong data, or extract it incorrectly? What if it gets corrupted or has been modified and I am unaware?

2 DataDeps.jl

2.1 Ecosystem

DataDeps.jl is part of a package ecosystem as shown in Figure 1. It can be used directly by research software to access the data it depends upon, e.g. for evaluations. Packages such as MLDatasets.jl5 provide more convenient access, with suitable preprocessing, for commonly used datasets. These packages currently use DataDeps.jl as a back-end. Research code might also use DataDeps.jl indirectly by making use of packages such as WordNet.jl,6 which currently uses DataDeps.jl to ensure it has the data it depends on to function (see Section 4.1); or Embeddings.jl, which uses it to load pretrained machine-learning models. Packages and research code alike depend on data, and DataDeps.jl exists to fill that need.

Figure 1 

The current package ecosystem depending on DataDeps.jl.

2.2 Functionality

Once a dependency is declared, its data can be accessed by name using a datadep string, written datadep"Name". This can be treated just like a filepath string; however, it is actually a string macro. At compile time it is replaced with a block of code which performs the operation shown in Figure 2. This operation always returns an absolute path string to the data, even if that means the data must first be downloaded and placed at that path.

Figure 2 

The process that is executed when a data dependency is accessed by name.
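For illustration, a script that has declared a dependency named "MyDataset" could access it as in the following sketch; the dataset and file names are hypothetical:

    using DataDeps

    # Resolves to an absolute path on disk, downloading and setting up
    # the data first if it is not already present.
    dir = datadep"MyDataset"

    # Files inside the dependency are then addressed with ordinary path operations.
    lines = readlines(joinpath(dir, "data.csv"))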

DataDeps.jl solves the issues in Section 1.1 as follows:

Storage location: A data dependency is referred to by name, which is resolved to a path on disk by searching a number of locations. The locations searched are configurable.

Redistribution: DataDeps.jl downloads the data from its original source, so it is not redistributed. A prompt is shown to the user before download, which can be set to display information such as the original author and any papers to cite.

Replication: When a dependency is declared, the creator specifies the URL to fetch from and the post-fetch processing to be done (e.g. extraction), which removes the chance of human error. To ensure the data is exactly as it was originally, a checksum is used.
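A registration addressing all three concerns might look like the following sketch; the name, URL, and checksum are placeholders, and we assume DataDeps.jl’s unpack helper for the post-fetch extraction:

    using DataDeps

    register(DataDep(
        "MyDataset",  # the name later used in datadep"MyDataset"
        """
        MyDataset, collected by A. Researcher.
        Licensed under CC BY 4.0. Please cite the original paper when using this data.
        """,  # message shown to the user before the download begins
        "https://example.com/mydataset.tar.gz",  # fetched from the original source; not redistributed
        "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef";  # placeholder SHA-256 checksum
        post_fetch_method = unpack  # e.g. extract the downloaded archive
    ))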

DataDeps.jl is primarily focused on public, static data. For researchers who are using private data, or who are collecting that data while developing their scripts, a manual option is provided, which includes only the storage location functionality. They can still refer to the data using datadep"Name", but it will not be automatically downloaded. At publication time the researcher can upload their data to an archival repository and update the registration.
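For example, a manual dependency might be declared roughly as follows; the name and message are illustrative, and we assume DataDeps.jl’s ManualDataDep type:

    using DataDeps

    register(ManualDataDep(
        "MyPrivateData",
        """
        This data is not publicly available.
        Place the files in a folder named MyPrivateData
        within one of the locations on the DataDeps.jl search path.
        """
    ))

    # Referring to the data works as before, but nothing is downloaded automatically.
    path = datadep"MyPrivateData"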

2.3 Similar Tools

Package managers and build tools can be used to create ad hoc solutions, but these solutions will often be harder to use and will fail to address one or more of the concerns in Section 1.1. Data warehousing tools and live data APIs work well with continuous streams of data, but they are not suitable for simple static datasets that are available as a collection of files.

Quilt7 is a tool more similar to DataDeps.jl. In contrast to DataDeps.jl, Quilt uses one centralised data store, to which users upload the data; others can then download and use that data as a software package. It does not directly attempt to handle any storage location or redistribution issues. Quilt does offer some advantages over DataDeps.jl: excellent convenience methods for some (currently only tabular) file formats, and handling of data versioning. At present DataDeps.jl does not handle versioning, being focused on static data.

2.4 Quality Control

Using AppVeyor and Travis CI, testing is automatically performed with the latest stable release of Julia on Linux, Windows, and Mac environments. The DataDeps.jl tests include unit tests of key components, as well as comprehensive system/integration tests of different configurations of data dependencies. These latter tests also form high-quality examples that supplement the documentation for users looking to see how to use the package. The user can trigger these tests, to ensure everything is working on their local machine, via the standard Julia mechanism: running Pkg.test("DataDeps").

The primary mechanism for user feedback is via GitHub issues on the repository. Bugs and feature requests, even those raised by the authors themselves, are tracked using GitHub issues.

3 Availability

3.1 Operating system

DataDeps.jl is verified to work on Windows 7+, Linux, and Mac OS X.

3.2 Programming language

Julia v0.6, and v0.7 (1.0 support forthcoming).

3.3 Dependencies

DataDeps.jl’s dependencies are managed by the Julia package manager. It depends on SHA.jl for the default generation and checking of checksums; on Reexport.jl to re-export SHA.jl’s methods; and on HTTP.jl for determining filenames from HTTP header information.
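For instance, the checksum to include in a registration can be computed from a local copy of the data with SHA.jl, roughly as in this sketch (the file name is illustrative):

    using SHA

    # Hex-encoded SHA-256 of the file, suitable for pasting into a DataDep declaration.
    checksum = open("mydataset.tar.gz") do io
        bytes2hex(sha2_256(io))
    end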

List of contributors

  • Lyndon White (The University of Western Australia) Primary Author
  • Christof Stocker (Unaffiliated), Contributor, significant design discussions
  • Sebastin Santy (Birla Institute of Technology and Science), Google Summer of Code Student working on DataDepsGenerators.jl

3.4 Software location

Name: oxinabox/DataDeps.jl

Persistent identifier: https://github.com/oxinabox/DataDeps.jl/

Licence: MIT

Date published: 28/11/2017

Documentation Language: English

Programming Language: Julia

Code repository: GitHub

4 Reuse potential

DataDeps.jl exists only to be reused: it is a “back-end” library. The cases in which it should be reused are discussed above. It is of benefit to any application, research tool, or scientific script that depends on data for its functioning or for the generation of its results.

DataDeps.jl is extensible via the usual Julia mechanisms of subtyping and composition. Additional kinds of AbstractDataDep can be created, for example to add an extra validation step, while still reusing the behaviour already defined. Such new types can be created in their own packages, or contributed to the open-source DataDeps.jl package.

Julia is a relatively new language with a rapidly growing ecosystem of packages. It is seeing considerable uptake in many fields of computational science, data science, and other technical computing. By establishing tools like DataDeps.jl now, which support the easy reuse of code, we hope to promote greater reusability of the packages being created, in turn leading to more reproducible data and computational science in the future.

4.1 Case Studies

Research Paper: White et al. [6] We criticize our own prior work here, so as to avoid casting aspersions on others. We consider its limitations and how it would have been improved had it used DataDeps.jl. Two versions of the script were provided:8 one with just the source code, and the other also including 3 GB of data. Its license goes to pains to explain which files it covers and which it does not (the data), and to explain the ownership of the data. DataDeps.jl would avoid the need to include the data, and would make the ownership clear during setup. Further, sharing the source code alone would have been enough: the data would have been downloaded when (and only if) it was required. The scripts themselves have relative paths hard-coded; if the data is moved (e.g. to a larger disk), they will break. Using DataDeps.jl to refer to the data by name would solve this.

Research Tool: WordNet.jl WordNet.jl is the Julia binding for the WordNet tool [4]. As of PR #89 it now uses DataDeps.jl. It depends on having the WordNet database. Previously, after installing the software using the package manager, the user had to download and set up this data manually. The WordNet.jl author previously had concerns about handling the data: including it would inflate the repository size and result in the data being installed to an unreasonable location, and redistributing it might violate the copyright. The manual instructions for downloading and extracting the data included multiple points of possible confusion: the gzipped tarball must be correctly extracted, and the user must know to pass in the grandparent directory of the database files. Using DataDeps.jl, all these issues have now been solved.
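The general pattern such a package follows can be sketched as below. The module name, dependency name, and URL are illustrative only, not the actual WordNet.jl code; registration happens when the module is loaded, and the data is fetched only the first time it is accessed:

    module MyLexicalPackage  # hypothetical package illustrating the pattern

    using DataDeps

    function __init__()
        register(DataDep(
            "MyLexicalDatabase",
            "A lexical database. Original creator and citation details go here.",
            "https://example.com/database.tar.gz";  # illustrative URL; fetched from the original source
            post_fetch_method = unpack  # extract the gzipped tarball correctly, every time
        ))
    end

    # The package locates its data by name; no manual setup is required from the user.
    database_path() = datadep"MyLexicalDatabase"

    end  # module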

5 Concluding Remarks

DataDeps.jl aims to help solve reproducibility issues in data-driven research by automating the data setup step. It is hoped that by supporting good practices now, with tools like DataDeps.jl, for the still-young Julia programming language, better scientific code will be written in the future.

Notes

Acknowledgements

Thanks particularly to Christof Stocker, the creator of MLDatasets.jl (and numerous other packages), for his bug reports, feature requests, and code reviews; and for the initial discussion leading to the creation of this tool.

Competing Interests

The authors have no competing interests to declare.

References

  1. Bezanson, J, Edelman, A, Karpinski, S and Shah, V B 2014 Julia: A fresh approach to numerical computing. URL http://arxiv.org/abs/1411.1607. 

  2. Drummond, C 2009 Replicability is not reproducibility: nor is it good science. Proceedings of the Evaluation Methods for Machine Learning Workshop at the 26th ICML. URL http://www.site.uottawa.ca/ICML09WS/papers/w2.pdf. 

  3. Goodman, A, Pepe, A, Blocker, A W, Borgman, C L, Cranmer, K, Crosas, M, Stefano, R D, Gil, Y, Groth, P, Hedstrom, M, Hogg, D W, Kashyap, V, Mahabal, A, Siemiginowska, A and Slavkovic, A 2014 Ten simple rules for the care and feeding of scientific data. PLOS Computational Biology, 10(4): 1–5. DOI: https://doi.org/10.1371/journal.pcbi.1003542 

  4. Miller, G A 1995 WordNet: A lexical database for English. Communications of the ACM, 38(11): 39–41. DOI: https://doi.org/10.1145/219717.219748 

  5. Vandewalle, P, Kovacevic, J and Vetterli, M 2009 Reproducible research in signal processing. IEEE Signal Processing Magazine, 26(3): 37–47. ISSN 1053-5888. DOI: https://doi.org/10.1109/MSP.2009.932122 

  6. White, L, Togneri, R Liu, W and Bennamoun, M 2016 Generating bags of words from the sums of their word embeddings. In 17th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing). 

  7. Wilson, G, Aruliah, D A, Brown, C T, Hong, N P C, Davis, M, Guy, R T, Haddock, S H D, Huff, K D, Mitchell, I M, Plumbley, M D, Waugh, B, White, E P and Wilson, P 2014 Best practices for scientific computing. PLOS Biology, 12(1): 1–7. DOI: https://doi.org/10.1371/journal.pbio.1001745 

  8. Wren, J D 2008 URL decay in MEDLINE: A 4-year follow-up study. Bioinformatics, 24(11): 1381–1385. URL https://academic.oup.com/bioinformatics/article/24/11/1381/191025. DOI: https://doi.org/10.1093/bioinformatics/btn127 
