Dataverse

From Archivematica
Revision as of 18:49, 24 August 2015


This page tracks development of a proof of concept integration of Archivematica with Dataverse.


Overview

This wiki captures requirements for ingesting studies (datasets) from Dataverse into Archivematica for long-term preservation.

Workflow

  • The proposed workflow consists of issuing API calls to Dataverse, receiving content (data files and metadata) for ingest into Archivematica, preparing standard Archivematica Archival Information Packages (AIPs) and placing them in archival storage, and updating the Dataverse study with the AIP UUIDs.
  • Analysis is based on Dataverse tests using https://apitest.dataverse.org/ and https://dataverse-demo.iq.harvard.edu/, the online documentation at http://guides.dataverse.org/en/latest/api/index.html, and discussions with Dataverse developers and users.
  • Proposed integration is for Archivematica 1.5 and higher and Dataverse 4.x.
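Sketched end to end, the proposed workflow amounts to a handful of API calls. The following is a minimal sketch; the base URL is one of the test servers listed above, the endpoint paths follow the Dataverse 4.x search and data access APIs, and all should be checked against the API guide rather than taken as confirmed interfaces:

```python
# Sketch of the proposed Dataverse-to-Archivematica workflow (assumptions:
# endpoint paths are drawn from the Dataverse 4.x API guide and may differ
# in detail; updating the study with AIP UUIDs is not shown).

BASE = "https://apitest.dataverse.org"

def search_url(query="*", dataset_type="dataset"):
    """API call to find new or updated published studies."""
    return f"{BASE}/api/search?q={query}&type={dataset_type}"

def dataset_url(entity_id):
    """API call for a study's json metadata (citation, file list, md5s)."""
    return f"{BASE}/api/datasets/{entity_id}"

def datafile_url(file_id, bundled=False):
    """API call to download a data file; tabular files would use the
    multiple-file ('bundled') content download described in note [4]."""
    if bundled:
        return f"{BASE}/api/access/datafile/bundle/{file_id}"
    return f"{BASE}/api/access/datafile/{file_id}"
```

The ingest script would loop over search results, fetch each study's json, download each listed file (bundled for tabular files), and hand the assembled transfer to Archivematica.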

Workflow diagram

Dataverse-Archivematica workflow 1.png

Workflow diagram notes

[1] "Ingest script" refers to an automation tool designed to automate ingest into Archivematica for bulk processing. An existing automation tool would be modified to accomplish the tasks described in the workflow.

[2] A new or updated study is one that has been published, either for the first time or as a new version, since the last API call.

[3] The json file contains citation and other study-level metadata, an entity_id field that is used to identify the study in Dataverse, version information, a list of data files with their own entity_id values, and md5 checksums for each data file.
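As a rough illustration of note [3], the sketch below parses a heavily simplified stand-in for the study json. The field names and values are hypothetical (real Dataverse 4.x exports are more deeply nested), but they show the pieces the ingest script relies on: the study's entity_id, version information, and per-file entity_id and md5 values.

```python
import json

# Hypothetical, heavily simplified study metadata as returned by Dataverse.
# Field names and nesting are illustrative only.
SAMPLE_STUDY_JSON = """
{
  "entity_id": 123,
  "version": "1.0",
  "citation": {"title": "YVR weather data", "author": "Example Researcher"},
  "files": [
    {"entity_id": 456,
     "label": "YVR_weather_data.tab",
     "content_type": "text/tab-separated-values",
     "md5": "0912f3b4c5"}
  ]
}
"""

study = json.loads(SAMPLE_STUDY_JSON)

# The study-level entity_id identifies the study in Dataverse; each file
# carries its own entity_id plus an md5 that can be verified after download.
tabular_files = [f for f in study["files"]
                 if f["content_type"] == "text/tab-separated-values"]
```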

[4] If the json file indicates a content_type of tab-separated values, Archivematica issues an API call for the multiple-file ("bundled") content download. For tabular files this returns a zipped package containing the .tab file, the original uploaded file, several other derivative formats, a DDI XML file, and file citations in EndNote and RIS formats.
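The unzipping step in note [4] can be sketched as follows. The member file names mirror the sample study discussed later on this page and are illustrative; a stand-in bundle is built in memory so the sketch runs without a Dataverse server:

```python
import io
import zipfile

# Illustrative contents of a "bundled" download for one tabular file
# (names are assumptions based on the sample study on this page).
BUNDLE_MEMBERS = [
    "YVR_weather_data.tab",         # tab-separated derivative
    "YVR_weather_data.sav",         # original uploaded file
    "YVR_weather_data.RData",       # additional derivative
    "YVR_weather_data-ddi.xml",     # DDI metadata
    "YVR_weather_data.ris",         # RIS citation
    "YVR_weather_data-endnote.xml", # EndNote citation
]

def unpack_bundle(bundle_bytes):
    """Return {member name: content} for every file in a bundle zip,
    as the ingest script would do before assembling the transfer."""
    with zipfile.ZipFile(io.BytesIO(bundle_bytes)) as zf:
        return {name: zf.read(name) for name in zf.namelist()}

# Build a stand-in bundle so the sketch is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for name in BUNDLE_MEMBERS:
        zf.writestr(name, b"placeholder")

members = unpack_bundle(buf.getvalue())
```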

[5] The METS file will consist of a dmdSec containing the DC elements extracted from the json file, and a fileSec and structMap indicating the relationships between the files in the transfer (e.g. original uploaded data file, derivative files generated for tabular data, metadata/citation files). This will allow Archivematica to apply appropriate preservation micro-services to different file types and provide an accurate representation of the study in the AIP METS file (step 1.9).
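A minimal sketch of the METS generation in note [5], grouping files into one fileGrp per USE value. This is not the actual ingest script; real METS FLocat elements use xlink:href with additional required attributes, simplified here to a plain href for brevity:

```python
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"
ET.register_namespace("mets", METS_NS)

def build_transfer_mets(file_groups):
    """Build a skeletal transfer METS: a dmdSec placeholder for the DC
    elements from the json file, plus one fileGrp per USE value
    (original / derivative / metadata) and an empty structMap."""
    mets = ET.Element(f"{{{METS_NS}}}mets")
    ET.SubElement(mets, f"{{{METS_NS}}}dmdSec", ID="dmdSec_1")
    filesec = ET.SubElement(mets, f"{{{METS_NS}}}fileSec")
    for use, names in file_groups.items():
        grp = ET.SubElement(filesec, f"{{{METS_NS}}}fileGrp", USE=use)
        for i, name in enumerate(names, 1):
            f = ET.SubElement(grp, f"{{{METS_NS}}}file", ID=f"file-{use}-{i}")
            # Simplification: real METS uses xlink:href here.
            ET.SubElement(f, f"{{{METS_NS}}}FLocat", href=name)
    ET.SubElement(mets, f"{{{METS_NS}}}structMap", TYPE="logical")
    return mets

mets = build_transfer_mets({
    "original": ["Study info.pdf", "YVR_weather_data.sav"],
    "derivative": ["YVR_weather_data.tab"],
})
xml = ET.tostring(mets, encoding="unicode")
```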

[6] Archivematica ingests all content returned from Dataverse, including the json file, plus the METS file generated in step 1.6.

[7] Standard and pre-configured micro-services include: assign UUID, verify checksums, generate checksums, extract packages, scan for viruses, clean up filenames, identify formats, validate formats, extract metadata and normalize for preservation.

Transfer METS file

When the ingest script retrieves content from Dataverse, it generates a METS file to allow Archivematica to understand the contents of the transfer and the relationships between its various data and metadata files.

Sample transfer METS file

This sample shows what a transfer METS file might look like for a study consisting of one tabular data file and one non-tabular data file. The non-tabular file (Study info.pdf) has been downloaded as an individual file, and the tabular file (YVR_weather_data.tab and related files) as a zipped file which has been unzipped by the ingest script. The zipped file included a .sav file, which was the original data file uploaded to Dataverse by the researcher; this is referenced in the fileGrp USE="original" in the fileSec. The pdf file is the other file in that file group. The .tab file and another derivative, an .rData version, are both identified as fileGrp USE="derivative". The zipped file also included three metadata files generated by Dataverse for the .tab file, which have been identified as fileGrp USE="metadata". The json file for the dataset is also included in that file group.

[[File:METS1G.png|800px|thumb|center]]

[[File:METS2G.png|800px|thumb|center]]
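For readers who cannot view the screenshots, a hand-written fileSec fragment consistent with the description above might look like this (file names for the metadata group and all IDs are illustrative assumptions, not taken from the actual sample):

```xml
<mets:fileSec>
  <mets:fileGrp USE="original">
    <mets:file ID="file-1"><mets:FLocat xlink:href="Study info.pdf"/></mets:file>
    <mets:file ID="file-2"><mets:FLocat xlink:href="YVR_weather_data.sav"/></mets:file>
  </mets:fileGrp>
  <mets:fileGrp USE="derivative">
    <mets:file ID="file-3"><mets:FLocat xlink:href="YVR_weather_data.tab"/></mets:file>
    <mets:file ID="file-4"><mets:FLocat xlink:href="YVR_weather_data.RData"/></mets:file>
  </mets:fileGrp>
  <mets:fileGrp USE="metadata">
    <mets:file ID="file-5"><mets:FLocat xlink:href="YVR_weather_data-ddi.xml"/></mets:file>
    <mets:file ID="file-6"><mets:FLocat xlink:href="YVR_weather_data.ris"/></mets:file>
    <mets:file ID="file-7"><mets:FLocat xlink:href="YVR_weather_data-endnote.xml"/></mets:file>
    <mets:file ID="file-8"><mets:FLocat xlink:href="dataset.json"/></mets:file>
  </mets:fileGrp>
</mets:fileSec>
```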