Large datasets
- For instance, in the case of the big data Chuck just mentioned, an AIP bag should stay open until its size reaches xxMB; that bag then closes and another opens to be filled with the remaining smaller files in the dataset.
- When a single huge file has to be preserved, the critical task is splitting it into multiple smaller units. Take a movie file larger than 1 GB: on YouTube one can watch the whole movie as a series of smaller clips ("part 1 of 17" and so on). In a similar way, a built-in splitter is needed during the ingest process to create sibling AIPs for a huge file.
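The two ideas above can be sketched in a few lines of Python. This is only an illustration, not an existing ingest tool: `pack_into_bags`, `split_file`, and `join_parts` are hypothetical names, and the size threshold is a placeholder for the unspecified xxMB limit. The first function fills a bag until the next file would exceed the limit, then closes it and opens a new one; the second splits one huge file into numbered sibling parts, mirroring the "part 1 of N" pattern.

```python
import os


def pack_into_bags(files, max_bytes):
    """Greedy packing: keep the current bag open until adding the next
    file would push it past max_bytes, then close it and open a new one.
    `files` is a list of (name, size_in_bytes) pairs."""
    bags, current, current_size = [], [], 0
    for name, size in files:
        if current and current_size + size > max_bytes:
            bags.append(current)
            current, current_size = [], 0
        current.append((name, size))
        current_size += size
    if current:
        bags.append(current)
    return bags


def split_file(path, chunk_size, out_dir):
    """Split `path` into numbered parts, each at most chunk_size bytes.
    Each part could then be packaged as its own sibling AIP."""
    os.makedirs(out_dir, exist_ok=True)
    parts, index = [], 1
    with open(path, "rb") as src:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            part_path = os.path.join(
                out_dir, f"{os.path.basename(path)}.part{index:04d}")
            with open(part_path, "wb") as dst:
                dst.write(chunk)
            parts.append(part_path)
            index += 1
    return parts


def join_parts(parts, out_path):
    """Reassemble the parts (in order) into one file for access or
    fixity verification after preservation."""
    with open(out_path, "wb") as dst:
        for part in parts:
            with open(part, "rb") as src:
                dst.write(src.read())
```

A real splitter would also have to record the part order and checksums in the AIP metadata so the original file can be verifiably reconstructed; that bookkeeping is omitted here.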