Asynchronous Processing in Storage Service
The Storage Service is implemented as a web application written using the Django framework. It exposes a REST API, which is consumed by other components in Archivematica (the dashboard, the automation tools) and can be used by third-party applications as well. All of the AIP endpoints are currently synchronous: the HTTP request made by a client blocks until the work is completed. Many of the tasks performed by the Storage Service involve significant disk I/O and can take a long time (minutes or hours) to complete.
Adding a way to perform work asynchronously is an important feature, probably a requirement before a 1.0.0 version of the Storage Service can be released.
A couple of experiments have been tried already:
 API backwards compatibility
/api/v2, add new endpoints that can do the job asynchronously. Maintain the existing endpoints.
/api/v2, update existing endpoints to accept a new query parameter
- This may not be a good idea when the shape of the response is going to be significantly different, e.g. a deferred call may return a document with a single UUID that can be used later to query the status of the call, instead of the document originally requested. This needs to be analyzed individually for each of the endpoints affected.
/api/v3, similar API but everything is async.
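The deferred-call pattern mentioned above can be sketched as follows. This is a toy in-process model, not the real API: the endpoint names, response shapes, and the `TASKS` registry are all assumptions made for illustration.

```python
import uuid

# Hypothetical in-memory task registry standing in for a real result store.
TASKS = {}  # task UUID -> {"status": ..., "result": ...}

def submit_move_aip(aip_uuid):
    """Models e.g. POST /api/v3/file/<aip_uuid>/move/ -- returns immediately
    with a task UUID instead of the document originally requested."""
    task_id = str(uuid.uuid4())
    TASKS[task_id] = {"status": "PENDING", "result": None}
    # ...the actual work would be enqueued here...
    return {"id": task_id}  # would be served with HTTP 202 Accepted

def get_task_status(task_id):
    """Models e.g. GET /api/v3/tasks/<task_id>/ -- clients poll this."""
    return TASKS[task_id]

response = submit_move_aip("some-aip-uuid")
```

The client then polls `get_task_status(response["id"])` until the status moves out of `PENDING`.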
 List of endpoints affected
A task queue frequently needs two components: a queue (broker) and a result backend (job state).
- It can use Redis both as a broker and a result backend.
- The caller can block and wait until the job is done (useful for the old API endpoints?)
- Do we want to put the results in the database instead? Make it configurable?
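The two components can be illustrated with a toy in-process model using only the standard library. In production both roles would be played by something like Redis; the names below are illustrative only.

```python
import queue
import threading

broker = queue.Queue()  # the broker: holds pending jobs
results = {}            # the result backend: job id -> outcome
done = {}               # job id -> Event, so a caller can block until completion

def worker():
    # A worker pulls jobs off the broker, runs them, and records the result.
    while True:
        job_id, func, args = broker.get()
        results[job_id] = func(*args)
        done[job_id].set()
        broker.task_done()

def enqueue(job_id, func, *args):
    done[job_id] = threading.Event()
    broker.put((job_id, func, args))

threading.Thread(target=worker, daemon=True).start()

enqueue("job-1", lambda a, b: a + b, 2, 3)
done["job-1"].wait()       # the caller blocks and waits until it's done
print(results["job-1"])    # -> 5
```

Swapping the dict for a database table is exactly the "results in the database" question above: the worker would write to the table instead, at the cost of configuration.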
RQ, Taskmaster, etc...
Alternatives should be evaluated against our own goals, e.g. reliability over performance, at-least-once delivery, simplicity...
- Acks are not necessary; consumers "commit" their position (using offsets).
- Order is preserved (the log is append-only), but this may only be relevant in a stream-oriented system, not in a job queue.
- One consumer per partition means that a long-running job keeps the consumer busy; another job may be dispatched to the same partition but can't be processed until the previous job is done.
- Complexity on the client (offset storage, at-least-once; exactly-once can be possible!)
- Complexity on the operations layer (ZooKeeper, explicit partitioning).
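The offset-based consumption model behind these points can be sketched as a toy: an append-only log plus a consumer-side committed offset instead of per-message acks. This is a simplified model, not Kafka client code.

```python
# One partition, append-only, as in Kafka's log model.
log = ["job-a", "job-b", "job-c"]
committed_offset = 0  # stored by the consumer: the client-side complexity

processed = []

def process(msg):
    processed.append(msg)

def consume_one():
    global committed_offset
    if committed_offset >= len(log):
        return None
    message = log[committed_offset]
    process(message)          # if this crashes before the commit below,
    committed_offset += 1     # the offset is not advanced and the message
    return message            # is redelivered: at-least-once delivery

while consume_one():
    pass
```

It also makes the head-of-line problem visible: the consumer cannot advance past a slow `process()` call, so every later job in the partition waits.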
disque is being written by @antirez (the creator of Redis). It's an off-the-shelf job queue. Redis 4.2 is going to bring disque as a module.
- Complexity lives in the broker (ACKs, locks...), so the client has little room for error.
- Provides iterators to read dequeued messages efficiently (with filters).
- at-least-once delivery, managed efficiently by the broker.
- Order is not guaranteed, but this makes the solution more scalable and redelivery can be done easily by the broker without extra effort on the client.
- disque 1.0 has not been released yet, only rc1. Redis 4.2 + disque is not available yet, but it is on the roadmap.
Recent versions of PostgreSQL incorporate new features that allow developers to model job queues that perform reasonably well, enough for our needs. Well-known open source apps like Concourse CI use PostgreSQL as a job queue.
- Hard to get right (unless you're using libraries like que)
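The usual pattern is to claim a row atomically so that concurrent workers never grab the same job; in PostgreSQL this is done with `SELECT ... FOR UPDATE SKIP LOCKED`. The sketch below emulates the claim step with SQLite purely so it runs without a server; the schema and column names are made up for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, status TEXT)")
db.execute("INSERT INTO jobs (payload, status) VALUES ('move-aip', 'queued')")
db.commit()

def claim_job(conn):
    # In PostgreSQL this would be:
    #   SELECT id, payload FROM jobs WHERE status = 'queued'
    #   LIMIT 1 FOR UPDATE SKIP LOCKED;
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE status = 'queued' LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    # Atomically flip the status; rowcount 0 means another worker won the race.
    updated = conn.execute(
        "UPDATE jobs SET status = 'running' WHERE id = ? AND status = 'queued'",
        (row[0],),
    ).rowcount
    conn.commit()
    return row if updated else None

job = claim_job(db)
print(job)  # -> (1, 'move-aip')
```

Getting notification (`LISTEN`/`NOTIFY`), retries, and visibility timeouts right on top of this is the hard part, which is why libraries like que exist.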
It comes up all the time, but NATS is not a great fit for the kind of job queue we need because it only implements at-most-once delivery. It's great for other purposes.
 NATS Streaming
- at-least-once delivery.
- append-only like Kafka, with replay too, but offsets are not required; the broker takes care of it (durable subscriptions).
- Is it a good fit for a job queue?