When updating a large dataset held in a single named graph, most triplestores seem to require something like:
DROP GRAPH <http://example.com/foo> ;
LOAD <http://example.com/foo> INTO GRAPH <http://example.com/foo>
For large datasets, this leaves endpoints and applications using the store with a window during which the data is partly or wholly missing, between the drop and the completion of the load, and potentially much longer if the load fails.
Is there any best practice to avoid this?
Are there any triplestores which support transactional updates of named graphs, or allow data to be read without interruption during an update?
One option seems to be to run two stores (one serving reads while the other is updated), but this seems sub-optimal when multiple datasets need updating at irregular intervals.
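For context, the closest I have found within a single store is loading into a staging graph and then swapping it in, so that readers keep seeing the old data until the load has fully succeeded. This is only a sketch, assuming a SPARQL 1.1 Update endpoint; the staging graph name `http://example.com/foo-staging` is made up, and whether MOVE is atomic with respect to concurrent readers appears to be store-specific:

```sparql
# Load the new data into a staging graph; readers still see the old graph.
LOAD <http://example.com/foo> INTO GRAPH <http://example.com/foo-staging> ;

# Replace the live graph with the staging graph in a single update operation.
# If the LOAD above fails, the live graph is left untouched.
MOVE GRAPH <http://example.com/foo-staging> TO GRAPH <http://example.com/foo>
```

Even so, this only narrows the window rather than guaranteeing uninterrupted reads, which is why I am asking whether stores offer true transactional isolation for this.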