Avoiding data outages during large dataset re-imports

When replacing a large dataset held in a single named graph, it seems that with most triplestores the update consists of performing something like the following (in SPARQL 1.1 Update terms):

DROP GRAPH <http://example.com/foo> ;
LOAD <http://example.com/foo> INTO GRAPH <http://example.com/foo>

For a large dataset this leaves the endpoints and applications using the store with no data for the whole window between the delete and the load, and potentially for much longer if the load operation fails.

Is there any best practice to avoid this?

Are there any triplestores which support transactional updates of named graphs, or allow data to be read without interruption during an update?

One option seems to be to run multiple stores (one live while the other is updated), but this feels sub-optimal if several datasets need to be updated at irregular intervals.

Why not just apply updates directly to the data? For example, in the Talis Platform we support the notion of Changesets, which describe a diff that is applied to an RDF graph. These can be applied in batches to handle lots of updates atomically. However, for very large updates the batch size might become prohibitive. It's fairly trivial to implement changeset support in any RDF store.
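
The same diff-style idea can be sketched in plain SPARQL 1.1 Update (this is not the Talis Changeset format itself): the removals and additions are sent together in a single request, so readers see either the old state or the new one rather than a gap, provided the store executes the request transactionally. The graph and triples below are purely illustrative.

# Remove the outdated triples and add the replacements in one request.
DELETE DATA {
  GRAPH <http://example.com/foo> {
    <http://example.com/thing/1> <http://purl.org/dc/terms/title> "Old title" .
  }
} ;
INSERT DATA {
  GRAPH <http://example.com/foo> {
    <http://example.com/thing/1> <http://purl.org/dc/terms/title> "New title" .
  }
}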

If you're really replacing a dataset wholesale, then I'd say that keeping a separate copy and switching between the two is a reasonable approach.
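
One way to get a similar effect inside a single store is to load into a staging graph and then swap it for the live graph. This is only a sketch in SPARQL 1.1 Update: the staging graph URI is made up, and whether readers can ever observe a missing graph during the MOVE depends entirely on the store's transaction support.

# Load the new data into a staging graph; readers keep querying the live graph.
LOAD <http://example.com/foo> INTO GRAPH <http://example.com/foo/staging> ;
# Replace the live graph with the staging graph in a single operation.
MOVE GRAPH <http://example.com/foo/staging> TO GRAPH <http://example.com/foo>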