I'm attempting to synchronise an offline data store in Core Data with an online store.
To maintain the last synchronisation point for both data coming in to the local cache and data going out, I'm using change tokens: an NSPersistentHistoryToken to track the last data that went out, and a server-provided token for the data coming in.
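For context, the outgoing token comes from a persistent history fetch along these lines (a rough sketch; lastHistoryToken stands in for whichever token was last persisted):

import CoreData

do {
    // Fetch the history transactions recorded after the last-known token.
    // A nil token on first launch fetches all available history.
    let request = NSPersistentHistoryChangeRequest.fetchHistory(after: lastHistoryToken)
    let result = try context.execute(request) as? NSPersistentHistoryResult
    let transactions = result?.result as? [NSPersistentHistoryTransaction] ?? []

    // The newest transaction's token becomes the next synchronisation point.
    let newHistoryToken = transactions.last?.token
} catch {
    print("History fetch failed: \(error)")
}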
In terms of the best place to store the tokens, it seems to me it would be in the local store (NSPersistentStore) itself – saving the incoming data and the token can then be tied into a neat save transaction; if there's a problem saving the incoming data, the token won't be updated.
One way of achieving this would be to create a new Entity in my model to track the tokens (sketched at the end of this post), but it appears NSPersistentStoreCoordinator already has a tidy pair of metadata functions that might be perfect for this: metadata(for:) and setMetadata(_:for:).
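One wrinkle with the metadata route, as far as I can tell, is that the metadata dictionary only accepts property-list types, so the history token would first need archiving to Data, something like this (historyToken stands in for the token in hand):

do {
    // Assumption: metadata values must be property-list types, so the
    // history token is archived to Data before being stored.
    // (NSPersistentHistoryToken supports NSSecureCoding.)
    let tokenData = try NSKeyedArchiver.archivedData(
        withRootObject: historyToken, requiringSecureCoding: true)

    // ...and unarchived when read back out of the metadata dictionary.
    let restoredToken = try NSKeyedUnarchiver.unarchivedObject(
        ofClass: NSPersistentHistoryToken.self, from: tokenData)
} catch {
    print("Token archiving failed: \(error)")
}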
However, as I am considering updating the metadata after operations on separate managed object contexts (one for incoming remote data, one for outgoing remote data), I'm concerned about synchronisation. That's because the metadata functions operate at the persistent store coordinator level rather than the context level.
Potentially, if two perform(_:) blocks are running concurrently, the metadata could become stale between the calls to metadata(for:) and setMetadata(_:for:). What's more, since store metadata is apparently only persisted upon a context save, we'd want to stop another context reading the new metadata until it has been permanently persisted; otherwise it could write changes based on metadata from another context that was ultimately rolled back.
What's the recommended approach to handle this? Is it as simple as wrapping the metadata fetch, update, and context save calls into an operation that runs on a serial queue? Something like:
let metadataSerialQueue = DispatchQueue(label: "metadata-serial-queue")

context1.perform {
    // context update operations ...

    metadataSerialQueue.sync {
        var metadata = psc.metadata(for: store)

        // update change token
        metadata["change-token"] = changeToken
        psc.setMetadata(metadata, for: store)

        do {
            try context1.save()
        } catch let e as NSError {
            print("Failed to save context in store change. Rolling back. \(e)")
            context1.rollback()
        }
    }
}
Or is that pushing store metadata outside of its use case?
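For reference, here's a rough sketch of the Entity alternative mentioned earlier, assuming a hypothetical SyncState entity with a single Binary Data attribute tokenData. Because the token becomes part of the object graph, it commits or rolls back in the same save as the incoming data, without needing an extra queue:

context1.perform {
    // context update operations ...

    // Fetch the single SyncState row, or create it on first run.
    // (SyncState and its tokenData attribute are hypothetical names.)
    let fetch = NSFetchRequest<NSManagedObject>(entityName: "SyncState")
    fetch.fetchLimit = 1
    let state = (try? context1.fetch(fetch))?.first
        ?? NSEntityDescription.insertNewObject(forEntityName: "SyncState",
                                               into: context1)
    state.setValue(tokenData, forKey: "tokenData")

    do {
        // Token and data are committed (or rolled back) together.
        try context1.save()
    } catch let e as NSError {
        print("Failed to save context. Rolling back. \(e)")
        context1.rollback()
    }
}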