Hosting a database in an XPC process.

In a macOS application I'm working on, I'm experimenting with the idea of moving all of my database code into an XPC process. The database in question is RocksDB and much of the code related to that part of the application is written in C++ or Objective-C++.


However, in Apple's Developer documentation and supporting WWDC videos, XPC processes are always presented as "stateless" processes that can be terminated at any time by macOS. Is that still the prevailing train of thought? Safari makes extensive use of XPC processes for each open tab and those processes are never automatically terminated by macOS.


Outside of the architectural issues of moving this much code into an XPC process, I had the following questions:


* Is there a way to tell macOS that my XPC process is "long lived" and should not be terminated? Perhaps an Info.plist flag or something.


* If my main application crashes, does the XPC process shutdown gracefully or is it abruptly terminated as well? It might not always have an open XPC connection back to the main application, otherwise I guess it could just install an interruption handler on the connection. But if such a connection isn't open, does the XPC process receive any notification (or time) to clean up? (I've struggled to figure out how to test this with Xcode.)

Why do you want to use XPC at all? Never assume that you will be able to accomplish the same things as Apple. You don't have access to the source code and private APIs.

I want to explore using an XPC process for all the normal reasons. I already make extensive use of them for shorter-lived actions as Apple routinely encourages. Now I'm interested in how much farther I can take them.


Some of the VR rendering stuff is done entirely in a separate process using Metal's new shared textures. How do those processes guarantee that macOS doesn't suddenly terminate them and thus kill the rendering? Is the solution just to ensure that the process is always "busy", or is there a more formal way to specify my intent using something like the NSSupportsSuddenTermination flag?

Up front, let me say that I think XPC is the right choice for you, for two reasons:

  • As mentioned on your other thread (I’ll respond to that thread soon, btw), you want to move large amounts of data between processes without copying. Mach messaging can do that, but the Mach messaging API is both ugly and full of horrible pitfalls. XPC gives you much the same functionality without all the grief.

  • It’s likely that you’ll end up wanting to move IOSurface objects across the ‘wire’, and XPC can do that nicely while other IPC APIs can’t (other than Mach messaging, but you don’t want to go there).

Is that still the prevailing train of thought?

Yes, but there’s certainly wiggle room. Modern versions of macOS rarely ‘garbage collect’ XPC Services, and only when the system is under serious memory pressure. While you have to design your service to work properly when this happens, it’s not generally a major performance concern in day-to-day use.

Is there a way to tell macOS that my XPC process is "long lived" and should not be terminated?

No, because the previous point makes it irrelevant.

You can prevent sudden termination by holding a transaction open (xpc_transaction_begin / xpc_transaction_end). This makes sense if, for example, your database supports a request/response model but you want to do some clean up after sending the response.

However, you should not hold the transaction open indefinitely. At some point it makes sense to get the data safely on to disk [1], at which point sudden termination is no longer a concern and you can release the transaction.
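As a concrete sketch of that pattern in Swift, something along these lines; note that `writeQueue` and `flushToDisk` are hypothetical names standing in for your real RocksDB write path, not actual API:

```swift
import Foundation
import XPC

let writeQueue = DispatchQueue(label: "database.write")

// Sketch: hold a transaction open until pending data is safely on disk.
// `flushToDisk` is a hypothetical stand-in for the real durable write.
func handleWriteRequest(_ payload: Data) {
    xpc_transaction_begin()       // opt out of sudden termination
    writeQueue.async {
        flushToDisk(payload)      // hypothetical durable write
        xpc_transaction_end()     // data is durable; termination is safe again
    }
}
```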

If my main application crashes, does the XPC process shutdown gracefully or is it abruptly terminated as well?

I’d have to test this to be sure, but I suspect it’ll shut down like it would normally, that is, once the last transaction is done.

(I've struggled to figure out how to test this with Xcode.)

The debugger definitely affects system behaviour like this, so you should test without the debugger attached. That is, run the app (and XPC Service) from the Finder, add a ‘crash’ button to your app, and use logging to see what’s actually happening.
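The ‘crash’ button itself can be trivial. A sketch, assuming a standard AppKit view controller (the class and action names are illustrative):

```swift
import Cocoa
import os

final class CrashTestController: NSViewController {
    // Deliberately crash the host app so you can watch, via the unified
    // log, how the XPC Service side is torn down afterwards.
    @IBAction func crashNow(_ sender: Any) {
        os_log("Deliberately crashing the host app for XPC teardown testing")
        fatalError("Deliberate crash")
    }
}
```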

Share and Enjoy

Quinn “The Eskimo!”
Apple Developer Relations, Developer Technical Support, Core OS/Hardware

let myEmail = "eskimo" + "1" + "@apple.com"

[1] This is important because, regardless of all the edge cases you’ve discussed so far, there’s always the possibility of a wider failure taking out your XPC Service.

Thanks for the fantastic reply, much appreciated and reassuring. I've gone back and forth between the XPC C-API and the NS-API for various reasons. I started with the C-API because it, originally, made more sense to me and it was the only way to use IOSurface at the time. When IOSurface support was added to the NS-API, I decided to give it another shot. But the "reply" model and "protocol" model for how the client should talk to the service never really felt right for the use-case I had in mind. I much prefer a more asynchronous model where the client sends a packaged/serialized "message" across to the service and, asynchronously, receives packaged/serialized "replies" on a different connection. Messages and replies are matched up using unique identifiers.


So I went back to using the C-API for a while until Metal introduced shared events and shared textures, which appeared to only support the NS-API. With IOSurface now on the NS-API and Metal appearing to favour it, I took my C-API "architecture" and implemented it using the NS-API, which is where I stand today, and I'm reasonably happy with it.

But, per this thread and the other one, there's no DispatchData support in the NS-API (at least there wasn't until you showed me the new API for xpc_type_t).


When all is said and done, I'm really just sending serialized data across as one big blob. I'll have to wire back up a way to "attach" IOSurfaces and shared Metal objects into the transport layer, but for those messages I might just fall back to a basic NSObject that implements NSSecureCoding to make things easier. (Message replies that include an IOSurface or shared Metal object don't have much other data in them, so there's no need for fast, efficient serialization of those messages.)
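A fallback message type along those lines might look like this. This is purely an illustrative sketch; the class name, keys, and fields are my own invention, not from any existing API:

```swift
import Foundation
import IOSurface

// Sketch of an NSSecureCoding message wrapper: one blob of serialized
// data plus an optional IOSurface. All names here are hypothetical.
@objc(ServiceMessage)
final class ServiceMessage: NSObject, NSSecureCoding {
    static var supportsSecureCoding: Bool { true }

    let payload: Data
    let surface: IOSurface?

    init(payload: Data, surface: IOSurface? = nil) {
        self.payload = payload
        self.surface = surface
        super.init()
    }

    func encode(with coder: NSCoder) {
        coder.encode(payload as NSData, forKey: "payload")
        coder.encode(surface, forKey: "surface")
    }

    required init?(coder: NSCoder) {
        guard let data = coder.decodeObject(of: NSData.self, forKey: "payload") else {
            return nil
        }
        self.payload = data as Data
        self.surface = coder.decodeObject(of: IOSurface.self, forKey: "surface")
        super.init()
    }
}
```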


Finally, to tie things back to your reply, what is the NS-API equivalent of xpc_transaction_begin and xpc_transaction_end? Some of the WWDC videos talk about "holding on to the message" in the XPC process to ensure that appropriate QOS boost levels are applied. I never really understood what that meant, but I assume if I'm "holding on to the message" in the service then perhaps I'm implicitly inside a transaction as well?


For example, my XPC service's protocol has a single method:


// Implemented by XPC Services.
@objc protocol ServiceRequestHandler {
  func handleServiceMessage(_ message: ServiceMessage)
}

// Implemented by the app to receive responses.
@objc protocol ServiceResponseHandler {
  func handleServiceMessage(_ message: ServiceMessage)
}

// ServiceMessage is an NSObject that implements NSSecureCoding. It has a single NSData field in it
// representing serialized data. (And, optionally, an IOSurface as mentioned above.)
// -------------------------------------------------------------------------------------------------

// On the service side, the implementation of handleServiceMessage looks roughly like this:

func handleServiceMessage(_ message: ServiceMessage) {
  messagesToProcess.append(message)
  processAnyPendingMessagesInASeparateThread()
}


My understanding of "holding on to a message" is that if I keep a strong reference to `message`, then an implicit transaction remains open and my service remains boosted. As soon as I dequeue that message, process it and release it, then I'm no longer "holding on to it" and the XPC machinery can terminate the service or at least lower its QOS boost.


I've just assumed this by concluding that when `ServiceMessage` is serialized on the client using an XPC Coder, it must have some additional metadata attached to it that the XPC system tracks to determine whether the message is in-flight, delivered, handled and still alive or not.


For now I'll stick with the NS-API because I'm curious to explore shared Metal textures down the road, but perhaps something else compelling on the C-API will crop up and I'll swing back...

what is the NS-API equivalent of xpc_transaction_begin and xpc_transaction_end?

There is none. Which is annoying, but not the end of the world because it’s fine to call those C routines from Objective-C / Swift code.

Some of the WWDC videos talk about "holding on to the message" in the XPC process to ensure that appropriate QOS boost levels are applied. I never really understood what that meant

It only makes sense in a request/reply context. If a request has a reply, then you can return from the request handler without calling the reply block and the transaction stays open. And it’ll stay open until you call that reply block.

In your case you seem to be using requests without reply blocks, in which case the transaction closes as soon as you return from the request handler.
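For contrast, a request with a reply block might look like this. This is a sketch only; the protocol, class, and helper names are illustrative, not from the original post:

```swift
import Foundation

let processingQueue = DispatchQueue(label: "service.processing")

// Sketch: a request *with* a reply block. Returning from the handler
// without calling `reply` keeps the transaction open; the transaction
// closes only when `reply` is eventually invoked.
@objc protocol ServiceRequestHandlerWithReply {
    func handleServiceMessage(_ message: ServiceMessage,
                              reply: @escaping (ServiceMessage) -> Void)
}

final class RequestHandler: NSObject, ServiceRequestHandlerWithReply {
    func handleServiceMessage(_ message: ServiceMessage,
                              reply: @escaping (ServiceMessage) -> Void) {
        processingQueue.async {
            let response = makeResponse(for: message)   // hypothetical processing
            reply(response)                             // transaction closes here
        }
    }
}
```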

Share and Enjoy

Quinn “The Eskimo!”
Apple Developer Relations, Developer Technical Support, Core OS/Hardware

let myEmail = "eskimo" + "1" + "@apple.com"

"In your case you seem to be using requests without reply blocks, in which case the transaction closes as soon as you return from the request handler."


Interesting. You're right that I return from the request handler on the XPC side almost immediately. All the request handler does is take the message that was sent over from the app and store it into an array for processing "at a later date" and in a separate thread. The request handler thus returns almost immediately, though a strong reference to the original message is obviously in play as the message is in the array.


According to your comments, this architecture does not affect the transaction state of an XPC connection, but does it affect the QOS boost? I could have sworn that a year or so ago, when I was playing around with this, I "discovered" that I needed to hold on to that message longer than I thought, otherwise my XPC Service's performance dropped significantly.


This reasoning was based on my interpretation of WWDC 2014-716 Power, Performance and Diagnostics - What's new in GCD and XPC. At the 33:20 time mark, they are discussing two things that cause the lifetime of the XPC boost to be maintained:


  • Until reply is sent.
  • While using message.


The first one is clear but doesn't apply to me since I'm not using the reply APIs. But the second item isn't so clear to me. What does "while using the message" mean? Note that this video uses the C API when discussing XPC, so I always just assumed that the xpc_object that was sent across the wire and received in the XPC service needed to be strongly held. I couldn't just extract the values I wanted and then discard the xpc_object. Doing so appeared to cause the XPC service to significantly slow down. (I was doing image processing in the service and saw terrible performance if I didn't keep the xpc_object around.)
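The pattern in question, sketched in Swift against the C API (the queue name and `processPayload` are hypothetical; the point is that the closure's capture of `message` is what keeps the xpc_object alive):

```swift
import Foundation
import XPC

let processingQueue = DispatchQueue(label: "service.processing")

// Sketch of "holding on to the message" with the C API: the async
// closure captures `message`, so it stays alive until processing
// completes. `processPayload` is a hypothetical stand-in.
func handleEvent(_ message: xpc_object_t) {
    processingQueue.async {
        var length = 0
        if let bytes = xpc_dictionary_get_data(message, "payload", &length) {
            processPayload(bytes, length)   // hypothetical processing
        }
        // `message` is released when this closure finishes, ending the hold.
    }
}
```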


But perhaps this isn't the case with the NS-API?


(Note that several DTS reports concluded that it was no longer possible to run imptrace to monitor QOS boosting, unfortunately.)

Note that several DTS reports concluded that it was no longer possible to run imptrace to monitor QOS boosting, unfortunately.

Indeed. We have a bug for that (r. 46307851) but there’s no sign of it being resolved any time soon.

But perhaps this isn't the case with the NS-API?

Right. I rarely work with the C API, so I’m coming at this from the perspective of NSXPCConnection, where there is no ‘message’ XPC object (well, there is under the covers, but you can’t access it). Thus I have no direct experience with the ‘hold on to the message’ approach, and I’m inclined to trust your interpretation here.

Share and Enjoy

Quinn “The Eskimo!”
Apple Developer Relations, Developer Technical Support, Core OS/Hardware

let myEmail = "eskimo" + "1" + "@apple.com"