On further investigation I've managed to achieve the result I was after: dynamically setting up images at run time to use for ARKit image recognition. For others who find this question, here are a few sample lines of code demonstrating the solution.
Instead of encoding images into an xcassets catalog at build time (as shown in the Apple sample code) to use as image-recognition targets, build ARReferenceImage objects from embedded or downloaded images. The sample lines below read images embedded in the app bundle, but they could just as easily be downloaded from a server or sourced from the camera, photo library, or elsewhere, depending on requirements.
// create two image objects
UIImage * sample_image1 = [UIImage imageNamed:@"imac-landscape.png"];
UIImage * sample_image2 = [UIImage imageNamed:@"jaguar.jpg"];
// use the image objects to create two ARReferenceImages, providing the orientation and the real-world image width in meters
ARReferenceImage * sampleImage1 = [[ARReferenceImage alloc] initWithCGImage:sample_image1.CGImage orientation:kCGImagePropertyOrientationUp physicalWidth:0.47628f];
ARReferenceImage * sampleImage2 = [[ARReferenceImage alloc] initWithCGImage:sample_image2.CGImage orientation:kCGImagePropertyOrientationUp physicalWidth:0.25f];
// assign the two ARReferenceImage objects to the ARWorldTrackingConfiguration.detectionImages
ARWorldTrackingConfiguration *configuration = [ARWorldTrackingConfiguration new];
[configuration setDetectionImages:[NSSet setWithObjects:sampleImage1, sampleImage2, nil]];
// You are now ready to run the session and perform the scanning and image recognition operation just as if the images were held in an xcassets catalog as described in the Apple example.
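For completeness, here is a minimal sketch of running the session with the dynamically built detection images and reacting when one is recognised. It assumes a view controller with an ARSCNView property named sceneView whose delegate is self (both names are illustrative, not from the original code); the ARReferenceImage name property is set here only so the two targets can be told apart in the callback.

```objc
#import <ARKit/ARKit.h>

// Run the session using dynamically built reference images.
// Assumes self.sceneView is an ARSCNView and self adopts ARSCNViewDelegate.
- (void)startTrackingWithImages:(NSSet<ARReferenceImage *> *)referenceImages {
    ARWorldTrackingConfiguration *configuration = [ARWorldTrackingConfiguration new];
    configuration.detectionImages = referenceImages;
    [self.sceneView.session runWithConfiguration:configuration];
}

// ARSCNViewDelegate callback fired when ARKit adds an anchor; an
// ARImageAnchor indicates one of the detection images was recognised.
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor {
    if ([anchor isKindOfClass:[ARImageAnchor class]]) {
        ARImageAnchor *imageAnchor = (ARImageAnchor *)anchor;
        NSLog(@"Detected image: %@", imageAnchor.referenceImage.name);
    }
}
```

If you assign each ARReferenceImage a name (e.g. sampleImage1.name = @"imac-landscape";) before adding it to detectionImages, the callback above can distinguish which target was found.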