Hello!
I'm working on a desktop OS X app that grabs the screen of an iOS device and creates a VNC video stream from it. This is how I do it:
1. Create AVCaptureSession
2. Iterate over available AVCaptureDevice list
3. Attach input and output
4. setSampleBufferDelegate
AVCaptureDevice* capdev = [devices objectAtIndex:devid];
NSError* err = nil;
self.input = [AVCaptureDeviceInput deviceInputWithDevice:capdev error:&err];
self.output = [[AVCaptureVideoDataOutput alloc] init];
NSLog(@"[capture] using device: %@ err: %@",capdev,err);
[self.session addInput:self.input];
[self.session addOutput:self.output];
[self.output setAlwaysDiscardsLateVideoFrames:YES]; // drop late frames
dispatch_queue_t queue = dispatch_queue_create("capture_queue", NULL);
[self.output setSampleBufferDelegate:self queue:queue];
[self.session startRunning];
5. When the screen changes, I get updates in:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// here I process frames for vnc
}
Everything works fine when I have only one device connected. Once I connect a second phone and run a second instance of the application, the second device is detected correctly (I'm passing the device id as a command-line parameter), and the session seems to be created correctly, but I don't get any frame updates.
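For context, the device selection from the command-line parameter looks roughly like this (a minimal sketch of my setup; I'm assuming here that the parameter is an index into the device list, which is how `devid` is used in the code above):

```objc
// Sketch of the per-instance device selection: each instance of the app
// is started with a different index on the command line.
// (Assumption: the parameter is an index into [AVCaptureDevice devices].)
#import <AVFoundation/AVFoundation.h>

int main(int argc, const char* argv[]) {
    @autoreleasepool {
        NSUInteger devid = (argc > 1) ? (NSUInteger)atoi(argv[1]) : 0;
        NSArray* devices = [AVCaptureDevice devices];
        if (devid >= devices.count) {
            NSLog(@"[capture] no device at index %lu", (unsigned long)devid);
            return 1;
        }
        AVCaptureDevice* capdev = devices[devid];
        NSLog(@"[capture] selected %@ (uniqueID: %@)",
              capdev.localizedName, capdev.uniqueID);
        // ...session setup as shown above...
    }
    return 0;
}
```

Both instances log the correct device here, so the selection step itself seems fine; it's only the frame delivery that stops.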
Is this some kind of limitation, or am I doing something wrong here?