CGEventPost && CGEventCreateKeyboardEvent failed
Hi, I'm trying to make an application that sends keypresses to the system, using the CoreGraphics framework. It works fine on macOS Mojave but not on macOS Monterey (on an M1 machine). On each system, I allowed the application to control the computer in System Preferences > Privacy. I'm using the following code:

```c
void postKeyCode(CGKeyCode keyCode, bool down) {
    CGEventRef keyboardEvent = CGEventCreateKeyboardEvent(NULL, keyCode, down);
    CGEventPost(kCGHIDEventTap, keyboardEvent);
    CFRelease(keyboardEvent);
}
```

Are there any additional requirements to allow the application?
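In case the Privacy checkbox alone is not enough on newer systems, here is a minimal sketch of a runtime permission check (the function name ensureEventPostingAccess is mine; CGPreflightPostEventAccess and CGRequestPostEventAccess are CoreGraphics APIs available since macOS 10.15):

```swift
import CoreGraphics

// Minimal sketch, assuming the failure is a runtime-permission issue:
// since macOS 10.15 an app can verify and request event-posting access
// itself instead of relying only on the Privacy pane checkbox.
func ensureEventPostingAccess() -> Bool {
    if CGPreflightPostEventAccess() {
        return true                      // already authorized to post events
    }
    return CGRequestPostEventAccess()    // prompts the user for authorization
}
```
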
Replies: 0 · Boosts: 0 · Views: 654 · Activity: Sep ’22

Auto focus on AVCaptureDeviceTypeBuiltInDualWideCamera
Hi, I have an iPhone 13 Pro. The ultra wide camera has improved a lot: it can now resolve details on small objects that were impossible to see with the 12 Pro Max. I have a photogrammetry program of my own that uses two synchronized cameras. I'm using AVCaptureDeviceTypeBuiltInDualCamera (wide and telephoto as a single device). It lets me measure ordinary objects when I don't need precise texture, but this configuration does not let me measure small objects with fine detail. The AVCaptureDeviceTypeBuiltInDualWideCamera configuration is interesting for such cases. But when I activate auto focus on this configuration, the focus is applied only to the wide camera, not to the ultra wide as well. When the ultra wide camera is used alone, it can focus on my objects. Is this a limitation or a bug, or is there a new function that allows the dual camera device to focus each camera at the same time?
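For context, a minimal sketch of the configuration in question; as far as I can tell the focus mode can only be set on the virtual device as a whole, which matches the behaviour described above:

```swift
import AVFoundation

// Sketch: select the dual wide virtual device and enable continuous
// auto focus on it. There is no per-constituent focus API that I know
// of; the mode applies to the virtual device as a whole.
if let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                        for: .video, position: .back) {
    do {
        try device.lockForConfiguration()
        if device.isFocusModeSupported(.continuousAutoFocus) {
            device.focusMode = .continuousAutoFocus
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
```
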
Replies: 0 · Boosts: 0 · Views: 785 · Activity: Sep ’21

Best Apple device for Object Capture
I have tested Object Capture with the iOS app and the command line tool on macOS. I'm wondering which Apple device gives the best quality (geometry and texture), since there are several camera configurations that may not give the same results. I have installed iOS 15 on an iPhone 11 Pro Max, and the iOS app outputs some depth data. Which cameras are used to compute the depth: does it use three cameras or two? If it uses only two, which pair? In theory, if only two cameras are used, the best configuration is telephoto and wide; I'm afraid a configuration with only wide and ultra wide will give less accurate results. In short, can we get the same accuracy with an iPhone 12 as with an iPad Pro? The iPad seems more ergonomic than the iPhone for measuring an object. Can the LiDAR of the iPhone 12 Pro / iPad Pro also be used to improve results?
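For reference, a minimal sketch of the macOS reconstruction flow behind the command line tool (paths are placeholders); whichever device captures the images, this is where they end up:

```swift
import Foundation
import RealityKit

// Sketch: feed a folder of captured images (with embedded depth/gravity
// data where the device provides it) to a PhotogrammetrySession and
// request a full-detail model. Paths are placeholders.
let inputFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

var configuration = PhotogrammetrySession.Configuration()
configuration.featureSensitivity = .high  // favour small or low-texture objects

do {
    let session = try PhotogrammetrySession(input: inputFolder,
                                            configuration: configuration)
    try session.process(requests: [.modelFile(url: outputFile, detail: .full)])
    // Progress and completion arrive asynchronously on session.outputs.
} catch {
    print("Reconstruction failed: \(error)")
}
```
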
Replies: 3 · Boosts: 0 · Views: 1.1k · Activity: Jun ’21

Drawing in millimetres in a UIView
I'm trying to draw some UIBezierPath objects in a UIView. These paths are expressed in millimetres and must be drawn at the correct physical size. I'm using the draw(_ rect: CGRect) method of UIView. In my understanding, drawing in a UIView is done in points, so I just need to convert the millimetres to points. But if I do that, the paths do not come out at the correct size: on my 12 Pro Max, the drawing is always divided by two. Here is my code; the width of the view is equal to the width of the screen:

```swift
// 72 points = 25.4 mm, so 1 mm = 72/25.4 pt
func mmToPoints(_ value: CGFloat) -> CGFloat {
    return 72.0 * value / 25.4
}

func pointsToMM(_ value: CGFloat) -> CGFloat {
    return value * 25.4 / 72.0
}

class MyView: UIView {
    override func draw(_ rect: CGRect) {
        let width = rect.width
        let height = rect.height
        print("Rect in pt \(width) \(height)")
        print("Rect in mm \(pointsToMM(width)) \(pointsToMM(height))")

        let path = UIBezierPath()
        let sizeInMM = CGFloat(50.0) // 50 mm = 5 cm
        let sizeInPts = mmToPoints(sizeInMM)
        UIColor.black.setStroke()
        path.move(to: CGPoint.zero)
        path.addLine(to: CGPoint(x: sizeInPts, y: 0.0))
        path.addLine(to: CGPoint(x: sizeInPts, y: sizeInPts))
        path.addLine(to: CGPoint(x: 0.0, y: sizeInPts))
        path.stroke()
    }
}
```

I get the following output in the console:

Rect in pt 428.0 845.0
Rect in mm 150.98888888888888 298.09722222222223

We can notice that the computed width in millimetres is about twice the physical screen width of the 12 Pro Max.
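For comparison, a minimal sketch of the conversion done through the panel's pixel density instead of the 72 pt/inch assumption. There is no public API for the display's PPI, so the 458 ppi value below is an assumption taken from the published specs of the 12 Pro Max, and mmToScreenPoints is an illustrative name:

```swift
import UIKit

// Sketch: UIKit points have no fixed physical size, so convert
// millimetres to points via the panel's pixel density. 458 ppi is
// hard-coded here (published density of the iPhone 12 Pro Max);
// there is no public API to query it.
let assumedPPI: CGFloat = 458.0

func mmToScreenPoints(_ mm: CGFloat, ppi: CGFloat = assumedPPI) -> CGFloat {
    let inches = mm / 25.4                      // millimetres -> inches
    let pixels = inches * ppi                   // inches -> physical pixels
    return pixels / UIScreen.main.nativeScale   // pixels -> points
}
```

With these numbers, 50 mm comes out to about 300 pt instead of the 141.7 pt the 72 pt/inch formula gives, which is consistent with the drawing appearing at half size.
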
Replies: 3 · Boosts: 0 · Views: 1.4k · Activity: Dec ’20

Custom features, detector and matching algorithms in ARKit
I have some objects printed with a special texture, similar to a checkerboard. This texture contains a lot of repetition, and ARKit is not able to detect the features correctly. I have an algorithm of my own that can detect each feature uniquely and match them with high accuracy in real time. Is there a way to plug custom algorithms into ARKit to provide the features and the matching?
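As far as I know there is no plug-in point for ARKit's own tracker, so the closest baseline is running a custom detector alongside it on the raw camera frames. A minimal sketch (CustomFeatureReceiver is an illustrative name):

```swift
import ARKit

// Sketch: consume raw camera frames from the session delegate and hand
// them to a custom detector/matcher. Results do not feed back into
// ARKit's own tracking.
class CustomFeatureReceiver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer: CVPixelBuffer = frame.capturedImage  // YCbCr camera image
        let intrinsics = frame.camera.intrinsics              // 3x3 intrinsic matrix
        // ... run the custom feature detection and matching here ...
        _ = (pixelBuffer, intrinsics)
    }
}
```
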
Replies: 0 · Boosts: 0 · Views: 415 · Activity: Dec ’20

Focus
I'm trying to make an application that uses the AVCaptureDeviceTypeBuiltInDualCamera device. When auto focus is enabled on AVCaptureDeviceTypeBuiltInDualCamera, it seems that the focus is driven by the AVCaptureDeviceTypeBuiltInWideAngleCamera. I know that there are some constraints on focus in a dual setup. In some cases, the focus is correct on the AVCaptureDeviceTypeBuiltInWideAngleCamera and really bad on the AVCaptureDeviceTypeBuiltInTelephotoCamera. Is there a way to indicate that we prefer a correct focus on the AVCaptureDeviceTypeBuiltInTelephotoCamera instead of the AVCaptureDeviceTypeBuiltInWideAngleCamera?
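One workaround I can sketch (not a confirmed solution): skip auto focus and lock the lens position manually, tuning the value until the telephoto feed looks sharp. lockFocus is an illustrative helper:

```swift
import AVFoundation

// Sketch: bypass auto focus by locking the lens at an explicit position.
// lensPosition is normalized to 0.0...1.0.
func lockFocus(on device: AVCaptureDevice, lensPosition: Float) {
    guard device.isLockingFocusWithCustomLensPositionSupported else { return }
    do {
        try device.lockForConfiguration()
        device.setFocusModeLocked(lensPosition: lensPosition) { _ in
            // completion receives the capture time at which the lens settled
        }
        device.unlockForConfiguration()
    } catch {
        print("Focus lock failed: \(error)")
    }
}
```
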
Replies: 0 · Boosts: 0 · Views: 535 · Activity: May ’20

Dual delivery with empty calibration data
Hi, I'm trying to experiment with dual photo delivery on my iPhone 8 Plus. I've watched the WWDC videos, and it seems that the iPhone 8 Plus can provide two synchronized frames only in photo mode, with AVCapturePhotoOutput. I succeeded in getting the two frames from the telephoto and wide cameras. For each frame, I'm interested in getting the intrinsic matrix, which I read in the delegate method captureOutput:didFinishProcessingPhoto:error:. The AVCapturePhoto has a depthData property, which contains the cameraCalibrationData. For the first frame, the depthData and the cameraCalibrationData are available. For the second frame (wide camera), the depthData is nil. Is this a bug? How can I get the intrinsic matrix for the second frame?
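One avenue worth checking, sketched below under the assumption that calibration delivery simply has to be requested explicitly: AVCapturePhoto has its own cameraCalibrationData property, independent of depthData, which is populated when isCameraCalibrationDataDeliveryEnabled is set on the settings. captureDualPhoto is an illustrative helper; the session, output, and delegate are assumed to be configured elsewhere:

```swift
import AVFoundation

// Sketch: request calibration data explicitly so each constituent photo
// should carry photo.cameraCalibrationData directly, without going through
// depthData. Assumes `device` is the virtual dual camera in use and that
// photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled is already true.
func captureDualPhoto(from photoOutput: AVCapturePhotoOutput,
                      device: AVCaptureDevice,
                      delegate: AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    // Deliver one photo per physical camera (wide + telephoto).
    settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = device.constituentDevices
    if photoOutput.isCameraCalibrationDataDeliverySupported {
        settings.isCameraCalibrationDataDeliveryEnabled = true
    }
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```
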
Replies: 12 · Boosts: 0 · Views: 3.1k · Activity: Apr ’20