Posts

Post not yet marked as solved
0 Replies
577 Views
Hi, I'm trying to make an application that sends keypresses to the system. For that I'm using the CoreGraphics framework. It works fine on macOS Mojave but not on Monterey (on an M1 machine). On each system, I allowed the application to control the computer in Privacy (System Preferences). I'm using the following code:

void postKeyCode(int keyCode, bool down) {
    CGEventRef keyboardEvent = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)keyCode, down);
    CGEventPost(kCGHIDEventTap, keyboardEvent);
    CFRelease(keyboardEvent);
}

Are there any additional requirements to allow the application?
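Posting events with CGEventPost requires the Accessibility ("control the computer") grant described above. A minimal diagnostic sketch, in Swift and under the assumption that the same check applies regardless of the app's implementation language, to verify at runtime that this exact binary is actually trusted (the grant may not carry over to a rebuilt or differently signed executable):

import ApplicationServices
import CoreGraphics

// True only if this exact executable has been granted Accessibility access;
// a rebuilt or re-signed binary may need to be re-added in System Preferences.
let trusted = AXIsProcessTrusted()
print("Accessibility trusted: \(trusted)")

if trusted {
    // Quick test: post a key-down for virtual key 0 ("A" on a US layout).
    if let event = CGEvent(keyboardEventSource: nil, virtualKey: 0, keyDown: true) {
        event.post(tap: .cghidEventTap)
    }
}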
Posted by cglinel.
Post not yet marked as solved
0 Replies
657 Views
Hi, I got an iPhone 13 Pro. The ultra wide camera has improved a lot: it is now possible to see details on small objects that were impossible to capture with the 12 Pro Max. I have a photogrammetry program of my own that uses two synchronized cameras. I'm using AVCaptureDeviceTypeBuiltInDualCamera (wide and telephoto as a single device). It lets me measure standard objects when I do not need precise texture, but this configuration does not allow me to measure small objects with fine details. The AVCaptureDeviceTypeBuiltInDualWideCamera configuration is interesting for such cases. But when I activate auto focus on this configuration, focus is only performed on the wide camera, not also on the ultra wide. When the ultra wide is used alone, it can focus on my objects. Is this a limitation, a bug, or is there a new function to allow the dual camera device to focus with each camera at the same time?
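For what it's worth, a minimal sketch (assuming iOS 13+ and an app with camera access) of inspecting the dual wide virtual device and its constituent cameras to see which focus modes each side reports; on its own it does not force the ultra wide to refocus:

import AVFoundation

let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInDualWideCamera],
    mediaType: .video,
    position: .back)

if let dualWide = discovery.devices.first {
    // The virtual device exposes its physical cameras here.
    for constituent in dualWide.constituentDevices {
        print(constituent.localizedName,
              "continuous AF supported:",
              constituent.isFocusModeSupported(.continuousAutoFocus))
    }

    // Focus is configured on the virtual device, not per constituent.
    do {
        try dualWide.lockForConfiguration()
        if dualWide.isFocusModeSupported(.continuousAutoFocus) {
            dualWide.focusMode = .continuousAutoFocus
        }
        dualWide.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}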
Posted by cglinel.
Post not yet marked as solved
3 Replies
1.1k Views
I have tested Object Capture with the iOS app and the command line tool on macOS. I'm wondering which Apple device gives the best quality (geometry and texture), since the different camera configurations may not give the same results. I have installed iOS 15 on an iPhone 11 Pro Max, and the iOS app outputs some depth data. Which cameras are used to compute the depth? Does it use three cameras or two? If it uses only two, which pair does it use? In theory, if only two cameras are used, the best configuration would be telephoto and wide; I'm afraid that with only the wide and ultra wide, the results will be less accurate. In short, can we get the same accuracy with an iPhone 12 and with an iPad Pro? The iPad seems more ergonomic than the iPhone for measuring an object. Can the LiDAR of the iPhone 12 Pro / iPad Pro also be used to improve the results?
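For context, a minimal sketch of the reconstruction step that the macOS command line tool wraps, assuming macOS 12+, RealityKit's PhotogrammetrySession, and placeholder input/output paths; the camera selection discussed above happens on the iOS device before this step runs:

import Foundation
import RealityKit

// Placeholder paths: a folder of HEIC images (with any embedded depth/gravity
// data captured on the iOS side) and the output USDZ model.
let inputFolder = URL(fileURLWithPath: "/tmp/MyObjectImages", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/tmp/MyObject.usdz")

let session = try PhotogrammetrySession(input: inputFolder,
                                         configuration: PhotogrammetrySession.Configuration())

Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url.path)")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        case .processingComplete:
            exit(0)
        default:
            break
        }
    }
}

try session.process(requests: [.modelFile(url: outputModel, detail: .full)])
RunLoop.main.run()   // keep the command line tool alive while processing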
Posted by cglinel.
Post not yet marked as solved
3 Replies
1.2k Views
I'm trying to draw some UIBezierPath objects in a UIView. These paths are expressed in millimetres and must be drawn at the correct physical size. I'm using the draw(_ rect: CGRect) method of UIView. In my understanding, drawing in a UIView is done in points, so I just need to convert the millimetres to points. But if I do that, the paths do not have the correct size: on my 12 Pro Max, the drawing is always divided by two. Here is my code; the width of the view is equal to the width of the screen:

// 72 points = 25.4 mm
// So 1 mm = 72/25.4 pt
func mmToPoints(_ value: CGFloat) -> CGFloat {
    return 72.0 * value / 25.4
}

func pointsToMM(_ value: CGFloat) -> CGFloat {
    return value * 25.4 / 72.0
}

class MyView: UIView {
    override func draw(_ rect: CGRect) {
        let width = rect.width
        let height = rect.height
        print("Rect in pt \(width) \(height)")
        print("Rect in mm \(pointsToMM(width)) \(pointsToMM(height))")

        let path = UIBezierPath()
        let sizeInMM = CGFloat(50.0) // 50 mm, 5 cm
        let sizeInPts = mmToPoints(sizeInMM)
        UIColor.black.setStroke()
        path.move(to: CGPoint.zero)
        path.addLine(to: CGPoint(x: sizeInPts, y: 0.0))
        path.addLine(to: CGPoint(x: sizeInPts, y: sizeInPts))
        path.addLine(to: CGPoint(x: 0.0, y: sizeInPts))
        path.stroke()
    }
}

I get the following result in the console:

Rect in pt 428.0 845.0
Rect in mm 150.98888888888888 298.09722222222223

We can see that the width of the rect, once converted to millimetres, is about twice the physical screen width of the 12 Pro Max.
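As a side note on the arithmetic, here is a sketch of the same conversion done from the panel's real pixel density rather than the 72 points per inch print convention; the 458 ppi value is an assumption hard-coded for the 12 Pro Max and would need to be looked up per device:

import UIKit

// Assumption: physical pixel density of the iPhone 12 Pro Max panel.
let pixelsPerInch: CGFloat = 458.0

// pixels/inch -> pixels/mm -> points/mm (scale is 3.0 on this device).
func mmToPointsFromDensity(_ mm: CGFloat, scale: CGFloat = UIScreen.main.scale) -> CGFloat {
    let pixelsPerMM = pixelsPerInch / 25.4
    return mm * pixelsPerMM / scale
}

// With these numbers 1 mm ≈ 6.0 pt, roughly twice the 2.83 pt/mm that the
// 72 pt/inch convention gives, which matches the factor-of-two observation.
print(mmToPointsFromDensity(50.0))   // ≈ 300 pt for a 50 mm square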
Posted by cglinel.
Post not yet marked as solved
0 Replies
355 Views
I have some objects printed with a special texture, like a checkerboard. This texture has a lot of repetitions, and ARKit is not able to detect the features correctly. I have an algorithm of my own that detects each feature uniquely and matches them with high accuracy in real time. Is there a way to plug custom algorithms into ARKit to provide the features and the matching?
Posted by cglinel.
Post not yet marked as solved
0 Replies
460 Views
I'm trying to make an application that uses the AVCaptureDeviceTypeBuiltInDualCamera device. When auto focus is enabled on AVCaptureDeviceTypeBuiltInDualCamera, it seems that focus is driven by the AVCaptureDeviceTypeBuiltInWideAngleCamera. I know that there are some constraints on focus in a dual camera setup. In some cases, the focus is correct on the AVCaptureDeviceTypeBuiltInWideAngleCamera and really bad on the AVCaptureDeviceTypeBuiltInTelephotoCamera. Is there a way to indicate that we prefer a correct focus on the AVCaptureDeviceTypeBuiltInTelephotoCamera rather than on the AVCaptureDeviceTypeBuiltInWideAngleCamera?
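As far as I know there is no public switch to choose which constituent camera drives auto focus; a possible workaround, sketched below under that assumption, is to lock the virtual device at a fixed lens position that favours the telephoto working distance (the 0.6 value is just an example, valid positions run from 0.0 to 1.0):

import AVFoundation

// `dualCamera` is assumed to be the built-in dual camera device already
// attached to the capture session.
func lockFocus(on dualCamera: AVCaptureDevice, lensPosition: Float = 0.6) {
    do {
        try dualCamera.lockForConfiguration()
        if dualCamera.isFocusModeSupported(.locked) {
            // Locks focus at a fixed lens position instead of letting the
            // wide camera's continuous AF decide; tune the value for the
            // telephoto subject distance.
            dualCamera.setFocusModeLocked(lensPosition: lensPosition, completionHandler: nil)
        }
        dualCamera.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}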
Posted by cglinel.
Post not yet marked as solved
12 Replies
2.6k Views
Hi, I'm trying to play with dual photo delivery on my iPhone 8 Plus. I've watched the WWDC videos, and it seems the iPhone 8 Plus can only provide two synchronized frames in photo mode, with AVCapturePhotoOutput. I succeeded in getting the two frames from the telephoto and wide cameras. For each frame, I'm interested in getting the intrinsic matrix, which I read in the delegate method captureOutput:didFinishProcessingPhoto:error:. The AVCapturePhoto has a depthData property which contains the cameraCalibrationData. For the first frame, the depthData and the cameraCalibrationData are available. For the second frame (wide camera), there is no depthData available (nil). Is this a bug? How can I get the intrinsic matrix for the second frame?
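A sketch, under the assumption that the device and iOS version support calibration data delivery together with dual photo delivery, of requesting it on the photo settings and reading the intrinsics from AVCapturePhoto's own cameraCalibrationData property rather than through depthData:

import AVFoundation

final class DualPhotoIntrinsicsReader: NSObject, AVCapturePhotoCaptureDelegate {

    // photoOutput is assumed to be attached to a session whose input is the
    // built-in dual camera device, with dual photo delivery enabled.
    func capture(with photoOutput: AVCapturePhotoOutput) {
        let settings = AVCapturePhotoSettings()
        if photoOutput.isCameraCalibrationDataDeliverySupported {
            settings.isCameraCalibrationDataDeliveryEnabled = true
        }
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Each delivered constituent photo should carry its own calibration,
        // independent of whether depth data was produced for it.
        if let calibration = photo.cameraCalibrationData {
            let intrinsics = calibration.intrinsicMatrix   // matrix_float3x3
            print("fx \(intrinsics.columns.0.x)  fy \(intrinsics.columns.1.y)")
        }
    }
}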
Posted by cglinel.
Post not yet marked as solved
4 Replies
753 Views
Hi, I have developed a piece of software using OpenGL 2.1. It runs correctly on macOS 10.9 to 10.14. On the public beta of 10.15, I have a rendering issue: my 3D models do not display their textures correctly and I get some artefacts. I think I have already located the issue: there seems to be a problem in the OpenGL driver on this system, which does not handle compressed textures correctly. The internal format I'm using in the glTexImage2D call is GL_COMPRESSED_RGBA. If I read back the contents of the uploaded texture with glGetTexImage, I don't get the same texture; there are artefacts in it. The texture size is 1664 x 1664, but I have the same issue with other sizes (even powers of two). I'm using a MacBook Pro 2014 with an NVIDIA 750M.
Posted by cglinel.