We want to adopt the new iOS 14 PencilKit vector drawing capabilities in our app.
However, our current drawing component always draws with a fixed width. If we use a dynamic brush width, as PencilKit does, drawings will look different on our other platforms.
Is there a way to make PKInkingTool use a fixed width?
I tried swizzling methods like defaultWidthForInkType:, but my swizzled methods are never called.
#import <objc/runtime.h>
#import <PencilKit/PencilKit.h>

@implementation PKInkingTool (Tracking)

+ (void)load {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // defaultWidthForInkType: is a class method, so the replacement IMP
        // must be installed on the metaclass.
        Class metaClass = objc_getMetaClass([NSStringFromClass(self) UTF8String]);

        SEL originalSelector = @selector(defaultWidthForInkType:);
        SEL swizzledSelector = @selector(ed_defaultWidthForInkType:);

        // Look the methods up on the class itself: class_getClassMethod
        // already resolves through the metaclass. Passing the metaclass here
        // returns NULL and silently breaks the swizzle.
        Method originalMethod = class_getClassMethod(self, originalSelector);
        Method swizzledMethod = class_getClassMethod(self, swizzledSelector);

        BOOL didAddMethod =
            class_addMethod(metaClass,
                            originalSelector,
                            method_getImplementation(swizzledMethod),
                            method_getTypeEncoding(swizzledMethod));
        if (didAddMethod) {
            class_replaceMethod(metaClass,
                                swizzledSelector,
                                method_getImplementation(originalMethod),
                                method_getTypeEncoding(originalMethod));
        } else {
            method_exchangeImplementations(originalMethod, swizzledMethod);
        }
    });
}

// Must be a class method (+) to match the class method it swizzles.
+ (CGFloat)ed_defaultWidthForInkType:(PKInkType)inkType {
    return 15.0;
}

@end
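In case the swizzling can't be made to work, another route I've been considering (an untested sketch using the iOS 14 PKDrawing/PKStroke APIs; normalizedDrawing and fixedWidth are my own names) is to normalize strokes after they are drawn, rebuilding every control point with a constant size:

import PencilKit

// Sketch: rebuild each stroke with a constant point size so the rendered
// ink has a uniform width, regardless of pressure or speed.
func normalizedDrawing(_ drawing: PKDrawing, fixedWidth: CGFloat = 15.0) -> PKDrawing {
    let strokes = drawing.strokes.map { stroke -> PKStroke in
        // Copy every control point, overriding only its size.
        let points = stroke.path.map { point in
            PKStrokePoint(location: point.location,
                          timeOffset: point.timeOffset,
                          size: CGSize(width: fixedWidth, height: fixedWidth),
                          opacity: point.opacity,
                          force: point.force,
                          azimuth: point.azimuth,
                          altitude: point.altitude)
        }
        let path = PKStrokePath(controlPoints: points,
                                creationDate: stroke.path.creationDate)
        return PKStroke(ink: stroke.ink, path: path,
                        transform: stroke.transform, mask: stroke.mask)
    }
    return PKDrawing(strokes: strokes)
}

The idea would be to call this from canvasViewDrawingDidChange(_:) and assign the result back to the canvas, guarding against the re-entrant delegate call that the assignment triggers.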
Auto scale in kPDFDisplaySinglePageContinuous mode scales the whole document so that the _maximum-width page_ fits the superview width; the other, narrower pages are then rendered correspondingly smaller.

Is there a way to make PDFView scale each page to the superview width separately?

There is a similar question in the archive with no answers: https://forums.developer.apple.com/message/248709#248709
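The closest workaround I've been able to sketch (untested; WidthFittingPDFController and fitCurrentPageWidth are my own names, and it means giving up continuous mode, since PDFView applies a single scaleFactor to the whole document) is to re-fit the scale whenever the visible page changes:

import PDFKit

// Sketch: in .singlePage mode, refit scaleFactor on every page change so the
// current page always fills the superview width.
final class WidthFittingPDFController {
    let pdfView: PDFView
    private var observer: NSObjectProtocol?

    init(pdfView: PDFView) {
        self.pdfView = pdfView
        pdfView.displayMode = .singlePage   // per-page scaling is not possible in .singlePageContinuous
        pdfView.autoScales = false
        observer = NotificationCenter.default.addObserver(
            forName: .PDFViewPageChanged, object: pdfView, queue: .main) { [weak self] _ in
                self?.fitCurrentPageWidth()
        }
    }

    func fitCurrentPageWidth() {
        guard let page = pdfView.currentPage else { return }
        let pageWidth = page.bounds(for: pdfView.displayBox).width
        guard pageWidth > 0 else { return }
        // Scale so this page's width matches the view's width.
        pdfView.scaleFactor = pdfView.bounds.width / pageWidth
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}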
iOS 12 brings us a new API for eye tracking, and I'm currently trying to figure out how to estimate the on-screen position of where the person is looking.

The result I want to achieve is an (x, y) point in screen coordinates, which I'll later be able to use to determine a specific UIView using hitTest or something like that.

I was able to use the lines below to figure out the location of an eye in screen coordinates; however, I'm struggling to convert `lookAtPoint` into screen coordinates.

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }

    // Convert the left eye's transform from face-anchor space to world space.
    let leftEyeTransform = node.simdConvertTransform(faceAnchor.leftEyeTransform, to: nil)

    // position() is our own helper that extracts the translation column
    // of the transform as an SCNVector3.
    let position = leftEyeTransform.position()

    // Project the world-space position into view coordinates.
    let projectedPosition = renderer.projectPoint(position)
    let point = CGPoint(x: Double(projectedPosition.x), y: Double(projectedPosition.y))
    // point here is on the screen
}

It feels like I'm missing something obvious, but so far I haven't been able to project `lookAtPoint` onto the screen plane. I would appreciate any help or hints on how to attack this problem.
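The best guess I have so far (an untested sketch; simdConvertPosition(_:to:) and projectPoint(_:) are real SceneKit API, while screenLookAtPoint is my own helper) is that lookAtPoint is expressed in the face anchor's coordinate space, so it should first be converted to world space through the anchor's node and then projected like the eye position above:

import ARKit
import SceneKit

// Sketch: face-space lookAtPoint -> world space -> view coordinates.
func screenLookAtPoint(for faceAnchor: ARFaceAnchor,
                       node: SCNNode,
                       renderer: SCNSceneRenderer) -> CGPoint {
    // lookAtPoint is relative to the face anchor; its node converts it
    // into world space (nil destination means world coordinates).
    let worldLookAt = node.simdConvertPosition(faceAnchor.lookAtPoint, to: nil)

    // Project the world-space point into the renderer's view coordinates.
    let projected = renderer.projectPoint(
        SCNVector3(worldLookAt.x, worldLookAt.y, worldLookAt.z))

    return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
}

If that still doesn't land where expected, my remaining suspicion is that the gaze direction needs to be intersected with the physical screen plane rather than projected directly, but I haven't verified that.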