Posts

Post not yet marked as solved
0 Replies
231 Views
I understand that MapKit automatically sizes its fonts based on the system Dynamic Type size. The thing is, the default font size is plenty legible for me system-wide, except for some type in MapKit, while other type in MapKit is already plenty big. For instance, the stream names are much harder to read than they should be by default. And if I increase the system Dynamic Type size, some MapKit text becomes much larger than needed, while the stream text can still be hard to read. So is there any way to override or adjust font sizes in MapKit? I'd like to be able to apply percentages to the Dynamic Type suggestions. For streams, I'd like to scale the text somewhere between 133% and 150%. The smallest Dynamic Type size is .caption2 at 11 points with default settings, yet with default Dynamic Type settings it looks like MapKit is drawing stream text at around 7 points.
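To illustrate the kind of adjustment I'm after, here is a minimal sketch using UIFontMetrics. It is purely hypothetical: MapKit exposes no public API for its label fonts, so this only shows the arithmetic I'd like to apply.

```swift
import UIKit

// Hypothetical sketch: scale a base point size the way I wish MapKit would.
// MapKit has no public hook for this; only the arithmetic is real.
func desiredStreamPointSize(scale: CGFloat = 1.5) -> CGFloat {
    // MapKit appears to draw stream labels at ~7 points with default settings.
    let observedBase: CGFloat = 7.0
    // Respect the user's Dynamic Type setting first...
    let scaled = UIFontMetrics(forTextStyle: .caption2).scaledValue(for: observedBase)
    // ...then apply my desired percentage (1.33 to 1.5 for streams).
    return scaled * scale
}
```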
Posted by 3DTOPO. Last updated.
Post not yet marked as solved
1 Reply
282 Views
I need multiple MKTileOverlays with multiple blend modes. Apparently, using an overlay with a different blend mode causes the layer under it to use that same blend mode. In the example below, a normal blend mode on top of a soft-light blend mode causes normal to be used for both instead of soft light. The soft-light layer renders as expected until the normal layer is displayed, starting at zoom level 15.

First, I've subclassed MKTileOverlay to add an overlay type so that the correct renderer is provided per overlay. (I know there is a title property, but I prefer this.)

```swift
enum OverlayType {
    case softLight, normal
}

class TileOverlay: MKTileOverlay {
    var type: OverlayType = .normal
}
```

Then I set up the layers and renderers in the typical fashion:

```swift
var softLightRenderer: MKTileOverlayRenderer!
var normalRenderer: MKTileOverlayRenderer!

private func setupSoftlightRenderer() {
    let overlay = TileOverlay(urlTemplate: "http://localhost/softlight/{z}/{x}/{y}")
    overlay.type = .softLight
    overlay.canReplaceMapContent = false
    overlay.minimumZ = 9
    overlay.maximumZ = 20
    mapView.addOverlay(overlay, level: .aboveLabels)
    softLightRenderer = MKTileOverlayRenderer(tileOverlay: overlay)
    softLightRenderer.blendMode = .softLight
}

private func setupNormalRenderer() {
    let overlay = TileOverlay(urlTemplate: "http://localhost/normal/{z}/{x}/{y}")
    overlay.type = .normal
    overlay.canReplaceMapContent = false
    overlay.minimumZ = 15
    overlay.maximumZ = 20
    mapView.addOverlay(overlay, level: .aboveLabels)
    normalRenderer = MKTileOverlayRenderer(tileOverlay: overlay)
    normalRenderer.blendMode = .normal
}

func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
    if let overlay = overlay as? TileOverlay {
        switch overlay.type {
        case .softLight: return softLightRenderer
        case .normal: return normalRenderer
        }
    }
    print("Warning: using unhandled overlay renderer")
    return blankRenderer // fallback renderer defined elsewhere
}

override func viewDidLoad() {
    // ...
    setupSoftlightRenderer()
    setupNormalRenderer()
}
```

Interestingly, if I put the overlays at two different levels, one at .aboveLabels and the other at .aboveRoads, it works as expected. The problem is that this limits me to two different overlay blend modes, and I could really use more than two. I tried every possible variation of inserting the layers at different indexes and with different methods, but the only combination that seems to work is .aboveLabels with .aboveRoads. How can I use more than two different blend modes?
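For reference, the only arrangement that renders both blend modes correctly for me looks like the following sketch (one overlay per map level; URL templates and zoom settings are placeholders):

```swift
// The two-level arrangement that works: one overlay per MKOverlayLevel.
private func setupWorkingPair() {
    let softLight = TileOverlay(urlTemplate: "http://localhost/softlight/{z}/{x}/{y}")
    softLight.type = .softLight
    mapView.addOverlay(softLight, level: .aboveRoads)   // first level

    let normal = TileOverlay(urlTemplate: "http://localhost/normal/{z}/{x}/{y}")
    normal.type = .normal
    mapView.addOverlay(normal, level: .aboveLabels)     // second level

    // A third blend mode has nowhere to go: MKOverlayLevel only
    // offers .aboveRoads and .aboveLabels.
}
```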
Posted by 3DTOPO. Last updated.
Post not yet marked as solved
0 Replies
273 Views
This post states that attribution is required when using MapKit, and that omitting it violates the developer agreement. Can I provide my own attribution if I am displaying my original map content? It seems bizarre to attribute map sources that are not used.
Posted by 3DTOPO. Last updated.
Post not yet marked as solved
3 Replies
1.8k Views
When using an exclusion path with a UITextView, the text view does not properly wrap the words on the first and last lines in the example shown (lower text). It renders properly using word wrap with Core Text (top example). (The image link I provided in the HTML works in the preview but not in the post; the example image is at: http://i.stack.imgur.com/Z91ge.png)

Here is the code for the UITextView (both this and the Core Text example use the same size Bézier path, and both use the corresponding font and paragraph settings: wrap by word, centered):

```objc
NSString *testText = @"Text Kit exclusion paths ...";

UIBezierPath *viewPath = [UIBezierPath bezierPathWithRect:CGRectMake(0, 0, 280, 280)];
UIBezierPath *shapePath = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(10, 10, 265, 265)];
viewPath.usesEvenOddFillRule = true;
shapePath.usesEvenOddFillRule = true;
[shapePath appendPath:viewPath];

NSMutableAttributedString *title = [[NSMutableAttributedString alloc] initWithString:testText];
UIFont *font = [UIFont fontWithName:@"BradleyHandITCTT-Bold" size:14];
[title addAttribute:NSFontAttributeName value:font range:NSMakeRange(0, title.length)];
[title addAttribute:NSForegroundColorAttributeName value:[UIColor redColor] range:NSMakeRange(0, title.length)];

NSMutableParagraphStyle *paragraphStyle = [[NSMutableParagraphStyle alloc] init];
[paragraphStyle setAlignment:NSTextAlignmentCenter];
[paragraphStyle setLineBreakMode:NSLineBreakByWordWrapping];
[title addAttribute:NSParagraphStyleAttributeName value:paragraphStyle range:NSMakeRange(0, title.length)];

UITextView *textView = [[UITextView alloc] initWithFrame:CGRectMake(0.0, 370.0, 280, 280)];
textView.textContainerInset = UIEdgeInsetsMake(0, 0, 0, 0);
textView.textContainer.exclusionPaths = @[shapePath];
[textView.textContainer setLineBreakMode:NSLineBreakByWordWrapping];
textView.contentInset = UIEdgeInsetsMake(0, 0, 0, 0);
textView.attributedText = title;
textView.backgroundColor = [UIColor clearColor];
```

It is worth noting that Text Kit respects the word-wrapping rules except on the first (and possibly last) line, where the text does not actually fit. I need this to work with a UITextView because text entry is required; otherwise I would be done with my Core Text example. I have tried everything I can think of to reliably work around the issue. I don't fully understand how UITextView works with Core Text (or I would expect to get the same results, which I don't), so any suggestions would be greatly appreciated.
Posted by 3DTOPO. Last updated.
Post not yet marked as solved
1 Reply
892 Views
VNContoursObservation is taking 715 times as long as OpenCV's findContours() to create directly comparable results. VNContoursObservation produces comparable results when I set the maximumImageDimension property to 1024. If I set it lower, it runs a bit faster, but it creates lower-quality contours and still takes over 100 times as long. I have a hard time believing Apple doesn't know what they are doing, so does anyone have an idea what is going on and how to get it to run much faster? There don't seem to be many options, and nothing I've tried closes the gap. Setting the detectsDarkOnLight property to true makes it run even slower. OpenCV's findContours runs on a binary image, but I am passing an RGB image to Vision, assuming it would convert it to an appropriate format.

OpenCV:

```objc
double taskStart = CFAbsoluteTimeGetCurrent();
int contoursApproximation = CV_CHAIN_APPROX_NONE;
int contourRetrievalMode = CV_RETR_LIST;
findContours(input, contours, hierarchy, contourRetrievalMode, contoursApproximation, cv::Point(0, 0));
NSLog(@"###### opencv findContours: %f", CFAbsoluteTimeGetCurrent() - taskStart);
```

###### opencv findContours: 0.017616 seconds

Vision:

```swift
let taskStart = CFAbsoluteTimeGetCurrent()
let contourRequest = VNDetectContoursRequest()
contourRequest.revision = VNDetectContoursRequestRevision1
contourRequest.contrastAdjustment = 1.0
contourRequest.detectsDarkOnLight = false
contourRequest.maximumImageDimension = 1024

let requestHandler = VNImageRequestHandler(cgImage: sourceImage.cgImage!, options: [:])
try! requestHandler.perform([contourRequest])
let contoursObservation = contourRequest.results?.first as! VNContoursObservation
print("###### contoursObservation: \(CFAbsoluteTimeGetCurrent() - taskStart)")
```

###### contoursObservation: 12.605962038040161

The image I am providing OpenCV is 2048 pixels, and the image I am providing Vision is 1024.
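Since OpenCV is handed a binary image while Vision gets RGB, one variable worth eliminating is the format conversion. This is only an assumption on my part, but a sketch that thresholds the image with Core Image before the request (CIFilter.colorThreshold() requires iOS 14 / macOS 11) would look like:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins
import Vision

// Assumption: pre-binarizing might remove any internal color-conversion cost.
func detectContours(in cgImage: CGImage) throws -> VNContoursObservation? {
    let filter = CIFilter.colorThreshold()
    filter.inputImage = CIImage(cgImage: cgImage)
    filter.threshold = 0.5 // placeholder cutoff

    guard let binaryImage = filter.outputImage else { return nil }

    let request = VNDetectContoursRequest()
    request.detectsDarkOnLight = false
    request.maximumImageDimension = 1024

    let handler = VNImageRequestHandler(ciImage: binaryImage, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNContoursObservation
}
```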
Posted by 3DTOPO. Last updated.
Post not yet marked as solved
0 Replies
789 Views
Hello, I have my largely-iOS app running under Mac Catalyst, but I need to limit which Macs can install it from the Mac App Store based on GPU family, such as MTLGPUFamily.mac2. Is that possible? Alternatively, I could limit it to Apple Silicon by using the Designed for iPad target, but I would prefer Mac Catalyst over Designed for iPad. Is it possible to limit Mac Catalyst installs to Apple Silicon Macs? Side question: what capabilities does MTLGPUFamily.mac2 support? I can't find them documented. My main interest is Core ML inference acceleration. Thank you.
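For what it's worth, the closest thing I've found is a runtime capability check, which of course doesn't gate App Store installs; a minimal sketch:

```swift
import Metal

// Runtime check only; this does not restrict who can install the app.
func supportsMac2Family() -> Bool {
    guard let device = MTLCreateSystemDefaultDevice() else { return false }
    return device.supportsFamily(.mac2)
}
```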
Posted by 3DTOPO. Last updated.
Post marked as solved
1 Reply
1.3k Views
When I run the performance test on a Core ML model, it shows predictions are 834% faster on the Neural Engine than on the GPU. It also shows that 100% of the model can run on the Neural Engine (screenshots: performance on all units vs. GPU only). But when I set the compute units to all:

```swift
let config = MLModelConfiguration()
config.computeUnits = .all
```

and profile, it shows that the Neural Engine isn't used at all. Well, other than loading the model, which takes 25 seconds when allowed to use the Neural Engine versus less than a second when not allowing it. The difference in speed is the difference between the app being too slow to even release and quite reasonable performance. I have a lot of work invested in this, so I am really hoping I can get it to run on the Neural Engine. Why isn't it actually running on the Neural Engine when the test shows it is supported and I have the compute units set to allow the Neural Engine?
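For context, here is how the model gets loaded with that configuration, as a minimal sketch ("MyModel" is a placeholder for my compiled model):

```swift
import CoreML

// Minimal sketch; "MyModel.mlmodelc" stands in for my actual compiled model.
func loadModel() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all // should permit CPU, GPU, and the Neural Engine

    let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc")!
    return try MLModel(contentsOf: url, configuration: config)
}
```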
Posted by 3DTOPO. Last updated.
Post not yet marked as solved
3 Replies
747 Views
My app is capable of writing multi-gigabyte images. I need a way to let the user do something with the full-resolution images (such as AirDrop them). It does not appear possible to write the image to Photos, since there may not be enough memory to read the whole image into RAM, and as far as I can tell PhotoKit allows read-only access. With UIActivityViewController I am able to copy the large images with an NSFileCoordinator, but only if I use an extension of .zip or .bin for the destination URL. The file is copied, but macOS thinks it is an archive rather than an image. After the file has been copied, changing the extension to .jpg on the Mac makes the copied image usable, but that is less than an ideal user experience. Zipping the image (which is already compressed) is not a viable solution, since the whole image would need to be read to zip it, which may exceed the available memory, not to mention that it is a total waste of time and power. How can I allow a user to copy multi-gigabyte images from my app, ideally with UIActivityViewController?

Here is the code that allows the image to be copied. If I don't use .zip or .bin for the destination URL, the image isn't copied; nothing happens and no errors are reported.

```swift
func copyImage() {
    let fm = FileManager.default
    var archiveUrl: URL?
    var error: NSError?

    let imageProductURL = TDTDeviceUtilites.getDocumentUrl(forFileName: "hugeImage.jpg")

    let coordinator = NSFileCoordinator()
    coordinator.coordinate(readingItemAt: imageProductURL, options: [.forUploading], error: &error) { (zipUrl) in
        let tmpUrl = try! fm.url(
            for: .itemReplacementDirectory,
            in: .userDomainMask,
            appropriateFor: zipUrl,
            create: true
        ).appendingPathComponent("hugeImage.jpg.zip")
        try! fm.moveItem(at: zipUrl, to: tmpUrl)
        archiveUrl = tmpUrl
    }

    if let url = archiveUrl {
        let avc = UIActivityViewController(activityItems: [url], applicationActivities: nil)
        present(avc, animated: true)
    } else {
        print(error?.localizedDescription ?? "Unknown error")
    }
}
```
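One direction I'm still exploring (an assumption on my part, not a confirmed fix): wrapping the file URL in a UIActivityItemProvider so the item is produced lazily, in the hope that AirDrop streams the file rather than loading it into memory:

```swift
import UIKit

// Sketch: hand the activity controller a lazily produced file URL.
// Whether AirDrop streams it without reading the whole file into
// memory is an assumption I haven't verified.
final class HugeImageItemProvider: UIActivityItemProvider {
    private let fileURL: URL

    init(fileURL: URL) {
        self.fileURL = fileURL
        super.init(placeholderItem: fileURL)
    }

    // Called on a background queue when the user picks an activity.
    override var item: Any { fileURL }
}

// Usage:
// let provider = HugeImageItemProvider(fileURL: imageProductURL)
// let avc = UIActivityViewController(activityItems: [provider], applicationActivities: nil)
```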
Posted by 3DTOPO. Last updated.
Post marked as solved
2 Replies
1.4k Views
I'm experiencing a 100% reproducible bug with Xcode. It took me days (and many tears) to figure out: Xcode is producing a corrupt binary. It seems like my Metal buffers are being misaligned or something. The problem showed up after changing the deployment target to iOS 15. It had been iOS 13, and the app built and ran without any related issues for the years I've been developing it. At some point after changing the target to iOS 15, compiling the shaders went from about 0.001 seconds to 30 seconds. When that happens, the GPU will also hang with the messages:

```
[GPUDebug] Invalid device load executing kernel function "computeArtPointsToRender" encoder: "0", dispatch: 0, at offset 22080048
Shaders.metal:1290:41 - computeArtPointsToRender()
```

To fix the issue, I have to change the build target to iOS 13, clean, build, change to iOS 15, clean, and build; then it works again as expected (until it doesn't at some random point, or until I have restarted the machine). This is 100% reproducible:

1. Restart the Mac
2. Build the project
3. Issue occurs
4. Change the build target to iOS 13
5. Clean the build folder
6. Build
7. Change the build target to iOS 15
8. Delete derived data
9. Build
10. Works as expected

Xcode: Version 13.1 (13A1030d)
macOS: 11.5.2 (20G95)
Mac mini (M1, 2020)

In my 11+ years as a full-time iOS developer, I've never encountered such a serious issue with Xcode. If I don't have stable tools, it is impossible for me to develop.
Posted by 3DTOPO. Last updated.
Post not yet marked as solved
1 Reply
1.5k Views
Ever since switching to an M1 Mac for my development, Xcode no longer shows any image previews at breakpoints. Quick Look is not functioning with any images: CGImage, UIImage, CIImage, MTLTexture, etc. I am working on an image-based app, so this is extremely frustrating. I am running in debug mode. I tried linking to images to show some screenshots, but apparently they are not allowed on this forum.

Xcode 12.5
macOS 11.3
Posted by 3DTOPO. Last updated.
Post not yet marked as solved
6 Replies
1.4k Views
I installed the Big Sur beta 2 profile on my ARM development Mac and restarted, and it said no updates are available. I noticed a message I hadn't seen before that said something like "updates are managed remotely." I thought I would see if changing that helped, but now I get the message: "Unable to check for updates." I tried reinstalling the Big Sur beta 2 profile and got the same error. I don't see any way to re-enable "Remote management", whatever that even is.
Posted by 3DTOPO. Last updated.