Loading tile overlays is slow even when the raster data is locally available on the device running iOS 18.2 and built with Xcode 16.2.
In this video (https://3dtopo.com/superSlowTileLoading.mov) it takes 38 seconds to load tiles readily available on the device. Then, the whole screen flashes when tiles that are already drawn are redrawn, making for a very poor user experience. 38 seconds to load a dozen or so small images (512x512) stored locally on the device is simply unacceptable. I can't release a product like this that I've spent the last 1.5 years building and many years developing the maps themselves. This severe issue appeared only after I had committed to basing my app on MapKit.
Note that this issue does not occur with Apple's base map tiles.
I created a Feedback Assistant case, FB16110803, for this issue.
For the video, I disabled loading any tiles from the network and disabled loading any other data, such as polylines. Essentially all I am doing is loading the tiles stored on the device and returning them, such as:
public func loadTile(at path: MKTileOverlayPath, result: @escaping (Data?, Error?) -> Void) {
    // `key` is derived from the tile path; the derivation is omitted here
    fetchData(forKey: key,
              failure: { error in result(nil, error) },
              success: { data in result(data, nil) })
}
open func fetchData(forKey key: String, failure fail: ((Error?) -> ())? = nil, success succeed: @escaping (Data) -> ()) {
    let path = self.path(forKey: key)
    do {
        // Read the tile image synchronously from local storage
        let data = try Data(
            contentsOf: URL(fileURLWithPath: path),
            options: Data.ReadingOptions())
        succeed(data)
        self.updateDiskAccessDate(atPath: path)
    } catch {
        if let block = fail {
            block(error)
        }
    }
}
I cannot find the hardware requirements for Image Playground documented anywhere. I'm also not sure if they are identical to the requirements for devices that support Apple Intelligence.
On the App Store, the only requirement listed for Image Playground is iOS 18.2.
Not knowing the requirements is an issue because I need to be able to clearly state the requirements for the feature in my app description.
Also, I'm sure my mother's current iPad is too old, but I'm not sure what models support it if I were to buy her a new one.
I'm trying to determine the best practice for handling if Image Playground is available but not installed or simply not supported.
If ImagePlaygroundViewController.isAvailable is true, I will just display a button to start an Image Playground session. If it is false, does that mean Image Playground is supported but not installed?
If it's supported and not installed, instead of a button to launch it, I want to display something like "Enable Apple Intelligence in Settings" or, better yet, a button that opens the Intelligence settings. Is that possible?
But if it is on a system that doesn't support it, of course, I don't want to instruct the user to enable it. How can I determine if a device cannot install Image Playground?
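Here is a minimal sketch of the flow I have in mind, assuming the ImagePlayground framework on iOS 18.1 or later. The Settings fallback is a placeholder: I don't know of a documented deep link to the Apple Intelligence pane, so it just opens the app's own settings page, and the delegate needed to receive the created image is omitted.

import UIKit
import ImagePlayground

func configurePlaygroundButton(_ button: UIButton, in host: UIViewController) {
    if #available(iOS 18.1, *), ImagePlaygroundViewController.isAvailable {
        // Supported and ready: offer to start an Image Playground session.
        button.setTitle("Create with Image Playground", for: .normal)
        button.addAction(UIAction { _ in
            let playground = ImagePlaygroundViewController()
            host.present(playground, animated: true)
        }, for: .touchUpInside)
    } else {
        // isAvailable is false: this could mean unsupported hardware *or* Apple
        // Intelligence not enabled/downloaded; the API doesn't appear to distinguish them.
        button.setTitle("Enable Apple Intelligence in Settings", for: .normal)
        button.addAction(UIAction { _ in
            // Placeholder: opens the app's own settings page, not the Intelligence pane.
            if let url = URL(string: UIApplication.openSettingsURLString) {
                UIApplication.shared.open(url)
            }
        }, for: .touchUpInside)
    }
}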
I read that Apple Intelligence requires an iPhone 15 Pro, iPhone 15 Pro Max, or any iPhone 16 model, with no mention of the M1 iPad Pro, yet Image Playground runs on my M1 iPad Pro. What are the hardware requirements for Image Playground?
To my horror, after upgrading to macOS 15 beta (24A5264n), the stable version of Xcode (15.4) refuses to run.
I need to be able to compare behavior and produce builds for the App Store.
Is there a stable version of Xcode planned for macOS 15 beta before Xcode 16 is released?
I understand that MapKit automatically sizes the font based on the system Dynamic Type size.
The thing is, the default font size is plenty legible for me system-wide, except for some type in MapKit, while other type in MapKit is already plenty big.
For instance, the stream names are much harder to read than they should be by default. And if I increase the system Dynamic Type size, then it makes some MapKit text much larger than needed, while the stream text can still be hard to read.
So is there any way to override or adjust font sizes in MapKit? I'd like to be able to apply percentages to the Dynamic Type suggestions. For streams, for example, I'd like to scale the text somewhere between 133% and 150%.
The smallest Dynamic Type text style is .caption2, which is 11 points at the default setting. With default Dynamic Type settings, it looks like MapKit is drawing stream names at around 7 points.
This post states that attribution is required when using MapKit, and that omitting it violates the developer agreement.
Can I provide my own attribution if I am displaying my original map content? It seems bizarre to attribute map sources that are not used.
I need multiple MKTileOverlays with multiple blend modes. Apparently adding an overlay with a different blend mode causes the layer beneath it to be drawn with that same blend mode.
For the example below, using a normal blend mode on top of a soft light blend mode causes a normal blend mode to be used instead of soft light. The soft light layer is rendered as expected until the normal layer is displayed starting at zoom level 15.
First, I've subclassed MKTileOverlay to add an overlay type so that the correct renderer is provided per overlay. (I know there is a title property, but I prefer this approach.)
enum OverlayType {
    case softLight
    case normal
}

class TileOverlay: MKTileOverlay {
    var type: OverlayType = .normal
}
Then setup layers and renderers in the typical fashion:
var softLightRenderer: MKTileOverlayRenderer!
var normalRenderer: MKTileOverlayRenderer!

private func setupSoftlightRenderer() {
    let overlay = TileOverlay(urlTemplate: "http://localhost/softlight/{z}/{x}/{y}")
    overlay.type = .softLight
    overlay.canReplaceMapContent = false
    overlay.minimumZ = 9
    overlay.maximumZ = 20
    mapView.addOverlay(overlay, level: .aboveLabels)
    softLightRenderer = MKTileOverlayRenderer(tileOverlay: overlay)
    softLightRenderer.blendMode = .softLight
}
private func setupNormalRenderer() {
    let overlay = TileOverlay(urlTemplate: "http://localhost/normal/{z}/{x}/{y}")
    overlay.type = .normal
    overlay.canReplaceMapContent = false
    overlay.minimumZ = 15
    overlay.maximumZ = 20
    mapView.addOverlay(overlay, level: .aboveLabels)
    normalRenderer = MKTileOverlayRenderer(tileOverlay: overlay)
    normalRenderer.blendMode = .normal
}
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
    if let overlay = overlay as? TileOverlay {
        switch overlay.type {
        case .softLight:
            return softLightRenderer
        case .normal:
            return normalRenderer
        }
    }
    print("Warning: using unhandled overlay renderer")
    return blankRenderer
}

override func viewDidLoad() {
    ...
    setupSoftlightRenderer()
    setupNormalRenderer()
}
Interestingly, if I put the overlays at two different levels, one at .aboveLabels and the other at .aboveRoads, it works as expected. The problem is that this limits me to two different overlay blend modes, and I could really use more than two. I tried every possible variation of inserting the layers at different indexes and with different methods, but the only combination that works is .aboveLabels plus .aboveRoads.
How can I use more than two different blend modes?
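For reference, that two-level arrangement is the only one I've found that keeps both blend modes intact. A minimal sketch, where softLightOverlay and normalOverlay stand in for the overlays created in the setup methods above:

// Workaround: give each blend mode its own map level, which caps me at two.
mapView.addOverlay(softLightOverlay, level: .aboveRoads)   // rendered with .softLight
mapView.addOverlay(normalOverlay, level: .aboveLabels)     // rendered with .normal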
Hello,
I have my iOS app largely working under Mac Catalyst, but I need to limit which Macs can install it from the Mac App Store based on GPU family, such as MTLGPUFamily.mac2. Is that possible?
Or I could limit it to Apple Silicon using the Designed for iPad target, but I would prefer to use Mac Catalyst instead of Designed for iPad. Is it possible to limit Mac Catalyst installs to Apple Silicon Macs?
Side question: what capabilities does MTLGPUFamily.mac2 support? I can't find that documented anywhere. My main interest is CoreML inference acceleration.
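For what it's worth, I can detect family support at runtime with Metal, though that obviously doesn't gate who can install the app. A minimal sketch:

import Metal

// Runtime check only; this does not restrict which Macs can install the app.
if let device = MTLCreateSystemDefaultDevice() {
    let supportsMac2 = device.supportsFamily(.mac2)
    print("MTLGPUFamily.mac2 supported: \(supportsMac2)")
}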
Thank you.
When I run the performance test on a CoreML model, it shows predictions are 834% faster on the Neural Engine than on the GPU.
It also shows that 100% of the model can run on the Neural Engine:
GPU only:
But when I set the compute units to all:
let config = MLModelConfiguration()
config.computeUnits = .all // allow CPU, GPU, and Neural Engine
and profile, it shows that the Neural Engine isn't used at all. Well, other than loading the model, which takes 25 seconds when the Neural Engine is allowed versus less than a second when it is not:
The difference in speed is the difference between the app being too slow to even release versus quite reasonable performance. I have a lot of work invested in this, so I am really hoping that I can get it to run on the Neural Engine.
Why isn't it actually running on the Neural Engine when it shows that it is supported and I have the compute unit set to run on the Neural Engine?
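For reference, the load-time difference is easy to reproduce with something like the sketch below, where modelURL is a hypothetical placeholder for the compiled model on disk:

import CoreML
import Foundation

// Times how long loading a compiled model takes with the given compute units.
// `modelURL` is a hypothetical placeholder for the .mlmodelc on disk.
func timeModelLoad(_ modelURL: URL, computeUnits: MLComputeUnits) throws -> TimeInterval {
    let config = MLModelConfiguration()
    config.computeUnits = computeUnits
    let start = CFAbsoluteTimeGetCurrent()
    _ = try MLModel(contentsOf: modelURL, configuration: config)
    return CFAbsoluteTimeGetCurrent() - start
}

// Usage:
// let withANE = try timeModelLoad(modelURL, computeUnits: .all)       // ~25 s in my case
// let gpuOnly = try timeModelLoad(modelURL, computeUnits: .cpuAndGPU) // under a second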
My app is capable of writing multi-gigabyte images. I need a way to allow the user to do something with the full-resolution images (such as AirDrop). It does not appear possible to write the image to Photos, since there may not be enough memory to read the whole image into RAM, and as far as I can tell PhotoKit allows read-only access.
With UIActivityViewController I am able to copy the large images with an NSFileCoordinator, but only if I use an extension of .zip or .bin for the destination URL. The file is copied, but macOS treats it as an archive rather than an image. After the file has been copied, changing the extension to .jpg on the Mac allows the copied image to be used, but that is less than an ideal user experience.
Zipping the image (that is already compressed) is not a viable solution since the whole image would need to be read to zip it, which may exceed the amount of memory available. Not to mention it is a total waste of time/power consumption.
How can I allow a user to copy multi-gigabyte images from my app, and ideally with UIActivityViewController?
Here is the code that allows the image to be copied. If I don't use .zip or .bin for the destination URL, the image isn't copied; nothing happens and no errors are reported.
func copyImage() {
    let fm = FileManager.default
    var archiveUrl: URL?
    var error: NSError?

    let imageProductURL = TDTDeviceUtilites.getDocumentUrl(
        forFileName: "hugeImage.jpg")

    let coordinator = NSFileCoordinator()
    // .forUploading yields a temporary copy of the file suitable for upload
    coordinator.coordinate(readingItemAt: imageProductURL,
                           options: [.forUploading],
                           error: &error) { (zipUrl) in
        // Move the coordinator's temporary file somewhere that outlives the block
        let tmpUrl = try! fm.url(
            for: .itemReplacementDirectory,
            in: .userDomainMask,
            appropriateFor: zipUrl,
            create: true
        ).appendingPathComponent("hugeImage.jpg.zip")
        try! fm.moveItem(at: zipUrl, to: tmpUrl)
        archiveUrl = tmpUrl
    }

    if let url = archiveUrl {
        let avc = UIActivityViewController(activityItems: [url],
                                           applicationActivities: nil)
        present(avc, animated: true)
    } else {
        print(error as Any)
    }
}
VNContoursObservation is taking 715 times as long as OpenCV’s findContours() when creating directly comparable results.
VNContoursObservation creates comparable results when I have set the maximumImageDimension property to 1024. If I set it lower, it runs a bit faster, but creates lower quality contours and still takes over 100 times as long.
I have a hard time believing Apple doesn’t know what they are doing, so does anyone have an idea what is going on and how to get it to run much faster? There doesn’t seem to be many options, but nothing I’ve tried closes the gap. Setting the detectsDarkOnLight property to true makes it run even slower.
OpenCV's findContours runs on a binary image, but I am passing an RGB image to Vision, assuming it would convert it to an appropriate format.
OpenCV:
double taskStart = CFAbsoluteTimeGetCurrent();
int contoursApproximation = CV_CHAIN_APPROX_NONE;
int contourRetrievalMode = CV_RETR_LIST;
findContours(input, contours, hierarchy, contourRetrievalMode, contoursApproximation, cv::Point(0,0));
NSLog(@"###### opencv findContours: %f", CFAbsoluteTimeGetCurrent() - taskStart);
###### opencv findContours: 0.017616 seconds
Vision:
let taskStart = CFAbsoluteTimeGetCurrent()
let contourRequest = VNDetectContoursRequest.init()
contourRequest.revision = VNDetectContourRequestRevision1
contourRequest.contrastAdjustment = 1.0
contourRequest.detectsDarkOnLight = false
contourRequest.maximumImageDimension = 1024
let requestHandler = VNImageRequestHandler.init(cgImage: sourceImage.cgImage!, options: [:])
try! requestHandler.perform([contourRequest])
let contoursObservation = contourRequest.results?.first as! VNContoursObservation
print(" ###### contoursObservation: \(CFAbsoluteTimeGetCurrent() - taskStart)")
###### contoursObservation: 12.605962038040161
The image I am providing OpenCV is 2048 pixels and the image I am providing Vision is 1024.
I'm experiencing a 100% reproducible bug with Xcode. It took me days, and many tears, to figure it out.
Xcode is producing a corrupt binary. It seems like my Metal buffers are being misaligned or something.
The problem showed up after changing the deployment target to iOS 15. It had been at iOS 13, and the app built and ran without any related issues for the years I've been developing it.
So at some point after changing the target to iOS 15, compiling the shaders went from about 0.001 seconds to 30 seconds. When that happens, the GPU will also hang with the messages:
[GPUDebug] Invalid device load executing kernel function "computeArtPointsToRender" encoder: "0", dispatch: 0, at offset 22080048
Shaders.metal:1290:41 - computeArtPointsToRender()
To fix the issue, I have to change the build target to iOS 13, clean, build, change to iOS 15, clean, and build; then it works again as expected (until it breaks again at some random point, or until I restart the machine).
This is 100% reproducible:
1. Restart the Mac
2. Build the project (the issue occurs)
3. Change the build target to iOS 13
4. Clean the build folder
5. Build
6. Change the build target to iOS 15
7. Delete derived data
8. Build (now it works as expected)
Xcode: Version 13.1 (13A1030d)
macOS: 11.5.2 (20G95)
Mac mini (M1, 2020)
In my 11+ years as a full-time iOS developer, I've never encountered such a serious issue with Xcode. If I don't have stable tools, it is impossible for me to develop.
Ever since switching to an M1 Mac for my development, Xcode no longer shows any image previews at break points.
Quick Look is not functioning with any images: CGImages, UIImages, CIImages, MTLTextures, etc.
I am working on an image-based app, so this is extremely frustrating.
I am running in debug mode. I tried linking to images to show some screenshots but apparently they are not allowed on this forum.
Xcode 12.5
macOS 11.3
I installed the Big Sur beta 2 profile on my ARM development Mac, restarted, and it said no updates are available.
I noticed a message I hadn't seen before that said something like "updates are managed remotely." I thought I would see if changing that helped, but now I get the message:
"Unable to check for updates." I tried reinstalling the Big Sur beta 2 profile, and I get the same error.
I don't see any way to re-enable "Remote Management" - whatever that even is.
It would be really nice if the Playground shown in the "Control training in Create ML with Swift" session were made available somewhere.
I looked and can't find it anywhere.