I just tried the "Building a document-based app with SwiftUI" sample code for iOS 18.
https://developer.apple.com/documentation/swiftui/building-a-document-based-app-with-swiftui
I can create a document and then close it, but once I open it back up, I can't navigate back to the document browser. Opening documents is also flaky: I tap a document multiple times and nothing happens. This happens on both the simulator and a device.
I'll file a bug, but does anyone know of a workaround? I can't use a document browser that's this broken.
I've been using an XCFramework of LuaJIT for years now in my app. After upgrading to Xcode 16, calls to the LuaJIT interpreter started segfaulting on macOS in release mode only. Works on iOS (both debug/release). Works fine with Xcode 15.4 (both debug/release). I have a very simple repro.
I'm wondering what could actually cause this, given it's the same XCFramework (i.e. precompiled with optimization on). My guess is it's something to do with the way LuaJIT is called, or the environment it runs in, when the optimizer is turned on.
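For reference, the crashing calls are nothing exotic; the repro is roughly the standard embedding sequence, something like this (a simplified sketch, assuming the LuaJIT C headers are imported through a module map; the actual repro is in the issue linked below):

// Create a Lua state, run a trivial chunk, tear down.
let L = luaL_newstate()
luaL_openlibs(L)
if luaL_loadstring(L, "print('hello from lua')") == 0 {
    // Interpreter calls like this are what segfault for me in macOS release builds under Xcode 16.
    lua_pcall(L, 0, 0, 0)
}
lua_close(L)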
https://github.com/LuaJIT/LuaJIT/issues/1290
(FB15512926)
I'm wondering if it's possible to do Parallax Occlusion Mapping in RealityKit. Does RealityKit's Metal shader API provide enough?
I think it would need to be able to discard fragments and thus can't be run as a deferred pass. Not sure though!
"Specifically, your App Description and screenshot references paid features but does not inform users that a purchase is required to access this content."
My App Description (for a pro 3D art app) doesn't mention that the entire app requires a subscription. I didn't think I needed to, because Final Cut Pro and Logic Pro don't do that either. Has anyone had experience with this? Is there a double standard, or did App Review just make a mistake?
I suppose I can add some language at the end of the App Description like "All Features unlocked with subscription".
I'm trying to ray-march an SDF inside a RealityKit surface shader. For the SDF primitive to correctly render with other primitives, the depth of the fragment needs to be set according to the ray-surface intersection point. Is there a way to do that within a RealityKit surface shader? It seems the only values I can set are within surface::surface_properties.
If not, can an SDF still be rendered in RealityKit using ray-marching?
In my Metal-based app, I ray-march a 3D texture. I'd like to use RealityKit instead of my own code. I see there is a LowLevelTexture (beta) where I could specify a 3D texture. However, on the Metal side there doesn't seem to be any way to access a 3D texture (realitykit::texture::textures::custom returns a texture2d).
Any workarounds? Could I even do something icky like casting the texture2d to a texture3d in MSL (is that even possible)? Could I encode the 3D texture into an argument buffer and pass it in somehow?
I'm recreating the sleep timer from the Podcasts app. How can I display an icon for the picker instead of the current selection?
This doesn't work:
Picker("Sleep Timer", systemImage: "moon.zzz.fill", selection: $sleepTimerDuration) {
Text("Off").tag(0)
Text("5 Minutes").tag(5)
Text("10 Minutes").tag(10)
Text("15 Minutes").tag(15)
Text("30 Minutes").tag(30)
Text("45 Minutes").tag(45)
Text("1 Hour").tag(60)
}
Do I need to drop down to UIKit for this?
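A possible workaround sketch (untested; it assumes the same $sleepTimerDuration binding) is to wrap the Picker in a Menu whose label is just the icon, so the options still show up in the menu:

Menu {
    Picker("Sleep Timer", selection: $sleepTimerDuration) {
        Text("Off").tag(0)
        Text("5 Minutes").tag(5)
        Text("10 Minutes").tag(10)
        Text("15 Minutes").tag(15)
        Text("30 Minutes").tag(30)
        Text("45 Minutes").tag(45)
        Text("1 Hour").tag(60)
    }
} label: {
    Image(systemName: "moon.zzz.fill")
}

I'd still prefer the systemImage Picker initializer to just show the icon, though.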
I'm trying to create an MTLFXTemporalScaler as follows (this is adapted from the sample code):
func updateTemporalScaler() {
    let desc = MTLFXTemporalScalerDescriptor()
    // Input is the render resolution; output is the window (upscaled) resolution.
    desc.inputWidth = renderTarget.renderSize.width
    desc.inputHeight = renderTarget.renderSize.height
    desc.outputWidth = renderTarget.windowSize.width
    desc.outputHeight = renderTarget.windowSize.height
    desc.colorTextureFormat = .bgra8Unorm
    desc.depthTextureFormat = .depth32Float
    desc.motionTextureFormat = .rg16Float
    desc.outputTextureFormat = .bgra8Unorm

    guard let temporalScaler = desc.makeTemporalScaler(device: device) else {
        fatalError("The temporal scaler effect is not usable!")
    }

    temporalScaler.motionVectorScaleX = Float(renderTarget.renderSize.width)
    temporalScaler.motionVectorScaleY = Float(renderTarget.renderSize.height)
    mfxTemporalScaler = temporalScaler
}
I'm getting the following error the 3rd time the code is called:
/AppleInternal/Library/BuildRoots/91a344b1-f985-11ee-b563-fe8bc7981bff/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Runtimes/MPSRuntime/Operations/RegionOps/ANRegion.mm:855: failed assertion `ANE intermediate buffer handle not same!'
When I copy the code out to a playground, it succeeds when called with the same sequence of descriptors. Does this seem like a bug with MTLFXTemporalScaler?
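One workaround I'm considering (just a sketch; it assumes renderTarget's sizes are plain Ints, as the assignments above imply) is to rebuild the scaler only when the sizes actually change, rather than on every call:

private var lastScalerInputSize = SIMD2<Int>(0, 0)
private var lastScalerOutputSize = SIMD2<Int>(0, 0)

func updateTemporalScalerIfNeeded() {
    let inputSize = SIMD2(renderTarget.renderSize.width, renderTarget.renderSize.height)
    let outputSize = SIMD2(renderTarget.windowSize.width, renderTarget.windowSize.height)
    // Skip the rebuild if a scaler already exists and nothing changed.
    guard mfxTemporalScaler == nil || inputSize != lastScalerInputSize || outputSize != lastScalerOutputSize else {
        return
    }
    lastScalerInputSize = inputSize
    lastScalerOutputSize = outputSize
    updateTemporalScaler()
}

That sidesteps the repeated creation, but it doesn't explain the assertion itself.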
In my app, I only get one interruption notification when a phone call comes in, and nothing after that. The app uses AVAudioEngine. Is this a bug?
A very simple repro is to just register for the notification, but not do anything else with audio:
import SwiftUI
import AVFoundation

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .onReceive(NotificationCenter.default.publisher(for: AVAudioSession.interruptionNotification)) { event in
            handleAudioInterruption(event: event)
        }
    }

    private func handleAudioInterruption(event: Notification) {
        print("handleAudioInterruption")
        guard let info = event.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
            print("missing the stuff")
            return
        }
        if type == .began {
            print("interruption began")
        } else if type == .ended {
            print("interruption ended")
            guard let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt else { return }
            if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
                print("should resume")
            }
        }
    }
}
And do this in the app's init:
@main
struct InterruptionsApp: App {
    init() {
        // Configure and activate a playback audio session.
        try! AVAudioSession.sharedInstance().setCategory(.playback, options: [])
        try! AVAudioSession.sharedInstance().setActive(true)
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
I've got a full-screen animation of a bunch of circles filled with gradients, with plenty of (careless) overdraw, plus real-time audio processing driving the animation, plus the overhead of SwiftUI's dependency analysis. That app uses less energy (on an iPhone 13) than the Xcode "Metal Game" template, which is just a rotating textured cube (a trivial GPU workload). Why is that? How can I investigate further?
Does Core Animation have access to a compositor fast path that a Metal app can't use?
Maybe another data point: when I do the same circles animation using SwiftUI's Canvas, the energy use is "Very High" and GPU utilization is also quite high. Eventually the phone's thermal state goes "Serious" and I get a message on the device that "Charging will resume when iPhone returns to normal temperature".
Is this an uncaught C++ exception that could have originated from my code, or something else? (This report is from a tester.)
(also, why can't crash reporter tell you info about what exception wasn't caught?)
(Per the instructions here, you'll need to rename the attached .txt to .ips to view the crash report.)
thanks!
AudulusAU-2024-02-14-020421.txt
I had to switch from the SwiftUI app lifecycle to the UIKit lifecycle due to this issue: https://developer.apple.com/forums/thread/742580
When I switch to UIKit I get a black screen on startup. It's the inverse of this issue: https://openradar.appspot.com/FB9692750
For development, I can work around this by deleting and reinstalling the app, but I can't ship an app that results in a black screen for users when they update.
Does anyone know of a workaround?
I've filed FB13462315
I've had to ditch the SwiftUI app lifecycle due to this issue: https://developer.apple.com/forums/thread/742580
When I create a new UIKit document app as a test, it doesn't show a toolbar when a document is open. How can I add one along the lines of https://developer.apple.com/wwdc22/10069 ?
It seems the UIDocumentViewController isn't already embedded in a UINavigationController.
To reproduce: New Project -> iOS -> Document App. Select Interface: Storyboard. Add an empty "untitled.txt" resource to the project. Change the first line in documentBrowser(_:didRequestDocumentCreationWithHandler:) to
let newDocumentURL: URL? = Bundle.main.url(forResource: "untitled", withExtension: "txt")
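What I'm considering as a workaround (a rough sketch, untested; DocumentViewController, Document, and presentDocument(at:) are the names from the template as I recall) is to embed the document view controller in a UINavigationController when presenting it, so there's a navigation bar/toolbar to configure:

func presentDocument(at documentURL: URL) {
    let storyboard = UIStoryboard(name: "Main", bundle: nil)
    let documentViewController = storyboard.instantiateViewController(withIdentifier: "DocumentViewController") as! DocumentViewController
    documentViewController.document = Document(fileURL: documentURL)

    // Embed in a navigation controller so navigationItem / toolbarItems have somewhere to live.
    let navigationController = UINavigationController(rootViewController: documentViewController)
    navigationController.modalPresentationStyle = .fullScreen
    present(navigationController, animated: true, completion: nil)
}

Not sure that gets me the document-style header from the WWDC22 session, but at least it gives the view controller a bar to put items in.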
I'm trying to get boost to compile for the iOS simulator on my M2 Mac.
I've got this script:
set -euxo pipefail
# See https://formulae.brew.sh/formula/boost
# See https://stackoverflow.com/questions/1577838/how-to-build-boost-libraries-for-iphone
wget https://boostorg.jfrog.io/artifactory/main/release/1.83.0/source/boost_1_83_0.tar.bz2
tar zxf boost_1_83_0.tar.bz2
mv boost_1_83_0 boost
root=`pwd`
cd boost
B2_ARGS="-a -j12 --with-iostreams --with-regex"
# Build for simulator
./bootstrap.sh --prefix=$root/install-ios-sim
IOSSIM_SDK_PATH=$(xcrun --sdk iphonesimulator --show-sdk-path)
cat << EOF >> project-config.jam
# IOS Arm Simulator
using clang : iphonesimulatorarm64
: xcrun clang++ -arch arm64 -stdlib=libc++ -std=c++20 -miphoneos-version-min=16.0 -fvisibility-inlines-hidden -target arm64-apple-ios16.0-simulator -isysroot $IOSSIM_SDK_PATH
;
EOF
./b2 $B2_ARGS --prefix=$root/install-ios-sim toolset=clang-iphonesimulatorarm64 link=static install
xcodebuild -create-xcframework thinks ./install-ios-sim/lib/libboost_iostreams.a is not built for the simulator.
Specifically, if you run the following after the build script, the resulting xcframework identifies the binary as ios-arm64 rather than ios-arm64-simulator.
xcodebuild -create-xcframework \
-library install-ios-sim/lib/libboost_iostreams.a \
-headers install-ios-sim/include \
-output boost.xcframework
I know how to use lipo, etc to determine the architecture of a library, but I don't know how create-xcframework differentiates a simulator binary from an iOS binary.
Note: I've also tried the boost build script by Pete Goodliffe, which generates an xcframework. However, I need a normal install of boost because I'm compiling other libraries against it, and I couldn't get that script to do that. I also don't understand how the script successfully generates a simulator binary.
I'm getting
Thread 5: EXC_RESOURCE (RESOURCE_TYPE_MEMORY: high watermark memory limit exceeded) (limit=6 MB)
My thumbnails do render (and they require more than 6 MB to do so), so I wonder about the behavior here. Does the OS try to render thumbnails with a very low memory limit first and then retry if that fails?