I'm getting
Thread 5: EXC_RESOURCE (RESOURCE_TYPE_MEMORY: high watermark memory limit exceeded) (limit=6 MB)
My thumbnails do render (and they require more than 6 MB to do so), so I wonder about the behavior here. Does the OS try to render thumbnails with a very low memory limit and then retry with a higher limit if that fails?
Adding an inspector and toolbar to Xcode's app template, I have:
import SwiftUI

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .toolbar {
            Text("test")
        }
        .inspector(isPresented: .constant(true)) {
            Text("this is a test")
        }
    }
}
In the preview canvas, this renders as I would expect:
However when running the app:
Am I missing something?
(Relevant wwdc video is wwdc2023-10161. I couldn't add that as a tag)
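One guess I've considered, as a sketch only (it assumes toolbar items on iOS need a navigation bar to land in, which I haven't confirmed): wrap the content in a NavigationStack and see whether the toolbar and inspector appear:

import SwiftUI

struct ContentView: View {
    var body: some View {
        NavigationStack {
            VStack {
                Image(systemName: "globe")
                    .imageScale(.large)
                    .foregroundStyle(.tint)
                Text("Hello, world!")
            }
            .padding()
            .toolbar {
                Text("test")
            }
            .inspector(isPresented: .constant(true)) {
                Text("this is a test")
            }
        }
    }
}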
Anyone have a sense of what could cause this? Running on iOS 17.0.2. This seems to be a regression in iOS 17.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x100ad4437fff8)
* frame #0: 0x00000001ca2264ec AttributeGraph`AG::swift::existential_type_metadata::project_value(void const*) const + 40
frame #1: 0x00000001ca2349a8 AttributeGraph`AG::LayoutDescriptor::compare_existential_values(AG::swift::existential_type_metadata const*, unsigned char const*, unsigned char const*, unsigned int) + 108
frame #2: 0x00000001ca21b938 AttributeGraph`AG::LayoutDescriptor::Compare::operator()(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 560
frame #3: 0x00000001ca21b9b8 AttributeGraph`AG::LayoutDescriptor::Compare::operator()(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 688
frame #4: 0x00000001ca21b674 AttributeGraph`AG::LayoutDescriptor::compare(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 96
frame #5: 0x00000001ca21afb0 AttributeGraph`AGGraphSetOutputValue + 268
frame #6: 0x00000001a7bdd924 SwiftUI`___lldb_unnamed_symbol227590 + 72
frame #7: 0x00000001a6ce9194 SwiftUI`___lldb_unnamed_symbol111702 + 20
frame #8: 0x000000019bca3994 libswiftCore.dylib`Swift.withUnsafePointer<τ_0_0, τ_0_1>(to: inout τ_0_0, _: (Swift.UnsafePointer<τ_0_0>) throws -> τ_0_1) throws -> τ_0_1 + 28
frame #9: 0x00000001a6c6d70c SwiftUI`___lldb_unnamed_symbol110270 + 1592
frame #10: 0x00000001a7bdeb3c SwiftUI`___lldb_unnamed_symbol227617 + 408
frame #11: 0x00000001a7bde698 SwiftUI`___lldb_unnamed_symbol227614 + 876
frame #12: 0x00000001a7619cfc SwiftUI`___lldb_unnamed_symbol184045 + 32
frame #13: 0x00000001ca21e854 AttributeGraph`AG::Graph::UpdateStack::update() + 512
frame #14: 0x00000001ca215504 AttributeGraph`AG::Graph::update_attribute(AG::data::ptr<AG::Node>, unsigned int) + 424
frame #15: 0x00000001ca21ff58 AttributeGraph`AG::Subgraph::update(unsigned int) + 848
frame #16: 0x00000001a7a621d4 SwiftUI`___lldb_unnamed_symbol216794 + 384
frame #17: 0x00000001a7a63610 SwiftUI`___lldb_unnamed_symbol216852 + 24
frame #18: 0x00000001a710a638 SwiftUI`___lldb_unnamed_symbol143862 + 28
frame #19: 0x00000001a7b55a0c SwiftUI`___lldb_unnamed_symbol223201 + 108
frame #20: 0x00000001a7b481f4 SwiftUI`___lldb_unnamed_symbol223031 + 96
frame #21: 0x00000001a710187c SwiftUI`___lldb_unnamed_symbol143639 + 84
frame #22: 0x00000001a7a635d8 SwiftUI`___lldb_unnamed_symbol216851 + 200
frame #23: 0x00000001a7a634c4 SwiftUI`___lldb_unnamed_symbol216850 + 72
frame #24: 0x00000001a74514c0 SwiftUI`___lldb_unnamed_symbol170645 + 28
frame #25: 0x00000001a6d196d4 SwiftUI`___lldb_unnamed_symbol114472 + 120
frame #26: 0x00000001a6d19780 SwiftUI`___lldb_unnamed_symbol114473 + 72
frame #27: 0x00000001a490ad94 UIKitCore`_UIUpdateSequenceRun + 84
frame #28: 0x00000001a490a484 UIKitCore`schedulerStepScheduledMainSection + 144
frame #29: 0x00000001a490a540 UIKitCore`runloopSourceCallback + 92
frame #30: 0x00000001a2684acc CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28
frame #31: 0x00000001a2683d48 CoreFoundation`__CFRunLoopDoSource0 + 176
frame #32: 0x00000001a26824fc CoreFoundation`__CFRunLoopDoSources0 + 244
frame #33: 0x00000001a2681238 CoreFoundation`__CFRunLoopRun + 828
frame #34: 0x00000001a2680e18 CoreFoundation`CFRunLoopRunSpecific + 608
frame #35: 0x00000001e51415ec GraphicsServices`GSEventRunModal + 164
frame #36: 0x00000001a4a8f350 UIKitCore`-[UIApplication _run] + 888
frame #37: 0x00000001a4a8e98c UIKitCore`UIApplicationMain + 340
frame #38: 0x00000001a7457354 SwiftUI`___lldb_unnamed_symbol171027 + 176
frame #39: 0x00000001a7457198 SwiftUI`___lldb_unnamed_symbol171025 + 152
frame #40: 0x00000001a70d4434 SwiftUI`___lldb_unnamed_symbol142421 + 128
I've got the following code to generate an MDLMaterial from my own material data model:
import ModelIO

public extension MaterialModel {
    var mdlMaterial: MDLMaterial {
        let f = MDLPhysicallyPlausibleScatteringFunction()
        f.metallic.floatValue = metallic
        f.baseColor.color = CGColor(red: CGFloat(color.x),
                                    green: CGFloat(color.y),
                                    blue: CGFloat(color.z),
                                    alpha: 1.0)
        f.roughness.floatValue = roughness
        return MDLMaterial(name: name, scatteringFunction: f)
    }
}
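In case it helps reproduce, a minimal version of the export path looks something like this (a trimmed sketch; the box mesh, `model`, and the output path are placeholders for my real data):

import Foundation
import ModelIO

let mesh = MDLMesh.newBox(withDimensions: [1, 1, 1],
                          segments: [1, 1, 1],
                          geometryType: .triangles,
                          inwardNormals: false,
                          allocator: nil)

// Attach the generated material to the mesh's submesh.
(mesh.submeshes?.firstObject as? MDLSubmesh)?.material = model.mdlMaterial

let asset = MDLAsset()
asset.add(mesh)
try asset.export(to: URL(fileURLWithPath: "/tmp/testExport.usda")) // or .obj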
When exporting to OBJ, I get the expected material properties:
# Apple ModelI/O MTL File: testExport.mtl
newmtl material_1
Kd 0.163277 0.0344635 0.229603
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 0
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0
newmtl material_2
Kd 0.814449 0.227477 0.124541
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 1
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0
However when exporting USD I just get:
#usda 1.0
(
    defaultPrim = "_0"
    endTimeCode = 0
    startTimeCode = 0
    timeCodesPerSecond = 60
    upAxis = "Y"
)

def Xform "Obj0"
{
    def Mesh "_"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(896, 896, 896), (1152, 1152, 1148.3729)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
    }

    def Mesh "_0"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(898.3113, 896.921, 1014.4961), (1082.166, 1146.7178, 1152)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
        matrix4d xformOp:transform = ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )
        uniform token[] xformOpOrder = ["xformOp:transform"]
    }
}
There aren't any material properties.
FWIW, this specifies a set of common material parameters for USD: https://openusd.org/release/spec_usdpreviewsurface.html
(Note: there is no tag for ModelIO, so using SceneKit, etc.)
I received a rejection for "Your app spawns processes that continue running after the user has quit the app."
The process in question is the app's Thumbnail extension.
When I remove all of my own code from the thumbnail extension, it still continues to run after I exit my app. This is the entirety of the extension's code, which now renders blank thumbnails:
import QuickLookThumbnailing

class ThumbnailProvider: QLThumbnailProvider {
    override init() { }

    override func provideThumbnail(for request: QLFileThumbnailRequest,
                                   _ handler: @escaping (QLThumbnailReply?, Error?) -> Void) {
        let reply = QLThumbnailReply(contextSize: request.maximumSize) { (context: CGContext) -> Bool in
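            // Draw nothing into the context; returning true reports success,
            // so the resulting thumbnail is blank.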
            return true
        }
        handler(reply, nil)
    }
}
Presumably Thumbnail extensions continue to run so that Finder (among others) can generate thumbnails as necessary. AFAIK, I have no direct control over the extension's lifecycle.
Is this just App Review's mistake? The "Next Steps" are clueless:
"You can resolve this by leaving this option unchecked by default, providing the user the option to turn it on."
The app uses its own thumbnail extension to render thumbnails for document templates, which may be an uncommon thing.
I'm getting this segfault stack trace from TestFlight. Any idea about how I should approach it?
Thread 0 Crashed:
0 SwiftUI 0x000000018dd18e14 specialized static Array<A>.== infix(_:_:) + 0 (<compiler-generated>:0)
1 SwiftUI 0x000000018e2ab404 static StrokeStyle.== infix(_:_:) + 100 (<compiler-generated>:0)
2 SwiftUI 0x000000018e2ab468 protocol witness for static Equatable.== infix(_:_:) in conformance StrokeStyle + 60 (<compiler-generated>:0)
3 AttributeGraph 0x00000001b0e4feec AGDispatchEquatable + 24 (Misc.swift:160)
4 AttributeGraph 0x00000001b0e4fd68 AG::LayoutDescriptor::Compare::operator()(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 1632 (ag-value.cc:579)
5 AttributeGraph 0x00000001b0e4f674 AG::LayoutDescriptor::compare(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 96 (ag-value.cc:723)
6 AttributeGraph 0x00000001b0e4efb0 AGGraphSetOutputValue + 268 (AGGraph.mm:784)
7 SwiftUI 0x000000018e8c1eb4 closure #1 in StatefulRule.value.setter + 72 (<compiler-generated>:0)
8 SwiftUI 0x000000018defc57c partial apply for closure #1 in StatefulRule.value.setter + 20 (<compiler-generated>:0)
9 libswiftCore.dylib 0x0000000182a779e4 withUnsafePointer<A, B>(to:_:) + 28 (LifetimeManager.swift:128)
10 SwiftUI 0x000000018def9e84 closure #1 in closure #1 in UnwrapConditional.updateValue() + 360 (ConditionalMetadata.swift:286)
11 SwiftUI 0x000000018defc55c partial apply for closure #1 in closure #1 in UnwrapConditional.updateValue() + 36 (<compiler-generated>:0)
12 SwiftUI 0x000000018def8374 ConditionalTypeDescriptor.project(at:baseIndex:_:) + 192 (ConditionalMetadata.swift:203)
13 SwiftUI 0x000000018def8458 ConditionalTypeDescriptor.project(at:baseIndex:_:) + 420 (ConditionalMetadata.swift:212)
14 SwiftUI 0x000000018def9cf0 closure #1 in UnwrapConditional.updateValue() + 136 (ConditionalMetadata.swift:283)
15 SwiftUI 0x000000018defc530 partial apply for closure #1 in UnwrapConditional.updateValue() + 28 (<compiler-generated>:0)
16 libswiftCore.dylib 0x0000000182a779e4 withUnsafePointer<A, B>(to:_:) + 28 (LifetimeManager.swift:128)
17 SwiftUI 0x000000018def9c1c UnwrapConditional.updateValue() + 260 (ConditionalMetadata.swift:282)
18 SwiftUI 0x000000018e3b3de8 partial apply for implicit closure #1 in closure #1 in closure #1 in Attribute.init<A>(_:) + 32 (<compiler-generated>:0)
19 AttributeGraph 0x00000001b0e52854 AG::Graph::UpdateStack::update() + 512 (ag-graph-update.cc:578)
20 AttributeGraph 0x00000001b0e49504 AG::Graph::update_attribute(AG::data::ptr<AG::Node>, unsigned int) + 424 (ag-graph-update.cc:719)
....
137 UIKitCore 0x000000018b841cf0 UIApplicationMain + 340 (UIApplication.m:5266)
138 SwiftUI 0x000000018e1f2ff8 closure #1 in KitRendererCommon(_:) + 176 (UIKitApp.swift:37)
139 SwiftUI 0x000000018e1f2e3c runApp<A>(_:) + 152 (UIKitApp.swift:14)
140 SwiftUI 0x000000018de6fda0 static App.main() + 128 (App.swift:114)
141 Sculptura 0x0000000100b61f50 static SculpturaApp.$main() + 24 (SculpturaApp.swift:17)
142 Sculptura 0x0000000100b61f50 main + 36 (SculpturaApp.swift:0)
143 dyld 0x00000001abaafd44 start + 2104 (dyldMain.cpp:1269)
High up in the trace there's some ZStack layout stuff. Maybe I should just try simplifying my view hierarchy?
(I wonder if this would be more stable if AttributeGraph weren't written in C++.)
I'm trying to implement de-noising of AO in my app, using the MPSDynamicScene example as a guide: https://developer.apple.com/documentation/metalperformanceshaders/animating_and_denoising_a_raytraced_scene
In that example, it computes motion vectors in UV coordinates, resulting in very small values:
// Compute motion vectors
if (uniforms.frameIndex > 0) {
    // Map current pixel location to 0..1
    float2 uv = in.position.xy / float2(uniforms.width, uniforms.height);

    // Unproject the position from the previous frame then transform it from
    // NDC space to 0..1
    float2 prevUV = in.prevPosition.xy / in.prevPosition.w * float2(0.5f, -0.5f) + 0.5f;

    // Next, remove the jittering which was applied for antialiasing from both
    // sets of coordinates
    uv -= uniforms.jitter;
    prevUV -= prevUniforms.jitter;

    // Then the motion vector is simply the difference between the two
    motionVector = uv - prevUV;
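    // Note: this leaves motionVector in normalized UV units, not texels.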
}
Yet the documentation for MPSSVGF seems to indicate the offsets should be expressed in texels:
"The motion vector texture must be at least a two channel texture representing how many texels each texel in the source image(s) have moved since the previous frame. The remaining channels will be ignored if present. This texture may be nil, in which case the motion vector is assumed to be zero, which is suitable for static images."
Is this a mistake in the example code?
Asking because doing something similar in my own app leaves AO trails, which would indicate the motion vector texture values are too small in magnitude. I don't really see trails in the example, even when I speed up the animation, but that could be because the example scene is monochrome.
Update:
If I multiply the UV offsets by the size of the texture, I get a bad result, which seems to indicate the header is misleading and the values are in fact expected in UV coordinates. So perhaps the trails I'm seeing in my app have some other cause.
I also wonder who is actually using this API other than me? I would think most game engines are doing their own thing. Perhaps some of Apple's own code uses it.
I've got a scene which renders as I expect:
but in the acceleration structure inspector, the kraken primitive doesn't render:
In the list on the left, the structure is there. As expected, there is just one bounding-box primitive, since most of the work happens in the intersection function. (I'm doing it this way because I've already built my own octree, and rebuilding BVHs for dynamic geometry takes too long.)
This is just based on the SimplePathTracer example.
The signatures of the sphereIntersectionFunction and octreeIntersectionFunction aren't that different:
[[intersection(bounding_box, triangle_data, instancing)]]
BoundingBoxIntersection sphereIntersectionFunction(
    // Ray parameters passed to the ray intersector below
    float3 origin [[origin]],
    float3 direction [[direction]],
    float minDistance [[min_distance]],
    float maxDistance [[max_distance]],
    // Information about the primitive.
    unsigned int primitiveIndex [[primitive_id]],
    unsigned int geometryIndex [[geometry_intersection_function_table_offset]],
    // Custom resources bound to the intersection function table.
    device void *resources [[buffer(0), function_constant(useResourcesBuffer)]]
#if SUPPORTS_METAL_3
    , const device Sphere* perPrimitiveData [[primitive_data]]
#endif
    , ray_data IntersectionPayload& payload [[payload]])
{
vs.
[[intersection(bounding_box, triangle_data, instancing)]]
BoundingBoxIntersection octreeIntersectionFunction(
    // Ray parameters passed to the ray intersector below
    float3 origin [[origin]],
    float3 direction [[direction]],
    float minDistance [[min_distance]],
    float maxDistance [[max_distance]],
    // Information about the primitive.
    unsigned int primitiveIndex [[primitive_id]],
    unsigned int geometryIndex [[geometry_intersection_function_table_offset]],
    // Custom resources bound to the intersection function table.
    device void *resources [[buffer(0)]],
    const device BlockInfo* perPrimitiveData [[primitive_data]],
    ray_data IntersectionPayload& payload [[payload]])
Note: I'm running Xcode 15.0 beta 5 (15A5209g), since even the unmodified SimplePathTracer example project hangs the acceleration structure viewer on Xcode 14.
Update:
Replacing the octreeIntersectionFunction's code with just a hard-coded sphere does render. Perhaps the viewer imposes a time (or instruction count) limit on intersection functions so as to not hang the GPU?
In the Platforms State of the Union, there was a reference to "custom gestures" in SwiftUI among the new features. I didn't see anything about it in the What's new in SwiftUI session. Did I miss it? Anyone have more info?
I have an app (currently for sale on the Mac App Store) which is a programming environment for audio processing (DSP node graph). I would like it to be able to export apps that are ready to be uploaded to the App Store or Mac App Store (including Audio Unit extensions).
Can I code sign from within my Mac App Store app? (It seems I can use Process to invoke codesign. Otherwise, perhaps I could add the source for codesign to my app; it seems to be open source.)
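For instance, something like this (a sketch only; the signing identity and app path are placeholders, and I'm not sure the Mac App Store sandbox even permits spawning codesign):

import Foundation

let codesign = Process()
codesign.executableURL = URL(fileURLWithPath: "/usr/bin/codesign")
codesign.arguments = ["--sign", "Developer ID Application: Example (TEAMID)",
                      "--force", "/path/to/Exported.app"]
try codesign.run()
codesign.waitUntilExit()
// A nonzero terminationStatus would mean signing failed.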
Is this whole process too hard for a solo developer to take on?
What resources should I look at?
thanks!
I'm getting this error when using fragmentLinkedFunctions in Metal.
Compiler failed to build request
exception: Error Domain=CompilerError Code=2 "
Linking two modules of different data layouts: '' is '' whereas '1' is 'e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v16:16:16-v24:32:32-v32:32:32-v48:64:64-v64:64:64-v96:128:128-v128:128:128-v192:256:256-v256:256:256-v512:512:512-v1024:1024:1024-n8:16:32'
SC compilation failure
More boolean const than hw allows" UserInfo={NSLocalizedDescription=
Linking two modules of different data layouts: '' is '' whereas '1' is 'e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v16:16:16-v24:32:32-v32:32:32-v48:64:64-v64:64:64-v96:128:128-v128:128:128-v192:256:256-v256:256:256-v512:512:512-v1024:1024:1024-n8:16:32'
SC compilation failure
More boolean const than hw allows}
Anyone know what that all means?
If I replace the body of my intersection function with just return {false, 0.0f}, I get only the "More boolean const than hw allows" error.
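For context, my pipeline setup is roughly the following (a sketch; the function names are placeholders for my actual shaders):

import Metal

let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!

// Functions to be linked into the fragment stage.
let linked = MTLLinkedFunctions()
linked.functions = [library.makeFunction(name: "myIntersectionFunction")!]

let desc = MTLRenderPipelineDescriptor()
desc.vertexFunction = library.makeFunction(name: "vertexMain")
desc.fragmentFunction = library.makeFunction(name: "fragmentMain")
desc.colorAttachments[0].pixelFormat = .bgra8Unorm
desc.fragmentLinkedFunctions = linked

let pipeline = try device.makeRenderPipelineState(descriptor: desc)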
I'm extending an AudioUnit to generate multi-channel output, and trying to write a unit test using AVAudioEngine. My test installs a tap on the AVAudioNode's output bus and ensures the output is not silence. This works for stereo.
I've currently got:
auto avEngine = [[AVAudioEngine alloc] init];
[avEngine attachNode:avAudioUnit];
auto format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100. channels:channelCount];
[avEngine connect:avAudioUnit to:avEngine.mainMixerNode format:format];
where avAudioUnit is my AU.
So it seems I need to do more than simply setting the channel count for the format when connecting, because after this code, [avAudioUnit outputFormatForBus:0].channelCount is still 2.
Printing the graph yields:
AVAudioEngineGraph 0x600001e0a200: initialized = 1, running = 1, number of nodes = 3
******** output chain ********
node 0x600000c09a80 {'auou' 'ahal' 'appl'}, 'I'
inputs = 1
(bus0, en1) <- (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
node 0x600000c09e00 {'aumx' 'mcmx' 'appl'}, 'I'
inputs = 1
(bus0, en1) <- (bus0) 0x600000c14300, {'augn' 'brnz' 'brnz'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
outputs = 1
(bus0, en1) -> (bus0) 0x600000c09a80, {'auou' 'ahal' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
node 0x600000c14300 {'augn' 'brnz' 'brnz'}, 'I'
outputs = 1
(bus0, en1) -> (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
So AVAudioEngine just silently ignores whatever channel counts I pass to it.
If I do:
auto numHardwareOutputChannels = [avEngine.outputNode outputFormatForBus:0].channelCount;
NSLog(@"hardware output channels %d\n", numHardwareOutputChannels);
I get 30, because I have an audio interface connected. So I would think AVAudioEngine would support this. I've also tried setting the format explicitly on the connection between the mainMixerNode and the outputNode to no avail.
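One more thing on my list to try (a sketch in Swift for brevity, assuming the AU actually publishes support for more than two channels): set the format on the AU's own output bus before connecting, since the engine may be deriving the connection format from the bus.

import AVFoundation

// Mirrors the Objective-C setup above.
let avEngine = AVAudioEngine()
avEngine.attach(avAudioUnit)

let format = AVAudioFormat(standardFormatWithSampleRate: 44100,
                           channels: channelCount)!

// setFormat(_:) throws if the AU rejects the channel count.
try avAudioUnit.auAudioUnit.outputBusses[0].setFormat(format)

avEngine.connect(avAudioUnit, to: avEngine.mainMixerNode, format: format)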
I'm getting the following error on Intel Iris integrated graphics. Code works well on newer Mac GPUs as well as Apple GPUs.
Execution of the command buffer was aborted due to an error during execution. Invalid Resource (00000009:kIOAccelCommandBufferCallbackErrorInvalidResource)
The error is for a compute command, not a draw command.
The constant isn't in the documentation. All buffers and textures seem to be created successfully. I've also checked that the GPU supports the required threadgroup size for the compute pipeline.
thanks!
How should an App Extension (in this case an Audio Unit Extension) determine if an IAP has been purchased in the containing app? (and related: can an IAP be purchased from within the extension?)
On macOS, I suppose I could share the receipt file with the extension? And on iOS, I suppose I could write some data to shared UserDefaults in an app group.
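For the app-group route, the sketch would be something like this (the group identifier and key are placeholders):

import Foundation

let shared = UserDefaults(suiteName: "group.com.example.myapp")!

// In the containing app, after verifying the purchase:
shared.set(true, forKey: "iap.proUnlocked")

// In the Audio Unit extension:
let purchased = shared.bool(forKey: "iap.proUnlocked")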
Is there any official guidance on this?
thanks!
I have on the order of 50k small meshes (~64 vertices each), all with different connectivity, some subset of which change each frame (generated by a compute kernel). Can I render those in a performant way with Metal?
I'm assuming 50k separate draw calls would be too slow. I have a few ideas:
encode those draw calls on the GPU (see the sketch after this list)
or lay out the meshes linearly in blocks, with some maximum size, and use a single draw call, but wasting vertex shader threads on the blocks that aren't full
or use another kernel to combine the little meshes into a big mesh
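For the first idea, a minimal host-side sketch of what I have in mind (assuming the device supports indirect command buffers; maxMeshCount and the bind counts are placeholders):

import Metal

let device = MTLCreateSystemDefaultDevice()!

let icbDesc = MTLIndirectCommandBufferDescriptor()
icbDesc.commandTypes = .draw                 // non-indexed draws
icbDesc.inheritBuffers = false
icbDesc.maxVertexBufferBindCount = 2
icbDesc.maxFragmentBufferBindCount = 1

let maxMeshCount = 50_000
let icb = device.makeIndirectCommandBuffer(descriptor: icbDesc,
                                           maxCommandCount: maxMeshCount,
                                           options: [])!

// The compute kernel that generates the meshes would also fill one
// MTLIndirectRenderCommand per mesh; at render time:
// renderEncoder.executeCommandsInBuffer(icb, range: 0..<maxMeshCount)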
thanks!