
Why Auto Layout?
Why Auto Layout instead of the horizontal and vertical boxes used by other systems like Qt, or a layout system like CSS? What is missing in those other approaches?

Over the past couple of years, Auto Layout has given me mostly headaches, and I now generally try to avoid it, so I want to know if there's a good reason for my suffering.

thanks!
Replies: 8 · Boosts: 0 · Views: 3.0k · Jun ’18
Cocoa Views Available: 0
auval is reporting:

    Cocoa Views Available: 0

Anyone know how to troubleshoot this? I started with the Xcode AUv3 template and then added my own AudioUnit, which I already use in my stand-alone app. Validation passes otherwise, but I can't load the UI in Logic.
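For context, in the Xcode AUv3 template the UI is delivered by the extension's principal view controller, which conforms to AUAudioUnitFactory. A minimal sketch of that shape is below; MyAUViewController and MyAudioUnit are placeholder names, and this is not a diagnosis of the auval result:

    #import <CoreAudioKit/CoreAudioKit.h>
    #import <AudioToolbox/AudioToolbox.h>
    #import "MyAudioUnit.h" // stand-in for the AU subclass used in the stand-alone app

    @interface MyAUViewController : AUViewController <AUAudioUnitFactory>
    @end

    @implementation MyAUViewController {
        MyAudioUnit *_audioUnit;
    }

    // The host asks the extension's principal class to create the audio unit;
    // the same view controller then provides the plug-in's UI.
    - (AUAudioUnit *)createAudioUnitWithComponentDescription:(AudioComponentDescription)desc
                                                       error:(NSError **)error {
        _audioUnit = [[MyAudioUnit alloc] initWithComponentDescription:desc error:error];
        return _audioUnit;
    }

    @end

In the template, the extension's NSExtensionPrincipalClass points at this view controller, so that wiring is one thing worth double-checking after swapping in a custom AudioUnit.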
Replies: 0 · Boosts: 0 · Views: 2.3k · Jul ’19
UIDocumentBrowserViewController can't create documents under "On My iPhone"
For some reason, my app cannot create documents in the "On My iPhone" section of its file browser.

I'm using UIDocumentBrowserViewController, and UISupportsDocumentBrowser is true in my Info.plist. I see folders for various other apps under "On My iPhone", but there isn't one for mine. I can create documents in iCloud Drive.

Is there a setting that enables document creation under "On My iPhone"? Why would this ever be disabled? (ARGH!)
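For reference, document creation with the browser goes through the delegate callback sketched below; the browser itself places the new file wherever the user chose (iCloud Drive or "On My iPhone"). The file extension is a placeholder, and this is not a claim about why the local section is unavailable:

    #import <UIKit/UIKit.h>

    // In the UIDocumentBrowserViewControllerDelegate:
    - (void)documentBrowser:(UIDocumentBrowserViewController *)controller
        didRequestDocumentCreationWithHandler:(void (^)(NSURL * _Nullable,
                                                         UIDocumentBrowserImportMode))importHandler {
        // Write a blank template to a temporary location...
        NSURL *templateURL = [[NSURL fileURLWithPath:NSTemporaryDirectory()]
                              URLByAppendingPathComponent:@"Untitled.mydoc"];
        if ([[NSData data] writeToURL:templateURL atomically:YES]) {
            // ...and let the browser move it to the location the user picked.
            importHandler(templateURL, UIDocumentBrowserImportModeMove);
        } else {
            // Passing nil with the .none mode cancels the creation request.
            importHandler(nil, UIDocumentBrowserImportModeNone);
        }
    }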
Replies: 3 · Boosts: 0 · Views: 973 · Aug ’19
why the complex types?
A lot of the frustration I'm having with trying to do simple things in SwiftUI seems to come from the complexity of the types involved: type inference failing, me being unable to come up with the right types myself, and error messages not localized to where the actual mistake is.

Sure, one can blame me for not completely understanding this, but it's not user friendly. Why does View need to be a generic protocol? Why all the complexity? Why the hard-to-understand error messages?

Mind you: I've been coding in Swift for several years now, so while I'm not an expert, I'm no novice either.
Replies: 1 · Boosts: 0 · Views: 529 · Aug ’19
Troubleshooting the AUv3 sample code on macOS
We haven't been able to get Apple's AUv3 sample code to work reliably across our machines. I'm referring to the code here: https://developer.apple.com/documentation/audiotoolbox/creating_custom_audio_effects

- iMac Pro (Late 2017): validates and seems to work
- MacBook Pro 15" (2018): auval can't find the plugin
- MacBook Pro 15" (Late 2013): auval -a segfaults, and auval -v afux filtr Demo returns:

    Input/Output Channel Handling:
    1-1   1-2   1-4   1-5   1-6   1-7   1-8   2-2   2-4   2-5   2-6   2-7   2-8   4-4   4-5   5-5   6-6   7-7   8-8
    X     X     X     X     X     X     X
    ERROR: -10868 IN CALL Cannot Set Output Num Channels:2 when unit says it can
    ERROR: -10868 IN CALL Cannot Set Output Num Channels:4 when unit says it can
    ERROR: -10868 IN CALL Cannot Set Output Num Channels:5 when unit says it can
    ERROR: -10868 IN CALL Cannot Set Output Num Channels:6 when unit says it can
    ERROR: -10868 IN CALL Cannot Set Output Num Channels:7 when unit says it can
    ERROR: -10868 IN CALL Cannot Set Output Num Channels:8 when unit says it can
    ERROR: -10868 IN CALL Cannot Set Output Num Channels:2 when unit says it can

I've tried deleting ~/Library/Caches/AudioUnitCache and rebooting. I'm not sure what else to do.
Replies: 1 · Boosts: 0 · Views: 2.5k · Feb ’20
reducing memory usage when exporting
I've got an issue with Model I/O using over 2 GB of RAM to export a 400 MB file.

My app (3D sculpting) tends to produce big meshes. For simple file formats (OBJ, STL) I use my own code and stream out the data, keeping memory usage very low. But for a complex format like USD, I use Model I/O, and I'm wondering if there's any way I can reduce memory usage.

I've already tried using memory mapping for the vertex and index data (NSDataReadingMappedAlways), but it doesn't seem to help.
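For context, here is roughly what the memory-mapped setup described above looks like. File names, the vertex layout (position-only float3), and counts are placeholders, and this is not a claim that it reduces Model I/O's internal allocations during export:

    #import <ModelIO/ModelIO.h>

    // Map the vertex and index files, wrap them in MDLMeshBufferData,
    // build a mesh, and export through MDLAsset.
    static BOOL ExportMappedMesh(NSURL *vertexURL, NSURL *indexURL, NSURL *outURL,
                                 NSUInteger vertexCount, NSUInteger indexCount,
                                 NSError **error)
    {
        NSData *vertexData = [NSData dataWithContentsOfURL:vertexURL
                                                   options:NSDataReadingMappedAlways
                                                     error:error];
        NSData *indexData = [NSData dataWithContentsOfURL:indexURL
                                                  options:NSDataReadingMappedAlways
                                                    error:error];
        if (!vertexData || !indexData) return NO;

        MDLVertexDescriptor *desc = [[MDLVertexDescriptor alloc] init];
        desc.attributes[0] = [[MDLVertexAttribute alloc] initWithName:MDLVertexAttributePosition
                                                               format:MDLVertexFormatFloat3
                                                               offset:0
                                                          bufferIndex:0];
        desc.layouts[0] = [[MDLVertexBufferLayout alloc] initWithStride:sizeof(float) * 3];

        MDLSubmesh *submesh =
            [[MDLSubmesh alloc] initWithIndexBuffer:[[MDLMeshBufferData alloc] initWithType:MDLMeshBufferTypeIndex
                                                                                       data:indexData]
                                         indexCount:indexCount
                                          indexType:MDLIndexBitDepthUInt32
                                       geometryType:MDLGeometryTypeTriangles
                                           material:nil];
        MDLMesh *mesh =
            [[MDLMesh alloc] initWithVertexBuffer:[[MDLMeshBufferData alloc] initWithType:MDLMeshBufferTypeVertex
                                                                                     data:vertexData]
                                      vertexCount:vertexCount
                                       descriptor:desc
                                        submeshes:@[ submesh ]];

        MDLAsset *asset = [[MDLAsset alloc] init];
        [asset addObject:mesh];
        return [asset exportAssetToURL:outURL error:error]; // e.g. a .usdc destination
    }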
Replies: 1 · Boosts: 0 · Views: 618 · Apr ’20
binary framework workflow without duplicating project files
I'm using an xcframework to hide code. So I've got an App.xcodeproj and the xcframework in one repo that's visible to contractors. Then I have another, private project that has everything App.xcodeproj has, except that it uses the framework project as a sub-project (instead of the xcframework) for easier development.

This works reasonably well, except that I have to keep the internal (private) and external projects in sync: changes the contractors make to App.xcodeproj have to be manually brought over to the internal project, updating paths accordingly.

Is there a better way to do this?
Replies: 0 · Boosts: 0 · Views: 617 · Apr ’20
numerical result changes with optimizer turned on
On my machine (macOS 10.15.5 (19F101)), the result of this simple program changes when the optimizer (-O2) is turned on:

    #include <stdio.h>
    #include <math.h>
    #include "load.h"

    int main() {
        float x = load(0x3cc3a2be);
        float result = 100 * powf(2, x);
        printf("result: %f\n", result);
        return 0;
    }

Without optimization: result: 101.669098
With -O2: result: 101.669106

load is defined in a separate translation unit so this all doesn't get inlined and constant folded:

    #include <stdint.h>

    float load(uint32_t value) {
        union {
            float f;
            uint32_t u;
        } f2u;
        f2u.u = value;
        return f2u.f;
    }

If you use godbolt.org, you'll see that clang's optimizer replaces powf(2, x) with exp2f, which produces very slightly different results (at least on my machine). Here's the generated assembly on my machine:

        movl    $1019454142, %edi       ## imm = 0x3CC3A2BE
        callq   _load
        callq   _exp2f

Is this a bug? Does the optimizer claim to produce the same numerical results as non-optimized code?
Replies: 0 · Boosts: 0 · Views: 651 · Jul ’20
all data in managed buffer copied
I'm modifying less than 1 MB of a 256 MB managed buffer (calling didModifyRange), but according to Metal System Trace, the GPU copies the whole buffer (SDMA0 channel, "Page On 268435456 bytes"), taking 13 ms.

I'm making lots of small modifications (~4k) per frame. I also tried coalescing them into a single call to didModifyRange (~66 MB), and still the entire buffer is copied. I also tried calling didModifyRange for just the first byte, and then the copied data is small.

So I'm wondering why didModifyRange doesn't seem to be efficient for many small updates to a big buffer?
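For reference, this is the update pattern in question in minimal form; buffer size, offsets, and names are illustrative:

    #import <Metal/Metal.h>
    #include <string.h>

    // Write into the CPU side of a managed buffer, then flag only the bytes
    // that changed so Metal can synchronize that range to the GPU copy.
    static void UpdateManagedBuffer(id<MTLBuffer> buffer, NSUInteger offset,
                                    const void *srcBytes, NSUInteger length)
    {
        memcpy((uint8_t *)buffer.contents + offset, srcBytes, length);
        [buffer didModifyRange:NSMakeRange(offset, length)];
    }

    // The buffer itself is created with managed storage, e.g.:
    //   id<MTLBuffer> buffer =
    //       [device newBufferWithLength:256 * 1024 * 1024
    //                            options:MTLResourceStorageModeManaged];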
Replies: 1 · Boosts: 0 · Views: 719 · Aug ’21
rendering thousands of small meshes
I have on the order of 50k small meshes (~64 vertices each), all with different connectivity, some subset of which change each frame (generated by a compute kernel). Can I render those in a performant way with Metal? I'm assuming 50k separate draw calls would be too slow.

I have a few ideas:
- encode the draw calls on the GPU (sketched below)
- lay out the meshes linearly in fixed-size blocks and use a single draw call, wasting vertex shader threads on the blocks that aren't full
- use another kernel to combine the little meshes into one big mesh

thanks!
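A rough sketch of the "encode the draw calls on the GPU" idea using an indirect command buffer (ICB). The compute kernel that writes one indexed draw per mesh into the ICB is omitted, and the names, buffer counts, and mesh count are illustrative:

    #import <Metal/Metal.h>

    // Create an ICB sized for one indexed draw per mesh. A compute pass fills
    // it on the GPU; the render pass then submits the whole batch at once.
    static id<MTLIndirectCommandBuffer> MakeMeshICB(id<MTLDevice> device, NSUInteger meshCount)
    {
        MTLIndirectCommandBufferDescriptor *desc = [MTLIndirectCommandBufferDescriptor new];
        desc.commandTypes = MTLIndirectCommandTypeDrawIndexed;
        desc.inheritPipelineState = YES;   // the render pipeline must set supportIndirectCommandBuffers
        desc.inheritBuffers = NO;
        desc.maxVertexBufferBindCount = 2; // e.g. shared vertex buffer + per-mesh data
        desc.maxFragmentBufferBindCount = 0;

        return [device newIndirectCommandBufferWithDescriptor:desc
                                              maxCommandCount:meshCount
                                                      options:MTLResourceStorageModePrivate];
    }

    // After the compute pass has filled the ICB, the render encoder submits it
    // with a single call:
    //   [renderEncoder executeCommandsInBuffer:icb withRange:NSMakeRange(0, meshCount)];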
Replies: 2 · Boosts: 1 · Views: 1.4k · Sep ’21
IAP in App Extension
How should an App Extension (in this case an Audio Unit Extension) determine whether an IAP has been purchased in the containing app? (And related: can an IAP be purchased from within the extension?)

On macOS, I suppose I could share the receipt file with the extension, and on iOS I suppose I could write some data to shared UserDefaults in an app group. Is there any official guidance on this?

thanks!
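For illustration, the shared-UserDefaults idea mentioned above might look like the sketch below. The group identifier and key are placeholders, and this isn't official guidance (a plain flag like this is also easy to tamper with):

    #import <Foundation/Foundation.h>

    // Placeholder identifiers; both targets must be in the same app group.
    static NSString * const kAppGroupID   = @"group.com.example.myapp";
    static NSString * const kPurchasedKey = @"com.example.myapp.iap.purchased";

    // In the containing app, after the transaction completes:
    static void RecordPurchase(void)
    {
        NSUserDefaults *shared = [[NSUserDefaults alloc] initWithSuiteName:kAppGroupID];
        [shared setBool:YES forKey:kPurchasedKey];
    }

    // In the Audio Unit extension:
    static BOOL IsPurchased(void)
    {
        NSUserDefaults *shared = [[NSUserDefaults alloc] initWithSuiteName:kAppGroupID];
        return [shared boolForKey:kPurchasedKey];
    }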
Replies: 1 · Boosts: 1 · Views: 1.1k · Oct ’21
what does kIOAccelCommandBufferCallbackErrorInvalidResource mean?
I'm getting the following error on Intel Iris integrated graphics. Code works well on newer Mac GPUs as well as Apple GPUs.

    Execution of the command buffer was aborted due to an error during execution.
    Invalid Resource (00000009:kIOAccelCommandBufferCallbackErrorInvalidResource)

The error is for a compute command, not a draw command. The constant isn't in the documentation. All buffers and textures seem to be created successfully. I've also checked that the GPU supports the required threadgroup size for the compute pipeline.

thanks!
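Not a diagnosis, but one way to get more detail than the bare kIOAccel constant is to opt in to extended command-buffer error info and inspect the per-encoder status in the completion handler. A minimal sketch, assuming macOS 11 or later:

    #import <Metal/Metal.h>

    // Request per-encoder execution status so a failed command buffer reports
    // which encoder hit the problem.
    static id<MTLCommandBuffer> MakeDebuggableCommandBuffer(id<MTLCommandQueue> queue)
    {
        MTLCommandBufferDescriptor *desc = [MTLCommandBufferDescriptor new];
        desc.errorOptions = MTLCommandBufferErrorOptionEncoderExecutionStatus;

        id<MTLCommandBuffer> cb = [queue commandBufferWithDescriptor:desc];
        [cb addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
            if (buffer.error != nil) {
                NSLog(@"command buffer failed: %@", buffer.error);
                // Per-encoder info, if the driver provides it.
                NSArray<id<MTLCommandBufferEncoderInfo>> *infos =
                    buffer.error.userInfo[MTLCommandBufferEncoderInfoErrorKey];
                for (id<MTLCommandBufferEncoderInfo> info in infos) {
                    NSLog(@"encoder %@: error state %ld", info.label, (long)info.errorState);
                }
            }
        }];
        return cb;
    }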
Replies: 3 · Boosts: 0 · Views: 1.2k · Apr ’22
testing multichannel AudioUnit output with AVAudioEngine
I'm extending an AudioUnit to generate multi-channel output and trying to write a unit test using AVAudioEngine. My test installs a tap on the AVAudioNode's output bus and ensures the output is not silence. This works for stereo. I've currently got:

    auto avEngine = [[AVAudioEngine alloc] init];
    [avEngine attachNode:avAudioUnit];

    auto format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100. channels:channelCount];
    [avEngine connect:avAudioUnit to:avEngine.mainMixerNode format:format];

where avAudioUnit is my AU. So it seems I need to do more than simply setting the channel count for the format when connecting, because after this code, [avAudioUnit outputFormatForBus:0].channelCount is still 2.

Printing the graph yields:

    AVAudioEngineGraph 0x600001e0a200: initialized = 1, running = 1, number of nodes = 3

    ******** output chain ********

    node 0x600000c09a80 {'auou' 'ahal' 'appl'}, 'I'
      inputs = 1
        (bus0, en1) <- (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

    node 0x600000c09e00 {'aumx' 'mcmx' 'appl'}, 'I'
      inputs = 1
        (bus0, en1) <- (bus0) 0x600000c14300, {'augn' 'brnz' 'brnz'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
      outputs = 1
        (bus0, en1) -> (bus0) 0x600000c09a80, {'auou' 'ahal' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

    node 0x600000c14300 {'augn' 'brnz' 'brnz'}, 'I'
      outputs = 1
        (bus0, en1) -> (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

So AVAudioEngine just silently ignores whatever channel count I pass to it. If I do:

    auto numHardwareOutputChannels = [avEngine.outputNode outputFormatForBus:0].channelCount;
    NSLog(@"hardware output channels %d\n", numHardwareOutputChannels);

I get 30, because I have an audio interface connected. So I would think AVAudioEngine would support this. I've also tried setting the format explicitly on the connection between the mainMixerNode and the outputNode, to no avail.
Replies: 0 · Boosts: 2 · Views: 1.3k · Jun ’22