Posts

Post not yet marked as solved
3 Replies
841 Views
My app uses an external PCI board connected via Thunderbolt to a 2019 MacBook Pro (Big Sur beta 10). The strange thing is that the example apps shipped with the driver can communicate with it, but the same apps compiled with Xcode 12.2 beta cannot communicate with the driver, even though the code is identical. When I installed the driver together with the example apps, a dialog appeared saying that this software comes from an external source and that its use must be granted in the security settings. I did that, and after that the apps worked fine. Is there perhaps a similar setting now in Xcode where I must allow the compiled app to communicate with third-party drivers? The code of the app was unchanged and worked with a previous macOS version.
Posted Last updated
Post not yet marked as solved
0 Replies
346 Views
Above Xcode's navigator area and utility area there are buttons for switching the content views below. I implemented the standard SwiftUI TabView and also a Picker with the segmented style to get the same appearance in my macOS app, but both look different. Plain buttons in an HStack above my content area solve the problem, but that feels more like a workaround. Is there perhaps a simpler way to implement an Xcode-style tab view that I have overlooked?
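For reference, the HStack workaround mentioned above could look roughly like this: a row of plain buttons switching the content below, loosely mimicking Xcode's inspector switcher. All names here (XcodeStyleTabs, the SF Symbol names) are made up for illustration, not from the original project.

```swift
import SwiftUI

// Plain "tab" buttons above a content area, switching panes via @State.
struct XcodeStyleTabs: View {
    @State private var selection = 0
    private let tabs = ["doc", "questionmark.circle", "slider.horizontal.3"]

    var body: some View {
        VStack(spacing: 0) {
            HStack(spacing: 16) {
                ForEach(tabs.indices, id: \.self) { index in
                    Button {
                        selection = index
                    } label: {
                        Image(systemName: tabs[index])
                            .foregroundColor(selection == index ? .accentColor : .secondary)
                    }
                    .buttonStyle(PlainButtonStyle())
                }
            }
            .padding(6)
            Divider()
            // Content pane switched by the buttons above.
            switch selection {
            case 0: Text("First pane")
            case 1: Text("Second pane")
            default: Text("Third pane")
            }
        }
    }
}
```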
Posted Last updated
Post not yet marked as solved
1 Replies
3.4k Views
My goal is to hide the SwiftUI slider track on macOS by using a clear color, because I set a color gradient as the slider's background view. But whatever color I use (.clear, .red, ...) with the accentColor or foregroundColor modifier, the track does not change and cannot be hidden; it still appears on top of the slider's background view.
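One possible workaround, sketched below under the assumption that the built-in track cannot be restyled: skip Slider entirely and draw a custom track plus a draggable thumb, so the gradient is the track itself. The type name GradientSlider and the colors are made up for illustration.

```swift
import SwiftUI

// A custom slider: gradient capsule as track, circle as thumb,
// value updated by a drag gesture over the whole control.
struct GradientSlider: View {
    @Binding var value: Double   // expected in 0...1

    var body: some View {
        GeometryReader { geometry in
            ZStack(alignment: .leading) {
                // Fully stylable "track" - no default line drawn on top of it.
                Capsule()
                    .fill(LinearGradient(gradient: Gradient(colors: [.blue, .red]),
                                         startPoint: .leading, endPoint: .trailing))
                    .frame(height: 4)
                // The thumb, offset proportionally to the value.
                Circle()
                    .fill(Color.white)
                    .frame(width: 16, height: 16)
                    .offset(x: CGFloat(value) * (geometry.size.width - 16))
            }
            .contentShape(Rectangle())
            .gesture(
                DragGesture(minimumDistance: 0).onChanged { drag in
                    let fraction = (drag.location.x - 8) / max(geometry.size.width - 16, 1)
                    value = min(max(Double(fraction), 0), 1)
                }
            )
        }
        .frame(height: 16)
    }
}
```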
Posted Last updated
Post marked as solved
3 Replies
2.5k Views
Please forgive me if this question is really trivial. I implemented a content view with two buttons on it. One button should sit 8px below the top edge, the other 8px above the bottom edge. Maybe I am doing something wrong, but I would use a simple ZStack here. The only problem is that I can set just one alignment for the ZStack at a time, so I can either align all views to the top edge or to the bottom edge, but not one to the top and one to the bottom. Is there a way to solve this layout problem with these three views and a ZStack?
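One common answer to this: keep the single ZStack, but give each button its own full-size frame with its own alignment, so the ZStack's alignment no longer matters. A minimal sketch (view and button labels are made up for illustration):

```swift
import SwiftUI

// Two buttons in one ZStack, one pinned 8pt from the top,
// the other 8pt from the bottom, via per-view frame alignments.
struct TwoButtonOverlay: View {
    var body: some View {
        ZStack {
            Color.gray   // the content view underneath
            Button("Top") {}
                .padding(.top, 8)
                .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .top)
            Button("Bottom") {}
                .padding(.bottom, 8)
                .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .bottom)
        }
    }
}
```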
Posted Last updated
Post not yet marked as solved
4 Replies
3.3k Views
I have a simple class conforming to the Codable protocol. Here is an example, which is the same as in the developer documentation:

```swift
class Landmark: Codable {
    var name: String
    var foundingYear: Int
}
```

Because the two properties inside the class body are basic Swift types, the compiler has nothing to complain about. But if I want to edit these properties using SwiftUI, I must also make sure that the class conforms to the ObservableObject protocol:

```swift
class Landmark: Codable, ObservableObject {
    @Published var name: String
    @Published var foundingYear: Int
}
```

Now I get an Xcode error message that the type Landmark no longer conforms to the Codable protocol. This is something I cannot understand. The property wrapper @Published should not change the types of the properties, String and Int. In my opinion, the Codable conformance should still be synthesized for the class without additional lines of code.
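For context, the usual explanation is that @Published rewrites each stored property into a `Published<Value>` wrapper, and `Published` itself is not Codable, so automatic synthesis fails. The standard workaround is to implement Codable manually, as in this sketch:

```swift
import Foundation
import Combine

// Manual Codable conformance restores encoding/decoding for the
// @Published-wrapped properties.
final class Landmark: Codable, ObservableObject {
    @Published var name: String
    @Published var foundingYear: Int

    init(name: String, foundingYear: Int) {
        self.name = name
        self.foundingYear = foundingYear
    }

    private enum CodingKeys: String, CodingKey {
        case name, foundingYear
    }

    required init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        name = try container.decode(String.self, forKey: .name)
        foundingYear = try container.decode(Int.self, forKey: .foundingYear)
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(name, forKey: .name)
        try container.encode(foundingYear, forKey: .foundingYear)
    }
}
```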
Posted Last updated
Post not yet marked as solved
1 Replies
1k Views
Inside the app delegate I use a simple SwiftUI view as the root for my macOS main window:

```swift
func applicationDidFinishLaunching(_ aNotification: Notification) {
    // Create the SwiftUI view that provides the window contents.
    let contentView = ContentView().touchBar(myTouchbar)

    // Create the window and set the content view.
    window = NSWindow(
        contentRect: NSRect(x: 0, y: 0, width: 480, height: 300),
        styleMask: [.titled, .closable, .miniaturizable, .resizable, .fullSizeContentView],
        backing: .buffered, defer: false)
    window.center()
    window.setFrameAutosaveName("Main Window")
    window.contentView = NSHostingView(rootView: contentView)
    window.makeKeyAndOrderFront(nil)
}
```

This contentView has a touch bar assigned to it. If the contentView is a simple view that can hold focus (like a TextField), then the touch bar becomes visible on my MacBook. But in my application the contentView (the root view of the NSWindow) is a layout container like an HSplitView. In that case the window appears but the touch bar is not visible. How can I use a SwiftUI touch bar with a window that has no focusable or input elements, so that the touch bar is always visible together with the window?
Posted Last updated
Post not yet marked as solved
14 Replies
7.8k Views
Can someone explain how to use the CarPlay simulator in Xcode 11 GM? Maybe I am doing something wrong. In the simulator I chose Hardware -> External Displays, and a small rectangular window appears, but the screen stays black. My goal is to start the iOS app in the simulator and see the CarPlay screen in this window side by side. On the internet I saw a video where this simulator scenario was working, but I get only the black screen and an iPhone simulator with no CarPlay option in the settings. Some people write that you must first enter some commands in Terminal before you can use the CarPlay simulator. I am not sure whether this is still necessary in Xcode 11, but here is what I used in Terminal:

```shell
defaults write com.apple.iphonesimulator CarPlay -bool YES
defaults write com.apple.iphonesimulator CarPlayExtraOptions -bool YES
defaults write com.apple.iphonesimulator CarPlayProtocols -array-add com.brand.carplay.feature
```

Must I first request the MFi profile from Apple even when testing only in the simulator? Or does the CarPlay simulator simply not work in Xcode 11? I saw another thread in the developer forum where someone said he also gets the black screen.
Posted Last updated
Post not yet marked as solved
3 Replies
1.7k Views
Does someone know how the GPU channels work exactly in Metal? I implemented a blit command encoder in two different ways, and Metal system traces showed me that one blit command was scheduled to the GPU's blit channel while the other ran on the gfx channel:

- First blit copy: shared MTLBuffer -> private MTLBuffer
- Second blit copy: CVMetalTexture -> private MTLTexture

Both blit commands were committed to the queue in separate command buffers and on separate threads. A blit encoder running generateMipmaps() on the private MTLTexture from above also runs on the gfx channel and not on the blit channel. Copying parts of one private texture to another region of a destination private texture also runs on the gfx channel. So only the blit copy from one buffer to another buffer seems to run on the blit channel, or is this wrong?
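For reference, the first case described above (shared buffer to private buffer) can be encoded roughly like this. Which hardware channel it lands on is up to the driver; this sketch only shows how such a copy is typically set up, with made-up sizes and function names:

```swift
import Metal

// Blit-copy a shared (CPU-visible) buffer into a private (GPU-only) buffer.
func copySharedToPrivate(device: MTLDevice, queue: MTLCommandQueue) -> MTLBuffer? {
    let length = 1024
    guard let shared = device.makeBuffer(length: length, options: .storageModeShared),
          let privateBuffer = device.makeBuffer(length: length, options: .storageModePrivate),
          let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return nil }

    blit.copy(from: shared, sourceOffset: 0,
              to: privateBuffer, destinationOffset: 0, size: length)
    blit.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    return privateBuffer
}
```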
Posted Last updated
Post not yet marked as solved
4 Replies
3.3k Views
I created two threads in my macOS app: a main thread that dispatches work every 40ms to a worker thread. This worker thread causes a memory leak. After some debugging it seems that the MTLCommandBuffer is the reason for the leak:

```swift
if let commandBuffer = commandQueue.makeCommandBuffer() {
    // some code here
    commandBuffer.commit()
}
```

I commented out all the code in the worker thread and the memory leak disappeared. But when I add the lines above and create only an empty command buffer and commit it to the queue, the CPU memory increases over time. The app is running on macOS and compiled with Xcode 10.3 (and Xcode 11 beta, with the same effect). Instruments cannot find any leaks; persistent allocations stay constant over a long time. Only the Xcode debugger shows this steady memory increase, and only if I create a command buffer (whether commands are encoded or it is empty). Edit: I created a new project in Xcode from the game template and selected Metal. Same effect: the debugger's memory gauge shows the same increase over time (nearly 0.1MB each second).
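A common explanation for this pattern, sketched below as an assumption rather than a confirmed diagnosis: MTLCommandBuffer is an autoreleased Objective-C object, and a long-running worker thread that never drains an autorelease pool accumulates them. Wrapping each iteration in autoreleasepool {} releases the buffer every frame. The loop and names here are illustrative, not the original app's code:

```swift
import Metal

// Per-frame Metal work on a worker thread, with an autorelease pool
// drained every iteration so command buffers don't pile up.
func workerLoop(commandQueue: MTLCommandQueue, isRunning: () -> Bool) {
    while isRunning() {
        autoreleasepool {
            if let commandBuffer = commandQueue.makeCommandBuffer() {
                // encode work here
                commandBuffer.commit()
            }
        }
    }
}
```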
Posted Last updated
Post not yet marked as solved
3 Replies
3.2k Views
Hi guys, I am working on a video application that needs realtime performance. First I used GCD, but then I saw a WWDC video where the priority decay mechanism was explained. My application is quite simple: I have an external API call that waits for the next incoming video frame. So instead of using GCD I implemented a pthread which opts out of the priority decay mechanism and sets the priority very high, as was shown in the WWDC video. This dispatch thread only waits for the next frame sync and starts several worker threads using condition flags. All the worker threads also opt out of priority decay with the same high priority as the dispatcher. They do not do much; some encode Metal commands, others read some UDP data or fetch the new video frame from the hardware board. When I used Instruments I saw that all worker threads were scheduled across all available cores, blocking them all for 10ms. The UI becomes unresponsive as well. Here is a short version of the code. Maybe I understand something wrong about pthreads, so please forgive me if this is a silly mistake in my code. GCD would be much better to use, but from the WWDC video it seems there is no way to opt out of the priority decay problem:

```c
void *dispatcherThread(void *params) {
    while (!forcedExit) {
        waitForNextFrame();
        pthread_mutex_lock(&mutex);
        needsCaptureFrame = true;
        needsProcessFrame = true;
        needsPlayoutFrame = true;
        pthread_cond_broadcast(&condition);
        pthread_mutex_unlock(&mutex);
    }
    pthread_exit(NULL);
}

void *workerThreadProcessFrame(void *params) {
    while (!forcedExit) {
        pthread_mutex_lock(&mutex);
        while (!needsProcessFrame && !forcedExit)
            pthread_cond_wait(&condition, &mutex);
        needsProcessFrame = false;
        pthread_mutex_unlock(&mutex);
        if (!forcedExit) {
            processFrame();
        }
    }
    pthread_exit(NULL);
}
```

The C function processFrame itself is bound to a Swift function. This works pretty well. The only problem is that every 40ms all worker threads block all cores of the Mac for 10ms, even when their Swift function returns in a few microseconds. Here is also the code snippet showing how the pthreads are created:

```c
void startThread(void * _Nullable (* _Nonnull start_routine)(void * _Nullable)) {
    pthread_t thread;
    pthread_attr_t attr;
    int returnVal;

    // create attributes with the standard values
    returnVal = pthread_attr_init(&attr);
    assert(!returnVal);

    // set the detach state attribute (because we don't need return values and therefore no pthread_join)
    returnVal = pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    assert(!returnVal);

    // set the scheduling policy to round robin to avoid priority decay (this is very important!!!)
    pthread_attr_setschedpolicy(&attr, SCHED_RR);

    // the thread priority is set to 45 which seems to be a good value on the mac
    struct sched_param param = {.sched_priority = 45};
    pthread_attr_setschedparam(&attr, &param);

    int threadError = pthread_create(&thread, &attr, start_routine, NULL);
    assert(!threadError);

    returnVal = pthread_attr_destroy(&attr);
    assert(!returnVal);
}
```

I would be really happy if someone had an idea why this dispatch/worker mechanism does not work, or if there is a solution with GCD that avoids the priority decay problem.
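One detail worth checking in the creation code above, sketched here in Swift as an assumption rather than a confirmed fix: POSIX leaves the default inherit-scheduling attribute implementation-defined, and on many systems a new thread simply inherits its creator's scheduling unless PTHREAD_EXPLICIT_SCHED is set, in which case the SCHED_RR policy and priority attributes would be silently ignored. The function name startRealtimeThread and the priority value are illustrative:

```swift
import Darwin

// Create a detached thread whose explicitly-set SCHED_RR policy and
// priority actually take effect (note pthread_attr_setinheritsched).
func startRealtimeThread(
    _ routine: @escaping @convention(c) (UnsafeMutableRawPointer?) -> UnsafeMutableRawPointer?
) {
    var thread: pthread_t?
    var attr = pthread_attr_t()
    pthread_attr_init(&attr)
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED)

    // Without this call the schedpolicy/schedparam attributes may be ignored.
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED)
    pthread_attr_setschedpolicy(&attr, SCHED_RR)
    var param = sched_param()
    param.sched_priority = 45
    pthread_attr_setschedparam(&attr, &param)

    pthread_create(&thread, &attr, routine, nil)
    pthread_attr_destroy(&attr)
}
```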
Posted Last updated
Post not yet marked as solved
0 Replies
627 Views
Is it possible to initialize the content of a stencil texture within a compute kernel function? In my case I want to fill zeros into the even rows and ones into the odd rows of the stencil buffer. When I use .stencil8 as the pixel format for this texture, Xcode gives me an error that the pixel format .stencil8 has no write access for a compute function, even though the usage property in the texture descriptor contains the .shaderWrite flag.
Posted Last updated