There used to be a "-" icon on the right side of each key binding, but it has been removed, and there is no help text on how to clear a key binding. Even in the "conflicts" list, there's no guidance on how to clear them. When I highlight a binding and hit the Delete key (or Shift+Delete), Xcode just records that as a conflicting key binding. How about honoring the Delete key and clearing the binding?
I'm on a MacBook Pro 16" 2019 with an HDR10 display. The latest Apple/AMD drivers don't enable HDR mode under Windows 10. Is there a monitor profile that Apple can provide/install?
The trackpad stutters on M2 chips during high CPU usage.
Keyboard keys for VK_OEM_3/5/7 are incorrect, mapping to one another.
static bool useBootcampHack = true;
if ( useBootcampHack )
{
    if ( button == VK_OEM_3 )
        button = VK_OEM_7; // '" on US layouts
    else if ( button == VK_OEM_5 )
        button = VK_OEM_3; // `~ on US layouts
    else if ( button == VK_OEM_7 )
        button = VK_OEM_5; // \| on US layouts
}
The Windows keyboard layout tool shows the ~ key as VK_OEM_3, but when it reaches our app we get VK_OEM_5, and the same with the '" and | keys. They all arrive incorrectly, as if something in software is remapping them. This is on an Intel MacBook Pro 16" 2019 with the latest Windows 10 Pro. These forums don't even have a Boot Camp channel.
These make no sense. Several of the presentations on wide gamut lack specific example code. I would assume that if I have linear rgba16f data, I could specify the srgb or linearSrgb color space and get the same output on screen, but that is not the case. There is no documentation beyond the briefest of comments on each color space, and nothing on how MTKView actually transforms the colors.
There is even less documentation on the extended color spaces, and on what to do when they fail to display expected results. When we set one of these, the gamma is totally off. And it's unclear what to set so we go from HDR back to EDR.
src srgb8 -> srgbColorSpace -> gamma 2.2 -> incorrect; doubly applied sRGB? why can't the layer just do passthrough
rgba16f -> srgbColorSpace -> gamma 2.2 -> correct; seems to apply gamma and composites properly with the rest of AppKit
rgba16f -> linearSrgbColorSpace -> gamma 1.0 -> incorrect; isn't my data linear?
I see the breakpoints. Whether or not I'm in the debugger, I can't delete the breakpoints that are defined; I get a bonk sound when trying to delete any of them. SKAgent crashed and is often using far too much memory, but this still occurs even after a reboot of the machine.
This seems to have broken with my update from iPadOS 15 to 16. Now connect() returns "Network unreachable", and the select() call times out after 10 or 30s. I have "Developer Mode" enabled on the device.
We are just trying to connect the iPad back to the devhost Mac that is running the application.
When the Mac does the same connect to the same IP, connect() returns "Connect in progress" and select() succeeds in setting up the socket.
I'm using macOS 14.1 on an Intel Mac, and Xcode 14.1.
I tried converting our Android ATrace scopes to use os_signpost, but this seems to add 20ms of CPU time to every frame. ATrace_isEnabled only returns true when AGI (Android GPU Inspector) takes a capture, but there don't seem to be equivalent flags that indicate when an Instruments capture is being taken.
AGI gives us nice tracks in Perfetto of CPU and GPU timings, with pseudo-coloring and text in each track that help interpret the frame, and without a 20ms hit.
Instruments gives microscopically tiny tracks that are all blue with no text in the os_signpost widget. I have to hover over every track, each about 2 pixels high, to see the timings, and the reported time for each frame is 400ms instead of the actual 50ms.
Is there a better method to see scoped CPU timings on macOS/iOS, considering dtrace isn't available, or a way to reduce the performance hit?
By the time I background the app, hit the capture button, wait for the UI popup to appear, and then hit the "capture" button in the popup, the event that I was trying to capture has already passed. Can we get a button, or a double-click on the slanted M icon, that just does the capture instead of asking me to confirm? All told, it's about 5s to get a capture to execute, and that is too long when running at 60 or 120Hz.
I know there's programmatic capture too, but we don't have that hooked up yet.
We are using a first-pass depth prepass. I know it's not recommended, but we have one and need it. Deferred renderers use this, and we do too.
We've tried setting [[invariant]] on the position, and are now resorting to slope and depth biasing the second pass. We even set -fpreserve-invariance on the compiler. This whole construct is confusing: "invariant" was added in MSL 2.1, but requires iOS 13 to set that compiler flag, and then other code states the flag must be set for iOS 14 and macOS 11 SDK use (minSDK? buildSDK?). We also tried disabling fast math (-fno-fast-math) to no avail.
But why is a simple v = v * m calculation different once polys hit the near plane or the viewport edges? The polys then seem to z-fight per tile. Some tiles have stripes of z, and some are just completely missing. These are the same tris going through two shaders that do the same vertex calc.
That shouldn't be happening, unless the tiles are computing gradients incorrectly from one pass to the next. On long clipped tris, it looks like a hardware/driver bug computing consistent depths across the same triangles. This was tested on older (iPhone 6) and newer iOS devices, and on an M1 MBP.
We have this on many of our platforms, but Apple doesn't appear to expose it in Metal. Nvidia/AMD have had it for a long time. We can work around it for now with a gather followed by a component min/max on a single channel. For large-scale multi-channel downsampling, having access to the sampler setting would be better. This would even work with 3D volumes, etc.
VK_EXT_sampler_filter_minmax
These are the three modes:
WeightedAverage - basic nearest/bilinear/trilinear
Min
Max
I know how to do this with macOS 12/iOS 15, but how do we determine the P/E split prior to that? I know most phones are 2/4, but the A10 is 2/2 exclusive.
Below is the new way, but what is the old way? Especially with Alder Lake chips using 8P(HT)/8E configs with 24 threads, this info is important to identify.
sysctlbyname( "hw.nperflevels", &perfLevelCount, &countSize, NULL, 0 )
sysctlbyname( "hw.perflevel0.physicalcpu", &info.bigCores, &countSize, NULL, 0 )
sysctlbyname( "hw.perflevel1.physicalcpu", &info.littleCores, &countSize, NULL, 0 )
I can't figure out why macOS keeps updating itself without my consent. I have "automatically download" and "automatically update" turned off, but macOS constantly indicates an update is available, and then on reboot the new macOS installs itself anyway. Since this often breaks Xcode or GPU capture, I'd really like to prevent it.
When we build our C++ code in Visual Studio, IntelliSense finds all of the types and functions. When we build in Xcode, it finds about 90%.
There seems to be no consistent pattern to which things Xcode skips, and the problem then daisy-chains into the next header that includes the prior one.
We have a class with If/Else function calls, but the Add calls are skipped. Even a header with a struct defined in that same header doesn't get the struct highlighted as a type within that header.
Sources are built with GNU makefiles, but ultimately the .o and .d files are all compiled and linked together by clang using Xcode 13.3, and we use the new build system. What could we be doing wrong here? This isn't a recent problem; it has happened with all prior Xcode versions.
I see reasonable numbers from this on macOS, but on iPad I see really large numbers, both here and in the GPU capture, that don't add up. This is Xcode 12.2 and iPadOS 14.0.1.
Textures and Buffers add up to 261MB, which is close to the macOS figure. The memory summary, and the "other" area in the buffers area, report 573MB when I hover over that. Also, device.currentAllocatedSize reports 868MB total. I assume the buffer size is skewing the memory totals, since Xcode reports 620MB for the entire app.
I would attach a screenshot of the GPU capture showing the memory summary, but it seems the new forums don't support this, and not being able to search categories anymore is rather limiting.
Non-volatile  261 MB
Volatile        0 MB
Textures      195 MB
Buffers        66 MB  <- but hovering over "other" reports 573 MB
Private       184 MB
Shared         77 MB
Used          166 MB
Unused         95 MB
For keyboard handling on iOS (and iOS apps on M1 macOS), the iOS 13.4 keyboard constants are missing the Command keys. We need to be able to detect key up/down on all the modifiers. I realize there's a modifiers field on UIKey, but this seems inconsistent.
case UIKeyboardHIDUsageKeyboardLeftShift: b = kButton_Shift; break;
case UIKeyboardHIDUsageKeyboardRightShift: b = kButton_Shift; break;
case UIKeyboardHIDUsageKeyboardLeftAlt: b = kButton_Alt; break;
case UIKeyboardHIDUsageKeyboardRightAlt: b = kButton_Alt; break;
// ? case kVK_Command: b = kButton_Command; break;
// ? case kVK_RightCommand: b = kButton_Command; break;
case UIKeyboardHIDUsageKeyboardLeftControl: b = kButton_Ctrl; break;
case UIKeyboardHIDUsageKeyboardRightControl: b = kButton_Ctrl; break;