I have the following setting, but the WKWebView is losing the page content I'm updating whenever the user presses the "delete" key (it triggers a back navigation). This is in SwiftUI, and I see no way to intercept and block this key.
webView.allowsBackForwardNavigationGestures = false
Isn't the whole point of setting this flag to stop not just "swipe" navigation, but all back/forward support? And with backForwardList being immutable, it's not like I can delete an entry from it.
Visual Studio, Stadia, and JetBrains all support natvis files, whereas Xcode is still stuck on lldb Python scripts that even Xcode itself no longer uses to debug data structures. Apple has converted all of its scripts to native code, so there are no samples of how to write complex lldb visualizer rules.
There is already lldb-eval, which can bring in natvis files, so something like this should be brought to Xcode. C++ packages like EASTL ship only with a natvis file, and natvis is far simpler to write and edit than lldb rules and Python scripting.
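For comparison, here is roughly what the Python route demands today. A minimal sketch of an lldb type-summary provider, assuming a hypothetical C++ type Vec3 with float fields x/y/z; the natvis equivalent would be a few lines of declarative XML:

```python
# Minimal lldb type-summary provider (the Python route Xcode still relies on).
# "Vec3" and its fields are hypothetical; adapt to your own type.

def vec3_summary(valobj, internal_dict):
    """Return a one-line summary string for a Vec3 {float x, y, z}."""
    x = valobj.GetChildMemberWithName("x").GetValue()
    y = valobj.GetChildMemberWithName("y").GetValue()
    z = valobj.GetChildMemberWithName("z").GetValue()
    return f"({x}, {y}, {z})"

# Registered from ~/.lldbinit or an lldb session:
#   command script import vec3_summary.py
#   type summary add -F vec3_summary.vec3_summary Vec3
```

Every nontrivial container needs a synthetic-children provider on top of this, which is where the Python approach gets far more involved than natvis.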
It seems that metal-shaderconverter can build a metallib, but I need .air files, which I then link into a single metallib and metallibdsym file.
HLSL -> dxc -> DXIL -> metal-shaderconverter -> .metallib
But is there no way to link multiple metallibs into a single metallib?
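For reference, the classic MSL toolchain does link multiple .air files into one metallib; the open question is whether metal-shaderconverter can be made to emit .air that slots into the same step. Paths and SDK name below are illustrative:

```shell
# Compile MSL sources to AIR, then link several AIR files into one metallib.
xcrun -sdk macosx metal -c Shadows.metal -o Shadows.air
xcrun -sdk macosx metal -c Fog.metal     -o Fog.air
xcrun -sdk macosx metallib Shadows.air Fog.air -o Combined.metallib
```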
My build generates 10 errors, and sometimes it halts "build and run" as expected.
Most of the time, though, it just runs the previously successful build anyway. It seems like a basic tenet of an IDE not to do this unless I've explicitly enabled that behavior.
I have a UTI for "public.directory" and can drag-drop folders onto my app to open them. I also added this to the Info.plist to say the app supports directories.
But the default "Open" command pops up an NSOpenPanel with folders not selectable. The "Open" button stays disabled.
How do I change this? I tried implementing "openDocument", but then it lets through any file type, not just the ones in my Info.plist. So I'd like to use the default implementation, but with an override for the NSOpenPanel.
- (IBAction)openDocument:(id)sender
{
    NSOpenPanel *panel = [NSOpenPanel openPanel];
    [panel setCanChooseFiles:YES];
    [panel setCanChooseDirectories:YES];
    [panel setAllowsMultipleSelection:NO];
    ...
}
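One possible route is to keep this override but attach an NSOpenSavePanelDelegate, which decides per-URL whether the "Open" button enables. A sketch (isSupportedDocumentURL: is a hypothetical helper standing in for a check against the Info.plist types):

```objc
// Sketch, not verified: attach a panel delegate so folders stay enabled
// while files are still filtered to the declared document types.
- (IBAction)openDocument:(id)sender
{
    NSOpenPanel *panel = [NSOpenPanel openPanel];
    [panel setCanChooseFiles:YES];
    [panel setCanChooseDirectories:YES];
    [panel setAllowsMultipleSelection:NO];
    [panel setDelegate:self];   // adopt NSOpenSavePanelDelegate
    // ... run the panel and open the result as before
}

// NSOpenSavePanelDelegate: enable folders plus supported document types.
- (BOOL)panel:(id)sender shouldEnableURL:(NSURL *)url
{
    return url.hasDirectoryPath
        || [self isSupportedDocumentURL:url];   // hypothetical helper
}
```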
How is this a valid stack trace, with mangled symbol names and no file/line information? I've already had to demangle the names myself, even though Windows does this for me.
The recommended approach to get file/line seems to be to spawn the "atos" process repeatedly on the symbols to turn them all into file/line. But shouldn't there just be a function call for this, or an option on backtrace_symbols(), since it's looking up the symbol anyway?
I don't see how this external process call would work on iOS, and it seems slow on macOS as well.
Compare this with Windows' CaptureStackBackTrace, where a simple function call via DbgHelp.lib retrieves the file/line. Am I supposed to somehow use the CoreSymbolication framework on macOS/iOS?
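For context, this is roughly all the in-process tooling gives you. A minimal sketch using backtrace/backtrace_symbols plus __cxa_demangle (not an Apple-recommended path, just what works without spawning atos); note that file/line is still absent:

```cpp
// Capture frames, get mangled symbol strings, demangle them yourself.
// File/line is still missing; that is the part atos (or a symbolication
// framework) has to fill in.
#include <execinfo.h>   // backtrace, backtrace_symbols
#include <cxxabi.h>     // abi::__cxa_demangle
#include <cstdio>
#include <cstdlib>
#include <string>

// Demangle one symbol name; fall back to the input for non-C++ names.
std::string demangle(const char* name) {
    int status = 0;
    char* out = abi::__cxa_demangle(name, nullptr, nullptr, &status);
    std::string result = (status == 0 && out) ? out : name;
    std::free(out);
    return result;
}

// Print the current call stack: module, address, mangled name, offset.
void print_backtrace() {
    void* frames[64];
    int count = backtrace(frames, 64);
    char** symbols = backtrace_symbols(frames, count);
    for (int i = 0; i < count; ++i)
        std::printf("%s\n", symbols[i]);  // e.g. "2 a.out 0x... _Z3foov + 12"
    std::free(symbols);
}
```

The demangling step alone shows the asymmetry: `demangle("_Z3foov")` yields `foo()`, but nothing here maps the address back to a source location.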
Since this API requires &__dso_handle instead of the standard file/line/func, I had to modify my entire log system to pass it down from the macro call sites.
I have many callbacks that simply forward data from other C++ libraries, which never supply a dso_handle. So it's great how this logging system breaks most logger systems, and it doesn't even have a warning level to match fault/error. I have the forwarded threadName, timestamp, etc., and nowhere to store them in os_log. syslog was more powerful and clearer than os_log, but I'm sure it's now too late to head down a more reasonable path.
So I pass &__dso_handle all the way down to the log command and hand it into the macro:
#define my_os_log_with_type(dso, log, type, format, ...) __extension__({ \
    os_log_t _log_tmp = (log); \
    os_log_type_t _type_tmp = (type); \
    if (os_log_type_enabled(_log_tmp, _type_tmp)) { \
        OS_LOG_CALL_WITH_FORMAT(_os_log_impl, \
            ((void*)dso, _log_tmp, _type_tmp), format, ##__VA_ARGS__); \
    } \
})
Logger.mm
// This doesn't work: logging the dso from the callsite gives no file/line.
my_os_log_with_type(entry.dso, os_log_create("com.foo", entry.tag), logLevel(entry.level), "%{public}s", text);
// This does work, but then file/line point at the forwarding log implementation, and who wants to jump there?
os_log_with_type(os_log_create("com.foo", entry.tag), logLevel(entry.level), "%{public}s", text);
The macOS screen-recording tool doesn't appear to support recording HDR content (e.g. in QuickTime Player). The tool can record from the camera using the various YCbCr 422 and 420 formats needed for HEVC and ProRes HDR10 recording, but doesn't offer any options for HDR screen recording.
So that leaves in-game screen recording with AVFoundation. Without any YCbCr formats exposed in the Metal API, how do we use CVPixelBuffer with Metal, and then send these formats off to the video codecs directly? Can we send Rec. 2020 RGB10A2Unorm data directly? I'd like the fewest conversions possible.
Trying to test HDR by running an EDR-capable iOS app on macOS. This is on the same M2 MBP.
On macOS, NSScreen.maximumPotentialExtendedDynamicRangeColorComponentValue returns 16. That's what I'd expect on a mini-LED display like this one.
On iOS-on-macOS, UIScreen.potentialEDRHeadroom reports 1.0. That's not correct.
This has been broken since macOS 12.0 Monterey. I am running an x64 app under Rosetta 2 and trying to test ProMotion. Is this possibly fixed in macOS 14.0? I see mention of a universal CAMetalDisplayLink finally, so we can also try that, but it won't help with testing on older macOS.
https://developer.apple.com/forums/thread/701855?answerId=708409022#708409022
There was a good 2021 WWDC presentation on using ProMotion on iOS, and Adaptive Sync (ProMotion) on macOS. But while the macOS presentation showed how to detect ProMotion (fullscreen + min/maxInterval mismatch), the iOS side doesn't have the same mechanism. The talk mentions Metal sample code, but I don't see ProMotion mentioned anywhere in the Metal samples when I search.
https://developer.apple.com/videos/play/wwdc2021/10147/
The memory layout doesn't change in this sort of cast, and this is a common construct when transforming normals and tangents.
float3 normal = input.normal * (float3x3)skinTfm;
no matching conversion for functional-style cast from 'metal::float4x4' (aka 'matrix<float, 4, 4>') to 'metal::float3x3' (aka 'matrix<float, 3, 3>')
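MSL doesn't define that conversion, so the usual fix is an explicit constructor from the matrix columns. A sketch (toFloat3x3 is my own helper name; this drops the translation column, which is what the normal/tangent transform wants anyway):

```metal
// No float4x4 -> float3x3 cast exists in MSL; build the 3x3 from the
// first three columns explicitly.
float3x3 toFloat3x3(float4x4 m)
{
    return float3x3(m[0].xyz, m[1].xyz, m[2].xyz);
}

// float3 normal = input.normal * toFloat3x3(skinTfm);
```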
I want to use half data, but it's unclear how the A-series processors handle interpolating it across the polygon. Adreno/Nvidia don't allow half in shader input/output due to banding. Mali recommends declaring half outputs in the VS to minimize the parameter buffer, and declaring float inputs in the FS.
Can Apple provide some insight as to best practices here?
I never want to run a C++ app when there's a build failure, but Xcode seems to think this is okay. If you hit play fast enough, it happens all the time. How can this be fixed?
I'm working on a parser that translates HLSL to HLSL/MSL. But valid MSL isn't compiling when passing a depth2d to a class and its ctor. The ctor pattern allows globals to be referenced as member variables by the MSL, which typically passes its parameters from call to call.
This reports the following, which makes no sense. The code is fine with texture2d and references, so this seems to be a Metal compiler bug. It's saying the ctor input needs to be in the device address space, but it's already declared as such. This limits any use of depth-style textures in MSL.
DepthTest.metal:31:16: error: no matching constructor for initialization of 'SamplePSNS'
SamplePSNS shader(shadowMap, sampleBorder);
^ ~~~~~~~~~~~~~~~~~~~~~~~
DepthTest.metal:18:5: note: candidate constructor not viable: address space mismatch in 1st argument ('depth2d<float>'), parameter type must be 'device depth2d<float> &'
SamplePSNS(
^
DepthTest.metal:5:8: note: candidate constructor (the implicit copy constructor) not viable: requires 1 argument, but 2 were provided
struct SamplePSNS {
^
#include <metal_stdlib>
using namespace metal;

struct SamplePSNS {
    struct InputPS {
        float4 position [[position]];
    };

    device depth2d<float>& shadowMap;
    thread sampler& sampleBorder;

    float4 SamplePS(InputPS input) {
        return shadowMap.sample_compare(sampleBorder, input.position.xy, input.position.z);
    };

    SamplePSNS(
        device depth2d<float>& shadowMap,
        thread sampler& sampleBorder)
        : shadowMap(shadowMap),
          sampleBorder(sampleBorder)
    {}
};

fragment float4 SamplePS(
    SamplePSNS::InputPS input [[stage_in]],
    depth2d<float> shadowMap [[texture(0)]],
    sampler sampleBorder [[sampler(0)]])
{
    SamplePSNS shader(shadowMap, sampleBorder);
    return shader.SamplePS(input);
}
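For what it's worth, one sketch of a possible workaround, assuming texture and sampler handles can be stored by value (which avoids pinning the member to an address space); I haven't verified this across Metal versions:

```metal
// Unverified sketch: store the texture and sampler by value instead of via
// a device-space reference, so the ctor can accept the [[texture(0)]]
// argument (which lives in the thread address space at the callsite).
struct SamplePSNS {
    depth2d<float> shadowMap;   // by value: no address space on the member
    sampler sampleBorder;

    SamplePSNS(depth2d<float> shadowMap, sampler sampleBorder)
        : shadowMap(shadowMap), sampleBorder(sampleBorder) {}
};
```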