The error is:
"Error Domain=MTLLibraryErrorDomain Code=3 ":1:10: fatal error: cannot open file './metal_types': Operation not permitted
#include "metal_types""
On my Mac mini (Intel chip), I run a Flutter application in the VS Code LLDB debugger and get this error; the Flutter application cannot draw its UI and shows a blank white window.
My Xcode version is the latest, 15.2.
The Flutter application runs normally on a Mac mini M1 in the VS Code LLDB debugger, and runs normally without the debugger on the Intel Mac mini.
In the Metal framework and Core Graphics framework locations, there is no file named "metal_types".
This didn't happen before; I could run normally in the VS Code LLDB debugger on both the Intel and M1 Mac mini.
If anyone knows anything, please comment.
Thank you!
Core Graphics
Harness the power of Quartz technology to perform lightweight 2D rendering with high-fidelity output using Core Graphics.
Posts under Core Graphics tag
55 Posts
My app depends on the user granting Accessibility access ("Allow this application to control your computer"). There's no formal permissions API for this (that I know of); it just happens implicitly the first time I use the API, and I get an error if the user hasn't granted permission.
If the user grants permission and I'm able to successfully register my CGEventTap (a modifier-key event tap), but then later revokes it, key responsiveness goes awry. I don't get any kind of error in my callback, but I do get tapDisabledByTimeout events periodically. I believe something is causing significant delays in (but not preventing) event delivery to my tap.
Upon receiving this, I'm considering registering another tap as a way to test permission, and disabling the real one if I no longer have it.
Does anyone have any better ideas?
For Apple: see FB13533901.
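One alternative to registering a throwaway tap is to poll AXIsProcessTrusted(), which reports the current Accessibility grant without prompting. A minimal sketch (the watchdog helper and its interval are my own invention, not an established pattern):

```swift
import ApplicationServices
import Foundation

// Sketch: poll the Accessibility trust status so a revocation can be
// detected even though the tap itself only reports tapDisabledByTimeout.
// AXIsProcessTrusted() reflects the current grant without prompting.
func startAccessibilityWatchdog(interval: TimeInterval = 2.0,
                                onRevoked: @escaping () -> Void) -> Timer {
    Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { timer in
        if !AXIsProcessTrusted() {
            timer.invalidate()
            onRevoked() // e.g. disable the real tap via CGEvent.tapEnable(tap:enable:)
        }
    }
}
```

Polling is crude, but it avoids the side effects of repeatedly creating and tearing down test taps.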
Hey Guys,
I am writing a little Swift application (MouseServer) that runs on my Mac, which is connected to the TV. To control the mouse I use my own smart-home app, in which I have implemented a touchpad like the MacBook's.
If the user triggers a tap/drag event, a UDP message is sent to the mentioned application (MouseServer).
The MouseServer listens for UDP messages and moves the mouse when the mouse-move command is received.
Everything works very well, but with the following mouse-move code I can't open the Apple menu bar at the top of the screen by moving the mouse to the top (when, for example, the browser is in fullscreen mode). I hope you know what I mean: in a fullscreen window, the top menu bar with the Apple icon disappears, but when you move the cursor to the top, the menu bar appears again. This doesn't work with my code. I've tried many different approaches but can't get it to work. Do you have any idea?
func moveMouse(x: Int, y: Int) {
    // show cursor
    NSCursor.setHiddenUntilMouseMoves(false)
    NSCursor.unhide()

    // generate the CGPoint object for the move event
    lastPoint = CGPoint(x: x, y: y)
    print("X: \(x), Y: \(y)")

    // --- Variant 1 ---
    // move the cursor to the desired position
    CGDisplayMoveCursorToPoint(CGMainDisplayID(), lastPoint)
    CGDisplayShowCursor(CGMainDisplayID())

    // --- Variant 2 ---
    //if let eventSource = CGEventSource(stateID: .hidSystemState) {
    //    let mouseEvent = CGEvent(mouseEventSource: eventSource, mouseType: .mouseMoved, mouseCursorPosition: lastPoint, mouseButton: .left)
    //    mouseEvent?.post(tap: .cghidEventTap)
    //}

    // --- Variant 3 ---
    //moveMouseWithAppleScript(x: x, y: y)
}
func moveMouseWithAppleScript(x: Int, y: Int) {
    let script = """
    tell application "System Events"
        set mousePos to {\(x), \(y)}
        do shell script "caffeinate -u -t 0.1"
        do shell script "osascript -e \\"tell application \\\\\\"System Events\\\\\\"
            set position of mousePos to {item 1 of mousePos, item 2 of mousePos}
        end tell\\""
    end tell
    """
    let appleScript = NSAppleScript(source: script)
    var error: NSDictionary?
    appleScript?.executeAndReturnError(&error)
    if let error = error {
        print("Error executing AppleScript: \(error)")
    }
}
Best regards,
Robin11
When zooming in, CATiledLayer works very well: it shows the previous level of detail while rendering the next one, so I don't notice the rendering.
When zooming out, however, it leaves blank areas while rendering the smaller-scale level.
How can I solve this?
For example, when downscaling, should I put the draw rect on the main thread?
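One knob worth trying (hedged; I can't confirm it fixes this exact case): CATiledLayer only keeps pre-rendered content for the levels of detail it is configured to maintain, so giving it more zoom-out levels may let it show a coarser cached level instead of blank tiles while the new scale renders:

```swift
import QuartzCore

// CATiledLayer only caches content for the levels of detail it maintains;
// with more levels it can fall back to a coarser cached level when zooming
// out instead of leaving blank tiles while the new scale renders.
let tiledLayer = CATiledLayer()
tiledLayer.levelsOfDetail = 4      // maintain 4 levels, each half the previous resolution
tiledLayer.levelsOfDetailBias = 2  // of those, 2 are magnified levels (for zooming in)
tiledLayer.tileSize = CGSize(width: 512, height: 512)
```

The specific values here are illustrative; they need tuning against the range of zoom scales your view actually allows.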
Hello,
I'm wondering if there is a way to programmatically write a series of UIImages into an APNG, similar to what the code below does for GIFs (credit: https://github.com/AFathi/ARVideoKit/tree/swift_5). I've tried implementing a similar solution but it doesn't seem to work. My code is included below
I've also done a lot of searching and have found lots of code for displaying APNGs, but have had no luck with code for writing them.
Any hints or pointers would be appreciated.
func generate(gif images: [UIImage], with delay: Float, loop count: Int = 0, _ finished: ((_ status: Bool, _ path: URL?) -> Void)? = nil) {
    currentGIFPath = newGIFPath
    gifQueue.async {
        let gifSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: count]]
        let imageSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: delay]]
        guard let path = self.currentGIFPath else { return }
        guard let destination = CGImageDestinationCreateWithURL(path as CFURL, __UTTypeGIF as! CFString, images.count, nil)
        else { finished?(false, nil); return }
        //logAR.message("\(destination)")
        CGImageDestinationSetProperties(destination, gifSettings as CFDictionary)
        for image in images {
            if let imageRef = image.cgImage {
                CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
            }
        }
        if !CGImageDestinationFinalize(destination) {
            finished?(false, nil); return
        } else {
            finished?(true, path)
        }
    }
}
My adaptation of the above code for APNGs (doesn't work; outputs empty file):
func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) {
    let apngSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGLoopCount as String: count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGDelayTime as String: delay]]
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString, images.count, nil)
    else { fatalError("Failed") }
    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let imageRef = image.cgImage {
            CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
        }
    }
}
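For what it's worth, the APNG snippet never calls CGImageDestinationFinalize, and an image destination writes nothing until it is finalized, which alone would explain the empty output file. A hedged sketch of the same routine with finalization added (I've made outputURL a parameter and returned the finalize result, which differs from the original signature):

```swift
import UIKit
import ImageIO
import UniformTypeIdentifiers

// Sketch: same APNG approach as above, but with CGImageDestinationFinalize
// added. An image destination writes no data until it is finalized.
func generateAPNG(images: [UIImage], delay: Float, count: Int = 0, outputURL: URL) -> Bool {
    let apngSettings = [kCGImagePropertyPNGDictionary as String:
                            [kCGImagePropertyAPNGLoopCount as String: count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String:
                            [kCGImagePropertyAPNGDelayTime as String: delay]]
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL,
                                                            UTType.png.identifier as CFString,
                                                            images.count, nil)
    else { return false }
    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let cgImage = image.cgImage {
            CGImageDestinationAddImage(destination, cgImage, imageSettings as CFDictionary)
        }
    }
    // Without this call, nothing is flushed to disk.
    return CGImageDestinationFinalize(destination)
}
```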
Yesterday, my code ran just fine under the previous Xcode version. Today, some print() statements seem to come with extra lines. I know it sounds stupid, but my code did not change in the meantime. It doesn't appear to come from anything I control, almost like some Apple code emits an extra line feed somewhere. It's just a Swift Mac App I built to make my digital art; otherwise, nothing else is incorrect, just these odd lines.
It's not as simple as just making a test case with a few print("***") statements, it seems to require other code to run in between calls to print. Most of my app is using CoreGraphics. It has no UI.
It's like when you see spurious Apple debugging info in the console sometimes, but it's only a blank line this time. It's not a big issue, just annoying.
I am trying to create a custom CGColorSpace in Swift on macOS but am not sure I really understand the concepts.
I want to use a custom color space called Spot1 and if I extract the spot color from a PDF I get the following:
"ColorSpace<Dictionary>" = {
"Cs2<Array>" = (
Separation,
Spot1,
DeviceCMYK,
{
"BitsPerSample<Integer>" = 8;
"Domain<Array>" = (
0,
1
);
"Filter<Name>" = FlateDecode;
"FunctionType<Integer>" = 0;
"Length<Integer>" = 526;
"Range<Array>" = (
0,
1,
0,
1,
0,
1,
0,
1
);
"Size<Array>" = (
1024
);
}
);
};
How can I create this same color space using the CGColorSpace(propertyListPlist:) API?
func createSpot1() -> CGColorSpace? {
    let dict0: NSDictionary = [
        "BitsPerSample": 8,
        "Domain": [0, 1],
        "Filter": "FlateDecode",
        "FunctionType": 0,
        "Length": 526,
        "Range": [0, 1, 0, 1, 0, 1, 0, 1],
        "Size": [1024]]
    let dict: NSDictionary = [
        "Cs2": ["Separation", "Spot1", "DeviceCMYK", dict0]
    ]
    let space = CGColorSpace(propertyListPlist: dict as CFPropertyList)
    if space == nil {
        DebugLog("Spot1 color space is nil!")
    }
    return space
}
I am trying to generate a PDF file with certain components drawn in spot colours. Spot colours are used for printing, and I am not clear on how one would do that, but I think I need to create a custom colour space with a specific name, or a colour with a specific name: our printer looks for the name Spot1 and substitutes the colour green.
Can anyone shed any light on how I might be able to do this? For reference, I have attached two PDF files with two different spot colours in them.
I need to be able to create similar using CGContext and CGPDFDocument. I can already generate the PDF documents using CMYK colors but don't know how I can create the equivalent "spot" colors.
At the moment I am loading the page from these attached pdf files and scaling them to fill the page to get a background with the spot color. This works fine but I also need to generate text and lines using this same spot color and I am not clear how I could do that using the Core Graphics APIs.
My guess is I need to create a custom ColorSpace with a single color and then use that color for drawing with.
The only 'custom' option for creating a color space seems to be the CGColorSpace(propertyListPlist:) initializer; however, there does not appear to be any documentation on what needs to be in the property list, nor can I find any examples.
Any pointers would be appreciated.
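One data point that may help: the property-list format accepted by CGColorSpace(propertyListPlist:) appears to be the one produced by its counterpart, CGColorSpaceCopyPropertyList, rather than a raw PDF colour-space dictionary, so a PDF-style "Separation" array is unlikely to be accepted as-is. A sketch of the round trip, using a built-in space just to illustrate the pairing:

```swift
import CoreGraphics

// Round-trip a colour space through its property-list representation.
// CGColorSpaceCopyPropertyList is the counterpart of
// CGColorSpace(propertyListPlist:); spaces it cannot serialize return nil.
func roundTrip(_ space: CGColorSpace) -> CGColorSpace? {
    guard let plist = CGColorSpaceCopyPropertyList(space) else { return nil }
    return CGColorSpace(propertyListPlist: plist)
}

let srgb = CGColorSpace(name: CGColorSpace.sRGB)!
if let copy = roundTrip(srgb) {
    print("round-tripped components:", copy.numberOfComponents)
}
```

Dumping the property list returned for various spaces may reveal the expected shape; whether a named Separation space can be expressed this way at all is something I haven't been able to confirm.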
Regards
I want to read metadata of image files such as copyright, author etc.
I did a web search and the closest thing is CGImageSourceCopyPropertiesAtIndex:
- (void)tableViewSelectionDidChange:(NSNotification *)notif {
    NSDictionary *metadata = nil;
    // get selected item
    NSString *rowData = [fileList objectAtIndex:[tblFileList selectedRow]];
    // set path to the selected file
    NSString *filePath = [NSString stringWithFormat:@"%@/%@", objPath, rowData];
    // declare a file manager
    NSFileManager *fileManager = [[NSFileManager alloc] init];
    // check to see if the file exists
    if ([fileManager fileExistsAtPath:filePath] == YES) {
        // percent-escape any problematic characters in the string
        NSString *percentEscapedString = (NSString *)CFURLCreateStringByAddingPercentEscapes(NULL, (CFStringRef)filePath, NULL, NULL, kCFStringEncodingUTF8);
        // convert path to NSURL
        NSURL *filePathURL = [[NSURL alloc] initFileURLWithPath:percentEscapedString];
        NSError *error = nil;
        NSLog(@"%d", [filePathURL checkResourceIsReachableAndReturnError:&error]);
        // declare a CGImageSource reference
        CGImageSourceRef sourceRef;
        // point the source reference at the image by passing its URL
        sourceRef = CGImageSourceCreateWithURL((CFURLRef)filePathURL, NULL);
        // copy the image metadata from the source reference into a dictionary
        metadata = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(sourceRef, 0, NULL);
        NSLog(@"%@", metadata);
        [filePathURL release];
    } else {
        [self showAlert:@"I cannot find this file."];
    }
    [fileManager release];
}
Is there any better or easy approach than this?
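CGImageSourceCopyPropertiesAtIndex is indeed the standard Image I/O route; a shorter sketch of the same thing in Swift (the TIFF/IPTC keys mentioned in the comment are just examples of where copyright and author data commonly live):

```swift
import Foundation
import ImageIO

// Read image metadata without decoding pixel data.
// kCGImageSourceShouldCache: false avoids caching decoded image data.
func imageProperties(at url: URL) -> [String: Any]? {
    let options = [kCGImageSourceShouldCache as String: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, options)
              as? [String: Any]
    else { return nil }
    // Copyright/author typically appear in nested dictionaries, e.g.
    // props[kCGImagePropertyTIFFDictionary as String] or the IPTC dictionary.
    return props
}
```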
CGImageRef __nullable CGImageCreate(size_t width, size_t height,
size_t bitsPerComponent, size_t bitsPerPixel, size_t bytesPerRow,
CGColorSpaceRef cg_nullable space, CGBitmapInfo bitmapInfo,
CGDataProviderRef cg_nullable provider,
const CGFloat * __nullable decode, bool shouldInterpolate,
CGColorRenderingIntent intent)
The function returns NULL when kCGImageAlphaNone is passed for the bitmap info, with the error message "verify_image_parameters: invalid image alphaInfo: kCGImageAlphaNone. It should be kCGImageAlphaNoneSkipLast".
This issue happens only when installing on iOS 17 from Xcode 15 (Swift 5).
Is it possible to fix this problem without having to change the bitmap info, as that can affect other parts of image processing?
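If changing the alpha info does turn out to be unavoidable, the error message itself names the accepted variant. A sketch of the substitution (assuming 8-bit RGB data; note that kCGImageAlphaNoneSkipLast implies a 4-byte RGBX pixel layout, so bitsPerPixel and bytesPerRow must match, which may be exactly the knock-on change the question is hoping to avoid):

```swift
import Foundation
import CoreGraphics

// .noneSkipLast means RGBX: 4 bytes per pixel, last byte present but ignored.
// The RGB values themselves are unaffected; only the memory layout changes.
let width = 2, height = 2
let bytesPerRow = width * 4
let pixels = [UInt8](repeating: 0xFF, count: bytesPerRow * height)
let provider = CGDataProvider(data: Data(pixels) as CFData)!
let image = CGImage(width: width, height: height,
                    bitsPerComponent: 8, bitsPerPixel: 32,
                    bytesPerRow: bytesPerRow,
                    space: CGColorSpaceCreateDeviceRGB(),
                    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue),
                    provider: provider,
                    decode: nil, shouldInterpolate: false,
                    intent: .defaultIntent)
```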
Say I have a function that receives a UIImage as an argument, performs some logic on it, and then returns it updated.
So I came up with something like this:
func processImage(image: UIImage?) -> UIImage? {
    if let image = image, let cgImage = image.cgImage {
        let width = cgImage.width
        let height = cgImage.height
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        if let context = CGContext(data: nil,
                                   width: width,
                                   height: height,
                                   bitsPerComponent: 8,
                                   bytesPerRow: 4 * width,
                                   space: colorSpace,
                                   bitmapInfo: bitmapInfo.rawValue) {
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height)))
            if let data = context.data {
                let buffer = data.bindMemory(to: UInt8.self, capacity: width * height * 4)
                for i in 0..<(width * height) {
                    let pixelIndex = i * 4
                    let red = buffer[pixelIndex]
                    let green = buffer[pixelIndex + 1]
                    let blue = buffer[pixelIndex + 2]
                    if isSystemGrayPixel(red: red, green: green, blue: blue) {
                        // Convert systemGray to systemPink
                        buffer[pixelIndex] = 255     // R component
                        buffer[pixelIndex + 1] = 182 // G component
                        buffer[pixelIndex + 2] = 193 // B component
                    }
                }
                if let modifiedCGImage = context.makeImage() {
                    let processedImage = UIImage(cgImage: modifiedCGImage)
                    return processedImage
                }
            }
        }
    }
    return image
}
Inside it I call a helper function that conditionally checks whether a pixel matches .systemGray and, if so, changes it:
func isSystemGrayPixel(red: UInt8, green: UInt8, blue: UInt8) -> Bool {
    let systemGrayColor = UIColor.systemGray
    let ciSystemGrayColor = CIColor(color: systemGrayColor)
    let tolerance: CGFloat = 10.0 / 255.0
    let redDiff = abs(ciSystemGrayColor.red - CGFloat(red) / 255.0)
    let greenDiff = abs(ciSystemGrayColor.green - CGFloat(green) / 255.0)
    let blueDiff = abs(ciSystemGrayColor.blue - CGFloat(blue) / 255.0)
    return redDiff <= tolerance && greenDiff <= tolerance && blueDiff <= tolerance
}
When I try to save it via saveCanvas, the console shows that the entry is saved, but when I try to retrieve it later I get nil.
This is my saveCanvas, for reference:
@objc func saveCanvas(_ sender: Any) {
    guard let canvas = Canvas(name: "", canvas: mainImageView, numOfPages: 0) else {
        return
    }
    let savedImageView2 = UIImageView()
    savedImageView2.image = mainImageView.image?.copy() as? UIImage
    let updatedImage = processImage(image: savedImageView2.image)
    canvas.canvas.image = updatedImage
    // array that stores UIImageViews, globally defined
    canvasArray.append(canvas)
    if canvasArray.count > 0 {
        canvasArray.forEach { canvas in
            print("\(canvas.canvas == savedImageView2)")
            print("clicked button \(canvasArray)")
        }
    }
}
I am expecting to retrieve each element of canvasArray when I call it later in another function (which worked fine before I created the processImage function).
Like I said, the purpose of processImage is to check whether my UIImage contains .systemGray and, if so, return it updated as defined in my isSystemGrayPixel function.
Is there anything you would suggest I do differently in processImage to make it work as expected?
Translated Report (Full Report Below)
Version: 1.0.0 (2.0)
Code Type: X86-64 (Translated)
Parent Process: launchd [1]
User ID: 948009654
Date/Time: 2023-11-02 19:47:33.1522 +0800
OS Version: macOS 12.1 (21C52)
Report Version: 12
Anonymous UUID: 815896E6-939E-002C-08C6-C903A4B87DF4
Sleep/Wake UUID: F06CECA0-3643-4423-A6F4-1163217FF863
Time Awake Since Boot: 100000 seconds
Time Since Wake: 92675 seconds
System Integrity Protection: enabled
Crashed Thread: 0 CrBrowserMain Dispatch queue: com.apple.main-thread
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Application Specific Information:
Assertion failed: (mach_vm_map(mach_task_self(), &address, size, 0, VM_FLAGS_ANYWHERE | VM_MAKE_TAG(VM_MEMORY_COREGRAPHICS_BACKINGSTORES), port, 0, false, prot, prot, VM_INHERIT_SHARE) == KERN_SUCCESS), function backing_map, file CGSBackingStore.c, line 192.
Kernel Triage:
VM - Compressor failed a blocking pager_get
VM - Compressor failed a blocking pager_get
VM - Compressor failed a blocking pager_get
VM - Compressor failed a blocking pager_get
VM - Compressor failed a blocking pager_get
Thread 0 Crashed:: CrBrowserMain Dispatch queue: com.apple.main-thread
0 <translation info unavailable> 0x108107a20 ???
1 libsystem_kernel.dylib 0x7ff8023cd5e2 __sigreturn + 10
2 ??? 0x7fc103a4f190 ???
3 libsystem_c.dylib 0x7ff80234dd10 abort + 123
4 libsystem_c.dylib 0x7ff80234d0be __assert_rtn + 314
5 SkyLight 0x7ff8075129de backing_map + 550
6 SkyLight 0x7ff8072c82ad lock_window_backing + 557
7 SkyLight 0x7ff807369f41 SLSDeviceLock + 54
8 CoreGraphics 0x7ff8076e6550 ripd_Lock + 56
9 CoreGraphics 0x7ff807678772 RIPLayerBltShape + 490
10 CoreGraphics 0x7ff8076769c7 ripc_Render + 328
11 CoreGraphics 0x7ff8076737d4 ripc_DrawRects + 482
12 CoreGraphics 0x7ff807673565 CGContextFillRects + 145
13 CoreGraphics 0x7ff8076734c4 CGContextFillRect + 117
14 CoreGraphics 0x7ff807672fe8 CGContextClearRect + 52
15 HIToolbox 0x7ff80b6176e0 HIMenuBarView::DrawOnce(CGRect, CGRect, bool, HIMenuBarTextAppearance, CGContext*) + 110
16 HIToolbox 0x7ff80b617640 HIMenuBarView::DrawIntoWindow(unsigned int*, CGRect, double, CGRect, bool, HIMenuBarTextAppearance, CGContext*) + 410
17 HIToolbox 0x7ff80b53c146 HIMenuBarView::DrawSelf(short, __HIShape const*, CGContext*) + 280
18 HIToolbox 0x7ff80b53bd56 HIMenuBarView::DrawingDelegateHandler(OpaqueEventHandlerCallRef*, OpaqueEventRef*, void*) + 262
19 HIToolbox 0x7ff80b520d1d DispatchEventToHandlers(EventTargetRec*, OpaqueEventRef*, HandlerCallRec*) + 1391
20 HIToolbox 0x7ff80b52014e SendEventToEventTargetInternal(OpaqueEventRef*, OpaqueEventTargetRef*, HandlerCallRec*) + 333
21 HIToolbox 0x7ff80b51ffef SendEventToEventTargetWithOptions + 45
22 HIToolbox 0x7ff80b53b8d3 HIView::SendDraw(short, OpaqueGrafPtr*, __HIShape const*, CGContext*) + 325
23 HIToolbox 0x7ff80b53b399 HIView::RecursiveDrawComposited(__HIShape const*, __HIShape const*, unsigned int, HIView*, CGContext*, unsigned char, double) + 571
24 HIToolbox 0x7ff80b53b56d HIView::RecursiveDrawComposited(__HIShape const*, __HIShape const*, unsigned int, HIView*, CGContext*, unsigned char, double) + 1039
25 HIToolbox 0x7ff80b53add8 HIView::DrawComposited(short, OpaqueGrafPtr*, __HIShape const*, unsigned int, HIView*, CGContext*) + 832
26 HIToolbox 0x7ff80b53aa89 HIView::Render(unsigned int, CGContext*) + 51
27 HIToolbox 0x7ff80b5521a9 FlushWindowObject(WindowData*, void**, unsigned char) + 772
28 HIToolbox 0x7ff80b551c2f FlushAllBuffers(__CFRunLoopObserver*, unsigned long, void*) + 317
29 CoreFoundation 0x7ff8024c6f98 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23
30 CoreFoundation 0x7ff8024c6e34 __CFRunLoopDoObservers + 543
31 CoreFoundation 0x7ff8024c5830 CFRunLoopRunSpecific + 446
32 HIToolbox 0x7ff80b5474f1 RunCurrentEventLoopInMode + 292
33 HIToolbox 0x7ff80b547118 ReceiveNextEventCommon + 284
34 HIToolbox 0x7ff80b546fe5 _BlockUntilNextEventMatchingListInModeWithFilter + 70
35 AppKit 0x7ff804e1bb4c _DPSNextEvent + 886
36 AppKit 0x7ff804e1a1b8 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1411
37 AppKit 0x7ff804e0c5a9 -[NSApplication run] + 586
38 libqcocoa.dylib 0x11402762f QCocoaEventDispatcher::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 2495
39 QtCore 0x11ace2acf QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 431
40 QtCore 0x11ace7042 QCoreApplication::exec() + 130
Does anyone know why this happens and how to fix it?
How can I take the contents (i.e. the stroke and fill) of a CAShapeLayer and draw it into an MTLTexture, which can then be displayed with a normal vertex/fragment shader?
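One route (a sketch, not the only option): render the layer into a CPU-side bitmap with Core Graphics, then upload those bytes into the texture. Note that Core Graphics uses a bottom-left origin, so you may need to flip the context or your texture coordinates.

```swift
import Metal
import QuartzCore
import CoreGraphics

// Sketch: rasterize a CALayer (e.g. a CAShapeLayer) into a BGRA8 MTLTexture
// by rendering it into a CGContext and copying the backing bytes across.
// Assumes a premultiplied-alpha BGRA layout matching .bgra8Unorm.
func makeTexture(from layer: CALayer, device: MTLDevice) -> MTLTexture? {
    let width = Int(layer.bounds.width), height = Int(layer.bounds.height)
    guard width > 0, height > 0 else { return nil }
    let bytesPerRow = width * 4
    var bytes = [UInt8](repeating: 0, count: bytesPerRow * height)

    return bytes.withUnsafeMutableBytes { buffer -> MTLTexture? in
        let info = CGImageAlphaInfo.premultipliedFirst.rawValue
                 | CGBitmapInfo.byteOrder32Little.rawValue // BGRA in memory
        guard let ctx = CGContext(data: buffer.baseAddress, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: info) else { return nil }
        layer.render(in: ctx) // rasterizes the layer's stroke and fill into the bitmap

        let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                            width: width, height: height,
                                                            mipmapped: false)
        guard let texture = device.makeTexture(descriptor: desc) else { return nil }
        texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                        mipmapLevel: 0,
                        withBytes: buffer.baseAddress!,
                        bytesPerRow: bytesPerRow)
        return texture
    }
}
```

The resulting texture can then be sampled in an ordinary fragment shader. If the layer animates, this copy has to be repeated per frame, which is slow; for static content it's a reasonable one-off.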
I'm attaching the crash log, hopefully someone can help me out. Thanks!
crashlog.crash
This status is essential for generating simulated CGEvents. Games often use this API to implement cursor lock. If we send CGEvents with a moving position, it's possible for the cursor to move outside the game window and cause the game window to become inactive. If we don't send CGEvents with updated positions, we can only control the mouse within the game but not in other windows or the desktop.
Hello All,
I am trying to compress a PNG image by applying the PNG filters (Sub, Up, Average, Paeth). I am applying the filters using the kCGImagePropertyPNGCompressionFilter property, but there is no change in the resulting images after trying any of the filters. What is the issue here? Can someone help me with this?
Do I have to compress the image data after applying the filter? If yes, how do I do that?
Here is my source code
CGImageDestinationRef outImageDestRef = NULL;
long keyCounter = kzero;
CFStringRef dstImageFormatStrRef = NULL;
CFMutableDataRef destDataRef = CFDataCreateMutable(kCFAllocatorDefault, 0);
bool status = false;
Handle srcHndl = // source image handle;
ImageTypes srcImageType = // 'JPEG', 'PNGf', etc.;
CGImageRef inImageRef = CreateCGImageFromHandle(srcHndl, srcImageType);
if (inImageRef)
{
    CFTypeRef keys[4] = {nil};
    CFTypeRef values[4] = {nil};
    dstImageFormatStrRef = CFSTR("public.png");
    long png_filter = IMAGEIO_PNG_FILTER_SUB; // or IMAGEIO_PNG_FILTER_UP, IMAGEIO_PNG_FILTER_AVG, IMAGEIO_PNG_FILTER_PAETH; one at a time
    keys[keyCounter] = kCGImagePropertyPNGCompressionFilter;
    values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &png_filter);
    keyCounter++;
    outImageDestRef = CGImageDestinationCreateWithData(destDataRef, dstImageFormatStrRef, 1, NULL);
    if (outImageDestRef)
    {
        // keys[keyCounter] = kCGImagePropertyDPIWidth;
        // values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &Resolution);
        // keyCounter++;
        //
        // keys[keyCounter] = kCGImagePropertyDPIHeight;
        // values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &Resolution);
        // keyCounter++;
        CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, keyCounter, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CGImageDestinationAddImage(outImageDestRef, inImageRef, options);
        CFRelease(options);
        status = CGImageDestinationFinalize(outImageDestRef);
        if (status == true)
        {
            UInt8 *destImagePtr = CFDataGetMutableBytePtr(destDataRef);
            destSize = CFDataGetLength(destDataRef);
            // using destImagePtr after this ...
        }
        CFRelease(outImageDestRef);
    }
    for (long cnt = kzero; cnt < keyCounter; cnt++)
        if (values[cnt])
            CFRelease(values[cnt]);
    if (inImageRef)
        CGImageRelease(inImageRef);
}
I have a SwiftUI view which consists on a horizontal scroll view with some pages inside.
The elements of the pages project shadows.
I noticed that when scrolling, as elements stop being visible, the shadow gets removed abruptly. The shadow is still visible at the moment it is removed, which creates an unpleasant effect.
I tried adding a transparent background to the element with the shadow that extends its frame to see if that way the shadow would be retained longer but it didn't work.
Is there any workaround to make this behave the way I would like?
Thanks in advance
It seems it's not possible to post videos on the dev forums, but I captured 3 frames showcasing the issue.
I am planning to use Core Graphics for its low-level functionality, so I wrote up a small snippet that I expected to work:
#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>

int main() {
    double rot = CGDisplayRotation(0);
    printf("Rotation %f\n", rot);
    return 0;
}
Unfortunately, the call to CGDisplayRotation seems to block. When I tried writing a similar snippet in Xcode, it works just fine, but I won't be able to use Xcode for unrelated reasons. Am I compiling incorrectly? Could this be a permissions issue?
I compiled with clang -Wall -framework CoreGraphics core.c -o core.o
Since the type identifiers in UTCoreTypes.h have been deprecated, what's the expected way to use the Core Graphics APIs that use those types, particularly in C code that doesn't have access to the UniformTypeIdentifiers framework?
Using CFSTR( "public.jpeg" ) works, but is that the new best practice, or have the core type definitions been moved/renamed?
I would like to get information about the connected display, such as the vendor number, EISA ID, etc., after connecting an external display via "Screen Mirroring" -> "Use As Separate Display".
When the same display is connected through the HDMI port versus extend mode in screen mirroring, the information is not identical:
HDMI
Other display found - ID: 19241XXXX, Name: YYYY (Vendor: 19ZZZ, Model: 57WWW)
Screen mirroring - extend mode
Other display found - ID: 41288XX, Name: AAA (Vendor: 163ZYYBBB, Model: 16ZZWWYYY)
I tried to get display information with the below method.
func configureDisplays() {
    var onlineDisplayIDs = [CGDirectDisplayID](repeating: 0, count: 16)
    var displayCount: UInt32 = 0
    guard CGGetOnlineDisplayList(16, &onlineDisplayIDs, &displayCount) == .success else {
        os_log("Unable to get display list.", type: .info)
        return
    }
    for onlineDisplayID in onlineDisplayIDs where onlineDisplayID != 0 {
        let name = DisplayManager.getDisplayNameByID(displayID: onlineDisplayID)
        let id = onlineDisplayID
        let vendorNumber = CGDisplayVendorNumber(onlineDisplayID)
        let modelNumber = CGDisplayModelNumber(onlineDisplayID)
        let serialNumber = CGDisplaySerialNumber(onlineDisplayID)
        if !DEBUG_SW, DisplayManager.isAppleDisplay(displayID: onlineDisplayID) {
            let appleDisplay = AppleDisplay(id, name: name, vendorNumber: vendorNumber, modelNumber: modelNumber, serialNumber: serialNumber, isVirtual: isVirtual, isDummy: isDummy)
            os_log("Apple display found - %{public}@", type: .info, "ID: \(appleDisplay.identifier), Name: \(appleDisplay.name) (Vendor: \(appleDisplay.vendorNumber ?? 0), Model: \(appleDisplay.modelNumber ?? 0))")
        } else {
            let otherDisplay = OtherDisplay(id, name: name, vendorNumber: vendorNumber, modelNumber: modelNumber, serialNumber: serialNumber, isVirtual: isVirtual, isDummy: isDummy)
            os_log("Other display found - %{public}@", type: .info, "ID: \(otherDisplay.identifier), Name: \(otherDisplay.name) (Vendor: \(otherDisplay.vendorNumber ?? 0), Model: \(otherDisplay.modelNumber ?? 0))")
        }
    }
}
Can we get the same display information when connecting an external display via the HDMI port and via extend mode in Screen Mirroring?