I have a CoreImage pipeline and one of my steps is to rotate my image about the origin (bottom left corner) and then translate it. I'm not seeing the behaviour I'm expecting, and I think my problem is in how I'm combining these two steps.
As an example, I start with an identity transform
(lldb) po transform333
▿ CGAffineTransform
- a : 1.0
- b : 0.0
- c : 0.0
- d : 1.0
- tx : 0.0
- ty : 0.0
I then rotate 1.57 radians (approx. 90 degrees, CCW)
transform333 = transform333.rotated(by: 1.57)
- a : 0.0007963267107332633
- b : 0.9999996829318346
- c : -0.9999996829318346
- d : 0.0007963267107332633
- tx : 0.0
- ty : 0.0
I understand the current contents of the transform.
But then I translate by 10, 10:
(lldb) po transform333.translatedBy(x: 10, y: 10)
- a : 0.0007963267107332633
- b : 0.9999996829318346
- c : -0.9999996829318346
- d : 0.0007963267107332633
- tx : -9.992033562211013
- ty : 10.007960096425679
I was expecting tx and ty to be 10 and 10.
I have noticed that when I reverse the order of these operations, the transform contents look correct. So I'll most likely just perform the steps in what feels to me like the incorrect order.
Is anyone willing/able to point me to an explanation of why the steps I'm performing are giving me these results?
thanks,
mike
Hello all... is there a way to close a contour if you have found, say, two points on each side of the top "extension"? See the image attached. The end goal is a trapezoid-type shape. A code example would be very appreciated, thank you :) I think I have it as a CGPath. So, is there a way to edit a CGPath, or to close the top from a top-left to a top-right point?
Hi, I'm learning MAUI and was trying to use the VNDocumentCameraViewController provided by VisionKit to scan documents. It is working fine, but I realized that I was not able to customize some of the options provided by default, like disabling the auto scan option. Is there any way to disable auto scan, or are there any other alternatives with the same functionality as VNDocumentCameraViewController that are more customizable?
End goal: to detect 3 lines and 2 corners accurately. I am trying contours, but they are a bit off. Is there a way, or settings in contour detection, to detect corners and lines more accurately, perhaps with fewer but sharper-edged contours? Or some other API or method, please?
I would love an email please ;) thank you. (There is also an overlay/scale issue.)
I am facing too many lags in PUBG Mobile when I am playing; it is not running properly.
macOS 15.2, MBP M1, built-in display.
The following code, which draws a clockwise arc, produces a line outside the bounds of my clipping region when drawing to CGLayers:
CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);
Drawing other shapes such as rects or ellipses doesn't cause a problem.
I can work around the issue by bringing the path back to the start of the arc:
CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
// add a second arc back to the start
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -endRads, -startRads, 0);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);
But this does appear to be a bug.
Hello, I am making a project in SDL, and with that I am using SDL_Image. I am doing all of this in Xcode.
I've been able to initialize everything just fine, but issues spring up when I try to load an image.
When I give the code a path to look for an image:
Unable to load image! IMG_Error: Couldn't open [Insert image path here]: Operation not permitted
I get that error.
Keep in mind that "Unable to load image" is a general error I put in the code in case loading the image fails; the specific error, retrieved with IMG_GetError(), is what we really need to know.
I've seen before that this might occur if a program does not have full disk access. Because of this, I've tried giving Xcode full disk access, but this didn't work and I still got the same error.
After many past OS and Xcode updates, my Game Controller Swift code generates a "DIS-CONNECTED" message.
Mac Sequoia 15.2
Xcode 16.2
Tried to update PlayStation controller firmware on my Mac.
Still no luck with Xcode and its use of a game controller with tvOS.
I’m trying to create a CGContext that matches my screen using the following code
let context = CGContext(
    data: pixelBuffer,
    width: pixelWidth,    // 3456
    height: pixelHeight,  // 2234
    bitsPerComponent: 10, // PixelEncoding = "--RRRRRRRRRRGGGGGGGGGGBBBBBBBBBB"
    bytesPerRow: bytesPerRow, // 13824
    space: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
    bitmapInfo: CGImageAlphaInfo.none.rawValue
        | CGImagePixelFormatInfo.RGBCIF10.rawValue
        | CGImageByteOrderInfo.order16Little.rawValue
)
But it fails with an error
CGBitmapContextCreate: unsupported parameter combination:
RGB
10 bits/component, integer
13824 bytes/row
kCGImageAlphaNone
kCGImageByteOrder16Little
kCGImagePixelFormatRGBCIF10
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.
Why is it unsupported if it matches the 6th option? (32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little)
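Not an answer, but for anyone comparing the parameters: the stride and the bit layout named in the code comment are at least self-consistent. A small C sketch of that packing (the channel order, and how kCGImageByteOrder16Little permutes bytes in memory, are assumptions taken from the PixelEncoding comment, not verified against RGBCIF10):

```c
#include <stdint.h>

/* Illustration of the layout the PixelEncoding comment names:
   "--RRRRRRRRRRGGGGGGGGGGBBBBBBBBBB" = 2 padding bits plus three
   10-bit channels in a 32-bit pixel, hence 3456 px * 4 bytes/px
   = 13824 bytes per row. */
static inline uint32_t pack_rgb10(uint32_t r, uint32_t g, uint32_t b) {
    return ((r & 0x3FFu) << 20) | ((g & 0x3FFu) << 10) | (b & 0x3FFu);
}

static inline void unpack_rgb10(uint32_t p,
                                uint32_t *r, uint32_t *g, uint32_t *b) {
    *r = (p >> 20) & 0x3FFu;
    *g = (p >> 10) & 0x3FFu;
    *b = p & 0x3FFu;
}
```

Round-tripping a pixel through pack/unpack, and checking that the top two bits stay clear, confirms only the arithmetic of the layout, not that CGBitmapContextCreate will accept it.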
Hi, I updated my system to 15.2. It was in Turkish, and I changed it to English for Apple Intelligence, but Image Playground is not working.
My Playground app has been stuck on "Downloading support for Image Playground" for about 4-5 days now.
We are a team of engineers working on an app intended to visualize medical images. The situations in which the app is used involve time-critical decision making for acute clinical conditions, so stability and performance are of utmost importance and directly support timely treatment. The app uses multiple libraries and tools such as vtk, webgl, opengl, webkit, and gl-matrix.
The problem can be described as follows: when a 3D volume is rendered in the app and we try to rotate it, the rotation is slow, unresponsive, and laggy. Specifically, we have noticed that on iOS 18.1 the volume rotation is much smoother than on the latest iOS 18.2. Earlier we faced a somewhat similar issue with iOS 17, but it improved in iOS 18.1. This performance regression is affecting the user experience in our healthcare application.
We have taken reference from the cornerstone.js code and you can reproduce the issue using the following example: https://www.cornerstonejs.org/live-examples/volumeviewport3d
Steps to Reproduce:
Load the above-mentioned test example on an iPhone running iOS 18.2 using Safari.
Perform volume rendering using the provided dataset.
Measure the time taken by the volume for each rotate or drag action.
Repeat the same steps on an iPhone running iOS 18.1 for comparison.
Additional Information:
Device models tested: iPhone 12, iPhone 13, iPhone 14
iOS versions with issue: 18.2, 18.3 (beta)
I would appreciate any insights or suggestions on how to address this performance regression. If additional information is needed, please let me know.
Thank you.
I am trying to convert a JPG image to JP2 (JPEG 2000) format using the ImageMagick library on iOS. However, although the file extension changes to .jp2, the image format does not actually seem to change: the output is still treated as a JPG file, not a true JP2.
Here is the code:
- (IBAction)convertButtonClicked:(id)sender {
    NSString *jpgPath = [[NSBundle mainBundle] pathForResource:@"Example" ofType:@"jpg"];
    NSString *tempFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"Converted.jp2"];
    MagickWand *wand = NewMagickWand();
    if (MagickReadImage(wand, [jpgPath UTF8String]) == MagickFalse) {
        ExceptionType severity;
        char *description = MagickGetException(wand, &severity);
        NSLog(@"Error reading image: %s", description);
        MagickRelinquishMemory(description);
        DestroyMagickWand(wand);
        return;
    }
    if (MagickSetFormat(wand, "JP2") == MagickFalse) {
        ExceptionType severity;
        char *description = MagickGetException(wand, &severity);
        NSLog(@"Error setting image format to JP2: %s", description);
        MagickRelinquishMemory(description);
    }
    if (MagickWriteImage(wand, [tempFilePath UTF8String]) == MagickFalse) {
        NSLog(@"Error writing JP2 image");
        DestroyMagickWand(wand);
        return;
    }
    DestroyMagickWand(wand);
    NSLog(@"Image successfully converted.");
}
@end
Hello,
I am trying to read video frames using AVAssetReaderTrackOutput. Here is the sample code:
//prepare assets
let asset = AVURLAsset(url: some_url)
let assetReader = try AVAssetReader(asset: asset)
guard let videoTrack = try await asset.loadTracks(withMediaCharacteristic: .visual).first else {
throw SomeErrorCode.error
}
var readerSettings: [String: Any] = [
kCVPixelBufferIOSurfacePropertiesKey as String: [String: String]()
]
//check if HDR video
var isHDRDetected: Bool = false
let hdrTracks = try await asset.loadTracks(withMediaCharacteristic: .containsHDRVideo)
if hdrTracks.count > 0 {
readerSettings[AVVideoAllowWideColorKey as String] = true
readerSettings[kCVPixelBufferPixelFormatTypeKey as String] =
kCVPixelFormatType_420YpCbCr10BiPlanarFullRange
isHDRDetected = true
}
//add output to assetReader
let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: readerSettings)
guard assetReader.canAdd(output) else {
throw SomeErrorCode.error
}
assetReader.add(output)
guard assetReader.startReading() else {
throw SomeErrorCode.error
}
//add writer output settings
let videoOutputSettings: [String: Any] = [
AVVideoCodecKey: AVVideoCodecType.hevc,
AVVideoWidthKey: 1920,
AVVideoHeightKey: 1080,
]
let finalPath = URL(fileURLWithPath: "//some URL path")
let assetWriter = try AVAssetWriter(outputURL: finalPath, fileType: AVFileType.mov)
guard assetWriter.canApply(outputSettings: videoOutputSettings, forMediaType: AVMediaType.video)
else {
throw SomeErrorCode.error
}
let assetWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
let sourcePixelAttributes: [String: Any] = [
kCVPixelBufferPixelFormatTypeKey as String: isHDRDetected
? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange : kCVPixelFormatType_32ARGB,
kCVPixelBufferWidthKey as String: 1920,
kCVPixelBufferHeightKey as String: 1080,
]
//create assetAdaptor
let assetAdaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
assetWriterInput: assetWriterInput, sourcePixelBufferAttributes: sourcePixelAttributes)
guard assetWriter.canAdd(assetWriterInput) else {
throw SomeErrorCode.error
}
assetWriter.add(assetWriterInput)
guard assetWriter.startWriting() else {
throw SomeErrorCode.error
}
assetWriter.startSession(atSourceTime: CMTime.zero)
//prepare transfer session
var session: VTPixelTransferSession? = nil
guard
VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &session)
== noErr, let session
else {
throw SomeErrorCode.error
}
guard let pixelBufferPool = assetAdaptor.pixelBufferPool else {
throw SomeErrorCode.error
}
//read through frames
while let nextSampleBuffer = output.copyNextSampleBuffer() {
autoreleasepool {
guard let imageBuffer = CMSampleBufferGetImageBuffer(nextSampleBuffer) else {
return
}
//this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 23:58 timestamp
let attachment = [
kCVImageBufferYCbCrMatrixKey: kCVImageBufferYCbCrMatrix_ITU_R_2020,
kCVImageBufferColorPrimariesKey: kCVImageBufferColorPrimaries_ITU_R_2020,
kCVImageBufferTransferFunctionKey: kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ,
]
CVBufferSetAttachments(imageBuffer, attachment as CFDictionary, .shouldPropagate)
//now convert to CIImage with HDR data
let image = CIImage(cvPixelBuffer: imageBuffer)
let cropped = "" //here perform some actions like cropping, flipping, etc. and preserve this changes by converting the extent to CGImage first:
//this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 24:30 timestamp
guard
let cgImage = context.createCGImage(
cropped, from: cropped.extent, format: .RGBA16,
colorSpace: CGColorSpace(name: CGColorSpace.itur_2100_PQ)!)
else {
return // inside the autoreleasepool closure, so return rather than continue
}
//finally convert it back to CIImage
let newScaledImage = CIImage(cgImage: cgImage)
//now write it to a new pixelBuffer
let pixelBufferAttributes: [String: Any] = [
kCVPixelBufferCGImageCompatibilityKey as String: true,
kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
]
var pixelBuffer: CVPixelBuffer?
CVPixelBufferCreate(
kCFAllocatorDefault, Int(newScaledImage.extent.width), Int(newScaledImage.extent.height),
kCVPixelFormatType_420YpCbCr10BiPlanarFullRange, pixelBufferAttributes as CFDictionary,
&pixelBuffer)
guard let pixelBuffer else {
return
}
context.render(newScaledImage, to: pixelBuffer) //context is a CIContext reference
var pixelTransferBuffer: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelTransferBuffer)
guard let pixelTransferBuffer else {
return
}
// Transfer the image to the pixel buffer.
guard
VTPixelTransferSessionTransferImage(session, from: pixelBuffer, to: pixelTransferBuffer)
== noErr
else {
return
}
//finally append to taggedBuffer
}
}
assetWriterInput.markAsFinished()
await assetWriter.finishWriting()
The resulting video does not have the same colors as the original; it turns out too bright. If I play around with the attachment values, it can be either too dim or too bright, but never exactly like the original. What am I missing in my setup? I did find that kCVPixelFormatType_4444AYpCbCr16 can produce proper video output, but then I can't convert it to CIImage, so I can't do the CIImage operations I need, mainly cropping and resizing.
Hello!
Brand new to the Apple developer community, so hello everyone! I'm a game developer; we just launched our first game on PC, and I'm looking to port it to iOS. Time is something I'm short on, and I hear it takes some jumping through hoops to get the know-how to port something to mobile. Any good sites you'd recommend for finding programmers to port your game? It's fairly simple, just a visual novel. Any and all suggestions welcome!
All the best!
Elijah
Hi all,
I have been trying to get Apple's AssistiveTouch "Snap to Item" feature to work for a Unity game built using Apple's Core & Accessibility API. Switch Control recognises these buttons; however, eye tracking will not snap to them. The case in which it needs to snap is when an external eye-tracking device is connected and uses AssistiveTouch with Snap to Item.
All buttons in the game have an AccessibilityNode with the trait 'Button' and an appropriate label, which, following the documentation and comments on the developer forum, should allow them to be recognised by Snap to Item.
This is not the case: devices (iPads and iPhones) do not recognise the buttons as snap-to targets.
Does anyone know why this is the case, and if this is a bug?
I am trying to create empty metadata and set the HDRGainMapHeadroom at ***. However, the final returned mutableMetadata doesn't contain HDRGainMap:HDRGainMapVersion or HDRGainMap:HDRGainMapHeadroom, though iio:hasXMP exists.
Why? Is the reason that the HDRGainMap namespace is private?
func createHDRGainMapMetadata(version: Int, headroom: Double) -> CGImageMetadata? {
    // Create a mutable metadata object
    let mutableMetadata = CGImageMetadataCreateMutable()
    // Define the namespace for HDRGainMap
    let namespace = "HDRGainMap"

    // Set the iio:hasXMP value
    let xmpKeyPath = "iio:hasXMP"
    let xmpValue = String(true)
    let xmpSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, xmpKeyPath as CFString, xmpValue as CFString)
    if xmpSetResult == false {
        print("Failed to set xmp")
    }

    // Set the HDRGainMapVersion value
    let versionKeyPath = "\(namespace):HDRGainMapVersion"
    let versionValue = String(version)
    let versionSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, versionKeyPath as CFString, versionValue as CFString)
    if versionSetResult == false {
        print("Failed to set HDRGainMapVersion")
    }

    // Set the HDRGainMapHeadroom value
    let headroomKeyPath = "\(namespace):HDRGainMapHeadroom"
    let headroomValue = String(headroom)
    let headroomSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, headroomKeyPath as CFString, headroomValue as CFString)
    if headroomSetResult == false {
        print("Failed to set HDRGainMapHeadroom")
    }

    return mutableMetadata
}
1. Open the DingTalk macOS application, start a meeting, and initiate screen sharing.
2. The DingTalk app calls the [SCShareableContent getShareableContentWithCompletionHandler:] API.
3. The API takes 40+ seconds to return a response.
Hello, I bought an iPhone 13, and on the display, exactly in the middle of the volume-up (+) button, there is a dot. It looks as if it were some kind of feature, but I haven't figured anything out. I don't know whether it's a defect or not.
I am making a SpriteKit game and I am trying to change the cursor image from the default pointer to a PNG image that I have imported into the project, but it's not really working. When I run the project I can see the cursor image change for a brief second and then return to the default image. Here is my code:
print(NSCursor.current)
if let image = NSImage(named: customImage) {
    print("The image exists")
    cursor = NSCursor(image: image, hotSpot: .zero)
    cursor.push()
    print(cursor)
}
print(NSCursor.current)
The above code is all contained in the didMove(to:) function in GameScene. From the print statements I can see that the memory address of NSCursor.current changes to that of cursor. HOWEVER, in the mouseMoved(with:) callback function I print out the mouse location and the current cursor. I can see from these print statements that the cursor memory address has again changed and no longer matches the custom cursor's address… so I am not sure what is going on…
Also, FYI, the cursor is a global property within GameScene, so it should persist. Also, image is not nil; this is verified by the print statements I see.
Thanks
__builtin_ia32_cvtb2mask512() is the GNU C builtin for vpmovb2m k, zmm.
The Intel intrinsic for it is _mm512_movepi8_mask.
It extracts the most-significant bit from each byte, producing an integer mask.
The SSE2 and AVX2 instructions pmovmskb and vpmovmskb do the same thing for 16 or 32-byte vectors, producing the mask in a GPR instead of an AVX-512 mask register. (_mm_movemask_epi8 and _mm256_movemask_epi8).
I would like an implementation for ARM that is faster than the scalar version below: ideally one for ARM NEON and one for ARM SVE.
I have attached a basic scalar implementation in C. For those trying to implement this on ARM: we care about each byte's high bit (in a 128-bit vector), which can easily be shifted to the low bit using the ARM NEON intrinsic vshrq_n_u8(). Note that I would prefer not to store the bitmap to memory; it should just be the return value of the function, similar to the following function.
#define _(n) __attribute((vector_size(1<<n),aligned(1)))
typedef char V _(6); // 64 bytes, 512 bits
typedef unsigned long U;
#undef _

U generic_cvtb2mask512(V v) {
    U mask = 0;
    for (int i = 0; i < 64; i++) {
        // shift mask left by 1 and OR in the MSB of byte v[i]
        // (note: v[0] ends up in bit 63, the reverse of vpmovb2m's byte i -> bit i)
        mask = (mask << 1) | ((v[i] & 0x80) >> 7);
    }
    return mask;
}
This is also a dup of: https://stackoverflow.com/questions/79225312
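Short of NEON or SVE, the per-byte loop can at least be collapsed eight bytes at a time in portable C with the classic multiply-gather trick. This is only a sketch for comparison, not the ARM answer the question asks for; it assumes a little-endian target and uses vpmovb2m's bit order (byte i lands in bit i):

```c
#include <stdint.h>
#include <string.h>

/* Portable sketch: gather the MSB of each of 64 bytes into a uint64_t,
   eight bytes per iteration. Within each 8-byte word, isolating the MSBs
   and multiplying by 0x0102040810204080 funnels the eight isolated bits
   into the top byte of the product (the partial products never collide,
   so no carries occur). Assumes little-endian byte order. */
uint64_t portable_cvtb2mask512(const unsigned char *v) {
    uint64_t mask = 0;
    for (int k = 0; k < 8; k++) {
        uint64_t w;
        memcpy(&w, v + 8 * k, 8);              /* byte j of w == v[8k+j] (LE) */
        w = (w >> 7) & 0x0101010101010101ull;  /* each byte's MSB -> its bit 0 */
        mask |= ((w * 0x0102040810204080ull) >> 56) << (8 * k);
    }
    return mask;
}
```

On NEON, the usual shape of an answer is vshrq_n_u8(v, 7) followed by an AND with per-lane bit weights and a pairwise-add reduction to collapse each 128-bit vector to a 16-bit mask; on SVE the comparison can form the mask directly in a predicate register. Both are left as pointers here rather than tested code.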