Yeah, that works if you want to display a quad within a Metal view. That's actually a lot less code than most rendering APIs require.
If you have a Metal texture, e.g. from rendering with Core Image / MPS / your own Metal compute kernel, and just want to show it in a CALayer, you can make an IOSurface-backed texture and set the IOSurface as the contents of the CALayer.
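Roughly, and assuming a BGRA8 pixel format (the function name here is just illustrative):

import CoreVideo
import IOSurface
import Metal
import QuartzCore

// Create an IOSurface and a Metal texture that shares its memory.
func makeSurfaceBackedTexture(device: MTLDevice, width: Int, height: Int) -> (IOSurfaceRef, MTLTexture)? {
    let properties: [String: Any] = [
        kIOSurfaceWidth as String: width,
        kIOSurfaceHeight as String: height,
        kIOSurfaceBytesPerElement as String: 4,
        kIOSurfacePixelFormat as String: kCVPixelFormatType_32BGRA
    ]
    guard let surface = IOSurfaceCreate(properties as CFDictionary) else { return nil }

    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
    descriptor.usage = [.renderTarget, .shaderWrite]
    guard let texture = device.makeTexture(descriptor: descriptor, iosurface: surface, plane: 0) else { return nil }
    return (surface, texture)
}

// After rendering into the texture, hand the surface to your layer:
// layer.contents = surface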
Have you considered creating one or more texture atlases of the maximum texture size, then using blit commands to replace regions as needed? On newer devices you might also benefit from sparse textures: https://developer.apple.com/documentation/metal/textures/managing_texture_memory?language=objc
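For the blit part, here's a rough sketch (the staging-buffer layout, 4 bytes per pixel, and the names are just assumptions):

import Metal

// Replace one sub-region of a large atlas texture with newly generated pixels.
func replaceRegion(of atlas: MTLTexture,
                   at origin: MTLOrigin,
                   width: Int, height: Int,
                   from staging: MTLBuffer,      // tightly packed BGRA8 pixels
                   using queue: MTLCommandQueue) {
    guard let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }

    blit.copy(from: staging,
              sourceOffset: 0,
              sourceBytesPerRow: width * 4,
              sourceBytesPerImage: width * 4 * height,
              sourceSize: MTLSize(width: width, height: height, depth: 1),
              to: atlas,
              destinationSlice: 0,
              destinationLevel: 0,
              destinationOrigin: origin)
    blit.endEncoding()
    commandBuffer.commit()
}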
Rendering offscreen is normal in Metal. You can create an MTLCommandQueue independently of any Metal view or CADisplayLink, render to MTLTextures that you create, and wait for each command buffer to complete. No need to display it on screen. Perhaps if you post more info, we could point you in the right direction for what you're trying to do.
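A bare-bones offscreen pass might look something like this (no view or display link involved; this sketch just clears the texture where your draw calls would go):

import Metal

// Render into a texture you own and block until the GPU has finished.
func renderOffscreen(device: MTLDevice, queue: MTLCommandQueue, width: Int, height: Int) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
    descriptor.usage = [.renderTarget, .shaderRead]
    guard let target = device.makeTexture(descriptor: descriptor) else { return nil }

    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = target
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
    pass.colorAttachments[0].storeAction = .store

    guard let commandBuffer = queue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass) else { return nil }
    // ... set your pipeline state, buffers, and draw calls here ...
    encoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()   // wait for this command buffer to complete
    return target
}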
Usually, you want to double- or triple-buffer so you're not waiting for the memory region to become available again. I.e., make three textures and rotate between them, so at any given time one is free to be updated, one is being enqueued, and one is being used for rendering.
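One common shape for that rotation is a small ring guarded by a semaphore; purely illustrative:

import Dispatch
import Metal

// A ring of reusable textures; the semaphore keeps the CPU from touching one
// the GPU is still reading.
final class TextureRing {
    private let textures: [MTLTexture]
    private let available: DispatchSemaphore
    private var index = 0

    init(textures: [MTLTexture]) {        // typically three
        self.textures = textures
        self.available = DispatchSemaphore(value: textures.count)
    }

    // Blocks until one of the textures is free to be updated.
    func nextTexture() -> MTLTexture {
        available.wait()
        defer { index = (index + 1) % textures.count }
        return textures[index]
    }

    // Call from the command buffer's completion handler once the GPU is done with it.
    func reclaim() {
        available.signal()
    }
}

// per frame:
// let texture = ring.nextTexture()
// ... update the texture, encode work that uses it ...
// commandBuffer.addCompletedHandler { _ in ring.reclaim() }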
The purpose of threadgroup or tile memory is that it is much faster to access than system memory, since it's directly on the GPU hardware rather than requiring loads and stores to go out over the memory bus. You can store intermediate results there to avoid expensive RAM traffic. Hope that helps.
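From the Swift side you only reserve that space when encoding; the kernel declares a matching threadgroup parameter in MSL and keeps its intermediate work there. A sketch, with arbitrary sizes:

import Metal

// Reserve 16 KB of on-chip threadgroup memory at index 0 for the kernel's
// intermediate results (the MSL kernel would take a matching
// `threadgroup float *scratch [[threadgroup(0)]]` parameter).
func encode(encoder: MTLComputeCommandEncoder, pipeline: MTLComputePipelineState) {
    encoder.setComputePipelineState(pipeline)
    encoder.setThreadgroupMemoryLength(16 * 1024, index: 0)
    encoder.dispatchThreadgroups(MTLSize(width: 32, height: 32, depth: 1),
                                 threadsPerThreadgroup: MTLSize(width: 8, height: 8, depth: 1))
}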
Generally, you want to use the 4:2:0 (half-resolution chroma) biplanar formats, such as:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v'
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange = '420f'
kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange = 'x420'
kCVPixelFormatType_420YpCbCr10BiPlanarFullRange = 'xf20'
I've successfully used these to grab frames and put them into -[CIImage initWithCVPixelBuffer:].
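As a rough sketch (assuming AVCaptureVideoDataOutput and the 8-bit full-range format):

import AVFoundation
import CoreImage

// Ask the capture output for a biplanar 4:2:0 format...
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
]

// ...then wrap each frame from captureOutput(_:didOutput:from:) in a CIImage.
func ciImage(from sampleBuffer: CMSampleBuffer) -> CIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    return CIImage(cvPixelBuffer: pixelBuffer)
}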
Hope that helps!
I'm assuming you want to read the depth data for a certain point within the texture on the CPU?
It's a bit more straightforward in Metal code, but here is a quick extension that might help you:
import CoreVideo
import Metal
import simd

extension CVPixelBuffer {
    // Requires CVPixelBufferLockBaseAddress(_:_:) first
    var data: UnsafeRawBufferPointer? {
        guard let base = CVPixelBufferGetBaseAddress(self) else { return nil }
        return .init(start: base, count: CVPixelBufferGetDataSize(self))
    }

    var pixelSize: simd_int2 {
        simd_int2(Int32(width), Int32(height))
    }

    var width: Int {
        CVPixelBufferGetWidth(self)
    }

    var height: Int {
        CVPixelBufferGetHeight(self)
    }

    // `location` is normalized to [0, 1] in each dimension
    func sample(location: simd_float2) -> simd_float4? {
        let pixelSize = self.pixelSize
        guard pixelSize.x > 0 && pixelSize.y > 0 else { return nil }
        guard CVPixelBufferLockBaseAddress(self, .readOnly) == kCVReturnSuccess else { return nil }
        defer { CVPixelBufferUnlockBaseAddress(self, .readOnly) }
        guard let data = data else { return nil }
        let pix = location * simd_float2(pixelSize)
        let clamped = clamp(simd_int2(pix), min: .zero, max: pixelSize &- simd_int2(1, 1))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(self)
        let row = Int(clamped.y)
        let column = Int(clamped.x)
        let rowPtr = data.baseAddress! + row * bytesPerRow
        switch CVPixelBufferGetPixelFormatType(self) {
        case kCVPixelFormatType_DepthFloat32:
            // Bind the row to the right type: one Float per pixel
            let typed = rowPtr.assumingMemoryBound(to: Float.self)
            return .init(typed[column], 0, 0, 0)
        case kCVPixelFormatType_32BGRA:
            // Bind the row to the right type: four bytes (B, G, R, A) per pixel
            let typed = rowPtr.assumingMemoryBound(to: UInt8.self)
            let p = column * 4
            let bgra = simd_float4(Float(typed[p]), Float(typed[p + 1]), Float(typed[p + 2]), Float(typed[p + 3]))
            // Return RGBA normalized to 0...1
            return simd_float4(bgra.z, bgra.y, bgra.x, bgra.w) / Float(UInt8.max)
        default:
            return nil
        }
    }
}
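For example (the location is normalized, so (0.5, 0.5) samples the center; depthMap is just a placeholder for whatever buffer you have):

// depthMap: a kCVPixelFormatType_DepthFloat32 CVPixelBuffer
if let value = depthMap.sample(location: simd_float2(0.5, 0.5)) {
    print("depth at the center: \(value.x)")
}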
Forgot to mention: I filed a more in-depth description and sample project as FB8835138. For custom video HDR rendering (outside of AVPlayerLayer), I filed FB8834833.
What is going on in ImagePicker? What type is Image? The device's camera should give an image a lot smaller than 9072 by 12198.
If you want to support images that large, try CGImageDestination. For example:
import CoreGraphics
import ImageIO
import UniformTypeIdentifiers

struct ImageDestination {
    let outputData = CFDataCreateMutable(nil, 0)!
    let cgImageDestination: CGImageDestination

    init?(type: UTType, count: Int = 1) {
        guard let d = CGImageDestinationCreateWithData(outputData, type.identifier as CFString, count, nil)
        else { return nil }
        self.cgImageDestination = d
    }

    mutating func addImage(_ image: CGImage) {
        CGImageDestinationAddImage(cgImageDestination, image, nil)
    }

    mutating func finalize() throws {
        guard CGImageDestinationFinalize(cgImageDestination) else {
            throw DocumentFailure.savingUnknown // substitute your own error type
        }
    }
}

// usage:
let imageCG = uiImage.cgImage!
var dest = ImageDestination(type: .jpeg)!
dest.addImage(imageCG)
try dest.finalize()
// your data
dest.outputData
Post more of your code. MTKView works fine with UIScrollView, so maybe one of their frames is zero-sized?
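For reference, a minimal setup that works, assuming explicit frames rather than Auto Layout:

import MetalKit
import UIKit

// Both views get explicit, non-zero frames; the scroll view's contentSize matches the MTKView.
func embedMetalView(in parent: UIView) {
    let scrollView = UIScrollView(frame: parent.bounds)
    let metalView = MTKView(frame: CGRect(x: 0, y: 0, width: 1024, height: 1024),
                            device: MTLCreateSystemDefaultDevice())
    scrollView.addSubview(metalView)
    scrollView.contentSize = metalView.bounds.size
    parent.addSubview(scrollView)
}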
My guess is that this is related to the LCD display in your Mac.
LCD works by blocking light, but it's not perfect. Say black = 99.9% absorption or 0.001 transmittance.
Say your max brightness is 500 nits; that means full backlight + LCD at reference white -> 500 nits. Your blacks are then going to be 0.001 * 500 = 0.5 nits.
At half brightness, if you keep the backlight the same but increase the dimming on the LCD, you end up with EDR headroom. BUT now the blacks (still 0.5 nits) are brighter relative to the reference white (250 nits).
So if you did this all the way down to minimum brightness, the laptop screen would look washed out and be consuming extra power to keep the backlight at full.
Other display types like OLED screens don't have this issue. LCDs with local dimming can mitigate it by turning down some of the backlight zones (you still get blacks washing out in some areas, i.e. blooming), so it's practical to end up with a greater dynamic range.
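If you want to see the headroom your display is currently reporting, you can query it from NSScreen (macOS; localizedName needs 10.15+):

import AppKit

// 1.0 means no EDR headroom right now; values above 1.0 mean content brighter
// than reference white can currently be displayed.
for screen in NSScreen.screens {
    let current = screen.maximumExtendedDynamicRangeColorComponentValue
    let potential = screen.maximumPotentialExtendedDynamicRangeColorComponentValue
    print("\(screen.localizedName): current \(current), potential \(potential)")
}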
If you don't have access to a more recent Mac, try using an automation on GitHub Actions.