Reply to EDR and display brightness
My guess is that this is related to the LCD display in your Mac. An LCD works by blocking light, but it's not perfect: say black = 99.9% absorption, or 0.001 transmittance. If your max brightness is 500 nits, then full backlight + LCD at reference white gives 500 nits, and your blacks are going to be 0.001 * 500 = 0.5 nits.

At half brightness, if you keep the backlight the same but increase the dimming on the LCD, you end up with EDR headroom. BUT, now the blacks (still 0.5 nits) are brighter relative to the reference white (250 nits). So if you did this all the way down to minimum brightness, the laptop screen would look washed out and consume extra power keeping the backlight at full.

Other display types like OLED don't have this issue, and LCDs with local dimming can mitigate it by turning down some of the backlight zones (you still get blacks washing out in some areas, i.e. blooming), so it's practical to end up with a greater dynamic range.
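To put numbers on it, here's that arithmetic as a quick sketch (the 500-nit panel and 0.001 transmittance are the illustrative figures from above, not measured values):

// Illustrative figures only, for a hypothetical 500-nit LCD panel
let backlightNits = 500.0
let blackTransmittance = 0.001                        // ~99.9% absorption at "black"
let blackLevel = backlightNits * blackTransmittance   // 0.5 nits while the backlight stays at full

// Full brightness: reference white == full backlight
let fullContrast = backlightNits / blackLevel         // 1000:1

// "Half brightness" done purely with extra LCD dimming, backlight unchanged
let referenceWhite = 250.0
let edrHeadroom = backlightNits / referenceWhite      // 2x EDR headroom
let halfContrast = referenceWhite / blackLevel        // 500:1 -- blacks are twice as bright relative to white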
Sep ’21
Reply to Converting UIImage to jpegData - [Metal] 9072 by 12198 iosurface is too large for GPU
What is going on in ImagePicker? What type is Image? The device's camera should give you an image a lot smaller than 9072 by 12198. If you want to support images that large, try CGImageDestination. For example:

import CoreGraphics
import ImageIO
import UniformTypeIdentifiers

enum DocumentFailure: Error {
    case savingUnknown
}

struct ImageDestination {
    let outputData = CFDataCreateMutable(nil, 0)!
    let cgImageDestination: CGImageDestination

    init?(type: UTType, count: Int = 1) {
        guard let d = CGImageDestinationCreateWithData(outputData, type.identifier as CFString, count, nil) else {
            return nil
        }
        self.cgImageDestination = d
    }

    mutating func addImage(_ image: CGImage) {
        CGImageDestinationAddImage(cgImageDestination, image, nil)
    }

    mutating func finalize() throws {
        guard CGImageDestinationFinalize(cgImageDestination) else {
            throw DocumentFailure.savingUnknown
        }
    }
}

// Usage (uiImage is the UIImage you're converting):
let imageCG = uiImage.cgImage!
var dest = ImageDestination(type: .jpeg)!
dest.addImage(imageCG)
try dest.finalize()
// Your data:
dest.outputData
Sep ’21
Reply to How can I get depth data (distance) from Ipad Pro 12.9 2020?
I'm assuming you want to read the depth data for a certain point within the texture on the CPU? It's a bit more straightforward in Metal code, but here is a quick extension that might help you:

import CoreVideo
import Metal
import simd

extension CVPixelBuffer {

    /// Requires CVPixelBufferLockBaseAddress(_:_:) first.
    var data: UnsafeRawBufferPointer? {
        let size = CVPixelBufferGetDataSize(self)
        return .init(start: CVPixelBufferGetBaseAddress(self), count: size)
    }

    var pixelSize: simd_int2 { simd_int2(Int32(width), Int32(height)) }
    var width: Int { CVPixelBufferGetWidth(self) }
    var height: Int { CVPixelBufferGetHeight(self) }

    /// `location` is a normalized (0...1) coordinate.
    func sample(location: simd_float2) -> simd_float4? {
        let pixelSize = self.pixelSize
        guard pixelSize.x > 0 && pixelSize.y > 0 else { return nil }
        guard CVPixelBufferLockBaseAddress(self, .readOnly) == noErr else { return nil }
        defer { CVPixelBufferUnlockBaseAddress(self, .readOnly) }
        guard let data = data else { return nil }
        let pix = location * simd_float2(pixelSize)
        let clamped = clamp(simd_int2(pix), min: .zero, max: pixelSize &- simd_int2(1, 1))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(self)
        let row = Int(clamped.y)
        let column = Int(clamped.x)
        let rowPtr = data.baseAddress! + row * bytesPerRow
        switch CVPixelBufferGetPixelFormatType(self) {
        case kCVPixelFormatType_DepthFloat32:
            // Bind the row to the right type
            let typed = rowPtr.assumingMemoryBound(to: Float.self)
            return .init(typed[column], 0, 0, 0)
        case kCVPixelFormatType_32BGRA:
            // Bind the row to the right type
            let typed = rowPtr.assumingMemoryBound(to: UInt8.self)
            return .init(Float(typed[column]) / Float(UInt8.max), 0, 0, 0)
        default:
            return nil
        }
    }
}
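And a hypothetical usage, assuming `depthPixelBuffer` is the depth map your capture pipeline hands you (e.g. from ARKit or AVDepthData):

// Sample the center of the depth map; .x holds the depth value
if let sample = depthPixelBuffer.sample(location: simd_float2(0.5, 0.5)) {
    print("Distance at center: \(sample.x) meters")
}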
Apr ’21
Reply to supported iosurface/pixelbuffer formats
Generally you want to use the "4:2:0" (half-resolution chroma) biplanar formats:

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange  = '420v'
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange   = '420f'
kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange = 'x420'
kCVPixelFormatType_420YpCbCr10BiPlanarFullRange  = 'xf20'

I've successfully used these to grab frames and pass them to -[CIImage initWithCVPixelBuffer:]. Hope that helps!
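If it's useful, here's a quick sketch of allocating an IOSurface-backed pixel buffer in one of these formats and wrapping it in a CIImage (the 1920x1080 size is just a placeholder):

import CoreVideo
import CoreImage

// Requesting IOSurface backing so the buffer can be shared across APIs
let attrs = [kCVPixelBufferIOSurfacePropertiesKey as String: [:] as [String: Any]] as CFDictionary

var pixelBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(
    kCFAllocatorDefault,
    1920, 1080,
    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,  // '420f'
    attrs,
    &pixelBuffer
)

if status == kCVReturnSuccess, let buffer = pixelBuffer {
    let image = CIImage(cvPixelBuffer: buffer)
    // ...render or process with a CIContext
}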
Oct ’20
Reply to Rendering from a texture
Yeah, that works if you want to display a quad within a Metal view, and it's actually a lot less code than most rendering APIs. If you have a Metal texture, e.g. from rendering with Core Image / MPS / your own Metal compute kernel, and you just want to show it in a CALayer, you can make an IOSurface-backed texture and set the IOSurface as the contents of the CALayer.
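A minimal sketch of that approach, assuming BGRA output and a layer you already have; the size, pixel format, and usage flags are placeholders to adapt to your pipeline:

import Metal
import IOSurface
import CoreVideo
import QuartzCore

let width = 1024, height = 768

// An IOSurface that can back both an MTLTexture and a CALayer's contents
let surfaceProps: [String: Any] = [
    kIOSurfaceWidth as String: width,
    kIOSurfaceHeight as String: height,
    kIOSurfaceBytesPerElement as String: 4,
    kIOSurfacePixelFormat as String: kCVPixelFormatType_32BGRA
]
let surface = IOSurfaceCreate(surfaceProps as CFDictionary)!

let device = MTLCreateSystemDefaultDevice()!
let desc = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
desc.usage = [.renderTarget, .shaderWrite]

// A texture whose storage is the IOSurface itself
let texture = device.makeTexture(descriptor: desc, iosurface: surface, plane: 0)!

// Render into `texture` (Core Image, MPS, compute kernel), then hand the
// surface to whatever CALayer you're displaying in:
// layer.contents = surface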
Oct ’20