I have this code to create an IOSurface from a bitmap image:
```objective-c
auto src = loadSource32f(); // rgba 32-bit float image
const auto desc = src->getDescriptor(); // metadata for that image
auto pixelFmt = CGMTLBufferManager::getCVPixelFormat( desc.channelBitDepth, desc.channelOrder ); // returns proper `RGfA`
int width = static_cast<int>( desc.width );
int height = static_cast<int>( desc.height );
int trowbytes = static_cast<int>( desc.trueRowbytes() ); // returns proper rowbytes value

CFMutableDictionaryRef properties = CFDictionaryCreateMutable(
    kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks );
CFDictionarySetValue(
    properties, kIOSurfaceWidth, CFNumberCreate( kCFAllocatorDefault, kCFNumberIntType, &width ) );
CFDictionarySetValue(
    properties, kIOSurfaceHeight, CFNumberCreate( kCFAllocatorDefault, kCFNumberIntType, &height ) );
CFDictionarySetValue(
    properties, kIOSurfacePixelFormat, CFNumberCreate( kCFAllocatorDefault, kCFNumberIntType, &pixelFmt ) );
CFDictionarySetValue(
    properties, kIOSurfaceBytesPerRow, CFNumberCreate( kCFAllocatorDefault, kCFNumberIntType, &trowbytes ) );

NSDictionary *nsprops = ( __bridge NSDictionary * )properties;
IOSurface *oSurface = [[IOSurface alloc] initWithProperties:nsprops];
CFRelease( properties );
ASSERT_TRUE( oSurface );

auto ioSurface = (IOSurfaceRef)oSurface;
```
I tested that the pixels are properly written into the IOSurface:
```objective-c
// copy data to surface
memcpy([oSurface baseAddress], src->getRawPtr(), src->getSizeInBytes());
auto surfPtr = (uint8_t*)[oSurface baseAddress];
// extract raw surface data and write it into a file
saveOutputRaw(desc, surfPtr, getFileName("IOSurfaceTestSurfaceRaw"));
```
And I see this:
Now I want to create an MTLTexture based on the IOSurface:
```objective-c
// create texture
auto fmt = IOSurfaceGetPixelFormat( ioSurface );
auto w = IOSurfaceGetWidth( ioSurface );
auto h = IOSurfaceGetHeight( ioSurface );
auto rowbytes = IOSurfaceGetBytesPerRow( ioSurface );

MTLTextureDescriptor *textureDescriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:CGMTLBufferManager::getMTLPixelFormat( fmt )
                                                        width:w
                                                       height:h
                                                    mipmapped:NO];
textureDescriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
textureDescriptor.storageMode = MTLStorageModeShared;

auto device = MTLCreateSystemDefaultDevice();
id<MTLTexture> surfaceTex = [device newTextureWithDescriptor:textureDescriptor iosurface:ioSurface plane:0];
```
And now I want to test this:
```objective-c
auto region = MTLRegionMake2D(0, 0, w, h);
auto bufSize = [oSurface allocationSize];

// get texture bytes
auto outBuf2 = std::vector<uint8_t>(bufSize);
[surfaceTex getBytes:outBuf2.data()
         bytesPerRow:rowbytes
          fromRegion:region
         mipmapLevel:0];

// save to file
saveOutputRaw(desc, outBuf2.data(), getFileName("IOSurfaceTestCreateTex"));
// get bytes
saveOutputRaw(desc, surfPtr, getFileName("IOSurfaceTestCreateRaw"));
```
And I get this result:
I also tried replaceRegion: and a blit encoder copyFromTexture:toTexture:, as well as a managed texture with synchronization, but the result is always the same: only the first 22 pixels get filled and the rest is transparent.
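For reference, the blit-encoder attempt was along these lines (a minimal sketch of what was tried, not the exact code; the staging texture and command queue are placeholders):

```objective-c
// Sketch of the blit readback attempt (stagingTex and queue are placeholder names).
id<MTLCommandQueue> queue = [device newCommandQueue];
id<MTLTexture> stagingTex = [device newTextureWithDescriptor:textureDescriptor]; // plain texture, not IOSurface-backed
id<MTLCommandBuffer> cmdBuf = [queue commandBuffer];
id<MTLBlitCommandEncoder> blit = [cmdBuf blitCommandEncoder];
[blit copyFromTexture:surfaceTex toTexture:stagingTex];
[blit endEncoding];
[cmdBuf commit];
[cmdBuf waitUntilCompleted];
[stagingTex getBytes:outBuf2.data() bytesPerRow:rowbytes fromRegion:region mipmapLevel:0];
```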
I have no idea what I'm missing. Please help.
I have the following piece of code:
```swift
extension MTLTexture {
    public var cgImage: CGImage? {
        let ctx = CIContext(mtlDevice: self.device)
        let options: [CIImageOption: Any] = [CIImageOption.colorSpace: CGColorSpace.linearSRGB]
        guard let image = CIImage(mtlTexture: self, options: options) else {
            print("CIImage not created")
            return nil
        }
        guard let imageSrgb = image.matchedToWorkingSpace(from: CGColorSpace(name: CGColorSpace.linearSRGB)!) else {
            print("CIImage not converted to srgb")
            return nil
        }
        let flipped = imageSrgb.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
        return ctx.createCGImage(flipped, from: flipped.extent)
    }
}
```
This code crashes with EXC_BAD_ACCESS (code 1) on the final createCGImage call, without any additional calls. However, when I pass nil for the options in the CIImage(mtlTexture:options:) call, it renders fine.
FWIW, there's no change if I remove the matchedToWorkingSpace and flip steps and call createCGImage straight from image.
It also crashes if I instantiate the context without the device:
```swift
return CIContext().createCGImage(...)
```
What am I missing?
I've got the following code that attempts to use the MPSImageScale shader to flip and convert a texture:
```objective-c
/// mtlDevice and mtlCommandBuffer are obtained earlier
/// srcTex and dstTex are valid and existing MTLTexture objects with the same descriptors
MPSScaleTransform scale{};
scale.scaleX = 1;
scale.scaleY = -1;

auto scaleShader = [[MPSImageScale alloc] initWithDevice:mtlDevice];
if ( scaleShader == nil ) {
    return ErrorType::OUT_OF_MEMORY;
}
scaleShader.scaleTransform = &scale;

[scaleShader encodeToCommandBuffer:mtlCommandBuffer
                     sourceTexture:srcTex
                destinationTexture:dstTex];
```
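For context, srcTex and dstTex are created roughly as follows (a sketch only; the size, pixel format, and usage flags are assumptions, not taken from the original code):

```objective-c
// Hypothetical setup for two textures with identical descriptors (all values are assumptions).
MTLTextureDescriptor *texDesc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA32Float
                                                       width:1920
                                                      height:1080
                                                   mipmapped:NO];
texDesc.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;

id<MTLTexture> srcTex = [mtlDevice newTextureWithDescriptor:texDesc];
id<MTLTexture> dstTex = [mtlDevice newTextureWithDescriptor:texDesc];
```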
No matter what I do, I keep getting EXC_BAD_ACCESS on the last line, with the assembly stopping before endEncoding:
```
0x7ff81492d804 <+1078>: callq  0x7ff81492ce5b ; ___lldb_unnamed_symbol373$$MPSImage
-> 0x7ff81492d809 <+1083>: movq   -0x98(%rbp), %rdi
0x7ff81492d810 <+1090>: movq   0x3756f991(%rip), %rsi ; "endEncoding"
```
All Metal objects are valid and I did all that I could to ensure that they are not the culprits here, including making sure that the pixel format of both textures is the same, even though this is not required for MPS shaders. What am I missing?
I don't know if I'm going to get an answer to this, but basically I get different mean values when running the MPSImageStatisticsMeanAndVariance shader than when I use CUDA's nppiMeanStdDev, a custom OpenCL shader, or CPU code.
The difference for mean is significant. For a sample image I get
```
{ 0.36, 0.30, 0.22 } // MPS
{ 0.55, 0.43, 0.21 } // Any other method
```
Deviation/Variance is slightly different as well (though much less), but it might be due to the difference in the mean value.
My first guess is that MTLTexture transforms the underlying data somehow (sRGB -> linear?) and the mean is calculated from that transformed data instead of from the original data. But maybe there's something else going on that I'm missing? How can I achieve parity between Metal and the other methods?
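For context, the MPS side is computed roughly like this (a minimal sketch; device, commandQueue, and srcTex are placeholders, and error handling is omitted):

```objective-c
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

// Sketch: per-channel mean and variance via MPS (names are placeholders).
MPSImageStatisticsMeanAndVariance *stats =
    [[MPSImageStatisticsMeanAndVariance alloc] initWithDevice:device];

// The destination is a 2x1 texture: mean is written at (0,0), variance at (1,0).
MTLTextureDescriptor *dstDesc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA32Float
                                                       width:2
                                                      height:1
                                                   mipmapped:NO];
dstDesc.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
id<MTLTexture> dstTex = [device newTextureWithDescriptor:dstDesc];

id<MTLCommandBuffer> cmdBuf = [commandQueue commandBuffer];
[stats encodeToCommandBuffer:cmdBuf sourceTexture:srcTex destinationTexture:dstTex];
[cmdBuf commit];
[cmdBuf waitUntilCompleted];

// Read back the two RGBA float pixels (mean, variance).
// On a managed-storage destination, a blit synchronizeResource: would be needed first.
simd_float4 results[2];
[dstTex getBytes:results
     bytesPerRow:sizeof(results)
      fromRegion:MTLRegionMake2D(0, 0, 2, 1)
     mipmapLevel:0];
```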
Any assistance would be appreciated.
What are the conditions under which [MTLBuffer newTextureWithDescriptor:offset:bytesPerRow:] would return nil?
The buffer size and bytesPerRow are valid and compatible with the pixel format and dimensions, the texture descriptor has all the valid options mentioned in the documentation, and yet this call returns nil regardless of whether the buffer is shared or managed.
GPU memory is also available.
I've used this method before and never had issues with it.
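For context, the call is made roughly like this (a minimal sketch; device, the dimensions, the pixel format, and the row-byte values are placeholders, not the actual ones):

```objective-c
// Sketch: a texture that shares storage with an MTLBuffer (all values are assumptions).
NSUInteger width       = 1920;
NSUInteger height      = 1080;
NSUInteger bytesPerRow = width * 4 * sizeof(float); // RGBA32Float, tightly packed

id<MTLBuffer> buffer = [device newBufferWithLength:bytesPerRow * height
                                           options:MTLResourceStorageModeShared];

MTLTextureDescriptor *texDesc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA32Float
                                                       width:width
                                                      height:height
                                                   mipmapped:NO];
texDesc.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
texDesc.storageMode = MTLStorageModeShared; // must match the buffer's storage mode

id<MTLTexture> tex = [buffer newTextureWithDescriptor:texDesc
                                               offset:0
                                          bytesPerRow:bytesPerRow];
// tex comes back nil if any of the descriptor/buffer constraints are violated.
```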
XPC connection keeps getting interrupted.
I'm creating an xpc endpoint in FxPlug plugin for FCP X using xpc_endpoint_create.
This endpoint is then passed to a helper mach service running in the background and stored there.
Next, our main application is launched and retrieves the stored endpoint from the helper service.
It creates the communication channel using xpc_connection_create_from_endpoint
The main application communicates with FxPlug plugin using that endpoint.
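The relevant setup looks roughly like this (a minimal sketch of the flow described above; variable names are placeholders and the helper-service round trip is elided):

```objective-c
#import <xpc/xpc.h>

// In the FxPlug plugin: create an anonymous listener and wrap it in an endpoint.
xpc_connection_t listener = xpc_connection_create(NULL, NULL); // NULL name => anonymous listener
xpc_connection_set_event_handler(listener, ^(xpc_object_t peer) {
    // Each connecting client arrives here as a new xpc_connection_t;
    // set an event handler on it and resume it to receive messages.
});
xpc_connection_resume(listener);
xpc_endpoint_t endpoint = xpc_endpoint_create(listener);
// ... the endpoint is sent to the helper mach service and stored there ...

// In the main application: retrieve the stored endpoint and connect to it.
xpc_connection_t channel = xpc_connection_create_from_endpoint(endpoint);
xpc_connection_set_event_handler(channel, ^(xpc_object_t event) {
    // Handle replies as well as interruption/invalidation events here.
});
xpc_connection_resume(channel);
```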
It all works well when I am debugging either our application or FxPlug.
The moment I use release builds of both, the connection works fine for a while but is very quickly interrupted (usually within 2-10 seconds); the FxPlug plugin gets flagged as non-responsive and is unloaded by FCP X. This behavior is erratic and may cease after some time on some machines.
We've been working on this and some other issues with FxPlug team for months and some changes have been made, but we're stuck with that one last bit.
I want to stress the following: when I use a debug build of either the plugin or our app, everything works fine; the FxPlug plugin is never unloaded or marked as unresponsive, and the connection is stable. When both components use release builds, it all comes apart for no apparent reason.
Both plugin and application can normally recover and reconnect after being unloaded and restored.
Any thoughts on why an xpc connection would be interrupted in this way?
I have a question about texture3d sampling on M1.
I create a 3D texture from a 33x33x33 buffer:
```objective-c
id<MTLTexture> tex = [device newTextureWithDescriptor:texDescriptor];
[tex replaceRegion:MTLRegionMake3D( 0, 0, 0, 33, 33, 33 )
       mipmapLevel:0
             slice:0
         withBytes:GetRawBuffer()
       bytesPerRow:sizeof(vector_float4) * 33
     bytesPerImage:sizeof(vector_float4) * 33 * 33];
```
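texDescriptor is not shown above; it is assumed to be configured roughly like this (a sketch: the pixel format is inferred from the vector_float4 rows, and the usage flag is a guess):

```objective-c
// Hypothetical descriptor for the 33x33x33 RGBA float LUT (values are assumptions).
MTLTextureDescriptor *texDescriptor = [[MTLTextureDescriptor alloc] init];
texDescriptor.textureType = MTLTextureType3D;
texDescriptor.pixelFormat = MTLPixelFormatRGBA32Float;
texDescriptor.width  = 33;
texDescriptor.height = 33;
texDescriptor.depth  = 33;
texDescriptor.usage  = MTLTextureUsageShaderRead;
```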
It is then of course passed to my Metal compute kernel and then I sample it:
```metal
float4 apply3DLut(const float4 pixel,
                  const float3 coords,
                  texture3d<float, access::sample> lut) {
    constexpr sampler smp( mag_filter::linear, min_filter::linear );
    float4 out = float4( lut.sample( smp, pixel.rgb ) );
    return float4( out.rgb, pixel.a );
}
```
This code worked fine on Intel, but on M1 the sampling seems to always return 0.0f, no matter where I sample, as if the texture was not created or sampling didn't work.
Hi all,
I don't understand the error:
```
validateFunctionArguments:3487: failed assertion `Fragment Function(MyFunction): Bytes are being bound at index 0 to a shader argument with write access enabled.'
```
Does this error pertain to textures or to buffers?
What am I doing wrong:
```objective-c
// set input texture
[commandEncoder setFragmentTexture:srcTexture
                           atIndex:0];
// set params
[commandEncoder setFragmentBytes:&params
                          length:sizeof(params)
                         atIndex:0];

[commandEncoder drawPrimitives:MTLPrimitiveTypeTriangleStrip
                   vertexStart:0
                   vertexCount:4];
```
The shader function header looks like this:
```metal
fragment float4 MyFunction(RasterizerData in [[ stage_in ]],
                           device MyParams& params [[ buffer(0) ]],
                           texture2d<half, access::sample> src [[ texture(0) ]])
```
MyParams is a struct of several floats and simd_float4 values (it seems that I can't use Metal's float4 in a C++/Objective-C file). The texture has RenderTarget, ShaderRead, and ShaderWrite usage.
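For reference, the shared params struct looks roughly like this (a sketch; the actual field names and counts are not in the original post):

```objective-c
// Hypothetical shared header included from both the Metal shader and the C++/Objective-C side.
#include <simd/simd.h>

typedef struct {
    float       exposure;  // placeholder field
    float       gamma;     // placeholder field
    simd_float4 tint;      // simd_float4 maps to float4 in the Metal shading language
} MyParams;
```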
Note that this code seems to have worked in another project, but here it throws this error.
Any help appreciated.
Hello,
I have the following problem. I define an Objective-C interface like this:
```objective-c
extern @interface CGRenderOptions : NSObject

- (id _Nonnull)init;

- (void)makeInputTransform:(NSString* _Nullable)transform;
- (void)makeOutputTransform:(NSString* _Nullable)transform;

@property(assign) BOOL ColorManaged;
@property(assign) CGBypassOption Bypass;
@property(assign) BOOL FalseColor;
@property(assign) MTLSize* _Nullable OutputSize;
@property(assign) NSString* _Nullable InputTransform;
@property(assign) NSString* _Nullable OutputTransform;
@property(assign) NSString* _Nullable WatermarkString;

@end
```
This header is then imported into a Swift library, and the following items are not visible there: InputTransform, OutputTransform, makeInputTransform, and makeOutputTransform.
These were added recently, so I thought it was a build or compilation issue, but I cleaned all the build folders and I still can't get it to work.
Any ideas?
Hi everyone,
I have an executable that handles NSXPC connectivity between the apps I'm installing, and I would like to install it as a launch daemon. I can do this locally with ease by sudo-copying the .plist file into /Library/LaunchDaemons/, but when I try to do this as part of the installation process, the installer complains about a lack of permissions, despite my using su in the post-install script:
```bash
/usr/bin/su - ${loc_user} -c "cp ${plist_path} ${plist_target}"
/usr/bin/su - ${loc_user} -c "chown root:wheel ${plist_target}"
/usr/bin/su - ${loc_user} -c "chmod 644 ${plist_target}"
```
Is there a best practice when it comes to these services? Currently the plist file and the executable are part of a framework that I am installing.
Thanks!