
Undefined symbols for architecture x86_64:
I am developing a simple camera JNI interface program in Objective-C. It compiles, but I get the following link error with the command below. Is there anything I can add to solve this? Note that I use an Intel Mac mini.

```
g++ -framework Foundation -framework AVFoundation CameraMacOS.m
```

```
Undefined symbols for architecture x86_64:
  "_CMVideoFormatDescriptionGetDimensions", referenced from:
      _openCamera in CameraMacOS-517c44.o
      _listWebcamNamesAndSizes in CameraMacOS-517c44.o
  "_CVPixelBufferGetBaseAddress", referenced from:
      -[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
  "_CVPixelBufferGetBytesPerRow", referenced from:
      -[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
  "_CVPixelBufferGetHeight", referenced from:
      -[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
  "_CVPixelBufferGetWidth", referenced from:
      -[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
  "_CVPixelBufferLockBaseAddress", referenced from:
      -[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
  "_CVPixelBufferUnlockBaseAddress", referenced from:
      -[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
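For reference, the missing symbols point at frameworks that were not linked: `CMVideoFormatDescriptionGetDimensions` is declared in CoreMedia, and the `CVPixelBuffer*` functions in CoreVideo. A likely fix (a sketch against this command line, not verified on this exact project) is to link both:

```
g++ -framework Foundation -framework AVFoundation \
    -framework CoreMedia -framework CoreVideo CameraMacOS.m
```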
2 replies · 0 boosts · 237 views · 3w
Xcode does not use GPU for Swift Metal
I am wondering why Xcode uses the CPU to execute a Metal GPU shader program when it is called from Swift. With Objective-C Metal it finishes instantly, but the same shader program called from Swift takes many seconds; the GPU is idle while the CPU runs at 99%. I use the following line to choose the GPU in Swift, but it doesn't seem to choose the real GPU:

```swift
let device: MTLDevice = MTLCreateSystemDefaultDevice()!
```

For Objective-C I use the same default device, and it chooses the GPU correctly:

```objc
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
```

Is there any way I can choose the GPU in Swift Metal?
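One thing worth checking from Swift is which devices Metal can actually see, and picking one explicitly rather than relying on the default. A minimal sketch using `MTLCopyAllDevices()` (macOS-only; the preference logic here is illustrative):

```swift
import Metal

// List every Metal device visible to the process (macOS only).
for d in MTLCopyAllDevices() {
    print(d.name, "low power:", d.isLowPower)
}

// Prefer a discrete (non-low-power) GPU when one exists,
// falling back to the system default device.
let device = MTLCopyAllDevices().first { !$0.isLowPower }
    ?? MTLCreateSystemDefaultDevice()!
```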
0 replies · 0 boosts · 533 views · Aug ’23
Transferring data to a Metal MTLBuffer dynamically in Objective-C
I have the following MTLBuffer created. How can I copy INPUTVALUE into the memINPUT buffer? I need to do this repeatedly in Objective-C.

```objc
// header file
@property id<MTLBuffer> memINPUT;

// main file
int length = 1000;
...
memINPUT = [_device newBufferWithLength:(sizeof(float)*length) options:0];
...
float INPUTVALUE[length];
for (int i = 0; i < length; i++) {
    INPUTVALUE[i] = (float)i;
}
// How to copy INPUTVALUE into memINPUT?
...
```

The following is the Swift version; I am looking for the Objective-C equivalent.

```swift
memINPUT.contents().copyMemory(from: INPUTVALUE, byteCount: length * MemoryLayout<Float>.stride)
```
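For reference, the direct Objective-C counterpart of that Swift call is a plain `memcpy` into the pointer returned by `-contents`. A minimal sketch, assuming the buffer was created with shared storage (options `0`, as above):

```objc
#include <string.h>

// Copy the host array into the buffer's CPU-visible storage.
// This is valid for shared-storage buffers; a managed buffer would
// also need -didModifyRange:, and a private buffer a blit encoder.
memcpy([memINPUT contents], INPUTVALUE, sizeof(float) * length);
```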
1 reply · 0 boosts · 741 views · Jul ’23
Metal compute shader parameter specification and setting
I am new to Metal and need to port OpenCL compute shader programs to Metal compute shaders. I am having trouble finding sample code in Metal with Swift or Objective-C; the examples I can find use GPU buffer objects only. As in the following OpenCL kernel, I need to pass uniform constant float and integer values along with GPU buffer pointers. I only use compute shaders.

```c
__kernel void testfunction(
    float ratio1,
    int opr1,
    int opr2,
    __global float *INPUT1,
    __global float *INPUT2,
    __global float *OUTPUT)
{
    int peIndex = get_global_id(0);
    // main compute block
}
```

How can I write this in Metal, and how can I set/pass these parameter values from Swift and Objective-C main programs?
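For comparison, a hedged sketch of one way this might look in Metal: small scalar constants can be bound with `setBytes:length:atIndex:` on the compute command encoder, so they don't need their own MTLBuffer. The buffer indices and the `encoder`/`input1Buf` names below are illustrative, not from the original post.

```metal
#include <metal_stdlib>
using namespace metal;

kernel void testfunction(constant float &ratio1     [[buffer(0)]],
                         constant int   &opr1       [[buffer(1)]],
                         constant int   &opr2       [[buffer(2)]],
                         device const float *INPUT1 [[buffer(3)]],
                         device const float *INPUT2 [[buffer(4)]],
                         device float       *OUTPUT [[buffer(5)]],
                         uint peIndex [[thread_position_in_grid]])
{
    // main compute block
}
```

On the host side (Objective-C shown; the Swift calls are analogous):

```objc
// Bind small constants by value and the data buffers by reference.
[encoder setBytes:&ratio1 length:sizeof(float) atIndex:0];
[encoder setBytes:&opr1   length:sizeof(int)   atIndex:1];
[encoder setBytes:&opr2   length:sizeof(int)   atIndex:2];
[encoder setBuffer:input1Buf offset:0 atIndex:3];
[encoder setBuffer:input2Buf offset:0 atIndex:4];
[encoder setBuffer:outputBuf offset:0 atIndex:5];
```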
2 replies · 0 boosts · 709 views · Jul ’23