I am developing a simple camera JNI interface program in Objective-C. It compiles, but the command below produces the following link error. Is there anything I can add to resolve it? Note that I am using an Intel Mac mini.
g++ -framework Foundation -framework AVFoundation CameraMacOS.m
Undefined symbols for architecture x86_64:
"_CMVideoFormatDescriptionGetDimensions", referenced from:
_openCamera in CameraMacOS-517c44.o
_listWebcamNamesAndSizes in CameraMacOS-517c44.o
"_CVPixelBufferGetBaseAddress", referenced from:
-[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
"_CVPixelBufferGetBytesPerRow", referenced from:
-[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
"_CVPixelBufferGetHeight", referenced from:
-[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
"_CVPixelBufferGetWidth", referenced from:
-[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
"_CVPixelBufferLockBaseAddress", referenced from:
-[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
"_CVPixelBufferUnlockBaseAddress", referenced from:
-[CaptureDelegate captureOutput:didFinishProcessingPhoto:error:] in CameraMacOS-517c44.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
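The undefined symbols are declared in CoreMedia (CMVideoFormatDescriptionGetDimensions) and CoreVideo (the CVPixelBuffer functions), and neither framework is linked by the command above. A minimal sketch of the adjusted command, assuming no other dependencies are missing:
g++ -framework Foundation -framework AVFoundation -framework CoreMedia -framework CoreVideo CameraMacOS.m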
I am wondering why Xcode uses the CPU for Metal GPU shader execution from Swift. With Objective-C Metal, it finishes instantly, but the same shader program, when called from Swift, takes many seconds; the GPU is idle while the CPU runs at 99%. I use the following line to choose the GPU in Swift, but it doesn't seem to choose the real GPU.
let device: MTLDevice = MTLCreateSystemDefaultDevice()!
For Objective-C, I use the same default device, and it chooses the GPU correctly.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
Is there any way I can explicitly choose the GPU in Swift Metal?
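For reference, a minimal Swift sketch that enumerates the available Metal devices and prefers a discrete (non-low-power) GPU; MTLCopyAllDevices() is a macOS-only API, and the fallback to the system default is an assumption for machines with a single device:
import Metal

// Enumerate every Metal device on the machine and log what is available.
let devices = MTLCopyAllDevices()
for d in devices {
    print("Device: \(d.name), low power: \(d.isLowPower)")
}
// Prefer a discrete (non-low-power) GPU if one exists; otherwise fall back to the default.
let device: MTLDevice = devices.first(where: { !$0.isLowPower }) ?? MTLCreateSystemDefaultDevice()!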
I have the following MTLBuffer created. How can I send INPUTVALUE to the memINPUT buffer? I need to send it repeatedly in Objective-C.
// header file
@property id<MTLBuffer> memINPUT;
// main file
int length = 1000;
...
_memINPUT = [_device newBufferWithLength:(sizeof(float)*length) options:0];
...
float INPUTVALUE[length];
for (int i=0; i < length; i++) {
INPUTVALUE[i] = (float)i;
}
// How do I copy INPUTVALUE into memINPUT?
...
The following is the Swift version; I am looking for the Objective-C equivalent.
memINPUT.contents().copyMemory(from: INPUTVALUE, byteCount: length * MemoryLayout<Float>.stride);
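A minimal Objective-C sketch, assuming the buffer keeps the default (shared, CPU-visible) storage mode it was created with above:
// Copy the host array into the buffer's contents; repeat whenever INPUTVALUE changes.
memcpy([_memINPUT contents], INPUTVALUE, sizeof(float) * length);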
I am new to Metal and need to port OpenCL compute shader programs to Metal compute shaders. I am having trouble finding sample code in Metal for Swift or Objective-C; the examples I can find only use GPU buffer objects. As in the following OpenCL kernel, I need to pass uniform constant float and integer values along with GPU buffer pointers. I only use compute shaders.
__kernel void testfunction (
float ratio1,
int opr1,
int opr2,
__global float *INPUT1,
__global float *INPUT2,
__global float *OUTPUT
) {
int peIndex = get_global_id(0);
// main compute block
}
How can I write this in Metal, and how can I set/pass these parameter values from the Swift and Objective-C host programs?
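A minimal sketch of how the kernel might look in Metal Shading Language, with the scalar parameters grouped into a constant struct (the struct name Params and the buffer indices are illustrative assumptions):
#include <metal_stdlib>
using namespace metal;

// Scalar uniforms grouped into one constant struct; its layout must match the host-side struct.
struct Params {
    float ratio1;
    int   opr1;
    int   opr2;
};

kernel void testfunction(constant Params &params      [[buffer(0)]],
                         device const float *INPUT1   [[buffer(1)]],
                         device const float *INPUT2   [[buffer(2)]],
                         device float *OUTPUT         [[buffer(3)]],
                         uint peIndex [[thread_position_in_grid]])
{
    // main compute block
}
On the host side, a hedged Objective-C sketch, assuming a matching C struct Params, an existing MTLComputeCommandEncoder named computeEncoder, and buffers memINPUT1, memINPUT2, memOUTPUT:
Params params = { 0.5f, 1, 2 };   // example values
// Small constants: setBytes copies the data directly into the argument table.
[computeEncoder setBytes:&params length:sizeof(Params) atIndex:0];
[computeEncoder setBuffer:memINPUT1 offset:0 atIndex:1];
[computeEncoder setBuffer:memINPUT2 offset:0 atIndex:2];
[computeEncoder setBuffer:memOUTPUT offset:0 atIndex:3];
In Swift, the corresponding encoder calls are setBytes(_:length:index:) and setBuffer(_:offset:index:).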