Hello,
I'm currently building a C++ application in Xcode that uses C++ packages (libglfw, libglew, OpenGL) along with a custom Rust library with a C++ interface, which is compiled by a build script during the build phase.
All of the packages are compiled for arm64, since I use an M1 as my development machine, and they are installed through Homebrew.
However, when I build the application with the "My Mac (Rosetta)" destination for x86_64 (Intel) macOS, the link fails with "symbol(s) not found for architecture x86_64" for those packages. That makes sense, since the packages are built for arm64 (apart from the Rust package, which I can fix with a patch to its build script).
Is there a way to tell Xcode to link against x86_64 versions of the packages when building for Rosetta, and against the arm64 versions when building natively?
I'm looking to use libsteam_api as well, but that library is x86_64 only. I imagine I can gate its use with C++ preprocessor directives.
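To make the intent concrete, this is roughly the kind of preprocessor gating I have in mind for the libsteam_api case (just a sketch, assuming the usual Steamworks header and SteamAPI_Init() entry point; the wrapper function is illustrative, not code from my project):
// Only the x86_64 (Rosetta) slice includes and links against libsteam_api.
#if defined(__x86_64__)
  #include "steam/steam_api.h"
#endif
bool initPlatformServices()
{
#if defined(__x86_64__)
    // Intel / Rosetta build: Steamworks is available.
    return SteamAPI_Init();
#else
    // arm64 build: no Steam support, fall back to a stub.
    return true;
#endif
}
For the Homebrew libraries, though, I'd expect the switch to happen at the build-setting level rather than in code, since the arm64 and x86_64 copies live under different prefixes (/opt/homebrew and /usr/local respectively).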
Hello,
I am working on a cross-platform project that uses C++ and OpenGL (I know I should be using MoltenVK or Metal, but OpenGL is simple to start with and is cross-platform). I am doing most of my development on an M1 MacBook Pro, which supports up to OpenGL 4.1.
The M1 also only supports up to 16 active fragment shader samplers (its maximum number of texture image units).
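(As a side note, that limit can be confirmed at runtime; a minimal sketch, assuming an OpenGL context is already current:)
// Query the fragment shader texture image unit limit (reports 16 on the M1).
#include <GL/glew.h>
#include <cstdio>
void printSamplerLimit()
{
    GLint maxUnits = 0;
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxUnits);
    std::printf("Max fragment shader texture image units: %d\n", maxUnits);
}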
I am currently working on a batch rendering system that uploads an array of textures to the GPU, and the shader selects a texture by indexing into a sampler array. Here is the shader I am using (the vertex and fragment shaders are combined in one file, but the program parses them separately):
#type vertex
#version 410 core
layout(location = 0) in vec3 a_Position;
layout(location = 1) in vec4 a_Color;
layout(location = 2) in vec2 a_TexCoord;
layout(location = 3) in float a_TexIndex;
layout(location = 4) in float a_TilingFactor;
uniform mat4 u_ViewProjection;
out vec4 v_Color;
out vec2 v_TexCoord;
out float v_TexIndex;
out float v_TilingFactor;
void main()
{
    v_Color = a_Color;
    v_TexCoord = a_TexCoord;
    v_TexIndex = a_TexIndex;
    v_TilingFactor = a_TilingFactor;
    gl_Position = u_ViewProjection * vec4(a_Position, 1.0);
}
#type fragment
#version 410 core
layout(location = 0) out vec4 color;
in vec4 v_Color;
in vec2 v_TexCoord;
in float v_TexIndex;
in float v_TilingFactor;
uniform sampler2D u_Textures[16];
void main()
{
    color = texture(u_Textures[int(v_TexIndex)], v_TexCoord * v_TilingFactor) * v_Color;
}
However, when the program runs I get this message: UNSUPPORTED (log once): POSSIBLE ISSUE: unit 2 GLD_TEXTURE_INDEX_2D is unloadable and bound to sampler type (Float) - using zero texture because texture unloadable
I double- and triple-checked my code, and I'm binding everything to the shader correctly (if I'm not, feel free to point it out :) ). The only thing I found on the web relating to this error said it was a bug in the GLSL compiler on the new M1s. Is this true, or is it a code issue?
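For context, the binding on the C++ side boils down to something like the sketch below (simplified; the program and texture handles are placeholders rather than my actual code). Each slot of u_Textures is pointed at a texture unit with glUniform1iv, and a texture object is bound to every unit the batch actually uses:
#include <GL/glew.h>
void bindBatchTextures(GLuint shaderProgram, const GLuint* textureIDs, int textureCount)
{
    glUseProgram(shaderProgram);
    // Point u_Textures[0..15] at texture units 0..15.
    GLint samplers[16];
    for (int i = 0; i < 16; ++i)
        samplers[i] = i;
    glUniform1iv(glGetUniformLocation(shaderProgram, "u_Textures"), 16, samplers);
    // Bind one texture object to each unit the batch will sample.
    for (int i = 0; i < textureCount; ++i)
    {
        glActiveTexture(GL_TEXTURE0 + i);
        glBindTexture(GL_TEXTURE_2D, textureIDs[i]);
    }
}
The log complains specifically about unit 2, which is why I keep re-checking that every unit referenced by v_TexIndex actually has a texture bound to it.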
Thanks
Side note: I am using Emacs to run CMake and do my C++ development, so if you try to test my project in Xcode and it doesn't include the shaders, it's most likely a CMake/Xcode copy issue.
Hello,
I am currently stuck on a problem involving the newly released Create ML Components feature.
I followed the "Get to know Create ML Components" session; however, it's not clear to me how a model can be trained over multiple iterations. I got a model to go through one iteration with the code shown in the video (it wasn't released as a project file), but there doesn't seem to be a way to increase the number of iterations. I also looked through the only sample project associated with Create ML Components, but its code was different from what was described in the video and lacked the audio classifier example, so I couldn't see how it ticked.
It was also mentioned in the video that there might be issues saving the model in the Core ML file format because the model is custom, which leaves the question of how one is supposed to save the trained model once it's done. It seems like saving a model would be really beneficial for machine learning tasks, right?
Here is the code I am using in Swift Playgrounds:
import Foundation
import CoreImage
import CreateMLComponents

struct ImageRegressor {
    static let trainingDataURL = URL(fileURLWithPath: "Project/regression_label")
    static let parameterURL = URL(fileURLWithPath: "Project/parameters")

    static func train() async throws -> some Transformer<CIImage, Float> {
        // Feature extractor followed by a linear regressor.
        let estimator = ImageFeaturePrint()
            .appending(LinearRegressor())

        // Files are annotated by their file names, split on "-".
        let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL, separator: "-", type: .image)
            .mapFeatures(ImageReader.read)
            .mapAnnotations({ _ in Float() })

        let (training, validation) = data.randomSplit(by: 0.8)

        let transformer = try await estimator.fitted(to: training, validateOn: validation) { event in
            guard let trainingMaxError = event.metrics[.trainingMaximumError] else {
                return
            }
            guard let validationMaxError = event.metrics[.validationMaximumError] else {
                return
            }
            print("Training max error: \(trainingMaxError), Validation max error: \(validationMaxError)")
        }

        let validationError = try await meanAbsoluteError(
            transformer.applied(to: validation.map(\.feature)),
            validation.map(\.annotation))
        print("Mean absolute error: \(validationError)")

        try estimator.write(transformer, to: parameterURL)
        return transformer
    }
}

func doSomething() {
    Task {
        let transformer = try await ImageRegressor.train()
    }
}

doSomething()
I'm on a recent version of macOS, and I recently trained a Style Transfer model using Create ML.
I used the preview tab of Create ML to preview my model on a video (as well as an image); however, when I press the button to export or share the result from the network, nothing is exported. The modal window appears, but nothing is saved after the progress bar for the conversion completes.
I tried converting the Core ML model file into a Core ML package, but when I tried exporting the preview it crashed and switched tabs to the package information section.
I've been having this issue with all three export buttons in the model preview section of both the Create ML app and Xcode. Is this happening to anyone else?
I've also tried using the coremltools package for Python to extract a preview; however, there is no documentation on loading videos into a Style Transfer network with that package. The network only takes images as input, so it's unclear where a video file could be loaded.