As the title says, I want to sample the current pixel in the layer, and if its color matches the one passed in as a parameter, I need a way to return that position back to SwiftUI.
Do you think that's possible?
In Shader.Argument I didn't find a way to directly convert to a pointer.
So, my app needs to find the dominant palette and, for each of the k most dominant colors, its position in the image. I followed the very useful sample project from the vImage documentation:
https://developer.apple.com/documentation/accelerate/bnns/calculating_the_dominant_colors_in_an_image
The algorithm works fine, but I can't wrap my head around how to go about linking those colors with a point in the image. Since the algorithm works by filling channel storages first, I also tried filling an array of CGPoints called locationStorage and working with that:
// Filling the array.
// The channel storages are laid out row by row (row-major),
// so iterate y in the outer loop to keep the indices aligned.
for y in 0..<height {
    for x in 0..<width {
        locationStorage.append(CGPoint(x: x, y: y))
    }
}
.
.
.
// Working with the array.
let randomIndex = Int.random(in: 0 ..< width * height)
centroids.append(Centroid(red: redStorage[randomIndex],
                          green: greenStorage[randomIndex],
                          blue: blueStorage[randomIndex],
                          position: locationStorage[randomIndex]))
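For what it's worth, if the storages really are row-major (pixel (x, y) lives at index y * width + x), the locationStorage array may be unnecessary: the coordinate can be recovered from the flat index directly. A minimal sketch under that assumption (the helper name is mine, not from the sample project):

```swift
import CoreGraphics

/// Recovers the image coordinate of a pixel from its flat storage index,
/// assuming the channel storages are laid out row by row (row-major).
func position(ofIndex index: Int, width: Int) -> CGPoint {
    CGPoint(x: index % width, y: index / width)
}

// Example: in a 4-pixel-wide image, index 6 is the third pixel of the second row.
let p = position(ofIndex: 6, width: 4)
// p == CGPoint(x: 2, y: 1)
```

This also removes one possible source of the inaccuracy: if the loop order used to fill locationStorage doesn't match the storage layout, the looked-up positions will be transposed.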
struct Centroid {
    /// The red channel value.
    var red: Float
    /// The green channel value.
    var green: Float
    /// The blue channel value.
    var blue: Float
    /// The number of pixels assigned to this cluster center.
    var pixelCount: Int = 0
    /// The position of this cluster center in the image.
    var position: CGPoint = .zero

    init(red: Float, green: Float, blue: Float, position: CGPoint) {
        self.red = red
        self.green = green
        self.blue = blue
        self.position = position
    }
}
but the positions it produces aren't accurate.
I also tried brute-forcing every pixel to find the one closest to each color, but I think it's too slow.
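To frame what I mean by brute force: the search only has to run once per final centroid (k scans total), not once per k-means iteration, so a single linear pass over the channel storages might already be acceptable. A sketch of that pass, assuming row-major storages (the function name is mine):

```swift
/// Finds the flat index of the pixel whose color is closest (in squared
/// RGB distance) to the given centroid color, scanning all pixels once.
func indexOfNearestPixel(red: Float, green: Float, blue: Float,
                         redStorage: [Float],
                         greenStorage: [Float],
                         blueStorage: [Float]) -> Int {
    var bestIndex = 0
    var bestDistance = Float.greatestFiniteMagnitude
    for i in 0..<redStorage.count {
        let dr = redStorage[i] - red
        let dg = greenStorage[i] - green
        let db = blueStorage[i] - blue
        let distance = dr * dr + dg * dg + db * db
        if distance < bestDistance {
            bestDistance = distance
            bestIndex = i
        }
    }
    return bestIndex
}

// The winning index converts back to a coordinate (row-major layout):
// let point = CGPoint(x: index % width, y: index / width)
```

If even k scans are too slow, the per-pixel squared distances could be computed with vDSP instead of a Swift loop, but the plain loop is easier to verify first.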
What do you think my approach should be?
Let me know if you need additional info.
Please be kind, I'm learning Swift.