The WWDC video "Design for spatial input" talks about hands and direct interaction, and uses the keyboard as an example (at around 17:10). The keyboard buttons display a hover state that gets brighter as the finger gets closer. What do I have to do as a developer to implement this?
I understand that hoverEffect and HoverEffectComponent respond to gaze (eye contact). Do they also automatically respond to direct touch, or is there something else I need to do?
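For context, here is the minimal setup I'm testing with. This is only a sketch; I'm assuming that HoverEffectComponent plus collision shapes and an InputTargetComponent is all the gaze-based hover needs:

```swift
import SwiftUI
import RealityKit

struct HoverTestView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            // The entity needs collision shapes and an input target
            // before it can receive any input at all.
            sphere.generateCollisionShapes(recursive: false)
            sphere.components.set(InputTargetComponent())
            // HoverEffectComponent highlights the entity on gaze;
            // whether it also reacts to an approaching finger is
            // exactly what I'm asking about.
            sphere.components.set(HoverEffectComponent())
            content.add(sphere)
        }
    }
}
```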
I can't seem to get the rotation gesture to work. Here is my code:
import SwiftUI
import RealityKit

struct ContentView: View {
    @State var rotation: Rotation3D = Rotation3D()

    var rotate: some Gesture {
        RotateGesture3D()
            .targetedToAnyEntity()
            .onChanged { gesture in
                rotation = gesture.rotation
            }
    }

    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.25))
            box.generateCollisionShapes(recursive: false)
            box.components.set(InputTargetComponent())
            content.add(box)
        }
        .gesture(rotate)
        .rotation3DEffect(rotation)
    }
}
I assume it is up to me to apply the rotation with rotation3DEffect, but I have also tried it without. Is it just not supported in the simulator? Can I expect it to work on the device? If so, is the code correct?
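For what it's worth, I also tried a variant that applies the rotation to the targeted entity itself instead of rotating the whole view. This is just a sketch (the Rotation3D-to-simd_quatf conversion is my own), with the same result:

```swift
// Variant (inside the same ContentView): rotate the entity directly
// instead of using rotation3DEffect on the RealityView.
var rotateEntity: some Gesture {
    RotateGesture3D()
        .targetedToAnyEntity()
        .onChanged { value in
            let rot = value.rotation
            // Convert the SwiftUI Rotation3D into a RealityKit quaternion.
            let axis = simd_float3(Float(rot.axis.x), Float(rot.axis.y), Float(rot.axis.z))
            value.entity.orientation = simd_quatf(angle: Float(rot.angle.radians), axis: axis)
        }
}
```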
I'm trying to save Metal textures in a lossless compressed format. I've tried PNG and TIFF, but I run into the same problem: the pixel data changes after save and load when there is transparency. Here is the code I use to save a TIFF:
import CoreImage
import ImageIO
import Metal
import MobileCoreServices
import UIKit

extension MTLTexture {
    func saveAsLosslessTIFF(url: URL) throws {
        let context = CIContext()
        guard let colorSpace = CGColorSpace(name: CGColorSpace.linearSRGB),
              let ciImage = CIImage(mtlTexture: self, options: [.colorSpace: colorSpace]),
              let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            throw RuntimeError("Unable to create CGImage from texture.")
        }
        // Create a dictionary with TIFF compression options (5 = LZW).
        let tiffCompressionLZW = 5
        let options: [String: Any] = [
            kCGImagePropertyTIFFCompression as String: tiffCompressionLZW,
            kCGImagePropertyDepth as String: depth,
            kCGImagePropertyPixelWidth as String: width,
            kCGImagePropertyPixelHeight as String: height,
        ]
        guard let destination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeTIFF, 1, nil) else {
            throw RuntimeError("Unable to create image destination.")
        }
        CGImageDestinationAddImage(destination, cgImage, options as CFDictionary)
        if !CGImageDestinationFinalize(destination) {
            throw RuntimeError("Unable to save image to destination.")
        }
    }
}
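I call it roughly like this (a sketch; `texture` stands for whatever MTLTexture I want to round-trip):

```swift
// Usage sketch: save the texture into the app's Documents directory.
let url = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("texture.tiff")
try texture.saveAsLosslessTIFF(url: url)
```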
I can then load the texture like this:
// `loader` is an MTKTextureLoader created elsewhere with the same MTLDevice.
func loadTexture(url: URL) throws -> MTLTexture {
    let usage: MTLTextureUsage = [.renderTarget, .shaderRead, .shaderWrite]
    return try loader.newTexture(URL: url, options: [
        MTKTextureLoader.Option.textureUsage: usage.rawValue,
        MTKTextureLoader.Option.origin: MTKTextureLoader.Origin.flippedVertically.rawValue,
    ])
}
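For reference, this is roughly how I read back the pixels to compare before and after the round trip. It's a sketch assuming an .rgba8Unorm texture, and the helper name is mine:

```swift
import Metal

// Read an RGBA8 texture back into a byte array for comparison.
func pixelBytes(of texture: MTLTexture) -> [UInt8] {
    let bytesPerRow = texture.width * 4
    var bytes = [UInt8](repeating: 0, count: bytesPerRow * texture.height)
    bytes.withUnsafeMutableBytes { buffer in
        texture.getBytes(buffer.baseAddress!,
                         bytesPerRow: bytesPerRow,
                         from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                         mipmapLevel: 0)
    }
    return bytes
}
```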
After saving and then loading the texture again, I want to get back the exact same texture. And I do, if there is no transparency. Transparent pixels, however, are transformed in a way that I don't understand. Here is an example pixel:
[120, 145, 195, 170] -> [144, 174, 234, 170]
My first guess would be that something is trying to undo a pre-multiplied alpha that never happened, but the numbers don't seem to work out. For example, if that were the case I'd expect 120 to go to (120 * 255) / 170 = 180, not 144.
Any idea what I am doing wrong?