So, I've declared an AppIntent that indicates my app can "Open files" that conform to UTType.Image.
I've got a @AssistantEntity(schema: .files.file) and a
@AssistantIntent(schema: .files.openFile) declared.
So I navigate to the Files app, Quick Look an image, and open Type to Siri.
I tell Siri "open this in " and all it does is act as if I'd said "open ". No breakpoint is hit in my intent's perform method.
Am I doing something wrong? How can I test these cross-app behaviors?
Are they... not actually possible? Does an "OpenIntent" only work on my app's own URLs and not on file URLs from other apps?
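For reference, the shape of what I've declared looks roughly like this. I'm showing it here as a plain OpenIntent with placeholder names and a simplified entity, since my real code uses the @AssistantEntity/@AssistantIntent macros mentioned above:
import AppIntents

// Simplified stand-in for my real entity; the actual one uses @AssistantEntity(schema: .files.file).
struct ImageFileEntity: AppEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "Image File")
    static var defaultQuery = ImageFileQuery()

    var id: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(id)")
    }
}

struct ImageFileQuery: EntityQuery {
    func entities(for identifiers: [String]) async throws -> [ImageFileEntity] {
        identifiers.map { ImageFileEntity(id: $0) }
    }
}

// Simplified stand-in for my real intent; the actual one uses @AssistantIntent(schema: .files.openFile).
struct OpenImageFileIntent: OpenIntent {
    static var title: LocalizedStringResource = "Open File"

    @Parameter(title: "File")
    var target: ImageFileEntity

    func perform() async throws -> some IntentResult {
        // Breakpoint set here; it never fires when I invoke the intent via Type to Siri from Quick Look.
        return .result()
    }
}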
Given:
- I do not understand much at all about how to write shaders
- I do not understand the math associated with page-curl effects
I am trying to implement a page-curl shader for use on SwiftUI views.
I've lifted a shader from HIROKI IKEUCHI that I believe they lifted from a non-metal shader resource online, and I'm trying to digest it.
One thing I want to do is to paint the "underside" of the view with a given color and maintain the transparency of rounded corners when they are flipped over.
So, if an underside pixel is "clear" then I want to sample the pixel at that position on the original layer instead of the "curl effect" pixel.
There are two places in the shader below, marked with comments, where I check the alpha and underside flags and paint the color red as a debug test.
The shader gives this result:
The outside of those rounded corners is appropriately red, and the white border pixels are detected as "not clear". But the "inner" portion of the border is... mistakenly red?
I don't get it. Any help would be appreciated. I feel tapped out and I don't have any IRL resources I can ask.
//
//  PageCurl.metal
//  ShaderDemo3
//
//  Created by HIROKI IKEUCHI on 2023/10/17.
//

#include <metal_stdlib>
#include <SwiftUI/SwiftUI_Metal.h>
using namespace metal;

#define pi float(3.14159265359)
#define blue half4(0.0, 0.0, 1.0, 1.0)
#define red half4(1.0, 0.0, 0.0, 1.0)
#define radius float(0.4)

// Return the color for this pixel.
[[ stitchable ]] half4 pageCurl
(
    float2 _position,
    SwiftUI::Layer layer,
    float4 bounds,
    float2 _clickedPoint,
    float2 _mouseCursor
) {
    half4 undersideColor = half4(0.5, 0.5, 1.0, 1.0);

    float2 originalPosition = _position;

    // Correct (flip) the y coordinate.
    float2 position = float2(_position.x, bounds.w - _position.y);
    float2 clickedPoint = float2(_clickedPoint.x, bounds.w - _clickedPoint.y);
    float2 mouseCursor = float2(_mouseCursor.x, bounds.w - _mouseCursor.y);

    float aspect = bounds.z / bounds.w;

    float2 uv = position * float2(aspect, 1.) / bounds.zw;
    float2 mouse = mouseCursor.xy * float2(aspect, 1.) / bounds.zw;
    float2 mouseDir = normalize(abs(clickedPoint.xy) - mouseCursor.xy);
    float2 origin = clamp(mouse - mouseDir * mouse.x / mouseDir.x, 0., 1.);

    float mouseDist = clamp(length(mouse - origin)
                            + (aspect - (abs(clickedPoint.x) / bounds.z) * aspect) / mouseDir.x, 0., aspect / mouseDir.x);

    if (mouseDir.x < 0.)
    {
        mouseDist = distance(mouse, origin);
    }

    float proj = dot(uv - origin, mouseDir);
    float dist = proj - mouseDist;

    float2 linePoint = uv - dist * mouseDir;

    half4 pixel = layer.sample(position);

    if (dist > radius)
    {
        pixel = half4(0.0, 0.0, 0.0, 0.0); // background behind curling layer (note: 0.0 opacity)
        pixel.rgb *= pow(clamp(dist - radius, 0., 1.) * 1.5, .2);
    }
    else if (dist >= 0.0)
    {
        // THIS PORTION HANDLES THE CURL SHADED PORTION OF THE RESULT

        // map to cylinder point
        float theta = asin(dist / radius);
        float2 p2 = linePoint + mouseDir * (pi - theta) * radius;
        float2 p1 = linePoint + mouseDir * theta * radius;
        bool underside = (p2.x <= aspect && p2.y <= 1. && p2.x > 0. && p2.y > 0.);
        uv = underside ? p2 : p1;
        uv = float2(uv.x, 1.0 - uv.y); // invert y
        pixel = layer.sample(uv * float2(1. / aspect, 1.) * float2(bounds[2], bounds[3])); // ME<----

        if (underside && pixel.a == 0.0) { //<---- PIXEL.A IS 0.0 WHYYYYY
            pixel = red;
        }

        // Commented out while debugging alpha issues
        // if (underside && pixel.a == 0.0) {
        //     pixel = layer.sample(originalPosition);
        // } else if (underside) {
        //     pixel = undersideColor; // underside
        // }

        // Shadow the pixel being returned
        pixel.rgb *= pow(clamp((radius - dist) / radius, 0., 1.), .2);
    }
    else
    {
        // THIS PORTION HANDLES THE NON-CURL-SHADED PORTION OF THE SAMPLING.
        float2 p = linePoint + mouseDir * (abs(dist) + pi * radius);
        bool underside = (p.x <= aspect && p.y <= 1. && p.x > 0. && p.y > 0.);
        uv = underside ? p : uv;
        uv = float2(uv.x, 1.0 - uv.y); // invert y
        pixel = layer.sample(uv * float2(1. / aspect, 1.) * float2(bounds[2], bounds[3])); // ME

        if (underside && pixel.a == 0.0) { //<---- PIXEL.A IS 0.0 WHYYYYY
            pixel = red;
        }

        // Commented out while debugging alpha issues
        // if (underside && pixel.a == 0.0) {
        //     // If the new underside pixel is clear, we should sample the original image's pixel.
        //     pixel = layer.sample(originalPosition);
        // } else if (underside) {
        //     pixel = undersideColor;
        // }
    }

    return pixel;
}
As a user, when viewing a photo or image, I want to be able to tell Siri, "add this to ", similar to the example from the WWDC presentation where a photo is added to a note in the Notes app.
Is this... possible with app domains as they are documented?
I see domains like open-file and open-photo, but I don't know whether those are appropriate for this kind of functionality.
Have the requirements to support swipe-to-dismiss from a Quick Look view controller changed in iOS 18? I am noticing that my app no longer supports gestural dismissal in an iOS 18 build.
Note: this is a QLPreviewController presented from a UIViewController that is presented in a SwiftUI view hierarchy as part of a UIViewControllerRepresentable.
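For context, the presentation structure is roughly this (a simplified sketch with placeholder names, not my exact code):
import SwiftUI
import QuickLook
import UIKit

// Simplified sketch of the structure: a UIViewControllerRepresentable whose view controller
// presents a QLPreviewController. Names here are placeholders.
struct PreviewContainer: UIViewControllerRepresentable {
    let fileURL: URL

    func makeUIViewController(context: Context) -> PreviewHostController {
        PreviewHostController(fileURL: fileURL)
    }

    func updateUIViewController(_ uiViewController: PreviewHostController, context: Context) {}
}

final class PreviewHostController: UIViewController, QLPreviewControllerDataSource {
    let fileURL: URL
    private var hasPresented = false

    init(fileURL: URL) {
        self.fileURL = fileURL
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        guard !hasPresented else { return }
        hasPresented = true

        // Present the Quick Look controller; swipe-to-dismiss worked here before iOS 18.
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true)
    }

    // MARK: QLPreviewControllerDataSource

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
        fileURL as NSURL
    }
}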
I'm unsure what could be causing this, but it appears that all widgets I have built with Xcode 16 replace image content with solid-color views that take on the tint color.
Is this... fixable?
Note: none of the subviews in my widget UI have widgetAccentable() on them.
Adding it to the Image Views did not appear to change anything.
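For reference, the attempt looked roughly like this (simplified; "photo" is a placeholder asset name):
import SwiftUI
import WidgetKit

// Simplified version of the entry view. Adding widgetAccentable() to the Image
// did not change anything; the image still renders as a solid tint-colored shape.
struct MyWidgetEntryView: View {
    var body: some View {
        Image("photo")
            .resizable()
            .scaledToFill()
            .widgetAccentable()
    }
}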
I want the result of an "If greater" node to return a boolean, but the best I can seem to get is a float of 0.00 or 1.00, and I can't seem to convert that to a boolean so I can use the "AND" node.
Am I holding this wrong?
For network debugging that involves toggling wireless on and off while still remaining connected to the debugger, how can I disable wireless connections between my devices and Xcode?
The Photos app on visionOS does not apply a blurry navigation bar background to the top of its photo views. Instead it has a transparent navigation bar with some stylized floating buttons.
How can I mimic this in my own SwiftUI VisionOS app?
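To illustrate the direction I've been considering (hiding the bar background and floating the controls in an ornament), here's a rough sketch; I'm not sure this is the intended approach:
import SwiftUI

// Sketch of one possible direction: hide the navigation bar background and
// float the controls in an ornament instead. Placeholder content throughout.
struct GalleryView: View {
    var body: some View {
        NavigationStack {
            ScrollView {
                // photo grid would go here
                Text("Photos")
            }
            .navigationTitle("Library")
            .toolbarBackground(.hidden, for: .navigationBar)
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Select") { }
                    Button("Share") { }
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
    }
}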
Are there selection capabilities built into the new container APIs?
I would like to ensure that I can spawn a context menu for multiple selected items in my custom-layout container.
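For reference, this is the kind of behavior I mean, sketched with a plain List and contextMenu(forSelectionType:); the question is whether something equivalent is available for custom containers:
import SwiftUI

// The behavior I'm after, sketched with a List rather than a custom container:
// multiple selection plus a context menu over the selected items.
struct ItemListView: View {
    @State private var selection = Set<String>()
    private let items = ["One", "Two", "Three", "Four"]

    var body: some View {
        List(items, id: \.self, selection: $selection) { item in
            Text(item)
        }
        .contextMenu(forSelectionType: String.self) { selectedItems in
            Button("Delete \(selectedItems.count) item(s)") {
                // handle deletion of the selected items
            }
        }
    }
}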
Background:
The app that I am working on lets the user place things in their surroundings and recovers those placements the next time they enter the immersive scene.
From the documentation and discussions I have had, World Tracked Anchors are local to the device.
My questions are:
What happens to these anchors when the user updates their device to the next generation?
What happens to these anchors if the user gets an Apple Care replacement?
Are they backed up and restored via iCloud?
If not, I filed a feedback about it a few months back :D
FB13613066
What I want to do:
I want to turn only the walls of a room into RealityKit Entities that I can collide with, or turn into occlusion surfaces.
This requires adding and maintaining RealityKit entities with mesh information from the RoomAnchor. It also requires creating a "collision shape" from the mesh information.
What I've explored:
A RoomAnchor can provide me with MeshAnchor.Geometry values that match only the "wall" portions of a room.
I can use this mesh information to create RealityKit entities and add them to my immersive view.
But those meshes don't come with UUIDs, so I'm not sure how to know which entities' meshes need to be updated as the RoomAnchor is updated.
As such I just keep adding duplicate wall entities.
A RoomAnchor also provides me with the UUIDs of its plane anchors, but I haven't yet discovered a way to connect those to the provided meshes.
Here is how I add the green walls from the RoomAnchor wall meshes.
Note: I don't like that I need to wrap this in a task to satisfy the async nature of making a shape from a mesh. I could be stuck with it, though.
Warning: this code will keep adding walls, even if there are duplicates and will likely cause performance issues :D.
func updateRoom(_ anchor: RoomAnchor) async throws {
    print("ROOM ID: \(anchor.id)")

    anchor.geometries(of: .wall).forEach { mesh in
        Task {
            let newEntity = Entity()
            newEntity.components.set(InputTargetComponent())
            realityViewContent?.addEntity(newEntity)
            newEntity.components.set(PlacementUtilities.PlacementSurfaceComponent())

            collisionEntities[anchor.id]?.components.set(OpacityComponent(opacity: 0.2))
            collisionEntities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)

            // Generate a mesh for the plane
            do {
                let contents = MeshResource.Contents(planeGeometry: mesh)
                let meshResource = try MeshResource.generate(from: contents)
                // Make this plane occlude virtual objects behind it.
                // entity.components.set(ModelComponent(mesh: meshResource, materials: [OcclusionMaterial()]))
                collisionEntities[anchor.id]?.components.set(ModelComponent(mesh: meshResource, materials: [SimpleMaterial.init(color: .green, roughness: 1.0, isMetallic: false)]))
            } catch {
                print("Failed to create a mesh resource for a plane anchor: \(error).")
                return
            }

            // Generate a collision shape for the plane (for object placement and physics).
            var shape: ShapeResource? = nil
            do {
                let vertices = anchor.geometry.vertices.asSIMD3(ofType: Float.self)
                shape = try await ShapeResource.generateStaticMesh(positions: vertices,
                                                                   faceIndices: anchor.geometry.faces.asUInt16Array())
            } catch {
                print("Failed to create a static mesh for a plane anchor: \(error).")
                return
            }

            if let shape {
                let collisionGroup = PlaneAnchor.verticalCollisionGroup
                collisionEntities[anchor.id]?.components.set(CollisionComponent(shapes: [shape], isStatic: true,
                                                                                filter: CollisionFilter(group: collisionGroup, mask: .all)))
                // The plane needs to be a static physics body so that objects come to rest on the plane.
                let physicsMaterial = PhysicsMaterialResource.generate()
                let physics = PhysicsBodyComponent(shapes: [shape], mass: 0.0, material: physicsMaterial, mode: .static)
                collisionEntities[anchor.id]?.components.set(physics)
            }

            collisionEntities[anchor.id]?.components.set(InputTargetComponent())
        }
    }
}
Hello, I'm trying to accept drags from outside my app to create a new row in a list.
I've observed that .onInsert does not get called in this scenario, and I'm curious whether it's simply not possible or whether there's an obscure view modifier that I'm missing.
Thank you.
import SwiftUI
import UniformTypeIdentifiers

struct ContentView: View {
    @State var data = ["One", "Two", "Three"]

    var body: some View {
        HStack {
            List {
                ForEach(data, id: \.self) { item in
                    Text(item)
                }
                .onMove(perform: { indices, newOffset in
                    data.move(fromOffsets: indices, toOffset: newOffset)
                })
                .onInsert(of: [UTType.plainText], perform: { index, items in
                    // WORKS
                    data.insert("new", at: index)
                })
                .onInsert(of: [UTType.data], perform: { index, items in
                    // Never called
                    data.insert("OUTSIDE", at: index)
                })
            }

            Text("DragMe")
                .onDrag {
                    return NSItemProvider(item: "DragMe" as NSString, typeIdentifier: UTType.plainText.identifier)
                }
        }
    }
}
Hello, I'm trying to determine whether my application is failing to release all of its security-scoped resources, and I'm curious whether there's a way to view the count of all currently accessed URLs.
I am balancing every startAccessingSecurityScopedResource call that returns true with a matching stopAccessingSecurityScopedResource call, but sometimes my application is unresponsive when my Mac wakes from sleep.
Console logs indicate some Sandboxing issues.
Unresponsiveness is resolved by a force-quit and restart of the application.
I'd like to observe what's going on with the number of security-scoped resources to get to the bottom of this. Is it possible?
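In case it helps clarify what I mean by balancing the calls, here's a sketch of the kind of bookkeeping I'm doing (ScopedAccessTracker is my own hypothetical helper, not a system API):
import Foundation

// Hypothetical helper that keeps a count of currently-accessed URLs so the number
// can be logged when the app becomes unresponsive. Not a system API.
final class ScopedAccessTracker {
    private var active: [URL] = []
    private let queue = DispatchQueue(label: "ScopedAccessTracker")

    // Start access; only track URLs where startAccessing... returned true.
    func begin(_ url: URL) -> Bool {
        guard url.startAccessingSecurityScopedResource() else { return false }
        queue.sync { active.append(url) }
        return true
    }

    // Stop access and remove one matching entry from the tracked list.
    func end(_ url: URL) {
        url.stopAccessingSecurityScopedResource()
        queue.sync {
            if let index = active.firstIndex(of: url) { active.remove(at: index) }
        }
    }

    // The number of security-scoped URLs currently being accessed through this tracker.
    var currentCount: Int { queue.sync { active.count } }
}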
Hello,
I've noticed that when I have my ARKitSession run the scene reconstruction provider and the world tracking provider at the same time, I receive no scene reconstruction mesh updates. My catch closure doesn't receive any errors; the async update sequence just never delivers anything.
If I run just the scene reconstruction provider by itself, then I do get mesh updates.
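For reference, a simplified sketch of the setup I'm describing (not my exact code):
import ARKit

// Simplified sketch: run both providers, then listen for mesh anchor updates.
func runProviders() async {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    let worldTracking = WorldTrackingProvider()

    do {
        // Running both providers together: no mesh updates arrive below.
        try await session.run([sceneReconstruction, worldTracking])
        // Running only the scene reconstruction provider does produce updates:
        // try await session.run([sceneReconstruction])
    } catch {
        print("Failed to run session: \(error)")
    }

    for await update in sceneReconstruction.anchorUpdates {
        print("Mesh anchor update: \(update.anchor.id)")
    }
}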
Is this a bug? Is it expected that it's not possible to do this?
Thank you
How persistent is the storage of the WorldTrackingProvider and its underlying world map reconstruction?
The documentation mentions town-to-town anchor recovery, and recovery between sessions, but does that include device restarts and app quits? There are no clues about how persistent it all is.