I'm trying to hit an API URL, but I am getting these errors:
(501) Invalidation handler invoked, clearing connection
(501) personaAttributesForPersonaType for type:0 failed with error Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.mobile.usermanagerd.xpc was invalidated: failed at lookup with error 159 - Sandbox restriction."
UserInfo={NSDebugDescription=The connection to service named com.apple.mobile.usermanagerd.xpc was invalidated: failed at lookup with error 159 - Sandbox restriction.}
Received port for identifier response: <(null)> with error:Error Domain=RBSServiceErrorDomain Code=1 "Client not entitled" UserInfo={RBSEntitlement=com.apple.runningboard.process-state, NSLocalizedFailureReason=Client not entitled, RBSPermanent=false}
elapsedCPUTimeForFrontBoard couldn't generate a task port
This is what my info.plist looks like -
This is the code I'm using to hit the URL and get a response
func sendMessage() {
    guard let url = URL(string: "https://API_URL") else { return }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body = ["query": message] // creates a dictionary with a key:value pair
    request.httpBody = try? JSONSerialization.data(withJSONObject: body) // converts the dictionary to JSON data and sets it as the request body
    isLoading = true
    response = ""
    URLSession.shared.dataTask(with: request) { data, _, error in // initiates an async task for sending the request
        DispatchQueue.main.async { // async update of the UI on the main thread
            isLoading = false
        }
        if let data = data, error == nil, // checks that data was received and that there is no error
           let json = try? JSONSerialization.jsonObject(with: data) as? [String: String] { // parses the JSON response
            DispatchQueue.main.async {
                response = json["response"] ?? "No response"
            }
        }
    }.resume() // starts the network request
}
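For debugging, a variant of the same call that also logs the HTTP status code and any transport error might make it easier to see whether the request itself is failing or the response just isn't parsing. This is only a rough sketch using the same message / isLoading / response properties as above, and the URL is still a placeholder:

func sendMessageVerbose() {
    guard let url = URL(string: "https://API_URL") else { return }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["query": message])
    isLoading = true
    response = ""
    URLSession.shared.dataTask(with: request) { data, urlResponse, error in
        DispatchQueue.main.async { isLoading = false }
        if let error = error {
            // Transport-level failure (connectivity, ATS/sandbox block, DNS, etc.)
            print("Request failed:", error)
            return
        }
        if let http = urlResponse as? HTTPURLResponse {
            print("HTTP status:", http.statusCode)
        }
        guard let data = data else { return }
        // Print the raw body so a payload that isn't [String: String] is still visible.
        print("Raw response:", String(data: data, encoding: .utf8) ?? "<non-UTF8 data>")
        if let json = try? JSONSerialization.jsonObject(with: data) as? [String: String] {
            DispatchQueue.main.async {
                response = json["response"] ?? "No response"
            }
        }
    }.resume()
}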
Can anyone help me understand what the errors are and why I'm not able to get the response back?
So I am tracking 2 objects in my scene and spawning a tiny arrow on each of them (this part is working as intended).
Inside my scene I have added Collision Components and Physics Body Components to each of the arrows.
I want to detect when a collision occurs between the 2 arrow entities. I have made the collision boxes big enough that they should definitely overlap, but I am not able to detect when the collision occurs.
This is the code that I use for the scene -
import SwiftUI
import RealityKit
import RealityKitContent

struct DualObjectTrackingTest: View {
    @State private var subscription: EventSubscription?

    var body: some View {
        RealityView { content in
            if let immersiveContentEntity = try? await Entity(named: "SceneFind.usda", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                print("Collision check started")
            }
        } update: { content in
            if let arrow = content.entities.first?.findEntity(named: "WhiteArrow") as? ModelEntity {
                let subscription = content.subscribe(to: CollisionEvents.Began.self, on: arrow) { collisionEvent in
                    print("Collision has occured")
                }
            }
        }
    }
}
All I see in my console logs is "Collision check started"; even when I move the 2 objects close enough that the collision boxes overlap, no further updates appear in the logs.
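One variation I've been considering is subscribing once in the make closure, keeping the EventSubscription in the @State property, and listening for any CollisionEvents.Began in the content rather than on a single entity. A rough, untested sketch of that make closure:

RealityView { content in
    if let immersiveContentEntity = try? await Entity(named: "SceneFind.usda", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
        // Subscribe once, for any collision that begins anywhere in this content.
        subscription = content.subscribe(to: CollisionEvents.Began.self) { event in
            print("Collision between \(event.entityA.name) and \(event.entityB.name)")
        }
        print("Collision check started")
    }
}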
Can anyone give me some further guidance/resources on this?
Thanks again!
A second post on the same topic, as I feel I may have over-complicated the earlier one.
I am essentially performing object tracking inside Reality Composer Pro and adding a digital entity to the tracked object. I now want to get the coordinates of this digital entity in my Xcode code.
Secondly, can I track more than 1 object inside the same scene? For example, if I want to find a spanner and a screwdriver amongst a bunch of tools laid out on a table, spawn an arrow on top of the spanner and the screwdriver, and then get the coordinates of the arrows that I spawn, how can I go about this?
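For the coordinates part, what I have in mind is roughly this: find the arrow entities by name in the loaded scene and read their world-space positions. The entity names here ("SpannerArrow", "ScrewdriverArrow") are hypothetical, and this is untested:

// Inside the RealityView update closure (or a per-frame subscription),
// assuming the Reality Composer Pro scene has already been added to `content`.
if let scene = content.entities.first,
   let spannerArrow = scene.findEntity(named: "SpannerArrow"),
   let screwdriverArrow = scene.findEntity(named: "ScrewdriverArrow") {
    // position(relativeTo: nil) gives the world-space position of each arrow.
    let spannerPosition = spannerArrow.position(relativeTo: nil)
    let screwdriverPosition = screwdriverArrow.position(relativeTo: nil)
    print("Spanner arrow at \(spannerPosition), screwdriver arrow at \(screwdriverPosition)")
}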
Hey, as a follow-up to my earlier posts about object tracking on visionOS 2 - I'm doing some experimentation, and my use case requires me to track the coordinates of a digital entity that I attach to my reference object (positioned relative to that reference object).
Can something like this be done?
Right now, all I'm doing is putting my reference object in my scene, and then positioning the 3D content that I want to show at the corresponding locations on the reference object. I am then loading the scene in a RealityView block via my SwiftUI code.
I now want to know if I can also extract and use the coordinates of the digital entity that I have placed (post object-tracking) and then make some manipulations via code - for example, if the physical coordinates of the digital entity are in a certain x, y, z range, trigger this function/bring up this alert message in a tile.
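Something roughly like this is what I'm picturing for the range check. The entity name, the bounds, and the showAlert flag are all hypothetical, and this is untested:

// In the RealityView update closure, assuming the scene has been added
// and the digital entity is named "StatusTile" in Reality Composer Pro.
if let entity = content.entities.first?.findEntity(named: "StatusTile") {
    let p = entity.position(relativeTo: nil)   // world-space position in metres
    // Hypothetical target region; the bounds would need tuning to the real setup.
    let isInRange = p.x > 0.2 && p.x < 0.5 &&
                    p.y > 0.8 && p.y < 1.2 &&
                    p.z > -0.5 && p.z < 0.0
    if isInRange {
        showAlert = true   // an @State var that drives an .alert or a tile elsewhere in the view
    }
}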
Is something like this possible, and if so, can you help me understand the different aspects of this problem, ideally with some sample/reference code? So far I've done most of the object-tracking-related tasks via Reality Composer Pro, but this task will require quite a bit of programming as well, and I'm kinda lost as to how to start and go about this.
Thanks for any help that y'all can give me!
I have been able to get object tracking working with visionOS 2. So now in my RealityView, when my reference object is detected, I overlay digital content on top of the reference object. I am implementing this with a Transform entity, attaching an object anchor to that entity, and then placing my digital content in the scene (inside Reality Composer Pro).
I now want to know if it's possible to create attachments and attach them to the digital content (say modelXYZ) that is spawned when the physical object is detected. If I need to write SwiftUI code that works together with my RCP scene (the one with the object-tracking content) to do this, how do I go about it? Some sample code or a reference to accomplish this would be extremely helpful.
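From what I've read so far, the attachments variant of RealityView might be the direction - something like the sketch below, where the attachment is parented to the tracked content once the scene loads. The scene and entity names ("Scene", "modelXYZ", "infoLabel") are hypothetical, and this is untested:

import SwiftUI
import RealityKit
import RealityKitContent

struct TrackedObjectView: View {
    var body: some View {
        RealityView { content, attachments in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
                // Find the digital content that the object anchor spawns, then parent the attachment to it.
                if let modelXYZ = scene.findEntity(named: "modelXYZ"),
                   let label = attachments.entity(for: "infoLabel") {
                    label.position = [0, 0.1, 0]   // float the label slightly above the model
                    modelXYZ.addChild(label)
                }
            }
        } attachments: {
            Attachment(id: "infoLabel") {
                Text("Tracked object detected")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}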
I am currently running the beta version of macOS Sequoia on my MacBook Pro. Is there any approved or recommended antivirus software I can install on this MacBook? I would greatly appreciate it if anyone could point me towards some resources for this.
Thanks!
Hi, I'm brainstorming ideas for getting dynamic content into my visionOS app on the Vision Pro. I have some data coming out of a piece of equipment and reaching a cloud hub (something like Azure IoT Hub). I want to get that data into a visionOS app, ideally inside an attachment that is attached to some 3D entity inside my RealityView.
Is something like this possible? Can someone give me some starting points on how I can enable a pipeline like this, and point me to any resources I could use for reference?
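The simplest pipeline I can picture is the headset polling an HTTPS endpoint exposed by the cloud hub and publishing the result into SwiftUI state, which the attachment view then displays. A very rough sketch with a placeholder URL and payload shape:

import SwiftUI
import Combine

// Placeholder payload; the real shape depends on what the cloud hub exposes.
struct EquipmentReading: Decodable {
    let temperature: Double
    let unit: String
}

final class EquipmentFeed: ObservableObject {
    @Published var latest: EquipmentReading?

    func startPolling(every interval: Duration = .seconds(5)) {
        Task {
            while !Task.isCancelled {
                // Placeholder endpoint; in practice this is whatever the hub exposes to clients.
                if let url = URL(string: "https://example.com/api/latest-reading"),
                   let (data, _) = try? await URLSession.shared.data(from: url),
                   let reading = try? JSONDecoder().decode(EquipmentReading.self, from: data) {
                    await MainActor.run { self.latest = reading }
                }
                try? await Task.sleep(for: interval)
            }
        }
    }
}

The attachment's SwiftUI view would then hold an EquipmentFeed as a @StateObject, call startPolling() in onAppear, and render feed.latest.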
Hi, I read online that to downgrade from Vision OS 2 developer beta back to Vision OS 1 - we need the developer strap. My request for the strap is still under review, is there a way to downgrade without the strap if the need arises?
I've been trying to get the drag gesture up and running so I can move my 3D model around in my immersive space, but for some reason I am not able to move it. The model shows up in my visionOS 1.0 simulator, but it doesn't respond to the drag. I would love some help with this, and any helpful resources too. Here's a snippet of my RealityView code:
import SwiftUI
import RealityKit
import RealityKitContent

struct GearRealityView: View {
    static var modelEntity = Entity()

    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
                GearRealityView.modelEntity = model
                content.add(model)
            }
        }
        .gesture(
            DragGesture()
                .targetedToEntity(GearRealityView.modelEntity)
                .onChanged { value in
                    GearRealityView.modelEntity.position = value.convert(value.location3D, from: .local, to: GearRealityView.modelEntity.parent!)
                }
        )
    }
}
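In case a comparison helps, here is a rough, untested variant of the same view. It stores the loaded model in an @State property (so the gesture re-targets the real entity once loading finishes) and gives the model an InputTargetComponent plus collision shapes, which spatial gestures need in order to hit an entity:

import SwiftUI
import RealityKit
import RealityKitContent

struct GearRealityView: View {
    // @State so the view re-evaluates, and the gesture re-targets, once the model is loaded.
    @State private var modelEntity = Entity()

    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
                // Gestures only land on entities that have both an input target and collision shapes.
                model.components.set(InputTargetComponent())
                model.generateCollisionShapes(recursive: true)
                modelEntity = model
                content.add(model)
            }
        }
        .gesture(
            DragGesture()
                .targetedToEntity(modelEntity)
                .onChanged { value in
                    // Convert the gesture location into the entity's parent space before applying it.
                    modelEntity.position = value.convert(value.location3D, from: .local, to: modelEntity.parent!)
                }
        )
    }
}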