Apply computer vision algorithms to perform a variety of tasks on input images and video using Vision.

Vision Documentation

Posts under Vision tag

102 Posts
Post not yet marked as solved
2 Replies
595 Views
I have trained a model to classify some symbols using Create ML. In my app I am using VNImageRequestHandler and VNCoreMLRequest to classify image data. If I use a CVPixelBuffer obtained from an AVCaptureSession, the classifier runs as I would expect: if I point it at the symbols it works fairly accurately, so I know the model is trained correctly and works in my app.

If I instead use a CGImage obtained by cropping a section out of a larger image (from the gallery), the classifier does not work. It always seems to return the same result (although the confidence is not exactly 1.0 and varies for each image, it is within several decimal places of it, e.g. 0.9999). If I pause the app while I have the cropped image, use the debugger to export it (via the little eye icon, then Open in Preview), and drop that image into the Preview section of the MLModel file or into Create ML, the model classifies it correctly.

If I scale the cropped image to the same size I get from my camera, and convert the CGImage to a CVPixelBuffer with the same size and colour space as the camera (1504 x 1128, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), then I get some difference in output. It's still not accurate, but it returns different results if I specify the 'centerCrop' or 'scaleFit' options, so I know that 'something' is happening, just not the correct thing. I was under the impression that passing a CGImage to the VNImageRequestHandler would perform the necessary conversions, but experimentation shows this is not the case. When using the preview tool on the model or in Create ML, that conversion is obviously being done behind the scenes, because the cropped part is detected. What am I doing wrong?

tl;dr:
- My model works, as backed up by using video input directly and by dropping cropped images into the preview sections.
- Passing the cropped images directly to the VNImageRequestHandler does not work.
- Modifying the cropped images produces different results, but I cannot see what I should be doing to get reliable results.

I'd like my app to behave the same way the preview does: I give it a cropped part of an image, it does some processing, it goes to the classifier, and it returns a result, the same as in Create ML.
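For reference, a minimal sketch of this classification path; SymbolClassifier is a placeholder name for the Create ML model class, and .scaleFill is just one crop-and-scale option to try explicitly.

    import Vision
    import CoreML

    func classify(_ cgImage: CGImage) throws {
        let coreMLModel = try SymbolClassifier(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)

        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let results = request.results as? [VNClassificationObservation] else { return }
            print(results.prefix(3).map { "\($0.identifier): \($0.confidence)" })
        }
        // Set the preprocessing explicitly; a mismatch with the training-time crop is a common
        // reason a cropped CGImage classifies differently from a live camera buffer.
        request.imageCropAndScaleOption = .scaleFill

        // Orientation matters for CGImages cropped out of photo-library images.
        let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
        try handler.perform([request])
    }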
Posted by Bergasms. Last updated.
Post not yet marked as solved
1 Reply
450 Views
I used Metal and CompositorLayer to render an immersive-space skybox. In this space, the SwiftUI window I created only displays a gray frosted-glass background effect (it seems to ignore the Metal-rendered skybox and only samples and displays the black background). Why is that? Is there any solution to display the normal frosted-glass background? Thank you very much!
Posted by zane1024. Last updated.
Post not yet marked as solved
1 Reply
429 Views
Hey guys! I'm building an app which detects cars via Vision and then retrieves the distance to a detected car from a synchronized depthDataMap. However, I'm having trouble finding the corresponding pixel in that depthDataMap. While the CGRect of the object observation ranges from 0-300 (x) and 0-600 (y), the width x height of the depthDataMap is only 320 x 180, so I can't find the right corresponding pixel. Any idea how to solve this? Kind regards
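For reference, a minimal sketch of one way to do the mapping, assuming the observation comes from a request run on the full video frame and the depth map uses a 32-bit float format: Vision bounding boxes are normalized (0...1), so they can be scaled straight into the depth map's own pixel grid rather than into view coordinates.

    import Vision
    import AVFoundation

    func depth(at observation: VNDetectedObjectObservation, in depthData: AVDepthData) -> Float32? {
        let depthMap = depthData.depthDataMap
        let width = CVPixelBufferGetWidth(depthMap)   // e.g. 320
        let height = CVPixelBufferGetHeight(depthMap) // e.g. 180

        // Scale the normalized rect into depth-map pixel coordinates.
        let rect = VNImageRectForNormalizedRect(observation.boundingBox, width, height)
        let x = Int(rect.midX)
        let y = height - 1 - Int(rect.midY) // Vision uses a bottom-left origin; pixel buffers are top-left
        guard x >= 0, x < width, y >= 0, y < height else { return nil }

        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

        // Assumes a Float32 depth/disparity format such as kCVPixelFormatType_DepthFloat32.
        guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
        let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)
        return base.advanced(by: y * bytesPerRow)
                   .assumingMemoryBound(to: Float32.self)[x]
    }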
Posted. Last updated.
Post not yet marked as solved
0 Replies
330 Views
My project is an iOS app written in Objective-C, and now I need to bring it to visionOS (as a native visionOS app, not an unmodified "Designed for iPhone" app). Question one: how can I differentiate visionOS in code? I need to use macro definitions, otherwise it cannot be compiled. Question two: are there any other tips or issues I should know about? Thanks.
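For the first question, a minimal Swift sketch of conditional compilation; in Objective-C the analogous check should be the TARGET_OS_VISION macro from <TargetConditionals.h> in the Xcode 15 SDKs (worth verifying against your toolchain).

    import SwiftUI

    struct PlatformLabel: View {
        var body: some View {
            #if os(visionOS)
            Text("Running on visionOS")
            #else
            Text("Running on iOS")
            #endif
        }
    }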
Posted by lowinding. Last updated.
Post not yet marked as solved
0 Replies
344 Views
I want to make an iCloud backup using SwiftData in visionOS, so I need to get SwiftData working first, but I get the following error even though I followed the steps below.

I created a model:

    import Foundation
    import SwiftData

    @Model
    class NoteModel {
        @Attribute(.unique) var id: UUID
        var date: Date
        var title: String
        var text: String

        init(id: UUID = UUID(), date: Date, title: String, text: String) {
            self.id = id
            self.date = date
            self.title = title
            self.text = text
        }
    }

I added the modelContainer:

    WindowGroup(content: {
        NoteView()
    })
    .modelContainer(for: [NoteModel.self])

And I'm making inserts to test:

    import SwiftUI
    import SwiftData

    struct NoteView: View {
        @Environment(\.modelContext) private var context

        var body: some View {
            Button(action: {
                // new Note
                let note = NoteModel(date: Date(), title: "New Note", text: "")
                context.insert(note)
            }, label: {
                Image(systemName: "note.text.badge.plus")
                    .font(.system(size: 24))
                    .frame(width: 30, height: 30)
                    .padding(12)
                    .background(
                        RoundedRectangle(cornerRadius: 50)
                            .foregroundStyle(.black.opacity(0.2))
                    )
            })
            .buttonStyle(.plain)
            .hoverEffectDisabled(true)
        }
    }

    #Preview {
        NoteView().modelContainer(for: [NoteModel.self])
    }
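As an aside on the iCloud half of the question, here is a hedged sketch of opting a SwiftData store into CloudKit syncing. The container identifier is a placeholder, the target needs the iCloud (CloudKit) capability, and CloudKit-backed stores generally require every property to be optional or have a default and do not support unique constraints such as @Attribute(.unique).

    import SwiftUI
    import SwiftData

    @main
    struct NotesApp: App {
        // Build the container explicitly so a CloudKit-backed configuration can be supplied.
        var sharedModelContainer: ModelContainer = {
            let schema = Schema([NoteModel.self])
            let configuration = ModelConfiguration(
                schema: schema,
                isStoredInMemoryOnly: false,
                cloudKitDatabase: .private("iCloud.com.example.Notes") // placeholder identifier
            )
            do {
                return try ModelContainer(for: schema, configurations: [configuration])
            } catch {
                fatalError("Could not create ModelContainer: \(error)")
            }
        }()

        var body: some Scene {
            WindowGroup {
                NoteView()
            }
            .modelContainer(sharedModelContainer)
        }
    }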
Posted by OVRIDOO. Last updated.
Post not yet marked as solved
2 Replies
543 Views
Currently, I am trying to test the ImageTrackingProvider with the Apple Vision Pro. I started with some basic code:

    import RealityKit
    import ARKit

    @MainActor
    class ARKitViewModel: ObservableObject {
        private let session = ARKitSession()
        private let imageTracking = ImageTrackingProvider(
            referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR")
        )

        func runSession() async {
            do {
                try await session.run([imageTracking])
            } catch {
                print(error)
            }
        }

        func processUpdates() async {
            for await _ in imageTracking.anchorUpdates {
                print("test")
            }
        }
    }

I only have one picture in the AR resource group. I added its physical size, and I have no error messages in the AR group. When I run the application on the Vision Pro, I receive the following error:

    ar_image_tracking_provider_t <0x28398f1e0>: Failed to load reference image <ARReferenceImage: 0x28368f120 name="IMG_1640" physicalSize=(1.350, 2.149)> with error: Failed to add reference image.

It finds the image, but there seems to be a problem with loading it. I tried both the JPEG and the PNG format. I do not understand why it fails to load the ReferenceImage. I am using Xcode Version 15.3 beta 3.
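For what it's worth, here is a small sketch along the same lines that first checks whether image tracking is even supported (it reportedly is not in the visionOS Simulator) and then inspects each anchor update; the "AR" resource group name is taken from the post.

    import ARKit

    func trackImages() async {
        guard ImageTrackingProvider.isSupported else {
            print("Image tracking is not supported in this environment.")
            return
        }
        let provider = ImageTrackingProvider(
            referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR")
        )
        let session = ARKitSession()
        do {
            try await session.run([provider])
            for await update in provider.anchorUpdates {
                switch update.event {
                case .added, .updated:
                    print("Image \(update.anchor.referenceImage.name ?? "unnamed") tracked: \(update.anchor.isTracked)")
                case .removed:
                    print("Image anchor removed")
                }
            }
        } catch {
            print(error)
        }
    }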
Posted. Last updated.
Post not yet marked as solved
2 Replies
489 Views
I think there is a problem with the Vision Pro keyboard. I don't think it would be difficult to support entering other languages, and if Apple isn't going to add that, shouldn't it at least provide an extended custom keyboard? Sometimes it's frustrating to see things that are intentionally restricted. If you have any information about the Vision Pro keyboard, or want to discuss it, let's talk about your thoughts together! I don't have any information yet.
Posted. Last updated.
Post not yet marked as solved
0 Replies
342 Views
Hello. I'm currently selling an app. I tried to run the iOS version on visionOS to provide the same service as the existing app, but a huge amount of the code turned out to be incompatible. Can I create a visionOS app with the same name by adding visionOS on the App Store Connect site and uploading a completely separate app project? The iOS app and the visionOS app would use the same iCloud container, in-app purchases, and so on. However, we plan to keep the project itself separate in order to reduce the size of the app users download and to optimize it.
Posted. Last updated.
Post not yet marked as solved
0 Replies
301 Views
Like the built-in environments on Vision Pro, I want to create a surrounding space when my app runs. I want to customize the space the way apps like Apple TV do with their movie-theater environment, to give users a better experience. Does anyone know which technology to use, or where the developer documentation is?
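A hedged sketch of the usual approach: an ImmersiveSpace containing a large sphere textured with a 360° image, inverted so the texture faces the viewer. The "Skybox" asset name is a placeholder.

    import SwiftUI
    import RealityKit

    struct EnvironmentSpace: Scene {
        var body: some Scene {
            ImmersiveSpace(id: "Environment") {
                RealityView { content in
                    // Load a 360° equirectangular image from the app bundle (placeholder name).
                    guard let texture = try? await TextureResource(named: "Skybox") else { return }
                    var material = UnlitMaterial()
                    material.color = .init(texture: .init(texture))

                    let sphere = Entity()
                    sphere.components.set(ModelComponent(
                        mesh: .generateSphere(radius: 1000),
                        materials: [material]
                    ))
                    sphere.scale = [-1, 1, 1] // flip the sphere so its inside is visible
                    content.add(sphere)
                }
            }
        }
    }

The space can then be opened from a view with the openImmersiveSpace environment action using the matching id.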
Posted. Last updated.
Post not yet marked as solved
1 Reply
332 Views
Hi, I'm working on developing an app that needs to work in a crowd with multiple people (and therefore multiple hands). From what I understand, the Vision framework currently uses a "largest hand" heuristic to pick the detected hand. That won't work for my application, since the largest hand won't always be the one of interest; the hand of interest is the one that is pointing. I know how to train a model using Create ML to identify a pointing hand, but the issue is that there is no straightforward way to override Vision's built-in largest-hand heuristic when relying solely on Swift and Create ML. I would like my pipeline to be:

- Request hand landmarks
- Process the image
- The Create ML model reports which hand is pointing
- Use the pointing hand to collect position data for the index finger joints

But within Vision, if you set the number of hands to collect data for to 1, it will just choose the largest hand and report position data for that hand only. Of course, the easy workaround is to request X hands, but on an iOS device that is computationally intensive (my app could be handling up to 10 hands at a time). Has anyone come up with a simpler solution to this problem, or is there something within visionOS to do it?
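One common workaround is to raise maximumHandCount and then pick a hand with your own criterion rather than Vision's largest-hand default. The crude geometric "index finger extended" check below is only a stand-in for the Create ML pointing classifier described in the post.

    import Vision
    import CoreGraphics

    func pointingHand(in pixelBuffer: CVPixelBuffer) throws -> VNHumanHandPoseObservation? {
        let request = VNDetectHumanHandPoseRequest()
        request.maximumHandCount = 6 // detect several hands, then filter ourselves

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up, options: [:])
        try handler.perform([request])

        return request.results?.first { hand in
            guard let tip = try? hand.recognizedPoint(.indexTip),
                  let knuckle = try? hand.recognizedPoint(.indexMCP),
                  let wrist = try? hand.recognizedPoint(.wrist),
                  tip.confidence > 0.5, knuckle.confidence > 0.5, wrist.confidence > 0.5
            else { return false }
            // Crude heuristic: the index tip sits noticeably farther from the wrist than its knuckle.
            let tipDistance = hypot(tip.location.x - wrist.location.x, tip.location.y - wrist.location.y)
            let knuckleDistance = hypot(knuckle.location.x - wrist.location.x, knuckle.location.y - wrist.location.y)
            return tipDistance > knuckleDistance * 1.3
        }
    }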
Posted. Last updated.
Post not yet marked as solved
7 Replies
866 Views
Hi Developers, I want to create a Vision app in Swift Playgrounds on iPad. However, Vision does not function properly in Swift Playgrounds on iPad or in Xcode playgrounds; the Vision code only works in a normal Xcode project. So, can I submit my Swift Student Challenge 2024 application as a normal Xcode project rather than an Xcode playground or Swift Playgrounds file? Thanks :)
Posted. Last updated.
Post not yet marked as solved
1 Reply
2k Views
Currently, the Apple ID on my actual Vision Pro device is different from the one signed in to Xcode on my Mac mini. I added the Vision Pro's Apple ID to Xcode, but I still can't build to my Vision Pro. Can I log out of the existing account in Xcode and log in with the same Apple ID as the Vision Pro? However, the Apple ID created for the Vision Pro does not have an Apple Developer membership, so there is no certificate that lets the app run on the actual device. How can I use the Apple ID that has my Apple Developer membership to run my Xcode project on the Vision Pro, which is signed in with the other ID? This is my first time doing this.
Posted. Last updated.
Post not yet marked as solved
0 Replies
385 Views
Hi, I tried to change the default size of a volumetric window, but it looks like this window has a maximum width. Is that true?

    WindowGroup(id: "id") {
        ItemToShow()
    }
    .windowStyle(.volumetric)
    .defaultSize(width: 100, height: 0.8, depth: 0.3, in: .meters)

Here I set the width to 100 meters, but it still looks like about 2 meters.
Posted. Last updated.
Post not yet marked as solved
0 Replies
399 Views
Hello everyone, I want to develop an app for Vision Pro that aims to help people with vertigo and dizziness problems. The problem is that I cannot afford a Vision Pro. If I develop and test using a standard VR headset with an iPhone inside, would that cause issues on a real Vision Pro?
Posted by Uvrutfus. Last updated.
Post marked as solved
2 Replies
773 Views
I'm exploring my Vision Pro and finding it unclear whether I can even achieve things like body pose detection: https://developer.apple.com/videos/play/wwdc2023/111241/ It's clear that I can apply it to self-provided images, but what about data coming from the visionOS SDKs? All I can find is the mesh data from ARKit (https://developer.apple.com/documentation/arkit/arkit_in_visionos); am I missing something, or do we not yet have good APIs for this? Appreciate any guidance! Thanks.
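For the self-provided-images half of the question, a minimal sketch of running body pose detection on a CGImage you already have (from a photo, a file, and so on):

    import Vision

    func bodyJoints(in cgImage: CGImage) throws -> [VNHumanBodyPoseObservation.JointName: CGPoint] {
        let request = VNDetectHumanBodyPoseRequest()
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try handler.perform([request])

        guard let body = request.results?.first else { return [:] }
        let points = try body.recognizedPoints(.all)
        // Keep only confidently detected joints; locations are normalized (0...1, bottom-left origin).
        return points.filter { $0.value.confidence > 0.3 }
                     .mapValues { $0.location }
    }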
Posted by nkarpov. Last updated.
Post not yet marked as solved
2 Replies
565 Views
When trying to run my app with .windowStyle(.volumetric) on visionOS, this error is returned: Fatal error: Your app was given a scene with session role UISceneSessionRole(_rawValue: UIWindowSceneSessionRoleApplication) but no scenes declared in your App body match this role.
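A hedged sketch of a scene declaration that matches a volumetric request; ContentView is a placeholder, and the Info.plist key named in the comment is an assumption worth checking against the visionOS documentation.

    import SwiftUI

    @main
    struct VolumetricApp: App {
        // If the first scene should be volumetric, the Info.plist entry
        // "Preferred Default Scene Session Role" (UIApplicationPreferredDefaultSceneSessionRole)
        // may also need to request the volumetric window role; otherwise the system can ask
        // for a plain window and find no matching scene in the App body.
        var body: some Scene {
            WindowGroup(id: "Volume") {
                ContentView()
            }
            .windowStyle(.volumetric)
            .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
        }
    }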
Posted. Last updated.
Post marked as solved
1 Reply
471 Views
Hello, my understanding of the paper below is that iOS ships with a MobileNetv3-based ML model backbone, which then uses different heads for specific tasks in iOS. I understand that this backbone is accessible for various uses through the Vision framework, but I was wondering if it is also accessible for on-device fine-tuning for other purposes. Just as an example, if I want a model to detect some unique object in a photo, can I use the built-in backbone, or do I have to include my own in the app? Thanks very much for any advice, and apologies if I didn't understand something correctly. Source: https://machinelearning.apple.com/research/on-device-scene-analysis
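Not an answer on fine-tuning, but a small sketch of one way to reuse Vision's built-in feature extractor without bundling a model: compute image feature prints and compare their distance.

    import Vision

    func featureDistance(_ a: CGImage, _ b: CGImage) throws -> Float? {
        func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
            let request = VNGenerateImageFeaturePrintRequest()
            try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
            return request.results?.first
        }
        guard let printA = try featurePrint(for: a),
              let printB = try featurePrint(for: b) else { return nil }

        var distance: Float = 0
        try printA.computeDistance(&distance, to: printB)
        return distance // smaller distance means more similar images
    }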
Posted by Sark. Last updated.
Post not yet marked as solved
0 Replies
453 Views
I'm using RealityKit to give an immersive view of 360° pictures. However, I'm seeing a problem where the window disappears when I enter immersive mode and returns when I rotate my head. Interestingly, adding ".glassBackground()" behind the window cures the issue; however, I prefer not to use it as the UI's backdrop. How can I deal with this? Here is a link to a GIF: https://firebasestorage.googleapis.com/v0/b/affirmation-604e2.appspot.com/o/Simulator%20Screen%20Recording%20-%20Apple%20Vision%20Pro%20-%202024-01-30%20at%2011.33.39.gif?alt=media&token=3fab9019-4902-4564-9312-30d49b15ea48
Posted. Last updated.