Apply computer vision algorithms to perform a variety of tasks on input images and video using Vision.

Posts under Vision tag

82 Posts
How to get selected usdz model thumbnail image using QuickLookThumbnailing
I am using the code below to get a thumbnail from a USDZ model with QuickLookThumbnailing, but I don't get the proper output.

    guard let url = Bundle.main.url(forResource: resource, withExtension: withExtension) else {
        print("Unable to create url for resource.")
        return
    }
    let request = QLThumbnailGenerator.Request(fileAt: url, size: size, scale: 10.0, representationTypes: .all)
    let generator = QLThumbnailGenerator.shared
    generator.generateRepresentations(for: request) { thumbnail, type, error in
        DispatchQueue.main.async {
            if let error {
                print(error)
            } else if let thumbnail {
                self.thumbnailImage = Image(uiImage: thumbnail.uiImage)
            }
        }
    }

Below is a screenshot of the selected model, and below that the generated thumbnail, which shows only the generic USDZ icon instead of the guitar.
1 reply · 0 boosts · 110 views · last activity 4d
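A hedged note on the post above: with representationTypes: .all, QuickLook delivers a low-cost .icon representation first and the rendered .thumbnail later, or not at all if rendering fails (which the unusual scale of 10.0 may provoke), so seeing only the generic USDZ icon is consistent with the thumbnail pass never succeeding. A minimal sketch that requests only the rendered thumbnail; the function name and parameters are placeholders:

    import QuickLookThumbnailing
    import SwiftUI

    // Sketch: ask only for the rendered .thumbnail representation and use a
    // realistic display scale. If rendering fails, an error is reported
    // instead of silently falling back to the generic document icon.
    func generateThumbnail(for modelURL: URL,
                           size: CGSize,
                           displayScale: CGFloat,
                           completion: @escaping (UIImage?) -> Void) {
        let request = QLThumbnailGenerator.Request(fileAt: modelURL,
                                                   size: size,
                                                   scale: displayScale,
                                                   representationTypes: .thumbnail)
        QLThumbnailGenerator.shared.generateBestRepresentation(for: request) { representation, error in
            if let error { print("Thumbnail generation failed: \(error)") }
            DispatchQueue.main.async { completion(representation?.uiImage) }
        }
    }

In SwiftUI, a sensible displayScale can come from the @Environment(\.displayScale) value.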
How to get a thumbnail image of a USDZ model with materials applied in visionOS?
I want to generate a thumbnail image from a USDZ model on visionOS, but the image comes out without the materials applied. Here is my code:

    import Foundation
    import SceneKit
    import SceneKit.ModelIO
    import UIKit

    class ARQLThumbnailGenerator {
        private let device = MTLCreateSystemDefaultDevice()!

        /// Create a thumbnail image of the asset with the specified URL at the specified
        /// animation time. Supports loading of .scn, .usd, .usdz, .obj, and .abc files,
        /// and other formats supported by Model I/O.
        /// - Parameters:
        ///   - url: The file URL of the asset.
        ///   - size: The size (in points) at which to render the asset.
        ///   - time: The animation time to which the asset should be advanced before snapshotting.
        func thumbnail(for url: URL, size: CGSize, time: TimeInterval = 0) -> UIImage? {
            let renderer = SCNRenderer(device: device, options: [:])
            renderer.autoenablesDefaultLighting = true
            if url.pathExtension == "scn" {
                let scene = try? SCNScene(url: url, options: nil)
                renderer.scene = scene
            } else {
                let asset = MDLAsset(url: url)
                let scene = SCNScene(mdlAsset: asset)
                renderer.scene = scene
            }
            let image = renderer.snapshot(atTime: time, with: size, antialiasingMode: .multisampling4X)
            saveImageFileInDocumentDirectory(imageData: image.pngData()!)
            return image
        }

        func saveImageFileInDocumentDirectory(imageData: Data) {
            let uniqueID = UUID().uuidString
            let documentsDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
            let fileName = uniqueID + "image.png"
            let filePath = (documentsDirectory as NSString).appendingPathComponent(fileName)
            try? imageData.write(to: URL(fileURLWithPath: filePath), options: [])
        }
    }
1 reply · 0 boosts · 112 views · last activity 6d
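One hedged possibility for the post above: going through MDLAsset and SCNScene(mdlAsset:) is a common point where USDZ PBR materials get lost, while SceneKit can open .usdz files directly with SCNScene(url:), which may keep them. A small sketch of that variation:

    import Metal
    import SceneKit
    import UIKit

    // Sketch, not a verified fix: load the USDZ straight into SceneKit
    // instead of via Model I/O, then snapshot as before.
    func thumbnailDirect(for url: URL, size: CGSize, time: TimeInterval = 0) -> UIImage? {
        guard let device = MTLCreateSystemDefaultDevice(),
              let scene = try? SCNScene(url: url, options: nil) else { return nil }
        let renderer = SCNRenderer(device: device, options: [:])
        renderer.autoenablesDefaultLighting = true
        renderer.scene = scene
        return renderer.snapshot(atTime: time, with: size, antialiasingMode: .multisampling4X)
    }

If the materials are physically based, they may also need image-based lighting to show up; autoenablesDefaultLighting alone can leave PBR surfaces looking flat.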
Object tracking on Vision Pro using Vision
I'm wondering whether it's possible to implement object tracking on Vision Pro using Apple's Vision framework. The Vision documentation lists a variety of computer vision classes tagged "visionOS", but all the sample code in the documentation targets only iOS, iPadOS, or macOS. Can those classes also be used to develop Vision Pro apps? If so, how do they get a data feed from the Vision Pro's camera?
0 replies · 0 boosts · 97 views · last activity 1w
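A hedged note on the question above: the Vision framework itself is platform-agnostic, and its requests run on visionOS against any CGImage or pixel buffer you supply; the catch is the data feed, since (as of visionOS 1, and enterprise entitlements aside) third-party apps do not get the raw main-camera stream. So tracking content from the photo library or a network feed works, while live passthrough frames are not generally available. A minimal sketch of running a Vision request, with the image assumed to come from elsewhere:

    import CoreGraphics
    import Vision

    // Sketch: Vision does not care where the CGImage comes from -- photo
    // picker, file, or network. Raw camera frames are the missing piece on
    // visionOS, not the framework itself.
    func recognizeText(in cgImage: CGImage) throws -> [String] {
        let request = VNRecognizeTextRequest()
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try handler.perform([request])
        return (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
    }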
Build errors for iOS for my visionOS app
I'm taking my iOS/iPadOS app and converting it to run on visionOS. I'm trying to compile and build the app for both visionOS and iOS. When I try to build for an iPhone or iPad simulator, I get the following error:

    Building for 'iphonesimulator', but realitytool only supports [xros, xrsimulator]

I'm thinking I might need an #if conditional-compilation statement for visionOS so the iOS build doesn't try to compile those lines of code, but for this particular error I can't find which file or code needs the conditional compilation. Does anyone know how to get rid of this error?
2 replies · 0 boosts · 175 views · last activity 4d
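For the build error above, a hedged pointer: realitytool is the tool that compiles Reality Composer Pro (.rkassets) content, so the error usually means a RealityKit content package is being processed for an iOS destination; removing that package from the iOS target's build phases (or making the dependency visionOS-only) is likely the real fix, while #if only helps for source references. A sketch of the source-level gating; the package and entity names follow the default visionOS app template and are assumptions:

    import RealityKit
    #if os(visionOS)
    import RealityKitContent   // visionOS-only package from the app template
    #endif

    // Sketch: keep visionOS-only scene loading out of the iOS build.
    func loadImmersiveScene() async throws -> Entity? {
        #if os(visionOS)
        return try await Entity(named: "Scene", in: realityKitContentBundle)
        #else
        return nil   // iOS build path; no Reality Composer Pro content
        #endif
    }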
Store USDZ with SwiftData
I am trying to store USDZ files with SwiftData for now. I convert the USDZ to Data, then store it with SwiftData.

My model:

    import Foundation
    import SwiftData
    import SwiftUI

    @Model
    class Item {
        var name: String
        @Attribute(.externalStorage) var usdz: Data? = nil
        var id: String

        init(name: String, usdz: Data? = nil) {
            self.id = UUID().uuidString
            self.name = name
            self.usdz = usdz
        }
    }

My function to convert USDZ to Data. I am currently using a local USDZ just to test whether it works:

    func usdzData() -> Data? {
        do {
            guard let usdzURL = Bundle.main.url(forResource: "tv_retro", withExtension: "usdz") else {
                fatalError("Unable to find USDZ file in the bundle.")
            }
            return try Data(contentsOf: usdzURL)
        } catch {
            print("Error loading USDZ file: \(error)")
        }
        return nil
    }

Loading the items:

    @Query private var items: [Item]
    ...
    var body: some View {
        ...
        ForEach(items) { item in
            HStack {
                Model3D(?????) { model in
                    model
                        .resizable()
                        .scaledToFit()
                } placeholder: {
                    ProgressView()
                }
            }
        }
        ...
    }

How can I load the Model3D? I have tried:

    Model3D(data: item.usdz)

which gives me the errors:

    Cannot convert value of type '[Item]' to expected argument type 'Binding<C>'
    Generic parameter 'C' could not be inferred

Both errors appear on the ForEach. I am able to print the content of item:

    ForEach(items) { item in
        HStack {
            Text("\(item.name)")
            Text("\(item.usdz)")
        }
    }

This works fine for me; item.usdz prints something like Optional(10954341 bytes). I would like to know two things: Is this the correct way to save USDZ files into SwiftData, or should I use FileManager? If so, how should I do that? And how can I get the USDZ out of storage (SwiftData) and into a Model3D?
3 replies · 0 boosts · 169 views · last activity 3d
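One hedged way to answer the Model3D part of the post above: Model3D loads from a URL (or an app-bundle resource name), not from Data, so the stored bytes can be written to a temporary .usdz file first. A sketch; the helper name and the naive caching are made up for illustration:

    import Foundation
    import RealityKit
    import SwiftUI

    // Sketch: persist the SwiftData bytes to a temporary .usdz file so
    // Model3D(url:) can load them. Add real error handling as needed.
    func temporaryUSDZURL(for data: Data, id: String) throws -> URL {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(id)
            .appendingPathExtension("usdz")
        if !FileManager.default.fileExists(atPath: url.path) {
            try data.write(to: url)
        }
        return url
    }

    // Usage inside the ForEach, assuming `item` is the SwiftData model:
    // if let data = item.usdz,
    //    let url = try? temporaryUSDZURL(for: data, id: item.id) {
    //     Model3D(url: url) { model in
    //         model.resizable().scaledToFit()
    //     } placeholder: {
    //         ProgressView()
    //     }
    // }

Storing the file with FileManager and keeping only its path in SwiftData would avoid the copy; with @Attribute(.externalStorage) the Data approach is workable either way.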
Text orientation on images
Hello, I am trying to detect the orientation of text in images. (Each image has a label with a number, but sometimes the label is not in the right orientation; I would like to detect these cases and add a prefix to the image files.) This code works well, but when the text is upside down it still considers the text correctly oriented. Is there a way to distinguish the difference? Thanks for your help!

    import AppKit
    import SwiftUI
    import Vision

    struct ContentView: View {
        @State private var totalImages = 0
        @State private var processedImages = 0
        @State private var rotatedImages = 0
        @State private var remainingImages = 0

        var body: some View {
            VStack {
                Button(action: chooseDirectory) {
                    Text("Choisir le répertoire des images")  // "Choose the images directory"
                        .padding()
                }
                Text("TOTAL: \(totalImages)")
                Text("TRAITEES: \(processedImages)")          // processed
                Text("ROTATION: \(rotatedImages)")
                Text("RESTANT: \(remainingImages)")           // remaining
            }
            .padding()
        }

        func chooseDirectory() {
            let openPanel = NSOpenPanel()
            openPanel.canChooseDirectories = true
            openPanel.canChooseFiles = false
            openPanel.allowsMultipleSelection = false
            openPanel.begin { response in
                if response == .OK, let url = openPanel.url {
                    processImages(in: url)
                }
            }
        }

        func processImages(in directory: URL) {
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    let fileManager = FileManager.default
                    let urls = try fileManager.contentsOfDirectory(at: directory, includingPropertiesForKeys: nil)
                    let imageUrls = urls.filter {
                        $0.pathExtension.lowercased() == "jpg" || $0.pathExtension.lowercased() == "png"
                    }
                    DispatchQueue.main.async {
                        self.totalImages = imageUrls.count
                        self.processedImages = 0
                        self.rotatedImages = 0
                        self.remainingImages = self.totalImages
                    }
                    for url in imageUrls {
                        self.processImage(at: url)
                    }
                } catch {
                    print("Error reading contents of directory: \(error.localizedDescription)")
                }
            }
        }

        func processImage(at url: URL) {
            guard let image = NSImage(contentsOf: url),
                  let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return }
            let request = VNRecognizeTextRequest { request, error in
                if let error = error {
                    print("Error recognizing text: \(error.localizedDescription)")
                    return
                }
                if let results = request.results as? [VNRecognizedTextObservation], !results.isEmpty {
                    let orientationCorrect = self.isTextOrientationCorrect(results)
                    if !orientationCorrect {
                        self.renameFile(at: url)
                        DispatchQueue.main.async {
                            self.rotatedImages += 1
                        }
                    }
                }
                DispatchQueue.main.async {
                    self.processedImages += 1
                    self.remainingImages = self.totalImages - self.processedImages
                }
            }
            let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
            do {
                try handler.perform([request])
            } catch {
                print("Error performing text recognition request: \(error.localizedDescription)")
            }
        }

        func isTextOrientationCorrect(_ observations: [VNRecognizedTextObservation]) -> Bool {
            // Placeholder for the logic to check text orientation
            // This should be implemented based on your specific needs
            for observation in observations {
                if observation.topCandidates(1).first != nil {
                    let boundingBox = observation.boundingBox
                    let angle = atan2(boundingBox.height, boundingBox.width)
                    if abs(angle) > .pi / 4 {
                        return false
                    }
                }
            }
            return true
        }

        func renameFile(at url: URL) {
            let fileManager = FileManager.default
            let directory = url.deletingLastPathComponent()
            let newName = "ROTATION_" + url.lastPathComponent
            let newURL = directory.appendingPathComponent(newName)
            do {
                try fileManager.moveItem(at: url, to: newURL)
            } catch {
                print("Error renaming file: \(error.localizedDescription)")
            }
        }
    }

    struct ContentView_Previews: PreviewProvider {
        static var previews: some View {
            ContentView()
        }
    }
0 replies · 0 boosts · 93 views · last activity 1w
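A hedged idea for distinguishing upside-down text in the post above: bounding boxes alone cannot tell 0° from 180°, but recognition confidence usually drops sharply on inverted text, so running the recognizer twice -- once as-is and once with the image declared as rotated 180° -- and comparing total confidence is one workable heuristic. A minimal sketch:

    import CoreGraphics
    import Vision

    // Sketch: compare text-recognition confidence for the image as-is (.up)
    // versus treated as rotated 180 degrees (.down). Higher confidence for
    // .down suggests the label is upside down. A heuristic, not a guarantee.
    func looksUpsideDown(_ cgImage: CGImage) throws -> Bool {
        func totalConfidence(_ orientation: CGImagePropertyOrientation) throws -> Float {
            let request = VNRecognizeTextRequest()
            let handler = VNImageRequestHandler(cgImage: cgImage, orientation: orientation, options: [:])
            try handler.perform([request])
            return (request.results ?? [])
                .compactMap { $0.topCandidates(1).first?.confidence }
                .reduce(0, +)
        }
        return try totalConfidence(.down) > totalConfidence(.up)
    }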
Specific barcode not recognized
I faced a problem during development: I could not scan a Code 39 barcode on iPad using Vision. The sample label I used for testing has multiple Code 39 barcodes on it, and I could scan almost all of them except one specific barcode. When I use a conventional barcode scanner or free scanning apps, I can read that barcode with no problem; it fails only when I use Vision. Has anyone faced a similar situation? Do you know why a specific barcode could not be scanned with Vision on iPad?
0 replies · 0 boosts · 106 views · last activity 1w
Specific barcode is not recognized
Hi, I have a problem where I cannot scan a specific Code 39 barcode with the Vision framework. We have multiple barcodes on a label, and almost all of the Code 39 codes can be scanned, but not one specific code. One more piece of information: the barcode that Vision does not recognize can be read by a general barcode scanner. Has anyone faced a similar situation? Is there a particular condition (size, print intensity, etc.) that makes a barcode hard to scan with Vision? Regards,
0 replies · 0 boosts · 112 views · last activity 1w
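A hedged suggestion for both Code 39 posts above: constraining the request to the expected symbology and opting into the newest request revision sometimes recovers marginal codes; print contrast, quiet-zone width, and capture resolution are the other usual suspects. A minimal sketch:

    import CoreGraphics
    import Vision

    // Sketch: restrict detection to Code 39 and use the latest supported
    // revision. Whether this reads the one stubborn label is untested.
    func detectCode39(in cgImage: CGImage) throws -> [String] {
        let request = VNDetectBarcodesRequest()
        request.symbologies = [.code39]
        if let latest = VNDetectBarcodesRequest.supportedRevisions.max() {
            request.revision = latest
        }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try handler.perform([request])
        return (request.results ?? []).compactMap { $0.payloadStringValue }
    }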
System API error when reading 12K panoramic images; visionOS does not support viewing 12K panoramic photos
My code (loading the texture fails and the failure case fires):

    import Combine
    import RealityKit
    import SwiftUI

    extension Entity {
        func addPanoramicImage(for media: WRMedia) {
            let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
                receiveCompletion: {
                    switch $0 {
                    case .finished: break
                    case .failure(let error): assertionFailure("\(error)")
                    }
                },
                receiveValue: { [weak self] texture in
                    guard let self = self else { return }
                    var material = UnlitMaterial()
                    material.color = .init(texture: .init(texture))
                    self.components.set(ModelComponent(
                        mesh: .generateSphere(radius: 1E3),
                        materials: [material]
                    ))
                    self.scale *= .init(x: -1, y: 1, z: 1)
                    self.transform.translation += SIMD3(0.0, -1, 0.0)
                }
            )
            components.set(Entity.WRSubscribeComponent(subscription: subscription))
        }

        func updateRotation(for media: WRMedia) {
            let angle = Angle.degrees(0.0)
            let rotation = simd_quatf(angle: Float(angle.radians), axis: SIMD3<Float>(0, 0.0, 0))
            self.transform.rotation = rotation
        }

        struct WRSubscribeComponent: Component {
            var subscription: AnyCancellable
        }
    }

The failure case hits with:

    case .failure(let error): assertionFailure("\(error)")

    Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
1 reply · 0 boosts · 249 views · last activity Apr ’24
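A hedged workaround for the decoding failure above: a 12K panorama pushes against texture-size and memory limits, so downsampling with Image I/O before creating the texture may get past the MTKTextureLoader error, at the cost of resolution. A sketch; the 8192-pixel cap is an assumption to tune against the device's Metal limits:

    import ImageIO
    import RealityKit

    // Sketch: decode the panorama at a reduced size, then build the
    // RealityKit texture from the resulting CGImage.
    func loadDownsampledTexture(from url: URL, maxPixelSize: Int = 8192) throws -> TextureResource {
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
        ]
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
              let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
            throw CocoaError(.fileReadCorruptFile)
        }
        return try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
    }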
What is the maximum data processing speed?
For example: we use DockKit for birdwatching, so we have an unknown field distance and direction (say, observing from a rock: distance = ?, direction = ?). The task is to recognize the number of birds caught in the frame, draw a detection box, and collect statistics. Question: what is the maximum number of frames per second that can be processed with custom object recognition? If that is not enough, can I do the calculations myself and hand the results to DockKit for fast movement?
0 replies · 0 boosts · 256 views · last activity Apr ’24
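There is no published frames-per-second figure for custom recognition; throughput depends on the model, input size, and device, so measuring on the target hardware is the practical answer. A minimal sketch that times a single Vision pass over one frame (the sustainable rate is roughly 1 / latency):

    import CoreVideo
    import Foundation
    import Vision

    // Sketch: time one inference of a custom Core ML detector on a frame.
    // Average over many frames before drawing conclusions.
    func detectionLatency(model: VNCoreMLModel, frame: CVPixelBuffer) throws -> TimeInterval {
        let request = VNCoreMLRequest(model: model)
        let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
        let start = CFAbsoluteTimeGetCurrent()
        try handler.perform([request])
        return CFAbsoluteTimeGetCurrent() - start
    }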
One app, multi-platform: the concern of a novice developer
Hello. Currently, only the iOS version is on sale on the App Store. The application offers an iCloud-linked, auto-renewable subscription. I want to sell on App Store Connect at the same time with the same identifier (App ID). I simply added visionOS to the existing app project to offer the visionOS version early, but the existing UI-related code and the location-related code are not compatible. So, using the same identifier and the same name, I duplicated the project, kept and optimized only what could be implemented, and got it working without any problems on the actual device. However, when I added the visionOS platform in App Store Connect and tried to upload the visionOS app I had created separately through an archive, there was an error with the identifier and provisioning, so the upload was blocked. What I found while trying to solve the problem:

- App Groups: I learned about this feature, but judged that it is meant for separate apps sharing an integrated service, so it was not suitable for me.
- Adding an app to the existing project as a new target and manually adjusting the platform in Xcode -> Build Phases -> Compile Sources, then archiving: would the upload succeed? (I have not been able to implement this stage yet.)

That is the current situation. Please give me some advice on how to implement this. visionOS has a lot of constraints, so a lot of features need to be dropped.
0 replies · 0 boosts · 313 views · last activity Mar ’24
visionOS upload error: profile doesn't include the com.apple.application-identifier entitlement
Currently I am bringing the iOS version sold on the App Store to visionOS with the same app ID and identifier, and I am trying to upload it using the same identifier, developer ID, and iCloud, but I cannot upload because of an error. I did not know what the problem was, so as a test I pushed out an early update of the iOS version, and that upload went through review without any problems. Uploading the visionOS-only version fails with a provisioning-profile problem, so I would appreciate it if you could tell me the solution. In addition, a lot of the code used in the iOS version is not compatible with visionOS, so we created a new project for visionOS.
0 replies · 0 boosts · 306 views · last activity Mar ’24
How to show only spatial video using PHPickerFilter in Swift 5.0
I want to show only spatial videos when opening the photo library in my app. How can I achieve that? One more thing: if I select a video through the photo library, how do I identify whether the selected video is a spatial video or not?

    self.presentPicker(filter: .videos)

    /// - Tag: PresentPicker
    private func presentPicker(filter: PHPickerFilter?) {
        var configuration = PHPickerConfiguration(photoLibrary: .shared())
        // Set the filter type according to the user's selection.
        configuration.filter = filter
        // Set the mode to avoid transcoding, if possible, if your app supports arbitrary image/video encodings.
        configuration.preferredAssetRepresentationMode = .current
        // Set the selection behavior to respect the user's selection order.
        configuration.selection = .ordered
        // Set the selection limit to enable multiselection.
        configuration.selectionLimit = 1
        let picker = PHPickerViewController(configuration: configuration)
        picker.delegate = self
        present(picker, animated: true)
    }

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true) {
            // do something on dismiss
        }
        guard let provider = results.first?.itemProvider else { return }
        provider.loadFileRepresentation(forTypeIdentifier: "public.movie") { url, error in
            guard error == nil else {
                print(error!)
                return
            }
            // receiving the video's local URL / file path
            guard let url = url else { return }
            // create a new filename
            let fileName = "\(Int(Date().timeIntervalSince1970)).\(url.pathExtension)"
            // create a new URL
            let newUrl = URL(fileURLWithPath: NSTemporaryDirectory() + fileName)
            print(newUrl)
            // copy item to app storage
            // try? FileManager.default.copyItem(at: url, to: newUrl)
            // self.parent.videoURL = newUrl.absoluteString
        }
    }
1 reply · 0 boosts · 351 views · last activity Mar ’24
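For the second half of the question above, a hedged check after the picker returns a file URL: spatial videos are stereo multiview HEVC, so the video track should carry the stereo-multiview media characteristic. A sketch, assuming AVMediaCharacteristic.containsStereoMultiviewVideo (an iOS 17 / visionOS 1 era API) is available on your deployment target:

    import AVFoundation

    // Sketch: treat a file as spatial if any track carries the
    // stereo-multiview characteristic. Verify the characteristic's
    // availability against current AVFoundation docs.
    func isSpatialVideo(at url: URL) async throws -> Bool {
        let asset = AVURLAsset(url: url)
        let tracks = try await asset.loadTracks(withMediaCharacteristic: .containsStereoMultiviewVideo)
        return !tracks.isEmpty
    }

As for filtering the picker itself, I'm not aware of a dedicated PHPickerFilter for spatial video, so filtering after selection with a check like the above may be the fallback.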
Object Detection using Vision performs different than in Create ML Preview
Context: I've trained my model for object detection with 4k+ images. Under the preview I'm able to check the prediction for image "A", which detects two labels at 100% confidence, and their bounding boxes look accurate.

The problem itself: inside a Swift playground, when I try to perform object detection using the same model and the same image, I don't get the same results.

What I expected: that after performing the request and processing the array of VNRecognizedObjectObservation, I would see the very same results that appear in the Create ML preview.

Notes: I import the model into the playground by drag and drop. I trained the images using JPEG format. The test image is rotated so that it looks vertical using the macOS Finder rotation tool. I've tried passing a different orientation while creating the VNImageRequestHandler, with the same result.

Swift playground code. This is the code I'm using:

    import UIKit
    import Vision

    do {
        let model = try MYMODEL_FROMCREATEML(configuration: MLModelConfiguration())
        let mlModel = model.model
        let coreMLModel = try VNCoreMLModel(for: mlModel)
        let request = VNCoreMLRequest(model: coreMLModel) { request, error in
            guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
            results.forEach { result in
                print(result.labels)
                print(result.boundingBox)
            }
        }
        let image = UIImage(named: "TEST_IMAGE.HEIC")!
        let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!)
        try requestHandler.perform([request])
    } catch {
        print(error)
    }

Additional notes and uncertainties, in case they are relevant: I trained the model using pictures I took on my iPhone in 48 MP HEIC format, all shot in vertical orientation. With a Python script I overwrote the EXIF orientation to 1 (normal) so that I could annotate the images in the CVAT tool and then convert them to the Create ML annotation format.

Assumption #1: since Object Detection in Create ML is based on the YOLOv3 architecture, whose first layer resizes the input image, I don't have to worry about using very large images to train my model. Is this correct?

Assumption #2: does the same resizing also happen when I make a prediction?
0 replies · 0 boosts · 433 views · last activity Mar ’24
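A hedged diagnosis for the post above: a UIImage's cgImage carries no orientation metadata, so Vision sees the raw sensor-oriented pixels unless the orientation is passed explicitly, and VNCoreMLRequest defaults to center-cropping via imageCropAndScaleOption; both can make playground results diverge from the Create ML preview. A sketch; the mapping helper is written out because no built-in initializer performs it, and .scaleFill matching the preview's behavior is an assumption:

    import UIKit
    import Vision

    // Sketch: forward the UIImage's orientation to Vision and use a
    // full-image crop/scale mode instead of the default center crop.
    func cgOrientation(of image: UIImage) -> CGImagePropertyOrientation {
        switch image.imageOrientation {
        case .up: return .up
        case .down: return .down
        case .left: return .left
        case .right: return .right
        case .upMirrored: return .upMirrored
        case .downMirrored: return .downMirrored
        case .leftMirrored: return .leftMirrored
        case .rightMirrored: return .rightMirrored
        @unknown default: return .up
        }
    }

    func detectObjects(in image: UIImage, with model: VNCoreMLModel) throws {
        let request = VNCoreMLRequest(model: model) { request, _ in
            (request.results as? [VNRecognizedObjectObservation])?.forEach {
                print($0.labels, $0.boundingBox)
            }
        }
        request.imageCropAndScaleOption = .scaleFill
        let handler = VNImageRequestHandler(cgImage: image.cgImage!,
                                            orientation: cgOrientation(of: image),
                                            options: [:])
        try handler.perform([request])
    }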