
Post not yet marked as solved
4 Replies
338 Views
I'm having an issue with showing progress in an upload task. I created a local Express app to test its behavior, but the progress/delegate always reports the same results. I'm also conforming to URLSessionTaskDelegate.

This is how I configure the URLSession (the configuration is always `.default`):

```swift
private lazy var session: URLSession = {
    .init(
        configuration: self.configuration,
        delegate: self,
        delegateQueue: OperationQueue.main
    )
}()

private var observation: NSKeyValueObservation?
```

Then I call it using this function and convert it into a publisher:

```swift
func loadData1(path: String, method: RequestMethod, params: [String: String]) -> AnyPublisher<Data, Error> {
    let publisher = PassthroughSubject<Data, Error>()
    guard let imageUrl = Bundle.main.url(forResource: "G0056773", withExtension: "JPG"),
          let imageData = try? Data(contentsOf: imageUrl) else {
        return Fail(error: NSError(domain: "com.test.main", code: 1000)).eraseToAnyPublisher()
    }
    var request = URLRequest(url: URL(string: "http://localhost:3000/")!)
    request.httpMethod = "POST"
    let data = request.createFileUploadBody(parameters: [:],
                                            boundary: UUID().uuidString,
                                            data: imageData,
                                            mimeType: "image/jpeg",
                                            fileName: "video")
    let task = session.uploadTask(with: request, from: data) { data, response, error in
        if let error = error {
            return publisher.send(completion: .failure(error))
        }
        guard let response = response as? HTTPURLResponse else {
            return publisher.send(completion: .failure(ServiceError.other))
        }
        guard 200..<300 ~= response.statusCode else {
            return publisher.send(completion: .failure(ServiceError.other))
        }
        guard let data = data else {
            return publisher.send(completion: .failure(ServiceError.custom("No data")))
        }
        return publisher.send(data)
    }
    observation = task.progress.observe(\.fractionCompleted) { progress, _ in
        print("progress: ", progress.totalUnitCount)
        print("progress: ", progress.completedUnitCount)
        print("progress: ", progress.fractionCompleted)
        print("***********************************************")
    }
    task.resume()
    return publisher.eraseToAnyPublisher()
}
```

`request.createFileUploadBody` is just an extension on URLRequest that I wrote to build the form data body.

The delegate:

```swift
func urlSession(_ session: URLSession,
                task: URLSessionTask,
                didSendBodyData bytesSent: Int64,
                totalBytesSent: Int64,
                totalBytesExpectedToSend: Int64) {
    print(bytesSent)
    print(totalBytesSent)
    print(totalBytesExpectedToSend)
    print("*********************************************")
}
```

No matter how big or small the file I upload, I always get the same results: the delegate is only called once, and the progress is always the same. Output from the prints:

```
progress: 100
progress: 0
progress: 0.0095
***********************************************
progress: 100
progress: 0
progress: 4.18053577746524e-07
***********************************************
2272436
2272436
2272436
*********************************************
progress: 100
progress: 95
progress: 0.95
***********************************************
progress: 100
progress: 100
progress: 1.0
***********************************************
```

Does anyone know what I might be doing wrong? Thanks in advance.
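For reference, the output above shows `totalUnitCount` pinned at 100, so `fractionCompleted` is the value to display rather than the raw unit counts. Below is a minimal, self-contained sketch of observing upload progress via KVO in isolation, assuming a reachable endpoint; the URL and the placeholder payload are hypothetical:

```swift
import Foundation

// Hypothetical endpoint and payload, just to exercise progress reporting.
let url = URL(string: "http://localhost:3000/")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
let body = Data(repeating: 0, count: 5_000_000) // ~5 MB placeholder

let task = URLSession.shared.uploadTask(with: request, from: body) { _, _, error in
    print("upload finished, error: \(String(describing: error))")
}

// Keep a strong reference to the observation, or updates stop.
let observation = task.progress.observe(\.fractionCompleted) { progress, _ in
    // fractionCompleted is normalized to 0.0...1.0 regardless of unit counts.
    print(String(format: "upload progress: %.2f", progress.fractionCompleted))
}
task.resume()
```

One thing worth noting: against a localhost server the whole body may be buffered and sent almost instantly, which can coalesce the delegate callbacks into a single `didSendBodyData` call; testing against a slower remote host may show intermediate updates.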
Posted by MostaZa. Last updated.
Post not yet marked as solved
0 Replies
894 Views
Hi! I'm dropping a PDF file in an app, and I'm getting two errors I cannot understand. When I drop the file I'm supposed to create a PDF view, which, by the way, I can do. I perform the drop in a SwiftUI view:

```swift
GeometryReader { geometry in
    ZStack {
        Rectangle()
            .foregroundColor(.white)
            .overlay(OptionalPDFView(pdfDocument: self.document.backgroundPDF))
    }
    .onDrop(of: ["public.data"], isTargeted: nil) { providers, location in
        for provider in providers {
            provider.loadDataRepresentation(forTypeIdentifier: "public.data") { (data, err) in
                if let safeData = data {
                    self.document.setBackgroundData(data: safeData)
                } else {
                    print("problem")
                }
            }
        }
        return self.drop(providers: providers, al: location)
    }
}
```

The problem is that if I define the onDrop with `public.adobe.pdf`, nothing is recognized when I drag a file into it; and when I specify `public.data` and try to extract the provider's data for that identifier, I get this console message: "Cannot find representation conforming to type public.data". I'm trying to build a Mac app (AppKit). Thanks a lot.
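One thing worth checking: the registered UTI for PDF files is `com.adobe.pdf` (`kUTTypePDF`), not `public.adobe.pdf`, which may be why nothing is recognized. Here is a minimal sketch of a drop target using it; the `PDFDropView` name and the `PDFDocument` handling are illustrative, not the poster's types:

```swift
import SwiftUI
import PDFKit

struct PDFDropView: View {
    @State private var document: PDFDocument?

    var body: some View {
        Rectangle()
            .foregroundColor(.white)
            // "com.adobe.pdf" is the registered UTI for PDF (kUTTypePDF).
            .onDrop(of: ["com.adobe.pdf"], isTargeted: nil) { providers in
                guard let provider = providers.first else { return false }
                provider.loadDataRepresentation(forTypeIdentifier: "com.adobe.pdf") { data, _ in
                    guard let data = data, let pdf = PDFDocument(data: data) else { return }
                    // Hop back to the main queue before touching view state.
                    DispatchQueue.main.async { self.document = pdf }
                }
                return true
            }
    }
}
```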
Posted by MostaZa. Last updated.
Post not yet marked as solved
0 Replies
2.1k Views
Hi! I want to get an image from the video output and display it in a SwiftUI view. I'm detecting an observation using Vision and Core ML with an example app I downloaded from Apple called "Breakfast Finder". Now I want to crop the detected image to the bounds of the observation's rectangle and display it in an Image in a SwiftUI view. To do that I get the observation bounds, store them in a variable, and use them to crop the image from the CVPixelBuffer. The problem is that I'm getting a blank image. In `captureOutput` (in the fourth code block below) I create the CIImage from the pixel buffer. I'm a hobbyist developer and would like to know where to find more resources on AVFoundation. Thanks a lot. MostaZa

The SwiftUI view:

```swift
import SwiftUI

struct ContentView: View {
    @State private var image: Image?
    @State private var showingCamera = false
    @State private var inputImage: UIImage?

    var body: some View {
        VStack {
            image?
                .resizable()
                .scaledToFit()
            Button("Show camera") {
                self.showingCamera = true
            }
        }
        .sheet(isPresented: $showingCamera, onDismiss: loadImage) {
            CameraCont(croppedImage: self.$inputImage)
                .edgesIgnoringSafeArea(.top)
        }
    }

    func loadImage() {
        guard let inputImage = inputImage else { return }
        image = Image(uiImage: inputImage)
    }
}
```

The view controller bridged to SwiftUI (UIViewControllerRepresentable):

```swift
struct CameraCont: UIViewControllerRepresentable {
    @Binding var croppedImage: UIImage?

    func makeCoordinator() -> Coordinator {
        return Coordinator(croppedImage: $croppedImage)
    }

    class Coordinator: NSObject, SendImageToSwiftUIDelegate {
        @Binding var croppedImage: UIImage?

        init(croppedImage: Binding<UIImage?>) {
            _croppedImage = croppedImage
        }

        func sendImage(image: UIImage) {
            croppedImage = image
            print("Delegate called")
        }
    }

    typealias UIViewControllerType = VisionObjectRecognitionViewController

    func makeUIViewController(context: Context) -> VisionObjectRecognitionViewController {
        let vision = VisionObjectRecognitionViewController()
        vision.sendImageDelegate = context.coordinator
        return vision
    }

    func updateUIViewController(_ uiViewController: VisionObjectRecognitionViewController, context: Context) {
    }
}
```

The protocol used to communicate with the class:

```swift
protocol SendImageToSwiftUIDelegate {
    func sendImage(image: UIImage)
}
```

The subclass of the view controller from Apple's example, where the image gets displayed. This is where I convert the pixel buffer into a CIImage:

```swift
class VisionObjectRecognitionViewController: ViewController {
    var sendImageDelegate: SendImageToSwiftUIDelegate!
    private var detectionOverlay: CALayer! = nil
    private var observationWidthBiggerThan180 = false
    private var rectToCrop = CGRect()

    // Vision parts
    private var requests = [VNRequest]()

    @discardableResult
    func setupVision() -> NSError? {
        // Setup Vision parts
        let error: NSError! = nil
        guard let modelURL = Bundle.main.url(forResource: "exampleModelFP16", withExtension: "mlmodelc") else {
            return NSError(domain: "VisionObjectRecognitionViewController", code: -1,
                           userInfo: [NSLocalizedDescriptionKey: "Model file is missing"])
        }
        do {
            let visionModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))
            let objectRecognition = VNCoreMLRequest(model: visionModel, completionHandler: { (request, error) in
                DispatchQueue.main.async(execute: {
                    // perform all the UI updates on the main queue
                    if let results = request.results {
                        self.drawVisionRequestResults(results)
                    }
                })
            })
            self.requests = [objectRecognition]
        } catch let error as NSError {
            print("Model loading went wrong: \(error)")
        }
        return error
    }

    func drawVisionRequestResults(_ results: [Any]) {
        CATransaction.begin()
        CATransaction.setValue(kCFBooleanTrue, forKey: kCATransactionDisableActions)
        detectionOverlay.sublayers = nil // remove all the old recognized objects
        for observation in results where observation is VNRecognizedObjectObservation {
            guard let objectObservation = observation as? VNRecognizedObjectObservation else {
                continue
            }
            // Select only the label with the highest confidence.
            let topLabelObservation = objectObservation.labels[0]
            let objectBounds = VNImageRectForNormalizedRect(objectObservation.boundingBox,
                                                            Int(bufferSize.width),
                                                            Int(bufferSize.height))
            let shapeLayer = self.createRoundedRectLayerWithBounds(objectBounds)
            let textLayer = self.createTextSubLayerInBounds(objectBounds,
                                                            identifier: topLabelObservation.identifier,
                                                            confidence: topLabelObservation.confidence)
            shapeLayer.addSublayer(textLayer)
            detectionOverlay.addSublayer(shapeLayer)
            if shapeLayer.bounds.size.width > 180 {
                // perform crop and apply perspective filter
                self.observationWidthBiggerThan180 = true
                self.rectToCrop = shapeLayer.bounds
            } else {
                self.observationWidthBiggerThan180 = false
            }
        }
        self.updateLayerGeometry()
        CATransaction.commit()
    }

    // MARK: - Capture the output of the camera and perform the image recognition
    override func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }
        if observationWidthBiggerThan180 {
            let imageToCrop = CIImage(cvPixelBuffer: pixelBuffer)
            let perspectiveTransform = CIFilter(name: "CIPerspectiveTransform")
            sendImageDelegate.sendImage(image: UIImage(ciImage: imageToCrop))
            print(imageToCrop)
        }
        let exifOrientation = exifOrientationFromDeviceOrientation()
        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: [:])
        do {
            try imageRequestHandler.perform(self.requests)
        } catch {
            print(error)
        }
    }

    override func setupAVCapture() {
        super.setupAVCapture()
        // setup Vision parts
        setupLayers()
        updateLayerGeometry()
        setupVision()
        // start the capture
        startCaptureSession()
    }

    func setupLayers() {
        detectionOverlay = CALayer() // container layer that has all the renderings of the observations
        detectionOverlay.name = "DetectionOverlay"
        detectionOverlay.bounds = CGRect(x: 0.0, y: 0.0, width: bufferSize.width, height: bufferSize.height)
        detectionOverlay.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)
        rootLayer.addSublayer(detectionOverlay)
    }

    func updateLayerGeometry() {
        let bounds = rootLayer.bounds
        var scale: CGFloat
        let xScale: CGFloat = bounds.size.width / bufferSize.height
        let yScale: CGFloat = bounds.size.height / bufferSize.width
        scale = fmax(xScale, yScale)
        if scale.isInfinite {
            scale = 1.0
        }
        CATransaction.begin()
        CATransaction.setValue(kCFBooleanTrue, forKey: kCATransactionDisableActions)
        // rotate the layer into screen orientation, then scale and mirror
        detectionOverlay.setAffineTransform(CGAffineTransform(rotationAngle: CGFloat(.pi / 2.0)).scaledBy(x: scale, y: -scale))
        // center the layer
        detectionOverlay.position = CGPoint(x: bounds.midX, y: bounds.midY)
        CATransaction.commit()
    }

    func createTextSubLayerInBounds(_ bounds: CGRect, identifier: String, confidence: VNConfidence) -> CATextLayer {
        let textLayer = CATextLayer()
        textLayer.name = "Object Label"
        let formattedString = NSMutableAttributedString(string: String(format: "\(identifier)\nConfidence: %.2f", confidence))
        let largeFont = UIFont(name: "Helvetica", size: 24.0)!
        formattedString.addAttributes([NSAttributedString.Key.font: largeFont], range: NSRange(location: 0, length: identifier.count))
        textLayer.string = formattedString
        textLayer.bounds = CGRect(x: 0, y: 0, width: bounds.size.height - 10, height: bounds.size.width - 10)
        textLayer.position = CGPoint(x: bounds.midX, y: bounds.midY)
        textLayer.shadowOpacity = 0.7
        textLayer.shadowOffset = CGSize(width: 2, height: 2)
        textLayer.foregroundColor = CGColor(colorSpace: CGColorSpaceCreateDeviceRGB(), components: [0.0, 0.0, 0.0, 1.0])
        textLayer.contentsScale = 2.0 // retina rendering
        // rotate the layer into screen orientation and scale and mirror
        textLayer.setAffineTransform(CGAffineTransform(rotationAngle: CGFloat(.pi / 2.0)).scaledBy(x: 1.0, y: -1.0))
        return textLayer
    }

    func createRoundedRectLayerWithBounds(_ bounds: CGRect) -> CALayer {
        let shapeLayer = CALayer()
        shapeLayer.bounds = bounds
        shapeLayer.position = CGPoint(x: bounds.midX, y: bounds.midY)
        shapeLayer.name = "Found Object"
        shapeLayer.backgroundColor = CGColor(colorSpace: CGColorSpaceCreateDeviceRGB(), components: [1.0, 1.0, 0.2, 0.4])
        shapeLayer.cornerRadius = 7
        return shapeLayer
    }
}
```
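A note on the blank image: a `UIImage` created directly with `UIImage(ciImage:)` has no rasterized backing, which is a common cause of blank images when displayed (including via SwiftUI's `Image(uiImage:)`). Below is a minimal sketch of rendering through a `CIContext` first; it assumes a `pixelBuffer` like the one in the post and a crop rect already expressed in the pixel buffer's coordinate space:

```swift
import CoreImage
import UIKit

// Shared context; creating a CIContext per frame is expensive.
let ciContext = CIContext()

/// Renders a region of a pixel buffer into a displayable UIImage.
/// `cropRect` is assumed to be in the pixel buffer's coordinate space,
/// not in the preview layer's coordinates.
func croppedUIImage(from pixelBuffer: CVPixelBuffer, cropRect: CGRect) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer).cropped(to: cropRect)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

Note also that `rectToCrop` in the post is a layer rect; it would need converting into buffer coordinates (for example via `VNImageRectForNormalizedRect` on the observation's bounding box) before being used as a crop rect.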
Posted by MostaZa. Last updated.
Post not yet marked as solved
0 Replies
366 Views
Hi! I'm trying to update the value of a progress bar I created, so I call a completion block from a function I wrote. The problem is that the completion block is only called once the function's life cycle has expired, so I don't get the progress updates as I should. Here is my code; if anyone has any suggestions, I'll appreciate your time.

This is my progress bar:

```swift
struct ProgressBar: View {
    @Binding var value: Float

    var body: some View {
        GeometryReader { geometry in
            ZStack(alignment: .leading) {
                Rectangle()
                    .frame(width: geometry.size.width, height: geometry.size.height)
                    .opacity(0.3)
                    .foregroundColor(Color(UIColor.systemTeal))
                Rectangle()
                    .frame(width: min(CGFloat(self.value) * geometry.size.width, geometry.size.width),
                           height: geometry.size.height)
                    .foregroundColor(Color(UIColor.systemBlue))
                    .animation(.linear)
            }
            .cornerRadius(45)
        }
    }
}
```

This is my read-image function:

```swift
private func readImage(image: UIImage,
                       completionHandler: @escaping ([VNRecognizedText]?, Error?) -> Void,
                       comp: @escaping (Double?, Error?) -> ()) {
    var recognizedTexts = [VNRecognizedText]()
    let requestHandler = VNImageRequestHandler(cgImage: (image.cgImage)!, options: [:])
    let textRequest = VNRecognizeTextRequest { (request, error) in
        guard let observations = request.results as? [VNRecognizedTextObservation] else {
            completionHandler(nil, error)
            return
        }
        for currentObservation in observations {
            let topCandidate = currentObservation.topCandidates(1)
            if let recognizedText = topCandidate.first {
                recognizedTexts.append(recognizedText)
            }
        }
        completionHandler(recognizedTexts, nil)
    }
    textRequest.recognitionLevel = .accurate
    textRequest.recognitionLanguages = ["es"]
    textRequest.usesLanguageCorrection = true
    textRequest.progressHandler = { (request, value, error) in
        print(value)
        DispatchQueue.main.async {
            comp(value, nil)
        }
    }
    try? requestHandler.perform([textRequest])
}
```

This is my main view:

```swift
struct ContentView: View {
    @State var ima = drawPDFfromURL(url: dalai)
    @State private var stepperCounter = 0
    @State private var observations = [VNRecognizedText]()
    @State private var progressValue: Float = 0.0
    private var originaImage = drawPDFfromURL(url: dalai)

    var body: some View {
        VStack {
            VStack {
                Button(action: {
                    //self.observations = readText(image: self.ima!)
                    DispatchQueue.main.async {
                        readImage(image: self.ima!, completionHandler: { (texts, error) in
                            self.observations = texts!
                        }) { (value, err) in
                            self.progressValue = Float(value!)
                        }
                    }
                }) {
                    Text("Read invoice")
                }
                ProgressBar(value: $progressValue).frame(height: 20)
            }
            .padding()
            Stepper("Select Next Box", onIncrement: {
                self.stepperCounter += 1
                self.ima = addBoundingBoxToImage(recognizedText: self.observations[self.stepperCounter],
                                                 image: self.originaImage!)
                print("Adding to age")
            }, onDecrement: {
                self.stepperCounter -= 1
                print("Subtracting from age")
            })
            HStack(alignment: .center) {
                Text("For word: ")
                Button(action: { getBoxProperties() }) {
                    Text("Get box properties")
                }
            }
            .padding()
            Image(uiImage: ima!)
                .resizable()
                .scaledToFit()
        }
    }
}
```

I can see the progress printed, but I cannot pass the value to the view. If anyone has any suggestions, I'm all ears. Thanks, Carlos
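One likely factor: `VNImageRequestHandler.perform(_:)` runs synchronously, so calling `readImage` inside `DispatchQueue.main.async` still blocks the main queue until recognition finishes, and the progress updates dispatched back to the main queue can only run after the work completes. A sketch of the button action moving the work off the main queue, assuming the `readImage(image:completionHandler:comp:)` function from the post:

```swift
Button(action: {
    // Run the synchronous Vision work off the main queue...
    DispatchQueue.global(qos: .userInitiated).async {
        readImage(image: self.ima!, completionHandler: { texts, _ in
            // ...and update view state back on the main queue.
            DispatchQueue.main.async { self.observations = texts ?? [] }
        }) { value, _ in
            DispatchQueue.main.async { self.progressValue = Float(value ?? 0) }
        }
    }
}) {
    Text("Read invoice")
}
```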
Posted by MostaZa. Last updated.
Post marked as solved
6 Replies
2.3k Views
I'm building a custom machine learning algorithm to extract parts of an invoice, so I need to feed the words and their bounding boxes into a model. To achieve that I tokenize the PDF page string and then use:

```swift
func findString(_ string: String, withOptions options: NSString.CompareOptions = []) -> [PDFSelection]
```

In `NSString.CompareOptions` I use `.regularExpression`, but I'm not getting any results. Here's my code:

```swift
// tokenize the string and remove empty tokens
var dummy = pdfString!.components(separatedBy: "\n")
    .joined(separator: " ")
    .components(separatedBy: " ")
    .filter { $0 != "" }

// loop over every token and search for it with its position
var resultsFound = [[PDFSelection]]()
for word in dummy {
    let pattern = "\\b" + word + "\\b"
    resultsFound.append(facturaPDF!.findString(pattern, withOptions: .regularExpression))
}
resultsFound.count

// add the results to the page
for results in resultsFound {
    for result in results {
        let highlight = PDFAnnotation(bounds: result.bounds(for: paginaPDF!),
                                      forType: .highlight,
                                      withProperties: nil)
        highlight.endLineStyle = .square
        highlight.color = UIColor.orange.withAlphaComponent(0.5)
        paginaPDF!.addAnnotation(highlight)
    }
}
```

If anyone has any suggestions I'll be grateful 🙂
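In case the `.regularExpression` option is the culprit (PDFKit's `findString(_:withOptions:)` may not honor regex patterns), here is a fallback sketch that searches for each literal token instead, assuming the `pdfString`, `facturaPDF`, and `paginaPDF` values from the post:

```swift
// Fallback: search for each literal token rather than a \b regex pattern.
let tokens = pdfString!
    .components(separatedBy: .whitespacesAndNewlines)
    .filter { !$0.isEmpty }

for word in tokens {
    // .caseInsensitive is a plain string comparison, no regex involved.
    for result in facturaPDF!.findString(word, withOptions: .caseInsensitive) {
        let highlight = PDFAnnotation(bounds: result.bounds(for: paginaPDF!),
                                      forType: .highlight,
                                      withProperties: nil)
        highlight.endLineStyle = .square
        highlight.color = UIColor.orange.withAlphaComponent(0.5)
        paginaPDF!.addAnnotation(highlight)
    }
}
```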
Posted by MostaZa. Last updated.