URLSession uploadTask not showing progress
I'm having an issue showing progress for an upload task. I created a local Express app to test the behavior, but the progress / delegate always reports the same results. I'm also conforming to URLSessionTaskDelegate.

This is how I configure the URLSession (the configuration is always .default):

private lazy var session: URLSession = {
    .init(
        configuration: self.configuration,
        delegate: self,
        delegateQueue: OperationQueue.main
    )
}()

private var observation: NSKeyValueObservation?

Then I call it using this function and convert it into a publisher:

func loadData1(path: String, method: RequestMethod, params: [String: String]) -> AnyPublisher<Data, Error> {
    let publisher = PassthroughSubject<Data, Error>()
    guard let imageUrl = Bundle.main.url(forResource: "G0056773", withExtension: "JPG"),
          let imageData = try? Data(contentsOf: imageUrl) else {
        return Fail(error: NSError(domain: "com.test.main", code: 1000)).eraseToAnyPublisher()
    }
    var request = URLRequest(url: URL(string: "http://localhost:3000/")!)
    request.httpMethod = "POST"
    let data = request.createFileUploadBody(parameters: [:],
                                            boundary: UUID().uuidString,
                                            data: imageData,
                                            mimeType: "image/jpeg",
                                            fileName: "video")
    let task = session.uploadTask(with: request, from: data) { data, response, error in
        if let error = error {
            return publisher.send(completion: .failure(error))
        }
        guard let response = response as? HTTPURLResponse else {
            return publisher.send(completion: .failure(ServiceError.other))
        }
        guard 200..<300 ~= response.statusCode else {
            return publisher.send(completion: .failure(ServiceError.other))
        }
        guard let data = data else {
            return publisher.send(completion: .failure(ServiceError.custom("No data")))
        }
        return publisher.send(data)
    }
    observation = task.progress.observe(\.fractionCompleted) { progress, _ in
        print("progress: ", progress.totalUnitCount)
        print("progress: ", progress.completedUnitCount)
        print("progress: ", progress.fractionCompleted)
        print("***********************************************")
    }
    task.resume()
    return publisher.eraseToAnyPublisher()
}

request.createFileUploadBody is just an extension on URLRequest that I wrote to build the multipart form data body.

The delegate method:

func urlSession(_ session: URLSession, task: URLSessionTask, didSendBodyData bytesSent: Int64, totalBytesSent: Int64, totalBytesExpectedToSend: Int64) {
    print(bytesSent)
    print(totalBytesSent)
    print(totalBytesExpectedToSend)
    print("*********************************************")
}

No matter how big or small the file I'm uploading is, I always get the same results: the delegate is only called once, and the progress values are always the same.

Output from the prints:

progress:  100
progress:  0
progress:  0.0095
***********************************************
progress:  100
progress:  0
progress:  4.18053577746524e-07
***********************************************
2272436
2272436
2272436
*********************************************
progress:  100
progress:  95
progress:  0.95
***********************************************
progress:  100
progress:  100
progress:  1.0
***********************************************

Does anyone know what I might be doing wrong? Thanks in advance.
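For comparison, here is a minimal sketch of the purely delegate-based setup I'd expect to report per-chunk progress (no completion handler, uploading straight from a file URL). The endpoint and class name are placeholders of mine, not code from my project:

import Foundation

// Minimal sketch: delegate-only upload, assuming a hypothetical test endpoint.
final class UploadProgressProbe: NSObject, URLSessionTaskDelegate {
    private lazy var session = URLSession(configuration: .default,
                                          delegate: self,
                                          delegateQueue: .main)

    func upload(fileAt fileURL: URL) {
        var request = URLRequest(url: URL(string: "http://localhost:3000/")!) // assumed test server
        request.httpMethod = "POST"
        // Upload directly from a file on disk instead of an in-memory Data body.
        let task = session.uploadTask(with: request, fromFile: fileURL)
        task.resume()
    }

    // Expected to fire repeatedly as body bytes are handed to the network stack.
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    didSendBodyData bytesSent: Int64,
                    totalBytesSent: Int64,
                    totalBytesExpectedToSend: Int64) {
        let fraction = Double(totalBytesSent) / Double(totalBytesExpectedToSend)
        print("sent \(totalBytesSent) of \(totalBytesExpectedToSend) bytes (\(fraction))")
    }
}

One thing I'm unsure about: against localhost a ~2 MB body can be handed to the socket in one write, which might explain why didSendBodyData only fires once; a remote or throttled server might behave differently.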
Replies: 4 · Boosts: 0 · Views: 1k · Feb ’24
Problem when dropping a PDF file
Hi! I'm dropping a PDF file into an app and I'm getting two errors I cannot understand. When I drop the file I'm supposed to create a PDF view, which, by the way, I can. I perform the drop in a SwiftUI view:

GeometryReader { geometry in
    ZStack {
        Rectangle()
            .foregroundColor(.white)
            .overlay(OptionalPDFView(pdfDocument: self.document.backgroundPDF))
    }.onDrop(of: ["public.data"], isTargeted: nil) { providers, location in
        for provider in providers {
            provider.loadDataRepresentation(forTypeIdentifier: "public.data") { (data, err) in
                if let safeData = data {
                    self.document.setBackgroundData(data: safeData)
                } else {
                    print("problem")
                }
            }
        }
        return self.drop(providers: providers, at: location)
    }
}

The problem is that if I declare the onDrop with public.adobe.pdf, nothing is recognized when I drag a file onto it, and when I specify public.data and try to extract the provider data for that identifier, I get this console message: "Cannot find representation conforming to type public.data". I'm trying to build a Mac app (AppKit). Thanks a lot.
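To make the question concrete, here is a minimal sketch of the direction I'm trying, assuming macOS 11+ so the drop type can be given as UTType.pdf; handlePDFData(_:) is a hypothetical stand-in for my document code:

import SwiftUI
import UniformTypeIdentifiers

// Minimal sketch, assuming macOS 11+; handlePDFData(_:) is a placeholder.
struct PDFDropArea: View {
    var body: some View {
        Rectangle()
            .foregroundColor(.white)
            .onDrop(of: [UTType.pdf], isTargeted: nil) { providers in
                guard let provider = providers.first(where: {
                    $0.hasItemConformingToTypeIdentifier(UTType.pdf.identifier)
                }) else { return false }
                // Ask for the same identifier that was declared in onDrop.
                provider.loadDataRepresentation(forTypeIdentifier: UTType.pdf.identifier) { data, error in
                    if let data = data {
                        handlePDFData(data) // hypothetical: store the data or build a PDFDocument
                    } else {
                        print("drop failed:", error?.localizedDescription ?? "unknown error")
                    }
                }
                return true
            }
    }
}

func handlePDFData(_ data: Data) { /* placeholder */ }

I also wonder whether a file dragged in from Finder may only offer a file URL representation, in which case accepting UTType.fileURL and loading the URL would be another variant worth trying.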
Replies: 1 · Boosts: 0 · Views: 1.1k · Jul ’20
Create CIImage from CVPixelBuffer
Hi! I want to get an image from the video output and display it in a SwiftUI view. I am detecting an observation using Vision and Core ML with an example app I downloaded from Apple called "Breakfast Finder". Now I want to crop the detected image to the bounds of the observation rectangle and display it in an Image in a SwiftUI view. To do that I take the observation bounds, store them in a variable, and use them to crop the image from the CVPixelBuffer. The problem is that I'm getting a blank image. In the captureOutput override of the third code block below I create the CIImage from the pixel buffer. I'm a hobbyist developer and would like to know where to find more resources on AVFoundation. Thanks a lot. MostaZa

SwiftUI view:

import SwiftUI

struct ContentView: View {
    @State private var image: Image?
    @State private var showingCamera = false
    @State private var inputImage: UIImage?

    var body: some View {
        VStack {
            image?
                .resizable()
                .scaledToFit()
            Button("Show camera") {
                self.showingCamera = true
            }
        }
        .sheet(isPresented: $showingCamera, onDismiss: loadImage) {
            CameraCont(croppedImage: self.$inputImage)
                .edgesIgnoringSafeArea(.top)
        }
    }

    func loadImage() {
        guard let inputImage = inputImage else { return }
        image = Image(uiImage: inputImage)
    }
}

The view controller bridged to SwiftUI (UIViewControllerRepresentable):

struct CameraCont: UIViewControllerRepresentable {
    @Binding var croppedImage: UIImage?

    func makeCoordinator() -> Coordinator {
        return Coordinator(croppedImage: $croppedImage)
    }

    class Coordinator: NSObject, SendImageToSwiftUIDelegate {
        @Binding var croppedImage: UIImage?

        init(croppedImage: Binding<UIImage?>) {
            _croppedImage = croppedImage
        }

        func sendImage(image: UIImage) {
            croppedImage = image
            print("Delegate called")
        }
    }

    typealias UIViewControllerType = VisionObjectRecognitionViewController

    func makeUIViewController(context: Context) -> VisionObjectRecognitionViewController {
        let vision = VisionObjectRecognitionViewController()
        vision.sendImageDelegate = context.coordinator
        return vision
    }

    func updateUIViewController(_ uiViewController: VisionObjectRecognitionViewController, context: Context) {
    }
}

The protocol used to communicate with the class:

protocol SendImageToSwiftUIDelegate {
    func sendImage(image: UIImage)
}

The subclass of the view controller from Apple's example, where the image gets displayed. This is where I convert the pixel buffer into a CIImage:

class VisionObjectRecognitionViewController: ViewController {
    var sendImageDelegate: SendImageToSwiftUIDelegate!

    private var detectionOverlay: CALayer! = nil
    private var observationWidthBiggherThan180 = false
    private var rectToCrop = CGRect()

    // Vision parts
    private var requests = [VNRequest]()

    @discardableResult
    func setupVision() -> NSError? {
        // Setup Vision parts
        let error: NSError! = nil

        guard let modelURL = Bundle.main.url(forResource: "exampleModelFP16", withExtension: "mlmodelc") else {
            return NSError(domain: "VisionObjectRecognitionViewController", code: -1, userInfo: [NSLocalizedDescriptionKey: "Model file is missing"])
        }
        do {
            let visionModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))
            let objectRecognition = VNCoreMLRequest(model: visionModel, completionHandler: { (request, error) in
                DispatchQueue.main.async(execute: {
                    // perform all the UI updates on the main queue
                    if let results = request.results {
                        self.drawVisionRequestResults(results)
                    }
                })
            })
            self.requests = [objectRecognition]
        } catch let error as NSError {
            print("Model loading went wrong: \(error)")
        }

        return error
    }

    func drawVisionRequestResults(_ results: [Any]) {
        CATransaction.begin()
        CATransaction.setValue(kCFBooleanTrue, forKey: kCATransactionDisableActions)
        detectionOverlay.sublayers = nil // remove all the old recognized objects
        for observation in results where observation is VNRecognizedObjectObservation {
            guard let objectObservation = observation as? VNRecognizedObjectObservation else {
                continue
            }
            // Select only the label with the highest confidence.
            let topLabelObservation = objectObservation.labels[0]
            let objectBounds = VNImageRectForNormalizedRect(objectObservation.boundingBox, Int(bufferSize.width), Int(bufferSize.height))

            let shapeLayer = self.createRoundedRectLayerWithBounds(objectBounds)
            let textLayer = self.createTextSubLayerInBounds(objectBounds,
                                                            identifier: topLabelObservation.identifier,
                                                            confidence: topLabelObservation.confidence)
            shapeLayer.addSublayer(textLayer)
            detectionOverlay.addSublayer(shapeLayer)

            if shapeLayer.bounds.size.width > 180 {
                // perform crop and apply perspective filter
                self.observationWidthBiggherThan180 = true
                self.rectToCrop = shapeLayer.bounds
            } else {
                self.observationWidthBiggherThan180 = false
            }
        }
        self.updateLayerGeometry()
        CATransaction.commit()
    }

    // MARK: - Function to capture the output of the camera and perform the image recognition
    override func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }

        if observationWidthBiggherThan180 {
            let imageToCrop = CIImage(cvPixelBuffer: pixelBuffer)
            let perspectiveTransform = CIFilter(name: "CIPerspectiveTransform")
            sendImageDelegate.sendImage(image: UIImage(ciImage: imageToCrop))
            print(imageToCrop)
        }

        let exifOrientation = exifOrientationFromDeviceOrientation()

        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: [:])
        do {
            try imageRequestHandler.perform(self.requests)
        } catch {
            print(error)
        }
    }

    override func setupAVCapture() {
        super.setupAVCapture()

        // setup Vision parts
        setupLayers()
        updateLayerGeometry()
        setupVision()

        // start the capture
        startCaptureSession()
    }

    func setupLayers() {
        detectionOverlay = CALayer() // container layer that has all the renderings of the observations
        detectionOverlay.name = "DetectionOverlay"
        detectionOverlay.bounds = CGRect(x: 0.0, y: 0.0, width: bufferSize.width, height: bufferSize.height)
        detectionOverlay.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)
        rootLayer.addSublayer(detectionOverlay)
    }

    func updateLayerGeometry() {
        let bounds = rootLayer.bounds
        var scale: CGFloat

        let xScale: CGFloat = bounds.size.width / bufferSize.height
        let yScale: CGFloat = bounds.size.height / bufferSize.width

        scale = fmax(xScale, yScale)
        if scale.isInfinite {
            scale = 1.0
        }
        CATransaction.begin()
        CATransaction.setValue(kCFBooleanTrue, forKey: kCATransactionDisableActions)

        // rotate the layer into screen orientation and scale and mirror
        detectionOverlay.setAffineTransform(CGAffineTransform(rotationAngle: CGFloat(.pi / 2.0)).scaledBy(x: scale, y: -scale))
        // center the layer
        detectionOverlay.position = CGPoint(x: bounds.midX, y: bounds.midY)

        CATransaction.commit()
    }

    func createTextSubLayerInBounds(_ bounds: CGRect, identifier: String, confidence: VNConfidence) -> CATextLayer {
        let textLayer = CATextLayer()
        textLayer.name = "Object Label"
        let formattedString = NSMutableAttributedString(string: String(format: "\(identifier)\nConfidence: %.2f", confidence))
        let largeFont = UIFont(name: "Helvetica", size: 24.0)!
        formattedString.addAttributes([NSAttributedString.Key.font: largeFont], range: NSRange(location: 0, length: identifier.count))
        textLayer.string = formattedString
        textLayer.bounds = CGRect(x: 0, y: 0, width: bounds.size.height - 10, height: bounds.size.width - 10)
        textLayer.position = CGPoint(x: bounds.midX, y: bounds.midY)
        textLayer.shadowOpacity = 0.7
        textLayer.shadowOffset = CGSize(width: 2, height: 2)
        textLayer.foregroundColor = CGColor(colorSpace: CGColorSpaceCreateDeviceRGB(), components: [0.0, 0.0, 0.0, 1.0])
        textLayer.contentsScale = 2.0 // retina rendering
        // rotate the layer into screen orientation and scale and mirror
        textLayer.setAffineTransform(CGAffineTransform(rotationAngle: CGFloat(.pi / 2.0)).scaledBy(x: 1.0, y: -1.0))
        return textLayer
    }

    func createRoundedRectLayerWithBounds(_ bounds: CGRect) -> CALayer {
        let shapeLayer = CALayer()
        shapeLayer.bounds = bounds
        shapeLayer.position = CGPoint(x: bounds.midX, y: bounds.midY)
        shapeLayer.name = "Found Object"
        shapeLayer.backgroundColor = CGColor(colorSpace: CGColorSpaceCreateDeviceRGB(), components: [1.0, 1.0, 0.2, 0.4])
        shapeLayer.cornerRadius = 7
        return shapeLayer
    }
}
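For reference, here is a minimal sketch of the cropping step I'm aiming for, assuming the observation's normalized bounding box is available; rendering through a CIContext into a CGImage (instead of wrapping the CIImage directly) is one variant I've seen suggested for blank-image cases, not code from the sample:

import UIKit
import CoreImage
import Vision

// Minimal sketch: crop a pixel buffer to a Vision bounding box and return a
// CGImage-backed UIImage.
func croppedImage(from pixelBuffer: CVPixelBuffer,
                  normalizedBoundingBox: CGRect,
                  context: CIContext = CIContext()) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    // Convert the normalized Vision rect into pixel coordinates of the buffer.
    let cropRect = VNImageRectForNormalizedRect(normalizedBoundingBox,
                                                Int(ciImage.extent.width),
                                                Int(ciImage.extent.height))
    let cropped = ciImage.cropped(to: cropRect)
    // Rasterize explicitly; a UIImage built straight from a CIImage can show up
    // blank in some UIKit/SwiftUI contexts because nothing ever renders it.
    guard let cgImage = context.createCGImage(cropped, from: cropped.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}

In the captureOutput override this could presumably be called with objectObservation.boundingBox before handing the result to sendImageDelegate, but I'm not sure that is the right place to do it.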
Replies: 0 · Boosts: 0 · Views: 2.3k · May ’20