Posts

Post not yet marked as solved
0 Replies
540 Views
I'd like to get the URL of the ReplayKit screen recording instead of saving the video to my camera roll or forwarding it. A WWDC 2017 video mentioned that to get the URL, you can use the following function:

func stopRecording(withOutput url: URL, completionHandler: ((Error?) -> Void)? = nil)

But I am having a hard time figuring out how to call this function. I have a start-recording @IBAction button and a stop-recording @IBAction button, and the screen recording itself works fine. Can someone show me how and/or where to add this stopRecording call so I can get the URL of the screen recording? I appreciate any help pointing me in the right direction; I am still learning Xcode. Thank you!

@IBAction func StartScreenRec(_ sender: Any) {
    screenrecorder.startRecording { (error) in
        if let error = error {
            print(error)
        }
        self.ScreenStartRecordBtn.isHidden = true
        self.StopScreenRecBtn.isHidden = false
    }
}

@IBAction func StopScreenRec(_ sender: Any) {
    screenrecorder.stopRecording { (previewVC, error) in
        if let previewVC = previewVC {
            previewVC.modalPresentationStyle = .fullScreen
            previewVC.previewControllerDelegate = self
            self.present(previewVC, animated: true, completion: nil)
        }
        if let error = error {
            print(error)
        }
        self.ScreenStartRecordBtn.isHidden = false
        self.StopScreenRecBtn.isHidden = true
    }
}
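One approach, as a minimal sketch rather than a confirmed answer: RPScreenRecorder's stopRecording(withOutput:completionHandler:) is available from iOS 14, so the stop action can pass in a file URL of its own choosing and read the finished movie from there once the handler fires. The temporary-directory path and file name below are arbitrary assumptions; screenrecorder and the button names match the code above.

import ReplayKit

@IBAction func StopScreenRec(_ sender: Any) {
    // Choose where the finished movie should be written; this
    // temporary-directory location is just an example.
    let outputURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("screenRecording.mp4")

    screenrecorder.stopRecording(withOutput: outputURL) { error in
        if let error = error {
            print(error)
            return
        }
        // outputURL now points to the recorded movie file.
        print("Recording written to \(outputURL)")
        DispatchQueue.main.async {
            self.ScreenStartRecordBtn.isHidden = false
            self.StopScreenRecBtn.isHidden = true
        }
    }
}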
Posted by AthenaY. Last updated.
Post marked as solved
2 Replies
736 Views
Is there a way to download ALL image files from a website without specifying the names of the image files? For example, in the code below, instead of specifying each image filename, is there a one-line call, maybe something like listAll(), to download all the image files? Or is there another method you can recommend?

static let BASEPATH = "website url address here"
static let payloadData: [ReferenceImagePayload] = [
    ReferenceImagePayload(name: "J3132SitePIc", extensionType: FileSuffix.PNG.name, orientation: .up, widthInM: 0.1),
    ReferenceImagePayload(name: "J41SitePic", extensionType: FileSuffix.PNG.name, orientation: .up, widthInM: 0.1),
    ReferenceImagePayload(name: "NWAIRLINKSAAB340large1", extensionType: FileSuffix.JPEG.name, orientation: .up, widthInM: 0.1),
]
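If the images are hosted in Firebase Storage (as in the related post below), its listAll API can enumerate everything under a path without hard-coding names. A minimal sketch, assuming the Firebase Storage SDK and a hypothetical "referenceImages" folder; note that newer SDK versions pass an optional result to the completion, so the exact signature may differ:

import FirebaseStorage

let folderRef = Storage.storage().reference().child("referenceImages")

// listAll fetches every item under the folder in one call.
folderRef.listAll { result, error in
    if let error = error {
        print(error)
        return
    }
    for item in result.items {
        // item.name is the file name, e.g. "J3132SitePIc.png".
        item.getData(maxSize: 5 * 1024 * 1024) { data, error in
            guard let data = data, error == nil else { return }
            // Build a ReferenceImagePayload (or UIImage) from the data here.
            print("Downloaded \(item.name), \(data.count) bytes")
        }
    }
}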
Posted by AthenaY. Last updated.
Post not yet marked as solved
0 Replies
519 Views
Does anyone know how to set up Assets.xcassets in an Xcode project to link to images in Firebase Storage, or any cloud storage? Instead of loading hundreds of images into the asset folder, I have those images in my Firebase Storage and want my app to go there to search for and retrieve images, downloading only what the user needs instead of downloading all hundreds of them and searching locally. Is there a way to do this? Is it possible? Maybe Firebase Functions? Or an Xcode feature? I appreciate any ideas, suggestions, or solutions you can offer. Thank you!!!
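One point worth noting: Assets.xcassets only holds resources bundled into the app at build time, so it cannot reference remote files. The usual pattern is to fetch individual images on demand. A minimal sketch, assuming the Firebase Storage SDK and a hypothetical "images" folder in the bucket:

import FirebaseStorage
import UIKit

// Downloads a single named image from Firebase Storage on demand.
func fetchImage(named name: String, completion: @escaping (UIImage?) -> Void) {
    let ref = Storage.storage().reference().child("images/\(name)")
    ref.getData(maxSize: 5 * 1024 * 1024) { data, error in
        guard let data = data, error == nil else {
            completion(nil)
            return
        }
        completion(UIImage(data: data))
    }
}

// Usage: fetchImage(named: "J41SitePic.png") { image in /* show it */ }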
Posted by AthenaY. Last updated.
Post not yet marked as solved
1 Reply
2.1k Views
Hello, I am unable to get my on-device speech recognition to work continuously. I know there is a one-minute limit for server-based recognition, but WWDC 2019 revealed that there is no limit for on-device recognition (video linked below). Can anyone tell me what's wrong with my code, or what I need to do to make recognition continuous? It stops the audio engine once a transcription is finalized, but I don't want it to. Even if I bypass stopping the audio engine, it no longer recognizes or transcribes any speech. I look forward to any insight or recommendations. Thank you!!!

https://developer.apple.com/videos/play/wwdc2019/256/

private func startRecording() throws {
    // Cancel the previous task if it's running.
    recognitionTask?.cancel()
    self.recognitionTask = nil

    // Configure the audio session for the app.
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.record, mode: .measurement, options: .duckOthers)
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    let inputNode = audioEngine.inputNode

    // Create and configure the speech recognition request.
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create a SFSpeechAudioBufferRecognitionRequest object")
    }
    recognitionRequest.shouldReportPartialResults = true

    // Keep speech recognition data on device.
    // (Note: this must be true for on-device recognition; the original
    // code set it to false, which contradicts the comment and forces
    // server-based recognition with its one-minute limit.)
    if #available(iOS 13, *) {
        recognitionRequest.requiresOnDeviceRecognition = true
    }

    // Create a recognition task for the speech recognition session.
    // Keep a reference to the task so that it can be canceled.
    recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest) { result, error in
        var isFinal = false

        if let result = result {
            // Update the text view with the results.
            self.textView.text = result.bestTranscription.formattedString
            isFinal = result.isFinal
            print("Text \(result.bestTranscription.formattedString)")
        }

        if error != nil || isFinal {
            // Stop recognizing speech if there is a problem.
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            self.recordButton.isEnabled = true
            self.recordButton.setTitle("Start Journaling", for: [])
        }
    }

    // Configure the microphone input.
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        self.recognitionRequest?.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()

    // Let the user know to start talking.
    textView.text = "(Go ahead, I'm listening)"
}
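A hedged sketch of two adjustments that may help (assumptions, not a confirmed fix): request on-device recognition only when the recognizer actually supports it, and for continuous operation restart the task after it finalizes instead of leaving the engine stopped. Names match the code above.

// 1. Request on-device recognition only when supported.
if #available(iOS 13, *), speechRecognizer.supportsOnDeviceRecognition {
    recognitionRequest.requiresOnDeviceRecognition = true
}

// 2. In the recognition task's completion block, restart instead of
//    returning to the idle state, so transcription keeps running.
//    This restart-on-final pattern is a common workaround, not an
//    officially documented approach.
if error != nil || isFinal {
    self.audioEngine.stop()
    inputNode.removeTap(onBus: 0)
    self.recognitionRequest = nil
    self.recognitionTask = nil
    try? self.startRecording()   // begin a fresh task immediately
}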
Posted by AthenaY. Last updated.
Post not yet marked as solved
9 Replies
1.5k Views
My trained model works fine in Xcode (11 beta) on my computer, but when I deploy it to my iPhone to try it out, it doesn't predict anything. Has anyone encountered this problem and have an idea why, or a solution? Thank you!
Posted by AthenaY. Last updated.
Post not yet marked as solved
2 Replies
1.4k Views
Hello, I got my trial text classifier model code working in a playground, and sentiment.write wrote the model file to the URL /var/folders/38/r6qd24_11kqffnhzv1jjd6pr0000gn/T/com.apple.dt.Xcode.pg/resources/5FB25281-2738-45C5-B8DC-EBF60F0070F4/output.json.mlmodel. But where exactly is this file, so I can add it to my app? I hope someone can help. Thank you!
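One workaround, as a minimal sketch assuming Create ML's MLTextClassifier and a hypothetical Desktop path: write the model to an explicit, easy-to-find URL rather than the playground's temporary directory, then drag the resulting .mlmodel file into the Xcode project navigator.

import CreateML
import Foundation

// Replace "yourname" with your macOS user name; the path is an example.
let outputURL = URL(fileURLWithPath: "/Users/yourname/Desktop/SentimentClassifier.mlmodel")
try sentiment.write(to: outputURL)
// SentimentClassifier.mlmodel now sits on the Desktop, ready to be
// added to the Xcode project.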
Posted by AthenaY. Last updated.