Posts

Post not yet marked as solved · 1 Reply · 153 Views
I am trying to build a Swift app for iOS that depends on an open-source API called BrainFlow, which is written in C++. Due to App Store restrictions, the API must be built by hand into a framework so that it can be signed with my developer certificate. So that's what I did: I now have a signed framework that I can embed in my app and that Apple is OK with.

BrainFlow depends on a third-party library called SimpleBLE, an open-source API for connecting to BLE devices. So I built that too, into its own signed framework. So far so good.

The problem comes when my app tries to connect to a BLE device via BrainFlow. BrainFlow explicitly loads its third-party libraries like plug-ins, and it is hardcoded to assume that all dylibs are located in the same directory. However, when I build the BrainFlow framework so that it embeds the SimpleBLE dylib in the same directory as the BrainFlow dylib, App Store Connect rejects my app due to a policy violation.

One solution might be to query dyld and have it return the resolved location of the SimpleBLE dylib. For example, if the dylib is referenced as @rpath/libsimpleble-c.dylib, the query would return its full path after resolving @rpath. I do not know how to do that, or even whether it is possible. Another solution might be to embed the SimpleBLE dylib into the BrainFlow framework in such a way that it does not violate App Store policy. Again, I have not been able to figure out how to do that, or whether it is possible.

The relevant BrainFlow code is in the init_dll_loader() function of the following file: https://github.com/brainflow-dev/brainflow/blob/master/src/board_controller/ble_lib_board.cpp

That function calls DLLLoader(), which can be found here: https://github.com/brainflow-dev/brainflow/blob/master/src/utils/inc/runtime_dll_loader.h

Thanks in advance for your thoughtful suggestions and comments.
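For reference, here is a minimal sketch of the dyld-query idea above, in Swift. It assumes SimpleBLE's C interface exports a symbol named simpleble_adapter_get_count (any symbol known to live in the library would do): dlopen() accepts an @rpath name and resolves it against the process's runtime search paths, and dladdr() then reports the fully resolved on-disk path. I have not verified this inside a signed, App Store-built app.

import Darwin

// Resolve the on-disk location of a dylib that dyld can load by its @rpath
// name. knownSymbol must be any symbol exported by that dylib.
func resolvedPath(ofLibrary rpathName: String, knownSymbol: String) -> String? {
    guard let handle = dlopen(rpathName, RTLD_NOW) else { return nil }
    defer { dlclose(handle) }
    guard let symbol = dlsym(handle, knownSymbol) else { return nil }
    var info = Dl_info()
    guard dladdr(symbol, &info) != 0, let cPath = info.dli_fname else { return nil }
    return String(cString: cPath)
}

// Usage (the symbol name is my assumption about SimpleBLE's C API):
// let path = resolvedPath(ofLibrary: "@rpath/libsimpleble-c.dylib",
//                         knownSymbol: "simpleble_adapter_get_count")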
Post not yet marked as solved · 0 Replies · 290 Views
I am using SpeechSynthesizer and SpeechRecognizer. After a recognition task completes, the SpeechSynthesizer stops producing audible output. I am using the latest SwiftUI in Xcode 15.2, deploying to an iPhone 14 Pro running iOS 17.3.1.

Here's my SpeechSynthesizer function:

func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(identifier: self.appState.chatParameters.voiceIdentifer)
    utterance.rate = 0.5
    speechSynthesizer.speak(utterance)
}

And here's the code for setting up the SpeechRecognizer (borrowed from https://www.linkedin.com/pulse/transcribing-audio-text-swiftui-muhammad-asad-chattha):

private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
    print("prepareEngine()")
    let audioEngine = AVAudioEngine()

    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = false
    request.requiresOnDeviceRecognition = true

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord)
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

    let inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        request.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()

    return (audioEngine, request)
}

SpeechSynthesizer works fine as long as I don't call prepareEngine(). Thanks in advance for any assistance.
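For context, one workaround I am experimenting with is to restore a playback-oriented session once the recognition task delivers its final result, on the theory that the .playAndRecord session from prepareEngine() is still active and routing synthesizer output away from the speaker. This is a sketch, not a confirmed fix; the category and mode choices are my assumptions:

import AVFoundation

// Restore a playback-friendly audio session after recognition ends so that
// AVSpeechSynthesizer output is audible again.
func restorePlaybackSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playback, mode: .spokenAudio)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        print("Could not restore playback session: \(error)")
    }
}

An alternative I have seen suggested is to keep .playAndRecord in prepareEngine() but add the .defaultToSpeaker option; I have not confirmed either approach.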
Post not yet marked as solved · 1 Reply · 275 Views
I am building an image recognition app for iPhone using Swift, Vision, Core ML, and the pretrained Resnet50 model. When I run MLUpdateTask, it fails with an error saying that my model (Resnet50) needs to be re-compiled. I am using Xcode 14.3.1 on Ventura 13.4.1 (c).

Here's the error message:

EXCEPTION from MLUpdateTask:
Error Domain=com.apple.CoreML Code=6 "Failed to unarchive update parameters. Model should be re-compiled." UserInfo={NSLocalizedDescription=Failed to unarchive update parameters. Model should be re-compiled.}

Here's the code snippet:

struct ImageCoreML {
    let sourceModelURL = Bundle.main.url(forResource: "Resnet50", withExtension: "mlmodelc")!

    static let docsURL: URL = {
        return FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    }()

    let updatedModelURL = docsURL.appendingPathComponent("ImageID")
        .appendingPathExtension("mlmodelc")

    init() {
        // Copy the bundled model into Documents on first launch so it can be updated in place.
        if !FileManager.default.fileExists(atPath: updatedModelURL.path) {
            do {
                try FileManager.default.copyItem(at: sourceModelURL, to: updatedModelURL)
                return
            } catch {
                print("copy models Error: \(error)")
            }
        }
    }

    private func completionHandler(_ context: MLUpdateContext) {
        let updatedModel = context.model
        do {
            try updatedModel.write(to: self.updatedModelURL)
            print("Updated model saved to:\n\t\(self.updatedModelURL)")
        } catch {
            print("Could not save updated model to the file system: \(error)")
        }
    }

    // Update the CNN model with the saved image.
    func updateModel(croppedImage: CVPixelBuffer?, personLabel: String) {
        print("updateModel()")
        guard let pixelBuffer = croppedImage else {
            print("ERROR: cannot convert cropped image to cgImage buffer")
            return
        }

        var featureProviders = [MLFeatureProvider]()
        let imageFeature = MLFeatureValue(pixelBuffer: pixelBuffer)
        let personLabel = MLFeatureValue(string: personLabel)
        let dataPointFeatures: [String: MLFeatureValue] = ["image": imageFeature,
                                                           "personID": personLabel]
        if let provider = try? MLDictionaryFeatureProvider(dictionary: dataPointFeatures) {
            featureProviders.append(provider)
        }
        let trainingData = MLArrayBatchProvider(array: featureProviders)

        do {
            let updateTask = try MLUpdateTask(forModelAt: self.updatedModelURL,
                                              trainingData: trainingData,
                                              configuration: nil,
                                              completionHandler: completionHandler)
            updateTask.resume()
        } catch {
            print("EXCEPTION from MLUpdateTask:\n\(error)")
            return
        }
    }
}
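Since the error message asks for recompilation, one direction I am considering is to ship the uncompiled Resnet50.mlmodel in the bundle and compile it on-device with MLModel.compileModel(at:), instead of copying a prebuilt .mlmodelc that may not match this OS's Core ML runtime. This is an untested sketch; my current target bundles only the compiled model, so the resource lookup below is an assumption:

import CoreML

// Compile the bundled .mlmodel on-device and install the resulting .mlmodelc
// in Documents so that MLUpdateTask runs against a freshly compiled copy.
func compileAndInstallModel() throws -> URL {
    guard let sourceURL = Bundle.main.url(forResource: "Resnet50", withExtension: "mlmodel") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let tempCompiledURL = try MLModel.compileModel(at: sourceURL)
    let docsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let installedURL = docsURL.appendingPathComponent("ImageID").appendingPathExtension("mlmodelc")
    if FileManager.default.fileExists(atPath: installedURL.path) {
        try FileManager.default.removeItem(at: installedURL)
    }
    try FileManager.default.moveItem(at: tempCompiledURL, to: installedURL)
    return installedURL
}

I also wonder whether the stock Resnet50 model is updatable at all; as I understand it, MLUpdateTask requires a model whose isUpdatable flag is set.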
Post not yet marked as solved · 2 Replies · 860 Views
My Swift app accesses the USB serial port via a C++ API built as a universal dylib. When the dylib is added as a framework via "Frameworks, Libraries, and Embedded Content", it works fine. When instead the dylib is included as a binary target within an xcframework inside a package, I get an error indicating a permissions issue.

The package looks like this:

// swift-tools-version:5.3
import PackageDescription

let package = Package(
    name: "BrainFlow",
    platforms: [
        .macOS(.v10_14)
    ],
    products: [
        .library(
            name: "BrainFlow",
            targets: ["BrainFlow", "BoardController", "BrainBitLib", "DataHandler",
                      "GanglionLib", "MLModule", "MuseLib"])
    ],
    dependencies: [
        // Dependencies declare other packages that this package depends on.
    ],
    targets: [
        .target(
            name: "BrainFlow"
        ),
        .binaryTarget(
            name: "BoardController",
            path: "BoardController.xcframework"
        ),
        .binaryTarget(
            name: "BrainBitLib",
            path: "BrainBitLib.xcframework"
        ),
        .binaryTarget(
            name: "DataHandler",
            path: "DataHandler.xcframework"
        ),
        .binaryTarget(
            name: "GanglionLib",
            path: "GanglionLib.xcframework"
        ),
        .binaryTarget(
            name: "MLModule",
            path: "MLModule.xcframework"
        ),
        .binaryTarget(
            name: "MuseLib",
            path: "MuseLib.xcframework"
        ),
        .testTarget(
            name: "BrainFlowTests",
            dependencies: ["BrainFlow"]
        )
    ]
)
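For what it's worth, a small runtime probe along these lines may narrow down what the permissions issue actually is; dlerror() usually spells out whether dyld rejected the library for code-signing or file-permission reasons. The install name in the usage comment is my assumption about how the framework is laid out:

import Darwin

// Try to load one embedded library by name and print dyld's complaint if it
// fails; signing, quarantine, and permission problems all surface here.
func probeLibrary(named installName: String) {
    if let handle = dlopen(installName, RTLD_NOW) {
        print("Loaded \(installName)")
        dlclose(handle)
    } else if let message = dlerror() {
        print("dlopen failed: \(String(cString: message))")
    }
}

// e.g. probeLibrary(named: "@rpath/BoardController.framework/BoardController")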
Post not yet marked as solved · 0 Replies · 623 Views
How do I adjust the latency timer for the AppleUSBFTDI driver?

I am developing an app in Swift using Xcode on a MacBook Pro M1 running Big Sur, for clinical brain-computer interface (BCI) research. The app needs very low-latency streaming from an external USB device. The external device is a headset that connects via Bluetooth to an FT231X chip mounted on a USB-serial dongle; the FT231X chip reads timestamped EEG data from the headset.

The issue is that the AppleUSBFTDI driver buffers the packets coming in from the headset, which causes jitter in the timestamps. With FTDI's proprietary drivers, the usual solution is to reduce the latency timer to 1 ms by adding new key/value pairs to the driver's Info.plist. Is there a similar solution for Apple's built-in driver?
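For reference, the closest analogue I have found to FTDI's latency-timer setting is the IOSSDATALAT ioctl declared in <IOKit/serial/ioss.h>, which asks a serial driver to deliver read data within a given number of microseconds. Whether AppleUSBFTDI honors it is exactly what I am unsure about. A sketch follows, with the device path as a placeholder; since ioss.h is not imported into Swift, I spell out the request constant (IOSSDATALAT is _IOW('T', 0, unsigned long), i.e. 0x80085400 on 64-bit macOS). If ioctl is not visible from Swift in your configuration, the same call is a one-line C shim.

import Darwin

// Ask the serial driver for 1 ms read latency via IOSSDATALAT.
let portPath = "/dev/cu.usbserial-XXXX"  // placeholder for the dongle's /dev entry
let fd = open(portPath, O_RDWR | O_NOCTTY | O_NONBLOCK)
if fd >= 0 {
    var latencyMicroseconds: CUnsignedLong = 1000
    if ioctl(fd, UInt(0x80085400), &latencyMicroseconds) == -1 {
        perror("IOSSDATALAT")
    }
    close(fd)
} else {
    perror("open")
}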