I've been playing around with EventKit in SwiftUI, trying to build my own version of a Reminders client. However, most of the tutorials and articles I've found only explain it from a UIKit perspective, and I'm not sure how to translate that to SwiftUI. The EventKit documentation has also been of little help.
Is there an article or example somewhere that shows how to do this?
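For reference, this is roughly the kind of thing I'm attempting, pieced together from the UIKit-oriented articles (just a sketch, not working code; RemindersStore is my own name and I'm not sure this is even the right pattern for SwiftUI):

import EventKit
import SwiftUI

// My guess at wrapping EKEventStore in an ObservableObject for SwiftUI.
final class RemindersStore: ObservableObject {
    private let store = EKEventStore()
    @Published var reminders: [EKReminder] = []

    func load() {
        store.requestAccess(to: .reminder) { granted, _ in
            guard granted else { return }
            let predicate = self.store.predicateForReminders(in: nil)
            self.store.fetchReminders(matching: predicate) { found in
                DispatchQueue.main.async {
                    self.reminders = found ?? []
                }
            }
        }
    }
}

struct RemindersList: View {
    @StateObject private var store = RemindersStore()

    var body: some View {
        List(store.reminders, id: \.calendarItemIdentifier) { reminder in
            Text(reminder.title ?? "Untitled")
        }
        .onAppear { store.load() }
    }
}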
Hi, I was following along with this documentation (https://developer.apple.com/documentation/soundanalysis/analyzing_audio_to_classify_sounds), trying to classify sounds within my SwiftUI app. Here's what I have:
import AVFoundation
import CoreML
import SoundAnalysis

let noiseDetector = NoiseDetector()
let model: MLModel = noiseDetector.model
let analysisQueue = DispatchQueue(label: "com.apple.AnalysisQueue")

// Classification label I want to show in the UI.
public var noiseType: String = "default"

class ResultsObserver: NSObject, SNResultsObserving, ObservableObject {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let classification = result.classifications.first else { return }

        noiseType = classification.identifier

        let formattedTime = String(format: "%.2f", result.timeRange.start.seconds)
        print("Analysis result for audio at time: \(formattedTime)")

        let confidence = classification.confidence * 100.0
        let percent = String(format: "%.2f%%", confidence)
        print("\(classification.identifier): \(percent) confidence.\n")
    }

    func request(_ request: SNRequest, didFailWithError error: Error) {
        print("The analysis failed: \(error.localizedDescription)")
    }

    func requestDidComplete(_ request: SNRequest) {
        print("The request completed successfully!")
    }
}

func startAudioEngine() {
    let audioEngine: AVAudioEngine = AVAudioEngine()
    let inputBus = AVAudioNodeBus(0)
    let inputFormat = audioEngine.inputNode.inputFormat(forBus: inputBus)

    do {
        try audioEngine.start()
    } catch {
        print("Unable to start AVAudioEngine: \(error.localizedDescription)")
    }

    let streamAnalyzer = SNAudioStreamAnalyzer(format: inputFormat)
    let resultsObserver = ResultsObserver()

    do {
        let request = try SNClassifySoundRequest(mlModel: model)
        try streamAnalyzer.add(request, withObserver: resultsObserver)
    } catch {
        print("Unable to prepare request: \(error.localizedDescription)")
        return
    }

    let analysisQueue = DispatchQueue(label: "com.apple.AnalysisQueue")
    audioEngine.inputNode.installTap(onBus: inputBus,
                                     bufferSize: 8192,
                                     format: inputFormat) { buffer, time in
        analysisQueue.async {
            streamAnalyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
        }
    }
}
Now obviously I wouldn't be asking this if it was working; I'm just not sure how it's broken. I'm sure it's because I've read the documentation wrong, but I'm not sure how else to interpret it. Secondly, I tried injecting some print statements into the startAudioEngine function, and from what I could tell it never actually reaches the streamAnalyzer.analyze call, although I'm not entirely sure what's causing that.
All I want to do is display the classification as text in the UI, something like the sketch below, in case that's helpful.
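This is roughly what I'm picturing for the view side (just a sketch of the goal, not code that works today; NoiseViewModel is a hypothetical wrapper that would somehow receive the classification from the observer):

import SwiftUI
import Combine

// Hypothetical wrapper that would be updated with each classification result.
final class NoiseViewModel: ObservableObject {
    @Published var latestClassification: String = "listening..."
}

struct NoiseView: View {
    @StateObject private var viewModel = NoiseViewModel()

    var body: some View {
        Text(viewModel.latestClassification)
            .font(.title)
            .onAppear { startAudioEngine() }
    }
}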
Thanks for any help; I'm lost here.
I've just begun running into this error, and I've seen that a few others have had this problem as well, but there never seems to be a general solution.
Full error: The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions
I know my code isn't necessarily optimized, per se, but I don't think this should be happening:
var body: some View {
    return NavigationView {
        List {
            ForEach(tasks, id: \.dueDate) {
                TaskRow(task: $0)
            }.onDelete(perform: deleteTask)

            Section(header: Text("Projects")
                        .font(.title2)
                        .fontWeight(.bold)
                        .foregroundColor(Color.pink)) {
                ForEach(projects, id: \.projectTitle) {
                    ProjectRow(project: $0)
                }.onDelete(perform: deleteProject)
            }
        }.toolbar {
            ToolbarItem(placement: .primaryAction) {
                #if os(iOS)
                EditButton()
                    .padding(10)
                    .background(Color.blue)
                    .foregroundColor(.white)
                    .cornerRadius(10)
                #endif
            }
            ToolbarItem(placement: .bottomBar) {
                Button(action: {
                    self.isAddingTask = true
                    self.isAddingProject = false
                    self.isPresenting = true
                }) {
                    Text("Add Task")
                        .fontWeight(.semibold)
                }.padding(10)
                    .background(Color.blue)
                    .foregroundColor(.white)
                    .cornerRadius(10)
            }
            ToolbarItem(placement: .bottomBar) {
                Button(action: {
                    self.isAddingTask = false
                    self.isAddingProject = true
                    self.isPresenting = true
                }) {
                    Text("Add Project")
                        .fontWeight(.semibold)
                }.padding(10)
                    .background(Color.pink)
                    .foregroundColor(.white)
                    .cornerRadius(10)
            }
        }
        .sheet(isPresented: $isPresenting) {
            if isAddingProject {
                AddProject() { projectTitle in
                    self.addProject(projectTitle: projectTitle)
                    resetModal()
                }
            } else if isAddingTask {
                AddTask() { taskDescription, dueDate in
                    self.addTask(taskDescription: taskDescription, dueDate: dueDate)
                    resetModal()
                }
            }
        }
        .navigationBarTitle(Text("Tasky"))
    }
}
The error says it's happening at the NavigationView, but I have a sneaking suspicion Xcode doesn't actually know where the real problem is.
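I assume "breaking up the expression" means splitting the body into smaller pieces, something like pulling each toolbar button out into its own computed property. Here's a rough sketch of what I've been considering (I haven't confirmed this actually fixes it):

// Rough sketch: pulling one toolbar button out so the type checker
// sees smaller expressions inside body.
private var addTaskButton: some View {
    Button(action: {
        self.isAddingTask = true
        self.isAddingProject = false
        self.isPresenting = true
    }) {
        Text("Add Task")
            .fontWeight(.semibold)
    }
    .padding(10)
    .background(Color.blue)
    .foregroundColor(.white)
    .cornerRadius(10)
}

The toolbar item would then just be ToolbarItem(placement: .bottomBar) { addTaskButton }, and the same idea would apply to the other button. Is that the right way to go about it, or is something else going on here?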
How can I get SwiftUI text to properly react to different background colors? It seems that text just defaults to white or black depending on whether you're in light or dark mode, without taking the background color of its parent into account. Say you want the background color to be user-configurable and the user selects something like yellow: it works fine in light mode because the text is black, but if the device is in dark mode the text becomes white and is unreadable on top of bright yellow.
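Here's a minimal example of the situation I mean (userColor just stands in for whatever color the user picked):

import SwiftUI

struct LabelCard: View {
    // Stand-in for a user-configurable color; imagine the user chose yellow.
    var userColor: Color = .yellow

    var body: some View {
        Text("Hello")
            // No explicit foregroundColor, so the text follows the system's
            // light/dark default: black in light mode, white in dark mode,
            // which is unreadable on bright yellow.
            .padding()
            .background(userColor)
    }
}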
Is there some way I can fix this?
Thanks!