Hello,
I have already sent the attachment via email. Now I am just waiting for a response.
Hello,
I am not receiving the automatic email when I send messages to Developer Technical Support.
Please help.
Hello,
I have tried twice to open a code-level support request, but I am not receiving the automatic email.
Hello!
See if this link works.
Hello!
I have a few questions.
I don't use GitHub, since I am deafblind and a Braille display user; I avoid overloading my workflow with extra resources, as accessibility tends to complicate things.
If I provide a link, should the project be temporary, or must I leave it there permanently?
As for a crash log, I don't have one, but the project is very small.
I don't have an error message either.
Thank you!
Hello,
I am putting audio from AVSpeechSynthesizer.write() into a video together with some photos.
I tried a very long text, to the point of producing a video of around 50 minutes.
When saving the video to the gallery, the app would freeze until the save finished. In other cases, the app would crash and I would have to build and run it again.
I tried using PHPhotoLibrary.shared().performChanges() instead of UISaveVideoAtPathToSavedPhotosAlbum, but the app would still freeze until the video was saved to the gallery, or it would crash and not come back.
Here's the code:
import AVFoundation
import UIKit

// Note: Misc, createBlueImage, SampleProvider, createAudioInput, createVideoInput,
// createPixelBufferAdaptor, msgErro, msgRelatos, and the image/buffer conversion
// helpers are defined elsewhere in my project.
class TesteFala: NSObject {
    private let synthesizer = AVSpeechSynthesizer()
    private var counterImage = 0
    let semaphore = DispatchSemaphore(value: 0)

    init(_ texts: [String]) {
        Misc.obj.lData.removeAll()
        Misc.obj.selectedPhotos.append(createBlueImage(CGSize(width: 100, height: 100)))
        Misc.obj.selectedPhotos.append(createBlueImage(CGSize(width: 100, height: 100)))
        super.init()
        synthesizer.delegate = self
        DispatchQueue.global().async {
            do {
                try self.nextText(texts)
                msgErro("Completed.")
            } catch {
                msgErro(error.localizedDescription)
            }
        }
    }
    func nextText(_ texts: [String]) throws {
        var audioBuffers = [CMSampleBuffer]()
        var videoBuffers = [CVPixelBuffer]()
        var lTime = [0.0]
        for text in texts {
            var time = Double.zero
            var duration = AVAudioFrameCount.zero
            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = AVSpeechSynthesisVoice(language: "pt-BR")
            utterance.rate = 0.2
            synthesizer.write(utterance) { buffer in
                if let buffer = buffer as? AVAudioPCMBuffer, let sampleBuffer = buffer.toCMSampleBuffer(presentationTime: .zero) {
                    audioBuffers.append(sampleBuffer)
                    duration += buffer.frameLength
                    time += Double(buffer.frameLength) / buffer.format.sampleRate
                }
            }
            // Wait until the delegate reports that the utterance has finished.
            semaphore.wait()
            if Misc.obj.selectedPhotos.indices.contains(counterImage) {
                let image = Misc.obj.selectedPhotos[counterImage]
                //let imageWithText = image.addText(texts[textCount])
                let pixelBuffer = image.toCVPixelBuffer()
                videoBuffers.append(pixelBuffer!)
                lTime.append(time)
                // Advance counterImage, wrapping around at the end.
                counterImage += 1
                if counterImage == Misc.obj.selectedPhotos.count {
                    counterImage = 0
                }
            }
        }
        let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("*****/output.mp4")
        try FileManager.default.createDirectory(at: url.deletingLastPathComponent(), withIntermediateDirectories: true)
        if FileManager.default.fileExists(atPath: url.path()) {
            try FileManager.default.removeItem(at: url)
        }
        let audioProvider = SampleProvider(buffers: audioBuffers)
        let videoProvider = SampleProvider(buffers: videoBuffers, lTime: lTime)
        let audioInput = createAudioInput(audioBuffers: audioBuffers)
        let videoInput = createVideoInput(videoBuffers: videoBuffers)
        let adaptor = createPixelBufferAdaptor(videoInput: videoInput)
        let assetWriter = try AVAssetWriter(outputURL: url, fileType: .mp4)
        assetWriter.add(videoInput)
        assetWriter.add(audioInput)
        assetWriter.startWriting()
        assetWriter.startSession(atSourceTime: .zero)
        let writerQueue = DispatchQueue(label: "Asset Writer Queue")
        videoInput.requestMediaDataWhenReady(on: writerQueue) {
            if let buffer = videoProvider.getNextBuffer() {
                adaptor.append(buffer, withPresentationTime: videoProvider.getPresentationTime())
            } else {
                videoInput.markAsFinished()
                // Signal only once the audio side is also done.
                if audioProvider.isFinished() {
                    self.semaphore.signal()
                }
            }
        }
        audioInput.requestMediaDataWhenReady(on: writerQueue) {
            if let buffer = audioProvider.getNextBuffer() {
                audioInput.append(buffer)
            } else {
                audioInput.markAsFinished()
                // Signal only once the video side is also done.
                if videoProvider.isFinished() {
                    self.semaphore.signal()
                }
            }
        }
        // Block until both inputs have finished.
        semaphore.wait()
        assetWriter.finishWriting {
            switch assetWriter.status {
            case .completed:
                msgRelatos("Completed.")
                UISaveVideoAtPathToSavedPhotosAlbum(url.path, nil, nil, nil)
            case .failed:
                if let error = assetWriter.error {
                    msgErro("Error: \(error.localizedDescription)")
                } else {
                    msgRelatos("Not recorded.")
                }
            default:
                msgRelatos("Unknown error.")
            }
        }
    }
}
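// toCMSampleBuffer(presentationTime:) above is a custom helper that is not shown
// in this post. A minimal sketch of one possible implementation, assuming the
// uncompressed PCM buffers that AVSpeechSynthesizer.write() delivers (an
// illustrative reconstruction, not the original helper):
extension AVAudioPCMBuffer {
    func toCMSampleBuffer(presentationTime: CMTime) -> CMSampleBuffer? {
        // Build a format description from the buffer's AudioStreamBasicDescription.
        var formatDescription: CMAudioFormatDescription?
        guard CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                             asbd: format.streamDescription,
                                             layoutSize: 0, layout: nil,
                                             magicCookieSize: 0, magicCookie: nil,
                                             extensions: nil,
                                             formatDescriptionOut: &formatDescription) == noErr,
              let formatDescription else { return nil }
        // One timing entry: each sample lasts 1/sampleRate seconds.
        var timing = CMSampleTimingInfo(duration: CMTime(value: 1, timescale: CMTimeScale(format.sampleRate)),
                                        presentationTimeStamp: presentationTime,
                                        decodeTimeStamp: .invalid)
        var sampleBuffer: CMSampleBuffer?
        guard CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                   dataBuffer: nil, dataReady: false,
                                   makeDataReadyCallback: nil, refcon: nil,
                                   formatDescription: formatDescription,
                                   sampleCount: CMItemCount(frameLength),
                                   sampleTimingEntryCount: 1, sampleTimingArray: &timing,
                                   sampleSizeEntryCount: 0, sampleSizeArray: nil,
                                   sampleBufferOut: &sampleBuffer) == noErr,
              let sampleBuffer else { return nil }
        // Copy the PCM data into the sample buffer.
        guard CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                             blockBufferAllocator: kCFAllocatorDefault,
                                                             blockBufferMemoryAllocator: kCFAllocatorDefault,
                                                             flags: 0,
                                                             bufferList: mutableAudioBufferList) == noErr else { return nil }
        return sampleBuffer
    }
}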
extension TesteFala: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        // Release the wait in nextText(_:) once the utterance has been fully written.
        semaphore.signal()
    }
}
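For the gallery save itself, here is a minimal sketch of the PHPhotoLibrary.shared().performChanges() route, assuming photo library add authorization has already been granted (saveVideoToGallery is an illustrative name; msgRelatos and msgErro are the logging helpers used above):

import Photos

func saveVideoToGallery(at url: URL) {
    PHPhotoLibrary.shared().performChanges({
        // Queue a change request that creates a new video asset from the finished file.
        _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
    }) { success, error in
        // The completion handler arrives on an arbitrary queue; hop to main before touching UI.
        DispatchQueue.main.async {
            if success {
                msgRelatos("Saved to gallery.")
            } else {
                msgErro("Save failed: \(error?.localizedDescription ?? "unknown error")")
            }
        }
    }
}

Because performChanges is asynchronous, the save itself should not block the calling thread.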
My iPhone 11 already has iOS 17 beta 6, but "Personal Voice" does not appear.
Is this feature not available on the iPhone 11, even though it already has iOS 17 beta 6?
Another question: can I use AVSpeechUtterance to capture synthesized audio with my "Personal Voice" into a file or buffer in an app I make myself?
Thank you!
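For the second question, here is a minimal sketch of requesting and using Personal Voice with AVSpeechUtterance on iOS 17, assuming the user has already created a Personal Voice in Settings (speakPersonalVoice is an illustrative name):

import AVFoundation

func speakPersonalVoice(_ text: String, with synthesizer: AVSpeechSynthesizer) {
    // Ask permission to use the user's Personal Voice (iOS 17+).
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        // Pick the first voice flagged as a Personal Voice, if one exists on the device.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice
        // write(_:toBufferCallback:) delivers the rendered audio as buffers instead of
        // playing it, so the output can go into a file or an AVAssetWriter as above.
        synthesizer.write(utterance) { buffer in
            guard let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 else { return }
            // Append pcmBuffer to an AVAudioFile, or convert it to a CMSampleBuffer here.
        }
    }
}

The synthesizer is passed in rather than created locally, because a locally created synthesizer can be deallocated before the callback has delivered all of the buffers.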