Post not yet marked as solved
Hello, I'm experiencing a sad BUG. I am deaf, blind, and in a wheelchair. I program in Xcode via a braille display. When I compile any app in Xcode, including the test app "Hello world!", and transfer it to my iPhone via USB cable or Wi-Fi, then try to open the app, VoiceOver suddenly deactivates itself and will not reactivate. I always have to turn my iPhone off and on again, and VoiceOver only responds again with help from someone who can see. I have stopped programming in Xcode, because I can no longer test apps on my iPhone. Help me.
Software on Mac: Xcode 15.4 beta
macOS: latest version
Software on iPhone: iOS 17.5
Hardware: iPhone 14 Pro Max
The bug happens every time.
Hello, I'll be objective: when I compile any app in Xcode and transfer it to my iPhone, including the test app "Hello world!", whether over the network or via USB cable, the app simply doesn't work when I open it, and the iPhone crashes. That's all. My Xcode is 15.3, iPhone 14 Pro Max, iOS 17.5, macOS latest version.
Hello! I'm trying to save videos asynchronously. I've already used performChanges without the completionHandler, but it didn't work. Can you give me an example? Assume the variable holding the file URL is named fileURL. What would this look like asynchronously?
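Not an official answer, just a sketch of one way this is commonly done on iOS 15+ with the async form of performChanges. It assumes photo-library add authorization has already been granted and that fileURL points to a movie file on disk, as in the question:

```swift
import Photos

// Sketch: save the video at `fileURL` to the photo library asynchronously.
func saveVideo(at fileURL: URL) async throws {
    try await PHPhotoLibrary.shared().performChanges {
        // Create a new photo-library asset from the video file.
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
    }
}
```

Called as `try await saveVideo(at: fileURL)` from an async context; the call throws if the library rejects the change, which replaces the old completionHandler's error parameter.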
Hello,
I am deaf-blind and I program with a braille display.
Currently, I am having difficulty with one of my apps.
Basically, I'm converting AVAudioPCMBuffer to CMSampleBuffer, and so far so good. I want to append several CMSampleBuffers to a video written with AVAssetWriter. The problem is that I can only append roughly 2,000 CMSampleBuffers.
I'm trying to create a video. In this video, I put photos that are in an array, and then I add audio from CMSampleBuffers. But I can't append many CMSampleBuffers; it stops at a little over 2,000. I don't know what else to do. Help me.
Below is a small excerpt of the code:
let queue = DispatchQueue(label: "AssetWriterQueue")
let audioProvider = SampleProvider(buffers: audioBuffers)
let videoProvider = SampleProvider(buffers: videoBuffers)
let audioInput = createAudioInput(audioBuffers: audioBuffers)
let videoInput = createVideoInput(videoBuffers: videoBuffers)
let adaptor = createPixelBufferAdaptor(videoInput: videoInput)
let assetWriter = try AVAssetWriter(outputURL: url, fileType: .mp4)
assetWriter.add(videoInput)
assetWriter.add(audioInput)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: .zero)
await withCheckedContinuation { continuation in
    videoInput.requestMediaDataWhenReady(on: queue) {
        let time = videoProvider.getPresentationTime()
        if let buffer = videoProvider.getNextBuffer() {
            adaptor.append(buffer, withPresentationTime: time)
        } else {
            videoInput.markAsFinished()
            continuation.resume()
        }
    }
}
await withCheckedContinuation { continuation in
    audioInput.requestMediaDataWhenReady(on: queue) {
        if let buffer = audioProvider.getNextBuffer() {
            audioInput.append(buffer)
        } else {
            audioInput.markAsFinished()
            continuation.resume()
        }
    }
}
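One possible cause worth noting: the excerpt appends a single buffer per requestMediaDataWhenReady callback and never checks isReadyForMoreMediaData, so appends can be attempted while the input cannot accept data. A sketch of the usual drain-loop pattern for the video input (the SampleProvider, adaptor, and queue names are taken from the excerpt above; the same shape applies to the audio input):

```swift
// Sketch: append buffers only while the input can accept them, and let
// the callback fire again once the input is ready for more data.
await withCheckedContinuation { continuation in
    videoInput.requestMediaDataWhenReady(on: queue) {
        while videoInput.isReadyForMoreMediaData {
            let time = videoProvider.getPresentationTime()
            guard let buffer = videoProvider.getNextBuffer() else {
                // No more buffers: finish this input and resume.
                videoInput.markAsFinished()
                continuation.resume()
                return
            }
            // append(_:withPresentationTime:) returns false on failure;
            // assetWriter.status and assetWriter.error say why.
            if !adaptor.append(buffer, withPresentationTime: time) {
                videoInput.markAsFinished()
                continuation.resume()
                return
            }
        }
        // Not ready any more: the callback will be invoked again later.
    }
}
```

After both inputs are marked as finished, the writer still needs finishWriting to be called so the file is completed on disk.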
I've been deaf and blind for 15 years.
I'm not good at English pronunciation, since I can't hear what I say, much less what others say.
When I went to read the phrases to record my Personal Voice in Accessibility > Personal Voice, the 150 phrases to read were in English.
How do I record the phrases in Brazilian Portuguese?
I speak Portuguese well.
My English pronunciation is very poor, and deafness contributed to that.
Help me.
Hello, I am deaf and blind, so my Apple studies are in text via braille. One question: how do I add my own voice for speech synthesis? Do I have to record it somewhere first? What is the complete process, starting with recording my voice?
Do I have to record my voice reading something and then add it as a synthesis voice?
What's the whole process? There is no text explaining this.
I found one about authorizing Personal Voice, but not the whole process, starting with the recording and so on.
Thanks!
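For whoever lands here: the recording itself is done in the system settings (Settings > Accessibility > Personal Voice), not in code; an app then requests permission and picks the recorded voice. A sketch of the app side on iOS 17+ (the utterance text is made up for illustration):

```swift
import AVFoundation

// Sketch (iOS 17+): speak with a Personal Voice recorded in Settings.
let synthesizer = AVSpeechSynthesizer()

AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else { return }

    // Personal Voices carry the .isPersonalVoice trait.
    let personalVoices = AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.voiceTraits.contains(.isPersonalVoice) }

    if let voice = personalVoices.first {
        let utterance = AVSpeechUtterance(string: "Testing my own voice.")
        utterance.voice = voice
        synthesizer.speak(utterance)
    }
}
```

The app also needs the NSSpeechRecognitionUsageDescription-style Info.plist hygiene Apple documents for Personal Voice access; if no voice has been recorded in Settings, the filtered list is simply empty.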
Hello!
I have been facing great difficulty for days now.
Pretty frustrating for me.
Simply put, I can't append any bookmarks to an array when I use the AVSpeechSynthesizer speech markers (AVSpeechSynthesisMarker); I can't get this working in AVSpeechSynthesizer.
Here is a simple piece of code (I removed the utterance part):
let synt = AVSpeechSynthesizer()
synt.write(expression, toBufferCallback: { buffer in }, toMarkerCallback: { marks in
    marks.append(AVSpeechSynthesisMarker(bookmarkName: "Test1", atByteSampleOffset: 4))
})
Note that all I want in this simple code is to add an AVSpeechSynthesisMarker in AVSpeechSynthesizer.
Xcode 15 beta says that marks is immutable because it is a 'let' constant.
But then how am I going to add markers in AVSpeechSynthesizer?
Please help me.
Thanks!
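In case it helps: the marks parameter handed to toMarkerCallback is a closure parameter, which Swift always treats as a 'let' constant, so it can never be appended to. The usual workaround is to keep your own var array outside the closure and append there. A sketch, reusing the expression utterance and the bookmark initializer from the question:

```swift
import AVFoundation

// Sketch (iOS 17+): collect markers in our own mutable array, since the
// closure parameter passed to toMarkerCallback is immutable.
var collectedMarkers: [AVSpeechSynthesisMarker] = []

let synt = AVSpeechSynthesizer()
synt.write(expression, toBufferCallback: { buffer in
    // Handle or discard the synthesized audio buffers here.
}, toMarkerCallback: { markers in
    // Copy the synthesizer's markers out of the read-only parameter...
    collectedMarkers.append(contentsOf: markers)
    // ...and our own bookmark can be appended to the same array.
    collectedMarkers.append(AVSpeechSynthesisMarker(bookmarkName: "Test1",
                                                    atByteSampleOffset: 4))
})
```

The synthesizer never reads markers back from the callback; the callback only reports markers to you, so any bookkeeping has to live in your own storage.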