Posts

Post not yet marked as solved
3 Replies
1.5k Views
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode. The player node's output format needs to be something that the next node can handle, and as far as I understand most nodes can handle a canonical format. The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports. So the following:

    AVAudioEngine *engine = [[AVAudioEngine alloc] init];
    playerNode = [[AVAudioPlayerNode alloc] init];
    AVAudioFormat *format = [[AVAudioFormat alloc] initWithSettings:utterance.voice.audioFileSettings];
    [engine attachNode:self.playerNode];
    [engine connect:self.playerNode to:engine.mainMixerNode format:format];

throws an exception:

    Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""

I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer. Since different platforms have different canonical formats, I imagine there should be some library way of doing this; otherwise each developer has to redefine it for every platform the code runs on (macOS, iOS, etc.) and keep it updated when it changes. I could not find any constant or function that produces such a format, ASBD, or settings. The smartest way I could think of, which does not work:

    AudioStreamBasicDescription toDesc;
    FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                       2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
    AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];

Even the provided example for iPhone, in the documentation linked above, uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, which are deprecated. So what is the correct way to do this?
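A minimal sketch of the direction I am considering, in Swift for brevity, assuming that AVAudioFormat's standard-format initializer is what "canonical" means on current platforms (deinterleaved 32-bit float PCM); utterance is the AVSpeechUtterance from above:

    import AVFoundation

    // Build the synthesizer's format and a "standard" target format at the
    // same sample rate, then convert buffers between them.
    let synthFormat = AVAudioFormat(settings: utterance.voice!.audioFileSettings)!
    let canonical = AVAudioFormat(standardFormatWithSampleRate: synthFormat.sampleRate,
                                  channels: synthFormat.channelCount)!
    let converter = AVAudioConverter(from: synthFormat, to: canonical)!

    func convertToCanonical(_ buffer: AVAudioPCMBuffer) throws -> AVAudioPCMBuffer {
        let out = AVAudioPCMBuffer(pcmFormat: canonical, frameCapacity: buffer.frameLength)!
        try converter.convert(to: out, from: buffer)  // same rate, so the simple API suffices
        return out
    }

Is this standard-format initializer the intended replacement for the deprecated canonical constants?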
Posted by artium. Last updated.
Post marked as solved
1 Replies
274 Views
The method url(forResource:withExtension:) can be called like this:

    let url = Bundle.module.url(forResource: "foo", withExtension: "txt")!

or like this:

    let url = Bundle.module.url(forResource: "foo.txt", withExtension: nil)!

Both ways seem to work. Are there any differences between the two?
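A sketch of where I would expect the two to diverge, using a hypothetical resource whose base name itself contains a dot (the file name is my own example, not from any real project):

    // Hypothetical resource "archive.tar.gz" bundled with the package.

    // Splitting name and extension yourself is explicit about where the
    // extension starts:
    let a = Bundle.module.url(forResource: "archive.tar", withExtension: "gz")

    // Passing nil treats the whole string as the file name, so the
    // ambiguity never arises:
    let b = Bundle.module.url(forResource: "archive.tar.gz", withExtension: nil)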
Posted by artium. Last updated.
Post marked as solved
1 Replies
1.2k Views
Do run loops use an operation queue under the hood? When I dispatch to the main queue, I know it will run on the main thread, which means it will be handled by the main thread's run loop. But is the other way around correct? A more specific question: is it possible somehow that the following code will return true?

    OperationQueue.current != OperationQueue.main && Thread.isMainThread

I tried, for example, to dispatch to the main thread using a timer:

    myQueue.sync {
        let timer = Timer(timeInterval: 1, repeats: false) { _ in
            let curr = OperationQueue.current
            let main = OperationQueue.main
            print("\(String(describing: curr)) \(main)")
        }
        RunLoop.main.add(timer, forMode: .common)
    }

And got the same pointer. But maybe some other way of dispatching to the main run loop will give different results; perhaps that is why OperationQueue.current can return nil.
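Another dispatch path I could try, sketched below; I am not claiming it prints false, it is just a different way to land on the main run loop without going through the main queue:

    // RunLoop.perform(_:) schedules the block directly on the run loop,
    // bypassing GCD entirely.
    RunLoop.main.perform {
        print(Thread.isMainThread,                           // expected: true
              OperationQueue.current == OperationQueue.main) // the open question
    }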
Posted by artium. Last updated.
Post not yet marked as solved
0 Replies
414 Views
I was not able to find info about this situation in the official docs. What happens if one version has not yet been released to all users and a new version enters phased release? For example, suppose I am phase-releasing version 2.0. On day 3, when only 5% of the users have this version, I discover that it has a bug. I pause the release of 2.0 and create a hotfix version 2.1. What will happen to all the users who did not receive 2.0 when I start the phased release of 2.1? Will they get an immediate update to 2.0, or will they skip this version?
Posted by artium. Last updated.
Post not yet marked as solved
0 Replies
678 Views
I have the following simple code to present a video on the screen. This, or something similar, can be found in any tutorial about using AVPlayer from SwiftUI.

    struct PlayerView: UIViewRepresentable {
        func updateUIView(_ uiView: UIView, context: UIViewRepresentableContext<PlayerView>) {
        }

        func makeUIView(context: Context) -> UIView {
            return PlayerUIView(frame: .zero)
        }
    }

    class PlayerUIView: UIView {
        private let playerLayer = AVPlayerLayer()

        override init(frame: CGRect) {
            super.init(frame: frame)
            let url = Bundle.main.url(forResource: "my_video", withExtension: "mp4")!
            let player = AVPlayer(url: url)
            player.play()
            playerLayer.player = player
            layer.addSublayer(playerLayer)
        }

        required init?(coder: NSCoder) {
            fatalError("init(coder:) has not been implemented")
        }

        override func layoutSubviews() {
            super.layoutSubviews()
            playerLayer.frame = bounds
        }
    }

If I put PlayerView inside a VStack, the view will take as much vertical space as possible based on the sizes of the other views in the stack and then put the video inside, filling it horizontally. So if it is the only view, it will always put the video in the center of the screen and leave margins below and above it, even if I put a Spacer after it, and even if I put it inside a GeometryReader. What I want is for the view to "hug" the presented video as closely as possible and then behave like any other SwiftUI view. But I do not understand how SwiftUI does the layout in this case, when an AVPlayer inside a UIViewRepresentable is involved.
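The workaround I am considering, sketched below, assuming the video's dimensions can be read from the asset up front (the 16:9 fallback is my own placeholder):

    // Read the video track's natural size and hand the proportions to SwiftUI.
    let asset = AVURLAsset(url: Bundle.main.url(forResource: "my_video", withExtension: "mp4")!)
    let size = asset.tracks(withMediaType: .video).first
        .map { $0.naturalSize.applying($0.preferredTransform) }
        ?? CGSize(width: 16, height: 9)

    // In the body, the representable now has a shape to hug:
    PlayerView()
        .aspectRatio(CGSize(width: abs(size.width), height: abs(size.height)),
                     contentMode: .fit)

Is giving the representable an explicit aspect ratio the expected way to do this, or does SwiftUI have a mechanism to pick up a UIView's preferred size here?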
Posted by artium. Last updated.
Post not yet marked as solved
1 Replies
672 Views
We are adding a feature to the app, and we want to sell a special subscription that lets users use this feature. We might want to stop supporting the feature and remove it from the app in the future: not just stop selling the subscription, but completely remove the feature in a future version of the app. Once the feature is removed, the users who bought the special subscription will not be able to access it, and they will receive the same experience as users who bought a normal subscription we are currently offering. What is the best approach to deal with this situation? Is it possible to force cancellation of this product, or to turn auto-renew off for all users who are subscribed to it? Is it possible to force a downgrade/upgrade of that product for all users who are subscribed to it? Are we stuck with supporting this feature forever, or is there a best practice that will allow us to phase it out?
Posted by artium. Last updated.
Post marked as solved
1 Replies
656 Views
This is a follow-up question to this document: https://developer.apple.com/help/app-store-connect/manage-in-app-purchases/schedule-price-changes If we schedule a price change for a subscription product, will it affect existing subscribers in their next billing period? How will they be notified that this change happened?
Posted by artium. Last updated.
Post marked as solved
3 Replies
1.3k Views
There is no API to create an intersection between two CGPaths; however, CoreGraphics knows how to do it behind the scenes. When CGContextClip is called, it intersects the current path with the clipping path and stores the result in the clipping path. I was thinking of utilizing this to perform intersections between the paths I have. The problem is that I cannot find a way to retrieve the clipping path back from the CGContext. Am I correct that such an API does not exist, or did I miss something?
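For completeness, the raster-space fallback I may end up with, sketched under the assumption that a pixel approximation of the intersection is acceptable (CGContextGetClipBoundingBox only exposes the bounding box, not the path itself):

    import CoreGraphics

    // Approximate a ∩ b by clipping to one path and filling the other into
    // a bitmap; the result is pixels, not a CGPath.
    func rasterIntersection(_ a: CGPath, _ b: CGPath, size: CGSize) -> CGContext? {
        guard let ctx = CGContext(data: nil,
                                  width: Int(size.width), height: Int(size.height),
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return nil }
        ctx.addPath(a)
        ctx.clip()                          // clipping path is now `a`
        ctx.addPath(b)
        ctx.setFillColor(gray: 1, alpha: 1)
        ctx.fillPath()                      // painted pixels are a ∩ b
        return ctx
    }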
Posted by artium. Last updated.
Post not yet marked as solved
0 Replies
701 Views
In this FAQ it says:

"What happens when I turn off iCloud Keychain on a device? When you turn off iCloud Keychain for a device, you're asked to keep or delete the passwords and credit card information that you saved. If you choose to keep the information, it isn't deleted or updated when you make changes on other devices. If you don't choose to keep the information on at least one device, your Keychain data will be deleted from your device and the iCloud servers."

So I understand that data stored by the user, such as passwords, is removed. What about data stored by an app installed on an iOS device? Is it removed as well? Can we assume that data stored in the keychain will not be removed by means external to the app, or should we design our app to handle such a situation?
Posted by artium. Last updated.
Post not yet marked as solved
3 Replies
1.1k Views
How can I deduce from an NSMethodSignature that a struct argument is passed by pointer, specifically on ARM? For example, if I have:

    @protocol TestProtocol <NSObject>
    - (void)time:(CMTime)time;
    - (void)rect:(CGRect)point;
    @end

And then I do:

    struct objc_method_description methodDescription1 = protocol_getMethodDescription(@protocol(TestProtocol), @selector(time:), YES, YES);
    struct objc_method_description methodDescription2 = protocol_getMethodDescription(@protocol(TestProtocol), @selector(rect:), YES, YES);
    NSMethodSignature *sig1 = [NSMethodSignature signatureWithObjCTypes:methodDescription1.types];
    NSMethodSignature *sig2 = [NSMethodSignature signatureWithObjCTypes:methodDescription2.types];
    const char *arg1 = [sig1 getArgumentTypeAtIndex:2];
    const char *arg2 = [sig2 getArgumentTypeAtIndex:2];
    NSLog(@"%s %s", methodDescription1.types, arg1);
    NSLog(@"%s %s", methodDescription2.types, arg2);

The output is:

    v40@0:8{?=qiIq}16 {?=qiIq}
    v48@0:8{CGRect={CGPoint=dd}{CGSize=dd}}16 {CGRect={CGPoint=dd}{CGSize=dd}}

Both look similar; there is no indication that CMTime will actually be passed as a pointer. But when I print the debug description:

    NSLog(@"%@", [sig1 debugDescription]);
    NSLog(@"%@", [sig2 debugDescription]);

the first prints:

    ...
    argument 2: -------- -------- -------- --------
        type encoding (^) '^{?=qiIq}'
        flags {isPointer}
    ...

while the second prints:

    ...
    argument 2: -------- -------- -------- --------
        type encoding ({) '{CGRect={CGPoint=dd}{CGSize=dd}}'
        flags {isStruct}
    ...

So this information is indeed stored in the method signature, but how do I retrieve it without parsing the debug description? Are there rules I can use to deduce this myself? I tried to experiment with different structs, but it is hard to spot a pattern.
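From reading the ARM64 calling convention (AAPCS64), my current guess at the pattern, which I have not verified against the runtime's own logic: an aggregate is passed indirectly when it is larger than 16 bytes and is not a homogeneous floating-point aggregate of at most four elements. CMTime ({?=qiIq}, 24 bytes of mixed integers) fails both tests, so it goes by pointer; CGRect is effectively four doubles, a valid HFA, so it stays in registers. A sketch of that check for flat structs, with a hypothetical helper name:

    import Foundation

    // Hypothetical helper: would a struct with this (flat) type encoding be
    // passed indirectly on ARM64 under AAPCS64?
    func isPassedIndirectlyOnARM64(_ encoding: String) -> Bool {
        var size = 0, align = 0
        encoding.withCString { _ = NSGetSizeAndAlignment($0, &size, &align) }

        // Field characters sit between '=' and the closing '}' in a flat
        // encoding such as "{?=qiIq}"; nested structs need real parsing.
        let fields = encoding.drop(while: { $0 != "=" }).dropFirst().dropLast()
        let allSame = { (c: Character) in
            !fields.isEmpty && fields.count <= 4 && fields.allSatisfy { $0 == c }
        }
        let isHFA = allSame("d") || allSame("f")  // homogeneous FP aggregate

        return size > 16 && !isHFA
    }

    // isPassedIndirectlyOnARM64("{?=qiIq}")     // true:  24 bytes, mixed fields
    // isPassedIndirectlyOnARM64("{CGPoint=dd}") // false: 16 bytes, HFA anyway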
Posted by artium. Last updated.
Post marked as solved
1 Replies
1.6k Views
We have noticed that sometimes our app spends too much time in the first call to AVAudioSession.sharedInstance and [AVAudioSession setCategory:error:], which we call during the app's initialization (in the app delegate's init). I am not sure whether the app is stuck in these calls or they simply take too long to complete. This probably causes the app to crash due to the main thread watchdog. Would it be safe to move these calls to a separate thread?
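What I have in mind, sketched below; I believe the audio session API can be called off the main thread, but treat that as an assumption, and note that any later session calls would have to be ordered against this block:

    import AVFoundation

    // Configure the shared audio session off the main thread so a slow
    // first call cannot trip the main thread watchdog.
    DispatchQueue.global(qos: .userInitiated).async {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playback)
        } catch {
            // Log the failure; the category simply stays at its default.
            print("setCategory failed: \(error)")
        }
    }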
Posted by artium. Last updated.
Post not yet marked as solved
1 Replies
1.2k Views
To reproduce:

1. Create a new standard iOS project and add a Widget extension to it.
2. Download an SVG from here: https://pixabay.com/vectors/flourish-decorative-ornamental-3166119/
3. Add the SVG asset to Assets.xcassets.
4. Replace the line Text(entry.date, style: .time) with Image("flourish-3166119").frame(width: 20).

When running the extension, I get Thread 1: EXC_RESOURCE RESOURCE_TYPE_MEMORY (limit=30 MB, unused=0x0). Notice that the asset is only 8 KB, but something in the rendering of the SVG into a bitmap causes a memory spike. I tested this with Xcode 13.0 and iOS 14.4.1 on an Xs Max device. Is there a way to overcome this, or should I drop the SVG, pre-render the images for the 2x and 3x scale factors, and call it a day? Edit: just to clarify, the issue happens with only one entry as well. I attached the full source for reference.
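The mitigation I am leaning towards, sketched below, assuming rasterizing at the final point size is acceptable (the image name and size come from my repro above):

    import SwiftUI
    import UIKit

    // Rasterize the vector asset once, at the small size it will be shown
    // at, instead of letting the widget process render the full SVG.
    func rasterized(_ name: String, size: CGSize) -> UIImage? {
        guard let source = UIImage(named: name) else { return nil }
        return UIGraphicsImageRenderer(size: size).image { _ in
            source.draw(in: CGRect(origin: .zero, size: size))
        }
    }

    // In the widget view:
    // if let image = rasterized("flourish-3166119", size: CGSize(width: 20, height: 20)) {
    //     Image(uiImage: image)
    // }

If UIImage(named:) itself already triggers the spike, pre-rendering at build time for each scale factor is probably the only option.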
Posted by artium. Last updated.
Post not yet marked as solved
0 Replies
666 Views
The default behaviour is that selecting an item dismisses the menu. I see that some of Apple's apps have items that perform an action but keep the menu open. For example, in Notes there is an Indent item that indents the text but keeps the menu open. Is it possible to achieve this with the public API?
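The one candidate I have found, sketched below, assuming iOS 16 or later and a menu built from UIMenu elements (I have not confirmed this is what Notes uses):

    import UIKit

    // UIMenuElement.Attributes.keepsMenuPresented (iOS 16+) keeps the menu
    // on screen after the action's handler runs.
    let indent = UIAction(title: "Indent",
                          attributes: [.keepsMenuPresented]) { _ in
        // perform the indent; the menu stays open
    }
    let menu = UIMenu(title: "", children: [indent])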
Posted by artium. Last updated.
Post not yet marked as solved
0 Replies
668 Views
I am trying to implement my own version of UIMenuController. One of the things I am struggling to imitate is what happens when there is a touch event outside the menu. In this case the menu will be dismissed, but the event will also be passed to the correct view. So I need to somehow handle or capture the event in the menu, or in some transparent view covering the whole screen, but also let it pass through to the view below it. One idea is to override hitTest:withEvent: of the menu view, dismiss the menu when it is called (if the point is outside the menu), and then return nil as if the view were transparent to events. The issue is that hitTest:withEvent: must be a pure function without side effects. On the other hand, when touchesBegan:withEvent: is called, it is possible to dismiss the menu and pass the event to the nextResponder, but the view directly below the menu is not necessarily the superview of the menu view, so it is not part of the responder chain.
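The least fragile idea I have so far, sketched under the assumption that a gesture recognizer attached to the window observes every touch and, with cancelsTouchesInView disabled, does not prevent delivery to the view below (the class and method names are hypothetical, not UIKit API):

    import UIKit

    final class MenuController: NSObject, UIGestureRecognizerDelegate {
        let menuView: UIView

        init(menuView: UIView) {
            self.menuView = menuView
            super.init()
        }

        func installDismissTap() {
            let tap = UITapGestureRecognizer(target: self, action: #selector(handleAnyTap(_:)))
            tap.cancelsTouchesInView = false   // let the touch through to the view below
            tap.delegate = self
            menuView.window?.addGestureRecognizer(tap)
        }

        @objc private func handleAnyTap(_ g: UITapGestureRecognizer) {
            if !menuView.bounds.contains(g.location(in: menuView)) {
                menuView.removeFromSuperview()  // stand-in for real dismissal
            }
        }

        // Recognize alongside everything else so no other recognizer is blocked.
        func gestureRecognizer(_ g: UIGestureRecognizer,
                               shouldRecognizeSimultaneouslyWith other: UIGestureRecognizer) -> Bool {
            true
        }
    }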
Posted by artium. Last updated.
Post marked as solved
1 Replies
1.3k Views
If the audio buffer is an AVAudioPCMBuffer, it is possible to write it into a file using AVAudioFile's writeFromBuffer:error: method. How should a plain AVAudioBuffer be handled? In my specific case, I get this buffer from the callback of AVSpeechSynthesizer's writeUtterance:toBufferCallback:. In fact, the documentation says:

    // Use this method to receive audio buffers that can be used to store or further process synthesized speech.
    // The dictionary provided by -[AVSpeechSynthesisVoice audioFileSettings] can be used to create an AVAudioFile.
    - (void)writeUtterance:(AVSpeechUtterance *)utterance toBufferCallback:(AVSpeechSynthesizerBufferCallback)bufferCallback API_AVAILABLE(ios(13.0), watchos(6.0), tvos(13.0), macos(10.15));

But I don't know how exactly AVAudioFile can be used if the buffer is not guaranteed to be an AVAudioPCMBuffer.
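The direction I am experimenting with, sketched below, assuming the callback in practice delivers AVAudioPCMBuffer instances (the API does not promise this, hence the runtime check):

    import AVFoundation

    func synthesizeToFile() throws {
        let utterance = AVSpeechUtterance(string: "Hello")
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        let settings = utterance.voice?.audioFileSettings ?? [:]
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("speech.caf")
        let file = try AVAudioFile(forWriting: url, settings: settings)

        // In real code, keep a strong reference to the synthesizer until
        // the callback stops being called.
        let synthesizer = AVSpeechSynthesizer()
        synthesizer.write(utterance) { buffer in
            // Only PCM buffers can be written; a zero-length buffer appears
            // to signal the end of synthesis, so skip it as well.
            guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else { return }
            try? file.write(from: pcm)
        }
    }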
Posted by artium. Last updated.