I would also like to know. I'm trying to release an independent Watch app with in-app purchases, but how do I do that without StoreKit?
Someone from Apple please provide an update on this.
Looks like in-app purchases are coming to watchOS! Is there a good code tutorial for it? It seems extremely complicated.
Very excited about this. Are there any code tutorials on how to implement this on watchOS?
I have copied the files over but it doesn't compile:
Could not find or use auto-linked framework 'StoreKit'
Undefined symbol: _OBJC_CLASS_$_SKProduct
Undefined symbol: _OBJC_CLASS_$_SKProductsRequest
Hi, I've recently been getting "precondition failure: invalid input index: 2" errors too after the latest Xcode 11.4 update. I use GeometryReader to read the user's Watch screen size so I can draw a frame that is relative in size. Unfortunately NavigationView is not available on watchOS, so I'm not sure what other solutions are available. Do you have any ideas? Thanks.
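For context, the layout I'm describing looks roughly like this (a minimal sketch; the Circle and the 0.8 scale factor are just illustrative, not my actual code):
import SwiftUI

struct RelativeFrameView: View {
    var body: some View {
        GeometryReader { proxy in
            Circle()
                // Size the shape relative to the available screen space.
                .frame(width: proxy.size.width * 0.8,
                       height: proxy.size.height * 0.8)
                // Centre it within the space GeometryReader occupies.
                .position(x: proxy.size.width / 2,
                          y: proxy.size.height / 2)
        }
    }
}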
No luck, and I haven't heard anything back on a separate thread I started. I have filed an issue with the Feedback Assistant and hopefully we will get a response from Apple.
I just tried this, and only the first utterance is output in Cantonese; there is no audio output for the second Mandarin utterance. Do you know why this is?
I put it inside applicationDidFinishLaunching as follows:
// notificationCenter is assigned without `let`, so it is a stored property on the delegate.
notificationCenter = UNUserNotificationCenter.current()
// Clear any requests left over from a previous launch.
notificationCenter.removeAllPendingNotificationRequests()
let options: UNAuthorizationOptions = [.alert, .sound, .badge]
notificationCenter.requestAuthorization(options: options) { (didAllow, error) in
    if !didAllow {
        print("User has declined notifications")
    }
}
I tried the code you posted above:
let synth = AVSpeechSynthesizer()
// Cantonese voice, selected by identifier.
let cantoUtterance = AVSpeechUtterance(string: chinese)
cantoUtterance.voice = AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.Sin-Ji-compact")
synth.speak(cantoUtterance)
// Mandarin voice, selected by language code.
let mandoUtterance = AVSpeechUtterance(string: chinese)
mandoUtterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")
synth.speak(mandoUtterance)
and on my Watch I only hear the first utterance.
Thanks - I created it as a property and it is no longer being deallocated after the first utterance.
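For anyone else hitting this, the change looks roughly like the following (a minimal sketch; the SpeechController class name and method are just illustrative):
import AVFoundation

final class SpeechController {
    // Keeping the synthesizer as a stored property means it is not
    // deallocated while speech is still in progress.
    private let synth = AVSpeechSynthesizer()

    func speakCantoneseThenMandarin(_ chinese: String) {
        let cantoUtterance = AVSpeechUtterance(string: chinese)
        cantoUtterance.voice = AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.Sin-Ji-compact")
        synth.speak(cantoUtterance)

        let mandoUtterance = AVSpeechUtterance(string: chinese)
        mandoUtterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")
        synth.speak(mandoUtterance)
    }
}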
OK so using some sample text:
国内と同時放送するニュースの発信を強化し、最新の動きを詳しく伝えます。内外で頻発する自然災害や、大きな事件・事故などの際には、機動的にニュースを編成して的確に情報を発信し、日本語ライフラインとしての役割を果たします。
The output with timestamps is:
2020-07-23 23:31:15.9130: 放送
2020-07-23 23:31:15.9140: する
2020-07-23 23:31:15.9150: ニュース
2020-07-23 23:31:16.4810: の
2020-07-23 23:31:16.4820: 発信
2020-07-23 23:31:17.0840: を
2020-07-23 23:31:17.0860: 強化
2020-07-23 23:31:18.0080: し
2020-07-23 23:31:18.0100: 、
2020-07-23 23:31:18.0110: 最新
2020-07-23 23:31:18.7510: の
2020-07-23 23:31:18.7520: 動き
2020-07-23 23:31:19.2750: を
2020-07-23 23:31:19.2770: 詳し
2020-07-23 23:31:19.8240: く
2020-07-23 23:31:19.8250: 伝え
2020-07-23 23:31:20.5670: ます
2020-07-23 23:31:20.5680: 。
2020-07-23 23:31:20.7660: 内外
2020-07-23 23:31:21.5350: で
2020-07-23 23:31:21.5370: 頻発
2020-07-23 23:31:22.3780: する
2020-07-23 23:31:22.3800: 自然
2020-07-23 23:31:23.7600: 災害
2020-07-23 23:31:23.7620: や
2020-07-23 23:31:23.7620: 、
2020-07-23 23:31:23.7640: 大きな
2020-07-23 23:31:24.2730: 事件
2020-07-23 23:31:25.4670: ・
2020-07-23 23:31:25.4680: 事故
2020-07-23 23:31:25.4690: などの
2020-07-23 23:31:25.4690: 際
2020-07-23 23:31:26.2600: には
2020-07-23 23:31:26.2610: 、
2020-07-23 23:31:26.2620: 機動
2020-07-23 23:31:27.3160: 的
2020-07-23 23:31:27.3180: に
2020-07-23 23:31:27.3190: ニュース
2020-07-23 23:31:27.8350: を
2020-07-23 23:31:27.8370: 編成
2020-07-23 23:31:28.7740: し
2020-07-23 23:31:28.7750: て
2020-07-23 23:31:28.7760: 的確
2020-07-23 23:31:29.5270: に
2020-07-23 23:31:29.5290: 情報
2020-07-23 23:31:30.1660: を
2020-07-23 23:31:30.1670: 発信
2020-07-23 23:31:31.1030: し
2020-07-23 23:31:31.1040: 、
2020-07-23 23:31:31.1050: 日本語
2020-07-23 23:31:32.5260: ライフライン
2020-07-23 23:31:32.5280: と
2020-07-23 23:31:32.5290: し
2020-07-23 23:31:32.9200: ての
2020-07-23 23:31:32.9210: 役割
2020-07-23 23:31:33.6030: を
2020-07-23 23:31:33.6040: 果た
2020-07-23 23:31:34.3960: し
2020-07-23 23:31:34.3980: ます
2020-07-23 23:31:34.3980: 。
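For reference, the log above comes from the willSpeakRangeOfSpeechString delegate callback; the logging looks roughly like this (a minimal sketch rather than my exact code; the ja-JP voice and the date formatter are illustrative):
import AVFoundation

final class HighlightingSpeaker: NSObject, AVSpeechSynthesizerDelegate {
    private let synth = AVSpeechSynthesizer()
    private let formatter: DateFormatter = {
        let f = DateFormatter()
        f.dateFormat = "yyyy-MM-dd HH:mm:ss.SSSS"
        return f
    }()

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "ja-JP")
        synth.speak(utterance)
    }

    // Called just before each range of the string is spoken; logging the
    // timestamp and the substring produces output like the list above.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        let word = (utterance.speechString as NSString).substring(with: characterRange)
        print("\(formatter.string(from: Date())): \(word)")
    }
}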
So it's not skipping any words, but as you suggested in the first paragraph, there are groups of words that are spoken very close together, and it moves to the next word so fast that I can't see it being highlighted.
It seems unnatural though, because the audio reads the words at a fairly steady pace, but the willSpeakRangeOfSpeechString callback seems to group words together, so its pace is less steady.
Is there a way to improve this, or is it a current limitation of the framework?
Thanks for that. Please keep us informed if there are any future updates.
I just thought about it a bit more and realised there is probably a very quick fix for this: if the timestamps between words are below a certain threshold, they should all just get grouped together, so you highlight them in bigger blocks rather than having shorter words that are spoken quickly before moving on to the next one.
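Something like this is what I have in mind (a minimal sketch; the SpokenWord type and the 50 ms threshold are just illustrative):
import Foundation

struct SpokenWord {
    let timestamp: TimeInterval // seconds, e.g. derived from the callback time
    let text: String
}

// Merge consecutive words whose timestamps are closer together than the
// threshold, so they can be highlighted as one block.
func groupWords(_ words: [SpokenWord], threshold: TimeInterval = 0.05) -> [[SpokenWord]] {
    var groups: [[SpokenWord]] = []
    for word in words {
        if let last = groups.last?.last, word.timestamp - last.timestamp < threshold {
            // Close enough to the previous word: extend the current block.
            groups[groups.count - 1].append(word)
        } else {
            // The gap is big enough: start a new highlight block.
            groups.append([word])
        }
    }
    return groups
}
With a threshold like that, the first three entries in the log above (放送, する, ニュース, which arrive within about a millisecond of each other) would be highlighted as a single block.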
Have you found a solution to this? I am having a similar issue: even if I hide the NavigationLink inside .background(), it is still visible in the background.
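For clarity, the pattern I'm describing looks roughly like this (a minimal sketch; the view names and the Bool binding are just illustrative of my setup):
import SwiftUI

struct RowView: View {
    @State private var isShowingDetail = false

    var body: some View {
        Text("Tap for detail")
            .onTapGesture { isShowingDetail = true }
            .background(
                // The link is placed in the background so it does not affect
                // layout, but it still ends up visible behind the row.
                NavigationLink(destination: Text("Detail"),
                               isActive: $isShowingDetail) {
                    EmptyView()
                }
            )
    }
}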