Seems to be working again (6/28/2023)
Apple really needs to improve its in-app purchase system from the developer side, especially anything involving subscriptions. It is far too complicated, and there is no reason for it to be. Ordinary developers have to deal with some of the most obscure APIs, yet this is how you make money in apps: subscriptions! It should be the smoothest part of development. Apple spent a ton of money making Swift and SwiftUI to make things easier and better, even though ObjC and UIKit aren't that bad once you learn them. IAP, however... holy guacamole, it's bad.
Wow, that is great info! This will help me win some friendly wagers among my math pals, haha.
FYI, I deleted the app container and re-launched, and it worked (this is Mac Catalyst). I suspect some Mac window settings (position, etc.) are stored in there and may have gotten trashed by repeatedly launching and killing the app in the debugger. E.g., look for the directory
~/Library/Containers/yourappbundleid
and get rid of it. Make sure first that there is no data you need in there :)
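A sketch of that cleanup as shell commands (the bundle identifier and backup location here are placeholders; substitute your own):

```shell
# Back up the Mac Catalyst app container before deleting it, in case it
# holds data you still need. "com.example.MyApp" is a placeholder bundle id.
CONTAINER="$HOME/Library/Containers/com.example.MyApp"
if [ -d "$CONTAINER" ]; then
    cp -R "$CONTAINER" "$HOME/Desktop/MyApp-container-backup"
    rm -rf "$CONTAINER"
fi
```

The app will recreate a fresh container (with default window settings) on next launch.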
Seeing the same message in the debugger for a Mac Catalyst app: "Application violated contract by causing UIApplicationMain() to return. This incident will be reported."
Any resolution on your side?
FYI, as a temporary patch we added some script lines to copy *.h files to where the build expects to find them, based on analyzing the failing Compile steps in the build output. In one case we also added a couple of extra header search paths manually to the RNSVG module's Xcode project file.
Unfortunately, with the yarn package manager and the CocoaPods package manager working simultaneously, it is not obvious where the header paths need to be modified. This, coupled with module maintainers sometimes changing project folder structures in a breaking manner, equals trouble.
We haven't figured out why the new build system does not respect the same header path rules as the old build system. That is also non-obvious.
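For reference, the temporary patch amounted to something like the following (both paths are illustrative placeholders; the real source and destination came from the failing Compile steps in the build log):

```shell
# Copy the module's headers to where the failing compile steps expect them.
# Both paths are placeholders for illustration only.
SRC="node_modules/react-native-svg/apple"
DST="ios/Pods/Headers/Public/RNSVG"
mkdir -p "$DST"
if [ -d "$SRC" ]; then
    find "$SRC" -name '*.h' -exec cp {} "$DST" \;
fi
```

This has to be re-run after every `pod install` / `yarn install`, which is why it is only a stopgap.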
A non-optimized workaround to the thread-serialization problem (described in the post below) with concurrent use of drand48.
The key is the __thread keyword. If you instead make these variables plain static or global, you see similar slowdowns.
My code does not depend heavily on high-quality random numbers, so this is good enough to get rolling. There may be a better solution.
#if 1
// Pre-generate a pool of random numbers per thread. The __thread storage
// class gives each thread its own copy of these variables, so threads
// never contend for shared generator state.
// (A macro so the array size is a constant expression in plain C.)
#define S_NRANDOMS 10000
__thread double s_someRandoms[S_NRANDOMS];
__thread int s_init = 0;
__thread int s_lastRandomIndex = 0;

double drand48x() {
    if (!s_init) {
        // First call on this thread: fill the pool using the real drand48().
        for (int i = 0; i < S_NRANDOMS; i++) {
            s_someRandoms[i] = drand48();
        }
        s_init = 1;
    }
    s_lastRandomIndex++;
    if (s_lastRandomIndex > S_NRANDOMS - 1) s_lastRandomIndex = 0;
    return s_someRandoms[s_lastRandomIndex];
}
#define drand48 drand48x
#endif
I think I have traced the problem to the drand48() random number generator, which I use extensively. It seems that when you call drand48 heavily inside a block dispatched to a concurrent queue, the threads running the block get serialized (or otherwise jammed up) on that function, so your code does not speed up by the expected amount when you dispatch concurrent blocks. It runs at the same speed as on one thread, or slower, due to the extra overhead of dispatch, extra threads, and thread syncing.
The odd thing I found is that this slowdown does not really show up in the Instruments app. It shows drand48 taking a little CPU, as expected, but nothing huge. I would guess that is because the threads are not burning CPU; they are just waiting on each other for access to the shared generator state. Such waiting may show up in some part of Instruments I did not look at.
This post seems to get into the details of why this occurs:
https://stackoverflow.com/questions/22660535/pthreads-and-drand48-concurrency-performance
Will post a workaround. Tentatively working on pre-generating some randoms into per-thread arrays, or into a global array indexed per thread. Note that if you use a shared global index to pull from the pre-generated randoms, it shows a similar slowdown to drand48 itself.
I guess another question would be: if we set networkAccessAllowed to NO, would we get the reduced-size (local to device) media, or would we get nothing, or an error?
(E.g. in our original requestContentEditingInputWithOptions: call.)
PHContentEditingInputRequestOptions * options = [[PHContentEditingInputRequestOptions alloc] init];
options.networkAccessAllowed = YES;
The doc doesn't mention this issue:
https://developer.apple.com/documentation/photokit/phcontenteditinginputrequestoptions?language=objc
Another question has arisen on this issue. PHImageManager seems to only return a UIImage; however, the API noted above, requestContentEditingInputWithOptions:, also works for Live Photos and videos (and our code uses it for those). Is there a similar fast replacement we can use for Live Photos and videos, e.g. to avoid the long download from iCloud and just use the local versions of these other media types? Thanks again!
If anyone has this issue, this seemed to be our problem: the iMessage switch in Messages settings has to be On for both devices when sending. If one device has iMessage off, Live Photos come through as regular photos. If anyone has other ideas for how to transfer Live Photos between users, that would be helpful.
Added note: in the Settings for iCloud, under Optimize Storage, it says: "If your phone is low on space, full resolution photos and videos are automatically replaced with smaller, device size versions." We want the smaller, device-size versions. Anyone know how to get these from PhotoKit, to avoid the big iCloud download delay?
Thanks!
FYI, I tried exporting the binary from Xcode and then used the Mac app Transporter (available on the Mac App Store) to upload, and it worked right away (i.e., the build showed up as "Processing" on the App Store Connect site).
FYI, we had an Apple office hours meeting to address this; below is the first part of the solution we found based on those discussions.
The pixel format needed to change, and three dictionary key/value pairs needed to be added to the AVAssetReaderTrackOutput outputSettings: argument, per below. With these, the UIImages seem to come out much better, though we haven't done a full check yet. The next step is to get these images written back to a video file without losing their color again.
This is just a code snippet; if anyone needs the full code, we can post a full Xcode sample, as we have extracted the buggy Dolby stuff into a small test case.
We had previously found the info about the color properties (noted below) in the Apple docs on this technology, but it also needed the updated pixel format key to have any positive effect on the output images. That part was not immediately clear from the docs.
NSArray* video_tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack* video_track = [video_tracks firstObject];

NSMutableDictionary* dictionary = [[NSMutableDictionary alloc] init];

// kCVPixelFormatType_32BGRA                      // this is what we had
// kCVPixelFormatType_420YpCbCr8BiPlanarFullRange // this is the correct one (used below)
[dictionary setObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
               forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];

// These color properties are also needed
dictionary[AVVideoColorPropertiesKey] = @{
    AVVideoColorPrimariesKey:   AVVideoColorPrimaries_ITU_R_709_2,
    AVVideoTransferFunctionKey: AVVideoTransferFunction_ITU_R_709_2,
    AVVideoYCbCrMatrixKey:      AVVideoYCbCrMatrix_ITU_R_709_2
};

AVAssetReaderTrackOutput* asset_reader_output =
    [[AVAssetReaderTrackOutput alloc] initWithTrack:video_track outputSettings:dictionary];