In iOS 11.0+ I use the following statements to add a UISearchController to the navigation bar:

[[self navigationItem] setSearchController:[self myUISearchController]];
[[self navigationItem] setHidesSearchBarWhenScrolling:NO];
[self setDefinesPresentationContext:YES];

To hide the UISearchController I tried the following:

[[self myUISearchController] setActive:NO];
[[self myUISearchController] removeFromParentViewController]; // just a try
[[self navigationItem] setSearchController:nil]; // this should be sufficient
[self setMyUISearchController:nil];

The search controller does disappear, but it leaves a black rectangle at the position where it was. It seems that the UITableViewController inside the UINavigationController does not re-align its table view and therefore leaves a black rectangle behind. Any ideas?
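For completeness, here is the teardown collected in one method (assuming myUISearchController is a strong property of the view controller; the final layout calls are only an experiment to force the table view to re-align, not documented behavior):

- (void)hideSearchController
{
    // Deactivate before detaching.
    [[self myUISearchController] setActive:NO];

    // Detaching from the navigation item should be sufficient on its own.
    [[self navigationItem] setSearchController:nil];
    [self setMyUISearchController:nil];

    // Experiment: force a layout pass so the table view re-aligns.
    [[self view] setNeedsLayout];
    [[self view] layoutIfNeeded];
}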
I have not found any official documentation stating that Xcode's comment formatting supports Doxygen comments. In the past Xcode's formatting seemed to work, but with Xcode 11 comment formatting (highlighting) is broken in several places. So, does Xcode officially support Doxygen comments?
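For illustration, this is the kind of Doxygen-style comment I mean; Xcode used to highlight the \param and \return keywords and show them in Quick Help (the method is just a made-up example):

/*!
 * \brief Computes the sum of two integers.
 * \param a The first summand.
 * \param b The second summand.
 * \return The sum of a and b.
 */
- (NSInteger)sumOfInteger:(NSInteger)a andInteger:(NSInteger)b;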
Why is popoverPresentationControllerShouldDismissPopover: deprecated, and what is going to replace this functionality?
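From the headers it looks as if the adaptive presentation delegate is meant to take over. A minimal sketch of what I presume the replacement looks like (hasUnsavedChanges is a hypothetical helper of my own):

// Presumably the replacement: UIAdaptivePresentationControllerDelegate
// now appears to cover popovers, too.
- (BOOL)presentationControllerShouldDismiss:(UIPresentationController *)presentationController
{
    // Hypothetical example: only allow dismissal when nothing is unsaved.
    return ![self hasUnsavedChanges];
}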
There is actually no documentation about the methods

- presentationControllerDidAttemptToDismiss:
- presentationControllerDidDismiss:
- presentationControllerShouldDismiss:
- presentationControllerWillDismiss:

When implementing a custom UIPresentationController, I assume that the derived class has to send these messages to the delegate at the appropriate times, right? A sketch of what I mean is below.
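For example, I would expect a custom presentation controller to notify its delegate roughly like this (this is my assumption, not documented behavior):

#import <UIKit/UIKit.h>

@interface MyPresentationController : UIPresentationController
@end

@implementation MyPresentationController

- (void)dismissalTransitionWillBegin
{
    [super dismissalTransitionWillBegin];

    // Assumption: the subclass forwards the notification to the delegate itself.
    id<UIAdaptivePresentationControllerDelegate> delegate = [self delegate];
    if ([delegate respondsToSelector:@selector(presentationControllerWillDismiss:)])
        [delegate presentationControllerWillDismiss:self];
}

@end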
Assume I have a couple of buttons. Whenever the user touches (and holds) one of the buttons, an associated scroll view scrolls up, down, left or right. It stops scrolling when the user releases the button or the end of the content is reached. How can I make this scrolling smooth? Or rather, what is the best method? I came up with a couple of ideas:

- call setContentOffset:animated: repeatedly
- install a timer that changes the content offset in small time intervals

I do not like either proposal: the first one is not continuous, the second one is a bit over the top (a sketch of what I mean is below). Are there alternatives?
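To illustrate the second idea, here is a rough sketch using a CADisplayLink as the timer (button wiring and direction handling omitted; the step size is arbitrary, and contentInset is ignored):

#import <UIKit/UIKit.h>

@interface ScrollHoldController : UIViewController
@property (nonatomic, strong) UIScrollView* scrollView;
@property (nonatomic, strong) CADisplayLink* displayLink;
@property (nonatomic, assign) CGPoint scrollStep; // e.g. (0, 2) to scroll down
@end

@implementation ScrollHoldController

- (void)buttonTouchDown:(UIButton*)sender
{
    // Start a display-synchronized timer while the button is held.
    [self setDisplayLink:[CADisplayLink displayLinkWithTarget:self
                                                     selector:@selector(scrollTick:)]];
    [[self displayLink] addToRunLoop:[NSRunLoop mainRunLoop]
                             forMode:NSRunLoopCommonModes];
}

- (void)buttonTouchUp:(UIButton*)sender
{
    [[self displayLink] invalidate];
    [self setDisplayLink:nil];
}

- (void)scrollTick:(CADisplayLink*)link
{
    UIScrollView* scrollView = [self scrollView];
    CGPoint offset = [scrollView contentOffset];
    offset.x += [self scrollStep].x;
    offset.y += [self scrollStep].y;

    // Clamp to the scrollable range so scrolling stops at the end of the content.
    CGFloat maxX = MAX(0, [scrollView contentSize].width  - [scrollView bounds].size.width);
    CGFloat maxY = MAX(0, [scrollView contentSize].height - [scrollView bounds].size.height);
    offset.x = MAX(0, MIN(offset.x, maxX));
    offset.y = MAX(0, MIN(offset.y, maxY));

    [scrollView setContentOffset:offset];
}

@end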
I have seen that a similar question was asked in 2017, but there hasn't been a satisfactory answer. I am adding a bar button item with a custom view to the toolbar. How do I adjust the height of the item to the maximum supported height of the toolbar (taking portrait and landscape orientations on an iPhone into consideration)?
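For context, a minimal sketch of how such an item could be sized against the toolbar's current bounds; the rotation handling is exactly the part I am unsure about, since the toolbar height changes between portrait and landscape:

// Sketch: size the custom view to the toolbar's current height.
// This would have to be repeated on rotation because the height changes.
UIToolbar* toolbar = [[self navigationController] toolbar];
UIView* customView = [[UIView alloc] initWithFrame:
    CGRectMake(0, 0, 100, CGRectGetHeight([toolbar bounds]))];
UIBarButtonItem* item = [[UIBarButtonItem alloc] initWithCustomView:customView];
[self setToolbarItems:[NSArray arrayWithObject:item] animated:NO];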
Hi, when running my app on a simulator the UIDocumentPickerViewController correctly opens and displays all files. But when picking one of the files, the controller does not send the delegate a message that an item has been picked. The app works correctly on a real device. Is this a known issue, or is UIDocumentPickerViewController not supported on simulators?
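For reference, this is the delegate method I expect to be called (the iOS 11+ variant); it fires on the device but never in the simulator:

- (void)documentPicker:(UIDocumentPickerViewController *)controller
    didPickDocumentsAtURLs:(NSArray<NSURL *> *)urls
{
    // Never reached in the simulator.
    NSLog(@"Picked: %@", urls);
}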
I am initializing a UIDocumentPickerViewController with the UTIs public.text and public.plain-text:

UIDocumentPickerViewController* documentPickerController =
    [[UIDocumentPickerViewController alloc]
        initWithDocumentTypes:[NSArray arrayWithObjects:@"public.text", @"public.plain-text", nil]
                       inMode:UIDocumentPickerModeImport];

When selecting a text file I get the error message:

Failed to associate thumbnails for picked URL file:///private/var/mobile/Library/Mobile%20Documents/com~apple~CloudDocs/NIC.txt with the Inbox copy file:///private/var/mobile/Containers/Data/Application/B857A5E7-74FE-4479-B899-B61D524B7E0D/tmp/***-Inbox/NIC.txt: Error Domain=QLThumbnailErrorDomain Code=102 "(null)" UserInfo={NSUnderlyingError=0x28250df20 {Error Domain=GSLibraryErrorDomain Code=3 "Generation not found" UserInfo={NSDescription=Generation not found}}}

What can I do to get rid of this error message?
For training the model I need a special loss filter. Though Metal implements a couple of loss filters, I need another one. Is it possible to create your own loss filter, and if so, how? A sketch of what I am after is below.

PS: Actually I need a combination of a cross entropy and a mean squared error filter.

PPS: The MPSCNN documentation is really sparse or non-existent. Does anybody know a site that describes the functionality in a bit more detail?
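Since the total loss would just be the sum of the two, my idea is to encode both built-in losses against the same prediction and add the resulting gradient images, because d(L1 + L2)/dx = dL1/dx + dL2/dx. A rough sketch of the graph nodes, assuming the graph even accepts this (predictionNode is a placeholder for my network's output node):

// Hypothetical: combine two built-in losses by adding their gradients.
MPSCNNLossDescriptor* ceDescriptor =
    [MPSCNNLossDescriptor cnnLossDescriptorWithType:MPSCNNLossTypeSoftMaxCrossEntropy
                                      reductionType:MPSCNNReductionTypeMean];
MPSCNNLossDescriptor* mseDescriptor =
    [MPSCNNLossDescriptor cnnLossDescriptorWithType:MPSCNNLossTypeMeanSquaredError
                                      reductionType:MPSCNNReductionTypeMean];

MPSCNNLossNode* ceLoss  = [MPSCNNLossNode nodeWithSource:predictionNode
                                          lossDescriptor:ceDescriptor];
MPSCNNLossNode* mseLoss = [MPSCNNLossNode nodeWithSource:predictionNode
                                          lossDescriptor:mseDescriptor];

// Sum the two loss gradient images before backpropagation.
MPSNNAdditionNode* combinedGradient =
    [MPSNNAdditionNode nodeWithLeftSource:[ceLoss resultImage]
                              rightSource:[mseLoss resultImage]];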
Does anybody know why you hardly find any documentation about MPS in the official documentation? It seems that the only documentation available from Apple can be found in the headers. Or are there any other sources?

PS: I filed a documentation bug report.
Is it possible to use a convolution node with the data type MPSDataTypeFloat16 when training the network? In the optimizer's (Adam optimizer) encodeToCommandBuffer:convolutionGradientState:...resultState: method I get the following error message when using MPSDataTypeFloat16:

/BuildRoot/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MetalPerformanceShaders-121.4.2/MPSMatrix/LinearAlgebra/MPSMatrix.mm, line 585: error '[MPSVector initWithBuffer:length:dataType] buffer is too small (600) for vector size (1200 bytes)

Yes, I have initialized the gradients with MPSDataTypeFloat16, and then the buffer size has to be 600 bytes. But it seems that the method assumes it gets a vector of data type MPSDataTypeFloat32.
I am using a neural network with the data type MPSDataTypeFloat32, but it seems that the final MPSCNNSoftMaxNode for the inference run returns an image using MPSDataTypeFloat16. Is this always the case? How do the nodes determine the output data type? Can I change it?
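I did find a format property on MPSNNImageNode; my assumption is that setting it on the node's result image controls the storage format of the output, roughly like this (previousNode is a placeholder for the node feeding the soft max):

// Assumption: request float32 storage for the soft max output image.
MPSCNNSoftMaxNode* softMaxNode =
    [MPSCNNSoftMaxNode nodeWithSource:[previousNode resultImage]];
[[softMaxNode resultImage] setFormat:MPSImageFeatureChannelFormatFloat32];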
In case I increase the batch size in Controls.h to a number larger than 64 (e.g. 80), the app stops with an error:

validateComputeFunctionArguments:882: failed assertion `Compute Function(cnnConvArray_32x32_64): incorrect type of texture (MTLTextureType2DArray) bound at texture binding at index 16 (expect MTLTextureType2D) for Adest[16].'

I have no clue what this error wants to tell me. Does anybody have any ideas?
Theoretically, I can also add a couple of images to a single MPSImage to support batch operation. Why do I need an MPSImageBatch, besides the fact that some methods take an MPSImageBatch parameter (while at the same time an equivalent method exists that takes an MPSImage)?
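To make the distinction concrete, these are the two variants I mean; the first packs several images into one MPSImage via numberOfImages, the second is just an array of MPSImages (device stands for the current MTLDevice, and the sizes are arbitrary):

// Variant 1: one MPSImage holding several images (numberOfImages > 1).
MPSImageDescriptor* packedDescriptor =
    [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat32
                                                   width:32
                                                  height:32
                                         featureChannels:3
                                          numberOfImages:64
                                                   usage:MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite];
MPSImage* packedImages = [[MPSImage alloc] initWithDevice:device
                                          imageDescriptor:packedDescriptor];

// Variant 2: an MPSImageBatch, which is merely an NSArray<MPSImage*>.
MPSImageDescriptor* singleDescriptor =
    [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat32
                                                   width:32
                                                  height:32
                                         featureChannels:3];
NSMutableArray<MPSImage*>* batch = [NSMutableArray array];
for (NSUInteger i = 0; i < 64; i++)
    [batch addObject:[[MPSImage alloc] initWithDevice:device
                                      imageDescriptor:singleDescriptor]];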
The MPSCNNBatchNormalizationDataSource's optional methods updateGammaAndBetaWithCommandBuffer:batchNormalizationState: and updateMeanAndVarianceWithCommandBuffer:batchNormalizationState: should be used to update the state. To calculate the new state values (beta, gamma, mean and variance), MPSCNNBatchNormalizationStatistics should be used, I suppose. Naturally, MPSCNNBatchNormalizationStatistics requires source images for the calculation of these new values (e.g. encodeBatchToCommandBuffer:sourceImages:batchNormalizationState:). The source images should be the input images of the batch normalization node. But how do I access the input of the batch normalization node when updateGammaAndBetaWithCommandBuffer:batchNormalizationState: is called? I could not find any way to access these data. Any clues?