Since Xcode 16, sorting source code files in the navigator in ascending order by name results in:
A.hpp
A.cpp
B.hpp
B.cpp
...
Previous versions of Xcode sorted the files correctly (in ascending order):
A.cpp
A.hpp
B.cpp
B.hpp
Is this a bug or is there any parameter I have to set to get the old ordering back?
How can I copy a SIMD float4x4 matrix to a vertex shader using the setVertexBytes:length:atIndex: message of a render command encoder?
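For a 4x4 float matrix this should be straightforward, since setVertexBytes is intended for small transient data (under 4 KB) and sizeof(simd::float4x4) is only 64 bytes: pass the address of the matrix, its size, and the index matching the [[buffer(n)]] attribute in the vertex shader. The sketch below only illustrates the byte-copy semantics; Float4x4 and MockEncoder are hypothetical stand-ins for simd::float4x4 and MTL::RenderCommandEncoder, not the real Metal types.

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical stand-in for simd::float4x4: 16 contiguous floats (64 bytes).
struct Float4x4 { float columns[4][4]; };

// Hypothetical stand-in for MTL::RenderCommandEncoder::setVertexBytes:
// the real call copies `length` bytes into a transient buffer that the
// vertex shader sees at [[buffer(index)]].
struct MockEncoder {
    unsigned char storage[64];
    std::size_t boundIndex = 0;
    void setVertexBytes(const void* bytes, std::size_t length, std::size_t index) {
        boundIndex = index;
        std::memcpy(storage, bytes, length);
    }
};
```

With metal-cpp the corresponding real call would look like `pEncoder->setVertexBytes(&mvp, sizeof(simd::float4x4), 1)`, matching `constant float4x4& mvp [[buffer(1)]]` on the shader side.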
The examples in LearnMetalCpp do something like this in the renderer's draw method:
1. select a frame slot for the instance buffer
2. create a new command buffer
3. call dispatch_semaphore_wait to make sure that the selected instance buffer is no longer in use by Metal
4. register dispatch_semaphore_signal in the command buffer's completed handler
5. modify the instance buffer and some other Metal-related variables
6. commit the command buffer
Depending on the number of frames in flight and the complexity of the scene, there can be a couple of frames in the command queue. Suppose the window (and its associated view) is closed while Metal is still working on items in the command queue. This means that the semaphore is still in use.
How is it guaranteed that the semaphore is destroyed after the view is closed? Or is it guaranteed that the view's destructor is only called after the command queue has finished all its work?
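One common pattern is to drain the semaphore before destroying the renderer: wait once per in-flight frame, which blocks until every pending completed handler has signalled. Below is a portable sketch with a hand-rolled Semaphore standing in for dispatch_semaphore_t (the real code would keep using dispatch_semaphore_wait with DISPATCH_TIME_FOREVER).

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Portable stand-in for dispatch_semaphore_t (hypothetical helper).
class Semaphore {
public:
    explicit Semaphore(int count) : count_(count) {}
    void wait() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [&] { return count_ > 0; });
        --count_;
    }
    void signal() {
        { std::lock_guard<std::mutex> lock(mutex_); ++count_; }
        cv_.notify_one();
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    int count_;
};

constexpr int kMaxFramesInFlight = 3;

// Teardown sketch: re-acquire every in-flight slot before destroying the
// renderer. This returns only after each pending command buffer's completed
// handler has signalled, so nothing references the semaphore afterwards.
void drainBeforeTeardown(Semaphore& frameSemaphore) {
    for (int i = 0; i < kMaxFramesInFlight; ++i)
        frameSemaphore.wait();
}
```

Alternatively, one could call waitUntilCompleted on the last committed command buffer before releasing the renderer's resources; either way, teardown must not free the semaphore while a completed handler can still signal it.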
Is there any way to tell Xcode to include header files with a .hpp suffix in the autocompletion for include paths?
Assume I have a WKInterfaceGroup object that is used to determine the position of a child object, e.g. a label or an image.
Is it possible to let the label or image draw beyond the bounds of the group object?
Hi,
is there any way to activate the include header path autocomplete functionality again in Xcode 13 for C++, or has this feature been abandoned?
Regards,
Hartwig
In the WWDC video it was shown that a GeLU function can easily be implemented using MPSGraph. Can this function also be used for training? And if so, how?
Where can I find any documentation about MPSGraph besides the WWDC sample code?
Consider the following code:

#include <chrono>
#include <iostream>
#include <random>
#include <vector>

class TestClass
{
public:
    int A = 0;
    int B = 4;
protected:
private:
};

int main(int argc, const char* argv[])
{
    std::random_device randomDevice;
    std::mt19937 mersenneTwister(randomDevice());
    std::uniform_int_distribution<size_t> distribution(1, 255);
    for (size_t i = 0; i < 10000000; ++i)
    {
        size_t const vectorSize = distribution(mersenneTwister) + 1;
        TestClass* testVector(reinterpret_cast<TestClass*>(malloc(vectorSize * sizeof(TestClass))));
        if (testVector[0].A == 0x0ffeefed)
        {
            std::cout << "Sorry value hit." << std::endl;
            break;
        } /* if */
        free(testVector);
    } /* for */
    return 0;
}
Clang completely removes the for-loop at optimisation level -O3. I am a bit surprised. Although testVector will contain only garbage, I expected the loop not to be removed (no warning was issued either; only the analyser detected that testVector contains garbage).
If I add a line assigning a value to a random element of testVector, the loop is not removed.
PS: I wanted to use the loop for testing the execution speed of malloc and free.
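Reading uninitialised malloc'ed memory is undefined behaviour, so the optimiser is allowed to assume the branch never matters and delete the whole loop. A sketch of a malloc/free timing loop without that UB: write one byte first, and route the read through a volatile object, which is an observable side effect the compiler must preserve.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdlib>

// Volatile sink: writes to a volatile object are observable side effects,
// so the allocations feeding it cannot be optimised away at -O3.
volatile unsigned long gSink = 0;

// Times `iterations` malloc/free pairs and returns the elapsed milliseconds.
double timeMallocFree(std::size_t iterations) {
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < iterations; ++i) {
        std::size_t size = (i % 255) + 2;                 // 2..256 bytes
        unsigned char* p = static_cast<unsigned char*>(std::malloc(size));
        if (p == nullptr) break;
        p[0] = static_cast<unsigned char>(i);             // defined write
        gSink = gSink + p[0];                             // observable read
        std::free(p);
    }
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```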
Suppose the directory structure of my project is:
Apps.xcworkspace
App A (folder)
    A.xcodeproj
    ..
App B (folder)
    B.xcodeproj
    ..
Both the A and B projects should use versioning. Unfortunately, this does not work out of the box.
Only when changing the directory structure to:
Apps.xcworkspace
A.xcodeproj
App A (folder)
    ..
B.xcodeproj
App B (folder)
    ..
automatic version incrementing works.
Is there a way to make the first layout work as well?
PS: I am using a script that is called at the end of a successful build and contains only one line:
`xcrun agvtool next-version -all`
PPS: The man page mentions that agvtool must be run in the folder where the project resides, but how can I do this for each project in the workspace?
Is this the right method (see below) to update means and variances in the callback updateMeanAndVarianceWithCommandBuffer:batchNormalizationState:?
- (MPSCNNNormalizationMeanAndVarianceState*)updateMeanAndVarianceWithCommandBuffer:(id<MTLCommandBuffer>)commandBuffer
                                                           batchNormalizationState:(MPSCNNBatchNormalizationState*)batchNormalizationState
{
    MPSVector* determinedMeans =
        [[MPSVector alloc] initWithBuffer:[batchNormalizationState mean]
                               descriptor:[MPSVectorDescriptor vectorDescriptorWithLength:[self featureChannels]
                                                                                 dataType:[self dataType]]];
    MPSVector* determinedVariances =
        [[MPSVector alloc] initWithBuffer:[batchNormalizationState variance]
                               descriptor:[MPSVectorDescriptor vectorDescriptorWithLength:[self featureChannels]
                                                                                 dataType:[self dataType]]];
    [[self meansOptimizer] encodeToCommandBuffer:commandBuffer
                             inputGradientVector:determinedMeans
                               inputValuesVector:[self meansVector]
                             inputMomentumVector:nil
                              resultValuesVector:[self meansVector]];
    [[self variancesOptimizer] encodeToCommandBuffer:commandBuffer
                                 inputGradientVector:determinedVariances
                                   inputValuesVector:[self variancesVector]
                                 inputMomentumVector:nil
                                  resultValuesVector:[self variancesVector]];
    [batchNormalizationState setReadCount:[batchNormalizationState readCount] - 1];
    return [self meanAndVarianceState];
}
The means and variances optimisers are initialised like:
_meansOptimizer = [[MPSNNOptimizerStochasticGradientDescent alloc]
        initWithDevice:_device
         momentumScale:0.0
    useNestrovMomentum:NO
   optimizerDescriptor:[MPSNNOptimizerDescriptor optimizerDescriptorWithLearningRate:-0.1
                                                                     gradientRescale:1.0f
                                                                  regularizationType:MPSNNRegularizationTypeL2
                                                                 regularizationScale:-1.0f]];
_variancesOptimizer = [[MPSNNOptimizerStochasticGradientDescent alloc]
        initWithDevice:_device
         momentumScale:0.0
    useNestrovMomentum:NO
   optimizerDescriptor:[MPSNNOptimizerDescriptor optimizerDescriptorWithLearningRate:-0.1
                                                                     gradientRescale:1.0f
                                                                  regularizationType:MPSNNRegularizationTypeL2
                                                                 regularizationScale:-1.0f]];
By using this method as in turicreate on GitHub - https://github.com/apple/turicreate/blob/master/src/ml/neural_net/mps_weight.mm - the callback does not crash anymore, but I am not sure whether this is correct. In particular, the read count has to be decremented manually; is this OK?
PS: [self meansVector] and [self variancesVector] return MPSVector objects.
PPS: [self dataType] returns MPSDataTypeFloat32.
Assume I have a node with w x h x channel == 2x2x2 with elements [row1column1channel1, row1column2channel1, row2column1channel1, row2column2channel1],[row1column1channel2, row1column2channel2, row2column1channel2, row2column2channel2] (w, h, channel ordering). Now, I am reshaping (flattening) the data to 8x1x1 by using a reshaping node.
How is the reshaping done (what is the order of the elements)? Is there a possibility to manipulate the order of the elements by using the MPSReshapeNode?
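Without asserting how MPS lays the data out internally, a reshape in most frameworks only reinterprets the extents over the same linear element order; it never moves elements. Changing the logical order requires a transpose/permute, not a reshape. A small C++ sketch of that distinction, assuming plain C-order storage with the channel index varying slowest (an assumption for illustration, not MPS's documented layout):

```cpp
#include <array>
#include <cstddef>

// 2x2x2 volume stored channel-major (channel, row, column) in C order.
using Volume = std::array<int, 8>;

// Reshape 2x2x2 -> 8x1x1: a pure reinterpretation; element i stays at offset i.
Volume reshapeToFlat(const Volume& v) { return v; }

// Permute (channel, row, col) -> (row, col, channel): elements actually move.
Volume permuteChannelsLast(const Volume& v) {
    Volume out{};
    for (std::size_t c = 0; c < 2; ++c)
        for (std::size_t r = 0; r < 2; ++r)
            for (std::size_t col = 0; col < 2; ++col)
                out[(r * 2 + col) * 2 + c] = v[(c * 2 + r) * 2 + col];
    return out;
}
```

So if MPSReshapeNode behaves like a standard reshape, it cannot reorder the elements on its own; a separate permutation step would be needed.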
Assume that I have a CNNNode with the dimensions h1xw1xchannel1 (kernel sizes do not matter). This node is "converted" by a fully connected node to the dimensions 1x1xchannel2. The condition is that h1xw1xchannel1 >> channel2. Afterwards, there are some batch normalizations, ReLU and one or two more fully connected layers.
Is there any performance benefit to reshaping from 1x1xchannel2 to channel2x1x1, or doesn't it matter at all?
Is there a memory benefit?
What is the maximum global variable size or memory space that iOS can handle? Example:
static uint8_t g_Variable[???];
It seems that a bit more than 2 MB is too large. Is there any documentation? Or does this limit also depend on the device?
The MPSCNNBatchNormalizationDataSource's optional methods updateGammaAndBetaWithCommandBuffer:batchNormalizationState: and updateMeanAndVarianceWithCommandBuffer:batchNormalizationState: should be used to update the state. To calculate the new state values (beta, gamma, mean and variance), MPSBatchNormalizationStatistics should be used, I suppose.
Naturally, MPSBatchNormalizationStatistics requires "source images" for the calculation of these new values (e.g. encodeBatchToCommandBuffer:sourceImage:batchNormalizationState:). The source images should be the input images of the batch normalization node. But how do I access the input of the batch normalization node when updateGammaAndBetaWithCommandBuffer:batchNormalizationState: is called? I could not find any way to access these data. Any clues?