I have some code that I have cobbled together without really understanding the details. It works great in debug builds, but fails if you build for release. I found this out when I released an update 😟. Can anyone explain what is going on?

extension NetService {
    var debugOnlyIPAddresses: [String] {
        var ipAddresses: [String] = []
        guard let addresses = addresses else {
            return []
        }
        for address in addresses {
            let data = address as NSData
            // castToCPointer() is a custom NSData extension (not shown) that
            // reinterprets the raw bytes as the requested type.
            let inetAddress: sockaddr_in = data.castToCPointer()
            if inetAddress.sin_family == __uint8_t(AF_INET) {
                if let ip = String(cString: inet_ntoa(inetAddress.sin_addr), encoding: .ascii) {
                    ipAddresses.append(ip)
                }
            } else if inetAddress.sin_family == __uint8_t(AF_INET6) {
                let inetAddress6: sockaddr_in6 = data.castToCPointer()
                let ipStringBuffer = UnsafeMutablePointer<CChar>.allocate(capacity: Int(INET6_ADDRSTRLEN))
                var addr = inetAddress6.sin6_addr
                if let ipString = inet_ntop(Int32(inetAddress6.sin6_family), &addr, ipStringBuffer, __uint32_t(INET6_ADDRSTRLEN)) {
                    if let ip = String(cString: ipString, encoding: .ascii) {
                        // IPv6
                        ipAddresses.append(ip)
                    }
                }
                ipStringBuffer.deallocate()
            }
        }
        print("returning \(ipAddresses)")
        return ipAddresses
    }
}

In release builds, inetAddress.sin_family is always 0.
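My current suspicion is that the reinterpreting cast in castToCPointer() is undefined behaviour that the optimizer is allowed to exploit. For what it's worth, this is the direction I'm experimenting with instead - a minimal sketch, where ipAddress(from:) is my own helper name, not an API:

import Foundation

// Sketch: copy the bytes into a sockaddr_storage rather than
// reinterpreting them in place, which should be well-defined
// regardless of optimization level.
func ipAddress(from address: Data) -> String? {
    var storage = sockaddr_storage()
    _ = withUnsafeMutableBytes(of: &storage) { address.copyBytes(to: $0) }

    var buffer = [CChar](repeating: 0, count: Int(INET6_ADDRSTRLEN))
    switch Int32(storage.ss_family) {
    case AF_INET:
        var sin = withUnsafeBytes(of: storage) { $0.load(as: sockaddr_in.self) }
        _ = inet_ntop(AF_INET, &sin.sin_addr, &buffer, socklen_t(INET6_ADDRSTRLEN))
    case AF_INET6:
        var sin6 = withUnsafeBytes(of: storage) { $0.load(as: sockaddr_in6.self) }
        _ = inet_ntop(AF_INET6, &sin6.sin6_addr, &buffer, socklen_t(INET6_ADDRSTRLEN))
    default:
        return nil
    }
    return String(cString: buffer)
}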
To be clear - I'm talking about the magically generated and updated PreviewProvider previews in Xcode. Let's say I have a handful set up for different devices: iPhone X, iPad Pro, iPhone SE.

Is there an easy way for me to take screenshots to send for review? Obviously I can scroll down the screen and do it manually - but I'm thinking of something that would output a screenshot for each device.

It seems like it _should_ be an obvious feature, but I can't find it!
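For reference, the previews are set up roughly like this (ContentView and the exact device strings are stand-ins for my real setup):

import SwiftUI

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        // One preview per device; Xcode renders each in the canvas.
        ForEach(["iPhone X", "iPad Pro (11-inch)", "iPhone SE (2nd generation)"], id: \.self) { deviceName in
            ContentView()
                .previewDevice(PreviewDevice(rawValue: deviceName))
                .previewDisplayName(deviceName)
        }
    }
}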
I was just rejected for the following:
Guideline 2.1 - Performance
We found that while you have submitted in-app purchase products for your app, it does not show the expiration date in the app after a subscription purchase is made.

However, searching section 2.1, I can't even find the letters 'exp' (looking for 'expires' or 'expiry').
As far as I can tell, this requirement isn't anywhere in the rules. It looks like App Review (or a reviewer) is just making things up.
How should we handle this?
I have a simple button style which animates the size of the button when it is pressed.
Unfortunately, this causes the position of the button to animate (which I don't want).
Is there a way to limit the animation to just the scale? I have tried surrounding the offending animation with .animation(nil) and .animation(.none) modifiers - but neither worked.
import SwiftUI

struct ExampleBackgroundStyle: ButtonStyle {
    func makeBody(configuration: Self.Configuration) -> some View {
        configuration.label
            .padding()
            .foregroundColor(.white)
            .background(Color.red)
            .cornerRadius(40)
            .padding(.horizontal, 20)
            .animation(nil)
            .scaleEffect(configuration.isPressed ? 0.6 : 1.0)
            .animation(.easeIn(duration: 0.3))
            .animation(nil)
    }
}
When I show a button in a sheet, it animates in from the top left (0,0) to its position in the centre of the sheet after the sheet appears.
If I remove the animation on the scaleEffect, then this doesn't happen.
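One thing I haven't tried yet is the value-scoped animation modifier (available from iOS 15 / macOS 12), which ties the animation to a specific value change rather than to everything upstream - a sketch of what I mean, not yet tested against the sheet issue:

import SwiftUI

struct ScopedScaleStyle: ButtonStyle {
    func makeBody(configuration: Self.Configuration) -> some View {
        configuration.label
            .padding()
            .foregroundColor(.white)
            .background(Color.red)
            .cornerRadius(40)
            .padding(.horizontal, 20)
            .scaleEffect(configuration.isPressed ? 0.6 : 1.0)
            // Animate only when isPressed changes, nothing else.
            .animation(.easeIn(duration: 0.3), value: configuration.isPressed)
    }
}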
I have a package which supports all platforms.
At the moment, if I want to check that I have got all my availability attributes right, I have to manually select Mac as the build target and build, then manually select TV and build, and so on.
Is there an easy way to say 'build for each supported platform and show me any errors'?
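For context, the package manifest declares every platform, roughly like this (the name and versions are illustrative):

// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "MyPackage",
    platforms: [
        .macOS(.v11), .iOS(.v14), .tvOS(.v14), .watchOS(.v7)
    ],
    targets: [
        .target(name: "MyPackage")
    ]
)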
I have a package that supports multiple platforms.
Is there a way to run the tests on all the platforms easily?
At the moment, I have to manually select each platform, run the tests, then select the next platform, etc...
The 2020 session 'Author fragmented MPEG-4 content with AVAssetWriter' refers to the 2011 session 'Working with media in AVFoundation'
(and when I say 'refers', I mean it says 'I'm not going to cover this important part - just look at the 2011 video')
However, that session seems to have been deleted.
Frustrating...
#wwdc20-10011
I have an existing unlisted app on macOS. I'd like to launch a tvOS version.
However, I can't figure out how users would actually install the app.
Unlike Mac/iOS, there isn't a browser for them to open the install link...
I'm running a webserver for a specific service on port 8900.
I'm using Telegraph to run the webserver, so that opens and claims the port.
I also want to advertise the service over Bonjour - ideally with the correct port.
This is trivial with NetService - but that's deprecated, so I should probably move to the Network framework.
I can advertise without specifying a port
listener = try NWListener(service: service, using: .tcp)
but then my service broadcasts addresses with port 61443.
I can advertise using
listener = try NWListener(using: .tcp, on: <myport>)
however, that fails in my use case because (unsurprisingly) the listener isn't able to bind the port (my server already has it).
Is this just a gap in the new API, or am I missing something?
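For concreteness, here are the two variants as a compilable sketch (the service name/type are placeholders):

import Network

let service = NWListener.Service(name: "MyService", type: "_myservice._tcp")

// Variant 1: advertises fine, but on a port the listener chooses itself.
let listener1 = try NWListener(service: service, using: .tcp)

// Variant 2: tries to bind port 8900 itself, which fails at start(queue:)
// because Telegraph already owns that port.
let listener2 = try NWListener(using: .tcp, on: 8900)
listener2.service = service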
Xcode has this 'special' habit of showing old errors on builds.
Sometimes from long ago.
Presumably there is a cache somewhere that isn't being invalidated.
Clearing the build folder doesn't fix this. Has anyone found a way that does?
I'm trying to run a coreML model.
This is an image classifier generated using:
let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    )
)

let model = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir.url), parameters: parameters)
I'm trying to run it with the new async Vision API:
import AppKit
import CoreML
import Vision

let model = try MLModel(contentsOf: modelUrl)
guard let modelContainer = try? CoreMLModelContainer(model: model) else {
    fatalError("The model is missing")
}
let request = CoreMLRequest(model: modelContainer)

let image = NSImage(named: "testImage")!
let cgImage = image.toCGImage()! // toCGImage() is a custom NSImage helper
let handler = ImageRequestHandler(cgImage)
do {
    let results = try await handler.perform(request)
    print(results)
} catch {
    print("Failed: \(error)")
}
This gives me:
Failed: internalError("Error Domain=com.apple.Vision Code=7 "The VNDetectorProcessOption_ScenePrints required option was not found" UserInfo={NSLocalizedDescription=The VNDetectorProcessOption_ScenePrints required option was not found}")
Please help! Am I missing something?
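In case it helps narrow things down, I'm also planning to try the older callback-style Vision path with the same model and image - a rough sketch:

import Vision

let vnModel = try VNCoreMLModel(for: model)
let vnRequest = VNCoreMLRequest(model: vnModel)
let vnHandler = VNImageRequestHandler(cgImage: cgImage)
try vnHandler.perform([vnRequest])
if let observations = vnRequest.results as? [VNClassificationObservation] {
    print(observations.map { "\($0.identifier): \($0.confidence)" })
}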
I'm hitting a limit when trying to train an Image Classifier.
It's at about 16k images (in line with the error info), and it gives this error:
IOSurface creation failed: e00002be parentID: 00000000 properties: {
    IOSurfaceAllocSize = 529984;
    IOSurfaceBytesPerElement = 4;
    IOSurfaceBytesPerRow = 1472;
    IOSurfaceElementHeight = 1;
    IOSurfaceElementWidth = 1;
    IOSurfaceHeight = 360;
    IOSurfaceName = CoreVideo;
    IOSurfaceOffset = 0;
    IOSurfacePixelFormat = 1111970369;
    IOSurfacePlaneComponentBitDepths = (
        8,
        8,
        8,
        8
    );
    IOSurfacePlaneComponentNames = (
        4,
        3,
        2,
        1
    );
    IOSurfacePlaneComponentRanges = (
        1,
        1,
        1,
        1
    );
    IOSurfacePurgeWhenNotInUse = 1;
    IOSurfaceSubsampling = 1;
    IOSurfaceWidth = 360;
} (likely per client IOSurface limit of 16384 reached)
I feel like I was able to use more images than this before upgrading to Sonoma - but I don't have the receipts....
Is there a way around this?
I have oodles of spare memory on my machine - it's only using about 16 GB of 64 when it crashes...
The code to create the model is:
let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    )
)

let model = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir.url), parameters: parameters)
I have also tried the same training source in CreateML; it runs through 'extracting features' and crashes at about 16k images processed.
Thank you