I have a Swift project with some C code in it. The C code creates a byte array with about 600K elements. Compiling under Xcode, the compilation takes a really long time. When I try to run the code, it fails immediately upon startup. When I cut this large array out of the build, everything else works fine. Does anyone know what's going on here, and what I might do about it?
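For what it's worth, two things to check: a ~600K-element initializer list can blow up the C compile time, and if the array is declared as a local variable it is built on the stack, which can overflow at launch. Declaring it static const at file scope keeps it out of the stack, and moving the data out of the source entirely avoids the compile-time cost. A minimal Swift sketch of the latter, assuming the table can be shipped as a bundled resource ("ByteTable.bin" is a hypothetical name):

import Foundation

// Load the table once at startup instead of compiling it into the binary.
func loadByteTable() throws -> [UInt8] {
    guard let url = Bundle.main.url(forResource: "ByteTable", withExtension: "bin") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return [UInt8](try Data(contentsOf: url))
}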
Is the timeout for session-level authentication challenge handling documented somewhere? For example, if I get the urlSession(_:didReceive:) callback for server trust authentication, how long do I have to invoke the completion handler (or return from the callback if using Swift Concurrency)?
Or is this completely dependent on the server's settings?
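For reference, a minimal sketch of the callback in question; the delegate class is hypothetical, and accepting the server trust unconditionally is for illustration only:

import Foundation

final class SessionDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust else {
            completionHandler(.performDefaultHandling, nil)
            return
        }
        // If trust evaluation runs asynchronously (another queue, fetching
        // pins over the network, etc.), how long this call may be deferred
        // is exactly the question above.
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}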
I'm encountering an issue with the barcode reader on my iPad 6th generation running iOS 17.4.1. Specifically, when I attempt to use the barcode reader in landscape mode, I do not receive any output or response. However, when I rotate my iPad to portrait mode, the barcode is successfully scanned.
I've tried restarting my iPad, checking for software updates, and adjusting the settings within the barcode scanning app, but the issue persists. I've also tested with different barcode scanning apps, and the problem remains consistent across apps.
This issue seems to be specific to my iPad model and iOS version, as I haven't encountered it on other devices or with previous iOS versions.
Has anyone else experienced a similar issue with barcode scanning in landscape mode on the iPad 6th generation running iOS 17.4.1? Are there any known solutions or workarounds for this problem?
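In case the scanner is your own AVFoundation pipeline (an assumption; this won't help with third-party apps), a stale video orientation on the capture connection is a common cause of landscape scans failing while portrait works. A sketch of keeping it in sync on rotation:

import AVFoundation
import UIKit

// Call this when the interface rotates; note that UIDevice landscapeLeft
// maps to AVCaptureVideoOrientation.landscapeRight, and vice versa.
func updateOrientation(of connection: AVCaptureConnection) {
    guard connection.isVideoOrientationSupported else { return }
    switch UIDevice.current.orientation {
    case .landscapeLeft:      connection.videoOrientation = .landscapeRight
    case .landscapeRight:     connection.videoOrientation = .landscapeLeft
    case .portraitUpsideDown: connection.videoOrientation = .portraitUpsideDown
    default:                  connection.videoOrientation = .portrait
    }
}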
Hi,
I am running into an error on Xcode 15 (iOS 17+). When I try to play an iframe in the app, I see this error pop up:
Warning: -[BETextInput attributedMarkedText] is unimplemented
Failed to request allowed query parameters from WebPrivacy.
How do I fix this issue? I never saw this before, so I am sure it is new. The app used to run fine as well.
In this code, I aim to let users select an image from their phone's gallery and display it at reduced opacity on top of the camera feed (highest in the z-order). The selected image should appear above the user's camera feed, so they can see both the canvas they are drawing on and the low-opacity image. The app's purpose is to let users trace an image on the canvas while simultaneously seeing the camera feed.
CameraView.swift
import SwiftUI
import AVFoundation

struct CameraView: View {
    let selectedImage: UIImage

    var body: some View {
        ZStack {
            CameraPreview()
            Image(uiImage: selectedImage)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .opacity(0.5) // Adjust the opacity as needed
                .edgesIgnoringSafeArea(.all)
        }
    }
}

struct CameraPreview: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        CameraPreviewView()
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()
    private var previewLayer: AVCaptureVideoPreviewLayer?

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupCamera()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        // Keep the preview layer sized to the view; bounds is usually
        // still .zero at init time.
        previewLayer?.frame = bounds
    }

    private func setupCamera() {
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access camera")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = .resizeAspectFill
                layer.addSublayer(previewLayer)
                self.previewLayer = previewLayer
                // startRunning() blocks, so keep it off the main thread.
                DispatchQueue.global(qos: .userInitiated).async { [weak self] in
                    self?.captureSession.startRunning()
                }
            }
        } catch {
            print("Error setting up camera input:", error.localizedDescription)
        }
    }
}
Thanks for your help and your time.
As an exercise in learning Swift, I rewrote a toy C++ command line tool in Swift. After switching to an UnsafeRawBufferPointer in a critical part of the code, the Release build of the Swift version was a little faster than the Release build of the C++ version. But the Debug build took around 700 times as long. I expect a Debug build to be somewhat slower, but by that much?
Here's the critical part of the code, a function that gets called many thousands of times. The two string parameters are always 5-letter words in plain ASCII (it's related to Wordle). By the way, if I change the loop ranges from 0..<5 to [0,1,2,3,4], then it runs about twice as fast in Debug, but twice as slow in Release.
func Score( trial: String, target: String ) -> Int
{
    var score = 0
    // Note: withUnsafeBytes(of:) exposes the raw bytes of the String.UTF8View
    // struct itself, not necessarily the string's UTF-8 contents. This appears
    // to work here only because short ASCII strings use Swift's inline
    // "small string" representation, so it relies on an implementation detail.
    withUnsafeBytes(of: trial.utf8) { rawTrial in
        withUnsafeBytes(of: target.utf8) { rawTarget in
            for i in 0..<5
            {
                let trial_i = rawTrial[i]
                if trial_i == rawTarget[i] // strong hit
                {
                    score += kStrongScore
                }
                else // check for weak hit
                {
                    for j in 0..<5
                    {
                        if j != i
                        {
                            let target_j = rawTarget[j]
                            if (trial_i == target_j) &&
                               (rawTrial[j] != target_j)
                            {
                                score += kWeakScore
                                break
                            }
                        }
                    }
                }
            }
        }
    }
    return score
}
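As an aside, a variant that reads the actual UTF-8 bytes through String.withUTF8 instead of depending on that struct layout might look like this; the kStrongScore and kWeakScore values are placeholders, since the originals aren't shown:

let kStrongScore = 2 // placeholder value
let kWeakScore = 1   // placeholder value

func score(trial: String, target: String) -> Int {
    var trial = trial, target = target // withUTF8 is mutating
    var score = 0
    trial.withUTF8 { rawTrial in
        target.withUTF8 { rawTarget in
            for i in 0..<5 {
                let trial_i = rawTrial[i]
                if trial_i == rawTarget[i] { // strong hit
                    score += kStrongScore
                } else { // check for weak hit
                    for j in 0..<5 where j != i {
                        let target_j = rawTarget[j]
                        if trial_i == target_j && rawTrial[j] != target_j {
                            score += kWeakScore
                            break
                        }
                    }
                }
            }
        }
    }
    return score
}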
Hello,
I have created a Neural Network → k-Nearest Neighbors classifier with Python.
# A SqueezeNet feature extractor,
# followed by k-Nearest Neighbors for classification.
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
import copy
# Take the SqueezeNet feature extractor from the Turi Create model.
base_model = coremltools.models.MLModel("SqueezeNet.mlmodel")
base_spec = base_model._spec
layers = copy.deepcopy(base_spec.neuralNetworkClassifier.layers)
# Delete the softmax and innerProduct layers. The new last layer is
# a "flatten" layer that outputs a 1000-element vector.
del layers[-1]
del layers[-1]
preprocessing = base_spec.neuralNetworkClassifier.preprocessing
# The Turi Create model is a classifier, which is treated as a special
# model type in Core ML. But we need a general-purpose neural network.
del base_spec.neuralNetworkClassifier.layers[:]
base_spec.neuralNetwork.layers.extend(layers)
# Also copy over the image preprocessing options.
base_spec.neuralNetwork.preprocessing.extend(preprocessing)
# Remove other classifier stuff.
base_spec.description.ClearField("metadata")
base_spec.description.ClearField("predictedFeatureName")
base_spec.description.ClearField("predictedProbabilitiesName")
# Remove the old classifier outputs.
del base_spec.description.output[:]
# Add a new output for the feature vector.
output = base_spec.description.output.add()
output.name = "features"
output.type.multiArrayType.shape.append(1000)
output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32
# Connect the last layer to this new output.
base_spec.neuralNetwork.layers[-1].output[0] = "features"
# Create the k-NN model.
knn_builder = KNearestNeighborsClassifierBuilder(input_name="features",
                                                 output_name="label",
                                                 number_of_dimensions=1000,
                                                 default_class_label="???",
                                                 number_of_neighbors=3,
                                                 weighting_scheme="inverse_distance",
                                                 index_type="linear")
knn_spec = knn_builder.spec
knn_spec.description.input[0].shortDescription = "Input vector"
knn_spec.description.output[0].shortDescription = "Predicted label"
knn_spec.description.output[1].shortDescription = "Probabilities for each possible label"
knn_builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))
# Use the same name as in the neural network models, so that we
# can use the same code for evaluating both types of model.
knn_spec.description.predictedProbabilitiesName = "labelProbability"
knn_spec.description.output[1].name = knn_spec.description.predictedProbabilitiesName
# Put it all together into a pipeline.
pipeline_spec = coremltools.proto.Model_pb2.Model()
pipeline_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION
pipeline_spec.isUpdatable = True
pipeline_spec.description.input.extend(base_spec.description.input[:])
pipeline_spec.description.output.extend(knn_spec.description.output[:])
pipeline_spec.description.predictedFeatureName = knn_spec.description.predictedFeatureName
pipeline_spec.description.predictedProbabilitiesName = knn_spec.description.predictedProbabilitiesName
# Add inputs for training.
pipeline_spec.description.trainingInput.extend([base_spec.description.input[0]])
pipeline_spec.description.trainingInput[0].shortDescription = "Example image"
pipeline_spec.description.trainingInput.extend([knn_spec.description.trainingInput[1]])
pipeline_spec.description.trainingInput[1].shortDescription = "True label"
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(base_spec)
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(knn_spec)
pipeline_spec.pipelineClassifier.pipeline.names.extend(["FeatureExtractor", "kNNClassifier"])
coremltools.utils.save_spec(pipeline_spec, "../Models/FaceDetection.mlmodel")
It is from the following tutorial: https://machinethink.net/blog/coreml-training-part3/
It works and I was able to include it in my project.
I want to train the model via an MLUpdateTask:
var batchInputs: [MLFeatureProvider] = []
let imageConstraint = model.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint
let imageOptions: [MLFeatureValue.ImageOption: Any] = [
    .cropAndScale: VNImageCropAndScaleOption.scaleFill.rawValue]
// URLs where the images are stored
let trainingData = ImageManager.getImagesAndLabel()
for data in trainingData {
    let label = data.key
    for imgURL in data.value {
        let featureValue = try MLFeatureValue(imageAt: imgURL, constraint: imageConstraint!, options: imageOptions)
        if let pixelBuffer = featureValue.imageBufferValue {
            let featureProvider = FaceDetectionTrainingInput(image: pixelBuffer, label: label)
            batchInputs.append(featureProvider)
        }
    }
}
// Renamed from a second `trainingData` to avoid redeclaring the name above.
let trainingBatch = MLArrayBatchProvider(array: batchInputs)
When calling the MLUpdateTask as follows, the context.model from the completionHandler is nil.
Unfortunately there is no other information available from the compiler.
do {
    debugPrint(context)
    try context.model.write(to: ModelManager.targetURL)
} catch {
    debugPrint("Error saving the model \(error)")
}
})
updateTask.resume()
I get the following error when I want to access the context.model: Thread 5: EXC_BAD_ACCESS (code=1, address=0x0)
Can someone more experienced tell me how to fix this?
It seems like I am missing some parameters?
I am currently not splitting the data into training and test sets; the only preprocessing I'm doing is scaling the images down to 227x227 pixels.
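For comparison, a hedged sketch of the full update call, using the trainingBatch built above; ModelManager.compiledModelURL is a hypothetical URL that must point at the compiled, updatable .mlmodelc (passing a wrong URL, or touching context.model after a failed update, are common causes of crashes like this):

import CoreML

let updateTask = try MLUpdateTask(
    forModelAt: ModelManager.compiledModelURL, // hypothetical; must be the compiled .mlmodelc
    trainingData: trainingBatch,
    configuration: nil,
    completionHandler: { context in
        // Only touch context.model if the task actually completed.
        guard context.task.state == .completed else {
            debugPrint("Update did not complete: \(String(describing: context.task.error))")
            return
        }
        do {
            try context.model.write(to: ModelManager.targetURL)
        } catch {
            debugPrint("Error saving the model \(error)")
        }
    }
)
updateTask.resume()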
Thanks!
I want to add this button, with some spacing, to a tab bar. I don't know what this button is called or how to add it in a Vision Pro app.
I've found a strange leak that looks like a bug.
When we open two sheets (or fullScreenCovers) and the last one contains a TextField, then after closing both, the @StateObject property is not released.
If you delete the TextField, there is no memory leak.
Memory is released correctly on iOS 16 built with Xcode 15 (simulators).
Memory leaks and is not released on iOS 17 built with Xcode 15 (simulators, and a device on 17.4.1).
import SwiftUI

struct ContentView: View {
    @State var isFirstOpen: Bool = false

    var body: some View {
        Button("Open first") {
            isFirstOpen = true
        }
        .sheet(isPresented: $isFirstOpen) {
            FirstView()
        }
    }
}

struct FirstView: View {
    @StateObject var viewModel = LeakedViewModel()

    var body: some View {
        ZStack {
            Button("Open second") {
                viewModel.isSecondOpen = true
            }
        }
        .sheet(isPresented: $viewModel.isSecondOpen) {
            SecondView(onClose: {
                viewModel.isSecondOpen = false
            })
        }
    }
}

final class LeakedViewModel: ObservableObject {
    @Published var isSecondOpen: Bool = false
    init() { print("LeakedViewModel init") }
    deinit { print("LeakedViewModel deinit") }
}

struct SecondView: View {
    @State private var text: String = ""
    private let onClose: () -> Void

    init(onClose: @escaping () -> Void) {
        self.onClose = onClose
    }

    var body: some View {
        Button("Close second") {
            onClose()
        }
        TextField("text: $text", text: $text)
        // Comment out the TextField and the leak disappears; viewModel deinit is called
    }
}

@main
struct LeaksApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
May be related to https://forums.developer.apple.com/forums/thread/738840
Hi, I'm wondering if iOS supports WebTransport (HTTP/3) yet?
If so, where can I find information on implementing it in my app?
I'm constructing the Dynamic Island's detail content:
dynamicIsland: { context in
    DynamicIsland {
        expandedContent(context: context)
    } compactLeading: {
        ....
    } compactTrailing: {
        ...
    }
}
I want to show different content based on the context:
private func expandedContent(context: ActivityViewContext<xxxx>) -> DynamicIslandExpandedContent<some View> {
    if (context.state.style == 0) {
        return expandedControlContent1(context: context)
    } else if (context.state.style == 1) {
        return expandedControlContent2(context: context)
    } else {
        return expandedControlContent3(context: context)
    }
}
This fails to compile with the error:
Function declares an opaque return type 'some View', but the return statements in its body do not have matching underlying types
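For what it's worth, the usual way around this is to give the function a single underlying type by branching inside a @ViewBuilder context and annotating the helper with @DynamicIslandExpandedContentBuilder. A sketch, assuming a hypothetical MyAttributes activity type and hypothetical ControlContent views, and that all three variants can live in one expanded region:

import ActivityKit
import SwiftUI
import WidgetKit

@DynamicIslandExpandedContentBuilder
private func expandedContent(context: ActivityViewContext<MyAttributes>) -> DynamicIslandExpandedContent<some View> {
    DynamicIslandExpandedRegion(.center) {
        // The branching happens inside the region's @ViewBuilder closure,
        // so the function body has one concrete return type.
        switch context.state.style {
        case 0:  ControlContent1(context: context)
        case 1:  ControlContent2(context: context)
        default: ControlContent3(context: context)
        }
    }
}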
Hi. I plan to use a WebView in an iOS app (Swift), and it will run a web app with WASM that uses IndexedDB for permanent credentials.
I found rumors and information about Apple deleting data in IndexedDB and localStorage after 7 days (see links below), but I found no official information that tells me whether this is true for a WebView in an ordinary mobile app (not a PWA).
A test cycle over a week to find out is hard to do...
Is there any reliable and clear information on this and am I affected?
Thank you!
Links about this topic:
https://news.ycombinator.com/item?id=28158407
https://www.reddit.com/r/javascript/comments/foqxp9/webkit_will_delete_all_local_storage_including/
https://searchengineland.com/what-safaris-7-day-cap-on-script-writeable-storage-means-for-pwa-developers-332519
Hello everyone,
So I will start off by saying I am a very amateur developer with some experience, mostly in C++. Over the summer I want to build an app similar to a board game and launch it on the App Store for me and my friends to play when we don't have the game's physical board. Basically, one person hosts a "game" while everyone else joins through a code or something like that (maybe there's an easier way if you know everyone will be playing in person with each other). Once a game begins, I want cards to show up on people's screens, and that's it; no fancy graphics or anything like that.
So, to the root of my issue. I am brand new to Swift and Xcode. I began googling and tinkering with it and made a little app where a user can add names and then pick letters from the names to display (very very basic stuff). I also figured out how to import and manipulate images a little bit. My question is about the process of making a game, connecting it to GameKit/Game Center, and then how to actually launch it on the App Store so my friends can also download it.
If anyone has any resources they particularly found useful when starting out using Swift, please let me know. I really really don't like reading straight from the documentation (although who does honestly). Anything helps!! Thank you!
Hello everyone,
I want to use some classes to manage information fetched from a Firestore database. My idea is to have different classes for the different states of the information. The information that is common across the states should live in the same properties, so it made sense to me to have a superclass that stores the main information and to derive a subclass with extra properties for the additional information.
My question is how to define the initializer properly, so that I can store the data fetched from Firestore in one go without any loss.
Superclass (I reduced it to a minimum, just to show my principal problem):
class GameInfo: Codable, Identifiable {
    @DocumentID var id: String? // -> UUID of the Firestore document

    var league: String
    var homeTeam: String
    var guestTeam: String

    enum CodingKeys: String, CodingKey {
        case league
        case homeTeam
        case guestTeam
    }

    init(league: String, homeTeam: String, guestTeam: String) {
        self.league = league
        self.homeTeam = homeTeam
        self.guestTeam = guestTeam
    }
}
The subclass should contain the GameInfo properties and some others:
class Game: GameInfo {
    var startTime: Date?

    enum CodingKeys: String, CodingKey {
        case startTime
    }

    init(league: String, homeTeam: String, guestTeam: String, startTime: Date) {
        self.startTime = startTime
        super.init(league: league, homeTeam: homeTeam, guestTeam: guestTeam)
    }

    required init(from decoder: any Decoder) throws {
        let values = try decoder.container(keyedBy: CodingKeys.self)
        self.startTime = try values.decodeIfPresent(Date.self, forKey: .startTime)
        super.init(league: "", homeTeam: "", guestTeam: "")
    }
}
With the required init method, the information gets decoded and stored (the data from Firestore contains the league, homeTeam, guestTeam, and startTime values). The super.init() call as defined results in empty strings. What I want is for the league, homeTeam, and guestTeam values to also be decoded from the Firestore data, but I don't know how. If I use the code
super.init(league: league, homeTeam: homeTeam, guestTeam: guestTeam)
within the required init(), then I get the compiler error message
'self' used in property access 'league' before 'super.init' call
What is wrong in my thinking? Any help is appreciated.
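For what it's worth, one common pattern is to decode the subclass keys first and then hand the same decoder to the superclass, which decodes its own CodingKeys; no property access on self is needed before super.init. A minimal sketch under that assumption (Firestore specifics like @DocumentID omitted):

import Foundation

class GameInfo: Codable {
    var league: String
    var homeTeam: String
    var guestTeam: String

    init(league: String, homeTeam: String, guestTeam: String) {
        self.league = league
        self.homeTeam = homeTeam
        self.guestTeam = guestTeam
    }
}

class Game: GameInfo {
    var startTime: Date?

    enum CodingKeys: String, CodingKey {
        case startTime
    }

    required init(from decoder: Decoder) throws {
        let values = try decoder.container(keyedBy: CodingKeys.self)
        startTime = try values.decodeIfPresent(Date.self, forKey: .startTime)
        // The superclass decodes league/homeTeam/guestTeam from the same decoder.
        try super.init(from: decoder)
    }
}

// Usage: all four values come from one document.
let json = #"{"league": "A", "homeTeam": "X", "guestTeam": "Y", "startTime": 0}"#
let game = try JSONDecoder().decode(Game.self, from: Data(json.utf8))

Note that if you also encode Game, you'll want the mirror-image encode(to:) override that calls super.encode(to:).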
Thanks and best regards
Peter
Every time I dismiss an ImmersiveSpace with the progressive ImmersionStyle and open another one, I get about 30-40% immersion. Can immersion be set to default to 100% in progressive mode?
We've been working on a SwiftUI app that randomly crashes with an exception. When navigating from one view to another, a rare exception is thrown, maybe once every 200 times or so:
<SwiftUI.UIKitNavigationController: 0x109888400> is pushing the same view controller instance (<_TtGC7SwiftUI32NavigationStackHostingControllerVS_7AnyView_: 0x10792d400>) more than once which is not supported and is most likely an error in the application : com.<companyName>.<appName>
We haven't coded anything to directly push an instance of a view controller outside of what SwiftUI is doing. It seems to happen when the user taps on a NavigationLink view. It happens both in simulator and on device. Does anyone know what might cause this?
Hi,
We are getting the error "Sign-Up not completed" with Sign in with Apple. It works fine for our old apps and old bundle IDs, but it's not working in new apps with new bundle IDs.
We checked with other Apple Developer team accounts, and Sign in with Apple works there with the same source code; only my team account gets the error.
We enabled the signing capabilities, added Sign in with Apple, and added the provisioning profile certificate as well, but I am still getting the same error.
Hi, I'm trying to build an app to connect to both BR/EDR ("Classic") or BLE devices.
For Classic, the recommended flow in Apple docs is:
Start your app and initialize your CBCentralManager
Pair the phone and the device manually through settings
This should automatically fire the connectionEventDidOccur callback.
From then on it's up to the developer to choose what to do with the information from the callback (peripheral, event, etc.), but the callback is simply never fired.
Here's my basic implementation of the callback:
func centralManager(_ central: CBCentralManager, connectionEventDidOccur event: CBConnectionEvent, for peripheral: CBPeripheral) {
    print("IOS: connectionEventDidOccur for peripheral: \(peripheral)")
    if event == .peerConnected {
        print("IOS: Case is peer connected")
        connectToPeripheral(peripheral.identifier.uuidString)
        if !savedPeripheralList.contains(where: { $0.identifier == peripheral.identifier }) {
            savedPeripheralList.append(peripheral)
        }
    } else if event == .peerDisconnected {
        print("IOS: Peer \(peripheral) disconnected!")
    } else {
        print("IOS: if statement didn't work")
    }
}
I'm essentially:
Printing the fact that the callback was called
Trying to connect the app to the peripheral (by now only the phone is connected)
Saving this newly connected peripheral to a local list
For context, I've been able to scan and connect to BLE devices like earbuds normally, so my general implementation of other callbacks like didDiscover or didConnect works just fine; this is the only non-functional callback.
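One thing worth checking, as an assumption since the manager setup isn't shown: connectionEventDidOccur is only delivered for peripherals you've registered interest in via registerForConnectionEvents(options:); without that registration the callback never fires. A sketch, with a placeholder service UUID:

import CoreBluetooth

func centralManagerDidUpdateState(_ central: CBCentralManager) {
    guard central.state == .poweredOn else { return }
    // Register interest; match on service UUIDs (or .peripheralUUIDs).
    // "180A" (Device Information) is just a placeholder here.
    central.registerForConnectionEvents(options: [
        .serviceUUIDs: [CBUUID(string: "180A")]
    ])
}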
Any ideas would be appreciated, thanks!
The regular case: open a sheet by tapping a button, then close the sheet using a Cancel button on it. The isPresented state variable changes immediately, but until the dismissal animation has completely finished, it's impossible to tap the button on the main parent screen and trigger the sheet presentation again.
As I understand it, UIKit works differently: it lets us tap the button and simply presents the new modal view exactly when the previous animation finishes.
struct MyView: View {
    @State private var isPresented = false

    public var body: some View {
        VStack {
            Button(action: {
                isPresented = true
            }, label: {
                Text("Button")
            })
            Spacer()
        }
        .sheet(isPresented: $isPresented) {
            sheetContent
        }
    }

    var sheetContent: some View {
        VStack {
            Text("Cancel")
                .onTapGesture {
                    isPresented = false
                }
            Spacer()
        }
    }
}
@Apple Could you please fix it in SwiftUI?
Condition: We have an existing app that runs on iPhone and iPad. We want to make it compatible with macOS, and along with that we want to leverage some native macOS components. We achieved this using Mac Catalyst, but now we want to build common components using SwiftUI for both the macOS and iOS platforms.
Challenge: Using SwiftUI views for Mac development.
Approach 1:
We created a Mac bundle that contained Mac-specific views (using AppKit views).
This approach worked fine for creating and using components that are specific to macOS.
But while developing and using SwiftUI views in the Mac bundle, we face the following error: NSHostingViewController symbol not found.
Approach 2:
We tried creating a separate Mac app and making it part of the Mac Catalyst app.
In this approach we were able to show an NSStatusBar item and add text using a SwiftUI view.
But the status bar appearance is inconsistent: sometimes the NSStatusBar icon appears, but other times it just won't appear.
Can anyone help with the right approach for these scenarios?
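For Approach 2, one detail that commonly causes an intermittent icon: the NSStatusItem must be held in a strong reference, or it is deallocated and the icon vanishes. A minimal sketch of hosting a shared SwiftUI view from an AppKit bundle via NSHostingController (class and property names here are hypothetical, and the bundle must be built for macOS, since NSHostingController isn't available under Mac Catalyst):

import AppKit
import SwiftUI

@objc(MacStatusBarController)
public final class MacStatusBarController: NSObject {
    // Strong references: letting these go out of scope removes the item.
    private var statusItem: NSStatusItem?
    private let popover = NSPopover()

    @objc public func install() {
        let item = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength)
        item.button?.title = "MyApp" // hypothetical title
        item.button?.target = self
        item.button?.action = #selector(togglePopover(_:))
        statusItem = item

        // NSHostingController bridges AppKit and the shared SwiftUI view.
        popover.contentViewController = NSHostingController(rootView: Text("Hello from SwiftUI"))
    }

    @objc private func togglePopover(_ sender: Any?) {
        guard let button = statusItem?.button else { return }
        if popover.isShown {
            popover.performClose(sender)
        } else {
            popover.show(relativeTo: button.bounds, of: button, preferredEdge: .minY)
        }
    }
}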