
Guidelines for ViewThatFits to avoid run-time crashes
TL;DR: What rules ensure you won't have sporadic run-time crashes when using ViewThatFits? My app crashes, but luckily the crash is reproducible. The code appeared syntactically and logically correct. Simplified excerpt: https://github.com/alanrick/Experiment3 The crash is caused by ViewThatFits being overwhelmed by concurrent changes in other views, exacerbated by animation effects. In the original code the problem was even worse because I'd gone overboard and used ViewThatFits in sub-views, making the whole thing too dynamic.

 My first rule is: Do not use nested ViewThatFits. 

 But this alone is not sufficient. What other rules can I apply to ensure I won't have hard-to-detect run-time crashes when using ViewThatFits?
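By way of illustration of that first rule (this sketch is mine, not from the linked project; the view and property names are hypothetical): keep a single, flat ViewThatFits whose candidates are self-contained layouts with no further ViewThatFits inside them, so each layout pass makes only one fitting decision.

import SwiftUI

// A single, top-level ViewThatFits choosing between two fixed layouts.
// Neither candidate contains another ViewThatFits, so the layout pass
// only ever has one fitting decision to make per update.
struct HeaderBar: View {
    let title: String       // hypothetical data
    let subtitle: String

    var body: some View {
        ViewThatFits(in: .horizontal) {
            // Candidate 1: wide layout
            HStack {
                Text(title).font(.headline)
                Spacer()
                Text(subtitle).font(.subheadline)
            }
            // Candidate 2: narrow fallback
            VStack(alignment: .leading) {
                Text(title).font(.headline)
                Text(subtitle).font(.subheadline)
            }
        }
    }
}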
Replies: 0 · Boosts: 0 · Views: 126 · Activity: 2w
Run identical UI/Unit tests on different build targets
I have different versions of my iOS app (written in SwiftUI): the app on the store, the App Clip, and one or two next-version apps not yet released (e.g. for A/B comparison). All good. But now I've started creating UI and unit tests and I'm confused about how to get this working.

Each build target has its own scheme, and in that scheme I have a test plan for that target, e.g. the App Clip scheme has an App Clip test plan. Since all the app variants are very similar, I only have one set of unit tests and one set of UI tests, so each test plan includes the same unit test target and the same UI test target.

Problem: When I selected a scheme (e.g. for the App Clip) and ran the tests, it turned out that all the tests ran against another build target, not the target of the scheme. I think this might be because the definition of the test target has a field specifying the host application, i.e. the build target.

Question: How can I set up my project so that the test plan uses the relevant build target? Do I have to duplicate all the test targets (one for each build target)? Or do I have to manually change each test target before running it against a particular build target?
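For what it's worth, one way to keep a single shared test target across several host apps (my own suggestion, not something from this post; the bundle identifiers are hypothetical) is to let the tests read the host app's identity at run time: in a hosted test run, Bundle.main resolves to the host application's bundle, so the same test code can branch per variant.

import XCTest

// Shared unit-test target reused across several host applications.
// In a hosted test run, Bundle.main is the host app's bundle, so the
// tests can branch on which variant they are running against.
final class SharedVariantTests: XCTestCase {

    // Hypothetical bundle identifiers for the app variants.
    private let appClipID = "com.example.myapp.Clip"
    private let storeAppID = "com.example.myapp"

    func testHostSpecificBehaviour() {
        let hostID = Bundle.main.bundleIdentifier ?? ""

        if hostID == appClipID {
            // Assertions that only make sense for the App Clip variant.
            XCTAssertTrue(hostID.hasSuffix(".Clip"))
        } else {
            // Assertions for the full app (and unreleased A/B variants).
            XCTAssertFalse(hostID.hasSuffix(".Clip"))
        }
    }
}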
Replies: 1 · Boosts: 0 · Views: 199 · Activity: Nov ’24
@Observation: Best way of handling binding after injecting a View-Model
At WWDC 2023 there was a good summary of how to handle the iOS 17 Observation capability. But despite the clear graphics, it was still ambiguous (for me). I want to inject a class (view-model) so that it can be used in the complete view hierarchy, and used in bindings to allow bi-directional communication. As far as I can tell there are 2 ways of declaring the VM (alternatives 1 and 2 in my code), and 2 ways of consuming the VM in a view (alternatives 3 and 4 in my code). Using the flow diagram I can't determine which is best. Here's the crux of my @Observable problem:

import SwiftUI

// MARK: - Model
struct MyMod {
    var title = "Hello, World!"
}

// MARK: - MVV
@Observable class MyMVV {
    var model: MyMod
    init() {
        self.model = MyMod()
    }
}

// MARK: - App
@main
struct MyApp: App {
    @Bindable var myGlobalMVV = MyMVV()  // Alternative 1
    // @State var myGlobalMVV = MyMVV()  // Alternative 2

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(myGlobalMVV) // inject
        }
    }
}

struct ContentView: View {
    var body: some View {
        ContentDeepHierarchyView()
    }
}

struct ContentDeepHierarchyView: View {
    @Environment(MyMVV.self) var myGlobalMVV // digest

    var body: some View {
        @Bindable var myLocalMVV = myGlobalMVV                              // Alternative 3
        TextField("The new title", text: $myLocalMVV.model.title)           // Alternative 3
        TextField("The new title", text: Bindable(myGlobalMVV).model.title) // Alternative 4
    }
}

Opinions?
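As a point of comparison (my reading of Apple's Observation documentation, so treat it as an assumption rather than a definitive answer): the combination usually shown is Alternative 2 plus Alternative 3, i.e. own the instance with @State where it is created and wrap it in @Bindable only where a binding is needed. A minimal sketch reusing the MyMVV and ContentView types above:

import SwiftUI

struct MyAppSketch: App {                    // would take MyApp's place as the @main entry point
    @State private var myGlobalMVV = MyMVV() // the App owns the instance (Alternative 2)

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(myGlobalMVV)    // inject into the hierarchy
        }
    }
}

struct DeepViewSketch: View {
    @Environment(MyMVV.self) var myGlobalMVV // read back from the environment

    var body: some View {
        // Wrap only where a binding is actually needed (Alternative 3).
        @Bindable var myLocalMVV = myGlobalMVV
        TextField("The new title", text: $myLocalMVV.model.title)
    }
}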
Replies: 1 · Boosts: 0 · Views: 341 · Activity: Oct ’24
CreateML for Image recognition with a spreadsheet catalog
I want to use Create ML to generate an image recognition model. This works when I put the images in separate sub-folders. But my data consists of many images in one folder, with a spreadsheet index of the photos according to several different attributes. I suppose I could write a script to move the images into the correct folders, and repeat this for each of the attributes. But I was hoping there's a way of doing this directly in Create ML, since I'm sure it's a common requirement. Does anyone know if Create ML can do this, or of an app that would save me writing the script?
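If the script route turns out to be the fallback, here is a minimal sketch of it (my own illustration; the CSV layout with a filename column and a label column, and all paths, are hypothetical). It copies each image into a sub-folder named after its label, which is the folder structure the Create ML image classifier template accepts; it would be re-run per attribute.

import Foundation

// Hypothetical CSV layout: "filename,label" with a header row,
// e.g. "IMG_0001.jpg,cat". Paths below are placeholders.
let imagesFolder = URL(fileURLWithPath: "/path/to/images")
let csvFile      = URL(fileURLWithPath: "/path/to/catalog.csv")
let outputFolder = URL(fileURLWithPath: "/path/to/training-data")

let fm = FileManager.default
let csv = try String(contentsOf: csvFile, encoding: .utf8)

for line in csv.split(whereSeparator: \.isNewline).dropFirst() { // skip header row
    let columns = line.split(separator: ",").map {
        $0.trimmingCharacters(in: .whitespaces)
    }
    guard columns.count >= 2 else { continue }
    let (filename, label) = (columns[0], columns[1])

    // One sub-folder per label, as expected by the image classifier template.
    let labelFolder = outputFolder.appendingPathComponent(label, isDirectory: true)
    try fm.createDirectory(at: labelFolder, withIntermediateDirectories: true)

    // Copy rather than move, so the original flat folder stays intact.
    try fm.copyItem(at: imagesFolder.appendingPathComponent(filename),
                    to: labelFolder.appendingPathComponent(filename))
}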
Replies: 3 · Boosts: 0 · Views: 227 · Activity: Oct ’24
Struggling with Swift Gesture/Observation
I'm trying to create an equivalent to TabView, but with the difference that the 2nd view slides in over the top of the primary view. Maybe there's a more elegant way of coding this (suggestions appreciated), but I've almost succeeded using a DragGesture. When a user swipes right to left, the observed variable showTab2 is set to true, and the 2nd tab glides in over the top of tab 1 and displays 🥳. The only problem is that when a user happens to start the swipe over a button, the observed state (showTab2) does change as expected, but the main view does not catch this change and does not display tab 2, despite showTab2 living in an @Observable class. Any clues what I've missed? Or how can I detect that a swipe gesture starts over the top of a button and should be ignored? According to the code in SwipeTabView this screenshot 👆 should never occur. Here's the code:

@Observable class myclass {
    var showTab2 = false
}

struct SwipeTabView: View {
    @State var myClass = myclass()
    @State var dragAmount: CGSize = CGSize.zero

    var body: some View {
        VStack {
            ZStack {
                GeometryReader { geometryProxy in
                    VStack {
                        tab(tabID: 1, selectedTab: myClass.showTab2)
                            .zIndex(1.0)
                            .background(.black)
                            .transition(.identity)
                            .swipeable(stateOfViewAdded: $myClass.showTab2,
                                       dragAmount: $dragAmount,
                                       geometryProxy: geometryProxy,
                                       insertion: true)
                    }
                    if myClass.showTab2 || dragAmount.width != 0 {
                        tab(tabID: 2, selectedTab: myClass.showTab2)
                            .zIndex(2.0)
                            .drawingGroup()
                            .transition(.move(edge: .trailing))
                            .offset(x: dragAmount.width)
                            .swipeable(stateOfViewAdded: $myClass.showTab2,
                                       dragAmount: $dragAmount,
                                       geometryProxy: geometryProxy,
                                       insertion: false)
                    }
                }
            }
        }
    }
}

extension View {
    func swipeable(stateOfViewAdded: Binding<Bool>,
                   dragAmount: Binding<CGSize>,
                   geometryProxy: GeometryProxy,
                   insertion: Bool) -> some View {
        self.gesture(
            DragGesture()
                .onChanged { gesture in
                    // inserting must be minus, but removing must be positive - hence the multiplication.
                    if gesture.translation.width * (insertion ? 1 : -1) < 0 {
                        if insertion {
                            dragAmount.wrappedValue.width = geometryProxy.size.width + gesture.translation.width
                        } else {
                            dragAmount.wrappedValue.width = gesture.translation.width
                        }
                    }
                }
                .onEnded { gesture in
                    if abs(gesture.translation.width) > 100.0 &&
                        gesture.translation.width * (insertion ? 1 : -1) < 0 {
                        withAnimation(.easeOut.speed(Double(gesture.velocity.width))) {
                            stateOfViewAdded.wrappedValue = insertion
                        }
                    } else {
                        withAnimation(.easeOut.speed(Double(gesture.velocity.width))) {
                            stateOfViewAdded.wrappedValue = !insertion
                        }
                    }
                    withAnimation(.smooth) {
                        dragAmount.wrappedValue = CGSize.zero
                    }
                }
        )
    }
}

struct tab: View {
    var tabID: Int
    var selectedTab: Bool

    var body: some View {
        ZStack {
            Color(tabID == 1 ? .yellow : .orange)
            VStack {
                Text("Tab \(tabID) ").foregroundColor(.black)
                Button(action: {
                    print("Tab2 should display - \(selectedTab.description)")
                }, label: {
                    ZStack {
                        circle
                        label
                    }
                })
                Text("Tab2 should display - \(selectedTab.description)")
            }
        }
    }

    var circle: some View {
        Circle()
            .frame(width: 100, height: 100)
            .foregroundColor(.red)
    }

    var label: some View {
        Text("\(tabID == 1 ? ">>" : "<<")").font(.title).foregroundColor(.black)
    }
}
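One guess, offered with no certainty that it is the actual cause: a drag attached with .gesture can lose the gesture competition to the Button's own tap when the touch begins on the button. Attaching the drag with .simultaneousGesture lets both be recognised. A minimal, self-contained demo of that behaviour (the names here are mine, not from the project above):

import SwiftUI

// Minimal demo (hypothetical): a drag attached with simultaneousGesture
// is still recognised when the gesture starts on the Button.
struct SimultaneousDragDemo: View {
    @State private var dragWidth: CGFloat = 0

    var body: some View {
        Button("Tap or start a swipe here") { print("tapped") }
            .padding()
            .simultaneousGesture(          // .gesture(...) here can lose to the tap
                DragGesture()
                    .onChanged { dragWidth = $0.translation.width }
                    .onEnded { _ in dragWidth = 0 }
            )
            .offset(x: dragWidth)
    }
}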
Replies: 4 · Boosts: 0 · Views: 418 · Activity: Sep ’24
Where can I configure the App Clip Code Invocation URL?
My SwiftUI App Clip does not have Advanced Experiences, but I want to invoke it from an App Clip Code. In TestFlight I can set the URL used by the Code Generation tool in the App Clip Invocations section, which has a URL field and also a Title field. But I can't find this field in the App Store Connect Distribution tab. Any ideas where the invocation URL is configured in the Distribution form? Should I use the App Clip URL?
Replies: 0 · Boosts: 0 · Views: 187 · Activity: Aug ’24
How to create a native visionOS App archive using a MacBook
I coded a native visionOS app in SwiftUI using Xcode on a MacBook, but I can't generate an archive for App Store Connect. Apple states that "Developing for visionOS requires a Mac with Apple silicon", so I should satisfy the dev requirements. My target is for visionOS alone, but I don't see a native visionOS destination in the Xcode product destination list. When I select the Any visionOS Device (Designed for iPad, arm64) option, the archive is generated and uploads, but it ends up in the iOS part of App Store Connect. And when I select the Any visionOS Simulator Device (arm64, x86_64) option, the archive won't build and the following error message occurs: Any suggestions or pointers to tutorials showing the archive-for-visionOS step would be greatly appreciated. Alan
Replies: 5 · Boosts: 0 · Views: 730 · Activity: Mar ’24
TestFlight Invites to the wrong version
I've uploaded a new build, 0.2 (build 3), to TestFlight, and invites have gone out. But when testers click on the invite it takes them to a previous build, 0.2 (build 1), which I've deactivated. So no one can install or update the new build. Attached are screenshots of what this looks like when I invite myself to the tester group. I've tried deleting the old version from my phone so that no version exists on my phone, but that doesn't help.
Replies: 1 · Boosts: 0 · Views: 481 · Activity: Mar ’24