Posts

2 Replies
3.1k Views
Xcode is opening all of my storyboard and .xcassets files as text. The menu option for "Open As..." now contains just one grayed-out entry, "<None>". When I relaunch Xcode with an .xcassets file open, it briefly says "No editor" before opening the file as text. I did install a macOS update this morning, which may have triggered this, but I can't be sure. Is anybody aware of a fix for this?
17 Replies
25k Views
Is there any documentation on which commands are supported in Vim mode now? I know there's the help bar (though I haven't seen it appear since yesterday). Is there `.vimrc` support, etc.?
3 Replies
6.1k Views
When placing a DragGesture on a view containing a ScrollView, dragging within the ScrollView causes onChanged to trigger, while onEnded does not trigger. Does anybody have a workaround for this?

struct ContentView: View {
    @State var offset: CGSize = .zero

    var body: some View {
        ZStack {
            Spacer()
            Color.clear
            if self.offset != .zero {
                Color.blue.opacity(0.25)
            }
            VStack {
                Color.gray.frame(height: 44.0)
                ScrollView {
                    ForEach(0..<100, id: \.self) { _ in
                        Text("Don't move please")
                    }
                }
                Color.gray.frame(height: 44.0)
            }
            .frame(width: 320.0, height: 568.0)
            .offset(self.offset)
            .animation(.easeInOut)
            .gesture(
                DragGesture(minimumDistance: 10.0, coordinateSpace: .global)
                    .onChanged { (value) in
                        self.offset = value.translation
                    }
                    .onEnded { (_) in
                        self.offset = .zero
                    })
        }
    }
}

I know I could place gestures on the top/bottom gray areas (they represent bars), but in my actual app the owner of the drag gesture is a container that knows nothing about its contents.
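For illustration only, here is one direction that is sometimes suggested for the stuck-offset symptom; it is a minimal sketch (the `DraggableContainer` name is invented) and has not been verified against the exact ScrollView case above. The idea is to drive the offset from `@GestureState` via `updating`, because gesture state is reset automatically when the gesture ends or is cancelled, so a missing `onEnded` callback does not leave the offset stuck.

import SwiftUI

// Sketch: @GestureState resets to its initial value when the gesture ends
// or is cancelled, even if onEnded is never delivered.
struct DraggableContainer: View {
    @GestureState private var dragOffset: CGSize = .zero

    var body: some View {
        ScrollView {
            ForEach(0..<100, id: \.self) { i in
                Text("Row \(i)")
            }
        }
        .frame(width: 320.0, height: 568.0)
        .offset(dragOffset)
        .simultaneousGesture(
            DragGesture(minimumDistance: 10.0, coordinateSpace: .global)
                // `updating` mutates transient state that snaps back to .zero
                // automatically when the gesture finishes or is cancelled.
                .updating($dragOffset) { value, state, _ in
                    state = value.translation
                }
        )
    }
}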
1 Reply
1.5k Views
In UIKit we could do things like:

func didTap(_ sender: AnyObject?) {
    guard let sender = sender as? UIView else { return }
    // use frame of sender to present a popup, etc...
}

Will anything similar be made available for SwiftUI? Some method of referring back to the frame of the View that initiates an event would seem very handy for popovers. `Popover` has `targetPoint` and `targetRect`, but it's not clear how those can actually be determined during construction of the body, when layout hasn't even occurred yet.
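As an illustrative sketch only (the `FramePreferenceKey` and `TapFrameDemo` names are invented, and this is not an official answer to the question above): one pattern for recovering a view's frame in SwiftUI is to read it with `GeometryReader` in a background and publish it through a `PreferenceKey`, then use the captured rect when presenting the popup.

import SwiftUI

// Sketch: capture a view's global frame via a PreferenceKey so it can be
// used later, e.g. as a target rect for a popover.
struct FramePreferenceKey: PreferenceKey {
    static var defaultValue: CGRect = .zero
    static func reduce(value: inout CGRect, nextValue: () -> CGRect) {
        value = nextValue()
    }
}

struct TapFrameDemo: View {
    @State private var buttonFrame: CGRect = .zero

    var body: some View {
        Button("Show popup") {
            // buttonFrame now holds the button's global frame and could be
            // used to position a popup relative to the button.
            print("Present from \(buttonFrame)")
        }
        .background(
            GeometryReader { geo in
                Color.clear
                    .preference(key: FramePreferenceKey.self,
                                value: geo.frame(in: .global))
            }
        )
        .onPreferenceChange(FramePreferenceKey.self) { buttonFrame = $0 }
    }
}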
1 Reply
1.4k Views
With a UIGestureRecognizer we can always access the `view` that owns the recognizer. Therefore we can determine where the touch is *relative* to that view's bounds, as well as relative to any other view via `convert(_:to:)`.

`DragGesture.Value` exposes `location` and `translation`; however, these are relative to the touched view, and there is no way to access that view's geometry during the callback. Moreover, you can't access any other view's geometry either (I actually don't need the touched view's geometry, but the containing view's).

I'm already using `GeometryReader`, so I tried saving the needed geometry while building my view. However, using `@State var size: CGSize` and assigning it in the reader's block resulted in an infinite loop.
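Purely as a sketch (the `ContainerDragDemo` name is invented, and this may not map directly onto the original layout): one way to avoid assigning `@State` while the body is being built is to construct the gesture inside the `GeometryReader` closure, so the gesture callbacks capture the proxy and can read the container's size at touch time.

import SwiftUI

// Sketch: the drag gesture is built inside GeometryReader, so its closures
// capture `proxy` and can use the container's size without any state writes
// during body construction (and hence no invalidation loop).
struct ContainerDragDemo: View {
    @State private var fraction: CGFloat = 0

    var body: some View {
        GeometryReader { proxy in
            Color.orange
                .overlay(Text("x: \(Int(fraction * 100))%"))
                .gesture(
                    DragGesture(coordinateSpace: .local)
                        .onChanged { value in
                            // proxy.size is the containing view's size; the
                            // assignment happens only inside the callback.
                            fraction = value.location.x / max(proxy.size.width, 1)
                        }
                )
        }
    }
}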