The easiest way to explain this is to show it. On any device, open Maps and set it to Driving (which shows traffic). Go to Baltimore, Maryland. In the water just southeast of the city there is a bridge (the Francis Scott Key Bridge). On Apple Maps the road is colored dark red.
At certain zoom levels, there is a "button" (a red circle with a white minus sign in it). When you tap that "button", it says 1 Advisory (Road Closed).
How do I show this "button" on my map? My map shows the dark red color, but no "button" appears.
The only "advisory" I've been able to find is the one shown when you create a route. Of course, you can't create a route over a road that fell into the water.
import SwiftUI
import MapKit

struct ContentView: View {
    @State private var position = MapCameraPosition.region(
        MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: 39.22742855118304, longitude: -76.52228412310761),
            span: MKCoordinateSpan(latitudeDelta: 0.05407607689684113, longitudeDelta: 0.04606660133347873)
        )
    )

    var body: some View {
        Map(position: $position)
            .mapStyle(.standard(pointsOfInterest: .all, showsTraffic: true))
            .cornerRadius(25)
    }
}
Is this a WCDWAD (We Can't Do What Apple Does), or is there a way to show the "button"?
Hey everyone! Totally a newbie question here, but I can't find the answer anywhere when searching, so I figured I would just post. I'm trying to create my first SwiftUI app in Xcode and wanted to create a simple launch screen with an image file and a text label. However, when I select the launch screen in the project settings, it lets me pick it, but when I navigate away it never saves the value.
Then when I go somewhere else and come back, it's blank again... Any thoughts?
Xcode 16.2
I'm looking to develop a very rich networking macOS app (like social media apps) operated by a very large number of users. Each user is able to create a number of windows, operate/view each of them, customize the app to their liking, etc. The UI is expected to be very rich and dynamic.
The question is: should I choose AppKit or SwiftUI?
I have a basic understanding of SwiftUI, its declarative way of defining UI layouts, and populating them with data. I'm not sure whether SwiftUI can handle a very rich and dynamic UI customized by a large number of users.
Any thoughts? What works best in this scenario? What is Apple's recommendation?
I am developing a macOS app using SwiftUI, and I am encountering an issue when launching the app at login. The app starts as expected, but the window does not appear automatically. Instead, it remains in the Dock, and the user must manually click the app icon to make the window appear.
Additionally, I noticed that the timestamp obtained during the app's initialization (init) differs from the timestamp obtained in .onAppear. This suggests that .onAppear does not trigger until the user interacts with the app. However, I want .onAppear to execute automatically upon login.
Steps to Reproduce
Build the app and add it to System Settings > General > Login Items as an item that opens at login.
Quit the app and restart the Mac.
Log in to macOS.
Observe that the app starts and appears in the Dock but does not create a window.
Click the app icon in the Dock, and only then does the window appear.
Expected Behavior
The window should be created and appear automatically upon login without requiring user interaction.
.onAppear should execute immediately when the app starts at login.
Observed Behavior
The app launches and is present in the Dock, but the window does not appear.
.onAppear does not execute until the user manually clicks the app icon.
A discrepancy exists between the timestamps obtained in init and .onAppear.
Sample Code
Here is a minimal example that reproduces the issue:
LoginTestApp.swift
import SwiftUI

@main
struct LoginTestApp: App {
    @State var date2: Date

    init() {
        date2 = Date()
    }

    var body: some Scene {
        WindowGroup {
            MainView(date2: $date2)
        }
    }
}
MainView.swift
import SwiftUI

struct MainView: View {
    @State var date1: Date?
    @Binding var date2: Date

    var body: some View {
        Text("This is MainView")
        Text("MainView created: \(date1?.description ?? "")")
            .onAppear {
                date1 = Date()
            }
        Text("App initialized: \(date2.description)")
    }
}
Test Environment
MacBook Pro (13-inch, M1, 2020)
macOS Sequoia 15.2
Xcode 16.2
Questions
Is this expected behavior in macOS Sequoia 15.2?
How can I ensure that .onAppear executes automatically upon login?
Is there an alternative approach to ensure the window is displayed without user interaction?
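For what it's worth, one workaround I am considering (a sketch only, assuming an AppDelegate attached via NSApplicationDelegateAdaptor; not confirmed to fix this report) is to activate the app programmatically once launching finishes:

import AppKit

class ActivatingAppDelegate: NSObject, NSApplicationDelegate {
    func applicationDidFinishLaunching(_ notification: Notification) {
        // Ask AppKit to bring the app (and its window) forward
        // without waiting for a click on the Dock icon.
        // On macOS 14+ the parameterless activate() is the non-deprecated form.
        NSApp.activate()
    }
}

This would be attached in the App struct with @NSApplicationDelegateAdaptor(ActivatingAppDelegate.self) var appDelegate.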
I'm using Unity to develop a visionOS program. Pressing the Vision Pro's Digital Crown during use exits the Unity space and destroys the model in the space, and the model becomes disconnected from the SwiftUI interface. After pressing the Digital Crown, you return to the system home view, and pressing it again returns you to the program. However, the Unity space cannot be restored, and calling the dismissWindow method from the SwiftUI interface has no effect, so the interface cannot be destroyed. Is there any solution?
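For reference, the dismissWindow call mentioned above follows the usual SwiftUI pattern, roughly like this (the window identifier "unityVolume" is a placeholder for illustration):

import SwiftUI

struct CloseUnitySpaceView: View {
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Close") {
            // Dismisses the window scene with the given identifier.
            dismissWindow(id: "unityVolume")
        }
    }
}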
In our macOS application, we are using SwiftUI as the entry point and attaching an app delegate using NSApplicationDelegateAdaptor.
We are using NSViewControllerRepresentable to add a view controller to the hierarchy so that we can store an instance of the view controller and add content to it programmatically.
@main
struct TWMainApp: App {
    @NSApplicationDelegateAdaptor private var appDelegate: TWAppDelegate

    internal var body: some Scene {
        TWInitialScene()
    }
}
TWInitialScene:
public struct TWInitialScene: Scene {
    public var body: some Scene {
        WindowGroup {
            TWInitialView()
        }
    }
}
TWInitialView:
struct TWInitialView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        TWAppKitToSwiftUIBridge()
    }
}
TWAppKitToSwiftUIBridge:
struct TWAppKitToSwiftUIBridge: NSViewControllerRepresentable {
    func makeNSViewController(context: Context) -> TWNSViewController {
        let view_hierarchy: TWNSViewController
        view_hierarchy = TWStaticContext.sViewController
        return view_hierarchy
    }

    func updateNSViewController(_ nsViewController: TWNSViewController, context: Context) {
    }
}
@objc
public class TWStaticContext: NSObject {
    public static let sViewController = TWNSViewController()

    public override init() {}

    @objc
    public static func GetViewController() -> TWNSViewController {
        return TWStaticContext.sViewController
    }
}

public class TWNSViewController: NSViewController {
    override public func viewDidLoad() {
        super.viewDidLoad()
    }
}
To add content to the hierarchy, we access the view controller's instance and add content to it like this:
public func PaintInitialScreen() {
    let label = NSTextField(labelWithString: "TW window")
    label.frame = NSRect(x: 100, y: 200, width: 200, height: 200)

    // Adding content to the view controller
    TWStaticContext.sViewController.view.addSubview(label)
}
We are using this approach because our application has a constraint: we have to update the UI programmatically, and at compile time we don't know what we want to show. We add content at runtime based on how many buttons we want, what labels we want, where to place them, etc.
When we were using a purely AppKit application, doing things programmatically was simple, but since SwiftUI is declarative, we have to use the approach above.
The rationale for shifting to a SwiftUI entry point is that we want our application to be future-safe, and since Apple is more inclined toward SwiftUI, we want to design our entry flow around a SwiftUI entry point. SwiftUI being declarative, we use AppKit to add content to the hierarchy programmatically.
We have used a similar approach on iOS, where we use UIApplicationDelegateAdaptor in place of NSApplicationDelegateAdaptor, and UIViewControllerRepresentable in place of NSViewControllerRepresentable.
Is this the right approach to use?
Hi all, I am looking for a futureproof way of getting the screen resolution of my display device using SwiftUI on macOS. I understand that it can't really be done to the fullest extent, meaning that the closest API we have is GeometryProxy, and that would only give the size of the parent view, which in the macOS case would not give us the display's screen resolution. The only viable option I am left with is NSScreen.frame.
However, my issue is that Apple seems to be moving toward SwiftUI aggressively, and to futureproof my application I'd like to avoid relying on AppKit methods as much as possible. Hence my question: is there a way to get the screen resolution of a display using SwiftUI that Apple itself recommends? If not, can I safely rely on NSScreen's frame API?
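To make the comparison concrete, here is a minimal sketch of the two options described above (my own illustration, not Apple sample code):

import SwiftUI
import AppKit

struct ResolutionProbeView: View {
    var body: some View {
        GeometryReader { proxy in
            // GeometryProxy only reports the size proposed by the parent view,
            // not the display's resolution.
            Text("View size: \(Int(proxy.size.width)) x \(Int(proxy.size.height))")
        }
        .onAppear {
            // NSScreen reports the actual display frame, in points.
            if let frame = NSScreen.main?.frame {
                print("Screen frame: \(frame.width) x \(frame.height)")
            }
        }
    }
}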
I have a SwiftUI project which has the following hierarchy:
IOSSceneDelegate (App target) - depends on EntryPoint and Presentation static libs.
Presentation (Static library) - Depends on EntryPoint static lib. Contains UI related logic and updates the UI after querying the data layer.
EntryPoint (Static library) - Contains the entry point, AppDelegate (for its lifecycle aspects) etc.
I've only listed the relevant targets here.
The SceneDelegate was initially present in the EntryPoint library, because the AppDelegate references it when a scene is created.
public func application(_ application: UIApplication, configurationForConnecting connectingSceneSession: UISceneSession, options: UIScene.ConnectionOptions) -> UISceneConfiguration {
    // Set the SceneDelegate dynamically
    let sceneConfig: UISceneConfiguration = UISceneConfiguration(name: "mainWindow", sessionRole: connectingSceneSession.role)
    sceneConfig.delegateClass = SceneDelegate.self
    return sceneConfig
}
The intent is to move the SceneDelegate to the Presentation library.
When moved, the EntryPoint library fails to compile because it's referencing the SceneDelegate (as shown above).
To remove this reference, I tried to set up the SceneDelegate the old way: in the Info.plist file, declare a scene configuration and set it to the SceneDelegate in Presentation.
// In the Info.plist file
<key>UIApplicationSceneManifest</key>
<dict>
    <key>UIApplicationSupportsMultipleScenes</key>
    <true/>
    <key>UISceneConfigurations</key>
    <dict>
        <key>UIWindowSceneSessionRoleApplication</key>
        <array>
            <dict>
                <key>UISceneConfigurationName</key>
                <string>Default Configuration</string>
                <key>UISceneDelegateClassName</key>
                <string>Presentation.SceneDelegate</string>
            </dict>
        </array>
    </dict>
</dict>
// In the AppDelegate
public func application(_ application: UIApplication, configurationForConnecting connectingSceneSession: UISceneSession, options: UIScene.ConnectionOptions) -> UISceneConfiguration {
    // Refer to a static UISceneConfiguration listed in the Info.plist file
    return UISceneConfiguration(name: "Default Configuration", sessionRole: connectingSceneSession.role)
}
As shown above, Presentation.SceneDelegate is referenced in the Info.plist file, and the reference is removed from the AppDelegate (in the EntryPoint library).
The app target compiles, but when I run it, the SceneDelegate is not invoked. None of the methods from the SceneDelegate (scene(_:willConnectTo:options:), sceneDidDisconnect(_:), sceneDidEnterBackground(_:) etc.) are invoked. I only get the AppDelegate logs.
It seems like the configuration is ignored because it is incorrect. Any thoughts? Is it possible to move the SceneDelegate in this situation?
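One detail I am unsure about (an assumption on my part, not something I have verified): UIKit resolves UISceneDelegateClassName through the Objective-C runtime, and the module-qualified name of a Swift class built into a static library may not match the string in the plist. Giving the class an explicit Objective-C name would sidestep the module prefix entirely:

import UIKit

// Hypothetical: expose a stable Objective-C name so the Info.plist can
// reference "SceneDelegate" without any module prefix.
@objc(SceneDelegate)
public class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    public var window: UIWindow?

    public func scene(_ scene: UIScene,
                      willConnectTo session: UISceneSession,
                      options connectionOptions: UIScene.ConnectionOptions) {
        // Invoked once UIKit successfully resolves the delegate class.
    }
}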
Hi,
I have created a line graph using LineMark in Charts, which by default includes grid lines and axis lines. My requirement is to remove the grid lines but retain the axis lines and the values.
I have tried the following code:
.chartXAxis {
    AxisMarks(preset: .extended, values: .stride(by: 2), stroke: StrokeStyle(lineWidth: 0))
}
This removes the grid lines as well as the axis lines.
How can I retain the axis lines while removing the grid lines?
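A variation I have been experimenting with (a sketch, not confirmed to be the intended usage) is to build the axis marks explicitly, so the parts I want are kept and the grid line is simply omitted:

.chartXAxis {
    AxisMarks(values: .stride(by: 2)) { _ in
        // Omitting AxisGridLine() removes the grid line at each value;
        // AxisLine, AxisTick, and AxisValueLabel keep the axis itself,
        // the ticks, and the values.
        AxisLine()
        AxisTick()
        AxisValueLabel()
    }
}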
Hey, I am building some widgets, and I'm quite surprised that swipe gestures for widgets are not supported. It means the user must sacrifice home screen real estate and view multiple widgets to receive the same information. Ideally, swiping left/right inside the widget should give a developer access to present different views.
I realize this means a user would need to swipe outside the widget (or swipe to the beginning/end of the series of views inside the widget) for the page to swipe, but I'd argue this is the intuitive behavior of what a widget scroll view would or should look like anyway.
When using a VStack containing two TextFields inside a ScrollView on an iPad running iOS 18.0, the FocusState of the topmost TextField does not trigger, while the second TextField's FocusState works correctly. Adding an invisible TextField on top resolves the issue, but it appears to be a bug specific to iOS 18.0 on iPads. This issue does not occur on iOS versions below or above 18.0 (including iOS 18.1).
Code that is not working
struct ContentView: View {
    @State var text: String = ""
    @FocusState private var focusState: Bool

    var body: some View {
        ScrollView(.vertical, content: {
            VStack(spacing: 0) {
                TextField(text: $text) {
                    Text("Hello, World!")
                }
                .border(focusState ? Color.red : Color.gray)
                .focused($focusState)
                .onChange(of: focusState) { oldValue, newValue in
                    print(newValue)
                }
                .onChange(of: text) { oldValue, newValue in
                    focusState = true
                }
            }
        })
        .padding()
    }
}
Code that is working
struct ContentView: View {
    @State var text: String = ""
    @FocusState private var focusState: Bool

    var body: some View {
        ScrollView(.vertical, content: {
            VStack(spacing: 0) {
                // Invisible Text Field
                TextField("", text: .constant(""))
                    .frame(height: 0)
                TextField(text: $text) {
                    Text("Hello, World!")
                }
                .border(focusState ? Color.red : Color.gray)
                .focused($focusState)
                .onChange(of: focusState) { oldValue, newValue in
                    print(newValue)
                }
                .onChange(of: text) { oldValue, newValue in
                    focusState = true
                }
            }
        })
        .padding()
    }
}
I am considering shifting my codebase from an AppKit to a SwiftUI entry point.
In AppKit, we get control of each NSWindow, so we can hide, resize, or close a window, and control when to present a specific window, because I have access to the NSWindow instance, which I can store and use to perform these actions.
But when I shift to the SwiftUI entry point, I declare a struct conforming to SwiftUI's Scene, and new windows are created from the instance of this scene.
I am using NSViewControllerRepresentable to add an NSViewController to the hierarchy of the scene, and I add content to this NSViewController's instance to show on screen.
I need help controlling these windows. How can I close a specific window? Resize a specific window? Or hide a specific window?
If I use purely SwiftUI views, I can do this through the Environment properties: dismissWindow to close a window, or openWindow with an identifier to open a specific window.
But I am using AppKit's NSViewController, where I will add buttons to the hierarchy from which I want to trigger these events. In that case, how can I control a specific window in a multi-window application?
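For comparison, this is the pure-SwiftUI pattern I mean (the window identifier "editor" is only an example):

import SwiftUI

struct WindowControlsView: View {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        HStack {
            // Opens the window scene declared with id "editor".
            Button("Open Editor") { openWindow(id: "editor") }
            // Closes the window presenting that scene.
            Button("Close Editor") { dismissWindow(id: "editor") }
        }
    }
}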
I've encountered an issue where storing a throws(PermissionError) closure as a property inside a SwiftUI View causes a runtime crash on iOS 17, while it works correctly on iOS 18.
Here’s an example of the affected code:
enum PermissionError: Error {
    case denied
}

struct PermissionCheckedView<AllowedContent: View, DeniedContent: View>: View {
    var protectedView: () throws(PermissionError) -> AllowedContent
    var deniedView: (PermissionError) -> DeniedContent

    init(
        @ViewBuilder protectedView: @escaping () throws(PermissionError) -> AllowedContent,
        @ViewBuilder deniedView: @escaping (PermissionError) -> DeniedContent
    ) {
        self.protectedView = protectedView
        self.deniedView = deniedView
    }

    public var body: some View {
        switch Result(catching: protectedView) {
        case .success(let content): content
        case .failure(let error): deniedView(error)
        }
    }
}

@main
struct TestApp: App {
    var body: some Scene {
        WindowGroup {
            PermissionCheckedView {
            } deniedView: { _ in
            }
        }
    }
}
Specifically, this is the stack trace (sorry for the picture, I didn't know how to get the text):
If I use var protectedView: () throws -> AllowedContent without typed throws, it works.
We are trying to write an iOS app that supports regular and constrained widths using a TabView with .tabViewStyle(.sidebarAdaptable). On the surface this seems like a great way to write an app that supports all the different widths that your app may run in. Especially since Stage Manager and Apple Vision have made it easy for users to resize your apps window while it is running.
We are facing many challenges, though. I will give a brief one-liner for each below, but to truly experience them you need to run the sample app or watch the sample videos included.
Issues
Basic TabView Issues
Double Navigation Bar: When tabs are collapsed into a "More" tab, there's an unwanted double navigation bar
Selection Sync: Tab selection gets out of sync when switching between narrow/wide layouts through the "More" tab
TabView Crash
Fatal crash occurs when resizing window to narrow width while Tab 5 is selected
Error: SwiftUI/SidebarAdaptableTabViewStyle_iOS.swift:482: Fatal error: Tried to update with invalid selection value
Section Handling Issues
Section Display Bug: Bottom tabs incorrectly show section names instead of tab names in narrow width
Tab Selection Mismatch: Tab identifiers don't match selected tabs in narrow width mode
Customization Issues
Inconsistent "Edit" button behavior in More tab
Unable to properly disable tab customization
Sample app and video
https://github.com/copia-wealth-studios/swiftui-tabview-sample
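For reference, the basic shape of the setup described above looks roughly like this (a trimmed sketch with placeholder tabs; the real configuration is in the sample app linked above):

import SwiftUI

struct AdaptableTabsView: View {
    @State private var selection = 0

    var body: some View {
        TabView(selection: $selection) {
            // Six placeholder tabs stand in for the app's real tabs.
            ForEach(0..<6, id: \.self) { index in
                Tab("Tab \(index + 1)", systemImage: "circle", value: index) {
                    Text("Content \(index + 1)")
                }
            }
        }
        .tabViewStyle(.sidebarAdaptable)
    }
}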
Hello,
I'm creating an app that stores multiple Date objects: the user selects a date and a time and receives a notification at precisely that time.
The problem is that after saving a Date, if the device's time zone changes, the Date also changes, and that is not what I'm expecting (I just want the original time as-is, independent of time zone).
I have inspected the TimeZone properties when building the Date from DateComponents, but nothing has worked.
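To illustrate what I mean (my own sketch with hypothetical values): a Date is an absolute instant, so the wall-clock time it renders to shifts with the device's time zone, even though the stored instant never changes.

import Foundation

// Hypothetical example: the user picks 9:00 AM on 15 June 2025.
let components = DateComponents(year: 2025, month: 6, day: 15, hour: 9, minute: 0)

// Calendar.current bakes the *current* time zone into the resulting Date.
let picked = Calendar.current.date(from: components)!

// Formatting the same Date in two zones shows two different wall-clock
// times, which is the behavior I am seeing after a time zone change.
let formatter = DateFormatter()
formatter.dateFormat = "HH:mm"
formatter.timeZone = TimeZone(identifier: "America/New_York")
print(formatter.string(from: picked))
formatter.timeZone = TimeZone(identifier: "Asia/Tokyo")
print(formatter.string(from: picked))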
Thank you for your answer.
Given Apple's new .limited contact authorization introduced in iOS 18, I want to be able to present the ContactAccessPicker directly from my app via Ionic Capacitor. I present the .contactAccessPicker view via a UIHostingController, and I manage the view controller's dismissal accordingly when the ContactAccessPicker completes and is no longer presented.
Bug: After a few searches or interactions with the contact access picker (e.g. searching, selecting contacts, clicking the "show selected" button), the contact access picker crashes and the overlay remains. Any interaction with my app is then blocked, because I can't detect that the contact access picker has disappeared when it crashes, so I can't dismiss the view controller.
Is there a way for me to prevent the contact access picker from crashing, and how can I detect if it does crash, so I can at least dismiss the view controller if that happens?
struct ContactAccessPickerView: View {
    @Binding var isPresented: Bool
    let completion: @MainActor ([String]) -> Void

    var body: some View {
        Group {
            if #available(iOS 18.0, *) {
                Color.clear
                    .contactAccessPicker(isPresented: $isPresented) { result in
                        Task { @MainActor in
                            completion(result)
                        }
                    }
            } else {
            }
        }
    }
}
@objc func selectMoreContacts(_ call: CAPPluginCall) {
    guard isContactsPermissionGranted() else {
        call.resolve(["success": false])
        return
    }

    // Save the call to ensure it's available until we finish
    self.bridge?.saveCall(call)

    DispatchQueue.main.async { [weak self] in
        guard let self = self else { return }
        var isPresented = true
        let picker = ContactAccessPickerView(isPresented: .init(get: { isPresented }, set: { isPresented = $0 })) { contacts in
            call.resolve(["success": true])
            self.dismissAndCleanup(call)
        }
        let hostingController = UIHostingController(rootView: picker)
        hostingController.modalPresentationStyle = .overFullScreen
        self.bridge?.viewController?.present(hostingController, animated: true)
    }
}
I am currently finalizing my Swift Student Challenge submission, and Metal shaders are an essential part of my app. However, during submission, I noticed a note explaining: "Note: Xcode app playgrounds are run in Simulator", which is not possible for my app, as it also requires the camera of a physical device to function. So, I am currently transferring my app from Xcode into Swift Playgrounds, which I presume will run on physical devices.
However, I noticed that Swift Playgrounds do not yet support Metal shaders directly, so I am now pre-compiling my shaders to load them at runtime instead. Note that all the code below was run either in the terminal or in Xcode.
I have already compiled my Metal shaders with:
xcrun -sdk iphoneos metal -o Shaders.ir -c Shaders.metal
xcrun -sdk iphoneos metallib Shaders.ir -o Shaders.metallib
Which seems to have run without any problems.
When I run:
let shaderPath = Bundle.main.path(forResource: "Shaders", ofType: "metallib")
let shaderURL = URL(fileURLWithPath: shaderPath!)
let shaderData = try! Data(contentsOf: shaderURL)

do {
    let device = MTLCreateSystemDefaultDevice()!
    let library = try shaderData.withUnsafeBytes { bytes -> MTLLibrary? in
        let dispatchData = DispatchData(bytes: bytes)
        return try device.makeLibrary(data: dispatchData as __DispatchData)
    }
    print(library!.functionNames)
} catch {
    print(error.localizedDescription)
}
My Metal shader functions are printed correctly in the console. However, based on my research, it seems like a MTLLibrary cannot be converted into a SwiftUI ShaderLibrary.
That is why I am now looking at these two initializers:
ShaderLibrary(url: URL)
ShaderLibrary(data: Data)
Both state: "Creates a new Metal shader library from the contents of url/data, which must be the contents of a precompiled Metal library. Functions compiled from the returned library will only be cached as long as the returned library exists." I believe this should work for my use case.
However, the problem arises when I run this code:
let shaderPath = Bundle.main.path(forResource: "Shaders", ofType: "metallib")
let shaderURL = URL(fileURLWithPath: shaderPath!)
let library = ShaderLibrary(url: shaderURL)
My app consistently seems to crash on the ShaderLibrary initialization, rendering the app unusable. Why does ShaderLibrary(url: shaderURL) cause a crash, even though my .metallib file is valid? Are there additional requirements for loading a ShaderLibrary that I may have missed?
I’m trying to implement my Examples view in my DocumentGroup app on visionOS. I am stuck on what on the surface seems very basic: programmatically opening a document.
Is there an analog to NSDocumentController.shared.openUntitledDocumentAndDisplay on visionOS?
Here’s what I’ve tried so far.
Ideally, this would be a collection of document templates in a DocumentGroupLaunchScene. However, I've been unable to get DocumentGroupLaunchScene to work on visionOS.
I’ve tried UIApplication.shared.open(url) with a url to a document in my app bundle. UIApplication.shared.canOpenURL(url) returns true, but open(url) has no effect.
In the macOS build, I use NSDocumentController.shared.openUntitledDocumentAndDisplay, but do not see any iOS or visionOS analog.
@Environment(\.newDocument) private var newDocument would be ideal, but that is not available on visionOS.
UIApplication.shared.activateSceneSession(for: .init()) brings up the document browser in a new window, at which point clicking the “+” button does what I want. Can I invoke that directly somehow?
It would be sufficient if I could programmatically open a new untitled document. On macOS, I do that and sneak the template contents to the Document constructor in a global variable.
I presume I am just overlooking something simple, but I’ve come up blank so far.
Apple published a set of examples for using system gestures to interact with RealityKit entities. I've been using DragGesture a lot in my apps and noticed an issue when using it in an immersive space.
When dragging an entity, if I turn my body to face another direction, the dragged entity does not stay relative to my hand. This can lead to situations where the entity is pulled very close to me, pushed far away, or even ends up behind me.
In the examples linked above, there are two versions of how they use drag.
handleFixedDrag: This is similar to what I'm doing now. It uses the value from value.gestureValue.translation3D as the basis for the drag.
handlePivotDrag: This version aims to solve the problem I described above by using value.inputDevicePose3D as the basis of the gesture.
I've tried the example from handlePivotDrag, but it has one limitation. Using this version, I can move the entity around me as if it were on the inside of an arc or sphere. However, I can no longer move the entity further or closer. It stays within a similar (though not exact) distance relative to me while I drag.
Is there a way to combine these concepts? Ideally, I would like to use a gesture that behaves the same way that visionOS windows do. When we drag windows, I can move them around relative to myself, pull them closer, push them further, all while avoiding the issues described above.
Example from handleFixedDrag
mutating private func handleFixedDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    if !state.isDragging {
        state.isDragging = true
        state.dragStartPosition = entity.scenePosition
    }

    let translation3D = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
    let offset = SIMD3<Float>(x: Float(translation3D.x),
                              y: Float(translation3D.y),
                              z: Float(translation3D.z))

    entity.scenePosition = state.dragStartPosition + offset
    if let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
Example from handlePivotDrag
mutating private func handlePivotDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    // The transform that the pivot will be moved to.
    var targetPivotTransform = Transform()

    // Set the target pivot transform depending on the input source.
    if let inputDevicePose = value.inputDevicePose3D {
        // If there is an input device pose, use it for positioning and rotating the pivot.
        targetPivotTransform.scale = .one
        targetPivotTransform.translation = value.convert(inputDevicePose.position, from: .local, to: .scene)
        targetPivotTransform.rotation = value.convert(AffineTransform3D(rotation: inputDevicePose.rotation), from: .local, to: .scene).rotation
    } else {
        // If there is not an input device pose, use the location of the drag for positioning the pivot.
        targetPivotTransform.translation = value.convert(value.location3D, from: .local, to: .scene)
    }

    if !state.isDragging {
        // If this drag just started, create the pivot entity.
        let pivotEntity = Entity()

        guard let parent = entity.parent else { fatalError("Non-root entity is missing a parent.") }

        // Add the pivot entity into the scene.
        parent.addChild(pivotEntity)

        // Move the pivot entity to the target transform.
        pivotEntity.move(to: targetPivotTransform, relativeTo: nil)

        // Add the targeted entity as a child of the pivot without changing the targeted entity's world transform.
        pivotEntity.addChild(entity, preservingWorldTransform: true)

        // Store the pivot entity.
        state.pivotEntity = pivotEntity

        // Indicate that a drag has started.
        state.isDragging = true
    } else {
        // If this drag is ongoing, move the pivot entity to the target transform.
        // The animation duration smooths the noise in the target transform across frames.
        state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)
    }

    if preserveOrientationOnPivotDrag, let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
When I was working on my project for the Swift Student Challenge, I found an interesting fact regarding camera access. If I create an App project in Xcode, camera capture works well. However, if I copy and paste the same code into an App Playground project in Xcode, it crashes and outputs several errors. I am wondering why this is happening.