Hello!
I am using the wrapper UIHostingController, and it is doing what it is designed to do: wrapping my SwiftUI components in a UIKit project. I have AR models stored locally on my Mac, and the UI shows an HStack of thumbnails along the bottom; tapping a thumbnail loads that model. When I run the code purely in SwiftUI, just calling ContentsView, everything works fine. But when I push it from UIKit, the thumbnails along the bottom are missing.
Here is my Host Controller code for wrapping:
`import UIKit
import SwiftUI
import AVFoundation

class ARHostingController: UIViewController {
    // Hosts the SwiftUI ContentsView inside UIKit.
    let arView = UIHostingController(rootView: ContentsView())
    private var heightConstraint: NSLayoutConstraint?

    override func viewDidLoad() {
        super.viewDidLoad()
        heightConstraint = arView.view.heightAnchor.constraint(equalToConstant: 0)
        // Embed the hosting controller as a child view controller.
        addChild(arView)
        view.addSubview(arView.view)
        arView.didMove(toParent: self)
        // arView.view.sizeToFit()
        setupConstraints()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        arView.view.sizeToFit()
        heightConstraint?.constant = arView.view.bounds.height
        heightConstraint?.isActive = true
    }

    fileprivate func setupConstraints() {
        // Pin the hosted view to all four edges of the container.
        arView.view.translatesAutoresizingMaskIntoConstraints = false
        arView.view.topAnchor.constraint(equalTo: view.topAnchor).isActive = true
        arView.view.bottomAnchor.constraint(equalTo: view.bottomAnchor).isActive = true
        arView.view.leftAnchor.constraint(equalTo: view.leftAnchor).isActive = true
        arView.view.rightAnchor.constraint(equalTo: view.rightAnchor).isActive = true
    }
}`
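To give the full picture, I push this controller from UIKit roughly like this (simplified; MenuViewController is just a placeholder name for my real presenting controller):
`// Simplified: how I get to the AR screen from UIKit.
// "MenuViewController" stands in for my actual presenting controller.
import UIKit

final class MenuViewController: UIViewController {
    @objc private func showARTapped() {
        // Push the SwiftUI wrapper onto the existing UIKit navigation stack.
        navigationController?.pushViewController(ARHostingController(), animated: true)
    }
}`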
Here is the ContentsView (SwiftUI) code:
`import SwiftUI

struct ContentsView: View {
    @State private var isPlacementEnabled = false
    @State private var selectedModel: Model?
    @State private var modelConfirmedForPlacement: Model?

    // Collect every .usdz file in the app bundle and turn it into a Model.
    private var models: [Model] = {
        let fileManager = FileManager.default
        guard let path = Bundle.main.resourcePath,
              let files = try? fileManager.contentsOfDirectory(atPath: path) else {
            return []
        }
        return files
            .filter { $0.hasSuffix(".usdz") }
            .map { $0.replacingOccurrences(of: ".usdz", with: "") }
            .compactMap { Model(modelName: $0) }
    }()

    var body: some View {
        ZStack(alignment: .bottom) {
            ARViewRepresentable(
                modelConfirmedForPlacement: $modelConfirmedForPlacement
            )
            if isPlacementEnabled {
                PlacementButtonView(
                    isPlacementEnabled: $isPlacementEnabled,
                    selectedModel: $selectedModel,
                    modelConfirmedForPlacement: $modelConfirmedForPlacement
                )
            } else {
                ModelPickerView(
                    isPlacementEnabled: $isPlacementEnabled,
                    selectedModel: $selectedModel,
                    models: models
                )
            }
        }
        // .onAppear {
        //     self.viewModel.fetchData()
        // }
    }
}`
I am using the default HelloPhotogrammetry app you guys made: https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app/
My system originally did not meet the specs to run this command-line tool because of a GPU limitation. To solve this, I bought the Apple-supported Blackmagic eGPU. Despite the eGPU, I get this error when I run it: apply_selection_policy_once: prefer use of removable GPUs (via (null):GPUSelectionPolicy->preferRemovable)
I have deduced that the application running the job needs this key set: https://developer.apple.com/documentation/bundleresources/information_property_list/gpuselectionpolicy
I tried modifying Terminal's plist to the updated value, but had no luck with it. I believe the command-line tool built in Xcode needs to carry the updated value -- I need help with that aspect so the system will actually use the eGPU.
I did create a property list within the macOS app and added GPUSelectionPolicy with preferRemovable (the exact entry is shown below), and I am still hitting the same error above. Please advise.
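For reference, here is the entry I added (a minimal sketch of the Info.plist key from the documentation linked above; for a command-line target I understand it may also need to be embedded into the binary itself, e.g. via the Create Info.plist Section in Binary build setting, though I am not sure that is the missing piece):
`<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Ask the system to run this process on a removable (external) GPU. -->
    <key>GPUSelectionPolicy</key>
    <string>preferRemovable</string>
</dict>
</plist>`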
Also, to note: I did try temporarily turning off Prefer External GPU for Terminal, and it did run the photogrammetry processing, but it was taking a while (30+ minutes), so I ended up killing the task. Activity Monitor showed that my internal GPU was being used, not the eGPU I am trying to use. Previously, when I did not have the eGPU plugged in, I would get an error saying my specs did not meet the criteria, so it was interesting to see that it now judged my Mac capable (which it technically is); it just did the processing on the less powerful GPU.
From my understanding, you capture images on an iOS device and send them to macOS, which uses photogrammetry via the Object Capture API to process them into a 3D model, roughly as sketched below…
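This is my (possibly simplified) picture of the macOS-side processing step, based on the HelloPhotogrammetry sample -- the input/output paths are placeholders:
`import Foundation
import RealityKit

// Folder of captured images in, .usdz model out (placeholder paths).
let inputFolder = URL(fileURLWithPath: "/tmp/Captures/", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/tmp/model.usdz")

let session = try PhotogrammetrySession(input: inputFolder)

// Listen for completion/error messages from the session.
Task {
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Finished; model written to \(outputFile.path)")
            exit(0)
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
            exit(1)
        default:
            break // progress updates, etc.
        }
    }
}

// Kick off reconstruction at reduced detail, then keep the tool alive.
try session.process(requests: [.modelFile(url: outputFile, detail: .reduced)])
RunLoop.main.run()`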
Is it possible to exclude macOS and use the API within the app itself, so it does everything on device, from scanning to processing? I see there are already scanner apps on the App Store, so I know it is possible to create 3D models on the iPhone within an app -- but can this API do that? If not, any resources to point me in the right direction?
(I'm working on creating a 3D food app that scans food items and turns them into 3D models for restaurant owners… I'd like a restaurant owner to be able to scan their food item entirely within the app itself.)