After upgrading to the latest version of Xcode, I'm no longer able to select "Constraint" from the layout dropdown in IB for the root view in a view controller. When I create a new view controller in my storyboard, the available options are "Inferred (Autoresizing Mask)" and "Autoresizing Mask" and the popup views for setting constraints are disabled.
Previously created view controllers still show "Inferred (Constraints)" and "Autoresizing Mask" as the available options.
Any ideas on how I can get back to using constraints?
We have an iOS app in the store that has been available for almost two years. The app is also available on macOS and Windows (and was available on those platforms for many years before the iOS app was released).
On macOS and Windows, users are given a product key in order to license the application (either purchased directly from us, or given to them by their institution). Depending on the purchase, the product key could be a subscription or a lifetime license.
When we released the iOS app, we made it "free" in the sense that there is no cost to install it from the store. On first run, the user is presented with in-app purchase options and is also given the option to type in a product key if they have one. We are pretty much the definition of "3.1.3(b) - Multiplatform Services." We've had a couple of issues in the past getting the app through review because of the product key option, but it has always been a simple matter of pointing out that we are complying with 3.1.3(b).
For our most recent release, we were once again rejected for providing the product key option. However, after we pointed out that 3.1.3(b) allows unlocking based on purchases on a different platform, we were unilaterally told that product keys were not allowed. We had several back-and-forth interactions with the reviewer in an attempt to clarify why 3.1.3(b) doesn't apply, but we were only told that product keys were not allowed.
Since this release is a bug-fix-only release, they agreed to release it as-is but said we would need to address the issue in our next release. We have no intention of removing the product key unlock, so we expect our next release (a new feature release) to be rejected, which will force us to appeal the decision.
Has something changed recently that limits the ability to use product keys that were purchased on separate platforms? I've read through all of the relevant sections and I don't see anything materially different from when we first released the app 2 years ago. Is this just a reviewer who doesn't understand their policies? Has anyone gone through anything similar and managed to successfully appeal the decision?
I have a macOS app that links against several static libraries. Each library has its own Xcode project, and we have a workspace that includes the app along with all of the static libraries. After upgrading to Xcode 15, any change made to the code in one of the libraries gets compiled but not relinked into the app (so the app still runs with the old code). So far, the only way I can get the change included is to clean the entire workspace and rebuild, which takes several minutes.
Following suggestions I've found online, I've tried turning off "Find Implicit Dependencies" in the scheme, but I can't figure out how to set the dependencies explicitly (the libraries don't appear when I try to add a Target Dependency under Build Phases). Also, Xcode seems to be finding the dependencies correctly, since it recognizes when a change has been made and recompiles the library.
I've tried specifying the libraries in both the "Link Binary With Libraries" Build Phase and the Other Linker Flags build setting. Both work in terms of correctly building the app but neither one fixes this issue.
Does anyone know how to fix this issue? Waiting 5 minutes to recompile the entire project after each minor change is destroying my productivity!
I'm building a visionOS app which loads a Reality Composer scene with a large number of models. The app includes several of these scenes, and allows the user to switch between them. Because the scenes have a large number of models, I want to unload the currently loaded scene before loading a different one. So far I have been unable to reclaim all of the used memory by removing the entities from the scene.
I've made a few small changes to the Mixed Immersive app template that demonstrate this behavior; they're included below (apparently I'm unable to upload a zip file with the entire project). Using just the two spheres included in the RealityKit content, the leaked memory is fairly small, but if you add a couple of larger models to the scene (I was easily able to find free ones online), the memory leak becomes much more obvious.
When the immersive space is first opened, I see roughly 44MB of used memory (as shown in the Xcode Debug navigator). Each time I tap the "Load Models" and then "Unload Models" buttons, memory use decreases but does not get back down to the initial amount. Subsequent loads and unloads continue to increase the used memory (how much depends on the models you add to the scene).
Also note that I've seen similar memory increases when creating the entities dynamically. Inside ViewModel.loadModels I've included some commented-out code that creates entities dynamically instead of loading a Reality Composer scene.
Is there a way to fully reclaim the used memory? I've tried many different ways to clear the RealityKit entities but so far have been unsuccessful.
import SwiftUI

@main
struct RKMemTestApp: App {
    // Shared view model injected into both the window and the immersive space.
    private var viewModel = ViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(viewModel)
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environment(viewModel)
        }
    }
}
Add this above the body in ContentView:
@Environment(ViewModel.self) private var viewModel
The ContentView body should be:
VStack {
    Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
        .font(.title)
        .frame(width: 360)
        .padding(24)
        .glassBackgroundEffect()

    Button("Load Models") {
        viewModel.loadModels()
    }

    Button("Unload Models") {
        viewModel.unloadModels()
    }
}
ImmersiveView:
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @Environment(ViewModel.self) private var viewModel

    var body: some View {
        RealityView { content in
            if let rootEntity = viewModel.rootEntity {
                content.add(rootEntity)
            }
        } update: { content in
            // Mirror the view model: clear the content when the root entity is
            // released, and add it back when a new root entity is created.
            if viewModel.rootEntity == nil && !content.entities.isEmpty {
                content.entities.removeAll()
            } else if let rootEntity = viewModel.rootEntity, content.entities.isEmpty {
                content.add(rootEntity)
            }
        }
    }
}
ViewModel:
import Foundation
import Observation
import RealityKit
import RealityKitContent

@Observable
class ViewModel {
    var rootEntity: Entity?

    init() {
    }

    func loadModels() {
        Task {
            // Load the Reality Composer Pro scene and attach it under a shared root entity.
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                Task { @MainActor in
                    if rootEntity == nil {
                        rootEntity = Entity()
                    }
                    rootEntity!.addChild(scene)
                }
            }
        }

        /*
        // Alternative: create the entities dynamically instead of loading a scene.
        if rootEntity == nil {
            rootEntity = Entity()
        }
        for _ in 0..<1000 {
            let mesh = MeshResource.generateSphere(radius: 0.1)
            let material = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.position = [Float.random(in: 0.0..<1.0), Float.random(in: 0.5..<1.5), -Float.random(in: 1.5..<2.5)]
            rootEntity!.addChild(entity)
        }
        */
    }

    func unloadModels() {
        // Remove all children and drop the root entity; memory is still not fully reclaimed.
        rootEntity?.children.removeAll()
        rootEntity?.removeFromParent()
        rootEntity = nil
    }
}
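For what it's worth, one check that helps separate "my entities are still being retained" from "RealityKit is holding memory internally" is logging when the root entity is actually deallocated. This is just a debugging sketch (TrackedEntity is a hypothetical name, not part of the project above):
// Debugging sketch only: an Entity subclass whose deinit logs when the instance
// is actually deallocated. Swapping it in for the plain Entity root confirms
// whether the entity objects themselves are being released.
final class TrackedEntity: Entity {
    deinit {
        print("TrackedEntity deallocated")
    }
}

// e.g. in loadModels(): rootEntity = TrackedEntity()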
In my visionOS app I am attempting to get the location of a finger press (not a tap, but when the user first presses their fingers together). As far as I can tell, the only way to get this event is to use a SpatialEventGesture.
I currently have a DragGesture, and I'm able to use the convert functions on the passed-in EntityTargetValue to convert the location3D from the drag value into my hit-tested entity's space. But as far as I can tell, SpatialEventGesture doesn't provide an EntityTargetValue. I've tried using the convert functions on my targeted entity (i.e., myEntity.convert(position:from:)), but these do not return valid values.
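Roughly, the two setups look like this (a sketch of the approach described above with placeholder names, not the actual project code):
// Working: a DragGesture targeted to an entity provides an EntityTargetValue,
// whose convert(_:from:to:) maps the gesture's location3D into the entity's space.
RealityView { content in
    // ... add entities ...
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            let posInEntity = value.convert(value.location3D, from: .local, to: value.entity)
            print("drag position in entity space: \(posInEntity)")
        }
)
// Not working: SpatialEventGesture fires when the fingers first come together
// (phase == .active), but its location3D is in the view's coordinate space and
// myEntity.convert(position:from:) has not returned valid entity-space values.
.gesture(
    SpatialEventGesture()
        .onChanged { events in
            for event in events where event.phase == .active {
                print("press at \(event.location3D)")
            }
        }
)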
My questions are:
Is SpatialEventGesture the correct way to get notified of finger presses?
How do I convert the location3D in the SpatialEventGesture to my entity space?