I'm working on an iOS project that is almost entirely SwiftUI, save for a UIViewRepresentable-wrapped ARView. I'm using RealityKit.
When calling MeshResource.generateText(), fairly often some or all entities will fail to generate their mesh with the proper font and will instead render in SF Pro Regular. The font size is not lost, nor are any other attributes of the entity. The same data model that generates the entity also drives a 2D representation of the model, in which the font is never lost. If an entity is generated during makeUIView() of the ARView, or during onAppear() of its parent view, the font is never lost. The font is only lost when the entity is generated in response to user input.
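To make the "works every time" path concrete, here is a minimal sketch of generating the text mesh during makeUIView() instead of in response to user input. "Font-Name" is a placeholder for whichever non-Apple font is bundled with the project; the anchor placement is arbitrary.

```swift
import SwiftUI
import UIKit
import RealityKit

// Sketch: text meshes generated this early always pick up the custom font
// in my testing; only meshes created later, after user input, fall back
// to SF Pro Regular. "Font-Name" is a placeholder.
struct WarmupARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        if let font = UIFont(name: "Font-Name", size: 0.1) {
            let mesh = MeshResource.generateText("hello world",
                                                 extrusionDepth: 0.001,
                                                 font: font)
            let entity = ModelEntity(mesh: mesh,
                                     materials: [UnlitMaterial(color: .black)])
            // Place the text 1m in front of the world origin.
            let anchor = AnchorEntity(world: [0, 0, -1])
            anchor.addChild(entity)
            arView.scene.addAnchor(anchor)
        }
        return arView
    }

    func updateUIView(_ arView: ARView, context: Context) {}
}
```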
Results are unaffected by using .ttf vs. .otf font files.
Very often (maybe always?), once one entity fails to render with its provided font, subsequently generated entities also fail to render with theirs. I have also had an entity render successfully and then lose its font when re-rendered.
Possibly related: generating text entities always logs the following CoreText performance note:
CoreText performance note: Client called CTFontCreateWithName() using name "CUSTOM FONT NAME" and got font with PostScript name "CUSTOMFONT-NAME". For best performance, only use PostScript names when calling this API.
This note is not logged if the first text entity (and therefore all subsequent entities) fails to render properly. If a single text entity is generated successfully, then subsequent failing entities are more likely to log the note as well.
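For what it's worth, the PostScript name that the note asks for can be read off the font descriptor, so it is easy to try passing that to UIFont(name:) instead of the display name. A sketch, where the display name is a placeholder for whatever the font actually ships with:

```swift
import UIKit

// Sketch: resolve a font's PostScript name once and reuse it for lookups,
// which is what the CoreText performance note recommends. The display
// name passed in is a placeholder.
func postScriptFont(displayName: String, size: CGFloat) -> UIFont? {
    guard let font = UIFont(name: displayName, size: size) else { return nil }
    // For a font named "CUSTOM FONT NAME" this yields something
    // like "CUSTOMFONT-NAME", per the log message above.
    let psName = font.fontDescriptor.postscriptName
    return UIFont(name: psName, size: size)
}
```

I don't know whether this silences the note in the RealityKit path, but it rules the name lookup in or out as a factor.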
I also always get warning: using linearization / solving fallback when starting the AR session, but it doesn't seem to be related.
I also often (always?) get the error:
2023-03-27 15:10:30.938146-0700 appName[38594:50629667] [Technique] ARWorldTrackingTechnique <0x15b8d0570>: World tracking performance is being affected by resource constraints [25]
and often (always?) the error:
2023-03-27 15:10:30.060163-0700 appName[38594:50629490] [TraitCollection] Class CKBrowserSwitcherViewController overrides the -traitCollection getter, which is not supported. If you're trying to override traits, you must use the appropriate API.
I don't know if any of these other errors are related, but I figured I should include them. The font problem has been happening intermittently for a long time and with a number of different fonts.
I created a very simple version of this in a separate project that will eventually reproduce the error (if you have enough patience). You'll just need to add a non-Apple font to the project. It should render in the provided font sometimes, but when you rerun the project, or if you add the entity enough times, it should fail. It's more likely to fail with the first entity than with subsequent entities, so running the project repeatedly is the most efficient way to reproduce the bug.
import SwiftUI
import RealityKit

class MockRenderQueue: ObservableObject {
    @Published var renderActions: [(_ arView: ARView, _ cameraAnchor: AnchorEntity) -> Void] = []
}

struct ContentView: View {
    @StateObject var mockRenderQueue = MockRenderQueue()

    var body: some View {
        ZStack(alignment: .bottom) {
            ARViewContainer(renderQueue: mockRenderQueue).edgesIgnoringSafeArea(.all)
            Button(action: {
                mockRenderQueue.renderActions.append(addTextEntity)
            }) {
                ZStack {
                    RoundedRectangle(cornerRadius: 20)
                        .frame(height: 48)
                    Text("Add Text Entity")
                        .bold()
                        .foregroundColor(.white)
                }
            }.padding()
        }
    }

    func addTextEntity(to arView: ARView, cameraAnchor: AnchorEntity) {
        let fontName = "Font-Name" // Put your font name here
        guard let customFont = UIFont(name: fontName, size: 0.1) else {
            print("Error: Could not find font")
            return
        }
        // Make the text entity
        let textMesh = MeshResource.generateText("hello world", extrusionDepth: 0.001, font: customFont)
        let textEntity = ModelEntity(mesh: textMesh, materials: [UnlitMaterial(color: .black)])
        // Make an anchor and position it 1m back and centered, facing the user
        let anchorEntity = AnchorEntity()
        anchorEntity.look(at: [0, 0, 0], from: [textMesh.bounds.extents.x / -2, 0, -1], relativeTo: cameraAnchor)
        anchorEntity.transform.rotation *= simd_quatf(angle: .pi, axis: [0, 1, 0])
        // Add the text entity to the anchor
        anchorEntity.addChild(textEntity)
        // Add the anchor to the camera anchor
        cameraAnchor.addChild(anchorEntity)
    }
}

struct ARViewContainer: UIViewRepresentable {
    @ObservedObject var renderQueue: MockRenderQueue
    @State private var cameraAnchor = AnchorEntity(.camera)

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.scene.addAnchor(cameraAnchor)
        return arView
    }

    func updateUIView(_ arView: ARView, context: Context) {
        for renderAction in renderQueue.renderActions {
            renderAction(arView, cameraAnchor)
        }
    }
}
I've tried a handful of different implementations (exposing the ARView through a Coordinator, making it a singleton, and other approaches) and I just can't shake this bug. Does anyone else have this problem? It's a significant detraction from the user experience, and I really need to find out how to fix it.
Thank you in advance.