I've been trying to make and view an @IBDesignable view in Interface Builder, but unfortunately with little success. When I select the view in my Storyboard, the Custom Class inspector shows the Designables entry as "updating", without any change regardless of my actions. I've done a clean and then a build, and the build compiles with no warnings. When I run the code in the simulator, I'm able to see the resulting visuals and there are no warnings or errors, but nothing appears to show up in Interface Builder.
I've tried removing DerivedData and then restarting my laptop and Xcode to clear any ephemeral issues, to no effect. I have "Automatically Refresh Views" disabled at the moment - I've tried it both enabled and disabled, with no effect.
When I select the view and choose Editor -> Debug Selected Views, the status spinner at the top of Xcode shows "Preparing to debug instance of "GridColorSelectionView"", and an error is reported in Xcode. The specific PID changes with the attempts, but the error code is consistent. I've looked in ~/Library/Logs/DiagnosticReports/, but haven't seen anything that would give me a hint about what might be failing.
Is there something I'm missing - a setting or something - or does anyone have a suggestion on how I might debug this issue? I'm not conversant enough to know whether this is an issue with something I've done (or am doing), or whether it's a candidate for a bug report on Xcode. This is with the latest Xcode - Version 10.2 (10E125), running on macOS 10.14.4 (18E226).
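For reference, the view is set up along these lines - an illustrative skeleton only, with the real drawing code in GridColorSelectionView elided:

import UIKit

// Illustrative skeleton of the designable view's shape; the actual drawing
// logic in GridColorSelectionView is more involved and elided here.
@IBDesignable
class GridColorSelectionView: UIView {
    @IBInspectable var gridColor: UIColor = .systemBlue {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        gridColor.setFill()
        UIBezierPath(rect: rect.insetBy(dx: 4, dy: 4)).fill()
    }

    override func prepareForInterfaceBuilder() {
        super.prepareForInterfaceBuilder()
        setNeedsDisplay()
    }
}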
There are a lot of articles and questions already on the forums from past years that don't include tags, since tags weren't part of the system earlier. However, I can still see my past articles - so it would be nice if I could go back and retroactively tag those articles.
Then, perhaps, there'd be at least ONE article related to Combine in the forums... and one with a great answer from Quinn that really helped me.
I love what I'm seeing about MX*Diagnostic (MXCrashDiagnostic - https://developer.apple.com/documentation/metrickit/mxcrashdiagnostic and friends) and the capability to use it to track down crashes and issues. What I'd like to know is if there's a way I can trigger the diagnostic output deterministically at the end of the CI-driven integration test so that I can build in a scan for these and harvest them to help identify regressions.
Without such a mechanism, am I left having to leave the device (or simulator) running for a huge amount of time and hope for the best? I want to use this specifically with CI, which may have multiple runs per day - and I'd like to identify which run of the code a given diagnostic aligns with, without something akin to a "spend a day testing this single iteration of the code" kind of constraint.
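For context, the subscription side seems straightforward enough - a minimal sketch of what I have in mind (assuming iOS 14's MXDiagnosticPayload; the deterministic CI harvesting is the part I'm unsure about):

import MetricKit

// Minimal subscriber sketch: receive diagnostic payloads and serialize them
// into something a CI job could collect as an artifact.
final class DiagnosticHarvester: NSObject, MXMetricManagerSubscriber {
    func didReceive(_ payloads: [MXDiagnosticPayload]) {
        for payload in payloads {
            let json = payload.jsonRepresentation() // Data, ready to write to disk
            print("harvested diagnostic payload, \(json.count) bytes")
        }
    }
}

// Registered early in app launch so nothing is missed:
let harvester = DiagnosticHarvester()
MXMetricManager.shared.add(harvester)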
I'm trying to capture snapshots as a visual validation of the state of a view in my UI tests. The work that I want to capture happens after an async view is loaded (currently an embedded MapKit view). I'm wondering if there's a way I can expose a marker to XCUIApplication(), through the accessibility system, when the Map has completed its loading and rendering - even if it's something I intentionally expose from the app with my own code rather than something done by default.
Right now I'm waiting on a Link to show up with:
XCUIApplication().links["Legal"].waitForExistence(timeout: 2)
as Map currently exposes a link to Legal from within itself once the rendering is set up, but that's a side effect of MapKit specifically. For other views that my code loads asynchronously, can I do something to expose an event or callback hook that I can react to within the XCUIApplication() structure to know that the relevant background processing has completed, or is waiting for UI elements to exist in some form the only exposed way of getting this kind of information?
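One approach I've been considering is flipping an accessibility identifier from my own code once the async work finishes, then waiting on that identifier from the test. A sketch of the idea - the view and the "content.loaded" identifier are hypothetical, and the delayed closure stands in for the real map-loading callback:

import Foundation
import SwiftUI

// App target: expose a marker once the async work completes.
struct ContentLoadingView: View {
    @State private var loaded = false

    var body: some View {
        Text(loaded ? "Ready" : "Loading…")
            .accessibilityIdentifier(loaded ? "content.loaded" : "content.loading")
            .onAppear {
                // Stand-in for the real "map finished rendering" callback.
                DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { loaded = true }
            }
    }
}

// The UI test target would then wait on the marker instead of MapKit's "Legal" link:
//     XCTAssertTrue(XCUIApplication().staticTexts["content.loaded"].waitForExistence(timeout: 5))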
Each of the tag pages says it's related to a specific thing - that's the whole point of the tag. Quite a number of recent tags are for WWDC sessions, which is awesome. However, those tag pages - while they list the WWDC session - don't provide any linked way of getting back to that session or its details.
For example, the tag page: https://developer.apple.com/forums/tags/wwdc20-10091
burns up a huge amount of visual space with a description of the session, but nothing in that content provides an actual *link* to the content: https://developer.apple.com/videos/play/wwdc2020/10091
The whole WWDC+Forums thing would be hugely improved by not treating these as separate resources, but allowing or encouraging them to intermix, and just adding a single active link from the tag page header to the session itself would be a good start.
Hello,
I recently downloaded the sample from Creating a Game with SceneUnderstanding - https://developer.apple.com/documentation/realitykit/creating_a_game_with_sceneunderstanding, and the out-of-the-box compilation failed, with what appears to be some structural misses in RealityKit.
This is with Xcode 12.4 (12D4e) - and a number of compilation complaints appear to be Objective-C elements not being correctly bridged, or otherwise unavailable to the compiler.
Missing: ARView.RenderOptions
.session within an ARView instance
SceneUnderstanding from within ARView.Environment
.showSceneUnderstanding from ARView.DebugOptions
and a few others. I tried changing the deployment target from iOS 14.0 to iOS 14.4, doing a clean build and rebuild, and wiping out DerivedData just to take a stab at things, but to no avail.
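For reference, the kinds of calls that no longer resolve look roughly like this (paraphrased for illustration, not the sample's exact code):

import ARKit
import RealityKit

// Typical scene-understanding setup on an ARView - these are the kinds of
// symbols the compiler reports as missing when building the sample.
func configureSceneUnderstanding(on arView: ARView) {
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.debugOptions.insert(.showSceneUnderstanding)
    arView.renderOptions.insert(.disableMotionBlur)

    let configuration = ARWorldTrackingConfiguration()
    configuration.sceneReconstruction = .meshWithClassification
    arView.session.run(configuration)
}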
Is there something I can/should update to get the scene understanding elements back available from the ARView class?
If the library I want to document is primarily tracked with a Package.swift cross-platform definition, what's the optimal way to build and render documentation?
Can I still use xcodebuild docbuild with the Package.swift format for the project, or does the documentation system require a fully-defined Xcode project wrapped around it?
Or do I need to open the Package.swift file in Xcode and then generate the docc archive from Xcode's Product menu?
Is any of the sample code from Explore advanced rendering with RealityKit2 used to generate the dynamic mesh resource, or the geometry modifier, available as a standalone project?
I have a few sections in my content where I have a repeated example that I use to illustrate how to use a few methods in my library together - is there a way to move this into an external snippet and include it inline within existing content?
What's the most effective equivalent to the accessibilityElement API when you're creating visualizations in SwiftUI using the new Canvas drawing mechanism?
When I'm drawing with Canvas, I know roughly the same positioning information as when drawing in UIKit or AppKit to make chart visualizations, but I didn't spot an equivalent API for marking sections of the visualization to be associated with specific values (akin to setting the frame coordinates on a UIView accessibility element).
I've only just started playing with Canvas, so I may be wrong - but it seems like a monolithic-style view element that I can't easily break down into sub-components the way I can by applying separate CALayers or UIViews on which to hang accessibility indicators.
For making the equivalent bar chart in straight-up SwiftUI, is there a similar mechanism, or is this a place where I'd need to pop the escape hatch to UIKit or AppKit, do the relevant work there, and then host it inside a SwiftUI view?
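For reference, the UIKit pattern I have in mind is roughly this (a sketch with illustrative names, not my actual chart code):

import UIKit

// One accessibility element per bar, each with an explicit frame
// in the container view's coordinate space.
class BarChartView: UIView {
    struct Bar {
        let label: String
        let value: Double
        let frame: CGRect
    }

    var bars: [Bar] = [] {
        didSet {
            accessibilityElements = bars.map { bar in
                let element = UIAccessibilityElement(accessibilityContainer: self)
                element.accessibilityLabel = bar.label
                element.accessibilityValue = "\(bar.value)"
                element.accessibilityFrameInContainerSpace = bar.frame
                return element
            }
        }
    }
}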
I'd love to be able to run the HTTP traces on a few specific integration-style tests that I have written using XCUITesting. Is there a way, even with external scripting, to invoke the test and have the profiler capture a trace like this?
I'm using DocC to add documentation to an existing Swift package, and the basics are working great - properties and methods showing up, linked pages, all the stuff you'd hope for.
I'd like to update the structure within the struct (or class) to organize the methods and properties a bit more akin to how Apple does it in their documentation - not just the default headers, but organized by why you'd use them.
Can this be done in DocC using headings within the doc comments in the source file directly, or do we need to add an Extension file with the same symbol name at the top to override and provide that additional structure?
The extension template looks like it might be redundant with the summary and overview sections, since that same area is (I think) what comes from the doc comments in the source headers.
# ``Symbol``
Summary
## Overview
Text
## Topics
### Group
- ``Symbol``
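For concreteness, the kind of curation-only extension file I'm imagining would look something like this (the symbol names here are hypothetical):

# ``MyLibrary/HistogramBuffer``

## Topics

### Creating a Buffer

- ``init(capacity:)``

### Recording Values

- ``record(_:)``
- ``record(contentsOf:)``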
I've been trying to build and use the USD command line toolchain pieces from source on an M1/arm64 Mac - and while I've gotten it to compile, I get a crash as python attempts to load up the USD internals:
------------------------------ python terminated -------------------------------
python crashed. FATAL ERROR: Failed axiom: ' Py_IsInitialized() '
in operator() at line 148 of /Users/heckj/src/USD/pxr/base/tf/pyTracing.cpp
writing crash report to [ Sparrow.local:/var/folders/8t/k6nw7pyx2qq77g8qq_g429080000gn/T//st_python.79467 ] ... done.
--------------------------------------------------------------------------------
I found a number of issues on GitHub, which hint that this is a potentially known and ongoing problem:
https://github.com/PixarAnimationStudios/USD/issues/1620
referenced issue: https://github.com/PixarAnimationStudios/USD/issues/1466
referenced issue: https://github.com/PixarAnimationStudios/USD/issues/1736
With some suggestions, but no clear resolutions.
I tried the build commands that are referenced in the release details for USDPython (as available on developer downloads), and after fiddling with them a bit got it to compile:
python build_scripts/build_usd.py \
--build-args TBB,arch=arm64 \
--python --no-imaging --no-usdview \
--prefer-safety-over-speed \
--build-monolithic /opt/local/USD
But I'm repeatedly hitting the crash where python isn't initializing. I've tried Python 3 from Homebrew, an Anaconda version of python (Intel, through Rosetta), basing it on the python included with Xcode (universal binary), and the most recent trial was with the miniforge3 arm-native python that's recommended from the Metal for TensorFlow marketing page.
With the warnings about Python disappearing from the default install, I'd like to get a mechanism in place that I'm comfortable with for getting an install of the USD tools, ideally native to the M1 processor.
I was experimenting in a playground with specifying the kind of measurement via its unit type and then formatting it with the new Foundation formatters, specifically MeasurementFormatter.
The particular example I was using is a UnitDuration, exploring the various options and outputs.
import Foundation

let f = MeasurementFormatter()

let times = [1, 10, 100, 1000, 10103, 2354, 83674, 182549].map {
    Measurement<UnitDuration>(value: $0, unit: .microseconds)
}

print("unitStyle: .medium, unitOptions: .providedUnit")
f.unitOptions = .providedUnit
f.unitStyle = .medium
for t in times {
    print(f.string(from: t))
}
What I gathered from this exploration is that the unit you use when you create a Measurement<UnitDuration> is stashed within it - in this case I created them with microseconds.
What I'd like is a uniform scale for the times I'm printing, one more aligned with the magnitude of the values. For example, if all my examples are recorded in microseconds but are more usefully represented as milliseconds - is there a way to deal with this just in the formatting itself?
I tried updating the unit representation to transform the Measurement<UnitDuration> instances, but the unit property is a constant: Cannot assign to property: 'unit' is a 'let' constant. I did find convert(to:), which would let me transform a value to another specific unit.
Is there, by chance, any way to pass the desired unit directly to a MeasurementFormatter instance, or do I need to back out and transform the various instances to get to a specific unit?
Likewise, are there any built-in methods on Measurement or MeasurementFormatter that align the SI unit chosen to the scale of the value being formatted? Or is that something I'd need to build a heuristic/algorithm for and apply myself when I want that result?
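In the meantime, the "back out and transform" workaround looks roughly like this (a sketch; converted(to:) returns a new Measurement rather than mutating in place):

import Foundation

let formatter = MeasurementFormatter()
formatter.unitOptions = .providedUnit
formatter.unitStyle = .medium

let sample = Measurement<UnitDuration>(value: 10103, unit: .microseconds)

// Convert to the unit I actually want reported before handing it to the formatter.
print(formatter.string(from: sample.converted(to: .milliseconds)))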
I'm trying to determine how best to achieve a desired result within Swift Charts.
I'm trying to make a sort-of "reversed" log chart. The use case here is diving into the details of the percentiles from a series of latency values stored within a histogram.
The X axis percentile values I was hoping to achieve were something along the lines of:
10, 50, 90, 99, 99.9, 99.99, 99.999
I'd like those values to be fairly evenly distributed across the chart. If I look at this as an inverted sequence - (1 - percentile/100).reversed() - the values would be:
0.00001, 0.0001, 0.001, 0.01, 0.1, 0.5, 0.9
But I've been struggling with how to start with those values, scale the X axis using that 1-minus-value kind of setup, and then overlay labels that are more human readable to achieve the end result.
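The direction I've been experimenting with is plotting a transformed "number of nines" value on X and then relabeling the axis with the human-readable percentiles - a rough sketch, where latencyAtPercentile(_:) is a hypothetical stub standing in for a lookup into the real histogram:

import Foundation
import SwiftUI
import Charts

struct PercentileLatencyChart: View {
    let percentiles: [Double] = [10, 50, 90, 99, 99.9, 99.99, 99.999]

    // X position as "number of nines", e.g. 90 -> 1, 99 -> 2, 99.9 -> 3.
    func nines(_ p: Double) -> Double { -log10(1.0 - p / 100.0) }

    // Hypothetical stand-in for a lookup into the real histogram.
    func latencyAtPercentile(_ p: Double) -> Double { 0 }

    var body: some View {
        Chart(percentiles, id: \.self) { p in
            LineMark(
                x: .value("Percentile", nines(p)),
                y: .value("Latency", latencyAtPercentile(p))
            )
        }
        .chartXAxis {
            AxisMarks(values: percentiles.map { nines($0) }) { value in
                AxisGridLine()
                AxisValueLabel {
                    if let n = value.as(Double.self) {
                        // Convert the "nines" position back to a readable percentile label.
                        Text(String(format: "%g", 100.0 * (1.0 - pow(10.0, -n))))
                    }
                }
            }
        }
    }
}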
Any suggestions on how to tackle this scenario with customizing a chart axis?