We have some PDFs that were created with JPEG2000 images in them (*.jp2 files). Using PDFKit, and also WebKit, the images in these PDFs in some cases don't display at all, and in other cases appear badly warped. You can view the PDF using either Adobe Reader or the Chrome app and the images appear fine, so it's clearly not an issue of a corrupt PDF. Also, third-party commercial iOS controls like PSPDFKit and Foxit display the images fine, so it's clearly an issue with PDFKit and WebKit. Any known workarounds?
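For reference, the display code is nothing unusual; here's a minimal sketch of it (the view controller name and pdfURL are placeholders):
import UIKit
import PDFKit

// Minimal sketch of how the PDF is displayed (PDFViewerController and pdfURL are placeholders)
class PDFViewerController: UIViewController {
    let pdfURL: URL

    init(pdfURL: URL) {
        self.pdfURL = pdfURL
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) { fatalError() }

    override func viewDidLoad() {
        super.viewDidLoad()
        let pdfView = PDFView(frame: view.bounds)
        pdfView.autoScales = true
        view.addSubview(pdfView)
        // Pages containing JPEG2000 (*.jp2) images come up blank or badly warped here
        pdfView.document = PDFDocument(url: pdfURL)
    }
}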
We are connecting to a web service that requires a certificate from a *.pfx. It works fine when the *.pfx is included in the app bundle and extracted from there, as mentioned in this discussion in thread #77694.
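For reference, the bundle-based approach that works today looks roughly like this (the file name "client.pfx" and the password are placeholders):
import Foundation
import Security

// Sketch of the working bundle-based flow (file name and password are placeholders)
func loadIdentityFromBundle() -> SecIdentity? {
    guard let pfxURL = Bundle.main.url(forResource: "client", withExtension: "pfx"),
          let pfxData = try? Data(contentsOf: pfxURL) else { return nil }

    let importOptions = [kSecImportExportPassphrase as String: "password"]
    var items: CFArray?
    let status = SecPKCS12Import(pfxData as CFData, importOptions as CFDictionary, &items)

    guard status == errSecSuccess,
          let first = (items as? [[String: Any]])?.first else { return nil }

    // The identity is then used to build the URLCredential for the web service
    return (first[kSecImportItemIdentity as String] as! SecIdentity)
}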
The problem is, each device will have a unique certificate that will be pushed to it from an MDM; we don't have a single generic certificate that we can include in the bundle for all devices to use.
For testing, we dragged the *.pfx certificate onto Settings, and it appears under "Configuration Profile", as shown in the attached picture.
Questions:
Is "Configuration Profile" the iOS equivalent of the Mac Keychain?
When an MDM pushes a *.pfx certificate onto an iOS device, will it appear under "Configuration Profile"? Or somewhere else? The MDM isn't functional yet so we haven't seen how it works.
If the answer to #2 is yes, is it possible to access the "Configuration Profile" certificates from within the app? Some articles I've read say this isn't possible for security reasons: you can only access your own app's certificates. If that's true, how will the MDM make the certificates available to our app specifically, and not just to the device?
Thanks so much for any help,
James T
I'm using Xcode 13.4.1 and targeting iOS 15.0. I have a SwiftUI app that crashes as soon as you tap a button that changes a variable from true to false:
Button("Close Side by Side Mode") {
mainScreenRecs.isViewingSideBySide = false
}
This is the only place where the variable is changed; everything else is just SwiftUI reading the variable to determine whether to show views or not, like this:
var body: some View {
VStack(spacing: 0) {
HStack {
if mainScreenRecs.isViewingSideBySide {
Text(catalogModel.title)
}
When I look at the debug stack, I can see that the previous modification says LayoutComputer.EngineDelegate.childGeometries(at:origin:), which makes me wonder if it's related to SwiftUI:
The debug output also includes the note "AttributeGraph: cycle detected through attribute", another hint that this may be a SwiftUI problem:
I tried wrapping this code in DispatchQueue.main.async, like this:
Button("Close Side by Side Mode") {
DispatchQueue.main.async {
mainScreenRecs.isViewingSideBySide = false
}
}
but it didn't help. Is it possible this is a SwiftUI bug? I hate to think that because it leaves me stuck without a solution, but I can't figure out what else I could check or try.
I'm doing UI testing of a SwiftUI view, and I have the following code to fetch a List element to check the count of staticTexts in it:
let pred = NSPredicate(format: "identifier == 'detailViewTable'")
let detailViewTable = app.descendants(matching: .any).matching(pred).firstMatch
let arrOfTexts = detailViewTable.staticTexts
let ct = arrOfTexts.count
print("Count is: \(ct)") // This prints 0
print("Count is now: \(detailViewTable.staticTexts.count)") // Also prints 0
The count always prints as 0.
The SwiftUI view basically looks like this:
List {
Section(header: Text("Item list")) {
HStack {
Text("Number")
Spacer()
Text(rec.recNum)
}
}
// Lots more Sections/HStacks/Texts/etc here
// ...
// ...
}
.accessibility(identifier: "detailViewTable")
.listStyle(.grouped)
When I put a breakpoint as shown, and use po in the Debug Console, I get 0 for ct, but 21 if calling count directly:
(lldb) po ct
0
(lldb) po detailViewTable.staticTexts.count
21
Why is the var ct set to 0, but calling the count directly gives me the correct number 21?
It makes me wonder if the XCUIElementQuery takes a moment to run and returns the answer in some kind of implicit callback, and the data isn't ready at the moment the variable ct is set, but will return the correct answer in the debug console because it waits for the response. However, I don't see any discussion of a callback in the Apple documentation:
https://developer.apple.com/documentation/xctest/xcuielementquery
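If the query really is re-resolved lazily each time it's evaluated, then waiting for the element before reading the count might be the difference; here's a sketch of what I'd try next (the timeout value is arbitrary):
let pred = NSPredicate(format: "identifier == 'detailViewTable'")
let detailViewTable = app.descendants(matching: .any).matching(pred).firstMatch

// Wait for the element to resolve before evaluating the staticTexts query
XCTAssertTrue(detailViewTable.waitForExistence(timeout: 5))
let ct = detailViewTable.staticTexts.count
print("Count is: \(ct)")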
Our app runs a background task while the device is locked; the task calls a web service and presents a URLCredential built from a SecIdentity. It works fine if you simulate the background task using
e -l objc -- (void)[[BGTaskScheduler sharedScheduler] _simulateLaunchForTaskWithIdentifier:@"com.ourapp.extendedtask"]
but when it actually runs in the background with the device locked, and I try to fetch the SecIdentity from the Keychain using SecItemCopyMatching, it fails with
-25308 (errSecInteractionNotAllowed)
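The fetch itself is nothing unusual; roughly this (the "certKey" label matches the add query shown below):
// Rough sketch of the fetch that fails while the device is locked
let fetchQuery: [String: Any] = [
    kSecClass as String: kSecClassIdentity,
    kSecAttrLabel as String: "certKey",
    kSecReturnRef as String: true
]
var result: CFTypeRef?
let status = SecItemCopyMatching(fetchQuery as CFDictionary, &result)
// status comes back as -25308 (errSecInteractionNotAllowed)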
I tried adding kSecAttrAccessibleAfterFirstUnlock when writing to the Keychain like this,
// Add the identity to the Keychain, marked accessible after first unlock
let keychainAddQuery: [String: Any] = [
    kSecValueRef as String: identity,
    kSecAttrLabel as String: "certKey",
    kSecAttrAccessible as String: kSecAttrAccessibleAfterFirstUnlock
]
let addResult = SecItemAdd(keychainAddQuery as CFDictionary, nil)
But it still fails when the background task tries to fetch it from the Keychain using SecItemCopyMatching. I could try using kSecAttrAccessibleAlways when I write to the Keychain, but the documentation says that is deprecated.
Is there a way to write the SecIdentity to a file and store it in the Application Support folder rather than in the Keychain, so it's accessible when the background task runs while the device is locked?
Our original approach was to put the PFX file in the Application Support folder, store the password in the Keychain, and then use SecPKCS12Import to generate the SecIdentity. However, it never got that far, because fetching the password with SecItemCopyMatching failed with the same errSecInteractionNotAllowed issue. I then tried writing the SecIdentity itself to the Keychain using SecItemAdd as shown above, but hit the same problem when the background task tried to fetch it using SecItemCopyMatching. I realize now it's not an issue with the specifics of the data being fetched from the Keychain, but rather a security restriction on reading from the Keychain while in the background.
To me this is a clear bug (and I submitted it as one: FB11813464), unless there's something I'm missing?
When the confirmationDialog is showing on an iPad and you tap outside it, the confirmationDialog disappears, as it should. This is the same behavior as the popover in UIKit; the outside tap acts in lieu of a Cancel button when you're on an iPad. HOWEVER, the outside tap also triggers behaviors elsewhere on the screen, for example if the place you tap has a Button or an onTapGesture listener. As demonstrated in this picture, when the confirmationDialog is showing and you tap on this button, the confirmationDialog disappears, which is correct, but the print action also runs, which it shouldn't.
I’m running Xcode 14.1, targeting iOS 16.1, and running on an iPad 10th gen simulator.
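Here's a stripped-down sketch of the kind of layout in the picture (the view name and labels are made up, but the behavior is the same):
import SwiftUI

struct DialogDemoView: View {
    @State private var showDialog = false

    var body: some View {
        VStack(spacing: 40) {
            Button("Show dialog") { showDialog = true }
                .confirmationDialog("Choose an option", isPresented: $showDialog) {
                    Button("Option A") { }
                    Button("Option B") { }
                }

            Button("Other button") {
                // On iPad, this also runs when the outside tap dismisses the dialog
                print("Other button tapped")
            }
        }
    }
}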
This example is pretty contrived, but it illustrates the behavior. I know you can use .accessibilityIdentifier to uniquely identify a control, but I'm just trying to better understand the interplay between XCUIElement and XCUIElementQuery.
Let's say you have an app like this:
import SwiftUI
struct ContentView: View {
@State var showRedButton = true
var body: some View {
VStack {
if showRedButton {
Button("Click me") {
showRedButton = false
}
.background(.red)
}
else {
HStack {
Button("Click me") {
showRedButton = true
}
.background(.blue)
Spacer()
}
}
}
}
}
And you are UI testing like this:
import XCTest
final class MyAppUITests: XCTestCase {
func testExample() throws {
let app = XCUIApplication()
app.launch()
print(app.debugDescription)
// At this point, the Element subtree shows
// a single Button:
// Button, 0x14e40d290,
// {{162.3, 418.3}, {65.3, 20.3}}, label: 'Click me'
let btn = app.buttons["Click me"]
// This tap makes the red button disappear
// and shows the blue button
btn.tap()
print(app.debugDescription)
// Now, the Element subtree shows a single Button
// that has a different ID
// and different x-y coordinates:
// Button, 0x15dc12e50,
// {{0.0, 418.3}, {65.3, 20.3}}, label: 'Click me'
// This tap now works on the blue button??
// Without requerying?
btn.tap()
print(app.debugDescription)
// The red button reappears,
// but with a different ID (which makes sense).
}
}
Why does the second tap work, even though it's a different control? This must mean that the test framework is automatically re-running the XCUIElementQuery to find the button that matches "Click me". Apparently the variable btn isn't linked to the control with the ID 0x14e40d290. Does this mean an XCUIElement actually represents an XCUIElementQuery?
I expected it to require me to explicitly re-run the query like this,
btn = app.buttons["Click me"]
prior to running the 2nd tap, or the tap would've said that btn was no longer available.
The final print of the Element subtree shows that the red button has a different ID now. This makes sense, because when SwiftUI redraws the red button, it's not the same instance as the last time. This is explained well in the WWDC videos. Nevertheless, at the moment I connected the variable "btn" to the control, I thought there was a tighter affiliation. Maybe UI testing has to behave this way because SwiftUI redraws controls so frequently?
We have a NavigationView embedded within a TabView, like this:
TabView(selection: $tabSelection) {
NavigationView {
When a view gets pushed onto the stack in the NavigationView, the NavigationBar appears too high, almost under the StatusBar, as shown in the attached picture. If you touch the StatusBar, somehow it alerts the NavigationBar to scoot downward into its correct position.
I discovered a hack where I quickly toggle the StatusBar off, then back on, which accomplishes the same thing. My question, though, is why is this necessary? Why isn't the NavigationBar in the correct place to begin with?
Here's the hack that fixes it:
.onAppear {
withAnimation(.linear(duration: 0.3)) {
appViewModel.hideStatusBar.toggle()
}
DispatchQueue.main.asyncAfter(deadline: .now() + 0.3) {
withAnimation(.easeInOut(duration: 0.3)) {
appViewModel.hideStatusBar.toggle()
}
}
}
As Natascha notes in her helpful article
https://tanaschita.com/20230807-migrating-to-observation/
pre-iOS 17 was like this:

             State view       Subview
-----------------------------------------
Value Type   @State           @Binding
Ref Type     @StateObject     @ObservedObject

With iOS 17, it's like this:

             State view       Subview
-----------------------------------------
Value Type   @State           @Binding
Ref Type     @State           @Bindable
I like how they simplified @State and @StateObject into just @State for both cases. I'm curious, though, why didn't they simplify @Binding and @ObservedObject into just @Binding? Why did they need to maintain the separate property wrapper @Bindable? I'm sure there's a good reason, just wondering if anybody knew why.
Interestingly, you can use both @Binding and @Bindable, and they both seem to work. I know that you're supposed to use @Bindable here, but curious why @Binding works also.
import SwiftUI
@Observable
class TestClass {
var myNum: Int = 0
}
struct ContentView: View {
@State var testClass1 = TestClass()
@State var testClass2 = TestClass()
var body: some View {
VStack {
Text("testClass1: \(testClass1.myNum)")
Text("testClass2: \(testClass2.myNum)")
// Note the passing of testClass2 without $. Xcode complains otherwise.
ButtonView(testClass1: $testClass1, testClass2: testClass2)
}
.padding()
}
}
struct ButtonView: View {
@Binding var testClass1:TestClass
@Bindable var testClass2:TestClass
var body: some View {
Button(action: {
testClass1.myNum += 1
testClass2.myNum += 2
} , label: {
Text("Increment me")
})
}
}
How can I use the Network framework to establish a "client-server" type relationship between a server iPad and, say, 3 client iPads?
I've downloaded the TicTacToe sample app,
https://developer.apple.com/documentation/network/building_a_custom_peer-to-peer_protocol
which demonstrates nicely a connection between a PeerListener and a PeerBrowser.
I then tried to make an array of PeerConnection objects rather than a single one, and send data to each one separately from the PeerListener. What appears to happen, though, is that the first PeerBrowser connects successfully, but when the second PeerBrowser connects, it replaces the first; both PeerConnection objects in the array end up pointing to the second PeerBrowser, so data sent via either PeerConnection arrives at the second PeerBrowser.
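Here's roughly what I'm attempting on the "server" side, stripped of the sample's PeerConnection wrapper (the class and property names here are mine):
import Foundation
import Network

// Sketch of the intent: keep every accepted connection instead of only the latest one
final class Server {
    private var connections: [NWConnection] = []
    private var listener: NWListener?

    func start() throws {
        let listener = try NWListener(using: .tcp)
        listener.newConnectionHandler = { [weak self] connection in
            // Each client iPad should get its own entry here
            self?.connections.append(connection)
            connection.start(queue: .main)
        }
        listener.start(queue: .main)
        self.listener = listener
    }

    func send(_ data: Data, toClientAt index: Int) {
        guard connections.indices.contains(index) else { return }
        connections[index].send(content: data, completion: .contentProcessed { error in
            if let error = error { print("send failed: \(error)") }
        })
    }
}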
Is it possible to do this? If so, how can I establish multiple PeerConnections between 1 "server" iPad and multiple "client" iPads?
As noted here,
https://developer.apple.com/forums/thread/116799
the Network framework probably won't have a connection available when running in the background.
We've been using the BGTask for a couple years now to start a URLSession and pull data from a web server. It works very nicely and reliably.
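The pattern we use is roughly this (the URL is a placeholder):
import Foundation
import BackgroundTasks

// Sketch of the BGProcessingTask handler that starts a URLSession pull
func handleProcessingTask(_ task: BGProcessingTask) {
    let url = URL(string: "https://example.com/data")! // placeholder
    let dataTask = URLSession.shared.dataTask(with: url) { data, response, error in
        // ... persist the pulled data ...
        task.setTaskCompleted(success: error == nil)
    }
    task.expirationHandler = {
        dataTask.cancel()
    }
    dataTask.resume()
}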
Do we have any options if we want to connect to another iPad, though? I ran a test and even if I have a "server" iPad running a Network framework listener (NWListener), and the app is in the foreground and the screen on, a "client" iPad (NWBrowser) cannot connect to the NWListener when trying to connect from the BGTask; it gives a DefunctConnection error.
Why does the Network framework not have the network available to it, but a URLSession does?
Is this a limitation of the iPad, or the Network framework? If I had an iPad running as a web server like this project,
https://github.com/swisspol/GCDWebServer
and an iPad client tries to connect a URLSession to it, would that work?
If this is an iPad limitation, could I use a MacBook on the network as a web server and connect to that instead?
We inherited some code that has a variable that begins in the "SwiftUI world", so to speak, and is copied over to a global variable in the "Swift world" for use in non-SwiftUI classes (POSOs? Plain Ol' Swift Objects?).
Here's a contrived example showing the basic gist of it. Note how there's an AppViewModel that maintains the state, and an .onChange that copies the value to a global var, which is used in the plain class DoNetworkStuff. I would like to weed out the redundant global var, but I kind of see why it was done this way: how DO you bridge between the two worlds? I don't think you can add a ref to AppViewModel inside DoNetworkStuff. I was thinking you could add a function to the AppViewModel that returns devid, and stash a ref to that function in a var for use whenever devid is needed (sketched after the code below), so at least the value isn't being stored in two places, but that might be confusing a year from now. I'm trying to think of a way to rewrite this without ripping out too much code (it may be that it's better to leave it alone).
var gblDevID = "" //global var
class AppViewModel: ObservableObject {
@Published var devid = ""
...
}
struct ContentView: View {
@StateObject var appViewModel = AppViewModel()
var body: some View {
TextField("Enter device id", text: $appViewModel.devid)
.onChange(of: appViewModel.devid) { newVal in
gblDevID = newVal
}
...
}
}
class DoNetworkStuff {
func networkingTask() {
doSomeTask(gblDevID)
}
}
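Here's a sketch of the "stash a reference to a function" idea mentioned above (the names are hypothetical), so the value lives in only one place:
// Sketch: DoNetworkStuff asks for the current devid through a closure,
// so the value is stored only in AppViewModel
class DoNetworkStuff {
    private let currentDevID: () -> String

    init(currentDevID: @escaping () -> String) {
        self.currentDevID = currentDevID
    }

    func networkingTask() {
        doSomeTask(currentDevID())
    }
}

// Wherever the AppViewModel is owned:
// let networker = DoNetworkStuff(currentDevID: { appViewModel.devid })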
We have been using the BGTask (specifically a BGProcessingTask) reliably for the last couple of years in our app. Up until now, they have woken up automatically only while the screen is off, the iPad is plugged in, and the app is in the background, and never while the screen is on (that is, never when scenePhase == .active).
For the last month or so, I've noticed that they now trigger while the screen is on. How is this possible? Did something change with a recent version of iOS?
It's violating Apple's own documentation, which describes the BGProcessingTask as:
"A time-consuming processing task that runs while the app is in the background."
When I register & schedule a Background Task on an iPad, it runs properly. Running the exact same code on an M1 MacBook Pro, though, never schedules the task. There's no error, just a failure to schedule. After scheduling and calling getPendingTaskRequests, on the iPad you can see that it has a pending task, but not on the Mac. Why would this be?
BGTaskScheduler.shared.register(forTaskWithIdentifier: taskIdentifier, using: nil) { [self] task in
    print("task to run")
}

// request is a BGProcessingTaskRequest built from the same identifier
let request = BGProcessingTaskRequest(identifier: taskIdentifier)

do {
    try BGTaskScheduler.shared.submit(request)
    BGTaskScheduler.shared.getPendingTaskRequests { [self] tasks in
        print(tasks.count) // Prints 1 on iPad, prints 0 on Mac
    }
} catch {
    // Code never comes here.
    print(error)
}