I'm trying to make a magnifying glass that shows up when the user presses a button and follows the user's finger as it's dragged across the screen.
I came across a UIKit-based solution (https://github.com/niczyja/MagnifyingGlass-Swift), but when implemented in my SKScene, only the crosshairs are shown. Through experimentation I've found that magnifiedView?.layer.render(in: context) in:
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview()
    magnifiedView?.layer.render(in: context)
    magnifiedView?.addSubview(self)
}
can be removed without altering the situation, suggesting that line is not working as it should. But this is where I hit a brick wall. The view below is shown but not offset or magnified, and any attempt to add something to context results in a black magnifying glass.
Does anyone know why this is? I don't think it's an issue with the code, so I'm suspecting it's something specific to SpriteKit or SKScene, likely related to how CALayers work.
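For context, here's roughly how I'm driving the magnifier from my scene (a simplified sketch: the button that toggles it is omitted, and the names are placeholders rather than my exact code):

import SpriteKit
import UIKit

class GameScene: SKScene {
    private let magnifier = MagnifyingGlassView(radius: 60, scale: 2)

    override func didMove(to view: SKView) {
        // The SKView hosting this scene is what the glass should magnify.
        magnifier.magnifiedView = view
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, let view = self.view else { return }
        magnifier.magnify(at: touch.location(in: view))
    }
}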
Any pointers would be greatly appreciated.
Full code below:
import UIKit

public class MagnifyingGlassView: UIView {
    public weak var magnifiedView: UIView? = nil {
        didSet {
            removeFromSuperview()
            magnifiedView?.addSubview(self)
        }
    }

    public var magnifiedPoint: CGPoint = .zero {
        didSet {
            center = .init(x: magnifiedPoint.x + offset.x, y: magnifiedPoint.y + offset.y)
        }
    }

    public var offset: CGPoint = .zero

    public var radius: CGFloat = 50 {
        didSet {
            frame = .init(origin: frame.origin, size: .init(width: radius * 2, height: radius * 2))
            layer.cornerRadius = radius
            crosshair.path = crosshairPath(for: radius)
        }
    }

    public var scale: CGFloat = 2

    public var borderColor: UIColor = .lightGray {
        didSet {
            layer.borderColor = borderColor.cgColor
        }
    }

    public var borderWidth: CGFloat = 3 {
        didSet {
            layer.borderWidth = borderWidth
        }
    }

    public var showsCrosshair = true {
        didSet {
            crosshair.isHidden = !showsCrosshair
        }
    }

    public var crosshairColor: UIColor = .lightGray {
        didSet {
            crosshair.strokeColor = crosshairColor.cgColor
        }
    }

    public var crosshairWidth: CGFloat = 5 {
        didSet {
            crosshair.lineWidth = crosshairWidth
        }
    }

    private let crosshair: CAShapeLayer = CAShapeLayer()

    public convenience init(offset: CGPoint = .zero, radius: CGFloat = 50, scale: CGFloat = 2, borderColor: UIColor = .lightGray, borderWidth: CGFloat = 3, showsCrosshair: Bool = true, crosshairColor: UIColor = .lightGray, crosshairWidth: CGFloat = 0.5) {
        self.init(frame: .zero)
        layer.masksToBounds = true
        layer.addSublayer(crosshair)
        defer {
            self.offset = offset
            self.radius = radius
            self.scale = scale
            self.borderColor = borderColor
            self.borderWidth = borderWidth
            self.showsCrosshair = showsCrosshair
            self.crosshairColor = crosshairColor
            self.crosshairWidth = crosshairWidth
        }
    }

    public func magnify(at point: CGPoint) {
        guard magnifiedView != nil else { return }
        magnifiedPoint = point
        layer.setNeedsDisplay()
    }

    private func crosshairPath(for radius: CGFloat) -> CGPath {
        let path = CGMutablePath()
        path.move(to: .init(x: radius, y: 0))
        path.addLine(to: .init(x: radius, y: bounds.height))
        path.move(to: .init(x: 0, y: radius))
        path.addLine(to: .init(x: bounds.width, y: radius))
        return path
    }

    public override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.translateBy(x: radius, y: radius)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
        removeFromSuperview()
        magnifiedView?.layer.render(in: context)
        // If above disabled, no change
        // Possible that nothing's being rendered into context
        // Could it be that SKScene view has no layer?
        magnifiedView?.addSubview(self)
    }
}
I need a magnifying glass function for one of my SwiftUI Views, but can't find a way to implement it as needed.
I found a YouTube video where the author renders the view twice, overlays the second copy over the first, then scales and masks it to create the illusion of magnification, but this is expensive and doesn't work in many cases where more complex views are presented (e.g. a LazyVGrid).
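For reference, here's my stripped-down reconstruction of that double-render idea (not the video's exact code; the sample content and the numbers are placeholders): the content is laid out once, then overlaid with a second copy that's scaled up and clipped to a circle around the drag location.

import SwiftUI

struct MagnifierDemo: View {
    @State private var point = CGPoint(x: 150, y: 150)
    private let radius: CGFloat = 60
    private let scale: CGFloat = 2

    var body: some View {
        content
            .overlay(
                content
                    // Scale about the origin, then shift so the touched point stays put.
                    .scaleEffect(scale, anchor: .topLeading)
                    .offset(x: point.x * (1 - scale), y: point.y * (1 - scale))
                    // Clip the magnified copy to a lens-shaped circle at the touch point.
                    .clipShape(Circle().path(in: CGRect(x: point.x - radius,
                                                        y: point.y - radius,
                                                        width: radius * 2,
                                                        height: radius * 2)))
            )
            .gesture(DragGesture().onChanged { point = $0.location })
    }

    // Placeholder content; in my app this is the real view hierarchy, which is
    // exactly where the approach breaks down (e.g. with a LazyVGrid).
    private var content: some View {
        LinearGradient(colors: [.red, .blue], startPoint: .top, endPoint: .bottom)
            .overlay(Text("Sample").font(.largeTitle))
    }
}

Even in this toy form the content is built twice, which is the cost I'd like to avoid.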
I've also explored continually capturing partial screenshots and scaling them up to create the illusion of magnification, but there's no straightforward way to achieve this with SwiftUI without getting into the messiness of UIViewRepresentables.
Any help would be greatly appreciated
This morning I bought my first-ever Apple Watch for the sole purpose of development and proceeded to spend six hours failing at the first step of development: getting the device to enter developer mode and connect to Xcode.
Since I'm not seeing any watchOS 11 posts on this issue, it might just be me. This is why I'm making a new thread that's specific to watchOS 11, Xcode 16, and maybe Series 10.
Some particulars for my case:
Overall
__Followed Xcode 16.0 documentation
On a watchOS device that you use for development, go to Settings > Privacy > Developer Mode. To toggle Developer mode, use the Developer Mode switch.
To pair an Apple Watch to a Mac, connect its companion iPhone to the Mac with a cable, and ensure that the iPhone is paired for development. After this step, follow any instructions on the Apple Watch to trust the Mac. When paired through an iPhone running iOS 17 or later, Xcode connects to the Apple Watch over Wi-Fi
__Tried all the folk remedies listed in the (many) previous posts on enabling development mode and connecting to Xcode
iOS 18.0
__In developer mode
__Connected to macOS via USB, trusts computer
watchOS 11.0
__Prompt to trust computer appears and trust is established
__‘Developer Mode’ list item never appears at end of the ‘Privacy’ menu under ‘Settings’
__‘Developer’ item sometimes appears at the end of ‘Settings’
Despite never having seen or toggled ‘Developer Mode’ under ‘Privacy’
Persists across reboots
Possible that watchOS 11 eliminated the item under Settings > Privacy? If so, the documentation is not up to date
Xcode 16.0
__Watch never appears under ‘Manage Run Destinations’
After installing sample app to phone, then attempting to install WatchOS app via iOS Watch app, “Cannot install at this time” alert appears
App icon appears on watch, and tapping on it leads to an alert with, "This app cannot be installed because its integrity could not be verified", despite Wi-Fi working
Watch apps for other apps (e.g. Apple Store) can be successfully installed via iOS Watch app
Above suggests the watch isn't truly in developer mode despite Settings > Developer appearing and persisting across reboots
__The network path from Xcode to watchOS should be clear
Reconfigured router such that devices on the same network can talk to each other
iPad and iPhone appear with network icon when not connected via cable and Xcode can run code on them
Watch on same network as iPad and iPhone
macOS 15.0
__Due to security policy, cannot use Wi-Fi (disabled both physically and via sudo /usr/sbin/networksetup -setnetworkserviceenabled 'Wi-Fi' off)
Possible that Xcode can only establish a connection to watchOS via Wi-Fi and not via Ethernet bridged to Wi-Fi. If so, a confirmation would be hugely helpful.
This is currently my prime suspect. Wi-Fi cannot be re-enabled, so I'm trying workarounds like connecting the watch to the phone's hotspot (doesn't work) and somehow using the phone to provide network access to the Mac.
__Due to security policy, firewall configured to block all incoming connections
Shouldn't be an issue since Xcode doesn't need incoming connections to see non-watch devices
__Due to security policy, mDNSResponder and mDNSResponderHelper disabled
Also shouldn't be an issue, but including just in case
I'm making a loading screen, but I can't figure out how to make the loading indicator animate smoothly while work is being performed. I've tried a variety of tactics, including confining the animation to a .userInitiated Task, downgrading the loading Task to .background, and using TaskGroups. All of these resulted in hangs, freezes, or incredibly long load times (one such attempt is sketched below).
I've noticed that standard ProgressViews work fine when under load, but the documentation doesn't indicate why this is the case. Customized ProgressViews (via .progressViewStyle()) don't share this trait and also choke up. Finding out why might solve half the problem.
Note: I want to avoid async complications that come with using nonisolated functions. I've used them elsewhere, but this isn't the place for them.
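For concreteness, this is roughly what one of those attempts looks like (heavily simplified; the loop is a stand-in for my real loading work, and MyCustomStyle is a placeholder for my custom style):

import SwiftUI

struct LoadingScreen: View {
    @State private var progress = 0.0

    var body: some View {
        ProgressView(value: progress)
            // .progressViewStyle(MyCustomStyle())  // with a custom style the animation chokes
            .task(priority: .background) {
                for step in 1...100 {
                    // Stand-in for one slice of the real loading work.
                    try? await Task.sleep(nanoseconds: 20_000_000)
                    progress = Double(step) / 100
                }
            }
    }
}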
I need to find a way to allow recording from the mic while outputting two different sound streams to two different devices (speaker and headphones).
I've done a fair bit of reading around using AVAudioSession.Category.multiRoute but haven't found any modern examples. @theanalogkid posted a nice example using Objective-C nine years ago, but others have noted that the code isn't readily translatable to Swift.
To make matters worse, this is one of the very few examples on how to properly use multirouting. The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift. The few relevant forum posts here are spread over this middle schooler's life span and likely outdated, with most having no responses other than the poster's own plightful echo. They don't paint a pretty picture of .multiroute's health, with a recent poster noting that volume buttons don't work in this mode, contacting DTS and finding that there's no fix; another finding that it just doesn't work for certain devices, etc.
Audio is giving me enough of a headache as it is, so I'd like to avoid slogging through this if possible. .multiRoute feels like the developer mode of AVAudioSession, but without documentation.
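For reference, the extent of what I've pieced together is the bare session setup below (a sketch, not tested code); everything past that, like steering two separate streams to specific output channels while the mic records, is what I can't find modern guidance on.

import AVFoundation

func configureMultiRouteSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        // .multiRoute exposes each attached route (e.g. headphones and the
        // built-in speaker) as separate channels, and still allows recording.
        try session.setCategory(.multiRoute, options: [.mixWithOthers])
        try session.setActive(true)

        // Inspect the current route to see which channels belong to which output.
        for output in session.currentRoute.outputs {
            print(output.portType.rawValue, output.channels?.map(\.channelNumber) ?? [])
        }
        for input in session.currentRoute.inputs {
            print("input:", input.portType.rawValue)
        }
    } catch {
        print("Audio session setup failed:", error)
    }
}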
tl;dr - Without using .multiRoute, is there a way for an app to output to two different devices while simultaneously recording audio? If .multiRoute is the only way to achieve this, can someone give me a quick rundown of how the category works?
If I encrypt user data with Apple's newly released homomorphic encryption package and send it to servers I control for analysis, how would that affect the privacy label for that app?
E.g. if my app collected usage data plus identifiers, then sent it to my own servers for analysis, would I be allowed to say that we don't collect information linked to the user? Does it also automatically exclude the relevant fields from the "Data used to track you" section?
Is it possible to make even things that were once considered inextricably tied to a user identity (e.g. purchases in an in-app marketplace) something not linked, according to Apple's rules?
How would I prove to Apple that the relevant information is indeed homomorphically encrypted?
This is a follow-up to my previous question: How to attribute/credit Apple Fonts added to app?
In that previous post, I misremembered what I did and said I found the fonts via macOS's Font Book, when in fact I came across UIFont.familyNames. Since these are included via UIKit, the legal implications should be different.
I looked at various license agreements that govern iOS app development but haven't found anything mentioning fonts. Since these are included as part of UIKit, it's reasonable to assume that developers are allowed to include these fonts--but in what ways?
Am I allowed to let users create, say, documents with these fonts?
Am I only allowed to display these fonts?
There are 84 fonts, and judging by their Font Book entries, there is a wide range of licenses and restrictions. It seems unnecessarily harsh to have every iOS developer verify each one and figure out which they can legally keep if they want to offer their users access to all of them (for, say, a text-editing app). There must be some overarching rule that supersedes/encapsulates them, but this rule isn't clear to me after hours of research. I'm not a lawyer, and I don't think Apple expects every app developer to consult their lawyers on whether they can use system fonts.
I'm about to send an email to Apple's legal team (I will post their response here if allowed), but in the meantime I want to hear what other devs think about this.
In Xcode, entering UIFont.familyNames returns the following:
["Academy Engraved LET", "Al Nile", "American Typewriter", "Apple Color Emoji", "Apple SD Gothic Neo", "Apple Symbols", "Arial", "Arial Hebrew", "Arial Rounded MT Bold", "Avenir", "Avenir Next", "Avenir Next Condensed", "Baskerville", "Bodoni 72", "Bodoni 72 Oldstyle", "Bodoni 72 Smallcaps", "Bodoni Ornaments", "Bradley Hand", "Chalkboard SE", "Chalkduster", "Charter", "Cochin", "Copperplate", "Courier New", "Damascus", "Devanagari Sangam MN", "Didot", "DIN Alternate", "DIN Condensed", "Euphemia UCAS", "Farah", "Futura", "Galvji", "Geeza Pro", "Georgia", "Gill Sans", "Grantha Sangam MN", "Helvetica", "Helvetica Neue", "Hiragino Maru Gothic ProN", "Hiragino Mincho ProN", "Hiragino Sans", "Hoefler Text", "Impact", "Kailasa", "Kefa", "Khmer Sangam MN", "Kohinoor Bangla", "Kohinoor Devanagari", "Kohinoor Gujarati", "Kohinoor Telugu", "Lao Sangam MN", "Malayalam Sangam MN", "Marker Felt", "Menlo", "Mishafi", "Mukta Mahee", "Myanmar Sangam MN", "Noteworthy", "Noto Nastaliq Urdu", "Noto Sans Kannada", "Noto Sans Myanmar", "Noto Sans Oriya", "Optima", "Palatino", "Papyrus", "Party LET", "PingFang HK", "PingFang SC", "PingFang TC", "Rockwell", "Savoye LET", "Sinhala Sangam MN", "Snell Roundhand", "STIX Two Math", "STIX Two Text", "Symbol", "Tamil Sangam MN", "Thonburi", "Times New Roman", "Trebuchet MS", "Verdana", "Zapf Dingbats", "Zapfino"]
My app lets users create things with text, and I've included Apple fonts so users can format their text with them. These were fonts I found in the Font Book app that comes with macOS. My assumption is that these, like the San Francisco font, can be distributed with apps.
Do I need to attribute these fonts to their creators and publish a license in my "About" page? If so, where do I find the license(s) and what is the proper way of publishing them? Is there anything else I should know?
Please let me know if this should've been posted under a different topic and tag
I'm a new app developer and I've read through most relevant posts on this topic here and elsewhere. Many of the forum posts here are specific to Objective-C, or old enough to be considered outdated in the fast-moving world of computing. Many of the posts elsewhere are about protecting authentication secrets, which doesn't apply in my case, and a lot are by someone with a product to sell, which I've ignored.
My app is 99.9% Swift and I'm not going to store any authentication secrets in the IPA. What I'd like to protect is the core mechanism of my product, which has to be included in the binary and is small (< 10k lines). I want to make it harder to steal the source code than it would be to recreate the functionality from scratch, which is difficult even for someone with the app in front of them.
From what I gathered, Swift code compiled by Xcode is protected from reverse engineering / decompilation by the following:
Stripping of symbols from the app
Native builds from Xcode destroy the names of variables, functions, etc.
Swift code is compiled in such a way that stealing it is harder than with Objective-C
This should make me feel better, but the threat level is increasing with the availability of free, commercial-grade decompilers (e.g. Ghidra) and machine learning. The fact that iOS 18 supports a checkm8 (i.e. jailbreakable) device means that decrypting the IPA from memory is still trivial.
Questions
People talk about stealing authentication secrets via reverse-engineering, but is the same true for mechanisms (i.e. code)?
How common is the issue of source-code stealing in iOS apps?
Can machine learning be leveraged to make decompilation/reverse engineering easier?
Will I get rejected by App Review for obfuscating a small portion of my code?
I'm trying to add JPEG-XL encoding/decoding capabilities to my app and haven't been able to find a trustworthy pre-compiled version. The only one I've found is in https://github.com/awxkee/jxl-coder-swift.
As a result I've been trying to compile my own iOS version from the reference implementation (https://github.com/libjxl/libjxl), having done virtually no compiling before. When I started out, my gut said, "Compiling for a different platform should be easy since it's not like I'm actually writing or modifying the implementation", but the more I research and try, the more doubtful I've become. So far I've figured out that it means compiling all the dependencies (brotli, highway, libpng, skcms, etc.) too, but I've gotten nowhere with them either, having tried my hand at modifying CMake toolchains and CMakeLists.txt files.
As a novice, am I biting off more than I can chew with this? Is the seemingly simple task of "compile this C++ library for iOS" actually something that freelancers charge huge amounts for? (If so, this makes the free compiled version mentioned above even more questionable.)
Any help or pointers would be greatly appreciated.
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well-suited for compression by video codecs since they're highly similar to one another.
I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts, which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist.
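For reference, the writing side of the experiment looked roughly like this (a simplified reconstruction from memory; the function name, the fixed 30 fps timescale, and the BGRA drawing path are placeholders rather than my exact code):

import AVFoundation
import UIKit

func archiveImages(_ images: [UIImage], to url: URL) throws {
    guard let first = images.first?.cgImage else { return }

    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.hevcWithAlpha,
        AVVideoWidthKey: first.width,
        AVVideoHeightKey: first.height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    writer.add(input)
    guard writer.startWriting() else { return }
    writer.startSession(atSourceTime: .zero)

    for (index, image) in images.enumerated() {
        guard let cgImage = image.cgImage else { continue }
        var buffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(nil, cgImage.width, cgImage.height, kCVPixelFormatType_32BGRA, nil, &buffer)
        guard status == kCVReturnSuccess, let pixelBuffer = buffer else { continue }

        // Draw the PNG into a BGRA pixel buffer, preserving alpha.
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        if let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                   width: cgImage.width, height: cgImage.height,
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                       | CGBitmapInfo.byteOrder32Little.rawValue) {
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        // One image per frame; the 30 fps timescale is arbitrary.
        while !input.isReadyForMoreMediaData { usleep(1_000) }
        adaptor.append(pixelBuffer, withPresentationTime: CMTime(value: CMTimeValue(index), timescale: 30))
    }

    input.markAsFinished()
    writer.finishWriting { }    // fire-and-forget here; the real code waits for completion
}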
Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage.
Any suggestions on how to reduce the storage size of many large PNGs are also welcome. I also tried using HEIC instead of PNG via the new UIImage.heicData(), but the decompression/processing times were just insane (5000%+ increase), on top of there being fatal errors when using async.
I'm a new developer who is looking to make my first app easier to manage on my end by staying in the Apple ecosystem. My ideal backend is just pure and simple CloudKit. This should help me cut down on costs and increase my security, or so I thought.
The more I looked into the issue of mobile app security --more specifically, preventing fraudulent access to backend APIs-- the more it seems like CloudKit is a disaster waiting to happen. While data in transit is encrypted and there's even end-to-end encryption for private DBs, securing an app's public DB in the presence of modified apps is a daunting, if not impossible, task. My assumption is that a modified app cannot be trusted to make honest assertions about itself, the device, or its iCloud account, and can potentially lie its way into restricted areas of the DB. If an app is compromised, its CloudKit requests can be used to make malicious queries or even changes to the databases.
I'm hoping App Attest, even with its potentially circular logic, can at least make life harder for fraudsters, competitors, and vandals (when combined with other security measures like jailbreak, debugging, hooking, and tampering detections), but I have not found a single mention on how App Attest might be used to protect CloudKit. There doesn't even seem to be a verified way for me to build a third party server that can handle App Attest and then tell CloudKit to allow a user through (with all the security hazards a new developer faces when configuring an authentication server). The message seems to be: App Attest is important, but you can't use it with CloudKit, so build your own server.
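For what it's worth, the client-side half of App Attest looks manageable (sketch below using the async variants of the DCAppAttestService calls; the challenge delivery and AttestationError are placeholders). It's the verification step, which normally lands on a server you run, that has no obvious CloudKit counterpart.

import DeviceCheck
import CryptoKit
import Foundation

enum AttestationError: Error { case unsupported }

// Client side only: generate a key and attest it against a server-provided challenge.
// The resulting attestation blob still has to be verified somewhere against Apple's
// App Attest root CA, and that "somewhere" is exactly what's missing with CloudKit.
func attest(challenge: Data) async throws -> (keyID: String, attestation: Data) {
    let service = DCAppAttestService.shared
    guard service.isSupported else { throw AttestationError.unsupported }

    let keyID = try await service.generateKey()
    let clientDataHash = Data(SHA256.hash(data: challenge))
    let attestation = try await service.attestKey(keyID, clientDataHash: clientDataHash)
    return (keyID, attestation)
}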
Questions
Is my assumption that a compromised app can make malicious queries or changes to an app's CloudKit DB correct?
Can App Attest be made to protect a CloudKit public DB, with or without the involvement of a third-party server to handle attestations?
Private Access Tokens (PATs) are headlined as something that can eliminate CAPTCHAs, but also includes app-to-server communications in its use cases. Because of this, they seem to perform a very similar function to DeviceCheck, since both aim to attest to the health of the device in question.
I don't really understand the difference between the two and find this confusing. Since PATs are newer and more general, I'm more inclined to adopt them, but where does this leave DeviceCheck? Is it redundant? How does App Attest fit into all of this?
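For comparison, the entire client side of DeviceCheck is essentially one call (sketch; the token is opaque and still has to be forwarded by a server I run to Apple's query/update endpoints):

import DeviceCheck

enum DeviceTokenError: Error { case unsupported }

func deviceCheckToken() async throws -> Data {
    guard DCDevice.current.isSupported else { throw DeviceTokenError.unsupported }
    // The token can't be interpreted on-device; only Apple can map it to the two per-device bits.
    return try await DCDevice.current.generateToken()
}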
If my goal is to minimize, if not eliminate, fraudulent/malicious use of my app's APIs, should I use Private Access Tokens, DeviceCheck, and App Attest simultaneously to maximize my protection? If not, what is accepted to be the best practice?
I admire Apple's dedication to privacy and security, but as a new developer I feel Apple could make it easier for their app developers to find out and implement the latest best practices.
Hi all,
My app uses SpriteKit views which are rendered into images for various uses. For some reason, the same code performs worse on a newer CPU than on an older one.
My A13 Bionic flies through the task at high resolution and 60 FPS with CPU usage below 60%, while the A15 Bionic chokes and sputters at a lower resolution and 30 FPS.
Because of how counterintuitive this is, it took me a while to isolate the call directly responsible--with UIView.drawHierarchy commented out, both devices returned to their baseline performances.
guard let sceneView = skScene.view else { return }
let size = CGSize(width: outputResolution, height: outputResolution)
return UIGraphicsImageRenderer(size: size).image { context in
    let rect = CGRect(origin: .zero, size: size)
    sceneView.drawHierarchy(in: rect, afterScreenUpdates: false)
}
Does anyone know why this is the case, and how to fix it?
I tried using UIView.snapshotView, which is supposedly much faster, but it only returns blank images. Am I using it wrong or does it simply not work in this context?
sceneView.snapshotView(afterScreenUpdates: false)?.draw(rect)
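One alternative I've come across but haven't properly profiled yet is to let SpriteKit do the rendering instead of UIKit, via SKView.texture(from:) (a sketch; I don't know whether it sidesteps the A15 slowdown):

import SpriteKit
import UIKit

func sceneImage(from skScene: SKScene) -> UIImage? {
    guard let sceneView = skScene.view,
          let texture = sceneView.texture(from: skScene) else { return nil }
    // SKTexture.cgImage() pulls the rendered texture back from the GPU as a CGImage.
    return UIImage(cgImage: texture.cgImage())
}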
Any hints or pointers would be greatly appreciated
Hello,
I had a WWDC Lab with two CloudKit engineers who asked me to file a "Feedback Request" for critical information regarding CloudKit. I've filed the FB and have also decided to post a forum post to increase my chances of having these critical questions answered. If allowed, I will also post responses to my FB here.
CKAssets
I would like to know how large the file attached to a CKAsset can get before being rejected by the system. If the figure differs for private and public databases, please also let me know.
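For context, the assets in question are attached in the plain, documented way (sketch; the record type and file URL are placeholders):

import CloudKit
import Foundation

let fileURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("bundle.bin")
let record = CKRecord(recordType: "UserContent")
record["payload"] = CKAsset(fileURL: fileURL)
// It's the maximum allowed size of the file behind this CKAsset that I'd like to know.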
CloudKit pricing information
There used to be pricing information available on the website, but there's basically no information now. This makes it hard to calibrate user upload limits for my app in order to avoid overage fees.
I'm not looking to game the system (something this strange opaqueness is likely meant to prevent); I'm just looking to avoid a situation where competitors and vandals abuse my app's content upload system and I get smacked with large bills out of nowhere. A rough figure of how many GB of data each active user adds to my app's CloudKit public database would suffice.
While we're at it, if I have two apps that share a public database (if that's possible), do the active user counts of both contribute to the public database's free threshold?