Performance


Improve your app's performance.

Posts under Performance tag

40 Posts
Post not yet marked as solved
36 Replies
20k Views
After creating a basic table, the application crashed while running a SELECT query.

Translated Report (Full Report Below)

Process:               MySQLWorkbench [1687]
Path:                  /Applications/MySQLWorkbench.app/Contents/MacOS/MySQLWorkbench
Identifier:            com.oracle.workbench.MySQLWorkbench
Version:               8.0.32.CE (1)
Code Type:             X86-64 (Translated)
Parent Process:        launchd [1]
User ID:               501
Date/Time:             2023-02-02 11:58:46.6293 +0530
OS Version:            macOS 13.2 (22D49)
Report Version:        12
Anonymous UUID:        7B20408B-A4BE-478D-8402-4595E3D3D9C4
Time Awake Since Boot: 880 seconds
System Integrity Protection: enabled

Crashed Thread:        0  Dispatch queue: com.apple.main-thread

Exception Type:        EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes:       0x0000000000000001, 0x0000000000000000
Termination Reason:    Namespace SIGNAL, Code 4 Illegal instruction: 4
Terminating Process:   exc handler [1687]

Application Specific Backtrace 0:
0   CoreFoundation      0x00007ff8141d3cb3 __exceptionPreprocess + 242
1   libobjc.A.dylib     0x00007ff813d2210a objc_exception_throw + 48
2   CoreFoundation      0x00007ff81426abbe -[NSObject(NSObject) __retain_OA] + 0
3   CoreFoundation      0x00007ff81413eab0 forwarding + 1324
4   CoreFoundation      0x00007ff81413e4f8 _CF_forwarding_prep_0 + 120
5   WBExtras            0x0000000109c371a1 -[MResultsetViewer tableView:willDisplayCell:forTableColumn:row:] + 393
6   AppKit              0x00007ff81753282e -[NSTableView _delegateWillDisplayCell:forColumn:row:] + 104
7   AppKit              0x00007ff817477676 -[NSTableView preparedCellAtColumn:row:] + 1835
8   MySQLWorkbench      0x0000000100bdfa98 -[MGridView preparedCellAtColumn:row:] + 54
9   AppKit              0x00007ff817476e41 -[NSTableView _drawContentsAtRow:column:withCellFrame:] + 42
10  AppKit              0x00007ff817476ac1 -[NSTableView drawRow:clipRect:] + 1638
11  AppKit              0x00007ff81747613c -[NSTableView drawRowIndexes:clipRect:] + 707
12  AppKit              0x00007ff817401cc4 -[NSTableView drawRect:] + 1670
13  AppKit              0x00007ff817334144 _NSViewDrawRect + 121
14  AppKit              0x00007ff817b193c3 -[NSView _recursive:displayRectIgnoringOpacity:inContext:stopAtLayerBackedViews:] + 1810
15  AppKit              0x00007ff817333870 -[NSView(NSLayerKitGlue) _drawViewBackingLayer:inContext:drawingHandler:] + 753
16  QuartzCore          0x00007ff81c16db59 CABackingStoreUpdate_ + 254
17  QuartzCore          0x00007ff81c1d13c1 ___ZN2CA5Layer8display_Ev_block_invoke + 53
18  QuartzCore          0x00007ff81c16cd66 -[CALayer _display] + 2275
19  AppKit              0x00007ff8173334c5 -[_NSBackingLayer display] + 462
20  AppKit              0x00007ff8172ab455 -[_NSViewBackingLayer display] + 554
21  QuartzCore          0x00007ff81c16bd08 _ZN2CA5Layer17display_if_neededEPNS_11TransactionE + 900
22  QuartzCore          0x00007ff81c2e56c6 _ZN2CA7Context18commit_transactionEPNS_11TransactionEdPd + 648
23  QuartzCore          0x00007ff81c14cb35 _ZN2CA11Transaction6commitEv + 725
24  AppKit              0x00007ff81734496f __62+[CATransaction(NSCATransaction) NS_setFlushesWithDisplayLink]_block_invoke + 285
25  AppKit              0x00007ff817b5c767 ___NSRunLoopObserverCreateWithHandler_block_invoke + 41
26  CoreFoundation      0x00007ff81415b3e1 CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 23
27  CoreFoundation      0x00007ff81415b309 __CFRunLoopDoObservers + 482
28  CoreFoundation      0x00007ff81415a866 __CFRunLoopRun + 877
29  CoreFoundation      0x00007ff814159e7f CFRunLoopRunSpecific + 560
30  HIToolbox           0x00007ff81dfec766 RunCurrentEventLoopInMode + 292
31  HIToolbox           0x00007ff81dfec576 ReceiveNextEventCommon + 679
32  HIToolbox           0x00007ff81dfec2b3 _BlockUntilNextEventMatchingListInModeWithFilter + 70
33  AppKit              0x00007ff8171e5293 _DPSNextEvent + 909
34  AppKit              0x00007ff8171e4114 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1219
35  AppKit              0x00007ff8171d6757 -[NSApplication run] + 586
36  AppKit              0x00007ff8171aa797 NSApplicationMain + 817
37  dyld                0x0000000200ff6310 start + 2432

Kernel Triage:
VM - pmap_enter retried due to resource shortage

Thread 0 Crashed::  Dispatch queue: com.apple.main-thread
0   AppKit              0x7ff81754dc26 -[NSApplication _crashOnException:] + 287
1   AppKit              0x7ff817344bab __62+[CATransaction(NSCATransaction) NS_setFlushesWithDisplayLink]_block_invoke + 857
2   AppKit              0x7ff817b5c767 ___NSRunLoopObserverCreateWithHandler_block_invoke + 41
3   CoreFoundation      0x7ff81415b3e1 CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 23
4   CoreFoundation      0x7ff81415b309 __CFRunLoopDoObservers + 482
5   CoreFoundation      0x7ff81415a866 __CFRunLoopRun + 877
6   CoreFoundation      0x7ff814159e7f CFRunLoopRunSpecific + 560
7   HIToolbox           0x7ff81dfec766 RunCurrentEventLoopInMode + 292
8   HIToolbox           0x7ff81dfec576 ReceiveNextEventCommon + 679
9   HIToolbox           0x7ff81dfec2b3 _BlockUntilNextEventMatchingListInModeWithFilter + 70
10  AppKit              0x7ff8171e5293 _DPSNextEvent + 909
11  AppKit              0x7ff8171e4114 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1219
12  AppKit              0x7ff8171d6757 -[NSApplication run] + 586
13  AppKit              0x7ff8171aa797 NSApplicationMain + 817
14  dyld                0x200ff6310 start + 2432

Thread 1:: com.apple.rosetta.exceptionserver
0   runtime             0x7ff7ffc73614 0x7ff7ffc6f000 + 17940
1   runtime             0x7ff7ffc7f530 0x7ff7ffc6f000 + 66864
2   runtime             0x7ff7ffc80f30 0x7ff7ffc6f000 + 73520

Thread 2:
0   runtime             0x7ff7ffc9187c 0x7ff7ffc6f000 + 141436
Posted by anshgupta. Last updated.
Post not yet marked as solved
0 Replies
523 Views
Hi everyone, I'm wondering if you know how the device decides which compute unit (GPU, CPU, or ANE) to use when the compute units are set to ALL. I'm working on optimizing a GPT2 model to run on the ANE. I ran the performance report for the existing model, and it showed operators not supported by the ANE. I then went on to remove these operators and converted the model to Core ML again. This time the performance report showed that every operator is supported by the ANE, but the device still prefers the GPU when the compute units are set to ALL, and prefers the CPU when the compute units are set to CPU and ANE. (Performance report screenshots attached: ALL; CPU and ANE.) Does anyone know why? Thank you in advance!
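For context, a minimal sketch (not from the post; the model URL and input provider are placeholders) of how the compute-unit preference is requested, and one rough way to compare prediction latency across settings to infer which backend Core ML actually chose:

import CoreML
import Foundation

// Hypothetical helper: load the same compiled model under different compute-unit
// preferences and time one prediction each, as a rough hint of the backend used.
func comparePredictionLatency(modelURL: URL, input: MLFeatureProvider) throws {
    let options: [(String, MLComputeUnits)] = [
        ("all", .all),
        ("cpuAndGPU", .cpuAndGPU),
        ("cpuAndNeuralEngine", .cpuAndNeuralEngine),   // iOS 16 / macOS 13 and later
        ("cpuOnly", .cpuOnly)
    ]
    for (label, units) in options {
        let config = MLModelConfiguration()
        config.computeUnits = units        // a preference only; Core ML makes the final placement
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        _ = try model.prediction(from: input)                 // warm-up run
        let start = CFAbsoluteTimeGetCurrent()
        _ = try model.prediction(from: input)
        print("\(label): \((CFAbsoluteTimeGetCurrent() - start) * 1000) ms")
    }
}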
Posted by dcdcdc123. Last updated.
Post not yet marked as solved
1 Reply
607 Views
Hi folks, I'm working on converting a GPT2 model to Core ML with KV caching enabled. I have the GPT2 model running on the GPU with a static input shape. It seems that once I enable flexible shapes (i.e. either a range shape or an enumerated shape), the model runs on the CPU according to the performance report. I can see new operators being added (get_shape and general_slice), and they are not supported by the GPU / ANE. Wondering if there's any way to get around this to get the model running on the GPU / ANE? How does the machine decide whether to run the model on the GPU / Neural Engine? Thanks!
Posted by dcdcdc123. Last updated.
Post marked as solved
3 Replies
626 Views
We are using the CoreBluetooth framework to communicate with a BLE device. We have a requirement to take many RSSI measurements over a short span of time. To obtain these measurements, we call readRSSI on the peripheral object. The behavior we observe is that Core Bluetooth invokes didReadRSSI only once every second. This behavior does not seem to be documented anywhere. We have found several reports of the same issue, which have not been answered. For example: https://developer.apple.com/forums/thread/698235 https://developer.apple.com/forums/thread/77277 https://stackoverflow.com/questions/61216589/fast-update-rssi-bluetooth-in-ios The first question would be: is this the intended behavior? Secondly, is it documented somewhere? And lastly, are there any workarounds that would allow for a higher rate of RSSI measurements than one per second?
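For reference, a minimal sketch (class name and polling interval are placeholders, not from the post) of the kind of sampling loop being described: request RSSI far more often than once per second and log when didReadRSSI actually arrives, which makes the effective delivery rate visible.

import CoreBluetooth

final class RSSISampler: NSObject, CBPeripheralDelegate {
    private var timer: Timer?

    // Ask for RSSI far more often than once per second (peripheral must be connected).
    func start(sampling peripheral: CBPeripheral) {
        peripheral.delegate = self
        timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
            peripheral.readRSSI()
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
    }

    // Per the reports above, this callback appears to fire at most about once per second.
    func peripheral(_ peripheral: CBPeripheral, didReadRSSI RSSI: NSNumber, error: Error?) {
        guard error == nil else { return }
        print("\(Date().timeIntervalSince1970): RSSI = \(RSSI) dBm")
    }
}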
Posted by aruhk. Last updated.
Post marked as solved
1 Reply
427 Views
CoreBluetooth will only update the RSSI value for a connected BLE device at least one second after the last call to readRSSI, effectively limiting the rate at which the RSSI value can be read to once every second. Is this a momentary measurement, or does it represent some kind of average over this one second? If this is not an average, what is the rationale for limiting the rate at which the RSSI value can be measured? Rate limitation verified in this thread: https://developer.apple.com/forums/thread/739727
Posted by aruhk. Last updated.
Post not yet marked as solved
0 Replies
593 Views
I'm trying to achieve a vertical split view in SwiftUI. There are two views; the top one is a map (an MKMapView, wrapped with UIViewRepresentable) and the bottom one, for this example, could be Color.red. Between them there's a handle to resize the proportions. The initial proportions are 0.3 of the height for the bottom view and 0.7 for the map. When resizing the map frame, it feels choppy and slow. Replacing the map view with any other view does not produce the same issue. The issue appears only on my real device (iPhone 11 Pro Max); the simulator works fine.
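For context, a rough sketch of the layout being described (assumed structure, not the poster's code; it uses SwiftUI's Map rather than a wrapped MKMapView, and the region is a placeholder): a drag handle adjusts the bottom pane's fraction of the available height.

import SwiftUI
import MapKit

struct VerticalSplitExample: View {
    @State private var bottomFraction: CGFloat = 0.3      // 0.7 map / 0.3 bottom initially
    @State private var fractionAtDragStart: CGFloat?

    var body: some View {
        GeometryReader { geo in
            VStack(spacing: 0) {
                Map(coordinateRegion: .constant(MKCoordinateRegion(
                    center: CLLocationCoordinate2D(latitude: 52.23, longitude: 21.01),
                    span: MKCoordinateSpan(latitudeDelta: 0.1, longitudeDelta: 0.1))))
                    .frame(height: geo.size.height * (1 - bottomFraction))

                // Drag handle between the two panes.
                Rectangle()
                    .fill(Color.secondary)
                    .frame(height: 8)
                    .gesture(
                        DragGesture()
                            .onChanged { value in
                                let start = fractionAtDragStart ?? bottomFraction
                                fractionAtDragStart = start
                                let delta = value.translation.height / geo.size.height
                                bottomFraction = min(0.9, max(0.1, start - delta))
                            }
                            .onEnded { _ in fractionAtDragStart = nil }
                    )

                Color.red
                    .frame(height: geo.size.height * bottomFraction)
            }
        }
    }
}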
Posted by dvxiel. Last updated.
Post not yet marked as solved
0 Replies
593 Views
Hi, I am running a post-2020 Intel i7 Mac with Sonoma and Clang 15.0. Clang 15.0 makes my C++ code run 5 times slower than it did before the upgrade from Ventura 13.6 and Clang 14.3.1. The other trouble is that Sonoma does not allow reverting to Clang 14.3.1. I do not use Xcode, only the Command Line Tools. Here are my options: g++ -std=c++17 -Ofast -march=native -funroll-loops -flto -DNDEBUG -o a prog.cpp So what happened to C++?
Posted by djm44. Last updated.
Post not yet marked as solved
1 Reply
805 Views
I noticed that UITextViews get stuck in memory after a preview has been shown (I mean the preview you get when you long-press a URL). For this to work, editable should be false and dataDetectorTypes should include .link. Include a URL in the text and long-press it; a preview should show. When you dismiss the preview and remove the text view with removeFromSuperview() (or just close the view controller containing the text view), it won't deinit anymore. You can check by overriding deinit or by checking whether a weak reference (like an IBOutlet) has become nil. I also noticed that two system view controllers stay in memory too, namely SFSafariViewController and SFBrowserRemoteViewController. I don't know if this is by design. Tried on iOS 16.2 and 16.3.1.
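A small sketch of the leak check the post describes (class and controller names are made up for illustration): override deinit on a UITextView subclass, or watch a weak reference after removing the view.

import UIKit

final class TrackedTextView: UITextView {
    deinit {
        // Per the report, this never prints once the link preview has been shown.
        print("TrackedTextView deinit")
    }
}

final class LeakCheckViewController: UIViewController {
    private weak var weakTextView: TrackedTextView?

    override func viewDidLoad() {
        super.viewDidLoad()
        let textView = TrackedTextView()
        textView.isEditable = false
        textView.dataDetectorTypes = [.link]
        textView.text = "https://www.example.com"
        textView.frame = view.bounds
        view.addSubview(textView)
        weakTextView = textView
    }

    func removeAndCheck() {
        weakTextView?.removeFromSuperview()
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) { [weak self] in
            // Expected nil once deallocated; stays non-nil when the leak occurs.
            print("text view still alive? \(self?.weakTextView != nil)")
        }
    }
}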
Posted. Last updated.
Post not yet marked as solved
1 Reply
1.7k Views
I use the URP 12.1.7 render pipeline, export an Xcode 14.2 project from Unity 2021.3.11f1, and run it on an iPhone 11 Pro Max (iOS 16.3). When I click the "M" button to perform a GPU capture of the workload, memory usage rises sharply, triggering an out-of-memory termination. For example, the game itself uses about 1.3 GB; once the GPU capture is performed, usage grows to more than 2.3 GB, making it impossible to profile the game.
Posted. Last updated.
Post not yet marked as solved
1 Reply
895 Views
I'm using a ScrollView to display my course list; each card in the list is the 'CourseCardView' shown in the code below. When I test on both the simulator and a physical device, scrolling is not smooth and feels laggy. I'm not sure how to address this issue. Here is my code:

//
//  CourseListView.swift
//  LexueSwiftUI
//
//  Created by bobh on 2023/9/3.
//

import SwiftUI

struct CourseCardView: View {
    let cardHeight: CGFloat = 150
    let cardCornelRadius: CGFloat = 10
    let cardHorizontalPadding: CGFloat = 10
    @State var courseName = "course name"
    @State var courseCategory = "category name"
    @State var progress = 66

    var body: some View {
        ZStack {
            Image("default_course_bg2")
                .resizable()
                .blur(radius: 5, opaque: true)
                .cornerRadius(cardCornelRadius)
                .padding(.horizontal, cardHorizontalPadding)
                .frame(height: cardHeight)
            Color.white
                .cornerRadius(cardCornelRadius)
                .padding(.horizontal, cardHorizontalPadding)
                .frame(height: cardHeight)
                .opacity(0.1)
            VStack(alignment: .leading, spacing: 2) {
                Spacer()
                Text(courseName)
                    .bold()
                    .font(.title)
                    .foregroundColor(.white)
                    .lineLimit(1)
                    .shadow(color: .black.opacity(0.5), radius: 5, x: 0, y: 2)
                    .padding(.leading, 10)
                Text(courseCategory)
                    .bold()
                    .font(.headline)
                    .foregroundColor(.white)
                    .lineLimit(1)
                    .shadow(color: .black.opacity(0.5), radius: 5, x: 0, y: 2)
                    .padding(.leading, 10)
                    .padding(.bottom, 5)
                ProgressView(value: Double(progress) / 100.0)
                    .padding(.horizontal, 10)
                    .padding(.bottom, 10)
                    .accentColor(.white)
                    .shadow(color: .black.opacity(0.3), radius: 5, x: 0, y: 2)
            }
            .frame(height: cardHeight)
            .frame(maxWidth: .infinity, alignment: .leading)
            .padding(.horizontal, cardHorizontalPadding)
            VStack {
                HStack {
                    Spacer()
                    Button(action: {
                    }) {
                        Image(systemName: "star")
                            .foregroundColor(.white)
                            .font(.system(size: 24).weight(.regular))
                            .shadow(color: .black.opacity(0.3), radius: 5, x: 0, y: 2)
                    }
                    .padding(.trailing, 10)
                    .padding(.top, 10)
                }
                Spacer()
            }
            .frame(height: cardHeight)
            .frame(maxWidth: .infinity, alignment: .leading)
            .padding(.horizontal, cardHorizontalPadding)
        }
        .shadow(color: .black.opacity(0.3), radius: 5, x: 0, y: 2)
    }
}

private struct ListView: View {
    @Binding var courses: [CourseShortInfo]
    @Binding var isRefreshing: Bool
    @Environment(\.refresh) private var refreshAction

    @ViewBuilder
    var refreshToolbar: some View {
        if let doRefresh = refreshAction {
            if isRefreshing {
                ProgressView()
            } else {
                Button(action: {
                    Task {
                        await doRefresh()
                    }
                }) {
                    Image(systemName: "arrow.clockwise")
                }
            }
        }
    }

    var body: some View {
        VStack {
            ScrollView(.vertical) {
                LazyVStack(spacing: 20) {
                    ForEach(courses) { item in
                        CourseCardView(courseName: item.shortname!,
                                       courseCategory: item.coursecategory!,
                                       progress: item.progress!)
                            .listRowSeparator(.hidden)
                    }
                }
            }
            .toolbar {
                refreshToolbar
            }
        }
    }
}

struct CourseListView: View {
    @State private var courseList = GlobalVariables.shared.courseList
    @State var isRefreshing: Bool = false
    @State var searchText: String = ""

    func testRefresh() async {
        Task {
            isRefreshing = true
            Thread.sleep(forTimeInterval: 1.5)
            withAnimation {
                isRefreshing = false
            }
        }
    }

    var body: some View {
        NavigationView {
            VStack {
                ListView(courses: $courseList, isRefreshing: $isRefreshing)
                    .refreshable {
                        print("refresh")
                        await testRefresh()
                    }
            }
            .searchable(text: $searchText, prompt: "Search the course")
            .navigationTitle("Course")
            .navigationBarTitleDisplayMode(.large)
        }
    }
}

struct CourseListView_Previews: PreviewProvider {
    static var previews: some View {
        CourseListView()
    }
}

The global variable code:

import Foundation
import SwiftUI

class GlobalVariables {
    static let shared = GlobalVariables()
    @Published var isLogin = true
    @Published var courseList: [CourseShortInfo] = [
        CourseShortInfo(id: 11201, shortname: "数据结构与C++程序设计", progress: 66, coursecategory: "自动化学院"),
        CourseShortInfo(id: 11202, shortname: "数值分析", progress: 20, coursecategory: "数学学院"),
        CourseShortInfo(id: 11203, shortname: "数据结构与C++程序设计", progress: 66, coursecategory: "自动化学院"),
        CourseShortInfo(id: 11204, shortname: "数值分析", progress: 20, coursecategory: "数学学院"),
        CourseShortInfo(id: 11205, shortname: "数据结构与C++程序设计", progress: 66, coursecategory: "自动化学院"),
        CourseShortInfo(id: 11206, shortname: "数值分析", progress: 20, coursecategory: "数学学院"),
        CourseShortInfo(id: 11207, shortname: "数据结构与C++程序设计", progress: 66, coursecategory: "自动化学院"),
        CourseShortInfo(id: 11208, shortname: "数值分析", progress: 20, coursecategory: "数学学院"),
        CourseShortInfo(id: 11209, shortname: "数据结构与C++程序设计", progress: 66, coursecategory: "自动化学院"),
        CourseShortInfo(id: 11210, shortname: "数值分析", progress: 20, coursecategory: "数学学院")
    ]
    var debugMode = true

    private init() {
    }
}
Posted by BobH2003. Last updated.
Post not yet marked as solved
3 Replies
660 Views
I have a system designed around a collection of devices (iPhones or iPads) discovered via Bonjour and connected with an NWConnection over TCP. Commands issued from one of the devices are sent to each of the peers and should be executed as soon as they are received. The problem I am encountering is high variability in device-to-device transit time that I am having a hard time accounting for. By "high" I mean anywhere from 20-80 ms of latency device to device. CPU utilization on each iPhone is essentially 0. Keepalives are enabled and fire every 2 seconds. Additionally, the physical devices all have Bluetooth off (as recommended in other posts). The interesting part is when I add an iPhone simulator (running on either an M1 MacBook Air or my M1 Ultra Mac Studio) to the connection mix. When a command is issued from the simulator instance, all connected devices report back within roughly 0-3 ms of deviation from the initiator, which is more what I expect from the network. Thinking that it was perhaps the M1 series of chips being far and away more competent than the A15s in the iPhone 13s and 14s in my testbed, I added my M1 iPad to the mix. Invoking a command from the iPad has similar variability to invoking it from one of the iPhones. The code is stupid simple, and I'm posting here prior to opening a DTS case in the hope that there's a magic "shouldUseSpeedholes=true" flag I can set. I have gone through several variations: using UDP instead of TCP (worse variability), changing from a listener/browser on each device to a single browser with multiple listeners (no difference), and changing from keeping things on the main queue (as in the docs) to a separate concurrent high-priority dispatch queue (no difference). There is no TLS in the mix. I have tried both allowing peer-to-peer and not (no difference). I'm even using a single-purpose project instead of my main codebase to isolate everything else that could be messing with scheduling or communications. I've tried each band of my Wi-Fi (2.4 GHz and two 5 GHz SSIDs) with no change.

Sending:

func send(option: AppFramingOptions, withData data: Data?) {
    let message = NWProtocolFramer.Message(appFramingOption: option)
    let context = NWConnection.ContentContext(identifier: "\(option.rawValue)", metadata: [message])
    for (uniqueKey, connection) in connections {
        if uniqueKey.contains(serviceName) {
            connection.send(content: data, contentContext: context, isComplete: true, completion: .idempotent)
        }
    }
}

In the above function, I am looking for the serviceName because I want to use connections connected via the browser as opposed to the listener (which isn't tagged with service name info in the endpoint). The check avoids a device receiving the command twice.

Receiving:

connection.receiveMessage { content, contentContext, isComplete, error in
    guard error == nil else {
        connection.cancel()
        return
    }
    if let msg = contentContext?.protocolMetadata(definition: AppFraming.definition) as? NWProtocolFramer.Message {
        switch msg.appFramingOption {
        default:
            self.messageReceivedHandler?(content, msg.appFramingOption, Date().timeIntervalSince1970, withUniqueKey)
        }
    }
    receiveMessage()
}

It's very much patterned off the TicTacToe example code (with a mechanism for multiple connections). My next step is embedding a web server in each device and making REST calls rather than sending commands over a TCP stream (which is CRAZY INSANE, I KNOW). I also do not want to have a Macintosh dependency for this system just because I cannot get predictable(ish) transit times.
Any help is appreciated!
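Not a fix from the thread, but for reference a sketch of the NWParameters knobs sometimes checked when chasing latency like this: disabling Nagle's algorithm and marking the traffic as latency-sensitive. Whether they help in this particular setup is an open question, and the endpoint below is a placeholder.

import Network

// Sketch only: TCP parameters tuned for small, latency-sensitive command messages.
func makeCommandParameters() -> NWParameters {
    let tcpOptions = NWProtocolTCP.Options()
    tcpOptions.noDelay = true                 // don't batch small writes (disable Nagle)
    tcpOptions.enableKeepalive = true
    tcpOptions.keepaliveIdle = 2

    let parameters = NWParameters(tls: nil, tcp: tcpOptions)
    parameters.serviceClass = .responsiveData // hint that this traffic is latency-sensitive
    parameters.includePeerToPeer = true
    // The post's custom framer (AppFraming) would still be inserted into
    // parameters.defaultProtocolStack.applicationProtocols.
    return parameters
}

// Usage sketch (service name and type are placeholders):
// let connection = NWConnection(
//     to: .service(name: "peer", type: "_cmd._tcp", domain: "local", interface: nil),
//     using: makeCommandParameters())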
Posted by bxlewi1. Last updated.
Post not yet marked as solved
0 Replies
444 Views
Is there a way to sample performance monitoring counters (PMCs) using the ktrace utility? The kperf sources contain some evidence that a PMC sampler exists (https://github.com/apple-oss-distributions/xnu/blob/main/osfmk/kperf/action.h#L48). Maybe there is a way to specify it as a sampler for ktrace artrace's --kperf option, along with the counters to sample?
Posted by fzhinkin. Last updated.
Post not yet marked as solved
0 Replies
843 Views
On my iPhone 13 Pro Max, with the current betas (Developer Beta 6 and Public Beta 4), there have been issues with the Apple Podcasts app. When I try to refresh to check for new episodes, it takes way too long to load them: about 1-2 minutes to display the latest episode. My Wi-Fi and cellular data speeds are fast, and I did not have this issue on the previous betas. I deleted the Podcasts app multiple times and deleted all of my downloaded podcasts/episodes, yet no resolution. I even enrolled my iPad Pro in the current betas and it has the same problem. I even reset my phone to factory settings and it's still a problem. As a result, I went back to iOS 16.6 because of this podcast bug. Hopefully the next beta update fixes this problem. Is anyone else having this issue?
Posted by ANT96NBA. Last updated.
Post not yet marked as solved
1 Reply
517 Views
I have QA doing testing with an Xcode GPU performance-state condition, but to help catch human error I'd like to abort the test if the GPU performance-state condition is not actually active, or is wrong. Is there a way I can do that from the iOS app itself? It seems the condition does not set any environment variable or anything else to indicate that it is active.
Posted by StevenAn. Last updated.
Post not yet marked as solved
0 Replies
440 Views
Hello, one of my users is experiencing this kind of issue: he taps my app's icon in the Dock. The app icon turns gray, indicating the process has started. This state, with him looking at his iOS home screen and the icon grayed out, lasts for a second or two. Then the app launches immediately, without displaying the launch screen. iPhone 14 Pro Max, iOS 16.3.1 (the video is from this system, but it happened for the same user on 16.6 too). He claims the issue happens only when the app has been closed for a longer time (for example, a day of inactivity). After this first, slower, broken launch, subsequent launches are much faster and display the launch screen properly.
Posted. Last updated.
Post not yet marked as solved
1 Reply
531 Views
Hi! I just got the Xcode 15 beta and the visionOS SDK installed and tried to run the "Hello World" program. My device got extremely slow, and the preview crashed a couple of times and is barely usable. I tried running the simulator; it took a long time to load, and interaction with the simulated visionOS was very slow. I understand it is quite an outdated machine, and I just want to know whether this is something Apple could improve in future updates, or an indicator that I should get the latest model. Thank you.
Posted. Last updated.
Post not yet marked as solved
0 Replies
740 Views
I've been playing with the functionality of CADisplayLink to programmatically monitor rendering performance in an iOS app. I have also spent quite some time in Instruments, of course (e.g. checking the Scroll Hitch Rate), but I'm curious what it looks like in the field and for specific screens in the app. The documentation says that a CADisplayLink timer object "allows your app to synchronize its drawing to the refresh rate of the display", which makes me expect that the provided selector is called somewhat close to the vsync events emitted by the underlying hardware. Also, the timestamp property is defined as follows: "The time interval that represents when the last frame displayed." So I use two successive callbacks to figure out the on-screen duration of the frame while scrolling and decide whether there was a hitch, like so:

@objc private func handleDisplayUpdate(_ displayLink: CADisplayLink) {
    defer {
        cachedTimestamp = displayLink.timestamp
    }
    guard let cachedTimestamp = cachedTimestamp else { return }
    let frameTime = displayLink.timestamp - cachedTimestamp
    if frameTime > aThreshold {
        // do some lightweight logging
    }
}

By using the CACurrentMediaTime() call I can see that the handler is called pretty close to what the latest timestamp holds, so it is about 1 ms behind the most recent frame update. However, the frameTime intervals are not always a multiple of 16.67 ms (assuming the device renders at 60 fps): while scrolling I observe numbers such as 25 ms or 43 ms. Given that frames are swapped onto the display only at vsync points in time, does the above mean that the timestamp property actually represents something else in the whole render loop?
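For comparison, a sketch (class name, threshold, and logging are placeholders, not from the post) that derives the expected frame interval from the link's own timestamp and targetTimestamp instead of assuming a fixed 16.67 ms, which sidesteps variable refresh rates when classifying a frame as a hitch:

import QuartzCore

final class HitchMonitor: NSObject {
    private var previousTimestamp: CFTimeInterval?

    @objc func handleDisplayUpdate(_ displayLink: CADisplayLink) {
        defer { previousTimestamp = displayLink.timestamp }
        guard let previous = previousTimestamp else { return }

        // Interval since the previous callback's frame, per the timestamp documentation.
        let frameTime = displayLink.timestamp - previous
        // Expected interval for this display, reported by the link itself.
        let expected = displayLink.targetTimestamp - displayLink.timestamp

        if frameTime > expected * 1.5 {   // placeholder threshold
            print("possible hitch: \(frameTime * 1000) ms (expected ~\(expected * 1000) ms)")
        }
    }
}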
Posted by montepaul. Last updated.
Post marked as solved
1 Reply
1.4k Views
When I run the performance test on a Core ML model, it shows predictions are 834% faster running on the Neural Engine than on the GPU. It also shows that 100% of the model can run on the Neural Engine (performance report screenshots attached: Neural Engine; GPU only). But when I set the compute units to all:

let config = MLModelConfiguration()
config.computeUnits = .all

and profile, it shows that the Neural Engine isn't used at all, other than when loading the model, which takes 25 seconds when allowed to use the Neural Engine versus less than a second when not allowing the Neural Engine. The difference in speed is the difference between the app being too slow to even release and quite reasonable performance. I have a lot of work invested in this, so I am really hoping that I can get it to run on the Neural Engine. Why isn't it actually running on the Neural Engine when the report shows that it is supported and I have the compute units set to allow the Neural Engine?
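As a reference point, a small sketch (model URL and input are placeholders, not from the post) that times model load and prediction separately with .cpuAndNeuralEngine requested (iOS 16 / macOS 13 and later), to check whether the 25 seconds is going into load-time compilation rather than into the predictions themselves:

import CoreML
import Foundation

func timeLoadAndPredict(modelURL: URL, input: MLFeatureProvider) throws {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine   // exclude the GPU entirely

    let loadStart = CFAbsoluteTimeGetCurrent()
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("load: \(CFAbsoluteTimeGetCurrent() - loadStart) s")

    _ = try model.prediction(from: input)       // first prediction may still pay one-time costs
    let predictStart = CFAbsoluteTimeGetCurrent()
    _ = try model.prediction(from: input)
    print("prediction: \((CFAbsoluteTimeGetCurrent() - predictStart) * 1000) ms")
}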
Posted by 3DTOPO. Last updated.
Post not yet marked as solved
2 Replies
3.9k Views
I am working with a ScrollView and a LazyVStack that can have many rows, sometimes 500 or more. My rows are expandable on tap. Performance of expand/collapse is unacceptable when the number of rows exceeds 100; the more rows, the more performance degrades. This is the gist of my code:

struct PlaceView: View {
    /* ... */
    var place: Place
    @Binding var expandedPlaceId: Int

    var content: some View {
        VStack(alignment: .leading) {
            Text(place.name)

            if self.expandedPlaceId == place.id {
                VStack(alignment: .leading) {
                    Text(place.city)
                }
            }
        }
    }
}

struct PlacesListView: View {
    /* ... */
    @State var expandedPlaceId = -1

    var body: some View {
        ScrollView {
            LazyVStack {
                ForEach(places, id: \.id) { place in
                    PlaceView(place: place, expandedPlaceId: self.$expandedPlaceId)
                        .onTapGesture { self.selectDeselect(place) }
                        .animation(.linear(duration: 0.3))
                }
            }
        }
    }
}

I added an init function to PlaceView containing print("calling init for place \(place.id)"). What I noticed is that when expanding, every single PlaceView calls init. For 20 or 30 PlaceView rows this is not a problem, but with several hundred views this creates choppiness in the animation and delays the expansion of the detail section. Is there any workaround for this problem? It does not matter whether I use LazyVStack or VStack (which itself is unexpected). Here is an example project that can easily be modified to highlight the problem: https://github.com/V8tr/ExpandableListSwiftUI In the above project, simply add the following to PlaceView.swift:

init(place: Place, isExpanded: Bool) {
    self.place = place
    self.isExpanded = isExpanded
    print("PlaceView init \(self.place.id)")
}
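One commonly suggested adjustment (a sketch, not a verified fix for this case) is to drop the per-row .animation modifier and wrap the state change itself in withAnimation, so the animation is scoped to the expansion change rather than attached to every row:

// In PlacesListView (sketch; selectDeselect is the post's own helper):
private func selectDeselect(_ place: Place) {
    withAnimation(.linear(duration: 0.3)) {
        expandedPlaceId = (expandedPlaceId == place.id) ? -1 : place.id
    }
}

// ...and in the ForEach, no .animation modifier on each row:
// PlaceView(place: place, expandedPlaceId: self.$expandedPlaceId)
//     .onTapGesture { self.selectDeselect(place) }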
Posted by recursion. Last updated.