iPhone 12 Pro, iOS 15.0.1 (occurs on most iPhone devices)
I am using the front-facing TrueDepth data that I get from ARKit (ARFrame.capturedDepthData) in my own application.
...
func session(_ session: ARSession, didUpdate frame: ARFrame)
{
    // get capturedDepthData
    guard let depthData = frame.capturedDepthData else { return }
    ...
My app does a lot of other processing, so the CPU heats up soon after launch. Once the CPU is hot, the TrueDepth frame rate drops from about 15 fps to 5 fps.
Is there a way to maintain a constant frame rate?
Is there a way to get a lower resolution? QVGA resolution is enough for my app.
Simple sample code is attached below.
How to use:
- Create a new project in Xcode 13.0 (select the "App" template with the "Storyboard" interface)
- Replace "ViewController.swift" in the generated code
- Prepare a suitable UI in "Main" and connect the outlets
- Build the app
To reproduce the frame-rate drop, use another app to heat up the CPU first.
Captured results:
- ThermalState.nominal
- ThermalState.serious
// ViewController.swift
import Foundation
import UIKit
import ARKit
import RealityKit

class ViewController: UIViewController
{
    // Outlets
    @IBOutlet weak var arView: ARView!
    @IBOutlet weak var depthImageView: UIImageView!
    @IBOutlet weak var fpsLabel: UILabel!

    private let fpsManager: FpsManager = FpsManager()

    override func viewDidLoad()
    {
        super.viewDidLoad()
        // Face tracking is required: capturedDepthData is only
        // delivered by face tracking sessions.
        arView.session.delegate = self
        arView.session.run(ARFaceTrackingConfiguration())
    }
}
extension ViewController: ARSessionDelegate
{
    func session(_ session: ARSession, didUpdate frame: ARFrame)
    {
        // get capturedDepthData (nil on frames without a depth update)
        guard let depthData = frame.capturedDepthData else { return }
        let depthMap = depthData.depthDataMap
        depthImageView.image = UIImage(ciImage: CIImage(cvPixelBuffer: depthMap))

        // update log (Update() returns a value about once per second)
        guard let fps = fpsManager.Update() else { return }
        let thermalStateStr = ProcessInfo.processInfo.thermalState.toString()
        let depthWidth = CVPixelBufferGetWidth(depthMap)
        let depthHeight = CVPixelBufferGetHeight(depthMap)
        fpsLabel.text = String(format: "[%d, %d], fps: %2.2f, thermalState: \(thermalStateStr)",
                               depthWidth, depthHeight, fps)
    }
}
class FpsManager {
    private var start: Date = Date()
    private var frames: Int = 0
    private let waitTime: TimeInterval = 1.0

    // Returns the measured fps roughly once per second, nil otherwise.
    public func Update() -> Double? {
        // Count the current frame before checking the interval,
        // so the last frame of each window is not lost.
        frames += 1
        let now = Date()
        let secFromStart = now.timeIntervalSince(start)
        if secFromStart < waitTime {
            return nil
        }
        let fps = Double(frames) / secFromStart
        start = now
        frames = 0
        return fps
    }
}
extension ProcessInfo.ThermalState {
    func toString() -> String {
        switch self {
        case .nominal:
            return "nominal"
        case .fair:
            return "fair"
        case .serious:
            return "serious"
        case .critical:
            return "critical"
        @unknown default:
            return "unknown"
        }
    }
}
Your best bet is to optimize your app to reduce its overall performance impact. One thing you could try is reducing the resolution and frame rate of your face tracking configuration: loop through ARFaceTrackingConfiguration.supportedVideoFormats to find the video format with the lowest resolution and frame rate (dropping from 60 fps to 30 fps should have a significant impact on thermals).
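For example, a minimal sketch of picking the lowest-cost supported format before running the session (the "cost" ranking here is my own heuristic, not an ARKit API; the available formats vary by device):

```swift
import ARKit

// Sketch: choose the supported video format with the lowest pixel
// throughput (resolution x frame rate) before running the session.
let configuration = ARFaceTrackingConfiguration()
if let lowest = ARFaceTrackingConfiguration.supportedVideoFormats.min(by: { a, b in
    // Rank formats by total pixels per second (illustrative heuristic).
    let aCost = a.imageResolution.width * a.imageResolution.height * CGFloat(a.framesPerSecond)
    let bCost = b.imageResolution.width * b.imageResolution.height * CGFloat(b.framesPerSecond)
    return aCost < bCost
}) {
    configuration.videoFormat = lowest
}
arView.session.run(configuration)
```

Note that the video format governs the color camera feed; the depth map resolution itself is fixed by the hardware, so a lower video format mainly helps with the thermal load.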
Beyond that, my recommendation is that you utilize Time Profiler to see if there is any room for further optimization in your code.
Also, keep in mind that the thermal state can be partly out of your control (for example, if the user is using their device outdoors in direct sunlight on a very hot day).
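When the thermal state does degrade, one mitigation you can apply in your own code is to skip frames as the device heats up, trading smoothness for a sustainable workload. A minimal sketch, assuming frame-skipping is acceptable for your use case (the divisor values below are illustrative assumptions, not Apple guidance):

```swift
import Foundation

// Sketch: process only every Nth depth frame, with N growing as the
// thermal state degrades.
final class ThermalThrottler {
    private(set) var frameDivisor = 1
    private var counter = 0

    // Call once (e.g. from viewDidLoad) to track thermal state changes.
    func startObserving() {
        NotificationCenter.default.addObserver(
            forName: ProcessInfo.thermalStateDidChangeNotification,
            object: nil, queue: .main
        ) { [weak self] _ in
            self?.updateDivisor(for: ProcessInfo.processInfo.thermalState)
        }
        updateDivisor(for: ProcessInfo.processInfo.thermalState)
    }

    func updateDivisor(for state: ProcessInfo.ThermalState) {
        switch state {
        case .nominal:  frameDivisor = 1  // process every frame
        case .fair:     frameDivisor = 2  // every 2nd frame
        case .serious:  frameDivisor = 3  // every 3rd frame
        case .critical: frameDivisor = 5  // every 5th frame
        @unknown default: frameDivisor = 3
        }
        counter = 0
    }

    // Call once per ARFrame in session(_:didUpdate:);
    // returns true when this frame should be processed.
    func shouldProcessFrame() -> Bool {
        counter += 1
        return counter % frameDivisor == 0
    }
}
```

This won't keep the sensor's delivery rate constant, but it keeps your per-frame work bounded so the device heats up more slowly.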