Video Memory Leak when Backgrounding

While controlling the following two scenes in one ImmersiveSpace, we found a memory leak that occurs when we background the app while a stereoscopic video is playing.

ImmersiveView's two scenes:

  • Scene 1 has one toggle button
  • Scene 2 has the same toggle button plus a 180-degree skysphere playing a stereoscopic video

Attached are the files and images of the memory leak as captured in Xcode.

To replicate this memory leak, follow these steps:

  1. Create a new visionOS app using the Xcode template, as illustrated below.
  2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist); a rough sketch of this scene setup appears after these steps.
  3. Replace all Swift files with those you will find in the attached texts.
  4. In ImmersiveView, replace the stereoscopic video with a large 3D 180-degree video of your own, bundled in your project.
  5. Launch the app in debug mode from Xcode onto the Apple Vision Pro device or the simulator.
  6. Display memory use by pressing Command+7 and selecting Memory to view the live memory graph.
  7. Press the first immersive space's "Open ImmersiveView" button.
  8. Press the second immersive space's "Show Immersive Video" button.
  9. Background the app.
  10. When the app tray appears, foreground the app by selecting it.
  11. The first immersive space should appear.
  12. Repeat steps 7, 8, 9, and 10 multiple times.
  13. Observe the memory use going up; the graph should look similar to the illustration below.
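
For reference, here is a rough sketch of the scene setup and buttons exercised in steps 2, 7, and 8. The identifiers ("FirstSpace", "VideoSpace", "FirstSpaceView") are placeholders, not the names from the attached files.

import SwiftUI

@main
struct MyImmersiveApp: App {
    var body: some Scene {
        // With UIApplicationPreferredDefaultSceneSessionRole set to
        // UISceneSessionRoleImmersiveSpaceApplication, the app launches
        // directly into the first immersive space.
        ImmersiveSpace(id: "FirstSpace") {
            FirstSpaceView()
        }
        ImmersiveSpace(id: "VideoSpace") {
            ImmersiveView()   // contains the 180-degree skysphere video
        }
    }
}

struct FirstSpaceView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        Button("Open ImmersiveView") {
            Task {
                // Only one immersive space can be open at a time,
                // so dismiss the current one before opening the next.
                await dismissImmersiveSpace()
                _ = await openImmersiveSpace(id: "VideoSpace")
            }
        }
    }
}

// Stub standing in for the attached ImmersiveView (skysphere + video).
struct ImmersiveView: View {
    var body: some View { Text("Show Immersive Video") }
}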

In ImmersiveView, upon backgrounding the app, I do the following (sketched after this list):

  • call a reset method to clear the video's memory
  • dismiss the Immersive Space containing the video (even though, upon execution, visionOS raises the purple warning "Unable to dismiss an Immersive Space since none is opened"; it appears visionOS dismisses any ImmersiveSpace upon backgrounding, which makes sense)
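
A minimal sketch of that backgrounding path, using scenePhase; the type names and the body of reset() are placeholders for the actual attached code:

import SwiftUI
import RealityKit
import AVFoundation

// Placeholder playback holder; the real reset logic lives in the attached files.
final class VideoPlayback {
    let avPlayer = AVPlayer()

    func reset() {
        avPlayer.pause()
        avPlayer.replaceCurrentItem(with: nil)  // drop the item and its buffers
    }
}

struct ImmersiveVideoView: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @State private var playback = VideoPlayback()

    var body: some View {
        RealityView { _ in
            // ... build the 180-degree skysphere and attach a VideoMaterial(avPlayer:) ...
        }
        .onChange(of: scenePhase) { _, newPhase in
            if newPhase == .background {
                playback.reset()                        // clear the video's memory
                Task { await dismissImmersiveSpace() }  // visionOS may already have dismissed the space
            }
        }
    }
}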

Am I not releasing the memory correctly? Or is there really a memory leak in either SwiftUI's ImmersiveSpace or AVFoundation's AVPlayer when the app is backgrounded?

Answered by Vision Pro Engineer in 808397022

Hey @VaiStardom,

Rather than trying to replicate your setup with the files provided, could you file a bug report for this issue?

I'd greatly appreciate it if you could open a bug report, include a sample project that replicates the issue and a sysdiagnose from the Apple Vision Pro, and post the FB number here once you do. Bug Reporting: How and Why? has tips on creating your bug report.

Thanks!
Michael

Accepted Answer

Hey @VaiStardom,

Upon further investigation, the system behaves correctly; the memory leak is in your view model, where the completion block passed to addObserver(forName:object:queue:using:) captures the view model and prevents it from being destroyed. To avoid a retain cycle, use a weak reference to self inside the block when self holds the observer as a strong reference.

In your case this would look like the following:

NotificationCenter.default.addObserver(
    forName: .AVPlayerItemDidPlayToEndTime,
    object: avPlayer.currentItem,
    queue: .main) { [weak self] _ in
        guard let self else { return }

        Task {
            self.avPlayer.seek(to: .zero)
            self.avPlayer.play()
        }
    }

Let me know if you have additional questions,
Michael
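
As an aside (not part of the answer above): addObserver(forName:object:queue:using:) also returns an observer token, so another option is to keep that token and remove the observation when the owner is deallocated. A sketch, with hypothetical names:

import AVFoundation
import Foundation

final class PlayerLooper {
    let avPlayer = AVPlayer()
    private var endObserver: NSObjectProtocol?

    func startObservingEnd() {
        endObserver = NotificationCenter.default.addObserver(
            forName: .AVPlayerItemDidPlayToEndTime,
            object: avPlayer.currentItem,
            queue: .main) { [weak self] _ in
                guard let self else { return }
                self.avPlayer.seek(to: .zero)   // loop back to the start
                self.avPlayer.play()
            }
    }

    deinit {
        // Tear down the observation explicitly when the owner goes away.
        if let endObserver {
            NotificationCenter.default.removeObserver(endObserver)
        }
    }
}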

The above makes sense. We solved our memory leak by

  • not using a model
  • isolating our video feature in a RealityView whose only job is to show video (in our case, in a 180-degree sphere), taking in the appropriate URL

So the RealityView itself has all the AVFoundation objects and logic to play and display video. When we navigate to and away from the view, the RealityView is either recreated or destroyed, and we can actually observe the memory being allocated and released (a rough sketch of this approach follows below).

We are no longer using a notification, though we will keep the weak reference in mind for when we add additional features to video playback.
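
For anyone following a similar approach, here is a rough sketch of that kind of self-contained video RealityView. Names are placeholders, and a real 180-degree player would use custom half-sphere geometry rather than the full sphere generated here:

import SwiftUI
import RealityKit
import AVFoundation

struct VideoSphereView: View {
    let videoURL: URL
    @State private var player = AVPlayer()

    var body: some View {
        RealityView { content in
            player.replaceCurrentItem(with: AVPlayerItem(url: videoURL))

            // Map the video onto the inside of a sphere surrounding the viewer.
            let material = VideoMaterial(avPlayer: player)
            let sphere = ModelEntity(mesh: .generateSphere(radius: 10),
                                     materials: [material])
            sphere.scale = SIMD3<Float>(1, 1, -1)  // flip the mesh so the video faces inward
            content.add(sphere)

            player.play()
        }
        .onDisappear {
            // The view owns the whole playback lifecycle, so tear it down here.
            player.pause()
            player.replaceCurrentItem(with: nil)
        }
    }
}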
