Hello Apple,
I am concerned about the new Screen Mirroring feature available on iOS.
I have an app that is only meant to be viewed on iPhones (not Macs or other computers) for security reasons.
I am assuming that Screen Mirroring uses AirPlay underneath. Is there an API, planned or coming, that can disable this functionality, or is there a way for my app to opt out of iOS Screen Mirroring?
Thanks.
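For reference, the closest existing hook I'm aware of is detection rather than an opt-out: a minimal sketch, assuming UIKit, that watches UIScreen.isCaptured (which is true while the screen is being recorded, mirrored, or sent over AirPlay) so an app can at least hide sensitive content while mirroring is active.

import UIKit

// A sketch, not an official opt-out API: observe capture/mirroring
// state changes and react by hiding sensitive content.
final class MirroringObserver {
    private var observer: NSObjectProtocol?

    // onChange receives true while the screen is captured, mirrored, or AirPlayed.
    func start(onChange: @escaping (Bool) -> Void) {
        observer = NotificationCenter.default.addObserver(
            forName: UIScreen.capturedDidChangeNotification,
            object: nil,
            queue: .main
        ) { _ in
            onChange(UIScreen.main.isCaptured)
        }
        onChange(UIScreen.main.isCaptured) // report the initial state
    }
}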
Hi Team,
I'm using AVPlayer on Apple TV with the Siri Remote (2nd generation), which has the clickpad with buttons around it, and I need to detect where the user pressed on the remote for fast forward/backward. I have tested many different approaches, but nothing is working for me.
Does anyone have any idea how to resolve it in Swift?
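Not an authoritative answer, but one approach worth trying: on tvOS, edge presses on the Siri Remote clickpad arrive as arrow press types in the responder chain. A sketch, assuming a plain UIViewController hosting the player (note that AVPlayerViewController may consume these presses itself):

import UIKit

class PlayerContainerViewController: UIViewController {
    override func pressesBegan(_ presses: Set<UIPress>, with event: UIPressesEvent?) {
        for press in presses {
            switch press.type {
            case .leftArrow:
                skip(by: -10) // left edge of the clickpad
            case .rightArrow:
                skip(by: 10)  // right edge of the clickpad
            default:
                super.pressesBegan(presses, with: event)
            }
        }
    }

    private func skip(by seconds: Double) {
        // seek the AVPlayer here (omitted)
    }
}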
In the code example provided, there is a Boolean in the Video object that marks a video as 3D:
/// A Boolean value that indicates whether the video contains 3D content.
let is3D: Bool
I have a hosted spatial video that I know plays correctly in the AVP player. When I point the Videos.json file to this URL and set is3D=true, my 3D video doesn't show up and I get the following error:
iPVC/1-0 Received playback error: [Error Domain=AVFoundationErrorDomain Code=-11850 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The server is not correctly configured., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x30227c510 {Error Domain=CoreMediaErrorDomain Code=-12939 "byte range length mismatch - should be length 2 is length 2434" UserInfo={NSDescription=byte range length mismatch - should be length 2 is length 2434, NSURL=https: <omitted for post> }}}]
Can anyone tell me what might be going on? The error says my server is not configured correctly. For context, I'm using Google Drive to deliver dynamic images/videos via:
https://drive.google.com/uc?export=download&id= <file ID>
The above works great for my images and 2D videos. Is there something I need to do differently when delivering MV-HEVC videos?
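For what it's worth, the CoreMediaErrorDomain message suggests the host isn't honoring HTTP byte-range requests: the player asked for 2 bytes and got 2434 back. A quick diagnostic sketch (example.com stands in for your real URL) that checks whether the endpoint returns 206 Partial Content with exactly the requested range; Google Drive's uc?export=download endpoint may not, which would explain why whole-file image/2D delivery works while MV-HEVC streaming fails.

import Foundation

// Ask for exactly two bytes and verify the server honors the Range header.
var request = URLRequest(url: URL(string: "https://example.com/video.mov")!) // hypothetical URL
request.setValue("bytes=0-1", forHTTPHeaderField: "Range")

URLSession.shared.dataTask(with: request) { data, response, _ in
    if let http = response as? HTTPURLResponse {
        print("status:", http.statusCode) // expect 206, not 200
        print("Content-Range:", http.value(forHTTPHeaderField: "Content-Range") ?? "missing")
        print("bytes received:", data?.count ?? 0) // expect 2
    }
}.resume()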
I've created a fully immersive visionOS project and added a spatial video player in the ImmersiveView Swift file. I have a few buttons in a different VideosView Swift file on a floating window, and I'd like to switch the video playing in ImmersiveView when I click a button in the VideosView file.
Video player working great in ImmersiveView:
RealityView { content in
    if let videoEntity = try? await Entity(named: "Video", in: realityKitContentBundle) {
        guard let url = Bundle.main.url(forResource: "video1", withExtension: "mov") else {
            fatalError("Video was not found!")
        }
        let asset = AVURLAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        let player = AVPlayer()
        videoEntity.components[VideoPlayerComponent.self] = .init(avPlayer: player)
        content.add(videoEntity)
        player.replaceCurrentItem(with: playerItem)
        player.play()
    } else {
        print("file not found!")
    }
}
Buttons in floating window from VideosView:
struct VideosView: View {
    var body: some View {
        VStack {
            Button(action: {}) {
                Text("video 1").font(.title)
            }
            Button(action: {}) {
                Text("video 2").font(.title)
            }
            Button(action: {}) {
                Text("video 3").font(.title)
            }
        }
    }
}
In general, how do I control the video player across views, and how do I replace the video when each button is selected? Any help/code/links would be greatly appreciated.
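One common pattern, sketched under assumptions (the PlayerModel name and resource names are hypothetical): keep a single AVPlayer in an observable model, inject it into both ImmersiveView and VideosView, attach its player to the VideoPlayerComponent, and have each button swap the current item.

import AVFoundation
import Observation

@Observable
final class PlayerModel {
    let player = AVPlayer() // shared by ImmersiveView and VideosView

    func play(resource: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: "mov") else { return }
        player.replaceCurrentItem(with: AVPlayerItem(url: url))
        player.play()
    }
}

With the same PlayerModel passed into both views (for example via .environment), ImmersiveView would build its VideoPlayerComponent from model.player instead of a local AVPlayer, and each button action becomes something like model.play(resource: "video2").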
I'm trying to cast the screen from an iOS device to an Android device.
I'm leveraging ReplayKit on iOS to capture the screen and VideoToolbox for compressing the captured video data into H.264 format using CMSampleBuffers. Both iOS and Android are configured for H.264 compression and decompression.
While screen casting works flawlessly within the same platform (iOS to iOS or Android to Android), I'm encountering an error ("not in avi mode") on the Android receiver when casting from iOS. My research suggests that the underlying container formats for H.264 might differ between iOS and Android.
Data transmission over the TCP socket seems to be functioning correctly.
My question is:
Is there a way to ensure a common container/bitstream format for H.264 across iOS and Android? (See the sketch after the device details below.)
Here's a breakdown of the iOS sender details:
Device: iPhone 13 mini running iOS 17
Development Environment: Xcode 15 with a minimum deployment target of iOS 16
Screen Capture: ReplayKit for capturing the screen and obtaining CMSampleBuffers
Video Compression: VideoToolbox for H.264 compression
Compression Properties:
kVTCompressionPropertyKey_ConstantBitRate: 6144000 (bitrate)
kVTCompressionPropertyKey_ProfileLevel: kVTProfileLevel_H264_Main_AutoLevel (profile and level)
kVTCompressionPropertyKey_MaxKeyFrameInterval: 60 (maximum keyframe interval)
kVTCompressionPropertyKey_RealTime: true (real-time encoding)
kVTCompressionPropertyKey_Quality: 1 (highest quality on the 0.0-1.0 scale)
NAL Unit Handling: Custom header is added to NAL units
Android Receiver Details:
Device: RedMi 7A running Android 10
Video Decoding: MediaCodec API for receiving and decoding the H.264 stream
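If the mismatch is framing rather than the codec itself, note that VideoToolbox emits AVCC-framed H.264 (each NAL unit prefixed with a 4-byte big-endian length), while a raw stream fed to Android's MediaCodec generally expects Annex B start codes, with SPS/PPS (available via CMVideoFormatDescriptionGetH264ParameterSetAtIndex) sent before keyframes. A minimal conversion sketch, assuming the buffer holds complete length-prefixed NAL units:

import Foundation

// Rewrite 4-byte AVCC length prefixes as Annex B start codes, in place.
// Works because the start code and the length prefix are both 4 bytes.
func avccToAnnexB(_ data: inout Data) {
    let startCode: [UInt8] = [0, 0, 0, 1]
    var offset = 0
    while offset + 4 <= data.count {
        let nalLength = data.subdata(in: offset..<offset + 4)
            .reduce(0) { ($0 << 8) | Int($1) } // big-endian NAL unit length
        data.replaceSubrange(offset..<offset + 4, with: startCode)
        offset += 4 + nalLength
    }
}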
I am a bit confused about whether certain Video Toolbox (VT) encoders support hardware acceleration.
When I query the list of VT encoders (VTCopyVideoEncoderList(nil, &encoderList)) on an iPhone 14 Pro, the kVTVideoEncoderList_IsHardwareAccelerated flag is absent for the avc1 (AVC / H.264) and hvc1 (HEVC / H.265) encoders, which, based on the documentation in VTVideoEncoderList.h, means the encoders do not support hardware acceleration:
optional. CFBoolean. If present and set to kCFBooleanTrue, indicates that the encoder is hardware accelerated.
In fact, no encoder in this list returns this flag as true, and most of them do not include the flag in their dictionaries at all.
On the other hand, when I create a compression session using VTCompressionSessionCreate() and pass kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as true in the encoder specification, querying kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder returns a CFBoolean value of true for both the H.264 and H.265 encoders.
In fact, I get true (for both of the aforementioned encoders) even if I don't specify kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder when creating the compression session (note that this flag was introduced in iOS 17.4).
So the question is: are those encoders actually hardware accelerated on my device, and if so, why isn't that reflected in the VTCopyVideoEncoderList() output?
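A minimal repro sketch of the session-side query described above (1920x1080 H.264, no encoder specification):

import VideoToolbox

var session: VTCompressionSession?
VTCompressionSessionCreate(
    allocator: nil,
    width: 1920,
    height: 1080,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: nil, // no hardware flag requested
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session)

if let session {
    var value: CFTypeRef?
    VTSessionCopyProperty(
        session,
        key: kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder,
        allocator: nil,
        valueOut: &value)
    print("using hardware encoder:", (value as? Bool) ?? false)
}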
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }

    func updateRotation(for media: WRMedia) {
        let angle = Angle.degrees(0.0)
        let rotation = simd_quatf(angle: Float(angle.radians), axis: SIMD3<Float>(0, 0.0, 0))
        self.transform.rotation = rotation
    }

    struct WRSubscribeComponent: Component {
        var subscription: AnyCancellable
    }
}
The assertion fires at this line:
case .failure(let error): assertionFailure("\(error)")
with the error:
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
Sandbox link: https://codesandbox.io/p/sandbox/webrtc-ios-lasted-issue-jzx9h5
Issue: WebRTC user media shows a black video screen when the URL changes.
Steps to reproduce:
1. Get the source code from the sandbox repo above.
2. Install packages with "npm install".
3. Start the local web app under HTTPS with "HTTPS=true npm start".
4. Update the URL by clicking the "Update URL search param" button.
OS: iOS 17.4.1
Browser: Safari
Device: iPhone 11 Pro
Can anyone help?
Note: it works on an iPhone X running iOS 16.
Video of the issue: https://streamable.com/rj07u8
I have an AVPlayer that loads a video and places it on a screen ModelEntity in the immersive view using VideoMaterial. This also makes the video untappable, since it is a VideoMaterial.
Here's the code for the same:
let screenModelEntity = model.garageScreenEntity as! ModelEntity
let modelEntityMesh = screenModelEntity.model!.mesh
let url = Bundle.main.url(forResource: "<URL>", withExtension: "mp4")!
let asset = AVURLAsset(url: url)
let playerItem = AVPlayerItem(asset: asset)
let player = AVPlayer()
let material = VideoMaterial(avPlayer: player)
screenModelEntity.components[ModelComponent.self] = .init(mesh: modelEntityMesh, materials: [material])
player.replaceCurrentItem(with: playerItem)
return player
I was able to load and play the video. However, I cannot figure out how to show player controls (AVPlayerViewController-style) to the user, similar to the DestinationVideo sample app.
How can I add video player controls in this case?
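As far as I know, VideoMaterial has no built-in transport controls, so one workaround is custom controls: a sketch, reusing the screenModelEntity and player from above, that makes the screen hit-testable and toggles playback on tap (the collision box size is a placeholder).

import SwiftUI
import RealityKit

// Make the screen entity receive input by giving it input-target and
// collision components.
screenModelEntity.components.set(InputTargetComponent())
screenModelEntity.components.set(
    CollisionComponent(shapes: [ShapeResource.generateBox(size: [1.6, 0.9, 0.01])])
)

// Then, on the RealityView in SwiftUI, toggle playback on tap:
// .gesture(
//     TapGesture()
//         .targetedToEntity(screenModelEntity)
//         .onEnded { _ in
//             if player.timeControlStatus == .playing {
//                 player.pause()
//             } else {
//                 player.play()
//             }
//         }
// )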
<div class="container" style="background-size: contain; user-select: none; pointer-events: none; height: 787.5px; width: 1400px;">
<div class="container__header">header</div>
<span>
<div class="video-container" style="inset: 17.853% 68% 11.747% 1%; z-index: 2; opacity: 1;">
<div class="video-container__placeholder-image">image</div>
<div class="video-container__content">
<div class="some-info"></div>
<div class="video-canvas"></div>
<div class="other-info"></div>
</div>
</div>
<div class="video-container" style="inset: 17.853% 1% 11.747% 33%; z-index: 1; opacity: 1;">
<div class="video-container__placeholder-image">image</div>
<div class="video-container__content">
<div class="video-canvas">
<div class="player" style="width: 100%; height: 100%; position: relative; overflow: hidden; background-color: black;">
<video playsinline="" muted="" style="object-fit: cover; width: 100%; height: 100%; position: absolute; left: 0px; top: 0px;"></video>
</div>
</div>
</div>
</div>
</span>
</div>
The page looks like this: [screenshot omitted]
Then the HTML changed as follows:
<div class="container" style="background-size: contain; user-select: none; pointer-events: none; height: 787.5px; width: 1400px;">
<div class="container__header">header</div>
<span>
<div class="video-container" style="inset: 100% 100% 0% 0%; z-index: 2; opacity: 0;">
<div class="video-container__placeholder-image">image</div>
<div class="video-container__content">
<div class="some-info"></div>
<div class="video-canvas"></div>
<div class="other-info"></div>
</div>
</div>
<div class="video-container" style="style="inset: 6.106% 5.98719% 0%; z-index: 3; opacity: 1;"">
<div class="video-container__placeholder-image">image</div>
<div class="video-container__content">
<div class="video-canvas">
<div class="player" style="width: 100%; height: 100%; position: relative; overflow: hidden; background-color: black;">
<video playsinline="" muted="" style="object-fit: cover; width: 100%; height: 100%; position: absolute; left: 0px; top: 0px;"></video>
</div>
</div>
</div>
</div>
</span>
</div>
In the Mac developer tools, the width of the video is 1400px, but it renders at the same size as before on iOS 17+ (tested on iOS 17.1 and 17.3.1).
The expected result looks like: [screenshot omitted]
The actual result looks like: [screenshot omitted]
I tried the same steps on iOS 14.6 and 16.4 and they worked as expected; this problem seems to exist only on iOS 17+.
Please help me resolve this problem. Thanks.
We use AVFragmentedAssetMinder to refresh the player data, and notifications for AVAssetDurationDidChange were consistently received whenever the asset duration changed.
However, since the release of iOS 17, AVAssetDurationDidChange notifications are no longer received.
Could anyone advise why this notification is not being triggered, and what we have to change?
NotificationCenter.default.addObserver(self, selector: #selector(self.onVideoUpdate), name: .AVAssetDurationDidChange, object: nil)
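For comparison, here is a minimal sketch of the setup I would expect to produce the notification (the URL is hypothetical); one thing worth trying is observing the specific fragmented asset rather than passing object: nil.

import AVFoundation

let asset = AVFragmentedAsset(url: URL(string: "https://example.com/live.mp4")!) // hypothetical growing fMP4
let minder = AVFragmentedAssetMinder(asset: asset, mindingInterval: 2.0)

NotificationCenter.default.addObserver(
    forName: .AVAssetDurationDidChange,
    object: asset, // observe this asset specifically, not nil
    queue: .main
) { _ in
    print("duration is now", asset.duration.seconds)
}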
#AVPlayer #AVMutableMovie
Hello, I've noticed that a server-hosted video larger than 19 MB doesn't play on iOS mobile devices. Is there a maximum size limit (in MB) for videos in the HTML video tag on iOS?
Hi guys, I'm implementing FairPlay support for a video streaming application. I've managed to get as far as generating the SPC and acquiring a license from the license server. However, when it comes to parsing the license (CKC) returned from the server, the FPS module returns error code -42671. Has anyone else faced this before and/or knows what the fix is? I thought passing it the license should be enough, unless additional data is required?
How can I update the cookies of a previously set m3u8 video in AVPlayer without creating a new AVURLAsset and replacing the AVPlayer's current item with it?
Hi everyone, I need to add a spatial video maker to my app, which is written in Objective-C. I found some reference code in Swift; can you help me convert it to Objective-C?
let left = CMTaggedBuffer(
    tags: [.stereoView(.leftEye), .videoLayerID(leftEyeLayerIndex)],
    pixelBuffer: leftEyeBuffer)
let right = CMTaggedBuffer(
    tags: [.stereoView(.rightEye), .videoLayerID(rightEyeLayerIndex)],
    pixelBuffer: rightEyeBuffer)
let result = adaptor.appendTaggedBuffers(
    [left, right], withPresentationTime: leftPresentationTs)
Does the new MV-HEVC Vision Pro spatial video format support an alpha channel? I've tried converting a side-by-side video with an alpha channel using this Apple example project, but the alpha channel is removed.
https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc
Is there any way to play panoramic or 360° videos in an immersive space without using VideoMaterial on a sphere?
I've tried local videos in 4K and 8K quality, and all of them look pixelated with this approach.
I tried both the simulator and a real device, and I can never get high-quality playback.
If the same video is played in a regular 2D player, on the other hand, it shows the expected quality.
I want to receive spatial videos in HEVC format, but after they are shared to my share extension, I found that the video had been automatically transcoded to AVC format.
With Safari 14.3, the video can autoplay; Safari 15 and above requires user interaction before autoplay. I don't want the user to have to interact. What should I do?