Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

WWDC24 iPhone Mirroring API
Hello everyone, I am thrilled about the iPhone Mirroring demo at WWDC24 and have a few thoughts to share. Will it work only over a local network, or can the iPhone also be accessed over the internet? Will there be an API to initiate iPhone Mirroring from an app? This would be a great feature for MDMs, allowing administrators to provide support for their users. Could you share more details from the development perspective?
Replies: 6 · Boosts: 0 · Views: 282 · Activity: 1w
When switching ImmersiveSpace, background music played by AVAudioPlayer stops being heard
Steps:
1. Launch the application.
2. ImmersiveSpace1 is opened and the 3D object's animation plays.
3. When the animation finishes, ImmersiveSpace1 is dismissed and ImmersiveSpace2 is opened.
Expected: the background music starts playing when ImmersiveSpace1 is opened and continues playing after ImmersiveSpace2 is opened.
Result: the background music plays when ImmersiveSpace1 is opened and stops when ImmersiveSpace2 is opened.
Environment: occurs on a physical device (visionOS 2); does not occur in the simulator. Xcode: Version 15.2 (15C500b).
Log (output on the device when ImmersiveSpace2 is opened; not output in the simulator):
AVAudioSession_iOS.mm:2223 Server returned an error from destroySession:. Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service with pid 39 named com.apple.audio.AudioSession was invalidated from this process." UserInfo={NSDebugDescription=The connection to service with pid 39 named com.apple.audio.AudioSession was invalidated from this process.}
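For reference, here is a minimal sketch of the scenario as I understand it, assuming a single shared AVAudioPlayer and the standard openImmersiveSpace/dismissImmersiveSpace environment actions; the type names, space ids, and resource file are placeholders, not the actual project:

import SwiftUI
import AVFoundation

// Shared player so the audio session is not tied to either immersive space.
final class BGMPlayer {
    static let shared = BGMPlayer()
    private var player: AVAudioPlayer?

    func start() {
        guard player == nil,
              let url = Bundle.main.url(forResource: "bgm", withExtension: "mp3") else { return }
        player = try? AVAudioPlayer(contentsOf: url)
        player?.numberOfLoops = -1   // loop indefinitely
        player?.play()
    }
}

struct SwitchingView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        Text("Demo")
            .task {
                BGMPlayer.shared.start()
                _ = await openImmersiveSpace(id: "ImmersiveSpace1")
                // ... wait for the animation to finish ...
                await dismissImmersiveSpace()
                // Expectation: the BGM keeps playing across this switch.
                _ = await openImmersiveSpace(id: "ImmersiveSpace2")
            }
    }
}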
Replies: 0 · Boosts: 1 · Views: 31 · Activity: 5h
AVPlayer with multiple audio tracks plays audio differently at the start
Hi, I'm trying to play multiple video/audio files with AVPlayer using AVMutableComposition. Each file can play simultaneously, so I put each one in its own track. I use only local files.

let second = CMTime(seconds: 1, preferredTimescale: 1000)
let duration = CMTimeRange(start: .zero, duration: second)
var currentTime = CMTime.zero
for _ in 0...4 {
    let mutableTrack = composition.addMutableTrack(
        withMediaType: .audio,
        preferredTrackID: kCMPersistentTrackID_Invalid
    )
    try mutableTrack?.insertTimeRange(
        duration,
        of: audioAssetTrack,
        at: currentTime
    )
    currentTime = currentTime + second
}

When I add many audio tracks (maybe more than 5), the first part sounds slightly different from the original when playback starts; it seems like the front part of the audio is skipped. But when I add only two tracks, AVPlayer plays the same as the original file.

avPlayer.play()

How can I fix this? Why do tracks that don't have anything playing at the start still affect the audio? Please let me know.
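For completeness, a minimal sketch of the pieces the snippet above assumes (loading the source audio track from a local file and playing the composition); the URL parameter and the error value are placeholders:

import AVFoundation

func makePlayer(from localAudioURL: URL) throws -> AVPlayer {
    let composition = AVMutableComposition()
    let asset = AVURLAsset(url: localAudioURL)
    guard let audioAssetTrack = asset.tracks(withMediaType: .audio).first else {
        throw NSError(domain: "Example", code: -1)   // placeholder error
    }

    let second = CMTime(seconds: 1, preferredTimescale: 1000)
    let range = CMTimeRange(start: .zero, duration: second)
    var currentTime = CMTime.zero

    // Same pattern as the post: one mutable audio track per one-second insert.
    for _ in 0...4 {
        let track = composition.addMutableTrack(withMediaType: .audio,
                                                preferredTrackID: kCMPersistentTrackID_Invalid)
        try track?.insertTimeRange(range, of: audioAssetTrack, at: currentTime)
        currentTime = currentTime + second
    }

    return AVPlayer(playerItem: AVPlayerItem(asset: composition))
}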
Replies: 1 · Boosts: 2 · Views: 696 · Activity: Dec ’23
Integrating Apple Music Subscriptions into a React Native App
Hi everyone, I'm currently developing an iOS app using React Native and recently got accepted into the Apple Music Global Affiliate Program. To fully utilize this opportunity, I need to implement the following functionalities:
1. Authorize Apple Music usage
2. Play Apple Music within my app
3. Identify if a user has an Apple Music subscription
4. Initiate and complete Apple Music subscription within my app
I've successfully implemented the first three functionalities using the react-native-apple-music module. Now, I need your help to understand how I can directly trigger the Apple Music subscription process from within my app. Thank you for your help!
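For anyone exploring this, one native-side option is StoreKit's SKCloudServiceSetupViewController, which presents the Apple Music subscription offer sheet; it would have to be exposed to React Native through a custom native module, the tokens below are placeholders, and on recent iOS versions MusicKit's subscription-offer presentation may be preferable, so treat this only as a sketch:

import StoreKit
import UIKit

func presentAppleMusicSubscriptionOffer(from presenter: UIViewController) {
    let options: [SKCloudServiceSetupOptionsKey: Any] = [
        .action: SKCloudServiceSetupAction.subscribe,
        .affiliateToken: "YOUR_AFFILIATE_TOKEN",   // placeholder
        .campaignToken: "YOUR_CAMPAIGN_TOKEN"      // placeholder
    ]

    let setupController = SKCloudServiceSetupViewController()
    setupController.load(options: options) { didSucceedLoading, error in
        if didSucceedLoading {
            presenter.present(setupController, animated: true)
        } else if let error {
            print("Failed to load subscription offer: \(error)")
        }
    }
}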
Replies: 0 · Boosts: 0 · Views: 44 · Activity: 19h
When adding a VideoPlayerComponent to an Entity placed in ImmersiveSpace and attempting to play an 8K video, the application crashes.
OS: visionOS 1.0, Xcode: 15.2. In the application under development, doing the following causes a crash:
1. Open an ImmersiveSpace
2. Add a VideoPlayerComponent to an Entity
3. Play an 8K video
The app crashes: the Apple logo appears and the device returns to the Home view. The problem does not occur in a stripped-down app that contains only the 8K video playback part.
Error log:
apply fence tx failed (client=0x6fbf0fcc) [0xfffffecc (ipc/mig) server died]
Failed to commit transaction (client=0x58510d43) [0x10000003 (ipc/send) invalid destination port]
nw_read_request_report [C1] Receive failed with error "No message available on STREAM"
nw_protocol_socket_reset_linger [C1:2] setsockopt SO_LINGER failed [22: Invalid argument]
Failed to set override status for bind point component member.
Message from debugger: Terminated due to signal 9
I can't share the entire application, but is anyone else experiencing the same problem? Is this a memory issue?
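For comparison, a minimal sketch of the setup being described (an Entity with a VideoPlayerComponent inside an ImmersiveSpace); the space id and file name are placeholders and this is not the crashing project itself:

import SwiftUI
import RealityKit
import AVFoundation

struct VideoImmersiveSpace: Scene {
    var body: some Scene {
        ImmersiveSpace(id: "VideoSpace") {
            RealityView { content in
                // Placeholder URL; the original report uses an 8K asset.
                guard let url = Bundle.main.url(forResource: "video8k", withExtension: "mov") else { return }
                let player = AVPlayer(url: url)
                let entity = Entity()
                entity.components.set(VideoPlayerComponent(avPlayer: player))
                content.add(entity)
                player.play()
            }
        }
    }
}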
Replies: 0 · Boosts: 0 · Views: 54 · Activity: 23h
Detect the end of queue in MPMusicPlayerController
Hello, this is building off of another post in which several other posters and I had already attempted to solve the issue in hacky ways. I am using MPMusicPlayerController.applicationQueuePlayer. My end goal is to dynamically add items to the queue when it has ended, based on my application's business logic. There is no way for me to know what these items will be when I initially set the queue. I have an updated implementation that seems to cover most edge cases, except for a glaringly obvious one: if there is just one item in the queue and the user skips the track via MPRemoteCommandCenter (e.g. from the lock screen), it does not work. Currently, when I receive an MPMusicPlayerControllerPlaybackStateDidChange notification, I run this block:

if player.playbackState == .paused,
   player.currentPlaybackTime == 0,
   player.indexOfNowPlayingItem == 0 {
    EndOfQueueManager.handle()
}

In the absence of a mechanism to detect the end of the queue from the framework, I would love to be able to add a target to MPRemoteCommand, like you can do for AVPlayer. I have tried to do exactly that, but it does not work:

MPRemoteCommandCenter.shared().nextTrackCommand.addTarget { (event) -> MPRemoteCommandHandlerStatus in
    if queue.count == 1 {
        EndOfQueueManager.handle()
    }
    return .success
}

I already have a functioning AVPlayer implementation that achieves my goal without any compromises or edge cases. I would be very disappointed if there is no way to do this with MPMusicPlayerController; being notified about the queue ending feels like a fairly rudimentary API hook.
Replies: 1 · Boosts: 2 · Views: 676 · Activity: Oct ’22
Reducing storage of similar PNGs by compressing them into a video and retrieving them losslessly--possibility or dumb idea?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well-suited for compression by video codecs since they're highly similar to one another. I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist. Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage. Any suggestions on how to reduce storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (5000%+ increase), on top of there being fatal errors when using async.
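One retrieval detail worth checking (a sketch under the assumption that inexact frame times contribute to the artifacts, not a guaranteed fix): AVAssetImageGenerator snaps to nearby frames unless its time tolerances are zeroed, so exact-frame extraction looks roughly like this:

import AVFoundation
import CoreGraphics

func extractFrames(from movieURL: URL, frameCount: Int, fps: Int32 = 30) async throws -> [CGImage] {
    let asset = AVURLAsset(url: movieURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.requestedTimeToleranceBefore = .zero   // ask for the exact frame,
    generator.requestedTimeToleranceAfter = .zero    // not the nearest convenient one
    generator.appliesPreferredTrackTransform = true

    var images: [CGImage] = []
    for index in 0..<frameCount {
        let time = CMTime(value: CMTimeValue(index), timescale: fps)
        let (image, _) = try await generator.image(at: time)
        images.append(image)
    }
    return images
}

Even with exact-frame extraction, HEVC (including HEVC with alpha) is a lossy codec, so a bit-exact round trip may not be achievable along this route.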
Replies: 18 · Boosts: 0 · Views: 288 · Activity: 1w
When VideoToolbox compresses to JPEG and H.264, the output format (YUV420) differs from the input format
On M-series machines, VideoToolbox GPU compression takes a YUV422 input (kCVPixelFormatType_422YpCbCr8BiPlanarVideoRange), and the chroma format of the compressed JPEG output is YUV420. On Intel-series GPUs, a YUV420 input (kCVPixelFormatType_420YpCbCr8Planar) is required, and the compressed JPEG output is YUV422. In both cases the output format after compression is not consistent with the input format. Does VideoToolbox GPU compression support outputting YUV422 or YUV444 JPEG images and H.264 streams?
Replies: 0 · Boosts: 0 · Views: 113 · Activity: 3d
Focus Peaking/Contrast Detection as seen in Final Cut Camera App
Hello everyone, with the release of Apple's new Final Cut Camera app, we see the possibility of overlaying a focus peaking indicator on the camera feed, showing which areas are in focus. We have had a contrast-based autofocus system for some time via AVCaptureDevice.Format.AutoFocusSystem.contrastDetection, but I haven't found a way to actually present contrast areas to the user. Given that Apple now has such an algorithm natively in the Final Cut Camera app, I wonder whether we devs now also get access to it. If not, does anybody know of existing focus peaking implementations? Thanks and best regards
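I'm not aware of a public API for the Final Cut Camera overlay, but a self-built approximation is possible by running an edge filter over each camera frame and compositing it on top; here is a rough sketch using Core Image, where the filter choice, tint, and intensity are assumptions rather than Apple's algorithm:

import CoreImage
import CoreVideo

// Returns an edge-highlight image that can be composited over the camera preview.
func focusPeakingOverlay(for pixelBuffer: CVPixelBuffer, intensity: Float = 4.0) -> CIImage? {
    let source = CIImage(cvPixelBuffer: pixelBuffer)

    guard let edges = CIFilter(name: "CIEdges") else { return nil }
    edges.setValue(source, forKey: kCIInputImageKey)
    edges.setValue(intensity, forKey: kCIInputIntensityKey)
    guard let edgeImage = edges.outputImage else { return nil }

    // Push the edge response into the red channel so peaking shows up as a red outline.
    guard let tint = CIFilter(name: "CIColorMatrix") else { return nil }
    tint.setValue(edgeImage, forKey: kCIInputImageKey)
    tint.setValue(CIVector(x: 1, y: 1, z: 1, w: 0), forKey: "inputRVector")
    tint.setValue(CIVector(x: 0, y: 0, z: 0, w: 0), forKey: "inputGVector")
    tint.setValue(CIVector(x: 0, y: 0, z: 0, w: 0), forKey: "inputBVector")
    guard let tinted = tint.outputImage else { return nil }

    return tinted.composited(over: source)
}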
Replies: 0 · Boosts: 0 · Views: 117 · Activity: 3d
WWDC Lab feedback
I am writing to follow up on my lab at WWDC24. I had a 1:1 lab with Mr. Kavin; we had a good 30-minute session, and for follow-up questions Kavin asked me to post them as feedback. Here is my question: we have screen sharing in our application and are trying to use CFMessagePort to pass a CVPixelBufferRef from the broadcast extension to the application. Questions:
1. How can we copy the planes of an IOSurface-backed CVPixelBufferRef onto another one without using memcpy; is there a zero-copy method?
2. How can we get notified when the data of an IOSurface-backed CVPixelBufferRef is changed by another process?
3. How can we send an IOSurface-backed CVPixelBufferRef from the broadcast extension to the application?
4. How can we pass an unowned IOSurfaceRef from the broadcast extension to the application?
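Not an official answer, but for the zero-copy and cross-process parts of these questions: an IOSurface-backed CVPixelBufferRef can be unwrapped to its IOSurfaceRef and re-wrapped in another CVPixelBuffer without copying the planes, and an IOSurface can be handed to another process as a mach port. A rough sketch of those two pieces, with error handling and the actual port transport omitted:

import CoreVideo
import IOSurface

// Wrap an existing IOSurface in a new CVPixelBuffer without copying pixel data.
func pixelBuffer(wrapping surface: IOSurfaceRef) -> CVPixelBuffer? {
    var wrapped: Unmanaged<CVPixelBuffer>?
    let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &wrapped)
    guard status == kCVReturnSuccess else { return nil }
    return wrapped?.takeRetainedValue()
}

// Sender side (e.g. the broadcast extension): turn the buffer's surface into a mach port.
// The receiving app can recreate the surface from that port with IOSurfaceLookupFromMachPort.
func machPort(for pixelBuffer: CVPixelBuffer) -> mach_port_t? {
    guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else { return nil }
    return IOSurfaceCreateMachPort(surface)
}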
Replies: 0 · Boosts: 0 · Views: 38 · Activity: 3d
How to identify audio and video AVCaptureDevices that are from the same hardware?
I'm working on a macOS application that captures audio and video. When the user selects a video capture source (most likely an elgato box), I would like the application to automatically select the audio input from the same device. I was achieving this by pairing video and audio sources that had the same name, but this doesn't work when the user plugs in two capture devices of the same make and model. With the command system_profiler SPUSBDataType I can list all the USB devices, and I can see that the two elgato boxes have different serial numbers. If I could find this serial number, then I could figure out which AVCaptureDevices come from the same hardware. Is there a way to get the manufacturer's serial number from the AVCaptureDevice object? Or a way to identify the USB device for an AVCaptureDevice, and from there I could get the serial or some other unique ID?
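For what it's worth, a starting point worth trying before dropping down to IOKit: enumerate both media types with AVCaptureDevice.DiscoverySession and compare uniqueID and modelID, which on some external devices encode enough of the USB location to pair audio and video. The device types and the idea that the IDs are pairable are assumptions here:

import AVFoundation

// List external capture devices and the identifiers available for pairing audio and video.
func dumpCaptureDeviceIdentifiers() {
    let videoDevices = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.externalUnknown],        // .external on macOS 14+; adjust for your setup
        mediaType: .video,
        position: .unspecified).devices
    let audioDevices = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.externalUnknown],
        mediaType: .audio,
        position: .unspecified).devices

    for device in videoDevices + audioDevices {
        print("\(device.localizedName): uniqueID=\(device.uniqueID), modelID=\(device.modelID)")
    }
}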
Replies: 1 · Boosts: 3 · Views: 125 · Activity: 5d
Audio transition using MPMusicPlayerApplicationController
Hi. I saw that in the iOS 18 beta there is a "transition" property on MusicKit's ApplicationMusicPlayer. However, in my app I am using MPMusicPlayerApplicationController because I want to play Apple Music songs, local songs, and podcasts, and I didn't find an analogous property on MPMusicPlayerApplicationController to specify transitions between songs. Am I missing something? Thanks, Dirk
Replies: 0 · Boosts: 0 · Views: 92 · Activity: 4d
Add to each picture's info in the Photos app the name of the user-created album it belongs to
I collect a lot of memes from the internet and save them on my iPhone, naming and classifying them into albums. But when I tap a photo in "All Photos", its info does not show which album I added it to, which is very frustrating. With this feature I could easily manage the memes that I did not add to the correct album.
Replies: 1 · Boosts: 0 · Views: 111 · Activity: 5d
ProRes 4444 blocky compression artifacts
I’m creating an Objective-C command-line utility to encode RAW image sequences to ProRes 4444, but I’m encountering blocky compression artifacts in the ProRes 4444 video output. To test the integrity of the image data before encoding to ProRes, I added a snippet in my encoding function that saves a 16-bit PNG before encoding, and the PNG looks perfect; I can see all detail in every part of the image's dynamic range. Here’s a comparison between the 16-bit PNG (on the right) and the ProRes 4444 output (on the left). As a further test, I re-encoded the test PNG to ProRes 4444 using DaVinci Resolve, and the ProRes 4444 output video from Resolve doesn’t have any blocky compression artifacts; it looks identical.
In short, this is what the utility does:
1. Unpacks the 12-bit raw data into 16-bit values.
2. After unpacking, the raw data is debayered to convert it into a standard color image format (BGR) using OpenCV.
3. Scales the debayered pixel values from their original 12-bit depth to fit into a 16-bit range. Up to this point everything is fine, confirmed by saving 16-bit PNGs.
4. The images are encoded to ProRes 4444 using the AVFoundation framework. The pixel buffers are created and managed via a pixel buffer attributes dictionary with kCVPixelFormatType_64RGBALE.
I need help figuring this out; I’m a real novice when it comes to AVFoundation and encoding to ProRes. See the relevant parts of my encodeToProRes function:

void encodeToProRes(const std::string &outputPath, const std::vector<std::string> &rawPaths, const std::string &proResFlavor) {
    NSError *error = nil;
    NSURL *url = [NSURL fileURLWithPath:[NSString stringWithUTF8String:outputPath.c_str()]];
    AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeQuickTimeMovie error:&error];
    if (error) {
        std::cerr << "Error creating AVAssetWriter: " << error.localizedDescription.UTF8String << std::endl;
        return;
    }

    // Load the first image to get the dimensions
    std::cout << "Debayering the first image to get dimensions..." << std::endl;
    Mat firstImage;
    int width = 5320;
    int height = 3900;
    if (!debayer_image(rawPaths[0], firstImage, width, height)) {
        std::cerr << "Error debayering the first image" << std::endl;
        return;
    }
    width = firstImage.cols;
    height = firstImage.rows;

    // Save the first frame as a PNG 16-bit image for validation
    std::string pngFilePath = outputPath + "_frame1.png";
    if (!imwrite(pngFilePath, firstImage)) {
        std::cerr << "Error: Failed to save the first frame as a PNG image" << std::endl;
    } else {
        std::cout << "First frame saved as PNG: " << pngFilePath << std::endl;
    }

    NSString *codecKey = nil;
    if (proResFlavor == "4444") {
        codecKey = AVVideoCodecTypeAppleProRes4444;
    } else if (proResFlavor == "422HQ") {
        codecKey = AVVideoCodecTypeAppleProRes422HQ;
    } else if (proResFlavor == "422") {
        codecKey = AVVideoCodecTypeAppleProRes422;
    } else if (proResFlavor == "LT") {
        codecKey = AVVideoCodecTypeAppleProRes422LT;
    } else {
        std::cerr << "Error: Invalid ProRes flavor specified: " << proResFlavor << std::endl;
        return;
    }

    NSDictionary *outputSettings = @{
        AVVideoCodecKey: codecKey,
        AVVideoWidthKey: @(width),
        AVVideoHeightKey: @(height)
    };

    AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
    videoInput.expectsMediaDataInRealTime = YES;

    NSDictionary *pixelBufferAttributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_64RGBALE),
        (id)kCVPixelBufferWidthKey: @(width),
        (id)kCVPixelBufferHeightKey: @(height)
    };

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoInput sourcePixelBufferAttributes:pixelBufferAttributes];

    ...

    [assetWriter startSessionAtSourceTime:kCMTimeZero];

    CMTime frameDuration = CMTimeMake(1, 24); // Frame rate of 24 fps
    int numFrames = static_cast<int>(rawPaths.size());

    ...

    // Encoding thread
    std::thread encoderThread([&]() {
        int frameIndex = 0;
        std::vector<CVPixelBufferRef> pixelBufferBuffer;
        while (frameIndex < numFrames) {
            std::unique_lock<std::mutex> lock(queueMutex);
            queueCondVar.wait(lock, [&]() { return !frameQueue.empty() || debayeringFinished; });

            if (!frameQueue.empty()) {
                auto [index, debayeredImage] = frameQueue.front();
                frameQueue.pop();
                lock.unlock();

                if (index == frameIndex) {
                    cv::Mat rgbaImage;
                    cv::cvtColor(debayeredImage, rgbaImage, cv::COLOR_BGR2RGBA);

                    CVPixelBufferRef pixelBuffer = NULL;
                    CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pixelBuffer);
                    if (result != kCVReturnSuccess) {
                        std::cerr << "Error: Could not create pixel buffer" << std::endl;
                        dispatch_group_leave(dispatchGroup);
                        return;
                    }

                    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                    void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
                    for (int row = 0; row < height; ++row) {
                        memcpy(static_cast<uint8_t*>(pxdata) + row * CVPixelBufferGetBytesPerRow(pixelBuffer),
                               rgbaImage.ptr(row),
                               width * 8);
                    }
                    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

                    pixelBufferBuffer.push_back(pixelBuffer);
    ...

Thanks very much!
Replies: 1 · Boosts: 0 · Views: 114 · Activity: 5d