I have a question about the Apple Music preview app for Windows 11.
It has a setting called Sound Check.
Is that feature available on the Apple Music web player and the Apple Music Android app?
If not, is that a planned feature for those?
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
There is a need to obtain data on the position of the TrueDepth camera matrix. Couldn't find anything in the documentation. Has anyone solved this problem? Is it generally possible to obtain this data?
Hi everyone, I need your help. I am working on an application where I am capturing a photo from the back camera using AVCaptureSession. It works fine on devices running iOS 17+, but I am facing an error on an iPhone X running iOS 16.7.4.
ERROR:
error: Optional(Error Domain=AVFoundationErrorDomain Code=-11803 "Cannot Record" UserInfo={NSUnderlyingError=0x283f0b780 {Error Domain=NSOSStatusErrorDomain Code=-16409 "(null)"}, NSLocalizedRecoverySuggestion=Try recording again., AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record})
Here is my Code:
final class CedulaScanningVC: UIViewController {
var captureSession: AVCaptureSession!
var stillImageOutput: AVCapturePhotoOutput!
var videoPreviewLayer: AVCaptureVideoPreviewLayer!
var delegate: ScanCedulaDelegate?
override func viewDidLoad() {
super.viewDidLoad()
}
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
}
override func viewWillDisappear(_ animated: Bool) {
super.viewWillDisappear(animated)
self.captureSession.stopRunning()
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
setupCamera()
}
// MARK: - Configure Camera
func setupCamera() {
captureSession = AVCaptureSession()
captureSession.sessionPreset = .medium
guard let backCamera = AVCaptureDevice.default(for: AVMediaType.video)
else {
print("Unable to access back camera!")
return
}
let input: AVCaptureDeviceInput
do {
input = try AVCaptureDeviceInput(device: backCamera)
//Step 9
stillImageOutput = AVCapturePhotoOutput()
if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
captureSession.addInput(input)
captureSession.addOutput(stillImageOutput)
setupLivePreview()
}
}
catch let error {
print("Error Unable to initialize back camera: \(error.localizedDescription)")
}
}
func setupLivePreview() {
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer.videoGravity = .resizeAspectFill
videoPreviewLayer.connection?.videoOrientation = .portrait
self.view.layer.addSublayer(videoPreviewLayer)
//Step12
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
self?.captureSession.startRunning()
//Step 13
DispatchQueue.main.async {
self?.videoPreviewLayer.frame = self?.view.bounds ?? .zero
}
}
}
func failed() {
let ac = UIAlertController(title: "Scanning not supported", message: "Your device does not support scanning a code from an item. Please use a device with a camera.", preferredStyle: .alert)
ac.addAction(UIAlertAction(title: "OK", style: .default))
present(ac, animated: true)
captureSession = nil
}
// MARK: - actions
func cameraButtonPressed() {
let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
stillImageOutput.capturePhoto(with: settings, delegate: self)
}
}
extension CedulaScanningVC: AVCapturePhotoCaptureDelegate {
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
print("error: \(error)")
captureSession.stopRunning()
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in
guard let self = self else {return}
guard let imageData = photo.fileDataRepresentation()
else {
print("NO image captured")
return
}
let image = UIImage(data: imageData)
self.delegate?.capturedImage(image: image)
}
}
}
I don't know what I am doing wrong.
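One thing that may be worth ruling out, since -11803 "Cannot Record" can have several causes: on older devices startRunning() can take noticeably longer, so a capture triggered soon after the view appears can race session startup. A minimal defensive sketch of the capture action, assuming cameraButtonPressed is the only capture trigger:
func cameraButtonPressed() {
    // Only attempt a capture once the session is actually running.
    guard let output = stillImageOutput, captureSession?.isRunning == true else {
        print("Capture session is not running yet")
        return
    }
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    output.capturePhoto(with: settings, delegate: self)
}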
This isn't just my observation; lots of people around me say the same, and you can find tons of feedback on the web.
The processing of images taken with the front-facing camera on the 15 (and I think the 14 before it) is so heavy-handed that I'm aware of people jumping to other phones. And they're right; the 15 exacerbates it even more. You can turn off HDR (a viewing thing), and you can prioritise speed over processing, but you really cannot turn this off. You can take a Live Photo and then choose a different frame, and the processing is less.
As a developer I look at that and think it's bonkers. It's just software, so why hasn't anyone produced a camera app that makes faces look good (not AI processing) from the front camera?
I could be all enthusiastic and say I will develop one, but it seems like a simple, obvious fix for Apple.
Having the settings so bad that I have friends returning their phones seems pretty bad. And as a photographer I would agree: there's a lot to love with Apple on the 15, including Log and ProRes, but a simple selfie produces such ugly results. That's an actual problem.
So I'm throwing it out there. What does everyone think?
cheers
Paul
As the title states, I would like to use MusicKit for Web instead of the Swift integration.
Is it necessary to enroll in the Apple Developer Program to get into Apple News Publisher and obtain my Apple News API credentials? Lastly, I need guidance on how to publish articles to Apple News using the News API. A detailed explanation of everything up to getting the Apple News API credentials (key, secret key, channel ID) would be much appreciated!
I have a navigation controller with two VCs. One VC is pushed onto the NavController, the other is presented on top of the NavController. The presented VC has a relatively complex animation involving a CAEmitter -> Animate birth rate down -> Fade out -> Remove. The pushed VC has an 'inputAccessoryView' and can become first responder.
The expected behavior is open presented VC -> Emitter Emits pretty pictures -> emitter stops gracefully.
The animation works perfectly UNLESS I open the pushed VC -> leave -> go to the presented VC. In that case, when I open the presented VC the emitter emits pretty pictures -> they never stop. (Please do not ask me how long it took to figure this much out 🤬😔)
The animation code in question is:
let animation = CAKeyframeAnimation(keyPath: #keyPath(CAEmitterLayer.birthRate))
animation.duration = 1
animation.timingFunction = CAMediaTimingFunction(name: .easeIn)
animation.values = [1, 0 , 0]
animation.keyTimes = [0, 0.5, 1]
animation.fillMode = .forwards
animation.isRemovedOnCompletion = false
emitter.beginTime = CACurrentMediaTime()
let now = Date()
CATransaction.begin()
CATransaction.setCompletionBlock { [weak self] in
print("fade beginning -- delta: \(Date().timeIntervalSince(now))")
let transition = CATransition()
transition.delegate = self
transition.type = .fade
transition.duration = 1
transition.timingFunction = CAMediaTimingFunction(name: .easeOut)
transition.setValue(emitter, forKey: kKey)
transition.isRemovedOnCompletion = false
emitter.add(transition, forKey: nil)
emitter.opacity = 0
}
emitter.add(animation, forKey: nil)
CATransaction.commit()
The delegate method is:
extension PresentedVC: CAAnimationDelegate {
func animationDidStop(_ anim: CAAnimation, finished flag: Bool) {
if let emitter = anim.value(forKey: kKey) as? CALayer {
emitter.removeAllAnimations()
emitter.removeFromSuperlayer()
} else {
}
}
}
Here is the pushed VC:
class PushedVC: UIViewController {
override var canBecomeFirstResponder: Bool {
return true
}
override var canResignFirstResponder: Bool {
return true
}
override var inputAccessoryView: UIView? {
return UIView()
}
}
So to reiterate: if I push PushedVC onto the navController, pop it, and then present PresentedVC, the emitters emit, but the call to emitter.add(animation, forKey: nil) is essentially ignored. The emitter just keeps emitting.
Here are some sample happy print statements from the completion block:
fade beginning -- delta: 1.016232967376709
fade beginning -- delta: 1.0033869743347168
fade beginning -- delta: 1.0054619312286377
fade beginning -- delta: 1.0080779790878296
fade beginning -- delta: 1.0088880062103271
fade beginning -- delta: 0.9923020601272583
fade beginning -- delta: 0.99943196773529
Here are my findings:
- The issue presents only when the pushed VC has an inputAccessoryView AND canBecomeFirstResponder is true.
- It does not matter if the inputAccessoryView is UIKit or custom, has size, is visible, or anything.
- When I dismiss PresentedVC the animation completes and the print statements show. Here are some unhappy print examples:
fade beginning -- delta: 5.003802061080933
fade beginning -- delta: 5.219511032104492
fade beginning -- delta: 5.73025906085968
fade beginning -- delta: 4.330522060394287
fade beginning -- delta: 4.786169052124023
- CATransaction.flush() does not fix anything.
- Removing the entire CATransaction block and just calling emitter.add(animation, forKey: nil) similarly does nothing; the birth rate decrease animation does not happen.
I am having trouble creating a simple demo project where the issue is reproducible (it is 100% reproducible in my code, the entirety of which I'm not going to link here) so I think getting a "solution" is unrealistic. What I would love is if anyone had any suggestions on where else to look? Any ways to debug CAAnimation? I think if I can solve the last bullet - emitter.add(animation, forKey: nil) called w/o a CATransaction - I can break this whole thing. Why would a CAAnimation added directly to the layer which is visible and doing stuff refuse to run?
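Not an answer, but one way to debug a CAAnimation that seems to be ignored is to check whether the emitter layer (or an ancestor) is in a paused or offset time space; if an ancestor's speed is 0 or its local time lags CACurrentMediaTime(), the birth-rate keyframes technically run but never appear to. A minimal sketch of such a timing dump, with purely illustrative names, that could be called right before emitter.add(animation, forKey: nil):
func dumpTiming(for layer: CALayer, label: String) {
    // Walk up the layer tree and print each ancestor's media timing.
    var current: CALayer? = layer
    while let l = current {
        let localTime = l.convertTime(CACurrentMediaTime(), from: nil)
        print("\(label): \(type(of: l)) speed=\(l.speed) timeOffset=\(l.timeOffset) beginTime=\(l.beginTime) localTime=\(localTime)")
        current = l.superlayer
    }
}
// Usage: dumpTiming(for: emitter, label: "before add")
Related to that, emitter.beginTime = CACurrentMediaTime() assumes the emitter's timeline matches absolute time; if the hierarchy's timing has been shifted, setting it to emitter.convertTime(CACurrentMediaTime(), from: nil) may behave differently, though that is only a guess here.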
Hello, can anybody help me with this? I am downloading a video to the file system, and when I give that URL to the player it gives me this error, but it only happens with m3u8; other formats like mp4 work fine locally. Please help!
{"error": {"code": -12865, "domain": "CoreMediaErrorDomain", "localizedDescription": "The operation couldn’t be completed. (CoreMediaErrorDomain error -12865.)", "localizedFailureReason": "", "localizedRecoverySuggestion": ""}, "target": 13367}
struct AlbumDetails : Hashable {
let artistId: String?
}
func fetchAlbumDetails(upc: String) async throws -> AlbumDetails {
    let request = MusicCatalogResourceRequest<Album>(matching: \.upc, equalTo: upc)
    let response = try await request.response()
    guard let album = response.items.first else {
        throw NSError(domain: "AlbumNotFound", code: 0, userInfo: nil)
    }
    return AlbumDetails(artistId: album.artists?.first?.id.rawValue)
}
// Call site:
do {
    let details = try await fetchAlbumDetails(upc: upc)
    print("Artist ID: \(details.artistId ?? "nil")")
} catch {
    print("Error fetching artist ID: \(error)")
}
With this function I can return nearly everything except the artist ID, so I know it's not a problem with the request, but there has to be a way to get the artist ID. If anyone has a solution to this I would really appreciate it.
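For what it's worth, the related artists collection on an Album returned by a catalog resource request is not always populated; MusicKit lets you load that relationship explicitly. A minimal sketch, assuming the with(_:) API is available on your deployment target:
func fetchArtistID(upc: String) async throws -> String? {
    let request = MusicCatalogResourceRequest<Album>(matching: \.upc, equalTo: upc)
    let response = try await request.response()
    guard let album = response.items.first else { return nil }
    // Explicitly load the album's related artists before reading them.
    let detailedAlbum = try await album.with([.artists])
    return detailedAlbum.artists?.first?.id.rawValue
}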
Hello! I'm trying to save videos asynchronously. I've already used performChanges without the completionHandler, but it didn't work. Can you give me an example? Consider that the variable with the file URL is named fileURL. What would this look like asynchronously?
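A minimal sketch of what that could look like with the async/await variant of performChanges (available on iOS 15 and later), assuming fileURL points to a readable video file and add access to the photo library has been granted:
import Photos

func saveVideoAsync(fileURL: URL) async throws {
    // The async variant throws on failure instead of using a completion handler.
    try await PHPhotoLibrary.shared().performChanges {
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
    }
    print("Video saved to the photo library")
}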
I have a custom USB device that includes a microphone. I can see the microphone on macOS when I plug in the device so I know that it is working with the kernel and AV subsystems. I can enumerate and reference the microphone using AVCaptureDevice but I have not been able to figure out how to use this device reference with AVAudioEngine. I'm trying to accomplish two things with this microphone.
I want to stream audio from the microphone and have it rendered to the speakers on my MacBook Pro.
I want to capture sound data from the microphone and forward it to a live streaming API.
To my mind, from what I've read, I need AVAudioEngine to do this but I'm having trouble determining from the documentation just how to go about it on macOS. It seems that there is a lot more information for iOS or iPadOS but since USB-C support is sparsely documented on those operating systems, I'm focusing on the desktop (macOS) for now.
Can I convert an AVCaptureDevice into an audio input for AVAudioEngine? If not, how can I accomplish what I'm trying to do using whatever is available in AVFoundation?
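As far as I know there is no direct way to hand an AVCaptureDevice to AVAudioEngine, but on macOS you can usually point the engine's input node at a specific Core Audio device: match the capture device's uniqueID to a Core Audio device UID, then set kAudioOutputUnitProperty_CurrentDevice on the input node's underlying audio unit. A rough, untested sketch of that idea follows; usbMicUniqueID is a placeholder for the uniqueID of your enumerated AVCaptureDevice, and the UID matching is an assumption about how the USB device is exposed:
import AVFoundation
import AudioToolbox
import CoreAudio

// Find the Core Audio device whose UID matches a given string.
// For many devices this is the same string as AVCaptureDevice.uniqueID, but that is an assumption.
func audioDeviceID(matchingUID uid: String) -> AudioDeviceID? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &dataSize) == noErr else { return nil }
    var deviceIDs = [AudioDeviceID](repeating: 0,
                                    count: Int(dataSize) / MemoryLayout<AudioDeviceID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &dataSize, &deviceIDs) == noErr else { return nil }
    for deviceID in deviceIDs {
        var uidAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyDeviceUID,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var deviceUID: CFString = "" as CFString
        var uidSize = UInt32(MemoryLayout<CFString>.size)
        if AudioObjectGetPropertyData(deviceID, &uidAddress, 0, nil, &uidSize, &deviceUID) == noErr,
           (deviceUID as String) == uid {
            return deviceID
        }
    }
    return nil
}

let engine = AVAudioEngine()
let usbMicUniqueID = "<your AVCaptureDevice.uniqueID>" // placeholder

if let deviceID = audioDeviceID(matchingUID: usbMicUniqueID),
   let inputUnit = engine.inputNode.audioUnit {
    var mutableDeviceID = deviceID
    // Point the engine's input node at the USB microphone.
    AudioUnitSetProperty(inputUnit,
                         kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global,
                         0,
                         &mutableDeviceID,
                         UInt32(MemoryLayout<AudioDeviceID>.size))
}

let inputFormat = engine.inputNode.outputFormat(forBus: 0)
// 1. Monitor the microphone through the default output (the MacBook speakers).
engine.connect(engine.inputNode, to: engine.mainMixerNode, format: inputFormat)
// 2. Tap the input to forward PCM buffers to a live streaming API.
engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { buffer, _ in
    // Hand `buffer` off to the streaming client here.
}
do {
    try engine.start()
} catch {
    print("Failed to start engine: \(error)")
}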
My app uses camera and photo library. I found that if a user follows certain steps, they will no longer be able to change the photo permissions for my app in the Settings app.
The steps are as follows
1. Press the camera button in the app to launch the camera.
2. Take a picture with camera permissions granted.
3. Grant ".addOnly" permission to the photo library.
4. Press the photo library button in the app to read the photo library.
5. Deny ".readWrite" permission to the photo library.
After step 5, the Settings app only shows items to switch ".addOnly" permissions, but not ".readWrite" permissions.
I am aware that in iOS 14 or later, the permission required after a photo is taken with the camera should be ".addOnly". Therefore, I suspect that this problem is occurring in other apps as well.
So far I have devised my app to deal with this problem, but is this the expected behavior of the Settings app? If so, how can I avoid this problem?
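For reference, the two photo library access levels involved in these steps are requested and checked independently; a minimal sketch (iOS 14+), just to make the distinction explicit:
import Photos

// Request add-only access (what step 3 grants after taking a photo).
PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
    print("addOnly status: \(status.rawValue)")
}
// Request read/write access (what step 5 denies for the library browser).
PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
    print("readWrite status: \(status.rawValue)")
}
// Check the current status of each level without prompting.
let addOnlyStatus = PHPhotoLibrary.authorizationStatus(for: .addOnly)
let readWriteStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite)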
Since iOS 17.2, the video player in Safari becomes black if I jump forward in an HLS video stream. I only hear the sound of the video. If I close full screen and reopen it, the video continues normally.
I checked if the source meets all the requirements mentioned here and it does.
Does anybody have the same issue or maybe a solution for this problem?
Per the FairPlay Streaming programming guide, the SPC includes a specific TLLV that provides the state of the media content playback, and the total value length of this TLLV is 16 in decimal.
Here I'm trying to retrieve the playback state, which is in the 20-23 byte range.
byte[] mediaPlaybackStateBlock = getBlock(MEDIA_PLAYBACK_STATE).getValueData();
playbackState = Arrays.copyOfRange(mediaPlaybackStateBlock, 20, 24);
I end up with the error: arraycopy: length -4 is negative.
I'm a bit confused about how to retrieve the playback state from the 20-23 byte range when the value length is just 16.
Kindly clarify.
Is it possible using the MusicKit APIs to access the About information displayed on an artist page in Apple Music?
I hoped Artist.editorialNotes would give me that information, but there is scarce information in there. Even for Taylor Swift, only editorialNotes.short displays brief info: "The genre-defying singer-songwriter is the voice of a generation."
If it is currently not possible, are there plans for it in the future?
Also, with the above in mind, and having never seen any editorialNotes for a song, is it safe to assume editorialNotes are mainly used for albums?
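In case it helps with comparing, a minimal sketch of checking both editorial notes fields on an artist fetched from the catalog (assuming a valid artist ID is already at hand):
import MusicKit

func printEditorialNotes(artistID: MusicItemID) async throws {
    let request = MusicCatalogResourceRequest<Artist>(matching: \.id, equalTo: artistID)
    let response = try await request.response()
    guard let artist = response.items.first else { return }
    // Both fields are optional; as noted above, often only `short` is populated for artists.
    print("short: \(artist.editorialNotes?.short ?? "nil")")
    print("standard: \(artist.editorialNotes?.standard ?? "nil")")
}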
Hello,
I came across the Object Capture for iOS example from WWDC23, which utilizes the LiDAR sensor.
However, I’m interested in using the TrueDepth camera system instead.
What I have tried is to save depth photos (.HEIC) to the Images/ folder (based on modifying the example below), which is hopefully used by the Photogrammetry session. But I haven’t been successful so far in starting the 3D reconstruction.
Could there be something I’ve missed, or is the Object Capture sample code exclusively designed for LiDAR? Or maybe .HEIC is not the right format to use?
Thank you for your assistance.
import AVFoundation
import UIKit
class DepthPhotoCapture: NSObject, AVCapturePhotoCaptureDelegate {
let photoOutput = AVCapturePhotoOutput()
let captureSession = AVCaptureSession()
override init() {
super.init()
setupCaptureSession()
}
func setupCaptureSession() {
// Get the front camera (TrueDepth camera)
guard let frontCamera = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front) else {
print("Unable to access front camera!")
return
}
do {
// Create an input object from the camera
let input = try AVCaptureDeviceInput(device: frontCamera)
// Add the input to the capture session
captureSession.addInput(input)
} catch {
print("Unable to create AVCaptureDeviceInput: \(error)")
}
// Check if depth data capture is supported
if photoOutput.isDepthDataDeliverySupported {
// Enable depth data capture
photoOutput.isDepthDataDeliveryEnabled = true
}
// Add the photo output to the capture session
captureSession.addOutput(photoOutput)
// Start the capture session
captureSession.startRunning()
}
func captureDepthPhoto() {
// Create a photo settings object
let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
// Capture a photo with depth data
photoOutput.capturePhoto(with: photoSettings, delegate: self)
}
// Implement the AVCapturePhotoCaptureDelegate method
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
guard let imageData = photo.fileDataRepresentation() else {
print("Error while generating image from photo capture data.")
return
}
// Get the documents directory
let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
// Append the image directory; create it first, because write(to:) fails if the folder does not exist
let imagesDirectory = documentsDirectory.appendingPathComponent("Images/")
try? FileManager.default.createDirectory(at: imagesDirectory, withIntermediateDirectories: true)
// Append a unique image name
let fileURL = imagesDirectory.appendingPathComponent(UUID().uuidString).appendingPathExtension("heic")
do {
// Write the image data to the file
try imageData.write(to: fileURL)
print("Saved photo with depth data to \(fileURL)")
} catch {
print("Failed to write the image data to disk: \(error)")
}
}
}
- (void)cameraDevice:(ICCameraDevice*)camera
didReceiveMetadata:(NSDictionary* _Nullable)metadata
forItem:(ICCameraItem*)item
error:(NSError* _Nullable) error API_AVAILABLE(ios(13.0)){
NSLog(@"metadata = %@",metadata);
if (item) {
ICCameraFile *file = (ICCameraFile *)item;
NSURL *downloadsDirectoryURL = [[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask].firstObject;
downloadsDirectoryURL = [downloadsDirectoryURL URLByAppendingPathComponent:@"Downloads"];
// Make sure the destination folder exists before requesting the download.
[[NSFileManager defaultManager] createDirectoryAtURL:downloadsDirectoryURL withIntermediateDirectories:YES attributes:nil error:nil];
NSDictionary *downloadOptions = @{ ICDownloadsDirectoryURL: downloadsDirectoryURL,
ICSaveAsFilename: item.name,
ICOverwrite: @YES,
ICDownloadSidecarFiles: @YES
};
[self.cameraDevice requestDownloadFile:file options:downloadOptions downloadDelegate:self didDownloadSelector:@selector(didDownloadFile:error:options:contextInfo:) contextInfo:nil];
}
}
- (void)didDownloadFile:(ICCameraFile *)file
error:(NSError* _Nullable)error
options:(NSDictionary<NSString*, id>*)options
contextInfo:(void* _Nullable) contextInfo API_AVAILABLE(ios(13.0)){
if (error) {
NSLog(@"Download failed with error: %@", error);
}
else {
NSLog(@"Download completed for file: %@", file);
}
}
I don't know what's wrong, or whether this is even the right way to get the pictures from the camera. I hope someone can help me.
I found that the app reported a crash from a pure virtual function call, which I could not reproduce.
A third-party library is referenced:
https://github.com/lincf0912/LFPhotoBrowser
It implements smearing, blurring, and mosaic processing of images.
Crash code:
if (![LFSmearBrush smearBrushCache]) {
[_edit_toolBar setSplashWait:YES index:LFSplashStateType_Smear];
CGSize canvasSize = AVMakeRectWithAspectRatioInsideRect(self.editImage.size, _EditingView.bounds).size;
[LFSmearBrush loadBrushImage:self.editImage canvasSize:canvasSize useCache:YES complete:^(BOOL success) {
[weakToolBar setSplashWait:NO index:LFSplashStateType_Smear];
}];
}
- (UIImage *)LFBB_patternGaussianImageWithSize:(CGSize)size orientation:(CGImagePropertyOrientation)orientation filterHandler:(CIFilter *(^ _Nullable )(CIImage *ciimage))filterHandler
{
CIContext *context = LFBrush_CIContext;
NSAssert(context != nil, @"This method must be called using the LFBrush class.");
CIImage *midImage = [CIImage imageWithCGImage:self.CGImage];
midImage = [midImage imageByApplyingTransform:[self LFBB_preferredTransform]];
midImage = [midImage imageByApplyingTransform:CGAffineTransformMakeScale(size.width/midImage.extent.size.width,size.height/midImage.extent.size.height)];
if (orientation > 0 && orientation < 9) {
midImage = [midImage imageByApplyingOrientation:orientation];
}
// Begin processing the image
CIImage *result = midImage;
if (filterHandler) {
CIFilter *filter = filterHandler(midImage);
if (filter) {
result = filter.outputImage;
}
}
CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
UIImage *image = [UIImage imageWithCGImage:outImage];
CGImageRelease(outImage);
return image;
}
This line triggers the crash:
CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
b9c90c7bbf8940e5aabed7f3f62a65a2-symbolicated.crash