Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

CGImageDestinationAddImageFromSource causes issues in iOS 18 / macOS 15
There seems to be an issue in iOS 18 / macOS 15 related to image thumbnail generation and/or HEIC. We are transcoding JPEG images to HEIC when they are loaded into our app (HEIC has a much lower memory footprint when loaded by Core Image, for some reason). We use Image I/O for that:

```swift
guard let source = CGImageSourceCreateWithURL(inputURL, nil),
      let destination = CGImageDestinationCreateWithURL(outputURL, UTType.heic.identifier as CFString, 1, nil) else {
    throw <error>
}
let primaryImageIndex = CGImageSourceGetPrimaryImageIndex(source)
CGImageDestinationAddImageFromSource(destination, source, primaryImageIndex, nil)
```

When we use CGImageDestinationAddImageFromSource, we get the following warnings on the console:

```
createImage:1445: *** ERROR: bad image size (0 x 0) rb: 0
CGImageSourceCreateThumbnailAtIndex:5195: *** ERROR: CGImageSourceCreateThumbnailAtIndex[0] - 'HJPG' - failed to create thumbnail [-67] {alw:-1, abs: 1 tra:-1 max:4620}
writeImageAtIndex:1025: ⭕️ ERROR: '<app>' is trying to save an opaque image (4620x3466) with 'AlphaPremulLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
```

It seems that CGImageDestinationAddImageFromSource is trying to extract/create a thumbnail, which fails somehow. I rewrote the last part like this:

```swift
guard let primaryImage = CGImageSourceCreateImageAtIndex(source, primaryImageIndex, nil),
      let properties = CGImageSourceCopyPropertiesAtIndex(source, primaryImageIndex, nil) else {
    throw <error>
}
CGImageDestinationAddImage(destination, primaryImage, properties)
```

This doesn't cause any warnings. An issue that might be related has been reported here. I've also heard from others having issues with CGImageSourceCreateThumbnailAtIndex.
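For reference, a minimal end-to-end sketch of the workaround described above (decode the primary image explicitly, re-add it, then finalize). The function name and the TranscodeError type are placeholders, not from the original post:

```swift
import ImageIO
import UniformTypeIdentifiers

enum TranscodeError: Error { case cannotOpen, cannotWrite }  // hypothetical error type

/// Transcodes the image at `inputURL` to HEIC at `outputURL`, avoiding
/// CGImageDestinationAddImageFromSource and its thumbnail code path.
func transcodeToHEIC(from inputURL: URL, to outputURL: URL) throws {
    guard let source = CGImageSourceCreateWithURL(inputURL as CFURL, nil) else {
        throw TranscodeError.cannotOpen
    }
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL,
                                                            UTType.heic.identifier as CFString,
                                                            1, nil) else {
        throw TranscodeError.cannotWrite
    }
    let index = CGImageSourceGetPrimaryImageIndex(source)
    guard let image = CGImageSourceCreateImageAtIndex(source, index, nil) else {
        throw TranscodeError.cannotOpen
    }
    // Carry the original properties (EXIF, orientation, ...) over to the new file.
    let properties = CGImageSourceCopyPropertiesAtIndex(source, index, nil)
    CGImageDestinationAddImage(destination, image, properties)
    guard CGImageDestinationFinalize(destination) else {
        throw TranscodeError.cannotWrite
    }
}
```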
Replies: 0 · Boosts: 0 · Views: 116 · Activity: 6d
AV1 Hardware Decoding
Recently I've been trying to play some AV1-encoded streams on my iPhone 15 Pro Max. First, I check for hardware support:

```swift
VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1) // YES
```

Then I need to create a CMFormatDescription in order to pass it into a VTDecompressionSession. I've tried the following:

```
{
  mediaType:'vide'
  mediaSubType:'av01'
  mediaSpecific: {
    codecType: 'av01'
    dimensions: 394 x 852
  }
  extensions: {{
    CVFieldCount = 1;
    CVImageBufferChromaLocationBottomField = Left;
    CVImageBufferChromaLocationTopField = Left;
    CVPixelAspectRatio = {
      HorizontalSpacing = 1;
      VerticalSpacing = 1;
    };
    FullRangeVideo = 0;
  }}
}
```

but VTDecompressionSessionCreate gives me error -8971 (codecExtensionNotFoundErr, I assume). So it has something to do with the extensions dictionary? I can't find anywhere which set of extensions is necessary for it to work 😿. VideoToolbox has convenient functions for creating descriptions of AVC and HEVC streams (CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets), but not for AV1. As of today I am using Xcode 15.0 with the iOS 17.0 SDK.
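A sketch of the direction I'd try, by analogy with how AVC/HEVC format descriptions carry an 'avcC'/'hvcC' atom: pack the stream's AV1CodecConfigurationRecord into an 'av1C' sample-description extension atom. Whether this is sufficient for VTDecompressionSessionCreate on a given OS version is an assumption, and `av1CBytes` must come from your container or sequence header:

```swift
import CoreMedia
import VideoToolbox

/// Builds a CMVideoFormatDescription for an AV1 stream.
/// `av1CBytes` is the AV1CodecConfigurationRecord (the 'av1C' box payload) taken
/// from the container (e.g. the 'av01' sample entry of an MP4) -- a placeholder here.
func makeAV1FormatDescription(av1CBytes: Data, width: Int32, height: Int32) -> CMVideoFormatDescription? {
    let atoms: [String: Any] = ["av1C": av1CBytes]
    let extensions: [String: Any] = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms as String: atoms
    ]
    var formatDescription: CMVideoFormatDescription?
    let status = CMVideoFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                codecType: kCMVideoCodecType_AV1,
                                                width: width,
                                                height: height,
                                                extensions: extensions as CFDictionary,
                                                formatDescriptionOut: &formatDescription)
    return status == noErr ? formatDescription : nil
}
```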
Replies: 8 · Boosts: 3 · Views: 5.8k · Activity: Oct ’23
Failure of AudioUnitSetProperty when using MacCatalyst (works on macOS)
I was trying to set a custom audio output device for generated audio on Mac Catalyst. While using

```swift
let status = AudioUnitSetProperty(outputUnit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global,
                                  0,
                                  &outputDeviceID,
                                  UInt32(MemoryLayout<AudioDeviceID>.size))
```

kAudioOutputUnitProperty_CurrentDevice is invalid, and status = -10879, indicating an error.

STEPS TO REPRODUCE
1. Set Run Destination to macOS and run the program. "AudioUnitSetProperty: 0" should be printed, indicating it works fine.
2. Set Run Destination to Mac Catalyst and run the program. "Error setting output device: -10879" should be printed, indicating an error.
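For completeness, a self-contained sketch of the reproduction case as I read it, assuming a HAL output unit on macOS and an already-known outputDeviceID; whether that unit honours kAudioOutputUnitProperty_CurrentDevice under Mac Catalyst is exactly what is in question here:

```swift
import AudioToolbox
import CoreAudio

// Minimal sketch (macOS): create a HAL output unit and point it at a specific device.
// -10879 is kAudioUnitErr_InvalidProperty, which matches the Catalyst failure above.
func setOutputDevice(_ outputDeviceID: AudioDeviceID) -> OSStatus {
    var description = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                                 componentSubType: kAudioUnitSubType_HALOutput,
                                                 componentManufacturer: kAudioUnitManufacturer_Apple,
                                                 componentFlags: 0,
                                                 componentFlagsMask: 0)
    guard let component = AudioComponentFindNext(nil, &description) else { return -1 }

    var outputUnit: AudioUnit?
    var status = AudioComponentInstanceNew(component, &outputUnit)
    guard status == noErr, let unit = outputUnit else { return status }

    var deviceID = outputDeviceID
    status = AudioUnitSetProperty(unit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global,
                                  0,
                                  &deviceID,
                                  UInt32(MemoryLayout<AudioDeviceID>.size))
    return status
}
```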
Replies: 0 · Boosts: 0 · Views: 86 · Activity: 6d
EXIF creation date of ICCameraFile always nil?
I am using ImageCaptureCore to access and (sometimes) download media files from a digital camera connected via USB (either to a Mac or to an iOS device with an Apple Lightning to USB 3 camera adapter). This works very well in general, but what puzzles me is that the ICCameraFile's EXIF creation/modification date always returns nil. I can access the ICCameraItem's creation/modification date instead, which, as it says in the documentation, "usually [is] the same as its EXIF creation date", but, well, not always. Generally the EXIF tags are more reliable than the file dates; the modification date in particular is easily messed up when copying files. As for my cameras, they show the stable EXIF date on their display, so for consistency I would prefer to use the same in my app. Is there a way to get it without downloading the image from the camera and reading it from the file? Does it possibly depend on the brand of camera (I mostly have Canon) whether ICCameraFile.exifCreationDate is ever populated or always nil? For a thumb drive with a DCIM folder, which is treated just like a camera, it is also nil.
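A hedged workaround sketch, assuming you do end up downloading the file in the cases where the EXIF date really matters: fall back from exifCreationDate to the item's creationDate, and read kCGImagePropertyExifDateTimeOriginal from the downloaded file with Image I/O. Whether exifCreationDate is ever populated for a given camera is exactly the open question above:

```swift
import ImageCaptureCore
import ImageIO

/// Prefer the EXIF creation date when ImageCaptureCore provides it,
/// otherwise fall back to the item's file-system creation date.
func bestCreationDate(for file: ICCameraFile) -> Date? {
    file.exifCreationDate ?? file.creationDate
}

/// After downloading, the authoritative EXIF DateTimeOriginal can be read from the file itself.
func exifDateTimeOriginal(of downloadedFileURL: URL) -> String? {
    guard let source = CGImageSourceCreateWithURL(downloadedFileURL as CFURL, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any],
          let exif = properties[kCGImagePropertyExifDictionary as String] as? [String: Any] else {
        return nil
    }
    return exif[kCGImagePropertyExifDateTimeOriginal as String] as? String
}
```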
Replies: 3 · Boosts: 0 · Views: 218 · Activity: 2w
AVAssetWriter & AVTimedMetadataGroup in AVMultiCamPiP
I'm trying to add metadata every second during video capture in the Swift sample app "AVMultiCamPiP": a simple string that changes every second, with a write function triggered by a Timer. I can't get it to work; no matter how I arrange it, it always ends up with the error "Cannot create a new metadata adaptor with an asset writer input that has already started writing".

This is the setup section:

```swift
// Add a metadata input
let assetWriterMetaDataInput = AVAssetWriterInput(mediaType: .metadata,
                                                  outputSettings: nil,
                                                  sourceFormatHint: AVTimedMetadataGroup().copyFormatDescription())
assetWriterMetaDataInput.expectsMediaDataInRealTime = true
assetWriter.add(assetWriterMetaDataInput)
self.assetWriterMetaDataInput = assetWriterMetaDataInput
```

This is the timed metadata creation, which gets triggered every second:

```swift
let newNoteMetadataItem = AVMutableMetadataItem()
newNoteMetadataItem.value = "Some string" as (NSCopying & NSObjectProtocol)?

let metadataItemGroup = AVTimedMetadataGroup(items: [newNoteMetadataItem],
                                             timeRange: CMTimeRangeMake(start: CMClockGetTime(CMClockGetHostTimeClock()),
                                                                        duration: CMTime.invalid))
movieRecorder?.recordMetaData(meta: metadataItemGroup)
```

This function is supposed to add the metadata to the track:

```swift
func recordMetaData(meta: AVTimedMetadataGroup) {
    guard isRecording,
          let assetWriter = assetWriter,
          assetWriter.status == .writing,
          let input = assetWriterMetaDataInput,
          input.isReadyForMoreMediaData else {
        return
    }

    let metadataAdaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
    metadataAdaptor.append(meta)
}
```

I have an older code example in Objective-C which works OK, but it uses AVCaptureMetadataInput's appendTimedMetadataGroup and writes to an identifier called "quickTimeMetadataLocationNote". I'd like to do something similar in the above Swift code. All suggestions are appreciated!
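The error text suggests the adaptor has to exist before the writer starts writing. A sketch of how I'd restructure it, creating the adaptor once during setup and building the sourceFormatHint from a metadata specification; the identifier here is a hypothetical custom one, not the quickTimeMetadataLocationNote identifier from the Objective-C example:

```swift
import AVFoundation
import CoreMedia

// Hypothetical custom identifier for the per-second note.
let noteIdentifier = "mdta/com.example.multicampip.note"

/// Call during writer setup, BEFORE assetWriter.startWriting():
/// creating the adaptor afterwards is what triggers the "already started writing" error.
func addMetadataTrack(to assetWriter: AVAssetWriter) -> AVAssetWriterInputMetadataAdaptor {
    let spec: [String: Any] = [
        kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier as String: noteIdentifier,
        kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType as String: kCMMetadataBaseDataType_UTF8
    ]
    var format: CMFormatDescription?
    CMMetadataFormatDescriptionCreateWithMetadataSpecifications(allocator: kCFAllocatorDefault,
                                                                metadataType: kCMMetadataFormatType_Boxed,
                                                                metadataSpecifications: [spec] as CFArray,
                                                                formatDescriptionOut: &format)
    let input = AVAssetWriterInput(mediaType: .metadata, outputSettings: nil, sourceFormatHint: format)
    input.expectsMediaDataInRealTime = true
    assetWriter.add(input)
    return AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)  // keep this adaptor around
}

/// Called by the Timer once per second, reusing the adaptor created above.
func appendNote(_ text: String, to adaptor: AVAssetWriterInputMetadataAdaptor) {
    guard adaptor.assetWriterInput.isReadyForMoreMediaData else { return }
    let item = AVMutableMetadataItem()
    item.identifier = AVMetadataIdentifier(rawValue: noteIdentifier)
    item.dataType = kCMMetadataBaseDataType_UTF8 as String
    item.value = text as NSString
    let group = AVTimedMetadataGroup(items: [item],
                                     timeRange: CMTimeRange(start: CMClockGetTime(CMClockGetHostTimeClock()),
                                                            duration: .invalid))
    adaptor.append(group)
}
```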
Replies: 0 · Boosts: 0 · Views: 107 · Activity: 6d
How to Capture 48MP Photos with Ultra-Wide Camera During AR Session on iPhone 16 Pro?
Hello Developers,

I am working on an app where I need to capture 48MP high-resolution photos using the ultra-wide camera of the iPhone 16 Pro while an AR session is running. The goal is to take these photos without interrupting or impacting the AR session, which uses the main wide-angle camera. Despite extensive testing and various approaches, we have been unable to achieve the desired functionality.

What We Have Tried So Far

1. Using AVCaptureMultiCamSession:
• We attempted to leverage AVCaptureMultiCamSession to simultaneously use the wide-angle camera for ARKit and the ultra-wide camera for photo capture.
• However, this approach resulted in resource conflicts, with errors such as Cannot Record (OSStatus error -16409) and dropped frames. Additionally, the ultra-wide camera feed would frequently freeze or stop.

2. Dedicated AVCaptureSession for the ultra-wide camera:
• We separated the ultra-wide camera into its own AVCaptureSession while letting ARKit exclusively use the wide-angle camera.
• This setup showed initial promise, but the ultra-wide camera feed would still stop running after a very short time (under one second).
• Debugging logs indicated potential system-level interruptions, possibly due to resource prioritization by iOS.

3. Notification-based monitoring:
• We implemented monitoring for session interruptions (AVCaptureSession.wasInterruptedNotification), but this provided limited insight into the exact cause of the session stopping.
• We suspect iOS is de-prioritizing the ultra-wide camera session due to resource management policies or conflicts with ARKit.

4. Adjusting camera configurations:
• We attempted to simplify both the ARKit and AVCaptureSession configurations by reducing features like depth data and by using lower session presets for video capture. However, the core issue persisted.

The Core Problem
• The ultra-wide camera session frequently stops or freezes when used alongside ARKit.
• Capturing high-resolution 48MP photos during the AR session is critical to the functionality of our app.

Question: Has anyone successfully implemented a similar setup? Specifically:
• Capturing 48MP photos with the ultra-wide camera while ARKit is actively using the main camera.
• Avoiding conflicts between ARKit and AVCaptureSession for the ultra-wide camera.

Any insights, suggestions, or alternative approaches would be greatly appreciated. Thank you in advance for your help! 😊
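Not an answer for the ultra-wide camera specifically, but possibly useful context for others hitting the same session conflicts: since iOS 16, ARKit can return a high-resolution still from the camera it already owns, which avoids running a second capture session entirely. A minimal sketch, assuming a world-tracking configuration:

```swift
import ARKit

// ARKit's built-in high-resolution frame capture (iOS 16+). It uses the wide camera
// that ARKit already owns, so it does not solve the ultra-wide requirement above,
// but it sidesteps the multi-session resource conflicts.
func configureAndCapture(session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if let format = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
        configuration.videoFormat = format
    }
    session.run(configuration)

    // Later, e.g. on a button tap:
    session.captureHighResolutionFrame { frame, error in
        guard let frame = frame else {
            print("High-resolution capture failed: \(String(describing: error))")
            return
        }
        // frame.capturedImage is a CVPixelBuffer at the format's full resolution.
        print("Captured \(CVPixelBufferGetWidth(frame.capturedImage)) x \(CVPixelBufferGetHeight(frame.capturedImage))")
    }
}
```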
Replies: 0 · Boosts: 0 · Views: 116 · Activity: 6d
iOS 18.0 and above systems, Control Center Video Effects bug
My app is not a VoIP application. I use devices that support "Character Centering" (Center Stage), such as iPad 10 or iPad 13.18, running iOS 18.0, 18.1, or 18.1.1. When entering live classes, the "Character Centering" button does not appear in Control Center, as shown in the following picture. However, if Voice over IP is selected under Background Modes in the project and the app is run again, the issue no longer reproduces, even after uninstalling and reinstalling the app. Could you please help me investigate the reason? Thank you!
Replies: 0 · Boosts: 0 · Views: 107 · Activity: 1w
Any API for AirPods Pro 2
Hi, may I ask if there is any iOS API or similar way to switch between the transparency and ANC modes of AirPods Pro 2? I know one option is to configure a Shortcut and activate it from the app, but that requires an inconvenient manual setup step. May I ask for any other advice? Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 50 · Activity: 1w
API to switch the mode of Airpods Pro 2
Hi, may I ask if there is any API or similar way, from inside an iOS app, to set up or switch between the transparency and ANC modes of the AirPods Pro 2? One way is to set up a Shortcut and activate that shortcut from the app, but it requires the user to configure the shortcut manually, which is not convenient. Thanks for any advice on this!
Replies: 0 · Boosts: 0 · Views: 49 · Activity: 1w
AVMIDIPlayer not working for all instruments
Hi, I am testing AVMIDIPlayer in order to replace classes written on top of AVAudioEngine with callback functions sending MIDI events. To test, I use an NSMutableData filled with:
• the MIDI header
• a track for the time signature
• a track containing a few MIDI events

I then create an instance of AVMIDIPlayer using the data. Everything works fine for some instruments (00 … 20) or 90, but not for other instruments (60, 70, …). The MIDI header and the time-signature track are based on the MIDI.org sample, https://midi.org/standard-midi-files-specification RP-001_v1-0_Standard_MIDI_Files_Specification_96-1-4.pdf. The MIDI events are:

```objc
UInt8 trkEvents[] = {
    0x00, 0xC0, instrument,   // Tubular bell
    0x00, 0x90, 0x4C, 0xA0,   // Note 4C
    0x81, 0x40, 0x48, 0xB0,   // TS + Note 48
    0x00, 0xFF, 0x2F, 0x00};  // End
for (UInt8 i=0; i<3; i++) {
    printf("0x%X ", trkEvents[i]);
}
printf("\n");
[_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];
```

A template application is used to change the instrument in an NSTextField. I was wondering if specifics are required for some instruments?

The interface header:

```objc
#import <AVFoundation/AVFoundation.h>

NS_ASSUME_NONNULL_BEGIN

@interface TestMIDIPlayer : NSObject

@property (retain) NSMutableData *midiTempData;
@property (retain) NSURL *midiTempURL;
@property (retain) AVMIDIPlayer *midiPlayer;

- (void)createTest:(UInt8)instrument;

@end

NS_ASSUME_NONNULL_END
```

The implementation:

```objc
#pragma mark -

typedef struct _MThd {
    char magic[4];         // = "MThd"
    UInt8 headerSize[4];   // 4 Bytes, MSB first. Always = 00 00 00 06
    UInt8 format[2];       // 16 bit, MSB first. 0; 1; 2 Use 1
    UInt8 trackCount[2];   // 16 bit, MSB first.
    UInt8 division[2];
} MThd;

MThd MThdMake(void);
void MThdPrint(MThd *mthd);

typedef struct _MIDITrackHeader {
    char magic[4];         // = "MTrk"
    UInt8 trackLength[4];  // Ignore, because it is occasionally wrong.
} Track;

Track TrackMake(void);
void TrackPrint(Track *track);

#pragma mark - C Functions

MThd MThdMake(void) {
    MThd mthd = {
        "MThd",
        {0, 0, 0, 6},
        {0, 1},
        {0, 0},
        {0, 0}
    };
    MThdPrint(&mthd);
    return mthd;
}

void MThdPrint(MThd *mthd) {
    char *ptr = (char *)mthd;
    for (int i=0; i<sizeof(MThd); i++, ptr++) {
        printf("%X", *ptr);
    }
    printf("\n");
}

Track TrackMake(void) {
    Track track = {
        "MTrk",
        {0, 0, 0, 0}
    };
    TrackPrint(&track);
    return track;
}

void TrackPrint(Track *track) {
    char *ptr = (char *)track;
    for (int i=0; i<sizeof(Track); i++, ptr++) {
        printf("%X", *ptr);
    }
    printf("\n");
}

@implementation TestMIDIPlayer

- (id)init {
    self = [super init];
    printf("%s %p\n", __FUNCTION__, self);
    if (self) {
        _midiTempData = nil;
        _midiTempURL = [[NSURL alloc]initFileURLWithPath:@"midiTempUrl.mid"];
        _midiPlayer = nil;
        [self createTest:0x0E];
        NSLog(@"_midiTempData:%@", _midiTempData);
    }
    return self;
}

- (void)dealloc {
    [_midiTempData release];
    [_midiTempURL release];
    [_midiPlayer release];
    [super dealloc];
}

- (void)createTest:(UInt8)instrument {
    /* MIDI Header */
    [_midiTempData release];
    _midiTempData = nil;
    _midiTempData = [[NSMutableData alloc]initWithCapacity:1024];
    MThd mthd = MThdMake();
    MThd *ptrMthd = &mthd;
    ptrMthd->trackCount[1] = 2;
    ptrMthd->division[1] = 0x60;
    MThdPrint(ptrMthd);
    [_midiTempData appendBytes:ptrMthd length:sizeof(MThd)];

    /* Track Header Time signature */
    Track track = TrackMake();
    Track *ptrTrack = &track;
    ptrTrack->trackLength[3] = 0x14;
    [_midiTempData appendBytes:ptrTrack length:sizeof(track)];
    UInt8 trkEventsTS[] = {
        0x00, 0xFF, 0x58, 0x04, 0x04, 0x04, 0x18, 0x08,  // Time signature 4/4; 18; 08
        0x00, 0xFF, 0x51, 0x03, 0x07, 0xA1, 0x20,        // tempo 0x7A120 = 500000
        0x83, 0x00, 0xFF, 0x2F, 0x00 };                  // End
    [_midiTempData appendBytes:trkEventsTS length:sizeof(trkEventsTS)];

    /* Track Header Track events */
    ptrTrack->trackLength[3] = 0x0F;
    [_midiTempData appendBytes:ptrTrack length:sizeof(track)];
    UInt8 trkEvents[] = {
        0x00, 0xC0, instrument,   // Tubular bell
        0x00, 0x90, 0x4C, 0xA0,   // Note 4C
        0x81, 0x40, 0x48, 0xB0,   // TS + Note 48
        0x00, 0xFF, 0x2F, 0x00};  // End
    for (UInt8 i=0; i<3; i++) {
        printf("0x%X ", trkEvents[i]);
    }
    printf("\n");
    [_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];
    [_midiTempData writeToURL:_midiTempURL atomically:YES];

    dispatch_async(dispatch_get_main_queue(), ^{
        if (!_midiPlayer.isPlaying)
            [self midiPlay];
    });
}

- (void)midiPlay {
    NSError *error = nil;
    _midiPlayer = [[AVMIDIPlayer alloc]initWithData:_midiTempData soundBankURL:nil error:&error];
    if (_midiPlayer) {
        [_midiPlayer prepareToPlay];
        [_midiPlayer play:^{
            printf("Midi Player ended\n");
            [_midiPlayer stop];
            [_midiPlayer release];
            _midiPlayer = nil;
        }];
    }
}

@end
```

Call from the AppDelegate:

```objc
- (IBAction)actionInstrument:(NSTextField*)sender {
    [_testMidiplayer createTest:(UInt8)sender.intValue];
}
```
Replies: 0 · Boosts: 0 · Views: 170 · Activity: 1w
AVAudioEngine - How to archive configured nodes to file?
I’m looking to add DAW-like capabilities to my macOS music app, and AVAudioEngine seems like the right tool for the job. However, I haven’t been able to find any documentation on how to save the user’s AVAudioEngine configuration—specifically the connections between nodes and the internal states of each node—to a file. Does AVAudioEngine provide any API for saving and restoring this state, or does it need to be handled manually? If it’s manual, are there any sample "DAW" apps or resources that demonstrate how this can be implemented? Any guidance would be greatly appreciated. Thanks, BD
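As far as I know there is no built-in archiving for an AVAudioEngine graph, so the usual approach is to define your own serializable description and rebuild the engine from it. A minimal sketch under that assumption; the node kinds, parameter names, and connection model are all illustrative, not an Apple API:

```swift
import AVFoundation

// A hand-rolled, Codable description of an engine graph.
struct NodeDescription: Codable {
    let id: String
    let kind: String                   // e.g. "player", "reverb"
    let parameters: [String: Double]   // e.g. ["wetDryMix": 40]
}

struct ConnectionDescription: Codable {
    let fromID: String
    let toID: String
}

struct EngineDescription: Codable {
    let nodes: [NodeDescription]
    let connections: [ConnectionDescription]
}

// Rebuild an engine from a saved description; mapping "kind" strings to real
// node types and applying their parameters is app-specific.
func makeEngine(from description: EngineDescription) -> AVAudioEngine {
    let engine = AVAudioEngine()
    var nodes: [String: AVAudioNode] = [:]
    for nodeDescription in description.nodes {
        let node: AVAudioNode
        switch nodeDescription.kind {
        case "player":
            node = AVAudioPlayerNode()
        case "reverb":
            let reverb = AVAudioUnitReverb()
            reverb.wetDryMix = Float(nodeDescription.parameters["wetDryMix"] ?? 0)
            node = reverb
        default:
            node = AVAudioMixerNode()
        }
        engine.attach(node)
        nodes[nodeDescription.id] = node
    }
    for connection in description.connections {
        if let from = nodes[connection.fromID], let to = nodes[connection.toID] {
            engine.connect(from, to: to, format: nil)
        }
    }
    // In a real app, route the final node into engine.mainMixerNode as well.
    return engine
}
```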
Replies: 0 · Boosts: 0 · Views: 122 · Activity: 1w
PHPhoto localIdentifier to cloudIdentifier conversion
The sample code in the Apple documentation found in PHCloudIdentifier does not compile in Xcode 13.2.1. Can the interface for identifier conversion be clarified so that the answer values are more accessible/readable? The values are "hidden" inside a Result enum. It was difficult (for me) to rewrite the sample code because I made the mistake of interpreting the Result type as a tuple; Result is really an enum. Using Result as the return type of library.cloudIdentifierMappings(forLocalIdentifiers:) and .localIdentifierMappings(for:) puts the actual mapped identifiers inside the enum, where they need additional access via a .stringValue message or by evaluating an element of the result. For others finding the same compile issue, here is a working version of the sample code. This compiles in Xcode 13.2.1.

```swift
func localId2CloudId(localIdentifiers: [String]) -> [String] {
    var mappedIdentifiers = [String]()
    let library = PHPhotoLibrary.shared()
    let iCloudIDs = library.cloudIdentifierMappings(forLocalIdentifiers: localIdentifiers)
    for aCloudID in iCloudIDs {
        let cloudResult = aCloudID.value
        // Result is an enum .. not a tuple
        switch cloudResult {
        case .success(let success):
            let newValue = success.stringValue
            mappedIdentifiers.append(newValue)
        case .failure:
            // do error notify to user
            break
        }
    }
    return mappedIdentifiers
}
```

```swift
func cloudId2LocalId(assetCloudIdentifiers: [PHCloudIdentifier]) -> [String] {
    // patterned error handling per documentation
    var localIDs = [String]()
    let localIdentifiers: [PHCloudIdentifier: Result<String, Error>] = PHPhotoLibrary.shared()
        .localIdentifierMappings(for: assetCloudIdentifiers)
    for cloudIdentifier in assetCloudIdentifiers {
        guard let identifierMapping = localIdentifiers[cloudIdentifier] else {
            print("Failed to find a mapping for \(cloudIdentifier).")
            continue
        }
        switch identifierMapping {
        case .success(let success):
            localIDs.append(success)
        case .failure(let failure):
            let thisError = failure as? PHPhotosError
            switch thisError?.code {
            case .identifierNotFound:
                // Skip the missing or deleted assets.
                print("Failed to find the local identifier for \(cloudIdentifier). \(String(describing: thisError?.localizedDescription))")
            case .multipleIdentifiersFound:
                // Prompt the user to resolve the cloud identifier that matched multiple assets.
                print("Found multiple local identifiers for \(cloudIdentifier). \(String(describing: thisError?.localizedDescription))")
                // if let selectedLocalIdentifier = promptUserForPotentialReplacement(with: thisError.userInfo[PHLocalIdentifiersErrorKey]) {
                //     localIDs.append(selectedLocalIdentifier)
                // }
            default:
                print("Encountered an unexpected error looking up the local identifier for \(cloudIdentifier). \(String(describing: thisError?.localizedDescription))")
            }
        }
    }
    return localIDs
}
```
Replies: 1 · Boosts: 0 · Views: 632 · Activity: Feb ’22
MusicKit: How do we search by title or name only?
I can't find any way to search for a song by title only. You can search for songs, but any term you provide appears to be applied to any metadata associated with the song. Look at the largely nonsensical results when I search for a song with the letters "de": In many cases, that string doesn't appear anywhere. I used MusicCatalogSearchRequest(term: searchTerm, types: [Song.self]) Likewise it stands to reason that people want to search for artist and album names using text strings. How do we do that?
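A workaround sketch, assuming client-side filtering is acceptable: run the catalog search as usual, then keep only the songs whose title actually contains the search term. The limit value is arbitrary:

```swift
import MusicKit

/// Searches the Apple Music catalog and keeps only songs whose title matches the term.
func searchSongsByTitle(_ title: String) async throws -> [Song] {
    var request = MusicCatalogSearchRequest(term: title, types: [Song.self])
    request.limit = 25
    let response = try await request.response()
    // Drop results where the term only matched other metadata (artist, album, ...).
    return response.songs.filter { song in
        song.title.localizedCaseInsensitiveContains(title)
    }
}
```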
Replies: 0 · Boosts: 0 · Views: 141 · Activity: 1w
Captured photos in wrong orientation
I'm building a custom camera screen that displays the camera image on a preview layer and then captures an image, using AVCaptureSession. When the picture is captured, I immediately load it into a UIImageView in order to display it to the user for approval. I've actually done this many times before, but this is the first time I've tried to do it in an app that supports interface rotation. If I hold the phone in Portrait mode and capture a picture, everything works as expected. When the user rotates the phone into Landscape orientation, I detect this and I replace the preview layer (AVCaptureVideoPreviewLayer) with a new one, specifying connection.videoRotationAngle in order to make the image appear in the right orientation. I'm a little surprised that this is necessary, and it's not a smooth transition, but that doesn't matter. What does matter is that when I capture the image, it is in the wrong orientation. I tried rotating it myself, but this doesn't seem to make any difference. What am I doing wrong?
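A sketch of the iOS 17 approach I'd try, assuming you can keep an AVCaptureDevice.RotationCoordinator alive alongside the session: apply its capture angle to the photo output's connection right before capturing, so the delivered photo matches the interface orientation rather than the sensor's native one:

```swift
import AVFoundation

// Assumes `device` is the active AVCaptureDevice, `previewLayer` is the layer on screen,
// and `photoOutput` is already attached to the running session.
final class RotationHandler {
    private let rotationCoordinator: AVCaptureDevice.RotationCoordinator
    private let photoOutput: AVCapturePhotoOutput

    init(device: AVCaptureDevice, previewLayer: AVCaptureVideoPreviewLayer, photoOutput: AVCapturePhotoOutput) {
        // Keep a strong reference to the coordinator for as long as the session runs.
        self.rotationCoordinator = AVCaptureDevice.RotationCoordinator(device: device, previewLayer: previewLayer)
        self.photoOutput = photoOutput
    }

    func capture(with settings: AVCapturePhotoSettings, delegate: any AVCapturePhotoCaptureDelegate) {
        // Apply the capture-time rotation just before taking the photo.
        if let connection = photoOutput.connection(with: .video) {
            let angle = rotationCoordinator.videoRotationAngleForHorizonLevelCapture
            if connection.isVideoRotationAngleSupported(angle) {
                connection.videoRotationAngle = angle
            }
        }
        photoOutput.capturePhoto(with: settings, delegate: delegate)
    }
}
```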
Replies: 2 · Boosts: 0 · Views: 127 · Activity: 1w
Need the information of minimum focus distance of different cameras in each iPhone model
Our app involves using the camera to scan barcodes or QR codes, with a working distance of about 5 cm. However, we’ve noticed variations in the focus distance of camera lenses across different iPhone models. Currently, we mainly use two types of lenses: wide-angle and ultra-wide-angle.
• For iPhone 13 and earlier models, we use the wide-angle lens.
• For iPhone 13 Pro and later models, we use the ultra-wide-angle lens.
We are not certain this setup is correct, since we don’t have all iPhone models to test.
A user has reported focus issues on his iPhone 15. We would like to ask if there is a resource where we can find the minimum focus distance of each camera in each iPhone model, to verify whether our current configuration is accurate. Alternatively, if such data is not readily available, could the Apple team advise which camera should be used on the various iPhone models for scenarios with a working distance of approximately 5 cm? Thank you!
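I'm not aware of a published per-model table, but since iOS 15 each AVCaptureDevice reports its own minimumFocusDistance (in millimeters), which lets you pick the lens at runtime instead of hard-coding the choice per model. A sketch, with the 50 mm threshold standing in for the ~5 cm working distance:

```swift
import AVFoundation

/// Picks the back camera whose reported minimum focus distance fits the working distance.
/// Non-positive minimumFocusDistance values are treated as "unknown" and skipped here.
func bestScanningCamera(maxWorkingDistanceMillimeters: Int = 50) -> AVCaptureDevice? {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera],
        mediaType: .video,
        position: .back
    )
    return discovery.devices
        .filter { $0.minimumFocusDistance > 0 && $0.minimumFocusDistance <= maxWorkingDistanceMillimeters }
        .min { $0.minimumFocusDistance < $1.minimumFocusDistance }
}
```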
Replies: 1 · Boosts: 0 · Views: 126 · Activity: 1w
Selecting an appropriate AVCaptureDeviceFormat
My app currently captures video using an AVCaptureSession set with the AVCaptureSessionPreset1920x1080 preset. However, I'd like to update this behavior, such that video can be recorded at a range of different resolutions. There isn't a preset aligning to each desired resolution, so I thought I'd instead directly set the AVCaptureDeviceFormat. For any desired resolution, I would find the format that is closest without going under the desired resolution, and then crop it down as a post-processing step. However, what I've observed is that there can be a range of available formats for a device at each resolution, with various differing settings. Presumably there is logic within AVCaptureSession that selects a reasonable default based on all these different settings, but since I am applying the format directly, I think I don't have a way to make use of that default logic? And it is undocumented? Does this mean that the only way to select a format is to implement a comparison function that considers all different values of all different properties on AVCaptureDeviceFormat, and then sort the formats according to this comparator? If so, what if some new property is added to AVCaptureDeviceFormat in the future? The sort would not take this new property into account, and the function might select a format with some new undesired property. Are there any guarantees about what types for formats will be supported on a device? For example, can I take for granted that a '420v' format will exist at each resolution? If so I could filter the formats down only to those with this setting without risking filtering out all of the supported formats. I suspect I may be missing something obvious. Any help would be greatly appreciated!
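For what it's worth, a sketch of the comparator idea restricted to the properties discussed above (pixel format and dimensions). Whether '420v' exists at every resolution on every device is exactly the open question, so the optional return has to be handled by the caller:

```swift
import AVFoundation
import CoreMedia

/// Among the device's 420v formats, returns the smallest one that still covers the target size.
/// '420v' is kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange.
func bestFormat(for device: AVCaptureDevice, targetWidth: Int32, targetHeight: Int32) -> AVCaptureDevice.Format? {
    let candidates = device.formats.filter { format in
        let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription)
        return pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
            && dimensions.width >= targetWidth
            && dimensions.height >= targetHeight
    }
    return candidates.min { lhs, rhs in
        let l = CMVideoFormatDescriptionGetDimensions(lhs.formatDescription)
        let r = CMVideoFormatDescriptionGetDimensions(rhs.formatDescription)
        return Int(l.width) * Int(l.height) < Int(r.width) * Int(r.height)
    }
}
```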
Replies: 2 · Boosts: 0 · Views: 147 · Activity: 1w
AVSpeechSynthesizer - just not working on 15.1.1
So get a Swift file and put this in it:

```swift
import Foundation
import AVFoundation

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, testing speech synthesis on macOS.")

if let voice = AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-GB.Daniel") {
    utterance.voice = voice
    print("Using voice: \(voice.name), \(voice.language)")
} else {
    print("Daniel voice not found on macOS.")
}

synthesizer.speak(utterance)
```

I get no speech output and this log output:

```
Error reading languages in for local resources.
Error reading languages in for local resources.
Using voice: Daniel, en-GB
Program ended with exit code: 0
```

Why? And what's with "Error reading languages in for local resources."?
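One likely contributor, though this is an assumption rather than something the log proves: in a command-line program, speak(_:) is asynchronous, so the process reaches the end of main and exits before any audio is produced (note the "exit code: 0" right after the voice is found). A sketch that keeps the process alive until the utterance finishes; the "Error reading languages in for local resources" lines appear to be a separate, apparently benign log:

```swift
import Foundation
import AVFoundation

// Keep the command-line process alive until the synthesizer reports completion.
final class SpeechDelegate: NSObject, AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("Finished speaking.")
        exit(0)
    }
}

let synthesizer = AVSpeechSynthesizer()
let speechDelegate = SpeechDelegate()
synthesizer.delegate = speechDelegate

let utterance = AVSpeechUtterance(string: "Hello, testing speech synthesis on macOS.")
if let voice = AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-GB.Daniel") {
    utterance.voice = voice
}
synthesizer.speak(utterance)

// Without this, main returns and the process exits before any audio is produced.
RunLoop.main.run()
```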
Replies: 0 · Boosts: 0 · Views: 145 · Activity: 1w
How to get the actual distance of the depth map image subject from the true depth camera
I was able to obtain the depth map image using AVCapturePhotoOutput from the delegate method

```swift
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
```

I convert the depth map to kCVPixelFormatType_DepthFloat32 format and get the pixel values of the depth map using the code below:

```swift
func convertDepthData(depthMap: CVPixelBuffer) -> [[Float32]] {
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    var convertedDepthMap: [[Float32]] = Array(
        repeating: Array(repeating: 0, count: width),
        count: height
    )
    CVPixelBufferLockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
    let floatBuffer = unsafeBitCast(
        CVPixelBufferGetBaseAddress(depthMap),
        to: UnsafeMutablePointer<Float32>.self
    )
    for row in 0 ..< height {
        for col in 0 ..< width {
            if floatBuffer[width * row + col].isFinite {
                convertedDepthMap[row][col] = floatBuffer[width * row + col]
            }
        }
    }
    CVPixelBufferUnlockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
    return convertedDepthMap
}
```

Is this the right way of accessing the depth float values from a depth map? And what is the unit for those values? Sometimes the depth values are around 0.7 when I keep the device close to the subject, around 15 to 30 cm away.
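Partly an answer, partly an assumption worth checking: AVDepthData can carry either depth (meters) or disparity (1/meters), and only after converting to kCVPixelFormatType_DepthFloat32 are the Float32 values distances in meters from the camera. A sketch of that conversion step, to make sure the buffer being read really is depth rather than disparity:

```swift
import AVFoundation

/// Returns a pixel buffer whose Float32 values are distances in meters,
/// converting from disparity if that is what the capture delivered.
func depthInMeters(from photo: AVCapturePhoto) -> CVPixelBuffer? {
    guard var depthData = photo.depthData else { return nil }
    if depthData.depthDataType != kCVPixelFormatType_DepthFloat32 {
        depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    }
    return depthData.depthDataMap
}
```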
Replies: 1 · Boosts: 0 · Views: 137 · Activity: 1w