I have some confusion over the definition of MediaType versus DeviceType in these calls. Given the similarities between the various cameras and the vague dichotomy between TrueDepth and LiDAR data for measuring depth, the calls may need some refinement.

My own preference would be to pass an attribute mask based on the bit positions of the various device characteristics, with the bits prioritized so that the best of the available devices is chosen. For example, the TrueDepth and LiDAR cameras both provide depth data; LiDAR is preferable for accuracy, but some applications could get by with the TrueDepth camera's photogrammetry mode if the LiDAR camera were unusable for some reason. I could see an option that says "depth but no LiDAR" being selected (a sketch of what I mean is at the end of this post). The same applies to telephoto lenses versus the combined Wide or UltraWide devices (or Hyper-Wide later). Audio streaming would raise the same kind of issues, with performance levels for input and output being narrowly defined.

The question also arises as to why DeviceType allows an array to be passed but MediaType does not. An array would seem reasonable for both.
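For reference, this is the shape of the existing discovery call as I read it (assuming AVCaptureDevice.DiscoverySession in AVFoundation is the call in question), where deviceTypes takes an array but mediaType is a single value:

import AVFoundation

// The current discovery call: deviceTypes accepts an array,
// but mediaType is a single (optional) value.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInLiDARDepthCamera, .builtInTrueDepthCamera],
    mediaType: .video,        // no way to pass [.video, .depthData] here
    position: .unspecified
)

// The devices array follows the order of deviceTypes, so today the
// array itself doubles as a crude priority list.
let preferred = discovery.devices.first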
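And here is a minimal sketch of the attribute mask I have in mind. Every name below is hypothetical, not existing API, and it assumes an iOS 15.4+ target for .builtInLiDARDepthCamera:

import AVFoundation

// Hypothetical capability mask: bit positions encode characteristics,
// with higher bits marking the preferred (e.g. more accurate) sources.
struct CaptureCapability: OptionSet {
    let rawValue: UInt32
    static let depth     = CaptureCapability(rawValue: 1 << 0)
    static let telephoto = CaptureCapability(rawValue: 1 << 1)
    static let ultraWide = CaptureCapability(rawValue: 1 << 2)
    static let lidar     = CaptureCapability(rawValue: 1 << 3) // preferred depth source
}

// Map real device types onto the hypothetical mask for ranking.
func capabilities(of type: AVCaptureDevice.DeviceType) -> CaptureCapability {
    switch type {
    case .builtInLiDARDepthCamera: return [.depth, .lidar]
    case .builtInTrueDepthCamera:  return [.depth]
    case .builtInTelephotoCamera:  return [.telephoto]
    case .builtInUltraWideCamera:  return [.ultraWide]
    default:                       return []
    }
}

// "Depth but no LiDAR" is just a required set plus an excluded set.
let required: CaptureCapability = [.depth]
let excluded: CaptureCapability = [.lidar]

let candidates: [AVCaptureDevice.DeviceType] = [
    .builtInTrueDepthCamera, .builtInLiDARDepthCamera, .builtInTelephotoCamera
]

// Filter on the two sets, then let the prioritized bits pick the winner:
// the matching device with the highest raw value is the best available.
let best = candidates
    .map { (type: $0, caps: capabilities(of: $0)) }
    .filter { $0.caps.isSuperset(of: required) && $0.caps.isDisjoint(with: excluded) }
    .max { $0.caps.rawValue < $1.caps.rawValue }?
    .type   // .builtInTrueDepthCamera here, since LiDAR is excluded

Drop the exclusion and the LiDAR camera wins on raw value instead, so the same mask handles both the preference and the fallback case without a second query.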