Posts

Post not yet marked as solved · 0 Replies · 454 Views
This isn't just my observation; lots of people around me say the same, and you can find tonnes of feedback on the interwebs. Images taken with the front-facing camera on the 15 (and I think the 14 before it) are so over-processed that I know people who have jumped to other phones, and they're right: the 15 makes it even worse. You can turn off HDR (a viewing thing), and you can prioritise speed over processing, but you can't really turn the processing off. If you take a Live Photo and then choose a different frame, the processing is lighter.

As a developer I look at that and think it's bonkers. It's just software, so why hasn't anyone produced a camera app that makes faces look good (not AI processing) from the front camera? I could be all enthusiastic and say I'll develop one, but it seems like a simple, obvious fix for Apple. Having the defaults so bad that I have friends returning their phones seems pretty serious, and as a photographer I'd agree. There's a lot to love about the 15, the Log and ProRes recording, but a simple selfie produces such ugly results. That's an actual problem.

So I'm throwing it out there. What does everyone think?

cheers

Paul
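P.S. To make the "it's just software" point concrete, here's the kind of thing I mean — a rough, untested sketch, assuming an AVCaptureSession is already configured with the front camera as input, that asks AVFoundation to prioritise speed over processing when capturing:

```swift
import AVFoundation

// Rough sketch, untested. Assumes `session` already has the front camera
// (.builtInWideAngleCamera, .front) attached as an input.
func addLowProcessingPhotoOutput(to session: AVCaptureSession) -> AVCapturePhotoOutput {
    let photoOutput = AVCapturePhotoOutput()
    if session.canAddOutput(photoOutput) {
        session.addOutput(photoOutput)
    }
    // Cap how much processing this output is ever allowed to do.
    photoOutput.maxPhotoQualityPrioritization = .speed
    return photoOutput
}

func captureLowProcessingPhoto(from photoOutput: AVCapturePhotoOutput,
                               delegate: AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    // Prefer speed over the heavy fusion / tone-mapping passes.
    settings.photoQualityPrioritization = .speed
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```

That obviously doesn't switch off everything the pipeline does, which is kind of the point of the post.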
Post not yet marked as solved · 0 Replies · 369 Views
Working on some video-related stuff. In the display system preferences, the presets for the MacBook Pro XDR display include a checkbox called "Apply System Gamma Boost". Some of these presets (HDTV, Rec.709) have it checked, with an adjustment of 2.2. This appears to darken the screen (crush the shadows) and actually makes the screen look incorrect compared to having it unchecked (which then looks fairly close to a 709 reference display).

I can find no documentation here on what that checkbox is supposed to do. Can anyone point me in the right direction? Docs or advice? Right now it looks like it should be off; I see no benefit to having it on...

thanks

Paul
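P.S. If it helps anyone reproduce what I'm seeing: my working assumption (happy to be corrected) is that the boost applies an extra ~2.2 power function on top of the already gamma-encoded signal, which would explain the crushed shadows:

```swift
import Foundation

// Assumption on my part: "System Gamma Boost" behaves like an extra 2.2 power
// applied to an already-encoded signal. If that's right, shadows and mid-greys
// fall off a cliff while highlights barely move.
let samples: [Double] = [0.05, 0.18, 0.50, 0.90]   // encoded signal levels
for v in samples {
    let boosted = pow(v, 2.2)
    print(String(format: "%.2f -> %.3f", v, boosted))
}
// 0.05 -> 0.001, 0.18 -> 0.023, 0.50 -> 0.218, 0.90 -> 0.793
```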
Post not yet marked as solved · 0 Replies · 425 Views
This is very much a broad question, as I don't know enough yet. Is it feasible to get two or more Apple TVs to sync their playback? Say three of them driving three screens, with a left, right and centre movie playing in step. I don't know how accurately playback can be started, whether the internal clocks can be accurately synced together, or whether a box could even read external sync over a network, for example. If there are any resources, thoughts or articles of interest, I'd really appreciate some pointers on how to start this deep dive.
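To give the question a bit of shape: my (possibly wrong) understanding is that on a single device you can schedule playback precisely against the host clock with setRate(_:time:atHostTime:), so the hard part is agreeing on a shared start time across the boxes. Something like this sketch, which leaves the network clock-sync part entirely open:

```swift
import AVFoundation

// Sketch only. Assumes each Apple TV has already agreed, over the network,
// on a shared start moment and converted it into its own host time
// (that clock-sync step is the real problem and isn't shown here).
func startSynchronizedPlayback(player: AVPlayer, atHostTime hostTime: CMTime) {
    // Must be false so the player honours the scheduled host time exactly,
    // rather than delaying the start on its own to avoid stalls.
    player.automaticallyWaitsToMinimizeStalling = false

    // Pre-roll so playback can begin instantly at the scheduled time.
    player.preroll(atRate: 1.0) { ready in
        guard ready else { return }
        // Start the item from the beginning, at rate 1.0,
        // anchored to the agreed host time.
        player.setRate(1.0, time: .zero, atHostTime: hostTime)
    }
}
```

The agreed start could presumably come from each box converting a shared network timestamp into its own host time (CMClockGetTime(CMClockGetHostTimeClock()) plus an agreed offset), but I have no feel yet for how tight that can realistically be.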
Post not yet marked as solved · 2 Replies · 1.1k Views
Okay, not so experienced with AVFoundation or iOS, so finding my way. I am trying to play a video on iOS that is sourced from an HLS stream and goes through a LUT (ColorCube). I had it working using an .mp4 file from a server, but that approach doesn't work with an HLS stream. The general approach was:

1. Get the asset (AVAsset from the URL).
2. Get the AVAssetTrack for the video.
3. Create an AVMutableComposition with that track.
4. Create an AVMutableVideoComposition from the composition above, passing in the CIFilter set up in that constructor.
5. Create a player item from the composition and set the video composition as its videoComposition.

Now that appeared to work, but frankly it seems a pretty odd setup. And I assume the reason the HLS source doesn't work is step 2: there isn't a track until it starts playing? So I'm really confused about the correct setup for passing the video through a CIFilter from an HLS source. Can anyone point me in the vague direction where I should be looking?

thanks!

Paul
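P.S. In case it helps frame the question, the direction I'm currently guessing at (and would love to have confirmed or shot down) is skipping the composition route entirely for HLS and instead pulling frames with AVPlayerItemVideoOutput, running them through the filter myself. Roughly:

```swift
import AVFoundation
import CoreImage

// Guesswork on my part: for HLS, skip AVMutableVideoComposition and pull
// decoded frames via AVPlayerItemVideoOutput, filtering them with Core Image
// before display.
final class FilteredHLSPlayer {
    let player: AVPlayer
    let videoOutput: AVPlayerItemVideoOutput
    let ciContext = CIContext()
    let lut: CIFilter   // e.g. a CIColorCube configured elsewhere with cube data

    init(url: URL, lutFilter: CIFilter) {
        let item = AVPlayerItem(url: url)   // HLS manifest URL
        videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])
        item.add(videoOutput)
        player = AVPlayer(playerItem: item)
        lut = lutFilter
    }

    // Call from a CADisplayLink callback; returns the filtered frame for the
    // current display time, or nil if no new frame is available yet.
    func filteredImage(forHostTime hostTime: CFTimeInterval) -> CIImage? {
        let itemTime = videoOutput.itemTime(forHostTime: hostTime)
        guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime),
              let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime,
                                                            itemTimeForDisplay: nil)
        else { return nil }

        lut.setValue(CIImage(cvPixelBuffer: pixelBuffer), forKey: kCIInputImageKey)
        return lut.outputImage   // render with ciContext into a MTKView / CALayer
    }
}
```

No idea if that's the intended approach for HLS, or whether there's a way to make the video composition route work, hence the question.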