2 Replies
      Latest reply on Nov 7, 2019 9:10 AM by dhoerl
      dhoerl Level 1 (0 points)

        My company purchased a 1 min looping video from Pond5 in this format: "1920x1080 @25fps / mov / 224.8MB / H.264". The video isn't really a minute long: the first 30 seconds are an animated object, and the last 30 are an "alpha mask" (a black-and-white image, white where the animated object is).

         

        When I use AVPlayer to loop the first 30 seconds, the background is pure black. I want to mask it out so it's transparent. I assume there is some way to combine the two halves of this video to achieve this. I actually found a post by an Apple engineer on Medium (I recall) showing that, if each frame of the video is divided into two halves, the top being the video and the bottom the mask, then you can achieve this in real time with a CIFilter.
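        For what it's worth, a minimal sketch of that split-frame idea (the article uses a custom CIFilter kernel; this version assumes the built-in CIBlendWithMask filter instead, and the function name is illustrative): crop the two halves of a frame, align them, and blend so black mask pixels become transparent.

```swift
import CoreImage

// Given a composited frame whose top half is the color image and
// bottom half is its white-on-black mask, produce a transparent image.
// Sketch only -- assumes `frame` is a CIImage of size W x 2H.
func unmask(_ frame: CIImage) -> CIImage? {
    let fullRect = frame.extent
    let halfHeight = fullRect.height / 2

    // Top half: the color image, translated down to overlay the mask.
    // (Core Image's origin is the bottom-left corner.)
    let color = frame
        .cropped(to: CGRect(x: fullRect.minX, y: fullRect.midY,
                            width: fullRect.width, height: halfHeight))
        .transformed(by: CGAffineTransform(translationX: 0, y: -halfHeight))

    // Bottom half: the mask.
    let mask = frame.cropped(to: CGRect(x: fullRect.minX, y: fullRect.minY,
                                        width: fullRect.width, height: halfHeight))

    // White in the mask keeps the color; black falls through to an
    // empty (fully transparent) background.
    let blend = CIFilter(name: "CIBlendWithMask")!
    blend.setValue(color, forKey: kCIInputImageKey)
    blend.setValue(mask, forKey: kCIInputMaskImageKey)
    blend.setValue(CIImage.empty(), forKey: kCIInputBackgroundImageKey)
    return blend.outputImage
}
```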

         

        My preference would be to use some tool I have on the Mac (or could get for free) to reprocess the video, or failing that, do it in real time while it's playing. That said, I have little experience in this area and would greatly appreciate any pointers!

         

        PS: it has also occurred to me that I might be able to use the CIColorCube filter to achieve this and just ignore the existing mask video.
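        That variant might look like the following sketch: build a color cube that maps near-black pixels to transparent and leaves everything else alone. The 0.1 luminance threshold is a guess that would need tuning against the footage, and CIColorCube expects premultiplied RGB.

```swift
import CoreImage

// Build a 64x64x64 color cube that keys out near-black pixels.
// `threshold` is an assumed value to tune against the actual video.
func blackKeyFilter(threshold: Float = 0.1, size: Int = 64) -> CIFilter {
    var cube = [Float]()
    cube.reserveCapacity(size * size * size * 4)
    // CIColorCube's table is ordered blue-outermost, red-innermost.
    for b in 0..<size {
        for g in 0..<size {
            for r in 0..<size {
                let rf = Float(r) / Float(size - 1)
                let gf = Float(g) / Float(size - 1)
                let bf = Float(b) / Float(size - 1)
                // Rec. 601 luma; alpha 0 below the threshold.
                let luma = 0.299 * rf + 0.587 * gf + 0.114 * bf
                let alpha: Float = luma < threshold ? 0 : 1
                // RGB must be premultiplied by alpha.
                cube.append(contentsOf: [rf * alpha, gf * alpha, bf * alpha, alpha])
            }
        }
    }
    let data = cube.withUnsafeBufferPointer { Data(buffer: $0) }
    let filter = CIFilter(name: "CIColorCube")!
    filter.setValue(size, forKey: "inputCubeDimension")
    filter.setValue(data, forKey: "inputCubeData")
    return filter
}
```

        One catch with this approach: any genuinely dark pixels inside the animated object would also become transparent, which is exactly what the supplied mask avoids.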

        • Re: How to combine two video files into one?
          developer100 Level 1 (0 points)

          So do you need it to be transparent on top of something else and during playback?

          Any preprocessing you can do ahead of time to reduce the amount of work needed at playback is usually better.

          On the Mac you can use After Effects or Premiere to do what you're talking about (keying out the black, or applying the existing mask).

          You can also use ffmpeg (free), but you will need to sort through the commands.
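          As a starting point, here is one hedged sketch (the filename and exact timestamps are assumptions): trim the clip into its color half and mask half, then use ffmpeg's alphamerge filter, which copies the mask's luminance into the color stream's alpha channel. The output codec must support alpha; qtrle (the QuickTime Animation codec) is one .mov option that does.

```shell
ffmpeg -i pond5.mov -filter_complex \
  "[0:v]trim=0:30,setpts=PTS-STARTPTS[color]; \
   [0:v]trim=30:60,setpts=PTS-STARTPTS[mask]; \
   [color][mask]alphamerge[out]" \
  -map "[out]" -c:v qtrle transparent.mov
```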

            • Re: How to combine two video files into one?
              dhoerl Level 1 (0 points)

              Yes, it needs to be transparent. I don't have any of those fancy programs like After Effects, nor do I want to learn them for just this one task. The Pond5 video has a black background embedded in it - the first 30 seconds. The second 30 seconds are the appropriate mask for the equivalent point in the core video.

               

              I read a really interesting article on how to apply a mask in real time - I realize I could preprocess it too. Since links here can be problematic, try searching on terms quentinfasquel ios-transparent-video-with-coreimage

               

              In the end, I think I can do this in code: I'll run the video once, from start to finish, caching every frame (30 × 25 of them). Once past the 30-second mark, I'll pair each mask frame with its cached original frame and output the combined result as an AVAsset (to a writer), saving a final 30-second video with both images placed such that I can pull each back out (per the link above).
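              A condensed sketch of that plan, assuming AVAssetReader/AVAssetWriter (error handling is elided, and `stack(color:mask:)` is a hypothetical helper that draws the color frame above its mask frame in a single 1920×2160 buffer):

```swift
import AVFoundation

// Read every frame once; cache the first 30 s (750 frames at 25 fps),
// then pair each mask frame with its cached color frame and write the
// stacked result. Sketch only -- not production-ready.
func preprocess(asset: AVAsset, to outputURL: URL) throws {
    let track = asset.tracks(withMediaType: .video)[0]
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1920,
        AVVideoHeightKey: 2160   // color frame stacked above its mask
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input, sourcePixelBufferAttributes: nil)
    writer.add(input)
    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    var cached: [CVPixelBuffer] = []
    var index = 0
    while let sample = output.copyNextSampleBuffer(),
          let buffer = CMSampleBufferGetImageBuffer(sample) {
        if index < 750 {
            cached.append(buffer)                 // color half
        } else {                                  // mask half
            let time = CMTime(value: CMTimeValue(index - 750), timescale: 25)
            while !input.isReadyForMoreMediaData { usleep(1000) }
            adaptor.append(stack(color: cached[index - 750], mask: buffer),
                           withPresentationTime: time)
        }
        index += 1
    }
    input.markAsFinished()
    writer.finishWriting {}
}
```

              One caveat worth noting: caching 750 uncompressed 1080p BGRA frames is roughly 6 GB, so in practice it may be better to make two reader passes over the file instead of caching.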

               

              I'll get more experience with AVFoundation and won't waste time learning tools that I'll never use again.