Building an RTSP client application

Hi,


I've been tasked with building an app that receives video from a device over RTSP. The device is a security camera with a built-in Wi-Fi hotspot. Both the camera and the app are intended to be used in a setting where internet access won't be available, so the app has to communicate directly with the camera. I also know that the device transmits an H.264 video stream, which should be compatible with iOS.


The app I'm required to build has to do more than just display a live picture from the camera -- it must also be able to record video to a file, possibly with transcoding to a lower bitrate.


As best I could determine, there isn't anything in AVFoundation or iOS that can directly help me connect to an RTSP server. (If I'm wrong about this, I would love to know about it).


I found some example code on the internet that uses the ffmpeg library as a starting point. I was able to construct a crude video player out of the ffmpeg functions, but it leaves plenty to be desired. In a nutshell, here's what I'm doing:


1. Establish a connection to the RTSP server. This is easily done by passing an RTSP URL to ffmpeg's avformat_open_input function.


2. Try to read a packet. If that succeeds, I append the packet to the frame being assembled. If it doesn't, I wait about 10 ms and try again. (This is ugly, but running it off the main thread seems workable.)


3. After appending one or more packets to a frame, I get a completed frame, which I convert to an RGB image by calling ffmpeg's sws_scale function. The RGB image can then be converted to a CGImageRef and finally a UIImage that can be displayed. (A rough sketch of these steps appears below.)
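Here's roughly what that looks like with the ffmpeg calls. This is just a sketch, not my exact code: error handling is mostly omitted, the URL is a placeholder, in the real app I read packets in a non-blocking way off the main thread rather than in a plain loop, and it uses the newer send/receive decode API (older ffmpeg builds would use avcodec_decode_video2 instead).

```c
// Sketch of steps 1-3: open the RTSP stream, pull packets, decode them,
// and convert each decoded frame to RGB with sws_scale.
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

static void runRTSPLoop(void)
{
    const char *url = "rtsp://192.168.1.1/stream";            // placeholder camera URL

    avformat_network_init();

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, url, NULL, NULL) < 0) return;    // step 1
    if (avformat_find_stream_info(fmt, NULL) < 0) return;

    // Find the video stream and open a decoder for it.
    int videoIndex = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    AVStream *st = fmt->streams[videoIndex];
    const AVCodec *codec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *dec = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(dec, st->codecpar);
    avcodec_open2(dec, codec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    struct SwsContext *sws = NULL;

    while (av_read_frame(fmt, pkt) >= 0) {                         // step 2
        if (pkt->stream_index == videoIndex) {
            avcodec_send_packet(dec, pkt);
            while (avcodec_receive_frame(dec, frame) == 0) {        // step 3
                if (sws == NULL) {
                    sws = sws_getContext(dec->width, dec->height, dec->pix_fmt,
                                         dec->width, dec->height, AV_PIX_FMT_RGB24,
                                         SWS_BILINEAR, NULL, NULL, NULL);
                }
                // sws_scale(...) into an RGB buffer here, then wrap that buffer
                // in a CGImageRef / UIImage for display (omitted).
            }
        }
        av_packet_unref(pkt);
    }

    sws_freeContext(sws);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&dec);
    avformat_close_input(&fmt);
}
```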


Step 3 is really just a test so I can see what the camera is sending and verify that the app is actually receiving video. It is, but the conversion of the frames into UIImages is so CPU-intensive as to be impractical. I'm pretty sure that what I need to do instead is take the packet data from step 2 and generate some sort of an AVAsset out of it, so that AVFoundation can take over from there and give me something that I can play in a standard media player, or stream out to a file.


Is this possible? And will it be relatively easy, meaning will AVFoundation be able to deal directly with the data from the camera, or will it require a complex transformation? I'm not even sure which AVFoundation objects I should be looking at for this (assets, tracks, etc.).


I'd appreciate any feedback or advice.


Thanks,

Frank

Try using ffmpeg


Your other option would be to feed CMSampleBuffers to an AVAssetWriter, but ffmpeg is probably easier.
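In case it's useful, that route could look something like the sketch below. This is only an illustration: it assumes you already have the stream's SPS/PPS and each H.264 access unit as AVCC-formatted NSData (4-byte length prefixes), all the names are made up, and error handling is omitted.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Build a CMVideoFormatDescription from the stream's SPS/PPS NAL units.
static CMVideoFormatDescriptionRef CreateH264FormatDescription(NSData *sps, NSData *pps)
{
    const uint8_t *paramSets[2] = { sps.bytes, pps.bytes };
    const size_t paramSizes[2]  = { sps.length, pps.length };
    CMVideoFormatDescriptionRef fmt = NULL;
    CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2,
                                                        paramSets, paramSizes,
                                                        4 /* NAL length field size */,
                                                        &fmt);
    return fmt;
}

// outputSettings:nil means passthrough (the camera's H.264 is written as-is).
// To transcode to a lower bitrate you would instead decode frames first (e.g.
// with VideoToolbox) and append uncompressed buffers to an input configured
// with compression settings.
static AVAssetWriterInput *MakeVideoInput(AVAssetWriter *writer,
                                          CMVideoFormatDescriptionRef fmt)
{
    AVAssetWriterInput *input =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:nil
                                         sourceFormatHint:fmt];
    input.expectsMediaDataInRealTime = YES;
    [writer addInput:input];
    return input;
}

// Wrap one access unit in a CMSampleBuffer and hand it to the writer input.
static void AppendAccessUnit(AVAssetWriterInput *input, NSData *accessUnit,
                             CMVideoFormatDescriptionRef fmt, CMTime pts)
{
    CMBlockBufferRef block = NULL;
    CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, accessUnit.length,
                                       kCFAllocatorDefault, NULL, 0, accessUnit.length,
                                       kCMBlockBufferAssureMemoryNowFlag, &block);
    CMBlockBufferReplaceDataBytes(accessUnit.bytes, block, 0, accessUnit.length);

    size_t sampleSize = accessUnit.length;
    CMSampleTimingInfo timing = { CMTimeMake(1, 30), pts, kCMTimeInvalid };  // ~30 fps assumed
    CMSampleBufferRef sample = NULL;
    CMSampleBufferCreate(kCFAllocatorDefault, block, true, NULL, NULL, fmt,
                         1, 1, &timing, 1, &sampleSize, &sample);

    if (sample != NULL && input.readyForMoreMediaData) {
        [input appendSampleBuffer:sample];
    }
    if (sample) CFRelease(sample);
    if (block) CFRelease(block);
}
```

You'd still need to create the AVAssetWriter with an output URL, call startWriting and startSessionAtSourceTime: before appending, and markAsFinished / finishWritingWithCompletionHandler: when the recording stops.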

I am attempting to build an app to accomplish nearly the exact same thing. We have reached the point where we can successfully stream H.264 from a Raspberry Pi camera to an iPhone, and the video is being displayed on the iPhone. The data is currently represented as CMSampleBuffers, but needs to be converted to CVPixelBuffers.
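For what it's worth, the conversion we're after seems to boil down to the sketch below, assuming the sample buffers carry either already-decoded frames or still-compressed H.264 (the names and structure are illustrative, not our actual code):

```objc
#import <CoreMedia/CoreMedia.h>
#import <VideoToolbox/VideoToolbox.h>

// Called by VideoToolbox once per decoded frame; imageBuffer is the CVPixelBuffer.
static void DidDecodeFrame(void *refCon, void *frameRefCon, OSStatus status,
                           VTDecodeInfoFlags flags, CVImageBufferRef imageBuffer,
                           CMTime pts, CMTime duration)
{
    // Hand imageBuffer to whatever needs a CVPixelBuffer here.
}

static void PixelBufferFromSample(CMSampleBufferRef sampleBuffer,
                                  CMVideoFormatDescriptionRef formatDescription)
{
    // If the sample buffer already carries a decoded frame, the pixel buffer
    // is just its image buffer:
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer != NULL) {
        return;   // use pixelBuffer directly
    }

    // Otherwise the buffer still holds compressed H.264 and has to go through
    // a VideoToolbox decompression session. (Created inline here for brevity;
    // in practice create the session once and reuse it.)
    VTDecompressionOutputCallbackRecord callback = { DidDecodeFrame, NULL };
    VTDecompressionSessionRef session = NULL;
    VTDecompressionSessionCreate(kCFAllocatorDefault, formatDescription,
                                 NULL, NULL, &callback, &session);
    VTDecompressionSessionDecodeFrame(session, sampleBuffer, 0, NULL, NULL);
}
```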

Between using ffmpeg and feeding CMSampleBuffers, may I ask which route ended up working best for you? Could you also share any insight or resources on how you did it?
