I'm trying to build an app that can quietly run in the background. It needs to detect other apps' or the system's incoming video and/or audio and, using only on-device resources, determine whether the caller might be a scammer.
It will tap into an escalating cascade of resources to do so. For video/image scam detection, it uses OpenCV to detect faces and then checks them against a known database of reported scam imagery. For audio scam calls, it relies on known techniques for detecting voice modulation in frequency and/or amplitude. Each video and/or audio result is relayed via a notification banner and also recorded in-app. Crucially, if a result is uncertain, users have the option to submit it to a global collaborative cloud database for investigative teams: a 60-second audio snippet, or the series of images in which faces were detected (roughly a 60-second equivalent).
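For concreteness, the on-device face-detection stage could be sketched roughly as follows (illustrative only, using Apple's Vision framework rather than OpenCV; the function name and completion type are placeholders, not an existing implementation):
import Vision

// Rough sketch of the on-device face-detection stage (illustrative only).
// Returns the normalized face bounding boxes found in a captured frame.
func detectFaces(in pixelBuffer: CVPixelBuffer, completion: @escaping ([CGRect]) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil, let faces = request.results as? [VNFaceObservation] else {
            completion([])
            return
        }
        // A real pipeline would crop these regions and compare them
        // against the database of reported scam imagery.
        completion(faces.map { $0.boundingBox })
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}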
In the end, we expect to deploy this app across most of Asia and Africa, thereby protecting generations of iPhone and iPad users.
However, we have not been able to find a method that does this, and we have found no documentation or correspondence that offers such technical guidance.
Please assist.
ReplayKit
Record or stream video from the screen and audio from the app and microphone using ReplayKit.
Posts under ReplayKit tag
15 Posts
I am recording video on iOS using ReplayKit and found that after copying data in the processSampleBuffer:withType: callback using memcpy, the data changes. This occurs particularly frequently when the screen content changes rapidly, making it look like the frames are overlapping.
I found that the values starting from byte 672 in the video data on my device often change. Here is the test demo:
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo: {
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

            uint8_t *oYData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            size_t oYSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
            uint8_t *oUVData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1); // not used in this repro
            size_t oUVSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
            if (oYSize <= 672) {
                // Unlock before the early return so the buffer is not left locked.
                CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
                return;
            }

            // Snapshot one byte of the Y plane, copy the whole plane, then compare.
            uint8_t tempValue = oYData[672];
            uint8_t *tYData = malloc(oYSize);
            memcpy(tYData, oYData, oYSize);
            if (tYData[672] != oYData[672]) {
                NSLog(@"$$$$$$$$$$$$$$$$------ t:%d o:%d temp:%d", tYData[672], oYData[672], tempValue);
            }
            free(tYData);

            CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            break;
        }
        default: {
            break;
        }
    }
}
Output:
$$$$$$$$$$$$$$$$------ t:110 o:124 temp:110
$$$$$$$$$$$$$$$$------ t:111 o:133 temp:111
$$$$$$$$$$$$$$$$------ t:124 o:138 temp:124
$$$$$$$$$$$$$$$$------ t:133 o:144 temp:133
$$$$$$$$$$$$$$$$------ t:138 o:151 temp:138
$$$$$$$$$$$$$$$$------ t:144 o:156 temp:144
$$$$$$$$$$$$$$$$------ t:151 o:135 temp:151
$$$$$$$$$$$$$$$$------ t:156 o:78 temp:156
$$$$$$$$$$$$$$$$------ t:135 o:76 temp:135
$$$$$$$$$$$$$$$$------ t:78 o:77 temp:78
$$$$$$$$$$$$$$$$------ t:76 o:80 temp:76
$$$$$$$$$$$$$$$$------ t:77 o:80 temp:77
$$$$$$$$$$$$$$$$------ t:80 o:79 temp:80
$$$$$$$$$$$$$$$$------ t:79 o:80 temp:79
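For comparison, a deep copy of the pixel buffer, made plane by plane into a separately allocated CVPixelBuffer, would look roughly like the Swift sketch below (the helper name, the nil creation attributes, and the assumption of a planar format like the one in the demo above are mine, not part of the demo):
import CoreVideo

// Sketch: deep-copy a pixel buffer plane by plane into a freshly allocated
// CVPixelBuffer, so the copy cannot change if the source is updated later.
// Assumes a planar format (e.g. 420v), as in the demo above.
func copyPixelBuffer(_ source: CVPixelBuffer) -> CVPixelBuffer? {
    var copy: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        nil,
                        &copy)
    guard let destination = copy else { return nil }

    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(destination, [])
    for plane in 0..<CVPixelBufferGetPlaneCount(source) {
        guard let src = CVPixelBufferGetBaseAddressOfPlane(source, plane),
              let dst = CVPixelBufferGetBaseAddressOfPlane(destination, plane) else { continue }
        let height = CVPixelBufferGetHeightOfPlane(source, plane)
        let srcBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(source, plane)
        let dstBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(destination, plane)
        // Copy row by row in case the two buffers use different row padding.
        for row in 0..<height {
            memcpy(dst + row * dstBytesPerRow,
                   src + row * srcBytesPerRow,
                   min(srcBytesPerRow, dstBytesPerRow))
        }
    }
    CVPixelBufferUnlockBaseAddress(destination, [])
    CVPixelBufferUnlockBaseAddress(source, .readOnly)
    return destination
}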
I understand that there are no delegate methods for this, and that positive consent from the user to record the screen has to be evaluated via the block parameter.
When the user denies the permission by mistake and then tries again, the alert does not show up. How do I reset the permission and get the alert below to appear again?
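For reference, the block-parameter evaluation mentioned above looks roughly like this (a sketch; exactly how a denial surfaces through the error is worth verifying):
import ReplayKit

// Sketch: consent is evaluated in startCapture's completion block.
// If the user declines the system prompt, the block receives an error
// and no sample buffers are delivered.
RPScreenRecorder.shared().startCapture(handler: { sampleBuffer, bufferType, error in
    // Sample buffers only arrive after the user has granted permission.
}, completionHandler: { error in
    if let error = error {
        print("Screen recording did not start: \(error.localizedDescription)")
    } else {
        print("Screen recording started with the user's consent")
    }
})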
We are currently working with the Enterprise APIs for visionOS 2 and have successfully obtained the necessary entitlements for passthrough camera access. Our goal is to capture images of external real-world objects using the passthrough camera of the Vision Pro, not just take screenshots or screen captures.
Our specific use case involves:
1. Accessing the raw passthrough camera feed.
2. Capturing high-resolution images of objects in the real world through the camera.
3. Processing and saving these images for further analysis within our custom enterprise app.
We would greatly appreciate any guidance, tutorials, or sample code that could help us achieve this functionality. If there are specific APIs or best practices for handling real-world image capture via passthrough cameras with the Enterprise APIs, please let us know.
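A minimal sketch of the main-camera access path, assuming the visionOS 2 Enterprise APIs (CameraFrameProvider, CameraVideoFormat, ARKitSession) behave as presented at WWDC24, might look like the following; the exact names and signatures are assumptions that should be verified against the current documentation:
import ARKit

// Sketch only: main-camera frame access with the visionOS 2 Enterprise APIs.
// Assumes the main-camera-access entitlement is present and that the
// CameraFrameProvider / CameraVideoFormat APIs match this shape.
func captureFrames() async throws {
    let session = ARKitSession()
    let provider = CameraFrameProvider()
    try await session.run([provider])

    guard let format = CameraVideoFormat
        .supportedVideoFormats(for: .main, cameraPositions: [.left])
        .first,
          let updates = provider.cameraFrameUpdates(for: format) else { return }

    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            let pixelBuffer = sample.pixelBuffer
            // Hand the pixel buffer to image-processing or saving code here.
            _ = pixelBuffer
        }
    }
}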
Hello all,
This is my first post on the developer forums.
I am developing an app that records the screen of my app, using AVAssetWriter and RPScreenRecorder startCapture.
Everything is working as it should in most cases. But sometimes, seemingly at random, the generated file is only a few KB and is corrupted. There seems to be no pattern in the device or iOS version; it can happen on various phones and iOS versions.
The steps I have followed in order to create the file are:
Configuring the AVAssetWriter
videoAssetWriter = try? AVAssetWriter(outputURL: url!, fileType: AVFileType.mp4)
let size = UIScreen.main.bounds.size
let width = (Int(size.width / 4)) * 4
let height = (Int(size.height / 4)) * 4
let videoOutputSettings: Dictionary<String, Any> = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: width,
    AVVideoHeightKey: height
]
videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
videoInput?.expectsMediaDataInRealTime = true
guard let videoInput = videoInput else { return }
if videoAssetWriter?.canAdd(videoInput) ?? false {
    videoAssetWriter?.add(videoInput)
}
let audioInputSettings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 12000,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputSettings)
audioInput?.expectsMediaDataInRealTime = true
guard let audioInput = audioInput else { return }
if videoAssetWriter?.canAdd(audioInput) ?? false {
    videoAssetWriter?.add(audioInput)
}
The urlForVideo function returns the URL in the documents directory, after appending and creating the folders needed. This part seems to be working as it should: the directories are created and the video file exists in them.
Start the recording
if RPScreenRecorder.shared().isRecording { return }
RPScreenRecorder.shared().startCapture(handler: { [weak self] sample, bufferType, error in
    if let error = error {
        onError?(error.localizedDescription)
    } else {
        if !RPScreenRecorder.shared().isMicrophoneEnabled {
            RPScreenRecorder.shared().stopCapture { error in
                if let error = error { return }
            }
            onError?("Microphone was not enabled")
        } else {
            succesCompletion?()
            succesCompletion = nil
            self?.processSampleBuffer(sample, with: bufferType)
        }
    }
}) { error in
    if let error = error {
        onError?(error.localizedDescription)
    }
}
Process the sampleBuffers
guard CMSampleBufferDataIsReady(sampleBuffer) else { return }
DispatchQueue.main.async { [weak self] in
    switch sampleBufferType {
    case .video:
        self?.handleVideoBaffer(sampleBuffer)
    case .audioMic:
        self?.add(sample: sampleBuffer, to: self?.audioInput)
    default:
        break
    }
}
// The add function from above
fileprivate func add(sample: CMSampleBuffer, to writerInput: AVAssetWriterInput?) {
    if writerInput?.isReadyForMoreMediaData ?? false {
        writerInput?.append(sample)
    }
}

// The handleVideoBaffer function from above
fileprivate func handleVideoBaffer(_ sampleBuffer: CMSampleBuffer) {
    if self.videoAssetWriter?.status == AVAssetWriter.Status.unknown {
        self.videoAssetWriter?.startWriting()
        self.videoAssetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    } else {
        if (self.videoInput?.isReadyForMoreMediaData) ?? false {
            if self.videoAssetWriter?.status == AVAssetWriter.Status.writing {
                self.videoInput?.append(sampleBuffer)
            }
        }
    }
}
Finally, stop the recording
func stopRecording(completion: @escaping (URL?, URL?, Error?) -> Void) {
    RPScreenRecorder.shared().stopCapture { error in
        if let error = error {
            completion(nil, nil, error)
            return
        }
        self.finish { videoURL, _ in
            completion(videoURL, nil, nil)
        }
    }
}
// The finish function mentioned above
fileprivate func finish(completion: @escaping (URL?, URL?) -> Void) {
    let dispatchGroup = DispatchGroup()
    dispatchGroup.enter()
    finishRecordVideo {
        dispatchGroup.leave()
    }
    dispatchGroup.notify(queue: .main) {
        print("Finish with url:\(String(describing: self.urlForVideo()))")
        completion(self.urlForVideo(), nil)
    }
}
// The finishRecordVideo mentioned above
fileprivate func finishRecordVideo(completion: @escaping () -> Void) {
    videoInput?.markAsFinished()
    audioInput?.markAsFinished()
    videoAssetWriter?.finishWriting {
        if let writer = self.videoAssetWriter {
            if writer.status == .completed {
                completion()
            } else if writer.status == .failed {
                // Print the error to find out what went wrong
                if let error = writer.error {
                    print("Video asset writing failed with error: \(error.localizedDescription). Url: \(writer.outputURL.path)")
                } else {
                    print("Video asset writing failed, but no error description available.")
                }
                completion()
            } else {
                completion()
            }
        }
    }
}
What could be the reason for the corrupted files being generated? This issue has never happened on my own devices, so there is no way to debug it in Xcode, and no errors show up in the logs.
Can you spot anything in the code that could cause this kind of issue? Do you have any suggestions on the problem at hand?
Thanks
When I use ReplayKit's exportClipToURL function on iOS to capture a 15-second replay, the resulting video quality is poor, with snowy artifacts over corrupted frames and distorted audio.
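For context, exportClipToURL is part of the iOS 15+ clip-recording flow, which looks roughly like this (a sketch; the output URL and the 15-second duration are placeholders):
import ReplayKit

// Sketch of the clip-recording flow around exportClip(to:duration:).
let recorder = RPScreenRecorder.shared()

// Start buffering the last few seconds of the screen in the background.
recorder.startClipBuffering { error in
    if let error = error {
        print("Could not start clip buffering: \(error.localizedDescription)")
    }
}

// Later, when something worth keeping has just happened, export a clip.
let outputURL = FileManager.default.temporaryDirectory.appendingPathComponent("clip.mp4")
recorder.exportClip(to: outputURL, duration: 15) { error in
    if let error = error {
        print("Export failed: \(error.localizedDescription)")
    } else {
        print("Clip exported to \(outputURL.path)")
    }
}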
Hi guys.
I am currently working on an app where one of the functionalities is screen recording. The function should record the ENTIRE phone screen and upload the result to a Firestore database. I am currently having issues with the recording: using ReplayKit, I am able to record only my app's own screen, not the entire phone screen. Essentially, the screen-recording function should allow a user to record the screens of other apps that the user switches to, for about 30 seconds. I have not been able to figure out how to record the entire screen. Does anybody know a way (package, API, or process) to achieve such functionality?
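A commonly cited route for recording beyond a single app is a ReplayKit Broadcast Upload Extension, launched from an RPSystemBroadcastPickerView; a rough sketch of the picker side follows (the bundle identifier is a placeholder):
import ReplayKit
import UIKit

// Sketch: present the system broadcast picker, which hands system-wide
// recording off to a Broadcast Upload Extension target.
// "com.example.MyApp.BroadcastExtension" is a placeholder bundle identifier.
func addBroadcastPicker(to view: UIView) {
    let picker = RPSystemBroadcastPickerView(frame: CGRect(x: 0, y: 0, width: 60, height: 60))
    picker.preferredExtension = "com.example.MyApp.BroadcastExtension"
    picker.showsMicrophoneButton = false
    view.addSubview(picker)
}

// The extension itself subclasses RPBroadcastSampleHandler and receives
// sample buffers for the whole screen, even while other apps are frontmost.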
Thank you in advance,
Hanav Modasiya
Hello, I'm new here. I was developing a screen-recording extension for an iOS application, based on LiveKit's RPBroadcastSampleHandler sample. In tests a few months ago it worked, but after the long wait for publishing authorization the extension stopped working. I noticed it is not just mine: screen sharing from Google Meet, Zoom Meetings, and others also doesn't work. I tested on an iPhone 14 Pro and an iPhone 6s, and nothing worked. The option to select the extension appears, but when clicking "Start Sharing" nothing happens, and after a few seconds the button returns to "Start Sharing"; the same behavior occurs in all tested apps. Does anyone know what is happening? Did Apple change the way recording works and no app has updated? Is it an internal iOS error? Nothing is logged in the terminal; it just doesn't work.
How do I stop screen recording/capture on iOS 14? Is there any native API for this? I have privacy/security concerns for my app, so kindly help me.
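For reference, detecting when the screen is being captured, so the app can hide sensitive content while it is, can be sketched with UIScreen.isCaptured and UIScreen.capturedDidChangeNotification (the observer class below is a made-up helper, not an existing API):
import UIKit

// Sketch: observe screen-capture state and react while the screen is being
// recorded or mirrored.
final class CaptureObserver {
    var onCaptureChanged: ((Bool) -> Void)?

    init() {
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(captureDidChange),
            name: UIScreen.capturedDidChangeNotification,
            object: nil
        )
    }

    @objc private func captureDidChange() {
        // true while recording, AirPlay mirroring, or other capture is active.
        onCaptureChanged?(UIScreen.main.isCaptured)
    }
}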
Thanks
We have an app with a broadcast extension with a RPBroadcastSampleHandler. The implementation is working fine, however for quite some users the extension suddenly crashes during the broadcast.
The stack trace of the crashing thread always looks like the shortened sample below. (Full crash reports and stack traces are attached to the submitted Feedbacks.) Looking at the stack trace, none of our code is running; only ReplayKit code handling XPC messages is active at that moment:
Thread:
#0 0x00000001e2cf342c in __pthread_kill ()
#1 0x00000001f6a51c0c in pthread_kill ()
#2 0x00000001a1bfaba0 in abort ()
#3 0x00000001a9e38588 in malloc_vreport ()
#4 0x00000001a9e35430 in malloc_zone_error ()
[...]
#18 0x0000000218ac91bc in -[RPBroadcastSampleHandler processPayload:completion:] ()
#19 0x0000000198b81360 in __NSXPCCONNECTION_IS_CALLING_OUT_TO_EXPORTED_OBJECT_S2__ ()
Is anyone aware of these issues with ReplayKit? Are there known workarounds? Could anything we're doing contribute to crashes like this?
Would greatly appreciate it if anyone from Apple DTS could look into this and flag the below Feedbacks to the relevant teams!
Feedback IDs: FB13949098, FB13949188
Hi, I have tried an RPBroadcastSampleHandler broadcast extension and RPSystemBroadcastPickerView, but I don't understand how I can mirror my iOS screen to an Android smart TV.
I'm trying to cast the screen from an iOS device to an Android device.
I'm leveraging ReplayKit on iOS to capture the screen and VideoToolbox for compressing the captured video data into H.264 format using CMSampleBuffers. Both iOS and Android are configured for H.264 compression and decompression.
While screen casting works flawlessly within the same platform (iOS to iOS or Android to Android), I'm encountering an error ("not in avi mode") on the Android receiver when casting from iOS. My research suggests that the underlying container formats for H.264 might differ between iOS and Android.
Data transmission over the TCP socket seems to be functioning correctly.
My question is:
Is there a way to ensure a common container format for H.264 compression and decompression across iOS and Android platforms?
Here's a breakdown of the iOS sender details:
Device: iPhone 13 mini running iOS 17
Development Environment: Xcode 15 with a minimum deployment target of iOS 16
Screen Capture: ReplayKit for capturing the screen and obtaining CMSampleBuffers
Video Compression: VideoToolbox for H.264 compression
Compression Properties:
kVTCompressionPropertyKey_ConstantBitRate: 6144000 (bitrate)
kVTCompressionPropertyKey_ProfileLevel: kVTProfileLevel_H264_Main_AutoLevel (profile and level)
kVTCompressionPropertyKey_MaxKeyFrameInterval: 60 (maximum keyframe interval)
kVTCompressionPropertyKey_RealTime: true (real-time encoding)
kVTCompressionPropertyKey_Quality: 1 (lowest quality)
NAL Unit Handling: Custom header is added to NAL units
Android Receiver Details:
Device: RedMi 7A running Android 10
Video Decoding: MediaCodec API for receiving and decoding the H.264 stream
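If the mismatch is really the bitstream packaging rather than a true container difference, one platform-neutral option is to send an H.264 elementary stream in Annex B form. The Swift sketch below (the function name and the assumption of 4-byte NAL length prefixes are mine) converts an AVCC-formatted CMSampleBuffer from VideoToolbox into Annex B bytes that MediaCodec can consume as a raw stream:
import Foundation
import CoreMedia

// Sketch: convert an H.264 CMSampleBuffer (AVCC, length-prefixed NAL units)
// into Annex B (start-code-prefixed) bytes for a raw elementary stream.
// Assumes 4-byte NAL length prefixes, which VideoToolbox emits by default.
func annexBData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
          let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }

    let startCode: [UInt8] = [0, 0, 0, 1]
    var output = Data()

    // Prepend the SPS/PPS parameter sets (needed at least before keyframes).
    var parameterSetCount = 0
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(formatDescription,
                                                       parameterSetIndex: 0,
                                                       parameterSetPointerOut: nil,
                                                       parameterSetSizeOut: nil,
                                                       parameterSetCountOut: &parameterSetCount,
                                                       nalUnitHeaderLengthOut: nil)
    for index in 0..<parameterSetCount {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(formatDescription,
                                                           parameterSetIndex: index,
                                                           parameterSetPointerOut: &pointer,
                                                           parameterSetSizeOut: &size,
                                                           parameterSetCountOut: nil,
                                                           nalUnitHeaderLengthOut: nil)
        if let pointer = pointer {
            output.append(contentsOf: startCode)
            output.append(pointer, count: size)
        }
    }

    // Replace each 4-byte big-endian length prefix with a start code.
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<Int8>?
    CMBlockBufferGetDataPointer(blockBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                totalLengthOut: &totalLength, dataPointerOut: &dataPointer)
    guard let base = dataPointer else { return nil }
    var offset = 0
    while offset + 4 <= totalLength {
        var nalLength: UInt32 = 0
        memcpy(&nalLength, base + offset, 4)
        nalLength = CFSwapInt32BigToHost(nalLength)
        output.append(contentsOf: startCode)
        base.withMemoryRebound(to: UInt8.self, capacity: totalLength) { bytes in
            output.append(bytes + offset + 4, count: Int(nalLength))
        }
        offset += 4 + Int(nalLength)
    }
    return output
}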
I have a conflict when I use ReplayKit: another app, or the system, is already using the recording facility. I want my app to be able to recognize them and let them use ReplayKit preferentially.
I am using ReplayKit's RPScreenRecorder to record my app. When I use it in a mixed immersive space, nothing is actually recorded. The video is entirely blank.
Is this a feature or a bug? I am trying to record everything the user sees, including passthrough. Is there another way to do this?
I'm currently working on a live screen-broadcasting app that lets users record their screen and save an MP4 video. I write the video file with AVAssetWriter, and it works fine. But when there is only 1 GB-2 GB of storage space remaining on the device, errors such as "Attempted to start an invalid broadcast session" frequently occur, and the video files cannot be played because assetWriter.finishWriting() is never called.
This occurs on these devices:
iPhone SE 3
iPhone 12 Pro Max
iPhone 13
iPad 19
iPad Air 5
I have tried setting AVAssetWriter's movieFragmentInterval to write movie fragments, and setting shouldOptimizeForNetworkUse to true/false, but it doesn't help; the video still cannot be played.
I want to know how to observe or catch this error. Thanks!
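One thing that can at least be checked up front is the free space on the volume before starting the broadcast or the writer; a sketch using volumeAvailableCapacityForImportantUsage (the 2 GB threshold and the helper name are placeholders, not documented limits):
import Foundation

// Sketch: query free space before starting a write, so the broadcast can be
// refused (or finished early) instead of failing mid-session.
// The 2 GB threshold is a placeholder, not a documented limit.
func hasEnoughFreeSpace(minimumBytes: Int64 = 2_000_000_000) -> Bool {
    let url = URL(fileURLWithPath: NSHomeDirectory())
    guard let values = try? url.resourceValues(forKeys: [.volumeAvailableCapacityForImportantUsageKey]),
          let capacity = values.volumeAvailableCapacityForImportantUsage else {
        return false
    }
    return capacity > minimumBytes
}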