When configuring MCBrowserViewController to look for nearby peers, I get the following error:
[MCNearbyServiceBrowser] NSNetServiceBrowser did not search with error dict [{
NSNetServicesErrorCode = "-72008";
NSNetServicesErrorDomain = 10;
}].
This happens despite adding the required entries to Info.plist, namely:
<key>NSLocalNetworkUsageDescription</key>
<string>Need permission to discover and connect to My Service running on peer iOS device</string>
<key>NSBonjourServices</key>
<array>
    <string>_my-server._tcp</string>
    <string>_my-server._udp</string>
</array>
Here is my code:
let browser = MCBrowserViewController(serviceType: "my-server", session: session)
browser.delegate = self
browser.minimumNumberOfPeers = kMCSessionMinimumNumberOfPeers
browser.maximumNumberOfPeers = 1
self.present(browser, animated: true, completion: nil)
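For completeness, the peer that should be discovered advertises with the same service type, which is what the _my-server._tcp and _my-server._udp entries in NSBonjourServices refer to. Roughly (the peer ID and delegate setup here are illustrative):
import MultipeerConnectivity

let peerID = MCPeerID(displayName: "Peer")
let advertiser = MCNearbyServiceAdvertiser(peer: peerID,
                                           discoveryInfo: nil,
                                           serviceType: "my-server")
advertiser.delegate = self   // a type conforming to MCNearbyServiceAdvertiserDelegate
advertiser.startAdvertisingPeer()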
I have tried everything, but it seems impossible to get MTKView to display the full range of colors of an HDR CIImage made from a CVPixelBuffer (in 10-bit YUV format). Only built-in layers such as AVCaptureVideoPreviewLayer, AVPlayerLayer, and AVSampleBufferDisplayLayer are able to fully display HDR images on iOS. Is MTKView incapable of displaying the full BT.2020 HLG color range? Why does MTKView clip colors even when I set colorPixelFormat to bgra10_xr or bgra10_xr_srgb?
convenience init(frame: CGRect, contentScale:CGFloat) {
self.init(frame: frame)
contentScaleFactor = contentScale
}
convenience init(frame: CGRect) {
let device = MetalCamera.metalDevice
self.init(frame: frame, device: device)
colorPixelFormat = .bgra10_xr
self.preferredFramesPerSecond = 30
}
override init(frame frameRect: CGRect, device: MTLDevice?) {
guard let device = device else {
fatalError("Can't use Metal")
}
guard let cmdQueue = device.makeCommandQueue(maxCommandBufferCount: 5) else {
fatalError("Can't make Command Queue")
}
commandQueue = cmdQueue
context = CIContext(mtlDevice: device, options: [CIContextOption.cacheIntermediates: false])
super.init(frame: frameRect, device: device)
self.framebufferOnly = false
self.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
}
And here is the rendering code:
override func draw(_ rect: CGRect) {
guard let image = self.image else {
return
}
let dRect = self.bounds
let drawImage: CIImage
let targetSize = dRect.size
let imageSize = image.extent.size
let scalingFactor = min(targetSize.width/imageSize.width, targetSize.height/imageSize.height)
let scalingTransform = CGAffineTransform(scaleX: scalingFactor, y: scalingFactor)
let translation:CGPoint = CGPoint(x: (targetSize.width - imageSize.width * scalingFactor)/2 , y: (targetSize.height - imageSize.height * scalingFactor)/2)
let translationTransform = CGAffineTransform(translationX: translation.x, y: translation.y)
let scalingTranslationTransform = scalingTransform.concatenating(translationTransform)
drawImage = image.transformed(by: scalingTranslationTransform)
let commandBuffer = commandQueue.makeCommandBufferWithUnretainedReferences()
guard let texture = self.currentDrawable?.texture else {
return
}
var colorSpace:CGColorSpace
if #available(iOS 14.0, *) {
colorSpace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
} else {
// Fallback on earlier versions
colorSpace = drawImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
}
NSLog("Image \(colorSpace.name), \(image.colorSpace?.name)")
context.render(drawImage, to: texture, commandBuffer: commandBuffer, bounds: dRect, colorSpace: colorSpace)
commandBuffer?.present(self.currentDrawable!, afterMinimumDuration: 1.0/Double(self.preferredFramesPerSecond))
commandBuffer?.commit()
}
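For reference, I have also been experimenting with configuring the backing layer directly, along these lines. I am assuming here that CAMetalLayer.colorspace is honored on iOS; that assumption may well be wrong and is part of what I am trying to confirm:
// Inside the view's init, after super.init(frame:device:).
// Assumption: setting the CAMetalLayer colorspace actually affects how
// bgra10_xr content is interpreted on iOS.
if let metalLayer = self.layer as? CAMetalLayer {
    metalLayer.pixelFormat = .bgra10_xr
    if #available(iOS 14.0, *) {
        metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)
    }
}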
I have an AVComposition played back via AVPlayer, where the AVComposition has multiple audio tracks and an audioMix applied. My question is: how can I compute audio meter values for the audio playing back through AVPlayer? Using MTAudioProcessingTap, it seems you can only get a callback for one track at a time. And if that route has to be used, it's not clear how to get the sample values of all the audio tracks at a given time in a single callback.
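What I've understood so far is that a tap is attached per track via AVMutableAudioMixInputParameters.audioTapProcessor, so metering all tracks would apparently need one tap per track, with the per-track values then combined by time. A rough sketch of that setup (the actual metering math is omitted, and the callback plumbing is only what I believe is the usual MTAudioProcessingTap pattern):
import AVFoundation
import MediaToolbox

func makeMeteringAudioMix(for composition: AVComposition) -> AVAudioMix {
    let audioMix = AVMutableAudioMix()
    var inputParameters: [AVMutableAudioMixInputParameters] = []

    for track in composition.tracks(withMediaType: .audio) {
        let params = AVMutableAudioMixInputParameters(track: track)

        var callbacks = MTAudioProcessingTapCallbacks(
            version: kMTAudioProcessingTapCallbacksVersion_0,
            clientInfo: nil,
            init: nil,
            finalize: nil,
            prepare: nil,
            unprepare: nil,
            process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
                // Pull the audio for this one track; RMS/peak per channel would be
                // computed here and stored somewhere keyed by track and time.
                _ = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                       flagsOut, nil, numberFramesOut)
            })

        var tap: Unmanaged<MTAudioProcessingTap>?
        let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                                MTAudioProcessingTapCreationFlags(kMTAudioProcessingTapCreationFlag_PostEffects),
                                                &tap)
        if status == noErr, let tap = tap {
            params.audioTapProcessor = tap.takeRetainedValue()
        }
        inputParameters.append(params)
    }

    audioMix.inputParameters = inputParameters
    return audioMix
}
Even with this, the taps fire independently, so I still don't see how to get the sample values of all tracks at one time instant in a single callback, which is really my question.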
I deleted the Derived Data of my project, and after that I can't remove any line of code in the project. The moment I delete any character in a function, Xcode 13 duplicates that line, as seen in the image below.
I can't even delete a comment. The code doesn't even build now. What do I do?
When I try to delete a big chunk of commented-out code, it deletes, but the deleted code then shows up again further up, inside another function.
I have the following class in Swift:
public class EffectModel {
var type:String
var keyframeGroup:[Keyframe<EffectParam>] = []
}
public enum EffectParam<Value:Codable>:Codable {
case scalar(Value)
case keyframes([Keyframe<Value>])
public enum CodingKeys: String, CodingKey {
case rawValue, associatedValue
}
...
...
}
public class Keyframe<T:Codable> : Codable {
public var time:CMTime
public var property:String
public var value:T
enum CodingKeys: String, CodingKey {
case time
case property
case value
}
...
}
The problem is that the compiler doesn't accept the generic EffectParam and gives the error
Generic parameter 'Value' could not be inferred
One way to solve the problem would be to redeclare the class EffectModel as
public class EffectModel <EffectParam:Codable>
But the problem is that this class is embedded in so many other classes that I would need to add a generic parameter to every class that holds an EffectModel, then to every class that uses those classes, and so on. That is not a solution for me. Is there any other way to solve the problem in Swift using other language constructs (such as protocols)?
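To make the question more concrete, this is the kind of direction I mean by other language constructs: erase the Value type behind a non-generic protocol so that EffectModel itself stays non-generic. The names KeyframeGroup and AnyKeyframeGroup below are made up, and this sketch deliberately ignores Codable conformance for EffectModel, which is part of what I am unsure about:
import CoreMedia

// Hypothetical non-generic facade over the generic keyframes.
public protocol KeyframeGroup {
    var parameterName: String { get }
    func keyframeTimes() -> [CMTime]
}

// Wraps the existing generic Keyframe array behind the protocol.
public struct AnyKeyframeGroup<Value: Codable>: KeyframeGroup {
    public let parameterName: String
    public let keyframes: [Keyframe<Value>]

    public func keyframeTimes() -> [CMTime] {
        return keyframes.map { $0.time }
    }
}

public class EffectModel {
    var type: String = ""
    // Non-generic existential, so clients of EffectModel need no generic parameter.
    var keyframeGroups: [KeyframeGroup] = []
}
Is something along these lines the recommended approach, or is there a better construct that also keeps Codable support?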
I have doubts about the Core Image coordinate system, the way transforms are applied, and the way the image extent is determined. I couldn't find much in the documentation or on the internet, so I tried the following code to rotate a CIImage and display it in a UIImageView. As I understand it, there is no absolute coordinate system in Core Image; the bottom-left corner of an image is supposed to be (0,0). But my experiments show something else.
I created a prototype that rotates the CIImage by pi/10 radians on each button click. Here is the code I wrote.
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
imageView.contentMode = .scaleAspectFit
let uiImage = UIImage(contentsOfFile: imagePath)
ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
imageView.image = uiImage
}
private var currentAngle = CGFloat(0)
private var ciImage:CIImage!
private var ciContext = CIContext()
@IBAction func rotateImage() {
let extent = ciImage.extent
let translate = CGAffineTransform(translationX: extent.midX, y: extent.midY)
let uiImage = UIImage(contentsOfFile: imagePath)
currentAngle = currentAngle + CGFloat.pi/10
let rotate = CGAffineTransform(rotationAngle: currentAngle)
let translateBack = CGAffineTransform(translationX: -extent.midX, y: -extent.midY)
let transform = translateBack.concatenating(rotate.concatenating(translate))
ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
ciImage = ciImage.transformed(by: transform)
NSLog("Extent \(ciImage.extent), Angle \(currentAngle)")
let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
imageView.image = UIImage(cgImage: cgImage!)
}
But in the logs, I see that the extent of the image has negative origin.x and origin.y values. What does that mean? Negative relative to what, and where exactly is (0,0) then? What exactly is the image extent, and how does the Core Image coordinate system work?
2021-09-24 14:43:29.280393+0400 CoreImagePrototypes[65817:5175194] Metal API Validation Enabled
2021-09-24 14:43:31.094877+0400 CoreImagePrototypes[65817:5175194] Extent (-105.0, -105.0, 1010.0, 1010.0), Angle 0.3141592653589793
2021-09-24 14:43:41.426371+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.6283185307179586
2021-09-24 14:43:42.244703+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.9424777960769379
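If I work through the numbers, they are consistent with an 800 x 800 source image rotated about its centre at (400, 400): for an angle of pi/10, the axis-aligned bounding box of the rotated square has side 800(cos 18° + sin 18°), which is about 1008, and centring that box on (400, 400) puts its origin near (-104, -104). Core Image then appears to round the extent outward to integer coordinates, giving the logged (-105.0, -105.0, 1010.0, 1010.0). So the extent seems to be the bounding box of the transformed image, expressed in the same working space in which the original image occupied (0, 0, 800, 800). Is that the right way to think about it?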
I have prototyped a multilayer timeline with custom cells, where:
a. Each cell can have a different size. Some cells can be larger than the visible rect of the scroll view,
b. The gap between cells may vary (even though it appears the same in the picture below), except in the first (base) layer, where the cell gap is fixed at 2 points,
c. Each cell can be selected and trimmed/expanded from either end using a UIPanGestureRecognizer. Trimming/expansion has custom rules. In the base layer, a cell simply pushes the other cells as it expands or contracts. In the other layers, however, trimming or expansion has to respect the boundaries of neighbouring cells,
d. The timeline can be zoomed horizontally, which has the effect of scaling the cells,
e. Cells can be dragged and dropped onto other rows, subject to custom rules.
I have implemented all of this using UIScrollView. By default, all cells are initialized and added to the UIScrollView, whether they are visible or not. But now I am hitting limits as I draw more content in each cell, which means I need to reuse cells and draw only visible content. I discussed this with Apple engineers in the WWDC labs, and one of the engineers suggested I use UICollectionView with a custom layout, where I can get a lot of functionality for free (such as cell reuse and drag and drop). He suggested I look at the WWDC 2018 video (session 225) on UICollectionView. But as I look at custom layouts for UICollectionView, the following is not clear to me:
Q1. How do I manually trim/expand selected cells in a UICollectionView with a custom layout using a pan gesture? In the UIScrollView case, I simply attach a UIPanGestureRecognizer to the cell and trim or expand its frame (respecting the given boundary conditions).
Q2. How do I scale all the cells by a given zoom factor? With UIScrollView, I simply scale the frame of each cell and then calculate the contentOffset to reposition the scroll view around the point of zoom.
Also, even with a UICollectionView containing just one cell whose width is, say, 10x the UICollectionView's frame width, I will need further optimization so that content is drawn only for the visible portion rather than the whole cell. How can a UICollectionViewCell draw only the part of itself that is visible on screen?
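To make Q2 concrete, below is roughly the direction I imagine for a custom layout that owns the zoom factor: per-cell widths come from a hypothetical data-source protocol (TimelineLayoutDataSource, a made-up name), the layout multiplies them by the zoom factor, and a pinch gesture only updates that factor and invalidates the layout. This is just a sketch under those assumptions, not working code from my project:
import UIKit

protocol TimelineLayoutDataSource: AnyObject {
    // Unscaled width for a cell, e.g. duration in seconds * points per second.
    func widthOfItem(at indexPath: IndexPath) -> CGFloat
    func rowHeight(forSection section: Int) -> CGFloat
}

final class TimelineLayout: UICollectionViewLayout {
    weak var dataSource: TimelineLayoutDataSource?
    var zoomFactor: CGFloat = 1 { didSet { invalidateLayout() } }
    var interItemGap: CGFloat = 2

    private var cache: [IndexPath: UICollectionViewLayoutAttributes] = [:]
    private var contentSize: CGSize = .zero

    override func prepare() {
        super.prepare()
        cache.removeAll()
        guard let collectionView = collectionView, let dataSource = dataSource else { return }
        var y: CGFloat = 0
        var maxX: CGFloat = 0
        for section in 0..<collectionView.numberOfSections {
            var x: CGFloat = 0
            let rowHeight = dataSource.rowHeight(forSection: section)
            for item in 0..<collectionView.numberOfItems(inSection: section) {
                let indexPath = IndexPath(item: item, section: section)
                let width = dataSource.widthOfItem(at: indexPath) * zoomFactor
                let attributes = UICollectionViewLayoutAttributes(forCellWith: indexPath)
                attributes.frame = CGRect(x: x, y: y, width: width, height: rowHeight)
                cache[indexPath] = attributes
                x += width + interItemGap
            }
            maxX = max(maxX, x)
            y += rowHeight
        }
        contentSize = CGSize(width: maxX, height: y)
    }

    override var collectionViewContentSize: CGSize { contentSize }

    override func layoutAttributesForElements(in rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
        // Only attributes intersecting the visible rect are returned, so cells get reused.
        return cache.values.filter { $0.frame.intersects(rect) }
    }

    override func layoutAttributesForItem(at indexPath: IndexPath) -> UICollectionViewLayoutAttributes? {
        return cache[indexPath]
    }
}
A pinch recognizer on the collection view would then just update zoomFactor (clamped to a range) and adjust the content offset around the pinch location, similar to what I currently do with UIScrollView. Is that the intended approach with a custom layout?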
Dear AVFoundation/Camera Capture Engineers,
Can you please walk us through the new camera capture APIs and functionality available in iOS 15 and on the iPhone 13 devices? Specifically, is the cinematic video capture mode available via API to third-party developers?
I have an AVVideoComposition with a customCompositor. The issue is that sometimes AVPlayer crashes on seeking, especially when the seek tolerance is set to CMTime.zero. The reason for the crash is that request.sourceFrame(byTrackID: trackId) returns nil even though it should not. Below is a sample of 3 instructions and their time ranges; all contain only track 1.
2021-09-09 12:27:50.773825+0400 VideoApp[86227:6913831] Instruction 0.0, 4.0
2021-09-09 12:27:50.774105+0400 VideoApp[86227:6913831] ...Present TrackId 1 in this instruction
2021-09-09 12:27:50.774196+0400 VideoApp[86227:6913831] Instruction 4.0, 5.0
2021-09-09 12:27:50.774258+0400 VideoApp[86227:6913831] ...Present TrackId 1 in this instruction
2021-09-09 12:27:50.774312+0400 VideoApp[86227:6913831] ...Present TrackId 1 in this instruction
2021-09-09 12:27:50.774369+0400 VideoApp[86227:6913831] Instruction 5.0, 18.845
2021-09-09 12:27:50.774426+0400 VideoApp[86227:6913831] ...Present TrackId 1 in this instruction
VideoApp /VideoEditingCompositor.swift:141: Fatal error: No pixel buffer for track 1, 4.331427
Here is the simple line of code that produces this error:
guard let pixelBuffer = request.sourceFrame(byTrackID: trackId) else {
fatalError("No pixel buffer for track \(trackId), \(request.compositionTime.seconds)")
}
As can be seen, the time 4.331427 seconds is well within the range of the second instruction, which runs from 4.0 to 5.0 seconds. Why does the custom compositor get a nil pixel buffer then? And the times are random (the time values at which it crashes keep changing); the next time I run the program and seek specifically to that time, it returns a valid pixel buffer! So it has nothing to do with a particular time instant. Playback (without seeking) is also totally fine. It looks like something in the AVFoundation framework rather than in the app.
Has anyone ever seen such an error?
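For now I am considering failing the request instead of crashing while I investigate, roughly like this (sketch; the error domain and message are just placeholders):
guard let pixelBuffer = request.sourceFrame(byTrackID: trackId) else {
    // Hypothetical defensive handling: fail this request rather than crash the app.
    request.finish(with: NSError(domain: "VideoEditingCompositor", code: -1,
                                 userInfo: [NSLocalizedDescriptionKey:
                                    "No pixel buffer for track \(trackId) at \(request.compositionTime.seconds)"]))
    return
}
But that only hides the symptom; I'd still like to understand why the frame is missing in the first place.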
I had a project that was created in Xcode 12. I then opened it in Xcode 13 beta 5 and made a lot of edits. Now it does not build at all in Xcode 12.5. I tried a clean build, but it still doesn't work.
Command CompileSwift failed with a nonzero exit code
1. Apple Swift version 5.4 (swiftlang-1205.0.26.9 clang-1205.0.19.55)
2. Running pass 'Module Verifier' on function '@"xxxxxxxxxxxxxxxxH0OSo014AVAsynchronousA18CompositionRequestCtF"'
0 swift-frontend 0x000000010796fe85 llvm::sys::PrintStackTrace(llvm::raw_ostream&) + 37
1 swift-frontend 0x000000010796ee78 llvm::sys::RunSignalHandlers() + 248
2 swift-frontend 0x0000000107970446 SignalHandler(int) + 262
3 libsystem_platform.dylib 0x00007fff204fbd7d _sigtramp + 29
4 libdyld.dylib 0x00007fff204d0ce8 _dyld_fast_stub_entry(void*, long) + 65
5 libsystem_c.dylib 0x00007fff2040b406 abort + 125
6 swift-frontend 0x0000000102b92a31 swift::performFrontend(llvm::ArrayRef<char const*>, char const*, void*, swift::FrontendObserver*)::$_1::__invoke(void*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool) + 1169
7 swift-frontend 0x00000001078c52d0 llvm::report_fatal_error(llvm::Twine const&, bool) + 288
8 swift-frontend 0x00000001078c51ab llvm::report_fatal_error(char const*, bool) + 43
9 swift-frontend 0x000000010786537f (anonymous namespace)::VerifierLegacyPass::runOnFunction(llvm::Function&) + 111
10 swift-frontend 0x00000001077ff0b9 llvm::FPPassManager::runOnFunction(llvm::Function&) + 1353
11 swift-frontend 0x00000001077fe3a0 llvm::legacy::FunctionPassManagerImpl::run(llvm::Function&) + 112
12 swift-frontend 0x0000000107805835 llvm::legacy::FunctionPassManager::run(llvm::Function&) + 341
13 swift-frontend 0x0000000102f3e3e8 swift::performLLVMOptimizations(swift::IRGenOptions const&, llvm::Module*, llvm::TargetMachine*) + 1688
14 swift-frontend 0x0000000102f3f486 swift::performLLVM(swift::IRGenOptions const&, swift::DiagnosticEngine&, llvm::sys::SmartMutex<false>*, llvm::GlobalVariable*, llvm::Module*, llvm::TargetMachine*, llvm::StringRef, swift::UnifiedStatsReporter*) + 2582
15 swift-frontend 0x0000000102b9e863 performCompileStepsPostSILGen(swift::CompilerInstance&, std::__1::unique_ptr<swift::SILModule, std::__1::default_delete<swift::SILModule> >, llvm::PointerUnion<swift::ModuleDecl*, swift::SourceFile*>, swift::PrimarySpecificPaths const&, int&, swift::FrontendObserver*) + 3683
16 swift-frontend 0x0000000102b8fd22 swift::performFrontend(llvm::ArrayRef<char const*>, char const*, void*, swift::FrontendObserver*) + 6370
17 swift-frontend 0x0000000102b11e82 main + 1266
18 libdyld.dylib 0x00007fff204d1f3d start + 1
error: Abort trap: 6 (in target 'VideoEditing' from project 'VideoEditing')
I want to know under what conditions -[AVAsynchronousVideoCompositionRequest sourceFrameByTrackID:] returns nil. I have a custom compositor, and when seeking AVPlayer I find the method sometimes returns nil, particularly when the seek tolerance is set to zero. There are no issues if I simply play the composition. Only seeking throws these errors, and only some of the time.
This is a weird Xcode 13 beta bug (including beta 5). Metal Core Image kernels fail to load from the library, giving the error:
2021-08-26 12:05:23.806226+0400 MetalFilter[23183:1751438] [api] +[CIKernel kernelWithFunctionName:fromMetalLibraryData:options:error:] Cannot initialize kernel with given library data.
[ERROR] Failed to create CIColorKernel: Error Domain=CIKernel Code=6 "(null)" UserInfo={CINonLocalizedDescriptionKey=Cannot initialize kernel with given library data.}
But there is no such error with Xcode 12.5; the kernel loads fine. The error occurs only with the Xcode 13 betas.
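For reference, this is roughly how I load the kernel (simplified; the resource and function names here are placeholders for the real ones in my project):
import CoreImage

func makeKernel() throws -> CIColorKernel {
    // "MyKernels.metallib" stands in for the compiled Core Image Metal library in the bundle.
    guard let url = Bundle.main.url(forResource: "MyKernels", withExtension: "metallib"),
          let data = try? Data(contentsOf: url) else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try CIColorKernel(functionName: "myColorKernel", fromMetalLibraryData: data)
}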
I filed a bug, and the status in Feedback Assistant now shows "Potential fix identified - In iOS 15". But the bug is still visible in iOS 15 beta 6. What does that status mean? Does it mean the fix will be in the main iOS 15 build?
Is it necessary for a custom AVVideoCompositionInstruction to have at least one video asset track? If one just needs to generate video from motion graphics and pictures, does one still need to add a dummy video track to the composition first?
The AVAudioSession API for setting stereo orientation on supported devices (iPhone XS and above) is completely broken on the iOS 15 betas. Even the 'Stereo Audio Capture' sample code no longer works on iOS 15. AVCaptureAudioDataOutput also fails when stereo orientation is set on AVAudioSession. I am wondering whether Apple engineers are aware of this issue and whether it will be fixed in upcoming betas and, most importantly, in the final iOS 15 release.
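For reference, this is roughly the configuration that no longer works for me on the iOS 15 betas (error handling and data source selection are simplified, and the category options are illustrative):
import AVFoundation

func configureStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])

    // Find the built-in mic and a data source that supports the stereo polar pattern.
    if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
       let dataSource = builtInMic.dataSources?.first(where: { $0.supportedPolarPatterns?.contains(.stereo) == true }) {
        try dataSource.setPreferredPolarPattern(.stereo)
        try builtInMic.setPreferredDataSource(dataSource)
        try session.setPreferredInput(builtInMic)
    }

    // Setting the stereo input orientation is the part that fails on the iOS 15 betas.
    try session.setPreferredInputOrientation(.portrait)
    try session.setActive(true)
}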