Hi, I'm trying to send text over a WebSocket using the Network framework. I have the following code:

func send(data: Data) {
    let metaData = NWProtocolWebSocket.Metadata(opcode: .text)
    let context = NWConnection.ContentContext(identifier: "context", metadata: [metaData])
    self.connection.send(content: data, contentContext: context, isComplete: true, completion: .contentProcessed({ error in
        if let error = error {
            self.connectionDidFail(error: error)
            return
        }
        print("connection \(self.id) did send")
    }))
}

This sends the text to the NWConnection, but it logs an error: __nw_frame_claim Claiming bytes failed because start (7) is beyond end (0 - 0) (with 7 being the length of the data). Is there something I'm doing wrong here? Any ideas?
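For context, the connection itself is set up along these lines (a simplified sketch; the endpoint URL is a placeholder):

import Foundation
import Network

// Simplified sketch of the connection setup; the ws:// URL is a placeholder.
let parameters = NWParameters.tcp
let wsOptions = NWProtocolWebSocket.Options()
wsOptions.autoReplyPing = true
// Put the WebSocket protocol on top of TCP so sends are framed as WebSocket messages.
parameters.defaultProtocolStack.applicationProtocols.insert(wsOptions, at: 0)

let connection = NWConnection(to: .url(URL(string: "ws://example.com/socket")!), using: parameters)
connection.start(queue: .main)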
I have an app that uses HomeKit. I want to give the user the ability to update an HMAccessory's name. I use the function below, and it works properly and updates the HMAccessory name, but when I open the Home app, the name never gets updated.
I implemented HMAccessoryDelegate to observe changes, and that works properly: if I rename an accessory in the Home app, the change propagates to my app. But if I perform the rename from my app, the change doesn't reach the Home app. It's as if the Home app doesn't use the HMAccessoryDelegate.
Is this a known issue?
func updateName(to: String) {
    hmAccessory.updateName(to) { (error) in
        if let error = error {
            print(error)
        } else {
            print("Changed Name to", to)
        }
    }
}
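For reference, the delegate hook I use to observe renames looks roughly like this (a simplified sketch; AccessoryController is a placeholder for the object that owns the accessory):

import HomeKit

// Sketch of the HMAccessoryDelegate conformance mentioned above.
extension AccessoryController: HMAccessoryDelegate {
    func accessoryDidUpdateName(_ accessory: HMAccessory) {
        // This fires when the accessory is renamed from the Home app,
        // but renames made via updateName(_:) in my own app never show up in the Home app.
        print("Accessory renamed to", accessory.name)
    }
}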
Hi,
I'm trying out Xcode Cloud but can't get it to build in my scenario. The use case is the following:
My project has multiple Swift packages that depend on each other. "App" has a local package dependency called "SPMKit", and "SPMKit" has a local package dependency called "SPMKit2".
The issue I run into is that if "SPMKit2" has any dependencies of its own, it doesn't resolve in Xcode Cloud.
Are local dependencies like this not supported in Xcode Cloud? All the packages are in the same directory as my app, and I'd like to avoid having to put each package in its own Git repository.
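Roughly, the manifest for "SPMKit" declares the local dependency like this (a simplified sketch of my setup):

// swift-tools-version:5.9
// SPMKit/Package.swift: sketch of the local dependency chain.
import PackageDescription

let package = Package(
    name: "SPMKit",
    products: [
        .library(name: "SPMKit", targets: ["SPMKit"])
    ],
    dependencies: [
        // SPMKit2 lives next to SPMKit in the same directory as the app.
        .package(path: "../SPMKit2")
    ],
    targets: [
        .target(name: "SPMKit", dependencies: [
            .product(name: "SPMKit2", package: "SPMKit2")
        ])
    ]
)

"SPMKit2" is declared the same way; it's SPMKit2's own dependencies that fail to resolve in Xcode Cloud.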
I'm working on feeding an AVPlayer into a Metal view. The current test video I'm playing is ProRes 422.
If I read the buffer as RGB pixels (kCVPixelFormatType_32BGRA), everything works as expected. But this pixel format requires more processing from the ProRes decoder, and per Apple's WWDC session the preferred format is kCVPixelFormatType_422YpCbCr16.
So there are a couple of issues I'm running into when using the YpCbCr format.
Issues / Questions:
Is there standard fragment shader code to convert the texture from YpCbCr to RGB? When I change the pixel format, everything becomes grainy / distorted. I assume I need a function in the Metal pipeline to convert it. Is a fragment shader even the best way to handle the conversion? I need to get to RGB for rendering to a view.
Is there a major difference between kCVPixelFormatType_422YpCbCr16 and kCVPixelFormatType_422YpCbCr16BiPlanarVideoRange? I know the latter brings the texture in as two planes (which seems easier and more performant per Activity Monitor), but Apple recommended the traditional single-plane kCVPixelFormatType_422YpCbCr16. Just curious which is the best option to use. (For context, a sketch of how I'm requesting the format from the player output follows.)
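This is roughly how the YpCbCr format gets requested (a simplified sketch, assuming an AVPlayerItemVideoOutput-based pipeline; playerItem stands in for the actual AVPlayerItem):

import AVFoundation
import CoreVideo

// Sketch: ask the decoder for 4:2:2 16-bit YpCbCr frames that are Metal-compatible.
let attributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_422YpCbCr16,
    kCVPixelBufferMetalCompatibilityKey as String: true
]
let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: attributes)
playerItem.add(videoOutput)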
I'm making an app that reads a ProRes file, processes each frame through Metal to resize and scale it, then outputs a new ProRes file. In the future the app will support other codecs, but for now just ProRes. I'm reading the ProRes 422 buffers in the kCVPixelFormatType_422YpCbCr16 pixel format, which is what Apple recommends in this video: https://developer.apple.com/wwdc20/10090?time=599.
When the MTLTexture is run through a Metal Performance Shader, the color space seems to be forced to RGB, or YpCbCr textures simply aren't handled, because the output is all green/purple. If you look at the render code, you will see a commented-out block that just blit-copies the output texture; if you perform that copy instead of scaling through MPS, the output color space is fine. So it appears the issue comes from Metal Performance Shaders.
Side note: I noticed that with this format, the YpCbCr texture comes in as a single plane. I thought it was preferred to handle this as two separate planes? That said, two separate planes would make my app more complicated, since I would need to scale both planes or merge them into RGB. But I'm going for the most performance possible.
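The scale pass in the sample project looks something like this (simplified; device, commandBuffer and the texture names are placeholders for the objects in the project):

import Metal
import MetalPerformanceShaders

// Sketch of the 50% downscale pass, using MPSImageLanczosScale as an example scale kernel.
let scaler = MPSImageLanczosScale(device: device)
let transform = MPSScaleTransform(scaleX: 0.5, scaleY: 0.5, translateX: 0, translateY: 0)
withUnsafePointer(to: transform) { transformPtr in
    scaler.scaleTransform = transformPtr
    scaler.encode(commandBuffer: commandBuffer,
                  sourceTexture: sourceTexture,
                  destinationTexture: destinationTexture)
}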
A sample project can be found here: https://www.dropbox.com/scl/fo/jsfwh9euc2ns2o3bbmyhn/AIomDYRhxCPVaWw9XH-qaN0?rlkey=sp8g0sb86af1u44p3xy9qa3b9&dl=0
Inside the supporting files, there is a test movie. For ease, I would move it somewhere easily accessible (e.g. the Desktop).
Load and run the example project.
Click 'Select Video'
Select the video you placed on your Desktop
It will now output a new video next to the selected one, named "Output.mov"
The new video should just be scaled to 50%, but the color space is all wrong.
Below is a photo of before and after the Metal Performance Shaders pass.
I'm looking to see if there are any suggested libraries / frameworks for transferring files between Macs. The setup is a few dozen Macs (each with 10Gb networking) and a custom app that handles the transfer of files between them on a local network.
I looked into raw TCP sockets, but the file sizes will make this tricky; some files can be up to 150 GB. Maybe SFTP or AFP? I'm not sure what that looks like in code, though, and I know I don't want to be mounting computers in Finder. Ideally it's an app that hosts its own server to handle the transfers.
Any insight on this would be helpful. Thanks!
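For scale, a raw-socket approach would look something like the chunked send loop below (a minimal sketch using Network.framework, assuming an already-established NWConnection; the function and parameter names are placeholders):

import Foundation
import Network

// Sketch: stream a large file over an existing NWConnection in fixed-size chunks,
// so a 150 GB file never has to sit in memory at once. Real code would also need
// backpressure / flow control rather than queueing sends in a tight loop.
func sendFile(at url: URL, over connection: NWConnection, chunkSize: Int = 4 * 1024 * 1024) throws {
    let handle = try FileHandle(forReadingFrom: url)
    defer { try? handle.close() }

    while let chunk = try handle.read(upToCount: chunkSize), !chunk.isEmpty {
        connection.send(content: chunk, completion: .contentProcessed { error in
            if let error = error {
                print("send failed: \(error)")
            }
        })
    }
    // Mark the end of the transfer.
    connection.send(content: nil, contentContext: .finalMessage, isComplete: true, completion: .idempotent)
}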
I'm using this library for encoding / decoding RSA keys. https://github.com/Kitura/BlueRSA
It's worked fine up until macOS Sequoia. The issue I'm having is that the tests pass in Debug mode, but the moment I switch to Release mode, the library no longer works.
I narrowed this down to the Swift optimization level.
If I change the Release configuration to no optimization, the library works again. Where in the code could this be an issue? How would optimization break the functionality?
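For what it's worth, the kind of pattern that typically breaks only under the optimizer is a pointer escaping a withUnsafeBytes closure. This is a purely illustrative sketch, not code taken from BlueRSA:

import Foundation

// Illustrative only (not from BlueRSA). This often appears to work in Debug builds
// but misbehaves under -O, because the pointer is only valid inside the closure
// and the optimizer is free to reuse that memory once the closure returns.
func brokenBaseAddress(of data: Data) -> UnsafeRawPointer {
    var escaped: UnsafeRawPointer!
    data.withUnsafeBytes { buffer in
        escaped = buffer.baseAddress!   // undefined behavior once the closure exits
    }
    return escaped
}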