What am I doing wrong when trying to draw a VNC frame-buffer quickly to an NSView on macOS?

Greetings!

I'm using libvncclient to build a specialized VNC viewer on macOS (developing on Mojave). I've already written this app completely in C++ with FLTK on Linux, *BSD, and even macOS, but now I want something that feels 'native' on macOS, so I chose Cocoa and Objective-C. I don't wish to use Swift right now. I'm also building the UI entirely programmatically, NOT with Xcode; I really don't want to use Xcode because of how buggy it is on Mojave.

For the VNC viewer, I'm using a subclassed NSView with the following setup. This viewer is embedded in an NSScrollView:

Code Block objective-c
- (instancetype)initWithFrame:(NSRect)frame
{
    self = [super initWithFrame:frame];
    return self;
}

- (BOOL)isFlipped
{
    return YES;
}

- (BOOL)isOpaque   /* note: the NSView getter is isOpaque, not opaque */
{
    return YES;
}


There are two major events that are used for drawing: invalidating the changed rectangle(s) on the NSView viewer and then actually telling the viewer to draw. Here's the invalidating part:

Code Block objective-c
/* this fires *every* time something changes on the VNC server's screen */
static void handleFrameBufferUpdate(rfbClient *cl, int x, int y, int w, int h)
{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSRect rUp = NSMakeRect(x, y, w, h);
        [vncViewer setNeedsDisplayInRect:rUp];
    });
}


Here is the callback that tells the view to actually draw itself once a batch of those invalidation calls has finished:

Code Block objective-c
static void handleFinishedFrameBufferUpdate(rfbClient *cl)
{
    dispatch_async(dispatch_get_main_queue(), ^{
        [vncViewer displayIfNeededIgnoringOpacity];
    });
}


All of the pixel data from the VNC server is written by libvncclient into a frame-buffer: an array of uint8_t in RGBA order, 32 bits per pixel, 8 bits per sample, 4 samples per pixel.

For the actual drawing, here is the relevant code (the pointer to the frame-buffer array of uint8_t is referred to as: vnc.vncClient->frameBuffer below):

Code Block objective-c
- (void)drawRect:(NSRect)dirtyRect
{
    //...
    [vnc setBytesPerPixel:vnc.vncClient->format.bitsPerPixel / 8];
    [vnc setBuffSize:vnc.vncClient->width * vnc.vncClient->height * vnc.bytesPerPixel];

    NSImage *img = [[NSImage alloc] initWithSize:NSMakeSize(vnc.vncClient->width,
                                                            vnc.vncClient->height)];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:&vnc.vncClient->frameBuffer
                      pixelsWide:vnc.vncClient->width
                      pixelsHigh:vnc.vncClient->height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:vnc.vncClient->width * [vnc bytesPerPixel]
                    bitsPerPixel:32];
    [img addRepresentation:rep];
    [img drawInRect:[self bounds]];
    //...
}


When I connect to a VNC server, the screen isn't fully drawn; instead I get multiple white, non-image rectangles that shift around with each update:

(I'd like to post a pic here, but this system doesn't allow it)

I've found, unfortunately, that either the server or libvncclient sets the alpha byte of every pixel to 0, effectively hiding it. So I added this hackish code to force every pixel to full alpha before doing any drawing:

Code Block objective-c
/* if there's an alpha byte, set it to 255 */
if ([vnc bytesPerPixel] == 2 || [vnc bytesPerPixel] == 4)
{
    for (int i = [vnc bytesPerPixel] - 1; i < [vnc buffSize]; i += [vnc bytesPerPixel])
        vnc.vncClient->frameBuffer[i] = 255;
}


The viewer now fills in more of the image, but I'm still getting some shifting areas of white rectangles.

(I'd like to post a picture here, but this system won't allow it)

Is there any way to get the NSView to 'retain' what has already been drawn on it, instead of clearing it on each update?

Is a subclassed NSView the right tool for this job, or should I be using something else? The VNC server updates the frame-buffer many times per second, and I need the viewer to stay responsive and not 'laggy'.

Thanks!
