Drawing in multiply blend mode with Metal

I have posted a few questions recently about the drawing app I have been building over the last few years. I consider myself an intermediate coder, and my app is nearing completion. There are just a few things I have been unable to figure out on my own and need help with.

One feature I was able to achieve with a UIImageView was drawing brushstrokes on the screen in multiply blend mode. I loved this! However, since switching over to Metal, I have been unable to achieve the same effect. I have tried different variations on the descriptor's color attachments, because that seems to be where this feature would be configured, but so far I have not been able to make it work.

I am using an MTKView, capturing Apple Pencil touches, calling drawPrimitives, and presenting with present(currentDrawable). My brushstroke textures come from PDF images.

Here is the descriptor code I am using in the pipeline setup for regular drawing. It works great:
Code Block
let descriptor = MTLRenderPipelineDescriptor()
descriptor.vertexFunction = vertProg
descriptor.fragmentFunction = fragProg
descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
// Classic "source over" blending with straight (non-premultiplied) alpha:
// rgb = src.rgb * src.a + dst.rgb * (1 - src.a)
// a   = src.a + dst.a * (1 - src.a)
descriptor.colorAttachments[0].isBlendingEnabled = true
descriptor.colorAttachments[0].rgbBlendOperation = .add
descriptor.colorAttachments[0].alphaBlendOperation = .add
descriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
descriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
descriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
descriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha




Nothing in your blending setup looks out of the ordinary; it looks like a commonly used blend equation. However, I'm not entirely sure it's set up for what you want. What do you expect to see, and what are you actually seeing?

Nonetheless, it's likely something else in the app. Are you using textures to emulate wide lines? What do the textures you're using look like? What does the fragment shader look like?


Accepted Answer (MoreLightning)
That is because blend modes of that kind can only be done manually in your fragment shader or compute kernel, not through the pipeline descriptor's blend options. You have to write the math for it yourself and structure the render passes appropriately to perform the compositing.
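
For example, a minimal sketch of the shader-side math could look like the code below. It assumes you keep the canvas in its own texture and render each stroke into a copy of it (a pass cannot sample the texture it is rendering into), and every name in it (multiplySource, multiplyFrag, brush, canvas) is illustrative, not something from your project:
Code Block
// Sketch only: compile a multiply-compositing fragment shader from source,
// assuming `device` is your MTLDevice. Fixed-function blending stays
// disabled; the shader writes the fully composited color itself.
let multiplySource = """
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float2 texCoord;
};

fragment float4 multiplyFrag(VertexOut in            [[stage_in]],
                             texture2d<float> brush  [[texture(0)]],
                             texture2d<float> canvas [[texture(1)]])
{
    constexpr sampler s(filter::linear);
    float4 src = brush.sample(s, in.texCoord);        // brush stamp
    float4 dst = canvas.read(uint2(in.position.xy));  // canvas under this fragment
    // Multiply blend: darkens wherever either layer is dark, so a light
    // stroke still lets dark paint underneath show through.
    float3 multiplied = src.rgb * dst.rgb;
    // Apply the effect only where the brush has coverage.
    return float4(mix(dst.rgb, multiplied, src.a), dst.a);
}
"""
let library = try device.makeLibrary(source: multiplySource, options: nil)
let fragProg = library.makeFunction(name: "multiplyFrag")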

Also fyi, you are going to get banding with that pixel format.

Ohhhh, okay. I had not considered the fragment shaders or kernel. I will study and work on those. The effect I want is to be able to paint over dark colors with light colors and still have the dark colors show through. Currently my light colors are completely opaque and cover the dark colors when painting over them.

So to prevent banding, what would be a better pixel format to use?
The banding artifacts come from the bit depth per component being too small, so the solution is to increase the bit depth. But that has a daisy chain of consequences - you will see situations like this a lot as you go further. Things unravel into complex chains of choices and consequences when you really try to make the real thing.

Let's say you chose MTLPixelFormatBGRA10_XR: it would have less banding, but now it is slower, and you are in an entirely different, extended-range color space whose values go beyond the bounds of 0.0 to 1.0. You then have to rewrite the rest of the code to work in that format - and if you want to save to a file, you will also face a battle converting out of 10 bits. (In practice you don't convert out of 10 bits; you accept the consequence of rendering to a different format for saving.)

If you choose more bits per component, like MTLPixelFormatRGBA16Float, you will get even fewer artifacts as you go up, but again it is slower, and the components are in a different order (RGBA rather than BGRA).

I noticed that you are using .bgra8Unorm.

Some of this banding could be resolved if you used .bgra8Unorm_sRGB. sRGB pixel formats devote more precision to the darker end of the spectrum, where banding most often occurs, to compensate for the fact that displays do not present values linearly.

If you're storing color data¹ you should either use an sRGB format or something with higher precision like rgba16Unorm, rg11b10Float, or even rgba16Float. An _sRGB format will have the same performance as a non-sRGB format of the same bit size, while the others may be slightly slower but better for your case.

¹Although counterintuitive, textures are often used for things other than images with color data, such as normals or even vertex positions. In those cases, sRGB formats do not make sense.
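
In code, the change is one line on each side. Here is a sketch, assuming your MTKView is named mtkView and reusing the descriptor from your question:
Code Block
// Drawable and render pipeline formats must match.
mtkView.colorPixelFormat = .bgra8Unorm_sRGB
descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm_sRGB

// Or trade some bandwidth for more precision with a half-float format:
// mtkView.colorPixelFormat = .rgba16Float
// descriptor.colorAttachments[0].pixelFormat = .rgba16Float
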
Dear Apple Graphics Engineer,

That isn't a valid solution in this context.

If the artist has the observational acuity to see the artifacts in the first place, then merely moving to sRGB won't solve the problem. You can see clearly what I mean if you actually try it.
Dear MoreLightning,

Moving to sRGB adds more precision at the dark end of the spectrum, just as a higher precision format adds precision by adding more bits. The human eye (of an artist, for instance) can definitely see banding in dark areas. Granted, if there is banding in brighter areas, moving to sRGB would not help, but that is far less common.
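
To make that concrete, here is a rough numeric illustration: decoding the lowest 8-bit sRGB codes to linear light gives steps roughly 13 times finer than the uniform 1/255 ≈ 0.0039 steps of a plain Unorm format.
Code Block
import Foundation

// Standard sRGB decode (electro-optical transfer function) to linear light.
func srgbToLinear(_ c: Double) -> Double {
    c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

for code in 1...4 {
    let linear = srgbToLinear(Double(code) / 255.0)
    print("sRGB code \(code) -> linear \(linear)")
    // ≈ 0.0003, 0.0006, 0.0009, 0.0012: much finer dark steps than 0.0039
}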

Dear Apple Graphics Engineer,

I can clearly see it. The craftsmen I work with can tell.

You are essentially saying that you cannot see it, or that you think it is insignificant, or perhaps that you just haven't tried it yet to know how bad the artifacts are in the context of a drawing tool.

Think about that situation.

This should normally go without explanation, but an artist is someone whose skill set is developed by refining the perceptual acuity of their senses.

They are people who exercise their physical organs beyond the normal person, to the extent that we can measure changes in the physical matter with MRI across even 2-3 months.

We are talking about a group of people who are physically developed to see better than the average person.

In order to be more acute in their work, they must actively and intentionally look, observe, and pay attention to subtle details that a normal person's brain would skip over.

They see things that others are blind to.

Relatively speaking, the perceptually more blind will say, "It looks good to me, I cannot tell the difference", while the people who have developed more refined and acute perceptual and observational abilities can distinguish not only that level but even further levels, depending on their degree of physical development.

As an engineer, you have likely experienced this phenomenon to some degree yourself in another area of practice, where your acuity increased in response to how you trained your body.

It is also well known that, without continued exercise, the organs will not maintain their abilities, and the brain will reorient itself toward the demands of more present activities.

Because of this, it is only the people who practice the skills regularly, and exercise the body in the proper form, who retain the deeper levels of perceptual acuity.

When a person comes to us and says, "I am the target audience in question, and I can see and am observant of things that you are not, and they do matter", then we have to accept that our own abilities are inadequate to clearly see and understand. Even if you were informed of the reality of it, you wouldn't physically be able to see it yourself, and so you would be inclined to believe that it looks okay when it really looks terrible and sticks out like a sore thumb.

To try to develop truly sincere drawing tools for a select few craftsmen in another country, I exercised my body and senses to become critically aware of things like this, beyond the average person. I then rigorously analyzed why the iPad hardware and software were failing the sincere use case of real craftsmen, and I came to be very familiar with a sea of specific core problems at Apple, in both the people and the resulting products.

This observational and perceptual issue is one of those core problems. Even if you feel and say to me, "I personally do genuinely care", if your present perceptual ability is not at a high enough level, you won't be able to make proper judgments of quality, and then people like me will think, "this person has shallow perception and/or is disingenuous about quality". They just won't vocalize it clearly, because it's likely to be taken as an offense rather than as an assessment of why the problem exists.

It's the same reason Apple staff think the lag problems are not significant.

But because neglected root problems like this have caused the output of Apple's engineers and management to get much worse over the years - not just on functional issues, but also in oversights of severe global health issues - I am becoming a more vocal detractor.



