From UITextView to point-sized SKTextures, excluding blank ones

Someone demonstrated something similar to what I need in "Attempt 3" at http://sound-of-silence.com/?article=20170205. That code was written for macOS, however, and it used NSBitmapImageRep to exclude blank images/textures in a way the article didn't elaborate on.


My iOS app converts and processes text as follows: UITextViews to UIImages to SKTextures, then to thousands of very small SKTextures [using SKTexture(rect:in:)], and finally to thousands of SKSpriteNodes, which are then animated.
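In sketch form, the pipeline looks roughly like this (the function name and the tilesPerSide parameter are just for illustration; positioning and animation are omitted):

```swift
import UIKit
import SpriteKit

// Sketch: render a UITextView into a UIImage, wrap it in one big
// SKTexture, then slice that texture into a grid of small sub-textures
// and wrap each one in an SKSpriteNode.
func makeTileNodes(from textView: UITextView, tilesPerSide: Int) -> [SKSpriteNode] {
    // 1. UITextView -> UIImage
    let renderer = UIGraphicsImageRenderer(bounds: textView.bounds)
    let image = renderer.image { context in
        textView.layer.render(in: context.cgContext)
    }

    // 2. UIImage -> SKTexture
    let fullTexture = SKTexture(image: image)

    // 3. One big SKTexture -> many small SKTextures -> SKSpriteNodes.
    // SKTexture(rect:in:) takes a rect in the parent texture's *unit*
    // coordinate space, with the origin at the lower left.
    var nodes: [SKSpriteNode] = []
    let step = 1.0 / CGFloat(tilesPerSide)
    for row in 0..<tilesPerSide {
        for col in 0..<tilesPerSide {
            let rect = CGRect(x: CGFloat(col) * step,
                              y: CGFloat(row) * step,
                              width: step,
                              height: step)
            let tileTexture = SKTexture(rect: rect, in: fullTexture)
            nodes.append(SKSpriteNode(texture: tileTexture))
        }
    }
    return nodes
}
```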


The problem with my approach is that most of the thousands of SKTextures/SKSpriteNodes are blank, with alpha = 0. So the app effectively creates a lot of nothing (in addition to the visible stuff) and moves it around unnecessarily. That seems very wasteful.


What I'd like to know is how I could get at the image data so as to avoid creating blank SKTextures in the first place, or at least avoid processing them further. Any insights or suggestions would be appreciated.

Accepted Reply

As it turns out, SpriteKit already culls invisible and offscreen nodes by default, which may be why my app performed acceptably even without testing for and removing fully transparent SKSpriteNodes. (But maybe my understanding is wrong, if by "invisible" Apple merely means "set to isHidden.")


See Apple's documentation for SKView's shouldCullNonVisibleNodes instance property; its default value is true.
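For reference, the property belongs to SKView (not SKScene), so nothing has to be enabled:

```swift
import SpriteKit

// shouldCullNonVisibleNodes is an SKView property and defaults to true,
// so culling is already on unless it was explicitly disabled.
let skView = SKView(frame: .zero)
print(skView.shouldCullNonVisibleNodes)  // true by default
```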


A better, upstream solution would be to test for full transparency before creating the images/textures in the first place, but that would require a deeper understanding of low-level iOS graphics manipulation than I currently have.
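For what it's worth, something like the following per-tile alpha scan is what I imagine that test would look like. I haven't verified it, and the function name, parameters, and pixel format are only illustrative:

```swift
import CoreGraphics

// Sketch of an upstream blank-tile test: crop one tile's worth of the
// source CGImage, draw it into a small RGBA bitmap context, and scan
// the alpha bytes. Unverified; tileRect is assumed to be given in the
// CGImage's pixel coordinates (origin at the top left).
func tileIsBlank(in source: CGImage, tileRect: CGRect) -> Bool {
    guard let tile = source.cropping(to: tileRect) else { return true }

    let width = tile.width
    let height = tile.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * height)

    let hasVisiblePixel = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }

        context.draw(tile, in: CGRect(x: 0, y: 0, width: width, height: height))

        // RGBA layout: the alpha component is every 4th byte (offset 3).
        for i in stride(from: 3, to: buffer.count, by: 4) where buffer[i] != 0 {
            return true
        }
        return false
    }
    return !hasVisiblePixel
}
```

With a tileRect computed per grid cell, a blank tile could then simply be skipped before SKTexture(rect:in:) and the SKSpriteNode are ever created.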
