Posts

Post marked as solved
2 Replies
The fastest way would be to compare two "boxes" in 3D space: you simply check whether any of box A's points fall inside box B. Since SceneKit's "getBoundingBoxMin:Max:" is still broken, you will have to work out the locations of the points yourself. Simply create some variables for your points and offset their locations by the node's position. If your game does not involve much vertical movement, you may not need to compare all the points of the two boxes; instead, compare the 2D locations as if the world were flattened. (Or go even further and simply compare the node's single point position against a 2D box.) That would be the most efficient way. Both SceneKit's physics engine and a distance calculation (as gchiste proposed) would be heavier than comparing box points, and it is not much effort to compare points like that.
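A minimal sketch of that flattened comparison, in C inside Objective-C. The half-extent values and node variables are placeholders you would fill in from your own model sizes, since the bounding box query can't be trusted:

#import <SceneKit/SceneKit.h>
#include <stdbool.h>

typedef struct { float minX, maxX, minZ, maxZ; } FlatBox;

// Build a 2D box around a node's current position, ignoring the Y axis.
static FlatBox FlatBoxForNode(SCNNode *node, float halfWidth, float halfDepth) {
    SCNVector3 p = node.position;
    FlatBox box = { p.x - halfWidth, p.x + halfWidth, p.z - halfDepth, p.z + halfDepth };
    return box;
}

// Two axis-aligned boxes overlap only if their ranges overlap on every axis.
static bool FlatBoxesOverlap(FlatBox a, FlatBox b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

If one of the objects is small enough, you can drop its box entirely and just test whether its position falls between the other box's min and max values.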
Post marked as solved
9 Replies
Dear Apple Graphics Engineer, I can clearly see it. The craftsmen I work with can tell. You essentially say you cannot see it, that you think it is insignificant, or perhaps you just haven't tried it yet to know how bad the artifacts are in the context of a drawing tool. Think about that situation.

This should normally go without explanation, but an artist is someone whose skill set is developed by refining the perceptual acuity of their senses. They are a person who exercises physical organs beyond a normal person, to the extent that we can measure changes in the physical matter across even 2-3 months with MRI. We are talking about a group of people who are physically more developed to see better than the average person. To be more acute in their work, they must actively and intentionally look, observe, and pay attention to subtle details that a normal person's brain would skip over. They see things that others are blind to. Relatively speaking, the perceptually more blind people will say, "It looks good to me, I cannot tell the difference," while the people who have developed more refined and acute perceptual and observational abilities can not only distinguish that level, but even further levels, depending on their degree of physical development.

As an engineer, you have likely experienced this phenomenon to some degree yourself in another area of practice, where your acuity increased in response to how you trained your body. It is also well known that without continued exercise, the organs will not maintain their abilities, and the brain will reorient itself toward the demands of more present activities. Because of this, it is only the people who practice the skills regularly and exercise the body in the proper form who retain the deeper levels of perceptual acuity.

When a person comes to us and says, "I am the target audience in question, and I can see and am observant of things that you are not, and they do matter," then we have to accept that our own abilities are inadequate to clearly see and understand. Even if you were informed of the reality of it, you wouldn't physically be able to see it yourself, and would thus be inclined to believe that it looks okay, when it really looks terrible and sticks out like a sore thumb.

To try to develop truly sincere drawing tools for a select few craftsmen in another country, I exercised my body and senses to become extremely critically aware of things like this, beyond the average person. I then rigorously analyzed why the iPad hardware and software was failing for the sincere use case of real craftsmen. I came to be very familiar with a sea of specific core problems at Apple, with the people and the resulting products. This observational and perceptual issue is one of those core problems.

Even if you feel and say to me, "I personally do genuinely care," if your present perceptual ability is not at a high enough level, you won't be able to make proper judgements of quality, and then people like me will think, "this person has shallow perception and/or is disingenuous about quality." They just won't vocalize it clearly, because it's likely to be taken offensively instead of as an assessment of the reason the problem exists. It's the same reason Apple staff think the lag problems are not significant. But because neglected root problems like this have led the Apple engineers' and management's output to get much worse over the years, not just on functional use issues but also in oversights of severe global health issues, I am becoming a more vocal detractor.
Post marked as solved
7 Replies
I'm practically retired and don't really work for money, so I can't help you with that. But what I would suggest, since you've been working on it long enough, is to reassess your Application Definition Statement. Take a hard look at what solution and differentiator you are actually providing. Why do you believe your prototype depicts something that a more experienced developer would need to see? What is the real purpose of making such a prototype?

As for the health problems... They come from:
A) The touch pad
B) The induction coil
C) The magnet
D) Bluetooth modulation (distinct from older modulation formats)
E) The iPad's touchscreen itself
F) The general lowering of standards in material choices, like GF2

The last few existed along with the Apple Pencil 1, and they were measurably bad, but the additions and design of the Apple Pencil 2 are in another league of damage. The thing here is that if I just list these things, no one will actually understand anything. In fact, they might assume that I don't know what I'm talking about and am just one of those tin-foil-hat people who are afraid of microwave radiation (as opposed to being informed by actual measurements and having constructed custom antennas and hardware that replicate the phenomena). We can only describe things in relation to things you already understand. If I talk about specific measurements I personally took, and describe how certain very specific kinds of electromagnetic field patterns interact with different specific materials in specific layouts, and how those interact with the body, then we would have to go very deep into describing the internal hardware and measurements.

The problem with having such a conversation in public is similar to how we have trouble talking about the fundamentals of programming in Metal. Just as there are very complex decision-consequence chains in writing your Metal program, there are similar decision-consequence chains in the hardware for health safety. Most people don't even understand the surface level of basic signal noise phenomena, where we can even pollute the battery on purpose to demonstrate functional use errors in the form of drawing stroke gaps. (Also notice how the Apple graphics engineer's responses are not always informed enough.) Just as Apple releases obvious bugs in software that show they don't do proper testing, it's the same with the hardware. They are not measuring like I was. Otherwise, fundamental mistakes in material science and antenna design would never have gotten this far out of hand. The problems of the electronics are a result of the same people problems that are present in the software.

And it's not as if we really needed those features in the Apple Pencil 2. Especially the touch pad. That's quite a big health problem for something I personally didn't even want to use. Using the induction coil to charge means that the signal is always more polluted than what we had in the Apple Pencil 1. And as I pointed out, the signal noise will trigger stroke gaps, because the touch screen is an analog antenna array. It's just outright ignorance for engineers to do these things. And not understanding how the materials, purity, layout, and fabrication choices influence the interactions means the engineer is missing the fundamental understanding required to be adequate for their role. Why are we repeating age-old mistakes, like the shell reverberating the signal of the processor at an audible level and out through the cable?
And I can almost predict that, even though they were told years ago, the material choices and construction of the next iPad will be even worse. You would think a rational company that says "we care" would listen to warnings about things they could easily avoid ahead of time, and that's what makes me so upset about it.
Post marked as solved
9 Replies
Dear Apple Graphics Engineer, That isn't a valid solution in this context, because if the artist has the observational acuity to see the artifacts in the first place, then merely moving to sRGB won't solve the problem. You can see clearly what I mean if you actually try it.
Post marked as solved
9 Replies
The banding artifacts come from the bit depth per component being too small. So the solution is to increase the bit depth, but that has a daisy chain of consequences; you will see situations like that a lot as you go further. Things unravel into complex choice-consequence chains when you really try to make the real thing. Let's say you chose MTLPixelFormatBGRA10_XR: it would have less banding, but now it is slower, and you are in a different colorspace entirely, one that goes beyond the bounds of 0.0 to 1.0. You then have to rewrite the rest of the code to work in that format, and if you want to save to a file, you also face a battle converting out of 10 bits. (But you don't convert out of 10 bits; you would have to accept the consequence of rendering into a different format for saving.) If you choose a higher bit depth per component like MTLPixelFormatRGBA16Float, you will get even fewer artifacts as you go up, but again it is slower, and the components are in a different order.
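As a minimal sketch of what that first step looks like, assuming an MTKView-based renderer (the view, device, and texture names here are placeholders): you widen the drawable and your working texture, and then everything downstream has to agree with that choice.

view.colorPixelFormat = MTLPixelFormatRGBA16Float; // or MTLPixelFormatBGRA10_XR where supported

MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA16Float
                                                        width:(NSUInteger)view.drawableSize.width
                                                       height:(NSUInteger)view.drawableSize.height
                                                    mipmapped:NO];
desc.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead;
id<MTLTexture> canvasTexture = [device newTextureWithDescriptor:desc];
// Every pass that samples or writes this texture, and the export path that
// saves to disk, now has to handle the wider format (or convert in an extra pass).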
Post marked as solved
7 Replies
While UIKit should really never be used, rendering to offscreen textures has other performance consequences, and dozens of adjacent things you have to implement for a typical drawing tool. Sometimes it is necessary, but not always, and the difference is large. If it's slowing down already, you should first address that, as I did. And you may not want to hear this, but I would bet that your app is also suffering from CPU performance problems in how you manage the stroke data and the memory, and of course from the fact that you are using Swift (which by now, if it isn't clear, is in fact ironically slow and not appropriate for a performant drawing tool).

From reading what you wrote in the other threads, I could predict with high certainty that there are dozens of other common fundamental things causing the performance problems you are experiencing, well before you even confront the ones Apple is responsible for. I've built many drawing tools that optimize for different use-case workflows, and each one required different arrangements based on its specifics to truly get a stable 120 fps (that is, before Apple messed up the Display Link path; see the sketch at the end of this post). You may not want to hear this, but you are only around 10% there.

But really, there is a bigger problem. If we are to try to be moral people who care about each other, then we have to pause before talking about all that code, and talk about the serious health damage introduced with the Apple Pencil 2nd generation. If we are making tech demos or toys for ourselves and we accept the health damage for our own body, that is one thing; but when we talk about releasing apps that have a tendency to lure people into using the Pencil more, then these moral health concerns have to be assessed. Are we not morally responsible for the activities we encourage through the apps we make? If we become aware as third-party developers that the Apple Pencil 2nd gen is a serious health problem, and that the users are not aware of it, don't we then have an obligation to be responsible for that? The science of how bad it really is goes over the head of the average person, and it's not something we can clearly and fully warn them about. Apple has continued to ignore the reports, and we can assume they have chosen a position of disingenuous morality. Because of this, as developers it really does come down to our personal moral character. You, me, the other developers releasing on the store: our actions and how we choose to respond to the immoral circumstances will determine the extent of the negative impact until Apple listens. So, as much as you may feel attached to your work, and are still struggling with the fundamentals, there are some serious moral concerns with releasing apps for the Apple Pencil 2nd gen today.
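For reference, a minimal sketch of asking for the full 120 Hz path with CADisplayLink, assuming iOS 15 or later on a ProMotion device (the target and selector here are placeholders); whether the OS actually honors it consistently is the part I take issue with:

CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                  selector:@selector(renderTick:)];
if (@available(iOS 15.0, *)) {
    // Ask for 120 Hz; the system may still deliver less under load.
    link.preferredFrameRateRange = CAFrameRateRangeMake(80, 120, 120);
}
[link addToRunLoop:NSRunLoop.mainRunLoop forMode:NSRunLoopCommonModes];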
Post marked as solved
9 Replies
That is because blend modes of that kind can only be done manually in your fragment function or compute kernel, not through the pipeline descriptor's blend options. You have to write the math for it yourself and structure the render passes appropriately to perform the compositing. Also, FYI, you are going to get banding with that pixel format.
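A minimal sketch of the idea in Objective-C, assuming a "multiply" blend (the kernel name and texture indices are placeholders): the blend equation is ordinary arithmetic you write yourself in a kernel and compile at runtime, rather than something you configure on the pipeline descriptor's color attachment.

static NSString * const kMultiplyBlendKernelSource =
    @"#include <metal_stdlib>\n"
     "using namespace metal;\n"
     "kernel void multiplyBlend(texture2d<float, access::read>  src [[texture(0)]],\n"
     "                          texture2d<float, access::read>  dst [[texture(1)]],\n"
     "                          texture2d<float, access::write> out [[texture(2)]],\n"
     "                          uint2 gid [[thread_position_in_grid]]) {\n"
     "    float4 s = src.read(gid);\n"
     "    float4 d = dst.read(gid);\n"
     "    out.write(float4(s.rgb * d.rgb, s.a), gid); // the blend equation, written by hand\n"
     "}";

// Compile and dispatch it like any other compute kernel (error handling omitted):
// id<MTLLibrary> library = [device newLibraryWithSource:kMultiplyBlendKernelSource options:nil error:&error];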
Post marked as solved
12 Replies
I see. I would also recommend writing a test that does not use a CVPixelBuffer at all, because it is important to identify whether the culprit is reading from the texture or writing to the buffer. You can do this easily with C inside the Objective-C version: simply call malloc with the size of your buffer, and pass that pointer to getBytes instead of the CVPixelBuffer. If you find that there is no slowdown, you will know for certain that the issue is with CVPixelBuffer's delay and not with getBytes. Then you can inform Apple about this in the report. Send me a copy and I will confirm that the test code was written correctly.
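A minimal sketch of that test, assuming a BGRA8 texture and a hypothetical texture variable; the only thing being timed is the copy into a plain malloc'd buffer, so CVPixelBuffer is out of the picture entirely:

NSUInteger bytesPerPixel = 4; // assumes MTLPixelFormatBGRA8Unorm
NSUInteger bytesPerRow   = texture.width * bytesPerPixel;
void *scratch = malloc(bytesPerRow * texture.height);

[texture getBytes:scratch
      bytesPerRow:bytesPerRow
       fromRegion:MTLRegionMake2D(0, 0, texture.width, texture.height)
      mipmapLevel:0];
// Time the call above by itself; if it stays fast, the slowdown lives in the
// CVPixelBuffer path rather than in getBytes.
free(scratch);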
Post not yet marked as solved
26 Replies
No, what you provided is not enough; it is not merely a matter of tracking API calls. You should provide a project that compiles, to allow them to use the full diagnostic tools that are unavailable with just the binary. If you don't provide them this, then the other party has to write it themselves. Often, this causes several tangential problems to occur during triage that delay identifying and resolving the true problems (as opposed to the misconstrued notions of what the problems are thought to be). These things occur unnecessarily, and you can do something about that today. You can go to File -> New Project in Xcode and make a minimal project that replicates what you are seeing in your main project. I am available to provide a second look at your work today to confirm the issues without doubt, but if you neglect to make the sample project and provide it, this will sit on the shelf further. After you have sent it to me, and I have confirmed it in its entirety, we can both submit crystal-clear reports to make the complaint more effective. (Also, in case it isn't obvious, the projects you submit should be in Objective-C, not Swift or C++, and they genuinely should be the minimum that reproduces the bug, without dependencies.)
Post marked as solved
12 Replies
Have you confirmed that the assembly code, OS, etc. are all identical? (And even before that, did you verify with the GPU debugger that everything is identical, such as the pixel format and size? Did you verify it against a non-Swift implementation?)
Post not yet marked as solved
26 Replies
Please send an actual Xcode project for analysis. What you uploaded has telling signs that some of this is Apple's fault, and some of it, like #3, should be an expected error. But if you want someone outside or inside Apple to try to help, you'll need to spoon-feed everyone a normal Xcode project. On a normal day they are inadequate at addressing basic problems and do not test thoroughly, but they are even less inclined to help if you send it in this format.
Post marked as solved
2 Replies
It's more a consequence of the problems of scale: as you increase the scale and number of parts, communication slows down. The fast on-chip memory used for tile memory is smaller. TBDR is about taking advantage of faster memory access for smaller amounts of data that can fit in a smaller amount of physical memory. As far as I am aware, for desktop GPUs there was only one line of NVIDIA cards that had something similar. If you run a compute kernel on the M1, you would expect it to be more efficient in memory access in this regard compared to the other GPUs. However, in situations that go beyond the M1's storage and processing ability, you would of course reach a point where the overall results make it seem underpowered. This is how you know which strategies and use cases are more appropriate for the M1 versus an eGPU. An eGPU is slow in communication no matter how powerful a card you put in there, so it really isn't appropriate for smooth interactive rendering views; it's more appropriate to use eGPUs for non-realtime offscreen rendering. The many other pipeline benefits of TBDR will come into play if your use case is something that really matches what it is best at, but that may often not be the case, and you may still need multiple render passes.
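One concrete way that difference shows up in code: on a TBDR GPU, an intermediate attachment that never needs to survive the render pass can be declared memoryless, so it lives entirely in tile memory. A minimal sketch in Objective-C, assuming a depth attachment and placeholder names and sizes:

MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float
                                                        width:2048
                                                       height:2048
                                                    mipmapped:NO];
desc.usage       = MTLTextureUsageRenderTarget;
desc.storageMode = MTLStorageModeMemoryless; // never backed by system memory
id<MTLTexture> depthTarget = [device newTextureWithDescriptor:desc];
// Attach it to the render pass descriptor with a DontCare store action; the
// data stays on-chip for the duration of the pass, which is exactly the kind
// of access pattern TBDR is built around.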
Post not yet marked as solved
26 Replies
Link a minimal project that demonstrates this; then I can review it and confirm whether it is your error or Apple's.
Post not yet marked as solved
5 Replies
To extend this a little more... (As a side note: it should go without saying that even though you can call C from Swift, it is not the same, and there is overhead.)

I almost always write software tools that try to be as efficient as possible, but they still hit the hardware limits for realtime interaction, and I need them to do more. Because of this, in a real tool it never makes sense for me to use slower languages and existing frameworks for UI either. The performance gain from rolling custom UI at the same level as the rest of the Metal code is really significant, especially if you are using any of the features of Swift that people do like, because those actually come at the highest cost. I write almost all my code in raw C, and only use Objective-C for Metal calls, because I am trying to get the most out of the hardware, at the sacrifice of making things convenient for the programmer. It's not that I enjoy this at all; I just accept the suffering in order to get the final tool to be better. I ended up that way because I wasn't getting as good results the other ways.

The initial brainstorming ideas that led to Swift 1 and playgrounds reflected concepts like Bret Victor's thoughts on interactivity. But the ideas were greatly misunderstood. Ideally, the final compiled machine code should be as optimized and stable as it can be, because the end result is the first priority; the second priority is how we interact with the machine as developers, and keeping the features of the text language distinct from the IDE. Swift's core implementation misunderstood this, even though the LLVM toolchain is completely able to support the proper solutions if the people had thought this through correctly. This was obvious all along, but it goes back to the cultural problems and miscommunication. (After all, C compiles down to the same LLVM intermediate representation, does it not? The failure of all other languages to achieve the same level of performance comes from how people think about the language/interface, the system, and the goals.)

There are also additional moral concerns, where performance is actually related to physical energy consumption and environmental damage. If we take responsibility for the impact of our efficiency choices as developers as a whole, then it means confronting that as well. If millions of coders take the position that it doesn't matter so much (take, for instance, the widespread adoption of Unity and Unreal), then as a whole the impact is quite large, so it becomes something we'd have to address collectively as a culture. It's similar to littering: if everyone litters only a little, it is a big problem in the whole, though each person will say "it's just one".
Post not yet marked as solved
6 Replies
Yes. The GPU debugging tools have had numerous issues throughout OS releases. The cause is simply quality assurance problems, and it infects every corner of the company.

The way we can resolve it is by raising standards and making it clear to the staff that their current work is not acceptable. If we say it is not acceptable, and if they truly care and are not disingenuous, then the managers will be more critical and reject more of the sloppy and half-baked work. It means that the developer culture itself has to become more sincere about its standards as well, so as to be more thorough before submitting work. A lot of this comes from the way people go about achieving the results, not just the results themselves. Accepting that there are formally right ways to do things, and that the current culture inside and outside the company is wrong, is the first step we could take today. Beginning to practice the right ways is also something that can and should be done today, and not put off with, "It is a nice idea, but I don't feel like working that hard, and it doesn't matter that much; it is already quite good, so I won't do it that way today."

I often hear this and similar patterns, where the teams take attitudes of mind in which they pat themselves on the back, tell each other that it is good, and further validate themselves with positive reviews (without taking into account the dynamics of reviews in a culture that is trying overly hard to be positive). As soon as the team says, "We think it is good and we worked very hard," it is over in that very moment; they have blocked themselves from looking at the core problems and fixing them. What we have to do is be perfectly clear that "No, it is not good. It is not meeting the minimum standard of quality," and we have to provoke them to confront the reality of how bad things actually are, so that they will take them seriously enough to fix them. Because we third-party developers often don't complain as clearly, nor do we organize ourselves to communicate effectively, and because the people at the top of the company don't care enough, these problems will go on for years until we put our foot down and are more strict about what quality actually means, and how much we actually care to get it.