I regularly see questions, both here on DevForums and via DTS code-level support requests, from developers who are working with a security auditor. This is a tricky topic, and I’m using this post to collect my thoughts on it.
If you have questions or comments, please start a new thread. Put it in Privacy & Security > General and tag it with Security; that way I’m more likely to see it.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
Security Audit Thoughts
DTS is a technical support organisation, not a security auditing service. Moreover, we don’t work with security auditors directly. However, we regularly get questions from developers who are working with a security auditor, and those often land in my queue. Given that, I’ve created this post to collect my ideas on this topic.
I see two types of security audits:
- static analysis: This looks at the built code but doesn’t run it.
- dynamic analysis: This runs the code and looks at its run-time behaviour.
While both techniques are valid, it’s critical that you interpret the resulting issues correctly. Without that, you run the risk of wasting a lot of time investigating issues that are not a problem in practice. In some cases it’s simply impossible to resolve an issue. And even if it is possible to resolve an issue, it might be a better use of your time to focus on other, more important work.
A good security auditor should understand the behaviour of the platform you’re targeting and help you prioritise issues based on that. My experience is that many security auditors are not that good )-:
Static Analysis
The most common issue I see relates to static analysis. The security auditor runs their auditing tool over your built product, it highlights an issue, and they report that to you.
These issues are usually reported with logic like this:
1. Routine `f` could be insecure.
2. Your program imports routine `f`.
3. Therefore your program is insecure.
This is logically unsound. The problem is with step 1: Just because a routine might be insecure doesn’t mean that your use of that routine is insecure.
Now, there are routines that are fundamentally insecure (I’m looking at you, `gets`!). Your security auditor is right to highlight those. However, there are many routines that are secure as long as you call them correctly. Your security auditor should understand the difference.
The canonical example of this is `malloc`. Calling `malloc` is not a fundamentally insecure operation. Sure, the world would be a better place if everyone used memory-safe languages [1], but that’s not the world we live in.
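To make that concrete, here’s a minimal sketch, in C, of what calling `malloc` correctly might look like. The `copyString` helper is my own invention, purely for illustration:

```c
#include <stdlib.h>
#include <string.h>

// A minimal sketch of a correct malloc call: check the result and
// make the ownership rules clear.
char *copyString(const char *s) {
    size_t len = strlen(s) + 1;     // include the null terminator
    char *copy = malloc(len);
    if (copy == NULL) {
        return NULL;                // handle allocation failure
    }
    memcpy(copy, s, len);
    return copy;                    // the caller must eventually free this
}
```

An auditor who flags this call has, at most, found something to check, not a vulnerability.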
If your security auditor highlights such a routine, you have two options:
- Rewrite your code to avoid that routine (see the sketch below).
- Audit your use of that routine to ensure that it’s correct.
This is something that you’ll have to negotiate with your security auditor.
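As a sketch of the first option, here’s the classic rewrite, replacing the fundamentally insecure `gets` with the bounded `fgets`. The `readLine` helper is hypothetical, not an existing API:

```c
#include <stdio.h>
#include <string.h>

// Hypothetical helper showing the gets -> fgets rewrite.  Unlike
// gets, fgets never writes more than size bytes, including the
// null terminator.
void readLine(char *buf, size_t size) {
    if (fgets(buf, (int)size, stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';   // strip the trailing newline
    }
}
```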
[1] Or would it? (-: The act of rewriting all that code is likely to produce its own crop of security bugs.
Tracking Down the Call Site
In most cases it’s easy to find the call site of a specific routine. Let’s say your security auditor notices that you’re calling gets
and you agree that this is something you really should fix. To find the call site, just search your source code for gets
.
In some cases it’s not that simple. The call site might be within a framework, a static library, or even inserted by the compiler. I have a couple of posts that explain how to track down such elusive call sites:
The first is short and simple; the second is longer but comprehensive.
Apple Call Sites
In some cases the call site might be within Apple code. You most commonly see this when the Apple code is inserted in your product by the toolchain, that is, programs like the compiler and linker that are used to build your product.
There are two ways you can audit such call sites:
- Disassemble the code and audit the assembly language.
- Locate the source of the code and audit that.
The latter only works when the toolchain code is open source. That’s commonly true, but not universally.
If you’re unable to track down the source for an Apple call site, please start a thread here on DevForums with the details and we’ll try to help out.
If your analysis of the Apple call site indicates that it uses a routine incorrectly, you should absolutely file a bug about that.
Note Don’t file a bug that says “The Swift compiler inserted a call to `malloc` and that’s insecure.” That just wastes everyone’s time. Only file a bug if you can show that the code uses `malloc` incorrectly.
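To illustrate the difference, here’s a hypothetical sketch of code that really does use `malloc` incorrectly, the sort of concrete evidence that belongs in a bug report:

```c
#include <stdlib.h>

// Hypothetical buggy code.  If count is attacker controlled, the
// multiplication can wrap around, malloc returns a buffer that's
// too small, and the loop then writes past its end.
void fillValues(size_t count) {
    int *values = malloc(count * sizeof(int));   // possible overflow
    if (values == NULL) { return; }
    for (size_t i = 0; i < count; i++) {
        values[i] = 0;                           // heap buffer overflow
    }
    free(values);
}
```

In this particular case, `calloc(count, sizeof(int))` would be the fix, because `calloc` checks that multiplication for overflow.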
Dynamic Analysis
The vast majority of security audit questions come from static analysis, but every now and again I’ll see one based on dynamic analysis. However, that doesn’t change the fundamentals: Dynamic analysis is not immune from faulty logic. For example, the following sequence suffers from exactly the same logic problem that I highlighted for static analysis:
1. Routine `f` could be insecure.
2. Something in your process calls routine `f`.
3. Therefore your program is insecure.
However, there are two additional wrinkles. The first is that you might not have any control over the code in question. Let’s say you’re using some Apple framework and it calls `gets` [1]. What can you do about that?
The obvious response is to not use that framework. But what if that framework is absolutely critical to your product? Again, this is something you need to negotiate with your security auditor.
The second wrinkle is the misidentification of code. Your program might use some open source library and an Apple framework might have its own copy of that open source library [2]. Are you sure your security auditor is looking at the right one?
[1] Gosh, I hope that’s never the case. But if you do see such an obvious security problem in Apple’s code, you know what to do.
[2] That library’s symbols might even be exported, a situation that’s not ambiguous because of the two-level namespace used by the dynamic linker on Apple platforms. See An Apple Library Primer for more on that.