
Apple's Hypocrisy

Six years. That’s how long Apple has been focusing on privacy and security. Or at least, their PR department has been.

Since 2014, the world’s most profitable company has been trying to separate itself from rivals by painting itself as the most private and secure option. Is that actually the case, or is their virtue signaling merely skin deep?

Let me be clear right from the start. This is not a comparison between Apple and Google. That wouldn’t be fair, since Google’s business model relies entirely on mining user data. That said, I’ve never been a fan of “lesser evil” arguments. Here, I look at Apple in a vacuum. Mostly.

Remember 2014? Before the mere concept of a Trump presidency polarized the entire world? When all we had to worry about was Ebola, ISIS, and Russia annexing Crimea? What a simpler time.

2014 was also when Apple first acknowledged the need to focus on privacy and security, after countless celebrities got their iCloud accounts hacked and their nudes leaked, an event generally referred to as “The Fappening”. That same year, Apple acquired Beats by Dr. Dre. Both of these events took place in August.

Tim Cook clearly understood how those events could affect Apple’s brand image, which would have been particularly damaging to the Apple Pay launch just a couple of months away. So he added a new privacy section to the Apple website, chock-full of platitudes, including his famous “if you’re not paying for a product, then you are the product” quote.

It took them a whole year to fully iron out their PR plan, however.

Apple’s PR Machine


On September 29, 2015, The Washington Post published a piece titled “How Apple is trying to protect your privacy as its products get more personal”. More specifically, they were referring to the newly announced Apple News app, which curated stories based on your preferences.

Naturally, users concerned with privacy tend to get jumpy whenever the words “preferences”, “recommendations” or “personalized” are uttered. To get around this, Apple “revamped” their privacy policy, and The Washington Post “reviewed” it to see how Apple attempted to “lay out how its philosophy on data collection distinguishes itself from its tech industry rivals”. Read: We’re not Google, the advertising behemoth behind Android. The article continues: “the company is telling customers it is not interested in their personal data.”

Apple continued to slowly plant these seeds in the minds of their fans over the following months. All of this quickly came to the forefront of media attention in February 2016, when the FBI asked Apple to help them gain access to the iPhone 5C that belonged to the San Bernardino mass shooter. Of course, it’s not unprecedented for the government to make such a demand. Google and Facebook also receive similar court orders for search history or emails. These are judgment calls made on a case-by-case basis.

What was strange about the San Bernardino case was that the FBI went beyond asking for access. They asked Apple to create a new version of iOS with weaker security, so the government could brute-force its way in. Or, outright add a government backdoor into iOS. That’s certainly unprecedented, and terrifying. Of course, Apple refused. The FBI then went ahead and hired a bunch of hackers to get into the phone, as they probably would have anyway, rendering the entire scandal and Apple’s posturing rather irrelevant.

Now, I’m not a fan of conspiratorial thinking, but even back then I found it odd how much media attention that story was getting. Sure, it was probably entirely organic. After all, there’s certainly no shortage of passion and cultish fanboyism surrounding Apple. But a comment later made by FBI Director James Comey made me rethink that. After obtaining a tool from the aforementioned hackers, Comey felt the need to clarify that the tool only works “on a narrow slice of iPhones”. Essentially, he assured the public that newer iPhone models with a fingerprint sensor cannot be accessed by it.

I’m not saying there isn’t a perfectly logical explanation for why the tool couldn’t work on newer iPhones. My question is: why would the director of the FBI be concerned with reassuring iPhone users of the security of their devices? Or even encouraging them to upgrade to newer models? Wouldn’t any criminal simply do that to avoid the possibility of the FBI hacking into their phone? Then there was the NSA’s claim that they can’t hack iPhones because “bad guys” simply don’t use them. That’s not only a silly notion, but also a very odd thing for the NSA to say. If anything, it strangely mirrors Apple’s policy of not letting bad guys in movies use Apple devices, to remove whatever subconscious effect that may have on its users.

It seems likely to me that the FBI was incentivized to target Apple with a harsher-than-usual demand, to make Apple look like the good guy for refusing to comply and standing up for everyone’s privacy. Many such stories have taken place since 2015. A similar one seems to be happening right now in 2020, with online advertising agencies supposedly being mad at Apple over the new privacy changes implemented in iOS 14. This kind of subtle, genius PR strategy is something only Apple could pull off.

The Track Record


Now you might be wondering, understandably, why I’m so cynical. That’s because the reality of Apple’s privacy and security practices does not line up with all of this PR. For starters, Apple sells ads as well, and has for over a decade now. Though, granted, they do so on a much smaller scale and only within their ecosystem, specifically in the App Store and the News app.

And despite their recent “reverse scandal” regarding targeted advertising, their ads are also interest-based. Of course, you can opt out, as is industry standard. But as is also industry standard, that option is buried deep within the settings, somewhere most users are never likely to see. So much for wanting to be “more transparent about data tracking in iOS 14”, eh Apple?

Speaking of which, their recent efforts in iOS 14 and Safari, requiring consent before tracking in the former and blocking trackers outright in the latter, only apply to third-party trackers. Meaning Apple’s own apps and services can still track you without getting blocked or even asking for consent. And you might say that you don’t mind that Apple tracks you and holds so much data about you, and that they use this data to show you more relevant ads. After all, if you trust Apple to keep your data safe, I suppose it isn’t all that bad. Still hypocritical, though.
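For a sense of what that consent requirement actually looks like in practice, here’s a minimal sketch of the iOS 14 flow a third-party app goes through, using Apple’s AppTrackingTransparency framework (the function name and surrounding setup are my own illustration, not Apple’s code):

```swift
import AppTrackingTransparency
import AdSupport

// iOS 14's consent prompt, as a third-party app must request it.
// Apple's own apps and services are not gated by this prompt.
// Requires an NSUserTrackingUsageDescription entry in Info.plist.
func requestTrackingConsent() {
    if #available(iOS 14, *) {
        ATTrackingManager.requestTrackingAuthorization { status in
            switch status {
            case .authorized:
                // Only now may the app read the advertising identifier (IDFA)
                let idfa = ASIdentifierManager.shared().advertisingIdentifier
                print("Tracking allowed, IDFA: \(idfa)")
            default:
                // Denied, restricted, or undetermined: the IDFA reads as all zeros
                print("Tracking not allowed")
            }
        }
    }
}
```

Note that none of this applies to first-party data collection; the prompt only guards cross-app tracking by third parties.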

Unfortunately, Apple’s claims of security, that is, their ability to keep your data safe within their ecosystem, are also rather flimsy. As of this writing in 2020, there are over 4,500 published CVE (Common Vulnerabilities and Exposures) entries affecting Apple devices, 1,655 of them affecting iOS specifically. Of course, a lot of these exploits target older versions of iOS and macOS, but far too many target current versions. Having this many security vulnerabilities isn’t unusual for a company of this size. But it does go against their highly publicized stance on privacy and security, especially when a lot of these do undermine the privacy of their users and the security of their data.

I recently covered one example of such exploits, which allowed iOS apps to read whatever is in the clipboard without any kind of user consent, and without even having any justifiable need for it. Many games and news apps, with no ability to enter text in them at all, were spying on the iOS clipboard. To make matters worse, if you had Universal Clipboard enabled, these apps were also able to spy on your macOS clipboard. This vulnerability was reported to Apple in January 2020. Apple responded saying they see nothing wrong with it, calling it “intended behavior”. Apple has now addressed the issue in their upcoming iOS 14 release, due to launch in the fall. But they didn’t actually prevent the behavior or implement any way of controlling it. They now simply alert the user when it happens. Yay for transparency!
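To appreciate how low the bar is, here’s a rough sketch of all it takes (the function is my own illustration; the UIPasteboard calls are the real API):

```swift
import UIKit

// Any app could run this at launch, or every time it comes to the
// foreground. Before iOS 14 the user saw nothing at all; since iOS 14
// a banner appears, but the read itself still goes through.
func snoopClipboard() -> String? {
    // hasStrings peeks at whether text exists without reading it
    guard UIPasteboard.general.hasStrings else { return nil }
    return UIPasteboard.general.string
}
```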

Another example from this year, 2020, is the Mail app vulnerability that allowed malicious actors complete access to your emails by simply sending you a specially crafted email. Versions of iOS as far back as iOS 6 are vulnerable, and it is unknown whether earlier versions are affected as well. Apple downplayed the severity of this exploit and claimed there was no evidence of it actually being used in the real world. The group that discovered the vulnerability, ZecOps, responded saying it was in fact “widely exploited”, in places like Japan, Germany, Saudi Arabia and Israel. Their findings claim that this is a “nation-state” attack, used to target and keep tabs on persons or groups of interest. ZecOps did not suggest that Apple had willingly left a backdoor open for this purpose. They instead speculate that the nation-states purchased the exploit from a third-party researcher.

The Updates Myth


This may seem somewhat off-topic, but I also need to address the myth that Apple’s devices get supported for much longer than anyone else’s. If you’ve ever owned an Apple device, and were as geeky as me, you probably kept up with announcements of new features in upcoming versions of iOS, only to install the update when it arrived and notice that a bunch of those features weren’t there. What gives?

Well, despite Apple’s long list of “supported devices” that they so proudly show off whenever they release a new software version, what they conveniently forget to point out is that they’re not talking about 100% support. A lot of the new features that come in new versions of iOS are actually not supported on older devices. So before anyone brags about how the iPhone SE will receive the iOS 14 update, you might want to check just how much of that update is actually coming to the iPhone SE.
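This gating typically happens silently at runtime. As a purely hypothetical illustration (this is not Apple’s actual gating code), a feature can require both the new OS and a hardware capability, so it never appears on an older “supported” device:

```swift
import Foundation

// Hypothetical sketch: the OS version check passes on every "supported"
// device, but the hardware capability check quietly does not.
func shouldEnableFancyNewFeature() -> Bool {
    guard #available(iOS 14.0, *) else { return false }
    // e.g. require 3+ GB of RAM, which the 2016 iPhone SE (2 GB) lacks
    let gigabyte: UInt64 = 1 << 30
    return ProcessInfo.processInfo.physicalMemory >= 3 * gigabyte
}
```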

And yes, I know it’s a matter of hardware that those devices lack. I just think it’s incredibly deceptive to claim that these devices are supported, and then make it incredibly tedious, if not outright impossible, to find out exactly how much of the new iOS version you’ll be getting. To make matters worse, you can’t even find this information online. You won’t find any articles covering which features will be missing from an update on a particular device. On the flip side, you’ll find plenty of mentions of how a 5 or 6 year old device is still supported. Call it Apple magic!

This is yet another stroke of Apple’s PR genius. They give everyone the perception that their devices will remain top of the line for 5 or 6 years, while refusing to ever have the difficult, nuanced discussion about how that “support” isn’t entirely what they claim it to be.

On the Android side, Google’s Pixel line receives updates for four years. Obviously it’s apples to oranges, pardon the pun, but can iOS updates even be considered “OS updates” if they only include one or two new features? Besides, Android is extremely diverse in terms of feature set. Each OEM puts out their own spin on Android, with however many, or few, added features of their own.

Samsung, for example, is notorious for stuffing in as many gimmicky, sometimes very useful, features as they can, but they’re also notoriously bad with updates. Not only does it often take six months for Samsung devices to receive a new version of Android, but they only do so for two years. That’s two Android versions, and that’s it. And that only applies to Samsung’s flagship devices. Their cheaper devices are often lucky to get a single OS update.

I guess what I’m trying to say is, it’s not as simple as saying “Apple devices receive updates for 5 years and Androids only get 2”.

Jailbreaking


Traditionally, those who wanted to jailbreak their devices had to be mindful of their iOS version, their device model, and even their SoC. Different exploits targeted different combinations, and any update could instantly undo your jailbreak and make it impossible to do again. Until now.

In September 2019, iOS hacker axi0mX released checkm8, an exploit for a BootROM vulnerability that allows iOS devices from the 2011 iPhone 4S all the way up to the iPhone X to be jailbroken, regardless of their iOS version. Another Twitter user had also stumbled upon this vulnerability back in March 2019, but was only able to exploit it on certain devices. And because the flaw sits in the device’s read-only BootROM rather than in iOS itself, Apple cannot patch it with a software update; affected devices remain jailbreakable for life.

Now, to be clear, I’m not upset about the jailbreak itself. I think it’s great, personally, to liberate yourself from many of Apple’s restraints. From a security standpoint, however, jailbreaking is a pretty big deal. Jailbreak exploits work by finding ways to undermine the security of iOS and implement their own code changes. So when there is a vulnerability that allows anyone to break through Apple’s security, and that vulnerability affects hundreds of millions of iPhones, that’s alarming.

In most cases, jailbreaking is used to customize the device and add features. But since your iPhone is now much less secure, it’s no surprise that it also becomes vulnerable to a myriad of attacks.

Of course, there will always be that guy who says “just don’t do it. Simply learn to live with Apple’s extreme restrictions. Someone who goes through the complex process of jailbreaking should be aware that their device is now less secure, and should be extra careful accordingly. Or suffer the consequences.” If you’re that guy, first of all, you’re not wrong. But here’s my response: what if someone else jailbroke your device without your knowledge? The worst part about a closed ecosystem is that you can’t even find out if your device has been tampered with, as Vice explored in a May 2019 piece.
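The irony is that detection, such as it is, boils down to guesswork. Here’s a rough sketch of the kind of heuristics apps resort to (the function and path list are illustrative, not exhaustive), and a determined jailbreak can hide every one of them:

```swift
import Foundation

// Rough heuristics apps use to guess whether a device is jailbroken.
// None are conclusive: a capable jailbreak can spoof all of these.
func looksJailbroken() -> Bool {
    // 1. Leftover files from common jailbreak tools
    let suspectPaths = [
        "/Applications/Cydia.app",
        "/Library/MobileSubstrate/MobileSubstrate.dylib",
        "/bin/bash",
        "/usr/sbin/sshd",
    ]
    if suspectPaths.contains(where: { FileManager.default.fileExists(atPath: $0) }) {
        return true
    }
    // 2. Writing outside the app sandbox should always fail on stock iOS
    let probe = "/private/jb_probe.txt"
    if (try? "x".write(toFile: probe, atomically: true, encoding: .utf8)) != nil {
        try? FileManager.default.removeItem(atPath: probe)
        return true
    }
    return false
}
```

And that’s the point: if an app can only guess, the device’s owner certainly can’t do better.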

The Danger of Walled Gardens


“The bad guys will find a way in one way or another. Shouldn’t we enable the good guys to do their job?” - Zuk Avraham, ZecOps founder.

The problem with closed systems is that Apple holds absolute control. This kind of extreme centralization is never a good idea. A single point of failure is a vulnerability. Apple is not perfect, nor should we expect them to be. So why are they the only ones who can diagnose and fix problems?

In contrast, open-source systems publicly release their code, allowing anyone to independently analyze it, find flaws in it, and even submit corrections or improvements. This is where Android truly shines, and I don’t mean the version that ships with your phone. Android as it comes from Samsung, Sony, or even Google or any other OEM, while still far more open than Apple’s ecosystem, is still largely dependent on whatever the OEM decided would be best to implement.

The mere fact that Android is open-source allows you to make any changes to the device you own, including implementing the utmost security and privacy protection. Android customization is often thrown around as a fanboy talking point, but most people don’t really know just how deep it goes. Back in the KitKat days, Android included a surprisingly useful permissions manager called AppOps, vastly superior to anything offered by the OS today. For whatever reason, Google removed access to it, leaving it reachable only with root (that’s jailbreaking, in Android speak) and a command-line interface.

That is, until the CyanogenMod community brought us Privacy Guard, which was little more than an interface to that built-in permission management system. It was super basic and easy to use. It showed you the frequency with which each app used each permission, making it a godsend for figuring out which pesky app was draining your battery (protip: it’s probably Facebook). If Android were locked down as tightly as iOS is, this never would have been possible. When Apple removes access to a feature, it’s gone forever.

Then came XPrivacy, a far more advanced, granular and customizable approach to permission management. Fun fact: it allowed you to block apps from accessing your clipboard way back in 2014. It also allowed you to block other things, like internet access, which was a great way to stop offline apps, including games, from being able to show ads or send back data about you.

Another, perhaps more taboo app to bring up is Lucky Patcher. It’s largely associated with its ability to pirate apps and unlock in-app purchases for free, but my favorite feature was always the ability to see all the different components of each app and disable whichever ones you wanted. This was great back when I used Facebook: it allowed me to disable anything that mentioned “telemetry” or “ad”, reducing its overall ability to fingerprint me. I also used it to disable GIFs in my keyboard, because I never used them, so I might as well reclaim the RAM that feature was eating up.

And these are all just simple examples of apps and tweaks that can enhance your privacy, made possible by Android’s open-source nature. Not enough? You can even make your own version of Android, allowing only what you want and nothing else. Make it use your own servers to sync contacts, for example, instead of Google’s or Apple’s.

If you can’t code, that’s fine too. Plenty of Android forks, called custom ROMs, already exist. Many of them are focused on privacy and security from the ground up, making them a much better option than any iPhone or Android device you can buy from an OEM. Projects like GrapheneOS, #!os (read: hashbangOS), CalyxOS, RattlesnakeOS, and /e/.

These are all hobby projects by enthusiasts, so don’t expect OEM-level polish. That said, some, like /e/ and iode, have done a pretty good job of making themselves rather easy for the average user to get up and running. Some even sell refurbished devices running a privacy-friendly OS right out of the box, removing the hassle of the installation process for non-technical users.
