Sometimes, when I’m using a new app and have to enter my personal information for the umpteenth time, I like to figure out how the app processes that data. How are my data sent? What about access control? Is there any sign that I can see data that doesn’t belong to my account? Are there any other issues suggesting that security deserves a bit more attention?
Nowadays, almost everything is encrypted, which makes it all the more difficult to find out what exactly is going on. Encryption takes place between the client (app) and the server (backend), so you generally need access to the app or the backend to see what’s happening. You can of course jailbreak your phone, but fortunately there are also proxy tools like mitmproxy or Burp that can help. Redirecting your smartphone’s traffic through such a tool often reveals what an app does. Mitmproxy intercepts traffic from the app, pretends to be the (web) server, and then passes the (potentially modified) data on to the real web server. The server’s response returns the same way.
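mitmproxy is scriptable in Python, and a tiny addon is enough to log which hosts an app talks to. A minimal sketch (the `request` hook and the `flow.request` attributes are mitmproxy’s documented addon API; the filename and output format are my own):

```python
# log_hosts.py -- a minimal mitmproxy addon; run with: mitmproxy -s log_hosts.py
# (your phone's Wi-Fi proxy must point at the machine running mitmproxy)

seen_hosts = set()

def request(flow):
    # mitmproxy calls this hook for every intercepted HTTP(S) request;
    # flow.request.host is the hostname the app is talking to.
    host = flow.request.host
    if host not in seen_hosts:
        seen_hosts.add(host)
        print("app talks to:", host, "->", flow.request.method, flow.request.path)
```

Even this much already shows which backends an app contacts the moment it starts up.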
If HTTPS is used, this doesn’t 'just' happen, which is a good thing. If decryption were that easy, HTTPS would be of much less value. Operating systems, browsers and smartphones all ship with a so-called 'Trusted Certificate Store'. This store defines who may sign certificates (the trusted Certificate Authorities, or CAs); certificates not signed by one of those CAs are not trusted. Mitmproxy acts as a CA itself: it intercepts traffic, uses SNI and certificate pre-fetching to determine which host a certificate must be generated for, generates it, signs it with its own CA key, and presents the signed certificate to the client. However, the mitmproxy CA is generated at first use and is not in the client’s Trusted Certificate Store by default, so the connection should fail. You are of course free to import the mitmproxy CA into your own client’s Trusted Certificate Store so that you can still view traffic, but that requires changing something on your client (browser, smartphone), and it therefore only works for you. Usually, that is; sometimes you have to jump through a few extra hoops, such as certificate pinning or TLSA.
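You can inspect this trusted store from code, too. In Python, for instance, a default TLS client context loads the system’s trusted CAs and refuses anything not signed by one of them; a sketch (paths and counts will differ per system):

```python
import ssl

# Where this system's trusted certificate store lives (varies per OS)
print(ssl.get_default_verify_paths())

# A default client context requires certificates signed by a trusted CA
# and a matching hostname -- exactly why mitmproxy's fresh CA is rejected.
ctx = ssl.create_default_context()
print("verification required:", ctx.verify_mode == ssl.CERT_REQUIRED)
print("hostname check:", ctx.check_hostname)
print("trusted CA certificates loaded:", len(ctx.get_ca_certs()))
```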
Browsers have been checking certificate signatures for quite some time now; it has become a standard part of security. It’s such a basic thing that you don’t even really think about it anymore: when using a common browser, you’re probably fine. And although exceptions can be made (rightly, to protect or defend users, or otherwise), and for many of us the DigiNotar debacle is still fresh in our minds, the security of up-to-date browsers on sites with green padlocks is, generally speaking, fine. And if you want, you can easily find out who signed a certificate in any common browser (though Chrome made it a little harder for the average user). But what about apps?
The majority of apps essentially contain a form of mini-browser: a browser that can often only visit one website, the API of the app’s backend. Of course, not every app developer needs to build this browser themselves; the language an app is written in (e.g. Objective-C for iOS) can retrieve and send data over HTTP(S) out of the box. And this is generally quite safe. One of the things handled by default is checking who signed a certificate and rejecting certificates whose signing party (the CA) is not present in the Trusted Certificate Store. Of course, as an app developer, you can also choose to weaken this and accept all certificates. This might seem useful if you don’t feel like paying for an HTTPS certificate during the test phase, though that excuse has not been tenable for some time now thanks to Let's Encrypt, which lets you obtain free trusted HTTPS certificates in a few (simple, possibly automated) steps. But apparently that is not yet easy enough, and some developers still feel the need to switch those checks off.
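The same anti-pattern exists in every HTTP stack, not just on iOS. In Python terms, the difference between a validating client and an "accept everything" client is only a couple of lines; a hedged sketch (the URL is hypothetical):

```python
import ssl
import urllib.request

API_URL = "https://api.example.test/v1/profile"  # hypothetical backend endpoint

def secure_context():
    # Default behaviour: the certificate must chain to a trusted CA and
    # match the hostname -- a mitmproxy-generated cert is rejected.
    return ssl.create_default_context()

def accept_everything():
    # The developer shortcut this article is about: any certificate,
    # signed by anyone (or no one at all), is silently accepted.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

# urllib.request.urlopen(API_URL, context=accept_everything())  # don't do this
```

With `accept_everything()`, a man-in-the-middle proxy can present its own certificate and the client will never complain.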
"But... if an app has been published, surely that’ll be in order? And how can you verify this?"
Back to the app I was working with. As is often the case, I was curious what exactly the app communicates to its backend. After redirecting my iPhone's traffic through mitmproxy I restarted the app, and the requests began to pop up. Nice... now we can see what’s going on. But then it hit me: "Didn’t I recently reinstall the host running mitmproxy? That means I haven’t yet imported the mitmproxy CA into the Trusted Certificate Store on my iPhone, right?" A quick check confirmed it: the CA was indeed missing. Regular websites and other apps give a warning or simply stop working in that situation, yet this app carried on as if nothing was wrong. A rather significant security problem. The most annoying thing is that the average user can’t actually check this. Browsers make it quite visible nowadays with green bars/padlocks, 'Secure' labels and so on, and red bars and warnings if something is wrong. But if a developer weakens security like this and someone intercepts (and decrypts) your traffic, you won’t notice a thing.
Later that day I informed the owner of the app by e-mail, but after a few days I still hadn’t received a response. I tried another e-mail address: still nothing. I then reached out by phone and left a message, but I’m still waiting to be called back. This is perhaps even more frustrating than the security issue itself.
What is the potential impact?
On a (public) network it is often quite easy to intercept traffic from clients on the same network (using ARP spoofing, for example). However, such an attacker still can’t do much against clients (browsers and apps) and servers that take security seriously; traffic is, after all, encrypted, and decryption isn’t possible under normal circumstances. Because the attacker doesn’t have a trusted CA, they cannot decrypt the victim's traffic with the aforementioned tools, at least not without the user becoming aware of it: attempts at such a man-in-the-middle attack result in warnings on the client.
This is very different with apps that undermine HTTPS security by accepting all CAs. They allow traffic to be intercepted and modified without the user ever noticing. Do not be surprised if your data falls into the wrong hands when you use such an app on a network shared with other users (a public hotspot, or even a corporate network), or if a transaction ends up with the wrong person. Of the 42 iOS apps I've tested, at least three have this problem. And they’re not just any apps: an insurer, a fintech app and an app in the automotive category (relax... I haven’t yet managed to hack someone else's car this way). More of the 42 may be affected; for the apps that require a login, I could at least establish that the login page itself was safe. Statistically this small sample may not be significant, but with 2.2 million apps in the Apple App Store, there are bound to be many more apps with this problem. And Android isn’t immune either: one of the three affected iOS apps has the same issue on Android, and of the remaining 39 there may well be a few that behave well on iOS but not on Android. Conclusion: users have no guarantee on Android either.
I'm definitely not the first to worry about the security of apps, and this particular issue has been discovered before as well. Ten years after the introduction of the iPhone, there is clearly still significant room for improvement. While a website owner has a direct interest in keeping encryption reasonably safe (users don’t like warnings and red bars), this is completely different for apps: users don’t see it anyway, and it’s difficult to determine. Unlike with a regular website, these failures don’t show up in functional tests, and test users don’t see them either. Time for the Apples and Googles of this world to become stricter. As far as I'm concerned, they can remove the option to make unsafe connections without user permission, just like browsers did: if a connection is set up with an untrusted certificate (which usually happens when someone tries to steal your data), always show a warning or block it completely, and if the user accepts the warning anyway, show a red bar a few pixels high. But is that enough? Security is not binary; it is not simply on or off. If a publicly available news app is just secure enough, is it OK for an Internet banking app to apply the same level of security? How can (more technical) users easily see which security measures are in place or missing (TLS version, cipher strength, HSTS, DNSSEC, TLSA, ...)? Should that information be hidden behind a ‘padlock’ made by Apple and Google?
Finally, think about responsible disclosure: state clearly on your site where people can turn if they discover a security issue. And act on it.