I tend to talk about topics that aren't very exciting to most developers, but I also try to make it a habit to discuss topics that are fundamental for software development. Security ticks both boxes. Few developers get excited when I bring up security. The truth is that security should always be top of mind when you are creating software.
Before I talk about security, I want to stress that I am no security expert. In this episode, I discuss what I have learned over the years. Use the practices outlined in this episode at your own risk. Neither I, Cocoacasts, nor my company can be held liable for any claim, damages, or other liability arising from the application of these practices.
Be Prepared
You always need to be mindful that the software you build can be compromised. If a skilled attacker has set their sights on your application, then they can compromise your application and the secrets it stores. Don't make the mistake of assuming that you have built a piece of software that is impenetrable.
It's important to know and understand what can happen if your application is compromised. As a developer, you need to be able to answer two key questions: (1) "What are the consequences if your application and the secrets it stores are compromised?" and (2) "How do you respond when that happens?"
Before you share your application with the world, you need to have an answer to these questions. For some applications, the answers are short and simple. If you are working on a banking application, then the answers are a bit more complicated.
What Is a Secret?
This episode focuses on the protection of secrets, which brings up the question "What is a secret?" I need to emphasize that this episode focuses on security on the client, more specifically an iOS, tvOS, macOS, or watchOS application. I don't cover security on the server in this episode. That is a different discussion.
I define a secret as any piece of sensitive information that identifies your application and is specific to your application. Your application uses secrets to identify and authenticate itself with other services. Examples include API keys and client secrets. In the rest of this episode, I use the more generic term secrets.
Public and Private Secrets
It's important to make a distinction between public and private secrets. The goal is to keep both out of the hands of other parties. The difference is that public secrets are intended to be exposed to some extent and should not cause major issues if they are compromised. Having a private secret exposed, on the other hand, is a major security breach with significant consequences.
Let's take Fabric as an example. Fabric asks developers to add the Fabric API key to the application's Info.plist. It isn't complicated to extract the Info.plist from an application you downloaded from Apple's App Store. The consequences are minor if an attacker extracts the Fabric API key from your application's Info.plist. They may be able to send data to Fabric, which is inconvenient and something you don't want to happen. But it isn't a major issue and it doesn't harm the users of your application. I refer to this type of secret as a public secret. Public secrets are knowingly exposed to some extent.
It's a different story if your application uses a secret to communicate with a payment provider. That secret needs to stay private at any cost. No attacker should be able to communicate with the payment provider on your behalf. I refer to this type of secret as a private secret. Private secrets should not be knowingly exposed.
User Data
I won't cover the user's data in detail in this episode. Protecting the user's data calls for a different approach. If an attacker wants access to the user's data, then they typically need physical access to the user's device or they need to set up a man-in-the-middle attack.
There are a number of simple guidelines you can follow to secure the user's data. The first rule I apply: store every piece of sensitive information in the keychain. That is the safest location on an Apple device. I also recommend encrypting any data the user creates and stores in your application. Both Realm and Core Data have built-in support for encrypting data.
The second rule I stick to is merely common sense. Only store information that is absolutely necessary. There's no need to store the user's password in the keychain when the user signs in. Most services return an access token to the client in return for a valid email and password combination. That access token is used to identify and authenticate the user with the service.
The application stores the access token in the keychain, not the password. An application doesn't need the password for anything but to request an access token. You could store the user's email in the keychain for convenience. If the user signs out and signs in again at a later point, it is convenient for them to see the email text field prepopulated.
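To make this concrete, here is a minimal sketch of storing and reading an access token with the Security framework. The service and account values are placeholders I made up for this example.

```swift
import Foundation
import Security

// A minimal sketch of keychain storage for an access token.
// The service and account values below are placeholders.
enum TokenStore {
    private static let service = "com.example.myapp"
    private static let account = "accessToken"

    private static var baseQuery: [String: Any] {
        [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account
        ]
    }

    static func save(_ token: String) -> Bool {
        // Remove any existing item before adding the new one.
        SecItemDelete(baseQuery as CFDictionary)

        var attributes = baseQuery
        attributes[kSecValueData as String] = Data(token.utf8)
        return SecItemAdd(attributes as CFDictionary, nil) == errSecSuccess
    }

    static func load() -> String? {
        var query = baseQuery
        query[kSecReturnData as String] = true
        query[kSecMatchLimit as String] = kSecMatchLimitOne

        var result: AnyObject?
        let status = SecItemCopyMatching(query as CFDictionary, &result)
        guard status == errSecSuccess, let data = result as? Data else { return nil }
        return String(data: data, encoding: .utf8)
    }
}
```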
A key difference between the user's data and application secrets is the impact they have when compromised. If the user's device is compromised in some way and the user's data are in the hands of the attacker, only the user is affected.
Protecting Secrets
There are several strategies to protect private secrets, some of which you may already be familiar with. I also discuss the pros and cons of each strategy. Let's start with what is probably the most common and, at the same time, the least secure solution.
Info.plist
A very common location for storing secrets is the application's Info.plist. It is convenient and many frameworks, libraries, and SDKs explicitly ask you to add public secrets to the application's Info.plist. Why is that and is it a safe option?
The application's Info.plist is ideal for storing information that configures your application, such as your application's name, its version, and the name of the main storyboard. It can be, and often is, used to store public secrets. This isn't true for private secrets.
It is easy to download an application from the App Store, inspect the contents of the application bundle, and extract its Info.plist. That is why you should never store private secrets in the application's Info.plist.
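For reference, this is how an application typically reads such a value at runtime, which also shows how little stands between the Info.plist and anyone inspecting the application bundle. The key name in this sketch is hypothetical.

```swift
import Foundation

// A minimal sketch of reading a public secret from the Info.plist.
// "AnalyticsAPIKey" is a hypothetical key chosen for this example.
let analyticsAPIKey = Bundle.main.object(forInfoDictionaryKey: "AnalyticsAPIKey") as? String
```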
Code
Another popular option is hard coding secrets in the project's source code. Because the secrets are compiled into the application's binary, extracting them takes a bit more effort. Be warned, though. Anyone with the proper skills can extract such secrets with relative ease. While this strategy is marginally safer, it isn't a solution you should use for storing private secrets.
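To illustrate, a hard-coded secret usually looks something like this. The value is made up, but the point is that the string literal survives compilation and can be recovered from the binary with tools like strings or a disassembler.

```swift
// A hypothetical hard-coded secret. The literal is embedded as-is in the
// compiled binary, where it can be recovered with relatively little effort.
enum Secrets {
    static let paymentAPIKey = "0123456789abcdef" // placeholder value
}
```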
Encrypting Secrets
A more advanced and more complicated strategy is encrypting the private secrets your application uses. The encrypted secrets can be embedded in the application's binary or they can be fetched from a remote service at runtime. The application needs to decrypt the encrypted secrets before it can use them, which means that the application also needs access to a decryption key.
You can include the decryption key in the application's binary or the application can fetch it from a remote service at runtime. Both options are vulnerable to attack because the decryption key is exposed one way or another.
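To give you an idea of what decryption looks like in practice, here is a minimal sketch using CryptoKit (available since iOS 13). Where the encrypted blob and the key data come from, the binary or a remote service, is left open; both parameters are assumptions for this example.

```swift
import CryptoKit
import Foundation

// A minimal sketch of decrypting an AES-GCM encrypted secret.
// The encrypted blob and the key data are assumed to come from the
// binary or a remote service.
func decryptSecret(combined: Data, keyData: Data) throws -> String? {
    let key = SymmetricKey(data: keyData)
    let sealedBox = try AES.GCM.SealedBox(combined: combined)
    let plaintext = try AES.GCM.open(sealedBox, using: key)
    return String(data: plaintext, encoding: .utf8)
}
```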
There is another, more subtle problem with this strategy. The application can only use the encrypted secrets by loading them into memory. A skilled attacker can access the secrets while they are in memory by reading the contents of the memory.
This strategy isn't perfect, but it's much more robust than storing secrets in the application's Info.plist.
Fetching Secrets
Another, more advanced strategy is to not store any secrets in the application's binary. That sounds appealing, doesn't it? At runtime, the application asks a remote service for the secrets it needs to do its work. There are several advantages to this approach.
The most obvious advantage is that the secrets cannot be extracted from the application's binary. Another compelling benefit is that secrets can be replaced quickly and easily. What happens if one of the secrets is compromised? Because the secret isn't embedded in the application's binary, it can be revoked and replaced with a new one. The next time the application asks the remote service for the secret, the service sends it the new one.
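What could that look like in code? Here is a minimal sketch, assuming a hypothetical endpoint and response format and the async URLSession API introduced in iOS 15.

```swift
import Foundation

// A minimal sketch of fetching a secret from a remote service at runtime.
// The endpoint and the response format are assumptions for this example.
struct SecretResponse: Decodable {
    let secret: String
}

func fetchSecret() async throws -> String {
    let url = URL(string: "https://api.example.com/v1/secrets/analytics")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(SecretResponse.self, from: data).secret
}
```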
Most of these services are quite advanced and have a number of interesting options. It is possible to issue a different secret to each client. This limits the risk in case a secret is compromised.
Most services maintain detailed logs to track down the origin of a security breach. It's also possible to issue secrets that have a limited lifetime, drastically reducing the chance and consequences of a security breach.
This strategy is appealing, but it suffers from the same problem as the previous one. The application uses the secrets to perform a task, which means they are loaded into memory. A skilled attacker can access the secrets while they are in memory by reading the contents of the memory.
Using a Broker
The previous strategy has a number of compelling advantages. By not including secrets in your application's binary, it is easy to replace them when they are compromised. The last option I would like to discuss takes the previous strategy one step further and it is the option I like most.
I'm sure you agree that the best strategy is to hide private secrets from the application itself. Right? That sounds great, but the question is "How would that work?"
Let me illustrate this option with an example. The application you are building needs to notify a third party service. The third party service expects your application to authenticate every request with a secret of some sort, an access token for example. How is that possible if the application doesn't have access to the secret?
The answer is surprisingly simple. The application uses a service that acts as a broker. Most applications fetch data from a backend, and that backend can act as the broker. The application notifies the backend, and the backend notifies the third party service on the application's behalf.
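Here is a minimal sketch of what that could look like on the client, assuming a hypothetical backend endpoint. Notice that the client knows nothing about the third party service or the secret it requires.

```swift
import Foundation

// A minimal sketch of the broker approach. The application only talks to its
// own backend; the backend holds the secret and notifies the third party
// service. The endpoint and payload are assumptions for this example.
func notify(event: [String: String]) async throws {
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/events")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(event)

    // No third party URL or secret appears anywhere in the client.
    _ = try await URLSession.shared.data(for: request)
}
```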
There are several benefits to this approach. Like the previous strategy, replacing a compromised secret is fast and painless. Because the application doesn't know about the secret, there's no need to update the application in the event that the secret is compromised.
But there's more. The application doesn't expose any details about the third party service it notifies. An attacker could read the data that is being sent, and they could even push data through the broker, but they have no way of knowing which third party service the application notifies. The attacker has no way to access the secret either. Only the broker can access and use the secret.
I have to admit that this solution is more advanced and it requires a backend, but the benefits are well worth the investment.
Permissions
If you're working alone on a project, then communication isn't a problem. The moment you work in a team, large or small, it is important that miscommunication doesn't result in security issues. What do I mean by that?
Let's say the application you're working on needs to interact with a third party service. For that to work, you need a public secret, an API key for example. You ask someone on the team to create the secret because you don't have the required permissions to create one yourself. You receive a secret and follow the instructions of the third party service to set up the integration.
Many services use permissions to control which actions can be performed with a particular secret. The secret your teammate gave you turns out to be a private secret with read and write access, while the application only requires a public secret with read access. This means that you are protecting a private secret as if it were a public secret.
It's important to carefully communicate what you need and verify that what you are given is what you agreed on. Clear communication can often prevent security issues like this from making it into production.
Documentation
I strongly recommend documenting the secrets your application uses and every team member should have access to that document. Security is a shared responsibility.
Remember that you need to answer two questions for every secret your application has access to: (1) "What are the consequences if your application and the secrets it stores are compromised?" and (2) "How do you respond when that happens?" You need to prepare for the worst-case scenario by having a plan in place.