Tokens Tokening

Today’s case-study will set some ground rules for a new Anti-CSRF approach that I have been working on for the past few months. This new approach, or, for the sake of correctness – mechanism, basically catalogs CSRF tokens. Don’t freak out! You’ll understand it in no time.

First, I must say that I am probably not the first one to think of this approach. During my research I came across the same principles of the token-cataloging method I am about to show you.

So, what the hell is token cataloging, you ask? It’s simple. It is an Anti-CSRF security approach (/policy/agreement/arrangement – call it what you want) where CSRF tokens are separated into different action categories. This means there will be a certain token type for input actions, such as editing a certain field or inserting new data, and a different token type for output actions, such as fetching sensitive information from the server or requesting a certain private resource. These two main token groups will now lead our way to security perfection. Whenever a user is supplied with a form to fill, he will also be supplied with an input-action token – a one-time, about-to-expire token which will only be valid for this specific session user, and will expire x minutes after its creation time. This input token will then be related to this specific form’s token family, and will only be valid for actions of this family type.

Now, after explaining the “hard, upper layer”, let’s get into some examples:

Let’s say we have a very simple & lightweight web application which allows users to:
a. Insert new posts to a certain forum.
b. Get the name of each post creator & the date of the creation of the post.

Ok, cool. We are allowing two actions: an input one (a) and an output one (b). This means we’ll use two token families: one for inserting new posts, and the other for getting information about a certain post. We’ll simply generate a unique token for each of these actions and supply it to the user.

But how are we going to validate the tokens?
This is the tricky part. Saving tokens in the database is a total waste of space, unless they are needed for a long time. Since our new approach separates the tokens into different families, we also use different types of tokens – some tokens should only be integers (long, of course), some should only be characters, and some should be both. When there is no need to save the token for further actions, the token should not be kept in any data collection, and it should be generated specifically for each user.
What does that mean? It means we can derive tokens from the session user’s details, which we already have – we can use his session cookie, we can use his username (obfuscated, of course), and we can mix in some factors in order to generate the token in a unique way which can only be ‘understood’ by our own logic later, in the token validation process. No more creating a random, meaningless 32-character token that could be used a trillion times. Each action should have its own unique token.
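To make this concrete, here is a minimal sketch of the idea, assuming a Python back end and hypothetical names (SERVER_SECRET, the two-minute TTL and the family strings are all my assumptions, not a prescription). The token is derived with an HMAC over the session details and the action family, so nothing has to be stored server-side:

```python
import hmac
import hashlib
import time

# Hypothetical values – load the secret from secure configuration in practice.
SERVER_SECRET = b"replace-with-a-long-random-secret"
TOKEN_TTL_SECONDS = 120  # the token expires x minutes after creation (x = 2 here)

def generate_action_token(session_id: str, username: str, family: str) -> str:
    """Derive a one-time token bound to a session user and an action family."""
    issued_at = int(time.time())
    message = f"{session_id}|{username}|{family}|{issued_at}".encode()
    digest = hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()
    # The timestamp travels with the token so the server can re-derive it.
    return f"{issued_at}.{digest}"

def validate_action_token(token: str, session_id: str, username: str,
                          family: str) -> bool:
    """Re-derive the digest from the same details and compare in constant time."""
    try:
        issued_at_text, digest = token.split(".", 1)
        issued_at = int(issued_at_text)
    except ValueError:
        return False
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return False  # expired
    message = f"{session_id}|{username}|{family}|{issued_at}".encode()
    expected = hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest, expected)
```

In our forum example, the new-post form would get a token from a hypothetical insert_post family, while the post-details endpoint would demand one from get_post_info, so a token leaked from one action is useless for the other.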

“This is so frustrating and unnecessary, why should I do it?”
If you don’t care about form resubmission, that’s OK. But what about anti-brute-forcing, or even anti-DoSing? Remember that each action that inserts data into or fetches data from the DB costs you space and resources. If you don’t have the right anti-brute-forcing or anti-DoSing mechanism in place, you will go down.
By validating that each action was originally intended to happen, you will save unnecessary connections to the DB.

If implementing this approach costs you too much, simply implement some of the ideas that were presented here. Remember that using the same type of token to allow different actions may cause you harm & damage. If you don’t want to generate a token for each user’s unique action, at least generate a token for each of the user’s “general” actions, like output and input actions.

Knocking the IDOR

Sorry for the no-new-posts November – FogMarks has been very busy exploring new fields and worlds. But now – we’re back, baby!

Today’s case-study is about an old incident (and by “old” I mean three months old), but due to recent developments in active research into a well-known company’s popular product, I want to present and explain the huge importance of having an Anti-IDOR mechanism in your application.

Intro

Basically, an IDOR (Insecure Direct Object Reference) allows an attacker to mess around with an object that does not belong to him. This could be the private credentials of users (like an email address), a private object that the attacker should not have access to (like a private event), or public information that should simply and rationally not be changed (or viewed) by a third party.

When an attacker is able to mess around with an object that does not belong to him, the consequences can be devastating. I’m not just talking about critical information disclosure that could run the business into the ground (like Facebook’s recent discoveries), I am also talking about messing around with objects in a way that could lead the attacker to execute code on the server. Don’t be so shocked – it is very much possible.

From an IDOR to RCE

I’m not going to disclose the name of the company or software that this serious vulnerability was found on. I am not even going to say that this is a huge company with a QA and security response team that could fill an entire mall. Twice.
But, as you might have already guessed, gaining access to a certain object that you shouldn’t have had access to sometimes allows you to actually run commands on the server.

Although I can’t talk about that specific vulnerability, I am going to reveal my logic for preventing an IDOR at its roots.

Ideally speaking

An IDOR is prevented using an Anti-IDOR Mechanism (AIM). We at FogMarks developed one a few years ago and, knock on wood, none of our customers has ever dealt with an IDOR problem. Don’t worry, we’re not going to try to sell it to you. This mechanism was created only for two large customers who shared the same code base. Create your own mechanism with the info down below, jeez!
But seriously, AIM’s main goal is to AIM (see the word play?) the usage of a certain object only at the user who created it, or at the user(s) who have access to it.

This is done by storing that information in a database, especially for sensitive objects that could be targeted from the web clients.
When an object is inserted into the DB, the mechanism generates a unique 32-character identifier for it. This identifier is only used by the server, and it’s called the “SUID” (Server Used ID). In addition, the mechanism issues a 15-digit integer identifier for the client side that is called, of course, the “CUID” (Client Used ID). The CUID integer is made from part of the 32-character SUID and part of the object’s details (like its name) using a special algorithm.

The idea of generating two identifiers for the same object is to avoid revealing the identifier of the actual sensitive object to the client side, so no direct access can be made in unexpected parts of the application.

Since object attributes tend to change (their names, for example), the CUID is replaced every once in a while, and the “heart” of the logic is to carefully match the CUID to the SUID.

The user’s permissions also contain a list of nodes holding the SUIDs of the objects that the user has access to.

When the user issues a request from the client side, the algorithm tries to generate part of the SUID from the supplied CUID. If it succeeds, it tries to match that part against the SUID list in the user’s permissions collection. If they match, the requesting user gets one-time, limited access to the object. This one-time access is enabled for x minutes and for one static IP, until the next process of matching a CUID to a SUID.
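I can’t publish the exact derivation algorithm, so here is only a minimal sketch of the two-identifier idea, assuming Python and hypothetical names throughout. It swaps the ‘derive part of the SUID back from the CUID’ trick for a plain server-side mapping, which serves the same goal: the SUID never leaves the server:

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical secret and in-memory stores, purely for illustration.
CUID_SECRET = b"another-long-random-secret"
ACCESS_TTL_SECONDS = 300  # the "x minutes" of one-time access

suid_by_cuid = {}   # CUID -> SUID (rotated whenever CUIDs are replaced)
permissions = {}    # username -> set of SUIDs the user may access

def register_object(owner: str, object_name: str) -> str:
    """Issue a server-side SUID and a client-side CUID for a new object."""
    suid = secrets.token_hex(16)  # 32-character, server-only identifier
    digest = hmac.new(CUID_SECRET, f"{suid}|{object_name}".encode(),
                      hashlib.sha256).hexdigest()
    cuid = str(int(digest[:12], 16)).zfill(15)[:15]  # 15-digit client identifier
    suid_by_cuid[cuid] = suid
    permissions.setdefault(owner, set()).add(suid)
    return cuid  # only the CUID ever reaches the client

def request_access(username: str, cuid: str, client_ip: str):
    """Match the CUID back to a SUID and check the user's permissions."""
    suid = suid_by_cuid.get(cuid)
    if suid is None or suid not in permissions.get(username, set()):
        return None  # unknown object or no permission: deny
    # Grant one-time access, limited in time and pinned to a single IP.
    return {"suid": suid, "ip": client_ip,
            "expires_at": time.time() + ACCESS_TTL_SECONDS}
```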

This whole process, of course, is managed by only one mechanism – the AIM.
The AIM handles requests in queue form, so when dealing with multiple parallel requests the AIM might not be the perfect solution (due to the possibility that the object will be changed by two different users at the same time).

Conclusion

In order to keep your platform safe from IDORs, requests to access sensitive objects should be managed only by one mechanism.

You don’t have to implement the exact same logic I did and compile two different identifiers for the same object, but you should definitely manage requests and permissions to private objects in only one place.

Here are some more examples of IDORs found by FogMarks in some very popular companies’ products (which were patched, of course):

Until next time!

And The King Goes Down


Tokens are great. Well, sometimes.

Today’s case-study will discuss the importance of token-managing software.
Well, every site that allows login will normally use a token for each of the ‘critical’ actions it allows users to perform. Facebook, for example, automatically adds a token at the end of any link a user provides, and even at the end of their own links! This mechanism is called ‘Linkshim’, and it is the primary reason why you never hear about Facebook open redirects, CSRFs or clickjacking (yeah, yeah, I know they also simply don’t allow iframes to access them – I’ll write a whole case-study about that in the near future).
Facebook’s method is pretty simple – if a link is added to the page, add a token at the end of it. The token, of course, should only allow the same logged-in user to access the URL, and there should be a token counter to restrict the number of times a token can be used (hint: only once).
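Here’s a minimal sketch of that kind of link wrapping, under my own assumptions (Python, an in-memory store and invented names – this is not Facebook’s actual implementation): every rendered link carries a per-user, single-use token, and the redirect endpoint refuses anything that doesn’t match:

```python
import secrets
from urllib.parse import urlencode

link_tokens = {}  # token -> (user_id, url); a real system would persist this

def wrap_link(user_id: int, url: str) -> str:
    """Rewrite an outbound link so it carries a per-user, one-time token."""
    token = secrets.token_urlsafe(16)
    link_tokens[token] = (user_id, url)
    return "/redirect?" + urlencode({"u": url, "t": token})

def follow_link(user_id: int, url: str, token: str):
    """Redirect endpoint: consume the token, allowing exactly one use."""
    entry = link_tokens.pop(token, None)  # pop = the token dies on first use
    if entry != (user_id, url):
        return None  # wrong user, tampered URL, or reused token: refuse
    return url  # safe to redirect
```

Because the token dies on first use, a reused or guessed link simply stops working – the same property that defeats CSRF replay.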

But what happens when tokens are managed the wrong way?

A very famous security company, which still hasn’t allowed us to publish its name, allowed users to create a team. When a user creates a team, he is the owner of the team – he has the ‘highest’ role, and he basically controls all of the team’s actions and options – he can change the team’s name, invite new people to the team, change the roles of people in the team, and so on.

The team offers the following roles: Owner, Administrator and some other minor, non-important roles. Only the owner and administrators of the team are able to invite new users to the team. An invitation can be sent only to a person who is not on the team and does not have an account on the company’s website. When the recipient opens the email he is redirected to the company’s registration page, and is then added to the team with the role the Owner/Admin set.

When I first looked at the team options I noticed that after the owner or an admin invites other people to the team via email, he can resend the invitation in case the invited user missed it or deleted it by accident. The resend option was a link at the side of each invitation. Clicking the link created a POST request to a certain ‘invitation manager’ page, and passed it the invitation ID.

That’s where I started thinking. Why pass the invitation ID as-is? Why not obfuscate it, or at least use a token for some sort of validation?

Well, that’s where the gold is, baby. Past invitation IDs were not deleted. That means that invitations that had already been approved were still present in the database, and still accessible.

By changing the passed invitation ID parameter to the ‘first’ invitation ID – the owner’s – it was possible to resend an invitation to him.
At first I laughed and said, ‘Oh well, how much damage could it do besides spamming the owner a bit?’. But I was wrong. Very wrong.

When the system detected that an invitation to the owner was sent, it removed the owner from his role. But there is more – remember I said that sending an invitation sends the recipient a registration page according to his email address? The system also wiped the owner’s account – his private details and, most importantly, his credentials. This caused the owner’s whole account to be blocked. A classic DoS.

So how can we prevent unwanted actions from being performed on our server? That’s kind of easy.
First, let’s attach an authenticity token to each action. The authenticity token must be generated specifically and individually for each specific user.
Second, like milk and cheese – let’s attach an expiration date to the token. A two-minute expiration is a fair time to allow our token to be used by the user.
And last, let’s delete used tokens from the accessible-tokens mechanism. A token should be used only once. If a user has a problem with that – generate a few tokens for him.
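Put together, the three rules are small enough to sketch. This is only an illustration under my own assumptions (Python, an in-memory store instead of a real token database, and invented names):

```python
import secrets
import time

TOKEN_TTL_SECONDS = 120  # rule 2: the two-minute expiration
active_tokens = {}       # token -> (username, action, expires_at)

def issue_token(username: str, action: str) -> str:
    """Rule 1: a token generated individually for one user and one action."""
    token = secrets.token_urlsafe(32)
    active_tokens[token] = (username, action, time.time() + TOKEN_TTL_SECONDS)
    return token

def consume_token(token: str, username: str, action: str) -> bool:
    """Rules 2 and 3: reject expired tokens and delete every token on use."""
    entry = active_tokens.pop(token, None)  # pop = single use, always removed
    if entry is None:
        return False
    owner, allowed_action, expires_at = entry
    return (owner == username and allowed_action == action
            and time.time() <= expires_at)
```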

In conclusion,
this case-study presented a severe security issue that was discovered in the code of a very famous security company.
The security issue could have been prevented by following three simple principles: 1) attaching a token to each action that is performed by a user; 2) setting a rational expiration time for each token; 3) and most importantly – correctly managing the tokens and deleting used ones.

How Private Is Your Private Email Address?


After reading some blog posts about Mozilla’s Addons website, I was fascinated by this Python-based platform and decided to focus on it.
The XSS vector led basically nowhere. The folks at Mozilla did an excellent job curing and properly sanitizing every user input.

This led me to change direction and search for the most fun kind of vulnerabilities – logic flaws.

The logic
Most people don’t know it, but the fastest way to track down logic-based security issues is to get into the mind of the author and try to think from his point of view. That’s it. Look at a JS function – would you have written the same code? What would you have changed? Why?

Mozilla’s Addons site has a collections feature, where users can create a custom collection of their favorite addons. That’s pretty cool, since users can invite other users to a role on their collection. How, you ask? By email address, of course!

A user types in the email address of another user, an AJAX request is made to an ‘address resolver’, and the ID of the user who owns this email address is returned.

When the user presses ‘Save Changes’, the just-received ID is passed to the server and translated back into the email address, displayed next to the user’s username. Pretty weird.

So, if the logic, for some reason, is to translate an email to an ID and then the ID back to the email, we can simply interrupt this process in the middle of it, and replace the generated ID with the ID of another user.
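To see why that matters, here is a toy reconstruction of the round trip (Python, with made-up data and function names – not Mozilla’s actual code):

```python
# Toy user directory standing in for the real database.
users_by_email = {"alice@example.com": 101, "victim@example.com": 202}
emails_by_id = {uid: email for email, uid in users_by_email.items()}

def resolve_address(email: str) -> int:
    """Step 1 – the AJAX 'address resolver': email -> user ID."""
    return users_by_email[email]

def save_changes(user_id: int) -> str:
    """Step 2 – 'Save Changes': ID -> email, echoed back to the page.
    Nothing verifies that this ID is the one step 1 just produced,
    so substituting any guessed ID leaks that user's email address."""
    return emails_by_id[user_id]

# Intended flow: save_changes(resolve_address("alice@example.com"))
# Attack: skip step 1 and send an arbitrary ID instead:
print(save_changes(202))  # -> "victim@example.com"
```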

The following video presents a proof of concept of this vulnerability, which exposed the email address of any addons.mozilla.org user.

Final Thoughts
It is a bad practice to do the same operation twice. If you need something to be fetched from the server, fetch it one time and store it locally (HTML5 localStorage, a cookie, etc.). This simple logic flaw jeopardized hundreds of thousands of users until it was patched by Mozilla.

The patch, as you may have guessed, was to send the email address to the server instead of the ID.

Facebook Invitees Email Address Disclosure

Prologue

When Facebook was just a tiny company with only a few members, it needed a way to get more members.

Today, when you want more visitors to your site, you advertise on Facebook, because everybody is there.

Back then, the main advertising options were manually posting advertisements on popular websites (using Google, for instance), or getting your members to invite their friends using their email accounts.

Facebook’s Past Invitation System

When a user joined Facebook in its early days, there was literally nothing to see. Therefore, Facebook asked its members to invite their friends using an email invitation that was created by the registered user.

The user supplied his friends’ email addresses, and they received an email from Facebook saying that ‘Mister X is now on Facebook, you should join too!’.

Fun Part

As I came across this feature of Facebook I immediately started to analyze it.

I thought it would be nice to try and fool people into thinking that a user Y invited them to join, although the one who actually did it was a user X.

As I kept inviting people over and over again, I noticed something interesting: each invitation to a specific email address contained an invitation ID: ent_cp_id.

When clicking on Invite to Facebook, a small window pops up and shows the full email address of the invitee.

I wrote down the ent_cp_id of an email address I wanted to invite, and invited it once.

At this point I thought: “OK, I have invited this user, so his ent_cp_id should not be accessible anymore”. But I was wrong. The ent_cp_id was still there. In fact, by simply retransmitting the HTTP request I could invite the same user again.

But the most interesting part of this vulnerability is the fact that any user could have seen the email address that was behind an ent_cp_id.

That means that anyone who was ever invited to Facebook via email was vulnerable to email address disclosure, because the invitation was never deleted and it was accessible to any user. All an attacker had to do next was randomly guess ent_cp_ids. As I said, old ent_cp_ids aren’t deleted, so the success rate is very high.

Conclusion

When you are dealing with sensitive information like an email address, you should always limit the number of times the related action can be performed. In addition, it is recommended to wipe any ID that might be linked to that sensitive information, or at least hash-protect it.
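Both recommendations fit in a few lines. The following is only an illustration under my own assumptions (Python, in-memory storage, and an invented cap of three invitations per address):

```python
import hashlib
import secrets

MAX_INVITES = 3      # assumed limit on invitations per address
invites = {}         # opaque invite ID -> hashed email (never the raw address)
invite_counts = {}   # email hash -> how many times this address was invited

def create_invite(email: str):
    """Limit how often the action can run, and never store a guessable ID."""
    email_hash = hashlib.sha256(email.lower().encode()).hexdigest()
    if invite_counts.get(email_hash, 0) >= MAX_INVITES:
        return None  # rate limit reached for this address
    invite_counts[email_hash] = invite_counts.get(email_hash, 0) + 1
    opaque_id = secrets.token_urlsafe(16)  # unguessable, unlike sequential IDs
    invites[opaque_id] = email_hash        # only the hash is kept
    return opaque_id

def redeem_invite(opaque_id: str) -> bool:
    """Wipe the ID once the invitation is accepted so it can't be replayed."""
    return invites.pop(opaque_id, None) is not None
```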

Facebook quickly solved this issue and awarded a kind bounty.