Spring is here!
Hope you guys enjoyed the last days of winter, because this summer is going to be hot. And I don’t mean the temperature. More details about that later…
Our case study today sets some ground rules for a new anti-CSRF approach I have been working on for the past few months. This new approach (or, for the sake of correctness, mechanism) basically catalogs CSRF tokens. Don’t freak out! You’ll understand it in no time.
First, I must say that I am probably not the first one to think of this approach. During some research I came across the same principles behind the token-cataloging method I am about to show you.
So, what the hell is token cataloging, you ask? It’s simple. It is an anti-CSRF security approach (policy/agreement/arrangement, call it what you want) where CSRF tokens are separated into different action categories. This means there will be one token type for input actions, such as editing a certain field or inserting new data, and a different token type for output actions, such as fetching sensitive information from the server or requesting a certain private resource. These two main token groups will now lead our way to stronger security. Whenever a user is supplied with a form to fill, he will also be supplied with an input-action token: a one-time, short-lived token which is valid only for this specific session user and expires x minutes after its creation time. This input token is tied to this specific form’s token family, and is valid only for actions of that family type.
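To make that concrete, here is a minimal sketch (in Python, with illustrative names; the post doesn’t prescribe a language or storage model) of the state an input-action token carries: bound to one session user, tied to one form family, one-time, and expiring x minutes after creation:

```python
from dataclasses import dataclass, field
import secrets
import time

TOKEN_TTL_MINUTES = 10  # stands in for the "x minutes" above; pick what fits you


@dataclass
class InputToken:
    session_id: str                      # valid only for this session's user
    family: str                          # the form family it was issued for
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))
    issued_at: float = field(default_factory=time.time)
    used: bool = False                   # one-time: flips to True after first use

    def is_valid(self, session_id: str, family: str) -> bool:
        """Accept only a fresh, unused token for the same user and family."""
        fresh = time.time() - self.issued_at < TOKEN_TTL_MINUTES * 60
        return (not self.used and fresh
                and self.session_id == session_id
                and self.family == family)
```

A token issued for one form family fails validation against any other family, which is the whole point of the cataloging.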
Now, after explaining the “hard, upper layer”, let’s get down to some examples:
Let’s say we have a very simple & lightweight web application which allows users to:
a. Insert new posts to a certain forum.
b. Get the name of each post’s creator & the post’s creation date.
OK, cool. We are allowing two actions: an input action (a) and an output action (b). This means we’ll use two token families: one for inserting new posts, and another for getting information about a certain post. We’ll simply generate a unique token for each of these actions and supply it to the user.
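A sketch of what issuing those two family tokens could look like, assuming server-side session storage (a dict here) and hypothetical family names:

```python
import secrets
import time


def issue_family_tokens(session: dict, ttl_seconds: int = 600) -> None:
    """Give the session one token per action family, each with its own expiry."""
    expires_at = time.time() + ttl_seconds
    session["csrf"] = {
        "insert-post":     (secrets.token_urlsafe(32), expires_at),  # input action (a)
        "fetch-post-meta": (secrets.token_urlsafe(32), expires_at),  # output action (b)
    }


session = {}
issue_family_tokens(session)
# A token from one family is never accepted for the other family's action.
```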
But how are we going to validate the tokens?
This is the tricky part. Saving tokens in the database is a total waste of space unless they are needed for a long time. Since our new approach separates the tokens into different families, we also use different types of tokens: some tokens should only be integers (long, of course), some should only be characters, and some should be both. When there is no need to save a token for a further action, the token should not be kept in any data collection; it should be generated specifically for each user.
What does that mean? It means we can derive tokens from the session user’s details, which we already have: we can use his session cookie, we can use his username (obfuscated, of course), and we can mix in some factors to generate the token in a unique way, one that can only be ‘understood’ by our own logic later, during token validation. No more creating a random, meaningless 32-character token that could be used a trillion times. Each action should have its own unique token.
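One way to implement that derivation (this is my interpretation; the post doesn’t name a specific function, so I’m assuming HMAC-SHA256 as the “mixing” step and illustrative parameter names) is to recompute the token from the session details at validation time, so nothing needs to be stored at all:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-key-never-sent-to-client"  # placeholder value


def derive_token(session_id: str, username: str, action: str, expires_at: int) -> str:
    """Mix the details we already have into a token unique to this action."""
    msg = f"{session_id}|{username}|{action}|{expires_at}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


def validate_token(token: str, session_id: str, username: str,
                   action: str, expires_at: int) -> bool:
    """Recompute from the same inputs; valid only if it matches and hasn't expired."""
    if time.time() > expires_at:
        return False
    expected = derive_token(session_id, username, action, expires_at)
    return hmac.compare_digest(token, expected)
```

Because the action name goes into the mix, a token derived for one family can never validate for another.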
“This is so frustrating and unnecessary, why should I do it?”
If you don’t care about form resubmission, that’s OK. But what about anti-brute-forcing, or even anti-DoSing? Remember that each action that inserts data into or fetches data from the DB costs you space and resources. If you don’t have the right anti-brute-forcing or anti-DoSing mechanism in place, you will go down.
By validating that each action was originally intended to happen, you will save unnecessary connections to the DB.
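For example, the token check can run before any database work, so a forged or expired request is rejected without ever opening a connection (names like `fetch_post_meta` are hypothetical, and the session layout is an assumption):

```python
import hmac
import time


def fetch_post_meta(request: dict, session: dict) -> dict:
    """Validate the output-family token first; only then touch the database."""
    supplied = request.get("token", "")
    expected, expires_at = session.get("fetch-post-meta", ("", 0.0))
    if not supplied or time.time() > expires_at \
            or not hmac.compare_digest(supplied, expected):
        return {"status": 403, "body": "invalid or expired token"}
    # Only requests that were actually intended reach the (costly) query:
    return {"status": 200, "body": query_post_meta()}


def query_post_meta() -> list:
    # Placeholder for the real DB call.
    return [("alice", "2014-03-20")]
```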
If implementing this approach costs you too much, simply implement some of the ideas that were presented here. Remember that using the same type of token to allow different actions may cause you harm & damage. If you don’t want to generate a token for each of a user’s unique actions, at least generate a token for each of the user’s “general” action types, like output and input actions.