Open Redirects – Ups and Downs


A few years ago, when FogMarks was not even a tiny idea or a vision in my head, I used to do casual programming jobs on Fiverr.

One of the gigs I was asked to do was to cause a user on site x.com to be redirected to Facebook.com and then, without any action on their part, to be redirected on to y.com. I didn’t understand back then why someone would want such a thing. Why not simply redirect the user directly to y.com?
I asked the person why he would want to do that. His answer changed the way I (and later, FogMarks) treated and handled open redirects. He explained that by forcing users to arrive from a Facebook URL, the ad engines on y.com pay much, much more, because a popular site like Facebook is the one redirecting users to y.com.

Until that answer I treated open redirects as minor security issues that can’t hurt that much. I knew an innocent user could receive a link of the form innocent.com?redirect_out=bad.com, but I believed that anyone with a little common sense would notice what was going on. Those kinds of attacks are mostly used by phishing websites trying to impersonate the domain of innocent.com.

After this long introduction, I want to introduce today’s topic – a solution to open redirects. Facebook solved it with their Linkshim system, but I want to present a much simpler solution that any of you can use.

Quick note: who should not adopt this solution? Websites that want their own domain to appear in the Referer header.

Well, the common ways I have seen to prevent open redirects are creating an exit token (YouTube, Facebook’s l.php page), forbidding URLs that don’t contain the website’s own host, and allowing a user to be redirected only from a POST request.

While creating an exit token is good practice, maintaining that whole system is quite a headache: you have more values to store in the database, and you have to check expiration times, IP addresses and a lot more.
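
To give you a sense of that overhead, here is a rough sketch of a token-based exit page (sketched in Python/Flask with hypothetical names, not anyone’s real implementation); even this toy version already needs storage, expiry checks and client checks:

```python
# A minimal sketch of the token-based approach (hypothetical names, Flask assumed).
# Every outgoing link must first be registered to obtain a one-time token,
# and the exit page refuses to redirect without a valid, unexpired token.
import secrets
import time
from flask import Flask, request, redirect, abort

app = Flask(__name__)
EXIT_TOKENS = {}            # in a real system this lives in the database
TOKEN_TTL = 300             # seconds until a token expires

def create_exit_token(url, client_ip):
    """Called when rendering an outgoing link."""
    token = secrets.token_urlsafe(16)
    EXIT_TOKENS[token] = {"url": url, "ip": client_ip, "created": time.time()}
    return token

@app.route("/redirect_out")
def redirect_out():
    token = request.args.get("token", "")
    entry = EXIT_TOKENS.pop(token, None)       # one-time use
    if entry is None:
        abort(403)
    if time.time() - entry["created"] > TOKEN_TTL:
        abort(403)                             # expired
    if entry["ip"] != request.remote_addr:
        abort(403)                             # issued to a different client
    return redirect(entry["url"])
```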

Forbidding URLs that don’t contain the website’s own host is simply wrong. Websites should allow users to be redirected to other websites, not only to pages within themselves.

And allowing a user to be redirected only after clicking a button, for example, isn’t always convenient – for you or for the user.

The Golden Solution
Honestly, the first thing you’re about to say is: “What? This guy is crazy”. But the next thing will be: “Okay, I should give that a try”.
Well, a lot of websites today offer free URL shortening services. On top of that, they offer a free, modular and convenient API.
Why not use them?!

Instead of carefully creating an exit token, forbidding outside redirects or requiring POST, simply translate any outside URL into a shortened URL provided by such a service. If you don’t want a third party to store your information, you can build your own shortener using an open-source system.

That way you won’t have to worry about open redirects – they simply won’t occur from your domain.

Let’s say you have a page called ‘redirect_out.php’ with an r GET parameter that a user can supply:

my.com/redirect_out.php?r=http://bad.com

Accept the bad.com URL from the user, automatically translate it into a shorte.nd/XxXxX URL and only then allow the redirection.
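
A minimal sketch of that flow (in Python/Flask rather than PHP, for brevity; the shortening API endpoint and its response field are placeholders, not a real service’s API):

```python
# Sketch of the shortener-based redirect_out flow (hypothetical API endpoint
# and response field names; shorte.nd is just a placeholder here).
import requests
from flask import Flask, request, redirect, abort

app = Flask(__name__)
SHORTENER_API = "https://shorte.nd/api/shorten"    # hypothetical endpoint

@app.route("/redirect_out")
def redirect_out():
    target = request.args.get("r", "")
    if not target.startswith(("http://", "https://")):
        abort(400)
    # Ask the shortening service for a short URL; a well-known service also
    # keeps its own blacklist of bad domains and will refuse or warn on them.
    resp = requests.post(SHORTENER_API, json={"url": target}, timeout=5)
    if resp.status_code != 200:
        abort(502)
    short_url = resp.json().get("short_url")        # hypothetical field name
    if not short_url:
        abort(502)
    # The user leaves through shorte.nd/XxXxX, so our domain never performs
    # the final redirect to the outside URL.
    return redirect(short_url)
```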

Extra: Benefits of using a well-known shortening service
As mentioned before, this solution is aimed only at developers who don’t want their domain to show up as the referrer. If you’re worried about phishing attacks, that’s a different scenario. Still, by using a well-known shortening service that maintains a blacklist of known “bad” domains, you win twice:

  • Your domain will not be the one that redirected users to ‘bad’ websites (yes, Google knows and checks that too).
  • You’ll increase the phishing protection level of your site. Your users may still be sent toward ‘bad’ websites, but the shortening service will deny the redirect, or at least warn those users (and potentially you).

 

How Private Is Your Private Email Address?


After reading some blog posts about Mozilla’s Add-ons website, I was fascinated by this Python-based platform and decided to focus on it.
The XSS vector led basically nowhere. The folks at Mozilla did an excellent job properly sanitizing every user input.

This led me to change my direction and search for the most fun vulnerabilities – logic flaws.

The logic flaws logic
Most people don’t know it, but the fastest way to track down logic issues is to look at things logically. That’s it. Look at a JS function – would you have written the same code? What would you have changed? Why?

Mozilla’s Add-ons site has a collections feature, where users can create a custom collection of their favorite add-ons. That’s pretty cool, since users can invite other users to a role on their collection. How, you ask? By email address, of course!

A user types in the email address of another user, an AJAX request is made to an ‘address resolver’, and the ID of the user who owns that email address is returned.

When the user presses ‘Save Changes’, the just-arrived ID is passed to the server and translated back into the email address, which is displayed next to the user’s username. Pretty weird.

So, if the logic, for some reason, is to translate an email to an ID and then the ID back to an email, we can simply interrupt the process in the middle and replace the generated ID with the ID of another user.
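
Schematically, the vulnerable round-trip looked something like this (a reconstruction of the pattern with made-up names and data, not Mozilla’s actual code):

```python
# Schematic reconstruction of the flawed round-trip (illustrative only).
USERS = {                       # pretend user table: id -> email
    101: "alice@example.com",
    102: "victim@example.com",
}

def resolve_email(email):
    """AJAX 'address resolver': email -> user ID (step 1)."""
    for user_id, addr in USERS.items():
        if addr == email:
            return user_id
    return None

def save_collection_role(submitted_user_id):
    """'Save Changes' handler: ID -> email (step 2). It trusts whatever ID
    arrives, so it happily reveals the email of any existing user."""
    return {"member_email": USERS.get(submitted_user_id)}

# Legitimate flow: the client resolves alice@example.com to 101 and sends 101.
assert resolve_email("alice@example.com") == 101
# Attack: intercept step 2 and send 102 instead; the response discloses
# victim@example.com even though the attacker never knew that address.
print(save_collection_role(102))   # {'member_email': 'victim@example.com'}
```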

The following video presents a proof of concept of this vulnerability, which exposed the email address of any addons.mozilla.org user.

Final Thoughts
It is bad practice to do the same operation twice. If you need something fetched from the server, fetch it once and store it locally (HTML5 localStorage, a cookie, etc.). This simple logic flaw jeopardized hundreds of thousands of users until it was patched by Mozilla.

The patch, as you may have guessed, was to send the email address to the server instead of the ID.

Facebook Invitees Email Address Disclosure

Prologue

When Facebook was just a tiny company with only a few members, it needed a way to get more members.

Today, when you want more visitors to your site, you advertise on Facebook, because everybody is there.

Back then, the main advertising options were manually posting advertisements on popular websites (using Google, for instance), or getting your members to invite their friends via email.

Facebook’s Past Invitation System

When a user joined Facebook in its early days, there was literally nothing to see. Therefore, Facebook asked its members to invite their friends using an email invitation created by the registered user.

The user supplied his friends’ email addresses, and they received an email from Facebook saying ‘Mister X is now on Facebook, you should join too!’.

Fun Part

When I came across this Facebook feature, I immediately started to analyze it.

I thought it would be nice to try to fool people into believing that user Y had invited them to join, although the one who actually did it was user X.

As I kept inviting people over and over again, I noticed something interesting: each invitation to a specific email address contained an invitation ID: ent_cp_id.

When clicking on ‘Invite to Facebook’, a small window pops up and shows the full email address of the invitee.

I wrote down the ent_cp_id of an email address I wanted to invite, and invited it once.

At this point I thought: “OK, I have invited this user, so his ent_cp_id should not be accessible anymore”. But I was wrong. The ent_cp_id was still there. In fact, by simply retransmitting the HTTP request I could invite the same user again.

But the most interesting part of this vulnerability is the fact that any user could see the email address behind any ent_cp_id.

That means anyone who was ever invited to Facebook via email was vulnerable to email address disclosure, because the invitation was never deleted and was accessible to any user. All an attacker had to do was randomly guess ent_cp_ids. As I said, old ent_cp_ids aren’t deleted, so the success rate is very high.

Conclusion

When you are dealing with sensitive information like email addresses, you should always limit the number of times an action can be performed. In addition, it is recommended to wipe any ID that might be linked to that sensitive information, or at least hash-protect it.
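
A small sketch of both ideas (the names and limits are illustrative, not Facebook’s code): cap the number of invitations per address, and hand out an HMAC-protected handle instead of the raw invitation ID:

```python
# Sketch of the two recommendations (illustrative names, not Facebook's code):
# 1. limit how many times the same address can be invited,
# 2. never expose the raw invitation ID; hand out an HMAC-protected handle
#    that cannot be guessed or enumerated.
import hmac
import hashlib

SECRET_KEY = b"server-side-secret"          # never leaves the server
MAX_INVITES_PER_ADDRESS = 3
invite_counts = {}                          # email -> number of invites sent

def protected_handle(invite_id: int) -> str:
    """Derive an opaque, non-enumerable handle from the internal invite ID."""
    return hmac.new(SECRET_KEY, str(invite_id).encode(), hashlib.sha256).hexdigest()

def send_invite(email: str, invite_id: int) -> str:
    count = invite_counts.get(email, 0)
    if count >= MAX_INVITES_PER_ADDRESS:
        raise PermissionError("invite limit reached for this address")
    invite_counts[email] = count + 1
    # Only the HMAC handle is ever sent to the client; guessing sequential
    # IDs no longer maps to real invitations.
    return protected_handle(invite_id)
```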

Facebook quickly solved this issue and awarded a generous bounty.

JSON Escaping Out In The Wild (The 10-Minute XSS)


This case study focuses on the core aspects of JSON handling, specifically its escaping mechanism.

SoundCloud.com, like many others, uses JSON to fetch user-related information from the server.
The idea is simple: the server first sends the client the HTML (and the JavaScript, CSS, images, etc.), and then the JavaScript code uses AJAX to fetch the data from the server.

In this case, the data comes back from the server in JSON format, and the JavaScript code knows how to dissect it and insert the data into the right places on the page.
The returned data is (presumably) escaped by the JSON escaping mechanism, so malicious payloads won’t work.

The main question that should be asked here is: can we exploit the way the browser fetches the data? Can we make it fetch arbitrary HTML or JavaScript code?

Well, the short answer is no.
The long answer (10 minutes after starting this research) is yes.

At first, I decided to focus on the notifications area (https://soundcloud.com/notifications), where I noticed that AJAX requests are executed when scrolling down (to fetch older notifications).

I had previously inserted some payloads in various locations on SoundCloud, and analyzed the way the AJAX call asks for more notifications.
To safely print “bad” characters (‘, ”, <, …), the JSON escaper puts a \ character before each of them.

And so I started thinking.

What if I did the escaping myself? Would the JSON escaper still escape it?

Well, it did. The JSON escaper ‘double-escaped’ payloads that were already escaped. All that was left to do was pre-escape a payload, which resulted in a stored XSS in the notifications area.

JSON escaping must be done properly
The only reason this worked is the developers’ assumption that no one would “pre-escape” their input before submitting it. The JSON mechanism always escaped the input (by prefixing ” and ‘ characters with a backslash). If you expect user input to be output by third-party code, you should:
1. Never allow direct use of HTML/script tags. If you must offer formatting, let users use ‘special characters’ to style their input (such as *bold* instead of <b>).
2. Know the mechanism. The JSON escaper adds a \ before ” or ‘ in the text; therefore, if you accept these characters from the user, neutralize them yourself before they reach that layer (see the sketch below).
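
As a minimal illustration of point 2 (a sketch under my own assumptions, not SoundCloud’s actual stack): neutralize the HTML-sensitive characters at the trust boundary and let a real JSON serializer do the quoting exactly once, so a ‘pre-escaped’ payload comes out inert:

```python
# Minimal sketch (not SoundCloud's actual code): escape once, with the right
# tool for each layer, instead of hand-rolling backslash logic.
import json
import html

def store_notification_text(user_input: str) -> str:
    # Neutralize HTML-sensitive characters at the trust boundary, so a
    # payload that arrives "pre-escaped" is still inert when rendered.
    return html.escape(user_input, quote=True)

def build_notification_response(text: str) -> str:
    # json.dumps handles quoting and backslashes itself; no custom escaper.
    return json.dumps({"notification": text})

payload = '\\"><svg onload=alert(1)>'            # a "pre-escaped" attempt
safe = store_notification_text(payload)
print(build_notification_response(safe))
# The quote and angle brackets come out as &quot; / &lt; / &gt;, and the
# backslash is escaped exactly once by json.dumps.
```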

I’ll use this opportunity to note that SoundCloud’s response team was the fastest I have ever worked with. Take a look at the disclosure timeline.

Disclosure Timeline
13/02/2016 23:00 – Vulnerability found & reported.
14/02/2016 08:00 – Vulnerability confirmed by SoundCloud.
14/02/2016 14:00 – Vulnerability patched and bounty awarded (HoF + Swag).

Arbitrary File Upload From A Different Angle


Today we will discuss arbitrary file uploads, a less common vulnerability, but one of the most powerful.

Why? Because any platform with sane developers will validate the content type and the file extension of any file they interact with.
But today I want to introduce a validation approach that I now advise everyone to use – validate the content itself.
If you are expecting an image to be uploaded, make sure the content actually is an image. If you are expecting an HTML file, sanitize it and make sure no script tags exist. There are plenty of libraries and tools for that.
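
For example, a minimal content check for an image upload might look like this (a sketch using Pillow; the allowed formats and size cap are arbitrary choices of mine):

```python
# Minimal content validation for an "image upload" endpoint (a sketch using
# Pillow; the allowed formats and size cap are arbitrary choices).
import io
from PIL import Image

ALLOWED_FORMATS = {"JPEG", "PNG", "GIF"}
MAX_UPLOAD_BYTES = 5 * 1024 * 1024          # also enforce a size limit

def validate_image(raw_bytes: bytes) -> str:
    if len(raw_bytes) > MAX_UPLOAD_BYTES:
        raise ValueError("file too large")
    try:
        # verify() walks the file structure and raises if it is not a real
        # image, so a renamed .exe or an HTML file will not get through.
        img = Image.open(io.BytesIO(raw_bytes))
        img.verify()
    except Exception as exc:
        raise ValueError("content is not a valid image") from exc
    if img.format not in ALLOWED_FORMATS:
        raise ValueError(f"unexpected image format: {img.format}")
    return img.format
```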

That being said, a lot of companies and developers I have come across have said to me: “Why should we make all that effort when we can just host the uploaded files on a CDN (Content Delivery Network)?”
Well, today’s story is just about that.

Freelancer.com was my first interaction with this attack vector. They allow images to be uploaded as profile and cover pictures, DOC and DOCX files to be uploaded as resumes, and much more.

But they also have a nice feature where users can write articles and publish them for others to read.
This form supports image uploads, and those images, as you can imagine, are uploaded to a CDN (https://cdn.f-cdn.com/).
f-cdn.com is, of course, the official CDN of Freelancer Inc.

The developers here probably assumed that since images are uploaded to a CDN, no sanitization or content validation is needed: the form does not support HTML, so XSS is not possible, and the CDN does not allow PHP files (or anything like them) to be executed – only downloaded.

Well, this attitude allowed me to upload a .exe file to the official CDN of Freelancer Inc.

Can you imagine a scenario where a hacker hosts malware on the CDN of a well-known company? I hope you can.
What’s more, there is no size limit – a file of any size can be hosted on the CDN (although it may take time to upload and the connection may eventually be refused).

So remember: using a CDN is not always the whole answer. Yes, XSSes will not affect users (because of the separate domain), and yes, shells and RCEs will not endanger your server directly. But failing to validate the type of files your CDN hosts might cause your company a great loss.
Keep that in mind.

Ancient Purifying

Prologue

One of the first XSSes I ever found was the easiest one you can imagine. There are millions – yes, millions – of websites that sanitize user input, but sanitize it wrong.

Why? Because they’re indifferent, and because they don’t update their sanitizing algorithm according to the latest black-hat/white-hat community discoveries.

Purifying Basics

Basically, user input should never be output as HTML code. As a rule, the server should not accept HTML tags, but if you decide to accept them, you need to sanitize properly. Why? Because a small mistake in the regex you write will let users plant stored or reflected XSSes that will jeopardize your users. And you.

An example of this is an ongoing report of mine.
Tagged.com / Hi5.com is vulnerable to a very simple stored XSS attack, using a very simple payload, which of course I will not disclose until they fix the issue (the vulnerability submission has been waiting for their response since 2015 (!)).
The rule with sophisticated platforms is to KISS (keep it super simple): known XSSes probably won’t work, since a trained security team monitors these systems all the time and the sanitizing algorithm is updated frequently.

3rd-party Purifiers aren’t always the answer

As you already know, I don’t believe your security should depend on third-party software. If you are enough of a “big boy” to store sensitive user data (of any kind), you should also be responsible enough to keep that data safe.
Use your own sanitizing mechanism, based on the simple ‘do not accept HTML/script tags’ rule and on recent community discoveries.
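
Here is a sketch of what such a mechanism could look like, following that rule: escape everything, then re-introduce only a tiny whitelist of formatting markers (*bold* instead of <b>). The pattern and limits are illustrative choices, not a drop-in library:

```python
# Sketch of a deliberately simple purifier: escape everything, then allow a
# tiny whitelist of formatting markers (*bold*) instead of raw HTML tags.
import html
import re

BOLD_PATTERN = re.compile(r"\*([^*<>]{1,200})\*")

def purify(user_input: str) -> str:
    # Step 1: no HTML or script tags survive; every <, >, ", ' and & is escaped.
    escaped = html.escape(user_input, quote=True)
    # Step 2: re-introduce only the formatting we explicitly allow.
    return BOLD_PATTERN.sub(r"<b>\1</b>", escaped)

print(purify("hello *world*"))                 # hello <b>world</b>
print(purify("<script>alert(1)</script>"))     # &lt;script&gt;alert(1)&lt;/script&gt;
```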

Test your mechanism, open-source it

Perfection doesn’t always have its price (sorry, Stella). An input purifier becomes truly solid only with the help of the community. Therefore, allow pentesters to challenge your mechanism when you think it’s mature enough. @soaj1664ashar did exactly that with his version of an input purifier; you are strongly advised to look up his post about it (Google it).

Conclusion

To finish up, I’ll leave you with this outstanding quote, which basically sums up everything we discussed:

“Security is not a product, but a process” -Bruce Schneier.

Happy new year!

When Previewing Becomes Dangerous

Prologue

Previewing is cool. A lot of websites and services offer a ‘preview before posting’ or ‘preview before buying’ option.
But sometimes previewing becomes a dirty business, like in the case study you’re about to hear.

Feature Stretching

Blinksale.com offers an ‘Invoice Preview’ feature before you send an invoice request to another user on their site (or via email). You enter an invoice message in a simple, small textbox. HTML and JavaScript are forbidden and escaped when the message is sent.
Before sending the invoice, you are allowed to preview the message you are about to send to the client. Clicking the preview button opens a pop-up window that shows an example of the invoice.

When I clicked ‘preview’, I noticed that the ‘Enter’ character I inserted was translated into a ‘<br />’ string in the GET message body parameter (in the URL). Changing the input to </script><svg onload=alert(document.domain)></svg> didn’t work. That was the stage where I started to think.

Fun Part

I decided to analyze each GET parameter in the request, and noticed that one of them referred to a template ID.
The template wrapped the message with some nice CSS and images: it changed the background color, the foreground color and the font of the input. Then I figured: “Hmm, the template determines the input’s font. Maybe it also reads the input and applies an XSS filter to it?”

I wondered: what if the template ID pointed to a non-existent template? I changed the ID to some random number, entered the XSS payload again and – bam – it worked.


The XSS filter was set per template, which means it ran only when an existing template ID was supplied. When I supplied a non-existent template ID, there was no XSS filter, no nice CSS or images, but the payload was still rendered, which resulted in a nice reflected XSS.
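
Schematically, the bug boils down to the filter living on the template object (this is a reconstruction of the pattern, not Blinksale’s actual code):

```python
# Schematic reconstruction of the bug (illustrative only): the XSS filter
# lives on the template object, so a missing template means no filter.
import html

TEMPLATES = {
    1: {"css": "blue-theme.css", "filter": html.escape},
    2: {"css": "dark-theme.css", "filter": html.escape},
}

def render_preview_vulnerable(template_id: int, message: str) -> str:
    template = TEMPLATES.get(template_id)
    if template is None:
        # Unknown template: no CSS, no images... and no filter either.
        return f"<div class='preview'>{message}</div>"
    return f"<div class='preview'>{template['filter'](message)}</div>"

def render_preview_fixed(template_id: int, message: str) -> str:
    # The fix: filter the input unconditionally, before any template lookup.
    safe = html.escape(message)
    template = TEMPLATES.get(template_id)
    theme = template["css"] if template else "default.css"
    return f"<div class='preview' data-theme='{theme}'>{safe}</div>"

payload = "</script><svg onload=alert(document.domain)></svg>"
print(render_preview_vulnerable(99999, payload))   # payload reflected as-is
print(render_preview_fixed(99999, payload))        # payload neutralized
```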

Conclusion


Letting only some of a request’s parameters determine how the output is filtered is risky. Always be aware of the impact and importance of every parameter you use, even the smallest one.

Blinksale patched this issue and personally thanked me via email.