Phishing++ – Chapter II


Hi! Long time, no case study.
I know, I know, this chapter was supposed to be released two weeks ago, but we waited for PayPal's official permission to disclose these two vulnerabilities before even starting to write the case-study (why? Read our About page).

So, finally: PayPal kindly allowed us to disclose two HTMLi/XSS vulnerabilities they had last summer, and that's perfect, because now we can show you a real-life scenario that actually happened.
Before reading this, please make sure you have fully read and understood the previous chapter. It is vital for this final chapter – it's like you cannot start watching Breaking Bad from the second season (although I know a guy who did, because he was too lazy to download the first season. Weirdo.)

In the last chapter we talked about what companies can do to prevent 3rd party malicious phishing websites from using their HTML entities such as images, JS scripts and CSS style-sheets.
Today we are going to talk about the most dangerous & complicated phishing attacks – phishing attacks that occur on the official website.

In the past, phishing was super easy. New internet users didn't understand the importance, or even the existence, of an official domain. They were used to accessing their desired websites via a bookmark, via an email someone sent them and, later on, via Google.
Later on, phishing became less easy. Companies started to warn their users against accessing their site from the wrong sources, like email messages or forum links. They made their users aware of the domain part of the browser's window. And indeed, most users have adapted to these security precautions and now double-check the domain they're viewing before taking any action.

But no one prepared the companies for this new stage of phishing attacks: phishing on the official companies' websites! Imagine that a malicious, deceptive page is inserted into the website under the official company domain.
A malicious file doesn't even have to be uploaded – existing pages on the site can be altered by attackers – using XSSes, HTML injections, header injections or parameter injections – and malicious content will be displayed.

Actually, there is only one TV series about “cyber” security I appreciate: CSI: Cyber. They presented this exact case of HTML injection in the 10th episode of their first season.

Ok, now that we have learned how to swim – let's jump into the middle of the ocean.
PayPal suffered from two XSS/HTML injection vulnerabilities which allowed 3rd-party malicious content to be added to official PayPal pages under the official ‘www.paypal.com’ domain.

This allowed us to create some very nice Phishing templates, such as:

In addition, we were able to actually create a <form> inside a page, which allowed us to send victims' credit card data to a 3rd-party website. Unfortunately, this demo was not allowed to be published.

I even created an online ‘Web Gun’ content spoofer which can inject HTML entities directly into a vulnerable page:

The fix
Actually, there is no easy fix. Hunting down vulnerabilities and paying attention to payloads that are injected into the website's pages – via GET/POST requests or via SQL queries (in the case of a persistent XSS) – is pretty much the best way to handle this threat.
XSS auditors – like the one present in PayPal's case – simply don't work.

Writing this 2-chapter sequel was very fun. Expect similar case-studies in the near future.
But till then,
Cheers!

Cookies And Scream

Whoa, What a summer!
I know, we haven’t been active in the last month – blame that on the heat and on my addiction to the sea. I should really get a swimming pool.

OK, enough talking! The summer is almost over and now it's time to step on the gas pedal full time.
Today’s case-study also discusses proper user-supplied input handling, but with a twist.

I feel like I have talked enough about the importance of properly handling and sanitizing user-supplied input. There are tons of XSS and HTML filters out there, and they do a pretty good job.
But user input isn't always shown on the page or inserted into the DB. In some cases, many popular web platforms store it in a cookie.

PayPal, for example, inserts the value of the GET parameter ‘cmd’ as the value of the cookie ‘navcmd’:

Request:
GET https://cms.paypal.com/cgi-bin/marketingweb?cmd=test HTTP/1.1
Host: cms.paypal.com
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8,he;q=0.6

Response:
HTTP/1.1 404 Not Found
Server: Apache
Cache-Control: must-revalidate, proxy-revalidate, no-cache, no-store
X-Frame-Options: SAMEORIGIN
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Date: Wed, 30 Aug 2017 21:15:04 GMT
Content-Length: 8764
Connection: keep-alive
Set-Cookie: navcmd=test; domain=.paypal.com; path=/; Secure; HttpOnly

There's nothing evil about storing user-supplied input in a cookie; it's actually good practice sometimes, if you don't want to use sessions or a similar mechanism.
A very common use of user-supplied input in a cookie is storing a redirect URL: sometimes you want to remember which page the user came from, or where to redirect him at the end of the process. Keep that in mind.
Before I get to the vulnerability itself, I'll tease you a bit and say that this time, the malicious payload bypassed the XSS & HTML sanitation mechanism.

A very well-known financing company had this exact cookie functionality. User input from some GET parameters was stored in some cookies. For example, the value of the GET parameter ‘redirect_url’ was stored in the cookie ‘return_url’. This cookie was then used by dozens of other pages in order to redirect users to a “thank you” page. An open redirect attack on that parameter was not possible, because the value of the GET parameter ‘redirect_url’ was checked & verified before being added as a cookie.

At first glance – everything looked fine. I read the JS code that was responsible for sanitizing the input and determined that it was doing its job pretty well – no HTML entities or other “bad” characters (like ‘, “) could be reflected – thanks to the encodeURI function that was used intensively.

And then it hit me: encodeURI doesn't encode characters like ; or = – the exact characters that are used when setting a cookie!
So, I sent a GET request to the vulnerable URL, without the ‘return_url’ GET parameter (to prevent collisions):

Request:
GET https://vulnsite.com/action?vulnGETParameter=xyz;return_url=https://fogmarks.com HTTP/1.1
.
.

Response:
HTTP/1.1 200 OK
.
.
Set-Cookie: vulnGETParameter=xyz;return_url=https://fogmarks.com; domain=.vulnsite.com; path=/; Secure; HttpOnly

The result of this, in some cases, was an open redirect in pages that relied on the fact that the value of ‘return_url’ would always be safe.

When you decide to store user input in a cookie, you must know how to treat it well, and you must remember to dispose of it when the time is right. In this case, using the same sanitation mechanism for input that will be shown on the page and for input that will be inserted into a cookie is wrong.
The patch here was simple: instead of encodeURI, encodeURIComponent() was used.
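The root cause is easy to demonstrate in a few lines of Node.js (the payload is the one from the request above):

```javascript
// encodeURI leaves cookie-delimiter characters (';', '=') untouched,
// so an attacker-controlled value can smuggle a second cookie into the
// Set-Cookie header. encodeURIComponent escapes them, which is the fix.
const payload = "xyz;return_url=https://fogmarks.com";

console.log(encodeURI(payload));
// -> xyz;return_url=https://fogmarks.com   (unchanged: ';', '=', ':', '/' all survive)

console.log(encodeURIComponent(payload));
// -> xyz%3Breturn_url%3Dhttps%3A%2F%2Ffogmarks.com   (delimiters percent-encoded)
```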

Happy & chilled autumn folks!

Independence Is Not a Dirty Word


Happy new year! This is the first post of 2017.
I hope you guys weren't too hungover like I was. Seriously.

So, as promised in the last case-study, today we are going to see a very interesting case-study that will make you fire up those 4th of July fireworks, or do some BBQing in the park. Hmmm! Kosher bacon! (NOTE: bacon is not kosher.)

Everyone seems to love jQuery. This awesome JavaScript library seems to be everywhere I look – thousands of companies use it on their websites' client side, and it is super convenient, especially when it comes to AJAX requests – importing jQuery makes our lives a whole lot easier.
jQuery is not alone. Google and Microsoft (and sometimes Mozilla and Apple as well) release new JS libraries all the time, and advise developers to use them and to import them into their products. For example, if you want to play a QuickTime video, you should import Apple's QuickTime JS library, and if you want that neat jQuery DatePicker, you should import that library from Google, jQuery or any other mirror.

Count the times I used the word ‘import’ in the last paragraph. Done? Four times.
Whenever we want to use a certain ‘public’ JS code, which belongs to a certain company or service, we import it directly from them.
To be clearer, we simply put a <script> tag on our website, with a ‘src’ property pointing to the JS file's address:

<script src="http/s://trustworthydomain.com/path/to/js/file.js"></script>

Did you get it? We are loading a script from another website – a 3rd-party website – into our website's context. We are violating the number one rule of web security – we trust another website.

Now, this might sound a little stupid – why shouldn’t I be able to import a script from a trustworthy company like jQuery, Microsoft or Google? And you are basically right.

But, when you are importing a script from a trustworthy company, 90% of the time you will be importing it from the company's CDN.
CDN stands for Content Delivery Network, and it is (quoting:) “a system of distributed servers (network) that deliver webpages and other Web content to a user based on the geographic locations of the user, the origin of the webpage and a content delivery server.”

It's a hosting service which serves the company's clients based on their location and a few other factors. The JS file you are importing is not kept on the company's official server (again – most of the time).

In this case-study we’ll see how a very popular web software company, which of course we cannot reveal yet, fell for this.

This company developed a very popular JS library and hosted it on a 3rd party CDN they purchased. That CDN was kind of ‘smart’ and redirected users to the closest server according to the user’s location:

When a request arrived at the main server, the server determined the location of the IP and then routed the request to the nearest server according to the determined location.

Dozens of websites had planted a <script src> tag in their source code pointing to that company's main CDN server, and it provided their users with the necessary JS libraries every time.

But after doing some research on the Apache server that was running on Server C (Copy C in the image), we concluded that this server was vulnerable to an arbitrary file upload attack, which allowed us to upload a file to the CDN. Not that serious, at first glance.
But! When we examined the way the file was being uploaded (without authorization, of course), we saw that it was possible to use a directory traversal on the file path. We simply changed the filename to ../../../<company’s domain>/<product name>/<version>/<jsfilename>.js and we were able to replace the company's legitimate JS file with a malicious one.

Basically, we had an XSS on dozens of websites and companies, without even researching them. The funny thing was that this attack affected only users who were directed to the vulnerable server (Server C).

What are we learning from this? (TL;DR)
Never trust 3rd-party websites and services to do your job! I have told you that millions of times already! Be independent. Be a big boy who can stay home alone. Simply download the JS file manually and keep it on your server!

But what happens when the JS library I am using gets updated?
Clearly, there is no easy way to keep track of it.
I advised a client of mine to simply write a cronjob or a Python script that checks the latest version of the JS library available on the company's server and compares it to the local one. If the versions differ – the script sends an email to the tech team.
Or you can simply check it manually every once in a while. Big JS libraries don't get updated that often.

So, after the horror movie you have just watched, the next thing you are going to do, besides making coffee, is download your 3rd-party libraries to your server.

Cheers.

JSON Escaping Out In The Wild (The 10-Minutes XSS)


This case-study focuses on the core aspects and utilities of JSON, specifically its escaping method.

SoundCloud.com, like many others, uses JSON to fetch user-related information from the server.
The idea is simple: the server sends the client the HTML (and the JavaScript, CSS, images etc.) first, and then the JavaScript code uses AJAX to fetch the data from the server.

In this case, the data returns from the server in JSON format, and the JavaScript code knows how to dissect it and insert the data into the right places on the page.
The returned data is (probably) escaped by the JSON escaper mechanism, so any malicious payloads won't work.

The main question that should be asked here is: can we exploit the way the browser fetches the data? Can we make it fetch arbitrary HTML or Javascript code?

Well, the short answer is no.
The long answer (10 minutes after starting this research) is yes.

At first, I decided to focus on the notifications area (https://soundcloud.com/notifications), where I noticed that AJAX requests are executed when scrolling down (to fetch old notifications).

I had previously inserted some payloads in various locations on SoundCloud, and analyzed the way the AJAX asks for more notifications.
To safely print “bad” characters (‘, “, <, …), the JSON escaper escapes them by putting a \ char before any “bad” character.

And so I started thinking.

What if I do the escaping myself? Will the JSON escaper still escape it?

Well, it did. The JSON escaper mechanism ‘double-escaped’ payloads that were already escaped. All that was left to do was to pre-escape a payload, which resulted in a stored XSS in the notification area.
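To make the bug concrete, here is a hypothetical reconstruction (not SoundCloud's actual code) of an escaper that skips quotes it believes were already escaped, and why pre-escaping defeats it:

```javascript
// Buggy escaper (hypothetical): it leaves a quote alone when a backslash
// already precedes it, assuming the quote was escaped on a previous pass.
function buggyEscape(s) {
  let out = "";
  for (let i = 0; i < s.length; i++) {
    if (s[i] === '"' && s[i - 1] !== "\\") out += '\\"';
    else out += s[i];
  }
  return out;
}

const preEscaped = '\\"'; // the attacker submits the two characters \ and "

// The buggy escaper passes the pre-escaped payload through untouched...
console.log(buggyEscape(preEscaped) === preEscaped); // true

// ...so once the client parses the JSON string, a raw quote pops out,
// ready to break out of a string or attribute context in the page.
console.log(JSON.parse('"' + buggyEscape(preEscaped) + '"')); // "

// A correct encoder escapes the backslash too, and round-trips safely.
console.log(JSON.parse(JSON.stringify(preEscaped)) === preEscaped); // true
```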

JSON escaping must be done properly
The only reason this worked is the developers' assumption that no one would “pre-escape” their input before submitting it. The JSON mechanism always escaped the input (by adding a \ before “bad” chars). If you are expecting a user's input to be outputted by 3rd-party code, you should:
1. Never allow direct use of HTML/script tags. If you must, allow users to use ‘special chars’ to style their input (such as *bold* instead of <b>).
2. Know the mechanism. JSON wraps text in quote chars. Therefore, if you accept these chars from the user – simply URL-encode them.

I'll use this opportunity to note that SoundCloud's response team was the fastest one I have ever worked with. Take a look at the disclosure timeline.

Disclosure Timeline
13/02/2016 23:00 – Vulnerability found & reported.
14/02/2016 08:00 – Vulnerability confirmed by SoundCloud.
14/02/2016 14:00 – Vulnerability patched and bounty awarded (HoF + Swag).

Ancient Purifying

Prologue

One of the first XSSes I found was the easiest one you can imagine. There are millions – yes, millions – of websites that sanitize user input, but sanitize it wrong.

Why? Because they're indifferent. And because they don't update their sanitizing algorithm according to the latest black/white-hat community discoveries.

Purifying Basics

Basically, user input should never be outputted as HTML code. The server, as a rule, should not accept HTML tags, but if you decide to accept them, you need to sanitize properly. Why? Because a small mistake in the regex you write will let users plant stored or reflected XSSes that will jeopardize your users. And you.
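As a baseline, when no HTML is allowed at all (the safest default), encoding the HTML metacharacters on output is enough – a minimal sketch:

```javascript
// Encode the five HTML metacharacters so user input renders as inert text.
// This is output encoding, not a full sanitizer: it is for the case where
// no HTML tags are accepted at all.
function escapeHtml(s) {
  const map = { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" };
  return s.replace(/[&<>"']/g, c => map[c]);
}

console.log(escapeHtml('<script>alert(1)</script>'));
// -> &lt;script&gt;alert(1)&lt;/script&gt;
```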

An example of this statement is an ongoing report of mine.
Tagged.com / Hi5.com is vulnerable to a very simple stored-XSS attack, using a very simple payload, which of course I will not disclose until they fix the issue (the vulnerability submission has been awaiting their response since 2015 (!)).
The rule with sophisticated platforms is to KISS (keep it super simple). Known XSS payloads probably won't work, since a trained security team monitors these systems all the time, and the sanitizing algorithm is updated frequently.

3rd-party Purifiers aren’t always the answer

As you already know, I don't believe that your security should be protected by 3rd-party software. If you are enough of a “big boy” to store sensitive user data (of any kind), you should also be responsible enough to keep this data safe.
Use your own sanitizing mechanism, based on the simple ‘do not accept HTML/script tags’ rule, and on recent community discoveries.

Test your mechanism, open-source it

Perfection doesn't always have its price (sorry, Stella). A perfect input-purifier mechanism will become one only with the help of the community. Therefore, allow pentesters to challenge your mechanism, when you think it's vital enough. @soaj1664ashar did such a thing with his version of an input purifier. You are strongly advised to look at his post about it (Google it).

Conclusion

To finish up, I'll leave you with this outstanding quote which basically sums up everything we have discussed:

“Security is not a product, but a process” -Bruce Schneier.

Happy new year!

When Previewing Becomes Dangerous

Prologue

Previewing is cool. A lot of websites and services offer a ‘preview before posting’ or ‘preview before buying’ option.
But sometimes previewing becomes a dirty business. Like in this case-study you're about to hear.

Feature Stretching

Blinksale.com offers an ‘Invoice Preview’ feature before you send an invoice request to another user on their website (or via email). You enter an invoice message in a simple, small textbox. HTML and JavaScript code is forbidden and escaped when the message is sent.
Before sending the invoice, you are allowed to preview the message you are about to send to the client. Clicking on the preview button opens a pop-up window which shows an example of the invoice.

When I clicked on ‘preview’ I noticed that the ‘Enter’ char I inserted was translated to a ‘<br />’ text in the GET message-body parameter (in the URL). Changing the input to </script><svg onload=alert(document.domain)></svg> didn't work. That was the stage where I started to think.

Fun Part

I decided to analyze each GET parameter in the request, and noticed that one of them was referring to a template id.
The template was wrapping the message with some nice CSS and some images. It changed the background color, the foreground color and the font of the input. Then I figured: “Hmm, the template determines the input's font. Maybe it also reads the input and has an XSS filter on it?”

I wondered: what if the template id referred to a non-existent template? I changed the id to some random number, entered the XSS payload again and – bam! It worked.


The XSS filter was set per-template, which means it worked only when an existing template id was supplied. When I supplied a non-existent template id – there was no XSS filter, no nice CSS or images, but the payload was still generated, which resulted in a nice reflected XSS.
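A hypothetical reconstruction of the flawed server logic (names and structure are mine, not Blinksale's) shows how easily the fallback path skips the filter:

```javascript
// The filter lives on the template object, so an unknown template id falls
// through to the bare message: unfiltered, unstyled, and exploitable.
function renderPreview(templates, templateId, message) {
  const template = templates[templateId];
  if (!template) return message; // BUG: no template means no XSS filter
  return template.wrap(template.filter(message));
}

// A safer shape: filter first, unconditionally, then look up the template.
function renderPreviewFixed(templates, templateId, message, filter) {
  const safe = filter(message);
  const template = templates[templateId];
  return template ? template.wrap(safe) : safe;
}
```

The design lesson is that security checks should not hang off an optional, attacker-controlled lookup: filter on every code path, then apply the cosmetics.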

Conclusion


Allowing only some of the parameters of a request to affect the output can be risky. Always be aware of the impact and the importance of every parameter you use, even the smallest one.

Blinksale patched this issue and personally thanked me via email.