Once Upon A Bit

Today’s case-study is pretty short – you will get its point in a matter of seconds.
We are going to talk about observation, and about the slim difference between a non-bug and a major security issue.

Every security research project requires a respectful amount of attention and discernment. That’s why there are no truly successful automated industrial security testers (XSS scanners aside) – machines cannot recognize every kind of security risk. As a matter of fact, machines cannot feel danger or sense it. There is no single way to conduct security research against a given target. The research parameters differ and vary from target to target. Some research efforts end after a few years, some after a few days, and some after a few minutes. This case-study is of the last type. The described bug was so powerful and efficient (for the attacker) that no further research was needed in order to reach the goal.

A very famous company, which, among all the outstanding things it does, provides security consulting to a few dozen industrial companies and start-ups, asked us to test its “database” resistance. Our goal was to leak the names of the clients from a certain type of data collection – not an SQL-driven one (we still haven’t got the company’s approval to publish its name or the type of the vulnerable data collection).

So, after a few minutes of examining the queries which fetch information from the data collection, I understood that the name of the data row is required in order to perform a certain action on it. If the query issuer (the user asking for information about the row) has permission to see the results of the query – a 200 OK response is returned. If he doesn’t – again – a 200 OK response is returned.

At first I thought that this was correct behavior. Whether the information exists in the data collection or not – the same response is returned.
But then, completely by accident, I opened the response for a non-existent data row in Notepad.

The end of the 200 OK response contained an unfamiliar UTF-8 character – one that shouldn’t have been there. The response to the non-existent data row request was one character longer!

At first, I was confused. Why does the response to a non-existent resource contain a weird character at the end of it?
I was sure there was some JS code that checked the response and branched on that weird character – but there wasn’t.

This was one of the cases where I cannot fully explain the cause of the vulnerability, for a simple reason – I can’t see the code behind it.

The company’s response, besides total shock at our fast result, was that “apparently, when a non-existent resource is requested from the server, a certain sub-process which searches for this resource in the data collection fires up and hits a memory leak. The result of the process should, by rule, be an empty string, but when the memory leak happens, the result is a strange character – the same one that gets appended to the end of the response.”
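What makes this bug so efficient is that it is a classic length oracle: a single stray character turns two “identical” responses into a yes/no answer about the data. Here is a minimal sketch of how an attacker could weaponize it – the endpoint, response format and leaked character are all hypothetical, and the HTTP layer is simulated with canned responses:

```python
# Sketch of the length-oracle enumeration described above. In a real
# attack, fetch_response would be an HTTP request to the query endpoint.
LEAK_CHAR = "\ufffd"  # hypothetical stray character appended for missing rows

def fetch_response(row_name, backend):
    body = "<result></result>"
    if row_name not in backend:
        body += LEAK_CHAR  # the memory-leak artifact
    return body

def enumerate_rows(candidates, backend):
    # Baseline: the length of a response for a row that surely doesn't exist.
    baseline = len(fetch_response("definitely-missing-row", backend))
    # Any response that differs in length reveals an existing row.
    return [name for name in candidates
            if len(fetch_response(name, backend)) != baseline]
```

Against a live target you would compare raw byte lengths of the HTTP bodies; the point is that any deterministic length difference, even a single character, is enough to leak existence.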

Making your code spawn a sub-process, a thread or, god forbid, an external 3rd-party process is a very bad practice.
I know that sometimes it is more convenient and can save a lot of time, but whenever you use another process – you cannot fully predict its results. Remember – it can crash, freeze, or be force-closed by the OS or by some other process (an anti-virus?).
If you must use a thread or sub-process, at least do it responsibly – make sure the OS memory isn’t exhausted, validate the arguments you pass to the process, check the process’s permission to run, and account for its possible result scenarios. Never allow the process to run or execute critical commands based on user input.
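Those precautions fit in a few lines. A hedged Python sketch, where the whitelist and timeout values are arbitrary examples:

```python
import subprocess

ALLOWED_TOOLS = {"echo", "gzip"}  # hypothetical whitelist of helper binaries

def run_helper(tool, user_arg):
    # 1. Never let user input choose which binary runs.
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool {tool!r} is not whitelisted")
    # 2. Pass an argument vector, never a shell string, so user_arg
    #    cannot inject extra commands.
    # 3. Bound the run time and handle each result scenario explicitly.
    try:
        proc = subprocess.run([tool, user_arg], capture_output=True,
                              text=True, timeout=5)
    except subprocess.TimeoutExpired:
        return None  # the child froze: treat as failure, don't guess
    if proc.returncode != 0:
        return None  # it crashed or was killed: same
    return proc.stdout
```

Note how shell metacharacters in the user argument are passed through literally instead of being interpreted.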

Independence Is Not a Dirty Word

Happy new year! This is the first post of 2017.
I hope you guys weren’t as hungover as I was. Seriously.

As promised in the last case-study, today we are going to see a very interesting case-study, with an interesting twist.

Everyone seems to love jQuery. This awesome JavaScript library is everywhere I look — tens of thousands of companies use it in their web applications, and it is super convenient — especially when it comes to AJAX requests — importing jQuery makes our lives a whole lot easier.

A library of libraries

jQuery is not alone. Google and Microsoft (and sometimes Mozilla, Apple, Facebook and Twitter as well) release new JS libraries all the time, and advise developers to use them and import them into their products. For example, if you want to play a QuickTime video, you should import Apple’s QuickTime JS library, and if you want that neat jQuery DatePicker, you should import that library from Google, jQuery or any other mirror.

Count the times I used the word ‘import’ in the last paragraph. Done? 4 times.
Whenever we want to use a certain public JS library, which belongs to a certain company or service, we import it directly from that company.
To be clearer, we simply place a <script> tag on our website, with a ‘src’ attribute pointing to the JS file’s address:

<script src="https://trustworthydomain.com/path/to/js/file.js"></script>

Did you get it? We are loading a script from another website — a 3rd-party website — into our website’s context. We are violating the number one rule of web security — we are trusting another website!

Now, this might sound a little stupid

Why shouldn’t I be able to import a script from a trustworthy company like jQuery, Microsoft or Google? And you are right. Kind of.

When you import a script from a trustworthy company, 90% of the time you will be importing it from the company’s CDN.
CDN stands for Content Delivery Network, and it is (quoting:) “a system of distributed servers (network) that deliver webpages and other Web content to a user based on the geographic locations of the user, the origin of the webpage and a content delivery server.”

It’s a hosting service which provides storage to the company’s clients based on their location and a few other factors. The JS file you are importing is not kept on the company’s official server (again – most of the time).

In this case-study we’ll see how a very popular company fell for this.

This company (which, of course, we cannot reveal) has developed a popular JS library and hosted it on a 3rd party CDN they purchased. That CDN was “smart” and redirected users to the closest server according to the user’s location:

When a request arrived at the main server, the server determined the location of the request’s IP and then routed the request to the nearest server according to the determined location.

Dozens of websites planted a <script src> tag in their source code, pointing to that company’s main CDN server, and it provided their users with the necessary JS library.

An (un)pleasant surprise

After doing some research on the Apache server installed on Server C (Copy C in the image), we concluded that its version was really out of date, and that it was vulnerable to an Arbitrary File Upload attack, which allowed us to upload a file to the CDN (but not to execute code).
Not that serious, at first glance.

But! When we examined the way the file was uploaded (without authorization, of course), we saw that it was possible to use Directory Traversal on the file path. We simply changed the filename to ../../../<company’s domain>/<product name>/<version>/<jsfilename>.js and we were able to replace the company’s legitimate JS file with a malicious one.
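The server-side fix for this class of bug is short: resolve the final path and refuse anything that escapes the upload root. A sketch, with made-up directory names:

```python
import os

def safe_upload_path(upload_root, filename):
    """Return an absolute path for `filename` inside `upload_root`,
    rejecting any ../ traversal out of the root."""
    root = os.path.realpath(upload_root)
    candidate = os.path.realpath(os.path.join(root, filename))
    # After resolution, the candidate must still live under the root.
    if candidate != root and not candidate.startswith(root + os.sep):
        raise ValueError("path traversal attempt blocked")
    return candidate
```

Had the CDN normalized upload paths this way, the `../../../` filename trick above would have been rejected before it ever touched the company’s JS file.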

Basically, we had an XSS on dozens of websites and companies, without conducting even one minute of research against them. The funny thing was that this attack affected only users who were directed to the vulnerable server (Server C).

What can we learn from this (TL;DR)

Never trust 3rd-party websites and services to do your job! I’ve told you that a million times already. Be independent. Be a big boy who can stay home alone.

The safest solution is to manually download the JS library files you use and keep them on your own server.

But what happens when the JS libraries I’m using get updated?

Clearly, with the suggested method there is no easy way to keep track of updates, besides using a package manager like Bower. All you’ll have to do is synchronize your libraries every once in a while.

Before JavaScript package managers were popular, I advised a friend of mine to simply write a cronjob or a Python script that checks the latest version of the JS library available on the company’s server and compares it to the local one. If the versions don’t match — the script sends an email to the tech team.
Big JS libraries don’t get updated that often.
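That watcher fits in a few lines. A sketch — the version-file URL, its format, and the notification hook are all assumptions, not any vendor’s real API:

```python
import urllib.request

def fetch_remote_version(url):
    # Assumes the vendor exposes a small text file holding the version
    # string (a made-up convention for this sketch).
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode().strip()

def needs_update(remote_version, local_version):
    # Exact string comparison is enough: we only need "changed or not".
    return remote_version != local_version

def check(version_url, local_version, notify):
    remote = fetch_remote_version(version_url)
    if needs_update(remote, local_version):
        notify(f"library update: local {local_version} != upstream {remote}")
```

Run it daily from cron, with `notify` wired to whatever emails your tech team.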

So, after the horror movie you’ve just watched, the next thing you are going to do, besides making coffee, is download your 3rd-party libraries to your own server.


API - A P.otentially I.diotic Threat

Happy Hanukkah and Merry Christmas to you all!

The end of the year is always a great time to wrap things up and set goals for the next year. And also to get super-drunk, of course.

In today’s holiday-special case-study we’ll examine a case where an attacker on one website can affect an entirely different website, without accessing the second one at all. But before that, we need to talk a bit about Self XSS.

Basically, Self XSS is a stupid vulnerability. Usually, to be attacked, victims need to paste ‘malicious’ JS code into their browser’s Developer Console (F12), which causes the code to execute in the context of the page the Developer Console is active on.
When Self XSS attacks first appeared, users were persuaded to paste the JS code in order to get a certain ‘hack’ on a website.
To deal with that, to this day Facebook prints an alert in every page’s Developer Console, in order to warn its users:

Because websites can’t prevent users from pasting malicious JS code into the DC (Developer Console), Self XSS (SXSS) vulnerabilities are not considered high-risk vulnerabilities.

But today we’ll approach SXSS from a different angle

We are about to see how websites can innocently mislead victims into pasting ‘malicious’ JS code planted by an attacker.
Some websites allow users to plant HTML or other kinds of code into their own websites or personal blogs. This HTML code is often generated by the websites themselves and handed to the users as-is in a text box. All the users have to do is simply copy the code and paste it in their desired location.

Now, I know this is not the exact definition of an API, but in this case-study, this is my interpretation of it — a 3rd-party website gives another website code which provides a certain service.


A very well-known company, which hasn’t allowed me to disclose its name yet, allowed users to get HTML code containing data from a group the users were part of — as owners or participants.
When pasted into a website, the HTML presented the latest top messages in the group — their titles and the intro of each message’s body.

When ‘malicious’ code was placed in the title, like: "/><img src=x onerror=alert(1)/> – nothing happened on the company’s website – they correctly sanitized and escaped the whole malicious payload.

BUT! When the HTML rendered the latest messages, there was no escaping at all, and suddenly attackers could run malicious JS code from website A in the context of website B, just by planting the code in the title of a group message topic they had created.
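The missing fix on the generating side is one escape call per user-controlled field. A sketch — the widget markup and field names are invented for illustration:

```python
import html

def render_group_widget(messages):
    # Escape every user-controlled field before embedding it in the
    # HTML snippet that gets handed out to third-party sites.
    items = "".join(
        "<li><b>{}</b> {}</li>".format(html.escape(m["title"]),
                                       html.escape(m["intro"]))
        for m in messages)
    return "<ul>" + items + "</ul>"
```

With this in place, the `"/><img src=x onerror=alert(1)/>` title above renders as inert text on website B instead of executing.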

Who’s to blame?

Well, both websites deserve a no-no talk.
Website A is the one who supplied the ‘API’ — HTML code that shows the latest messages from a group hosted on itself — but the API does not escape malicious payloads correctly.
But website B violated the number one rule — never trust a 3rd-party website to do your job. Website B added unknown code (not as an iframe, but as a script) and didn’t state any ground rules — it blindly executed the code it was given.

So how can we trust the untrustworthy?

A certain client asked me about this a few weeks ago.
She said:

“I must use a 3rd-party code which is not an iframe; what can I do to keep my website safe?”

Executing 3rd-party JS code on your website is always bad practice (and I’m not talking, of course, about code like jQuery or JavaScript dependencies, although these days I am writing a very interesting article addressing this exact topic. Stay tuned).
My suggested solution is: simply plant this code in a sandboxed page, and then open an iframe to that page. IT’S THAT SIMPLE!

That way, even if website A does not escape its content as expected, the sandbox, website C, will be the one to take the hit.
This, of course, does not apply to scenarios where website B’s context is a must for website A, but it will work 95% of the time.
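Concretely, the third-party snippet lives on its own throwaway origin (website C), and website B embeds only an iframe pointing there. Here is a sketch of generating that embed markup — URLs and sizes are placeholders:

```python
def sandboxed_embed(sandbox_page_url, width=300, height=200):
    # `sandbox_page_url` is the page on website C hosting the raw
    # 3rd-party snippet. The sandbox attribute with only allow-scripts
    # lets the widget run, while omitting allow-same-origin keeps it in
    # an opaque origin that cannot touch the parent page's DOM, cookies
    # or storage.
    return ('<iframe src="{}" width="{}" height="{}" '
            'sandbox="allow-scripts"></iframe>'
            .format(sandbox_page_url, width, height))
```

The key design choice is what is left out: no `allow-same-origin` means even a fully malicious payload inside the frame is stuck in its box.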

So why have I classified this case-study’s vulnerability as a Self-XSS?

Simply because I believe that when you put 3rd-party code on your website you are Self-XSSing yourself, and all of your users.
The way I see it, Self-XSS is not just a stupid ‘paste-in-the-console’ vulnerability; it’s also using unknown 3rd-party JS code in your own environment.

This article was the last one of 2016.
I want to thank you all for a great year. Please don’t drink too much, and if you do — don’t drink and bug hunt! (Although, truth be told, that 10-minute XSS I found on SoundCloud was after a night out. Oops.)

Happy holidays, and of course — happy & successful new year!

Knocking the IDOR

Sorry for the no-new-posts November; FogMarks has been very busy exploring new fields and worlds. But now — we’re back on, baby!

Today’s case-study is about an old incident (and by “old” I mean 3 months old), but due to recent developments in an active research project on a very well-known company’s popular product, I want to present and explain the huge importance of having an Anti-IDOR mechanism in your application.


Basically, an IDOR (Insecure Direct Object Reference) allows an attacker to mess around with an object that does not belong to him. This could be the private credentials of users (like an email address), a private object that the attacker should not have access to (like a private event), or public information that should simply and rationally not be changed (or viewed) by a 3rd party.

When an attacker is able to mess around with an object that does not belong to him, the consequences can be devastating. I’m not just talking about critical information disclosure that could lead the business to the ground (like Facebook’s recent discoveries), I am also talking about messing around with objects that could lead the attacker to execute code on the server. Don’t be so shocked — it is very much possible.

From an IDOR to RCE

I’m not going to disclose the name of the company or software this serious vulnerability was found in. I am not even going to say that this is a huge company with a QA and security response team that could fill an entire mall. Twice.
But, as you might have already guessed, gaining access to a certain object that you shouldn’t have had access to sometimes allows you to actually run commands on the server.

Although I can’t talk about that specific vulnerability, I am going to reveal my logic of preventing an IDOR from its roots.

Ideally speaking

An IDOR is prevented using an Anti-IDOR Mechanism (AIM). We at FogMarks developed one a few years ago, and, knock on wood, none of our customers has ever dealt with an IDOR problem. Don’t worry, we’re not going to offer to sell it to you. This mechanism was created only for two large customers who shared the same code base. Create your own mechanism with the info down below, jeez!
But seriously, AIM’s main goal is to AIM (got the word play?) the usage of a certain object only at the user who created it, or at the user(s) who have access to it.

This is done by storing that information in a database, especially for sensitive objects that can be targeted from the web clients.
When an object is inserted into the DB, the mechanism generates a unique 32-character identifier for it. This identifier is only used by the server, and it’s called the “SUID” (Server Used ID). In addition, the mechanism issues a 15-digit integer identifier for the client side called, of course, the “CUID” (Client Used ID). The CUID integer is derived from part of the 32-character SUID and part of the object’s details (like its name) using a special algorithm.

The idea of generating two identifiers for the same object is to avoid revealing the identifier of the actual sensitive object to the client side, so no direct access can be made in unexpected parts of the application.

Since object attributes tend to change (their names, for example), the CUID is replaced every once in a while, and the “heart” of the logic is to carefully match the CUID to the SUID.

The user’s permissions also contain a list of nodes holding the SUIDs of the objects that user has access to.

When the user issues a request from the client side — the algorithm tries to derive part of the SUID from the supplied CUID. If it succeeds, it tries to match that part to one of the SUIDs in the user’s permissions collection. If they match, the requesting user gets one-time, limited access to the object. This one-time access is enabled for x minutes and for one static IP, until the next process of matching a CUID to a SUID.
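I obviously can’t publish the real derivation algorithm, but the shape of the mechanism can be sketched with a keyed hash standing in for it — the key, ID lengths and storage are illustrative only:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = b"rotate-me"  # placeholder for a server-side secret

def derive_cuid(suid, object_name):
    # 15-digit client-side ID derived from part of the SUID plus an
    # object attribute; an HMAC stands in for the proprietary algorithm.
    mac = hmac.new(SERVER_KEY, (suid[:16] + object_name).encode(),
                   hashlib.sha256).hexdigest()
    return int(mac, 16) % 10**15

def create_object(name, cuid_table):
    suid = secrets.token_hex(16)          # 32-char server-only ID
    cuid_table[derive_cuid(suid, name)] = suid
    return suid

def may_access(cuid, permitted_suids, cuid_table):
    # Map the client ID back to the server ID, then check it against the
    # SUIDs listed in the requesting user's permissions.
    suid = cuid_table.get(cuid)
    return suid is not None and suid in permitted_suids
```

The client never sees a SUID, so it has nothing to tamper with: a forged CUID simply fails to map back to any permitted server-side object.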

All of this, of course, is managed by only one mechanism — the AIM.
The AIM handles requests in a queue, so when dealing with multiple parallel requests — the AIM might not be the perfect solution (due to the possibility that the object will be changed by two different users at the same time).


In order to keep your platform safe from IDORs, requests to access sensitive objects should be managed only by one mechanism.

You don’t have to implement the exact logic like I did and compile two different identifiers for the same object, but you should definitely manage requests and permissions to private objects in only one place.

Here are some more examples of IDORs found by FogMarks in some very popular companies (and were patched, of course):

Until next time!

The Beauty And The Thoughtful

Are you following FogMarks?

Today’s case-study is based on some recent events and misunderstandings I had with Facebook, and its main goal is to set researchers’ expectations of bug bounty programs. Both sides will be presented, of course, and you will be able to share your opinion in the comments section.

So, back in July I found that it was possible to link Scrapbooks that users had opened for their pets or family members to the users themselves (who relate to the pet or family member), even if the user’s privacy setting for the pet or family member was set to ‘Only me’.

This could be done by any user, even if the user was not friends with the victim. All he had to do was access this Facebook mobile URL: http://m.facebook.com/<SCRAPEBOOK_ID>/

After accessing this URL, the attacker was redirected to another URL: https://m.facebook.com/<CREATOR_FACEBOOK_USER_ID>/scrapbooks/ft.<SCRAPEBOOK_ID>/?_rdr

and the name and the type of the Scrapbook were displayed, even if its privacy setting was set to ‘Only me’ by the creating user (the victim).

12 days after the initial report Facebook said that the issue was ‘not reproducible’, and after my reply I was asked to provide even more information, so I created a full PoC video. Watch it to get the full picture and only then continue to read.

So, as you can see, accessing the supplied URL indeed redirected the attacker to the Scrapbook account that was made by the victim, and revealed the Scrapbook name – which is not private – and the Scrapbook maker’s ID (the FBID of the victim user).

5 days after I sent the PoC video Facebook finally acknowledged it and sent it forward for a fix.

2 months after the acknowledgement I received a mail from Facebook, asking me to confirm the patch. They simply denied unauthorized users access to the vulnerable URL and the redirect to the Scrapbook.

2 days after I confirmed the patch, I got a long mail reply stating:

Thanks for confirming the fix. I’ve discussed this report with the team and unfortunately we’ve determined that this report does not qualify under our program.

Ultimately the risk here was that someone who could guess the FBID of a scrapbook could see the owner of that scrapbook. The “name” here isn’t a private piece of information: it will show up whenever the child or pet is tagged, for example, and so any changes related to that aren’t particularly relevant here. The risk of someone searching such a large space of potential IDs in the hope of finding a particular type of object (rare) in a particular configuration (even rarer) makes it highly implausible that any information would be inadvertently discovered here. Even if you were to look through the space your search would be untargeted and could not recover information about a particular person.

In general we attempt to determine whether or not a report qualifies under our program shortly after the initial report is submitted. In this case we failed to do so, and you have my apologies for that. Please let me know if you have any additional questions here.

Or in short: Thanks for confirming the fix; we now see, after we fixed it, that the impact of the vulnerability could only be achieved after some hard work – iterating over Scrapbook IDs – so the report does not qualify and you won’t be awarded for it.

And now I am asking: how rude is it to hold a vulnerability for 3 months, fix it, and then, only then, after the fix is deployed to production and there is no way to demonstrate another impact aspect, tell the researcher: “Thanks, but no thanks”?

This case-study is here to demonstrate to researchers the range of opinions that exists about every report. In your opinion the vulnerability is severe, a must-fix that should not even be questioned, but in the eyes of the company or the person who validates the vulnerability – it is a feature, not a bug.

I would like to hear your opinion regarding this in the comments section below, on Twitter or by email.

Jumping Over The Fence

“Fences were made to be jumped over” — John Doe

As you might have already guessed (or not), today’s case-study is all about open redirects, and bypassing mechanisms that were made to prevent them. Fun!

I have already shared with you my thoughts about open redirects and their consequences on the website’s general security.
Now it is time to demonstrate how open redirects can be achieved by manipulating an AOR (Anti Open Redirects) mechanism.

A great example of a great AOR is, again, Facebook’s linkshim system.
It basically attaches an access token to every URL posted on Facebook.
That access token is personal, so only the user currently viewing the link can click on it and be redirected to its destination; others can’t. In addition, the linkshim mechanism checks the destination for the user and prevents the user from being redirected to a malicious website. Yes, pretty cool.

Well, until now the sun is shining and we’re all having fun at the beach.

Hand me that beer, would you?
But what happens when the AOR mechanism, the same one we trust so much, is manipulated into acting differently?
That’s exactly what we are going to witness today.

Sadly, most websites that use an AOR manage the links posted to them only if those links point to 3rd-party websites. Which means that if I am on the website x.com and I post a link to website y.com, the link will appear this way on x.com: x.com/out?url=y.com&access_token=1asd2ad6fdC

But if I post a link to the same domain (post x.com/blabla on x.com), the link will appear as-is: x.com/blabla

The reason this happens is that websites usually trust themselves to redirect users within themselves. They think it is ‘safe’ and ‘pointless’ to attach an access token to a link that redirects the user to the same domain. And you could agree with them, as many have. I have heard the argument ‘if a certain page is vulnerable to an open redirect there is no reason to check redirection to it’ countless times. But now I’m about to change that thought once and for all.
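The counter-argument is mechanical: if the token is cheap to attach, attach it to every destination, internal ones included, so a same-domain hop can never become a free launching pad. A hedged sketch of such a per-user link token (the HMAC construction and URL format are mine, not any specific site’s):

```python
import hashlib
import hmac

AOR_KEY = b"per-deployment-secret"  # placeholder server-side key

def token_for(user_id, url):
    # Bind the token to both the viewing user and the exact destination.
    msg = "{}|{}".format(user_id, url).encode()
    return hmac.new(AOR_KEY, msg, hashlib.sha256).hexdigest()[:16]

def outbound_link(user_id, url):
    # Same-domain or not, every link goes through /out with a token.
    return "/out?url={}&t={}".format(url, token_for(user_id, url))

def redirect_allowed(user_id, url, token):
    return hmac.compare_digest(token_for(user_id, url), token)
```

Because the token covers the full destination, an attacker cannot substitute a different URL (or a different victim) and reuse it, which is exactly the gap the same-domain exemption opens.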

A very popular design website

Which, unfortunately, I can’t name; it had this exact vulnerability.
The site allowed “inner links” to be redirected without any access token or validation, but required the referrer to be the same domain. Pretty smart.
But the AOR mechanism allowed any inner link to be redirected, as long as its domain was one of the company’s domains or subdomains.

Using domain enumeration software I was able to detect a subdomain of the website that hosted a mail service for the company’s employees, and that mail service had an open redirect vulnerability on its logout page — even if the user was not logged in, when the logout page was accessed with a ‘redirect after’ GET parameter, the user was redirected to any other page, even on a 3rd-party site. That mail service, by the way, does not consider this behaviour an open redirect vulnerability. Go figure.

Now that I had an open redirect on a subdomain page, how could I make it rain from the main domain?

Well, the answer was quite easy — I simply used the logic flaw of the AOR mechanism to redirect the user to the subdomain and from there to the 3rd-party site.

But there was still a problem — as I said before, the AOR mechanism allowed the link to be redirected to a subdomain, but only if the referrer was the same website.

So what have I done?

I simply redirected the user to the same page, and then he got redirected again.

If the 2 vulnerable pages are:
Vulnerable mail service: http://mail.x.com/out?url=y.com
‘Vulnerable’ page within the domain: http://x.com/redirect?to=mail.x.com/out?url=y.com

And since the second page requires the referrer header to be from x.com, I simply issued the following URL:


That’s it.

Here’s an example of a simple, easy-to-use logic flaw within an AOR mechanism.

As always,


Always use protection

How do you interact with your users’ private information? I mean information like their full name, email address, home address, phone number or any other kind of information that may be important to them, or information they’d rather keep private.

Today’s case-study talks about just that. Parental advisory: Explicit content. Just kidding.

We will talk about the way private objects (I’ll explain my interpretation of the term ‘objects’ later on) should be handled, and then we will see 2 neat examples from vulnerabilities I found on Facebook (which were fixed, of course).

OK, so you’re mature enough to ask your users to trust you with their email address, home address and phone number. If you are smart, you know that this type of information should be transmitted over the wire via HTTPS, but you’ll remember that sometimes it is also good practice to encrypt it yourself.

So your users’ info is properly transmitted and saved in the database, you assume that your DB is immune to SQL injections and other leakage incidents, and you are thinking of cracking open a beer and starting another episode of How I Met Your Mother.
Awesome! But first, I’d like to introduce you to another enemy: the IDOR.

Insecure Direct Object References are your information’s second-worst enemy (after SQLi, of course). An attacker who is able to access other users’ private objects (such as email addresses, phone numbers, etc.) could basically expose all of the private data on the server, without “talking” to the DB directly or running arbitrary code on the server.

This is the time to explain my definition of “private objects”. User objects are not just the user’s phone number, email address, name, gender, sexual orientation or favorite side of the bed. They are also objects that the user creates or owns, like the items in the user’s cart, a group that the user manages or a painting that the user has drawn.

The best way to handle private objects is to define them as private and treat them with the appropriate honor.

If you know that only a certain user (or users) should be able to access a certain object, make sure that only those users’ IDs (or another unique identifier) are able to access and mess with that object.

How will you do so?

Using a Private Object Manager (POM), of course.
The idea is simple: a one-and-only mechanism that fetches or changes information about private objects only if an accepted identifier has been provided.
For example: a class that returns the email address of user ID ‘212’ only if the ID of the user who requested that information is ‘212’.
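A toy sketch of such a manager, with storage and identifiers simplified to in-memory dicts:

```python
class PrivateObjectManager:
    """Single choke-point for reading and updating private objects."""

    def __init__(self):
        self._owner = {}   # obj_id -> owner user id
        self._value = {}   # obj_id -> the private data

    def put(self, obj_id, owner_id, value):
        self._owner[obj_id] = owner_id
        self._value[obj_id] = value

    def get(self, obj_id, requester_id):
        # The only code path to the data: identity is checked every time,
        # regardless of which page or platform version made the request.
        if self._owner.get(obj_id) != requester_id:
            raise PermissionError("requester does not own this object")
        return self._value[obj_id]
```

So the email address stored for user ‘212’ comes back only when the requester is ‘212’; every other caller hits the same PermissionError, no matter which entry point they used.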

Sounds obvious, right?
Before posting this case-study I had a little chat with a colleague about the idea of creating a unique mechanism that handles all requests to private objects.

He said that this is useless

“Because when a request is being made regarding a certain object, it is the job of the session manager to make sure that the currently active session is messing around with an object it has access to.”

But he was wrong. Very wrong.

Everyone knows Facebook events and groups. Everyone is part of a certain group on Facebook, or has received an invitation to a certain event.
Like any other feature of Facebook (and especially after the Cambridge Analytica data scandal), groups and events have different privacy levels. They can be public — then every user can see the event/group name and entire content; private — then every user can see the event/group name but not its content; or secret — then only users who were invited to join the group or participate in the event can see its name and content. A regular Facebook search does not discover the existence of such groups/events.

Almost every object on Facebook has an ID — usually a long number that represents that object in Facebook’s huge database — and so do groups and events.

So how can one determine the name or the content of a secret group or event?

I’ve spent a lot of time on the modern Facebook platform trying to fetch information from secret groups and events I cannot actually see, only by their IDs.
But I couldn’t find any lead to disclose private information from secret objects. On Facebook’s modern platform. Modern.

And that’s when I started to think

Facebook has many versions of its web platform (and even of its mobile one).
Do they all use the same Private Object Manager to access “sensitive” objects like a secret group or event?


Immediately after I started to test the mbasic version of Facebook, I realized that things there work a little differently. OK, a lot differently.

I found 2 vulnerabilities which allowed the name of a secret group or event to be disclosed to any user, regardless of the fact that he was not invited to or in the group/event. The first vulnerability is hereby presented, but the second one is yet to be fully patched (in progress these days):

Always use protection

Seriously, these vulnerabilities would have been prevented if Facebook had implemented a single Private Object Manager across every one of its versions.
The idea of hoping that a session manager will prevent insecure access to an object is ridiculous, simply because some objects are so widely used (like Facebook groups with millions of members) that linking a user session to such an object is highly inefficient (and wrong).

Having a one-and-only filtering mechanism, a “condom”, for accessing the most important objects or details is considered best practice.


And The King Goes Down

Tokens are great. Well, sometimes.

Today’s case-study will discuss the importance of token-manager software.
Well, every site that allows login will normally use a token for each of the ‘critical’ actions it lets users perform. Facebook, for example, automatically adds a token at the end of any link a user provides – even its own links! This mechanism is called ‘Linkshim’, and it is the primary reason why you never hear about Facebook open redirects, CSRFs or clickjacking (yeah, yeah, I know they simply don’t allow iframes to access them; I’ll write a whole case-study about that in the near future).
Facebook’s method is pretty simple – if a link is being added to the page, add a token at the end of it. The token, of course, should allow only the same logged-in user to access the URL, and there should be a token count to restrict the number of times a token can be used (hint: only once).

But what happens when tokens are managed the wrong way?

A very famous security company, which still hasn’t allowed us to publish its name, allowed users to create a team. When a user creates a team, he is its owner – he has the ‘highest’ role, and he basically controls all of the team’s actions and options: he can change the team’s name, invite new people to the team, change the roles of people in the team and so on.

The team offers the following roles: Owner, Administrator and some other minor, non-important roles. Only the owner and the administrators of the team are able to invite new users to it. An invitation can be sent only to a person who is not on the team and does not have an account on the company’s website. When the receiver opens the mail he is redirected to a registration page of the company, and is then added to the team with the role the Owner/Admin set.

When I first looked at the team options I noticed that after the owner or an admin invites other people to the team via email, he can resend the invitation in case the invited user missed it or deleted it by accident. The resend option was a link at the side of each invitation. Clicking the link issued a POST request to a certain ‘Invitation manager’ page and passed it the invitation ID.

That’s where I started thinking. Why pass the invitation ID as is? Why not obfuscate it, or at least use a token for some sort of validation?

Well, that’s where the gold is, baby. Past invitation IDs were not deleted. That means that invitations that had already been accepted were still present in the database, and still accessible.

By changing the passed invitation ID parameter to the ‘first’ invitation ID – the Owner’s – it was possible to resend an invitation to him.
At first I laughed and said, ‘Oh well, how much damage could it do besides spamming the owner a bit?’ But I was wrong. Very wrong.

When the system detected that an invitation to the owner was sent, it removed the owner from his role. But furthermore – remember that I said sending an invitation presents the receiver with a registration page according to his email address? The system also wiped the owner’s account – his private details and, most importantly, his credentials. This caused the owner’s whole account to be blocked. A classic DoS.
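The root of the flaw can be sketched like this (a hypothetical mock-up with invented names and data, since the company’s actual code is unknown): the ‘Invitation manager’ resends any invitation by its raw ID, with no check of who asked, no token, and no cleanup of already-accepted invitations.

```python
# Invented data: invitation ID -> invitee email. Accepted invitations
# (like ID 1, the owner's original one) are never deleted.
INVITATIONS = {1: "owner@example.com", 57: "invitee@example.com"}

def resend_invitation(invitation_id):
    """The flawed endpoint: resends by ID alone, no ownership/state check."""
    email = INVITATIONS.get(invitation_id)
    if email is None:
        return None
    # Resending to an existing member (e.g. the owner) wrongly re-triggers
    # the registration flow and, per the story above, wipes his account.
    return f"invitation re-sent to {email}"
```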

So how can we prevent unwanted actions from being performed on our server? That’s kind of easy.
First, let’s attach an authenticity token to each action. The authenticity token must be generated specifically and individually for each specific user.
Second, like milk and cheese, let’s attach an expiration date to the token. A two-minute expiration is a fair window for our token to be used by the user.
And last, let’s delete used tokens from the accessible-tokens mechanism. A token should be used only once. If a user has a problem with that – generate a few tokens for him.
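These three rules fit in a few lines. Here is a hypothetical token manager (a sketch, not the company’s actual implementation; the two-minute TTL follows the suggestion above):

```python
import hmac
import secrets
import time

class TokenManager:
    TTL = 120  # seconds: the two-minute window suggested above

    def __init__(self):
        self._live = {}  # token -> (user_id, issued_at)

    def issue(self, user_id):
        """Rule 1: a token generated individually for a specific user."""
        token = secrets.token_urlsafe(32)
        self._live[token] = (user_id, time.time())
        return token

    def consume(self, user_id, token):
        """Valid only for the issuing user (rule 1), within TTL (rule 2),
        and exactly once (rule 3: pop removes it on first use)."""
        entry = self._live.pop(token, None)
        if entry is None:
            return False
        owner, issued_at = entry
        return (hmac.compare_digest(str(owner), str(user_id))
                and time.time() - issued_at <= self.TTL)
```

A resend-invitation request carrying a missing, expired, reused or foreign token would simply be rejected.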

In conclusion,
This case-study presented a severe security issue that was discovered in the code of a very famous security company.
The security issue could have been prevented by following three simple principles: 1) attaching a token to each action that is performed by a user; 2) setting a rational expiration time for each token; 3) and most importantly – correctly managing the tokens and deleting used ones.

Opening Open Redirects

A few years ago, when FogMarks was not even a tiny idea or a vision in my head, I used to do casual programming jobs on Fiverr.

One of the jobs/gigs I was asked to do was to cause a user on site x.com to be redirected to Facebook.com and then, without any action on his side, to be redirected to a y.com site. I didn’t realize back then why someone would want that kind of thing. Why not just redirect the user directly to y.com?
I asked the person why he would want to do such a thing. His answer changed the way I (and after that, FogMarks) treated and took care of open redirects. He answered that by forcing a user request to originate from a Facebook URL, the ads engines on y.com pay much, much more, because a popular site like Facebook has redirected the user to y.com.

Until this answer I treated open redirects as simple security issues that can’t cause too much damage. I knew that an innocent user could get a misleading link like “www.innocent.com?redirect_out=bad.com”, but I believed that anyone with a tiny bit of common sense would detect such things. Those kinds of attacks were mostly used by phishing websites to pretend that an evil page is actually hosted by the innocent domain (in this case, innocent.com).

After this long introduction, I want to introduce today’s topic – a simple solution to open redirects. Facebook did it with their Linkshim system, but I want to introduce a much simpler solution that any of you can use.

Side note: who should not adopt this solution? Websites that want their own domain to be present in the Referer header.

Well, the regular ways I have seen to prevent open redirects are: creating a token (like YouTube’s and Facebook’s l.php pages), forbidding URLs that don’t contain the host of the website, and allowing a user to be redirected only via a POST request.

While creating an exit token is considered a good practice, handling and taking care of this whole system is pretty much a headache. You have more values to store in the DB, and you have to check expiration times, IP addresses and much more.

Another approach is to forbid redirection to URLs that don’t contain the host of the website. This is simply wrong. Websites should allow their users to be redirected to other websites, not only to pages within themselves.

Allowing a user to be redirected only after an action she initiated (like a click on a button) isn’t always that convenient – for you or for the user.

The Golden Solution

Honestly, the first thing you’re about to say is: “What? Is this guy crazy?”. But the next thing will be: “Okay, I should give that a try”.

Well, a lot of websites today offer free URL-shortening services. In addition to that, they offer a free, modular & convenient API.
Why don’t we use them?!

Instead of carefully creating an exit token, forbidding outside redirects or requiring them to be initiated POSTly (yes, I have invented this verb!), simply translate any outside URL to a shortened URL that a certain URL-shortening service provides. If you don’t want third-party services to store your information, you can build one of your own using an open-source system.
Allow redirection only to the domain of the shortened URL.
That way you won’t have to worry about open redirects – they will never occur directly from your domain.

Let’s say you have a page called ‘redirect_out.php’ with an r GET parameter that a user can control:
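The original snippet seems to have gone missing here, so here is a hedged stand-in (in Python rather than PHP, with invented names) of such a naive, vulnerable handler:

```python
from urllib.parse import parse_qs

def redirect_out(query_string):
    """Return the (status, Location) pair a naive handler would emit:
    a blind 302 to whatever the user put in the `r` parameter."""
    params = parse_qs(query_string)
    target = params.get("r", [""])[0]
    return 302, target  # open redirect: no validation of `target` at all
```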


Normally, you would also include a token in this request, plus a mechanism to validate its origin and expiration time.
I say: accept the bad.com URL from the user, and automatically translate it to a shorte.nd/XxXxX URL. Then always allow redirection to shorte.nd. A lot of the shortening services follow security guidelines and will deny redirection to ‘bad websites’ better than you will. Trust me.

If you want to strengthen this method even more, you can add a whitelist to the shortening service and configure it to direct requests only to hosts that are on the whitelist.
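Putting the shortener-plus-whitelist idea together, this sketch uses placeholder names throughout – shorte.nd is not a real service, and shorten() stands in for whatever shortening API (or self-hosted system) you pick:

```python
import hashlib
from urllib.parse import urlparse

SHORTENER_HOST = "shorte.nd"                 # placeholder shortener domain
WHITELIST = {"example.com", "partner.org"}   # hypothetical allowed hosts

def shorten(url):
    """Stand-in for a call to a shortening service's API."""
    code = hashlib.sha256(url.encode()).hexdigest()[:7]
    return f"https://{SHORTENER_HOST}/{code}"

def safe_redirect_target(url):
    """Only ever hand the browser a URL on the shortener's domain."""
    host = urlparse(url).hostname or ""
    if host not in WHITELIST:
        return None            # refuse hosts outside the whitelist
    return shorten(url)        # all allowed redirects go via the shortener
```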

Extra: Benefits of using a well known shortening service

As mentioned before, this solution is offered only to developers who don’t want their domain to show up as the referrer. If you’re worried about direct phishing attacks, that’s a different scenario.

By using a well-known shortening service which manages a black or a white list of known “good or bad” domains, you’ll gain twice:

  • Your domain will not be the ‘one to blame’ for redirecting users to ‘bad’ websites (yes, Google knows and checks that too), although you should really care about that anyway.
  • You’ll increase the phishing-protection level of your website. Your users may still try to reach ‘bad’ websites through your links, but the shortening service will deny the redirect, or at least warn those users (and optionally you too).

Wrap up

This idea may sound a bit bizarre at first, but it requires zero development time (when choosing to use a commercial service). The main idea here is to acknowledge that unwanted redirection of a user from one site to another can occur, and that the best way to prevent the worst outcome (the user arriving at bad.com) is to rely on a mechanism that was built to filter exactly that.

Let me know your opinion on that.

How Private Is Your Private Email Address?

After reading some blog posts about Mozilla’s Addons website, I was fascinated by this Python-based platform and decided to focus on it.
The XSS vector led basically nowhere. The folks at Mozilla did an excellent job of properly sanitizing every user input.

This led me to change my direction and search for the most fun vulnerabilities – logic flaws.

The logic
Most people don’t know it, but the fastest way to track logic-based security issues is to get into the mind of the author and try to think from his point of view. That’s it. Look at a JS function – would you write the same code? What would you have changed? Why?

Mozilla’s Addons site has a collections feature, where users can create a custom collection of their favorite addons. That’s pretty cool, since users can invite other users to a role on their collection. How, you ask? By email address, of course!

A user types in the email address of another user, an AJAX request is made to an ‘address resolver’, and the ID of the user who owns this email address is returned.

When the user presses ‘Save Changes’, the just-arrived ID is passed to the server and is then translated back to the email address, which is displayed next to the user’s username. Pretty weird.

So, if the logic, for some reason, is to translate an email to an ID and then the ID back to the email, we can simply interrupt this process in the middle and replace the generated ID with the ID of another user.
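A hypothetical mock-up of that round trip (invented data, not Mozilla’s actual code) shows how swapping the ID in the middle leaks another user’s address:

```python
# Invented user table: id -> email, plus the reverse lookup.
USERS = {1: "victim@example.com", 2: "attacker@example.com"}
EMAILS = {email: uid for uid, email in USERS.items()}

def resolve_email(email):
    """Step 1: the AJAX 'address resolver' (email -> ID)."""
    return EMAILS.get(email)

def save_changes(submitted_id):
    """Step 2: 'Save Changes' blindly maps the ID back to an email."""
    return USERS.get(submitted_id)

# The attacker resolves his own address, then tampers with the ID
# before it is sent back to the server:
my_id = resolve_email("attacker@example.com")
leaked = save_changes(my_id - 1)   # a tampered ID => someone else's email
```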

The following video presents a proof of concept of this vulnerability, that exposed the email address of any of addons.mozilla.org users.

Final Thoughts
It is bad practice to do the same operation twice. If you need something to be fetched from the server, fetch it once and store it locally (HTML5 localStorage, a cookie, etc.). This simple logic flaw jeopardized hundreds of thousands of users until it was patched by Mozilla.

The patch, as you guessed, was to send the email address to the server, instead of sending the ID.