Keep Your Friends Close and Your Domains Closer! [Intro]

Stay updated on FogMarks. More to come soon. Pinky-swear.

Ok ok, I know.

(TL;DR: me apologizing for not being around this year. You can skip right ahead to the case-study.)

No post has been made since January 2018. Not even one XSS has been triggered, not one byte of data has been leaked from a DB, and not one line of arbitrary code has been remotely executed. I could go on like that all night.
But listen, before the beating, hear me out first.

FogMarks started as a self-test. Somewhere around November 2015, I worked as a junior security researcher at some company. That place had a terrible work culture: “Juniors are dumb, so they should not be writing code for the production products (i.e. adding features, being involved in development), nor conducting security research. They should first sit down and watch others work”. Honestly, I was never good at watching others and doing nothing.
So, I offered my help and expressed my opinion on a lot of issues and active research that was going on. And well, they didn’t like that.
They complained about me to the “superiors”, and I was (rudely) asked to pretty much mind some other boring-junior-business.

At that time I was fascinated by finding a way to disable the AdBlock Chrome extension remotely (yes, not the nicest thing to do, I know :). I started conducting my own private research in the evenings. I was coming back from work straight to my real work. Time passed, and I indeed found a way to crash AdBlock (on Chrome 47 – gosh, I’m old!). But during that research, which involved a lot of digging, trial & error and tears, I was exposed to just how insecure modern platforms are. I read lots of badly-implemented source code in some very sensitive and widely-used open-source products, and I was shocked by some severe, half-an-hour-to-find security vulnerabilities that were (and still are, and always will be) in the world’s most popular platforms.
I decided to devote my time to helping solve those issues, conducting white-hat security research and, most importantly, sharing my experiences, thoughts, ideas and some of my work methods and ideologies – here.

This is the story of how FogMarks was born. By the way, if we are being completely honest here, the name FogMarks only popped up in my head around February 2016, while driving to the cinema during one of Israel’s heaviest fogs ever. The road had blinking warning marks on it, and at some point I told my girlfriend that I could only see the marks in the fog and I just followed them to safety. Fog Marks.

So what the hell happened in 2018?!

After 2 years of researching, I came across a very interesting development opportunity. Nothing crazy, but a very helpful set of utilities that can ease the lives of a lot of people.
The thing is that developing it took a lot of time and a lot of planning, so I haven’t been active at all since January.

So you missed it and came crawling back, huh?!

Yeah. You got it. Security research is one of the most fun things I’ve done in my career. I had to put it on hold so I could focus on that other project. But this year that project has, I hope, stabilized – so I’m back on, baby!

Keep Your Friends Close and Your Domains Closer!

Edit [30/12/2018]: This part will be published on January 1st, as I hope that company will allow me to disclose its name by then.

Stay tuned!

Edit [01/01/2019]: Post has been published.

API - A P.otentially I.diotic Threat

Happy Hanukkah and Merry Christmas to you all!

The end of the year is always a great time to wrap things up and set goals for the next year. And also to get super-drunk, of course.

In today’s holiday-special case-study we’ll examine a case where an attacker on one website can affect an entirely different website, without accessing the second one at all. But before that, we need to talk a bit about Self XSS.

Basically, Self XSS is a stupid vulnerability. Usually, to be attacked, victims need to paste ‘malicious’ JS code into their browser’s Developer Console (F12), which causes the code to execute in the context of the page the Developer Console is active on.
When Self XSS attacks first appeared, users were persuaded to paste the JS code in order to get a certain ‘hack’ on a website.
To deal with that, to this day Facebook prints an alert in every page’s Developer Console, in order to warn its users:
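A warning of that kind is just styled console output. Here is a minimal sketch of the technique (the wording below is illustrative, not Facebook’s actual text):

```javascript
// Sketch of a self-XSS warning printed to the Developer Console.
// The %c directive tells the console to style the logged text using
// the CSS string passed as the next argument.
const warningTitle = 'Stop!';
const warningBody =
  'This is a browser feature intended for developers. ' +
  'If someone told you to copy-paste something here, it is likely a scam.';

console.log('%c' + warningTitle, 'color: red; font-size: 40px; font-weight: bold;');
console.log('%c' + warningBody, 'font-size: 16px;');
```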

Because websites can’t prevent users from pasting malicious JS code into the DC (Developer Console), Self XSS (SXSS) vulnerabilities are not considered high-risk vulnerabilities.

But today we’ll approach SXSS from a different angle.

We are about to see how websites can innocently mislead victims into pasting ‘malicious’ JS code planted by an attacker.
Some websites allow users to embed HTML or other kinds of code into their own websites or personal blogs. This HTML code is often generated by the websites themselves and handed to the users as-is in a text box. All the users have to do is simply copy the code and paste it in their desired location.

Now, I know this is not the exact definition of an API, but in this case-study, this is my interpretation of it — a 3rd-party website gives another website code which provides a certain service.


A very well-known company, which hasn’t allowed me to disclose its name yet, allowed users to get an HTML code snippet containing data from a group the users were part of — whether they owned it or just participated in it.
When pasted into a website, the HTML presented the latest top messages in the group — their titles and the intro of each message’s body.

When ‘malicious’ code was placed in the title, like "/><img src=x onerror=alert(1)/>, nothing happened on the company’s website – they correctly sanitized and escaped the whole malicious payload.

BUT! When the HTML presented the last messages, there was no escaping at all, and suddenly attackers could run malicious JS code from website A in the context of website B, just by planting the code in the title of a group message topic they had created.
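For reference, escaping of the kind website A applied to titles (but skipped for the embedded message list) can be as small as the helper below. This is a minimal sketch; `escapeHtml` is a hypothetical name, not the company’s code:

```javascript
// Replace the five HTML-significant characters with their entities before
// rendering untrusted text (titles, message intros) into widget markup.
function escapeHtml(untrusted) {
  const entities = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return untrusted.replace(/[&<>"']/g, c => entities[c]);
}

// The payload from above comes out inert:
// escapeHtml('"/><img src=x onerror=alert(1)/>')
//   → '&quot;/&gt;&lt;img src=x onerror=alert(1)/&gt;'
```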

Who’s to blame?

Well, both websites deserve a no-no talk.
Website A is the one that supplied an ‘API’ — HTML code that shows the last messages from a group hosted on itself — but the API does not escape malicious payloads correctly.
But website B violated rule number one — never trust a 3rd-party website to do your job. Website B added unknown code (not as an iframe, but as a script) and didn’t state any ground rules — it blindly executed the code it was given.

So how can we trust the untrustworthy?

A certain client asked me about this a few weeks ago.
She said:

“I must use a 3rd-party code which is not an iframe. What can I do to keep my website safe?”

Executing 3rd-party JS code on your website is always bad practice (and of course I’m not talking about code like jQuery or other JavaScript dependencies, although these days I am writing a very interesting article addressing this exact topic. Stay tuned).
My suggested solution is: simply plant this code in a sandboxed page, and then open an iframe to that page. IT’S THAT SIMPLE!

That way, even if website A does not escape its content as expected, the sandbox – website C – will be the one to take the hit.
This, of course, does not apply to scenarios where website B’s context is a must for website A, but it will work 95% of the time.
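Concretely: website B never inlines website A’s script; it embeds a dedicated sandbox page (website C, a separate origin) in an iframe whose `sandbox` attribute omits `allow-same-origin`, so the frame gets an opaque origin and injected scripts can’t reach B’s cookies or DOM. A minimal sketch, where the helper name and URL are hypothetical:

```javascript
// Build the embed markup website B would use instead of a raw <script> tag.
// sandboxPageUrl points at website C, a throwaway page that hosts the
// third-party code and nothing else.
function buildSandboxedEmbed(sandboxPageUrl) {
  // "allow-scripts" lets the widget run; omitting "allow-same-origin"
  // gives the frame a unique opaque origin, isolating it from our site.
  return `<iframe sandbox="allow-scripts" src="${sandboxPageUrl}"></iframe>`;
}
```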

So why have I classified this case-study’s vulnerability as a Self-XSS?

Simply because I believe that when you put 3rd-party code on your website you are Self-XSSing yourself, and all of your users.
The way I see it, Self-XSS is not just a stupid ‘paste-in-the-console’ vulnerability; it’s also using unknown 3rd-party JS code in your own environment.

This article is the last one of 2018.
I want to thank you all for a great year. Please don’t drink too much, and if you do — don’t drink and bug hunt! (Although, truth be told, that 10-minute XSS I found on SoundCloud was after a night out. Oops.)

Happy holidays, and of course — happy & successful new year!

The Beauty And The Thoughtful

Are you following FogMarks?

Today’s case-study is based on some recent events and misunderstandings I had with Facebook, and its main goal is to set researchers’ expectations of bug bounty programs. Both sides will be presented, of course, and you are welcome to share your opinion in the comments section.

So, back in July I found that it was possible to link Scrapbooks that users had opened for their pets or family members to the users themselves (who relate to the pet or family member), even if the user’s privacy setting for the pet or family member was set to ‘Only me’.

This could be done by any user, even one who was not friends with the victim. All he had to do was access this Facebook mobile URL:<SCRAPEBOOK_ID>/

After accessing this URL, the attacker was redirected to another URL:<CREATOR_FACEBOOK_USER_ID>/scrapbooks/ft.<SCRAPEBOOK_ID>/?_rdr

and the name and type of the Scrapbook were displayed, even if its privacy setting was set to ‘Only me’ by the creating user (the victim).

12 days after the initial report, Facebook said that the issue was ‘not reproducible’, and after my reply I was asked to provide even more information, so I created a full PoC video. Watch it to get the full picture and only then continue reading.

So, as you can see, accessing the supplied URL indeed redirected the attacker to the Scrapbook account that was made by the victim, and revealed the Scrapbook’s name – which is not private – and the Scrapbook maker’s ID (the FBID of the victim user).

5 days after I sent the PoC video, Facebook finally acknowledged the issue and sent it forward for a fix.

2 months after the acknowledgement, I received a mail from Facebook asking me to confirm the patch. They simply prevented unauthorized users from accessing the vulnerable URL and being redirected to the Scrapbook.
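In other words, the fix amounts to an authorization check before the redirect. A sketch of the logic the patch implies (the names here are mine, not Facebook’s):

```javascript
// Decide whether a viewer may be redirected to a scrapbook page.
function canViewScrapbook(scrapbook, viewerId) {
  if (scrapbook.privacy === 'only_me') {
    // 'Only me' scrapbooks resolve only for their creator.
    return viewerId === scrapbook.creatorId;
  }
  // Other privacy levels (friends, public) go through their own checks.
  return true;
}
```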

2 days after I confirmed the patch, I got a long mail reply stating:

Thanks for confirming the fix. I’ve discussed this report with the team and unfortunately we’ve determined that this report does not qualify under our program.

Ultimately the risk here was that someone who could guess the FBID of a scrapbook could see the owner of that scrapbook. The “name” here isn’t a private piece of information: it will show up whenever the child or pet is tagged, for example, and so any changes related to that aren’t particularly relevant here. The risk of someone searching such a large space of potential IDs in the hope of finding a particular type of object (rare) in a particular configuration (even rarer) makes it highly implausible that any information would be inadvertently discovered here. Even if you were to look through the space your search would be untargeted and could not recover information about a particular person.

In general we attempt to determine whether or not a report qualifies under our program shortly after the initial report is submitted. In this case we failed to do so, and you have my apologies for that. Please let me know if you have any additional questions here.

Or in short: thanks for confirming the fix; now that we’ve fixed it, we see that the impact of the vulnerability could only be achieved after some hard work – iterating over Scrapbook IDs – so the report does not qualify and you won’t be awarded for it.

And now I am asking: how rude is it to hold a vulnerability for 3 months, fix it, and then – only then, after the fix is deployed to production and there is no way to demonstrate another aspect of the impact – say to the researcher: “Thanks, but no thanks”?

This case-study is here to show researchers the different opinions that exist about every report. In your opinion the vulnerability is severe – a must-fix that should not even be questioned – but in the eyes of the company or the person who validates the vulnerability, it is a feature, not a bug.

I would like to hear your opinion regarding this in the comments section below, on Twitter or by email.

And The King Goes Down

Tokens are great. Well, sometimes.

Today’s case-study will discuss the importance of a token manager.
Well, every site that allows login will normally use a token for each of the ‘critical’ actions it allows users to perform. Facebook, for example, automatically adds a token at the end of any link a user provides – even their own links! This mechanism is called ‘Linkshim’, and it is the primary reason why you never hear about Facebook open redirects, CSRFs or clickjacking (yeah, yeah, I know they also simply don’t allow iframes to access them; I’ll write a whole case-study about that in the near future).
Facebook’s method is pretty simple – if a link is added to the page, add a token at the end of it. The token, of course, should only allow the same logged-in user to access the URL, and there should be a token count to restrict the number of times a token can be used (hint: only once).

But what happens when tokens are managed the wrong way?

A very famous security company, which still hasn’t allowed us to publish its name, allowed users to create a team. When a user creates a team, he is the owner of the team – he has the ‘highest’ role, and he basically controls all of the team’s actions and options – he can change the team’s name, invite new people to the team, change the roles of people in the team and so on.

The team offers the following roles: Owner, Administrator and some other minor, non-important roles. Only the owner and administrators of the team are able to invite new users to the team. An invitation can only be sent to a person who is not on the team and does not have an account on the company’s website. When the receiver opens the mail, he is redirected to a registration page of the company, and is then added to the team with the role the Owner/Admin set.

When I first looked at the team options, I noticed that after the owner or an admin invites other people to the team via email, he can resend the invitation in case the invited user missed it or deleted it by accident. The resend option was a link at the side of each invitation. Clicking the link created a POST request to a certain ‘invitation manager’ page, and passed it the invitation ID.

That’s where I started thinking. Why pass the invitation ID as-is? Why not obfuscate it, or at least use a token for some sort of validation?

Well, that’s where the gold is, baby. Past invitation IDs were not deleted. That means that invitations that had already been approved were still present in the database, and still accessible.

By changing the passed invitation ID parameter to the Owner’s ‘first’ invitation ID, it was possible to resend an invitation to him.
At first I laughed and said, ‘Oh well, how much damage could it do besides spamming the owner a bit?’. But I was wrong. Very wrong.

When the system detected that an invitation to the owner had been sent, it removed the owner from his role. But furthermore – remember I said that sending an invitation sends the receiver to a registration page based on his email address? The system also wiped the owner’s account – his private details and, most importantly, his credentials. This caused the owner’s whole account to be blocked. A classic DoS.
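The server trusted the invitation ID blindly. Even before adding tokens, a single status check on the resend endpoint would have blocked this. A sketch, under the assumption that invitations carry a status field (the names here are hypothetical):

```javascript
// Only a privileged member may resend, and only invitations that are still
// pending (never ones that were already accepted) may be resent.
function canResendInvitation(invitation, requesterRole) {
  const privileged = requesterRole === 'owner' || requesterRole === 'admin';
  return privileged && invitation.status === 'pending';
}
```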

So how can we prevent unwanted actions from being performed on our server? That’s kind of easy.
First, let’s attach an authenticity token to each action. The authenticity token must be generated specifically and individually for each user.
Second, like milk and cheese, let’s attach an expiration date to the token. A 2-minute expiration is a fair time to allow our token to be used.
And last, let’s delete used tokens from the accessible-tokens mechanism. A token should be used only once. If a user has a problem with that – generate a few tokens for him.

In conclusion,
This case-study presented a severe security issue that was discovered in the code of a very famous security company.
The security issue could have been prevented by following three simple principles: 1) attaching a token to each action performed by a user; 2) setting a reasonable expiration time for each token; 3) and, most importantly, correctly managing the tokens and deleting used ones.