Saturday, March 30, 2019

Serious debugging

When I started this blog, the intention was to write about application security. I chose the name "Fix in the wild" as a twist on "Exploit in the wild."
I later realized that I was actually trying to fix quite a few things in the wild. One example was trying to reduce the number of snails in our garden. But the real fight soon started: fixing some wild creatures indoors.
Our house was infested with grey silverfish (Ctenolepisma longicaudata). We saw them frequently in the basement, but they also journeyed through most of the house. More and more often we saw them trapped in drawers and kettles in the kitchen. We had to do something about this before it drove one or more family members crazy.
We started making some traps. Silverfish tend to get trapped in kettles and other objects of metal, glass and plastic with smooth, steep sides. We tried to make some traps where they could climb up, fall in and not get out. But the real problem was finding an effective lure. I must say that we were quite lucky, because one of the first things we tested turned out to work well. Suddenly we had a very effective trap, and best of all it was completely non-toxic!
This was totally different from what I was working on in my day job, but at one point these two worlds actually touched each other:


Now that we were on the right path, countless iterations of improvements to the trap and lure followed. To make a long story short, we got increasingly better results, started a company and designed a trap for mass production. Then we had to figure out designs, a webshop, suppliers, bottles for the lure, accounting, and all the other bits and pieces necessary to launch the product.
We have already taken our first orders and welcome everyone to https://www.silverfish.no. Initially we only take orders from Norway in our webshop; give us some time and we'll see if we can serve more countries. Perhaps I'll even find time to write about security again...

Monday, February 26, 2018

Fighting mixed content with report-uri

On the Internet we see great adoption of and push towards HTTPS. More and more sites are using HTTPS, certificates are getting cheaper and easier to obtain, and browsers are increasingly discouraging the use of HTTP. I want to take part and bring all our clients and users into the good world of HTTPS.

For many sites it is quite straightforward to switch from HTTP to HTTPS: install a certificate, fix some URLs and set up some redirects. Others, like Stack Overflow, have found it to be much more involved. At my job we had a good mix of users on HTTP and HTTPS, as our clients have had the freedom to choose. I want to remove the option for weak security entirely. The first problem is that "fix some URLs" means fixing about a million URLs; the second is that almost all of those URLs are controlled by our clients. The consequence of these URLs referencing content over HTTP is that browsers choke on mixed content once everything is loaded over HTTPS: the page loads over HTTPS but requests content over HTTP. The result can be missing security indicators in the browser, or blocked scripts and style sheets, which quickly leads to a really bad user experience.
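One way to find those million URLs without breaking anything for the users is to send a Content-Security-Policy-Report-Only header with a report-uri directive: the browser keeps loading the page as before, but posts a report for every resource that is not fetched over HTTPS. Here is a minimal sketch (a hypothetical Flask app with a /csp-report endpoint, not our production setup) of how that could look:

    # Minimal sketch: report, but don't block, sub-resources loaded over plain HTTP.
    # The policy and the /csp-report endpoint are illustrative only.
    from flask import Flask, request

    app = Flask(__name__)

    @app.after_request
    def add_csp_report_only(response):
        # Report-Only means the user sees no difference; the browser just POSTs
        # a violation report whenever a page pulls in content that is not HTTPS.
        response.headers["Content-Security-Policy-Report-Only"] = (
            "default-src https: 'unsafe-inline' 'unsafe-eval' data:; "
            "report-uri /csp-report"
        )
        return response

    @app.route("/csp-report", methods=["POST"])
    def csp_report():
        # Browsers send the violation as JSON (content type application/csp-report).
        report = request.get_json(force=True, silent=True) or {}
        app.logger.warning("CSP violation: %s", report.get("csp-report", report))
        return "", 204

Collect these reports for a while and you get a list of the offending URLs, and of the clients you need to nudge, before flipping the switch to HTTPS only.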

Thursday, June 15, 2017

Lessons and advice from my talk at NDC Oslo 2017

I'm speaking at NDC Oslo 2017 right now. If you are interested in the lessons and advice I present in my talk, I have gathered them here:

  1. Always keep your third party software up to date.
  2. When passwords are posted, always do a redirect. Even if the password is wrong.
  3. Don’t leak information.
  4. Apply authorization to all non-public functions.
  5. Always apply the HttpOnly and Secure flags on cookies if possible (illustrated in the sketch after this list).
  6. Renew tokens on login and make them sufficiently random.
  7. Passwords are hard! Check best practices for storing, changing, resetting, remember-me functionality, etc.
  8. Check authentication on every page after login.
  9. Apply anti-CSRF tokens or similar measures when forms are posted.
  10. Prevent XSS by
    1. Validating input
    2. Output-encoding all user input for the correct context
    3. Using a Content-Security-Policy header if possible
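To make items 5 and 10 a bit more concrete, here is a minimal sketch. The framework (Flask), cookie name, values and policy are placeholders for illustration only, not part of the talk:

    # Minimal sketch of items 5 and 10: cookie flags, output encoding and CSP.
    # Framework and values are illustrative placeholders.
    from flask import Flask, make_response
    from markupsafe import escape  # output encoding for the HTML context

    app = Flask(__name__)

    @app.route("/hello/<name>")
    def hello(name):
        # Output-encode user input for the context it is written into (item 10).
        resp = make_response(f"<p>Hello, {escape(name)}!</p>")
        # HttpOnly keeps scripts away from the cookie, Secure keeps it off plain HTTP (item 5).
        resp.set_cookie("session", "opaque-random-value",
                        httponly=True, secure=True, samesite="Lax")
        # A restrictive Content-Security-Policy header as an extra layer against XSS (item 10).
        resp.headers["Content-Security-Policy"] = "default-src 'self'"
        return resp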


My advice on how to prevent disaster:

  • Educate your developers and testers
  • Educate support on response and escalations
  • Use security testing tools: Fiddler, ZAP or Burp, sqlmap or Havij, code analysis, third-party version scanners, etc.
  • Security test new and existing features
  • Log alerts, detect suspicious activity
  • State how vulnerabilities can be reported
  • External test or audit
  • Bug bounty program

Final words: Hack your own systems. Assume that users are evil; it just takes one evil user. Know your enemy. Know the tools and techniques hackers use and what they are looking for. Find and fix the vulnerabilities before someone else does.

Monday, May 8, 2017

Who's looking down on you?

There is fierce competition in the grocery market, with a never-ending fight to cut costs and attract customers. In response, my local grocery store installed self-checkout registers some time ago. If there is a queue at the manned checkouts, self service becomes a really attractive option. The other day I tried it but failed when I wanted to buy a single chili. It was just too light for the weight to register, even in a bag. But I wouldn't be writing about it here if it were just about bad usability.

This is what it looks like:

After a few awareness campaigns, people have learnt that they should protect their PIN from other people. The self-checkout area is not a place where someone can easily hang around and watch you type your PIN without looking suspicious. But they don't need to: the design makes it all too easy to hide a camera. Here is what it looks like if you bend down:


And this is what it looks like if you look up from the buttons of the payment terminal:
I think they have tidied this up; when I looked at this earlier there were a lot more cables here. But there is still plenty of space for a camera and perhaps even communication devices. Criminals could even leverage the free customer WiFi.

With the PIN recorded, criminals only need the card itself. That would require some pickpocketing or bag snatching before heading over to the nearest ATM. It could turn into an expensive trip to the grocery store, largely due to a design made without security in mind.

Wednesday, February 22, 2017

Authorization testing on steroids

A shared environment containing private and confidential information requires excellent access control. To verify that the access control works as it should, it has to be tested. But testing authorization isn't easy:

  • There is very limited tool support. A tool would need to be trained to understand how it should operate and what the logic of each page is. In most cases this training would eliminate the savings of using a tool.
  • Applications are different, and methods and techniques may need to adapt for every case.
  • There is very limited course coverage. I have been to a few courses on application security/ethical hacking/penetration testing, and none of them have really covered the topic. It is probably avoided because it is hard to teach, due to the previous two reasons.
  • It is complex and time consuming. If you have four actions on an object (read, write, update, delete), as a minimum you will need to check all instances where the user should be denied access. If the user can access an object in different contexts, you will need to repeat all the checks for every context.
  • It can require a lot of request modification. Sometimes it is just about incrementing a number. Other times it requires copying GUIDs or complex structures from another session. And sometimes it is even more complex and the solution is to copy cookies and relevant headers between sessions.

So even though "Broken Access Control" is number four on the OWASP Top 10 list (2017), I think we're often not doing what we should to prevent it. The vulnerability ranks highly because it is prevalent and because the consequences of getting it wrong are devastating: it lets someone without permission access, modify or delete other people's data.

What is authorization testing?

Basically, we need to check that users only see what they are supposed to see. In simple terms, it is a test to ensure we are blocked from seeing or doing anything we don't have permission to see or do.

For example, let's say we have user A who is an admin. Being an admin, user A has access to everything under https://example.com/admin. User A should also be able to get a list of all objects when accessing https://example.com/objects, as well as access all individual objects https://example.com/objects/1, https://example.com/objects/2, https://example.com/objects/72324.

User B on the other hand should not be able to access the admin section, and when accessing https://example.com/objects only the few objects he or she has permission to view should be visible. If user B only has permission to view object 524, then https://example.com/objects/524 should return the object, while https://example.com/objects/523 should not return any objects or grant access.

In some cases, such as the examples above, we can perform authorization testing on GET requests just by working in the address bar of the browser. As user B, access https://example.com/admin and see what happens, then check what https://example.com/objects/523 returns. You should also check whether the response for https://example.com/objects/523523523 differs from the previous one, to see if unauthorized and non-existing objects are handled differently.
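If you want to repeat these address-bar checks regularly, they are easy to script. The sketch below uses the Python requests library; the cookie name and value are placeholders you would copy from user B's real session, and what "denied" looks like (401, 403, 404 or a redirect) depends on the application:

    # Minimal sketch of the manual checks above, scripted with the requests library.
    # The cookie name and value are placeholders for real session values.
    import requests

    BASE = "https://example.com"

    user_b = requests.Session()
    user_b.cookies.set("session", "<cookie value copied from user B>")

    def check_denied(session, url):
        # Print status and body size; what "denied" looks like depends on the app.
        r = session.get(url, allow_redirects=False)
        print(url, "->", r.status_code, len(r.content), "bytes")

    # User B must not reach the admin section or objects belonging to others.
    check_denied(user_b, BASE + "/admin")
    check_denied(user_b, BASE + "/objects/523")
    # Compare with an object that should not exist at all.
    check_denied(user_b, BASE + "/objects/523523523")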

But generally you need a web proxy. Fiddler is my favorite; it is free and you can download it here. It lets you break on requests to change anything in the request before it is sent to the server, and it also lets you replay old requests and modify them. I will divide the changes into three categories:
  1. IDs of objects in the URL or request parameters.
  2. Authentication, such as cookies or access tokens.
  3. Everything else, such as the HTTP verb, referrer, content-type, etc.
Category 1 was discussed above. Category 3 is probably the least likely to return anything interesting, apart from the HTTP verbs, but most of the verbs can be tested in category 2.

Impersonation

To test authentication changes (category 2), you first need to identify all the authentication headers. These could be cookies, X-Auth-Token, anti-CSRF tokens, session IDs and others. You will have to copy one or more of these into every request you are testing. What you are doing is transferring all the information that identifies the user, so the server will treat the request as coming from the user whose values you copied.

Replacing authentication and Anti-CSRF tokens is a lot of work, and if you get it wrong your test may return a false negative. So we should automate it.

In Fiddler, go to the FiddlerScript tab. If you haven't enabled it already, you can download it.
Inside the Handlers class, insert the following:
    // Adds a "Toggle user" entry to Fiddler's Rules menu.
    public static RulesOption("&Toggle user")
    var m_replace: boolean = false;

    // Headers to overwrite with the other user's values. Blank values are skipped.
    static var replaceUserHeaders = {
        'Cookie' : '',
        'X-CSRF-Token' : ''
    };

Then locate the line
static function OnBeforeRequest(oSession: Session) {
Right below you can paste these lines:
        // When "Toggle user" is on, overwrite each configured header
        // with the value set in replaceUserHeaders (blank values are skipped).
        if (m_replace){
            for (var key in replaceUserHeaders)
            {
                var value = replaceUserHeaders[key];
                if (null != value && value != ""){
                    oSession.oRequest.headers.Remove(key);
                    oSession.oRequest.headers.Add(key, value);
                }
            }
        }

Now you can have Fiddler swap authentication tokens for you. If only cookies are used for authentication, copy the cookie value from a request made by user B and insert it between the two quotes after 'Cookie' : in the script. If other headers are necessary you can add them as well. If a value is blank, that header is skipped. Save the script.

Now log in as user A and go back to Fiddler. In the menu choose Rules > Toggle user (or hit Alt, R, T). Fiddler will now replace all the headers you asked it to. (Depending on your Fiddler settings it may also do this for all other requests passing through, which is why you may suddenly see a lot of login pages but be unable to log in to anything else in your browser.)

Switch back to the browser. It was loaded in the context of user A. Click on something and see what happens when the request is sent as user B. Was the result as expected? You'll often have to go back to Fiddler, turn off "Toggle user" and navigate back to a non-error page in the browser before you can proceed to test your next item.

Tip: Use different browsers for users A and B, or use an incognito window in Chrome, which doesn't share cookies with the main instance. Don't log out as either A or B, since both sessions must be active at the same time.

Alternatively, you can perform some actions as user A, turn on Toggle user and replay the requests in Fiddler by selecting a request and pressing "r", or by dragging the request to the Composer and clicking Execute.

Introducing steroids

Do you want to go really fast? Let's say you are testing a REST API. First, configure Fiddler to filter out everything you don't need to test, such as js, css and perhaps image files. Then get the authentication headers for user B, log in as user A and perform a number of actions in the application. You should get a list like this:



Note the "204" responses - these are  the result of PUT requests, whilst other requests are GET. There may be some more requests that are not interesting; these should be deleted before you proceed. Then you select all the requests and click "r". Fiddler will issue all requests again, swap the authentication headers and you will see the result. Let's just hope your session hasn't expired.


Here are my results. The previous requests are selected and have a blue background. The first new request loads the single-page application and the second loads a list of available items. The fourth request only returns current permissions. The rest return 404 as intended. Everything is as it should be.

Results may vary. The HTTP status may be 200, 301, 302, 401, 404 or something else; you will need to know your application. If it is 200 OK, either it was a legitimate request, you broke authorization, or you got an error page. Just looking at the size of the response (the "Body" column) in the request list in Fiddler will give you an indication. If you get 3xx, 4xx or 5xx, the authorization is probably sound (but there may be other issues). In this example GET /surveydesigner/api/surveys?... should return a valid response for both users, since it lists accessible objects, but the returned content should differ: A gets a response of 45,537 bytes while B only gets 914 bytes.
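The same triage can be scripted outside Fiddler if you prefer. This is a sketch using the Python requests library, not part of my Fiddler setup: replay a list of recorded requests with user B's headers and flag anything that comes back 200 with a body large enough to look like real data. The header values, URLs and threshold are placeholders.

    # Sketch: replay recorded requests with user B's headers and flag responses
    # that are 200 with a large body. All values below are placeholders.
    import requests

    USER_B_HEADERS = {
        "Cookie": "<user B cookie>",
        "X-CSRF-Token": "<user B token>",
    }

    # (method, url) pairs captured while clicking around as user A.
    RECORDED = [
        ("GET", "https://example.com/objects/1"),
        ("PUT", "https://example.com/objects/72324"),
    ]

    SUSPICIOUS_BODY_BYTES = 1000  # tune this per application

    for method, url in RECORDED:
        r = requests.request(method, url, headers=USER_B_HEADERS, allow_redirects=False)
        suspicious = (r.status_code == 200 and len(r.content) > SUSPICIOUS_BODY_BYTES)
        print("CHECK" if suspicious else "ok   ", r.status_code,
              str(len(r.content)).rjust(8), "bytes ", method, url)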

There are some cases where this doesn't work. For example I once tested an application that logged me out every time I tried to access something I was not authorized to see.

However, in most cases it works, and in many cases it saves me almost all the hassle of laborious authorization testing. I set it up, click around the application, select all the requests and hit a button. All the red responses indicate sound authorization checks, and I can focus on the few others and check whether they are interesting.