capt8bit's comments

This is common misinformation. Even in this case, OpenBSD did not break the embargo. After protesting, they received the permission of the researcher to publish:

  Note that I wrote and included a suggested diff for OpenBSD already, and that
  at the time the tentative disclosure deadline was around the end of August. As
  a compromise, I allowed them to silently patch the vulnerability.
(https://www.krackattacks.com/#openbsd)


I have used mitmproxy to perform web application pentests for years.

If you want something to create/intercept/edit/tamper/replay requests, this is your tool. If you want to script any of those things, this is still your tool.

However, Burp comes with a lot of bells and whistles that don't make much sense to build into mitmproxy, but that you can script yourself. For example, there is no Intruder, Spider, or Scanner tool. But mitmproxy has an easy-to-use interface for writing scripts that run on every request you make, or on individual requests.
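For a feel of what that scripting interface looks like: a per-request script is just a Python file with a `request` hook that you load with `-s`. The file name and header below are my own invention, not anything from mitmproxy itself; this is only a minimal sketch.

```python
# tag_requests.py -- minimal mitmproxy script sketch.
# Run with: mitmproxy -s tag_requests.py
# (header name is illustrative, chosen for this example)

def request(flow):
    # mitmproxy calls this hook once per client request.
    # flow.request exposes the method, URL, headers, and body for editing.
    flow.request.headers["x-pentest-marker"] = "1"
```

The same file can define `response`, and the hook is free to modify or duplicate the flow before it goes upstream.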

Or, you can just pass all mitmproxy traffic out to Burp and get the best of both worlds.


I found Burp's active scanning feature in the Pro version insanely valuable. So far it has found blind SQL injections, numerous XSS vulns, command injection, and even XXE. I think it would be very hard to script such a comprehensive feature into mitmproxy (that is, Burp Pro with Collaborator servers).

Still, if you're comparing the free version of Burp with mitmproxy, they do seem very similar. I wouldn't know for sure, since I've never used mitmproxy.


I wouldn't bother with the free version of Burp. If that's where you're at, use Fiddler or mitmproxy.

For software developers doing routine integration-test security checks, I think there's probably a lot of value in the scanner. For professional testers, though, I think the scanner does more harm than good: if it's routinely spotting things you don't spot manually, you should revise your technique.


Many thanks to you, cortesi, and the rest of the team.

I hope this doesn't mean that the command-line interface is going to become a second-class interface? The CLI is what originally attracted me to mitmproxy. Its keyboard shortcuts sped up my workflow faster than I was ever able to manage by customizing ZAP or Burp.

I love mitmproxy and use it for all my web application pentests. The flexibility is amazing, and lets me adapt quickly, writing a rewrite rule or plugin for any situation. The regex rules for limits and intercepts are amazing too.

Keep up the good work.


The console interface will always remain a first-class interface. We plan to bring mitmweb up to feature parity with the console, after which we'll make sure that neither tool has capabilities the other doesn't. I'm super excited about the new possibilities the web interface opens up for the project (watch this space), but the command-line tool is wired into my fingers. :)


That's interesting to me. Can you describe your Burp workflow, and how you accomplish the same things in mitmproxy?

One of the things we're doing this year for our clients is selling them on the idea of doing basic security integration testing as part of their normal dev process, which might involve these companies buying copies of Burp for their team. But I could probably be convinced that mitmproxy would work just as well.

(I am, for what it's worth, extremely familiar with Burp, but only casually acquainted with mitmproxy).


> (I am, for what it's worth, extremely familiar with Burp, but only casually acquainted with mitmproxy).

You may be disappointed, since I am the opposite: I only use Burp in rare cases now, having used mitmproxy for so long.

>That's interesting to me. Can you describe your Burp workflow, and how you accomplish the same things in mitmproxy?

You may also be disappointed by how my methodology differs. I work for a shop that performs grey-box (authenticated) web application tests focused on comprehensive manual coverage. My workflow goes something like this:

  - Manually walk the entire authenticated application, or the designated portion, populating all areas and using all application functionality.
  - Analyze the mitmproxy logs to identify all application entry points, and other OWASP areas of interest. (mitmproxy's "Limit" feature makes this fast and easy.)
  - Manually test every endpoint, or other item, by duplicating the request, editing it, and resending it.

This is pretty basic, so I'm sure you see how this could be done with Burp's Repeater and Intruder, which were the main areas I used.

In mitmproxy you can easily duplicate a flow, edit it, and resend it. Then, based on the response, resend it again. That's what I do for API testing; for web application testing it is usually closer to:

  - Stage requests in Hackbar on Firefox and send the request, which goes through mitmproxy.
  - Use mitmproxy for fine-tuning anything, watching the details, and scripting the tedious parts.

> One of the things we're doing this year for our clients is selling them on the idea of doing basic security integration testing as part of their normal dev process, which might involve these companies buying copies of Burp for their team. But I could probably be convinced that mitmproxy would work just as well.

My basic everyday workflow is probably not very helpful in this case, but I think mitmproxy could still be very useful to you. You probably know that mitmproxy can replay a series of requests, and that it has a great scripting API. I think these two features together could meet a lot of your needs. Let me tell you about some of my experiences:

In some cases a customer just can't get their fix quite right, so I need to repeatedly perform a series of tedious steps to retest an issue. So I automate the process: I use mitmproxy to save a file of the steps involved in testing the vulnerability. This may be something like:

  Log in -> Create widget -> edit widget -> destroy widget -> submit note

Then, I can pass this file to mitmdump to be replayed, with sticky cookies enabled, and just watch the output to make sure it is correct.

As far as scripting goes, you may be interested in a case where I wanted to quickly test every input parameter against a list of characters for input validation. The script I wrote ran on every request I made to the site I was testing; if it was a POST request, it would:

  - Duplicate the request once for every POST parameter in the request
  - Inject the list of characters into that POST parameter
  - Run/replay the new request

Hopefully you can make sense of my examples. By replaying requests, or writing simple testing scripts, I think you could automate a lot of the basic security integration testing.
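The duplicate-and-inject steps above can be sketched in plain Python. Inside mitmproxy this logic would live in a request hook operating on duplicated flows, but the core is just rewriting one form parameter at a time; the payload string and function name here are my own, purely illustrative choices:

```python
# Sketch of the duplicate-per-parameter logic described above,
# shown as a standalone function over a form-encoded body.
from urllib.parse import parse_qsl, urlencode

PAYLOAD = "'\"<>&;"  # example characters to test input validation with

def fuzzed_bodies(body):
    """Yield one mutated body per POST parameter, with the
    payload injected into that single parameter only."""
    params = parse_qsl(body, keep_blank_values=True)
    for i, (name, value) in enumerate(params):
        mutated = list(params)
        mutated[i] = (name, value + PAYLOAD)
        yield urlencode(mutated)
```

In the real script, each yielded body would be copied onto a duplicated flow and replayed, and you would watch the responses for validation failures.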

The beautiful thing about mitmproxy is that it has a simple, powerful, scriptable core. Using the building blocks it provides, I can make it do whatever I want it to do. But it may require a little Python scripting.

Let me know if you want additional attempts at clarification.

edited for formatting


This is great.

For what it's worth, the approach you take with web applications is pretty much the same as the one used by all the high-end software security firms (certainly Matasano, iSEC, Leviathan, and Bishop Fox). Out on a limb, I'd say every consultant at every one of those firms gets a copy of Burp.

The walk/filter/replay workflow you're talking about is one Burp is built around --- that's the Proxy History, "Send To Repeater", and "Repeater" features.

Regarding software teams at startups: I totally buy that mitmproxy is more scriptable than Burp (it doesn't hurt that most of the people we're working with in 2017 are Python shops). But I used Intruder a lot when testing, and I'm not sure I'd want to lose that; I think there's a lot of value, for serendipitous finds, in the sort-of-but-not-quite-random fuzzing Burp is good at.


> I hope this doesn't mean that the command-line interface is going to become a second-class interface?

No worries - the command line interface is definitely staying a first-class interface.

We're also planning to mirror the CLI keyboard shortcuts in the web interface (to the extent possible). Some are already in, others are coming. :)


The security researchers I work with, myself included, usually follow the RFPolicy:

http://www.wiretrip.net/p/rfpolicy.html

This responsible disclosure policy was first put together by rain.forest.puppy (one of the first people to discover SQL injection, and one of the founders of the OSVDB). We have had good results with it, and nearly everyone we have disclosed vulnerabilities to has found it more than fair, and motivating. Researchers have found that it also gets results quickly.

By default it requires you to disclose that you are following this policy, disclose the vulnerability, as directed in the document, then give them 5 days to respond. If you have done everything you could to contact them, and they will not respond, then disclose.

However, as others here have been saying, it may take a while to fix this problem. If they do respond, they may want to "negotiate" more than 5 days to fix the issue. That's great. Get some details, set up a reasonable timeline with them, and get a contact's information. Then it's up to you to hold them accountable. Sometimes this means disclosing on the agreed upon deadline, other times it means following up and seeing if more time should be given before disclosing.

The main issue, as you point out, is keeping users/data safe. If the company is unwilling to work with you, not disclosing could put other people at risk, because you didn't stop unsuspecting users from signing up for the service. On the other hand, disclosing without working with the company can unnecessarily put the current users/data at risk.

It's good to have a balance. The RFPolicy has helped me to have that balance when doing responsible disclosure. Give it a look over. It's not too late to use the RFPolicy now.


Thank you for the link to the policy; it seems to be a fair one.

I just read it and it looks like you are misinterpreting it.

> then give them 5 days to respond

> they may want to "negotiate" more than 5 days to fix the issue

"5 working days" is not at all the same as "5 days"; think of public holidays and weekends...


You are absolutely right. Big difference. I did not mean to imply that it was 5 days, regardless of holidays/weekends.

In fact, because we are online more often during holidays, it is during holidays that we most often find vulnerabilities that we need to disclose.

Especially during December, we are much more lenient.


> I did not mean to imply that it was 5 days

My comment was supposed to be an addendum but I failed by nitpicking and questioning your interpretation. I'm sorry.

As a security researcher, may I ask you these questions:

Which channel are you using as a first contact? Would it be enough for me, as a SaaS supplier, to monitor security@myservice.com? I must admit I'm a bit afraid of a cleartext channel for this kind of disclosure. Would you have some recommendations for the receiving end of the vulnerability?


security@ is common, but it's unlikely that I'm going to blindly send an email to an address without knowing it is monitored, except as a last resort.

Having a page on your website that is easily identifiable via Google is probably one of the best options. You can put a PGP key there if you like. You will find that security researchers vary widely in how much they care about securing the communication, so don't be surprised if many don't bother to use it, since it's still your data that is at risk and not theirs. Alternatively, there are bug bounty programs for incentivizing researchers (both to find bugs and to play nice), and those generally work over HTTPS, so the report is encrypted to that extent.

HackerOne recently launched a Directory service for security contacts: https://hackerone.com/blog/wheres-that-security-at I don't think that is the most common way by far, but if you particularly care, you might want to use that.


> My comment was supposed to be an addendum but I failed by nitpicking and questioning your interpretation. I'm sorry.

Well, I appreciate you pointing out where I was vague. You provided, and brought about, some important clarification.

> Which channel are you using as a first contact?

It takes quite a bit of effort to make a nice writeup for an identified vulnerability and to hunt down where to send it. Generally, someone willing to take the time to be nice and send in a detailed report is also willing to look for the best channel to send it through.

I start by looking through the "About" and "Contact" pages of the web application or service that I found the vulnerability on. If they have a reference to a bug tracking system, a system administrator, or a security contact, I send it there. If they are all emails, I usually send a message to all of them, to make sure that someone receives it. (If I don't know who is going to get the message, I am initially vague about the vulnerability and ask for a technical contact to forward the technical information to.) Otherwise, I look at whois information to see if there is a good technical contact. If I still haven't found a good contact, I send a message to all of the emails listed in the RFPolicy. If all of those messages bounce, I send a message to any email address I can find for the domain. And once, I even called a sales line after all of this and explained the situation. They got me in contact with "Bob the website guy" to take care of the issue.

I have never received a bounty for any of these. I just want to do my best to make sure it gets taken care of.

> Would it be enough for me as a SaaS supplier to monitor security@myservice.com?

I think this would be a good backup plan. It's probably safe to add forwards for all the RFPolicy emails.

> I must admit I'm a bit afraid of a cleartext channel for this kind of disclosure. Would you have some recommendations for the receiving end of the vulnerability?

My recommendation would be to make it easy and clear to find how you would prefer to receive notices. If you include your PGP key on your contact page with a message that all security reports should be encrypted, most researchers I know are willing to comply. If you prefer they send it via an HTTPS "Contact" page, say so, and most will see that and use that channel. Just like your SaaS: if you make it intuitive and useful, they will be happy to use it.


"I've seen good documentations, but nothing beats the Arch Wiki. It's the most comprehensive Wiki you could imagine."

I have heard this a number of times from Arch fans, but I think I disagree. While I have often referenced their wonderful wiki for "cook book"-style answers (like how to fix urxvt+tmux interactions), I do not view it the same as good documentation. I would also not really call it "comprehensive". To me, good documentation covers the entire subject as comprehensively as possible, rather than being a "how to" on configuring something to look or behave like another user's installation.

Compare the Arch wiki to the OpenBSD FAQ [1], the FreeBSD Handbook [2], or good man pages [3]. The FAQ and Handbook contain thousands of pages of comprehensive documentation and examples. A well-written man page should be capable of answering most of your questions and letting you determine what actions you want to take.

I prefer comprehensive documentation, or thorough man pages, although I definitely appreciate the usefulness of the "cook book"-style Arch wiki.

[1] http://www.openbsd.org/faq/ [2] https://www.freebsd.org/doc/handbook/ [3] http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man4/...


I completely agree that we need comprehensive documentation; it's irreplaceable when you need to properly configure something. Good manuals are mostly used as a reference.

The Arch Wiki is something different. It guides you, step by step, to your destination, and that's what most users need.


I'd call the Arch wiki comprehensive, but not cohesive. A prerequisite for cohesive documentation is that the system it's documenting is cohesive. The BSDs, as you noted, have cohesiveness down to a science, and their documentation reflects that.

Take as an example the network configuration for FreeBSD [1] and Arch [2]. FreeBSD, because it's a cohesive system, is able to document the One Correct Way of configuring the network.

The Arch instructions are much more "choose your own adventure", but that reflects the array of actual choices that you have in Linux for network config. Should you use systemd-networkd, dhcpcd, netctl, ifplugd, etc? Or maybe some combination? If you need wireless networking [3] which connection manager do you want to use? How is it going to interact with the thing managing your wired networking?

Linux, for better or worse, gives you a very fragmented set of options compared to more cohesive systems like the BSDs. The Arch wiki does a good job of documenting those fragmented options.

[1] https://www.freebsd.org/doc/handbook/config-network-setup.ht... [2] https://wiki.archlinux.org/index.php/Network_configuration [3] https://wiki.archlinux.org/index.php/Wireless_network_config...


The documentation ethic comes, for better or worse, with something of an RTFM ethos. While I am considering Arch for my en route ThinkPad, Ubuntu is still a consideration because of Ask Ubuntu on Stack Exchange. Q&A offers an alternative set of community standards in regard to support.


The Arch Wiki is a bit of a mix. Part documentation, part how-to guide, but it's still one of the best references to understand how stuff works (and how to use it) in *nix in general.

I do agree though, that OpenBSD has by far the best documentation out there.

