Gnucitizen.org - Information Security Think Tank

  • A short list of useful links.

    Security

    Blogs

    Zines

    Reference

    Random

    Articles

    Hardware

    Software

    Philosophy

    Programming

  • Let's say that we want to train an ML model to hack web applications. What would that look like in practice? Let's do this thought experiment.

    We first need to define an environment where the agent (the ML model) can operate and essentially learn. In principle, this would necessitate boiling down the process of web hacking into a limited number of inputs that the agent needs to send in some combination and get rewarded for in return (let's think of this from the perspective of reinforcement learning).

    For this thought experiment, let's consider that the input is just three buttons (think of them as a game controller) which the agent can press to manipulate the environment. The agent will smash the buttons, and in return, it may get some reward for finding a vulnerability or getting pretty close to it (novelty-based reward system).

    To make this more interesting, the agent also needs to pick up some environmental heuristics (let's call these sensors). Let's keep it simple and concentrate on the URL only. We start by eliminating the domain and the protocol and focus on the URL path and query. Each part of the path can be split into tokens, and each token encoded into a numeric value from a dictionary. The URL query follows the same transformation: each parameter name is encoded into a numeric value from a dictionary, and each parameter value can be encoded into several features representing its type, length, complexity score, etc.
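
    To make this concrete, here is a minimal sketch of such an encoder. The grow-as-you-go token dictionary, the type feature and the complexity score are illustrative assumptions, not a prescribed design:

    # Minimal sketch of the URL-to-vector encoding described above.
    # The dictionary, feature choices and complexity score are assumptions.
    from urllib.parse import urlsplit, parse_qsl

    token_dict = {}  # grows as new tokens are observed

    def token_id(token):
        # Map each new token to the next free numeric id.
        return token_dict.setdefault(token, len(token_dict) + 1)

    def complexity(value):
        # Crude complexity score: share of non-alphanumeric characters.
        return sum(not c.isalnum() for c in value) / max(len(value), 1)

    def encode_url(url):
        parts = urlsplit(url)                            # discard scheme and host
        vector = []
        for segment in parts.path.strip("/").split("/"):
            vector.append(token_id(segment))             # one id per path token
        for name, value in parse_qsl(parts.query):
            vector.append(token_id(name))                # parameter name id
            vector.append(1 if value.isdigit() else 2)   # rough type feature
            vector.append(len(value))                    # length feature
            vector.append(complexity(value))             # complexity feature
        return vector

    print(encode_url("https://example.com/app/search?q=test&page=2"))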

    Now we have a vector of values, but we are far from done yet. Let's imagine that the 3-button controller is used to perform the following operations: buttons A and B move the cursor left and right, and button C changes the letter under the cursor (rotating forward, i.e. D becomes E). This information also needs to be encoded into the vector, so we need a value for the cursor position.

    Up to this point, we have an environment (the game), the input, the sensors and some reward system, whatever that may be. The next step is to train the model by allowing the agent to operate inside the environment.
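
    To make the moving parts a little more tangible, here is a deliberately naive environment skeleton. The class name, the novelty-only reward and the character encoding are placeholders for illustration, not a working hacking agent:

    # Purely illustrative environment skeleton for the thought experiment.
    # The reward is a placeholder; a real environment would talk to a target.
    import string

    ALPHABET = string.printable.strip()      # letters the agent can rotate through

    class ThreeButtonEnv:
        def __init__(self, seed_query):
            self.buffer = list(seed_query)   # the mutable URL query under the agent's control
            self.cursor = 0
            self.seen = set()                # memory for the novelty-based reward

        def observe(self):
            # Observation: encoded buffer plus the cursor position, as described above.
            return [ALPHABET.index(c) if c in ALPHABET else 0 for c in self.buffer] + [self.cursor]

        def step(self, button):
            if button == "A":                                    # move cursor left
                self.cursor = max(self.cursor - 1, 0)
            elif button == "B":                                  # move cursor right
                self.cursor = min(self.cursor + 1, len(self.buffer) - 1)
            elif button == "C":                                  # rotate the letter under the cursor
                current = self.buffer[self.cursor]
                i = ALPHABET.index(current) if current in ALPHABET else 0
                self.buffer[self.cursor] = ALPHABET[(i + 1) % len(ALPHABET)]
            payload = "".join(self.buffer)
            reward = 1.0 if payload not in self.seen else 0.0    # placeholder novelty reward
            self.seen.add(payload)
            # A real environment would send the payload to the target application
            # here and reward evidence of a vulnerability rather than mere novelty.
            return self.observe(), reward

    env = ThreeButtonEnv("q=test")
    observation, reward = env.step("C")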

    If we let the agent loose, given enough time (infinite monkey theorem), we should start seeing results. In theory, if our model is fit for purpose, the agent will not act entirely at random but will perform actions with some level of intuition encoded in the connections of the neural net. It is also possible to teach the model by providing good and bad examples. In other words, human input can be used to show the agent how to operate in the environment effectively and, as a result, speed up the training process significantly.

    And just like that, we might have an ML model that hacks, limited to this specific problem domain. While it may not beat web application scanners in terms of depth, I believe it has a chance of discovering novel hacking techniques as long as they are related to input validation. Anything that requires more complex interactions, such as opening separate sessions, figuring out out-of-band attacks, etc., would significantly increase the number of parameters and complicate the model unless a more generalised learning environment is invented.

  • This is an extract from a larger piece of work that I have not finished yet, but I thought it might be worth sharing at this point to draw attention to the subject. The subject of Zero Trust has been occupying my thoughts for a long, long time. Only recently have I managed to put this philosophy into a practical context. While I cannot reveal what I am working on, I can share some of the thinking behind it.

    I believe that, fundamentally, Information Security is about trust; whether we are referring to the more intuitive kind of trust, such as when giving access to systems, or to the more convoluted trust models built into the layers of abstraction, from the code running in the underlying microcontrollers and hardware components all the way to the actual software which runs on top of the OS. Trust is the belief in the reliability, truth, or ability of someone or something to demonstrate a certain behavior. In other words, it is about assumptions. Security problems arise when these assumptions are questioned and ultimately broken.

    Thus, the practice of Zero Trust (no trust) is the ultimate tool for thinking in terms of Information Security and the only way to create resilient security systems. Zero Trust Security (ZTS) is not a framework or a tool but a mental model with many practical manifestations of its values and principles. Experience has taught me that organizations built with Zero Trust in mind thrive in the digital world. Organizations that fail to understand and implement Zero Trust fundamentally fail in the long run or incur significant penalties in terms of accumulating technical debt, uncomfortable levels of risk, and other negative side effects. It is essential to point out that Zero Trust is not about creating a culture of mistrust among partners and employees. That is not the goal. Zero Trust is the philosophy of thinking about security and system design from first principles.

    The best way I have found to describe Zero Trust is with the following simple analogy. Let's imagine that we need to set up a brand new company, except that we cannot afford an office and thus we need to work from the local coffee shop's network. Assuming that this network is already hostile, as we don't know who else might be using it, the decisions we make in terms of security and resilience will be very different from those we would make if we were working from a traditional office environment where access is strictly governed. In the first scenario, our assumption is that everything is compromised and thus we need extra assurances. In the second scenario, we base our decisions on the flawed assumption that our design is safe and secure. The problems arise when these basic assumptions are fundamentally broken.

    There are a number of organizations and widespread technologies that are built on top of Zero Trust as a fundamental building block. Needless to say, such entities demonstrate a high degree of resilience with a proven, long-term track record. It is important to state that the ZTS design is not uncommon. Ultimately, any SaaS product exercises ZTS design in the context of the larger ecosystem (The Internet). Google GSuite, Amazon AWS, Slack, and other popular software vendors have their own authentication and authorization systems, completely independent of each other. These products are designed by different companies, with different development teams, under entirely different constraints and objectives, with the assumption that they will exist in a hostile world. An adverse security event occurring at Google GSuite, while it may have some ripple effects in its vicinity, is unlikely to affect Amazon AWS. Surely there is a level of fragility in the overall system, but ultimately systems designed to be used exclusively on The Internet are designed with Zero Trust in mind as far as their public interfaces are concerned.

    There are even some companies that have extended the principles of Zero Trust beyond software security. Amazon is one such organization: it currently develops one of the most advanced and well-engineered cloud platforms the world has seen, and that is mainly due to internalizing ZTS as part of its engineering culture. Amazon AWS came to be due to a fundamental change in the way Amazon started building its software. Jeff Bezos defined the principle of Zero Trust at the start of AWS with the following key points: 1) all teams will expose their data and functions through service interfaces; 2) teams must communicate with each other through these service interfaces; 3) no other communication mechanism is allowed; 4) it doesn’t matter what technology is used; 5) all services must be designed to be extensible. With Zero Trust in mind, Amazon ultimately paved the road to the behemoth that AWS is today. By creating an opaque engineering culture, individual teams in Amazon are ultimately responsible for the security, resilience, and functionality of their own creations.

    With the exception of a few notable examples, few engineering organizations fully internalize the principles of Zero Trust in their technology DNA. Most engineering organizations begin the struggle with too much trust early on, when not enough time is dedicated to planning and high-pace delivery ultimately takes all the energy. While this is an understandable strategy for young, aspiring startups, the technical debt accumulated over time eventually brings the organization to its knees, with slower-than-ever delivery cycles, diminishing innovation, and a security posture beyond acceptable comfort levels.

    I strongly believe that regardless of our level of technology and security maturity, every decision must be tested by removing all assumptions and thinking in terms of first principles. In other words, before we take an action, we must test our assumptions first. Only then, once we face the ultimate and indisputable truth, are we safe to proceed.

    https://www.linkedin.com/pulse/thoughts-zero-trust-petko-d-petkov

  • In episode 4 (0x03) of the cult TV series Mr. Robot, Elliot hacks into Steel Mountain’s data center HVAC system (Heating, Ventilation, and Air Conditioning) by connecting a rogue Raspberry Pi to an exposed network access point. By controlling the temperature of the server rooms, Elliot was theoretically able to pump up the heat and cause the backup tapes to melt, destroying Evil Corp’s ability to recover from a software implant attack meant to encrypt all of their data, rendering their entire business useless overnight and freeing people from the tyranny of credit cards, unfair loans, and other modern-world evils.

    While Mr. Robot is just a TV show, the attacks and techniques scattered around the main plot are considered by many experts to be realistic. Those who are familiar with these attacks may also notice that most of the hacking is physical to a large degree, i.e. it requires some level of proximity to the target. Whether Elliot is hacking into cars using signal replay attacks, breaking into police computers using software Bluetooth keyboards or cloning the security guards’ RFID-based access cards, all attacks have an element of proximity to them. The show highlights the fact that in an increasingly complex physical world where digitization is moving at an extremely fast pace, perfect security is a pipe dream.

    I am sure this is not the answer you were looking for, but let’s not despair. We can rightly assume that perfect security is not possible, but surely we should not make it any easier for our adversaries. How we do that is a matter of philosophy.

    I believe that changing the way we think about devices, networks and systems can influence current and future security strategies and provide long-term returns. My personal mantra is Zero Trust Security. Nothing is trusted, everything is assumed to be compromised. This may sound like a ridiculous proposition but it is, in fact, the basis on which many systems and networks (including The Internet and The Dark Web) are built.

    This is not a widespread mental security model in most enterprises, however. In fact, in many businesses, decisions are based on the assumption of some sort of underlying trust. The simple fact that a user needs to be connected to a specific network in order to access a system automatically implies that there is a trust relationship at the core. All subsequent decisions, such as how to build additional network infrastructure and how to provide access to data and applications, are based on the flawed assumption that preceded them. Over time, the effect of this is layers of unrealistic expectations that can be defeated by the most basic forms of attack.

    Let’s imagine for a second that we don’t work from a dedicated office network but from Starbucks’ free WiFi network. What will change in our design to ensure that our systems are secure? Will we build applications and networks the same way we build them today? How would we govern access? This simple thought exercise can reveal a lot about what sane security design should look like in hostile environments. One thing is for sure: we would not put critical production applications on a hostile network without some guarantees that they are safe and secure, nor would user access be granted purely on the basis of static rules, i.e. RBAC (Role-Based Access Control).

    The Starbucks security model is not a far-fetched example. The fact is that while there is still a need for dedicated corporate networks, there is a continuous demand for more flexibility. Working from home and co-working offices are two recent examples where security teams have no practical say over how things are done. Home and co-working networks should be considered hostile by definition. How do we manage to stay secure in such circumstances? I believe what the security industry calls Zero Trust Security is the key to the answer.

    “Zero Trust Security” is a high-level concept. It is as high-level as “organizational synergy” and other corporate lingo, if you ask me. What does it even mean? From my point of view, it is applying the Starbucks security model I gave as an example, but in practice. It is simply a way of thinking about the problem of security.

    Right now, one way to look at Zero Trust Security (a.k.a. BeyondCorp, as coined by Google) is as a mechanism for challenging the user at the right time without any prior assumptions. For example, imagine that you log into a critical business application for which you need to provide a set of credentials. Upon successful authentication, you are provided with a basic level of access, i.e. you are not yet fully trusted. When you decide to perform a critical operation, you are challenged again (you are elevating privileges), but this time you need to do an identity verification check combined with a push notification to someone else for approval. Other types of operations may require entirely different levels of escalation depending on your personal circumstances, which could relate to the current device, knowledge of information, approvals, physical location, a combination of endpoint security features, etc. Applications should be able to adapt to the users’ ever-changing environmental circumstances and provide adequate security controls in response.
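
    As a purely illustrative sketch of that idea (the signal names, operations and rules below are my own assumptions, not any vendor's implementation), an adaptive challenge decision might look something like this:

    # Hypothetical sketch of a context-aware, zero-trust style challenge decision.
    # The signals, operations and rules are illustrative assumptions only.
    def required_challenges(operation, context):
        challenges = ["password"]                        # baseline: nobody is trusted by default
        if not context.get("managed_device"):            # unknown or personal device
            challenges.append("device_attestation")
        if context.get("location") not in context.get("usual_locations", []):
            challenges.append("mfa_push")                # unusual location: step up
        if operation == "critical":                      # e.g. approving a payment
            challenges.append("identity_verification")
            challenges.append("second_person_approval")  # push notification to someone else
        return challenges

    print(required_challenges("critical", {
        "managed_device": False,
        "location": "coffee-shop-wifi",
        "usual_locations": ["office", "home"],
    }))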

    In a world where the user is constantly challenged depending on their personal circumstances, hardware and software implants are less concerning. While there is still a risk, at least it is somewhat mitigated by the fact that users are no different from each other and by default they have minimal access. Only when it is absolutely required is the user provisioned with temporary access so that they can fulfill a specific task. The decision to grant or deny access is made holistically. While it is still possible for malicious software or devices to impersonate a user, it is significantly harder and more expensive. At the end of the day, it is down to economics. The goal is to make it prohibitively more expensive for attackers.

    Let’s go back to Elliot and FSociety. While Elliot’s attack on Steel Mountain’s HVAC is realistic, it is certainly a product of a much more fundamental problem related to trust. Why is it even possible to influence a critical system by plugging a rogue Raspberry Pi into an insecure network? It certainly helps develop the plot, but it does sound like something that can be easily avoided in real life.

    https://www.linkedin.com/pulse/zero-trust-security-petko-d-petkov/

  • Here is a short tutorial on how to search when you do BBH. Now, you can do a lot with unix pipes and grep/ripgrep and so on, but if you want to operate at scale you might want to look into Elasticsearch. #bugbountytips Here are a couple of commands to get you started:

    First, you need to convert your data to JSON. jq is your friend. The following command runs a curl request and wraps the output into a JSON document:

    C=$(curl -v https://secapps.com 2>&1) jq -n '{contents: env.C}'

    Sending a document to elasticsearch can be done like this:

    curl -XPUT 'http://localhost:9200/curl/_doc/abc' -H 'Content-Type: application/json' -d '{}'

    Stringing it all together should look like this:

    T=https://secapps.com; D=abc; C=$(curl -v "$T" 2>&1) jq -n '{contents: env.C}' | curl -XPUT "http://localhost:9200/curl/_doc/$D" -H 'Content-Type: application/json' -d @-

    In the previous tweet, you control the T and D variables; the rest just does the job. Note that C is passed to jq through the environment for that one command only (hence env.C), so you don't need to export it beforehand. Bash is magic!

    Once you have your docs in Elasticsearch you can search whichever way you want, and with Kibana you can graph them too or set up alerts ;) Now you can call yourself a data scientist.

    Spin up a $5 DO node and play around.

    https://twitter.com/pdp/status/1148617500357222401

  • This is what I do to write successful bug bounty reports that pay out. I used this technique on 3 hidden bug bounty programs. One of them gave me a 2K bounty. For the other two, I don't know. #bugbountytips

    1. Your introduction is like the first chapter of a book. This is your opportunity to show style and creativity. If this part of the report is weak the rest of the content is likely to be skimmed over.

    2. Don't just slam down some technical description and call it a day. Lay out exactly how you found the vulnerability. It is like a good story. What tipped you off to look in this direction? How long did it take you to get there? What obstacles did you have to overcome, and how did you come out on top against all odds?

    3. Don't overstate the impact. I am guilty of this as well. Sometimes things are not as critical as they look but you could provide a measured assessment that will tip the scale your way. It is better to be truthful than annoying.

    4. Remember that it is much easier to make a good first impression than to change the other person's mind. The latter comes at a personal cost. They have to admit they were wrong. You are not going to make friends this way.

    5. Ask for feedback. I started doing this on my most recent reports. There is no other way to know what you did wrong and how you can improve.

    6. Always provide value. This morning I reported an issue I was almost sure would be a dup. The potential reward was 5 digits. I wrote them a nice bug report. It was a dup, but my report still rocks.

    My most recent reports are like chapters of a book (only 3 of them so far). When I wrote them I was thinking that perhaps one day I would bind them all together and create an international bestseller.

    This is also time for self-reflection. You might learn a thing or two about your style or perhaps figure out if it is time to change strategy and direction.

    Remember that there are people on the other side of the computer screen. Communication is an essential skill.

    Btw, I messed up on almost all the points above, forgetting my years of training. The final report was the most important deliverable at the company where @dcuthbert and I used to work. We used to spend a lot of time making sure everything was perfect before we handed over.

    https://twitter.com/pdp/status/1148325597690703872

  • Perhaps the reason you are not finding vulns/bugs is either because your environment is not set up correctly or because your methodology requires improvements. Here are a number of tips to help you with that #bugbountytips

    1. Pivot between multiple exit nodes. Some services apply ACL rules so you will never be able to reach them from your home broadband. Use AWS C9 ;)

    2. If you use separate profiles for Chrome you are doing it wrong. You will be wrestling the browser and the target application at the same time. Don't do this. Use pown cdb launch -t or pown cdb launch -t -P auto to launch Chrome with a pentest-friendly configuration.

    3. Don't stick to the scope. You need to have a bird's-eye view to understand how things work. Don't hack out-of-scope targets, but do take them into consideration.

    4. You will only be as good as your tools. If all you do is Burp, ZAP and the like, you will find the same bugs as your peers. You need to understand that all tools have their own intricacies, and you will miss things if you stick to one method only. Diversify!

    5. Automate as much as you can. Sometimes you are lucky and you get a small window of opportunity. I will let you know about one such bug once it gets triaged - but let's just say it appears I had only a couple of hours to find it.

    6. Either do surface scans or deep dives. Don't do both at the same time. You will get lost and you will miss things. I've made this mistake myself many times.

    7. Read old reports. The older they are the better. Everyone is looking at the most recent Hacktivity reports and will follow the same trails. Some of the coolest research you will hear about at BH this year is based on papers written in 2006.

    8. Have a methodology. When I was pentesting for a boutique consultancy company in London I learned to use a well-developed methodology which I initially hated. I still consider it one of the best methodologies I have ever encountered. Develop your own.

    9. Take your time. Tomorrow you will have better ideas. It hurts when you don't find anything but this is part of the creative process. I don't call it a failure, I call it iteration.

    10. Don't obsess about making the perfect system. Far too many people, including myself, try to make the perfect recon or the perfect automated scanning infrastructure, etc. If other people have built it, capitalize on their work. Solve the unsolved problems.

    This is what I have off the top of my head. I hope it helps.

    https://twitter.com/pdp/status/1147928550307258368

  • GNUCITIZEN is 12 years old more or less. Can you believe it?

    I am still trying to process the information, and while it saddens me a little that we left the blogging scene for the past 8+ years, I am proud of what we have achieved and the kind of legacy we have created. Even after so many successes and countless more failures, it is interesting to trace GC's impact on the security community over the years - from being first to try out different vulnerability disclosure practices to popping calc.exe from PDF documents or hacking hubs - a lot of it more or less started here and took on a life of its own.

    It is almost surreal watching our old videos, and sometimes it is even a bit embarrassing reading some of our old research, blog posts, and the good old comment flame wars. But this is normal. As you grow older and learn more, you also become aware of your personal limitations. The naivety of your younger self is long gone, and what is left is a hardened shell fortified by years of experience.

    But, like any good story, there is a sequel, and while I cannot promise that it will be better than the original, what I am certain of is that it will be packed with the same hardened experience I talk about. It will likely be more measured, less flamboyant and to some extent conservative, but it will also be insightful and perhaps a little naive. Naivety is important, although it may come at some personal cost. To some extent, I believe the security community has forgotten its roots, and what I am hoping to achieve with GC2 is to restore the balance in the universe.

    So are you ready? I am not. But I am ready to dive deep!

  • I am really happy to announce the first release of Proxify. I started writing this tool several years ago but was never able to finish it. The first release (version 1.0) is now available for download on all platforms: Linux, Mac and Windows.

    What is Proxify

    The idea behind Proxify is to create a proxy that is just good at doing proxying. It is the proxy of all proxies, so to say. Proxify is a lightweight, streamlined, concurrent and very efficient proxy utility that is easy to integrate into other tools. There is a real need for such tools because proxies are quite complex and not trivial to write, even if you choose a high-level language such as Java, Python or Ruby.

    This tool is written in C and comes with all dependencies pre-included in the package. This means that it is very portable across all platforms and you do not need any special setup. Having all the files in the same folder is enough to make it run.

    Proxify is multithreaded and can, in theory, make optimal use of multi-CPU environments. The tool is non-buffering, which means that it is really fast. It supports WebSockets, WebRTC and other streaming protocols. It fully understands HTTP. It does SSL interception and clones certificates on the fly.

    Integration At Its Core

    As mentioned earlier, Proxify is great if you need to create a custom proxy application or you want to embed proxy functionality into your own app. The tool will do all the hard work, and you just need to provide a very simple RESTful HTTP service to do the forwarding of data between the browser and the remote target. The protocol is based on the HTTP proxy specifications, with the only difference that you don't have to support the CONNECT method or do any SSL interception. Additionally, Proxify automatically detects the end of streams when certain types of protocols are used. This makes the tool a very handy, easy to reuse piece of technology for situations when we just want to write simple scripts to do a trivial job without having to understand completely how the whole stack works. Everything is pretty much magically handled for you, and there is a lot going on behind the scenes.
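
    To give a rough feel for what such a forwarding service involves, here is a generic, hypothetical sketch. It is not Proxify's actual integration protocol; the port, class name and GET-only handling are made up purely for illustration:

    # Generic, hypothetical sketch of a plain HTTP forwarding service. It relays
    # proxy-style GET requests (absolute URL in the request line) to the remote
    # target and passes the response back; it is NOT Proxify's wire protocol.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import urllib.request

    class Forwarder(BaseHTTPRequestHandler):
        def do_GET(self):
            # For proxy-style requests, self.path holds the absolute target URL.
            with urllib.request.urlopen(self.path) as upstream:
                body = upstream.read()
                status = upstream.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), Forwarder).serve_forever()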

    Other Usages

    Proxify can be used for many things. Here is an example of how you would launch the tool to hex-dump all the traffic to the screen:

    ./proxify -p 8080 -x

    The output of this command will look like this:

    xxxxxx:xxxxx pdp$ xxx/proxify -p 8080 -x
    Proxify Version 1.0
    
    Copyright 2013 GNUCITIZEN. All rights reserved.
    Commercial use of this software is strictly prohibited.
    For commercial options please contact us at http://www.gnucitizen.org/.
    
    [0000]   47 45 54 20 2F 20 48 54   54 50 2F 31 2E 31 0D 0A   GET...HT TP.1.1..
    [0000]   55 73 65 72 2D 41 67 65   6E 74 3A 20 63 75 72 6C   User.Age nt..curl
    [bfc8]   2F 37 2E 32 37 2E 30 0D   0A 48 6F 73 74 3A 20 77   .7.27.0. .Host..w
    [f4c9]   77 77 2E 67 6E 75 63 69   74 69 7A 65 6E 2E 6F 72   ww.gnuci tizen.or
    [cea4]   67 0D 0A 41 63 63 65 70   74 3A 20 2A 2F 2A 0D 0A   g..Accep t.......
    [609f]   50 72 6F 78 79 2D 43 6F   6E 6E 65 63 74 69 6F 6E   Proxy.Co nnection
    [f2e5]   3A 20 4B 65 65 70 2D 41   6C 69 76 65 0D 0A 0D 0A   ..Keep.A live....

    If we want to dump all requests and responses into individual files, then we can use the following command:

    ./proxify -p 8080 -D /path/to/folder

    This will also capture everything that is streamed, which means that you can even record video, audio and whatever else is streaming over HTTP. You can mix and match all options for best results; please check the command flags for more information.

    Tool Readiness

    Proxify is essentially ready for most use cases, although there are several things which need to be improved, especially around SSL interception. Please use the tool with caution because it may have memory leaks or even memory corruption bugs. A huge portion of the code is not thoroughly tested. This is something I am working to improve in the near future. I am also planning to add more options for even better control over the process.

    Fair Use

    The tool is free! You can use it right away. However, commercial use is strictly prohibited at this stage. If you want to use the tool for commercial purposes, please get in touch to discuss your options.

  • It is hard to get back to blogging, especially when there are easier alternatives to scratch your itch - I am talking about Twitter. However, I decided to make the effort and see where it takes me. It will be difficult initially, but practice leads to continuous improvement.

    What I would like to do is highlight some of the work I did to take two relatively simple and straightforward penetration testing practices to the next level: XML and JSON fuzzing. If you have worked as a penetration tester, or you have been even moderately interested in web security, you will have encountered a web service written on top of either of these technologies.

    Both JSON and XML are slick beasts. They are both structured data containers and rely on well-formatted documents in order to be processed successfully. There is very little room for movement outside the spec, and in fact they are both error intolerant. Most parsers will explode even on the tiniest errors in the document structure, for example if you leave a trailing comma after the last item of an array inside a JSON structure. The reason I am mentioning this is that it is the basis of the two core fuzzing strategies - as I define them.

    The first strategy is to concentrate on finding bugs in the actual parser/processor. In this case we aim to submit ill-formatted documents and observe for strange behaviour. The types of problems typically discovered through this strategy are memory corruption bugs. The reason for this is that, even in 2012, strings are still difficult to deal with, and both formats are human-readable and rely heavily on processing text. Even binary input is represented textually.
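
    As a minimal sketch of this first strategy (the mutation operators and the use of Python's own json parser as a stand-in target are assumptions purely for illustration), it boils down to something like this:

    # Minimal sketch of strategy one: mutate a well-formed document into
    # ill-formatted variants and watch the parser for anything other than a
    # clean parse error. In practice you would target a native parser.
    import json, random

    SEED = '{"user": "admin", "ids": [1, 2, 3], "active": true}'

    def mutate(doc):
        data = list(doc)
        pos = random.randrange(len(data))
        op = random.choice(("flip", "delete", "duplicate"))
        if op == "flip":
            data[pos] = chr(random.randrange(32, 127))   # replace one character
        elif op == "delete":
            del data[pos]                                # drop one character
        else:
            data.insert(pos, data[pos])                  # duplicate one character
        return "".join(data)

    for _ in range(1000):
        sample = mutate(SEED)
        try:
            json.loads(sample)
        except ValueError:
            pass                                         # expected: a clean parse error
        except Exception as err:
            print("unexpected parser behaviour:", repr(err), sample)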

    The second strategy is to concentrate on finding bugs after the document has been parsed/processed. In this case we aim to submit unexpected input while still sticking to the format and the specifications of the document. This strategy can discover a much wider range of bugs, depending on how the structured data is used later on inside the application. The types of bugs discovered will depend on the target platform, language and all kinds of other things.

    Both strategies can be mixed. However, from personal experience, I believe that you will be better off if you don't, because things can get quite confusing and you may not be able to set up all the necessary measurement equipment correctly in order to find actual bugs or extract any useful data.

    The first strategy I tend to leave in the realm of research. The reason for this is that there are not that many parsers for either JSON or XML. Each programming language usually offers a few libraries which are widely adopted. Fuzzing these libraries will get us bugs which apply to all applications that make use of them - i.e. research, in my opinion. On the other hand, the second strategy is targeted towards specific applications and platforms. And this is what I will mainly concentrate on for the rest of this series of articles.

    As I discussed earlier, this "second", so to say, strategy is all about sending unexpected input while still keeping the document well formatted. So what is unexpected input? Well, unexpected input is everything from very large numbers to very small ones (MIN_INT, MAX_INT, UNSIGNED MAX_INT, LONG, etc.). Unexpected input also includes logical values such as true and false, the special atoms nil and null, and 0 and 1. Some other unexpected values could be empty data structures where a scalar value is expected, such as sending an empty array where the application expects a number or a string. The list goes on and on, and you can spend weeks tuning a fuzzer to find more interesting stuff by incorporating more unexpected input.
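
    Here is a minimal sketch of what such a generator can look like in practice. The seed document and the list of unexpected values are illustrative assumptions, and a real fuzzer would send each payload to the target rather than print it:

    # Minimal sketch of strategy two: keep the JSON well formed but swap in
    # unexpected values at every position. Seed and value list are assumptions.
    import copy, json

    UNEXPECTED = [2**31 - 1, -2**31, 2**32, 2**63, 0, 1, True, False, None, [], {}, ""]

    SEED = {"user": "admin", "age": 30, "tags": ["a", "b"]}

    def mutations(node, path=()):
        # Yield (path, replacement) pairs for every position in the document.
        if isinstance(node, dict):
            items = node.items()
        elif isinstance(node, list):
            items = enumerate(node)
        else:
            items = ()
        for key, value in items:
            for bad in UNEXPECTED:
                yield path + (key,), bad
            yield from mutations(value, path + (key,))

    def apply(doc, path, value):
        mutated = copy.deepcopy(doc)
        target = mutated
        for key in path[:-1]:
            target = target[key]
        target[path[-1]] = value
        return mutated

    for path, bad in mutations(SEED):
        payload = json.dumps(apply(SEED, path, bad))
        # Each payload stays valid JSON; the interesting part is how the
        # application, not the parser, reacts to it.
        print(payload)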

    It is fair to say that not all unexpected values are equal. Some values are more likely to cause strange behaviour than others, and this all depends on the target platform. Let's take JSON for example. In JSON we have 2 main structured containers: {} - object and [] - array. Now, Java applications typically map/unmarshal JSON structures to classes. Therefore, if we have a class with a public member variable "a" of type integer but we send an empty object, an exception will be raised before the input is even processed by the application. This is not quite the case in other programming languages which are not so strictly typed. For example, in PHP the developer may expect an integer but the parser will actually produce an array, and while this will cause an error at some point later inside the application, it will not immediately explode during parsing. These kinds of conditions are very interesting.

    So why am I mentioning this? Well, typically a fuzzer will generate a lot of combinations. Some of them may be fruitful. Most of them will be a waste of time. However, by knowing what we are up against, we can tune the fuzzer to be smarter and, as a result, a lot faster and more fruitful - I would rather spend time manually analysing 1,000 results than 1,000,000.

    I think I am running out of energy. After so many years of silence, this post looks quite lengthy. Btw, such fuzzers exist. You can find one as part of the Websecurify Online Suite, and you can go ahead and try it for free now. Both JSON and XML are well supported. The reason I am mentioning this is that the rest of the series will concentrate on exploring how these fuzzers work and what kind of vulnerabilities we can find with them.

  • This is really one of my favourite talks from this year's HITB in KL.

    @haroonmeer did an exceptional job at describing what it takes to produce an exceptional piece of work/research and the various pitfalls and sacrifices one needs to make.