It’s All About Quality

June 28th, 2011

I recently joined Core Security, and one thing that strikes me is the difference in philosophy and approach to exploit development between our solutions and others on the market. Core Security offers each customer peace of mind in knowing that there is a dedicated group of Core employees tasked with writing and continually testing the exploits in our products. Core Security doesn’t pay bounties to third parties in an effort to produce exploits. Instead, we’ve developed a systematic approach to in-house exploit development and testing that is unmatched in this industry.

For more than 10 years, Core Security has hired and maintained a full-time, in-house team of dedicated Exploit Writers and Testers focused on producing safe and reliable commercial-grade exploits. Our customers often cite our in-house CoreLabs R&D operation, our practice of anticipating future info security needs, and our dedication to innovation as key reasons for selecting Core Security’s solutions to carry out their proactive security testing and measurement efforts.

In this post, I’m going to talk about Core’s commercial-grade exploits and what we do to make them stable and consistent. I believe that our commitment to developing exploits in-house is one thing that separates our solutions from some other testing options. Furthermore, we maintain, update and improve our exploits as the product capabilities grow – so, for instance, they work while performing Man-in-the-Middle attacks over WiFi or when pivoting from multiple operating systems. It is for these reasons, and the reasons listed below, that Core Security’s approach to writing, testing and releasing exploits to our customers is the best in the industry.

It’s all about testing …

Our extensive library of exploits is continuously run through rigorous QA, using an effective combination of automated testing processes and close personal inspection. The testing teams work hard to reduce the chances that our exploits will have unpredicted or ancillary effects on tested systems or processes.

While the automation of testing for existing exploits is relatively easy, extensive testing of new, “work in progress” exploits is significantly harder and can only be done by hand. The Testing Team exhaustively tests the new exploits in a range of environments to eliminate or reduce the circumstances when those exploits could cause issues in the target environment.

It’s all about uptime …

The integrity of a system is directly related to its ability to operate in an unimpaired condition. Core Security’s exploits are written and tested to a commercial-grade standard, and our agents are designed with the same care. Core Security seeks not to disrupt the integrity of tested systems while running exploits, and successful exploits will automatically deploy a payload – the patented Core Agent. This agent can be deployed as memory-resident, file-based or persistent. Memory-resident agents run in RAM, and they are automatically removed under a number of circumstances, including when a user issues a cleanup command, a user loses connection to the agent, or the compromised service or machine is restarted. File-based and persistent agents can be copied to a target’s file system and can be removed using our solutions’ Clean-Up capabilities or by hand.

In the rare cases where an agent remains on a device after a test is completed, a memory-resident agent is automatically erased from the system’s memory the next time the tested machine is rebooted. For file-based and persistent agents that were not cleaned up, it is not possible for anyone else to communicate with that agent due to the authentication that is performed between the Impact workspace and the agents it has deployed. However, it is possible for Core Impact to reconnect to that agent and “Clean Up.” Additionally, all information about how the agent was packaged is contained in the module logs of Core Impact solutions, providing enough information to remove the agent by hand.
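
For readers who want a picture of why that authentication blocks a third party from ever driving a leftover agent, here is a generic, hypothetical challenge-response sketch in Python. To be clear, this is not a description of Core Impact’s actual protocol; the function names and the use of a pre-shared HMAC key are assumptions made purely for illustration. It simply shows the basic property: without the secret established at deployment time, an outsider cannot produce a valid answer to the agent’s challenge.

    import hmac
    import hashlib
    import secrets

    # Hypothetical illustration of mutual challenge-response authentication
    # over a pre-shared key. This is NOT Core Impact's protocol; it only
    # shows why a party without the key cannot talk to a leftover agent.

    def make_response(key: bytes, challenge: bytes) -> bytes:
        """Prove knowledge of the shared key for a given challenge."""
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def mutual_auth(console_key: bytes, agent_key: bytes) -> bool:
        """Each side challenges the other; both must hold the same key."""
        console_challenge = secrets.token_bytes(32)
        agent_challenge = secrets.token_bytes(32)

        # The agent answers the console's challenge with its key...
        agent_proof = make_response(agent_key, console_challenge)
        # ...and the console answers the agent's challenge with its key.
        console_proof = make_response(console_key, agent_challenge)

        # Each side verifies the other's proof using its own copy of the key.
        agent_ok = hmac.compare_digest(
            agent_proof, make_response(console_key, console_challenge))
        console_ok = hmac.compare_digest(
            console_proof, make_response(agent_key, agent_challenge))
        return agent_ok and console_ok

    if __name__ == "__main__":
        shared = secrets.token_bytes(32)   # key established at deployment time
        print(mutual_auth(shared, shared))                    # True: legitimate console
        print(mutual_auth(secrets.token_bytes(32), shared))   # False: outsider

In the demo at the bottom, the caller holding the shared key authenticates successfully, while a caller with any other key is rejected.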

It’s all about stability …

Some exploits – due to the nature of the vulnerability they are exploiting – could disrupt the stability of the targeted service. Consequently, while Core Security has a goal of providing only safe exploits, there is occasionally the potential to disrupt system processes when executing some exploits. Before one of these specific exploits is executed, users are cautioned regarding the potential implications of running that exploit.

One goal of the Exploit Testing Team is to determine if an exploit will cause a loss of system stability. Of course, inadvertently putting a system into a degraded state during an assessment is typically not part of the Scope of Work of most security test and measurement assessments. Core Security therefore offers customers the peace of mind to know that our exploits are thoroughly tested, and that we have minimized the likelihood of crashing systems or making services unavailable.

It’s all about cleaning up …

Another common concern of security testers is ensuring that any agents/payloads they deploy will not establish a path by which attackers could someday find their own way into an organization’s networks or systems. During the penetration test, our products’ mutual-authentication design does not permit backdoor entry. And after the test is over, if communication with a file-based or persistent agent is lost, it is possible to reconnect to that running agent and issue the “Clean Up” command. Furthermore, Core’s products log all of their activities, meaning that agents can be easily found in the event that a manual clean up is required.

It’s all about trust [but verify] …

Over the years, many of our customers – including large US Federal Agencies – have conducted independent code reviews of our products to confirm their safety and predictability in sensitive IT environments. To my knowledge, these reviews have always resulted in the organization deciding to implement the solution.

It’s all about making the best use of your time …

What is most revealing about Core Security’s products is that they make an expensive, time-consuming and potentially disruptive process quicker, safer and easier.  While it might be possible to replicate the results provided by a Core Security product by hand (a highly skilled and trained hand), it’s clear to me that a better use case is to have that “skilled hand” use a commercial-grade product that stabilizes, standardizes and automates tasks that were previously resource-intensive and potentially risky – allowing them to focus their attention on those aspects of testing that benefit from a “wetware” based approach. As such, our customers don’t have to worry about the stability and security of exploits emanating from the public domain. Instead, they gain consistent and repeatable real-world testing capabilities – and they free up bandwidth to focus on really tricky things that are best suited to the human mind.

– Brian Curry, Product Marketing Manager

To comment on this post, please CLICK HERE.

Enough Already!

June 9th, 2011

Could it be any clearer that information security approaches that focus on defensive tactics just aren’t working? How many times do we need to open the Wall Street Journal and see a headline about how yet another company has had sensitive consumer information stolen?  In just the past day, we’ve spoken with several national reporters and many companies who want to know what can be done to control the escalation of major breaches. Our answer is pretty straightforward: proactively test yourself and find the problem before someone else does.

It’s time to go on the offensive with security. With all of the coverage on cyber-attacks affecting major corporations, most recently Citigroup, the question that comes to mind is, “Why are companies so hesitant to perform regular security testing?”  By security testing, I mean using safe attacks (the kind that give you access but don’t cause any damage) to proactively see if you can break into your own infrastructure. Basically, you figure out if you have a hole that would allow a bad guy access and fix it before they figure out a way to leverage it.

Here at Core Security Technologies, we have over 1,300 customers who are testing their security in real-time for breaches, but what about the other tens of thousands of companies that aren’t?  In today’s marketplace, we could meet 10 companies in the same industry, with the same public profile and virtually the same technology deployed. But only half of them would be willing to engage in testing themselves for exploitable holes. Why? Everyone we speak to says they really like the idea of finding out if someone could break in (think APT) and steal something they care about. Not only is this a well-established security practice, but there are also mature technologies that can completely automate the process for you. So what is the real issue?

I think if you ask many organizations why they aren’t proactively testing their security, the answers would boil down to a few simple issues – most notably, security information can be overwhelming and showing where you have a problem is a scary proposition. A lot of organizations don’t want to admit to themselves (or their management) that they aren’t perfect and/or then have to allocate the resources needed to fix the problems. Plausible deniability is an easy route for too many people. Some organizations are worried that you might leave a service unavailable during this testing, which admittedly is a possibility. However, there are best practices to maximize service uptime, like working with the asset owner before the test, testing in the lab, testing a staging environment, or testing during a maintenance window. Just do it at a time of YOUR choosing.

A good friend and industry analyst said, “The bad guys don’t sign a code of ethics.” Their attacks are coming at the worst possible time and are geared to get as much info as they can. Companies must have the will to find out what paths potential hackers could use to infiltrate their systems and fix them before a breach occurs. Will it be a little bumpy along the way? Yes. But the bottom line is that we can’t solely rely on defensive security products any longer. It’s not working.

- Mike Yaffe, Director of Enterprise Marketing

To comment on this post, please CLICK HERE.

Bringing a Legacy of Proactive, On-Demand Security Testing to the Cloud

June 6th, 2011

Today, I’m proud to share news of another significant advancement from Core Security Technologies: the Core CloudInspect cloud security testing solution for Amazon Web Services (AWS).

Given the spate of high-profile breaches in the news, it’s no surprise that organizations are actively seeking ways to proactively assess their security postures before incidents occur. At Core, we’ve long said that penetration testing puts this power into the hands of our customers – allowing them to proactively validate their security controls and helping them to better answer the question, “Could this happen to us?”

Now, as their critical information assets move into virtualized environments outside the walls of their organizations, security professionals and business leaders alike are again wondering how best to verify their IT security and answer practical questions about their threat readiness.

Along with our partners at Amazon, we have been talking with and listening to our customers and the broader community – and it’s clear that the cloud is presenting them with visibility issues regarding security. So, as we’ve done since 1995, Core Security is responding to customer and market needs with a solution that offers clarity via proactive, real-world security intelligence.

As the first on-demand security testing solution for cloud deployments, Core CloudInspect offers organizations a level of security visibility and access previously unavailable outside of their internal environments. With CloudInspect, Amazon AWS customers can verify the readiness of their cloud-based systems and applications against real-world threats – and get the actionable information they need to address any exposures.

It’s clear that we could not bring security testing to the cloud without the cumulative base of technology that has been proven for over a decade in CORE IMPACT Professional and further honed in CORE INSIGHT Enterprise – or without the hundreds of combined years of expertise brought to bear by our research, development, and consulting services groups.

So while CloudInspect is indeed the first of its kind, I’m confident that our legacy of research, experience and innovation will make it the effective solution that our customers are demanding to verify and validate the security standing of their cloud deployments.

– Mark Hatton, president & CEO

Click to visit the CloudInspect home page

To comment on this post, please CLICK HERE.

Looking Behind the Curtain: Evading Antivirus and other Defenses with CORE IMPACT Pro

May 23rd, 2011

A common question I get from customers and non-customers alike is how our products can help them assess the effectiveness of their defensive products and measure the amount of additional security that these investments offer. In a lot of environments patches or fixes cannot be applied (either at all or in a timely manner), so a compensating control (an antivirus product or some kind of IDS/IPS) is deployed instead to reduce or eliminate the threat the vulnerability presents to the business.

When you consider our solutions and those defensive technologies, you really have two products directly opposed to each other. Our products are designed and engineered to allow you to test your environments and defenses using real-world attack techniques; those products are intended to stop real-world attacks from gaining any foothold in the environment. What does this mean? In reality it means that if one of our exploits is successful, half of our customers are ringing the creators of their defensive products complaining that they didn’t stop the exploit, and when an exploit fails, half our customers are calling us complaining that the exploit didn’t evade the defensive product. We end up in a cat and mouse game – I would say race, but that implies a finish line, and I don’t see any sign of that.

It is an intellectually fun and exciting game – but the reality is that evading these defenses is hard, and even when you do, it doesn’t mean you are finished. When we implement a feature that evades AVs or IDS/IPS-type products and release it to our customers, we don’t break out the champagne and reassign the folks that devised the evasion – instead we monitor the defensive products to determine how they react to the change and design triggers/techniques to mitigate those changes – and then the whole dance starts again.

In order to better highlight the different types of work that we do around Exploit Effectiveness (our way of describing the desire for exploits to beat the defenses trying to stop them) I asked Core Security developer Alejandro David Weil to describe a sample of the work he has been doing recently. I think you will agree that it is an interesting insight into the various methods available to enable our exploits to avoid detection and help our customers better measure the effectiveness of the defenses they have invested in.

- Alex Horan, CORE IMPACT Product Manager

 

The Many Faces of Exploit Effectiveness - by CORE IMPACT developer Alejandro David Weil

A few months ago, the Core Security Exploit Effectiveness team started digging deeper into the evasion techniques that we build into our products. This is a really broad topic, and it would be impossible to comprehensively cover our evasion capabilities in one post. Exploit effectiveness applies to almost every penetration testing feature we offer, and when building our products we frequently tackle a number of considerations, such as:

  • choosing the best connection method
  • defining an exploit selection order
  • generating less-suspicious network traffic

However, antivirus and IDS evasion consistently rise above the rest, since it is their job to “catch” attackers.

Well, our job at Core Security is to help you test your organization against real-world attacks. We’re therefore constantly looking for ways to circumvent defensive technologies and demonstrate how attackers can still take advantage of vulnerabilities – with or without the latest AV or IDS in place. Surprisingly (or maybe not), it’s still possible to skip protections using some well-known techniques.

Case Study: Client-Side Evasion

We recently took a CORE IMPACT client-side exploit for a vulnerability in a Microsoft ActiveX control and ran it against replicas of a vulnerable machine, each running one of nine of the most popular antivirus solutions. This relatively standard attack was only detected by two of the AVs. Although we expected the exploit to be flagged by more than two solutions, those two detections proved that we still had work to do.

Solution A: HTML Obfuscation

The first antivirus that blocked the exploit is, in my opinion, the best-known antivirus software on the market. It detected exactly the vulnerability the exploit was designed to target. Since the AV knew what we were attacking, it might appear at first glance that there was little we could do – nonetheless, I began some tests to learn how exactly it detected the attack. However, a co-worker soon suggested that I simply obfuscate the HTML containing the vulnerable function used. That did the trick, so I never had to determine how the attack was detected; I just had to “cloak” it.

We created the obfuscation capability by recursively parsing HTML and JavaScript code, splitting it into chunks, and re-encoding it randomly with specific JavaScript functions several times over. The encoding functions provide, among other things, symbol and string compression and translation to randomly defined character sets.
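
To give a feel for what this kind of transformation involves (strictly as a simplified, hypothetical sketch, not the obfuscation engine that ships in our products), here is a small Python generator that translates a JavaScript payload through a random character map, chunk by chunk, and wraps it in a loader that rebuilds and eval()s the original in the browser. The helper names obfuscate_js and escape_js are invented for this example.

    import random
    import string

    def escape_js(s):
        """Escape a Python string as \\uXXXX sequences so it is safe inside
        a double-quoted JavaScript string literal."""
        return "".join("\\u%04x" % ord(c) for c in s)

    def obfuscate_js(js_source, chunk_size=16):
        """Hypothetical example: encode a JavaScript payload chunk by chunk
        against a random substitution alphabet and wrap it in a loader that
        decodes and eval()s it. Assumes the payload is printable ASCII."""
        alphabet = list(string.printable)
        shuffled = alphabet[:]
        random.shuffle(shuffled)
        enc_map = dict(zip(alphabet, shuffled))
        dec_map = {v: k for k, v in enc_map.items()}

        # Split the payload so no contiguous run of the original source
        # survives in the output, then translate every character.
        chunks = [js_source[i:i + chunk_size]
                  for i in range(0, len(js_source), chunk_size)]
        encoded = ['"%s"' % escape_js("".join(enc_map[c] for c in chunk))
                   for chunk in chunks]

        # Decode table: encoded character -> original character.
        table = ",".join('"%s":"%s"' % (escape_js(k), escape_js(v))
                         for k, v in dec_map.items())

        # The emitted loader rebuilds the original source and eval()s it.
        return ("var T={" + table + "};"
                "var C=[" + ",".join(encoded) + "];"
                "var out='';"
                "for(var i=0;i<C.length;i++){"
                "for(var j=0;j<C[i].length;j++){out+=T[C[i].charAt(j)];}}"
                "eval(out);")

    if __name__ == "__main__":
        print(obfuscate_js('alert("harmless demo payload");'))

Because each run picks a fresh character map and fresh chunk boundaries, the same payload never produces the same page twice, which is exactly the property that defeats naive signature matching.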

Solution B: The Mutable Decoder

Solving the next evasion challenge was a little more complex, so bear with me here – it’s interesting.

We had recently been working on a machine-code encoder to evade string matching – a technique virus writers have used since the dawn of AV. The first thing a lot of virus writers do is encrypt their code with different keys and/or algorithms to make it impossible to get a substring-based pattern of it. While this approach seems fine at first glance, the problem remains that the virus has to include code to decrypt the malicious code – and that decrypting code cannot always be the same (again, to avoid fingerprinting). But because the mutated decoder code has to be generated by the virus code itself, defensive solution vendors can take one instance of the virus, analyze it, and make it generate its different decoders – ultimately aiding their string-matching efforts.

We therefore took the approach of generating different variations of decoders when our exploits request them. Also, since we generate decoders from Python, we can perform more complex code generation than we could if we did the same in assembler. This approach effectively mutates the decoder routine and therefore enhances the exploit’s overall effectiveness.

The Mutable Decoder supports the inlineegg code generation library we use to make code eggs. In designing the Decoder, we followed several criteria including:

  • the routine had to be built from instructions and higher-level blocks of code that could generate, and automatically switch between, different versions
  • there could be no fixed byte in any position
  • it had to employ deterministic generation

As a result, we ended up with over a thousand different decoders in which we spread “garbage code,” which is composed of different machine instructions with restrictions that prevent them from negatively affecting the decoder. That gave us – are you ready for this? – 1,191,310,725,003,002,020 (about 1.19e+18) garbage codes. Mixed through the decoder routines, these translate to more than 1.19e+21 different codes. AND we can still generate decoder code in a deterministic way that lets us test them all before release. Needless to say, this makes it much harder for defensive solutions to make a string-based match.
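
To make the idea of generating decoders from Python a little more concrete, here is a deliberately small, hypothetical sketch. It is not the Mutable Decoder and it does not use inlineegg; it only shows the same three ingredients in miniature: a per-sample transform and key, interchangeable instruction variants, and harmless garbage bytes, all produced deterministically from a seed. The code assumes 32-bit x86, and build_stub and garbage are invented names.

    import random
    import struct

    # Single-byte x86 instructions that disturb neither ESI, ECX, nor the
    # zero flag the loop test depends on; used as "garbage" to vary the bytes.
    GARBAGE = [b"\x90",   # nop
               b"\xfc",   # cld
               b"\xf8",   # clc
               b"\xf9"]   # stc

    def garbage(rng, max_len=3):
        """Return 0..max_len harmless filler instructions."""
        return b"".join(rng.choice(GARBAGE) for _ in range(rng.randint(0, max_len)))

    def build_stub(payload, seed):
        """Encode a non-empty payload with a random byte transform and prepend
        a freshly mutated 32-bit x86 decoder stub. Deterministic per seed."""
        rng = random.Random(seed)                 # deterministic generation
        key = rng.randint(1, 255)

        # Variant 1: the transform (and its matching decode instruction).
        if rng.random() < 0.5:
            encoded = bytes(b ^ key for b in payload)
            decode_op = bytes([0x80, 0x36, key])              # xor byte [esi], key
        else:
            encoded = bytes((b - key) & 0xFF for b in payload)
            decode_op = bytes([0x80, 0x06, key])              # add byte [esi], key

        # Variant 2: how the payload pointer is advanced.
        advance = rng.choice([bytes([0x46]),                  # inc esi
                              bytes([0x83, 0xC6, 0x01])])     # add esi, 1

        # Loop body with garbage mixed in, then "dec ecx / jnz body".
        body = garbage(rng) + decode_op + garbage(rng) + advance + garbage(rng)
        branch = bytes([0x49, 0x75, (0x100 - (len(body) + 3)) & 0xFF])

        decoder = (bytes([0x5E, 0x56]) +                      # pop esi / push esi
                   bytes([0xB9]) + struct.pack("<I", len(encoded)) +  # mov ecx, len
                   body + branch +
                   bytes([0xC3]))                             # ret -> decoded payload

        # jmp short over the decoder to a call that jumps back into it,
        # leaving the payload's address on the stack for "pop esi".
        jmp_short = bytes([0xEB, len(decoder)])
        call_back = bytes([0xE8]) + struct.pack("<i", -(len(decoder) + 5))
        return jmp_short + decoder + call_back + encoded

    if __name__ == "__main__":
        demo = b"\xcc" * 8                                    # stand-in payload
        print(build_stub(demo, seed=1).hex())
        print(build_stub(demo, seed=2).hex())

Run it twice with different seeds and you get two stubs that decode the same payload while sharing almost no byte sequence; run it twice with the same seed and you get identical output, which is what makes systematic pre-release testing of the generator practical.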

“But what about AV sandboxing and behavioral analysis?” you ask?

It’s true that the Mutable Decoder was made with IDS/IPS/HIPS software evasion in mind, because its focus was to make it harder to get a string match. We didn’t think it would be effective against antivirus, since AV solutions typically use sandboxing/code emulation and behavioral analysis, which make an encoder useless. Sandboxing and code emulation leave the suspicious or unexpected code naked and easily detectable after executing the decoding routine. Behavioral analysis is designed to detect the suspicious behavior while it’s happening. However, these techniques require much more processing power and execution time – so, if the AV knows exactly what to look for and where to look, string matching is often sufficient.

However, we were wrong about our Mutable Decoder’s ability to evade AV. When we analyzed how the second antivirus solution detected our attack, we found it was triggered by a little fragment of code our exploits use to deploy an IMPACT agent. We then found that adding the Mutable Decoder to the triggering code was enough to skip the antivirus alert and safely install the Impact Agent!

What we learned

  • Even vexing defensive challenges, which at first seem impossible for attackers (and penetration testers) to surmount, can be solved through creative research.
  • The deeper we dive into the subject of exploit effectiveness, the more improvement potential we reveal. For example, while making the Mutable Decoder, we discovered that we had to create a garbage egg to generate valid (but restricted) machine code for obfuscation – and we realized the same approach would also be effective for generating padding and nopsled chunks.
  • In client-side environments, antivirus runs under constraints analogous to those in malware detection – specifically, the deeper the analysis it does, the longer it takes and the more it degrades performance.
  • Criminals are in a race with defensive solution vendors, so we have to be in that race, too. Like attackers in the wild, we are continuously working to understand detection techniques and find ways to bypass them.
  • Old techniques can still come in handy!

- Alejandro David Weil, Developer

To comment on this post, please CLICK HERE.

Looking Behind the Curtain: QA Testing CORE IMPACT Pro

May 9th, 2011

Those who know me are aware that I wasn’t born with perfect dress sense (being colour blind doesn’t help) and as such I have a tendency to rely on sales assistants at clothing shops to help pick out new outfits. Experience has taught me that while the salesperson may insist that an outfit will “suit you sir” or complement me perfectly, I should really get someone I trust to give me a once-over before I go public with my latest rocking outfit.

Even more important is the quality of the products we release here at Core Security. In fact, it is really important to everyone here and, as you would expect, we have always put a lot of effort and attention into the quality testing of each version of our products. However, as a result of the effectiveness of our testing, most of our customers are blissfully unaware of the hard work our testing teams do to help prevent our customers from experiencing any issues.

I sat down with Laura Balian and asked her to tell me about her experiences working in testing for the last 5 years. Given that testers only have a finite amount of time to test a product, I wanted to understand the thorough process Laura and the ~15 other testers use when assessing new features and product versions. How do they ensure the tests they perform both emulate the way our customers will use the product and uncover any issues that need to be addressed? Here, in her own words, is what Laura considers to be the important elements of quality testing.

- Alex Horan, CORE IMPACT Product Manager

 

A Core QA Professional’s Viewpoint, by Laura Balian

Years ago, when I started to work in testing, it was difficult for me to see at a glance how the test cases for new products were devised. I realized from the start that you can quickly understand “why” we were doing testing, but “how” is a level of detail that is trickier to grasp. You can read and learn from almost anywhere and teach yourself or be taught. Ultimately, things work better when you have that light-bulb moment when everything falls into place and you say “I’ve got it!”  No matter how much you have read, studied or practiced, everything makes more sense after that moment.

In my case, that moment came when a workmate told me, “Just try and break it!” Now, I am not telling you testing is a matter of just “breaking” things, but the mindset helps. When we are testing a product we try to make it work as expected by using it in the normal manner, but we also design tests to see how the product reacts when it is not used as we anticipated. At least it helps me to think that if you break it first, a user is less likely to experience the frustration that comes when a product breaks or fails, and instead experiences what they paid for it to do. I think that my previous statement explains the “why” aspect of testing, which is to avoid users having broken pieces of your product in their hands – and is commonly translated in the business world as “good quality”. But of course, breaking sounds like much more fun!

So, how can you maximize the quality of the product you make and reduce the risk of negative results as much as possible? Using a professional, experienced testing team is the most direct way. The amount and quality of the resources you invest in testing has a direct and measurable impact on the final result and will strongly decrease the odds that your software is “buggy.” Those resources are not financial; what I am talking about is time and people – and it’s great to be a member of such a well-staffed and talented QA group here at Core.

Nowadays, the willingness and desire to automate everything is rampant, especially in testing. However, many companies and testers are forgetting about the value of key parts of the testing process: analyzing the new product or feature, thinking about and planning how new features will be tested, both individually and in the larger context of the product, and then executing and watching carefully (both for what you expected to see and anything unexpected). Of course, I am not denying that automation saves time and helps testers quickly check for basic issues and perform routine activities, but you cannot automate if you don’t first plan and design what to “break” – and on the first execution of any new feature or capability, human eyes cannot be replaced.

Fortunately, our team has both types of testers and each group handles the part they like. I am part of a group of people working as analysts, designers and executors; we work closely with another group of people that automate test cases and constantly develop better automation tools.

We are given documents at the beginning of every release, which we read carefully to start getting ready for the upcoming features. Once we get the finalized requirements, we start working closely in small groups organized around each particular feature. Testers, developers and automators dynamically form new groups on the fly to assure the quality of the released product.

Developers help us to understand the technical aspects of what they are doing and add information about the test scenarios they consider important. Automators work closely with us to think about the best ways to automate both new test cases and old ones for regression purposes. I think this is the best way QA work can be done. However, during the years I have been working as a tester, I have heard people say manual testing is something a robot can do and that “executing” is a “monkey” issue. But you can only automate properly once you have seen the product as a user sees it and run it from start to finish with a clear goal in mind.

Alex asked me what the important qualities of a testing team are. I say that, for a number of reasons, the importance of having people’s minds, eyes and hands directly involved in the tests should never be underestimated. First of all, you need a good team to take software requirements, analyze them carefully and understand them in the context of “Why are we making this feature?” and “How will it help our different types of customers?” Only then can the team plan what they are going to test, how, and in what timeframe. Second, we all know software can vary from requirements to the final product, and that variation requires the testing team to be flexible enough to both understand and absorb the impact of any sudden or deep changes and adapt their tests on the fly. Third, it is only the tester’s eyes that can see those tricky issues a user will experience, because testers actually use the product rather than just running a single script and observing the results. It is only when you run a test case manually, actually “touching” the product, that you can see the elements of the product working together, as well as tricky combinations that you wouldn’t otherwise have thought to try.

Last but not least, every large team must have good communication. A good tester has to be able to communicate properly and negotiate with their different stakeholders. Testers should also be able to interpret what their product manager is asking for on behalf of the customers. They should also be able to explain clearly to the development team how to reproduce a bug and why they think it is not proper behavior for the software to exhibit. Testers have to be prepared to handle discussions where developers may defend their work and not accept some issues as bugs or disagree on an expected behavior. And most important of all, a good tester should understand the user’s point of view and be capable of seeing and using the software through their eyes.

So to wrap up, people are the most valuable resource you can ever have. I would say a good quality product requires quality testers, and a good tester needs to be open-minded, flexible, willing to be challenged and to learn, analytical, empathic, and a good communicator. I believe that the Impact Pro QA Team has all that and more.

- Laura Balian, QA Tester

To comment on this post, please CLICK HERE.