Threat Risk Assessment
Supporting docs:
- https://www.sans.org/reading-room/whitepapers/auditing/overview-threat-risk-assessment-76
- https://www.cse-cst.gc.ca/en/system/files/pdf_documents/tra-emr-1-e.pdf
Differentiators:
- Utility of reporting to client:
  - We always provide context to the client organization in our reporting and in our testing approach, answering the question: what does this mean for your environment?
  - Executive summaries are available as a condensed document or presentation, so results are meaningful to a non-technical audience such as an auditor or executive, and attention is drawn to the right risks.
  - A consultant narrative accompanies every vulnerability, providing context and guidance.
  - A simple, useful classification of vulnerabilities is used, making the report easier for its audience to digest.
  - Appendices give succinct visual examples, e.g. results of intelligence gathering, scripts/letters used in social engineering attempts, and results of goal-based pen tests.
- Standards-based:
  - We test against recognized standards, not just something we came up with: the Penetration Testing Execution Standard (PTES) for general methodology, the OWASP Testing Guide, and NIST SP 800-115. Web application, network, social engineering, and other engagement types each follow the appropriate standard.
- Application of human expertise and analysis:
  - We don't just emphasize tools; we use more manual searches and are generally more thorough than our competitors.
  - We leverage the intelligence we gather in preliminary phases, such as OSINT (e.g. process documentation), when social engineering is conducted.
  - We perform threat modeling, identifying likely attackers and their current techniques, which competitors don't always do.
  - Heavy emphasis on manual testing in vulnerability analysis (e.g. use of nmap, Sparta), especially compared to an individual contractor. A lot of time is spent guiding the tools against likely vulnerabilities and analyzing findings to surface other potential vulnerabilities, e.g. brute forcing admin consoles we discover where a Nessus scan would return nothing (see the first sketch at the end of this page). We don't just validate scanner results; we analyze and re-test based on our knowledge.
  - For web application testing, manual testing is even more important: even the most basic business logic exploits require manual tests (see the second sketch at the end of this page).
  - Sometimes it is worth asking how much time our competition has allotted to specific activities. Often it doesn't add up: you can't test and report on a complex web application in a day.
- Emphasize talent: are the people in the bios the customer has been provided the ones actually doing the testing, or are they business owners?
- Vulnerability details are not limited to what Nessus provides; web application vulnerabilities are written up by the Risk Advisory team.
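First sketch: a minimal Python 3 illustration of the tool-guided manual workflow described above. It assumes nmap is installed on the path, the target hostname is a placeholder, and the scan is run only against systems you are authorized to test; it aims nmap at likely web ports and flags open HTTP services (candidate admin consoles) for manual follow-up such as targeted brute forcing.

  #!/usr/bin/env python3
  """Drive nmap at likely web ports, then flag open HTTP services
  (candidate admin consoles) for manual follow-up such as targeted
  brute forcing, rather than relying on a generic scanner pass."""
  import subprocess

  TARGET = "app.example.com"       # placeholder: authorized target only
  WEB_PORTS = "80,443,8080,8443"   # ports where admin consoles commonly live

  # Service/version detection restricted to likely ports, greppable output.
  scan = subprocess.run(
      ["nmap", "-sV", "--open", "-p", WEB_PORTS, "-oG", "-", TARGET],
      capture_output=True, text=True, check=True,
  )

  # Parse greppable output: each port entry looks like
  # "80/open/tcp//http//nginx/". Open HTTP services become candidates
  # for manual inspection and credential testing that a scanner alone
  # would not report.
  for line in scan.stdout.splitlines():
      if "Ports:" not in line:
          continue
      for entry in line.split("Ports:")[1].split(","):
          fields = entry.strip().split("/")
          if len(fields) >= 5 and fields[1] == "open" and "http" in fields[4]:
              print(f"manual follow-up: {TARGET}:{fields[0]} ({fields[4]})")

The point of the sketch is the workflow, not the tooling: the tester chooses where to aim the scan, then manually investigates each service the scan surfaces.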
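Second sketch: a minimal business-logic test of the kind no automated scanner reliably finds, assuming Python 3 with the requests library. The URL, parameters, and session token are all hypothetical, and the test presumes explicit authorization; it replays a checkout request with a tampered client-supplied price to see whether the server recomputes the total.

  #!/usr/bin/env python3
  """Business-logic check: does the server recompute the price, or
  trust the client-supplied figure? Endpoint and fields are made up
  for illustration."""
  import requests

  BASE = "https://shop.example.com"   # placeholder application
  session = requests.Session()
  session.cookies.set("session", "AUTHENTICATED_SESSION_TOKEN")  # placeholder

  # Baseline: the legitimate request observed while browsing the app.
  baseline = session.post(f"{BASE}/cart/checkout",
                          data={"item_id": "1001", "qty": "1", "price": "49.99"})

  # Tampered replay: same item, altered price. A correct server
  # recomputes the total server-side; a vulnerable one trusts the client.
  tampered = session.post(f"{BASE}/cart/checkout",
                          data={"item_id": "1001", "qty": "1", "price": "0.01"})

  if tampered.status_code == 200 and "0.01" in tampered.text:
      print("possible logic flaw: server accepted client-supplied price")
  else:
      print(f"baseline HTTP {baseline.status_code}, tampered HTTP "
            f"{tampered.status_code}: server appears to validate pricing")

Flaws like this depend entirely on understanding what the application is supposed to do, which is why manual testing matters most in web application work.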