Jul 17, 2017
Companies are increasingly finding it difficult to hire skilled security experts - particularly application security or software security professionals. The latest BSIMM data shows that organisations with mature software security practices have only 1.4 security specialists per 100 developers - and the ratio is far lower for everyone else. This frustrates initiatives that rely on these skills to deliver new services to market securely. At the same time, organisations are under growing pressure to deliver at a faster pace as DevOps, Continuous Delivery and agile practices gain popularity - all of which serve the same business goal: get a quality service to market faster than the competition.
Application security teams have had to play catch-up and adapt their approaches to fit this paradigm of smaller changes and more frequent deploys. Integrating static application security testing (SAST) tools directly into developers' IDEs, or plugging them into Continuous Integration servers, certainly helps identify security bugs early, when they are much cheaper to fix.
That’s all well and good for security bugs, but what about design flaws, which account for 50% of security defects? Automated testing tools are unable to identify design flaws because they lack any context about the software under test. Does the service make a cleartext HTTP call to a third-party API and transmit cardholder data across that link? Does the application enforce a level of authentication appropriate to the sensitivity of the data? These are questions the application security team can answer during a threat modeling exercise - but who has the time and resources to spend days with the key developers reviewing the design of the software?

One option is to push more responsibility for security to the developers themselves, or to establish security champions who have the skills and training to perform the necessary design-time security review of software. This is where self-service secure design can help. In its simplest form, the core security team collaborates with the architects to create security standards that can be applied to all software and services built by the organisation. The OWASP Application Security Verification Standard (ASVS) is a great place to start and can easily be adapted to suit the specific requirements of different organisations.
The downside of standards is that they can easily become yet another 100-page document that sits in a drawer and is never actually used to design a service securely. Here again, it’s the security team’s responsibility to adapt to how their development teams already manage feature requirements: if the teams use Jira, record the security requirements there; if they use user stories, write security stories.
Even with a palatable delivery mechanism for security requirements, there is still the challenge of creating requirements that are relevant and accurate. Carpet-bombing Jira with a blanket set of requirements applied to every piece of software, regardless of its architecture or security context, is an easy way to burn goodwill with the development team. If accurate requirements derived from manual analysis and threat modeling sit at one end of the spectrum, and a single security standard applied to all software sits at the other, then somewhere in the middle lies a useful medium: smaller sets of composable risk patterns that can be assembled using a rules engine or scripting engine.
The simplest way to get started with such an approach is to work backwards from a complete standard like the ASVS, subdividing it into groups that apply to a given architectural decision and/or security requirement. For example, one risk pattern and set of security requirements could cover single-factor authentication against any type of service, another the transmission of sensitive data from a client to a server, yet another the storage of sensitive data on a mobile device, and so on. These patterns of requirements can then be assembled like building blocks based on the functional requirements and architectural decisions of the development team. A library of composable risk patterns can greatly speed up the process of secure design - and if it were fronted by a decision tree or rules engine, it could make secure design a truly self-service activity that scales under today’s development pressures.
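To make the idea concrete, here is a minimal sketch of what such a rules engine could look like. The pattern names, question keys and requirement texts below are all illustrative assumptions, loosely in the spirit of ASVS-style controls - a real library would map each pattern to actual ASVS items and cover far more ground:

```python
# Illustrative library of composable risk patterns.
# Names and requirement texts are hypothetical, not taken from the ASVS.
RISK_PATTERNS = {
    "single_factor_auth": [
        "Verify passwords are checked against a minimum length policy.",
        "Verify account lockout or throttling after repeated failed logins.",
    ],
    "sensitive_data_in_transit": [
        "Verify TLS is used for all connections carrying sensitive data.",
        "Verify server certificates are validated on every connection.",
    ],
    "sensitive_data_on_mobile": [
        "Verify sensitive data is stored in the platform keystore.",
        "Verify sensitive data is excluded from device backups.",
    ],
}

def select_patterns(answers):
    """Map answers about the architecture to the risk patterns that apply."""
    selected = []
    if answers.get("has_login"):
        selected.append("single_factor_auth")
    if answers.get("transmits_sensitive_data"):
        selected.append("sensitive_data_in_transit")
    if answers.get("mobile_client") and answers.get("stores_sensitive_data"):
        selected.append("sensitive_data_on_mobile")
    return selected

def security_requirements(answers):
    """Assemble the concrete requirements list for one service."""
    reqs = []
    for pattern in select_patterns(answers):
        reqs.extend(RISK_PATTERNS[pattern])
    return reqs

# Example: a web service with a login that sends cardholder data upstream.
service = {"has_login": True, "transmits_sensitive_data": True,
           "mobile_client": False, "stores_sensitive_data": True}
for req in security_requirements(service):
    print("-", req)
```

The output of a tool like this - a short, tailored list of requirements - is exactly what could be pushed into Jira or written up as security stories, rather than the full standard.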