It's more than culture: Addressing the root cause of common security frustrations
This year, GitLab's annual survey of DevSecOps professionals uncovered several issues related to organisational culture that could be preventing deeper alignment between engineering and security teams. A majority (58%) of security respondents said they have difficulty getting development to prioritise remediation of vulnerabilities, and 52% reported that red tape often slows their efforts to quickly fix vulnerabilities. In addition, security respondents pointed to several specific frustrations related to their jobs, including difficulty understanding security findings, excessive false positives, and testing happening late in the software development process.
DevSecOps promises better integration between engineering and security, but it's clear that frustrations and misalignment persist. That's because these challenges are symptoms of a larger problem with how organisations view security, as well as how teams work together and how they allocate time to security.
Escaping the vulnerability hamster wheel
Vulnerability scanning surfaces every potential vulnerability, but just because a software package has an entry in the Common Vulnerabilities and Exposures (CVE) list doesn't mean the flaw is reachable or exploitable. Security teams and developers alike are still triaging and filtering through vulnerability findings, which have grown exponentially in the years since authenticated vulnerability scanning became the norm.
The move to authenticated scanning has improved the effectiveness of security programs in many ways, but it has also put developers on an endless hamster wheel of fixing things that don't matter. When teams spend effort on patches that don't address an exploitable vulnerability, they are diverted from more critical work, such as fixing flaws that are genuinely reachable and exploitable. That is the source of much of the division between security and engineering teams today.
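To make the prioritisation idea concrete, here is a minimal sketch of a triage filter. The `Finding` records, the `reachable` flag, and the use of an EPSS-style exploitation probability are all illustrative assumptions, not a prescribed tool or data model; in practice this enrichment would come from reachability analysis and threat intelligence feeds.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    package: str
    severity: str    # e.g. "critical", "high", "medium", "low"
    reachable: bool  # does the application actually call the vulnerable code path? (assumed enrichment)
    epss: float      # estimated probability of exploitation, 0.0-1.0 (assumed enrichment)

def triage(findings, epss_threshold=0.1):
    """Keep only findings that are both reachable and likely to be exploited,
    then sort the actionable ones so the riskiest land on top."""
    actionable = [f for f in findings
                  if f.reachable and f.epss >= epss_threshold]
    return sorted(actionable, key=lambda f: f.epss, reverse=True)

# Hypothetical scan results: a scary-looking but unreachable critical,
# a reachable high with real exploitation likelihood, and low-risk noise.
findings = [
    Finding("CVE-2024-0001", "libfoo", "critical", reachable=False, epss=0.92),
    Finding("CVE-2024-0002", "libbar", "high", reachable=True, epss=0.35),
    Finding("CVE-2024-0003", "libbaz", "medium", reachable=True, epss=0.02),
]

for f in triage(findings):
    print(f.cve_id, f.package, f.epss)  # only CVE-2024-0002 survives triage
```

The point of the sketch is the shape of the filter, not the threshold: of three raw findings, only the one that is both reachable and plausibly exploitable reaches a developer's queue.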
So, how can organisations address the root cause of these issues and promote better integration between engineering and security? Here are three ways to prevent common security frustrations at the source.
1. Silence the noise, focus on actionable high-fidelity signals
Excessive false positives were the second highest-rated frustration identified by security respondents in our survey. False positives are clearly a challenge, but they are often a vulnerability management problem in disguise.
If an organisation sees many false positives, that could be a sign that it hasn't done all it can to ensure its security findings are high fidelity. Organisations should narrow the focus of their security efforts to what matters. That means traditional static application security testing (SAST) solutions are likely insufficient. SAST is a powerful tool but loses much of its value if the results are unmanageable or lack appropriate context. For SAST to be most effective, it must be used seamlessly with other security and development tools and be accessible to developers.
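One simple way to raise signal fidelity is to filter scanner output before it reaches developers. The sketch below reads a SARIF report (a common interchange format for SAST results) and keeps only findings at chosen severity levels; the single-level cutoff is a deliberately naive stand-in for the richer, context-aware filtering described above, and the sample report is invented for illustration.

```python
import json

def high_fidelity_results(sarif_text, keep_levels=("error",)):
    """Parse a SARIF report and keep only findings at the given severity
    levels, dropping the lower-confidence noise."""
    report = json.loads(sarif_text)
    kept = []
    for run in report.get("runs", []):
        for result in run.get("results", []):
            if result.get("level") in keep_levels:
                kept.append((result.get("ruleId"), result["message"]["text"]))
    return kept

# Hypothetical SARIF output: one genuine defect and one stylistic note.
sample = json.dumps({
    "version": "2.1.0",
    "runs": [{"results": [
        {"ruleId": "sql-injection", "level": "error",
         "message": {"text": "Unsanitised input reaches SQL query"}},
        {"ruleId": "unused-import", "level": "note",
         "message": {"text": "Import is never used"}},
    ]}],
})

print(high_fidelity_results(sample))
```

In a real programme this filter would sit inside the pipeline, so developers only ever see the curated list rather than the raw scanner dump.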
Another issue is that most scanning tools have a very narrow context window for understanding vulnerability findings. This is one area where AI can help, with features that explain security vulnerabilities and supply the missing context.
2. Minimise the tech stack, minimise the attack surface
Staying focused on what matters doesn't just apply to security testing — it should start with how an organisation builds software in the first place.
Although AI promises to help simplify software development processes, our survey suggests that many organisations still have a long road ahead. In fact, respondents who are using AI were significantly more likely than those not using AI to want to consolidate their toolchain, suggesting that the proliferation of different point solutions running different AI models could be adding complexity, not taking it away.
The ever-increasing complexity of organisations' tech stacks is a major contributor to security frustrations. Some complexity is unavoidable when building large, multi-faceted software systems. However, organisations should take steps to avoid complexity resulting from suboptimal design decisions, such as difficult-to-maintain code and redundant dependencies. This unnecessary complexity creates a larger attack surface and generates more security scan findings for teams to sort through, prioritise, and address.
Organisations should approach development through the lens of software minimisation — that is, being intentional about the tools they adopt and what they decide to build into their codebases. This will help minimise dependencies, improve the security of the software supply chain, reduce scanner noise, and ease the burden on developers to fix non-critical issues.
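A software bill of materials (SBOM) gives teams a concrete starting point for this kind of minimisation. The sketch below scans a CycloneDX-style SBOM for packages that appear at more than one version, a common sign of redundant transitive dependencies; the sample SBOM and the package names in it are invented for illustration.

```python
import json
from collections import defaultdict

def redundant_components(sbom_text):
    """Scan a CycloneDX-style SBOM and report packages that appear at more
    than one version, i.e. likely redundant dependencies worth pruning."""
    sbom = json.loads(sbom_text)
    versions = defaultdict(set)
    for comp in sbom.get("components", []):
        versions[comp["name"]].add(comp["version"])
    return {name: sorted(v) for name, v in versions.items() if len(v) > 1}

# Hypothetical SBOM: lodash is pulled in twice at different versions.
sample = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "lodash", "version": "4.17.20"},
        {"name": "lodash", "version": "4.17.21"},
        {"name": "express", "version": "4.19.2"},
    ],
})

print(redundant_components(sample))  # {'lodash': ['4.17.20', '4.17.21']}
```

Each duplicate flagged here is a package to consolidate or remove: fewer dependencies means a smaller attack surface and fewer scan findings to triage.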
3. Normalise paved roads
Security testing happening too late in the software development lifecycle was another top frustration identified by our survey respondents. Teams are understandably frustrated when a release is delayed because a vulnerability is detected late, but in many cases that vulnerability could not have been detected any earlier. What is possible, however, is operationalising easily deployable, reusable security components, which limits the variables and potential vulnerabilities in play.
Teams can avoid late-stage surprises by embracing tested and assured design patterns based on repeatable use cases: the "paved roads" approach. A paved road is a recommended path, including a curated set of tools, processes, and components, that teams can follow to build secure applications more efficiently — for example, using GitOps to version and deploy well-architected and tested Infrastructure as Code that deploys at scale for all workloads.
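A paved road only works if deviations from it are visible. Below is a minimal sketch of a policy check that compares a project's declared tooling against an approved allowlist; the registry paths, module names, and manifest shape are all hypothetical, standing in for whatever catalogue of vetted components an organisation maintains.

```python
# Hypothetical paved-road policy: the approved base images and IaC modules.
PAVED_ROAD = {
    "base_images": {"registry.example.com/base/python:3.12"},
    "iac_modules": {"modules/network/v2", "modules/storage/v1"},
}

def check_paved_road(manifest):
    """Return a list of deviations from the paved road, so a team sees
    exactly where it has stepped off the recommended path."""
    deviations = []
    if manifest["base_image"] not in PAVED_ROAD["base_images"]:
        deviations.append(f"unapproved base image: {manifest['base_image']}")
    for module in manifest["iac_modules"]:
        if module not in PAVED_ROAD["iac_modules"]:
            deviations.append(f"unapproved IaC module: {module}")
    return deviations

# Hypothetical project manifest: approved base image, one custom module.
manifest = {
    "base_image": "registry.example.com/base/python:3.12",
    "iac_modules": ["modules/network/v2", "modules/custom-vpc"],
}

print(check_paved_road(manifest))  # ['unapproved IaC module: modules/custom-vpc']
```

Run early in the pipeline, a check like this turns the paved road from a recommendation into an observable default, without hard-blocking teams that have a genuine reason to deviate.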
Adopting paved roads trades away some flexibility but ultimately reduces operational burden and rework for engineering teams while improving security. This needs to be a collaborative effort between security and development: security can help to design paved roads, but engineering has to be involved to operate and maintain them as part of the codebase.
Security is a domain, not a team
We're already seeing security as a practice shift into engineering teams, and we can expect the boundaries between the two to continue to blur. However, with the rapid adoption of AI and the corresponding acceleration of software development — 66% of our survey respondents said they are releasing software twice as fast or faster than last year — it will be critical for organisations to establish systems and frameworks that optimise for the greatest security benefit. That's why the idea of a cultural disconnect between development and security isn't the whole story. Fostering a culture of collaboration is essential, but security and engineering teams must also work together to rethink foundational aspects of software development, such as optimising existing codebases and building scalable engineering-centric solutions that can be seamlessly adopted by technical teams across the organisation.