Key Points from ZDNET
- App security must be a top-level leadership responsibility.
- Company culture can determine the success of secure-by-design efforts.
- An operational framework turns security prevention into everyday practice.
Companies are now prioritizing software strategies that improve cybersecurity. The goal is to integrate security from the very beginning of development and create tools that detect bugs and vulnerabilities before they escalate. This piece explores the cultural shift from reacting to threats after the fact to preventing them upfront, and why leaders must move security from an afterthought to a foundational design principle.
Conventional application security typically identifies and fixes flaws, often after software is released. Secure-at-the-source is a proactive strategy aimed at stopping issues before they arise. However, especially in large organizations, this approach requires more than just tools. To enforce this strategy company-wide, prevention must become a funded, managed, and repeatable operational model.
Software Security as a Leadership Duty
This is where software oversight shifts from a departmental task to a board-level priority. When the code created by your development teams handles customer experience, operations, identity, payments, analytics, and AI processes, secure design becomes a critical risk management focus for senior leadership.
Developers build software—it’s what they do. We have tools, now enhanced by AI, that act as scanners and dashboards to spot and monitor issues. But neither our software tools nor our engineering teams can set company-wide priorities, distribute engineering resources across the enterprise, adjust incentives, settle departmental disputes, or embed risk prevention into the core operating principles of every department and division.
Also: Privacy in the AI era is achievable, says Proton’s CEO, but one concern keeps him awake
When a business releases a quarterly or annual report, investors, executives, and regulators closely examine debt; the higher the debt, the more worried stakeholders get.
While balance-sheet debt shows the company’s future payment obligations, technical debt and security debt are harder to quantify. Still, both represent the organization’s future maintenance and repair burdens.
This burden translates into lost opportunities, reputational damage, reduced customer satisfaction, and actual financial costs, sometimes far exceeding what appears on the balance sheet. Feature scope, deadlines, staffing, outsourcing, platform choices, and vendor decisions all influence how much security debt the company accumulates.
Unlike traditional debt, technical and security debt is often underreported to senior leaders. Certainly, vulnerability counts and ticket closure rates show some progress, but they only highlight and reward cleanup efforts. These metrics don’t reveal whether critical flaws, recurring defect types, and risky defaults are actually decreasing or increasing.
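To make that distinction concrete, here is a minimal sketch in Python, using invented numbers, of the difference between a cleanup metric and a prevention metric: total tickets closed can look healthy even while new critical flaws are still being introduced at the same rate or faster.

```python
# Hypothetical monthly counts of critical flaws: how many new ones
# were introduced into the codebase versus how many were fixed.
monthly = [
    {"month": "Jan", "introduced": 12, "fixed": 10},
    {"month": "Feb", "introduced": 11, "fixed": 14},
    {"month": "Mar", "introduced": 13, "fixed": 15},
]

# Cleanup metric: total tickets closed. Looks great on a dashboard.
total_fixed = sum(m["fixed"] for m in monthly)

# Prevention metric: is the rate of NEW critical flaws actually falling?
intro_rates = [m["introduced"] for m in monthly]
prevention_improving = intro_rates[-1] < intro_rates[0]

print(f"Flaws fixed: {total_fixed}")                        # 39 closures to reward
print(f"Introduction trend falling: {prevention_improving}")  # False: risk isn't shrinking
```

A board that sees only the first number would reward this team; a board that sees the second would ask why prevention isn't working.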
Also: How to check what ChatGPT knows about you – and take back your data privacy
The Cybersecurity and Infrastructure Security Agency's (CISA) Secure by Design program advises organizations to:
- Appoint an executive as chief security-by-design officer: Assign one leader accountability for customer security results.
- Support the secure-by-design executive: Allow leadership to shape product investment and risk reduction.
- Feature secure-by-design details in financial reports: Treat customer security as a business performance metric.
- Deliver regular product-security updates to the board: Ensure customer risk is visible at the governance level.
- Establish meaningful internal incentives: Recognize teams that improve customer security outcomes.
- Form a secure-by-design council: Align prevention goals across business and technical teams.
- Build and develop customer councils: Leverage customer feedback to strengthen product security.
CISA’s guidance focuses on secure-by-design for customer-facing products. However, you’ll need to extend this approach further, making it a universal priority not just for products delivered to users but for all internal operations as well.
Embedding Application Security into Company Culture
Corporate culture is a complex, intangible force. On one side, there are official policies and management directives. On the other, there’s the actual culture, shaped by spoken and unspoken signals throughout the organization.
We’ve discussed how to weave application security into management directives. But it’s equally—if not more—important to embed it into the company culture itself.
Also: The argument against an imminent software developer collapse
Security can’t simply be the team that blocks progress. Security awareness must become a shared habit, where product managers consider abuse scenarios, architects establish trust boundaries, developers adopt safer coding practices, and security teams offer actionable advice.
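As one small illustration of what "safer coding practices" can mean in day-to-day work, here is a sketch in Python using the standard sqlite3 module (the table and input values are invented for the example): building SQL from untrusted strings invites injection, while the parameterized form treats input as data by construction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Risky pattern: interpolating untrusted input directly into SQL.
#   f"SELECT role FROM users WHERE name = '{user_input}'"
# would match every row in the table.

# Safer default: a parameterized query, where the driver escapes the input.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection string matches no real user
```

The point isn't this one pattern; it's that when the safe option is also the easy, default option, security stops depending on individual vigilance.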
For those who doubt that company culture can shift quickly, I have a personal story that proves otherwise. Early in my career, I was a very young CEO. My company had grown rapidly in just four months. To keep things manageable, I reorganized our teams into departments: sales, engineering, manufacturing, and others.
One week, nearly everyone reported to me and pitched in wherever the company needed help; the next week, we had territorial conflicts. People in one department wouldn’t assist with another department’s priorities. The same individuals who had, just days before, collaborated willingly suddenly refused to help unless explicitly told to by their manager.
I was completely taken aback by this change. I just wanted a structure that allowed the company to move faster and not rely entirely on my personal direction for every decision. Instead, I got newly created obstacles to productivity and the emergence of small silos. This shift happened over a single weekend. On Friday, we all worked as one team. On Monday, it was the “D” word (department), and everyone’s behavior transformed.
Also: These 4 critical AI vulnerabilities are being exploited more quickly than defenders can keep up
While my story is a cautionary tale about rapid cultural change, I’m sharing it as a lesson in how organizations can transform overnight. I was too inexperienced at the time to realize that it’s crucial to be deliberate about how you shape cultural shifts, but you can learn from my experience. Quick ending: I banned the “D” word, and everyone returned to collaborating. People, right?
In any case, if you want to make prevention a fundamental part of your company culture, you’ll need to tackle two potential challenges from the start: developer friction and ownership.
Also: Nearly half of cybersecurity professionals want to quit - here's why
As you weave application security into your team's culture, pay close attention to how you communicate. When developers receive problem reports that feel like accusations, that are vaguely described, that arrive in overwhelming volume, or that ignore their existing workload, they'll push back, resist the shift, and turn uncooperative behavior into a fine art. However, when security information reaches developers as precise requirements, ready-to-use components, rapid feedback, and practical guidance, they'll be far more motivated to contribute to pre-release improvement and risk-reduction efforts.
Choices constantly need to be made. Developers need guidance in understanding business priorities. For instance, should they focus on resolving a design flaw or shipping a new feature? Demands from sales and customer support can create a constant push-and-pull dynamic.
Strong pre-release quality management demands well-defined accountability for design choices, dependency selections, secret management, build pipelines, deployment sign-offs, and vulnerability handling. A management framework must exist that shields developers and testers from mixed signals (and competing or clashing department heads).
Remember that unclear expectations and disputes over ownership can crush the drive for quality and security. As my experience with the “D” word demonstrates, culture should never foster a situation where passing the buck on anything is seen as acceptable.
Transforming Application Security into an Operating Model
A drastically simplified view of a business model is that it explains how the company earns revenue. Similarly, a drastically simplified view of an operating model is that it’s how the company carries out its activities to generate revenue.
An operating model captures how a company provides value to its customers and how it manages its own operations. Plenty of companies and organizations operate without a formalized operating model. Essentially, they take actions, and outcomes follow. But once those actions are deliberately shaped into intentional, repeatable, predictable, and adjustable processes, that's when an operating model becomes a genuine force multiplier.
Consulting firm McKinsey describes an operating model as “the backbone of any organization. It details how the company provides value to its customers, functions on a daily basis, and meets its strategic goals.”
The firm states: “A strong operating model acts as a guiding structure for decision-making, resource distribution, innovation, and numerous other essential activities and practices within the business — all aimed at boosting efficiency and driving sustainable growth.”
Also: 5 security strategies your business can’t afford to get wrong in the AI era — and why they matter
Given that today’s software development infrastructure supports nearly every other form of value creation across nearly every organization, a company-wide operating model for software reliability and security is entirely logical.
Once top-level leadership commits to the essential goal of shifting preventative security and code development earlier in the lifecycle, the organization needs clearly defined roles, decision checkpoints, workflows, incentives, measurements, and escalation routes that embed the early-stage application security process into everyday organizational operations.
When establishing an operating model for preventative security, address questions such as:
- Who is responsible for secure design choices?
- At what point does threat modeling occur?
- Which features need a security assessment?
- What secure templates or vetted components should teams adopt?
- Who has authority to grant exceptions?
- How are dependency risks managed?
- What metrics indicate whether prevention efforts are effective?
- How can the board or executive leadership track progress?
This represents the formalization of security-by-design. By embedding an operating model practiced across all levels and all areas, early-lifecycle security and code reliability can become part of every phase: architecture, development, release, and ongoing maintenance.
Strengthen Enterprise Resilience
With all this talk about early-stage security and reliability, I need to emphasize something truly, truly critical: not everything will go perfectly. Don’t expect that following the best guidance will eliminate late-stage security issues or vulnerability problems that demand urgent fixes.
Also: 5 methods to fortify your network against the accelerated pace of AI-driven attacks
Don’t expect that every piece of code will leave your network completely stable and error-free. Don’t assume attackers won’t manage to penetrate your network, or your customers’ networks, simply because you follow security-by-design principles.
Life is unpredictable. The variables in software development and in life are practically endless. Things, as the saying goes, happen. That’s precisely why we’re recommending these best practices.
By adopting these practices, you can cut down on the frequency of emergencies. You can lower your overall security and technical debt (which might, incidentally, reduce your actual financial debt). This approach can help you minimize preventable flaws, rapidly gain insights from incidents, establish safer defaults, apply stronger engineering judgment, and create internal systems that actively decrease the likelihood of making risky decisions.
Also: Half of security leaders say they’re unprepared for AI attacks — 4 steps to take immediately
Collectively, these practices work to strengthen enterprise resilience. Resilience carries several different meanings:
- Webster’s defines it as, “An ability to recover from or adjust easily to misfortune or change.”
- The United Nations Office for Disaster Risk Reduction defines resilience as, “The ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner.”
- McKinsey defines it as, “The ability to not only recover quickly from a crisis but to bounce back better — and even thrive.”
- The US Department of State describes resilience as “The ability to bounce back from difficult experiences.”
It’s that capacity to bounce back that sits at the heart of my recommendation. By restructuring your organization to incorporate resilience through code security and reliability from the start and across the entire lifecycle, you can boost your ability to recover when challenges arise, as well as cut down on how often you face self-inflicted problems.
What one change would most boost your organization’s ability to recover from software security issues? Share your thoughts in the comments below.
You can keep up with my daily project updates on social media. Be sure to subscribe to my weekly newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.