Oliver Rochford in Collaboration With Daylight Security: AI Changed Your SOC Without Asking

There is a version of the AI-in-security story that is mostly about speed. AI triages faster. AI correlates more data. AI closes tickets that used to sit in a queue for hours. That version is not wrong, exactly. But according to a new research report, it is missing the part that actually matters.

Security Operations at the Nexus of AI, authored by Oliver Rochford of Cyberfuturists and produced in collaboration with Daylight Security, opens with a premise that is harder to argue with every quarter: AI has already changed how security operations work. The more pressing question is whether the organizations running those operations understand how.

Rochford, a former Research Director at Gartner who helped define the SOAR market category and has accumulated more than 4,000 engagements with security leaders over his career, is well-positioned to make this argument without tipping into vendor hype. The report is careful to acknowledge that AI delivers genuine capability gains in context assembly, triage, investigation enrichment, and response recommendation. It does not ask readers to be skeptical of those gains. It asks them to think carefully about who captures those gains, under what conditions, and with what governance structures in place.

The Three Shifts That Changed Everything

The structural argument runs through three shifts. AI has changed the economics of managed detection and response, making it possible for AI-native MDR providers to deliver personalized, expert-grade analysis at price points previously unavailable to most organizations. AI has shifted the governance question from “who runs the SOC?” to “who owns decisions when machines are making them?” And AI has dissolved the traditional distinction between security tools and security services, making every AI SOC deployment an implicit commitment to a particular operating model, whether the buyer recognizes it as such or not.

That third shift deserves particular attention because it is the one most likely to surprise organizations mid-contract. When a company buys a traditional security tool, it is buying capability. How that capability is governed, tuned, and operated remains under internal control. When a company deploys an AI SOC platform, it is making governance commitments that are embedded in the platform’s configuration, confidence thresholds, and decision logic. Changing those commitments later means retraining models, rebuilding workflows, and losing institutional knowledge that has accumulated in the AI’s learned understanding of the environment. The switching costs are higher than with traditional tooling, and they are often invisible at purchase time.

Choosing an Operating Model

The report frames the choice facing organizations around two core operating models, with a third hybrid path for those whose maturity is uneven across security domains. The internal AI SOC model works best for organizations with mature teams, strong detection engineering capability, and genuine appetite for governing AI decisions as a distinct discipline. The AI-enabled MDR model works best for organizations without dedicated SOC capacity, those that prefer predictable costs, and those whose environments are reasonably well-served by a managed provider’s configuration and tuning choices. The hybrid model suits transition periods or environments where some security domains are well-understood internally while others benefit from outside expertise.

What neither model can do, the report argues, is substitute for a deliberate choice. Organizations that make this decision by default, through tool purchases and MDR renewals, inherit the governance framework of whatever vendor they selected. That framework may or may not align with their regulatory obligations, their incident response posture, or their board’s expectations about accountability.

The Accountability Problem

The accountability question runs throughout the report as its most persistent theme. In traditional security operations, responsibility for decisions was relatively traceable. A human analyst triaged an alert and made a call. That call might be wrong, but it was a human call, auditable and explainable. AI-driven decisions operate differently. They are probabilistic rather than deterministic. An alert is not simply fired or not fired; it receives a confidence score, and the threshold at which that score triggers action is itself a governance decision, typically set by the vendor rather than the customer. When a model is updated mid-contract, those thresholds may shift without the customer’s knowledge.
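To make that concrete, here is a minimal sketch in Python of how a confidence threshold converts a probabilistic score into an action. All names and numbers are illustrative, not any vendor’s actual logic. The point is that the thresholds are configuration rather than code: a vendor-side model update or threshold change silently redraws the line between what is actioned, escalated, and suppressed.

```python
# Hypothetical sketch: threshold-based triage of a confidence-scored alert.
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    confidence: float  # model's confidence the alert is a true threat, [0.0, 1.0]

# Governance decisions disguised as configuration:
# who sets these numbers, and who is told when they move?
AUTO_ACTION_THRESHOLD = 0.95  # at or above: act without human review
ANALYST_THRESHOLD = 0.50      # between: route to an analyst; below: suppress

def triage(alert: Alert) -> str:
    if alert.confidence >= AUTO_ACTION_THRESHOLD:
        return "auto-action"
    if alert.confidence >= ANALYST_THRESHOLD:
        return "route-to-analyst"
    return "suppress"  # no human ever sees this alert

print(triage(Alert(id="a-1", confidence=0.97)))  # auto-action
print(triage(Alert(id="a-2", confidence=0.62)))  # route-to-analyst
print(triage(Alert(id="a-3", confidence=0.31)))  # suppress
```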

The report is particularly clear-eyed about alert suppression, which it identifies as the highest-stakes, lowest-visibility decision point in AI-enabled security operations. When AI suppresses an alert, no human sees the original signal. There is no record of a human deciding the alert was low priority. If the suppression is wrong, the failure becomes apparent during incident response, after damage has occurred. The report’s standard for mature AI-enabled MDRs is that suppression decisions must be visible, confidence-scored, and auditable. Providers who cannot meet that standard are leaving a governance gap that their customers may not discover until it matters most.
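What that standard might look like in practice can be sketched as follows, again with illustrative names rather than any provider’s actual API: every suppression emits a confidence-scored, timestamped audit record, so responders can later reconstruct exactly what the AI dropped and under which model version.

```python
# Hypothetical sketch: suppression that is never silent.
import json
import time

def suppress(alert_id: str, confidence: float, rationale: str,
             model_version: str, log_path: str = "suppressions.jsonl") -> None:
    """Suppress an alert, but always append a reviewable audit record."""
    record = {
        "alert_id": alert_id,
        "decision": "suppressed",
        "confidence": confidence,        # score at decision time
        "rationale": rationale,          # why the model classed it benign
        "model_version": model_version,  # ties the call to a model build
        "timestamp": time.time(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

suppress("a-3", 0.31, "matches known benign scanner pattern", "v2.4.1")
```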

Daylight Security as a Design Reference

Daylight Security is referenced throughout the report as an example of AI-native MDR architecture built around these principles. The Daylight platform constructs a knowledge graph of organizational context, including assets, relationships, and behavioral norms, and uses it to derive verdicts for every event. High-confidence events resolve automatically. Events that fall below the confidence threshold surface to a human analyst with a full evidence package, including the observable artifacts behind the AI’s classification. The design is explicit that not all decisions should be automated and that some require customer policy ownership, a recognition that the human-in-the-loop is not a formality but a genuine design requirement.
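The report does not publish Daylight’s implementation, but the pattern it describes can be sketched. In the illustrative Python below, where every name is an assumption rather than Daylight’s API, a verdict derived from organizational context auto-resolves only above a confidence bar; anything below it escalates to an analyst together with the evidence behind the classification.

```python
# Hypothetical sketch: context-derived verdicts with an evidence package.
from dataclasses import dataclass, field

@dataclass
class Verdict:
    label: str                                    # e.g. "benign" or "suspicious"
    confidence: float                             # [0.0, 1.0]
    evidence: list = field(default_factory=list)  # observable artifacts

def classify(event: dict, context: dict) -> Verdict:
    # Stand-in for the knowledge-graph lookup; a real system would compare
    # the event against learned assets, relationships, and behavioral norms.
    known = event.get("host") in context.get("known_hosts", set())
    return Verdict(
        label="benign" if known else "suspicious",
        confidence=0.95 if known else 0.60,
        evidence=[f"host {event.get('host')} previously observed: {known}"],
    )

def handle(event: dict, context: dict, threshold: float = 0.9) -> dict:
    verdict = classify(event, context)
    if verdict.confidence >= threshold:
        return {"action": "auto-resolve", "verdict": verdict}
    # Below the bar: a human decides, with the evidence attached.
    return {"action": "escalate", "verdict": verdict,
            "evidence_package": verdict.evidence}

result = handle({"host": "db-01"}, {"known_hosts": {"db-01"}})
print(result["action"])  # auto-resolve
```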

What Practitioners Found in the Field

The practitioner interviews embedded in the report add texture to the framework. One CISO at a European mobility company described deploying an AI-driven MDR and discovering that analysts who had previously spent their days on phishing triage and header analysis found those tasks automated within weeks. The transition was reactive rather than planned, and in hindsight he would have invested earlier in creating new roles and titles to absorb the workforce change. Another practitioner, a sole security operator at a US-based conservation nonprofit, replaced a pass-through MDR that performed no correlation; the AI platform that took its place cost half as much and delivered far more contextual investigation depth.

Both practitioners arrived independently at the same constraint on autonomous response: AI should detect and investigate; humans should respond. One framed it as enterprise risk management, the other as an accountability principle. The framings differ, but the architectural conclusion was the same.

The Recalibration CISOs Need

The report’s closing recommendations are direct. Make the operating model decision explicitly. Evaluate AI-enabled offerings on decision governance as much as capability. Treat the AI supply chain as a first-class risk. Expect AI to multiply analyst effectiveness, not replace analysts. And build verification into the operating model, because AI makes oversight harder, not easier, and that challenge must be engineered for rather than assumed away.

For CISOs who have been navigating AI-in-security conversations by focusing on capability demonstrations and pricing negotiations, the report is a recalibration. The question is not whether the AI works. The question is whether your organization owns the decisions it makes.