Join your hosts, Anton Chuvakin and Timothy Peacock, as they talk with industry experts about some of the most interesting areas of cloud security. If you like having threat models questioned and a few bad puns, please tune in!
What is the hardest thing about turning distinct incident reports into a fun-to-read and useful report like M-Trends?
How much are the lessons and recommendations skewed by the fact that they are all “post-IR” stories?
Are “IR-derived” security lessons the best way to improve security? Isn’t this a bit like learning how to build safely from fires vs learning safety engineering?
The report implies that F500 companies suffer from certain security issues despite their resources; does this automatically mean that smaller companies suffer from the same, but more?
"Dwell time" metrics sound obvious, but is there magic behind how this is done? Sometimes “dwell tie going down” is not automatically the defender’s win, right?
What is the expected minimum dwell time? If “it depends”, then what does it depend on?
Impactful outliers vs. general trends (“by the numbers”): which teaches us more about security?
Why do we seem to repeat the mistakes so much in security?
Do we think it is useful to give the same advice repeatedly if the data implies that it is correct advice but people clearly do not do it?
So, another Next is done. Beyond the usual Vegas chaos, what was the overarching security theme or vibe you [Tim] felt dominated the conference this year?
Thinking back to Next '24, what felt genuinely different this year versus just the next iteration of last year's trends?
Last year, we pondered the 'Cloud Island' vs. 'Cloud Peninsula'. Based on Next 2025, is cloud security becoming more integrated with general cybersecurity, or is it still its own distinct domain?
What wider trends did you observe, perhaps from the expo floor buzz or partner announcements, that security folks should be aware of?
What was the biggest surprise for you at Next 2025? Something you absolutely didn't see coming?
Putting on your prediction hats (however reluctantly): based on Next 2025, what do you foresee as the major cloud security focus or challenge for the industry in the next 12 months?
If a busy podcast listener could only take one key message or action item away from everything announced and discussed at Next 2025, what should it be?
Vulnerability response at cloud-scale sounds very hard! How do you triage vulnerability reports and make sure we’re addressing the right ones in the underlying cloud infrastructure?
How do you determine how much to pay for each vulnerability? What is the largest reward we paid? What was it for?
What products get the most submissions? Is this driven by the actual product security or by trends and fashions like AI?
What are the most likely rejection reasons?
What makes for a very good, or even exceptional, vulnerability report? We hear we pay more for “exceptional” reports; what does that mean?
In college Tim had a roommate who would take us out drinking on his Google web app vulnerability rewards. Do we have something similar for people reporting vulnerabilities in our cloud infrastructure? Are people making real money off this?
How do we actually uniquely identify vulnerabilities in the cloud? CVE does not work well, right?
What are the expected risk reduction benefits from Cloud VRP?
We've seen a shift in how boards engage with cybersecurity. From your perspective, what's the most significant misconception boards still hold about cyber risk, particularly in the Asia Pacific region, and how has that impacted their decision-making?
Cybersecurity is rife with jargon. If you could eliminate or redefine one overused term, which would it be and why? How does this overloaded language specifically hinder effective communication and action in the region?
The Mandiant Attack Lifecycle is a well-known model. How has your experience in the East Asia region challenged or refined this model? Are there unique attack patterns or actor behaviors that necessitate adjustments?
Two years post-acquisition, what's been the most surprising or unexpected benefit of the Google-Mandiant combination?
M-Trends data provides valuable insights, particularly regarding dwell time. Considering the Asia Pacific region, what are the most significant factors reducing dwell time, and how do these trends differ from global averages?
Given your expertise in Asia Pacific, can you share an observation about a threat actor's behavior that is often overlooked in broader cybersecurity discussions?
Looking ahead, what's the single biggest cybersecurity challenge you foresee for organizations in the Asia Pacific region over the next five years, and what proactive steps should they be taking now to prepare?
How have you seen IAM evolve over the years, especially with the shift to the cloud, and now AI? What are some of the biggest challenges and opportunities these two shifts present?
ITDR (Identity Threat Detection and Response) and ISPM (Identity Security Posture Management) are emerging areas in IAM. How do you see these fitting into the overall IAM landscape? Are they truly distinct categories or just extensions of existing IAM practices?
Shouldn’t ITDR just be part of your cloud D&R or maybe even your SecOps tool of choice? It seems goofy to try to stand ITDR on its own when the impact of an identity compromise is entirely a function of what that identity can access or do, no?
Regarding workload vs. human identity, could you elaborate on the unique security considerations for each? How does the rise of machine identities and APIs impact IAM approaches?
We had a whole episode around machine identity that involved turtles; what have you seen in the machine identity space, and how have you seen users mess it up?
The cybersecurity world is full of acronyms. Any tips on how to create a memorable and impactful acronym?
Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client?
Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now?
What trips up more clients: classic security mistakes in AI systems or AI-specific mistakes?
Are there truly new mistakes in AI systems or are they old mistakes in new clothing?
I know it is not your job to fix it, but much of this is unfixable, right?
Can you walk us through Google's typical threat modeling process? What are the key steps involved?
Threat modeling can be applied to various areas. Where does Google utilize it the most? How do we apply this to huge and complex systems?
How does Google keep its threat models updated? What triggers a reassessment?
How does Google operationalize threat modeling information to prioritize security work and resource allocation? How does it influence your security posture?
What are the biggest challenges Google faces in scaling and improving its threat modeling practices? Any stories where we got this wrong?
How can LLMs like Gemini improve Google's threat modeling activities? Can you share examples of basic and more sophisticated techniques?
What advice would you give to organizations just starting with threat modeling?
You are responsible for building systems that need to comply with laws that are often mutually contradictory. It seems technically impossible to do; how do you do it?
Google is not alone in being a global company with local customers and local requirements. How are we building systems that provide local compliance with global consistency in their use for customers who are similar in scale to us?
Originally, Google had global systems synchronized around the entire planet (planet-scale supercompute) with atomic clocks. How did we get to a regionalized approach from there?
Engineering takes a long time. How do we bring enough agility to product definition and engineering design to give our users robust foundations in our systems that also let us keep up with changing and diverging regulatory goals?
What are some of the biggest challenges you face working in the trusted cloud space?
Is there something you would like to share about being a woman leader in technology? How did you overcome the related challenges?
Where do you see a gap between the “promise” of LLMs for security and how they are actually used in the field to solve customer pains?
I know you use LLMs for anomaly detection. Can you explain how that “trick” works? What is it good for? How effective do you think it will be?
Can you compare this to other anomaly detection methods? Also, won’t this be costly? How do you manage to keep inference costs under control at scale?
SOC teams often grapple with the tradeoff between “seeing everything” so that they never miss any attack, and handling too much noise. What are you seeing emerge in cloud D&R to address this challenge?
We hear from folks who developed an automated approach to handle a review queue previously handled by people. Inevitably, even if precision and recall can be shown to be superior, executive or customer backlash hits hard after a false negative (or a flood of false positives). Have you seen this phenomenon, and if so, what have you learned about handling it?
What are other barriers that need to be overcome so that LLMs can push the envelope further for improving security?
So from your perspective, LLMs are going to tip the scales in whose favor: cybercriminals or defenders?