Adversary/Defender Model of Risk of Terrorist Attacks Using Belief


Presentation Transcript

Slide 1

Adversary/Defender Model of Risk of Terrorist Attacks Using Belief. SAND 2006-1470C. Risk Symposium 2006, Risk Analysis for Homeland Security and Defense: Theory and Application, March 20, 2006. John Darby, Sandia National Laboratories, jldarby@sandia.gov, 505-284-7862

Slide 2

Acknowledgments
- Linguistic model for the Adversary: Logic Evolved Decision (LED) methodology developed at Los Alamos National Laboratory by Terry Bott and Steve Eisenhawer; a Belief/Plausibility measure of uncertainty was added.
- Numerical model for the Defender: uses work and suggestions from Jon Helton, Arizona State University.
- Belief/Plausibility with fuzzy sets: 1986 paper by Ronald Yager.

Slide 3

Goal: apply the Belief/Plausibility measure of uncertainty to evaluating risk from acts of terrorism.
Why Belief/Plausibility? We have significant epistemic (state of knowledge) uncertainty: terrorist acts are not random events, but we have significant epistemic uncertainty in evaluating them.

Slide 4

Random versus Intentional
- Random event: earthquake. The magnitude of the earthquake is independent of the structures exposed to it; the event is a "dumb" failure.
- Intentional event: terrorists blow up a building. Are the consequences worth the effort to gather the resources needed to defeat any security systems in place and destroy the building? The event is a choice by a thinking, malicious adversary with significant resources.

Slide 5

Safety Risk versus Terrorist Risk
[Figure: likelihood versus consequence, showing the maximum risk (with uncertainty) for Safety Risk (random) and for Terrorist Risk (intentional).]

Slide 6

Belief/Plausibility for Epistemic Uncertainty
- Toss a fair coin: the uncertainty is aleatory (random). Probability of Heads is ½; probability of Tails is ½.
- But if we do not know the coin is fair (it may be two-headed or two-tailed), the uncertainty is epistemic (state of knowledge): there is insufficient information to assign probabilities to Heads and Tails. Belief/Plausibility for Heads is 0/1; Belief/Plausibility for Tails is 0/1.
- With more information (actually flipping the coin) we can reduce the epistemic uncertainty; for a fair coin we cannot reduce the aleatory uncertainty.
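A minimal sketch of the coin example above, assuming standard Dempster-Shafer evidence theory (illustrative Python, not the author's BeliefConvolution Java code):

```python
# Belief and plausibility computed from a basic probability assignment (mass function).
def belief(mass, event):
    """Bel(A): total mass of focal elements wholly contained in A."""
    return sum(m for focal, m in mass.items() if set(focal) <= set(event))

def plausibility(mass, event):
    """Pl(A): total mass of focal elements that intersect A."""
    return sum(m for focal, m in mass.items() if set(focal) & set(event))

# Unknown coin: all evidence assigned to the full set {Heads, Tails}.
unknown_coin = {frozenset({"Heads", "Tails"}): 1.0}
print(belief(unknown_coin, {"Heads"}), plausibility(unknown_coin, {"Heads"}))  # prints: 0 1.0

# Fair coin: specific evidence, so Belief = Plausibility = Probability.
fair_coin = {frozenset({"Heads"}): 0.5, frozenset({"Tails"}): 0.5}
print(belief(fair_coin, {"Heads"}), plausibility(fair_coin, {"Heads"}))  # prints: 0.5 0.5
```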

Slide 7

Belief and Plausibility
- Belief/Plausibility form a lower/upper bound on probability: Belief is what the probability is; Plausibility is what the probability could be.
- Similar to a confidence interval for a parameter of a probability distribution: there is a measure of confidence that the parameter is in the interval, but exactly where in the interval is not known.
- Belief and Plausibility both reduce to Probability if the evidence is specific.
- Probability is somewhere in the [Belief, Plausibility] interval.
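For reference, the standard evidence-theory definitions behind the bound stated above, with m the basic probability assignment over focal elements B (standard notation, not text from the slides):

```latex
\mathrm{Bel}(A) = \sum_{B \subseteq A} m(B), \qquad
\mathrm{Pl}(A)  = \sum_{B \cap A \neq \emptyset} m(B), \qquad
\mathrm{Bel}(A) \le P(A) \le \mathrm{Pl}(A).
```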

Slide 8

Fuzzy Sets for Vagueness
Consequences (Deaths) are "Major". "Major" is fuzzy: between about 500 and about 5000 deaths.
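A minimal sketch of one way to encode such a fuzzy set. The 500 to 5000 range is from the slide; the trapezoidal shape and the shoulder points (300 and 7000) are assumptions made only for illustration:

```python
# Trapezoidal membership function for the fuzzy set "Major" (deaths).
def major_membership(deaths: float) -> float:
    """Degree to which a death count belongs to the fuzzy set 'Major'."""
    if deaths <= 300 or deaths >= 7000:
        return 0.0
    if 500 <= deaths <= 5000:
        return 1.0
    if deaths < 500:
        return (deaths - 300) / (500 - 300)      # ramp up below "about 500"
    return (7000 - deaths) / (7000 - 5000)       # ramp down above "about 5000"

print(major_membership(1000))  # 1.0: clearly "Major"
print(major_membership(400))   # 0.5: only partially "Major"
```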

Slide 9

Adversary/Defender
- Adversary (them); Defender (us). The Adversary and Defender each have different goals and different states of knowledge.
- Risk = Threat x Vulnerability x Consequence.
- Defender goal: minimize Risk with available resources. Adversary goal: maximize Consequence with available resources (working assumption).
- The Adversary is the Threat, with epistemic uncertainty for Vulnerability and Consequence; the Defender knows Vulnerability and Consequence, with epistemic uncertainty for Threat.

Slide 10

Scenario and Dependence
- A scenario is defined as a specific target, adversary resources, and an attack plan. Resources are assets (numbers, weapons, etc.) and knowledge.
- Risk for a scenario: Risk = f x P x C, where f is the frequency of the scenario (number per year), P is the probability the scenario is successful, and C is the consequence.
- P is conditional on the scenario (adversary resources and the security in place, both physical security and intelligence gathering); C is conditional on the scenario (target).
- Risk is scenario dependent. The Adversary has choice (this is not a random event), so risk must consider a great many scenarios.

Slide 11

Defender Model for a Scenario
- Risk = f x P x C; f, P, and C are random variables with uncertainty.
- Degrees of evidence are assigned to f, P, and C based on the state of knowledge.
- Convolution is performed using the Belief/Plausibility measure of uncertainty.
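A minimal sketch of such a convolution, assuming each variable's evidence is given as interval-valued focal elements (illustrative Python with made-up numbers, not the author's BeliefConvolution Java code):

```python
from itertools import product

# Each variable: list of (interval, degree_of_evidence) pairs; evidence sums to 1.
f_ev = [((1e-3, 1e-2), 0.6), ((1e-2, 1e-1), 0.4)]   # scenario frequency per year (assumed)
P_ev = [((0.1, 0.5), 0.7), ((0.5, 1.0), 0.3)]       # probability of success (assumed)
C_ev = [((500.0, 5000.0), 1.0)]                     # consequence, deaths (assumed)

def convolve_product(*variables):
    """Focal elements of a product of independent, non-negative interval-valued variables."""
    result = []
    for combo in product(*variables):
        lo, hi, mass = 1.0, 1.0, 1.0
        for (a, b), m in combo:
            lo, hi, mass = lo * a, hi * b, mass * m   # interval product and mass product
        result.append(((lo, hi), mass))
    return result

for (lo, hi), m in convolve_product(f_ev, P_ev, C_ev):
    print(f"Risk in [{lo:.3g}, {hi:.3g}] deaths/year with evidence {m:.2f}")
```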

Slide 12

Example Result from Defender Model
Probability assumes the evidence for an interval is uniformly distributed over the interval. [Figure: example result not reproduced in this transcript.]

Slide 13

Defender Ranking of Scenarios
[Figure: scenarios ranked by decreasing expected value of deaths per year (f*P*C), from worst to best, on a scale of 0 to 10^6.]
- For Belief/Plausibility, the expected value is an interval [E_low, E_high]; it reduces to a point (the mean) for Probability.
- Rank by E_high, subrank by E_low.
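A minimal sketch of that ranking rule (the scenario names and interval values below are made up for illustration):

```python
# Rank scenarios by the expected-value interval [E_low, E_high]:
# primary key E_high (descending), secondary key E_low (descending).
scenarios = {
    "Scenario A": (2.0e2, 8.0e4),   # (E_low, E_high), expected deaths per year
    "Scenario B": (5.0e3, 8.0e4),
    "Scenario C": (1.0e1, 3.0e3),
}

ranked = sorted(scenarios.items(),
                key=lambda item: (item[1][1], item[1][0]),
                reverse=True)
for name, (e_low, e_high) in ranked:
    print(f"{name}: [{e_low:.3g}, {e_high:.3g}] expected deaths/year")
```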

Slide 14

Next Level of Detail for Defender Ranking
[Figure: scenarios plotted by expected value of likelihood (f) versus expected value of deaths (P*C), on scales of 0 to 10^6.]

Slide 15

Adversary Model
- Use a surrogate Adversary (Special Forces).
- The Adversary has choice: all variables of concern must be "OK" or we will pick another scenario. Recruit an insider? Not unless one is already in place. Large team? Worry about being detected by intelligence. Uncertainty? The door was green yesterday and is red today... what else changed?
- The variables for the Adversary decision are not all numeric: Consequence = Deaths x Economic Damage x Fear in Populace x Damage to National Security x Religious Significance x ... Deaths and Economic Damage are numeric; Fear in Populace, Damage to National Security, and Religious Significance are not numeric.

Slide 16

Adversary Model: Linguistic Model
- Develop fuzzy sets for each variable.
- Develop an approximate-reasoning rule base for linguistic convolution of the variables, to reflect the scenario-selection decision process (the LANL LED process).
- We are not the Adversary; we attempt to think like the Adversary. There is considerable epistemic uncertainty.
- Use the Belief/Plausibility measure of uncertainty, propagated up the rule base.

Slide 17

Adversary Model
- Assume the Adversary's goal is to maximize Expected Consequence.
- Expected Consequence ≡ P x C: the Adversary's estimate of the Consequence, C, weighted by the Adversary's estimate of the Probability of Success, P.

Slide 18

Example of Adversary Model: Rule Base and Variables
- Expected Consequence = Probability of (Adversary) Success x Consequence
- Probability of Success = Probability Resources Required Gathered Without Detection x Probability Information Required Can Be Obtained x Probability Physical Security System Can Be Defeated
- Consequence = Deaths x Damage to National Security
Fuzzy sets:
- Expected Consequence = {No, Maybe, Yes}
- Probability of Success = {Low, Medium, High}
- Consequence = {Small, Medium, Large}
- Probability Resources Required Gathered Without Detection = {Low, Medium, High}
- Probability Information Required Can Be Obtained = {Low, Medium, High}
- Probability Physical Security System Can Be Defeated = {Low, Medium, High}
- Deaths = {Minor, Moderate, Major, Catastrophic}
- Damage to National Security = {Insignificant, Significant, Very Significant}
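A minimal sketch of one way the top level of such a rule base could be represented. The variable and fuzzy-set names come from the slide; the individual rule outcomes below are assumptions, since the actual rule base (next slide) is not reproduced in this transcript:

```python
# Top-level linguistic rule base: (Probability of Success, Consequence) -> Expected Consequence.
rule_base = {
    ("Low", "Small"): "No",      ("Low", "Medium"): "No",       ("Low", "Large"): "Maybe",
    ("Medium", "Small"): "No",   ("Medium", "Medium"): "Maybe", ("Medium", "Large"): "Yes",
    ("High", "Small"): "Maybe",  ("High", "Medium"): "Yes",     ("High", "Large"): "Yes",
}

def expected_consequence(p_success: str, consequence: str) -> str:
    """Look up the linguistic Expected Consequence for one combination of inputs."""
    return rule_base[(p_success, consequence)]

print(expected_consequence("High", "Large"))   # "Yes"
print(expected_consequence("Low", "Medium"))   # "No"
```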

Slide 19

Example of Adversary Model: Part of the Example Rule Base
[Figure: rule-base table not reproduced in this transcript.]

Slide 20

Example of Adversary Model: Focal Elements (Evidence) for a Particular Scenario
- Deaths: 0.8 for {Major, Catastrophic}; 0.2 for {Moderate, Major}
- Damage to National Security: 0.1 for {Insignificant, Significant}; 0.9 for {Significant, Very Significant}
- Probability Resources Required Obtained Without Detection: 0.7 for {Medium}; 0.3 for {Medium, High}
- Probability Information Required Can Be Obtained: 0.15 for {Medium}; 0.85 for {Medium, High}
- Probability Physical Security System Can Be Defeated: 1.0 for {Medium, High}
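A minimal sketch of how belief and plausibility follow from the Deaths evidence listed above, reusing the belief() and plausibility() helpers sketched after the coin example (illustrative Python, not the author's LinguisticBelief Java code):

```python
# Evidence for Deaths, taken from the slide.
deaths_evidence = {
    frozenset({"Major", "Catastrophic"}): 0.8,
    frozenset({"Moderate", "Major"}):     0.2,
}

print(belief(deaths_evidence, {"Major", "Catastrophic"}))        # 0.8
print(plausibility(deaths_evidence, {"Major", "Catastrophic"}))  # 1.0
print(belief(deaths_evidence, {"Moderate"}))                     # 0 (no focal element lies inside {Moderate})
print(plausibility(deaths_evidence, {"Moderate"}))               # 0.2 (one focal element overlaps it)
```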

Slide 21

Example of Adversary Model (continued)

Slide 22

Example of Adversary Model (continued)

Slide 23

Example of Adversary Model (continued)

Slide 24

Adversary Ranking of Scenarios
- The Defender, thinking like the Adversary, ranks by Plausibility: rank scenarios on the plausibility of the worst fuzzy set for Expected Consequence ("Yes" in the prior example), sub-ranked by the plausibility of the next worst fuzzy sets ("Maybe" and "No" in the prior example).
- Note: an actual Adversary using the model would rank by Belief: "We will not attempt a scenario unless we believe it will succeed"... Osama

Slide 25

Software Tools
- Numerical evaluation of risk for the Defender: BeliefConvolution Java code (written by the author); RAMAS RiskCalc.
- Linguistic evaluation for the Adversary: LinguisticBelief Java code (written by the author); LANL LEDTools.

Slide 26

"Combinatorics" Issue for Convolution with Belief/Plausibility Convolution utilizing Belief/Plausibility must be done at the central component level Convolution of 20 factors each with 3 central components brings about a variable with 3 20 central components Need to Condense or Aggregate Degrees of Evidence for Large Scale Problems Must "consolidate" or "total" central components with rehashed convolution of factors to diminish number of central components to sensible size jldarby@sandia.gov 505-284-7862

Slide 27

Aggregation in BeliefConvolution Code
[Figure: evidence aggregated into bins (linear or log10).]
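A minimal sketch of this kind of binning (illustrative Python, not the actual BeliefConvolution aggregation; the bin edges and focal elements are assumptions). Snapping interval endpoints outward to bin edges only widens the intervals, so the aggregated evidence stays conservative:

```python
from collections import defaultdict

def log_bin_edges(lo_exp: int, hi_exp: int):
    """Bin edges at successive powers of ten, e.g. 1e-4 ... 1e2."""
    return [10.0 ** e for e in range(lo_exp, hi_exp + 1)]

def aggregate(focal_elements, edges):
    """Snap each interval outward to bin edges and merge coincident intervals."""
    binned = defaultdict(float)
    for (lo, hi), mass in focal_elements:
        new_lo = max(e for e in edges if e <= lo)   # round lower end down
        new_hi = min(e for e in edges if e >= hi)   # round upper end up
        binned[(new_lo, new_hi)] += mass            # merge identical binned intervals
    return sorted(binned.items())

edges = log_bin_edges(-4, 2)
focal = [((3e-3, 2e-2), 0.25), ((8e-3, 9e-2), 0.25), ((2e-3, 7e-2), 0.5)]
for (lo, hi), m in aggregate(focal, edges):
    print(f"[{lo:g}, {hi:g}] : evidence {m:.2f}")
```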

Slide 28

Aggregation in LinguisticBelief Code
- Aggregation is "automatic" per the rule base: focal elements for the inputs are on the fuzzy sets of the input variables; focal elements for the output are on the fuzzy sets of the output variable.
- Example: Happiness = Health x Wealth x Outlook on Life. Assume Health, Wealth, and Outlook on Life ...
