Beyond Relationships: Measuring Soft Outcomes of Cooperative Threat Reduction Through Research-based Networks

Since its inception, Cooperative Threat Reduction has measured hard outcomes against biological threats.  For over 20 years, the Nunn-Lugar Scorecard has been used to account for the elimination of weapons and materials of mass destruction.  Historically, the hard target for biological threat reduction was to build and equip facilities so that partners could perform laboratory functions safely and securely; the present hard target is to secure facilities.[1] Most disarmament work occurred in the former Soviet Union, where progress was easy to measure, for example, by the number of facility upgrades.[2] Current threat reduction assistance is harder to measure because it hinges on relationships with partners around the globe, built through cooperative engagement, capacity building, and sharing of knowledge and best practices.  This approach presents challenges to measuring outputs and, subsequently, to justifying investments. One method to better account for biological threat reduction is research networking that enhances national and regional expertise and increases global collaboration.  Methods for measuring a network's contribution to threat reduction can be borrowed from governmental and non-governmental organizations in the field of International Development.

A research network can reinforce the competence and capacity of emerging economies.  The thematic focus of a network may vary; however, the intent in establishing one remains consistent: to support horizontal frameworks for disease surveillance and response.  Implementing a research-based network can (1) promote a common understanding of the risk and prevalence of under-reported and under-diagnosed emerging infectious diseases (e.g., rickettsial pathogens); (2) connect researchers to establish needs and gaps across the biological resiliency spectrum of prevent, detect, respond, and recover; and (3) convene funders from agencies that can support areas of life sciences research, promoting consolidation of needs and preventing duplication of effort, ultimately yielding more effective national public health policies.

Research-based networks increase the ability of individuals and institutions to engage with a wider global community of peers and stakeholders and to undertake high-quality life sciences research that is safe and secure in practice.  This approach establishes common values and connections that can transcend policy-driven collaboration or friction between nations in favor of scientific collaboration on shared problems. Researchers are best placed to identify and address the health priorities and challenges of their respective nations and to provide local and national policy-makers with a broad range of high-quality, relevant evidence to inform decision-making.

While the value of these relationships is apparent, qualifying the effectiveness of a research-based network is less clear, which is why the initial investment is critical: setting achievable objectives tied to short- and long-term indicators of success ensures the value and potential sustainability of the network.  A research-based network needs to be rooted from the start in (1) regular, straightforward communication; (2) common values; (3) long-term commitment of the participants; (4) incentivized growth; and (5) opportunities for participation in the process.  Much of this can be accomplished by selecting champions for the effort, establishing terms of reference or a charter for the organization, and maintaining a regular process for communication (e.g., meetings, newsletters, and calls).

Much of the framework for measuring the effectiveness of International Development can be applied to measuring the effectiveness of Cooperative Threat Reduction.  The two fields share common challenges: increasingly complex and changing environments, the need to be both credible and cost-effective, and complex issues and trends that will affect their application over the next several years.[3]  Results-based management, a broad performance management strategy aimed at achieving change, sets end-states, indicators, and targets with a monitoring system for regular data check-ins; this process appears to be the most widely used by development agencies and holds the most promise for assessing the effectiveness of biological threat reduction.
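As a concrete illustration, the sketch below (hypothetical names, targets, and data; not an established monitoring tool) shows how a results-based management record for a research network might link an end-state to indicators, targets, and periodic check-ins.

```python
# Minimal sketch of a results-based management record (hypothetical example).
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    target: float                                          # desired value for the reporting period
    check_ins: list[float] = field(default_factory=list)   # periodic monitoring measurements

    def on_track(self) -> bool:
        # On track if the most recent check-in meets or exceeds the target.
        return bool(self.check_ins) and self.check_ins[-1] >= self.target

@dataclass
class EndState:
    description: str
    indicators: list[Indicator]

    def progress_report(self) -> dict:
        return {i.name: i.on_track() for i in self.indicators}

# Example: a network-building end-state with short- and long-term indicators.
network = EndState(
    "Regional researchers collaborate on under-reported infectious diseases",
    [Indicator("active member institutions", target=10, check_ins=[4, 7, 11]),
     Indicator("joint publications per year", target=5, check_ins=[1, 3])],
)
print(network.progress_report())
# {'active member institutions': True, 'joint publications per year': False}
```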

Written by Katie Leahy katie.leahy@globalsyseng.com

[1] http://www.nti.org/media/pdfs/NunnLugarBrochure_2012.pdf?_=1354304005

[2] http://www.dtra.mil/Portals/61/Documents/20130101_fy13_ctr-scorecard_slides_jan13.pdf

[3] http://www.iosrjournals.org/iosr-jbm/papers/Vol17-issue2/Version-3/H017237076.pdf

Lessons Observed versus Lessons Learned in Federal Acquisition

In 15 years of Federal Acquisition experience, we continue to observe the same lessons in our processes and operational shortfalls.  Those lessons become significant limitations and risks to our operational responders.

First, it is important to define the difference between a lesson learned and a lesson observed.  A lesson learned in our operational system is a shortfall defined in an after-action report with a systematically developed solution (or solutions) to ensure that the shortfall is eliminated.  A lesson observed is a shortfall that continually shows up in a series of after-action reports while no organization takes ownership of eliminating it.  A good example of a lesson observed is the inability of state, local, and federal responders to operate over a common radio platform.  I feel confident that the next large-scale event in the United States will have a section in its after-action report regarding this shortfall.

Returning to Federal Acquisition: our integration of new technologies into the operational world (from warfighters abroad to first responders at home) has continued to surface many of the same lessons.  A key lesson observed is that acquisition of the latest technologies outpaces our ability to integrate them into our operational construct.  Without a full understanding of what a new technology's capability is and its applied significance to the work at hand, we cannot have a clear understanding of how we should react to its outputs.

A chilling illustration of this challenge is the use of a new technology called radar in World War II:

Unit s/n 012 was at Opana Point, Hawaii, on the morning of the seventh of December 1941, manned by two privates, George Elliot and Joseph Lockard. That morning the set was supposed to be shut down, but the soldiers decided to get additional training time since the truck scheduled to take them to breakfast was late. At 7:02 they detected the Japanese aircraft approaching Oahu at a distance of 130 miles (210 km) and Lockard telephoned the information center at Fort Shafter and reported “Large number of planes coming in from the north, three points east”. The operator taking his report passed on the information, repeating that the operator emphasized he had never seen anything like it, and it was “an awful big flight.”

The report was passed on to an inexperienced and incompletely trained officer, Kermit Tyler, who had arrived only a week earlier. He thought they had detected a flight of B-17s arriving that morning from the US. There were only six B-17s in the group, so this did not account for the large size of the plot. The officer had little grasp of the technology, the radar operators were unaware of the B-17 flight (nor its size), and the B-17s had no IFF (Identification friend or foe) system, nor any alternative procedure for identifying distant friendlies as the British had developed during the Battle of Britain. The raid on Pearl Harbor started 55 minutes later, and signaled the United States’ formal entry into World War II a day later.

The radar operators also failed to communicate the northerly bearing of the inbound flight. The US fleet instead was fruitlessly searching to the southwest of Hawaii, believing the attack to have been launched from that direction. In retrospect this may have been fortuitous, since they would have met the same fate as the ships in Pearl Harbor had they attempted to engage the vastly superior Japanese carrier fleet, with enormous casualties.

After the Japanese attack, the RAF agreed to send Watson-Watt to the United States to advise the military on air defense technology. In particular Watson-Watt directed attention to the general lack of understanding at all levels of command of the capabilities of radar, with it often being regarded as a freak gadget “producing snap observations on targets which may or may not be aircraft.” General Gordon P. Saville, director of Air Defense at the Army Air Force headquarters, referred to the Watson-Watt report as “a damning indictment of our whole warning service”.[1]

Because our Federal Acquisition process can take 10+ years to go from concept to fielded capability, the technical and operational communities have a tendency to simply procure rather than go through the requirements and acquisition rigor needed to ensure we provide true utility rather than merely “something” new.  As a result, our current Federal Acquisition approaches put us in serious danger of observing yet another lesson worthy of “a damning indictment of our whole warning service.”

A Path Forward

Advanced Technology Demonstrations (ATDs) are a good way to bridge the gap between operational needs and acquisition rigor because they pull the warfighter community and the requirements/joint combat developers into a project in a meaningful way.  The ATD process as currently structured can provide logical feedback mechanisms to the science and technology community while providing an initial capability that warfighters can actively use within their current missions.  Further, the operational units will have worked with the ATD management team to ensure the new tools are integrated into their current, or even adjusted, concepts of operation.  In this way, we can avoid repeating the fiasco of our radar systems at Pearl Harbor and ensure that we do not continue merely to observe the lessons of missions past.

[1] http://worldwar2headquarters.com/HTML/PearlHarbor/PearlHarborAirFields/radar.html

Written by Chris Russell chris.russell@globalsyseng.com

Countering the threat and operating in a complex WMD environment: Enhancing and Integrating Early Warning

Last year, as I was driving my daughter home from school during some rainy weather in San Diego, both of our iPhones simultaneously alerted us to a Tornado Warning (yes, in Southern California!). After my initial surprise and disbelief, I noticed that the sky in front of us was in fact unusually dark, and I decided to slow down and evaluate the situation further.

I subsequently received a text from a friend who is a former Air Force meteorologist, asking about a tornado alert he had seen about San Diego (he lives in VA). Later forensics revealed that there had been a real tornado threat less than three miles from where I was. My phone tracked where I was, where the threat was, and immediately informed me of the danger. I also received additional information to support the notification. It wasn’t a perfect report, but it at least offered me a trigger to study the situation and make a ‘low-regret’ decision to be careful before proceeding. I am sure many of you have also experienced similar warnings about weather and even traffic delays when your smart phone accesses your calendar, compares your location to where you need to be at a certain time, and recommends that you leave earlier than planned.

Recent technology demonstrations have shown many parallels with the current state of integrated smartphone capabilities. Seamlessly integrating information in a CBRN environment would increase commanders’ awareness of impending threats and allow for faster protective actions. In a biological threat environment, where consequences are even more difficult to identify because of the latency of effects, commanders should be provided with options that clearly explain the threat, its impact on the mission, and any recommended defensive actions (heightened MOPP status, avoidance of the threat/denial of movement, etc.). Even if a solid determination of a threat cannot be made, a quantifiable probability of a threat, or even the ruling out of a particular CBRN threat, would be valuable to a decision-maker.

The Maneuver Support Center of Excellence (MSCoE) at Fort Leonard Wood, MO, recently published an Early Warning White Paper that discusses how the Joint CBRN Defense community is fundamentally shifting its view of “how the integration of sensors (CBRN and non-CBRN) provides WMD Situational Awareness (Proliferation Prevention/Counterproliferation) and Situational Understanding of CBRN Hazard Activities (CBRN Defense/CBRN Consequence Management) to provide WMD Early Warning”.[1] It describes a holistic approach to the problem, which focuses on the CBRN Enterprise providing a functionally integrated WMD Early Warning framework that networks joint ISR solutions (moving away from platform-centric capabilities) while encouraging integration and innovation to support rapid and sound decision-making. The paper further examines the temporal components of WMD Early Warning. Figure 1 displays a nominal timeline with decisions, based on level of awareness and understanding, that can occur over time with increasing confidence and impact.

Future capability development should concentrate on adjusting these time components to achieve the best-case situation depicted in Figure 2: (1) maximizing pre-attack information gathering and (2) minimizing time to situational understanding.

Pre-attack information is mainly intelligence-related but will also include background environmental and medical surveillance data. This information must be collected continually to provide not only warning but also baseline data that will be critical for comparison and analysis after an attack, especially in a biological incident. It is imperative that all of this information be assimilated and understood quickly so that commanders can make faster, better decisions and pursue optimal courses of action (COAs).

The challenge in developing this integrated architecture is not only collecting all of this information in real time from a network of sensors and other data sources, but also giving commanders the ‘so what’ so they can make informed decisions rather than becoming paralyzed by an excess of data. Granted, the staff planning/advising function will never be replaced, but in a dynamic threat environment where tactical commanders need to make rapid protective decisions, there must be a reliable and nearly automated decision-support capability that facilitates, at a minimum, low-regret/low-risk decisions that they might not otherwise have time to consider in the middle of an operation.
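As a simple illustration of what such a low-regret decision aid might look like, the sketch below (hypothetical sensor names, agent classes, and confidence thresholds; not an existing fielded system) fuses sensor reports into a minimal protective-posture recommendation.

```python
# Minimal sketch: fusing sensor confidence into a low-regret protective recommendation.
# All identifiers, classes, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str
    agent_class: str      # e.g., "chemical", "biological", "radiological"
    confidence: float     # 0.0 - 1.0, as reported by the sensor or its algorithm

def recommend_action(detections: list[Detection]) -> str:
    """Return a low-regret recommendation based on the strongest corroborated threat."""
    if not detections:
        return "No CBRN indicators - maintain current posture."
    best = max(detections, key=lambda d: d.confidence)
    # Corroboration: at least two sensors reporting the same agent class.
    corroborated = sum(d.agent_class == best.agent_class for d in detections) >= 2
    if best.confidence >= 0.8 and corroborated:
        return f"Probable {best.agent_class} threat - assume heightened MOPP, avoid hazard area."
    if best.confidence >= 0.5:
        return f"Possible {best.agent_class} threat - increase protective posture, cue confirmatory sensors."
    return "Low-confidence indication - continue monitoring, no immediate action."

# Example: two biological detections corroborate each other.
alerts = [Detection("UGS-04", "biological", 0.83), Detection("UGS-07", "biological", 0.76)]
print(recommend_action(alerts))
```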

With recent technological advances in CBRN detection/sensor technology, the improved ability to rapidly and seamlessly network and share information across currently disparate information systems, and the development of smart algorithms to provide decision-makers with more quantifiable and reliable analysis, this is an optimal time to provide the warfighter with enhanced capabilities to counter the CBRN threat. If our smartphones can figure out how to data-mine information from disparate applications and give us credible and effective recommendations, we should be able to pursue similar capabilities for the warfighter operating in a CBRN environment. GSE currently supports the Defense Threat Reduction Agency (DTRA) in this mission space and looks forward to providing innovative technological solutions for the warfighter.

[1] Joint Concepts for Countering Weapons of Mass Destruction: Weapons of Mass Destruction (WMD) Early Warning White Paper, Maneuver Support Center of Excellence, February 2017


Written by Mike Midgley

Confidence to Enter the “Valley of Death:” Transition Confidence Levels for Technology Transition

Throughout history, technology has often determined the victor in war. At the Battle of Agincourt in 1415, the English soundly defeated a numerically superior French army with the newest battlefield technology: the English longbow. In World War II, bright minds at Bletchley Park developed the Bombe and Colossus, machines that deciphered German codes, turned the tide of the war, and launched the computer age. The War Production Board directed efforts toward mass production of the new miracle drug penicillin in time for the D-Day invasion, saving countless lives during and after the war. Timely technology is crucial. Too often, however, tools fail to transition from R&D to the field. Worse is technology so altered by the time it reaches production that it is useless “shelf-ware,” costing warfighter lives and wasting taxpayer money. Transition is one of the greatest hazards in the DoD acquisition community. How do we move technology from the laboratory to the battlefield without becoming another casualty in the “Valley of Death”?

Technology transition is an extremely difficult process. There is often little funding associated with the bridge between the R&D and fielding phases; that bridge figuratively crosses the so-called “Valley of Death” (figure below). The needs fulfilled entering the Valley (successful R&D outputs) are often different from the needs required to exit it (successful entrance criteria for a fieldable system).

Figure taken from the Defense Manufacturing Management Guide for Program Managers (https://acc.dau.mil/communitybrowser.aspx?id=520822).

While there are standard approaches defined by the Defense Acquisition University (DAU) to better enable transition (e.g., Transition IPTs, Technology Readiness Levels, and Technology Transition Agreements), the U.S. Special Operations Command (USSOCOM) is focusing on a risk-based approach called Transition Confidence Levels. Anthony Davis and Tom Ballenger from USSOCOM Science and Technology are defining these Transition Confidence Levels and how they can ensure technologies under development are ready for transition.

The Transition Confidence Levels approach attacks yet another transition hazard: leaving the R&D phase too early. As R&D budgets wane, the pressure to quickly field new technology has put greater emphasis on the transition metric. But by leaving R&D (that is, transitioning) too early, many projects become “shelf-ware,” a report or a product that will never get used.

To manage the portfolio of projects under development at SOCOM S&T, Davis and Ballenger created a Transition Confidence Level (TCL) scale to complement the already required Technology Readiness Level scale for technology maturity. Management becomes more dynamic as projects move up the TCL scale. Senior leader involvement also kicks in at the highest levels. TCL transparency ensures that everyone understands a project’s development status and budgets and manages accordingly.
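To make the idea concrete, the sketch below (hypothetical field names, TCL range, and thresholds; USSOCOM’s actual scale and criteria may differ) shows how a TCL could be tracked alongside TRL across a portfolio and used to flag where management attention belongs.

```python
# Minimal sketch of portfolio tracking with TRL and an assumed 1-5 TCL scale.
# Thresholds and project names are illustrative assumptions, not USSOCOM policy.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    trl: int   # Technology Readiness Level, 1-9
    tcl: int   # Transition Confidence Level, assumed here to run 1-5

def portfolio_review(projects: list[Project]) -> None:
    for p in projects:
        if p.trl >= 6 and p.tcl <= 2:
            status = "shelf-ware risk: mature technology, weak transition path"
        elif p.tcl >= 4:
            status = "engage senior leadership; align budget and transition agreements"
        else:
            status = "continue development; revisit transition plan at next review"
        print(f"{p.name}: TRL {p.trl}, TCL {p.tcl} -> {status}")

portfolio_review([
    Project("Sensor fusion prototype", trl=6, tcl=2),
    Project("Decision-support algorithm", trl=5, tcl=4),
])
```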

Davis and Ballenger say that the “adoption of TCL has provided a wealth of insight into the progress of the S&T portfolio toward transition with a minimum of additional data entry.” They believe the “tool has immediate potential application to numerous S&T organizations and portfolios and is easily adaptable to fit each organization’s particular needs.”

The Countering Weapons of Mass Destruction (CWMD) mission now falls under USSOCOM. The Defense Threat Reduction Agency (DTRA) is the Combat Support Agency focused on CWMD and now reports to USSOCOM (recently changed from USSTRATCOM). DTRA is one of Global Systems Engineering’s key clients, and this TCL concept will become very important over the coming months and years.

With Transition Confidence Levels, SOCOM S&T is learning, improving, and adjusting to keep a dream alive: better acquisition management. Read Bridging the “Valley of Death” here.

Written by Chris Russell and Elizabeth Halford

Tags: acquisition, technology transition, Valley of Death