Disaster Preparedness, Response, and Recovery: A Family Affair

In my career, I have had the pleasure of supporting numerous U.S. Government disaster preparedness, response, and recovery efforts. For the most part, I have observed that these three types of efforts are categorized as separate yet chronological missions – without much overlap in terms of how they are managed or resourced.

You prepare for a disaster. When it occurs, you respond to it. And then, after everything settles down, you begin to perform longer-term recovery efforts. In other words, all three mission phases are conducted as standalone efforts. However, I have found that this is not necessarily the most effective approach to successfully managing a disaster.

Each disaster – whether hypothetical or real – is as unique and diverse as the individual personalities of the federal, state, and local representatives responsible for working these three aspects of an incident. Although the incidents and individuals themselves are always unique, there is a theme that transcends these differences: the interconnectedness of preparedness, response, and long-term recovery efforts, and a reasonable necessity for blurred boundaries between these mission areas.

I have observed that effectively managing a disaster depends on establishing a community with a culture of cooperation, regardless of where one mission ends and another begins. At an individual level, this often means that, during these three phases of an incident, a proactively collaborative philosophy is put ahead of individual personalities and even mission “swim lanes.”

This collaborative approach requires that those focused on disaster preparedness communicate with and support the response mission, those working on response efforts collaborate and talk to those who will work toward long-term recovery, and both response and recovery representatives help inform future preparedness efforts. The most successful disaster management efforts are not seen exclusively in a preparedness phase, in an effective immediate response mission, or in a well-implemented recovery process – the most effective approach comprises a blended activation of all three phases.

This lesson became clear to me during the 2011 Japanese tsunami and the associated Fukushima nuclear plant disaster. When I was working out of the U.S. Embassy in Tokyo as part of the U.S. response mission following the incident, the Deputy Chief of Mission (DCM) clearly understood that the circumstances demanded a holistic and comprehensive level of support for our foreign partners. Each morning, he held an all-hands-on-deck coordination meeting with a diverse audience of embassy officials and U.S. interagency representatives in the embassy’s first-floor auditorium. He started each meeting by asking those with a preparedness mission, those with a response mission, and those with a long-term recovery mission what they were doing to support each other.

A long time has passed since Fukushima, but I still think about the Embassy DCM’s approach toward disaster management. These days, I spend a lot of my time dealing with the daily disasters that occur on a much smaller scale in my own house, and it has taken everything I have learned about disaster response to navigate the risks associated with a life shared with both a one-year-old and a three-year-old.

Now that my one-year-old daughter has developed the ability to swiftly stagger around the house on her own, she has adopted parading over to the dog’s water dish and sadistically flipping it over as her new favorite pastime. After this occurred a few times in a row, my wife started preparing for future incidents by laying large towels on the floor. (Why we didn’t just move the water dish somewhere else is a conversation for another time.) My three-year-old son, who is currently going through a well-intentioned “I must help” phase, typically responds by running over to the incident and frantically throwing paper towels, napkins, clean diapers, or his socks on the spill. Then I come in, finish where my son left off, and restore the floor to its original, non-dog-water-flooded condition.

Although not a true disaster, the dog dish being flipped over onto the floor is a type of incident that must be taken seriously. Thus, I realized that the threat of future dog-dish upheaval was one that our household would benefit from learning to cooperatively respond to. I called a family meeting to outline some of the lessons I’ve learned throughout my career about how the three of us – or four if you count the dog – can work more collaboratively in navigating how we prepare for, respond to, and recover from the inevitable situation. An all-hands-on-deck approach to our family response would make everyone’s job easier and begin to teach my son the principles of teamwork and effective response at a young age.

I started telling my wife and son about how, back in 2011 in Tokyo, the DCM would specifically ask USAID representatives what they needed in the response phase to be successful in their recovery mission. I described to my family how he would then turn to those supporting the immediate response and ask what preparedness activities and points of contact were needed to help support them in their near-term mission, and how he would ask those responsible for preparedness what lessons could be learned from the crisis to better help the U.S. and its partners prepare for future incidents.

I explained to my wife and oldest child that, in any form of crisis management, the missions of preparedness, response, and recovery are really part of one larger mission and must strategically overlap to maximize their individual mission efforts.

After I used the Fukushima story to illustrate how everyone in the family can work with each other in the face of this new water dish threat, my wife handed me my daughter and a clean diaper and asked me to respectfully respond to her newest disaster, restore her butt to its previous clean form, and prepare for follow-on incidents, as she had eaten mashed-up prunes and some playdough for breakfast. I think that was my wife’s way of telling me she appreciates my drawing parallels between a nuclear disaster and my daughter’s habit of making messes. Meanwhile, my son had stopped listening about halfway through my pontification and began to smash goldfish crackers into the living room carpet, probably as a way to give the family more practice working as a team in managing disasters.

At one point back in 2011, I got a chance to talk one-on-one with the DCM. He took the opportunity to complain about how his March Madness bracket had undergone its own tsunami thanks to an unnamed ACC team out of Tallahassee. I used the opportunity to ask him about his approach to disaster management. Although he did not have a lot of disaster management experience, he felt that – with as much capability available to support our allies and as terrible as the disaster was – he simply didn’t want to see the three individual efforts of response, recovery, and future preparedness start and stop independently without allowing for a meaningful transition to take place.

He said that they needed each other to be the best at their respective roles. It was a simple and intuitive answer. I doubt he realized that the experience helped shape the way I think about disaster management. However, to this day, I believe that holding these relatively intuitive principles ahead of mission introversion and the default human disposition of prioritizing one’s own objectives ahead of another’s yields an opportunity for larger success.

Written by Randy Thur randolph.thur@globalsyseng.com

Fresh Water: A True Systems Problem of Our Time


In March, the residents of Cape Town, South Africa were delivered a reprieve from doomsday when city officials announced that “Day Zero,” the day the city would be forced to cut off the taps to residents, had been pushed back and was no longer predicted to occur in 2018.[i] Extreme water conservation measures, including rationing of 50 liters per resident per day—roughly a sixth of the average American’s daily usage—have slowed sinking reservoir levels, but Cape Town’s future still depends on getting decent rainfall this winter. Ironically, counting on the rain was one of the primary causes of the current crisis, as city officials in recent years banked on average historical rainfall patterns holding despite warnings of depleted reservoirs and an increasingly unpredictable climate.

Having grown up in southern California, I’m no stranger to drought and alarming predictions of water scarcity, albeit in less dire circumstances. The state’s water supply has been under constant strain for my entire life. Yet I never saw any impact on southern Californians’ lifestyle until I went home to stay with my parents in Orange County in the summer of 2015, a few months into state-mandated water restrictions. The conservation measures were noticeable, but nowhere near as painful as Cape Town’s. Before the situation got desperate, California was bailed out by an unusually wet winter in 2016-2017 that refilled reservoirs across the state and led to a record snowpack in the Sierra Nevada, the main water source for much of the state.

In hindsight, California got lucky. Cape Town’s ongoing battle to forestall “Day Zero” illustrates the outcome California could have gotten—and could still get. The state’s Water Resources Control Board is considering re-imposing restrictions amid signs that California could be slipping back into drought conditions.[ii] As in South Africa, American water management officials can’t bank on past precipitation patterns holding and have to prepare for an increasingly volatile water supply exacerbated by climate change.

Cape Town is currently scrambling to build four desalination plants and a wastewater processing plant, as well as drill new water wells, as part of a last-ditch effort, but most of those projects are behind schedule and won’t be operational in time to affect the current crisis. One question raised by both California and Cape Town: why is water management strategy in drought-prone regions so often reactive rather than proactive?

Water supply management in the western United States admittedly presents an intricate dilemma, lying at the intersection of decades of shortsighted public policy, an increasingly unpredictable climate, unfavorable economic incentive structures, and myriad engineering challenges distributed across interstate, state, regional, and municipal levels. It’s a true system of systems problem.

The water supply problem presents a full slate of challenges and there’s no silver bullet to address them all. But there is clearly a need for a diversified water supply portfolio that includes drought-resistant sources that can provide drinking water on a large scale. Desalination—the process of removing salt and minerals from seawater or brackish water—could be among the potential solutions.

Though it is often decried for its significant power consumption and detrimental environmental impacts, desalination may present one of many potential paths forward for arid, climate-challenged states like California or Texas. Desalination has faced stiff resistance to adoption in the United States, which primarily hinges on the higher price consumers have to pay for drinkable water supplied through desalination compared to other fresh water sources like rivers or groundwater.

Desalination at utility-level production is a highly energy-intensive process using current methods. Seawater reverse osmosis, the most commonly used technique in the United States, requires pumping water through several stages of pretreatment and then forcing it through semipermeable membranes at high pressure to strain out salt and particulates. These energy requirements account for half or more of the total cost of desalination at most plants. Some consumers are also concerned about the environmental impacts of desalination, such as the effect of discharging the highly saline brine byproduct on ocean life and the greenhouse gas emissions from the required power production.
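To put that energy burden in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The specific energy, electricity price, and all-in water cost are assumed ballpark figures chosen for illustration, not data from any particular plant.

```python
# Back-of-the-envelope look at how much of desalinated water's cost is energy.
# Every figure below is an assumed, illustrative ballpark value, not plant data.

SPECIFIC_ENERGY_KWH_PER_M3 = 4.0   # assumed for modern seawater reverse osmosis
ELECTRICITY_PRICE_PER_KWH = 0.10   # USD, assumed industrial rate
TOTAL_WATER_COST_PER_M3 = 0.80     # USD, assumed all-in production cost

energy_cost_per_m3 = SPECIFIC_ENERGY_KWH_PER_M3 * ELECTRICITY_PRICE_PER_KWH
energy_share = energy_cost_per_m3 / TOTAL_WATER_COST_PER_M3

print(f"Energy cost per cubic meter: ${energy_cost_per_m3:.2f}")
print(f"Energy share of total cost:  {energy_share:.0%}")
# With these assumptions, energy comes to $0.40/m3, or about 50% of the total
# cost, consistent with the "half or more" figure cited above.
```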

Obviously, there’s a wide array of barriers to desalination adoption that must be overcome. Yet recent advances in desalination technology show promise for mitigating these concerns while bringing costs down.

As the National Research Council noted in a seminal 2008 report on desalination that holds true to this day:

“Water scarcity in some regions of the United States will certainly intensify over the coming decades, and no one option or set of options is likely to be sufficient to manage this intensifying scarcity. Desalination, using both brackish and seawater sources, is likely to have a niche in the future water management portfolio for the United States.”[iii]

And a niche it should have. Desalination plants could serve as a hedge against a Cape Town-like crisis, staving off a potential Day Zero in the worst case—if American consumers could be convinced that paying a slight premium for drought-resistant water sources to diversify their water supply portfolio is worth it. But that will likely require closer cost parity between desalinated water and other water sources, as well as minimizing desalination’s environmental impact, to assuage public resistance and allow the technology to be accepted as one piece of a larger climate change adaptation strategy.

In a bid to increase efficiency and reduce costs, the desalination industry has already shifted toward collocating desalination plants with coastal power plants: the seawater pumped in as cooling water for the power plant is reused for desalination, and because it arrives already warmed it passes through the membranes more easily, reducing energy requirements; construction costs can drop 5-20% as the two plants share seawater intake and discharge facilities; and less electricity is lost in transfer because the distance between the plants is short.

The recently opened Carlsbad Desalination Plant in San Diego County, a prime example of a collocated desal plant and now the largest in the nation, supplying 50 million gallons (190,000 cubic meters) of drinkable water per day, is considered state of the art by American desalination standards. Yet it also illustrates the enduring challenges to desalination’s acceptance and competitiveness in the United States. The plant is considered relatively efficient in its energy usage and produces water at fairly competitive prices, but it still draws power from the local grid, roughly 70% of which comes from nonrenewable sources. On top of that, it was built on the site of the 1950s-era, natural-gas-burning Encina Power Station, which the county has been trying to shut down. The poor optics helped fuel a slew of criticism and lawsuits against the plant, even though Poseidon Water, the operator, purchased carbon emissions offsets and undertook reforestation programs. Lingering bitterness over Carlsbad has held up development of other desal plants in Southern California.

The American desalination industry would do well to learn from the examples of oil-rich Saudi Arabia and the United Arab Emirates, which ironically are both global leaders in renewable energy-powered desalination. Saudi Arabia’s Al Khafji plant became the world’s first operational, large-scale, solar-powered desalination plant when it came online in November 2017. It features a 15-megawatt array of polycrystalline solar cells, which produces enough electricity to support the plant’s desalination of 60,000 cubic meters of water per day, although not enough to also power the surrounding community. But the whole project is valued at $130 million, well under the Carlsbad plant’s $1 billion total construction price tag. Saudi Arabia got about a third of the production capacity at roughly one-eighth of the capital cost, in addition to shedding the stigma of burning fossil fuels.
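A quick calculation using the capacity and cost figures quoted above (treated as approximate) shows where those ratios come from:

```python
# Rough comparison of the two plants using the figures quoted in this post.
carlsbad  = {"capacity_m3_per_day": 190_000, "capital_cost_usd": 1_000_000_000}
al_khafji = {"capacity_m3_per_day": 60_000,  "capital_cost_usd": 130_000_000}

capacity_ratio = al_khafji["capacity_m3_per_day"] / carlsbad["capacity_m3_per_day"]
cost_ratio = al_khafji["capital_cost_usd"] / carlsbad["capital_cost_usd"]

print(f"Al Khafji capacity vs. Carlsbad:     {capacity_ratio:.0%}")  # ~32%, about a third
print(f"Al Khafji capital cost vs. Carlsbad: {cost_ratio:.0%}")      # 13%, roughly one-eighth

for name, plant in [("Carlsbad", carlsbad), ("Al Khafji", al_khafji)]:
    per_unit = plant["capital_cost_usd"] / plant["capacity_m3_per_day"]
    print(f"{name}: ${per_unit:,.0f} of capital cost per m3/day of capacity")
```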

World’s 1st Large-Scale Solar-Powered Desalination Plant – Al Khafji


Though critics have charged that the operating costs of solar-powered desal are prohibitively expensive, an ambitious effort underway in Abu Dhabi suggests that solar-powered desalination is becoming increasingly competitive. The Masdar project, bankrolled largely by the UAE’s sovereign wealth fund, aims to create an entire carbon net-zero, self-sufficient city for 50,000 residents. Though the overall effort has run into setbacks, including the realization that being completely net-zero isn’t feasible, a solar-powered desalination pilot program under the effort has shown strong potential for scaling up to help supply the city.[iv] Capitalizing on cheaper and more efficient solar cells could make such projects quite affordable—potentially cutting the price of water from solar-powered desalination in half by 2050.[v]

In the United States, such high-risk ventures are less practical without government support. But the federal funding picture changed drastically in FY 2018. The Department of Energy issued a funding opportunity announcement in September 2017 and expects to start making grants totaling $15 million this fiscal year to support solar-powered desalination research, especially research into integrating solar desalination systems.[vi] That’s a relatively massive shift by U.S. federal funding standards: the Department of the Interior’s Bureau of Reclamation—historically the largest federal funder of desalination research—expects to grant just $1.2 million under its desalination research program for FY18.[vii] Desalination companies and researchers should leverage this bow wave to reinvigorate renewable energy desalination research in the U.S. and propagate this interest in “clean” desalination. Other barriers remain to making solar-powered reverse osmosis desalination competitive and operating it on a large scale—such as how to efficiently store solar energy for nighttime, when the desal plant still has to run—but those will likely prove surmountable in the face of technological advances.

If American water management officials were to derive any lesson learned from the Cape Town water crisis, it should be that we cannot rely solely on the methods that helped us cope with past droughts and that adapting to future droughts driven by an increasingly erratic climate will require bold, innovative strategic planning that can attract federal research support and overcome public resistance. Integrating renewable energy production with desalination represents exactly that kind of thinking.

Written by Maclyn Senear maclyn.senear@globalsyseng.com

[i] David McKenzie and Brent Swails, “Day Zero deferred, but Cape Town’s water crisis is far from over”, CNN, March 9, 2018, https://www.cnn.com/2018/03/09/africa/cape-town-day-zero-crisis-intl/index.html

[ii] Associated Press, “Could California drought restrictions slash water rights? Some think so,” CBS News, Feb. 21, 2018, https://www.cbsnews.com/news/could-california-drought-restrictions-slash-water-rights-some-think-so/

[iii] National Research Council, 2008, Desalination: A National Perspective, Washington, DC: The National Academies Press, https://doi.org/10.17226/12184

[iv] “Renewable Energy Water Desalination Programme,” Masdar, 2018, http://www.masdar.ae/assets/downloads/content/3588/desalination_report-2018.pdf

[v] Richard Martin, “To make fresh water without warming the planet, countries eye solar power,” MIT Technology Review, May 12, 2016, https://www.technologyreview.com/s/601419/to-make-fresh-water-without-warming-the-planet-countries-eye-solar-power/

[vi] “Funding Opportunity Announcement: Solar Desalination,” Department of Energy, Solar Technologies Office, DE-FOA-0001778, Sept. 27, 2017, https://www.energy.gov/eere/solar/funding-opportunity-announcement-solar-desalination

[vii]“Desalination and Water Purification Program for Fiscal Year 2018,” U.S. Department of the Interior, Bureau of Reclamation Research and Development Office, Funding Opportunity Announcement No. BOR-DO-18-F002, Feb. 2018

Whole-of-Government/Whole-of-Community (Multi-sectoral Coordination) in Emergency Management: Domestic and International Strategies

In the wake of devastating events, such as the recent impact of Hurricane Harvey in Texas, I am reminded of the importance of a comprehensive national strategy for emergency preparedness and response. It is difficult for anyone to remain focused on the broader collaborative picture during any emergent situation, let alone a hurricane that resulted in days of flooding, destruction, and loss of life. However, we have seen time and again that key sectors within government, industry, and relevant non-governmental organizations provide the most value when responding to an incident in a collaborative and coordinated manner. For example, soon after the Arkema chemical plant explosion was reported in the aftermath of Hurricane Harvey, the Environmental Protection Agency (EPA) deployed one of its resources, an Airborne Spectral Photometric Environmental Collection Technology (ASPECT) aircraft, to Crosby, Texas to retrieve chemical information from the resulting smoke cloud and help inform the response efforts. In doing so, the EPA not only fulfilled its sector- and agency-specific requirements but also enhanced the overall strategy and resources of the state and local first responders leading the response effort.

Events like these highlight that there must be a mutual understanding of the goals, requirements, and expertise of sector counterparts in order to work together effectively. This is a crucial baseline for cohesive emergency preparedness and response planning. In the United States, we refer to this as Whole-of-Government/Whole-of-Community. With our mission and country partners in Southeast Asia, we refer to it as multi-sectoral. Regardless of the terminology, this approach is applicable to all forms of emergency incidents, whether natural or man-made in origin, accidental or intentional, and across all hazard types.

Lessons Learned: United States

Over the years, the United States has accrued experience and lessons learned in multi-sectoral coordination from numerous events, such as the response to the 9/11 Attacks in 2001 and Hurricane Katrina in 2005. These events of national concern highlight that coordination and collaboration among various sectors and entities, within and outside of government, is a crucial component of effective threat and hazard mitigation.

For strategy, the United States relies upon the National Response Framework (NRF), which is a document that provides guidance for all-hazard incident response in the United States. Preceded by the National Response Plan (NRP), the NRF was developed in part from lessons learned during Hurricane Katrina in 2005, the September 11, 2001 terrorist attacks, the London bombings, and national, state and local exercises.[i] It emphasizes multi-sectoral roles and responsibilities during an incident response. The NRF delineates an approach to government and private sector integration for emergency preparedness, response and recovery efforts, and it advances the notion that governments at all levels, the private sector, non-governmental organizations, and individual citizens share responsibility in incident response.[ii] The NRF relies upon the National Incident Management System (NIMS) to coordinate all phases of emergency management activities among all levels of government, the private sector and non-governmental organizations.[iii]

An example of U.S. multi-sectoral coordination in practice can be found in the U.S. Federal Bureau of Investigation (FBI) and Centers for Disease Control and Prevention (CDC) Joint Criminal and Epidemiological Investigations (CrimEpi) Program. This program was developed to enable communication, collaboration, and coordination between the law enforcement and public health communities during a response to a potential biological threat.

Lessons Learned: Malaysia

In Malaysia, national concern regarding multi-sectoral coordination during an emergency response was also prompted by experiences and lessons learned. Various disasters, including mudslides, landslides, Tropical Storm Greg in 1996, and the collapse of the Highland Towers Condominium in 1993, drove the creation of an integrated all-hazard disaster management system in Malaysia.

Malaysia’s National Security Council (NSC) Directive No. 20 is the mechanism for the integrated management of major land-based all-hazard disasters and incidents. This policy defines the roles and responsibilities of the various stakeholder agencies involved in a disaster response. Through NSC Directive No. 20, a Disaster Management and Relief Committee (DMRC) is established at the national, state, and district levels. Each DMRC consists of an interagency and inter-sectoral group of stakeholders. The federal-level DMRC formulates policies and strategies, and the state- and district-level DMRCs implement the national disaster management procedures. NSC Directive No. 20 also requires all government agencies to develop Standard Operating Procedures (SOPs) for disaster prevention, which includes reviewing and updating their Emergency Response Plans (ERPs).[iv]

An example of a Malaysian multi-sectoral coordination activity is the Biological Incident Response Training and Evaluation (BRITE) program, sponsored by the U.S. Defense Threat Reduction Agency (DTRA) Cooperative Biological Engagement Program (CBEP). The BRITE program was developed based on the aforementioned U.S. CrimEpi program and tailored to improve the ability of Malaysia’s law enforcement, public health, animal health, and community stakeholders to respond to biological incidents. Global Systems Engineering assisted in the implementation of the BRITE program in Malaysia.

A Global Issue

The importance of multi-sectoral coordination in emergency preparedness and response is not specific to one country. Collaborative national emergency management strategy is an issue of global concern. Different countries experience different hazard and threat profiles because of their particular societal, economic, and geographic characteristics. While coordination efforts must be tailored to the characteristics of a particular country, the need for a multi-sectoral approach and strategy for all-hazards response remains constant across international boundaries. For example, the Indian Ocean earthquake and tsunami in 2004, and various international Highly Pathogenic Avian Influenza (H5N1) outbreaks over the years, have emphasized the need for strong multi-sectoral coordination in emergency response efforts.

Unanticipated hazards will arise in the future. As of this post, and just as the Hurricane Harvey response moves into the recovery phase, another hurricane is on the not-so-distant horizon. Hurricane Irma has already strengthened into a Category 5 storm and is headed toward Florida at a steady pace. We continue to see that hazards and threats are almost always unpredictable in timing, scope, and severity. This only further highlights how imperative it is that national multi-sectoral coordination strategies for emergency response remain flexible, inclusive, and adaptable to emerging, unprecedented threats.

Written by Caitlin Devaney caitlin.devaney@globalsyseng.com

[i] https://www.fema.gov/pdf/emergency/nrf/NRFRolloutBriefingNotes.pdf

[ii] https://www.fema.gov/pdf/emergency/nrf/nrf-core.pdf

[iii] https://www.fema.gov/pdf/emergency/nims/NIMS_core.pdf

[iv] http://www.adrc.asia/management/MYS/Directives_National_Security_Council.html?Fr

Image Credit: Aivaro Blanco/EPA

As posted on ABC News at http://abcnews.go.com/US/hurricanes-harvey-irma-cost-us-economy-290-billion/story?id=49761970

 

The Future of Wearable Sensing: Trendy for Me, Tactical for Soldiers

The sweeping craze of wearable fitness trackers has given everyday users greater awareness of their health status. Combining step trackers with email, calendars, text alerts, and sleep monitoring has made wearable tech an integral part of the daily routine. Add in some friendly competition, as people challenge their friends to see who can get the most daily steps, and these devices provide a fun incentive to keep them on at all times. But recently, wearable tech companies have invested in providing more meaningful data than just step counts and energy expenditure. Sleep quality measurements and heart rate tracking have opened the door to potentially more significant health monitoring. As wearable tech continues to develop, the health-conscious, competitive game with friends looks like just the beginning.

Warfighter health readiness is constantly at risk from biological (naturally occurring or intentionally released) and chemical exposure. The U.S. Armed Forces have a long history of encountering infectious diseases in the field; in past conflicts, infectious diseases caused greater mortality than battle injuries.[1] Melioidosis, nicknamed the “Vietnamese time bomb,” is a notable example: warfighters unknowingly encountered the bacteria as helicopters kicked up dirt and hidden pathogens in the tropical soil. Melioidosis is often difficult to diagnose and can remain latent for years before symptoms present. U.S. military personnel were also on the ground providing logistical support and training during the Ebola outbreak in West Africa.[2] Outbreaks of Q fever and leishmaniasis among soldiers returning from Iraq and Afghanistan show that biological exposure is not a stand-alone risk but a ubiquitous threat regardless of time and place. Identifying the presence of disease before a soldier knows they have been exposed provides the opportunity to remove the soldier from immediate threats, treat the soldier quickly, and keep the soldier in condition for action.

If the wearable sensors being developed could alert soldiers and leaders to impending illness before a soldier starts to feel symptoms, that information could be integrated with other data sources as part of the Integrated Early Warning (IEW) effort under development by the Defense Threat Reduction Agency (DTRA). Dr. Christian Whitchurch is providing the vision and leading the effort for Wearable Sensing for Chemical and Biological Exposure Monitoring.[3]

One key element of enabling IEW wearable technology is effectively moving data from devices to a platform that can analyze it and create useful visualizations. As Mike Midgley mentioned in a previous blog post, “the challenge in developing this integrated architecture is not only collecting all of this information in real time from a network of sensors and other data sources, but also enabling the commanders to get the ‘so-what’ to make informed decisions and not become paralyzed by an excess of data”. On the individual level, preemptively identifying illness or heat strain could keep the warfighter healthy and fit for continued work in the field. On an aggregate level, trends in the data could show when units are collectively exposed to a threat or have reached an unsafe level of heat strain. This information is only useful, however, when packaged into a digestible form.

Equivital Ltd, of Cambridge, UK, is one wearable tech company that has developed a display that uses green, amber, and red indicators to suggest the risk associated with each soldier’s physiological status. Some measures, such as body temperature and heart rate, may not be very useful on their own as warning indicators, but they can be combined, using an algorithm produced by the US Army Research Institute of Environmental Medicine (USARIEM), into a Heat Strain Index (HSI) that alerts leaders to potential heat injury. Leveraging data visualization like that of the Equivital™ Black Ghost System (shown below) to display warnings for potential illness is the next step.

Equivital™ Black Ghost Application[4]
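To make the idea concrete, here is a minimal sketch of how individual temperature and heart-rate readings could be combined into a green/amber/red status and rolled up into the unit-level “so-what” view described above. The thresholds and scoring are hypothetical placeholders chosen for illustration; they are not the USARIEM algorithm or any vendor’s actual logic.

```python
# Illustrative sketch of a green/amber/red physiological status rollup.
# Thresholds and scoring below are hypothetical placeholders, NOT the USARIEM
# algorithm or the Equivital product logic.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Reading:
    soldier_id: str
    core_temp_c: float    # estimated core body temperature, deg C
    heart_rate_bpm: int

def strain_status(r: Reading) -> str:
    """Combine temperature and heart rate into a traffic-light status."""
    if r.core_temp_c >= 38.5 or r.heart_rate_bpm >= 160:
        return "RED"
    if r.core_temp_c >= 38.0 or r.heart_rate_bpm >= 140:
        return "AMBER"
    return "GREEN"

def unit_summary(readings: List[Reading]) -> Dict[str, int]:
    """Roll individual statuses up into the 'so-what' view for a leader."""
    summary = {"GREEN": 0, "AMBER": 0, "RED": 0}
    for r in readings:
        summary[strain_status(r)] += 1
    return summary

squad = [
    Reading("A01", 37.2, 95),
    Reading("A02", 38.2, 150),
    Reading("A03", 38.9, 172),
]
for r in squad:
    print(r.soldier_id, strain_status(r))
print("Unit summary:", unit_summary(squad))
```

The value is in the rollup: a leader sees counts by status rather than a stream of raw vitals, and drills down only when the amber or red counts climb.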
Integrated early warning using wearable technology would enable timely countermeasures, triage, and exposure notification, not only protecting the lives of our warfighters but also enabling mission assurance and providing information for effective decision making. Integrating wearable tech with personalized medicine and military preparedness would best enable our military leadership to make informed and proactive decisions for the benefit of the warfighter. We are only beginning to scratch the surface of what this developing technology can do. Continued multi-sector coordination between the warfighter, military leadership, and researchers will pave the way for wearable technology to function in a way that best serves all stakeholders. GSE looks forward to supporting this effort as DTRA leads the way for Wearable Sensing for Integrated Early Warning.

Written by Kat Gray katarina.gray@globalsyseng.com

[1] https://kaiserfamilyfoundation.files.wordpress.com/2013/10/8504-the-u-s-department-of-defense-and-global-health-infectious-disease-efforts.pdf

[2] https://www.defense.gov/News/Article/Article/603892/

[3] http://www.nbmc.org/wp-content/uploads/1.7-Whitchurch-Defense-Threat-Reduction-Agency.pdf

[4] http://www.equivital.co.uk/products/military

Beyond Relationships: Measuring Soft Outcomes of Cooperative Threat Reduction Through Research-based Networks

Since its inception, Cooperative Threat Reduction has measured hard outcomes for biological threat reduction. For over 20 years, the Nunn-Lugar Scorecard has been used to account for the elimination of weapons and materials of mass destruction. Historically, the hard target for biological threat reduction was to equip and build facilities so partners could perform laboratory functions safely and securely; the present hard target is to secure facilities.[1] Most disarmament work occurred in the former Soviet Union, where progress was easy to measure, for example, by the number of facility upgrades.[2] Current threat reduction assistance is harder to measure, hinging on relationships with partners around the globe, specifically through cooperative engagement, capacity building, and sharing knowledge and best practices. The current approach presents challenges for measuring outputs and, subsequently, justifying investments. One method to better account for biological threat reduction is research networking to enhance national and regional expertise and increase global collaboration. Methods for measuring a network’s contribution to threat reduction can be borrowed from governmental and non-governmental international development organizations.

A research network can reinforce the competence and capacity of emerging economies. The thematic focus of a network may vary, but the intent for establishing one remains consistent: to support horizontal frameworks for disease surveillance and response. Implementing a research-based network can (1) promote a common understanding of the risk and prevalence of under-reported and under-diagnosed emerging infectious diseases (e.g., rickettsial pathogens); (2) connect researchers to establish needs and gaps across the biological resiliency spectrum of prevent, detect, respond, and recover; and (3) convene funders from agencies that can support areas of life sciences research to consolidate needs and prevent duplication of effort, ultimately yielding more effective national public health policies.

Research-based networks increase the ability of individuals and institutions to engage with a wider global community of peers and stakeholders and undertake high-quality life sciences research that is safe and secure in practice. This approach establishes common values and connections that can override policy-driven collaborations or tensions between nations in favor of scientific collaboration to address shared problems. Researchers are best placed to identify and address the health priorities and challenges of their respective nations and provide local and national policy-makers with a broad range of high-quality, relevant evidence to inform decision making.

While the value of these relationships is apparent, measuring the effectiveness of a research-based network is less straightforward, which is why the initial investment is critical: setting achievable objectives tied to short- and long-term indicators of success ensures the value and potential sustainability of the network. A research-based network needs to be rooted from the start in (1) regular, straightforward communication; (2) common values; (3) long-term commitment of the participants; (4) incentivized growth; and (5) opportunities for participation in the process. Much of this can be accomplished by selecting champions of the effort, establishing terms of reference or a charter for the organization, and setting a regular process for communication (e.g., meetings, newsletters, and calls).

Much of the framework for measuring the effectiveness of international development can be applied to measuring the effectiveness of Cooperative Threat Reduction. The two fields share common challenges: increasingly complex and changing environments, the need to be both credible and cost-effective, and complex issues and trends that will affect their application over the next several years.[3] Results-based management, a broad performance management strategy aimed at achieving change, sets end-states, indicators, and targets with a monitoring system for regular data check-ins; this process appears to be the most widely used by development agencies and holds the most promise for assessing the effectiveness of biological threat reduction.
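As a sketch of what a lightweight results-based monitoring record might look like in code (the end-state, indicators, baselines, and targets below are invented examples, not actual program metrics):

```python
# Minimal sketch of a results-based management record: an end-state with
# indicators, baselines, targets, and periodic check-ins. All names and
# values are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Indicator:
    name: str
    baseline: float
    target: float
    checkins: List[Tuple[str, float]] = field(default_factory=list)

    def progress(self) -> float:
        """Fraction of the way from baseline to target at the latest check-in."""
        if not self.checkins:
            return 0.0
        latest = self.checkins[-1][1]
        return (latest - self.baseline) / (self.target - self.baseline)

@dataclass
class EndState:
    description: str
    indicators: List[Indicator]

network = EndState(
    description="Regional researchers routinely collaborate on disease surveillance",
    indicators=[
        Indicator("Active partner institutions", baseline=3, target=12,
                  checkins=[("2018-Q1", 5), ("2018-Q3", 8)]),
        Indicator("Joint publications per year", baseline=0, target=6,
                  checkins=[("2018-Q3", 2)]),
    ],
)

for ind in network.indicators:
    print(f"{ind.name}: {ind.progress():.0%} of the way to target")
```

The point of the structure is the regular check-in: soft outcomes become defensible when every indicator has a baseline, a target, and a dated measurement trail.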

Written by Katie Leahy katie.leahy@globalsyseng.com

[1] http://www.nti.org/media/pdfs/NunnLugarBrochure_2012.pdf?_=1354304005

[2] http://www.dtra.mil/Portals/61/Documents/20130101_fy13_ctr-scorecard_slides_jan13.pdf

[3] http://www.iosrjournals.org/iosr-jbm/papers/Vol17-issue2/Version-3/H017237076.pdf

Lessons Observed versus Lessons Learned in Federal Acquisition

In 15 years of federal acquisition experience, we continue to observe the same lessons in our processes and operational shortfalls. Those lessons become significant limitations and risks for our operational responders.

First, it is important to define the difference between a lesson learned and a lesson observed. A lesson learned, in our operational system, is a shortfall defined in an after-action report along with a systematically developed solution (or solutions) to ensure that the shortfall is eliminated. A lesson observed is a shortfall that continually shows up in a series of after-action reports while no organization takes ownership of eliminating it. A good example of a lesson observed is the inability of state, local, and federal responders to operate over a common radio platform. I feel confident that the after-action report for the next large-scale event in the United States will have a section on this shortfall.

Now, returning to federal acquisition: the integration of new technology into the operational world (from warfighters abroad to first responders at home) continues to surface many of the same lessons. A key lesson observed is that acquisition of the latest technologies outpaces our ability to integrate them into our operational construct. Without a full understanding of what a new technology’s capability is and its applied significance to the work at hand, we cannot have a clear understanding of how we should react to its outputs.

A chilling illustration of this challenge is the use of a new technology called radar in World War II:

Unit s/n 012 was at Opana Point, Hawaii, on the morning of the seventh of December 1941, manned by two privates, George Elliot and Joseph Lockard. That morning the set was supposed to be shut down, but the soldiers decided to get additional training time since the truck scheduled to take them to breakfast was late. At 7:02 they detected the Japanese aircraft approaching Oahu at a distance of 130 miles (210 km) and Lockard telephoned the information center at Fort Shafter and reported “Large number of planes coming in from the north, three points east”. The operator taking his report passed on the information, repeating that the operator emphasized he had never seen anything like it, and it was “an awful big flight.”

The report was passed on to an inexperienced and incompletely trained officer, Kermit Tyler, who had arrived only a week earlier. He thought they had detected a flight of B-17s arriving that morning from the US. There were only six B-17s in the group, so this did not account for the large size of the plot. The officer had little grasp of the technology, the radar operators were unaware of the B-17 flight (or its size), and the B-17s had no IFF (Identification Friend or Foe) system, nor any alternative procedure for identifying distant friendlies as the British had developed during the Battle of Britain. The raid on Pearl Harbor started 55 minutes later and signaled the United States’ formal entry into World War II a day later.

The radar operators also failed to communicate the northerly bearing of the inbound flight. The US fleet instead was fruitlessly searching to the southwest of Hawaii, believing the attack to have been launched from that direction. In retrospect this may have been fortuitous, since they would have met the same fate as the ships in Pearl Harbor had they attempted to engage the vastly superior Japanese carrier fleet, with enormous casualties.

After the Japanese attack, the RAF agreed to send Watson-Watt to the United States to advise the military on air defense technology. In particular Watson-Watt directed attention to the general lack of understanding at all levels of command of the capabilities of radar- with it often being regarded as a freak gadget “producing snap observations on targets which may or may not be aircraft.” General Gordon P. Saville, director of Air Defense at the Army Air Force headquarters referred to the Watson-Watt report as “a damning indictment of our whole warning service”.[1]

Because our federal acquisition process can take 10+ years to go from concept to fielded capability, the technical and operational communities have a tendency to simply procure rather than go through the requirements and acquisition rigor needed to ensure we provide true utility rather than just “something” new. To that end, our current federal acquisition approaches put us in serious danger of observing yet another lesson of “a damning indictment of our whole warning service.”

A Path Forward

Advanced Technology Demonstrations (ATDs) are a good way to bridge the gap between operational needs and acquisition rigor because they pull the warfighter community and the requirements/joint combat developers into a project in a meaningful way. The ATD process as currently structured can provide logical feedback mechanisms to the science and technology community while providing an initial capability that warfighters can actively use within their current missions. Further, the operational units will have worked with the ATD management team to ensure the new tools are integrated into their current, or even adjusted, concepts of operation. In doing so, we avoid repeating the fiasco of our radar systems at Pearl Harbor and ensure that we stop merely observing the lessons of missions past.

[1]http://worldwar2headquarters.com/HTML/PearlHarbor/PearlHarborAirFields/radar.html

Written by Chris Russell chris.russell@globalsyseng.com

Countering the threat and operating in a complex WMD environment: Enhancing and Integrating Early Warning


Last year while driving my daughter home from school during some rainy weather in San Diego, both of our iPhones simultaneously alerted us to a Tornado Warning (yes, in Southern California!). After my initial surprise and disbelief, I noticed that the sky was in fact unusually dark in front of us and made a decision to slow down and further evaluate the situation.

I subsequently received a text from a friend who is a former Air Force meteorologist, asking about a tornado alert he had seen about San Diego (he lives in VA). Later forensics revealed that there had been a real tornado threat less than three miles from where I was. My phone tracked where I was, where the threat was, and immediately informed me of the danger. I also received additional information to support the notification. It wasn’t a perfect report, but it at least offered me a trigger to study the situation and make a ‘low-regret’ decision to be careful before proceeding. I am sure many of you have also experienced similar warnings about weather and even traffic delays when your smart phone accesses your calendar, compares your location to where you need to be at a certain time, and recommends that you leave earlier than planned.

In recent technology demonstrations, we have seen capabilities emerging that parallel what smart phones already do with integrated data. Seamlessly integrating information in a CBRN environment would increase commanders’ awareness of impending threats and allow for faster protective actions. In a biological threat environment, where consequences are even more difficult to identify because of the latency of effects, commanders should be provided with options that clearly explain the threat, its impact on the mission, and any recommended defensive actions (heightened MOPP status, avoidance of the threat or denial of movement, etc.). Even if a solid determination of threat cannot be made, a quantifiable probability level of a threat, or even the ruling out of a particular CBRN threat, would be valuable to a decision-maker.
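As one illustration of what “a quantifiable probability level of a threat” might look like, here is a minimal Bayesian fusion sketch that updates a threat probability as independent sensor reports arrive. The sensors, detection and false-alarm rates, and prior are all invented for illustration; they are not real CBRN sensor performance data or any fielded algorithm.

```python
# Minimal sketch of fusing independent sensor cues into a single threat
# probability via Bayes' rule. All sensor characteristics and the prior are
# invented, illustrative numbers -- not real CBRN sensor performance data.

def update_probability(prior: float, alarmed: bool,
                       p_alarm_given_threat: float,
                       p_alarm_given_no_threat: float) -> float:
    """One Bayesian update for a single sensor report (alarm or no alarm)."""
    if alarmed:
        like_threat, like_clear = p_alarm_given_threat, p_alarm_given_no_threat
    else:
        like_threat, like_clear = 1 - p_alarm_given_threat, 1 - p_alarm_given_no_threat
    numerator = like_threat * prior
    return numerator / (numerator + like_clear * (1 - prior))

# (sensor, detection rate, false-alarm rate, did it alarm?)
reports = [
    ("point chem detector",   0.90, 0.05, True),
    ("standoff spectrometer", 0.70, 0.10, False),
    ("medical surveillance",  0.60, 0.02, True),
]

p_threat = 0.01  # assumed prior from intelligence reporting
for name, p_d, p_fa, alarmed in reports:
    p_threat = update_probability(p_threat, alarmed, p_d, p_fa)
    print(f"After {name}: P(threat) = {p_threat:.2f}")

# A commander (or an automated decision aid) could key low-regret actions,
# such as elevating MOPP posture, to probability thresholds rather than
# waiting for a confirmed identification.
```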

The Maneuver Support Center of Excellence (MSCoE) at Fort Leonard Wood, MO recently published an Early Warning White Paper that discusses how the Joint CBRN Defense community is fundamentally shifting its view of “how the integration of sensors (CBRN and non-CBRN) provides WMD Situational Awareness (Proliferation Prevention/Counterproliferation) and Situational Understanding of CBRN Hazard Activities (CBRN Defense/CBRN Consequence Management) to provide WMD Early Warning”.[1] It describes a holistic approach to the problem, focused on the CBRN Enterprise providing a functionally integrated WMD Early Warning framework that networks joint ISR solutions (moving away from platform-centric capabilities) while encouraging integration and innovation to support rapid and sound decision-making. The paper further examines the temporal components of WMD Early Warning. Figure 1 displays a nominal timeline with decisions, based on level of awareness and understanding, that can occur over time with increasing confidence and impact.

Adjusting the time components is what future capability development should concentrate on to achieve a best-case situation as depicted in Figure 2, which would (1) maximize information gathering pre-attack and (2) minimize time to situational understanding.

Pre-attack information is mainly intelligence-related but will also include background environmental and medical surveillance data. This information must be continually collected to provide not only warning but also baseline data that will be critical for comparison and analysis after an attack, especially in a biological incident. It is imperative that all of this information be assimilated and understood quickly so that commanders can make faster, better decisions and pursue optimal courses of action (COAs).

The challenge in developing this integrated architecture is not only collecting all of this information in real time from a network of sensors and other data sources, but also enabling the commanders to get the ‘so-what’ to make informed decisions and not become paralyzed by an excess of data. Granted, the staff planning/advising function will never be replaced, but in a dynamic threat environment where tactical commanders need to make rapid protective decisions, there must be a reliable and almost automated decision support capability that facilitates, at a minimum, low regret/low risk decisions that he/she might not otherwise have the time to consider in the middle of an operation.

With recent technological advances in CBRN detection/sensor technology, the improved ability to rapidly and seamlessly network and share information across current disparate information systems, and development of smart algorithms to provide decision-makers with more quantifiable and reliable analysis, we are at an optimal time to provide the warfighter with enhanced capabilities to counter the CBRN threat. If our smart phones can figure out how to data mine information from disparate applications and give us credible and effective recommendations, we should be able to pursue similar capabilities for the warfighter operating in a CBRN environment. GSE currently supports the Defense Threat Reduction Agency (DTRA) in this mission space and looks forward to providing innovative technological solutions for the warfighter.

[1] Joint Concepts for Countering Weapons of Mass Destruction: Weapons of Mass Destruction (WMD) Early Warning White Paper, Maneuver Support Center of Excellence, February 2017


Written by Mike Midgley

Confidence to Enter the “Valley of Death:” Transition Confidence Levels for Technology Transition

Throughout history, technology has often determined the victor in war. In the Battle of Agincourt in 1415, the English soundly defeated a numerically superior French army with the newest battlefield technology, the English longbow. In World War II, bright minds at Bletchley Park developed the Bombe and Colossus, machines that deciphered German codes, turned the tide of the war, and launched the computer age. The War Production Board directed the mass production of the new miracle drug penicillin in time for the D-Day invasion, saving countless lives during and after the war. Timely technology is crucial. Too often, however, tools fail to transition from R&D to the field. Worse is technology so altered by the time it reaches production that it becomes useless “shelf-ware,” costing warfighter lives and wasting taxpayer money. Transition is one of the greatest hazards in the DoD acquisition community. How do we move technology from the laboratory to the battlefield without becoming another casualty in the “Valley of Death”?

Technology transition is an extremely difficult process. There is often little funding associated with the bridge between the R&D and fielding phases. That bridge figuratively crosses the so-called “Valley of Death” (figure below). The needs fulfilled at the front of the Valley (successful R&D outputs) are often different from the needs at its exit (successful entrance criteria for a fieldable system).

Figure taken from the Defense Manufacturing Management Guide for Program Managers (https://acc.dau.mil/communitybrowser.aspx?id=520822).

While there are standard approaches defined by the Defense Acquisition University (DAU) to better enable transition (e.g., transition IPTs, Technology Readiness Levels, and Technology Transition Agreements), the U.S. Special Operations Command (USSOCOM) is focusing on a risk-based approach called Transition Confidence Levels. Anthony Davis and Tom Ballenger of USSOCOM Science and Technology are defining these Transition Confidence Levels and how they can ensure technologies under development are ready for transition.

The Transition Confidence Levels approach attacks yet another transition hazard: leaving the R&D phase too early. As R&D budgets wane, the pressure to quickly field new technology has put greater emphasis on the transition metric. But by leaving R&D (or transitioning) too early, many projects become “shelf-ware”: a report or a product that will never get used.

To manage the portfolio of projects under development at SOCOM S&T, Davis and Ballenger created a Transition Confidence Level (TCL) scale to complement the already-required Technology Readiness Level (TRL) scale for technology maturity. Management becomes more dynamic as projects move up the TCL scale, and senior-leader involvement kicks in at the highest levels. TCL transparency ensures that everyone understands a project’s development status and budgets and manages accordingly.
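As a sketch of how a portfolio might be tracked against both scales, consider the following. The TCL labels and projects here are invented placeholders; the actual USSOCOM scale definitions are in Davis and Ballenger's work.

```python
# Minimal sketch of tracking an S&T portfolio by technology maturity (TRL)
# and transition confidence (TCL). The TCL labels and projects below are
# invented placeholders, not the actual USSOCOM scale or portfolio.

TCL_LABELS = {
    1: "Transition path unidentified",
    2: "Candidate program office engaged",
    3: "Transition agreement drafted",
    4: "Transition agreement signed and funded",
}

portfolio = [
    # (project name, TRL 1-9, TCL 1-4)
    ("Wearable exposure monitor", 6, 2),
    ("Standoff detection algorithm", 7, 4),
    ("Novel decon coating", 4, 1),
]

# Mature technology with no confident transition path is the classic
# "shelf-ware" risk; high TCL is where senior-leader involvement kicks in.
for name, trl, tcl in sorted(portfolio, key=lambda p: (-p[1], p[2])):
    flag = "  <-- shelf-ware risk" if trl >= 6 and tcl <= 2 else ""
    print(f"{name:30s} TRL {trl}  TCL {tcl} ({TCL_LABELS[tcl]}){flag}")
```

Plotting the two scores against each other, as a program manager might, makes the mismatches visible at a glance and with minimal additional data entry.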

Davis and Ballenger say that the “adoption of TCL has provided a wealth of insight into the progress of the S&T portfolio toward transition with a minimum of additional data entry.” They believe the “tool has immediate potential application to numerous S&T organizations and portfolios and is easily adaptable to fit each organization’s particular needs.”

The Countering Weapons of Mass Destruction (CWMD) mission now falls under USSOCOM. The Defense Threat Reduction Agency (DTRA) is the Combat Support Agency focused on CWMD and now reports to USSOCOM (recently changed from USSTRATCOM). For Global Systems Engineering, DTRA is one of our key clients, and the TCL concept will become very important over the coming months and years.

With Transition Confidence Levels, SOCOM S&T is learning, improving, and adjusting to keep a dream alive: better acquisition management. Read Bridging the “Valley of Death” here.

Written by Chris Russell and Elizabeth Halford

Tags: acquisition, technology transition, Valley of Death