Scandinavian Working Papers in Business Administration

SSE Working Paper Series in Business Administration,
Stockholm School of Economics

No 2022:2: Assessing Whether Mission-Driven Innovation Makes a Difference: Mission Impossible? Developing a Guiding Framework for the Evaluation of Five Mission Driven Environments for Health in Sweden

Anna Essén, Karl Wennberg and Anna Krohwinkel
Additional contact information
Anna Essén: Department of Entrepreneurship, Innovation, and Technology, Postal: Stockholm School of Economics, P.O. Box 6501, SE-113 83 Stockholm, Sweden
Karl Wennberg: Dept. of Management and Organization, Postal: Stockholm School of Economics, P.O. Box 6501, SE-113 83 Stockholm, Sweden
Anna Krohwinkel: Leading Health Care

Abstract:

Background. Mission-driven innovation (MDI) policies are founded on governmental attempts to address fundamental but complex societal challenges. The rationale behind such attempts is typically to influence the directionality of innovation towards addressing the perceived challenge. This report focuses on a particular instance of MDI policy executed by Sweden’s innovation agency, Vinnova: the funding of five so-called “mission-driven environments” (MDEs) in 2019. The policy in question, called “Vision-Driven Health,” was initiated in 2019 to support the establishment of inter-organizational and cross-disciplinary coalitions that work towards a common vision and a long-term systemic transformation within the Swedish health care and life science sector.

Aim. The report aims to provide a framework for evaluating the five MDEs funded by Vinnova. Vinnova asked us to consider, in particular, the role of eight “Work Principles” (WPs) that it recommended the MDEs implement. This report is the result of the first of two possible steps in evaluating the five MDEs: developing an evaluation framework. We hereafter refer to it as a pre-study. A second step would involve actually evaluating the five MDEs based on the framework in this report.

Methods. The report is based on selective reviews of relevant literature providing insights about best practices for setting up and governing MDE-like initiatives, as well as possible approaches and challenges in evaluating such initiatives. We also collected empirical data about how the five Swedish MDEs operationalized the principles. We surveyed members of the participating MDEs, asking them what a meaningful evaluation could imply from their perspectives. Finally, we consulted a group of external experts on three occasions.

Findings. At an overall level, the Vinnova-recommended WPs partly align with practices recommended in the relevant literatures. However, the WPs are formulated abstractly and implemented heterogeneously by the five MDEs. We argue that this heterogeneous implementation is necessary for the MDEs to progress towards their visions, but it complicates a uniform set of evaluation principles. The MDEs also prioritize the WPs differently, and we observed an additional set of informal WPs. The literature consists primarily of normative studies defining MDI and its relevance, and studies discussing challenges tied to evaluating MDI policies and initiatives. Empirical studies and evaluations remain scarce.

Suggestions. Drawing on insights from the literature, we outline a framework for formative and summative evaluation that could be used to evaluate the MDEs and the WPs with which they are set to work. We specifically argue for combining contribution and attribution approaches to evaluation, which could include the following steps:

Formative evaluation steps:
(A) If and to what extent the MDE is justified due to a “failure” of the system, market, or current development direction;
(B) If and how the MDE’s governance arrangements are purposeful, consistent, and coherent (processes and structures; i.e., ways of working and formalized routines, standards, decisions, and rules);
(C) If and how there is a “match” between the MDE’s interventions and identified barriers (weaknesses, bottlenecks, impeding regulations, social norms, etc.).

Formative and summative evaluation step:
(D) If and how the targeted overarching sociotechnical system/field demonstrates improved performance, such as capabilities (system functions and interactions like knowledge sharing), transition processes, and outcomes.

Summative evaluation steps:
(E) If and how the targeted overarching sociotechnical system/field exhibits structural changes because of the MDE, such as a change in the types of innovations, new forms of cross-sectoral collaboration, or new network constellations in the system;
(F) If and to what extent there is measurable impact at the societal level in terms of mitigating the failure addressed and reaching the MDE’s “vision” or “mission.”

For evaluating specific MDEs, we conclude that the formative Steps B and C (and, after the MDEs have been in operation for some time, Steps D and E, which are also discussed in the report) are of utmost relevance. Step A is a policy-mix decision, and Step F is an evaluation of the overall policy. For Steps B through E, we detail how an evaluation could be done and the type of data needed, and we exemplify useful methods for each evaluation step.

Continuous evaluations. For Step B (governance arrangements), we suggest that evaluations focus on: Are the WPs, as formulated, necessary and sufficient for MDEs? Are some WPs more important than others for achieving the expected process outcomes? How do MDEs develop routines and decision rules to operationalize the WPs, and what are the results of their progress? For Step C, we suggest that each MDE evaluate the “match” between the interventions and initiatives they launch and the barriers they have identified to reaching their vision. This involves assessing whether an MDE seems to contribute to eliminating or weakening bottlenecks in a sociotechnical system, ideally focusing on the most crucial bottlenecks. This step is a necessary precursor to evaluating whether the MDE spurs the emergence of new, needed functions in the sociotechnical system (Steps D and E). This type of evaluation must be (a) conducted on an ongoing basis and (b) handled or coordinated by the MDEs themselves, because identifying barriers to their goals and launching initiatives to address such barriers are, in fact, their raisons d’être.

Ex post evaluations. Summative and attribution-oriented evaluation steps aim to assess outcomes and the degree to which an MDE reached its goals. This implies a “working backwards” approach, where observable changes are reviewed, followed by an analysis of whether they can be linked causally to an MDE intervention or activity. Here we suggest evaluating whether and how the targeted sociotechnical system(s) demonstrates improved performance (formative/summative evaluation Step D) and whether the system exhibits any structural changes that facilitate reaching the vision (summative evaluation Steps E and F). Ideally, such evaluations should be conducted ex post, after the current MDE initiatives have concluded, because systemic change often takes years to accrue. These types of evaluations should therefore be conducted by the policy actor, or by external evaluators working on its behalf, rather than by the MDEs.

Considerations. The MDEs in focus are similar in having received funding (relatively small compared with other MDI initiatives globally) from Vinnova and in being instructed to implement eight WPs. However, the MDEs were also given agency in determining which challenges to focus on, how to design their vision, and how to implement the WPs. We show that the MDEs exhibit great differences in these regards, which has clear consequences for designing an evaluation approach that is useful for all five. Thus, we caution against assessing the MDEs uniformly on all WPs or on mere “vision attainment.” Instead, we argue that an evaluation of the MDEs also needs to assess the WPs; that is, it should evaluate the policy design of the overall MDE program. Finally, a prerequisite for addressing multiple and diverse stakeholders’ needs is to gain their trust. Stakeholders who are engaged with and understand the evaluation’s wider purposes are less inclined to feel “threatened” and will impart more useful and meaningful information. Thus, we argue for actively involving the MDEs in the evaluation steps (especially Step C, which is a tool to actively help them prioritize, document, and evaluate the actions and initiatives they take) and, whenever needed, organizing external expert panels to assist them in this work.
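Illustration. To make the framework’s structure concrete, the minimal Python sketch below encodes steps A through F together with their formative/summative mode, their timing (continuous vs. ex post), and the actor the report suggests should lead them. The code and all identifiers are our own illustration of the framework as summarized above, not an artifact of the report itself.

# Illustrative sketch only: encodes evaluation steps A-F as a data structure.
# The step descriptions restate the abstract; the code itself is hypothetical.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    FORMATIVE = "formative"
    SUMMATIVE = "summative"
    BOTH = "formative/summative"

@dataclass(frozen=True)
class EvaluationStep:
    label: str        # "A" through "F"
    focus: str        # what the step assesses
    mode: Mode        # formative, summative, or both
    continuous: bool  # True = ongoing (MDE-led); False = ex post
    led_by: str       # suggested responsible actor

FRAMEWORK = [
    EvaluationStep("A", "Justification: system/market/directionality failure",
                   Mode.FORMATIVE, False, "policy actor (policy-mix decision)"),
    EvaluationStep("B", "Governance arrangements: purposeful, consistent, coherent",
                   Mode.FORMATIVE, True, "MDE"),
    EvaluationStep("C", "Match between interventions and identified barriers",
                   Mode.FORMATIVE, True, "MDE"),
    EvaluationStep("D", "Improved system performance (capabilities, transitions, outcomes)",
                   Mode.BOTH, False, "policy actor / external evaluators"),
    EvaluationStep("E", "Structural changes in the sociotechnical system",
                   Mode.SUMMATIVE, False, "policy actor / external evaluators"),
    EvaluationStep("F", "Measurable societal impact; vision/mission attainment",
                   Mode.SUMMATIVE, False, "policy actor / external evaluators"),
]

# Example: list the steps an individual MDE would coordinate on an ongoing basis.
if __name__ == "__main__":
    for step in FRAMEWORK:
        if step.continuous:
            print(f"Step {step.label}: {step.focus} (led by {step.led_by})")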

Keywords: Mission-driven Innovation; innovation policy; healthcare; evaluation; grand challenges

JEL-codes: E61

62 pages, May 3, 2022

Full text files

hastma2022_002.1.pdf (PDF, full text)



RePEc:hhb:hastma:2022_002