Data Driven Grantmaking: The State of Current Research


1.0: Introduction

Grantmaking, like any other form of philanthropy, is not a strictly rational activity. That’s not an accusation of irrationality, just of arationality. Or, more simply, of ordinary humanity. Nor is it an accusation of a complete lack of rigor; indeed, there are remarkable efforts toward rigor in the sector, as some of the research below shows. It is simply an assertion that grantmaking has many motives, many contradictions, and many things that are rarely discussed.

At The Gilbert Center, one of the main questions we’ve been studying for some time is a seemingly simple one: How do grantmakers engage in the study and practice of cause and effect? The Buddhists might call this the study of Karma (which is why I have always seen Buddhists as scientists). Here we call it either i4 (Innovation, Intelligence, Insight, Integrity: our four-part model of practice) or, less proprietarily, Data Driven Decision Making. For the purposes of this brief paper, we can call it Effectiveness. The question then becomes: How do grantmakers engage in the study and practice of Effectiveness?

Simply put, effectiveness in grantmaking is about a grantmaker’s actions causing, over time, more and more of the desired outcomes. Much of the study of effectiveness is therefore the study of outcomes. The outcomes that are outwardly proclaimed are not necessarily the ones toward which the staff and trustees actually strive; other outcomes are at play. Outcomes related to social position, job security, sense of identity, and power relations are but a few obvious ones. In mature organizations, these latter outcomes are acknowledged as factors, sometimes even seen as an aspect of capacity, and with any luck they are put in their place. But I think we all know that in most organizations, these unspoken but powerful outcomes are hardly mentioned.

Thus, in practice, the pursuit of effectiveness is less about the reduction of other, conflicting motives and more about the adoption of certain practices that, by virtue of structure and habit, ensure greater impact over time. There are a wide variety of these methods and an even wider variety of names for them; the desire for a sense of uniqueness (another of those “other” outcomes) on the part of the foundation and its trustees guarantees the latter. As established through the last twenty years of practice among peers, coalitions, and consultants, some of these methods are now called: Strategy, Logic Models, Logical Frameworks, Theories of Change, Evaluation, Research, and Outcome Assessment.

As part of a current research project of ours on the nature of these practices among the largest U.S. grantmakers, we’ve compiled a bibliography of some of the significant papers on the topic, with a particular emphasis on research papers. We are providing this bibliography in three forms: (1) The bulk of this article is devoted to short annotations of the eight papers most clearly grounded in actual research. (2) Following that, you’ll find all thirty-six papers listed in a standard bibliography format (here we’re using the standard set forth by the Modern Humanities Research Association). (3) At the end of the article, you’ll find a link for downloading a BibTeX file containing all the data in the bibliography.

Because the majority of significant research in philanthropy is published outside of well-indexed journals, typically as independent reports, it is very easy to overlook. We are striving toward an accurate, current picture of the state of research. Will you tell us what we’ve overlooked?
 

2.0: Eight Research Papers

The following summaries are largely non-editorial in nature, although the selection of papers and of the results we highlight is a clear reflection of our interest in studying grantmakers’ relationship to data driven decision making. For each of these papers, where possible, we look at the following elements: (1) the topics being covered, (2) the basic research approach and methods, and (3) the core findings and conclusions, if any.

This list is far from comprehensive. First, in this section we focus exclusively on papers that can be described as research-based (dominated by work from The Center for Effective Philanthropy). Second, as mentioned above, there are inherent limitations to literature search in this field, and large gaps are bound to emerge.
 

2.1: Drowning in Paperwork, Distracted from Purpose: Challenges and Opportunities in Grant Application and Reporting (Project Streamline)

This report, researched and published by Project Streamline, studied the information gathering practices of grantmakers, the impact of those practices on both grantees and grantmakers, some approaches being used to streamline those practices, and the implications of both the practices and their alternatives for philanthropy.

The researchers used a multi-modal approach to data gathering, including interviews with people representing 51 different foundations, nearly 1000 completed surveys (of 5000 distributed), focus groups, and a literature review. All told, they had direct data from 858 distinct foundations of all types and sizes, in addition to supporting data from other sources.

The findings of this report are extensive. The upshot is that (1) grantees put many hours into repetitive data gathering and reporting; (2) this data is often not put to any material use by the grantmaker; and (3) much of the data could be found by other means using grantmaker resources. A number of alternative structures are suggested.
 

2.2: Beyond the Rhetoric: Foundation Strategy (Center for Effective Philanthropy)

For this research, The Center for Effective Philanthropy chose a topic with the potential for challenging levels of abstraction: the nature of the decision making process of American grantmakers and its relationship to the notion of strategy.

They chose as their sample 21 foundations that met the following criteria: at least $100 million in assets, non-corporate, non-operating, with both a CEO and a Program Officer available for interview. The primary methodology was content analysis of 42 transcribed one-hour interviews.

Researchers established four types of grantmakers, as defined by the relationship between strategy and their decision making processes, which (in keeping with the tradition of these things) they gave catchy names: Charitable Bankers, Perpetual Adjusters, Partial Strategists, and Total Strategists. However, the broader finding was that there was, across the board, very little connection between genuine strategy (as clearly defined by certain practices) and grantmaker decision making. Instead, interviewees tended to describe as “strategic” whatever decision making framework they happened to use.
 

2.3: More than Money: Making a Difference with Assistance Beyond the Grant (Center for Effective Philanthropy)

The purposes of this research were to (1) discover the attitudes and behaviors of the CEOs and Program Officers of U.S. foundations on the matter of providing non-financial assistance to organizations, (2) develop an understanding of the types of such assistance, (3) learn how such assistance is perceived by grantees, (4) determine whether such assistance strengthens organizations and their programs, and (5) identify the means by which such assistance can be effectively provided.

The sample consisted of 100 foundations that had commissioned Grantee Perception Reports, plus 48 more meant to provide some diversity to that heavily self-selected group. The researchers gathered data through surveys and interviews across the entire sample. Case studies were developed from those that had an established pattern of non-financial assistance.

The researchers established fourteen types of non-financial assistance, along with five patterns of provision (including focusing on a particular field, providing comprehensive assistance, and three varieties of “little assistance”). Among grantees, the finding was that the majority received no non-financial assistance, and the majority of those that did received only two or three types. Among grantmakers there was huge variation in practice, ranging from providing such assistance to as few as 9% of grantees on one end to as many as 97% on the other. As for effectiveness, the researchers recommended what might be thought of as a narrow but deep approach, rather than a broadly distributed but shallow one.
 

2.4: Essentials of Foundation Strategy (Center for Effective Philanthropy)

This research builds on Beyond the Rhetoric (see section 2.2 above). It focused on testing the adoption of the concepts of strategy established in that report and on developing a more detailed concept of the “essentials” of strategic decision making.

The sample was drawn from the staff of large U.S. foundations ($100 million or more in assets), from which the researchers received 190 completed surveys representing 155 foundations. Statistical tests and corrections were performed to create a single data set for each foundation. Unusually, the authors also reported only findings that met a minimum effect size.

There are two key elements to the definition of “strategic decision making” tested in this research: (1) such decision making is focused on the external context in which the foundation and its grantees operate, and (2) such decision making uses a hypothesized causal connection between the use of foundation resources and goal achievement (a logic model, if you will). The researchers used these, along with four ancillary characteristics, as the distinguishing features of strategic decision making.

The report found that leaders are optimistic about their effectiveness, but many lack the elements of strategy defined in this report. The report itself doesn’t attempt to prove the impact of such strategic practice, but the authors do argue that only strategic foundations have a chance at persuasively making the case for their impact.
 

2.5: Indicators of Effectiveness: Understanding and Improving Foundation Performance (Center for Effective Philanthropy)

This was the pilot study for the Grantee Perception Reports that have formed a core of the work of the Center for Effective Philanthropy. It was an exploratory study meant to find a set of reliable and useful measurements for foundation performance assessment.

The key finding of this research was the lack of a shared conceptual framework for foundation effectiveness. Without such a shared framework, the insights that might be derived from comparisons are hard to discover.
 

2.6: Evaluation in Philanthropy: Perspectives from the Field (Grantmakers for Effective Organizations)

This research is primarily anecdotal in nature, although other research is referenced and the methods at times approach a literature review (though not a meta-analysis).

Three key pieces of literature are referenced in establishing the “perspective from the field” in this report: (1) A 2005 study by the California HealthCare Foundation defined foundation-wide evaluation as “the process through which foundations examine the overall value of their philanthropic activities.” The study found that few organizations appear to be conducting foundation-wide evaluations but that “more are beginning to consider its benefits.” (2) According to polling conducted by Harris Interactive for the Philanthropy Awareness Initiative, influential community leaders show a limited understanding of the work of grantmakers. Eighty-five percent of community leaders could not give an example of a foundation benefiting their community, and 89 percent could not give an example of a foundation’s impact on an issue they care about. (3) GEO’s own 2008 survey of the field found that grantmakers overwhelmingly stated that their evaluation results were intended primarily for internal audiences: 88 percent said evaluation is primarily for grantmaker staff, and 78 percent said it is primarily for boards.
 

2.7: In Search of Impact: Practices and Perceptions in Foundations’ Provision of Program and Operating Grants to Nonprofits (Center for Effective Philanthropy)

This is a large-scale report on what can be learned about foundation practices from grantees, particularly practices related to program versus operating support and their effects on grantees.

Data was collected and analyzed from about 20,000 completed grantee surveys representing ratings of 163 large foundations. In addition, a survey of the CEOs from those 163 foundations was conducted, along with in-depth interviews with leaders at 26 grantee organizations.

The report found that the majority of foundations provide operating support to less than 20% of their grantees. The typical program support grant from a foundation amounts to 3% of a grantee’s annual budget; the typical operating support grant, 4%. Nearly half of all grants are one year in duration, regardless of grant type.

The typical grant made by the foundations in this study has three characteristics: it is (1) program restricted, (2) small, and (3) short term. (The authors note some tension between the needs of the grantmaker and the needs of the grantees.) There continue to be unanswered questions about the persistence of these patterns.
 

2.8: State of Evaluation 2010: Evaluation Practice and Capacity in the Nonprofit Sector (Innovation Network)

The focus of this research was to develop a meaningful snapshot of evaluation practices in the nonprofit sector as a whole. The primary targets for study were funders, board members, and nonprofit leadership, rather than staff or clients.

Data was obtained through voluntary return of surveys by just over 1000 U.S. nonprofits. Some effort was made to segment that sample (particularly by size of organization) where there were noticeable differences in results.

The most common self-reported barriers were the lack of staff time, funding, expertise, and support of leadership. (Over one third of respondents indicated that none of their funders supported evaluation.) The report also indicated that evaluation simply ranked low in the priorities of the organizations, with 62% ranking evaluation in the bottom half of their priorities out of the ten options given.
 


 

3.0: Bibliography of Thirty-Six Papers

3.1: ‘Foundation Performance Assessment Framework’, Center for Effective Philanthropy  [accessed 13 March 2011].

3.2: Bearman, Jessica, ‘Drowning in Paperwork, Distracted from Purpose: Challenges and Opportunities in Grant Application and Reporting’ (Project Streamline), p. 43  [accessed 4 April 2011].

3.3: Bolduc, Kevin, ‘For Performance Assessments, How Public Should Foundations Be?’, Center for Effective Philanthropy, 2011 [accessed 15 March 2011].

3.4: Bolduc, Kevin, Ellie Buteau, Greg Laughlin, Ron Ragin, and Judith A. Ruth, ‘Beyond the Rhetoric: Foundation Strategy’ (Center for Effective Philanthropy, 2007), p. 32  [accessed 25 February 2011].

3.5: Buchanan, Phil, ‘Fighting a Phantom: Reflections on a Caution Against Over-Emphasizing Metrics’, Center for Effective Philanthropy, 2010  [accessed 4 March 2011].

3.6: Buchanan, Phil, ‘Funders Agree: More Must Be Done to Assess Performance’, Center for Effective Philanthropy, 2010 [accessed 4 May 2011].

3.7: Buchanan, Phil, Ellie Buteau, and Shahryar Minhas, ‘Can Feedback Fuel Change At Foundations? An Analysis of the Grantee Perception Report’ (Center for Effective Philanthropy, May 2011), p. 12  [accessed 18 May 2011].

3.8: Buchanan, Phil, Kevin Bolduc, and Judy Huang, ‘Turning the Table on Assessment: The Grantee Perception Report’ (Center for Effective Philanthropy, 2005), p. 14 f [accessed 1 March 2011].

3.9: Buteau, Ellie, ‘Funders Should Do More to Help Nonprofits Build Evidence’, Center for Effective Philanthropy, 2010 [accessed 25 February 2011].

3.10: Buteau, Ellie, Phil Buchanan, Cassie Bolanos, Andrea Brock, and Kelly Chang, ‘More than Money: Making a Difference with Assistance Beyond the Grant’ (Center for Effective Philanthropy, 2008), p. 35  [accessed 1 March 2011].

3.11: Buteau, Ellie, Phil Buchanan, and Andrea Brock, ‘Essentials of Foundation Strategy’ (Center for Effective Philanthropy, 2009), p. 30 [accessed 11 March 2011].

3.12: Cameron, Charles, ‘The fetishization of metrics’, Social Edge: A program of the Skoll Foundation, 2010 [accessed 4 May 2011].

3.13: Cameron, Charles, ‘Theory of Change: A Collaborative Tool?’, Social Edge: A program of the Skoll Foundation, 2010  [accessed 4 May 2011].

3.14: Canales, Jim, ‘Challenges to Good Performance Assessment’, Center for Effective Philanthropy, 2010 [accessed 4 March 2011].

3.15: Canales, Jim, ‘The Case for Foundation Performance Assessment’, Center for Effective Philanthropy, 2010  [accessed 4 May 2011].

3.16: CEP, ‘Indicators of Effectiveness: Understanding and Improving Foundation Performance’ (Center for Effective Philanthropy, 2002), p. 39 [accessed 1 March 2011].

3.17: Chu, Tim, ‘Talk It Out: The Value of Discussing Reports and Evaluations’, Center for Effective Philanthropy, 2011 [accessed 21 April 2011].

3.18: Gates Foundation, ‘A Guide to Actionable Measurement’ (Bill & Melinda Gates Foundation, 2010) [accessed 2011].

3.19: GEO, ‘Evaluation in Philanthropy: Perspectives from the Field’ (Grantmakers for Effective Organizations, 2009), p. 42  [accessed 1 March 2011].

3.20: Gibson, Cynthia, and William M Dietel, ‘What Do Donors Want?’, Nonprofit Quarterly, 2010  [accessed 4 April 2011].

3.21: Huang, Judy, Phil Buchanan, and Ellie Buteau, ‘In Search of Impact: Practices and Perceptions in Foundations’ Provision of Program and Operating Grants to Nonprofits’ (Center for Effective Philanthropy, 2006), p. 32 [accessed 1 March 2011].

3.22: Hughes, Bob, ‘What Are the Limits of Quantitative Performance Measurement?’, Center for Effective Philanthropy, 2010 [accessed 9 March 2011].

3.23: LFA Group, ‘Findings from the Center for Effective Philanthropy 2010 Market and Impact Assessment Survey Executive Summary’ (Learning for Action, May 2010), p. 9 [accessed 1 March 2011].

3.24: Montenegro, Marco, ‘Language Matters’, CausePlanet, 2011 [accessed 16 April 2011].

3.25: Reed, Ehren, and Johanna Morariu, ‘State of Evaluation 2010: Evaluation Practice and Capacity in the Nonprofit Sector’ (Innovation Network, Inc., 2010), p. 24  [accessed 17 April 2011].

3.26: Saul, Jason, ‘The Dirty Little Secret About Measurement’, Jason Saul, 2010  [accessed 4 April 2011].

3.27: Smith, Bradford, ‘Transparency: One Size Does Not Fit All’, PhilanTopic: A blog of opinion and commentary from Philanthropy News Digest, 2010 [accessed 4 April 2011].

3.28: Snibbe, Alana Conner, ‘Drowning in Data’, Stanford Social Innovation Review, 2006 [accessed 4 May 2011].

3.29: Talley, Jerry L, and Eugene H Fram, ‘Using Imperfect Metrics Well: Tracking Progress and Driving Change’, Nonprofit Management: Dr. Eugene H. Fram, 2010 [accessed 23 April 2011].

3.30: Tierney, Thomas J, ‘Higher-Impact Philanthropy: Applying Business Principles to Philanthropic Strategies’, The Bridgespan Group, 2007  [accessed 4 May 2011].

3.31: Urban Institute, The, ‘Analyzing Outcome Information’ (The Urban Institute, 2004) [accessed 2011].

3.32: Urban Institute, The, ‘Building a Common Outcome Framework to Measure Nonprofit Performance’ (The Urban Institute, 2006) [accessed 1 July 2011].

3.33: Urban Institute, The, and The Center for What Works, ‘Key Steps in Outcome Management’ (The Urban Institute, 2003) [accessed 13 April 2011].

3.34: William and Flora Hewlett Environmental Program, ‘Doing good today and better tomorrow: A Roadmap to High Impact Philanthropy Through Outcome-Focused Grantmaking’ (William and Flora Hewlett Foundation, 1 June 2009), p. 27  [accessed 12 April 2011].

3.35: Wolk, Andrew, Anand Dholakia, and Kelly Kreitz, ‘Building a Performance Measurement System: Using Data to Accelerate Social Impact’ (Root Cause, 2009), p. 66 [accessed 25 April 2011].

3.36: Woodwell Jr., William H., and Lori Bartczak, ‘Is Grantmaking Getting Smarter? A National Study of Philanthropic Practice’ (Grantmakers for Effective Organizations, 2008), p. 14 [accessed 1 March 2011].

 



4.0: Bibliography Downloads

4.1: Download Bibliography in BibTeX format

4.2: Download Bibliography in RTF format
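
For readers unfamiliar with the format, here is a rough sketch of what a single entry in the downloadable BibTeX file might look like, built from the data in item 3.16 above. The entry key and the choice of @techreport as the entry type are our illustrative assumptions, not taken from the actual file:

    @techreport{cep2002indicators,
      author      = {{Center for Effective Philanthropy}},
      title       = {Indicators of Effectiveness: Understanding and
                     Improving Foundation Performance},
      institution = {Center for Effective Philanthropy},
      year        = {2002},
      note        = {p. 39; accessed 1 March 2011}
    }

Once the file is added to a LaTeX project, an entry like this can be cited with \cite{cep2002indicators} and rendered in whatever bibliography style the document specifies.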

 



The Gilbert Center's eagerly awaited report, "Does Evidence Matter to Grantmakers? Data, Logic, and the Lack thereof in the Largest U.S. Foundations," will be published in late February 2012.

(c)2011 by Michael C. Gilbert and Andie N. Whitley.  All rights reserved. This article is reprinted with permission from Nonprofit News.

