
Enhancing the capacity of community organizations to evaluate HIV/AIDS information outreach: A pilot experiment in expert consultation

Abstract

The National Library of Medicine’s AIDS Community Information Outreach Program (ACIOP) supports and enables access to health information on the Internet by community-based organizations. A technical assistance (TA) model was developed to enhance the capacity of ACIOP awardees to plan, evaluate, and report the results of their funded projects. It consisted of individual consultation offered by an experienced evaluator to advise on the suitability of proposed project plans and objectives, improve measurement and analysis, assist in problem resolution and outcomes reporting, and identify other opportunities for improvement. Group webinars and a moderated blog for the exchange of project-specific information were also offered.

Structured data collections in the form of reports, online surveys, and key informant telephone interviews provided qualitative feedback on project progress, satisfaction with the TA, and the perceived impact of the interventions on evaluation capacity building. The Model was implemented in the 2013 funding cycle with seven organizations, and the level of reported satisfaction was uniformly high. One-on-one TA was requested by four awardee organizations and was determined to have made a meaningful difference for three of them. Participation in the webinars was mandatory and high overall, and the webinars were deemed a useful means of delivering evaluation information. In subsequent funding cycles, awardees will be required to submit a logic model as a new Model intervention, in the expectation that it will produce stronger proposals and enable the evaluation consultant to identify intervention opportunities earlier, leading to project improvements and evaluation capacity enhancements.

1. Introduction

The U.S. National Library of Medicine (NLM), a part of the National Institutes of Health, is the world’s largest medical library. The AIDS Community Information Outreach Program (ACIOP) was launched by NLM’s Specialized Information Services division in 1994 to provide the affected HIV/AIDS community with access to the vital health information increasingly becoming available on the Internet [7]. Since that time, more than 300 awards have been made to community-based organizations and their partners to enable the acquisition of needed computer and communications equipment, facilitate user training in accessing HIV/AIDS information, and create locally meaningful and culturally relevant resource materials based upon the latest authoritative and evidence-based information available from NLM.

These Internet-accessible resources center around AIDSource (http://aids.nlm.nih.gov) and AIDSInfo (http://aidsinfo.nih.gov); the general purpose PubMed/Medline database (http://pubmed.gov) for research scientists and clinicians; the consumer health portal MedlinePlus (http://medlineplus.gov) for patients, families, and the general public; and a clinical trials database (www.clinicaltrials.gov) reporting current and recently completed clinical research on the newest HIV/AIDS treatment and prevention protocols. Additionally, the PubMed Central (http://www.ncbi.nlm.nih.gov/pmc) online library of full-text published journal articles places the results of publicly funded research in the hands of everyone, without cost. All are part of an extensive collection of curated databases and online portals that span such diverse domains as genomics and toxicology. Together they are searched billions of times each day [11]. Facilitating access to these information resources and enabling their use by the affected HIV/AIDS community is the underlying goal of the ACIOP. The program also plays a central role in NLM’s outreach efforts to address the challenge of reducing and ultimately eliminating health disparities in underserved and underrepresented minority populations [8].

A formal evaluation of the ACIOP by Columbia University colleagues in 2012 found that although most program objectives were being met, important deficiencies existed in a significant number of the awardees’ final reports that were studied. These sometimes took the form of critical omissions of what was accomplished, often traceable to incomplete or limited project planning and/or skills needed to measure impact. This led the Columbia team to recommend that NLM seek to enhance the evaluation capacity of awardees, particularly less experienced community-based organizations. The intent would be to encourage better project planning and evaluation by awardees, including more thorough reporting and documentation of project objectives, measurement tools, and observed outcomes [9].

An invitational stakeholders Workshop was convened by NLM in late 2012 with several previously funded and other leading community organizations, along with knowledgeable health leaders in HIV/AIDS, who endorsed this key recommendation. They considered alternative mechanisms for improving awardees’ evaluation capacity and agreed that this could best be accomplished by providing expert consultation to advance the capabilities of local project staff with limited evaluation experience. Additionally, they urged that means be explored to make awardees’ project methods and results more transparent and sharable among all awardees by encouraging the use of listservs and blogs [9]. NLM resolved to offer external technical evaluation consultation services on an experimental basis to newly funded awardees, without cost to them, throughout the one-year duration of an ACIOP project. Access to a blog and group webinars would also be provided.

2. The pilot experiment

In the following ACIOP funding cycle, year 2013, NLM made significant changes in ACIOP proposal preparation requirements, project reporting specifications, and requirements for documenting evaluated project results. In September 2013, seven ACIOP projects were awarded funding in the amount of $40,000 each for a one-year period. These projects constituted Cohort 1 of a planned iterative multi-year experiment in evaluation capacity building at the community level. Each cohort runs for one year; the first year is the subject of this paper. See Table 1 for project summary descriptions, including the geographic location of the projects and the target audience. The initial awardees included three projects whose proposals explicitly focused on the use of emergent mobile health technology, which is of increasing interest to NLM as a means for disseminating HIV/AIDS prevention and treatment information.

Table 1

2013 AIDS community information outreach project award descriptions

Name: Community Education Group
Location: Washington, D.C.
Project: “Project HIVE I” – Developed a health information virtual exchange that serves as a one-stop HIV/AIDS resource for smartphone use by African Americans.
Audience: Low income African Americans

Name: Comunidades Unidas
Location: West Valley City, Utah
Project: “Community Outreach Project to Increase Access to HIV/AIDS Health Information” – Deployed mobile technology by focusing on information retrieval skills development of promotoras (Hispanic health workers) in community-based organizations in Utah, and expanded their text messaging system and social media campaigns.
Audience: Hispanic health workers (promotoras) and Hispanics

Name: George Washington University School of Public Health
Location: Washington, D.C.
Project: “Health Information Partners (HIPS 8)” – Focused training on health information literacy with community health advocates and sponsored hands-on computer workshops in neighborhood health and education centers.
Audience: Low income African Americans

Name: Hispanic Communications Network
Location: Washington, D.C.
Project: “InfoSida Spanish-Language Outreach & Activation 2013” – Produced and distributed culturally and linguistically appropriate HIV/AIDS messages to syndicated radio stations and newspapers serving the Hispanic community.
Audience: Hispanics

Name: Morgan State University School of Community Health and Policy
Location: Baltimore, Maryland
Project: “IMPACT2 Project in Community Health” – Used community-based theater and experimented with a ‘photovoice’ methodology to improve access to electronic HIV/AIDS information resources.
Audience: Adolescent and young adult African Americans

Name: Philadelphia FIGHT
Location: Philadelphia, Pennsylvania
Project: “Patient. Click. Connect” – Provided persons living with HIV/AIDS (PLWHA) access to their electronic health records through a patient portal, and access to electronic health information resources.
Audience: Low income PLWHA

Name: YTH
Location: Oakland, California
Project: “YTH: HIV ACCESS Providers Project 2.0 (HAPP2.0)” – Employed a multi-media hub-and-spoke model of online, social media, and mobile platforms, building on previous work that created a mobile-compatible version of its website and connected it to Facebook.
Audience: Young adults of color

NLM contracted with the Oak Ridge Associated Universities (ORAU), headquartered in Oak Ridge, Tennessee, and its Oak Ridge Institute for Science and Education (ORISE), to provide the services of an evaluation consultant to assist the leadership and staff of these newly funded ACIOP projects. The 1:1 technical assistance provided by the evaluation consultant, Arletha Livingston, Ph.D., MPH, along with the development and implementation by NLM of a new blog (https://aciopblog.wordpress.com) and group webinars, constituted the experimental intervention.

Evaluation capacity is multi-faceted: Several operational definitions informed our goal of enhancing evaluation capacity at community-based organizations. The American Evaluation Association focuses on outcomes, and defines evaluation as a systematic process to determine merit, worth, value, or significance (www.eval.org) [2]. The Centers for Disease Control and Prevention (CDC) focuses on methodology and program improvement, and defines evaluation as the systematic collection of information about the activities, characteristics, and outcomes of strategies (i.e., programs) to make judgments about the strategy, improve strategy effectiveness, and/or inform decisions about future strategy development (www.cdc.gov/eval) [4]. NLM adds an emphasis on outcomes-based evaluation to determine what changes have been achieved and results accomplished, asking the question: Are you making a difference? [13]. When organizational evaluation capacity enhancement is the explicit goal, we can further specify a need to strengthen an organization’s ability to consistently measure success, make programmatic improvements, and provide funders with evidence of good fiscal stewardship [1].

Theoretical constructs help guide the experiment: The evaluation consultant framed the intervention as a technical assistance model grounded within the constructs of two behavioral theories. The first, Participatory/Community Empowerment Theory [6,18] holds that community-based organizations should be principal agents in the telling of their own project evaluation story. The second, Diffusion of Innovations Theory [12,14] sees evaluation as a new and valuable skill set to be disseminated, adopted, and used; an innovation that has the potential to enhance organizational effectiveness and impact project outcomes.

The following research questions were addressed:

  • What organizational and situational evaluation capabilities contribute to successful awardee project planning, implementation, and outcomes? How may these be facilitated and enhanced by the provision of technical assistance?

  • What constitutes an effective technical assistance intervention model? Which strategies work well, and which less so? Are there predictable differences in awardee organizational and/or project characteristics that affect the acceptance and utilization of offered consultation?

  • What are the characteristics of an effective evaluation consultant in this context? How were the projects positively impacted by the consultant? How did the consultant make a difference?

The focus of this paper is on answering these and related questions, thereby developing a better understanding of the evaluation capacity of community-based organizations, both as it exists initially and as it may be enhanced. Of primary interest is assessing the value of the individualized evaluation consultation provided in this first year, and the lessons learned that may lead to further improvements with ACIOP awardees in subsequent years.

3. Methods

Valuable insights were gleaned from a review of the evaluation research literature that helped establish operational benchmarks for identifying and applying evaluation best practices. These were informed by relevant theories from the disciplines of public health and communication research that suggested effective strategies for encouraging the adoption of these best practices by awardee organizations. ACIOP staff and the evaluation consultant developed, put in place, and pilot tested an Evaluation Capacity Building Model (The Model) to improve awardee project planning, implementation, evaluation, and documentation of outcomes. Interventions were strategically executed at key points in Cohort 1’s one-year project timeline of September 2013 to September 2014.

The Model interventions are in reality a ‘mixed model’, comprising structured data collections (Quarterly Reports and Surveys); ad hoc follow-up questions and responsive information exchanges (Follow-up Telephone Calls and Key Informant Interviews); group instructional resources to improve project reporting and enhance evaluation capacity (Webinars and Blog); and one-on-one individualized evaluation consultation (1:1 Technical Assistance). Additionally, an Evaluation Typology and a Case Studies rubric were developed and applied; they proved to be useful methodological tools. All constitute The Model and are described below.

Quarterly Reports and Follow-up Telephone Calls. In January, April, June, and October, awardees submitted online reports using an updated template consisting of 31 multiple-choice and short-answer questions. The reports characterized the organization demographically; identified progress achieved on project-specific activities (planning, implementation, and evaluation); and related the project’s overall scope and objectives to the ACIOP’s goal of enabling improved information access by the target audience. Each report was analyzed for completeness, and follow-up telephone calls were made by the evaluation consultant to remedy incomplete or ambiguous responses and to discuss barriers and challenges identified both individually and in the aggregate group responses.

Webinars. Three informational Webinars were presented in October, January, and April. The first Webinar was an overview of the new project reporting and evaluation requirements, the availability of a Blog for information and resource sharing, and the availability of individualized 1:1 technical assistance provided by the evaluation consultant. The second Webinar demonstrated the new online quarterly reporting system. The third Webinar was held to qualitatively assess the group’s adoption of evaluation technical assistance and to allow all awardees to learn about other organizations’ project activities. (An initial post-award orientation webinar had been scheduled for early October but was canceled due to the federal government shutdown.)

Blog. All awardees were encouraged to contribute to an NLM-moderated blog (https://aciopblog.wordpress.com) created specifically to store and share sample data collection instruments, PowerPoints, awardee project interests, and evaluation resources.

Mid-Year and End-of-Year Surveys. Self-administered online surveys were fielded in July and December. They consisted of 36 multiple-choice and short-answer questions that elicited feedback from awardee organizations on the perceived value and effectiveness of specific intervention strategies (project reporting templates, Webinars, and Blog), and especially on the 1:1 evaluation technical assistance provided.

Key Informant Interviews. Conducted by the evaluation consultant with project principals during July and August, these in-depth telephone interviews were scheduled in advance and lasted 30–45 minutes. The interviews covered approximately 25 pre-coded questions in six categories, served as a major source of information about an organization’s existing evaluation capacity and its use of evaluation technical assistance, and traced how organizations with prior-year funding were able to advance their evaluation capacity. They also served as an initial basis for developing comprehensive Case Study summaries of each awardee organization and its project.

One-on-One Evaluation Technical Assistance. Additional telephone calls and email communications were initiated by the evaluation consultant and took place periodically with the four of the seven awardee organizations that exercised their option to request individualized one-on-one (1:1) technical assistance. Three of the four organizations self-reported having relatively advanced evaluation experience. Technical assistance could take the form of evaluation instrument development, instruction in reporting project activities, recommendations for needed staff training, and custom recommendations for enhancing project activities, such as incorporating mobile health technology into project plans and capturing evaluation data on outreach activities through social network mapping.
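To make the social network mapping suggestion concrete, the sketch below uses the networkx library to compute a simple reach metric for peer educators. The educator and contact names are hypothetical, and this illustrates the general technique only, not the consultant’s actual method.

```python
# Hypothetical sketch: representing outreach contacts as a social network
# and measuring each peer educator's reach. Names are invented for
# illustration; this is not the consultant's actual procedure.
import networkx as nx

G = nx.Graph()
# Each edge links a peer educator to a community member they reached.
outreach_contacts = [
    ("educator_A", "resident_1"), ("educator_A", "resident_2"),
    ("educator_B", "resident_2"), ("educator_B", "resident_3"),
    ("educator_B", "resident_4"),
]
G.add_edges_from(outreach_contacts)

# Degree centrality (degree / (n - 1)) gives a simple, comparable
# measure of how many community members each educator reached.
centrality = nx.degree_centrality(G)
for educator in ("educator_A", "educator_B"):
    print(educator, round(centrality[educator], 2))
# -> educator_A 0.4, educator_B 0.6
```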

Evaluation Typology. A typology of organizational evaluation capacity was adapted by the evaluation consultant from earlier research [19], based in part on other work reported by Volkov & King [16], Maton & Salem [10], and Chaskin [5]. The typology was used to characterize each awardee organization across several structural (fixed) and situational (modifiable) dimensions. The former include the age/experience of the organization, organization size, and number of clientele served. The latter include access to internal evaluation champions and external strategic partners, leadership’s evaluation knowledge, and frequency of evaluation utilization. The seven awardee organizations were scored on each dimension based upon their responses to questions in the quarterly reports, surveys, key informant interviews, and technical assistance calls. The evaluation consultant assigned each organization to one of three categories of evaluation capacity, subjectively weighting some dimensions more heavily than others and also taking into account each organization’s own self-categorization. The organizations were classified as having above average knowledge and experience; average evaluation knowledge and relatively less frequent use of evaluation; or below average capacity, reflecting minimal knowledge and no use of evaluation. Of the seven Cohort 1 organizations represented in the experiment, five were scored above average, two average, and none below average on their initial evaluation capacity.
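One way such dimension scoring and categorization might be operationalized is sketched below. The dimension names follow the paper, but the weights, scales, and thresholds are invented for illustration; they are not the consultant’s actual rubric, which was applied subjectively.

```python
# Hypothetical scoring sketch for the evaluation-capacity typology.
# Dimension names follow the paper; weights and cutoffs are illustrative
# assumptions, NOT the consultant's actual (subjective) rubric.

# Each organization is scored 0-2 per dimension (0 = weak, 2 = strong).
DIMENSION_WEIGHTS = {
    # structural (fixed) dimensions
    "age_experience": 1.0,
    "organization_size": 1.0,
    "clientele_served": 1.0,
    # situational (modifiable) dimensions, weighted more heavily here
    "internal_evaluation_champions": 1.5,
    "external_strategic_partners": 1.5,
    "leadership_evaluation_knowledge": 2.0,
    "evaluation_utilization_frequency": 2.0,
}

def capacity_category(scores: dict[str, int]) -> str:
    """Map weighted dimension scores to one of three capacity levels."""
    max_total = sum(2 * w for w in DIMENSION_WEIGHTS.values())
    total = sum(scores[d] * w for d, w in DIMENSION_WEIGHTS.items())
    ratio = total / max_total
    if ratio >= 0.7:
        return "above average"
    if ratio >= 0.4:
        return "average"
    return "below average"

# Example: an organization strong on most situational dimensions.
example = {
    "age_experience": 2, "organization_size": 1, "clientele_served": 2,
    "internal_evaluation_champions": 2, "external_strategic_partners": 1,
    "leadership_evaluation_knowledge": 2, "evaluation_utilization_frequency": 1,
}
print(capacity_category(example))  # -> "above average"
```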

Case Studies. Comprehensive case studies were developed for each awardee organization that systematically described its mission, demographic characteristics, project objectives and evaluation outcomes, the use made of NLM’s information resources, existing organizational evaluation capacity, and of particular interest, the use made of available individualized 1:1 evaluation technical assistance.

The Case Studies served as a comprehensive descriptive resource from which to identify and report the experiment’s main findings, presented below in the Results section and derived from a qualitative analysis of each intervention comprising The Model. The Case Studies also contain a wealth of detailed project-specific activity data, including many quantitative measures (e.g., frequency counts of various kinds) pertinent to documenting a project’s performance. Those data are not reported here, as they are not germane to the more general purpose of this paper: assessing the value and scalability of The Model tested in the pilot experiment.

In summary, The Model incorporates several structured data collections, administered iteratively at key points during project implementation, that provide feedback to ACIOP program managers, the evaluation consultant, and project principals. These consist of periodic (quarterly and final) reports of project progress and general purpose online surveys that include both open-ended and closed questions, subsequently followed by key informant telephone interviews with project personnel probing project progress, difficulties encountered and overcome, satisfaction with the technical assistance offered and received, and the perceived impact of the interventions on evaluation capacity building.

4. Results

Overall, the utilization and effectiveness of The Model and its interventions were positive but somewhat uneven, as described in the following summary and in the details below:

  • The 1:1 technical assistance was requested by four of the seven organizations, and it had a meaningful impact on the projects of three of the four. Satisfaction with the technical assistance offered, both general and 1:1, was uniformly positive.

  • There was a 90% attendance rate at the group Webinars; however, vocal participatory contributions by the organizations’ representatives tended to be low, and it was not possible to assess the Webinars’ utility in real time apart from the generally positive comments offered later in survey and interview responses. Five awardees reported accessing the Blog regularly; one organization said it did not find it useful, and only two organizations posted content. The Blog did not realize its potential as a mechanism for individual projects to share and exchange information on the scale anticipated.

  • The organizations reported a positive effect of the interventions on their ability to effectively plan, collect data, evaluate and report their project activities and outcomes. The level of detail of the quarterly reports increased with each reporting cycle. The awardee organizations appear to have benefited from the additional structure of the reporting templates, aided also by helpful group and individualized instruction on how to provide the needed information.

  • Organizational and evaluation capabilities posited a priori by the typology, and found to contribute to successful project implementations and outcomes were: age and experience >10 years; clientele served >250 persons; a clearly defined mission that supports evaluation; evaluation champions inside the organization; many external strategic partners; leadership’s evaluation knowledge above average; evaluation techniques utilized often; dedicated evaluation resources and staff; low staff turnover; and staff open to new ideas.

  • Barriers to success in achieving project objectives included new staff not being oriented and trained properly; organizations not having a credible evaluation plan at the outset; and organizations not taking advantage of the free 1:1 evaluation technical assistance offered by the evaluation consultant.

  • Six organizations reported successfully achieving their project and evaluation goals and objectives to their satisfaction. However, underestimating the difficulty of accomplishing project objectives can be problematic; in one case, for example, building a new website required external technical support because the work was beyond the organization’s internal skill set.

It is instructive to drill down within the individual Case Studies to assess The Model’s ability to enhance evaluation capacity and to impact project planning, reporting, and outcomes. We begin with the findings for the four organizations that requested 1:1 evaluation technical assistance, as these are of particular interest: George Washington University, YTH, Morgan State University, and Comunidades Unidas.

George Washington University School of Public Health is one of two academic institutions represented in Cohort 1. It has received ACIOP funding in the past, and its outreach efforts target the health education needs of underserved residents in the District of Columbia. The aims of the project, “Health Information Partners (HIPS 8)”, were to increase the public’s knowledge of science-based HIV and health information, critical appraisal skills, and community engagement in reducing the HIV epidemic. The organization has above average evaluation capacity; it sought 1:1 technical assistance in developing an outreach evaluation plan, confirming the plan’s suitability, and brainstorming with the evaluation consultant on means to measure project impact. This entailed specifying the types of evaluation questions to ask, pretesting them, and identifying the kinds of outcomes/impacts to expect. The consultation was robust, extending to the future use of GIS social network mapping analysis and mobile health technology to track the reach of peer educators in the community. The organization also networked with another awardee, YTH, to generate ideas about how to use tablets to track peer education outcomes. It has also involved community health advocates more in developing its evaluation plans, a positive outcome of the evaluation consultant’s efforts to promote community engagement in the evaluation process.

YTH, Internet Sexuality Information Services, Inc. offers health providers in the San Francisco Bay area an abundance of critically needed information services and training platforms that enable them to address adolescent HIV and STD prevention and sexual health, and to effectively use social media (e.g., Tweet Chat; Social Media Lounge) and related mobile health technology customized for at-risk youth of color. The HAPP2.0 project focused on establishing access links to NLM’s information resources from their web pages, and instructing youth-serving providers on the best means to make use of them within their communities. The organization was judged to have above average evaluation capacity, and this richness of experience was observed to be a motivator for requesting additional advice in the form of 1:1 technical assistance offered by the evaluation consultant. In numerous conference calls, the consultant confirmed the suitability of the existing evaluation plan, and brainstormed with the organization to identify appropriate tracking strategies and metrics that helped ensure that both project and evaluation objectives were achieved. The consultant also encouraged posting the awardee’s Tweet Chat to the ACIOP blog. Responses by this organization were very positive with regard to their satisfaction with the technical assistance interventions offered by the evaluation consultant, and the content presented in the Webinars.

Morgan State University School of Community Health and Policy is the second academic institution studied. The organization’s project, IMPACT2 (Intergenerational Model for Parents, Adolescents, and Children in Technology and Training) uses community theater to provide training in accessing evidence-based HIV/AIDS information on the Internet. A novel ‘photovoice’ methodology blends the oral tradition of African-American culture with over 100 pictures taken by community participants that, together, enabled viewers to see HIV/AIDS through the eyes of others. Photovoice allowed participants to verbally tell their story which added context and value to the online information resources being shared. The organization was judged to have average evaluation capacity, and it requested and received extensive 1:1 evaluation technical assistance. The evaluation consultant was sensitive to the value of an oral history methodology, and helped the organization to develop a plan for analyzing and utilizing suitable qualitative measures that are appropriate for contextualizing a participant’s experience. The consultant also encouraged the sharing of stories in an online community forum, and on the ACIOP blog. Project reports indicate that user training and numerous quantifiable learning objectives were achieved.

Comunidades Unidas partners with local community groups and the health sciences library at the University of Utah to introduce NLM’s HIV/AIDS information resources to the Latino population. The funded project focused on improving information retrieval skills development for health outreach workers (promotoras) and the staff of local community-based organizations. A key initiative was to expand their text messaging system and social media campaigns to increase awareness of NLM resources, and promote access through mobile phone and related technology. The organization was judged to have average evaluation capacity. It reported that its staff members used the ACIOP blog for evaluation resources, and attended the Webinars. The organization requested and received 1:1 technical assistance from the evaluation consultant. Although the evaluation plan as outlined in the proposal was very strong, the capacity to go beyond collecting data was not evident. The evaluation consultant suggested staff training to improve analysis and reporting skills, but staff members were unable to add the task to their work duties. In part this may have been affected by the unanticipated resignation and relocation of the project manager which resulted in the need to focus on meeting operational project goals. Evaluation objectives were not achieved, and only anecdotal evidence of reported project success was available at the conclusion of the award period.

Organizations that did not request 1:1 evaluation technical assistance:

Community Education Group developed a Health Information Virtual Exchange (Project HIVE I), a HIPAA-compliant application that can serve as a one-stop HIV/AIDS information source for African-Americans in selected areas of the District of Columbia. The project sought to capitalize on the growing use of smartphone technology by focusing on the increasing availability of electronic AIDS-related information resources on mobile platforms, and on their potential use as a communications delivery tool for disseminating customized, repurposed information. The organization had previous ACIOP awards and intended to build on the prior successes of those projects. But the migration of its operation from a former website developer to new software promising heightened security, flexibility, and customer service did not proceed as planned. The website was not fully executed and could not be launched during the award period; a complete rebuild was determined to be needed. Staff turnover, reported difficulty dealing with the Web contractor, and an underestimation of the technical difficulty of accomplishing the changes were likely contributing factors. The organization was judged to have above average evaluation capacity, participated in the Webinars, and reported using the Blog. It did not request 1:1 technical assistance from the evaluation consultant beyond determining content for the quarterly reports, and did not perceive the evaluation technical assistance offered as germane to overcoming the information technology challenges that were paramount.

Hispanic Communications Network is a social marketing agency that focuses on disseminating health information to the Hispanic community, including through Spanish-language radio, for which it produces and syndicates culturally and linguistically appropriate programming messages. The project sought to leverage its national media network, online resources, and partnerships with Hispanic-serving HIV/AIDS organizations to increase awareness of, and access to, NLM’s HIV/AIDS information resources. Messages were distributed to radio stations, newspaper columns, and digital social media. The organization was judged to have above average evaluation capacity, including use of an independent evaluator. It did not request 1:1 technical assistance from the ACIOP evaluation consultant, stating that no further assistance was needed and that all project objectives were achieved. However, the consultant judged that its evaluation plan relied too heavily on social media analytics and would have benefited from a more comprehensive approach that included outcome and impact measures, as well as qualitative methods for added context. The organization participated in the Webinars but did not use the Blog.

Philadelphia FIGHT is a comprehensive AIDS service organization with a mission to provide state-of-the-art, culturally competent, primary and specialty care to low income members of the community. Consumer education, advocacy, social services, and outreach to persons at high risk or living with HIV are also provided. It has previously received ACIOP awards. The “Patient. Click. Connect” project provided hands-on workshop training and support for persons living with HIV/AIDS (and their medical providers) to access their electronic health records through a patient portal, and also linked them to NLM’s online information resources. The organization was judged to have above average evaluation capacity, and although it made use of the Webinars and Blog, it did not request 1:1 technical assistance from the evaluation consultant. The depth and breadth of the organization’s reporting, and its reported satisfaction with having exceeded the project’s goals, suggest that this is an instance where existing evaluation capacity was sufficient to achieve meaningful results without additional technical assistance.

5. Discussion

The following research questions were at the center of this pilot experiment, and helped guide the interpretation of results:

What organizational and situational evaluation capabilities contribute to successful awardee project planning, implementation, and outcomes? Best practices supportive of situational evaluation capacity building are having staff who are properly oriented and well trained; the existence of an explicit project evaluation plan; good communication with NLM and its consultants in identifying and resolving problems encountered; networking and information sharing between awardee organizations; and having evaluation champions inside and outside the organization. NLM can potentially facilitate all of these situational factors through its evaluation interventions, but not the structural factors that are also associated with good evaluation outcomes (e.g., mature organizational age, substantial size, and established reach). However, NLM could assign greater weight to, and fund, proposals from organizations that do evidence such characteristics.

How may the positive situational variables associated with good evaluation outcomes be facilitated and enhanced? We found that offering technical assistance where and when needed, in the form of individual 1:1 expert consultation by telephone calls and email messages, could be an effective mechanism for confirming the suitability of existing evaluation plans, brainstorming new ideas, creative problem solving, and offering useful suggestions for user survey questions or measurement statistics. Organizations may be offered strategic guidance on the value of community engagement as a means to promote mutual understanding, or help establishing a collaboration with a local external evaluation champion possessing needed evaluation skills. However, offered advice may not always be accepted and acted upon as we observed in one instance when a recommendation for increased staff training was not implemented. This can be expected when extenuating circumstances prove difficult to overcome, despite the best intentions of the organization and NLM. Also, NLM and/or its evaluation consultant may not necessarily possess the technical skills needed to anticipate or prevent a damaging project delay as occurred when a website migration could not be implemented in time to achieve the project’s objectives.

What constitutes an effective technical assistance intervention model? Which strategies work well, and which less so? A Blog appears to offer minimal value as a passive means to share project tools or other materials that can enhance evaluation capacity. Organizations’ staff may have too little time to engage in activities that offer little immediate prospect of benefit in the form of content applicable to achieving the project’s objectives or evaluation goals. On the other hand, mandating attendance at structured group Webinars is a relatively low-cost and easily scalable intervention that can impart general information that strengthens evaluation capacity, although it is less effective than 1:1 technical assistance in addressing individual project questions or concerns. That intervention, in turn, is more costly and less easily scalable. Ultimately, a cost-benefit analysis will be needed to assess value that cannot now be reliably estimated from a pilot experiment that included only seven organizations. However, our calculations suggest that the additional cost of offering the services of an evaluation consultant was only approximately 8% of the total funded cost. It is not known whether the allocated level of effort could have accommodated the needs of all seven organizations had each one requested 1:1 individual technical assistance.
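For concreteness, the consultant cost implied by the figures stated in this paper (seven awards of $40,000 each, and a consultant cost of roughly 8% of the total funded cost) can be reconstructed as follows; the resulting dollar amount is an inference from those figures, not a number reported by the authors.

```python
# Back-of-envelope check of the ~8% consultant-cost figure using the
# award amounts stated in Section 2. The dollar result is implied by
# those figures, not reported in the paper.
total_awards = 7 * 40_000   # seven awards of $40,000 each
consultant_share = 0.08     # "approximately 8% of the total funded cost"

print(total_awards)                    # 280000
print(consultant_share * total_awards) # 22400.0 -> roughly $22,000
```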

What are the characteristics of an effective evaluation consultant in this context? The ability to form effective and trusting professional relationships with the principals and staff of awardee organizations is clearly important. It is at once a means to encourage and sustain buy-in, and to reduce a tendency to perceive external evaluators as a potential threat rather than a source of new insights, problem-solving skills, and evaluation capacity building. First-hand experience working with community-based organizations is an asset, as they are a major ACIOP awardee category with unique needs, strengths, and limitations that call for sensitivity and understanding. This is especially true for minority-serving organizations, where socio-economic, cultural, and language differences can present formidable barriers to successful collaboration. The evaluation consultant should also be proactive and strongly motivated to make a difference in helping an organization achieve its project objectives and improve its evaluation capacity. While a solid skill set encompassing the traditional evaluation methodologies used in formative and summative program evaluations is necessary, so too is an awareness of, and experience with, so-called ‘alternative metrics’ (altmetrics) that are becoming increasingly commonplace in the evaluation of Internet-based social media usage and impact [17]. These statements are based on the self-appraisal of the evaluation consultant along with the concurrence of NLM management.

Are there predictable differences in awardee organizational and/or project characteristics that affect the acceptance and utilization of offered consultation? At this juncture, this question cannot be answered with certainty; additional research is needed. Of the four organizations requesting 1:1 technical assistance, two were judged to have above average evaluation capacity and two average capacity. (None were judged to have below average evaluation capacity, as such organizations may have been eliminated from funding consideration due to weaker project proposals and/or systematically downgraded on that criterion during the review process.) We would expect organizations possessing only average evaluation experience to be receptive to the offer of technical assistance, and this was found to be the case. We were also pleased to see that two organizations having better than average capabilities perceived this as an opportunity to benefit from the skill set and experience of an expert evaluator. The remaining three funded organizations that did not request 1:1 technical assistance were judged to be of above average capacity, and they communicated to the evaluation consultant that they had no need for such assistance. Whether their projects could have been improved beyond their own expectations is not known. But in at least one instance, the evaluation consultant felt that an organization’s evaluation plan relied too heavily on social media analytics and would have benefited from a more comprehensive approach using additional qualitative measures. Another project, as noted previously, became sidetracked by technical website development issues, suggesting that the evaluation consultant might have helped identify this incipient problem had the organization’s project plan been more inclusive and precise, with details of the kind required by a ‘logic model’.

How can a logic model become part of an intervention model to enhance evaluation capacity? We have begun to experiment with a requirement that ACIOP projects provide a logic model: a visual representation of the project that illustrates how planned activities are linked to project results. Logic models come in many different formats, but they all share the perspective of an “if… then” statement: “If we obtain the necessary resources and conduct certain activities, we will achieve our desired outcomes” [13]. Eight Cohort 2 awardees for 2014 are currently being assisted by the evaluation consultant in using a logic model template that specifies project inputs, activities, outputs, outcomes, and impact. Inputs are the resources needed, including people, time, money, materials, equipment, and technology. Activities are what you do (e.g., conduct training sessions, provide services), and outputs capture who is reached (e.g., participants, agencies, community-based organizations). Outcomes are the results or benefits of the project, including short-term outcomes such as changes in knowledge; intermediate outcomes such as changes in behavior; and long-term outcomes, or impacts, such as changes in healthcare access and individual or population/community health status. An evaluation logic model is part of the project logic model, and describes how the project will be evaluated to support and document the overall project outcomes.
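A minimal sketch of the kind of logic model template described above, expressed as a simple data structure, is shown below; the specific entries are hypothetical illustrations, not drawn from an actual Cohort 2 submission.

```python
# Hypothetical logic-model template mirroring the components named in
# the text (inputs, activities, outputs, outcomes/impact). Entries are
# illustrative, not from an actual ACIOP submission.
logic_model = {
    "inputs":     ["staff time", "award funds", "training materials",
                   "NLM online information resources"],
    "activities": ["conduct training sessions", "run social media campaign"],
    "outputs":    ["participants trained", "partner organizations reached"],
    "outcomes": {
        "short_term":   "increased knowledge of NLM HIV/AIDS resources",
        "intermediate": "changed information-seeking behavior",
        "long_term":    "improved access to health information (impact)",
    },
}

# The implied "if... then" reading: if the inputs support the
# activities, then the outputs follow, and each level of outcome can
# be measured against an explicit, stated expectation.
for component, entries in logic_model.items():
    print(component, "->", entries)
```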

6. Conclusions and future plans

The Evaluation Capacity Building Model will continue to be the subject of research with new ACIOP awardees, building a critical mass of experience with more organizations and projects upon which to reliably judge The Model’s effectiveness and the lessons learned. Modifications to the interventions will be introduced as appropriate. The introduction of a project logic model occurred in mid-year of the Cohort 2 award cycle, partly in response to the hypothesis that a more proactive intervention by the evaluation consultant may be needed to identify, earlier, potential problem areas in project planning and/or evaluation where individual technical assistance could help. NLM is also requiring that all new funding proposals for 2015, which will constitute Cohort 3, include a logic model as part of the proposal package. It is hypothesized that this will also encourage organizations to devote greater effort to advance project planning and evaluation, thus producing stronger proposals.

The ACIOP funding solicitation for 2015 will also be modified conceptually to place a new emphasis on attracting proposals that target men who have sex with men (MSM), an especially high-risk subpopulation of younger males that accounts for a preponderance of new HIV infections in the U.S. [3]. Additionally, there is renewed emphasis on attracting proposals that employ mobile health technologies and social networking services, which are increasingly relied upon as easily accessible communications media and sources of health information [15]. Both changes will be reviewed at an invitational Workshop scheduled for Spring 2017, along with any impact on strategies for enhancing the evaluation capacity of organizations working with this target population and/or employing these rapidly proliferating and accessible communications media. NLM is also expanding its efforts in the application of responsive design and other means to effectively use these Internet-based platforms for information dissemination and to strengthen its brand as a trusted source of health information.

7. Limitations

Enhancing an organization’s evaluation capacity may be facilitated by addressing situational (modifiable) factors that are amenable to interventional change, such as training a stable and motivated staff. Structural (fixed) weaknesses inherent, for example, in relatively young organizations of small size and limited reach are less amenable to change by NLM. The latter organizations were likely selected out by the competitive ACIOP contract award process, which now gives greater weight in its review criteria than in previous years to an organization’s prospects of having at least a reasonably good chance to plan and evaluate its proposed project. This limits our understanding of The Model’s robustness in the present study, as only organizations characterized (post hoc) as having above average or average evaluation knowledge and experience were included; none was below average. Another limitation is that a test of any model having only seven subjects must be viewed as preliminary, as is the present pilot; a critical mass of additional projects is needed. It remains for The Model to be used with additional awardee organizations in future funding cycles, which will further inform our understanding of its strengths and weaknesses relative to NLM’s ability to provide appropriate and cost-effective technical assistance. However, it will likely remain a practical limitation that organizations having below average evaluation capacity will continue to be underrepresented: NLM’s ability to fund ACIOP project proposals is finite, and the overriding goal must be to continue supporting organizations that have a reasonably good chance of achieving their outreach objectives, not to provide a comprehensive test of The Model’s robustness across a broad continuum of organizations possessing strong to weak pre-award evaluation capacity.

References

[1] T.H. Akintobi, E.M. Yancey, P. Daniels, R.M. Mayberry, D. Jacobs and J. Berry, Using evaluability assessment and evaluation capacity-building to strengthen community-based prevention initiatives, Journal of Health Care for the Poor and Underserved 23(2) (2012), 33–48. doi:10.1353/hpu.2012.0077.

[2] American Evaluation Association, About us, http://www.eval.org/p/cm/ld/fid=4.

[3] Centers for Disease Control and Prevention, HIV among gay and bisexual men, last updated August 12, 2015, http://www.cdc.gov/hiv/group/msm/.

[4] Centers for Disease Control and Prevention, Program Performance and Evaluation Office (PPEO) – program evaluation, last updated May 18, 2015, http://www.cdc.gov/eval/.

[5] R.J. Chaskin, Building community capacity: A definitional framework and case studies from a comprehensive community initiative, Urban Affairs Review 36(3) (2001), 291–323. doi:10.1177/10780870122184876.

[6] J.B. Cousins and L.M. Earl, The case for participatory evaluation, Educational Evaluation and Policy Analysis 14(4) (1992), 397–418. doi:10.3102/01623737014004397.

[7] N.C. Dancy and G.A. Dutcher, HIV/AIDS information outreach: A community-based approach, Journal of the Medical Library Association 95(3) (2007), 323–329. doi:10.3163/1536-5050.95.3.323.

[8] N.C. Dancy, M.L. Rockoff, G.A. Dutcher, A. Keselman, R. Schnall, E.R. Siegel and S. Bakken, Empowering patients and community online: Evaluation of the AIDS community information outreach program, Information Services and Use 34(1–2) (2014), 109–148. doi:10.3233/ISU-140720.

[9] N. Dancy-Scott, G.A. Dutcher, A. Keselman and E.R. Siegel, NLM workshop marks 20 years of community outreach and capacity building in HIV/AIDS, Journal of Consumer Health on the Internet 18(4) (2014), 357–366. doi:10.1080/15398285.2014.952999.

[10] K.I. Maton and D.A. Salem, Organizational characteristics of empowering community settings: A multiple case study approach, American Journal of Community Psychology 23(5) (1995), 631–656. doi:10.1007/BF02506985.

[11] National Institutes of Health and National Library of Medicine, About the National Library of Medicine, last updated June 12, 2015, https://www.nlm.nih.gov/about/.

[12] B. Oldenburg and K. Glanz, Diffusion of innovations, in: Health Behavior and Health Education: Theory, Research, and Practice, 4th edn, K. Glanz, B.K. Rimer and K. Viswanath, eds, Jossey-Bass, San Francisco, 2008.

[13] C.A. Olney and S. Barnes, Collecting and Analyzing Evaluation Data, 2nd edn, National Network of Libraries of Medicine, Outreach Evaluation Resource Center, Seattle, 2013.

[14] E.M. Rogers, Diffusion of Innovations, 5th edn, Free Press, New York, 2003.

[15] R. Schnall, J. Travers, M. Rojas and A. Carballo-Diéguez, eHealth interventions for HIV prevention in high-risk men who have sex with men: A systematic review, Journal of Medical Internet Research 16(5) (2014), e134. doi:10.2196/jmir.3393.

[16] B. Volkov and J. King, A checklist for building evaluation capacity, 2007, https://www.wmich.edu/sites/default/files/attachments/u350/2014/organiziationevalcapacity.pdf.

[17] K. Weller, Social media and altmetrics: An overview of current alternative approaches to measuring scholarly impact, in: Incentives and Performance: Governance of Research Organizations, I.M. Welpe, J. Wollersheim, S. Ringelhan and M. Osterloh, eds, Springer International Publishing, New York, 2015. doi:10.1007/978-3-319-09785-5_16.

[18] J.S. Wholey, Evaluability assessment, in: Handbook of Practical Program Evaluation, 2nd edn, J.S. Wholey, H.P. Hatry and K.E. Newcomer, eds, Jossey-Bass, San Francisco, 2004.

[19] A. Williams-Livingston, Building evaluation capacity in public health community-based organizations serving minority populations, Master’s thesis, Emory University Rollins School of Public Health, 2010.