Interagency collaboration is one of the most highly recommended practices in transition to adulthood for youth with disabilities, but it is also one of the least empirically understood. Recent literature cites the need to clarify collaboration as a construct and to focus on collaborative practices and processes as a supplement to research on antecedents and outcomes.
This exploratory, mixed methods research seeks to highlight collaborative practices and processes in NYS PROMISE, a statewide project aiming to improve outcomes for youth with disabilities who receive Supplemental Security Income. The project used interagency agreements to specify required service coordination processes, communications, information sharing, professional development, and cross-training.
The mixed methods approach combines three studies. In Study 1, the Levels of Collaboration Survey (LCS, Frey et al., 2006) provided data on regional network changes over time. In Study 2, an organizational attitudes and experiences (OAE) survey measured satisfaction with partner communications. In Study 3, qualitative analysis of biannual on-site interviews helped contextualize the characteristics of partnerships that project staff found effective.
In a descriptive sense, the LCS showed increases in regional cohesion, peaking in the project’s penultimate year. Changes were statistically significant in at least one region. OAE responses showed regional increases in satisfaction with partner communications. Qualitative analysis indicated that characteristics of partnerships described as effective included: (a) joint objectives and clearly defined roles; (b) extensive sharing of information and resources; and (c) frequent communication (formal and informal). Staff turnover, inadequate organizational capacity, and partners not executing key functions were described as barriers to productive collaboration.
Feedback from project staff provides clarification of the experiences of agency staff coordinating services, and helps refine certain constructs and assumptions common in collaboration research about transition. The authors discuss implications for future research and the development of sustainable systems of interagency collaboration in the field of transition.
Interagency collaboration is often cited as a best practice in transition to adulthood for youth with disabilities (Oertle & Trach, 2007; Noonan, Morningstar, & Erickson, 2008; Wehman, Schall, & Carr, 2014; Kohler, Gothberg, Fowler, & Coyle, 2016). Emphasis usually falls on relations between secondary education and the adult rehabilitation and independent living systems, where systems coordination can mitigate service gaps (Oertle, Trach, & Plotner, 2013). Practices that systematically involve parents and families in the transition process are commonly supported (Henninger & Taylor, 2014; Biswas, Tickle, Golijani-Moghaddam, & Almack, 2016). Service coordination in transition planning is also a legal requirement under the Rehabilitation Act of 1973 (29 U.S.C. §723(b)(7)) and the 2004 Individuals with Disabilities Education Act (IDEA) reauthorization (20 U.S.C. §1401(34)). Although service system collaboration is one of the most highly recommended practices in transition, it is also one of the least empirically understood (Fabian, Dong, Simonsen, Luecking, & Deschamps, 2016; National Council on Disability, 2008). Recent literature emphasizes the need for further empirical work to identify effective practices, clarify collaboration as a construct, and supplement research on antecedents and outcomes with a focus on systems-level processes that increase collaboration (Morningstar & Mazzotti, 2014; Fabian et al., 2016). To contribute to these need areas, we used an exploratory, mixed methods approach with evaluation data collected from agencies, public schools, and non-profits engaged in New York State Promoting the Readiness of Minors in Supplemental Security Income (NYS PROMISE), a five-year project providing case management and transition services to youth with disabilities receiving Supplemental Security Income.
Promoting the Readiness of Minors in Supplemental Security Income (PROMISE) is a joint federal research demonstration of the U.S. Departments of Education (USDOE), Health and Human Services, and Labor, with evaluation support for the demonstration from the Social Security Administration. PROMISE funded six model demonstration projects to address barriers to economic independence and promote successful post-school education and employment outcomes for youth who receive SSI, providing approximately $230 million to demonstration projects in Arkansas, California, Maryland, New York, Wisconsin, and a six-state consortium comprising Arizona, Colorado, Montana, North Dakota, South Dakota, and Utah.
One objective of NYS PROMISE was to strengthen existing systems to better support youth during the transition to adulthood, through formal cross-agency service delivery and coordination practices, supplemented by professional development, organizational capacity, and cross-training opportunities at state and regional levels. The formal coordination framework of NYS PROMISE increased the required minimum level of interagency coordination and communication above pre-existing levels. To increase formal, task-oriented interagency service coordination, the project utilized an existing centralized data collection and information sharing system (New York State Employment Services System, or NYESS), embedded within regional networks of local employment service providers. During the project, use of the NYESS system for recordkeeping, service referral, and payment was expanded to include regional networks of participating public schools, community service providers, and parent resource centers. Using this centralized data system, school, provider, and parent center staff had contractual responsibilities for service planning, delivery, and reporting. School-based case managers and parent center family coaches met with intervention group youth and families quarterly to determine appropriate referrals geared towards financial, employment, educational, and/or independent living outcomes, after which providers carried out and recorded services, initiating review and approval by the case manager or family coach. Contracts with schools, service agencies, and parent centers varied in scope, service requirements, and payment-for-service method, as field conditions warranted. Individual contracts were modified, cancelled, or initiated with new service providers as the intervention evolved. 
Rather than viewing the project as a discrete, externally-derived intervention, in this paper we focus on systems-level change within pre-existing regional service networks as a result of required frequency and formality of contact. In identifying themes, we consider both practices and processes that were included in the project intervention, as well as those developed contextually by project staff.
In the field of transition to adulthood for youth with disabilities, interagency collaboration entails both formal and informal relationships between youth and adult systems, in which communications, information, and resource sharing help achieve joint transition goals and service coordination (Noonan, Morningstar, & Erickson, 2008; Test, Fowler, White, Richter, & Walker, 2009). Different theoretical and operational definitions focus alternately on synergistic elements (e.g., leadership, trust, mutuality; Weiss, Anderson, & Lasker, 2002; Thomson, Perry, & Miller, 2007) and more instrumental elements such as frequency of communication, resource sharing, formality, shared decision making, and role clarity (Frey, Lohmeier, Lee, & Tollefson, 2006; Noonan, McCall, Zheng, & Erickson, 2012; Povenmire-Kirk et al., 2015). The assumptions and constructs that underlie collaboration as a best practice still need refinement. For example, Fabian et al. (2016) utilized two different measures of collaboration among interagency teams, finding that instrumental or task-oriented perceptions of collaboration had a positive effect on vocational rehabilitation outcomes, while perceptions of team "synergy" had slight, negative effects. Lack of agreement on how to define systems collaboration is a potential weakness in a number of human services fields, and raises concerns that poorly calibrated approaches to collaboration can have undesired effects, such as diffusion of responsibility (Longoria, 2005). In collaboration research more broadly, a recent systematic review revealed a tendency to focus more on antecedents and outcomes of collaboration, and less on the processes exhibited in strong collaborations (Gazley & Guo, 2017).
Even task-oriented measures of collaboration often lack granular detail about effective interventions in practice, such as what minimal level of interagency communication is “frequent” or how best to structure formal collaborative agreements (such as memoranda of understanding/agreement) and resource sharing practices. This is a barrier to making specific, evidence-based recommendations.
Two psychometrically validated instruments, the Interagency Collaboration Activities Scale (IACAS) (Greenbaum & Dedrick, n.d.; Dedrick & Greenbaum, 2011) and the Levels of Collaboration Survey (LCS) (Frey et al., 2006), are commonly used in transition research (Trach, 2012). Both conceive of collaboration as progressing through a continuum of stages or degrees, but they operationalize task-oriented collaboration differently. In comparison to the LCS, the IACAS uses a Likert-type measure of the extent to which an agency shares in financial and physical resources, program development and evaluation, client services activities, and collaboration policies (e.g., informal/formal agreements; Greenbaum & Dedrick, n.d.).
Building on the pre-existing literature on interagency collaboration in transition, this mixed methods analysis aims to answer two related research questions: (1) Is there evidence that the NYS PROMISE model improved collaborations between schools and providers? (2) What attributes of successful collaboration practices and processes were described by program staff? Thus, the first question was evaluative, and the second was exploratory. We answered these questions with three studies. To answer the first question, we used Frey et al.'s (2006) LCS, which provided data on regional network changes over time (Study 1), and an organizational attitudes and experiences (OAE) survey, which measured satisfaction with partner communications (Study 2). To answer the second research question, we conducted a qualitative analysis of biannual on-site interviews at project research demonstration sites (Study 3), using a priori codes developed from the constructs in Frey et al.'s (2006) LCS. This allowed a deeper examination of the LCS findings and an opportunity to narrow recommendations along the LCS's continuum. These data provided a useful opportunity to evaluate systems change during a statewide grant project focused on cross-systems collaboration, while also exploring stakeholder feedback about attributes of successful partnerships, as a means of investigating, revising, and narrowing recommended best practices in collaboration.
3 Methods and results
The mixed methods approach combines three studies. Sources of data included the LCS (Study 1), a survey instrument capturing project staff's self-reported organizational attitudes and experiences (OAE, Study 2), and feedback from project case managers as to the quality and nature of their collaborations with service providers and parent resource centers (Study 3). Table 1, below, summarizes the instruments, frequency of administration, and type of data retrieved. In this section, we provide: (i) a description of the study participants, (ii) an overview of the project implementation, which differed by region, and (iii) a description of methods and results for each study. Quantitative analyses in Studies 1 and 2 are presented first, with a focus on describing change in collaborations and communications among project staff, followed by a qualitative analysis (Study 3) to identify practices and processes that staff found effective at improving collaboration and service coordination efforts.
| Study | Instrument | Frequency | Data type | Description |
|---|---|---|---|---|
| 1 | Levels of Collaboration Survey | Annual | Ordinal | Interagency "levels of collaboration" survey administered to all partner agencies providing services |
| 2 | Organizational Attitudes & Experiences Survey | Biannual | Ordinal, Interval | Assessed organizational structure and practices enabling program implementation and experiences/attitudes of program staff |
| 3 | On-Site Fidelity Visit | Biannual | Qualitative | Site visit, file review, and interview assessing adherence to program design and implementation at demonstration sites |
All research participants for the three studies were adult professionals working with agencies under contract with the project. Participants for the LCS (Study 1) included one representative from each of the PROMISE-contracted schools, parent centers, and service providers (n = 28 in wave 1, n = 29 in wave 2, n = 30 in waves 3 and 4). In Study 2, responses were received from research demonstration site (RDS; n = 152), parent center (n = 44), and service provider staff (n = 102) on the organizational attitudes and experiences measure. All responses were collected anonymously, semi-annually, from Fall 2015 to Spring 2018, with some participants responding more than once over time. Participants for the on-site fidelity site visit interview (Study 3) included twenty-one research demonstration site case managers in each wave. Participant demographics were not collected so that participants could maintain anonymity. Respondents for each of the three studies were drawn from the same pool of agency representatives. Although many of the participants were likely the same across the three studies, the samples were not identical, due to slight differences in where and how the instruments were used and the data collected. For instance, the LCS was disseminated online to a panel of designated "primary contacts" in managerial functions at the participating PROMISE agencies, while the OAE survey was disseminated to all members of those same agencies who participated in biannual statewide learning communities; finally, the qualitative data were derived from in-person interviews with lead case managers at research demonstration sites. Study-specific differences in data collection approaches, screening criteria, and number of participants are described below in the context of each individual study.
In New York State, PROMISE consisted of three administrative regions: Capital-area (CAP), Western New York (WNY), and New York City (NYC). In CAP and WNY, contracted schools, parent centers, and service providers were existing agencies in the region, and their participation in the project was largely stable throughout. Because of this emphasis on existing systems, we refer to this as an "indigenous" project design. The NYC regional network experienced more change in agency participation over the project than the upstate regions (CAP and WNY). Changes in NYC were iterative, with the project design morphing from an indigenous, school-based case management and service delivery model to a "hybrid" service delivery model that provided both school-based and community-based (geographically assigned) case managers. Iterative changes were intended to rectify issues with case management and service provider referrals for Intervention Group (IG) youth enrolled in District 75 (D75), a large special education school district covering all five NYC boroughs.
In D75, many youth attend schools outside their local neighborhood/borough and/or transfer schools throughout their high school enrollment period. In order to serve IG students across schools and widely dispersed geographic areas across NYC, school-based case managers serving IG youth in NYC were supplemented with community case managers beginning the second quarter of project Year Two, and community service providers beginning the second quarter of Year Three, who absorbed large portions of the IG youth case load for case management and service provision, respectively. Additionally, because of the higher overall service need of the D75 student population, three community habilitation specialists were brought on to provide training in daily living skills. The NYC region also lost a large, comprehensive service provider in Year One due to financial insolvency; to address the resulting impact on service provision, an additional service agency was recruited during Year Three to enhance capacity across all five NYC boroughs.
3.3 Study 1: Levels of collaboration survey
To measure longitudinal changes in interagency collaborations among grant partners, we used Frey et al.'s (2006) Levels of Collaboration Survey (LCS). Table 2 below provides the definitions of the different levels of collaboration in the LCS. Respondents were asked to rate their collaborations along a six-stage continuum from 'no interaction' (0) to 'collaboration' (5), based on a range of constructs pertaining to awareness of the other organization, role clarity, frequency and formality of communication, resource and information sharing, shared decision making, and (at the high end of the scale) systems integration. This instrument provided a validated set of constructs for obtaining annual data on partner-to-partner interactions.
| Rating | Level | Description |
|---|---|---|
| 0 | No interaction | Agencies do not interact |
| 1 | Networking | Aware of the organization, loosely defined roles, little communication, all decisions made independently |
| 2 | Cooperation | Provide information to each other, somewhat defined roles, formal communication, all decisions made independently |
| 3 | Coordination | Share information and resources, defined roles, frequent communication, some shared decision making |
| 4 | Coalition | Share ideas, share resources, frequent and prioritized collaboration, all members have a vote in decision making |
| 5 | Collaboration | Members belong to one system, frequent communication is characterized by mutual trust, consensus is reached on all decisions |
The survey was administered annually, in the three NYS PROMISE administrative regions over four years/waves (Winter 2015 to Winter 2018). Lead project staff from agencies were asked to rate their collaborations with required grant partners, providing bidirectional (in-degree and out-degree) data on collaborations between partners. Respondent lists were compiled based on individuals’ role and experience on the project, targeting responses from agency leads with project familiarity and managerial and project administrative responsibilities. Because of regional differences in project design and implementation, each region was analyzed separately. For CAP and WNY, analyses included only contracted schools, parent centers, and service providers that participated in the project from start-to-finish. The number of agency respondents in each wave remained constant in CAP (n = 8) and WNY (n = 9). In NYC, additional agency responses were recorded in the second and third wave due to changes in implementation design discussed earlier (n = 11 in wave 1; n = 12 in wave 2; n = 13 in waves 3/4). Missing data, from wave non-response, totaled 6.5% for the four waves; because of the proportion and type of missingness, in-degree imputation was used for missing values (Huisman & Steglich, 2007; Analytic Technologies, n.d.). In all but one case, missing wave data was from NYC.
Analyses for CAP and WNY are presented first, followed by NYC. Network analysis maps formal and informal connections between collaborators using graph theory and the measures derived from it (Wasserman & Faust, 1994). In the section that follows, the number and strength of ties are described using average degree, the average number of ties per agency, and average weighted degree, the average sum of the weights of an agency's ties. To summarize regional cohesion, we use the raw number of strong ties (ratings ≥ 3) and a density statistic, calculated simply as the total number of ties divided by the total number of possible ties. For interpretation of network results, average scores ≥ 3 are considered strong ties between partners, based on setting coordination as the target level of collaboration, as in Frey et al.'s (2006) project. Ratings above 3 on the LCS entail systems integration beyond project expectations (e.g., "all members have a vote in decision making" and "members belong to one system"), which, although positive, was not required by project design.
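To make these measures concrete, the following Python sketch computes them for a small, hypothetical four-agency network of LCS ratings. The ratings matrix and all resulting values are invented for illustration; the project's own analyses used UCINET.

```python
# Hypothetical directed network of LCS ratings (0-5) among four agencies;
# ratings[i][j] is agency i's rating of its collaboration with agency j
# (values invented for illustration -- the project itself used UCINET).
ratings = [
    [0, 3, 4, 1],
    [3, 0, 2, 0],
    [4, 2, 0, 3],
    [1, 0, 3, 0],
]

n = len(ratings)
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]

ties = sum(1 for i, j in pairs if ratings[i][j] > 0)          # any interaction at all
avg_degree = ties / n                                         # average ties per agency
avg_weighted_degree = sum(ratings[i][j] for i, j in pairs) / n
strong_ties = sum(1 for i, j in pairs if ratings[i][j] >= 3)  # 'coordination' or above
density = ties / len(pairs)              # ties divided by possible directed ties

print(avg_degree, avg_weighted_degree, strong_ties, round(density, 2))
```

In this toy network, 10 of 12 possible directed ties are present (density 0.83), and 6 of those ties are rated at coordination or above.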
Because CAP and WNY agency participation was more consistent throughout the project, we were able to statistically test change over time in these regions. Paired t-tests compared the networks for the first (2015) and final (2018) waves within the CAP and WNY regions. The alpha level was set to .05 for rejection of the null hypothesis that network density did not change over time. Standard errors were generated from 10,000 bootstrap samples with UCINET 6 software (Borgatti, Everett, & Freeman, 2002). Because bootstrapped results were used, the 95% confidence interval (CI) of the density difference was inspected for the inclusion of 0: if the CI does not contain 0, the null hypothesis can be rejected; otherwise it cannot. Due to small sample sizes and other factors (e.g., changes in agency-level respondents resulting from staff turnover), we treat the following analyses of networks as primarily descriptive and do not draw causal inferences about the effectiveness of any one intervention on improving collaborations.
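The logic of the bootstrap comparison can be sketched as follows. This Python example, using hypothetical rating matrices for five agencies, resamples agencies with replacement (a node bootstrap) and inspects whether the 95% percentile interval of the paired density difference contains 0. It approximates, but does not reproduce, UCINET's bootstrap paired-sample procedure.

```python
import random

random.seed(42)

# Hypothetical wave-1 and wave-4 LCS rating matrices for five agencies
# (0 = no interaction ... 5 = collaboration); the diagonal is unused.
wave1 = [
    [0, 1, 0, 2, 0],
    [1, 0, 0, 0, 1],
    [0, 2, 0, 1, 0],
    [2, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
]
wave4 = [
    [0, 3, 2, 4, 1],
    [3, 0, 2, 1, 3],
    [2, 2, 0, 3, 1],
    [4, 1, 3, 0, 2],
    [1, 3, 0, 2, 0],
]

def density(mat, sample):
    """Share of possible directed ties present among the sampled agencies."""
    ties = possible = 0
    for a in range(len(sample)):
        for b in range(len(sample)):
            if a == b:
                continue
            possible += 1
            ties += mat[sample[a]][sample[b]] > 0
    return ties / possible

agencies = list(range(5))
observed = density(wave4, agencies) - density(wave1, agencies)

# Node bootstrap: resample agencies with replacement and recompute the
# paired density difference on the induced subnetworks.  Duplicate draws
# contribute zero-valued self-pairs in both waves, a standard caveat of
# node bootstraps that affects the two waves symmetrically.
diffs = sorted(
    density(wave4, s) - density(wave1, s)
    for s in (random.choices(agencies, k=5) for _ in range(10_000))
)
ci_low, ci_high = diffs[249], diffs[9749]       # 95% percentile interval
significant = not (ci_low <= 0 <= ci_high)       # reject H0 only if CI excludes 0
```

With these invented matrices the wave-4 network is much denser, so the observed difference is large; in the real analyses the same accept/reject logic was applied to the CAP and WNY networks.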
Table 3 contains average collaboration ratings across time for all three regions. The collaborations in the CAP and WNY regions peaked, in terms of both average level of collaboration and network cohesion, during the penultimate year of the project. In 2017, the CAP region average rating was 2.54 and the WNY region rating was 2.35, averages between cooperation and coordination. NYC ratings also peaked in 2017, with an average of 1.71, between networking and cooperation, but due to the variability, their ratings did not differ from 2015 to 2018. As indicated in Table 4, descriptively, the raw count of strong ties in each region and the density of strong ties also increased during the project. NYC is more difficult to describe longitudinally because of the addition of non-indigenous project partners, who became central to the project network and changed the dynamic between existing partners (by the final wave, the community case managers had the highest in-degree value in the NYC network, while the community service providers had the fourth highest).
(Table 3 columns: Year, n, M, SD, Avg. Weighted Degree; reported separately for Capital-area New York, Western New York, and New York City.)
(Table 4 columns: Year, n, Number of strong ties, SD, Average degree, Density (%); reported separately for Capital-area New York, Western New York, and New York City.)
Note: A strong tie is indicated by a rating of 3, 4, or 5 on the LCS for a partner.
Testing whether the change in density differed from 0 between the first year (2015) and the final year (2018), the CAP network was denser (t(7) = 3.91) at the end of the project but WNY was not (t(8) = 1.43). Density, variance, t-tests, and 95% CIs are listed in Table 5. As noted above, these statistical procedures could not be performed for NYC, due to changes in network and design mid-project. Note that these tests compare only the first and final years of the project, although descriptive measures of collaboration peaked during the penultimate year. Later, we discuss implications for the sustainability of collaborations beyond grant funding periods.
| Region | n | M (2015) | σ² (2015) | M (2018) | σ² (2018) | t | S.E.* | 95% CI of difference |
|---|---|---|---|---|---|---|---|---|
| WNY | 9 | 1.44 | 2.25 | 1.82 | 2.15 | 1.43 | 0.26 | [–0.14, 0.89] |

*Calculated with 10,000 bootstrap samples.
3.4 Study 2: Organizational attitudes and experiences survey
The organizational attitudes and experiences (OAE) survey was developed by project staff as a fidelity tool for project monitoring and for evaluating changes in the organizational capacity of school-based research demonstration site (RDS), parent center, and service provider staff. The questions utilized in this analysis asked about the frequency of, and satisfaction with, contact with parent centers and service providers. Frequency responses ranged on a six-point scale from 'Never' (0) to 'More than once a week' (5). Satisfaction questions were asked on a Likert-type scale from 'Very dissatisfied' (1) to 'Very satisfied' (5).
The OAE was administered twice annually: Fall 2015, Spring 2016, Fall 2016, Spring 2017, Fall 2017, and Spring 2018. Six observations were omitted because all questions in this analysis were unanswered, leaving 298 responses over six waves. Cumulative logit models were estimated for the three OAE questions. Proportional odds testing determined that the assumption of equal relationships across response thresholds held for all groups in each model. For frequency of contact and communication satisfaction, the higher response choices were compared to the lower response choices. In all models with statistically significant predictors, predicted probabilities were generated to aid interpretation. Each probability is the probability of responding at a given level or higher on the response continuum: high probabilities indicate that respondents were very likely to respond at or above that level, and low probabilities that they were very unlikely to do so. Estimates were obtained via maximum likelihood from the vglm function in the VGAM package (Yee, 2010) in R 3.5 (R Core Team, 2018).
The OAE was administered over time, but data were not available to match responses to individuals across waves. To account for attenuated standard errors due to non-independent responses, the alpha level was lowered to .025. Nested models were compared using the change in model residual deviance and degrees of freedom to determine whether a more complex or simpler model should be retained. After determining whether a predictor should be retained, the coefficients were examined and predicted probabilities generated.
Frequency of interaction with parent centers.
The number of interactions with parent centers did not change from Fall 2015 to Spring 2018; however, statistical differences were identified by region and role. Table 6 contains the predicted probabilities of responding at or above each level, broken out by respondent group and region. For RDS and parent center staff in CAP and WNY, there was a high probability (60% and 82%, respectively) that they spoke to the parent centers 'More than once a week', and their responses did not differ (β = 1.08, SE = 0.67, p = .10). Service providers in those same two regions contacted parent centers less frequently. NYC demonstrated a different pattern, with less frequent contact for respondents from RDSs and parent centers (β = –3.17, SE = 0.50, p < .001): the probability of more than 'Monthly' contact was 33% for RDSs and parent centers, and dropped to 20% for service providers in NYC. Regression coefficients for this and all final models can be found in Table 9.
(Table 6: predicted probabilities by respondent group and region. The columns represent thresholds from one level of response to the next; the questions contained six categories that resulted in five thresholds.)
Frequency of interactions with service providers.
The only significant predictor of frequency of interactions with service providers was role. Service providers contacted other service providers less frequently (β = –1.59, SE = 0.26) than staff at RDSs and parent centers, as shown in Table 7. For example, the probability that RDS and parent center staff would contact service providers more than 'Monthly' was 63%, but for service providers, the probability at the same level was 26%.
| Group | Never → Quarterly | Quarterly → Monthly | Monthly → 2–3 times/month | 2–3 times/month → Weekly | Weekly → More than once a week |
|---|---|---|---|---|---|
| RDS and Parent Centers | 0.96 | 0.85 | 0.63 | 0.37 | 0.17 |

Note: The columns represent thresholds from one level of response to the next. The questions contained six categories that resulted in five thresholds.
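These predicted probabilities follow directly from the cumulative logit coefficients reported in Table 9: the probability of responding above a threshold is the logistic function of that threshold's intercept plus the respondent's coefficients. A minimal Python sketch, using the reported intercepts and the service-provider (SP) coefficient from the frequency-of-interaction-with-service-providers model:

```python
import math

def logistic(x):
    return 1 / (1 + math.exp(-x))

# Threshold intercepts from Table 9 for frequency of interaction with
# service providers (Never->Quarterly up to Weekly->More than once a week),
# and the service-provider (SP) role coefficient.
intercepts = [3.26, 1.72, 0.52, -0.53, -1.57]
beta_sp = -1.59

# P(responding above each threshold) for the reference group (RDS/parent centers)
p_rds = [logistic(a) for a in intercepts]
# ...and for service providers, with each threshold shifted by the role coefficient
p_sp = [logistic(a + beta_sp) for a in intercepts]

print(round(p_rds[2], 2))  # P(more than 'Monthly' contact), RDS/parent centers: 0.63
print(round(p_sp[2], 2))   # same threshold for service providers: 0.26
```

The computed values match the 63% and 26% probabilities reported above, and the full reference-group row reproduces the RDS and Parent Centers entries in Table 7.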
There were no significant predictors of satisfaction with parent centers, but satisfaction with service provider communication did change over time (β = 0.19, SE = 0.08, p = .013). As indicated by the very high probabilities shown in Table 8, there were very few responses in the 'Very dissatisfied' category, so most of the change over time occurred at 'Neutral' or above. In Fall 2015, there was only a 29% probability that communication would be rated above 'Satisfied'. By Spring 2018 the probability of that high rating was 51%. The probability of 'Satisfied' or 'Very satisfied' increased in all three regions, but NYC was notably lower, with less of a difference, than CAP and WNY.
(Table 8: predicted probabilities of communication satisfaction ratings over time, reported by region.)
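The change over time can be recovered the same way from the 'Satisfied' to 'Very satisfied' threshold intercept in Table 9 (–0.91) and the reported wave coefficient (β = 0.19). The sketch below assumes waves are coded 0 (Fall 2015) through 5 (Spring 2018); the coding is our assumption, but it reproduces the 29% and 51% probabilities reported above.

```python
import math

def logistic(x):
    return 1 / (1 + math.exp(-x))

# 'Satisfied' -> 'Very satisfied' threshold intercept (Table 9) and the
# reported wave coefficient; coding waves 0..5 is our assumption.
alpha_very_satisfied = -0.91
beta_wave = 0.19

p_fall_2015 = logistic(alpha_very_satisfied + beta_wave * 0)    # wave 0
p_spring_2018 = logistic(alpha_very_satisfied + beta_wave * 5)  # wave 5

print(round(p_fall_2015, 2))    # 0.29
print(round(p_spring_2018, 2))  # 0.51
```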
3.5 Study 3: Qualitative site visit analysis
Qualitative data about staff experiences with interagency partnerships were collected through biannual On-Site Program Fidelity Visit Interviews, conducted at project research demonstration sites. First, an initial site visit (Fall 2014 to Spring 2015) with newly on-boarded research demonstration sites helped evaluators gain a baseline understanding of pre-existing practices and community partnerships. After that, a biannual site visit protocol was implemented from Spring 2016 through Spring 2018 (five waves). Site visits included an interview with project staff, as well as a record review to evaluate adherence to project requirements. The objectives of the interview were to ascertain: (a) site processes, experiences, and development over time; (b) local variations and innovations; and (c) emerging issues, questions, challenges, and technical assistance needs. The interviews comprised eight open-ended questions, including two items inquiring about coordination tasks with partner agencies. These two items were: (1) Please explain the methods you utilize to coordinate case management tasks with parent center staff; and (2) Please explain the methods you utilize to coordinate service referral tasks with service provider staff. The other items inquired about a site's standard case management process, experience using the centralized data system, challenges/barriers experienced or expected, and technical assistance needs. Although these additional questions did not specifically inquire about collaboration, it was not unusual for staff to mention working with partners in their responses; therefore, all responses were retained in the analysis.
For qualitative analysis, a priori codes were developed based on the constructs in Frey et al.’s (2006) “levels of collaboration” scale as an initial framework for analysis, and included broadly: (i) communication, (ii) decision-making, (iii) familiarity, (iv) role clarity, (v) formality of connection, (vi) systems integration (sharing information/resources), and (vii) trust. Line-by-line coding was conducted, and additional sub-codes were identified as appropriate.
Although there were data informing each of the a priori categories, some topics were more common than others. In this section, we describe prevalent themes in the qualitative analysis of staff experiences with interagency coordination. As Table 10 below shows, the most heavily used codes related to communication and role clarity, followed by formality of connection and systems integration. The codes identified least often related to trust, shared decision-making, and familiarity. Notably, trust and shared decision-making sit at the high end of the LCS scale, while familiarity sits at the low end, where interagency teams are merely "aware" of a partner. Often, multiple codes were evident in a single response, with "communication," in particular, cutting across other issues such as role clarity and systems integration, and even tending to coincide with less-frequently coded topics such as familiarity and trust. Parent code counts in Table 10 include positive, neutral, and negative excerpts.
Frequency of interaction with parent centers

| Term | Estimate | SE | z | p |
| --- | --- | --- | --- | --- |
| (Intercept):1 – Never to Quarterly | 5.47 | 0.53 | 10.32 | <.001*** |
| (Intercept):2 – Quarterly to Monthly | 3.74 | 0.49 | 7.68 | <.001*** |
| (Intercept):3 – Monthly to 2–3 times/month | 2.47 | 0.47 | 5.26 | <.001*** |
| (Intercept):4 – 2–3 times/month to Weekly | 1.45 | 0.45 | 3.24 | .001** |
| (Intercept):5 – Weekly to More than once a week | 0.42 | 0.42 | 0.99 | .322 |
| NYC | −3.17 | 0.50 | −6.39 | <.001*** |
| SP | −2.64 | 0.58 | −4.53 | <.001*** |
| WNY:SP | −1.71 | 0.82 | −2.08 | .038 |
| Model residual deviance | 703.55 | df = 1190 | | |

Frequency of interaction with service providers

| Term | Estimate | SE | z | p |
| --- | --- | --- | --- | --- |
| (Intercept):1 – Never to Quarterly | 3.26 | 0.28 | 11.83 | <.001*** |
| (Intercept):2 – Quarterly to Monthly | 1.72 | 0.18 | 9.37 | <.001*** |
| (Intercept):3 – Monthly to 2–3 times/month | 0.52 | 0.15 | 3.49 | <.001*** |
| (Intercept):4 – 2–3 times/month to Weekly | −0.53 | 0.15 | −3.57 | <.001*** |
| (Intercept):5 – Weekly to More than once a week | −1.57 | 0.19 | −8.33 | <.001*** |
| SP | −1.59 | 0.26 | −6.22 | <.001*** |
| Model residual deviance | 875.65 | df = 1314 | | |

Communication satisfaction with service providers

| Term | Estimate | SE | z | p |
| --- | --- | --- | --- | --- |
| (Intercept):1 – Very dissatisfied to Dissatisfied | 4.62 | 0.59 | 7.82 | <.001*** |
| (Intercept):2 – Dissatisfied to Neutral | 2.79 | 0.38 | 7.34 | <.001*** |
| (Intercept):3 – Neutral to Satisfied | 1.21 | 0.33 | 3.61 | <.001*** |
| (Intercept):4 – Satisfied to Very satisfied | −0.91 | 0.33 | −2.78 | .005** |
| NYC | −1.49 | 0.32 | −4.63 | <.001*** |
| Model residual deviance | 594.23 | df = 973 | | |

Note: *p < .025. **p < .01. ***p < .001.
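To make the cutpoint estimates above concrete, the following sketch converts them into predicted category probabilities for the "frequency of interaction with parent centers" model. It assumes a cumulative-logit parameterization of the form logit P(Y > j) = θ_j + βx; the table does not state the exact link orientation, so the parameterization (and the resulting numbers) should be read as illustrative, not as the authors' exact model output.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Cutpoint intercepts and NYC region coefficient from the table above,
# under the assumed parameterization logit P(Y > j) = theta_j + beta_nyc * nyc.
thetas = [5.47, 3.74, 2.47, 1.45, 0.42]
beta_nyc = -3.17

def category_probs(nyc: int) -> list:
    """Probabilities for the six ordered frequency categories
    (Never, Quarterly, Monthly, 2-3 times/month, Weekly, More than once a week)."""
    exceed = [sigmoid(t + beta_nyc * nyc) for t in thetas]  # P(Y > j) at each cutpoint
    probs = [1.0 - exceed[0]]                               # P(Y = lowest category)
    probs += [exceed[j - 1] - exceed[j] for j in range(1, len(exceed))]
    probs.append(exceed[-1])                                # P(Y = highest category)
    return probs

upstate = category_probs(0)  # reference region
nyc = category_probs(1)      # NYC indicator set to 1
```

Under this assumed orientation, the large negative NYC coefficient reproduces the pattern reported in the text: the predicted probability of the most frequent contact category drops from roughly 0.60 upstate to roughly 0.06 in NYC, i.e., markedly less frequent staff contact with parent centers.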
| Parent code | Code count | Sample quotes |
| --- | --- | --- |
| Communication | 250 | “We meet weekly and communicate constantly through email phone and face to face. We go to our quarterly and semiannual meeting together. We also attend all intervention meetings with students.” |
| | | “We talk all the time, we meet frequently and talk and coordinate in person. [Service provider] will come to the school and is very active with us here. We are very flexible.” |
| Role clarity in coordination | 197 | “We really work in partnership with our parent center staff. Nothing is ever ‘that is your job that is my job.’ We work together. Service coordination— I can answer these questions even though that may be a [parent center] job. We do everything together and collaborate.” |
| | | “We both contact the family. It’s via email phone or even text, then we work on the issues that need collaboration, or we split them up where it can be assigned…I swing by their office in person too if I have concerns about a family, and they actually offer us space now at their office which is really helpful. We coordinate and communicate events with them all the time.” |
| Formality of connection | 115 | “We meet [with the provider] at least once a week. Mostly twice. After doing our initials, I look on NYESS to see what has been done, then we communicate about what has been done or is needed.” |
| | | “[The] process has been evolving regularly…for example, we have a procedure in place where we’re contacting [the provider], going through the referral process, giving basic information about the student and the parent, what times work best for coordinating activities, sometimes we’re either meeting with the parents, or together, or at different times.” |
| Systems integration | 48 | “I’ve…given [the provider] a list of all our students in the project, their classrooms, and I’ve even spoken with the teachers so they are aware and have met the staff. That way the providers can come in even if I’m not on site. Both [of the service] providers take advantage of this arrangement.” |
| Familiarity | 26 | “Teachers know who she is, the lady from [service provider], so they welcome her into classes and stuff which makes it easier. The more we see of her, the better.” |
| Trust | 23 | “But since then we’ve been going to [service provider] and we have a good relationship with them now. They are very responsive and they understand our student population. Since we started referring to [them] we’ve had a lot of services start to happen.” |
| Shared decision-making | 22 | “I work very closely with the providers, if a student is engaging in a service that is going to end, we collectively decide what’s next for them… If they have to switch agencies, or move from [one service to another], we decide that together and I will talk with the other agency about what they are able to handle, since it will build their case load.” |
Overall, excerpts describing communications were more often positive (frequent communication, positive communication) or neutral than negative (infrequent communication, negative communication). However, infrequent or poor communication was mentioned more often in NYC than upstate, where very few excerpts were coded negatively. Among staff who described their collaborations in positive terms, the accessibility, frequency, and productivity of communications were paramount. Positive communications more often involved a mix of formal and informal connections: required monthly or quarterly meetings with a larger group of staff, supplemented with more personalized and frequent phone, email, or in-person contacts. Often these contacts evolved over time, from primarily formal meetings to frequent, informal contacts as relationships deepened. In descriptions of strong communications, staff often mentioned specific individuals at partner organizations by name. By contrast, negative experiences with interagency communication most commonly involved: (i) issues with initiating contact and not hearing back in a timely manner; (ii) lack of familiarity (sometimes resulting from staff turnover or an initial failure to designate a point of contact); (iii) miscommunication about project tasks and objectives (an issue that overlaps with role clarity); and (iv) in a select few instances, concerns that a partner was undermining staff relationships with youth and families (an issue that overlaps with trust).

At the initial site visit, research demonstration sites varied in their pre-existing level of contact with outside agencies. A few had long-standing relationships with a variety of external agencies; some had superficial relationships or relationships with only a few outside agencies; and others had few or no contacts outside of their own school or district.
In some cases, school-based case managers were able to build on existing relationships through the project, while others were asked to work with partners with whom they had no pre-existing familiarity. The latter scenario was more common in NYC than upstate, making it a possible contributing factor to the regional differences observed in communications and progression through the “levels of collaboration.”
In comparison to communication, excerpts on role clarity were roughly evenly split between positive mentions (clear roles, partner executing role) and negative mentions (unclear roles, partner not executing role, organizational capacity issues). Staff who described their collaborations in positive terms emphasized the degree to which they and partners had a shared set of individual and joint objectives, and clarity on which tasks were assigned to whom. Several staff also mentioned that they valued flexibility around these roles and cultivated some degree of overlap in expertise and duties in order to deliver services to youth participants more efficiently. Problems with role clarity typically entailed concerns that partner staff were duplicating roles or performing an unclear role within the project. The issues outlined by project staff underscore the need to balance prescription with flexibility carefully when defining interagency roles.
More than for other codes, excerpts about “formality” of connection did not clearly cast it as intrinsically good or bad for partnerships. Many respondents described a mix of formal and informal connections. Crosscutting with “communication,” excerpts discussing the formality of partnerships often focused on formal monthly or quarterly meetings, as required by the project, along with weekly or even daily informal interactions as comfort levels increased. Crosscutting with “systems integration,” the same can be said of early reliance on required formal information sharing through the NYESS data system, which evolved into more nuanced, often in-person, efforts as comfort levels rose. For many, the connection evolved over time, beginning with infrequent formal interactions and adding frequent, informal coordination practices as partners became more familiar with one another and their complementary roles. Interestingly, staff described “formal” connections as having instrumental value but not as intrinsically associated with strong collaborations. For example, some sites, particularly in the less cohesive network of NYC, reported that their only contact with partner agencies occurred during formal, minimally required interactions (e.g., making referrals or approving billing in the data management system, or attending required meetings). By contrast, stronger connections entailed both required and ad hoc communications, driven by youth and family needs rather than minimum project requirements.
Productive collaborations were often characterized by interagency sharing of information, resources, and connections in an instrumental manner geared toward helping partners execute their roles more effectively. This included, among other practices, ensuring that partners had appropriate and up-to-date contact information for youth/families, sharing information and documentation about participants’ activities, providing entrée for partners into local schools, and providing physical space within buildings so that partners could spend time co-located with partner agencies or project participants. Staff typically discussed resource and information sharing in positive or constructive terms, and this was an area where successful partnerships noticeably developed contextual practices outside project requirements. Across the state, weaker collaborations tended to rely almost exclusively on required NYESS data entry for coordinating information with service providers, while stronger relationships developed more ad hoc or informal practices in addition to required NYESS data sharing, such as sharing student schedules, introducing providers to non-PROMISE school faculty, and offering service provider “office hours” (none of which were required by the project).
Barriers to effective collaboration.
Project staff highlighted a few scenarios that made effective collaboration with partners especially difficult. As Table 11 below shows, staff turnover, organizational capacity issues, and partners failing to execute their role were the most frequently mentioned barriers. Again, these barriers often overlapped with one another; in particular, staff turnover had notable effects on organizational capacity as well as on role execution and role clarity.
| Barrier | Code count | Sample quotes |
| --- | --- | --- |
| Partner not executing role | 56 | “It has been difficult to get someone from the parent center staff involved. A literal quote from them was ‘it’s not worth our time for only two students.’ We don’t necessarily agree…Honestly…the parent center staff do not have the foundational knowledge needed to deal with our school or district population. They occasionally will give our parents advice about services that I do not think is accurate or well-tailored to our group.” |
| Staff turnover | 24 | “Coordination with the [parent center] is minimal to non-existent. This could be my fault, because I know [prior case manager] used to speak with them before she left, and it is possible that I heard from them early in the year and didn’t properly follow up. I make parents aware of parent center services, but I don’t really have a relationship with them.” |
| | | “With staff turnover and new hires, cases are shuffled or re-assigned or assigned for the first time, NYESS does not provide adequate information to know who is working with each student.” |
| | | “We lost one of our providers, and so only have one left for students. There has been some struggle related to this turnover— students needing to form new relationships and resisting that, I have been encouraging them to keep engaging. We are spending some time making sure that students are working with [service provider] staff who are a good fit (temperamentally, etc.); this takes a bit of extra time, but has been really important to keeping kids engaged.” |
| Partner lacks organizational capacity | 19 | “We think the [parent center] staff is spread too thin. There isn’t enough communication and we can’t really count on them out here because the parents already feel like the project is lagging (or, like, ‘why did we join at all, nothing is happening?’).” |
| | | “We have students who have gone months without services, even though they were referred and went through initial case management and all that. It’s a huge problem, we really don’t want the parents and students to feel like they are being forgotten, but there’s only so much we can do on our end if the providers don’t have the capacity to take the cases.” |
| | | “Agencies aren’t prepared to go beyond the assessment phase I can tell you that right now. They aren’t ready to provide actual services…These providers don’t have necessary sites and they don’t have a plan. They have a physical building but even those are bursting at the seams…Scope of the client base doesn’t mesh with what these providers…do best…This is not the population that these providers are used to serving…They’re used to working with high functioning adults and providing work supports. It’s a totally different ballgame.” |
There was a high level of turnover among project staff in some regions. In some cases this led to a complete breakdown in agency collaboration: at times it was unclear who the new contact person was within the partner organization, and a long lag might pass before that was resolved; in other cases, new staff were unaware they should be forming relationships with individuals at other agencies, and previous connections languished. Some staff acknowledged that turnover within their own agency was the source of problems. Turnover also presented service-delivery challenges through the loss of institutional knowledge, with situations arising where staff across organizations were unsure of who was providing what service to which participants. The impact of turnover appeared less severe where turnover volume was lower or where both sides made an immediate effort to establish a new person-to-person relationship, though even in those circumstances staff described a period of re-establishing effective processes and communications.
Another barrier arose when individuals or agencies failed to execute their role effectively. These failures presented in a variety of ways: from basic failures in timely communication, to a failure to deliver services, to a lack of competency in providing services. Case managers at research demonstration sites most commonly attributed such failures to a lack of agency capacity, including insufficient staffing, inadequate staff expertise, and a lack of physical resources such as office space. Again, regional differences were observed: concerns about service provider capacity were mentioned more commonly in NYC, and particularly in the D75 special education district, where concerns occasionally arose that providers lacked training in, or sensitivity to, the particular need level or age range of their population.
This paper has several key findings. First, longitudinal analyses of levels of collaboration (LCS, Study 1) and partner communications (OAE, Study 2) provide evidence that task-oriented interagency agreements can meaningfully change collaborations between schools and providers. Second, qualitative analysis (Study 3) allowed a deeper dive into the attributes of successful partnerships, as described by project staff, including practices and processes that were part of the project intervention as well as some that staff developed contextually. Third, the differences observed between the “indigenous” (CAP and WNY) and “hybrid” (NYC) models of service coordination point to program design considerations and the importance of tailoring models of interagency collaboration to localized bureaucratic, demographic, and geographic realities. Finally, the predominance of practical and organizational barriers to collaboration in the qualitative findings, and the accompanying strain on partnerships, reinforces the importance of setting realistic goals and expectations for partners at the macro (state), meso (regional), and micro (agency) levels (see also Golden, Karpur, & Podolec, pre-press). By mapping the qualitative analysis to the psychometrically validated constructs in Frey et al.’s (2006) LCS, these findings may help refine understandings of task-oriented constructs within interagency collaboration, including contextual understandings of what formality of service coordination, resource sharing, and frequency of communication might look like in practice. Importantly, the findings also highlight areas where these constructs need further clarification and explication.
Studies 1 and 2 provided a quantitative perspective on whether there were meaningful changes in the PROMISE transition network. Study 1 indicated that, in the two upstate regions, overall collaboration ratings increased each year until the project’s penultimate year, with a small decline in the final year. At their highest, mean ratings fell between cooperation and coordination (between 2 and 3 on the LCS scale). In New York City, with its continually evolving network model, mean ratings fell between networking and cooperation (between 1 and 2 on the LCS). Thus, in the upstate, “indigenous” networks, there were tendencies toward reporting more integrated communication, role clarity, decision making, and resource/information sharing. Study 2 helps contextualize how communications did, and did not, change during the project. According to OAE data, the frequency of communication did not change during the project, but satisfaction with provider communications did increase over time, again with regional differences observed. The existing transition literature offers minimal discussion of what frequent or effective communication means within task-oriented continuums of collaboration. Future studies should explore what minimum level of weekly or monthly contact leads to meaningful changes in service coordination. For instance, OAE analysis found that most partners communicated monthly or more, adhering to project requirements for increased, formal contact, but did not tend to increase the frequency of contact as the project progressed.
Study 3 provided a qualitative perspective on where and why meaningful changes were observed in the PROMISE transition network, as well as where they fell short. The characteristics of partnerships described as effective included: (a) joint objectives and clearly defined roles; (b) extensive sharing of information and resources; and (c) frequent communication (along an interesting formal-to-informal progression). In the upstate regions, early-stage, formally planned meetings were gradually supplanted by more informal connections developed contextually to meet project needs, including deepened interpersonal ties as distinct from the more formal, field-oriented, agency-level connections. This maps well to Frey et al.’s (2006) conceptual framework and its continuum from required, formal contact to more frequent contact characterized by mutual trust. These informal connections were mentioned less frequently in NYC, where collaborations were, on average, described as weaker along Frey et al.’s (2006) continuum. Consistent with the regional differences in the OAE findings, the qualitative data evidenced infrequent or poor communication as a bigger problem in NYC, whereas communications in the other regions were more often described in neutral or positive terms. Interview responses routinely indicated that, according to project staff, contextually developed practices of resource and information sharing entailed relatively low cost and effort once a level of familiarity and lines of communication were established.
For instance, multiple respondents who went beyond formal requirements for communication cited successful practices as simple as sharing student class schedules with providers and offering space for in-school provider “office hours.” The centralized case management data system also provided a direct, and contractually required, means of formally sharing information during the course of the project, with staff utilizing the system differently: from meeting minimal entry, reporting, and billing requirements to using the system as a tool for shared decision-making and information-sharing practices beyond project requirements.
Another major theme from the qualitative findings, however, was the impact of organizational capacity issues on effective collaboration. Staff turnover, organizational capacity, and failure to execute roles are predictable, yet difficult, challenges to reckon with. Qualitative analysis indicated that barriers to collaboration often resulted from practical, rather than synergistic, concerns. These barriers to successful network development may be managed at the project design or implementation phase through careful consideration and structuring of processes, policies, and contracts to support effective collaboration. Additionally, pre-assessment of potential network partners may help mitigate strains on capacity, resource availability, and staffing by allowing projects to initiate collaborations with the strongest available partners. Issues of capacity and staff turnover were more acute in NYC, where familiarity and comfort with a partner agency were more likely to depend on a single point of contact. Larger, more diffuse networks likely require different models of collaboration than smaller networks with fewer partners and more cohesion. Future research should aim to clarify the contextual factors that affect the efficacy of recommended practices in interagency collaboration, and account for the processual and practical challenges that more complex networks (e.g., urban, high population, more complicated bureaucracy) may encounter. Finally, our findings implicate a continuing need for empirical research on the sustainability of collaborations beyond grant periods, and on what types of strong partnerships tend to survive external funding or contractual end dates. Some findings from this analysis indicate losses in self-reported collaboration ratings as the project approached closeout.
It is evident that further work is needed to develop specific strategies that are sustainable, adapted to different network structures and types, and cognizant of challenges posed by staff turnover and capacity issues.
This study has a few important limitations. Because project implementation differed by region, statistical opportunities for comparing waves were more limited in NYC than in the CAP or WNY regions. In essence, the network analyses of the upstate and NYC examples provide two contextually different scenarios for collaborations in transition-focused grant projects: one where existing agency networks are optimized to improve service delivery (the “indigenous” model proposed in the project proposal; U.S. Department of Education, n.d.), and another where existing agencies collaborate closely with centralized external partners to improve service delivery. Additionally, because the project design in NYC was iterative, it is challenging to describe the changes in the NYC network and to conceptualize what the impact of the project intervention on collaborative interactions between agency partners as a whole might have been.
Another limitation stems from the available evaluation data on collaboration and communications. Because the first waves of data collection occurred after the project had begun, with partner agencies at different stages of onboarding, the longitudinal analyses of change in partner collaborations (levels of collaboration) and communications (frequency and satisfaction) lack a counterfactual for the relationships that predated the project. Staff turnover, discussed above in terms of its implications for project implementation and interagency relationships, was also an issue for the analysis of agency collaborations and communications: differing agency-level respondents across the three research instruments make it difficult to describe agency-level change definitively. These limitations necessitated a conservative analysis of the overall change in the regional and statewide networks; nevertheless, the availability of data from multiple sources offered exploratory opportunities to identify areas for future research.
Feedback from project staff provides clarification of the experiences of agency staff coordinating services, and deepens contextual understandings of systems collaboration in two different scenarios of transition-focused grant projects (hybrid and indigenous intervention frameworks). The findings present some conservative evidence that task-oriented measures of collaboration can be manipulated and improved through interventions, but there is still a deep need to further develop the constructs that are frequently used and measured in transition research on interagency collaboration. Qualitative analysis indicated that many of the elements of successful interagency collaboration mentioned in the literature and existing task-oriented collaboration measurement tools are also on the minds of practitioners in the field of transition. Certain opportunities and strategies emerge out of this analysis, including new areas of potential research about: (a) the dosage of formal and informal communication needed to increase interagency coordination; (b) strategies and processes for information and resource sharing across contextually different transition networks; (c) frameworks for assessing partnership capacity and initiating formal agreements with clear roles and responsibilities; and (d) the integration of capacity, staffing, and resource issues into models of collaboration in transition.
Conflict of interest
None to report.
The contents of this paper were developed under a cooperative agreement with the U.S. Department of Education, Office of Special Education Programs, associated with PROMISE Award #H418P140002 and a grant from the U.S. Department of Education, Office of Special Education Programs to the New York State Office of Mental Health and Cornell University through the Research Foundation for Mental Hygiene, grant number H418P130011. Corinne Weidenthal served as the project officer. The views expressed herein do not necessarily represent the positions or policies of the Department of Education or its federal partners. No official endorsement by the U.S. Department of Education of any product, commodity, service or enterprise mentioned in this publication is intended or should be inferred.
Analytic Technologies. (n.d.). Handling missing data. Harvard, MA: Analytic Technologies, Inc. Retrieved from http://www.analytictech.com/mgt780/handouts/missing.htm
Biswas, S., Tickle, A., Golijani-Moghaddam, N., & Almack, K. (2016). The transition into adulthood for children with a severe intellectual disability: Parents’ views. International Journal of Developmental Disabilities, 63(2), 1–11.
Borgatti, S. P., Everett, M. G., & Freeman, L. C. (2002). UCINET 6 for Windows: Software for social network analysis. Harvard, MA: Analytic Technologies, Inc.
Dedrick, R. F., & Greenbaum, P. E. (2011). Multilevel confirmatory factor analysis of a scale measuring interagency collaboration of children’s mental health agencies. Journal of Emotional and Behavioral Disorders, 19(1), 27–40.
Fabian, E., Dong, S., Simonsen, M., Luecking, D. M., & Deschamps, A. (2016). Service system collaboration in transition: An empirical exploration of its effects on rehabilitation outcomes for students with disabilities. Journal of Rehabilitation, 82(3), 3–10.
Francis, G. L., Gross, J. M., Magiera, L., Schmalzried, J., Monroe-Gulick, A., & Reed, S. (2018). Supporting students with disabilities, families, and professionals to collaborate during the transition to adulthood. In M. M. Burke (Ed.), International Review of Research in Developmental Disabilities. London: Academic Press.
Frey, B. B., Lohmeier, J. H., Lee, S. W., & Tollefson, N. (2006). Measuring collaboration among grant partners. American Journal of Evaluation, 27(3), 383–392.
Gazley, B., & Guo, C. (2017). What do we know about nonprofit collaboration? A comprehensive systematic review of the literature. Academy of Management Proceedings, 2015(1).
Golden, T. P., Karpur, A., & Podolec, M. (pre-press). Communities of practice as a tool to re-calibrate context for systems and organizational learning: Centering communities to improve youth post-school outcomes through PROMISE.
Greenbaum, P. E., & Dedrick, R. F. (n.d.). Interagency Collaboration Activities Scale (IACAS). Tampa, FL: The Research and Training Center for Children’s Mental Health.
Henninger, N. A., & Taylor, J. L. (2014). Family perspectives on a successful transition to adulthood for individuals with disabilities. Intellectual and Developmental Disabilities, 52(2), 98–111.
Huisman, M., & Steglich, C. (2007). Treatment of non-response in longitudinal network studies. Social Networks, 30(4), 297–308.
Individuals with Disabilities Education Act, 20 U.S.C. §1400. (2004).
Kohler, P. D., Gothberg, J. E., Fowler, C., & Coyle, J. (2016). Taxonomy for transition programming 2.0: A model for planning, organizing, and evaluating transition education, services, and programs. Western Michigan University. Available from http://www.transitionta.org
Longoria, R. A. (2005). Is inter-organizational collaboration always a good thing? Journal of Sociology and Social Welfare, 32(3), 123–140.
Morningstar, M. E., & Mazzotti, V. L. (2014). Teacher preparation to deliver evidence-based transition. Gainesville, FL: CEEDAR Center.
National Council on Disability. (2008). The Rehabilitation Act: Outcomes for transitioning youth.
Noonan, P. M., Morningstar, M. E., & Erickson, A. G. (2008). Improving interagency collaboration: Effective strategies used by high-performing local districts and communities. Career Development for Exceptional Individuals, 31(3), 132–143.
Noonan, P. M., McCall, Z. A., Zheng, C., & Erickson, A. G. (2012). An analysis of collaboration in a state-level interagency transition team. Career Development and Transition for Exceptional Individuals, 35(3), 143–154.
Oertle, K. M., & Trach, J. S. (2007). Interagency collaboration: The importance of rehabilitation professionals’ involvement in transition. Journal of Rehabilitation, 73(3), 36–44.
Oertle, K. M., Trach, J. S., & Plotner, A. J. (2013). Rehabilitation professionals’ expectations for transition and interagency collaboration. Journal of Rehabilitation, 73(3), 25–35.
Povenmire-Kirk, T., Diegelmann, K., Crump, K., Schnorr, C., Test, D. W., Flowers, C., & Aspel, N. (2015). Implementing CIRCLES: A new model for interagency collaboration in transition planning. Journal of Vocational Rehabilitation, 42(1), 51–65.
R Core Team. (2018). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
The Rehabilitation Act of 1973, 29 U.S.C. §701 et seq. (1973).
Test, D. W., Fowler, C. H., Richter, S. M., White, J., Mazzotti, V., Walker, A. R., & Kortering, L. (2009). Evidence-based practices in secondary transition. Career Development for Exceptional Individuals, 32, 115–128.
Thomson, A. M., Perry, J. L., & Miller, T. K. (2009). Conceptualizing and measuring collaboration. Journal of Public Administration Research and Theory, 19(1), 23–56.
Trach, J. S. (2012). Degree of collaboration for successful transition outcomes. Journal of Rehabilitation, 78(2), 39–48.
U.S. Department of Education. (n.d.). Promoting the Readiness of Minors in Supplemental Security Income (PROMISE) FY 2013 application package.
Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. New York: Cambridge University Press.
Wehman, P., Schall, C., & Carr, S. (2014). Transition from high school to adulthood for adolescents and young adults with autism disorders. In F. R. Volkmar, B. Reichow, & J. C. McPartland (Eds.), Adolescents and Adults with Autism Disorders (pp. 41–60). New York, NY: Springer Publishing.
Weiss, E. S., Anderson, R. M., & Lasker, R. D. (2002). Making the most of collaboration: Exploring the relationship between partnership synergy and partnership functioning. Health Education & Behavior, 29(6), 683–698.
Yee, T. W. (2010). The VGAM package for categorical data analysis. Journal of Statistical Software, 32(10), 1–34.