Development of a community life engagement fidelity scale to assess and improve day services and supports
Abstract
BACKGROUND:
Despite an increase in supported employment, a large and growing number of people with intellectual and developmental disabilities (IDD) still participate in non-work day services. High-quality day services that foster community life engagement (CLE) play an important role in both leading to and complementing competitive integrated employment.
OBJECTIVE:
Building on guideposts developed in previous research, we aimed to develop and test a new instrument, the Community Life Engagement Fidelity Scale (CLEFS), that service providers can use to assess whether their day services and supports are well designed to support CLE.
METHODS:
The research involved four steps: item generation based on existing instruments; content adequacy assessment via a self-advocate review panel and a Delphi panel; piloting the instrument with service providers; and internal consistency assessment and factor analysis of the pilot data.
RESULTS:
The end product of these four steps was an 18-item CLEFS that loaded onto three components. The CLEFS also displayed strong content validity (CVR above .5 for all items) and interrater reliability (average α = .837).
CONCLUSION:
The CLEFS can be a useful tool for service providers and state agencies seeking to assess and improve day services and supports.
1 Introduction
While recent years have seen an increase in the number of people with intellectual and/or developmental disabilities (IDD) in the United States receiving supported employment services, there also continue to be a large and growing number of people participating in non-work day services and supports (Winsor et al., 2022). As of fiscal year 2019, 657,826 people received day services and supports through a state intellectual and developmental disabilities service agency, and non-work services continue to comprise the largest percentage of expenditures for day and employment services (Winsor et al., 2022).
Paying attention to these non-work services and supports is important to achieving the goals of Employment First, with both state IDD agencies (Sulewski & Timmons, 2018) and service providers (Kamau & Timmons, 2018) noting a need for holistic models of day and employment services. It is increasingly clear that for people with IDD, as for all of us, a full and meaningful life involves both employment and other forms of community life engagement (CLE) such as recreation, volunteering, civic participation, and social connection. In addition to being a core outcome on its own, CLE plays an important role in leading to and/or supplementing employment.
Both employment and CLE outcomes must also be addressed to meet the standards of the Medicaid Home and Community Based Services (HCBS) Final Rule (CMS, 2014a), which calls for “full access of individuals receiving Medicaid HCBS to the greater community, including opportunities to seek employment and work in competitive integrated settings, engage in community life, control personal resources, and receive services in the community, to the same degree of access as individuals not receiving Medicaid HCBS” (CMS, 2014a, p. 249). This rule, along with the Department of Justice settlement agreements with Rhode Island (United States v. State of Rhode Island, 2014) and Oregon (Lane et al. v. Brown et al., 2015), sets an expectation for day services and supports that focus on individual engagement in integrated, age-appropriate, community-based activities (Hall et al., 2018; Freeze et al., 2017).
Despite this emphasis, many existing day services and supports continue to isolate and segregate individuals with IDD (Winsor et al., 2022; Neely-Barnes & Elswick, 2016; McMichael-Peirce, 2015; Sulewski, 2010; Sowers et al., 1999), leading to limited opportunities for meaningful community engagement (Friedman & Spassiani, 2017; Miller et al., 2003; Rak & Spencer, 2015; Reilly, 2005; Verdonschot et al., 2009).
This landscape has created a need for guidance for service providers on how to deliver day and employment services that both meet federal mandates around integration and lead to better employment and CLE outcomes for people with IDD.
1.1 Impact of COVID-19
The situation has only been exacerbated by the COVID-19 pandemic, which led to temporary closures of many employment and/or day services (Association of People Supporting Employment First, 2020) as well as negative effects on people with IDD such as boredom (as observed by 80% of direct support providers), mood swings or depression (57%), and loneliness (48%; Hewitt et al., 2021). Ongoing staffing challenges continue to affect provider operations and limit access to day services and supports (Hewitt et al., 2021; APSE, 2020; Thompson & Nygren, 2020).
1.2 Research on community life engagement
To address this area of need, our research team has been conducting a series of projects focused on supports for CLE over the past several years. This body of work has resulted in identification of a set of four CLE guideposts for improving day services and supports (Timmons & Sulewski, 2016) and development of a CLE Toolkit (Sulewski et al., 2016).
Most recently, we have developed and tested a CLE Fidelity Scale (CLEFS), which is the focus of this manuscript. This fidelity scale is designed to address an identified need for better ways of defining and measuring quality of day services and supports (Sulewski & Timmons, 2015; Sulewski & Timmons, 2018; NQF, 2016).
2 Methods
We developed the CLEFS using standardized methods of developing and validating a new instrument. Churchill (1979) and Hinkin (1995) have identified seven steps of instrument development, and our study completed five of the seven steps via three research activities. The first activity was item generation (step 1). The second was a Delphi panel review, aimed at content adequacy assessment (step 2). The third activity was piloting the instrument and analyzing the resulting data, which addressed questionnaire administration, internal consistency assessment, and factor analysis (steps 3, 4, and 5). Throughout this process, we met regularly with a Project Leadership Team (PLT) that included the project staff as well as representatives from the American Network of Community Options and Resources (ANCOR), the Association of People Supporting Employment First (APSE), and the State Employment Leadership Network (SELN), to review each iteration of the CLEFS as well as emerging findings.
2.1 Item generation
2.1.1 Measures
The starting point for the CLEFS was an existing, 54-item Guidepost Self-Assessment Tool included in the CLE Toolkit (Sulewski et al., 2016). We refined the self-assessment tool through close review by our PLT and a self-advocate review panel, as well as through identification and addition of new items from a database of measures developed by the Rehabilitation Research and Training Center on HCBS Outcomes Measurement (RTC-OM; described below).
2.1.2 Participants
As noted above, the PLT included the project staff as well as representatives from ANCOR, APSE, and the SELN who were familiar with day services and supports. The Self-Advocate Review Panel consisted of five self-advocates recruited through the staff’s and PLT’s professional networks. Self-advocates were offered a $25 stipend per meeting.
The second part of this activity involved collecting items from the HCBS Outcome Measurement Database maintained by the RTC-OM at the University of Minnesota (Institute on Community Integration, 2018), which contains over 120 outcome-measurement instruments identified via an extensive review of existing instruments by the RTC-OM research team.
2.1.3 Data collection
The PLT met twice in early 2020 to review the self-assessment measures. The Self-Advocate Review Panel met four times, once for each guidepost section, and reviewed a plain-language version of the tool.
The review of the RTC-OM database took place in two steps. First, a team member identified a set of instruments most relevant to the Four Guideposts for CLE. Second, we reviewed each of these instruments to identify specific items that could be adapted for use in the CLEFS.
2.1.4 Data analysis
Project team members recorded notes at each of the PLT and self-advocate panel discussions. The team then reviewed the feedback to identify potential edits to the tool. Finally, we reviewed the set of items identified from the review of the RTC-OM database to determine which items would fill gaps in the scale and which duplicated existing measures.
2.2 Delphi panel review
2.2.1 Measures
The Delphi method aims to obtain a reliable consensus from a group of experts (Linstone & Turoff, 1975). It is an iterative, group-based process in which panel members participate in two or more rounds of survey review, responding in each round to feedback and changes from the previous round until statistical agreement is reached. An advantage of the Delphi process is that it can be completed entirely online, ensuring that panelists remain anonymous to one another, which reduces bias and persuasion (Rowe & Wright, 1999). The Delphi process has been widely used in scale development to establish content or face validity (Fernández et al., 2017; Gómez et al., 2015; Mengual-Andrés et al., 2016; Vicente et al., 2017).
2.2.2 Participants
In conjunction with project partners and collaborators, we identified and contacted 47 experts in CLE as potential Delphi panel members. Thirty individuals agreed to participate, and 25 completed both rounds of surveying; panelists included provider staff and management, state agency staff, family members of people with disabilities, and researchers.
2.2.3 Data collection
Round 1 began with an email to the Delphi panelists containing a link to a Qualtrics survey with the 126 draft CLEFS statements. Panelists were prompted to review each statement and decide whether it was “essential”, “useful, but not essential”, or “not essential” to understanding a provider’s fidelity to the corresponding CLE guidepost. The survey then asked whether panelists had any comments about the statement, including whether it should be reworded or would fit better under a different guidepost. After analyzing the Round 1 results, we refined the CLEFS and sent the new version back to the Delphi panelists to be rated in the same manner in Round 2.
2.2.4 Data analysis
The content validity ratio (CVR) for each statement in both rounds was calculated using Lawshe’s formula, CVR = (ne – N/2) / (N/2), where ne is the number of panel members rating the statement “essential” and N is the total number of panel members. Panelists’ comments were also taken into account when deciding whether to edit or remove a statement from the CLEFS.
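To make the calculation concrete, the following is a minimal Python sketch of Lawshe’s formula; the rating counts in the example are hypothetical, not actual panel data.

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's content validity ratio: CVR = (ne - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical example: 20 of 25 panelists rate a statement "essential".
cvr = content_validity_ratio(n_essential=20, n_panelists=25)
print(round(cvr, 2))  # 0.6, above the .5 retention threshold used here
```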
2.3 Pilot data collection and analysis
2.3.1 Measures
The pilot study involved fielding the 72-item revised CLEFS.
2.3.2 Participants
The target audience was providers of day services and supports to people with IDD throughout the United States. We recruited participants through professional organizations including APSE, ANCOR, and the SELN, as well as through the research team’s professional networks. Invitation emails were sent to key contacts at service providers nationally. Liaisons from 35 providers responded that they were interested in participating in the pilot study. Each provider subsequently recruited three to five staff members, for a total of 166 participants. Each participant received the CLEFS via Qualtrics. As responses were collected, data were downloaded, cleaned, and converted to an SPSS data file for further analysis.
2.3.3 Data analysis
First, given that the CLEFS was a new assessment instrument scored on a Likert scale (an ordinal measurement), we conducted exploratory factor analysis (EFA; Baños & Franklin, 2002; Baglin, 2014; Mundfrom, Bradley, & Whiteside, 1993; Tomás-Sábado & Gómez-Benito, 2005). The purpose of this analysis was to identify items that were redundant or duplicative. Per standard statistical practice, any CLEFS items that loaded on more than one factor would be eliminated, and items with a loading of less than 0.40 would be further examined for continued inclusion (Hinkin et al., 1997).
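To illustrate this screening logic, here is a minimal sketch using Python’s open-source factor_analyzer package rather than SPSS, which the study used; the file name, factor count, and promax rotation are assumptions for illustration, as the manuscript does not report the rotation method.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical pilot data: one row per respondent, one column per
# CLEFS item, each scored on the 5-point Likert scale.
responses = pd.read_csv("clefs_pilot.csv")

# EFA with an oblique (promax) rotation, which allows factors to
# correlate, plausible for related facets of CLE.
fa = FactorAnalyzer(n_factors=5, rotation="promax")
fa.fit(responses)

pattern = pd.DataFrame(fa.loadings_, index=responses.columns)

# Flag items whose strongest loading falls below .40 for further
# examination (Hinkin et al., 1997).
weak_items = pattern.index[pattern.abs().max(axis=1) < 0.40]
print(list(weak_items))
```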
Next, we calculated Cronbach’s alpha within each provider to assess internal consistency. Cronbach’s alpha is a widely used metric describing the extent to which all items in a test measure the same concept, and thus reflects the inter-relatedness of the items within the test (Tavakol & Dennick, 2011).
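Cronbach’s alpha can also be computed directly from its definition, as in this illustrative sketch; the per-provider score matrices below are simulated stand-ins, not study data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a raters-by-items matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    sum_item_var = scores.var(axis=0, ddof=1).sum()  # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of rater totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Simulated stand-in for each provider's group of raters.
rng = np.random.default_rng(0)
scores_by_provider = {
    "provider_a": rng.integers(1, 6, size=(4, 18)),  # 4 raters x 18 items (hypothetical)
    "provider_b": rng.integers(1, 6, size=(5, 18)),  # 5 raters x 18 items (hypothetical)
}

# One alpha per provider, then the average across providers, mirroring
# how interrater reliability was summarized in this study.
alphas = [cronbach_alpha(m) for m in scores_by_provider.values()]
print(np.mean(alphas))
```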
3 Results
3.1 Item generation
Feedback from the self-advocate review panel and the PLT suggested using person-first language in the CLEFS statements and ensuring that the statements focused on the intentionality of the day services and supports. For example, a self-assessment statement that read “Person centered plans are updated at least once a year” was changed to “Plans are updated when they need to be to reflect the changing interests of the person.” We also added a neutral point to the Likert scale based on a recommendation from the PLT. The review of the RTC-OM database led to the addition of 65 new items, yielding a 126-item draft CLEFS.
3.2 Delphi panel review
In Round 1, 85 of the 126 statements produced an “essential” CVR of at least .5 (min = .52, max = 1.0), 14 statements produced a “useful, but not essential” CVR between .46 and .31, and 27 statements produced a “not essential” CVR between .28 and –.62. We removed 20 statements and reworded seven based on comments from the Delphi panelists, leaving 95 statements to be rated in Round 2.
In Round 2, 74 of those 95 statements produced an “essential” CVR between 1.00 and .56; seven statements produced a “useful, but not essential” CVR between .48 and .41; and 14 statements produced a “not essential” CVR between .31 and –.26. Because the panelists offered no suggested edits to any of the statements they rated as “essential”, the project team decided to remove all statements with a CVR of less than .5 and conclude content validity testing. The result was a draft CLEFS containing 72 statements, all with a CVR of .5 or higher.
3.3 Pilot data collection and analysis
The initial EFA on all 72 items failed to rotate due to lack of variation. After consulting a survey methodologist, we flagged items with a standard deviation lower than 0.80, indicating low variability among participants, as candidates for removal. After a thorough screening that also weighed the content-wise importance of each item, a few items with SDs below 0.80 were retained and several items with SDs above the threshold were removed as redundant. The 72-item CLEFS was thereby reduced to a 44-item version, which rotated on seven factors.
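A minimal sketch of this descriptive screen, assuming a respondents-by-items data file (the file name and data are hypothetical):

```python
import pandas as pd

# Hypothetical respondents-by-items matrix of 5-point Likert scores.
responses = pd.read_csv("clefs_pilot.csv")

# Per-item mean and standard deviation; items with SD < 0.80 showed
# too little variability and became candidates for removal.
desc = responses.agg(["mean", "std"]).T
low_variability_items = desc[desc["std"] < 0.80].index
print(list(low_variability_items))
```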
Adhering to standard practice for eliminating items, we then examined the structure matrix produced by the EFA and eliminated items that loaded on more than one factor, resulting in a 27-item CLEFS with a five-factor solution.
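The cross-loading rule can be sketched the same way; the data file is again hypothetical, and a promax rotation is assumed here because factor_analyzer then exposes a structure matrix, though the manuscript does not specify which rotation was used in SPSS.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("clefs_pilot.csv")  # hypothetical data file

fa = FactorAnalyzer(n_factors=7, rotation="promax")
fa.fit(responses)

# With promax, factor_analyzer provides the structure matrix
# (item-factor correlations) as fa.structure_.
structure = pd.DataFrame(fa.structure_, index=responses.columns)

# Drop items that load at .40 or above on more than one factor,
# since they do not capture a single clear concept.
cross_loading = (structure.abs() >= 0.40).sum(axis=1) > 1
retained_items = responses.loc[:, ~cross_loading]
```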
As a final step, we flagged items with a mean score higher than 4.3 (out of 5) and an SD lower than 0.8, indicating extremely high agreement and hence low variability among the participants, as candidates for removal. In addition to referencing these quantitative indicators, the team also took a qualitative approach, going through the 27-item CLEFS and removing items that were redundant or duplicative. The final version of the CLEFS contained 18 items that loaded on three factors. Factor 1 yielded more items (n = 9) than Factor 2 (n = 6) and Factor 3 (n = 3). We reviewed each item in Factor 1 and manually divided the items into two groups according to their content themes.
3.4 Cronbach’s alpha
Alpha values above 0.70 are considered acceptable, while values above 0.90 may indicate item redundancy and a need to shorten the measure (Tavakol & Dennick, 2011). We calculated an alpha for each provider as a measure of interrater reliability, producing an average of α = .837.
4 Discussion
The main outcome of this series of studies was the CLEFS (https://www.communityinclusion.org/files/cle-toolkit/guideposts_assessment_22.pdf), which includes 18 items across four categories: organizational values, person-centered supports, community connections, and continuous quality improvement. The CLEFS differs from the earlier self-assessment tool in two notable ways.
First, the number of items was reduced from 54 to 18. The initial item-generation stage increased the number of items from 54 to 126, mostly through the addition of items identified via review of the RTC-OM database. In the Delphi panel study, we eliminated items not deemed “essential” by the panel, reducing the number from 126 to 72. Based on the pilot study data, we eliminated items that either had low variability (meaning they were not useful for distinguishing among respondents) or loaded onto multiple factors (meaning they did not capture one clear concept), ultimately leading to an 18-item scale. This brevity adds to the tool’s usefulness, since it does not require extensive time or detailed knowledge to complete.
Second, the sections of the scale are different from the CLE Guideposts on which the self-assessment was framed. While the CLE Guideposts still serve as a useful framework for discussing effective strategies to support CLE, our analysis suggested that they did not reflect distinct components. Table 1 illustrates how the guideposts aligned with the CLEFS components. Guidepost 4 split into two components: organizational values and continuous quality improvement. Guideposts 2 and 3 were combined into one component, community connections, while Guidepost 1 was renamed person-centered supports.
Table 1. Alignment of CLE guideposts with CLEFS components
CLEFS component | CLE guidepost(s) | Sample CLEFS items: My organization . . .
Organizational values | 4: Ensure that supports are outcome-oriented and regularly monitored | makes sure all working-age individuals have opportunities to explore employment.
Person-centered supports | 1: Individualize supports for each person | engages the individual throughout the person-centered planning process.
Community connections | 2: Promote community membership and contribution; 3: Use human and social capital to decrease dependence on paid supports | emphasizes building networks of support from family, friends, and community.
Continuous quality improvement | 4: Ensure that supports are outcome-oriented and regularly monitored | regularly reviews data and feedback collected and uses them to improve supports at the individual level.
This reorganization of components also reflected the types of items eliminated via the Delphi panel review. For example, some specific practices, such as peer-to-peer supports or remote supports, were deemed not essential by the Delphi panel because they reflected specific strategies rather than big-picture goals. These patterns reduced the number of items under Guideposts 2 and 3, which may have contributed to their being collapsed into one component in the pilot stage.
4.1 Limitations
Our research design had a few key limitations, mostly related to sampling and recruitment for the pilot study.
One was the overall number of respondents. We originally aimed to collect 300 responses to the pilot survey, but because the COVID-19 pandemic arrived in the midst of the study, gathering responses proved very challenging. With extra effort from our partners and colleagues, and the added incentive of raffling gift certificates to respondents, we eventually collected 166 responses: a sufficient number to complete the analysis, but less robust than the original goal.
Second, the use of the research team’s and PLT’s networks to identify pilot participants may have contributed to an overall lack of variability in responses (i.e., few responses on the lower ends of the scale items). Service providers that are engaged with organizations such as APSE are likely already attentive to goals such as person-centeredness and community connection. Other service providers, such as day habilitation providers offering primarily facility-based and/or medically oriented services, were less likely to be part of our sample.
5 Conclusions and directions for future research
This set of research studies has produced a brief, simple CLEFS with strong validity and reliability. Such a tool has multiple potential uses in the current environment of reinventing services post-COVID while moving toward full implementation of the HCBS settings rule. Service providers can use the CLEFS to assess their own day services and supports, technical assistance providers can use it to identify potential areas of focus, and state IDD agencies can use it to assess and improve day services and supports across a network of providers.
At the same time, there is a clear set of next research steps to further assess the validity and reliability of the CLEFS. These include analyzing construct validity (how well does the CLEFS align with other related measures?), test-retest reliability (do participants score the same when the CLEFS is administered twice over a short span of time?), and confirmatory factor analysis (do the same components hold up with a different sample of service providers?). All of these steps will require administering the CLEFS to a new and larger sample of service provider respondents, reflecting a wider array of service types.
Acknowledgments
The authors would like to thank Ali Bahrani, graduate student at the School for Global Inclusion and Social Development, for his assistance with reviewing the RTCOM database of measures. They also thank Lee Hargraves of the UMass Boston Center for Survey Research for his technical consultation on scale development techniques. Finally, they thank the members of their Project Leadership Team and Self Advocate Advisory Panel for their thoughtful input.
Conflict of interest
The authors declare that they have no conflict of interest.
Ethics statement
Both data collection activities for this project were determined exempt by the institutional review board at the University of Massachusetts Boston. The study numbers were 2020103 (Delphi panel) and 2021049 (pilot data collection).
Funding
This project was funded by grant 90IFRE0025 from the National Institute on Disability, Independent Living, and Rehabilitation Research.
Informed consent
All participants completed an online consent form.
References
Association of People Supporting Employment First. (2020). Impact of COVID-19 on disability employment services and outcomes. Retrieved from https://apse.org/covid-19-impact-survey/
Baglin, J. (2014). Improving your exploratory factor analysis for ordinal data: A demonstration using FACTOR. Practical Assessment, Research & Evaluation, 19(5). https://doi.org/10.7275/dsep-4220
Baños, J. H., & Franklin, L. M. (2002). Factor structure of the Mini-Mental State Examination in adult psychiatric inpatients. Psychological Assessment, 14(4), 397–400. https://doi.org/10.1037/1040-3590.14.4.397
Centers for Medicare & Medicaid Services. (2014a). Home and community-based services. Retrieved from http://www.medicaid.gov/Medicaid-CHIP-Program-Information/By-Topics/Long-Term-Services-and-Support/Home-and-Community-Based-Services/Home-and-Community-Based-Services.html
Churchill, G. A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16(1), 64–73. https://doi.org/10.2307/3150876
Fernández, M., Verdugo, M. Á., Gómez, L. E., Aguayo, V., & Arias, B. (2017). Core indicators to assess quality of life in population with brain injury. Social Indicators Research, 137, 813–828. https://doi.org/10.1007/s11205-017-1612-6
Freeze, S., Hall, A., Collins, S., Schumate, D., Thomas, C., & Brent, B. (2017). Different states, common issues: Moving mountains, one service at a time. Journal of Vocational Rehabilitation, 46(3), 265–271. https://doi.org/10.3233/JVR-170861
Friedman, C., & Spassiani, N. A. (2017). Getting out there: Community support services for people with intellectual and developmental disabilities in Medicaid HCBS waivers. Inclusion, 5(1), 33. https://doi.org/10.1352/2326-6988-5.1.33
Gómez, L. E., Arias, B., Verdugo, M. Á., Tassé, M. J., & Brown, I. (2015). Operationalisation of quality of life for adults with severe disabilities. Journal of Intellectual Disability Research, 59(10), 925–941.
Hall, A., Butterworth, J., Winsor, J., Kramer, J., Nye-Lengerman, K., & Timmons, J. (2018). Building an evidence-based, holistic approach to advancing integrated employment. Research and Practice for Persons with Severe Disabilities, 43(3). https://doi.org/10.1177/1540796918787503
Hewitt, A., Pettingell, S., Kramme, J., Smith, J., Dean, K., Kleist, B., Sanders, M., & Bershadsky, J. (2021). Direct support workforce and COVID-19 national report: Six-month follow-up. Minneapolis, MN: Institute on Community Integration, University of Minnesota.
Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21(5), 967–988. https://doi.org/10.1177/014920639502100509
Hinkin, T. R., Tracey, J. B., & Enz, C. A. (1997). Scale construction: Developing reliable and valid measurement instruments. Journal of Hospitality & Tourism Research, 21(1), 100–120. https://doi.org/10.1177/109634809702100108
Institute on Community Integration. (2018). New database of instruments that measure HCBS outcomes for people with disabilities. Retrieved from https://ici.umn.edu/news/rtc-om
Kamau, E., & Timmons, J. (2018). A roadmap to competitive integrated employment: Strategies for provider transformation. Bringing Employment First to Scale, Issue No. 20. Retrieved from https://scholarworks.umb.edu/cgi/viewcontent.cgi?article=1034&context=thinkwork&preview%20mode=1&z=1549300605
Lane et al. v. Brown et al. (2015). Civil Action No. 3:12-cv-00138-ST. Retrieved from https://www.justice.gov/opa/file//download
Linstone, H. A., & Turoff, M. (Eds.). (1975). The Delphi method: Techniques and applications. Retrieved from http://is.njit.edu/pubs/delphibook/
McMichael-Peirce, T. (2015). Is community integration understood by those charged with facilitating it? Journal of the Christian Institute on Disability, 4(2), 41–54.
Mengual-Andrés, S., Roig-Vila, R., & Mira, J. B. (2016). Delphi study for the design and validation of a questionnaire about digital competences in higher education. International Journal of Educational Technology in Higher Education, 13(1), 1–11. https://doi.org/10.1186/s41239-016-0009-y
Miller, K., Schleien, S., & Bedini, L. (2003). Barriers to the inclusion of volunteers with developmental disabilities. The Journal of Volunteer Administration, 21(1), 25–30.
Mundfrom, D., Bradley, R., & Whiteside, L. (1993). A factor analytic study of the infant/toddler and early childhood versions of the HOME Inventory. Educational and Psychological Measurement, 53, 479–489.
National Quality Forum. (2016). Quality in home and community-based services to support community living: Addressing gaps in performance measurement. Retrieved from https://www.qualityforum.org/Publications/2016/09/Quality_in_Home_and_Community-Based_Services_to_Support_Community_Living__Addressing_Gaps_in_Performance_Measurement.aspx
Neely-Barnes, S. L., & Elswick, S. E. (2016). Inclusion for people with developmental disabilities: Measuring an elusive construct. Journal of Social Work in Disability & Rehabilitation, 15(2), 134. https://doi.org/10.1080/1536710x.2016.1162122
Rak, E. C., & Spencer, L. (2015). Community participation of persons with disabilities: Volunteering, donations and involvement in groups and organizations. Disability and Rehabilitation, 38(17), 1705–1715. https://doi.org/10.3109/09638288.2015.1107643
Reilly, C. (2005). Volunteering and disability: Experiences and perceptions of volunteering from disabled people and organisations. Stirling, Scotland, UK: Volunteer Development Scotland.
Rowe, G., & Wright, G. (1999). The Delphi technique as a forecasting tool: Issues and analysis. International Journal of Forecasting, 15, 353–375. https://doi.org/10.1016/S0169-2070(99)00018-7
Sowers, J., Dean, J., & Holsapple, M. (1999). The alternative to employment (ATE) study: Toward full inclusion. Salem, OR: Oregon Office of Developmental Disability Services, Department of Human Services.
Sulewski, J. S. (2010). In search of meaningful daytimes: Case studies of community-based nonwork supports. Research and Practice for Persons with Severe Disabilities, 35(1–2), 39–54. https://doi.org/10.2511/rpsd.35.1-2.39
Sulewski, J. S., & Timmons, J. C. (2015). Introduction to community life engagement. Boston, MA: University of Massachusetts Boston, Institute for Community Inclusion.
Sulewski, J., & Timmons, J. (2018). State roles in promoting community life engagement: Themes from the State Employment Leadership Network’s working group. Engage Brief 8. Boston, MA: University of Massachusetts Boston, Institute for Community Inclusion.
Sulewski, J., Timmons, J., Hall, A., Lyons, O., Curren, H., Tanabe, M., & Boeltzig-Brown, H. (2016). Community Life Engagement toolkit. Retrieved from https://www.thinkwork.org/cle-toolkit
Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach’s alpha. International Journal of Medical Education, 2, 53–55. https://doi.org/10.5116/ijme.4dfb.8dfd
Thompson, J. R., & Nygren, M. A. (2020). COVID-19 and the field of intellectual and developmental disabilities: Where have we been? Where are we? Where do we go? Intellectual and Developmental Disabilities, 58(4), 257–261. https://doi.org/10.1352/1934-9556-58.4.257
Timmons, J. C., & Sulewski, J. S. (2016). High-quality community life engagement supports: Four guideposts for success. Boston, MA: University of Massachusetts Boston, Institute for Community Inclusion.
Tomás-Sábado, J., & Gómez-Benito, J. (2005). Construction and validation of the Death Anxiety Inventory (DAI). European Journal of Psychological Assessment, 21(2), 108–114. https://doi.org/10.1027/1015-5759.21.2.108
United States v. State of Rhode Island. (2014). Retrieved from http://www.ada.gov/olmstead/documents/ri-olmstead-statewide-agreement.pdf
Verdonschot, M. M., Witte, L. P., Reichrath, E., Buntinx, W. H., & Curfs, L. M. (2009). Community participation of people with an intellectual disability: A review of empirical findings. Journal of Intellectual Disability Research, 53, 303–318. https://doi.org/10.1111/j.1365-2788.2008.01144.x
Vicente, E., Verdugo, M. A., Gómez-Vela, M., Fernández-Pulido, R., Wehmeyer, M. L., & Guillén, V. M. (2017). Personal characteristics and school contextual variables associated with student self-determination in Spanish context. Journal of Intellectual & Developmental Disability, 44(1), 23–34. https://doi.org/10.3109/13668250.2017.1310828
Winsor, J., Butterworth, J., Migliore, A., Domin, D., Zalewska, A., Shepard, J., & Kamau, E. (2022). StateData: The national report on employment services and outcomes through 2019. Boston, MA: University of Massachusetts Boston, Institute for Community Inclusion.
39 | Winsor, J. , Butterworth, J. , Migliore, A. , Domin, D. , Zalewska, A. , Shepard, J. , & Kamau, E. ((2022) ). StateData: The national report on employment services and outcomes through 2019. Boston, MA: University of Massachusetts Boston, Institute for Community Inclusion. |