Clinical Research Workload and Study Complexity Assessment

Membership

Membership is open to all personnel of CTSA Program Hubs. For more detailed information on membership, consult the Working Group section of the Guidance for CTSA Program Groups.

Contact

Coordinator: Nicole Iaquinto, CLIC Coordinator

Purpose

The purpose of this proposal is to identify metrics that will inform a scoring rubric for determining clinical research workload and study complexity for studies conducted in outpatient settings. The rationale for focusing on outpatient settings is that many protocols are now implemented in ambulatory care. Our Workgroup will identify core elements (tasks) [4] associated with pre-study activities (e.g., protocol review, institutional review board submission, source documentation completion), active study activities (e.g., study questionnaires, biospecimen collection, specimen processing), and follow-up and evaluation of study output. Four principles guide this project: 1) usability, 2) workload measurement for clinical research professionals, 3) the ability to determine study complexity, and 4) usefulness across all phases of clinical trials. Our Workgroup will collaborate with key members of the Research Committee of the International Association of Clinical Research Nurses (IACRN), with the Trial Innovation Network (TIN) to help promote the scoring rubric to the CTSA consortium, and with the Recruitment Innovation Center (RIC) for assistance with recruiting participants (clinical research professionals and scientists).

Initial Instrument and Item Development
An item pool will be developed from an a priori literature review and content analysis related to the complexity and workload of clinical trials and/or clinical research studies.
Items will be generated to fit the identified dimensions of study complexity and workload, guided by the Donabedian Quality Model [5]. Donabedian's model consists of three dimensions: Structure, Process, and Outcomes. For this project, we define Structure as the environmental and institutional infrastructure that must be present for study approval and conduct, including research team size and composition. Process refers to the proposed study protocol and the research team interactions needed to complete the study. Outcomes encompass the success of study completion and an evaluation feedback loop to assess methodology and output.
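
To make the framework concrete, the sketch below shows one hypothetical way candidate rubric items could be grouped under the three Donabedian dimensions. This is purely illustrative (Python is used only for compactness), and the item wording is invented rather than drawn from the actual item pool.

```python
# Hypothetical sketch only: one possible grouping of candidate rubric
# items under the three Donabedian dimensions. Item wording is invented
# for illustration and is not drawn from the project's item pool.
rubric_dimensions = {
    "Structure": [   # environmental/institutional infrastructure
        "Research team size and composition",
        "Availability of specimen-processing facilities",
    ],
    "Process": [     # protocol demands and team interactions
        "Number of study visits per participant",
        "Complexity of biospecimen collection and handling",
    ],
    "Outcomes": [    # study completion and evaluation feedback loop
        "Proportion of protocol activities completed on schedule",
        "Feedback on methodology and study output",
    ],
}

for dimension, items in rubric_dimensions.items():
    print(f"{dimension}: {len(items)} candidate items")
```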

Guidelines for technical and grammatical principles will be followed to produce clear, concise items that use language familiar to clinical research professionals and scientists [6]. Following a review for clarity and grammar by the project team, the newly created items and subscales will undergo face and content validity testing by an expert panel. Face validity is a subjective assessment of whether the created items and subscales cover the concepts of workload and study complexity [7]. Content validity ensures that the new scoring rubric and its items are relevant to the content being measured [8]. The expert panel will be recruited through our Workgroup's professional network. This convenience sample will include clinical research professionals and scientists with more than five years of experience conducting studies of various designs.

Eligibility criteria to serve as an expert are as follows:
• engaged in clinical research for five years or more
• experienced in preparing, directing, or coordinating clinical studies sponsored by industry, foundations, and/or government
• certified in Good Clinical Practice
• completed training in research ethics and compliance (such as the Collaborative Institutional Training Initiative (CITI Program) or equivalent)

We will recruit at least six participants, and the panel will meet virtually. We will present open-ended questions asking the panel to review the proposed dimensions and attributes capturing research study workload and complexity (Table 1). We will ask them to "think aloud" and discuss how each dimension is defined, what components are relevant or required, and what aspects of study complexity may be missing. Next, we will ask the panel to review each proposed instrument item independently to ensure its relevance to the concept and its defined attributes. Participants will rate each item on a 4-point Likert scale ranging from 'highly relevant' (4) to 'highly irrelevant' (1). Responses will be entered into SAS 9.4, and a content validity index (CVI) will be computed for each item (I-CVI) and each subscale (S-CVI). Items and subscales with a CVI greater than .8 will be eligible for inclusion and further psychometric testing [8].
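
As a concrete illustration of the CVI calculation, the sketch below computes the I-CVI for each item (the proportion of experts rating it 3 or 4) and the average-of-items S-CVI for the subscale. The ratings matrix is hypothetical, and Python is used here only for brevity; the actual analysis will be run in SAS 9.4 as described above.

```python
# Illustrative sketch (hypothetical ratings): content validity indices
# from a six-expert panel using the 4-point relevance scale above.
# rows = candidate items, columns = expert raters (ratings 1-4)
ratings = [
    [4, 4, 3, 4, 4, 3],  # item 1
    [4, 3, 4, 4, 3, 4],  # item 2
    [2, 3, 2, 3, 4, 2],  # item 3
]

def i_cvi(item_ratings):
    """Proportion of experts rating the item 3 or 4 (i.e., relevant)."""
    return sum(1 for r in item_ratings if r >= 3) / len(item_ratings)

item_cvis = [i_cvi(item) for item in ratings]
s_cvi_ave = sum(item_cvis) / len(item_cvis)  # subscale CVI, average method

for i, cvi in enumerate(item_cvis, start=1):
    status = "eligible" if cvi > 0.8 else "revise or drop"
    print(f"item {i}: I-CVI = {cvi:.2f} ({status})")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```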

Pilot Testing and Initial Psychometric Analysis
Pilot testing will be used to conduct an initial item analysis and a preliminary assessment of reliability [9]. The suggested minimum sample for initial scale development is 30 participants [10]. A second convenience sample of clinical research professionals and scientists will be recruited through the team's professional research network with assistance from the RIC. Allowing for an expected response rate of 40% (so that, for example, roughly 100 invitations would be expected to yield about 40 completed responses), we anticipate a pilot sample of at least 40 participants (n = 40), exceeding the suggested minimum of 30.

Eligibility criteria are as follows:
• engaged in clinical research for five years or more
• experienced in preparing, directing, or coordinating clinical studies sponsored by industry, foundations, and/or government
• certified in Good Clinical Practice
• completed training in research ethics and compliance (such as the Collaborative Institutional Training Initiative (CITI Program) or equivalent)

An email will be sent to potential participants describing the project and its voluntary nature and providing the research team's contact information [11]. Interested participants will contact the team via email. An electronic version of the scoring metric will be accessible to participants via an anonymous link generated by Qualtrics survey software. After three weeks without a response, a reminder email will be sent [12]. When pilot data collection is complete, data will be analyzed in SAS 9.4, and item analysis and reliability statistics will be computed. Descriptive statistics, such as range, mean, and standard deviation, will be calculated, and items with good variation across participants will be retained. Inter-item correlations and the coefficient of reliability (Cronbach's alpha) for each subscale will be calculated [13]. Items that do not contribute to the overall reliability (alpha) of the scale will be removed. Corrected item-total correlations in the range of .30 to .70 will indicate that items correlate adequately with their subscale. After pilot testing, all remaining items that contribute to the scoring metric's overall reliability will be retained for field testing and factor analysis, which will be conducted under a future proposal.
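
For illustration, the sketch below shows the two reliability statistics named above: Cronbach's alpha for a subscale and each item's corrected item-total correlation (its correlation with the sum of the remaining items). The pilot data are hypothetical, and Python stands in for the SAS 9.4 analysis described in the text.

```python
# Illustrative sketch (hypothetical pilot data): Cronbach's alpha and
# corrected item-total correlations for one subscale.
import numpy as np

# rows = pilot participants, columns = subscale items
scores = np.array([
    [4, 3, 4, 2],
    [3, 3, 3, 2],
    [5, 4, 4, 3],
    [2, 2, 3, 1],
    [4, 4, 5, 2],
    [3, 2, 3, 2],
])

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(x):
    """Correlation of each item with the sum of the *other* items."""
    return [
        np.corrcoef(x[:, j], np.delete(x, j, axis=1).sum(axis=1))[0, 1]
        for j in range(x.shape[1])
    ]

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
for j, r in enumerate(corrected_item_total(scores), start=1):
    status = "retain" if 0.30 <= r <= 0.70 else "review"
    print(f"item {j}: corrected item-total r = {r:.2f} ({status})")
```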

Goals
• Identify the dimensions and attributes of clinical research workload and complexity
• Develop a scoring rubric to scale clinical research study workload and complexity
• Establish the initial psychometric properties of an instrument that aims to measure clinical research study workload and complexity

Leadership

  • Co-Chair: Bernadette Capili, Director, Heilbrunn Family Center for Research Nursing
  • Co-Chair: Gallya Gannot, Program Director, CTSA/DCI/NCATS
Meetings:
Meetings are held on the third Thursday of the month from 1:00 to 2:00 pm ET (starting May 2021).

No deliverables at this time.