Frequently Asked Questions

Clear Impact Scorecard

I developed a “scorecard” and a “program.” What is the difference between the two?
A scorecard is a communication tool, whereas a program is an agency or service system that is responsible for helping achieve a result. You create a Scorecard to communicate: it presents your programs, performance measures, strategies, and actions, and documents your work. A program, by contrast, aims to help you achieve a result or outcome.
Do I create a separate scorecard for each program?
No, you can have multiple programs on a single Scorecard.
How do I use the tags?
Tags are used to categorize Scorecard objects in order to help you find and identify them quickly. Each tag can be up to 15 characters long. Generally, people use acronyms they can easily remember. When you hover over a tag in the Scorecard software, a description will appear; the description text can be up to 250 characters.
Can I use the “Import Data Values” setting in the Scorecard software for importing the Common Metrics into Scorecard from other applications?
Using this feature is helpful when handling a large amount of data on a frequently repeating schedule. Given the small amount of data required for this project, the economies of scale that usually make importing a good idea may not apply. Also, narrative text cannot be imported from Excel because it is rich-formatted text; you would still need to add the narrative in the software.
Will we be using the Scorecard software going forward after initial implementation? If so, who will be responsible for the associated costs?
Yes. This software is being used to facilitate communications within hubs and across the Consortium. It will also be used for strategic management and to communicate with NCATS Program Directors. Further information about costs for licenses will be provided at a later date.
How do I include the forecast in my Scorecard?
The Edit Forecast button allows you to plot a forecast visually on the graph. Once you click it, a forecast edit bar will appear at the top of the graph section. You can add points and drag and drop them into the pattern you choose. When you are done, click ‘Save’ on the forecast message bar to save the forecast.
What time periods should be in Scorecard? Monthly, quarterly, or just one annual data point?
The first three Common Metrics should be entered with one annual data point; however, you may want to create an additional performance measure in which you track monthly or quarterly data to assist in strategic management at your hub.
Can we use just one Scorecard for all the strategies in our plan for a given metric? It would be easier to look at just one Scorecard and see if our strategies are working, instead of having to switch back and forth.
You can choose what is most effective for your team when building your child Scorecards. Clear Impact suggests each strategy have its own Scorecard; however, if it is more convenient for you to have just one Scorecard for all of the strategies in a metric, that is perfectly acceptable.

Implementation

Is there a specific mechanism for submitting language to revise the guidelines and/or communicate any additions (e.g., definitions of engaged in research)?
Those questions and suggestions can be emailed to the CLIC team who will forward them to NCATS.
Will the work related to the Common Metrics project be added to the annual progress report?
There is no current plan to ask hubs to enter Common Metrics data into the annual RPPRs. You should use the Common Metrics data, the RBA framework and the Scorecard software for strategic management at your hub. However, if there are results or successes related to the Common Metrics that you would like to share, you can add them to your hub’s RPPR.

Metrics - General

What is the deadline for completing data entry for the first three metrics?
NCATS and the Tufts Implementation Team are drafting a timeline for entering performance measure scores into the Scorecard software. For the first three metrics, hubs are asked to enter the data and develop strategic management plans as early as possible.
What definition of “annually” should we be using? Calendar year, grant year, fiscal year, or academic year?
The Operational Guidelines were updated to indicate CTSA program hubs should use the calendar year for each of the first three Common Metrics.
Our CTSA first received funding last August, so currently there is very little data for what has happened since then. Should we look back at data from before we received our CTSA in order to have more of a trend, or should it be based solely on data after the award?
Only enter data related to what happened after the award. If your hub has historical data that reflects information for earlier time periods, it should be considered in your work on developing your Story Behind the Curve.
With the Turn the Curve methodology, we were asked to look for a baseline and do a forecast. Our baseline is currently just one data point. Should we be thinking about also collecting metrics for previous calendar years?
If the data are available and you have the resources to go back and calculate them for previous years, this could be helpful in determining your forecast; however, per the Operational Guidelines, this is not required.

Metrics - Careers

We already completed our data collection for this year on TL1 and KL2 program graduates but did not include information on underrepresented persons. Can we wait until the next time that we survey graduates to collect this information?
Please collect the data when you begin the implementation of the Common Metrics so that all of the hubs are collecting data in a standardized manner.
There are multiple definitions for underrepresented minorities. Which one are we using?
We are using the NIH definitions. The Operational Guideline has a link to more information about these definitions. The most up-to-date versions of the Operational Guidelines can be found on the Established Common Metrics page.
When is a TL1 or KL2 student eligible to be counted for the careers metric?
Once they have completed the TL1 training program and are no longer on the TL1 grant, TL1 trainees can be assessed for whether they are engaged in research and added to the denominator of the metric. KL2 scholars should be added to the denominator following completion of their CTSA-funded training as a KL2 scholar.
We are a relatively new hub and do not yet have any eligible graduates to be included in the Careers Common Metric. Should we do Turn the Curve planning for what we think will happen?
Yes. If you do some thinking about this now, you may identify some additional performance measures or strategies that you may want to implement (e.g., adding an exit interview to your process for departing graduates) even before you are able to start collecting data for the metric.
Many of our short-term TL1 trainees return to medical school and go on to residency after completing the program. Would that be considered “engaged in research”?
If medical students or residents do not have dedicated time for research, they are not considered to be “engaged in research.”
Should we be reporting numbers for the hub only, or for the hub and its affiliates?
You should report all of the TL1 and KL2 scholars who graduated from the program, regardless of where they are located.
Do we include institutionally funded KL2 and TL1 scholars or only those whose training was paid for by the grant?
Please include only CTSA program-funded KL2 and TL1 scholars.
The operational guideline notes that “underrepresented persons” include individuals from underrepresented racial and ethnic groups, individuals with disabilities, and individuals from disadvantaged backgrounds. Are we also to be collecting data on the number and percentage in each of the three categories, or just the overall number and percentage of underrepresented persons?
You should be collecting data on the number and percentage of underrepresented persons. You do not need to break it down into the three categories.
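As a minimal illustration of that calculation (the records and field names below are hypothetical, not drawn from the Operational Guidelines), a single combined underrepresented flag per graduate is enough to produce the overall number and percentage:

    # Hypothetical graduate records: one combined flag covers all three
    # underrepresented categories, since no per-category breakdown is needed.
    graduates = [
        {"id": "G1", "underrepresented": True},
        {"id": "G2", "underrepresented": False},
        {"id": "G3", "underrepresented": True},
        {"id": "G4", "underrepresented": False},
    ]

    count = sum(g["underrepresented"] for g in graduates)
    percent = 100 * count / len(graduates)
    print(count, f"{percent:.0f}%")  # -> 2 50%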

Metrics - IRB

My institution’s IRB defines receipt date and final approval date differently than the Operational Guidelines. Can we use these different definitions when reporting the Common Metrics?
If your institution has a slightly different definition, you will need to take the dates it uses and derive the receipt date or final approval date according to the definitions in the Operational Guidelines.
If the IRB review is complete and a study is granted a pending approval while the contracts office finalizes financial language for the consent form, would IRB-related work be considered complete?
No. IRB-related work is considered complete only once your IRB has determined that the protocol and study are approved with no IRB-related stipulations; that date is the final approval date. At some institutions, the IRB will not give its approval until all other reviews are completed. The approval date is the date the study can begin.
How is clinical research defined?
We are using the NIH definition of clinical research that has been added to the most recent version of the Operational Guidelines, which can be found on the Established Common Metrics page.
Our CTSA hub represents more than one institution, and as such, has more than one IRB. Should we report the median time to IRB approval for the two institutions if there are separate IRBs for each of the institutions within the hub?
For this metric, hubs should report the median duration from the IRB(s) of their primary institution. If the primary institution has three separate IRBs, the data from all three should be combined to compute a single median. For hubs with more than one institution, where one is primary and the others are not, report only the data from the IRB(s) of the primary institution.
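For example, here is a minimal Python sketch of that calculation; the dates and field names are hypothetical. Protocols from all of the primary institution’s IRBs are pooled, each duration runs from the receipt date to the final approval date as defined in the Operational Guidelines, and one median is taken over the combined pool:

    from datetime import date
    from statistics import median

    # Hypothetical protocols pooled from all of the primary institution's IRBs.
    protocols = [
        {"received": date(2016, 1, 4), "approved": date(2016, 2, 19)},
        {"received": date(2016, 3, 7), "approved": date(2016, 3, 28)},
        {"received": date(2016, 5, 2), "approved": date(2016, 7, 11)},
    ]

    # Duration in calendar days from receipt to final approval.
    durations = [(p["approved"] - p["received"]).days for p in protocols]

    # One median over the combined pool, not one median per IRB.
    print(median(durations))  # -> 46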
What is the recommendation for how far back to collect data on the IRB Duration Common Metric?
The Operational Guideline (OG) specifies collecting these data annually, beginning with all protocols approved during calendar year 2015. If your hub has been collecting these data historically in a way that is the same as or similar to the specifications in the OG, and it would be helpful to take them into account when working on your Story Behind the Curve, you could also enter the historical (pre-2015) data into your Scorecard.
Please clarify the intent of the IRB Common Metric regarding exclusion of protocols that “are multi-site protocols reviewed based on a reliance agreement.” Does this exclusion criterion apply to both protocols for which the local IRB is not the relied-upon IRB *and* those for which the local IRB is the relied-upon IRB?
A hub should not include IRB applications conducted at another institution in the IRB Duration common metric. If a local IRB did not review a multi-site protocol because it was a relying IRB, the protocol should be excluded. However, if the local IRB is the relied-upon IRB, the protocol should be included in the IRB Duration common metric for that site.

Metrics - Pilots

Should recently funded pilot awards that have not had sufficient time to publish or secure additional funding be included in the data?
Yes, please include these pilot awards in your data.
For the pilot funding metric, is there a set target for percentage of publications arising from pilot funding that is expected across all hubs?
No. The target is hub-specific and should be based on the results of your metric data collection and the design of your hub’s pilot program.
Are all publications that cite the UL1 grant considered as having resulted from the pilot funding?
Not necessarily. If your internal records show that at least one of the authors was a recipient of pilot funds, and there is a direct connection between those funds and the publication, then it would be considered as having resulted from the pilot funding.
When entering the pilot awards turn-the-curve plan in the Scorecard software, do we associate with the # of pilots metric or the % of pilots metric?
It should be associated with the % of pilot research projects that have at least one publication.
Do we count projects for the hub only, or do we include affiliates?
You should count any project at the hub or affiliates where funds are administered through the CTSA pilot studies program, even if the funds are solely from the institution (e.g., medical school).
If an investigator is awarded pilot funding more than once for the same project, how should resulting publications and funding be counted?
A project that receives more than one award should be counted in the denominator only once, in the year of the first award. Likewise, the resulting publications or funding should be counted only once in the numerator.
Is expended funding defined as “beginning to expend funds” or “finishing expending funds”?
Expending funds is defined as beginning to expend funds. Each pilot project is counted only once, in the calendar year in which it begins expending funds. Pilot projects that expended funds only prior to January 1, 2012 should not be included in this metric. However, pilot projects that began expending funds prior to 2012 but continued to expend funds in 2012 can be counted in 2012.
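To make these counting rules concrete, here is a minimal Python sketch; the records and field names are hypothetical. Each project appears once, keyed by the calendar year in which it first began expending funds, and it enters the numerator if it has at least one publication:

    # Hypothetical pilot projects; a repeat award to the same project does
    # not create a second record.
    pilots = [
        {"project": "A", "first_expended": 2013, "publications": 2},
        {"project": "B", "first_expended": 2013, "publications": 0},
        {"project": "C", "first_expended": 2011, "publications": 1},  # pre-2012 only; excluded
    ]

    year = 2013
    # Denominator: projects counted once, in the year they began expending funds.
    cohort = [p for p in pilots if p["first_expended"] == year]
    # Numerator: projects in the cohort with at least one publication.
    with_pub = [p for p in cohort if p["publications"] >= 1]
    print(f"{100 * len(with_pub) / len(cohort):.0f}%")  # -> 50%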
For the pilot publications metric, can we include publications without a PMCID but for which a free final text version is available?
It was decided to use the PMCID as a criterion since this matches what goes into the RPPR. For now, publications must have a PMCID to be counted.

Results-Based Accountability

What is the purpose of Results-Based Accountability?
Results-Based Accountability (RBA) is a disciplined way of thinking and taking action that can be used to improve the quality of life in communities, cities, counties, states, and nations. RBA can also be used to improve the performance of programs, agencies, and service systems.
What is the difference between indicators and performance measures?
Indicators are measures that help quantify the achievement of a result. Performance measures are measures of how well public and private programs and agencies are working.
What level of detail is needed when we are reporting on our Turn the Curve (TTC) plans?
A TTC plan is meant to be entered into Scorecard at the level of an Executive Summary. It will allow you to communicate across your hub about the metric data that has been collected, and the contextual information for that data. The Scorecard software also allows you to upload and attach a variety of supporting documents which can help you call attention to the details of a particular step in the process, share minutes from a meeting about the project, or share other tools that you may use during the improvement process.