Frequently Asked Questions

Reports

I received an email saying I have access to my hub's CM Report. How do I access it?
Hub reports are available to the hub PI, Administrator, and Common Metrics Lead (provided they have current CLIC website accounts). If you have access to a hub report, you should have received an automated email stating you have been added to the group. To access your hub’s Common Metrics Report, please log in to the CLIC website. You can access the link to your hub’s report page in two ways: from your Account page, look for the section labeled Group Membership; or from the Common Metrics Initiative page, look for ‘Hub Reports’ in the right sidebar.
How do I get access to my hub report if I am not a PI, Admin or CM Lead?
To gain access, you need to be added to the group by your PI, Admin or CM Lead. Those currently with access can log in to the page and follow the instructions on the hub page to give access to another user.
How do I get access to my hub report if I am a PI, Admin or CM Lead and did not have a CLIC website account before 2/28/19?
Please email common_metrics@clic-ctsa.org to request access after you have created your account OR ask for access from the PI, Admin or CM Lead. Instructions for adding access can be found on the hub report page.

Metrics – Informatics

What is the date range? Is it part of the TTC Plan narrative?
It is the range of data for which you have access and are reporting for each data domain. For example, in your research data warehouse you may be accessing and reporting on RxNorm ID for the years 1997-2018. In Scorecard, under the Observations Present performance measure, enter the date ranges in the table below the Inclusion/Exclusion Criteria section. Please enter just the years - do not enter days, months, or any type of text.
ICD 9/10 is listed for two domains. Why is that?
ICD 9/10 is used in healthcare to code both diagnoses and procedures – be sure to enter data under the appropriate performance measure.
We are a multi-site CTSA, and we can only access one site. Is there a way to indicate that?
Collect the data that you have available. If you can collect data from more sites next year, your data will be more complete.
Our research Informatics team has little control over what data points are filtered to the warehouse – we are at the mercy of the health system’s IT efforts.
That is not an uncommon situation. Please collect the data that you can and then list this issue under the Negative: section of your Story Behind the Curve (in your TTC plan). For your TTC plan, consider partnering with your IT team and determine if there are strategies that could be implemented to improve your access to the data.
We have information for labs, procedures and medications. What do we need to report for the RxNorm domain?
Report just the values for the RxNorm domain.
How much of the long-term plans should be included in the TTC plan?
Typically the TTC plan is for one year. If you are entering longer-term goals, please identify that the strategies will be addressed over the next ## years.
We filter our patient table to only include patients that have visit data in our location because we have billing data from several hospitals where our faculty work, but no actual visit data for those patients – no other information is available.
Include information that you think would be most helpful for investigators across the consortium.
The TTC plan entered for August 30, 2019, is a plan for which year? When do we report on progress afterwards?
The TTC plan entered for the August 30th deadline represents the hub’s strategies for turning the curve in the coming year. TTC plans are entered annually, so progress is assessed by comparing against the previous year.
Can you add dates in the script? Start Date will be beginning and End Date = 12/31/2018?
They will be updated for the 2019 reporting period.
How many TTC plans do we need to write for the Informatics Metric?
Only one TTC plan is needed for all of the data domains. Please place your TTC plan under the Observations Present performance measure in Scorecard. The Informatics metric is about the presence or absence of data, so the strategies in your TTC plan should address how you can access missing data and/or improve the quality of data reported.
Regarding gender, is patient reported sex counted as administrative sex?
Administrative gender (a concept from HL7 and OMB) is a person’s gender as recorded administratively, describing their sex. It is a standard value set (M or F). The domain reflects whether it is recorded, as a count.
The initial ACT script was only looking for ICD-9 and not ICD-10. Has this been fixed?
Yes, it has been fixed.
Is Postgres under consideration as a supported DBMS?
Hubs are welcome to use Postgres (or another database platform). Hubs will need to create their own query and make sure that it matches the measures of the existing scripts. If a hub creates a new query, please consider putting it on Github so that other hubs can use it. [SqlRender (https://github.com/OHDSI/SqlRender) allows OMOP sites to run against several back-end databases beyond MS SQL Server and Oracle: PostgreSQL, Microsoft Parallel Data Warehouse (a.k.a. Analytics Platform System), Amazon Redshift, Apache Impala, Google BigQuery, and IBM Netezza.] At this time, scripts are available for these data models: OMOP, PCORnet, and i2b2/ACT. Hubs that use TriNetX can contact TriNetX to get data.
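As a rough illustration (not the official script), the sketch below uses Python with an in-memory SQLite database and hypothetical table and column names to produce the count-and-percent pair that a custom query would need to report for a domain — here, patients with at least one RxNorm-coded medication:

```python
import sqlite3

# Hypothetical minimal schema -- real warehouses (OMOP, PCORnet, i2b2/ACT)
# use their own table and column names.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (patient_id INTEGER PRIMARY KEY);
CREATE TABLE medication (patient_id INTEGER, rxnorm_id TEXT);
INSERT INTO patient VALUES (1), (2), (3), (4);
INSERT INTO medication VALUES (1, '197361'), (2, '197361'), (2, '309362');
""")

# Denominator: unique patients in the repository.
# Numerator: unique patients with at least one RxNorm-coded medication.
total, with_rxnorm = conn.execute("""
SELECT (SELECT COUNT(*) FROM patient) AS total,
       (SELECT COUNT(DISTINCT patient_id)
          FROM medication WHERE rxnorm_id IS NOT NULL) AS with_rxnorm
""").fetchone()
percent = 100.0 * with_rxnorm / total
print(with_rxnorm, total, percent)  # 2 of 4 patients -> 50.0
```

Whatever the back end, a replacement query should reproduce this same count/percent pair per domain so its output lines up with the measures reported by the existing scripts.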
What exactly is being recorded for the Medications/Drugs domain?
This data addresses the question, “Are you recording your patients’ medications using this standard vocabulary, over the number of patients you have in your repository?” “No medication” counts as a medication.
How do we determine the date range for the domains from the research data warehouse?
The starting date is when your hub first started collecting the data – when you started using/recording the value for the domain.
Do the scripts provide the min/max dates to use for your data inclusion table?
For the launch of the metric the script did not include min/max dates. The scripts will be updated throughout the life of the metric. Check the Github site for the most current script.
Regarding the unique patients that are included in the denominator: Would the script address patients who have died or are no longer our patients? Should we be considering that for this metric at all?
For the Informatics metric launch, it’s the percentage of people who had those values at the time they were being seen.
Would you please provide a definition for each of the standard value data elements?
LOINC – Logical Observation Identifiers Names and Codes (https://loinc.org) – a set of codes used to identify test results and/or clinical observations for electronic reporting.
RxNorm ID (https://www.nlm.nih.gov/research/umls/rxnorm/) – a set of codes that provide normalized names for generic and branded clinical drugs.
ICD 9/10 – International Classification of Diseases (https://en.wikipedia.org/wiki/ICD-10) – a set of codes for clinical diagnoses and/or procedures.
CPT – Current Procedural Terminology (https://www.ama-assn.org/amaone/cpt-current-procedural-terminology) – a set of codes used to report outpatient clinical services and/or procedures for billing and reporting purposes.
SNOMED (www.snomed.org) – a collection of medical codes and terms used in clinical documentation.
How does a hub access a script?
The scripts are available on Github: https://github.com/ncats/CTSA-Metrics.
What are observations?
Observations are a measurement of a single variable or a single value derived logically and/or algebraically from other measured or derived values. A test result, a diastolic blood pressure, and a single chest x-ray impression are examples of observations. Essentially, this is a yes/no variable (1 or 0) – the presence or absence of observation data.
If our numbers are near 99% for each of these measures, what would you like to see us include in our TTC since there isn't much room to improve?
For hubs that have achieved 100% or near 100%, please document the strategies that have helped your hub to reach that target. Also, you may want to begin to identify additional metrics that would be helpful consortium-wide.
Do the PCORnet scripts work with version 4.1?
Yes. If you experience any problems with the script, please contact common_metrics@clic-ctsa.org.
We noticed that there is no script for the Notes or Observations. Why is that the case, and is that data still needed?
For both the observations and notes domains, the determination is about the “presence” or “absence” of data. Observations are a single variable or a single value – for example a test result, chest x-ray, etc. Notes are free text documentation entered during a clinical encounter. The scripts contain queries/SQL for both the count and percent of free text notes; if the data model doesn't support free text, it is not included.
Incomplete data in the EHR can be outside the control of our CTSA. How can we remedy this issue without having the necessary authority?
Please enter this information in the Story Behind the Curve section of your TTC plan.
Some hospitals have data going back many years prior to having a research data warehouse. They may not have any visits to the hospital - other domains will have no data. It may be more useful to only count patients that had at least 1 visit.
Include information that you think would be most helpful for investigators across the consortium.
Is the Observations domain a simple nominal level count of everything in the warehouse for that domain?
Regarding observations, the scripts look for general information on whether observations are being recorded at all (present/absent), in hopes that future research data warehouse characterizations could include vital signs.
Is there a place where I can find simplified, step-by-step instructions on how to complete the Informatics metric?
Step-by-step instructions are located on the CLIC website.
What time interval should we include in the metric?
During the launch phase, please include all of the information from the research data warehouse. For subsequent reporting, please check the CLIC website for updates.
Will CLIC provide hubs with feedback on their draft TTC plans?
If you want feedback, please submit a request to common_metrics@clic-ctsa.org.
Does free text include imaging notes (narratives/impressions) or just clinical progress notes and discharge summaries etc.?
No, only notes: admission, progress, discharge, procedure, etc.
Were the Informatics Webinars/Office Hours recorded? If so, where can I find the slides and the recordings?
All of the Informatics Webinars and Office Hours were recorded. The slides are available on the CLIC website. The recordings are available on the Google Drive. If you want access to the Google Drive, please contact common_metrics@clic-ctsa.org.
When we run the scripts, is the worksheet automatically populated? How does it get uploaded to the Scorecard?
You will need to copy and paste the script output into the worksheet and then upload the worksheet into Scorecard. You will also need to manually enter the data values for each of the data domains into Scorecard.
What are the steps for loading our data into the Informatics Scorecard?
The steps for entering the Informatics data are on the CLIC website - Informatics Step-by-Step Scorecard Entry Resource.
Do hubs need to provide a date range for each of the data domains?
Yes, hubs need to enter the date range for each of the data domains. Please enter just the years, for example 2010-2018. Do not enter months, days, or any type of text.
We don’t have direct influence over the data domains in the Informatics CM. Are you looking for something different in this CM than in the other Common Metrics?
The focus of the Informatics metric is the presence or absence of data in the research data warehouse (RDW). The TTC plan should be focused on how to get absent data, or how to improve accessing data.
Can you tell me if the metrics will be looking at all patients (inpatient and outpatient) or are there specific guidelines for this?
The metric includes all patients in the research data warehouse – inpatient, outpatient, ER, etc.
Where do you enter the TTC plan for the Informatics metric?
Under the Observations Present performance measure. (Please see the Informatics Step-by-Step Scorecard Entry Resource on the CLIC website.)

Results-Based Accountability

What is the purpose of Results-Based Accountability?
It is the preferred strategic management framework recommended by NCATS.
What is the difference between indicators and performance measures?
Indicators are measures that help quantify the achievement of a result. Performance measures are measures of how well public and private programs and agencies are working.
At our hub we have been collecting the metrics we now call common metrics since 2008. We use the RBA to look at *all* of our data. For the purposes of NCATS, do we only include data from 2012, or do we include “all” our data if we have it?
You can include all of your data, but it is not a requirement. If you do track this information, it must be placed in a separate Scorecard.
What level of detail is needed when we are reporting on our Turn the Curve (TTC) plans?
The TTC plans are intended to be entered at the level of an executive summary. A complete set of TTC Plan tools is available to the hubs on the CLIC website.

Metrics - Pilots

Is expended funding defined as “beginning to expend funds” or “finishing expending funds”?
Expending funds is defined as beginning to expend funds.
Since we have pilots that span 2 years, we’ll be double counting people in the numerators - we fund by fiscal year, so it will look like we're funding more people than we actually are. How should we enter this data?
You would count a pilot study once in the denominator in the year that it begins expending funds. If it is awarded in August 2015 and first expends funds in November 2015, it is counted in 2015. You would count it once in the numerator in the calendar year in which it publishes its first article; it is not counted again in the numerator for additional publications. The metric is cumulative, not yearly, so if a study spans more than one year it is counted only in the first year funds are expended.
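A minimal sketch of this counting rule, assuming a hypothetical record layout of (pilot ID, year funds were first expended, year of first publication or None):

```python
# Hypothetical pilot records: (pilot_id, year_first_funds_expended, first_publication_year)
pilots = [
    ("P1", 2015, 2016),
    ("P2", 2015, None),   # no publication yet
    ("P3", 2016, 2016),
]

def cumulative_pilot_metric(pilots, report_year):
    """Denominator: each pilot counted once, in the year it first expends funds.
    Numerator: each pilot counted once, in the year of its first publication.
    Both accumulate across years (the metric is cumulative, not yearly)."""
    denom = sum(1 for _, funded, _ in pilots if funded <= report_year)
    numer = sum(1 for _, funded, pub in pilots
                if funded <= report_year and pub is not None and pub <= report_year)
    return numer, denom

print(cumulative_pilot_metric(pilots, 2015))  # (0, 2)
print(cumulative_pilot_metric(pilots, 2016))  # (2, 3)
```

Note that a pilot spanning multiple years, or a project receiving multiple awards, still appears only once in each count.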
For the pilot publications metric, can we include publications without a PMCID but for which a free final text version is available?
It was decided to use the PMCID as a criterion since this matches what goes into the RPPR. For now, publications must have a PMCID to be counted.
Do we only report on Pilots supported by CTSA funding or all pilots awarded in the calendar year?
NCATS has clarified that projects that are not pilot-grant funded but use CTSA pilot grant program resources are not considered CTSA pilot grants and are not included in this metric.
Is there a time limit, for example 5 years, on tracking publications and follow-up funding for funded pilot projects?
No, there is no time limit specified in the Operational Guidelines.
When there are gaps in funding, do hubs include those years in the pilot metric, or restart their count of the cumulative number of pilots or the number of pilots published during the period? How do we handle the metrics when there is a gap period?
Hubs should not include pilot metrics for years where the hub did not receive funding.  If they were funded in 2012, 2013 and 2015, they should include only those years in the metric.  It is anticipated that their cumulative number could potentially plateau or even drop due to the gap year.  Hubs should indicate this change in their Story Behind the Curve. 
Are we excluding publications from peer-reviewed journals that are not indexed by PubMed, such as engineering journals? I have some that do not have PMCIDs for this reason. Are book chapters and conference proceedings also excluded?
It was decided to use the PMCID as a criterion since this matches what goes into the RPPR. For now, publications must have a PMCID to be counted.
How would you handle a case where a Pilot award with a 1st publication began prior to 2012 but expended funds in or beyond 2012?
NCATS has clarified that if a pilot award was funded in 2011 and had any funds expended during 2012, those should be included. So they would go into the denominator the first year after January 1, 2012 that they expended funds and would go into the numerator whenever they produced the publication.
If our CTSA pilot has a PI and multiple co-investigators, should we count a subsequent publication or grant from any member of the group who worked on the pilot?
Yes, if it is related to this pilot. See the definition in the Operational Guideline: “A publication refers to any article on which pilot grant recipients (PI or co-PI) were authors or co-authors that was published in a peer reviewed journal. To be eligible, a publication must have a PMCID number and be considered a CTSA Program hub related publication.”
Can a publication be counted under two different grants if it happened to be influenced by both, or should a single publication only be counted one time?
Yes, a publication can be counted under two different grants if it can be shown to have been influenced by both.
Should recently funded pilot awards that have not had sufficient time to publish or secure additional funding be included in the data?
If the hub has expended funds, the pilot should be included in the data. Pilots that have been awarded but have not expended funds should be excluded. (Revised 7/22/2019)
Do we only report on Pilots supported by CTSA funding, or all pilots awarded in the calendar year? If the Pilot was funded by non-CTSA resources but the project uses CTSA resources/knowledge that CTSA supports, do we count them as well?
NCATS has clarified that projects that are not pilot-grant funded but use CTSA pilot grant program resources are not considered CTSA pilot grants and are not included in this metric.
Should private donations be counted under subsequent follow-on grants?
If the donated funds go through the institution, then yes, it should be counted.  If an individual receives money from a private party, then no.
For the pilot funding metric, is there a set target for percentage of publications arising from pilot funding that is expected across all hubs?
There is no performance target or benchmark for this or any of the metrics.
In the “pilot funding publications and funding” metric, the timeframe is “since 1/1/12. If your hub was funded later, the beginning date is as of funding date.” Does this refer to the first time the hub was funded or the start of the current 5-year grant?
It refers to the first time your hub was funded as a CTSA. So if your hub was first funded in 2014, you would use that as your start date. 
How should we report pilot data if we were in a no-cost extension from 6/2017-6/2018? Some of the pilots were active before 6/2017 so we reported on them for CY2017, but for CY 2018, there were no active pilot awards. Our funding was renewed 6/2018.
If your program was still "running" and you had bridge funding and none were awarded, please enter 0 (zero). If you "froze" the program then leave the field blank - and please put a note in your TTC plan.   (8/7/2019)
Are all publications that cite the UL1 grant considered as having resulted from the pilot funding?
In order for a CTSI to count a publication, it has to cite the grant and include the PMCID.
Our pilot projects were not selected solely based on their likelihood of publications/grants. Do we need to re-think the way we have been implementing this program?
Other hubs have raised this issue. Not all pilot programs selected for projects are expected to publish, and for others the publication cycle may take a significant amount of time. The purpose of the Common Metrics is to be useful at the hub and the consortium level for strategic management. This information should be reflected in your Story Behind the Curve.
When entering the pilot awards turn-the-curve plan in the Scorecard software, do we associate with the # of pilots metric or the % of pilots metric?
It should be associated with the % of pilot research projects that have at least one publication.
We are in the final year of our grant and our renewal submission included significant changes to the structure/ administration of our pilot program. We are unsure how to proceed with Turn the Curve plans given these changes.
In the Story Behind the Curve, your team is asked to think about factors that are influencing the value of your Common Metric (and the trend line if one is available). These include factors that are positive and negative, current and anticipated. In your instance, you certainly have a number of factors; given the uncertainty, you may not yet be able to characterize all of them as positive or negative. Are there other partners who should be involved early, at least to be aware that the changes are coming? After you identify the factors and potentially some partners, you could begin talking as a team about at least 1-2 strategies that you might consider under the most likely of the scenarios going forward. Until you are able to begin making changes, it may be difficult to proceed. However, you may be able to identify additional data you might want to collect going forward.
Do we count projects for the hub only, or do we include affiliates?
You should count any project at the hub or affiliates where funds are administered through the CTSA pilot studies program, even if the funds are solely from the institution (e.g., medical school).
We have several grantees that have been awarded multiple grants for the same project. This means that the different grants share a common pool of follow-up publications. How should these be counted (numerator/denominator)?
For both the pilot publications and funding metrics, a project that receives more than one award should only be counted in the denominator once in the year of the first award. Subsequent publications or funding should also be counted only once in the numerator.
If an investigator is awarded pilot funding more than once for the same project, how should resulting publications and funding be counted?
A project that receives more than one award should only be counted in the denominator once in the year of the first award. Subsequent publications or funding should also be counted only once in the numerator.
How do other institutions track their publications? What do they rely on to report CTSA-related publications to them? Do they have a software or an electronic system that helps manage their publications? How do they search for CTSA-related publications?
It is common that authors do not cite the grant (the TTC plans typically include a range of strategies to improve the rate at which they do so). Therefore, hubs have often reported needing to use several strategies together in order to fully identify CTSA-related publications. Most who are doing so successfully regularly survey their awardees. During the period when funds are being expended, they ask them to report on any manuscripts submitted, accepted or published as part of their progress reports. After funding is concluded, they send regular (every 6 or 12 months) surveys. Some hubs have partnered with their medical librarian colleagues, who can: a) help do searches for publications that did cite the grant, b) teach awardees to cite the grant and help them do so appropriately, and c) help awardees upload their publications to PubMed Central.

Metrics - IRB

What is the Operational Guidelines definition of pre-review? Does it include administrative issues or only content-related review? Should a pre-review that entails review of the content of the protocol be excluded from the calculation?
Hubs may use pre-reviews to address administrative issues and content-related reviews. Pre-review time should be excluded from the calculation of the median IRB duration.
My institution’s IRB defines receipt date and final approval date differently than the Operational Guidelines. Can we use these different definitions when reporting the Common Metrics?
If your institution has a slightly different definition, you will need to take the date they use and calculate the receipt date or final approval date according to the definition in the Operational Guidelines.
What is the intent of the exclusion of protocols that “are multisite protocols reviewed based on a reliance agreement”? Does this apply both to protocols for which the local IRB is not the relied-upon IRB and to those for which the local IRB is the relied-upon IRB?
A hub should not include IRB applications conducted at another institution.  If a local IRB did not review a multisite protocol because they were a relying IRB, the protocol should be excluded. However, if the local IRB is the relied upon IRB, the protocol should be included for that site.
If the IRB review is complete and a study is granted a pending approval while the contracts office finalizes financial language for the consent form, would IRB-related work be considered complete?
No. If your IRB has determined the protocol and study are approved with no IRB-related stipulations, then it would be considered complete. This would be considered the final approval date. At some institutions, the IRB will not give their approval until all other reviews are completed. The approval date is the date the study can begin.
How is clinical research defined?
Please use this link (it's also available in the Operational Guidelines) - https://grants.nih.gov/grants/glossary.htm#ClinicalResearch.
What does "fully reviewed" mean?
Fully reviewed means that the protocol has received IRB approval.
Our CTSA hub represents more than one institution, and as such, has more than one IRB. Should we report the median time to IRB approval for the two institutions if there are separate IRBs for each of the institutions within the hub?
For this metric, hubs should report the median duration from the IRB(s) from their primary institution. If the primary institution has three separate IRBs, then the data from all three should be combined together to compute the median. For hubs with more than one institution, and one is primary while the others are not, report only on the data from the IRB(s) of the primary institution.
For the inclusion, do we include all submissions that occur during the time period, or all that have an end date in the timeframe?
Just include the protocols that have received IRB approval during the timeframe.
What is the recommendation for how far back to collect data on the IRB Duration Common Metric?
The Operational Guideline (OG) specifies to collect this data annually, beginning with all protocols approved during calendar year 2015. If your hub has been collecting this data historically in a way that is the same or similar to the specifications in the OG, and it would be helpful to take into account when you are working on your Story Behind the Curve, you could also enter the historical (pre-2015) data to your Scorecard.
What is a pre-review?
Some hubs conduct pre-reviews as a means to address administrative and content-related issues so that the protocol doesn't run into any barriers during the approval process. Pre-reviews are excluded from the calculation for the IRB duration metric.
Are there two IRBs for a CTSA that is multi-institutional? Will we be able to report the median time to IRB approval for two institutions if there are two IRBs for two separate institutions within the hub?
For this metric, you report the median duration from the IRB(s) from your CTSA hub's primary institution. If your primary institution has three separate IRBs, then the data from all three should be combined together to compute the median. But if you have more than one institution, one of which is primary and the others which are not, you only need to report the data from the IRB(s) of the primary hub institution. 
Please clarify the exclusion of protocols that “are multi-site protocols reviewed based on a reliance agreement.” Does this exclusion apply to protocols when the local IRB is NOT the relied upon IRB and those when the local IRB is the relied upon IRB?
A hub should not include IRB applications conducted at another institution in the IRB Duration common metric. If a local IRB did not review a multi-site protocol because they were a relying IRB, the protocol should be excluded. However, if the local IRB is the relied upon IRB, the protocol should be included in the IRB Duration common metric for that site.
Does it matter whether the protocols are industry or investigator initiated?
Protocols include all clinical research (including multisite studies). There is no exclusion for industry-related projects. So, if they are industry initiated or investigator initiated, they are both included in the metric.
Regarding the unit of analysis - do the calendar days reflect work days only?
No, calendar days reflect the total number of days that occur between the receipt of the application and its approval.
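For illustration, the median duration in calendar days (not work days) can be computed as below; the dates and the `protocols` structure are hypothetical, with receipt and final approval dates defined per the Operational Guidelines and pre-review time already excluded:

```python
from datetime import date
from statistics import median

# Hypothetical protocols: (receipt_date, final_approval_date)
protocols = [
    (date(2018, 1, 2), date(2018, 2, 1)),   # 30 calendar days
    (date(2018, 3, 1), date(2018, 3, 31)),  # 30 calendar days
    (date(2018, 5, 1), date(2018, 7, 10)),  # 70 calendar days
]

# Date subtraction yields total elapsed days, weekends and holidays included.
durations = [(approved - received).days for received, approved in protocols]
print(median(durations))  # 30
```

For a hub whose primary institution has multiple IRBs, the protocols from all of those IRBs would be pooled into one list before taking the median.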
What is the recommendation for how far back to collect data on the IRB Duration metric?
The Operational Guideline (OG) specifies to collect this data annually, beginning with all protocols approved during calendar year 2015.
Does the metric apply to only CTSA-associated studies, or is it institution-wide?
Institution-wide, not just CTSA-related studies.
For IRB protocols that are determined to be exempt or expedited, is the approval date the date at which final notification was given?
Exempt or expedited protocols are excluded altogether from this metric.
A protocol is submitted in 2014 but approved in 2015. How is it counted?
It is the approval year that determines which cohort of data the protocol is included with. If it is submitted in 2014 and approved in 2015, count it in 2015.

Metrics - Careers

This is a cumulative metric. If a participant is engaged in research 1 year after program completion but no longer is 3 years after completion, how should this be tracked?
The total number of program graduates over time is cumulative. It gets updated once a year by adding the new number of graduates to the previous total to make an updated denominator for the metric. The numerator, the number and percent of graduates from the denominator who are currently engaged in clinical and translational research is assessed each year.
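A small sketch of this bookkeeping, with hypothetical yearly counts: the denominator grows cumulatively, while the numerator is a fresh annual assessment.

```python
# Hypothetical Careers metric tracking.
graduates_by_year = {2016: 5, 2017: 4, 2018: 6}   # new graduates added each year
engaged_by_year = {2016: 4, 2017: 7, 2018: 12}    # graduates currently engaged in CTR, assessed annually

denominator = 0
for year in sorted(graduates_by_year):
    denominator += graduates_by_year[year]        # cumulative total of graduates
    numerator = engaged_by_year[year]             # re-assessed each year, not accumulated
    print(year, numerator, denominator, round(100 * numerator / denominator, 1))
# 2016: 4/5 = 80.0%, 2017: 7/9 = 77.8%, 2018: 12/15 = 80.0%
```

Because the numerator is re-assessed annually, a graduate engaged one year and not the next simply drops out of that later year's numerator while remaining in the denominator (unless lost to follow-up, in which case they are removed from both).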
We are a relatively new hub and do not yet have any eligible graduates to be included in the Careers Common Metric. Should we do Turn the Curve planning for what we think will happen?
Yes. If you do some thinking about this now, you may identify some additional performance measures or strategies that you may want to implement (e.g., adding an exit interview to your process for departing graduates) even before you are able to start collecting data for the metric.
If a PhD student does not get compensated for time for research, but that research is a required part of the degree and they receive tuition assistance, should this be considered "engaged in CTR"?
Yes, this is considered to still be engaged in research.
Many of our short-term TL1 trainees return to medical school and go on to residency after completing the program. Would that be considered “engaged in research”?
If medical students or residents do not have dedicated time for research, they are not considered to be “engaged in research.”
Regarding "engaged in clinical research": if a scholar takes a break in order to start a family or care for parents, do we not include them in our Scorecard for that particular year or so, even though they will resume their research?
Graduates are to be assessed for their engagement in clinical and translational research annually.
Are we collecting data on the number and percentage of each of the three underrepresented categories (racial and ethnic groups, individuals with disabilities and those from disadvantaged backgrounds) or just the overall number and percentage?
You should be collecting data on the number and percentage of underrepresented persons. You do not need to break it down into the three categories. Please see NIH's Interest in Diversity statement at https://grants.nih.gov/grants/guide/notice-files/NOT-OD-18-210.html
Should we be reporting numbers for the hub only, or for the hub and its affiliates?
You should report all of the TL1 and KL2 scholars who graduated from the program, regardless of where they are located.
For T scholars, do we count them for the year ending data in the same year they finish their program or not until the following year? Ex: T scholar finished program in June 2014. Do we count their responses in the 2014 data or wait until 2015?
Graduates should be added to the denominator of the metric starting in the calendar year that they finish their program. In your example, you would add the T scholar in the denominator for 2014. You should then also assess their eligibility for the numerator of the metric (i.e., are they involved in CTS research) starting in 2014.
If a graduate is lost to follow up, should they be removed from the numerator and denominator of the metric? If a graduate does not respond to a survey, should they be included in the denominator?
Remove them from the numerator and denominator of the metric.
To what extent can eRA extract help with this data collection?
eRA will be a source of information, but it won’t be a complete one; some relevant activities take place outside the system.
Do we include institutionally funded KL2 and TL1 scholars or only those whose training was paid for by the grant?
Please include only CTSA program-funded KL2 and TL1 scholars.
It is not uncommon, and indeed expected, that our KL2 scholars apply for (and hopefully are awarded) other K awards (K01, K23). Does this mean that they should be excluded even though these are not held simultaneously?
Following completion of CTSA-funded training as a KL2 scholar, add them to the denominator of the metric. If they are engaged in further training by another K award, they are considered engaged in research (add them to the numerator).
If a graduate is lost to follow-up (e.g., no address or email to send a survey), should they be removed from the numerator and denominator of the metric?
Yes, remove them from the numerator and denominator of the metric.
Do we combine post-doc and pre-doc trainees in the common metrics for the TL1 program?
Yes.
What is the expectation for the number of years that participants are tracked upon completion of a program?
There is no limit on the number of years currently specified.
Please provide some guidance on how to define individuals from "disadvantaged backgrounds".
The following is a Notice of NIH’s Interest in Diversity: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-18-122.html As presented in the notice, individuals from disadvantaged backgrounds are defined as:
1. Individuals who come from a family with an annual income below established low-income thresholds. These thresholds are based on family size; published by the U.S. Bureau of the Census; adjusted annually for changes in the Consumer Price Index; and adjusted by the Secretary for use in all health professions programs. The Secretary periodically publishes these income levels at http://aspe.hhs.gov/poverty/index.shtml.
2. Individuals who come from an educational environment, such as that found in certain rural or inner-city environments, that has demonstrably and directly inhibited the individual from obtaining the knowledge, skills, and abilities necessary to develop and participate in a research career.
The disadvantaged backgrounds category is applicable to programs focused on high school and undergraduate candidates. Thus, because the Careers metric is focused on TL1 trainees and KL2 scholars, the disadvantaged backgrounds category does not apply to this metric.
Can a CTSA hub have a KL2 program that is only institutionally funded?
It is not possible to have a KL2 Program without a U54 CTSA Program award.
Several of the TL1 trainees at our institution are in the MD/PhD program. Once they have completed their dual degrees, they continue with a residency for several additional years as well as subsequent fellowships. Is this considered “still in training”?
If they have completed the TL1 training program and are no longer on the TL1 grant, they can be assessed for whether they are engaged in research (add them to the denominator of the metric). If they’ve completed the TL1 training program and are participating in additional training that has a research component, they are considered “engaged in research” (add them to the numerator of the metric). If the residency or PhD program includes dedicated time for research, they are engaged in research and added to the numerator.
All of our TL1 grantees are pre-doctoral scholars, after they complete the award they have years of medical school, etc. before starting their research careers. Should they all be excluded because they are still considered "in training programs"?
If they have completed the TL1 training program and are no longer on the TL1 grant, they can be assessed for whether they are engaged in research (add them to the denominator of the metric). If they’ve completed the TL1 training program and are participating in additional training that has a research component, they are considered “engaged in research” (add them to the numerator of the metric). If the residency or PhD program includes dedicated time for research, they are engaged in research and added to the numerator.
How should we report KL2/TL1 data if we were in a no-cost extension from 6/2017-6/2018? Some of the Ks were active before 6/2017 so we reported on them for CY2017, but for CY 2018, there were no active K awards. Our funding was renewed 6/2018.
If your program was still "running" and you had bridge funding and no awards were made, please enter 0 (zero). If you "froze" the program, leave the field blank and put a note in your TTC plan. (8/7/2019)
Our hub will not have any eligible graduates in the denominator as we are a relatively new hub. Therefore should we do any Turn the Curve strategizing for what we think is happening?
Yes. Include this information in your Story Behind the Curve and develop strategies that you may want to implement (e.g., adding an exit interview to your process for departing graduates) even before you are able to start collecting data for the metric.
Should we include Ks and Ts who fully participate in the CTSA Career Development Program that are funded by institutional funds, not the CTSA grants?
If they are not directly funded by NCATS then they are excluded from the metric.
Do we include KL2 and TL1 scholars that are institutionally funded or just NIH/NCATS funded?
NCATS has clarified that non-CTSA grant funded scholars and trainees who participate in your KL2 or TL1 program should not be included.
We already completed our data collection for this year on TL1 and KL2 program graduates but did not include information on underrepresented persons. Can we wait until the next time that we survey graduates to collect this information?
Please collect the data when you begin the implementation of the Common Metrics so that all of the hubs are collecting data in a standardized manner.
Should hubs apply the "Engaged in research" bullet points as a closed set of criteria that defines engagement or as examples that hubs should use to assess engagement in research?
The list of activities that indicate engagement in research are examples only and not criteria.
Many of our participants exit the KL2 program after getting into another “K” program. We view this as a success, but it seems that the definition does not view this as a success. Please clarify.
Following completion of CTSA-funded training as a KL2 scholar, they are eligible to be counted for the metric. If they are engaged in further training by another K award, they are considered engaged in research. However, if a scholar leaves the KL2 program without completing the full training program requirements, they are excluded.
There are multiple definitions for underrepresented minorities. Which one are we using?
We are using the NIH definitions. The Operational Guideline has a link to more information about these definitions. The most up to date versions of the Operational Guidelines can be found on the Established Common Metrics page. This is the link to the most current NIH definition: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-18-210.html
We funded short-term pre-docs (12-week summer experience) from 2006–2014 and then discontinued it, and have supported only year-long pre-docs for the last 2 years. Should these short-term awardees be excluded from the common metric counts?
Exclude the pre-docs who have only attended the 12-week summer experience from your Careers Common Metric counts. These don’t sound like they are comparable to TL1 students who you might expect to pursue research careers.
For T or K scholars who either chose not to identify their race and/or gender, or self-identified with a non-female, non-male gender: How should this be handled in the current metric calculations?
It is optional for a KL2/TL1 Scholar to identify their gender and/or race. Hubs should continue to count the scholar and state in their story behind the curve that the Scholar did not identify their gender and/or race and therefore could not place them in a race or gender category for analysis.
When is a TL1 or KL2 student eligible to be counted for the careers metric?
If they have completed the TL1 training program and are no longer on the TL1 grant, they can be assessed for whether they are engaged in research (add them to the denominator of the metric). For KL2s, following completion of CTSA-funded training as a KL2 scholar, add them to the denominator of the metric.
What does "current" mean in terms of research engagement and reporting on it?
NCATS expects that hubs collect this data on an annual basis from each graduate for each year after completion of the program (for graduates since January 1, 2012). If a graduate does not respond in a given year, they are not included in either the numerator or denominator for that year. However, an attempt should be made to follow up with them again the next year to determine their status. A graduate should only be permanently removed from the denominator if they are deemed “lost to follow-up”, (e.g., no address or email to send a survey to, persistent refusal to respond).
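The inclusion rules described above can be sketched as follows (the status labels are assumptions chosen for illustration, not official codes from the Operational Guidelines):

```python
# Hypothetical sketch of the annual inclusion rules for the Careers metric.
# Non-respondents are excluded from both counts for the year (and retried
# next year); graduates lost to follow-up are permanently removed.

def annual_counts(graduates):
    """graduates: list of dicts with a 'status' key for the reporting year:
    'engaged', 'not_engaged', 'no_response', or 'lost_to_follow_up'."""
    denominator = sum(1 for g in graduates
                      if g["status"] in ("engaged", "not_engaged"))
    numerator = sum(1 for g in graduates if g["status"] == "engaged")
    return numerator, denominator

cohort = [
    {"status": "engaged"},
    {"status": "not_engaged"},
    {"status": "no_response"},        # excluded this year; retry next year
    {"status": "lost_to_follow_up"},  # permanently removed from the metric
]
annual_counts(cohort)  # -> (1, 2)
```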
Is there a minimum percent effort for someone to count as engaged in research? For example, if someone reports only 5% effort in research, does that count, or is there a minimum such as 20%?
See the Careers Metric Operational Guidelines: “If primary role is as a clinician: some effort (e.g., 20%) in research or as a site PI for industry-sponsored clinical trials.”

Metrics - General

Is there an expectation about how the Common Metrics will be used in annual reports?
The Scorecard data for the common metrics is downloaded and sent to the CLIC. CLIC de-identifies and aggregates the data. NCATS receives a report of the de-identified aggregated data. Hubs receive individual reports. Whether hubs choose to include these data in their RPPR is their choice.
Is a Turn the Curve plan always needed [for the metric scores that require them]? We've regularly hit or exceeded our targets for two of the common metrics.
TTC planning is important for improving these metrics at the hub level, but the hope is also that we can share the most successful practices across the CTSA Consortium. We ask you to develop a TTC Plan and describe in your Story Behind the Curve the underlying factors that led to your success with a particular metric, as well as the strategies you used and the partners involved in sustaining that performance. Hubs may find that there are few changes to the TTC plan from year to year; this is acceptable.
How do we report data if we are in a "no-cost extension"?
If your program was still "running" and you had bridge funding and no awards were made, enter 0 (zero). If you "froze" the program then leave the field blank - and please put a note in your TTC plan. If you are working with cumulative metrics (careers and/or pilots) please continue with the data where you left off since the metric is cumulative. Please make a note of this in your TTC plan.   (Revised 7/22/2019)
What is the deadline for completing data entry for the Common Metrics?
The deadline is August 31st of each year.
What definition of “annually” should we be using? Calendar year, grant year, fiscal year, or academic year?
The Operational Guidelines were updated to indicate CTSA program hubs should use the calendar year for each of the Common Metrics.
Our CTSA first received funding last August, so currently there is very little data for what has happened since then. Should we look back at data from before we received our CTSA in order to have more of a trend, or should it be based solely on data collected after the award?
Only enter data related to what happened after the award. If your hub has historical data that reflects information for earlier time periods, it should be considered in your work on developing your Story Behind the Curve.
With the Turn the Curve methodology, we were asked to look for a baseline and do a forecast. Our baseline is currently just one data point. Should we be thinking about also collecting metrics for previous calendar years?
If the data are available and you have the resources to go back and calculate them for previous years, this could be helpful in determining your forecast; however, per the Operational Guidelines, this is not required.
If we have data prior to the start date projected in this plan, can we include that data as part of pilot data collection?
For the Pilot and the Career common metrics, include information only from 2012 forward. 

Implementation

Is there a specific mechanism for submitting language to revise the guidelines and/or communicate any additions (e.g., definitions of engaged in research)?
Those questions and suggestions can be emailed to the CLIC team (common_metrics@clic-ctsa.org) or submitted through the CLIC website. CLIC will forward them to NCATS.
Will the work related to the Common Metrics project be added to the annual progress report?
Not at this time.

Scorecard

How will the data entered into the Scorecard be shared? Will it be entirely private, or shared in some way?
Each hub is allocated two Scorecard user licenses - these are the only individuals in your hub that can see your data. The CLIC, as an administrator, can see what the hubs enter into the Scorecard software. Hubs can purchase additional licenses from Clear Impact.
Do I create a separate scorecard for each program?
No, you can have multiple programs on a single Scorecard.
I have been exploring the Scorecard application and came across an “Import Data Values” under the Admin section. Is this a working and viable option for importing data for the Common Metrics into Scorecard from other applications?
You are welcome to use the import feature. Narrative text can't be imported from Excel, so you will still need to enter the TTC plans manually or use copy/paste from an existing document.
Can I use the “Import Data Values” setting in the Scorecard software for importing the Common Metrics into Scorecard from other applications?
Using this feature is helpful when handling a large amount of data on a frequently repeating schedule. Given the small amount of data required for this project, the economies of scale that usually make importing a good idea may not apply. Also, narrative text can’t be imported from Excel since it is rich formatted text. You would still need to add narrative to the software.
How many strategies should we include in the Scorecard? We understand there are many action plans that might fall under each strategy. Do we enter only the strategies that we are planning to implement at this time, or all strategies we've considered?
There is no limit to the number of strategies that you can include in Scorecard. Enter the strategies that you believe you can reasonably implement - you can always enter more strategies in Scorecard at a later time. Please review the TTC Planning tools on the CLIC website for additional suggestions. You can use these tools to keep notes of your entire TTC Plan development process.
Will we be using the Scorecard software going forward after the initial implementation? If so, who will be responsible for the associated costs?
Yes. This software is being used to facilitate communications within hubs and across the Consortium. It will also be used for strategic management and to communicate with NCATS Program Directors. Further information about costs for licenses will be provided at a later date.
How do I include the forecast in my Scorecard?
The forecast is not required, but you can include it. If you want to include a forecast in Scorecard, place it in the Current Target Value box when you enter your current data value.
How do you edit tags?
Please do not edit tags. Information for the data uploads is based on specific tag names; if you enter your data under a different tag, your data will not be reported accurately.
What time periods should be in Scorecard? Monthly, quarterly, or just one annual data point?
Common metric data and the Turn-the-Curve plans are entered annually. All metric data is downloaded for analysis on or near August 31st of each year. The CLIC communicates due dates to the hubs.
Can we use just one Scorecard for all the strategies in our plan for a given metric? It would be easier to just look at one Scorecard and see if our strategies are working, instead of having to switch back and forth.
If you want to use additional Scorecards, please put them in a section other than Common Metrics.
Who is responsible for the costs of the Scorecard software?
The CLIC provides two licenses for each hub. To request a license, contact common_metrics@clic-ctsa.org.