Frequently Asked Questions

Scorecard

Do I create a separate scorecard for each program?
No, you can have multiple programs on a single Scorecard.
Can I use the “Import Data Values” setting in the Scorecard software for importing the Common Metrics into Scorecard from other applications?
Using this feature is helpful when handling a large amount of data on a frequently repeating schedule. Given the small amount of data required for this project, the economies of scale that usually make importing worthwhile may not apply. Also, narrative text can't be imported from Excel because it is richly formatted text, so you would still need to add the narrative in the software.
Will we be using the Scorecard software going forward after initial implementation? If so, who will be responsible for the associated costs?
Yes. This software is being used to facilitate communications within hubs and across the Consortium. It will also be used for strategic management and to communicate with NCATS Program Directors. Further information about costs for licenses will be provided at a later date.
How do I include the forecast in my Scorecard?
The forecast is not required, but you can include it. If you want to include a forecast in Scorecard, place it in the Current Target Value box when you enter your current data value.
What time periods should be in Scorecard? Monthly, quarterly, or just one annual data point?
Common metric data and the Turn-the-Curve plans are entered annually. All metric data is downloaded for analysis on or near August 31st of each year. The CLIC communicates due dates to the hubs.
Can we use just one Scorecard for all the strategies in our plan for a given metric? It would be easier to just look at one Scorecard and see if our strategies are working, instead of having to switch back and forth.
If you want to use additional Scorecards, please put them in a section other than Common Metrics.
Who is responsible for the costs of the Scorecard software?
The CLIC provides two licenses for each hub. To request a license, contact common_metrics@clic-ctsa.org.
How will the data entered into the Scorecard be shared? Will it be entirely private, or shared in some way?
Each hub is allocated two Scorecard user licenses - these are the only individuals in your hub that can see your data. The CLIC, as an administrator, can see what the hubs enter into the Scorecard software. Hubs can purchase additional licenses from Clear Impact.
I have been exploring the Scorecard application and came across an “Import Data Values” under the Admin section. Is this a working and viable option for importing data for the Common Metrics into Scorecard from other applications?
You are welcome to use the import feature. Narrative text can't be imported from Excel, so you will still need to enter the TTC plans manually or use copy/paste from an existing document.
How many strategies should we include in the scorecard? We understand there are many action plans that might fall under each strategy. Do we only enter those strategies that we are planning to implement at this time, or all strategies we've considered?
There is no limit to the number of strategies that you can include in Scorecard. Enter the strategies that you believe you can reasonably implement - you can always enter more strategies in Scorecard at a later time. Please review the TTC Planning tools on the CLIC website for additional suggestions. You can use these tools to keep notes of your entire TTC Plan development process.
How do you edit tags?
Please do not edit tags. Information for the data uploads is based on specific tag names. If you enter your data under a different tag, your data will not be reported accurately.

Implementation

Is there a specific mechanism for submitting language to revise the guidelines and/or communicate any additions (e.g., definitions of engaged in research)?
Those questions and suggestions can be emailed to the CLIC team (common_metrics@clic-ctsa.org) or submitted through the CLIC website. CLIC will forward them to NCATS.
Will the work related to the Common Metrics project be added to the annual progress report?
Not at this time.

Metrics - General

What is the deadline for completing data entry for the Common Metrics?
The deadline is August 31st of each year.
What definition of “annually” should we be using? Calendar year, grant year, fiscal year, or academic year?
The Operational Guidelines were updated to indicate CTSA program hubs should use the calendar year for each of the Common Metrics.
Our CTSA first received funding last August, so currently there is very little data for what has happened since then. Should we look back at data from before we received our CTSA in order to have more of a trend, or should it be based solely on data after the award?
Only enter data related to what happened after the award. If your hub has historical data that reflects information for earlier time periods, it should be considered in your work on developing your Story Behind the Curve.
With the Turn the Curve methodology, we were asked to look for a baseline and do a forecast. Our baseline is currently just one data point. Should we be thinking about also collecting metrics for previous calendar years?
If the data are available and you have the resources to go back and calculate them for previous years, this could be helpful in determining your forecast; however, per the Operational Guidelines, this is not required.
Is there an expectation about how the Common Metrics will be used in annual reports?
The Scorecard data for the common metrics is downloaded and sent to the CLIC. CLIC de-identifies and aggregates the data. NCATS receives a report of the de-identified aggregated data. Hubs receive individual reports. Whether hubs choose to include these data in their RPPR is their choice.
Is a Turn the Curve plan always needed [for the metric scores that require them]? We've regularly hit or exceeded our targets for two of the common metrics.
TTC planning is important for improving these metrics at the hub level, but the hope is also that best practices that are very successful at individual hubs can be shared across the CTSA Consortium. We ask you to develop a TTC Plan and describe in your Story Behind the Curve the underlying factors that led to your success with a particular metric, the strategies you used to sustain that performance, and the partners involved in sustaining it. Hubs may find that there are few changes to the TTC plan from year to year - this is acceptable.
If we have data prior to the start date projected in this plan, can we include that data as part of pilot data collection?
For the Pilot and the Career common metrics, include information only from 2012 forward. 
Why is my organization not listed in the federated list?
If your organization is new to working with CLIC, it is possible we haven't added it to our list yet. Please contact help@clic-ctsa.org and we can look it up. However, most likely your institution is not federated.
Can I create accounts for other people at my hub?
This is not recommended. To create an account, you would need access to another person's university credentials or their Google or LinkedIn email and password, and for security reasons passwords should not be shared. Please encourage others on your team to create their own accounts. The New Member Guide is a great page to share because it has a simple getting-started list and a step-by-step guide to logging in.
Is there an advantage to using a federated login?
Federated logins use your university credentials to authenticate. It will pull over information such as your name and university email address. It can also be easier for others to validate an account is yours when your university email address is listed. Other than that, the accounts work the same. 
Can my Administrator register for the DTF/WG pages?
Yes; however, we ask that both you and your Administrator complete the registration and DTF/WG subscription process. They would need to choose the appropriate Membership Type on the Requesting Membership page.
How do I edit my CLIC account information?
After logging in, visit your Account page. There should be a tab at the top labeled "Edit". After making your changes, select "Save" at the bottom of the page. To view your changes, please use the "View" tab at the top of the page.
How do I find content I added to the website?
To find any content you've submitted to the CLIC website such as news, events, opportunities, educational content, RFAs or resources, please log in and then go to your Account page. Look for the "Authored Content" tab at the top of the page. Still don't see what you are looking for? Please email help@clic-ctsa.org.
How can I save something on the website to easily find later?
Please log in. Look for the ☆ Save link on the page; it is typically near the bottom. To see everything you've saved, visit your Account page. On the right-hand side, in the Account Functions menu, select "View Bookmarks".
How do I contact CLIC?
You can use either the contact form or the CLIC email: contact@clic-ctsa.org. Both methods connect to our Help Desk. We will connect you with the best person for your inquiry.
How do we report data if we are in a "no-cost extension"?
If your program was still "running" with bridge funding and no awards were made, enter 0 (zero). If you "froze" the program, leave the field blank and note this in your TTC plan. If you are working with cumulative metrics (Careers and/or Pilots), please continue with the data where you left off, since the metric is cumulative, and make a note of this in your TTC plan. (Revised 7/22/2019)

Metrics - Careers

We already completed our data collection for this year on TL1 and KL2 program graduates but did not include information on underrepresented persons. Can we wait until the next time that we survey graduates to collect this information?
Please collect the data when you begin the implementation of the Common Metrics so that all of the hubs are collecting data in a standardized manner.
There are multiple definitions for underrepresented minorities. Which one are we using?
We are using the NIH definitions. The Operational Guideline has a link to more information about these definitions. The most up to date versions of the Operational Guidelines can be found on the Established Common Metrics page. This is the link to the most current NIH definition: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-18-210.html
When is a TL1 or KL2 student eligible to be counted for the careers metric?
If they have completed the TL1 training program and are no longer on the TL1 grant, they can be assessed for whether they are engaged in research (add them to the denominator of the metric). For KL2s, following completion of CTSA-funded training as a KL2 scholar, add them to the denominator of the metric.
We are a relatively new hub and do not yet have any eligible graduates to be included in the Careers Common Metric. Should we do Turn the Curve planning for what we think will happen?
Yes. If you do some thinking about this now, you may identify some additional performance measures or strategies that you may want to implement (e.g., adding an exit interview to your process for departing graduates) even before you are able to start collecting data for the metric.
Many of our short-term TL1 trainees return to medical school and go on to residency after completing the program. Would that be considered “engaged in research”?
If medical students or residents do not have dedicated time for research, they are not considered to be “engaged in research.”
Should we be reporting numbers for the hub only, or for the hub and its affiliates?
You should report all of the TL1 and KL2 scholars who graduated from the program, regardless of where they are located.
Do we include institutionally funded KL2 and TL1 scholars or only those whose training was paid for by the grant?
Please include only CTSA program-funded KL2 and TL1 scholars.
Our hub will not have any eligible graduates in the denominator as we are a relatively new hub. Therefore should we do any Turn the Curve strategizing for what we think is happening?
Yes. Include this information in your Story Behind the Curve and develop strategies that you may want to implement (e.g., adding an exit interview to your process for departing graduates) even before you are able to start collecting data for the metric.
Should we include Ks and Ts who fully participate in the CTSA Career Development Program that are funded by institutional funds, not the CTSA grants?
If they are not directly funded by NCATS then they are excluded from the metric.
Do we include KL2 and TL1 scholars that are institutionally funded or just NIH/NCATS funded?
NCATS has clarified that non-CTSA grant funded scholars and trainees who participate in your KL2 or TL1 program should not be included.
Many of our participants exit the KL2 program after getting into another “K” program. We view this as a success, but it seems that the definition does not view this as a success. Please clarify.
Following completion of CTSA-funded training as a KL2 scholar, they are eligible to be counted for the metric. If they are engaged in further training by another K award, they are considered engaged in research. However, if a scholar leaves the KL2 program without completing the full training program requirements, they are excluded.
For T or K scholars who either chose not to identify their race and/or gender, or self-identified with a non-female, non-male gender: How should this be handled in the current metric calculations?
It is optional for a KL2/TL1 Scholar to identify their gender and/or race. Hubs should continue to count the scholar and state in their story behind the curve that the Scholar did not identify their gender and/or race and therefore could not place them in a race or gender category for analysis.
Is there a minimum % for someone involved in research? If someone reports they are only involved in research 5% does that count or is there a minimum of for instance 20%?
See the Career Metric Operational Guidelines: “If primary role is as a clinician: some effort (e.g., 20%) in research or as a site PI for industry-sponsored clinical trials.”
To what extent can eRA extract help with this data collection?
eRA can be a source of information, but it will not be a complete one; it is not all-inclusive, and some relevant activities occur outside the system.
It is not uncommon and indeed expected that KL2 scholars here apply for (and hopefully are awarded) other K awards (K01, K23). Does this mean that they should be excluded even though these are not held simultaneously?
Following completion of CTSA-funded training as a KL2 scholar, add them to the denominator of the metric. If they are engaged in further training by another K award, they are considered engaged in research (add them to the numerator).
If a graduate is lost to follow-up (e.g., no address or email to send a survey), should they be removed from the numerator and denominator of the metric?
Yes, remove them from the numerator and denominator of the metric.
Do we combine post-doc and pre-doc trainees in the common metrics for the TL1 program?
Yes.
What is the expectation for the number of years that participants are tracked upon completion of a program?
There is no limit on the number of years currently specified.
Can a CTSA hub have a KL2 program that is only institutionally funded?
It is not possible to have a KL2 Program without a U54 CTSA Program award.
This is a cumulative metric. If a participant is engaged in research 1 year after program completion but no longer is 3 years after completion, how should this be tracked?
The total number of program graduates over time is cumulative. It gets updated once a year by adding the new number of graduates to the previous total to make an updated denominator for the metric. The numerator, the number and percent of graduates from the denominator who are currently engaged in clinical and translational research is assessed each year.
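The bookkeeping above can be sketched in a few lines, using entirely invented yearly counts for illustration: the denominator accumulates graduates over time, while the numerator is a fresh annual assessment of how many of those graduates are currently engaged in research.

```python
# Hypothetical illustration of the cumulative Careers metric described above.
# Denominator: cumulative total of program graduates. Numerator: graduates
# currently engaged in research, re-assessed each year. All counts are invented.

yearly_new_graduates = {2015: 4, 2016: 6, 2017: 5}   # assumed new graduates per year
yearly_engaged = {2015: 3, 2016: 7, 2017: 9}         # assumed annual engagement counts

results = {}
denominator = 0
for year in sorted(yearly_new_graduates):
    denominator += yearly_new_graduates[year]        # cumulative denominator grows
    numerator = yearly_engaged[year]                 # annual snapshot numerator
    results[year] = (numerator, denominator, round(100 * numerator / denominator))

# e.g., by 2017 the denominator is 4 + 6 + 5 = 15 graduates
```

Note that the percentage can fall even when the engaged count rises, simply because the cumulative denominator keeps growing.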
Should hubs apply the "Engaged in research" bullet points as a closed set of criteria that defines engagement or as examples that hubs should use to assess engagement in research?
The list of activities that indicate engagement in research are examples only and not criteria.
We funded short-term pre-docs (12-wk summer experience) 2006- 2014 and then discontinued it and have just supported year-long pre-docs for the last 2 years. Should these short-term awardees be excluded from the common metric counts?
Exclude the pre-docs who have only attended the 12-week summer experience from your Careers Common Metric counts. These don’t sound like they are comparable to TL1 students who you might expect to pursue research careers.
What does "current" mean in terms of research engagement and reporting on it?
NCATS expects that hubs collect this data on an annual basis from each graduate for each year after completion of the program (for graduates since January 1, 2012). If a graduate does not respond in a given year, they are not included in either the numerator or denominator for that year. However, an attempt should be made to follow up with them again the next year to determine their status. A graduate should only be permanently removed from the denominator if they are deemed “lost to follow-up” (e.g., no address or email to send a survey to, persistent refusal to respond).
If a PhD student does not get compensated for time for research, but that research is a required part of the degree and they receive tuition assistance, should this be considered "engaged in CTR"?
Yes, this is considered to still be engaged in research.
Regarding "engaged in clinical research": if a scholar takes a break in order to start a family or care for parents, do we exclude them from our Scorecard for that particular year, even though they will resume their research?
Graduates are to be assessed for their engagement in clinical and translational research annually.
For T scholars, do we count them for the year ending data in the same year they finish their program or not until the following year? Ex: T scholar finished program in June 2014. Do we count their responses in the 2014 data or wait until 2015?
Graduates should be added to the denominator of the metric starting in the calendar year that they finish their program. In your example, you would add the T scholar in the denominator for 2014. You should then also assess their eligibility for the numerator of the metric (i.e., are they involved in CTS research) starting in 2014.
Please provide some guidance on how to define individuals from "disadvantaged backgrounds".
The following is a Notice of NIH’s Interest in Diversity: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-18-122.html As presented in the notice:
C. Individuals from disadvantaged backgrounds, defined as:
1. Individuals who come from a family with an annual income below established low-income thresholds. These thresholds are based on family size; published by the U.S. Bureau of the Census; adjusted annually for changes in the Consumer Price Index; and adjusted by the Secretary for use in all health professions programs. The Secretary periodically publishes these income levels at http://aspe.hhs.gov/poverty/index.shtml.
2. Individuals who come from an educational environment, such as that found in certain rural or inner-city environments, that has demonstrably and directly inhibited the individual from obtaining the knowledge, skills, and abilities necessary to develop and participate in a research career.
The disadvantaged background category (C1 and C2) is applicable to programs focused on high school and undergraduate candidates. Thus, because the Careers metric is focused on TL1 trainees and KL2 scholars, the disadvantaged background category does not apply to this metric.
Several of the TL1 trainees at our institution are in the MD/PhD program. Once they have completed their dual degrees, they continue with a residency for several additional years as well as subsequent fellowships. Is this considered “still in training”?
If they have completed the TL1 training program and are no longer on the TL1 grant, they can be assessed for whether they are engaged in research (add them to the denominator of the metric). If they’ve completed the TL1 training program and are participating in additional training that has a research component, they are considered “engaged in research” (add them to the numerator of the metric). If the residency or PhD program includes dedicated time for research, they are engaged in research and added to the numerator.
All of our TL1 grantees are pre-doctoral scholars, after they complete the award they have years of medical school, etc. before starting their research careers. Should they all be excluded because they are still considered "in training programs"?
If they have completed the TL1 training program and are no longer on the TL1 grant, they can be assessed for whether they are engaged in research (add them to the denominator of the metric). If they’ve completed the TL1 training program and are participating in additional training that has a research component, they are considered “engaged in research” (add them to the numerator of the metric). If the residency or PhD program includes dedicated time for research, they are engaged in research and added to the numerator.
Are we collecting data on the number and percentage of each of the three underrepresented categories (racial and ethnic groups, individuals with disabilities and those from disadvantaged backgrounds) or just the overall number and percentage?
You should be collecting data on the number and percentage of underrepresented persons. You do not need to break it down into the three categories. Please see NIH's Interest in Diversity statement at https://grants.nih.gov/grants/guide/notice-files/NOT-OD-18-210.html
If a graduate is lost to follow up, should they be removed from the numerator and denominator of the metric? If a graduate does not respond to a survey, should they be included in the denominator?
Remove them from the numerator and denominator of the metric.
How should we report KL2/TL1 data if we were in a no-cost extension from 6/2017-6/2018? Some of the Ks were active before 6/2017 so we reported on them for CY2017, but for CY 2018, there were no active K awards. Our funding was renewed 6/2018.
If your program was still "running" with bridge funding and no awards were made, please enter 0 (zero). If you "froze" the program, leave the field blank and note this in your TTC plan. (8/7/2019)

Metrics - IRB

My institution’s IRB defines receipt date and final approval date differently than the Operational Guidelines. Can we use these different definitions when reporting the Common Metrics?
If your institution has a slightly different definition, you will need to take the date they use and calculate the receipt date or final approval date according to the definition in the Operational Guidelines.
If the IRB review is complete and a study is granted a pending approval while the contracts office finalizes financial language for the consent form, would IRB-related work be considered complete?
No. Once your IRB has determined that the protocol and study are approved with no IRB-related stipulations, the IRB-related work is considered complete, and that date is the final approval date. At some institutions, the IRB will not give its approval until all other reviews are completed. The approval date is the date the study can begin.
How is clinical research defined?
Please use this link (it's also available in the Operational Guidelines) - https://grants.nih.gov/grants/glossary.htm#ClinicalResearch.
Our CTSA hub represents more than one institution, and as such, has more than one IRB. Should we report the median time to IRB approval for the two institutions if there are separate IRBs for each of the institutions within the hub?
For this metric, hubs should report the median duration from the IRB(s) of their primary institution. If the primary institution has three separate IRBs, then the data from all three should be combined to compute the median. For hubs with more than one institution, where one is primary and the others are not, report only the data from the IRB(s) of the primary institution.
What is the recommendation for how far back to collect data on the IRB Duration Common Metric?
The Operational Guideline (OG) specifies to collect this data annually, beginning with all protocols approved during calendar year 2015. If your hub has been collecting this data historically in a way that is the same or similar to the specifications in the OG, and it would be helpful to take into account when you are working on your Story Behind the Curve, you could also enter the historical (pre-2015) data to your Scorecard.
Please clarify the exclusion of protocols that “are multi-site protocols reviewed based on a reliance agreement.” Does this exclusion apply to protocols when the local IRB is NOT the relied upon IRB and those when the local IRB is the relied upon IRB?
A hub should not include IRB applications conducted at another institution in the IRB Duration common metric. If a local IRB did not review a multi-site protocol because they were a relying IRB, the protocol should be excluded. However, if the local IRB is the relied upon IRB, the protocol should be included in the IRB Duration common metric for that site.
Are there two IRBs for a CTSA that is multi-institutional? Will we be able to report the median time to IRB approval for two institutions if there are two IRBs for two separate institutions within the hub?
For this metric, you report the median duration from the IRB(s) from your CTSA hub's primary institution. If your primary institution has three separate IRBs, then the data from all three should be combined together to compute the median. But if you have more than one institution, one of which is primary and the others which are not, you only need to report the data from the IRB(s) of the primary hub institution. 
Does it matter whether the protocols are industry or investigator initiated?
Protocols include all clinical research (including multisite studies). There is no exclusion for industry-related projects. So, if they are industry initiated or investigator initiated, they are both included in the metric.
Regarding the unit of analysis - do calendar days reflect work days only?
No, calendar days reflect the total number of days that occur between the receipt of the application and its approval.
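As a concrete sketch of the arithmetic (all dates below are invented for illustration), each protocol's duration is the full elapsed span between receipt and final approval, weekends and holidays included, and the metric is the median across all approved protocols:

```python
# Hypothetical illustration of "calendar days" for the IRB Duration metric:
# total elapsed days between receipt and final approval (weekends included),
# with the median taken across all approved protocols. Dates are invented.
from datetime import date
from statistics import median

protocols = [
    (date(2019, 1, 7),  date(2019, 2, 20)),   # (receipt date, final approval date)
    (date(2019, 3, 1),  date(2019, 3, 29)),
    (date(2019, 5, 10), date(2019, 7, 2)),
]

durations = [(approved - received).days for received, approved in protocols]
median_days = median(durations)
```

Subtracting two dates this way counts every intervening day, which matches the "total number of days" definition rather than a business-day count.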
What is the recommendation for how far back to collect data on the IRB Duration metric?
The Operational Guideline (OG) specifies to collect this data annually, beginning with all protocols approved during calendar year 2015.
Does the metric apply to only CTSA-associated studies or is institutional wide?
Institutional-wide, not just CTSA-related studies.
For IRB protocols that are determined to be exempt or expedited, is the approval date the date at which final notification was given?
Exempt and expedited protocols are excluded altogether from this metric.
A protocol is submitted in 2014 but approved in 2015. How is it counted?
It is the approval year that determines which cohort of data the protocol is included with. If it is submitted in 2014 and approved in 2015, count it in 2015.
What is the Operational Guidelines definition of pre-review? Does it include administrative issues or only content-related review? Should a pre-review that entails review of the content of the protocol be excluded from the calculation?
Hubs may use pre-reviews to address administrative issues and content-related reviews. Pre-review time should be excluded from the calculation of the median IRB duration.
What is the intent of the exclusion of protocols: “are multisite protocols reviewed based on a reliance agreement.” Does this apply to both protocols -the local IRB is not the relied upon IRB and those for which the local IRB is the relied upon IRB?
A hub should not include IRB applications conducted at another institution.  If a local IRB did not review a multisite protocol because they were a relying IRB, the protocol should be excluded. However, if the local IRB is the relied upon IRB, the protocol should be included for that site.
What does "fully reviewed" mean?
Fully reviewed means that the protocol has received IRB approval.
For the inclusion, do we include all submissions that occur during the time period, or all that have an end date in the timeframe?
Just include the protocols that have received IRB approval during the timeframe.
What is a pre-review?
Some hubs conduct pre-reviews as a means to address administrative and content-related issues so that the protocol doesn't run into any barriers during the approval process. Pre-reviews are excluded from the calculation for the IRB duration metric.

Metrics - Pilots

Should recently funded pilot awards that have not had sufficient time to publish or secure additional funding be included in the data?
If the hub has expended funds, the pilot should be included in the data. Pilots that have been awarded but have not expended funds should be excluded. (Revised 7/22/2019)
For the pilot funding metric, is there a set target for percentage of publications arising from pilot funding that is expected across all hubs?
There is no performance target or benchmark for this or any of the metrics.
Are all publications that cite the UL1 grant considered as having resulted from the pilot funding?
In order for a CTSI to count a publication, it has to cite the grant and include the PMCID.
When entering the pilot awards turn-the-curve plan in the Scorecard software, do we associate with the # of pilots metric or the % of pilots metric?
It should be associated with the % of pilot research projects that have at least one publication.
Do we count projects for the hub only, or do we include affiliates?
You should count any project at the hub or affiliates where funds are administered through the CTSA pilot studies program, even if the funds are solely from the institution (e.g., medical school).
If an investigator is awarded pilot funding more than once for the same project, how should resulting publications and funding be counted?
A project that receives more than one award should only be counted in the denominator once in the year of the first award. Subsequent publications or funding should also be counted only once in the numerator.
Is expended funding defined as “beginning to expend funds” or “finishing expending funds”?
Expending funds is defined as beginning to expend funds.
For the pilot publications metric, can we include publications without a PMCID but for which a free final text version is available?
It was decided to use the PMCID as a criterion since this matches what goes into the RPPR. For now, publications must have a PMCID to be counted.
Do we only report on Pilots supported by CTSA funding or all pilots awarded in the calendar year?
NCATS has clarified that projects that are non-pilot grant funded but using CTSA pilot grant program resources are not considered one of their pilot grants and not included in this metric.
Is there a time limit, for example 5 years, on tracking publications and follow-up funding for funded pilot projects?
No, there is no time limit specified in the Operational Guidelines.
Are we excluding publications from peer-reviewed journals that are not indexed by PubMed, such as engineering journals? I have some that do not have PMCIDs for this reason. Are book chapters and conference proceedings also excluded?
It was decided to use the PMCID as a criterion since this matches what goes into the RPPR. For now, publications must have a PMCID to be counted.
How would you handle a case where a pilot award with a first publication began prior to 2012 but expended funds in or beyond 2012?
NCATS has clarified that if a pilot award was funded in 2011 and had any funds expended during 2012, those should be included. So they would go into the denominator the first year after January 1, 2012 that they expended funds and would go into the numerator whenever they produced the publication.
If our CTSA pilot has a PI and multiple co-investigators, should we count a subsequent publication or grant from any member of the group who carried out the pilot?
Yes, if it is plausibly related to this pilot. See the definition in the Operational Guideline: “A publication refers to any article on which pilot grant recipients (PI or co-PI) were authors or co-authors that was published in a peer reviewed journal. To be eligible, a publication must have a PMCID number and be considered a CTSA Program hub related publication.”
Can a publication be counted under two different grants if it happened to be influenced by both, or should a single publication only be counted one time?
Yes, a publication can be counted under two different grants if it can be shown to have been influenced by both.
Should private donations be counted under subsequent follow-on grants?
If the donated funds go through the institution, then yes, they should be counted. If an individual receives money from a private party, then no.
When there are gaps in funding for pilots, do hubs include those years in the pilot metric, or restart their count of the cumulative number of pilots or the number of pilots published during the period? How do we handle the metrics when there is a gap period?
Hubs should not include pilot metrics for years where the hub did not receive funding.  If they were funded in 2012, 2013 and 2015, they should include only those years in the metric.  It is anticipated that their cumulative number could potentially plateau or even drop due to the gap year.  Hubs should indicate this change in their Story Behind the Curve. 
Do we only report on Pilots supported by CTSA funding, or all pilots awarded in the calendar year? If the Pilot was funded by non-CTSA resources but the project uses CTSA resources/knowledge that CTSA supports, do we count them as well?
NCATS has clarified that projects that are non-pilot grant funded but using CTSA pilot grant program resources are not considered one of their pilot grants and not included in this metric. 
In the “pilot funding publications and funding” metric, the timeframe is “since 1/1/12. If your hub was funded later, the beginning date is as of funding date.” Does this refer to the first time the hub was funded or the start of the current 5-year grant?
It refers to the first time your hub was funded as a CTSA. So if your hub was first funded in 2014, you would use that as your start date. 
Our pilot projects were not selected solely based on their likelihood of publications/grants. Do we need to re-think the way we have been implementing this program?
Other hubs have raised this issue. Not all pilot programs selected for projects are expected to publish, and for others the publication cycle may take a significant amount of time. The purpose of the Common Metrics is to be useful at the hub and the consortium level for strategic management. This information should be reflected in your Story Behind the Curve.
We are in the final year of our grant and our renewal submission included significant changes to the structure/ administration of our pilot program. We are unsure how to proceed with Turn the Curve plans given these changes.
In the Story Behind the Curve, your team is asked to think about factors that are influencing the value of your Common Metric (and the trend line if one is available). These include factors that are positive and negative, current and anticipated. In your instance, you certainly have a number of factors; given the uncertainty, you may not yet be able to characterize all of them as positive or negative. Are there other partners who should be involved early, at least to be aware that the changes are coming? After you identify the factors and potentially some partners, you could begin talking as a team about at least 1-2 strategies that you might consider under the most likely of the scenarios going forward. Until you are able to begin making changes, it may be difficult to proceed. However, you may be able to identify any additional data you might want to collect going forward.
We have several grantees that have been awarded multiple grants for the same project. This means that the different grants share a common pool of follow-up publications. How should these be counted (numerator/denominator)?
For both the pilot publications and funding metrics, a project that receives more than one award should only be counted in the denominator once in the year of the first award. Subsequent publications or funding should also be counted only once in the numerator.
How do other institutions track their publications? What do they rely on to report CTSA-related publications to them? Do they have a software or an electronic system that helps manage their publications? How do they search for CTSA-related publications?
It is common that authors do not cite the grant (the TTC plans typically include a range of strategies to improve the rate at which they do so). Therefore, hubs have often reported needing to use several strategies together in order to fully identify CTSA-related publications. Most who are doing so successfully regularly survey their awardees. During the period when funds are being expended, they ask them to report on any manuscripts submitted, accepted or published as part of their progress reports. After funding is concluded, they send regular (every 6 or 12 months) surveys. Some hubs have partnered with their medical librarian colleagues. They can: a) help do searches for publications that did cite the grant, b) teach awardees to cite, and help them to do so appropriately, and c) help awardees upload their publications to PubMedCentral.
Since we have pilots that span 2 years, we’ll be double counting people in the numerators - we fund the fiscal year so it will look like we're funding more people than we actually are. How should we enter this data?
You would count a pilot study once in the denominator in the year that they begin expending funds. If they are awarded in August 2015, and first expend funds in November 2015, they are counted in 2015. You would count them once in the numerator in the calendar year in which they publish their first article. They are not to be counted again in the numerator if there are additional publications. The metric is cumulative, not yearly so if a study spans more than one year it is only counted in the first year funds are expended.
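The counting rule above can be sketched in code. This is an illustrative sketch only; the record layout, field names, and sample years below are assumptions for demonstration, not the actual Scorecard data format.

```python
# Hypothetical sketch of the cumulative pilot metric counting rule:
# each project enters the denominator once, in the year funds are first
# expended, and the numerator once, in the year of its first publication.
from collections import namedtuple

Pilot = namedtuple("Pilot", ["project_id", "first_expend_year", "first_pub_year"])

pilots = [
    Pilot("P1", 2015, 2016),  # awarded 2015, first publication 2016
    Pilot("P1", 2016, 2017),  # second award for the same project: not recounted
    Pilot("P2", 2015, None),  # no publication yet
]

def cumulative_metric(pilots, through_year):
    """Return (numerator, denominator) through the given calendar year."""
    earliest = {}
    for p in pilots:
        # Keep only the earliest award per project (counted once).
        if (p.project_id not in earliest
                or p.first_expend_year < earliest[p.project_id].first_expend_year):
            earliest[p.project_id] = p
    denom = sum(1 for p in earliest.values() if p.first_expend_year <= through_year)
    numer = sum(1 for p in earliest.values()
                if p.first_pub_year is not None and p.first_pub_year <= through_year)
    return numer, denom

print(cumulative_metric(pilots, 2017))  # -> (1, 2): P1 published once; P2 has not
```

Note how the second award for project P1 does not add to either count, matching the guidance that a multiply-awarded project is counted only once.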
How should we report pilot data if we were in a no-cost extension from 6/2017-6/2018? Some of the pilots were active before 6/2017 so we reported on them for CY2017, but for CY 2018, there were no active pilot awards. Our funding was renewed 6/2018.
If your program was still "running" and you had bridge funding and none were awarded, please enter 0 (zero). If you "froze" the program then leave the field blank - and please put a note in your TTC plan.   (8/7/2019)

Results-Based Accountability

What is the purpose of Results-Based Accountability?
It is the preferred strategic management framework recommended by NCATS.
What is the difference between indicators and performance measures?
Indicators are measures which help quantify the achievement of a result. Performance measures are measures of how well public and private programs and agencies are working.
What level of detail is needed when we are reporting on our Turn the Curve (TTC) plans?
The TTC plans are intended to be entered at the level of an executive summary. A complete set of TTC Plan tools are available to the hubs on the CLIC website.
At our hub we have been collecting the metrics we now call common metrics since 2008. We use the RBA to look at *all* of our data. For the purposes of NCATS, do we only include data from 2012, or do we include “all” our data if we have it?
You can include all of your data, but it is not a requirement. If you do track this information, it must be placed in a separate Scorecard.

Metrics – Informatics

Regarding gender, is patient reported sex counted as administrative sex?
Administrative gender (defined by HL7 and OMB) is a person’s recorded gender, describing their sex as a standard value set (M or F). The domain reflects whether it is recorded, as a count.
What exactly is being recorded for the Medications/Drugs domain?
This data addresses the question, “Are you recording your patients’ medications using this standard vocabulary, over the number of patients you have in your repository?” A recorded value of “no medication” counts as a medication entry.
Regarding the unique patients that are included in the denominator: Would the script address patients who have died or are no longer our patients? Should we be considering that for this metric at all?
For the Informatics metric launch, it’s the percentage of people who had those values at the time they were being seen.
What are observations?
Observations are a measurement of a single variable or a single value derived logically and/or algebraically from other measured or derived values. A test result, a diastolic blood pressure, and a single chest x-ray impression are examples of observations. Essentially, this is a yes/no variable (1 or 0) – the presence or absence of observation data.
We noticed that there is no script for the Notes or Observations. Why is that the case, and is that data still needed?
For both the observations and notes domains, the determination is about the “presence” or “absence” of data. Observations are a single variable or a single value – for example a test result, chest x-ray, etc. Notes are free text documentation entered during a clinical encounter. The scripts contain queries/SQL for both the count and percent of free text notes; if the data model doesn't support free text, it is not included.
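As a rough illustration of how a percent measure can be derived from the two counts such a script returns, the sketch below uses assumed variable names and made-up counts; it is not the actual script output.

```python
# Illustrative only: percent of unique patients with at least one
# free-text note, computed from a count of patients with notes and
# a total patient count. The rounding choice is an assumption.
def percent_present(patients_with_notes, total_patients):
    """Percent of unique patients with at least one free-text note recorded."""
    if total_patients == 0:
        return 0.0  # avoid division by zero for an empty warehouse
    return round(100.0 * patients_with_notes / total_patients, 1)

print(percent_present(84213, 120304))  # -> 70.0
```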
Is the Observations domain a simple nominal level count of everything in the warehouse for that domain?
Regarding observations, the scripts are looking for general information on whether observations are being recorded at all (present/absent), in hopes that future research data warehouse characterizations could include vital signs.
What time interval should we include in the metric?
During the launch phase, please include all of the information from the research data warehouse. For subsequent reporting, please check the CLIC website for updates.
Does free text include imaging notes (narratives/impressions) or just clinical progress notes and discharge summaries etc.?
No, only notes: admission, progress, discharge, procedure, etc.
When we run the scripts, is the worksheet automatically populated? How does it get uploaded to the Scorecard?
You will need to copy and paste the script output into the worksheet and then upload the worksheet into Scorecard. You will also need to manually enter the data values for each of the data domains into Scorecard.
Do hubs need to provide a date range for each of the data domains?
Yes, hubs need to enter the date range for each of the data domains. Please enter just the years, for example 2010-2018. Do not enter months, days, or any type of text.
Can you tell me if the metrics will be looking at all patients (inpatient and outpatient) or are there specific guidelines for this?
The metric includes all patients in the research data warehouse – inpatient, outpatient, ER, etc.
What is the date range? Is it part of the TTC Plan narrative?
It is the range of data for which you have access and are reporting for each data domain. For example, in your research data warehouse you may be accessing and reporting on RxNorm ID for the years 1997-2018. In Scorecard, under the Observations Present performance measure, enter the date ranges in the table below the Inclusion/Exclusion Criteria section. Please enter just the years - do not enter days, months, or any type of text.
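The "years only" entry rule above can be checked with a small helper. This is a hypothetical sketch, not part of the Scorecard software; the validation logic is an assumption based on the stated format (e.g. "1997-2018", no months, days, or text).

```python
# Hypothetical validator for the "years only" date-range format
# described in the instructions, e.g. "1997-2018".
import re

YEAR_RANGE = re.compile(r"^(\d{4})-(\d{4})$")

def valid_year_range(text):
    """True if text is 'YYYY-YYYY' with the start year not after the end year."""
    m = YEAR_RANGE.match(text.strip())
    return bool(m) and int(m.group(1)) <= int(m.group(2))

print(valid_year_range("1997-2018"))      # -> True
print(valid_year_range("Jan 1997-2018"))  # -> False: no months or text allowed
```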
We are a multi-site CTSA, and we can only access one site. Is there a way to indicate that?
Collect the data that you have available. If you can collect data from more sites next year, it will make your data even more complete.
We have information for labs, procedures and medications. What do we need to report for the RxNorm domain?
Report on just the values for the RxNorm.
We filter our patient table to only include patients that have visit data in our location because we have billing data from several hospitals where our faculty work, but no actual visit data for those patients – no other information is available.
Include information that you think would be most helpful for investigators across the consortium.
Can you add dates in the script, for example Start Date = beginning of data and End Date = 12/31/2018?
They will be updated for the 2019 reporting period.
The initial ACT script was only looking for ICD-9 and not ICD-10. Has this been fixed?
Yes, it has been fixed.
How do we determine the data range for the domains from the research data warehouse?
The starting date is when your hub first started collecting the data – when you started using/recording the value for the domain.
Would you please provide a definition for each of the standard value data elements?
LOINC – Logical Observation Identifiers Names and Codes (https://loinc.org): a set of codes used to identify test results and/or clinical observations for electronic reporting.
RxNorm ID (https://www.nlm.nih.gov/research/umls/rxnorm/): a set of codes that provide normalized names for generic and branded clinical drugs.
ICD 9/10 – International Classification of Diseases (https://en.wikipedia.org/wiki/ICD-10): a set of codes for clinical diagnoses and/or procedures.
CPT – Current Procedural Terminology (https://www.ama-assn.org/amaone/cpt-current-procedural-terminology): a set of codes used to report outpatient clinical services and/or procedures for billing and reporting purposes.
SNOMED (www.snomed.org): a collection of medical codes and terms used in clinical documentation.
If our numbers are near 99% for each of these measures, what would you like to see us include in our TTC since there isn't much room to improve?
For hubs who have achieved 100% or near 100%, please document the strategies that have helped your hub to reach that target. Also, you may want to begin to identify additional metrics that would be helpful consortium-wide.
Incomplete data in the EHR can be outside the control of our CTSA. How can we remedy this issue without having the necessary authority?
Please enter this information in the Story Behind the Curve section of your TTC plan.
Is there a place where I can find simplified, step-by-step instructions on how to complete the Informatics metric?
Step-by-step instructions are located on the CLIC website.
Will CLIC provide hubs with feedback on their draft TTC plans?
If you want feedback, please submit a request to common_metrics@clic-ctsa.org.
Were the Informatics Webinars/Office Hours recorded? If so, where can I find the slides and the recordings.
All of the Informatics Webinars and Office Hours were recorded. The slides are available on the CLIC website. The recordings are available on the Google Drive. If you want access to the Google Drive, please contact common_metrics@clic-ctsa.org.
What are the steps for loading our data into the Informatics Scorecard?
The steps for entering the Informatics data are on the CLIC website - Informatics Step-by-Step Scorecard Entry Resource.
We don’t have direct influence over the data domains in the Informatics CM. Are you looking for something different in this CM than in the other Common Metrics?
The focus of the Informatics metric is the presence or absence of data in the research data warehouse (RDW). The TTC plan should be focused on how to get absent data, or how to improve accessing data.
Where do you enter the TTC plan for the Informatics metric?
Under the Observations Present performance measure. (Please see the Informatics Step-by-Step Scorecard Entry Resource on the CLIC website.)
ICD 9/10 is listed for two domains, why is that?
ICD 9/10 is used in healthcare to code both diagnoses and procedures – be sure to enter data under the appropriate performance measure.
Our research Informatics team has little control over what data points are filtered to the warehouse – we are at the mercy of the health system’s IT efforts.
That is not an uncommon situation. Please collect the data that you can and then list this issue under the Negative: section of your Story Behind the Curve (in your TTC plan). For your TTC plan, consider partnering with your IT team and determine if there are strategies that could be implemented to improve your access to the data.
How much of the long-term plans should be included in the TTC plan?
Typically the TTC plan is for one year. If you are entering longer-term goals, please identify that the strategies will be addressed over the next ## years.
The TTC plan entered for August 30, 2019, is a plan for which year? When do we report on progress afterwards?
The TTC plan entered for the August 30th deadline, represents the hub’s strategies for addressing how they will turn the curve in the coming year. TTC plans are entered annually, so progress is represented by comparing progress since the previous year.
How many TTC plans do we need to write for the Informatics Metric?
Only one TTC plan is needed for all of the data domains. Please place your TTC plan under the Observations Present performance measure in Scorecard. The Informatics metric is about the presence or absence of data, so the strategies in your TTC plan should address how you can access missing data and/or improve the quality of data reported.
Is Postgres under consideration as a supported DBMS?
Hubs are welcome to use Postgres (or another data model). Hubs will need to create their own query and make sure that it matches the measures of the existing script. If a hub creates a new query, please consider putting it on GitHub so that other hubs can use it. [SqlRender (https://github.com/OHDSI/SqlRender) allows OMOP sites to run against several other back end DBs beyond MS SQL Server and Oracle: PostgreSQL, Microsoft Parallel Data Warehouse (a.k.a. Analytics Platform System), Amazon Redshift, Apache Impala, Google BigQuery, IBM Netezza.] At this time, scripts are available for these data models: OMOP, PCORnet, and i2b2/ACT. Hubs that use TriNetX can contact TriNetX to get data.
Do the scripts provide the min/max dates to use for your data inclusion table?
For the launch of the metric the script did not include min/max dates. The scripts will be updated throughout the life of the metric. Check the Github site for the most current script.
How does a hub access a script?
The scripts are available on Github: https://github.com/ncats/CTSA-Metrics.
Do the PCORnet scripts work with version 4.1?
Yes. If you experience any problems with the script, please contact common_metrics@clic-ctsa.org.
Some hospitals have data going back many years prior to having a research data warehouse. Some patients may have no visits to the hospital, so other domains will have no data for them. It may be more useful to only count patients that had at least 1 visit.
Include information that you think would be most helpful for investigators across the consortium.

Reports

I received an email that I have access to my hub's CM Report. How do I access it?
Hub reports are available to the hub PI, Administrator, and Common Metrics lead (provided they have current CLIC website accounts). If you have access to a hub report, you should have received an automated email stating you have been added to the group. To access your hub’s Common Metrics Report, please log in to the CLIC website. You can reach your hub’s report page two ways:
From your Account page – look for the section labeled Group Membership.
From the Common Metrics Initiative page – in the right sidebar, look for ‘Hub Reports’.
How do I get access to my hub report if I am not a PI, Admin or CM Lead?
To gain access, you need to be added to the group by your PI, Admin or CM Lead. Those currently with access can log in to the page and follow the instructions on the hub page to give access to another user.
How do I get access to my hub report if I am a PI, Admin or CM Lead and did not have a CLIC website account before 2/28/19?
Please email common_metrics@clic-ctsa.org to request access after you have created your account OR ask for access from the PI, Admin or CM Lead. Instructions for adding access can be found on the hub report page.
What is TEAMSS?
Transforming Expanded Access to Maximize Support and Study (TEAMSS) is a collaboration between four academic health centers that seeks to advance clinical care and translational research by improving patient access to experimental therapies. The four institutions are: University of Michigan (lead site), Duke University, University of Rochester, and University of Texas Southwestern.
What are the goals and aims of the TEAMSS project?
TEAMSS is focused on developing best practices for use of the Expanded Access pathway that can be implemented at medical centers to improve efficacy and access for patients.
Aim 1: Develop, demonstrate, and disseminate best practices of network-based Expanded Access programs across the CTSA consortium.
Aim 2: Develop a network for cohort-based Expanded Access programs.
Aim 3: Create a database to standardize Expanded Access data reporting and develop a body of real-world data.
What is Expanded Access?
Expanded Access is a program in which the United States Food and Drug Administration (FDA) allows a qualified physician to access investigational medical products for patients in need. By “investigational” this means that the drug, device or biologic is NOT already cleared or approved by the FDA and therefore is not generally available for purchase.
Who may qualify for Expanded Access?
In order for patients to receive treatment under Expanded Access, the manufacturer must agree to provide access to the investigational product and the following criteria must be met: (1) The patient must have a serious or immediately life-threatening disease or condition with no comparable or satisfactory alternative to diagnose, monitor, or treat the disease or condition; (2) The potential patient benefit must justify the potential risks of the treatment use and the potential risks cannot be unreasonable in the context of the disease or condition to be treated; and (3) Providing the investigational drug for the requested use cannot interfere with the initiation, conduct, or completion of clinical investigations that could support marketing approval of the Expanded Access use or otherwise compromise the potential development of the Expanded Access use.
What is the first step that should be taken if interested in obtaining access to an investigational medical product through Expanded Access?
A consultation between a physician and patient (and/or caregiver team) is the first step in determining whether it is appropriate or not to seek out use of an investigational product for treatment.
What groups may be involved in obtaining approval for Expanded Access at an Academic Health Center?
Below are several groups that may be involved in obtaining Expanded Access to an investigational product:
Food and Drug Administration (FDA)
Institutional Review Board (IRB)
Treating physician who agrees to serve as the sponsor
Company supplying the investigational product
Regulatory Affairs Office
Contracts and/or legal department at the treating physician’s hospital
Pharmacy to assist with preparing and dispensing the investigational product
How does Expanded Access differ from normal patient care?
The therapeutic is not currently on the market in the United States so there is no way for the treating physician to purchase the product to supply it to the patient for the treatment of their illness or condition.  
How does Expanded Access differ from traditional research?
The purpose of traditional research is to learn more about the safety and effectiveness of the therapeutic. The primary purpose of Expanded Access is to treat a patient’s disease rather than to gain information about the safety and effectiveness of the medical product. FDA must approve the use of investigational products under Expanded Access, and federal regulations require oversight by an institutional review board (IRB). This is not always that much different from traditional research, and in many cases they do follow similar paths.  However, the final goal of Expanded Access is treatment of a patient’s serious or life-threatening condition.
What is Right to Try?
Right to Try is legislation passed in May of 2018 that creates a secondary route for patients to gain access to investigational medical products. Right to Try is a parallel pathway that coexists with FDA’s Expanded Access program. Right to Try does not guarantee that a therapeutic will be available to the patient; it does not follow the FDA pathway, nor does it require regulatory oversight. For more on Right to Try, please refer to the additional resource: https://www.fda.gov/patients/learn-about-expanded-access-and-other-treatment-options/right-try
Where else can I find assistance with using Expanded Access process?
The Reagan-Udall Foundation offers an Expanded Access Navigator program, which provides physicians, patients, and caregivers with guidance on Expanded Access and related topics. The FDA Expanded Access website provides extensive information about all types of Expanded Access studies that are available.
What does it mean for a drug to be in Phase 1? Phase 2? Or Phase 3?
Clinical trials are divided up according to how much research has already been conducted on the investigational drug.  Phase 1 studies are early, small-scale studies and mostly focus on determining the safety (and safe dosages) of the product in humans.  Phase 2 studies increase the number of people involved and look at both efficacy of the drug and continue to look at side effects.  Phase 3 studies enroll many more participants and continue to evaluate efficacy while also monitoring frequency of side effects. 
When would Expanded Access be appropriate?
To qualify for Expanded Access, a patient must have a serious condition, have exhausted standard of care therapies, and be ineligible for a clinical trial. A product must also be available that has a reasonable chance to benefit the patient, and the risks and possible benefits of the use must be appropriate. In addition to these standards, doctors and patients should discuss:
In the best case scenario that the therapy works as intended, how much benefit would the patient expect to receive? Some therapies might be curative, while others may only help manage symptoms or extend time to disease progression for a little while.
In the worst case scenario, what side effects might the patient expect? Could they be serious or even deadly?
What are the patient’s goals for care? An investigational therapy may have side effects which could impact quality of life. Other therapies may be available not to treat the disease, but to improve quality of life and manage symptoms.
How long could it take for the patient to start therapy? Some manufacturers do not allow for emergency requests, and some take a significant amount of time to review requests and get contracts in order. For non-emergency cases, the FDA can take up to 30 days to review the request, although it is unusual for review to take that long.
What would treatment with the drug look like? Would it require hospitalization, or frequent visits to a clinic or lab?
How do I request Expanded Access?
Expanded Access can be requested through your physician who will file the paperwork with the FDA.
What are the requirements for the Informed Consent document?
An informed consent explains what is known and what is unknown about the investigational drug.  It helps ensure the patient understands the nature of the investigational medical product for the proposed treatment.  An Informed Consent document must be reviewed by an Institutional Review Board (IRB) before being provided to the patient for them to sign.
How do I find out whether a drug is available?
There are several ways to find out if a drug is available:
Contact the manufacturer. Per the 21st Century Cures Act, manufacturers are required to post their Expanded Access policy and contact information online.
Use the Company Directory on the Expanded Access Navigator, maintained by the Reagan-Udall Foundation for the FDA.
Look through entries in the www.ClinicalTrials.gov database. Companies with Expanded Access programs for specific products will often post these on ClinicalTrials.gov. Search for the drug name under “Other terms.” Results can be limited to Expanded Access programs through the “Status” or “Study Type” filters. These listings will contain the requirements for Expanded Access as well as contact information for the program.
Will an Investigational Drug or Device help me?
There is no way to be sure an investigational drug or device will help you. These therapies are, by definition, still being studied, and doctors have limited experience with their use. In order to request a drug or device through Expanded Access, your doctor must have a reason to believe that it might work, usually because of how it is thought to work (called a “mechanism of action”). However, these therapies may not always work properly or work enough for a patient to benefit. Keep in mind that less than 10% of drugs that enter clinical trials are eventually found to provide enough benefit to enough patients to be approved. Even in late stage (phase 3) trials, only about half of drugs go on to be approved.
I’m a patient. How do I ask my doctor about Expanded Access?
If you are a patient who has a serious condition, has exhausted all treatment options, and is not eligible for clinical trials, you may consider discussing drugs available through Expanded Access with your doctor. Not all doctors are familiar with Expanded Access, the process or the evolving requirements.  You may want to use resources like the FDA webpage for Expanded Access or this FAQ as resources to help with this discussion. Keep in mind, not all doctors are comfortable requesting Expanded Access, and Expanded Access is not right for all patients. This should be a discussion between you and your doctor to determine what is best for your situation.
Will I be charged for Expanded Access?
In some cases, you may be charged for a drug or device provided under Expanded Access, but in many cases it will be provided at no cost. When a manufacturer does charge, the cost is tightly restricted and must be approved by the FDA. You or your insurance company may be charged for the other costs of care associated with your use of the investigational product, including clinic visits, hospitalizations, infusions and injections, or lab monitoring. You may also be responsible for any costs incurred if you have a bad reaction to the therapy.