Common Metrics Initiative Frequently Asked Questions

Metrics - IRB

If the IRB review is complete and a study is granted a pending approval while the contracts office finalizes financial language for the consent form, would IRB-related work be considered complete?

No. IRB-related work is considered complete only once your IRB has approved the protocol and study with no outstanding IRB-related stipulations; that date is the final approval date. At some institutions, the IRB will not give its approval until all other reviews are completed. The approval date is the date the study can begin.

My institution’s IRB defines receipt date and final approval date differently than the Operational Guidelines. Can we use these different definitions when reporting the Common Metrics?

If your institution uses slightly different definitions, you will need to take the dates your IRB records and derive the receipt date and final approval date according to the definitions in the Operational Guidelines.

Please clarify the exclusion of protocols that “are multi-site protocols reviewed based on a reliance agreement.” Does this exclusion apply to protocols when the local IRB is NOT the relied upon IRB and those when the local IRB is the relied upon IRB?

A hub should not include IRB applications conducted at another institution in the IRB Duration common metric. If a local IRB did not review a multi-site protocol because they were a relying IRB, the protocol should be excluded. However, if the local IRB is the relied upon IRB, the protocol should be included in the IRB Duration common metric for that site.

Regarding the unit of analysis: do calendar days reflect work days only?

No. Calendar days reflect the total number of days, including weekends and holidays, that occur between the receipt of the application and its approval.
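
For illustration, the duration in calendar days is a simple date subtraction. This is a hypothetical sketch (the function name and dates are illustrative, not from the Operational Guidelines):

```python
from datetime import date

def irb_duration_days(receipt: date, final_approval: date) -> int:
    """Calendar days between IRB receipt and final approval.

    Weekends and holidays are included, per the Operational Guidelines'
    use of calendar days rather than work days.
    """
    return (final_approval - receipt).days

# Hypothetical protocol received March 1 and approved April 15, 2018
print(irb_duration_days(date(2018, 3, 1), date(2018, 4, 15)))  # 45
```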

What does "fully reviewed" mean?

Fully reviewed means that the protocol has received IRB approval.

What is a pre-review?

Some hubs conduct pre-reviews as a means to address administrative and content-related issues so that the protocol doesn't run into any barriers during the approval process. Pre-reviews are excluded from the calculation for the IRB duration metric.

What is the Operational Guidelines definition of pre-review? Does it include administrative issues or only content-related review? Should a pre-review that entails review of the content of the protocol be excluded from the calculation?

Hubs may use pre-reviews to address administrative issues and content-related reviews.

Pre-review time should be excluded from the calculation of the median IRB duration.

What is the recommendation for how far back to collect data on the IRB Duration Common Metric?

The Operational Guideline (OG) specifies to collect this data annually, beginning with all protocols approved during calendar year 2015. If your hub has been collecting this data historically in a way that is the same as or similar to the specifications in the OG, and it would be helpful to take into account when you are working on your Story Behind the Curve, you could also enter the historical (pre-2015) data into your Scorecard.

Metrics - Pilots

Can a publication be counted under two different grants if it happened to be influenced by both, or should a single publication only be counted one time?

Yes, a publication can be counted under two different grants if it can be shown to have been influenced by both.

Do we count projects for the hub only, or do we include affiliates?

You should count any project at the hub or affiliates where funds are administered through the CTSA pilot studies program, even if the funds are solely from the institution (e.g., medical school).

Do we only report on Pilots supported by CTSA funding, or all pilots awarded in the calendar year? If a Pilot was funded by non-CTSA resources but the project uses CTSA resources or CTSA-supported expertise, do we count it as well?

NCATS has clarified that projects funded outside the CTSA pilot grant program, even if they use CTSA pilot grant program resources, are not considered CTSA pilot grants and are not included in this metric.

For the pilot funding metric, is there a set target for percentage of publications arising from pilot funding that is expected across all hubs?

There is no performance target or benchmark for this or any of the metrics.

How do other institutions track their publications? What do they rely on to report CTSA-related publications to them? Do they have a software or an electronic system that helps manage their publications? How do they search for CTSA-related publications?

It is common for authors not to cite the grant (TTC plans typically include a range of strategies to improve the rate at which they do so), so hubs often report needing several strategies in combination to fully identify CTSA-related publications. Most hubs doing this successfully survey their awardees regularly. While funds are being expended, they ask awardees to report any manuscripts submitted, accepted, or published as part of their progress reports. After funding concludes, they send surveys at regular intervals (every 6 or 12 months). Some hubs have partnered with their medical librarian colleagues, who can: a) search for publications that did cite the grant, b) teach awardees to cite the grant and help them do so appropriately, and c) help awardees upload their publications to PubMed Central.

How should we report pilot data if we were in a no-cost extension from 6/2017-6/2018? Some of the pilots were active before 6/2017 so we reported on them for CY2017, but for CY 2018, there were no active pilot awards. Our funding was renewed 6/2018.

If your program was still "running" and you had bridge funding and no pilots were awarded, please enter 0 (zero). If you "froze" the program, leave the field blank and please put a note in your TTC plan. (8/7/2019)

How would you handle a case where a Pilot award with a 1st publication began prior to 2012 but expended funds in or beyond 2012?

NCATS has clarified that if a pilot award was funded in 2011 and expended any funds during 2012, it should be included. Such a pilot enters the denominator in the first year after January 1, 2012 in which it expended funds, and enters the numerator whenever it produces the publication.
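
The denominator rule above can be sketched as follows. This is an illustrative Python fragment, not part of the Operational Guidelines, and the function name is hypothetical:

```python
def denominator_year(years_funds_expended):
    """Return the year a pilot enters the denominator: its first year
    on or after 2012 in which funds were expended, or None if it never
    expended funds in an eligible year."""
    eligible = [y for y in sorted(years_funds_expended) if y >= 2012]
    return eligible[0] if eligible else None

# Pilot awarded in 2011 that expended funds in 2011 and 2012
print(denominator_year([2011, 2012]))  # 2012
```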

If an investigator is awarded pilot funding more than once for the same project, how should resulting publications and funding be counted?

A project that receives more than one award should only be counted in the denominator once in the year of the first award. Subsequent publications or funding should also be counted only once in the numerator.
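As an illustration of this counting rule (the project names and years below are hypothetical):

```python
# Each project enters the denominator once, in the year of its first
# award, no matter how many repeat awards it receives.
awards = [
    {"project": "A", "year": 2015},
    {"project": "A", "year": 2017},  # repeat award for the same project
    {"project": "B", "year": 2016},
]

first_award_year = {}
for a in sorted(awards, key=lambda a: a["year"]):
    # setdefault keeps the earliest year and ignores later awards
    first_award_year.setdefault(a["project"], a["year"])

print(first_award_year)  # {'A': 2015, 'B': 2016}
```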

If our CTSA pilot has a PI and multiple co-investigators, should we count a subsequent publication or grant from any member of the group beyond the pilot?

Yes, if it is possibly related to this pilot. See the definition in the Operational Guideline: “A publication refers to any article on which pilot grant recipients (PI or co-PI) were authors or co-authors that was published in a peer reviewed journal. To be eligible, a publication must have a PMCID number and be considered a CTSA Program hub related publication.”

Is expended funding defined as “beginning to expend funds” or “finishing expending funds”?

Expending funds is defined as beginning to expend funds.

Is there a time limit (for example, 5 years) on tracking publications and follow-up funding for funded pilot projects?

No, there is no time limit specified in the Operational Guidelines.

Our pilot projects were not selected solely based on their likelihood of publications/grants. Do we need to re-think the way we have been implementing this program?

Other hubs have raised this issue. Not all projects selected by pilot programs are expected to publish, and for others the publication cycle may take a significant amount of time. The purpose of the Common Metrics is to be useful for strategic management at both the hub and the consortium level. This context should be reflected in your Story Behind the Curve.

Should private donations be counted under subsequent follow-on grants?

If the donated funds go through the institution, then yes, they should be counted. If an individual receives money from a private party, then no.

Should recently funded pilot awards that have not had sufficient time to publish or secure additional funding be included in the data?

If the hub has expended funds, the pilot should be included in the data. Pilots that have been awarded but have not expended funds should be excluded. (Revised 7/22/2019)

We are in the final year of our grant and our renewal submission included significant changes to the structure/ administration of our pilot program. We are unsure how to proceed with Turn the Curve plans given these changes.

In the Story Behind the Curve, your team is asked to think about factors that are influencing the value of your Common Metric (and the trend line if one is available). These include factors that are positive and negative, current and anticipated. In your instance you certainly have a number of factors; given the uncertainty, you may not yet be able to characterize all of them as positive or negative. Are there other partners who should be involved early, at least to be aware that the changes are coming? After you identify the factors and potentially some partners, you could begin talking as a team about at least one or two strategies that you might consider under the most likely of the scenarios going forward. Until you are able to begin making changes, it may be difficult to proceed; however, you may be able to identify additional data you want to collect going forward.

We have several grantees that have been awarded multiple grants for the same project. This means that the different grants share a common pool of follow-up publications. How should these be counted (numerator/denominator)?

For both the pilot publications and funding metrics, a project that receives more than one award should only be counted in the denominator once in the year of the first award. Subsequent publications or funding should also be counted only once in the numerator.

When entering the pilot awards turn-the-curve plan in the Scorecard software, do we associate with the # of pilots metric or the % of pilots metric?

It should be associated with the % of pilot research projects that have at least one publication.

When there are gaps in funding (pilots), do hubs include those years in the pilot metric, or restart their count of the cumulative number of pilots or the number of pilots published during the period? How do we handle the metrics when there is a gap period?

Hubs should not include pilot metrics for years where the hub did not receive funding.  If they were funded in 2012, 2013 and 2015, they should include only those years in the metric.  It is anticipated that their cumulative number could potentially plateau or even drop due to the gap year.  Hubs should indicate this change in their Story Behind the Curve. 
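As a hypothetical illustration of how the cumulative count covers only funded years (the award counts below are invented):

```python
# Pilots awarded per funded year; 2014 was a gap year with no funding,
# so it simply does not appear in the reported metric.
pilots_awarded = {2012: 4, 2013: 3, 2015: 5}

cumulative = {}
total = 0
for year in sorted(pilots_awarded):
    total += pilots_awarded[year]
    cumulative[year] = total

print(cumulative)  # {2012: 4, 2013: 7, 2015: 12}
```

The cumulative line plateaus across the gap, which is the pattern the answer says to explain in the Story Behind the Curve.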

Results-Based Accountability

At our hub we have been collecting the metrics we now call common metrics since 2008. We use the RBA to look at *all* of our data. For the purposes of NCATS, do we only include data from 2012, or do we include “all” our data if we have it?

You can include all of your data, but it is not a requirement. If you do track this information, it must be placed in a separate Scorecard.

What is the difference between indicators and performance measures?

Indicators are measures that help quantify the achievement of a result. Performance measures quantify how well public and private programs and agencies are working.

What is the purpose of Results-Based Accountability?

It is the strategic management framework recommended by NCATS.

What level of detail is needed when we are reporting on our Turn the Curve (TTC) plans?

The TTC plans are intended to be entered at the level of an executive summary. A complete set of TTC Plan tools is available to hubs on the CLIC website.

Metrics – Informatics

Can you add dates to the script, with the Start Date at the beginning of the reporting period and the End Date set to 12/31/2018?

They will be updated for the 2019 reporting period.

Can you tell me if the metrics will be looking at all patients (inpatient and outpatient) or are there specific guidelines for this?

The metric includes all patients in the research data warehouse: inpatient, outpatient, ER, etc.

Do hubs need to provide a date range for each of the data domains?

Yes, hubs need to enter the date range for each of the data domains. Please enter just the years (for example, 2010-2018); do not enter months, days, or any type of text.

Do the PCORnet scripts work with version 4.1?

Yes. If you experience any problems with the script, please contact common_metrics@clic-ctsa.org.

Do the scripts provide the min/max dates to use for your data inclusion table?

For the launch of the metric, the script did not include min/max dates. The scripts will be updated throughout the life of the metric; check the GitHub site for the most current script.

Does free text include imaging notes (narratives/impressions) or just clinical progress notes and discharge summaries etc.?

No. Free text includes only clinical notes: admission, progress, discharge, procedure, etc.

How do we determine the data range for the domains from the research data warehouse?

The starting date is when your hub first started collecting the data – when you started using/recording the value for the domain.

How does a hub access a script?

The scripts are available on GitHub: https://github.com/ncats/CTSA-Metrics.

How much of the long-term plans should be included in the TTC plan?

Typically the TTC plan is for one year. If you are entering longer-term goals, please identify that the strategies will be addressed over the next ## years.

Why is ICD 9/10 listed for two domains?

ICD 9/10 is used in healthcare to code both diagnoses and procedures; be sure to enter data under the appropriate performance measure.

If our numbers are near 99% for each of these measures, what would you like to see us include in our TTC since there isn't much room to improve?

For hubs that have achieved 100% or near 100%, please document the strategies that have helped your hub reach that target. You may also want to begin identifying additional metrics that would be helpful consortium-wide.

Incomplete data in the EHR can be outside the control of our CTSA. How can we remedy this issue without having the necessary authority?

Please enter this information in the Story Behind the Curve section of your TTC plan.

Is Postgres under consideration as a supported DBMS?

Hubs are welcome to use Postgres (or another data model). Hubs will need to create their own query and make sure that it matches the measures of the existing scripts. If a hub creates a new query, please consider putting it on GitHub so that other hubs can use it. [SqlRender (https://github.com/OHDSI/SqlRender) allows OMOP sites to run against several back-end databases beyond MS SQL Server and Oracle: PostgreSQL, Microsoft Parallel Data Warehouse (a.k.a. Analytics Platform System), Amazon Redshift, Apache Impala, Google BigQuery, and IBM Netezza.] At this time, scripts are available for these data models: OMOP, PCORnet, and i2b2/ACT. Hubs that use TriNetX can contact TriNetX to get the data.

Is the Observations domain a simple nominal level count of everything in the warehouse for that domain?

Regarding observations, the scripts look for general information on whether observations are being recorded at all (present/absent), in hopes that future research data warehouse characterizations can include vital signs.

Our research Informatics team has little control over what data points are filtered to the warehouse – we are at the mercy of the health system’s IT efforts.

That is not an uncommon situation. Please collect the data that you can, and then list this issue under the Negative section of your Story Behind the Curve (in your TTC plan). For your TTC plan, consider partnering with your IT team to determine if there are strategies that could be implemented to improve your access to the data.

Regarding gender, is patient reported sex counted as administrative sex?

Administrative gender (from HL7 and OMB) is a person’s gender, which describes their sex. It is a standard value set (M or F). The domain reflects whether it is recorded, as a count.

Regarding the unique patients that are included in the denominator: Would the script address patients who have died or are no longer our patients? Should we be considering that for this metric at all?

For the launch of the Informatics metric, it is the percentage of people who had those values recorded at the time they were being seen.