Clinical Indicators

  • Measurement of clinical quality and service provision enables health practitioners and managers to be confident that the health programs they deliver and the system within which they operate are both efficient and effective.
  • Efficient (best use of available resources) – Consider measures such as:
    • % eligible patients referred to a program
    • Program expenditure within allocated budget (service indicator)
  • Effective (achieves intended outcomes) – Consider measures such as:
    • % patients physically active for 30 minutes on most days (behavioural)
    • % patients whose functional status is assessed (clinical)
    • % patients with improvement in a quality of life measure (health outcome)

Effectiveness can be evaluated both for internal quality improvement and for external comparison ('benchmarking'):

  • Internal evaluations – Provide measurable ways of assessing progress towards a specific local goal
  • External evaluations – Compare a program outcome with the aggregate outcomes of other programs.

Overall program effectiveness is determined by aggregating various levels of service provision and delivery. Indicators measure different domains, such as: health outcomes (mortality and readmission); patient-reported outcomes and quality of life; and clinical processes (adherence to the latest clinical guidelines).

Selecting an assortment of measures across several domains will provide sufficient data for evaluating the effectiveness of the program in delivering each component of care, as well as assessing overall program effectiveness.

The table below shows some examples of indicators suitable for cardiac rehabilitation (CR) or heart failure (HF) disease management programs.  Please check major guidelines for your region for suggested indicators (see references below).

In terms of calculating each indicator:

  • The denominator for each indicator is the number of patients in the program over a given time period
  • All indicators below describe the numerator, and should be expressed as a percentage of the denominator
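The calculation described above can be sketched in code. This is a minimal illustration, not part of any guideline; the figures and function name are hypothetical:

```python
def indicator_percentage(numerator: int, denominator: int) -> float:
    """Express an indicator numerator as a percentage of the program denominator.

    denominator: number of patients in the program over the reporting period.
    numerator: number of those patients meeting the indicator criterion.
    """
    if denominator == 0:
        raise ValueError("No patients in the program for this period")
    return 100.0 * numerator / denominator

# Hypothetical example: 48 of 60 eligible patients were referred to a
# rehabilitation or support program in the reporting period.
print(indicator_percentage(48, 60))  # 80.0
```

The same function applies to every indicator in the table below; only the criterion defining the numerator changes.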
Indicators (% of eligible patients):
Access and utilisation of services
  • Referred to an appropriate rehabilitation or support program
  • Whose referral was followed up and who were contacted within a specified time period
  • Completing a programmed intervention
  • Physically active for 30 minutes daily or adhering to a recommended level of exercise
  • Smokers who stop smoking
  • With a personalised exercise program
  • Assessed on a standardised instrument of exercise capacity (such as the 6-minute walk test)
  • Prescribed appropriate medications at hospital discharge/at clinical reviews‡
  • Medications titrated to maximal tolerated doses‡
  • Screened for depression†
  • Assessed on a quality of life measure†
Self-care and education
  • Assessed for relevant lifestyle risk factors (e.g., smoking, poor nutrition, salt intake, alcohol intake, physical inactivity, unhealthy body weight)
  • With a personalised written action plan
Additional assessment as required
  • Assessed for health literacy, cognitive function and mood†
  • With an advance care plan (see Advance Care Planning Australia for further information)
Health outcomes at specified time periods (1 to 12 months)
  • Deaths within X days of hospital discharge (cardiac and non-cardiac causes)
  • X-day re-admission rates (all cause, cardiac-specific, heart failure related)
  • Change in quality of life over X months measured on a validated tool (5% difference in some measures is clinically significant)
  • Change in NYHA class over X months (HF only)
  • Change in ability to self-manage chronic disease measured on a validated tool

Key: † Assess using an appropriate instrument that is reliable and validated for clinical group, where available; ‡ Except where contraindicated or not tolerated.

Table: Examples of measures of cardiac rehabilitation and heart failure programs

Indicators specific for heart failure and cardiac rehabilitation are found in recent guidelines (Atherton et al. 2018; Bonow et al. 2012; Gallagher et al. 2020; Thomas et al. 2018; BACPR 2017 — see references below).

Tip: Always collect patient demographic, clinical and psychosocial data, such as disease severity, clinical status, age and treatment setting. This is crucial if you wish to later attribute any differences in program outcomes to quality of care, rather than to underlying differences in patient characteristics or program setting.

Performance evaluation has no ‘one size fits all’ guide. The level of evaluation depends on many factors and should be designed to suit the program, preferably using a combination of qualitative and quantitative methods.

Data collection tips:

  • Choose only the one or two indicators most pertinent to current information needs
  • Work out the simplest means to collect data. Are data routinely collected and recorded somewhere else, such as in a patient management system? The medical records department or health information service may be able to help locate patient records
  • Consider a brief ‘snapshot’ audit over a set time period, e.g., what proportion of patients with a myocardial infarction (MI) received a personalised care plan on discharge from hospital?
  • Where possible, record data in a centralised patient or service data system
  • An ideal indicator defines criteria for patient inclusion and exclusion. It may not be possible to collect all data in routine practice
  • Collection of performance measures should be integrated into routine clinical practice as evidence of a commitment towards continuous quality improvement
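The snapshot audit suggested above can be sketched as a few lines of code. The records, field names and diagnoses here are purely illustrative, standing in for whatever your patient management system actually exports:

```python
# Hypothetical discharge records from a set audit period; field names are
# illustrative, not from any real patient management system.
records = [
    {"diagnosis": "MI", "care_plan_on_discharge": True},
    {"diagnosis": "MI", "care_plan_on_discharge": False},
    {"diagnosis": "HF", "care_plan_on_discharge": True},
    {"diagnosis": "MI", "care_plan_on_discharge": True},
]

# Denominator: patients meeting the inclusion criterion (MI diagnosis).
mi_patients = [r for r in records if r["diagnosis"] == "MI"]

# Numerator: those who received a personalised care plan on discharge.
with_plan = sum(r["care_plan_on_discharge"] for r in mi_patients)

pct = 100.0 * with_plan / len(mi_patients)
print(f"{pct:.1f}% of MI patients received a personalised care plan")
```

Defining the inclusion and exclusion criteria in code, as the list comprehension does here, also documents exactly who was counted in the denominator.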

When interpreting results, consider:

  • Was the program delivered as intended? For example, when evaluating a multi-site health program involving several educators, in order to be confident that the program was the key to your positive results, check that all educators provided the same program (number and duration of sessions, education materials, etc.)
  • Are the results valid (i.e., they represent the truth) and reliable (i.e., you would expect to get the same results in the next group taking part in the program)?
  • Could any other factors explain the results? In clinical practice, outcomes are influenced by variables other than the care provided to patients. Ideally, adjust for these variables (such as age) when comparing outcomes between patients or between programs. Otherwise, observed changes may be attributable not to the quality of care, but to underlying differences in patient characteristics or provider setting. For example, if your session boasted a 100% participation rate, but was also the only session providing lunch, you could not be sure whether the clients were attending because of the quality of the program or the free sandwiches!
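One common way to adjust for a variable such as age when comparing programs is direct standardisation: weight each program's stratum-specific outcome rates by a common reference age distribution, so differences in age mix alone cannot drive the comparison. The figures, age bands and reference weights below are entirely hypothetical:

```python
# Assumed reference age mix shared by both programs (illustrative weights).
reference_weights = {"<65": 0.4, "65-79": 0.4, "80+": 0.2}

# Hypothetical stratum-specific readmission rates for two programs.
program_a = {"<65": 0.10, "65-79": 0.20, "80+": 0.35}
program_b = {"<65": 0.12, "65-79": 0.18, "80+": 0.30}

def standardised_rate(stratum_rates: dict, weights: dict) -> float:
    """Weighted average of stratum-specific rates using the reference age mix."""
    return sum(stratum_rates[group] * w for group, w in weights.items())

# Age-standardised rates are now comparable despite differing age profiles.
print(round(standardised_rate(program_a, reference_weights), 3))  # 0.19
print(round(standardised_rate(program_b, reference_weights), 3))  # 0.18
```

The same pattern extends to other stratifying variables (disease severity, treatment setting), though in practice formal risk-adjustment models are often used instead.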
References

  • Atherton JJ, Sindone A, De Pasquale CG, et al. National Heart Foundation of Australia and Cardiac Society of Australia and New Zealand: Australian clinical guidelines for the management of heart failure 2018. Med J Aust. 2018.

  • Bonow RO, Ganiats TG, Beam CT, et al. ACCF/AHA/AMA-PCPI Heart failure performance measures. J Am Coll Cardiol. 2012;59:1812-32.

  • Gallagher R, Thomas E, Astley C, et al. Cardiac Rehabilitation Quality in Australia: Proposed National Indicators for Field-Testing [published online ahead of print, 2020 Apr 30]. Heart Lung Circ. 2020;S1443-9506(20)30109-8. doi:10.1016/j.hlc.2020.02.014

  • Thomas RJ, Balady G, Banka G, et al. 2018 ACC/AHA Clinical Performance and Quality Measures for Cardiac Rehabilitation: A Report of the American College of Cardiology/American Heart Association Task Force on Performance Measures. J Am Coll Cardiol. 2018;71(16):1814-37.

  • The BACPR Standards and Core Components for Cardiovascular Disease Prevention and Rehabilitation 2017 (3rd Edition).