Clinical Indicators

Measurement of clinical quality and service provision enables health practitioners and managers to be confident that the health programs they deliver and the system within which they operate are both efficient and effective.

  • Efficient (best use of available resources) – Consider measures such as:
    • % eligible patients referred to a program
    • Program expenditure within allocated budget (service indicator)
  • Effective (achieves intended outcomes) – Consider measures such as:
    • % patients physically active for 30 minutes on most days (behavioural)
    • % patients whose functional status is assessed (clinical)
    • % patients with improvement in a quality of life measure (health outcome)

Effectiveness can be evaluated both for internal quality improvement and for external comparison ('benchmarking'):

  • Internal evaluations – Provide measurable ways of assessing progress towards a specific local goal
  • External evaluations – Compare a program outcome with the aggregate outcomes of other programs.

Using key performance indicators involves the selection, collection, analysis and reporting of standardised data.

Measures selected should be:

  • Clinically relevant
  • Reliable
  • Valid
  • Responsive
  • Easy to collect
  • Easy to interpret


Overall program effectiveness is determined by aggregating indicators from the various levels of service provision and delivery.

Cardiac performance indicators are commonly categorised into four domains (Sanderson, Southard & Oldridge, 2004):

  • Clinical
  • Health
  • Behavioural
  • Service

Selecting an assortment of measures across all domains will provide sufficient data for evaluating the effectiveness of the program in delivering each component of care, as well as assessing overall program effectiveness.

The four domains:

  • Health – Global outcomes, including morbidity, mortality, health status and health-related quality of life
  • Clinical – Clinical status of the patient at entry into a program, measured at regular intervals thereafter
  • Behavioural – Patient’s ability to make recommended lifestyle changes that potentially lead to goal achievement in the clinical and health domains
  • Service – Effectiveness of the program structure and methods for delivering services

The Performance measures section describes various types of program evaluation indicators, categorised for easy reference.

Tip: Always collect patient demographic and psychosocial characteristic data, such as disease severity, clinical status, age and treatment setting. This is crucial if you wish to later attribute any differences in program outcomes to quality of care, rather than to the underlying difference in patient characteristics or program setting.

Performance evaluation has no ‘one size fits all’ guide. The level of evaluation depends on many factors and should be designed to suit the program, preferably using a combination of qualitative and quantitative methods.

Data collection tips:

  • Choose only the one or two indicators most pertinent to current information needs
  • Work out the simplest means to collect data. Are data routinely collected and recorded somewhere else, such as in a patient management system? The medical records department or health information service may be able to help locate patient records
  • Consider a brief ‘snapshot’ audit over a set time period, e.g., what proportion of patients with a myocardial infarction (MI) received a personalised care plan on discharge from hospital?
  • Where possible, record data in a centralised patient or service data system
  • An ideal indicator defines criteria for patient inclusion and exclusion. It may not be possible to collect all data in routine practice
  • Collection of performance measures should be integrated into routine clinical practice as evidence of a commitment towards continuous quality improvement
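The snapshot audit suggested above amounts to a simple proportion over records that meet defined inclusion criteria. The sketch below shows one way this could look in Python; the record structure, field names and dates are all hypothetical, and a real audit would draw on the patient management system or medical records mentioned earlier.

```python
# Illustrative sketch (hypothetical field names and data): a 'snapshot'
# audit computing the proportion of MI patients discharged in a set
# period who received a personalised care plan.
from datetime import date

records = [
    {"diagnosis": "MI",     "discharged": date(2024, 3, 4),  "care_plan": True},
    {"diagnosis": "MI",     "discharged": date(2024, 3, 11), "care_plan": False},
    {"diagnosis": "angina", "discharged": date(2024, 3, 12), "care_plan": True},
    {"diagnosis": "MI",     "discharged": date(2024, 5, 2),  "care_plan": True},
]

audit_start, audit_end = date(2024, 3, 1), date(2024, 3, 31)

# Inclusion criteria: MI diagnosis, discharged within the audit window.
eligible = [r for r in records
            if r["diagnosis"] == "MI"
            and audit_start <= r["discharged"] <= audit_end]

with_plan = sum(r["care_plan"] for r in eligible)
proportion = 100 * with_plan / len(eligible) if eligible else 0.0
print(f"{with_plan}/{len(eligible)} eligible patients "
      f"({proportion:.0f}%) received a care plan")
```

Note that the angina patient and the May discharge are excluded before the proportion is calculated, reflecting the point above that an ideal indicator defines its inclusion and exclusion criteria explicitly.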

When interpreting results, consider:

  • Was the program delivered as intended? For example, when evaluating a multi-site health program involving several educators, in order to be confident that the program was the key to your positive results, check that all educators provided the same program (number and duration of sessions, education materials, etc.)
  • Are the results valid (i.e., do they represent the truth) and reliable (i.e., would you expect the same results in the next group taking part in the program)?
  • Could any other factors explain the results? In clinical practice, outcomes are influenced by variables other than the care provided to patients. Ideally, adjust for these variables (such as age) when comparing outcomes between patients or between programs. Otherwise, observed changes may be attributable not to the quality of care, but to underlying differences in patient characteristics or provider setting. For example, if your session boasted a 100% participation rate, but was also the only session providing lunch, you could not be sure whether the clients were attending because of the quality of the program or the free sandwiches!
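The point about adjusting for patient characteristics can be made concrete with a small stratified comparison. In this hypothetical sketch, program A's crude improvement rate looks twice as high as program B's, yet within each age stratum the two programs perform identically: the crude difference reflects age mix, not quality of care.

```python
# Illustrative sketch with hypothetical data: each tuple is
# (program, age_group, improved_quality_of_life).
patients = [
    ("A", "<65", True), ("A", "<65", True), ("A", "65+", False),
    ("B", "<65", True), ("B", "65+", False), ("B", "65+", False),
]

def rate(program, age_group=None):
    """% of a program's patients with an improved outcome,
    optionally restricted to one age stratum."""
    group = [improved for prog, age, improved in patients
             if prog == program and (age_group is None or age == age_group)]
    return round(100 * sum(group) / len(group), 1) if group else None

# Crude rates differ (age mix); stratum-specific rates do not (quality).
print("crude:", rate("A"), rate("B"))   # 66.7 vs 33.3
for stratum in ("<65", "65+"):
    print(stratum, rate("A", stratum), rate("B", stratum))
```

Real risk adjustment uses more formal methods (e.g., regression), but the principle is the same: compare like with like before attributing a difference to the program.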

For more information on how to choose and use indicators for program evaluation, see the Good Indicators Guide.

  • Sanderson BK, Southard D, Oldridge N. Outcomes evaluation in cardiac rehabilitation/secondary prevention programs: improving patient care and program effectiveness. J Cardiopulm Rehabil 2004;24:68-79.
