Abstract
PURPOSE We wanted to demonstrate a method for calculating the relative complexity of ambulatory clinical encounters.
METHODS Measures of complexity should reflect the complexity of the typical encounter and across encounters. If inputs represent the information transferred from the patient to the physician, then inputs include history, physical examination, testing, diagnoses, and patient demographics. Outputs include medications prescribed and other therapies used, including education and counseling, procedures performed, and disposition. The complexity of each input/output is defined as the mean input/output quantity per clinical encounter weighted by its interencounter diversity (range of possibilities used) and variability (visit-to-visit change). In complex systems, as the information in the input increases linearly, the complexity of the system increases exponentially. To assess the impact of the complexity of the encounter on the physician, we adjusted the estimated complexity by the duration of visit.
RESULTS Using the 2000 National Ambulatory Medical Care Survey (NAMCS) database, we calculated input and output complexities for 3 specialties. Construct validity was affirmed by comparing the relative rankings of complexity against relative rankings using other complexity-related measures. Although total relative complexity was similar for family medicine (44.04 ± 0.0024 standard error [SE]) and cardiology (42.78 ± 0.0004 SE), when adjusted for duration of visit, family medicine had a greater complexity density per hour (167.33 ± 0.0095 SE) than either cardiology (125.4 ± 0.0117 SE) or psychiatry (31.21 ± 0.0027 SE).
CONCLUSIONS This method estimates complexity based on the amount of care provided weighted by its diversity and variability. Such estimates could have broad use for interphysician comparisons as well as longitudinal applications.
 Systems theory
 nonlinear dynamics
 ambulatory care
 process assessment (health care)
INTRODUCTION
Since the implementation of resource-based relative value scales (RBRVSs), primary care physicians have been fighting for a system of compensation that recognizes the complexity of the primary care encounter. In a system where sicker equals more difficult to treat, the struggle has been an uphill fight. Ultimately, this mentality has its roots in the reductionistic, cause-and-effect view of illness taught in specialty-oriented medical education.
This system works well for very ill hospitalized patients, the home of most specialists. In this situation, illness tends to display linear dynamics with its predictability,^{1} diagnostic tests have greater specificity,^{2} and patient behavior is controlled. Thus, diagnosis and management are relatively straightforward. In ambulatory care settings, things are different. Here you find multiple agents (patient, family, friends, physician, office staff) interacting with the patient’s multiple, less-defined illnesses, which display the unpredictability of chaotic or random dynamics, less-specific diagnostic tests, and variable patient behavior. No longer is the system simply the sum of its parts.^{3}^{,}^{4} The constraints of the illness and the hospital setting transform the high-complexity outpatient into the low-complexity inpatient.
EVIDENCE OF COMPLEXITY IN PRIMARY CARE
One outcome of the Future of Family Medicine project was the realization that family physicians recognize the complexity of care they provide.^{5} In 1996–1997, 24% of primary care physicians reported that the scope of care they were expected to provide was more than it should be, and 30% believed that it had increased in the previous 2 years.^{6} This complexity in family medicine encounters may explain the high intraphysician variability in patient management observed among general practitioners^{7} as they adjust the care they provide to the complexity of the clinical situation.^{8}
In addition, the complexity of health care is increasing.^{9} Between 1978 and 1994, the complexity of primary care increased as the number and variety of preventive services delivered increased, demographic diversity of patients increased, and medications commonly used changed.^{10} In fact, the complexity of primary care should continue to increase over the near future. Not only has the health care system grown more complex in terms of its payers, practice settings, technology, and medications, but the information explosion and demands for accountability will fuel the complexity fire. But can complexity be estimated?
ESTIMATING COMPLEXITY
Although methods for estimating complexity of ambulatory care do not currently exist, there are related measures used for risk adjustment; such case-mix measures have been used to compare patients seen by primary care physicians with patients seen by specialty physicians. None of these measures capture all of the relevant dimensions, including health status, demographics, health behavior, psychosocial issues, and social environment.^{11} For example, the Ambulatory Care Group system uses diagnoses, chronicity, and the minor-vs-major distinction to create ambulatory diagnostic groups (ADGs). These ADGs are combined with the patient’s age and sex to create 51 ambulatory care groups, which predict disease course, hospitalization, referral, disability, and life expectancy.^{12} Similarly, the Ambulatory Severity Index (ASI) combines biophysical and behavioral dimensions with severity of illness. In addition, the ASI considers complexity based on urgency, complications, functional status, social situation, compliance, and communication.^{13} Health status measures of risk adjustment consider demographics, diagnoses, and medications, in addition to health status.^{11} Although patient-centered measures of risk adjustment are related to the complexity of care, they are limited and do not fully reflect the complexity of practice.
CURRENT ESTIMATES OF COMPLEXITY
Methods exist for estimating the complexity of other systems. If we define the complexity of a system as the amount of information needed to describe it or its behavior,^{14} then there are currently 3 approaches used to estimate complexity. First, natural representations of the information involved (eg, DNA content) have been used to estimate the bits of information encoded within the system. Second, the amount of language needed to describe a system has been used as a measure of complexity (eg, 1 character of language = 1 bit of information). Finally, complexity has been estimated by counting the components of the system and all of their possible states.
There are 3 inherent problems with estimating complexity, however. First, there may be difficulty in counting all of the possible states of all of the relevant components. Second, any lack of knowledge of the full behavior of the system will result in an underestimate of its complexity. Finally, the framework in which the estimate is made must be appropriate for the behavior. This framework includes not only how the behaviors are measured, but the time frame over which they are measured. The shorter the time frame, the less likely you are to detect cyclic patterns that represent order and decrease complexity. Because of these limitations, the value of estimating complexity is not in the accuracy of a particular estimate, but rather in the estimation of complexity relative to another system. Thus, estimating relative complexity in 2 similar systems using the same methods is valid.^{14}
The purpose of this article is to demonstrate a method for calculating the relative complexity of ambulatory clinical encounters and to illustrate this complexity by comparing the complexity in practice of 3 specialties: family medicine, cardiology, and psychiatry.
METHOD FOR ESTIMATING THE COMPLEXITY OF AMBULATORY CARE
Generalists and specialists differ in the breadth of care provided and their level of differentiation.^{12} In addition, error rates (a measure of complexity) are associated not only with volume, but with diversity, variability, and time limitations as well.^{15} Similarly, Boisot and Child^{16} suggest that complexity includes both cognitive complexity, which focuses on the content of information flowing, and relational complexity, which focuses on the interactions by which the information flows between agents. Hence, cognitive complexity is measured in counts, while relational complexity is measured in variability. Any measure of complexity must therefore be able to reflect the breadth of problems and the range of complexity seen by the primary care physician. In addition to the parameters considered in patient-centered approaches to risk adjustment, a measure of complexity should include the diversity of symptoms and diagnoses encountered.^{11}
The focus of such a measure should be on relationships among the components of the system, because relationships are far more important to the complexity of a system than are the components themselves.^{17} Thus, the clinical encounter should be the focus of the measure of complexity because it represents the point of decision making.^{3} Information theory suggests that errors in transmission of information (a measure of complexity) depend upon the probability distribution of inputs (more generalized inputs of uniform probability imply more information), the nature of the channel or interaction, and the complexity of the decision criteria.^{18} Because a specialty is not defined by a single encounter, however, and because complexity in relationships often reflects the frequency with which change occurs, the measure of complexity needs to include interencounter variation as well. Whereas the complexity of an encounter includes the number of events occurring and the amount of information transferred, the complexity of a specialty or practice needs to include the diversity and variability of events across encounters. Just as the complexity of a situation is the sum of the complexity of the event and the average complexity encountered,^{18}^{,}^{19} our measure of complexity should reflect the complexity of the typical encounter and the complexity across encounters.
As with any system, clinical encounters can be divided into inputs and outputs. If inputs represent the information transferred from the patient to the physician, as well as the diagnosis, then inputs include history, physical examination, testing, diagnoses, and patient demographics. Outputs include medications prescribed, other therapies used (including education and counseling), procedures performed, and patient disposition.
Computation of Complexity
Complexity measures are computed in 3 steps. First, the complexity of each input/output is defined as the mean input/output per clinical encounter weighted by its interencounter diversity (range of possibilities used) and variability (visit-to-visit change). Then, once the complexity of each component has been calculated, the total input and total output complexities are calculated by summing the component complexities. Finally, because there is a logarithmic relationship between input and output, total complexity is the product of the output complexity and 2 raised to the power of the input complexity. Details about the computational approach to the measure of complexity are presented in the Supplemental Appendix, available online at http://annfammed.org/cgi/content/full/8/4/341/DC1.
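The 3 steps above can be sketched as follows. This is an illustrative sketch only: the function names are invented, and the simple product used for the diversity and variability weighting is an assumption (the exact weighting scheme is given in the Supplemental Appendix).

```python
from statistics import mean

def component_complexity(quantities, diversity, variability):
    """Step 1 (assumed form): mean input/output quantity per clinical
    encounter, weighted by interencounter diversity and visit-to-visit
    variability."""
    return mean(quantities) * diversity * variability

def total_complexity(input_complexities, output_complexities):
    """Steps 2 and 3: sum the component complexities for inputs and
    outputs separately, then, reflecting the logarithmic relationship
    between input and output, multiply the total output complexity by
    2 raised to the power of the total input complexity."""
    input_c = sum(input_complexities)
    output_c = sum(output_complexities)
    return output_c * 2.0 ** input_c
```

For example, component complexities summing to an input complexity of 4 and an output complexity of 2 yield a total complexity of 2 × 2⁴ = 32.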
Characteristics of Calculated Complexity
There are no particular units to this calculated complexity; they are “units of complexity.” Thus, its value is in comparing the complexity of ambulatory care provided by 2 or more specialties or changes in complexity of care provided by 1 specialty over time. In addition, the more complex the system, the more fundamental are its estimates.^{14} This fundamental nature suggests that estimates of complexity are generalizable in complex systems. Thus, we would expect that the patterns seen in our estimate of relative complexities would hold to similar estimates using other databases and other physicians of the same specialties.
Because complexity parameters are not computed using the individual practitioner as a unit of measurement, there are no corresponding direct measures of parameter variation. Bootstrap procedures were used to provide estimates of error for selected measures of complexity (Table 1⇓). The bootstrap method provides estimates of parameter variability by resampling observations from an empirical distribution. The sampling is conducted with replacement and the parameters recalculated with each random selection of cases.^{20} The distribution of random samples provides the basis for variance estimations.
The combined sample sizes of the 3 specialties from the 2000 NAMCS are relatively large; the sample of 6,561 patient visits consisted of 3,344 family medicine visits, 1,650 cardiology visits, and 1,567 psychiatric visits. By resampling each group 500 times, drawing N − 1 observations with replacement each time, the complexity parameters were estimated using 4 different sample-size selection schemes. The 4 sampling proportions were based on the total sample (proportion = 1.0), one-half (proportion = 0.5), one-quarter (proportion = 0.25), and one-eighth (proportion = 0.125) of the total (Figure 1⇓). Figure 1⇓ presents variance as 2 standard deviations around the mean, because 95% confidence intervals were so tight for even the smallest proportional sample size that their graphical change with sample size was lost.
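A minimal sketch of this bootstrap scheme, assuming a generic complexity statistic and hypothetical encounter records (the function and parameter names are illustrative, not the authors' code):

```python
import random
import statistics

def bootstrap_se(encounters, statistic, n_resamples=500, proportion=1.0):
    """Estimate the standard error of a complexity statistic by drawing
    N - 1 observations with replacement from the (proportionally scaled)
    sample, recomputing the statistic for each resample, and taking the
    standard deviation of the resampled estimates."""
    n = max(2, int(len(encounters) * proportion))
    estimates = [statistic(random.choices(encounters, k=n - 1))
                 for _ in range(n_resamples)]
    return statistics.stdev(estimates)
```

The 4 sampling schemes then correspond to calling this with proportion = 1.0, 0.5, 0.25, and 0.125.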
CRITIQUE OF ESTIMATION METHOD
Validity of Complexity Estimates
Because complexity estimates provide relative measures of complexity, validation procedures need to assess the validity relative to other assessments rather than against a reference standard. Table 2⇓ displays the total input and total output complexities for family medicine, cardiology, and psychiatry. Construct validity was assessed by comparing the relative rankings of complexity against relative rankings using other complexityrelated measures.
Based upon the capacity of short-term memory, physicians may be able to attend to a maximum of 7 ± 2 clinical findings simultaneously^{14}; realizing the limitations of extending this ability to multiple diagnoses or management options, inputs and outputs involving more than 9 items could be defined as complex. Table 2⇑ displays the proportion of clinical encounters that involved at least 9 inputs or 9 outputs. Although less than 1% of the outputs are considered complex by this method, the relative ranking of input complexities is similar to that which our method found.
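This threshold rule reduces to a simple proportion. A sketch, with invented item counts for illustration:

```python
def proportion_complex(item_counts, threshold=9):
    """Fraction of encounters whose input (or output) item count reaches
    the short-term-memory-derived threshold of at least 9 items."""
    return sum(1 for n in item_counts if n >= threshold) / len(item_counts)

# Hypothetical input counts for 4 encounters: 2 of 4 reach the threshold.
share = proportion_complex([3, 5, 9, 12])  # 0.5
```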
In addition, Bar-Yam^{14} believes that acute situations should be more complex because of their lack of equilibrium, and situations of greater complexity should result in more diagnostic uncertainty.^{2} As Table 2⇑ shows, the rankings of input complexity in family medicine, cardiology, and psychiatry found by our methods match the proportion of acute problems seen and diagnostic uncertainty reported. Hence, these complexity-related measures found the same interspecialty relationships as predicted by our complexity estimates.
COMPLEXITY DENSITY
The estimate of complexity of ambulatory care presented above is a measure of the complexity of the clinical encounter based on the quantity of information and events, diversity, and variability. Just as the capacity of a channel to deal with the amount of transmitted information is related to the transmission time,^{18} however, so, too, is dealing with complexity time-dependent^{16}; the more time you have, the more likely you are to observe any cyclic behaviors, which decrease complexity.^{14} Thus, given a fixed complexity, the shorter the duration of visit, the more complex the encounter will seem, and the greater the burden felt by the physician. The more complex the medical problem dealt with, the longer the duration of visit.^{21} In fact, inadequate time is often cited as a cause of medical errors,^{15}^{,}^{22}^{,}^{23} one measure of the complexity of a system.^{14}
If we are to assess the impact of the complexity of the encounter on the physician, we need to adjust the estimated complexity for the duration of visit. Temte et al^{24} have suggested the encounter problem density (number of clinical problems addressed per hour) as a measure of complexity. Although simpler to measure, such assessments do not address the diversity and variability of patients and problems, which also contribute to the mental burden for the physician. For our purposes, the estimated complexity is divided by the duration of visit to obtain the complexity per minute. An hourly complexity density estimate^{24} is derived by multiplying the complexity per minute by 60 (Table 3⇓).
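The conversion is simple enough to state directly; the function and argument names here are illustrative:

```python
def complexity_density_per_hour(encounter_complexity, duration_minutes):
    """Divide the estimated encounter complexity by the duration of visit
    (in minutes) to get complexity per minute, then scale by 60 to obtain
    an hourly complexity density."""
    return encounter_complexity / duration_minutes * 60

# A hypothetical 30-minute encounter with estimated complexity 20
# yields a complexity density of 40 per hour.
density = complexity_density_per_hour(20, 30)  # 40.0
```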
POTENTIAL APPLICATIONS
How might our ability to estimate relative complexity be useful in understanding or investigating current health services quandaries? First, interspecialty comparisons of quality of care are appearing in the literature with increasing frequency. For example, both cardiologists^{25} and psychiatrists^{26}^{–}^{28} have compared the quality of care provided for specific disorders by their specialists with the quality of care provided by primary care physicians. Typically, these studies suggest that primary care physicians are not providing the same quality of care that is provided by the specialists; however, outcomebased quality of care studies can be misleading.^{11} Although not the only possible reason, such interspecialty differences in the level of care provided may be explainable in terms of differences in the complexity of care and its burden on physicians. Similarly, as medical errors receive increasing attention, there is growing evidence that many medical errors originate from systems problems.^{29}^{,}^{30} Just as the rate of systems errors reflects the complexity of the system,^{14} so too the increase in medical errors seen may be due to the complexity of the health care system.^{31} In fact, interspecialty differences in quality of care and medical errors may reflect the higher complexity inherent in family medicine.^{32} In addition to interspecialty comparisons, this estimation of complexity could be adapted to estimate interpractitioner complexity within a discipline.
With evidence-based medicine receiving growing attention, the use of practice guidelines is seen as a compromise between the need for evidence-based medicine and the pressures of the information explosion. Primary care physicians do not readily use guidelines, however, perhaps because guidelines do not reflect the complexity of their patients. In addition, the nonlinearity of the illness and the behavior of primary care patients does not lend itself to the predictable responses implied by practice guidelines.^{8} Thus, being able to study the complexity of care may allow us to better understand when and why practice guidelines are used by primary care physicians.
Another recent observation is that, during the past decade, physicians have perceived that the duration of visit is decreasing and that they have inadequate time during patient encounters.^{33}^{–}^{35} This perception is particularly strong among primary care physicians.^{34} Yet, the average time per visit has actually increased during this decade.^{10}^{,}^{33}^{,}^{36}^{–}^{38} One explanation for this consistent misperception by physicians is that the complexity of care is increasing.
Finally, certain physician groups are beginning to raise concerns about physician burnout. As yet, studies have not investigated whether this burnout is related to the complexity of care or, more specifically, to the complexity density as a measure of physician burden. A recent study,^{39} however, found that perceived complexity of care was consistently related to physician dissatisfaction among primary care physicians. Such studies could explain some of these trends in medical care and have health services implications.
There is increasing recognition that the complexity of medical care has important implications for health policy. Although risk adjustment methods exist, measures of complexity relevant to clinical care have not been developed. We developed a new method for estimating relative complexity of clinical encounters based on the amount of care provided weighted by its diversity and variability, which is appropriate for use with national databases. Such estimates of clinical complexity could have broad use for interspecialty, interpractice, and interphysician comparisons, as well as longitudinal applications.
Footnotes

Conflicts of interest: none reported

Funding support: This project was supported in part by a grant from the Texas Academy of Family Physicians Foundation
This project was presented in part at the annual meeting of the Society for Chaos Theory in Psychology and Life Sciences in Boston, Massachusetts on August 8–10, 2003.
 Received for publication April 16, 2003.
 Revision received December 16, 2009.
 Accepted for publication December 31, 2009.
 © 2010 Annals of Family Medicine, Inc.