After creating the Medicare program in 1965, Congress mandated efforts for organized quality assurance for Medicare beneficiaries. Successive federal activities have included Experimental Medical Care Review Organizations (EMCROs), Professional Standards Review Organizations (PSROs), and Utilization and Quality Control Peer Review Organizations (PROs). Antedating these efforts were “foundations for medical care,” a physician-based movement begun in California in the 1950s. These community-centered organizations of participating physicians monitored the use and quality of both hospital and ambulatory care services before payment by fiscal intermediaries (FIs) (Egdahl, 1973; Harrington, 1973; Lohr et al., 1981).

These activities (and the growth of the Joint Commission on Accreditation of Hospitals) were the most visible examples of a grassroots professional interest in the quality of medical care delivery that emerged following the Second World War. In contemplating the structure and purposes of a quality assurance system for Medicare that will carry into the 21st century, one should realize that organized quality assurance arose initially as a professional effort and that it has a modern-day history half a century old.

The nation's health care providers have devised many ways to assure the quality of health care.1 The Medicare program has used two main approaches. One is Medicare Conditions of Participation and the closely linked accreditation activities of the Joint Commission on Accreditation of Healthcare Organizations. The second is the program's successive Medicare quality review programs, which are discussed in this chapter. The sources of information include literature reviews, extensive site visits around the country, documents and staff briefings from the Health Care Financing Administration (HCFA), interviews with representatives of relevant organizations and institutions, and personal experience of members of the IOM committee.

The federal programs were fashioned to address incentives of the prevailing financing mechanisms for health care. For instance, when cost-based reimbursement was the predominant mode of hospital payment, utilization review to detect overuse of care had a key place in peer review efforts such as the PSRO program. Prospective payment prompts more attention to underuse and quality of care, as seen in recent activities of the PRO program. Nevertheless, utilization review and quality assurance are closely linked activities; both have been and will continue to be important in any program intended to assure the quality of care for the elderly. In designing the strategy for a new quality assurance program for Medicare (Chapter 12), we hope to create a program with the flexibility and appropriate tools that can respond to whatever incentives emerge from changing Medicare financing and reimbursement schemes. To lay the groundwork for such a program, we here examine past and existing quality assurance efforts for Medicare.

EXPERIMENTAL MEDICAL CARE REVIEW ORGANIZATIONS

Experimental Medical Care Review Organizations (EMCROs) were voluntary associations of physicians who reviewed inpatient and ambulatory services paid for by Medicare and Medicaid. The program, in existence between 1970 and 1975, was administered and funded by the National Center for Health Services Research and Development. Far more a research and development effort than an operational one, the EMCRO mission was to encourage physicians to work together and to upgrade methods for assessing and assuring quality of care. EMCROs were concerned with both inpatient and ambulatory care.

Although no comprehensive evaluation of the EMCRO program was ever done, analyses of data and activities of the New Mexico EMCRO documented important impacts of the program on the appropriate use of injectable drugs and on the quality of ambulatory care in the state's Medicaid program (Lohr et al., 1980). Those results were obtained through a dual approach that emphasized education (development and promulgation of injection guidelines) and economic sanctions (denial of payments for inappropriate services). EMCROs were essentially a prototype for PSROs, established about midway through the EMCRO program.

PROFESSIONAL STANDARDS REVIEW ORGANIZATIONS

Purpose and Structure

Professional Standards Review Organizations (PSROs) were established by the Social Security Amendments of 1972 (P.L. 92–603) to assure that physicians and institutions met their Medicare obligations; such obligations required that services provided or proposed to be provided to Medicare beneficiaries were medically necessary, of a quality that met local professionally recognized standards, and were provided in the most economical manner consistent with quality of care (Goran et al., 1975; Blum et al., 1977). Congress intended PSROs to lower public expenditures for medical care, to counter fee-for-service incentives toward overuse of services, and to help to ensure the quality of care.

PSROs were voluntary, not-for-profit, local physician organizations; each PSRO area covered approximately 35 hospitals and 2,000 to 3,000 physicians, on average, although the range was quite broad. The original PSRO areas numbered 203 (195 in 1977 and thereafter). By mid-1981, of these 195 designated areas, 182 had funded PSROs; of those, 47 were “fully designated,” 132 were “conditional,” and 3 were in the planning stage. Consequently, length of continuous operation, skills, and experience varied considerably across PSROs, and the history of the fully operational program was relatively short. The program was administered by HCFA's Health Standards and Quality Bureau (HSQB) in the Department of Health, Education and Welfare [later Health and Human Services (DHHS)]. HSQB used a complicated system of annual grants to PSRO entities consisting partly of congressionally appropriated general revenues and partly of Medicare Trust Fund monies.

Aspects of the PSRO Program of Importance to the PRO Program

Briefly, PSROs carried out the following activities: hospital utilization review, development of hospital discharge data (the PSRO Hospital Discharge Data Set), profile analysis, Medical Care Evaluation (MCE) studies and Quality Review Studies, and review of care rendered in other settings (ancillary services, nursing home care, and ambulatory care). Some PSROs contracted to do utilization review for private firms and municipal governments. A few PSROs collaborated in research studies (Chassin and McCue, 1986) and studies of variations in hospital use.

Utilization Review

Hospital utilization review was viewed as distinct from quality assurance and was given highest priority by PSROs. It usually took the form of preadmission certification for elective hospital admissions, certification of nonelective admissions (within three days of admission), and continued-stay recertification; both concurrent and retrospective review were done. One lesson of the PSRO program was that 100 percent utilization review was excessive, and PSROs came to “focus out” about 50 percent of the admissions they might have reviewed at the start of the program. No consensus was ever reached, however, either on the appropriate criteria for such focusing out or on the sample sizes needed to achieve cost-effective utilization review. Issues of concurrent versus retrospective review and of focusing out providers or particular types of services are as pertinent for the PRO program as they were a decade ago.

Profile Analysis

Profiling is a form of retrospective review of patient care data to identify patterns of care over a defined period of time. Profiles can be constructed by groups of patients (e.g., diagnostic group), by provider (e.g., hospital or nursing home), or by practitioner (e.g., physician) to determine rates of use of services such as admissions or specific procedures and lengths of stay over time. They can be used to identify “outliers” that fall outside established standards of appropriate care, such as excessively long hospital stays; such providers or practitioners can then be targeted for closer scrutiny or corrective interventions. This targeting was the principal application of profiling in the PSRO program, and profile analysis continues to be a major tool for review.2
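The outlier-targeting logic described above can be sketched in a few lines. The data, threshold, and function below are hypothetical illustrations of the general technique, not the criteria any PSRO actually used; actual standards of appropriate care were locally determined.

```python
import statistics

# Hypothetical sketch of profile analysis: compute each provider's mean
# length of stay and flag providers well above the area-wide norm.
# The two-standard-deviation threshold is an assumption for illustration.
def flag_outliers(los_by_provider, z=2.0):
    """Return providers whose mean length of stay exceeds the area mean
    of provider means by more than z standard deviations."""
    means = {p: statistics.mean(v) for p, v in los_by_provider.items()}
    area_mean = statistics.mean(means.values())
    spread = statistics.stdev(means.values())
    return sorted(p for p, m in means.items() if m > area_mean + z * spread)
```

Providers or practitioners flagged in this fashion would then be targeted for closer scrutiny or corrective intervention, as the text describes.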

Quality of Care

Quality-related activities of the PSRO program included MCE studies, which were audits based on medical records of locally identified quality problems, typically related to specific diagnoses, technologies, or procedures. MCEs were done either by individual (delegated) hospitals or by PSROs for nondelegated hospitals or for groups of hospitals. The numbers of MCEs done by (or for) any one hospital were determined partly in conjunction with Joint Commission requirements. Toward the end of the PSRO program, MCEs evolved into Quality Review Studies, which were expected to rely on data beyond the medical record, identify a broader set of topics for study, and document more fully the impact of the review activity on quality of care.

Many innovative PSROs did area-wide MCEs that permitted hospitals to compare their audit outcomes with those of their peers; this was considered a valuable quality assurance mechanism. The difficulty, however, of demonstrating in quantitative terms the impact of MCEs (as contrasted with simply enumerating the number of MCEs conducted) contributed to the inability of the PSRO program to document a meaningful effect on quality of care. Studies of the MCE variety still dominate the quality assurance efforts in hospitals, owing in part to the familiarity of hospital staffs with this activity, but they are not part of the PRO scope of work.

Other Efforts

Often lost in the historical account of the PSRO program is that some PSROs embarked on a considerably broader review agenda than simply hospital utilization review and MCEs. Perhaps as many as one-third of PSROs became involved in hospital-related ancillary services review (e.g., radiology, medications, and laboratory tests), although budget constraints for this were severe. The ancillary services experience relates directly to the procedure-oriented review the PROs do now, but in some respects the PSRO experience represents a broader set of health care services.

Still other PSROs reviewed care in long-term-care facilities; although budget cuts curtailed these efforts, at the peak about 55 PSRO projects were underway in such facilities. Ten PSRO demonstration projects that reviewed nursing home care (which included pre-admission, admission, and continued stay review, quality assurance activities, MCEs, and data systems development) were given special attention in the late 1970s (Kane et al., 1979).

Finally, a few PSROs were involved in various ambulatory care review projects (e.g., physician office care as distinguished from ambulatory surgical procedures done in hospitals or free-standing clinics). Evaluations of these activities were inconclusive about dollar savings to the Medicare program (the key evaluative criterion), and PSROs made little progress in this arena, for several reasons. In the first place, ambulatory care is harder to review than inpatient care; it is more dispersed, involves more providers and sites, reflects many more patient-provider encounters, and has less well-developed methods. Second, the Medicare insurance claims forms of the day would not have provided adequate information for this endeavor. Third, physicians a decade ago were reluctant to facilitate review of their private office practice records. Fourth, the costs of outpatient care were less significant before the Medicare prospective payment system and the considerable shift of hospital care to the nonhospital setting, and ambulatory care offered less opportunity for meaningful savings than hospital care; given that cost control was the principal operational task of PSROs, they had less incentive to attempt ambulatory review. Finally, the program budgets for ambulatory review were quite low.

Delegation

One controversial aspect of the PSRO program was its ability—more accurately, its mandate and related budgetary incentives—to “delegate” certain quality assurance and utilization review functions to individual hospitals judged to be capable of carrying them out. Delegated hospital review was funded through negotiated budgets between the PSRO and the individual hospital. PSROs monitored the performance of delegated hospitals but usually did no ongoing data collection or abstracting in those facilities. By contrast, they did all such tasks directly in nondelegated institutions.

This form of delegation was not judged particularly successful at the time (HCFA, 1980). PSROs lacked administrative and financial control of delegated hospital review. For instance, removing delegated status was administratively very difficult. Further, delegated review costs were simply passed on to the Medicare Trust Fund, whereas nondelegated review was a PSRO line item in the federal budget; thus, the pressure to delegate was very great. Finally, delegated hospitals determined their own procedures, identified their own MCE topics, and in other ways operated quite independently of one another; this complicated the job of evaluating the impact of the program as a whole.

In an environment in which the majority of physicians and hospitals tolerated rather than enthusiastically supported the PSRO program, the performance of delegated hospitals was often a perfunctory exercise in “paper compliance.” Another issue was the mismatch between the expectation that PSROs act as a regulatory control mechanism for an activity—namely, health care delivery—from which they were twice removed; that is, they neither delivered health care directly nor, for delegated hospitals, directly reviewed the performance of caregivers.

This mirrored the great divergence in expectations for the PSRO program generally—namely, the congressional expectation of a cost-control program, the PSROs' belief that they were doing quality assurance, and HCFA's view that the program did both. The end result was that, as PSROs were phased out and PROs phased in, HCFA regulations eliminated delegation as a program option.

Costs

According to the 1979 HCFA evaluation (HCFA, 1980), the mean dollar cost of hospital-based review per discharge in 1978 (i.e., fiscal year (FY) 1979, when program costs were $147.2 million) was $13.68 for the review activity itself and $7.10 for management and support tasks. The median total review cost for FY 1979 was $12.91, a figure the HCFA evaluation noted was “considerably greater than the target of $8.70” (HCFA, 1980, p. 92).

Mean costs per discharge differed markedly by type of review: $8.81 for concurrent review, $1.28 for MCE review, and $3.61 for areawide review; the highest and lowest cost ranges around these averages were fairly wide.3 Costs differed by who did the review. Delegated review (e.g., when delegated hospitals did concurrent hospital admission and continued-stay review themselves) was less expensive on a per-discharge basis than was nondelegated review; for instance, the median concurrent review cost for the larger PSROs (those responsible for 50,000 or more discharges) was $6.93 for delegated hospitals and $10.56 for nondelegated hospitals. In short, costs of review were extremely variable (as they remain in the PRO program).

Total PSRO funding rose from $4.3 million in 1973 to $173.7 million in 1981 (CBO, 1981). Funding was unstable over the period; for instance, it increased almost 43 percent between FY 1977 and FY 1978 but just under 2 percent between FY 1978 and FY 1979 and not quite 4 percent between FY 1979 and FY 1980.

Medicare expenditures during the PSRO program ranged from about $9.5 billion in FY 1973 to $42.5 billion in FY 1981. Hospital insurance (Part A) outlays alone were $6.8 billion and $29.3 billion, respectively (Committee on Ways and Means, 1989). Taking total outlays as the denominator, PSRO program costs by the end of the program amounted to only about one-half of 1 percent (about 0.45 percent) of outlays; the figures reach about 0.7 percent if only Part A expenditures are used as the base. As will be seen, PRO funding has been equally tight, if not more so.

Additional Aspects of the PSRO Program

The National Council

The PSRO legislation provided for a National Professional Standards Review Council, appointed by the executive branch. It consisted of 11 physicians not in the federal government who could represent or were recommended by practicing physicians, consumer groups, and other health care interests. The Council was charged with reporting at least annually to the Secretary of the Department of Health, Education and Welfare and to the Congress on its activities; the report was supposed to review the effectiveness and comparative performance of PSRO operations, develop recommendations concerning ways that the program might be designed more effectively, and provide comparative data indicating the results of review activities. At the time, the Council was regarded with some suspicion because its rather ambiguous charge was seen as a threat to the local autonomy of physicians and as an opening wedge in the establishment of national or model standards of care (Blumstein, 1976), all issues of vastly greater sensitivity 15 years ago than now.

In retrospect, the Council was not demonstrably successful in shaping the long-term policies of the program toward quality of care and away from cost containment. The economic and political forces pushing for control of utilization and costs were too strong. The Council also made little progress toward developing standards of care; again, the climate for such efforts was not receptive.

The Council's value was as a regular public forum for discussion of issues pertinent to the PSRO program. The public meetings were extremely well attended, fostered both formal and informal interaction among the Council, public attendees, and staff, and permitted timely information to be published in the lay press concerning program direction. The Council provided, albeit imperfectly, for some accountability of the program, and it gave some opportunity for early review and consideration of program plans and advice to HCFA by a well-disposed, but external, group of experts.

Sanctions and Regulatory Orientation

PSROs were perceived as essentially regulatory mechanisms for controlling medical practice. HCFA (1980), for instance, characterized PSROs as “formalized externally authorized and mandated local physician organizations expected to function as a regulatory system exercising control via performance evaluations tied to financial and professional sanctions” (p. 141).

The PSRO program was hampered by its relatively limited ability to act on this presumed regulatory power and to bring or recommend sanctions against providers. Aggressive PSROs initiated sanctions generally the way PROs do now, except that sanctions were pursued internally at HCFA, not by the Office of Inspector General (OIG), and with similar results (e.g., very high costs and reversals at the level of administrative law judges). Furthermore, hospitals and other providers were assumed to have a favorable “waiver of liability” status. PSROs could only recommend to the relevant FI that the waiver be revoked, but the decision to do so resided with the FI.4 In principle, the PROs have a considerably stronger hand in sanctioning physicians and hospitals than did the PSROs, in part because sanctions are now pursued through the OIG and in part because the waiver of liability issue has been muted. In practice (as will be seen), their regulatory power has not been demonstrably enhanced.

To gain the acceptance of the provider community in the early years, some congressional supporters and some executive branch directors of the program emphasized the quality-of-care (rather than the cost-control) thrust of the program (Blumstein, 1976). In quality assurance terms, this translated into an “educational” rather than a “regulatory” program. The ambiguity inherent in an “educational-regulatory” stance was never successfully resolved.

More importantly, a considerable ambiguity arose in the conflicting emphases on containing costs while maintaining quality. The framers of the PSRO legislation and program intended primarily that it lower the inappropriate or unnecessary use of services, as the alarming increase in the cost of medical care at that time was assumed to arise largely from overuse of services. Evaluations at the time were focused mainly on PSRO impacts on costs; for instance, from the General Accounting Office (GAO) in 1979 came Problems with Evaluating the Cost Effectiveness of Professional Standards Review Organizations, which was focused exclusively on savings in costs and patient hospital days, and from the Congressional Budget Office (CBO) in 1981 came The Impact of PSROs on Health-Care Costs. Evaluations of PSRO impacts on quality of care were never accorded similar status or conducted with equivalent sophistication. A major lesson of the PSRO program was that the conflict between using such agents simultaneously to contain costs and to maintain quality will almost surely short-change the latter unless strong programmatic steps are taken to protect and emphasize it.

Impact of the PSRO Program

The net impact of PSROs on utilization, expenditures, or quality remains uncertain. Several evaluations of the PSRO program conducted in the late 1970s yielded contradictory findings (e.g., CBO, 1979, 1981; GAO, 1979; HCFA, 1980). Overall, the PSRO program probably saved as many resources as it consumed, but in an era of rapidly escalating health (and Medicare) expenditures, this was not perceived as an adequate level of performance (Lohr, 1985). PSROs did appear to have a slight positive impact on quality of care as measured by documented changes in medical practices rather than by dollar savings (HCFA, 1980; AAPSRO, 1981). Again, however, in an environment concerned chiefly with rising expenditures, these effects were not persuasive as regards the success of the PSRO program.

Among the conclusions that might be drawn about the PSRO experience were the following. Monitoring and evaluating a program that operates through almost 200 individual organizations is difficult. Budget constraints, although a fact of life, will compromise the effectiveness of such a program. Delegating review authority, when not accompanied by the power to remove delegation promptly for poor performance, can undermine the effectiveness of a program. Finally, it is exceedingly difficult to combine cost and quality functions in one organization, especially when expectations and evaluations of the program concentrate on cost issues.

Movement to a New Program

Disappointment at the limited effectiveness of the PSRO program prompted calls for its abolition or restructuring, and it was phased out in the early 1980s as the PRO program was slowly put into place. Despite rhetorical emphasis on assuring quality of care, the principal focus of the new PRO program initially remained on use of services and costs. In other words, philosophically not much changed.

Structurally, much about the program was revamped. The ability of PROs to act against overuse of services and to curtail expenditures was strengthened (relative to the PSRO program), and administrative and financing arrangements were changed so that the program could, at least in theory, be better managed at the federal level. Nevertheless, many of the difficulties facing the PSRO program remained, not the least of them being the mismatch between the call for attention to quality of care and the funding for activities designed to control utilization and expenditures. In this vein, it is well to remember that the full name of the PROs is the Utilization and Quality Control Peer Review Organizations.

UTILIZATION AND QUALITY CONTROL PEER REVIEW ORGANIZATIONS (PROs)

The PRO program was a congressional response to considerable discouragement over the performance and impact of the PSRO program as well as an effort to design a system to fit the diagnosis-related group (DRG) prospective payment system (PPS) for hospital care that began in October 1983. Like PSROs, PROs are supposed to ensure that services rendered through Medicare are necessary, appropriate, and of high quality.

PRO activities, however, extend widely into many aspects of the administration of the Medicare program. They are by no means confined to issues relating to use, costs, or quality of care, and certainly not just to ensuring the technical quality of care rendered to beneficiaries. PROs serve different purposes for different parties, not all of whom have the same interests or concerns. Given the hostility and disappointment registered about a PSRO program that was vastly less burdened with administrative and outreach responsibilities, this increase in the responsibilities and visibility of the program created to replace PSROs is somewhat ironic.

PROs carry out their complex assignments on a total annual budget that now approximates $300 million per year—a sum that seems large in the abstract but in fact accounts for about 0.3 percent of Medicare Part A and Part B expenditures. Thus, understanding the role and potential impact of PROs on assuring quality of care for Medicare calls for appreciating the many and complex tasks they have been assigned, the specificity of the contract requirements that govern those tasks, and the limited resources they can bring to bear on the required activities. The remainder of this chapter discusses these topics.

PRO Legislation and Regulations

Several pieces of legislation governed the development of the PRO program. The key act was the Tax Equity and Fiscal Responsibility Act (TEFRA) of 1982 [more specifically, the Peer Review Improvement Act, Title I, Subtitle C of TEFRA (P.L. 97–248)], which amended Part B of Title XI of the Social Security Act. Other important legislation included the Social Security Amendments of 1983 (P.L. 98–21), the Deficit Reduction Act (DEFRA) of 1984 (P.L. 98–369), the Consolidated Omnibus Budget Reconciliation Act (COBRA) of 1985 (P.L. 99–272), the Omnibus Budget Reconciliation Acts (OBRA) of 1986 and 1987 (P.L. 99–509 and P.L. 100–203), and the Medicare and Medicaid Patient and Program Protection Act of 1987 (P.L. 100–93).

Apart from legislation, numerous regulations and other directives govern the administration and operation of the PROs. The Administrative Procedure Act (APA) requires that regulations be promulgated through notice and comment rulemaking procedures. HCFA follows the APA procedures in some instances. As an adjunct to the cumbersome and often lengthy public rulemaking mechanism, the agency also relies extensively on PRO Manual transmittals, contracts and contract modifications, and other, less formal instructions.

PRO Organizational Characteristics

PSRO regions were consolidated into 54 areas (all the states, the District of Columbia, Puerto Rico, the Virgin Islands, and a combined area of American Samoa, Guam, and the Commonwealth of the Northern Marianas). Beginning roughly in 1986, eight PROs covered two areas, and one PRO covered three areas.5

Congress tried to retain some semblance of “local” peer review. To qualify as a PRO, a statewide organization must demonstrate sponsorship by being composed of at least 10 percent of the physicians practicing in the area (known as a physician-sponsored organization), or it must have available for PRO review at least one physician in every generally recognized specialty in the area (known as a physician-access organization); the former have priority. Third-party payers can obtain PRO contracts only if no other eligible organization is available. A PRO may not be a health care facility or other entity subject to review, it must have at least one consumer representative on its governing board, and it must operate with objectivity and without apparent or real conflict of interest.

PRO Contracts

PROs are financed through competitively awarded contracts. The very complex set of review and intervention tasks is specified in great detail in the “Scope of Work” (SOW) in the Request for Proposal for these contracts, and PRO performance is evaluated on the basis of how well PROs meet these specifications. Compared to a grant mechanism (as in the PSRO program), contracting makes the program more manageable centrally but renders the local entities less able to respond flexibly and sensitively to local problems and needs.

PRO contracts were initially established for two years, but OBRA 1987 extended contract periods to three years to permit somewhat more stability in anticipated financing and planning. PRO contracts can be renewed triennially or cancelled and put up for competitive bidding, or they can be terminated by either the PRO or the Secretary of DHHS at any time. The Secretary, in accordance with a complex set of procedures, has the absolute right either to terminate or to choose not to renew a PRO contract. The Secretary's decisions in this regard are not subject to judicial review and thus cannot be overturned in court.

Third PRO Scope of Work (1988–1990)

The first SOW was used during the first contract cycle (1984 to 1986), the second during the 1986–1988 contract cycle, and the third covers the present period. All PROs were expected to be on the third SOW as of April 1, 1989.6

Many PRO activities have remained fairly constant over the three SOWs, although the first SOW emphasized controlling inappropriate utilization and the second and third SOWs gave more attention to assuring quality. To achieve consistency with minimum disruption to ongoing review activities, much of the second SOW remains in the third but with variations in the size of samples. The following section and Table 6.1 describe PRO activities for only the third SOW; Table 6.2 briefly compares key activities for the three SOWs. The focus is on inpatient hospital review, but the activities do not differ appreciably for nonhospital practitioners or settings.

TABLE 6.1

Elements of Required Peer Review Organization (PRO) Activities for the Third Scope of Work.

TABLE 6.2

Comparison of the Three Scopes of Work (SOWs) with Respect to Selected Utilization and Quality Control Peer Review Organization (PRO) Activities (Ordered by Tasks Pertaining to the Third SOW).

Required Review Activities for Hospital Inpatient Care

The following PRO review activities are required for all inpatient hospital cases reviewed retrospectively: (1) generic quality screening; (2) discharge review; (3) admission review; (4) review of invasive procedures; (5) DRG validation; (6) coverage review; and (7) determination of the application of the waiver of liability provision. These are described in more detail below.

Cases are identified for review through a random sample constituting 3 percent of all Medicare admissions and, in addition, through selection of cases for many specific reasons that reflect concern about use of services, costs to the Medicare program, or quality. Altogether, the pool of cases under review constitutes almost 25 percent of all Medicare admissions (Table 6.3).

TABLE 6.3

Numbers and Percentages of Cases Reviewed by Peer Review Organizations (PROs) Through May 1989 and Expected for Third Scope of Work.

Generic quality screening. Hospital generic quality screens are widely used to detect what are regarded as the most common causes or manifestations of potential quality problems (Table 6.4). Introduced in the second SOW in the fall of 1986 (without pilot-testing), they were carried over into the third SOW. Most changes in generic screens occur in the interpretive guidelines that are issued with the screens, not in the screens themselves. HCFA issued interpretive guidelines in May 1987 and in 1988 met with each PRO to review and critique the screens for modification in the third SOW. The six required generic quality screens covered the following: (1) adequacy of discharge planning; (2) medical stability of patient at discharge; (3) unexpected deaths; (4) nosocomial infections; (5) unscheduled return to surgery; and (6) trauma suffered in hospital. In addition, PROs can use an optional screen for medication or treatment changes (including discontinuation) within 24 hours of discharge without adequate observation. The third SOW also adds an adequacy-of-care screen to the set of “trauma” screens; it is defined as inappropriate or untimely assessment, intervention, and/or management resulting in serious or potentially serious complications. Generic screens are applied to all hospital charts under review by the PRO for any reason.

TABLE 6.4

Generic Quality Screens—Hospital Inpatient.

Figures 6.1A and 6.1B illustrate the generic screening process. Generic screens are first applied by nurse reviewers, who can determine that a case passes the screens. If a case fails any screen and has a potential quality problem, then it must be referred to a physician advisor for further evaluation; only the physician advisor can “confirm” a quality problem. (A physician advisor is a physician practicing in the state who is hired to do peer review.) Initially, nurse reviewers were required to refer all screen failures to physician advisors; this produced considerable numbers of false-positive cases and appreciable frustration and anger among reviewers and the medical community. HCFA later permitted PRO nurse reviewers to override this rule for screens relating to adequacy of discharge planning, nosocomial infections, and falls and decubitus ulcers as part of the trauma screen; all remaining cases failing a screen and involving a potential quality problem must still be referred to a physician advisor.
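The triage logic just described can be summarized in a minimal sketch. The function and screen names below are hypothetical illustrations, not HCFA's official screen codes, and the override set reflects only the screens named in the text.

```python
# Hypothetical sketch of nurse-reviewer triage under the generic screens.
# Screen names are illustrative; only the override categories named in the
# text are modeled.

NURSE_OVERRIDABLE = {
    "discharge_planning",    # adequacy of discharge planning
    "nosocomial_infection",
    "fall",                  # trauma screen: falls
    "decubitus_ulcer",       # trauma screen: decubitus ulcers
}

def triage_case(failed_screens, nurse_sees_quality_problem):
    """Return 'pass', 'nurse_resolved', or 'physician_advisor'."""
    if not failed_screens:
        # A case passing all screens can be cleared by the nurse reviewer.
        return "pass"
    if not nurse_sees_quality_problem and set(failed_screens) <= NURSE_OVERRIDABLE:
        # HCFA's later rule: the nurse may resolve failures limited to the
        # overridable screens when no potential quality problem is seen.
        return "nurse_resolved"
    # Only a physician advisor can confirm a quality problem.
    return "physician_advisor"
```

For example, a chart failing only the falls screen, with no potential quality problem noted, would be resolved by the nurse reviewer; an unexpected death would always go to a physician advisor.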


FIGURE 6.1A

Overview of the Quality Review Process for Inpatient Hospital, Home Health Agency, and Outpatient Surgery Generic Screens


FIGURE 6.1B

Overview of the Quality Review Process for Other Generic Screens

Discharge review is intended to flag problems with premature discharge when the patient was not medically stable at discharge or when discharge was not consistent with the patient's continued need for acute inpatient care.

Retrospective admission review identifies whether inpatient hospital care was medically necessary and appropriate, by reviewing reasons for admission against pre-established criteria devised or adopted by individual PROs.

Invasive procedure review retrospectively examines the medical necessity of invasive procedures that affect the assignment of a case to one DRG rather than another (which means nearly all such procedures done in the hospital setting). The review is applied to cases already selected for review, not to any other cases. If the procedure is not medically necessary or is not a covered service, and if the procedure was the sole reason for admission, then payment for the entire admission and the procedure is denied. If the procedure is not medically necessary or is not covered, but the admission is medically necessary and other reasonable and necessary services were provided, then the physician's payment for the procedure is denied, and the DRG is changed.

DRG validation assures that cases are accurately classified for Medicare payment under PPS. It also ensures that the responsible physicians have certified that their narrative descriptions of the principal and secondary diagnoses and the major procedures are accurate and complete to the best of their knowledge (a statement known as physician attestation). A Registered Record Administrator or an Accredited Record Technician generally has the responsibility for the validation process at the PRO. The results of DRG validation can be to leave the DRG unchanged or to upgrade or downgrade it, thereby affecting the hospital payment.

Coverage review determines whether items or services normally excluded from Medicare coverage are medically necessary; it is done only in instances when coverage can be extended for specific items and circumstances if certain conditions are met. Under the waiver of liability (also referred to as limitation of liability), the PRO must determine whether the beneficiary or provider should be held liable for care not covered under Medicare because either the beneficiary or provider knew, or could reasonably have been expected to know, that such care was not covered.

Pre-admission and Pre-procedure Review

PROs are required to review 10 procedures, generally on a pre-admission or pre-procedure basis, for necessity and for appropriateness of setting (e.g., inpatient or ambulatory). They must review all proposed carotid endarterectomies and cataract procedures and an additional 8 procedures selected from a list of 11 supplied by HCFA.7 Each PRO establishes its own prior-authorization criteria, sometimes in consultation with local or state physician specialty groups, and some criteria are shared among PROs. Not surprisingly, PROs differ in the types of clinical factors or levels of patient functioning that they require to be present (or absent) before they will approve the procedure (see Table 6.5 for an example).

TABLE 6.5

Examples of Peer Review Organization (PRO) Criteria for Prior Authorization of Cataract Procedures as Part of Pre-Procedure Review.

Rural Providers

Rural physicians and hospitals have vigorously asserted that they are not reviewed by local peers and that their style of practice and the constraints under which they function are not well appreciated or taken into account. To help overcome these criticisms and to bring the peer review effort more fully into areas that were rarely visited, the third SOW mandates that at least 20 percent of all rural hospitals be reviewed on-site. Moreover, during the sanctioning process rural physicians (those in officially designated rural health manpower shortage areas or in counties of fewer than 70,000 residents) are given special protections that put exclusions from the Medicare program on hold until full hearings have been conducted, unless a judge determines that the provider or practitioner poses a serious risk to individuals in those areas if permitted to continue to furnish such services.

Nonhospital Review

Various acts direct PROs to undertake review in several nonhospital settings apart from physician offices. By and large, nonhospital review activities have not been very comprehensive. The main effort to date has been review of a small sample of cases receiving “intervening care”—mainly, care delivered by home health agencies (HHAs) and skilled nursing facilities (SNFs) between two related hospital admissions up to 31 days apart. This effort was not preceded by demonstration or pilot projects or pretesting. Initiatives in the other settings, especially ambulatory physician office care, are getting underway mainly as pilot projects.

For the third SOW, generic quality screens have been developed to review care rendered in the following settings: HHAs, SNFs, hospital outpatient departments (HOPDs), and ambulatory surgery centers (ASCs). The new screens are similar to inpatient generic screens, but they are supposed to be more relevant to the particular setting (Volume II, Chapter 6, Table 8.4). For example, the SNF screens deal with polypharmacy (multiple medication) issues and the mental stability of the resident. Screens for reviewing psychiatric care were issued to PROs in November 1989, and those for rehabilitative services are scheduled for completion in FY 1990.

PRO Responses to Quality or Utilization Problems

PROs can pursue several interventions when they have confirmed a quality or utilization problem. They can notify practitioners or providers of problems, put practitioners and providers on “intensified” review, require a wide variety of corrective actions, or institute sanction procedures. Although these interventions have been available since the start of the PRO program, they are now required to be part of a written quality intervention plan.

Quality Intervention Plan

The quality intervention plan (QIP) is “a prescribed blueprint which requires PROs to implement specific interventions in response to confirmed quality problems” (Federal Register, 1989a, p. 1966). The QIP is intended to promote greater consistency among PROs through more systematic follow-up of identified problem practitioners or providers. (The terms practitioners and providers are used by the PRO program to refer, respectively, to physicians or other individual clinical caregivers and to facilities and institutions such as hospitals.) Minimum QIP requirements set by HCFA include a time frame for completion of the review process, determination of the source of the problem, assignment of quality problem “severity levels” and weights, profiling, and quality interventions that are related to severity levels.

Time frame. HCFA has determined a maximum time frame for quality review for the third SOW. If a potential Severity Level I problem exists (defined below), the case is held in pending status until a pattern of problems emerges. For all other severity levels, the maximum time frame for completing the review is 135 days.

Determine source of the problem. All initial case reviews are completed by a nurse reviewer, with potential quality problem cases passed on to a physician advisor. If the physician advisor and the PRO decide that an apparent quality problem does exist, the PRO determines the source of the problem (e.g., individual physician or hospital) and, after an opportunity for discussion with the caregiver in question, assigns a “severity level.”

Assign severity levels and weights. Severity levels are a way to categorize quality problems according to the nature of the problem and its potential for causing adverse patient outcomes. The relevant phrase, significant adverse effects, is defined as unnecessarily prolonged treatment, complication, or readmission, or as patient management that results in anatomical or physiological impairment, disability, or death. Weights (numerical points) assigned to severity levels indicate when PROs must take various corrective steps. The levels (with weights in parentheses) and definitions are as follows:

  • Severity level I (1): Medical mismanagement without the potential for significant adverse effects on the patient;8

  • Severity level II (5): Medical mismanagement with the potential for significant adverse effects on the patient; and

  • Severity level III (25): Medical mismanagement with significant adverse effects on the patient.

Conduct profile analyses. The purpose of profiling as part of the QIP is to identify areas for focused review or other corrective action. The PRO is required to produce several types of profiles of physicians, providers, and quality problems on a quarterly basis, as a means of tracking problems and determining whether various thresholds for mandatory interventions (see below) have been exceeded.

Quality interventions. When a PRO identifies and eventually confirms that a quality problem exists, then it develops a corrective action plan using a variety of interventions. These include:

1.

Notification. The PRO sends a notice that it has made a final determination of a confirmed quality problem to the practitioner or provider. This notice must describe the quality problem, what the appropriate action should have been, the severity level, and what interventions will be taken.

2.

Education. Educational interventions include telephone and in-person discussions with the responsible parties, suggested literature readings, continuing medical education (CME) courses, and self-education courses.

3.

Intensification. The PRO may increase its scrutiny of the provider's or practitioner's cases through 100-percent retrospective review or intensified review of just certain types of cases (a focused subsample).

4.

Other interventions. The PRO may take other steps, such as concurrent or pre-discharge review; prior approval or pre-admission review; and referral to hospital committees (e.g., infection control, tissue, or quality assurance committees).

5.

Coordination with licensing and accreditation bodies. The PRO must disclose confidential information to state and federal licensing bodies upon request when such information is required by those entities to carry out their legal functions, and the PRO may do so even without a request (e.g., when a practitioner or provider has reached a weighted score of 25 points in one quarter).

6.

Sanction plans (discussed below).

The PRO must use certain thresholds, called weighted triggers, to decide what intervention it should use. The interventions and weighted triggers (points per quarter) are as follows: notification, 1 (or 5 per bi-quarter); education, 10; intensification, 15; other interventions, 20; coordination with licensing bodies, 25; and sanctions, 25. The PRO has some flexibility to take interventions before a threshold is reached (such as a weighted severity score of 25) or to apply lower-weighted interventions in special circumstances. The PRO also has some discretion not to invoke coordination and sanctions interventions, although it must consider them and document why it did not take such action.
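The severity weights and quarterly weighted triggers described above amount to a simple scoring rule, sketched below. The function names are hypothetical, the bi-quarter variant for notification is omitted, and the thresholds are taken directly from the text.

```python
# Illustrative model of QIP severity weights and quarterly weighted triggers.
# Thresholds come from the text; function names are hypothetical, and the
# 5-per-bi-quarter notification variant is not modeled.

SEVERITY_WEIGHTS = {1: 1, 2: 5, 3: 25}  # Severity Levels I, II, III

# (threshold in points per quarter, intervention), highest first.
TRIGGERS = [
    (25, "coordination/sanctions"),
    (20, "other interventions"),
    (15, "intensification"),
    (10, "education"),
    (1,  "notification"),
]

def quarterly_score(confirmed_problem_levels):
    """Sum the weights of a quarter's confirmed problems by severity level."""
    return sum(SEVERITY_WEIGHTS[level] for level in confirmed_problem_levels)

def required_intervention(score):
    """Highest-threshold intervention whose trigger the score has reached."""
    for threshold, intervention in TRIGGERS:
        if score >= threshold:
            return intervention
    return None

# Under this rule, a single Level III problem (25 points) reaches the
# sanction/coordination threshold by itself, as does a run of five Level II
# problems (5 x 5 = 25).
```

The scoring illustrates why Level III carries so much weight: one confirmed problem with significant adverse effects triggers the same mandatory response as a sustained pattern of less severe problems.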

Sanctions

PRO, OIG, and DHHS Responsibilities

The Secretary of DHHS, not the PROs, holds the authority to impose sanctions on Medicare providers. The Secretary has delegated that authority to the OIG. The PROs' power is in making sanction recommendations to the OIG in either of two instances: (1) cases of “substantial violation” of a practitioner's or provider's Medicare obligations “in a substantial number of cases” (Figure 6.2A), and (2) single cases of a “gross and flagrant” violation (Figure 6.2B). A substantial violation in a substantial number of cases is a pattern of care that is inappropriate, unnecessary, does not meet recognized professional standards of care, or is not supported by sufficient documentation. Gross and flagrant violation means a violation of an obligation (in one or more cases) that represents an imminent danger to a Medicare beneficiary's health, safety, or well-being or that places a beneficiary at an unnecessarily high risk.


FIGURE 6.2A

Overview of PRO/HHS Sanction Process for Substantial Violations


FIGURE 6.2B

Overview of PRO/HHS Sanction Process for Gross and Flagrant Violations

No regulations define the criteria to be used by a PRO in making a determination whether a practitioner or provider has violated a Medicare obligation. The preamble to the PRO regulations states that PROs must apply professionally developed standards of care, diagnosis, and treatment based on typical patterns of practice in their geographic areas (Federal Register, 1985). The PRO Manual also contains some material on the elements of a sanctionable offense.

For cases in which the PRO determines that the provider or physician has failed to comply substantially with a Medicare obligation in a substantial number of cases, it sends the practitioner or provider an initial sanction notice.9 This notice gives the recipient 20 days to respond to the notification with additional information or to request a meeting with the PRO. If, after considering the additional information, the PRO confirms its original finding, it develops a corrective plan of action. If the practitioner or provider fails to comply with that plan, the PRO sends a second sanction notice. In such cases, the provider or practitioner has a second opportunity to submit additional information or discuss the problem with the PRO (within 30 days of the second notice).

If the concern is not resolved, the procedures at this point follow the pattern for “gross and flagrant” violations. Several specific procedures direct how the PRO should forward its recommendation to the OIG, how it should notify the individual or organization that it has done so and what the recommended sanction is, and how further information can be forwarded directly to the OIG (again within 30 days). The PRO must also give the practitioner or provider a copy of the material it used in reaching its decision. At this point, the responsible sanctioning party is the OIG, not the PRO.

The OIG must determine whether the PRO followed appropriate procedures, whether a violation occurred, whether the provider has “demonstrated an unwillingness or lack of ability substantially to comply with statutory obligations” (known as the “willing and able” provision), and ultimately whether it agrees with the PRO recommendation (OIG, 1988b). In these determinations, the OIG is expected to consider the type and severity of the offense, the previous sanction record, previous problems that Medicare may have had with the individual or institution, and the availability of alternative medical resources in the community. The OIG can sustain the PRO recommendation, alter it, or reject it.

If the OIG does not accept the PRO's recommendation, the sanctioning process stops. If, however, a PRO recommends exclusion and the OIG does not act on that recommendation within 120 days, the exclusion automatically goes into effect until a final determination is made. To date, the OIG has met the statutory deadline in all cases.

If the OIG does accept the PRO's recommendation, it must give notice that the sanction is to be imposed, effective 15 days after the notice is received by the practitioner or provider. The OIG notifies the public by placing a notice in a newspaper of general circulation in the individual's or institution's locality.10 It also informs state Medicaid fraud control units and state licensing bodies, hospitals and other facilities where the practitioner has privileges, medical societies, carriers, FIs, and health maintenance organizations (HMOs).

Sanctioned providers or practitioners may appeal the OIG decision to an administrative law judge (ALJ), who conducts a separate hearing essentially from scratch. If practitioners or providers are dissatisfied with the outcome of that hearing, they can request review by the Department's Appeals Council and, beyond that, seek judicial review in a federal district court.

If the OIG proceeds successfully with these steps, the Secretary, through the OIG, can apply two kinds of formal sanctions: (1) exclusion from the Medicare program (for one or more years) and (2) monetary sanctions (which at present cannot exceed the cost of the services rendered). Excluded hospitals and providers must petition to be reinstated in the Medicare program, and they can receive (with a few exceptions) no payment for services rendered or items provided during the exclusion period.

Historical Record of Interventions and Sanctions

The most frequent PRO intervention appears to be the formal letter of notification. By contrast, intensified review, formal education or similar programs, and sanction recommendations are used much less often, although during the second SOW more than 53 percent of hospitals were under intensified review for at least one quarter (HCFA, 1989c). PROs differ markedly in the rates at which they invoke various interventions. For instance, GAO (1988b) cites ranges for letters of notification of zero to 111 per 1,000 “new” physicians and zero to 396 per 1,000 “repeat” physicians.

PRO activity. Tables 6.6a, 6.6b, and 6.6c summarize intervention activity tabulated by HCFA for the second SOW, the most recent aggregate information. Of the more than 6.6 million completed reviews (mainly for the second SOW), PROs denied payment in over 4 percent of cases; the range across PROs was 1.2 percent to 25.5 percent. For about 33 percent of these denials the practitioner or provider requested a reconsideration (range, 0.6 percent to 69.6 percent). Of those reconsiderations, the denials were reversed in 44 percent (range, 15.1 percent to 100 percent); that is, the original decision was upheld in 56 percent of the cases (Table 6.6a).
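The rough magnitudes implied by those percentages can be checked with some back-of-the-envelope arithmetic (all figures approximate; the exact counts are in the tables):

```python
# Approximate counts implied by the denial and reconsideration rates above.
completed_reviews = 6_600_000     # "more than 6.6 million"
denial_rate = 0.04                # "over 4 percent"
reconsideration_rate = 0.33       # "about 33 percent" of denials
reversal_rate = 0.44              # 44 percent of reconsiderations

denials = completed_reviews * denial_rate                 # ~264,000 denials
reconsiderations = denials * reconsideration_rate         # ~87,000 requests
reversed_denials = reconsiderations * reversal_rate       # ~38,000 reversed
upheld_share = 1 - reversal_rate                          # 56 percent upheld
```

So on the order of a quarter-million denials yielded fewer than 40,000 reversals, consistent with the upheld share quoted in the text.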

TABLE 6.6a

Quality Intervention Activities of Peer Review Organizations (PROs) Through June 1989: Reconsiderations.

TABLE 6.6b

Quality Intervention Activities of PROs Through February 1989: Quality Interventions for Physicians.

TABLE 6.6c

Quality Intervention Activities of PROs Through June 1989: Sanctions.

Through early 1989, the PROs had identified more than 87,000 physicians with some level of quality problem (Table 6.6b). Over 81,400 of those problems had been resolved, presumably through the more than 70,000 quality interventions carried out (HCFA, 1989c). HCFA data compiled from the start of the program through June 1989 show that 43 PROs had sent a total of 1,065 first notices; the vast majority went to physicians rather than hospitals. More notices to physicians were for gross and flagrant violations than for substantial violations; the opposite was true for hospitals.

They had also recommended a total of 119 sanctions to the OIG (Table 6.6c), the vast majority (80) for gross and flagrant violations by physicians. Many of the sanction cases date from earlier years of the program. The relatively low numbers of sanction recommendations in more recent times have generated some debate and have been attributed to three factors: (1) revisions in procedures (prompted by the AMA) that give practitioners the right to counsel during discussions with PROs of possible sanctions; (2) OIG directives that discouraged use of monetary fines as an alternative to exclusion; and (3) possibly the high reversal rate of the ALJs, who had upheld only 8 of 18 sanctions on appeal during this period (McIlrath, 1989). In addition, the growing confusion and tension caused by mixed signals from HCFA and the OIG about the relative emphasis to place on educational versus disciplinary approaches to PRO implementation may have played a role in the sanction-recommendation picture.

OIG activity. From FY 1986 through September 1989, the OIG reported it had received 197 referrals (150 gross and flagrant, 46 substantial, and 1 for lack of documentation) (unpublished data made available to the study). Of these, 79 cases (40 percent) were rejected. Of the remainder, two cases were closed because the physicians had died, three physicians retired before exclusion, and three cases were pending. A total of 110 sanctions had been imposed (56 percent). Of the latter, 83 were exclusions (82 physicians, 1 facility) and 27 were monetary penalties (25 physicians, 2 facilities). In short, the OIG accepts about three in five sanction recommendations from PROs, a figure that has been fairly constant across the years. Of the cases rejected, about two in five are turned back because the case did not meet regulatory requirements, about two in five because the practitioner could show he or she was willing and able to improve, and one in five for lack of adequate medical evidence. Of the sanctions imposed, the great majority are exclusions from the program.

Other Required Activities

Beneficiary Relations

PROs are required to act on behalf of Medicare beneficiaries in four ways not directly related to the technical quality of care rendered by providers or physicians. First, they must monitor hospitals' distribution to Medicare patients of the Important Message from Medicare; this pamphlet describes patients' rights to appeal denials of hospital care. Second, they must monitor how well hospitals issue notices of noncoverage when the hospitals themselves determine that the patient's care is not (or will not be) covered because it is not medically necessary, is not delivered in the appropriate setting, or is custodial.

Third, PROs must conduct at least five specific types of community education and outreach activities. The required tasks are quite broad. They include: maintaining a toll-free hotline; responding to written inquiries; conducting education programs, seminars, and workshops to inform beneficiaries about PRO review, PPS, and their appeal rights; developing and disseminating informational materials (e.g., brochures, slides, tapes) about those same topics; and coordinating with concerned beneficiary and provider groups. They must also have at least one consumer representative on the PRO board.

Finally, PROs must investigate all written complaints from beneficiaries about the quality of care rendered by hospitals (inpatient or outpatient), SNFs, HHAs, ASCs, HMOs, and CMPs. Here, the focus is on overuse of care or care that does not meet professionally recognized standards because PROs are barred from reviewing complaints involving underuse.

Community Outreach

The PROs must conduct programs to inform beneficiaries about Medicare PRO review and PPS, more specifically about the purpose of the PROs and PPS, types of PRO review, and their right to appeal a PRO determination. The PROs are also expected to devise ways to explain how they ensure the quality of care and respond to complaints from beneficiaries.

Provider Relations

The PRO program continues to stress “peer review”; in PRO terms, this is taken to mean physician advisors who practice in a setting similar to that of the reviewed physicians and/or who were trained in the appropriate discipline. It also calls for an “interaction plan” to enhance the relationships between the PROs and providers, physicians, and other practitioners. That plan must describe how physicians will be given opportunities to discuss problems or proposed denials and how the PRO will carry out educational efforts. The outreach activities for practitioners and institutional providers are similar to those required for beneficiaries (seminars; informational material; etc.).

The PRO is also required to publish and disseminate (at least annually) a report that describes its findings about care that does not meet Medicare obligations (i.e., necessary, appropriate, and of acceptable professional standards). This task mirrors the requirement that DHHS should submit to the Congress an annual report on the administration, impact, and cost of the program; such reports have not been published to date, however.

Data Acquisition, Sharing, and Reporting

Rules governing PRO data acquisition, sharing, and disclosure are complex and open to different interpretations. PROs can obtain any records and information pertaining to health care services rendered to Medicare beneficiaries that are in the possession of any practitioner or provider in the PRO area. Often a quality problem may be adequately handled for Medicare patients only by addressing it for all patients; if authorized by the practitioner or provider, PROs can gain access to non-Medicare patient records.11

Generally, information or records acquired by a PRO are confidential12 and not subject to disclosure. PROs are granted by statute a flat exemption from requirements of the Freedom of Information Act. In some circumstances, PROs are required to disclose certain confidential information to appropriate agencies (for instance, when the PRO believes that not to do so would pose a risk to public health or in fraud and abuse situations).

Summary hospital-specific information that does not identify patients or physicians (such as average length of stay or death rates) is usually not considered confidential and thus can be released. A PRO must, however, notify a hospital when it intends to disclose information about that institution (other than certain routine reports) and give the institution a copy of the information to be released and an opportunity to submit comments. Release of patient-identifying and physician-identifying information is limited to that required for PRO review or for other statutorily required purposes; one effect of this restriction is that hospitals might not be informed about physicians whose practice patterns are being examined by the PRO for quality-of-care reasons.

PROs are required to exchange information with FIs and carriers, with other PROs, and with other public or private review organizations. For instance, they must contact state medical licensing boards to exchange data about quality review efforts and to establish mechanisms by which the state medical boards can send to the PRO the names of physicians against whom the board has taken disciplinary action. The PRO is then required to review all of that practitioner's cases (except for services provided in the physician's office) for the three months following notification by the board. PRO responsibility to provide information to state agencies on physicians who are involved in quality interventions (corrective action plans) or in sanction proceedings is less clear but certainly is contemplated. PROs are not at present required to submit information about physician sanction recommendations to the National Practitioner Data Bank, which is being established under the Health Care Quality Improvement Act of 1986 (P.L. 99–660) (Federal Register, 1989b).

Costs

The annual PRO program budgets (excluding internal expenditures of HCFA) have risen markedly in absolute terms in the late 1980s, although in earlier years they did not keep pace with the funding levels of the PSRO program (see Table 6.7). Overall, the budget now approximates $300 million a year (estimated for FY 1989), up from $157 million for the first round of PRO contracts.13 The Congressional Research Service (CRS) cited a figure of $187.5 million for FY 1987 and a proposed level of $176 million for FY 1988 (Cislowski, 1987); a budget of $330 million is estimated for FY 1991.

TABLE 6.7

Medicare Program, Professional Standards Review Organization (PSRO) Program, and Peer Review Organization (PRO) Program Expenditures: Selected Fiscal Years.

PRO budgets are based on negotiated costs for “simple,” “complex,” and ambulatory reviews, for fixed administrative costs and some start-up costs (largely accounting system updates), for photocopy and postage costs, and for costs of CHAMPUS review for those PROs doing such review. According to data from negotiated three-year contracts, per-review costs average $17.03 for simple review (range, roughly $13 to $32), $33.29 for complex review (range, nearly $27 to over $48), and $9.16 for ambulatory review (range, $4 to almost $15) (unpublished HCFA data, 4/11/89).

In FY 1987, Part A Medicare benefits amounted to $50.8 billion and Part B outlays to $30.8 billion (for a total of $81.6 billion) (Committee on Ways and Means, 1989). Two different figures have been cited for FY 1987 PRO outlays: $187.5 million by the CRS and $155 million (for just inpatient review) by the General Accounting Office (GAO, 1988a). Taking the higher (total) figure, PRO expenditures as a percent of Medicare outlays that year still amounted to only about 0.2 percent of all outlays (Table 6.7); focusing on just inpatient care, PRO expenditures approximated 0.3 percent of expenditures.
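The FY 1987 percentages above follow directly from the cited dollar figures; a quick check (figures from the text, variable names illustrative):

```python
# Back-of-the-envelope check of the FY 1987 budget-share percentages.
part_a = 50.8e9          # Part A Medicare benefits
part_b = 30.8e9          # Part B outlays
total_outlays = part_a + part_b      # $81.6 billion

pro_total = 187.5e6      # CRS figure, all PRO outlays
pro_inpatient = 155e6    # GAO figure, inpatient review only

share_of_all = pro_total / total_outlays        # ~0.0023, about 0.2 percent
share_of_inpatient = pro_inpatient / part_a     # ~0.0031, about 0.3 percent
```

Either way the calculation is done, PRO review costs were a fraction of one percent of the Medicare dollars whose use they policed.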

Table 6.7 shows estimated PRO budgets and Medicare outlays for FY 1989 and FY 1991; an intermediate estimate for FY 1990 puts Medicare expenditures at more than $112 billion and the PRO budget at $290 million. In all cases, the PRO program budget amounts to about 0.3 percent of Medicare outlays.

In proportion to total Medicare expenditures, these amounts for the PRO program are lower than those for the PSRO program (see Table 6.7). Even if the $11 million or so intended for pilot projects (see below) were added in to the estimates above for the PRO program, its expenditures would not exceed those of the PSRO program as a percentage of expected Medicare outlays.

Given the expanded responsibilities of the PROs compared with those of the PSROs, the markedly changing environment of health care for the elderly, and the greater perceived threats to high-quality care in the future, some view this level of funding as parsimonious. Furthermore, even if $300 million per year were adequate for all the varied activities presently required of the PROs, any future congressional or executive branch assignments would clearly need to be budgeted adequately.

Quality Review in Medicare HMOs and CMPs14

As of April 1989, one million Medicare beneficiaries were enrolled in 133 risk contracts held by health maintenance organizations (HMOs) and competitive medical plans (CMPs); that figure accounts for about 3 percent of the Medicare population. The history of quality review for the care rendered to such beneficiaries by HMOs and CMPs is both complex and significant, the latter chiefly because it ushered in efforts (a) to design a way to reduce required review for providers having an adequate quality assurance plan of their own, (b) to review “episodes” of care, and (c) to review ambulatory care provided in physicians' offices.

Before COBRA 1985, no specific legislative requirements existed for the review of services provided to Medicare beneficiaries enrolled in risk-contract HMOs and CMPs. Because of continuing concern about possible underuse in risk-contract programs, COBRA 1985 mandated “comparable review” of care rendered in HMOs and CMPs for services given after January 1, 1987; it did not provide for pilot projects or staged implementation. The “comparable review” language was interpreted to mean that the number of cases reviewed must be at the same level as was occurring under PPS in the fee-for-service system; this in turn implied a substantial volume of medical record review.

To stimulate competition among review organizations, OBRA 1986 allowed review of HMO and CMP services by entities other than PROs (namely, Quality Review Organizations or QROs). At the outset, one QRO was awarded the review contracts for the states of Illinois, Kansas, and Missouri.15 All remaining HMO and CMP review was done by PROs for plans in their own states, except for California Medical Review, Inc., which was awarded the review for Arizona and Hawaii.

Limited, Basic, and Intensified Review

HMO and CMP review has three possible levels: limited, basic, and intensified (see Table 6.8). Basic review is the core approach to HMO and CMP review. Limited review is intended to reduce the volume of active PRO review relative to that of basic review, mainly by requiring smaller sample sizes. Intensified review has the same general meaning as in fee-for-service settings; that is, it is invoked when a threshold for a quality problem is reached, and sample sizes are larger (usually 100 percent of relevant cases). The three levels are not a continuum, because for plans on limited review, quality problems that reach specified thresholds trigger intensified, not basic, review.

TABLE 6.8

Summary of Activities for Health Maintenance Organization (HMO) and Competitive Medical Plan (CMP) Review, by Requirements for Limited, Basic, and Intensified Review.

For all three levels, medical record review is now required for five main areas of care (Table 6.9). First are hospital admissions for certain “sentinel” conditions such as serious complications of diabetes, certain malignancies, and adverse drug reactions. For these, both pre- and post-hospitalization ambulatory care is reviewed against criteria developed by the PRO. Second is a random sample of inpatient admissions. Third are samples of readmissions within specified time periods. Fourth are nontrauma deaths in all health care settings. A fifth area is focused review of ambulatory care, for which PROs were given six months to develop a methodology. Finally, beneficiary complaints are also monitored, and PROs must do community outreach activities for risk-contract enrollees similar to those for fee-for-service beneficiaries.

TABLE 6.9

Topics to be Covered in Health Maintenance Organization (HMO) and Competitive Medical Plan (CMP) Review, by Type of Review.

Limited review is available only to those HMOs and CMPs that request it and then pass a review of their internal quality assurance program. It has two basic components. First, if the PRO judges the plan's written quality assurance program to be adequate,16 it re-reviews a subsample of cases already reviewed by the plan to validate the plan's judgments; this is done when the plan is first assessed and on a quarterly basis thereafter. The purpose is to monitor the plan's internal program, not to provide a generalized statement about the quality of care provided. If patterns of problems are apparent, the PRO would monitor the plan's corrective actions. Second, the PRO will conduct the types of reviews noted just above.

Plans not opting for or not eligible for limited review are placed on basic review. This focuses on the same five areas (and community outreach) described above but requires larger samples. Not included in basic review is the extra quarterly review of charts to validate decisions made by plans on limited review.17

Under intensified review, the sample of cases reviewed is larger (up to 100 percent of cases). Limited review plans move to intensified review in one of two instances: first, for cases in the subsample validation review, if the PRO finds that 5 percent or 6 cases in a quarter have a problem that the HMO or CMP did not detect and, second, if 5 percent or 6 cases of all other cases reviewed are found to have problems related to standards of quality, appropriateness of care, or access. Basic plans move to intensified review only in the latter instance. Plans placed in intensified review remain in this status for six months before the status is reviewed.
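The movement rules just described amount to a simple decision procedure, sketched below. The function and status names are illustrative, and the reading of the "5 percent or 6 cases" threshold as "either condition suffices" is an assumption about the rule, not a quotation from the regulations:

```python
def trips_threshold(problem_cases: int, total_reviewed: int) -> bool:
    """True when problems reach 5 percent of cases reviewed or 6 cases
    in a quarter (assumption: either condition suffices)."""
    if total_reviewed == 0:
        return False
    return problem_cases >= 6 or problem_cases / total_reviewed >= 0.05

def next_status(current: str, validation_misses: tuple, other_problems: tuple) -> str:
    """Apply the movement rules from the text: limited plans escalate on
    either trigger (missed problems in the validation subsample, or
    problems in other reviewed cases); basic plans escalate only on the
    latter. Each tuple is (problem_cases, total_reviewed)."""
    if current == "limited" and trips_threshold(*validation_misses):
        return "intensified"
    if trips_threshold(*other_problems):
        return "intensified"
    return current

# A limited-review plan whose internal QA missed problems in 7 of 100
# validated cases moves to intensified review:
print(next_status("limited", (7, 100), (1, 100)))  # intensified
```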

Episodes and Ambulatory Review

By and large, the process for reviewing care rendered to Medicare beneficiaries in risk-contract HMOs and CMPs is similar to that followed for traditional fee-for-service settings (e.g., use of generic screens, assignment of severity levels, physician or plan notification, and the like). The main difference is that HCFA has tried to implement an “episode of care” approach through review of “complex” cases. A “complex case” is one in which services being reviewed were provided in more than one setting or involve more than one hospital stay; for example, cases selected under the 13 sentinel conditions would normally be classified as complex.18 For complex cases, PROs are expected to review the care rendered in all relevant settings (ambulatory, hospital, and post-acute).

Arguably the most significant step was the requirement for ambulatory review. This left to each PRO the responsibility for developing a focused review methodology and establishing clinical screening criteria to be used in reviewing the care rendered in the office setting for the 13 sentinel conditions. Because the HMO industry, the PRO community, and HCFA all considered the possibility of dozens of different approaches to ambulatory review an unattractive proposition, these groups agreed that an industry-PRO Task Force would be established to develop model methods to recommend to the PRO community. As of mid-1989, experience with the set of instruments developed was limited, but the process of collaboratively developing acceptable tools for such an effort was considered valuable.

Other Initiatives

HCFA has embarked on several efforts to improve its ability to review and assure quality of care for the Medicare program. Among those considered most important by the agency are the Uniform Clinical Data Set, improvements in inpatient and ambulatory review through pilot projects, small area analysis, and remedial medical education efforts19 in conjunction with state medical societies and others (Morford, 1989b). The first three activities, plus one relating to the hospital/post-acute care interface, are briefly described below.

Uniform Clinical Data Set

In 1987, HCFA began a complex project to develop a data set, known as the Uniform Clinical Data Set (UCDS), for use by the PROs and the wider research community. It was intended to contain far more detailed clinical data than were heretofore available in the HCFA data files. The genesis of the UCDS was the recognition that PRO judgments about the necessity, appropriateness, and timeliness of care vary appreciably and are too subjective.

One objective of the UCDS, therefore, was to put in place a mechanism to make PRO review more objective, systematic, and efficient through the application of a uniform set of electronically applied decision rules (computer algorithms) for screening cases. The second purpose of the UCDS is to permit the development of more and better epidemiologic information about the effectiveness of medical practices. This would give PROs, among others, a broader and stronger basis for decisions about quality, appropriateness, and medical necessity of care than is available from billing data alone. More broadly, HCFA hopes to be able to set national and individual PRO goals to improve quality of care and to measure PROs' success in reaching those goals (Morford, 1989a). Finally, the agency plans to make the UCDS data available for intramural and extramural analysis.

The basic operating premise of the UCDS is that relevant clinical data will be abstracted from medical records of all inpatient admissions reviewed by the PROs for whatever reason. The total number of data elements available on the UCDS is about 1600, although not every data element is relevant for every case. The contents of the UCDS fall into 10 major categories:

I. Patient identifying information
II. Patient history and physical examination findings
III. Laboratory findings
IV. Imaging and other diagnostic test findings
V. Endoscopic procedures
VI. Operative episodes
VII. Treatment interventions
VIII. Medication therapy in hospital
IX. Recovery phase
X. Patient discharge status and discharge planning

Medical record data will be gathered by PRO abstractors either on-site or at a central office; data will be entered via desktop or laptop computers. At present, data abstraction requires about one hour per case, but that time requirement is expected to decrease as software is improved and experience gained. Detailed guidelines describe the data to be acquired.

Quality-of-care algorithms have been developed to screen cases for potential quality problems automatically; nurse reviewers will have more organized, objective, clinical information with which to flag instances of potential quality deficiencies for more in-depth review, and physician advisors will have better organized information on which to base their quality-review decisions. The computer algorithms fall into several categories: surgery (12 specific procedures), disease-specific algorithms (12 conditions), organ system algorithms (10 systems), generic quality screens (six classes of problems), and discharge screens.

As of April 1989, the project was in a small pilot-test phase. Field testing of the whole approach, including use of algorithms to assist in the selection of cases for physician review, is expected to begin during the winter of 1989–1990. An assessment and recommendation about whether to go forward with this approach as an integral part of the PRO quality review task is expected late in 1990.

Pilot Projects for PROs

HCFA and the PRO community are embarking on a series of pilot projects designed to begin several review activities called for in legislation over the last few years. The two primary topics of these efforts are reduced (or alternative) hospital review and review of care given in noninstitutional settings, specifically physician offices and post-acute (HHA and SNF) settings. The reduced hospital review pilots may be constructed around use of the UCDS by hospitals themselves; the entire proposal for this pilot has been opposed by some groups because it appears to be too close to “delegated review” of the sort discussed earlier with respect to the PSRO program (Vibbert, 1989e).

Approximately $9 to $11 million in Medicare Trust Fund monies will be set aside over three years to fund new pilot programs. Only PROs will be eligible for funding (through contract modifications), although they can and will subcontract with each other and with outside research and academic groups for relevant expertise. Two formal requests for contract modification proposals (for noninstitutional and alternative hospital review methods) were released in May and July 1989, and several PROs have submitted proposals. One pilot project on noninstitutional review began on December 1, 1989. The emphasis is on ambulatory (office-based) care, and the project will evaluate the practicality and usefulness of techniques to evaluate care in this setting.

Small Area Variations

Perhaps the most ambitious PRO project currently underway is a small area analysis of variation in utilization and outcomes of hospital care, which is being conducted by the American Medical Review Research Center (AMRRC, 1989). The project began in October 1987 and is expected to continue until June 1990. This project will compare rates of use of hospital services in 1984–1986 in approximately 4,800 hospital market areas. Using these data, project investigators will (a) develop and disseminate information on use and outcomes of hospital care; (b) engage 12 PROs in a complex pilot education program to review, interpret, and feed back information to physicians on identified practice patterns;20 (c) improve the use of small area analysis methods as an operational tool for PROs; and (d) examine various intervention strategies (such as physician study groups) to determine how they might best be applied in both the public and the private sectors. The physician study group phase will include five surgical conditions (coronary artery bypass graft, cardiac catheterization, carotid endarterectomy, male reproductive organ operations, and small and large bowel operations) and five medical diagnoses (chronic obstructive pulmonary disease, pneumonia, bronchitis and asthma, acute myocardial infarction, and diabetes).

Uniform Needs Assessment

OBRA 86 mandated the development of a “uniform needs assessment instrument” to evaluate the needs of patients for post-acute care such as HHA and other health-related long-term-care services. This instrument would be used by discharge planners, hospitals, nursing facilities, HHAs, and other providers, as well as by FIs, to make decisions about post-discharge needs and payment.

HCFA (specifically the Office of Survey and Certification of HSQB) has pursued instrument development with the assistance of an advisory panel. An extensive effort was made to solicit review and comment on the final draft of the instrument in preparation for its final approval and proposed field testing for reliability, validity, and administrative feasibility. HCFA plans to develop a users' manual and a standard training process in its use.

Monitoring and Evaluating PROs

PROMPTS-2

The PROMPTS-2 system focuses on whether individual PROs have fulfilled their contractual obligations. Specific attention is given to timeliness and accuracy of medical review, responsiveness to beneficiary and provider inquiries, personnel requirements, report generation, and cost effectiveness. This review is required twice during a contract cycle and is completed by HCFA Regional Office (RO) staff.

PROMPTS-2 does not generate information on the types of quality problems the PROs detect (or fail to detect). The process largely duplicates the SuperPRO effort, although on a considerably smaller scale. Questions have also been raised about inconsistency across ROs, the expertise of their medical reviewers, and the validity of their decisions, as well as about the ability of the data so generated to discriminate adequately among PROs (OIG, 1989). A new PROMPTS is being developed to ensure consistency among regions.

SuperPRO

SuperPRO (a contract activity performed by SysteMetrics) conducts one aspect of performance evaluation of individual PROs by reviewing a sample of hospital records previously examined by each PRO and making an independent decision about necessity, appropriateness, and quality of care. The main objectives of SuperPRO have been (a) to validate the determination made by the PROs, specifically on admission review, discharge review, and DRG validation; (b) to validate the medical review criteria being used by nonphysician reviewers for admission review; (c) to verify that nonphysicians are properly applying the PRO's criteria for referring cases to physicians for review; and (d) to identify quality issues that should have been addressed by the PRO.

Cases identified by the SuperPRO as having quality problems are reported to the PRO, which can further review the case, appeal the judgment of the SuperPRO, and provide additional information in its rebuttal. Approximately 25 percent of PRO appeals lead to reversals of decisions in favor of the PRO.21 HCFA then attempts to compare SuperPRO findings with PRO findings to determine whether either the PRO program or individual PRO performance needs improvement or modification.

SuperPRO cannot provide information about the incidence of quality problems in the Medicare population because it only re-reviews cases already reviewed by the PRO; neither does it address how the PRO selects cases or whether cases not reviewed by the PRO should have been. Comparisons of PRO and SuperPRO information about the prevalence of quality problems cannot be exact because the review methods (particularly the level of information from the attending physicians or hospitals) are not the same. Generally, SuperPRO data cannot be used to assess a specific PRO's performance, and the value of SuperPRO review compared with that of PROMPTS-2 has been questioned (OIG, 1989). Until mid-1989, SuperPRO reports were considered “advisory” and did not affect payment of claims for Medicare services.

A new competitive contract for the SuperPRO was issued in mid-1989 and awarded to the previous SuperPRO contractor, SysteMetrics. It had several significant changes from the earlier SuperPRO effort (HCFA, 1989a). First, HCFA (not the PROs) will select the random sample of cases, now to be 600 per six-month cycle in the following allocations: inpatient admissions (217); HMO cases (195); and ambulatory surgery cases (188). Second, if the PRO disagrees with the SuperPRO decision and sends a rebuttal, the SuperPRO will do a review that may include “local” criteria. HCFA intends that the PROs and SuperPRO should be on a “level playing field” and that the SuperPRO should use the same information that the PRO originally had in making quality judgments—that is, the material reviewed by SuperPRO in making its decision is the information the PRO obtained from the hospital or physician in reaching its final decision. Nevertheless, the SuperPRO still will not seek additional input from the hospital or physician whose care is under question. (This is the point at which PRO and SuperPRO procedures differ and conceivably “bias” the evaluation against the PROs.) Third, HCFA will now use SuperPRO results as a formal (not advisory) part of its evaluation of PROs, and thus the PRO rebuttal process has been strengthened.

Because disputes between PROs and the SuperPRO are likely, HCFA has decided to implement a nationwide “physician consultant contract” by which they can be adjudicated (Vibbert, 1989c); another option is to ask the HCFA Regional Offices to resolve differences between SuperPRO and the PROs (OIG, 1989). Given the serious questions that have been raised about SuperPRO performance and usefulness, these moves must be regarded with some skepticism.

AMPRA 1989 Impact Survey

Neither of these evaluation activities provides any concrete sense of “how well” PROs are doing either individually or collectively in improving the quality of care rendered under Medicare. PRO evaluators give great attention to compliance with contract specifications, have a much more complex program to assess, and face essentially the same difficulties as did the evaluators of the PSRO program.

To help overcome this paucity of “real life” information about impact on quality and what PROs are doing to accomplish this goal, the American Medical Peer Review Association (AMPRA) began in mid-1989 the first of several contemplated surveys on PRO impact. Topics of the survey include each PRO's general impressions of the impact of PPS on quality of care, the impact of PRO review on rates of hospital utilization and on quality issues, and the impact of DRG validation; it also asks each PRO to describe its educational focus, to give its views on how to improve PRO review methods, to document the level of involvement in private review activities, and to supplement the survey with commentary on PRO effectiveness. Results from these surveys were expected in fall of 1989.

CONTROVERSIAL OR PROBLEMATIC ASPECTS OF THE PRO PROGRAM

Several experts and sources of information for this study have pointed to various problems with the current PRO program. Some of these problems apply generally to the program's review methods, to its administrative and sanctions approaches, or to legal or financial constraints. Others relate to the efforts to move the fee-for-service (and PPS-) oriented review activities to the prepaid group practice (HMO and CMP) settings. This section briefly summarizes these problematic aspects of the PRO program; they are discussed more fully in Volume II, Chapter 8.

Generic Screens

Inpatient Generic Screens

The initial experience with inpatient generic screens has come under considerable scrutiny and criticism.22 Issues include extreme variation across PROs and poor yield of true quality problems.

Data compiled by HCFA through June 1989 reflect wide variation across PROs in the incidence of screen failures and confirmed quality problems; depending on the specific screen, screen failures among cases reviewed ranged from 0.2 percent to over 38 percent, and confirmed problems from 0.0 percent to 100 percent (Table 6.10). Similar variations were documented by the GAO (1988a) and by the OIG (1988a). GAO (1988b) noted that the PROs themselves rate generic screens behind nurse judgments and profiling and tied with intensified review in terms of their effectiveness in identifying cases with possible quality of care problems.

TABLE 6.10

Range of Generic Screen Failures and Confirmed Problems (in Percentages), by Type of Screen.

One drawback of these rate calculations is that the percentages of confirmed problems are based on a denominator of referred screen failures, not of the universe of cases reviewed. Thus, PROs that look quite different on the two measures may actually be detecting fairly similar rates of problems.23 The more fundamental question, therefore, is what fraction of all charts reviewed actually reflects a true quality problem.
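The denominator effect described above can be made concrete with a small numerical sketch. All figures here are invented for illustration; they are not drawn from the HCFA data:

```python
# Two hypothetical PROs with identical underlying problem rates but very
# different screen-failure (referral) behavior. Measured against referred
# failures, their confirmed-problem percentages look quite different;
# measured against all cases reviewed, they are identical.
pros = {
    "PRO A": {"reviewed": 10_000, "failures": 2_000, "confirmed": 100},
    "PRO B": {"reviewed": 10_000, "failures": 500,   "confirmed": 100},
}

for name, n in pros.items():
    per_failure = n["confirmed"] / n["failures"] * 100  # 5% vs. 20%
    per_case = n["confirmed"] / n["reviewed"] * 100     # 1% for both
    print(f"{name}: {per_failure:.0f}% of failures, "
          f"{per_case:.0f}% of all cases reviewed")
```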

Table 6.11 gives the average rates of screen failures and confirmed problems among screen failures compiled by HCFA through June 1989, which document the large differences across the different generic screens, based on more than 6.3 million cases reviewed. The first two columns clearly reflect the highly dissimilar rates of failures and confirmed problems as a percentage of screen failures. The third column of Table 6.11 gives the percentages of confirmed problems among all cases reviewed. It shows the very low yield of confirmed problems as a percentage of cases reviewed. Thus, the screens appear to be of some, but only modest, success; the most productive screens relate to adequacy of discharge planning and nosocomial infections.

TABLE 6.11

Percentage of Cases Failing Generic Screens and with Confirmed Problems, by Generic Screening and Universe of Cases.

Generic screens are applied to cases targeted for review for many reasons. The 3-percent sample could be said to represent the universe of Medicare admissions, and the lower panel of Table 6.11 reports the percentages of actual quality failures and confirmed problems for just that sample. If those figures are compared with the data relating to all reviewed cases (the upper panel of Table 6.11), the yield from the random sample is roughly the same as that from all sources of reviewed cases, for every generic screen except medical stability at discharge. The full set of reviewed cases includes, of course, the randomly selected cases, cases selected for expected quality problems, and other cases picked for review that do not relate presumptively to quality problems (e.g., those required for review by virtue of being one of the 12 Medicare Code Editor principal diagnoses). How useful the screens are for the last type of cases is unknown.

Another, and perhaps more pressing, issue is why PROs differ so dramatically in the rate of referrals and confirmed problems. The process is supposed to be quite standardized (through interpretative guidelines), but it clearly can differ very much from PRO to PRO (ProPAC, 1989). For instance, training for nurse reviewers and physician advisors, the use of specialists, and consultation with attending physicians are not standardized.

Another facet of the differences across PROs is that of quality problems never detected (and hence never addressed) in any formal way. One study estimated that as many problems were present among cases not flagged by screens (e.g., in about 5 percent of the cases reviewed) as were identified by the screens (ProPAC, 1989). Reasons for this may include the fact that nurse reviewers differ in how narrowly or expansively they interpret the screens and guidelines. Moreover, because of the required case selection specified in their contracts and the close relationship of the budgets to those required types of review, PROs may choose not to select “extra” providers, physicians, or problems for review even though they may suspect substandard care, although recently HCFA has begun to pay for such review.

Furthermore, PROs differ in the collection of cases to which they apply generic screens both because they have hospitals and physicians on 100-percent intensified review for different reasons and because they have different mixes of hospital transfers to other types of units. Finally, some cases are targeted for review precisely because a quality problem is considered more likely (e.g., day or cost outliers; the first of a pair of admissions within 31 days; and most cases on intensified review). The question here becomes the marginal productivity of the screens given that there is already reason to believe a quality problem might be present.

The PRO community initially argued for this type of review tool to be used nationally, and a majority of PRO officials and HCFA staff believe they have been at least moderately effective (OIG, 1988a; GAO, 1988b). Nevertheless, generic screens as applied so far have not been entirely successful in efficiently identifying quality problems, and generic screen data cannot be used to project “national rates of occurrence” of the various problems identified through the screens (HCFA, 1989b).

Various difficulties remain. Their application is highly labor-intensive. Apparently they still yield a considerable number of false positives, despite the relaxation of the requirement that nurse reviewers refer all failures for physician review, and they have a nontrivial false-negative rate as well. Revisions to the generic screens for the third SOW are essentially untried as of this date. Furthermore, some PROs have found that their own additional screens do as good a job as, or better than, the HCFA screens.24 Finally, there are numerous reasons why PROs can legitimately differ in the rate of cases detected by the screens, making conclusions about the uniformity of this tool difficult to draw.

Thus, standard, well-known generic quality screens may allow HCFA and the public to track quality problems at least at a state level (depending on how much should be assumed about the reliability and validity of these, or any, generic screens). Less certain is whether they can or should be used to compare PROs' performance. In short, whether generic screens are a strong and reliable tool on which to base a considerable part of the Medicare quality assurance effort seems problematic, unless and until they receive closer examination and refinement.

Related Approaches

This experience with inpatient generic screens underscores the need for rigorous pilot-testing of similar instruments designed for application in nonhospital settings, where there is vastly less experience with them. Perhaps more importantly, it argues for considerable testing and review of the computerized screening algorithms now being developed for the UCDS, which are intended to supplant the present generic screen approach. Reasons for caution are that the UCDS approach is so radically different from what the PROs have used so far and that the cost of implementing such an extensive data collection effort is likely to be high.

Home Health Agency Review

PROs that had begun HHA review during the site visits for this study noted two significant problems. First, selecting an appropriate sample for this task requires that hospitals bill for the two admissions in a reasonably timely way. At least one PRO noted, however, that some hospitals bill for two admissions more than 31 days apart (which would not constitute a reviewable readmission) and only much later bill for the admission that occurred within 31 days of the first admission. This practice severely complicates the identification of 31-day readmissions and hence of cases that would constitute the potential pool of HHA care. A related sampling problem is simply that the pool of HHA cases for readmissions only is itself small, and whether it is representative of all HHA care is unknown.

Second, at least one PRO noted that the HHA sector is undergoing great growth and change, including the emergence and disappearance of “fly-by-night” agencies. Agencies might be out of business by the time the PRO knew what cases of HHA intervening care had fallen into its sample. Review in that case probably would be impossible and certainly would be moot.

Pre-procedure Review

Whether PROs should be doing pre-procedure authorizations is part of a complicated issue concerning what entities should be doing physician review. It has generated considerable debate for the Physician Payment Review Commission (PPRC, 1989). The debate concerns two issues. First, which entities (carriers, FIs, and/or PROs) should conduct prior authorization of procedures? Carriers and FIs have a history of prepayment review of physician services more extensive than that of PROs.

Second, is this primarily an exercise in utilization and cost control or in quality assurance? It may never be possible to draw a firm distinction between prior authorization activities that serve a quality assurance function and those that are more purely intended to control use. To the extent that the latter purpose is preeminent, however, it could be claimed to detract from an emphasis on quality intended by Congress for the PROs.

Physician Review for Quality of Care

A related issue concerns the appropriate locus of responsibility for reviewing services, particularly ambulatory services. PROs, FIs, and carriers have overlapping, or possibly conflicting, responsibilities. They operate in different ways, with different data bases and different rules, such as when (before or after hearings) they can deny payment and what information about review criteria and screen thresholds must be made public. They also collectively leave a big gap. According to PPRC (1989), none of these entities has specific responsibility for reviewing most Part B services for quality of care. Carriers have authority to deny payment and initiate sanctions for substandard, unnecessary, or inappropriate care. PROs, however, are charged with reviewing office-based (ambulatory) care, which they do not yet do (although one pilot project on office-based care has begun). In short, the picture of what agencies have what authority to review outpatient care for quality of care and to take action in the face of instances of poor care remains clouded.

PPRC (1989) made four recommendations concerning Part B carrier and PRO utilization and quality review. First, HCFA should establish procedures to encourage input from carriers and PROs in designing utilization and quality criteria, in developing physician profiling methods, and in investigating physicians suspected of providing inappropriate or substandard care or of billing inappropriately. Second, HCFA, carriers, and PROs should work together to delineate future roles of PROs in doing ambulatory care review. Third, PROs and carriers should consult with appropriate medical organizations when developing review criteria (over and above what they are required to do now). Fourth, HCFA should designate a single entity to support research, demonstrations, evaluations, and technical assistance for all three entities doing utilization and quality review.

Peer Review

Despite the historical emphasis on peer review in federal programs and on this specific emphasis in the PRO SOW, physicians and hospitals heard from during this study widely contended that PRO reviewers are not “peers.” The points in contention concern rural practitioners and providers, specialists anywhere not reviewed by members of their own specialty, physicians fully in private practice reviewed by physicians only partly still in practice (e.g., because they are semi-retired), physicians for whom the relatively low reimbursements for PRO review are an important portion of their income, and physicians in prepaid group practice settings reviewed by those in fee-for-service settings.

Several issues arise concerning review of care rendered in the TEFRA risk-contract HMOs and CMPs. Most basically, physicians in fee-for-service practice are believed to be poorly placed (and historically to be ill-disposed) to judge the care in HMOs on a “peer basis”; the premises underlying prepaid practice and the resulting styles of practice are simply too different. There is some concern that using “local standards of care” may perpetuate existing practice patterns and vitiate the potential of prepaid systems for innovation and improved service to the Medicare population. Conflicts of interest can arise in several instances: when only fee-for-service physicians review prepaid group care, when HMO or CMP physicians review care rendered by a plan from which they may receive financial benefit, and when they review care from a competing plan. The PROs are expected to develop mechanisms for addressing these possible situations.

PROs we visited acknowledged the problems concerning rural areas and specialists but generally defended their record of using peers. They cited budget constraints as playing a large role in these problems, among them the inability to maintain regional offices in rural areas and to reimburse reviewers at competitive levels. The emerging debate about “quality denial letters” discussed elsewhere in this chapter is expected to add to the problems of recruiting specialists and, especially, sub-specialists.

Sanctions

Retention and Strengthening of Sanction Authority

The role of PROs in the sanctioning process, and the role of sanctions in the quality assurance efforts of the PROs, have both been misunderstood over the course of the program. PROs can only recommend sanctions to the OIG, not impose or enforce them. Although their recommendations are the driving force behind the sanction process, they may have little influence over the outcomes of sanction efforts that are carried through the entire set of legal procedures.

Nevertheless, PROs are virtually uniform in their view that the sanction-recommendation capability is an indispensable tool in their dealings with providers and practitioners whose performance is unacceptable, as evidenced by statements on study site visits, testimony from the PRO community, and other information (GAO, 1988b). PROs would not be willing to relinquish the sanctioning authority they now have in favor of relying simply on greater educational or persuasive interventions, even if that step seemed to place them in a more “positive” light vis-a-vis the provider community. In view of the difficulties of the PSRO program, which did not have all the regulatory powers of the PRO program, weakening those powers for the PRO program does not seem to be an attractive option.

Correcting other problems of the entire sanctioning process, however, does appear to offer ways to strengthen the government's ability to protect the quality of care delivered to Medicare beneficiaries (Jost, 1988). Several issues have been debated over the past year or two, and developments toward the end of 1989 may solve some of the more knotty problems, including monetary penalties, the “unwilling and unable” provisions, and adequacy of notice to practitioners and providers.

Three different groups (the OIG, the GAO, and the Administrative Conference of the United States [ACUS]) have all recommended that the monetary penalty option be strengthened. Options include allowing the PROs to recommend a “substantial” fine of, for instance, up to $10,000 per violation of Medicare obligations (Vibbert, 1989c; OIG, 1988b) or enacting legislation that sets a fixed upper limit on monetary penalties in place of the present cost-based limit (GAO, 1988b). The requirement that providers or practitioners be found “unable or unwilling” to meet their Medicare obligations (in addition to the finding that they have not in fact complied with those obligations) has caused unending confusion and frustration with the sanction process. The problems were sufficiently apparent and persuasive toward the end of 1988 that the OIG recommended that DHHS submit a legislative proposal to the effect that failure to comply with patient care obligations was a sufficient basis for sanctioning (OIG, 1988b). The ACUS has endorsed a recommendation to remove the “unwilling and unable” requirement before sanctioning and to build in protections concerning due process (Vibbert, 1989c),25 suggestions that seem worth pursuing.

The concept of not meeting “professionally recognized standards of care” has evidently been confusing to some parties (from PROs through ALJs). This creates difficulties for PROs in documenting the sanctionable infraction. PROs have in some cases issued vague charges and in other instances raised new issues at sanction meetings that were not reflected in the original notice (Jost, 1988). A possible result has been a high rate of reversals of OIG sanction actions by ALJs (10 of 18 cases by one recent count) (Vibbert, 1989b). In an effort to correct this problem, HCFA has issued model notice letters for PROs to use, but additional steps would probably be needed.

Denials for Substandard Quality of Care

COBRA and OBRA 87 allow PROs to deny Medicare payment for substandard quality; a draft proposed rule to implement these requirements was published in January 1989 (Federal Register, 1989a). It required payment to be denied when substandard care resulted in actual, significant adverse effects on the beneficiary (defined very broadly) or placed the beneficiary's health, safety, or well-being in imminent danger (i.e., put the beneficiary in a situation that constituted a gross and flagrant violation).

To protect the concept of peer review, the proposed rule specified that physician reviewers engaged in initial denial determinations of substandard quality be specialists in the same field as the attending or consulting physician whose care is under question. This requirement would be relaxed when meeting it would compromise the effectiveness or efficiency of PRO activities.

The proposed rule further provided that hospitals would be held financially liable even if they did not contribute directly to the substandard care rendered by the physician; thus, any denial of physician payment on these grounds would also result in a denial of reimbursement to the hospital. Furthermore, physicians could not charge patients for care denied for these reasons and, if they had already done so, would have to refund those payments to the beneficiary.

The proposal then specified that the PRO shall notify the patient when such payment has been denied on the basis of substandard care. The key paragraphs would read: “…Our determination [concerning denial of Medicare payment of a hospital admission or physician services provided in connection with that admission] is based on a review of your medical records, which indicates that the quality of services you received does not meet professionally recognized standards of health care. Denial decisions are made by the PRO physician. Your attending physician and hospital were given an opportunity to discuss your case with the PRO before the denial decision was made…” (Federal Register, 1989a). In the initial proposal, the letter would have been sent before providers were able to exercise their rights to appeal (i.e., to have the case reconsidered), rather than after a final determination had been made, although in this instance the initial denial determination was supposed to be made by a physician in the same specialty as the physician whose payment is questioned.

The entire quality denial process prompted much debate. Among the concerns were the lack of protection for physicians, who cannot invoke their full due process rights to reconsideration before their patients are notified of such denials, and the expected increased difficulty of recruiting the specialists who will be needed to participate in these reviews and decisions. The ACUS has recommended that HHS proceed expeditiously to implement PRO authority for quality denials but with appeals before patient notification (Vibbert, 1989c).

Other criticisms centered on the effect of the “quality denial letter” on patients and physicians (and the patient-physician relationship), the impetus such letters might provide for increased malpractice suits filed by beneficiaries, and the impact of higher litigation on PRO activities. Yet other controversies focused on how much specific information the PRO should have to put in the letter to the beneficiary; some want to keep the letters general but specify that care was substandard, others want more specificity about what was discovered that led to that decision, and yet others want the letters to say only that care did not meet Medicare payment guidelines (and not refer to the denial as a quality denial) (Vibbert, 1989b).

OBRA 1989, passed in late November 1989 (after the main part of this study had been completed), addressed some of these issues (Congressional Record, House, November 21, 1989, p. 9380). First, it protected the physician or institutional provider from unwarranted notices to patients. Specifically, it provided that the PRO should not notify beneficiaries until after the PRO had notified practitioners or providers of its determination about the quality problem and their right to a reconsideration; if the practitioner or provider requests such a reconsideration, then one would be conducted before any notices to beneficiaries. Second, it softened the wording of the beneficiary notice, by saying that the letter need only state: “In the judgment of the peer review organization, the medical care received was not acceptable under the Medicare program. The reasons for the denial have been discussed with your physician and hospital” (p. 9380).

Administrative Procedures

The authority for PRO activities resides in several legislative acts, a broad array of regulations, guidelines, and directives, and various quasiregulatory documents. The practice of relying on Manual transmittals, contracts, and other less formal instructions, instead of promulgating regulations through “public notice and comment” rulemaking as required by the Administrative Procedure Act (APA), has raised serious questions (Jost, 1988). Arguably, HCFA has opened the door to accusations that it is attempting to govern the PRO program through “a continual and confusing stream of instructions [that has] severely hampered their [the PROs'] ability to carry out their mandate” (Jost, 1988) and earned the hostility of those governed by the program. Some experts argue that sound policy reasons support using the more cumbersome process (Jost, 1988). It promotes public participation and fairness to parties who will be affected by the rules; it also forces the agencies to consider their proposals with greater care and to express them clearly.

Legal suits and legislation in the last few years have clarified the situation somewhat, apparently in favor of somewhat more rigorous rulemaking and public procedures.26 Nevertheless, the question of public access to, understanding of, and ability to comment on the myriad rules governing the PRO program remains important. One approach to resolving it might be to appoint a “national advisory council” similar to the one that existed during the PSRO program.

Other or additional options also exist. These include: publishing PRO SOWs, and any changes or modifications made during the contract cycle, for an abbreviated comment period; publishing final provisions at least 30 days before their effective date; making PRO contracts, interpretive rules, statements of policy, and guidelines of general applicability available in places of easy public access; and publishing updated lists of these materials every three months. The ACUS argues for even more formal rulemaking procedures “…except when the agency has ‘good cause' to believe the process is ‘impracticable, unnecessary, or contrary to the public interest'” (Vibbert, 1989c). The national PRO trade association favors the appointment of a “National Peer Review Council” comprising representatives from Congress, HCFA, PROs, providers, Medicare beneficiaries, and the academic research community. One major assignment would be to develop performance indicators for the PRO program as a whole (Vibbert, 1989e).

Evaluation

Considerable criticism can be leveled at how the PRO program itself is evaluated, especially in terms of its impact on the quality of care. Virtually no reliable or comprehensive examination of PRO program impact has been undertaken by DHHS. The several careful external examinations by, for instance, the OIG and GAO have tended to focus on specific operational aspects, such as the usefulness of generic screens or structural aspects of PROs. The same assessment can be made of how HCFA evaluates individual PRO performance; existing tools such as PROMPTS-2, although in transition to improved efficiency, have not been especially successful at providing a coordinated approach to evaluation. The OIG in particular has been critical of HCFA's ability to assess efforts at PRO performance evaluation (OIG, 1989). When combined with the lack of public oversight and accountability, these evaluation issues appear to have high priority for attention and correction.

Issues in HMO and CMP Review

Records and Case Selection

For HMOs and CMPs, cases for inpatient review are selected on the basis of claims submitted to FIs. This approach does not work well because of insufficient reporting of HMO and CMP admissions (hospitals have no incentive to bill for such admissions); it thus produces a very inadequate “universe” of inpatient claims from which to select the relevant samples. Although efforts have been made to force hospitals to prepare and submit these bills to the FIs, HCFA still estimates that only about half are being submitted (O'Kane, 1989). HCFA has designed measures that rely on random sampling procedures to overcome this inadequate pool of cases; because HMOs and CMPs will differ in the proportion of their total hospitalizations subject to this form of random sampling, an additional source of variability has been added to review in the risk-contract segment of the Medicare program.

Obstacles to acquiring medical charts are also considerable. Obtaining hospital charts is not appreciably more difficult for the prepaid group practice sector than for the fee-for-service sector, although both systems contend that low reimbursement of copying costs ($0.049 per page) and lack of reimbursement for administrative costs have been problems.27 For outpatient records, however, the problems can be extreme, when records for one plan must be retrieved from numerous health centers. Although the problem is manageable for most group- and staff-model HMOs and even for group network models, it presents IPA-model HMOs with extraordinarily complex logistics, since large plans of this sort may have hundreds of physicians practicing in individual offices. HCFA has indicated it would support legislation to allow HMOs to be reimbursed for administrative costs of retrieving such records, which should alleviate the problem to some degree.

Limited Review

One of the more contentious issues in HMO and CMP review has been the limited success of so-called limited review (only 11 of 133 risk contractors currently participate). Several factors seem to have been at work. First, PROs were unfamiliar with the notion of reviewing an HMO's own quality assurance plan, and HMOs may have been reluctant to put themselves in the position of having their internal programs subjected to an unpredictable and uneven evaluation process. Second, PROs are expected to review all care rendered in a case that falls into the limited review sample even if, in the HMO's own program, only selected parts of that care had been subjected to review; the HMO thus became liable for a failure relating to care it had never reviewed as part of a quality assurance plan that the PRO had found acceptable. Third, the main argument for limited review was that it reduced the number of cases subject to review; in practice, however, HMOs subject to limited review can end up having as many cases reviewed as if they were on basic review. Finally, in theory the HMO under limited review can run more of a risk than the HMO under basic review of being subjected to intensified review because of the quarterly PRO review of the “validation subsample” (that is, the cases that the HMO was investigating as part of its own plan).

Ambulatory Care Review

With respect to ambulatory review, the interesting question is how physicians will respond to review of the care they provide in their own offices. Given the lack of experience and the absence of proven tools for ambulatory review, implementing fee-for-service office-based review incrementally is arguably a good strategy for the Medicare quality assurance program; the expected PRO pilot projects are a step in the right direction. This differential between the fee-for-service and the prepaid group practice sectors does, however, place the HMO community in a position that they can understandably regard as unfair. HMO physicians' resistance to being reviewed when their fee-for-service counterparts are not might add to the incentive for plans to withdraw from the risk-contract program.

Accountability

Who is responsible for quality problems is a question that arises in any health care delivery system, but it is especially salient in the complex world of prepaid group practice arrangements. For instance, for HMOs that do not own their own hospitals, that contract for certain types of care (e.g., subspecialty care), or that cover their members on a fee-for-service or contractual basis for out-of-plan care, the issue of whether they are accountable for care well beyond their ability to oversee or control becomes very complicated. When an HMO's patients are widely dispersed across hospitals and other providers, the HMO can find itself held responsible by the PRO program for quality problems without any authority or ability to monitor or control the performance of those providers. Legal precedent and rulings concerning whether entities that employ physicians and other professionals are held to different standards than those that only contract with physicians (essentially the distinction between group and staff models on the one hand and IPA-type models on the other) further complicate this picture.

Other Issues Relating to HMO and CMP Review

PROs differ dramatically in the proportions of quality problems they find in HMOs; one accounting showed a range from 1.8 percent in one state to upwards of 30 percent in three states (O'Kane, 1989). Variations of that magnitude call into question more than the true quality of the care being rendered. They raise red flags about whether HMOs operating in several states are being subjected to the “same” review (because the PRO in each state is responsible for the state-specific portion of the HMO risk-contract care)28 and about the validity of inferences drawn from comparisons of HMOs with each other and with the fee-for-service system. The question of valid comparisons is especially problematic for states with only a single risk contractor, because information about numbers of cases reviewed and numbers and percentages of quality problems cannot be protected from public disclosure. For an HMO with an “unblemished” record, this obviously poses no problems, but for an HMO with anything less, the risk to its competitive position (vis-a-vis other HMOs in the state) could be considerable.

CONCLUSIONS ABOUT THE PRO PROGRAM

The most important conclusions drawn by the committee from this description and review of the PRO program, in the context of this study's long-term goals for Medicare quality assurance, are the following. First, the program is sufficiently well-established that it should be improved and built on, not dismantled. It is costly in financial and psychological terms to dismantle an existing program and to create a new one. Moreover, the existing program has procedures and organizational relationships (some dating from the PSRO program) that should be brought to bear for any future Medicare quality assurance program. The cadre of experienced professionals in PROs across the nation is a particularly valuable asset for any future quality assurance program for Medicare.

Second, Congress has invested the PRO program with responsibility for the quality of care of an appropriate range of health services, but definitional, operational, and strategic problems remain. We noted in Chapter 1 and elsewhere the importance of defining quality of care as a means of directing the efforts of a quality assurance program, and we have further emphasized the importance of health outcomes in that definition. Neither of those concepts is yet specifically tackled as part of the present PRO program.

Third, the program should be focused much more single-mindedly on quality review and quality assurance, less on direct cost and utilization control, and even less on activities of at best peripheral utility to improving the quality of health care. Some current program activities, such as review of hospital notices of denial and the Important Message to Beneficiaries or aspects of beneficiary and community outreach, warrant explicit assessment and reconsideration in terms of their contribution to improving quality and in terms of whether they should be conducted “locally” or through a different national effort. Some activities might be conducted by other agents. For instance, the carriers might profile physician claims to detect aberrant patterns of practice, leaving “on-site” review of quality of care that requires clinical judgment to the quality assurance program. Beneficiary communications and outreach materials might be developed and disseminated at the federal level.

Fourth, a more forceful emphasis on quality is especially important because the PRO program is not now in a good position to focus on important health outcomes, especially as broadly envisioned as in this report. It is also not well positioned to focus on populations (outside the small HMO and CMP enrollee population). Transforming the Medicare quality assurance program into one as heavily oriented toward patient well-being and population outcomes as intended by this committee will require considerable resources, sophisticated planning, and concentrated effort within the PRO and professional communities. Activities that do not obviously serve this central quality assurance purpose should be eliminated, downgraded, or moved to other agencies. For instance, one might envision much greater responsibility for claims analyses for FIs and carriers, with timely and substantial sharing of information on problem practitioners and providers to the Medicare quality assurance program. That program would, in turn, have much greater responsibility for the clinical aspects of quality review and assurance methods and for longitudinal patient outcomes.

Fifth, some rethinking of the methods for doing nonhospital review is warranted. Reviewing “intervening care” in the HHA and SNF settings makes little sense either conceptually or statistically, and it has proven technically troublesome and unproductive; a considerable overhaul of post-acute review would be in order. In addition, the mismatch between the fee-for-service and the prepaid group practice sectors in ambulatory care review (in both scope and methods) seems unfair and possibly counterproductive. Limitations of and uncertainties about the implementation of the HMO-CMP ambulatory care effort suggest that ending the present approaches and conducting pilot projects in both sectors to develop appropriate methods might be desirable.

Sixth, the system of legislation, regulations, interpretive guidelines, transmittals, and so forth that together constitute the rules governing the program has less public oversight and input than is desirable. Moreover, the great complexity, confusion, and lack of uniformity in the program prompt questions as to how well agency planning, implementation, and oversight have served the congressional purposes for the program or the Medicare beneficiaries' needs. The program needs a more open or public mechanism for program planning, oversight, evaluation, and accountability.

Seventh, this committee has strongly endorsed a move toward finding ways to emphasize positive achievements, to recognize good (and “excellent”) performance, and to reward providers and practitioners when they provide good quality care and mount successful quality assurance programs. It has also emphasized that the Medicare quality assurance program should be able to identify and deal with poor performance. For the latter objective, a “quality intervention plan” with several types of interventions and sanctions has been developed. Although the new QIP procedures (especially sanctions) warrant some changes, the plan generally might be seen as a reasonable starting point for the regulatory aspects of the MPAQ.

The program has little or no experience, however, with the former goal, namely recognizing and rewarding good performance. One strategy is to reduce the level of external review for good performers (and perhaps concomitantly to increase the level of internal review). The acceptance of and results of limited review in the HMO and CMP plans to date suggest that that approach does not provide a satisfactory model. Although delegation in the old PSRO sense may not be an attractive plan and is actively opposed by some in the peer review and provider communities, some form of delegation clearly has to be contemplated. This is tantamount to saying that relying on hospitals to conduct chart review, with external oversight from PROs, deserves careful consideration and testing.

Virtually no information is available on more radical ideas, such as rewarding good performance with public acknowledgement or financial payments, certainly not in the PRO program. Thus, much attention will have to be given to achieving the related goals of meaningfully recognizing the provision of good quality care (or of maintaining a good internal quality assurance program) and reducing the level and intensity of external review. Greater public and expert inputs into and oversight of such efforts are desirable.

Eighth, it is unclear that the present approach to “peer review” provides the PROs with state-of-the-art professional knowledge or the highest levels of specialist expertise. Several factors militate against involvement of busy private practice specialists in PRO review, such as the low reimbursements for review activities (especially relative to what is paid by HCFA Regional Offices or other review entities), distaste for the “quality denial letter,” and knowledge of the frustrations inherent in certain PRO processes (such as review of high volumes of false-positive generic screen failures and the cumbersome sanctioning process). Lack of understanding of the need for specialist participation in developing and promulgating procedure-specific practice guidelines and prior-authorization criteria, and consequent lack of acceptance of the guidelines and criteria that PROs do develop, may also play a role. Finally, despite two decades of inspired leadership in the medical community in the quality assurance and peer review movement, many physicians remain suspicious of and hostile to PRO activities, continuing to perceive them as at one and the same time intrusive, arbitrary, and punitive—and fundamentally irrelevant to improving quality of care.

Ninth, PROs individually and the program more generally do a poor job of documenting their impact on quality of care. They are therefore not in a good position either to defend their own record or to judge and comment on how well other organizations are doing. These issues call for much improved evaluation criteria and procedures, so that documented improvements in quality of care become significant parts of the scope of work on which PROs would be evaluated. This in turn argues for an in-depth review of past and impending SuperPRO efforts, of the contemplated “physician consultant” program, of the role of the Regional Offices, and of similar evaluation schemes. A thorough review of the contracting mechanism itself and a reconsideration of the potential advantages of using the grant rather than the contract mechanism are also in order, especially to the extent that the quality assurance agent should be focusing on “local” in addition to “national” problems.

Tenth, two issues about PRO financing are important. The PRO program has been assigned a vast array of quality of care, utilization review, PPS implementation, and other responsibilities for the Medicare program. To meet these assignments, it has a budget that remains well under one-half of 1 percent of Medicare expenditures. This level of funding is no greater, proportionally, than it was for the PSRO program nearly a decade ago, yet the peer review program assignments have been appreciably expanded (not entirely for quality-of-care concerns, however). We view this overall investment in a program intended to monitor and improve the quality of care for the elderly as likely to be too low ever to accomplish the expected tasks adequately. We are also concerned that the mechanism of funding individual PROs through extraordinarily detailed contracts and contract modifications is too limiting; it seems to foster evaluations of contract performance rather than impact on quality of care and to constrain innovation and flexibility to meet local conditions and problems.

SUMMARY

Since nearly the beginning of the Medicare program, the federal government has tried to ensure that services reimbursed through the program are medically necessary, appropriate, and of a quality that meets professionally established standards. The two main efforts in this arena were the PSRO and then the PRO program.

These programs share some characteristics: They have adopted purposes and methods reflecting the expected incentives of the prevailing financing mechanisms of Medicare. They have been oriented more toward controlling utilization and costs than toward improving quality of care. Both attempted to preserve “local” and “peer” review. They have concentrated on inpatient care. Both programs have been funded at a rather anemic level (more or less one-half of 1 percent of Medicare expenditures). Both programs have produced a good deal of variability in review criteria, findings, and statistics, despite considerable detailed prescription (especially for the PROs) of their operations. Finally, neither program has been able satisfactorily to demonstrate an impact on quality of care for the elderly, in large part because their emphasis has been on cost and utilization control.

The programs also differ in important ways: the PRO program has been fully operational far longer than the PSRO program was, and it has responsibilities related to implementation of Medicare's PPS that the PSRO program could not have had. The PSRO program probably had more public oversight of its activities than the PRO program has had. The PSRO program was more local than the PRO program (e.g., 195 PSRO areas versus 54 PRO contracts today). PSROs were funded through grants, whereas PROs are awarded competitively bid contracts. Partly as a consequence of these two factors, the PSRO program was probably more flexible and responsive to local utilization and quality problems than the PRO program can be. The PSRO program attempted to implement a form of “delegated review” so as to reduce intrusive external efforts; the PRO program has been precluded from any form of delegated review, partly because of the perceived weakness of delegation as it was managed during the PSRO days. Finally, the PSRO program was subjected to more rigorous evaluation than the PRO program has been to date.

Apart from these comparisons, the PRO program has some strengths that should be acknowledged. Perhaps most important, PROs have a committed and experienced group of physicians, nurse reviewers, and administrators with considerable expertise in the tasks that any present or future Medicare quality assurance program must accomplish. Much of this cadre of quality assurance experts came initially out of the PSRO and earlier peer review efforts. In addition, PROs can operate on the basis of better Medicare data sets than were available during the PSRO days, and they have a considerable advantage in computer technology compared to the earlier program.

The PRO program also has some limitations that would constrain its ability to fulfill the goals and objectives of a Medicare quality assurance program as envisioned by this committee and detailed in Chapter 12. These include its presently inadequate ability to address or to affect health outcomes for the elderly, the relative paucity of public oversight or accountability, and the enormous burden of conducting activities that are not demonstrably related to improving the quality of care or that involve tasks (such as public outreach) for which PROs do not have a comparative advantage. Other problems include the fuzzy legal status of sanctioning authority (for both PROs and the OIG), regulations that forbid or constrain innovation (such as alternative approaches to in-hospital chart review), and continuing difficulties with data sharing and data release. In designing a strategy for quality review and assurance for Medicare that will put in place a program to assure quality of care as it was defined in Chapter 1, this committee will thus attempt to build on the known capabilities of PROs and offset the perceived weaknesses. That program and the strategy for implementing it are discussed in Chapter 12.

REFERENCES

  • AAPSRO (American Association of Professional Standards Review Organizations Task Force). PSRO Impact on Medical Care Services: 1980. Vols. I and II. Report of the 1980 Ad Hoc Task Force on Impact. Potomac, Md.: The Association, 1981.

  • AMRRC (American Medical Review Research Center). SMAA PRO Pilots are Progressing Well: An Update. AMRRC Update, pp. 1–2, March-July 1989.

  • Blum, J.D., Gertman, P.M., and Rabinow, J. PSROs and the Law. Germantown, Md.: Aspen Systems Corporation, 1977.

  • Blumstein, J.F. Inflation and Quality: The Case of PSROs. Pp. 245–295 in Health—The Victim or Cause of Inflation? Zubkoff, M., ed. New York: Prodist (for the Milbank Memorial Fund), 1976.

  • CBO (Congressional Budget Office). The Effect of PSROs on Health Care Costs: Current Findings and Future Evaluations. Washington, D.C.: Congress of the United States, Congressional Budget Office, June 1979.

  • CBO. The Impact of PSROs on Health-Care Costs: Update of CBO's 1979 Evaluation. Washington, D.C.: Congress of the United States, Congressional Budget Office, January 1981.

  • Chassin, M.R. and McCue, S.M. A Randomized Trial of Medical Quality Assurance: Improving Physicians' Use of Pelvimetry. Journal of the American Medical Association 256:1012–1016, 1986. [PubMed: 3735627]

  • Cislowski, J.A. The Peer Review Organization Program. Washington, D.C.: Congressional Research Service, U.S. Library of Congress, October 23, 1987.

  • Committee on Ways and Means. Background Material and Data on Programs Within the Jurisdiction of the Committee on Ways and Means. 1989 edition. Committee Print WMCP 101–4, March 15, 1989. Washington, D.C.: Government Printing Office, 1989.

  • Egdahl, R.H. Foundations for Medical Care. New England Journal of Medicine 288:491–498, 1973. [PubMed: 4567491]

  • Federal Register, Vol. 50, pp. 15364–15389, April 17, 1985. [PubMed: 10299992]

  • Federal Register, Vol. 54, pp. 1956–1967, January 18, 1989a.

  • Federal Register, Vol. 54, pp. 42722–42734, October 17, 1989b.

  • GAO (General Accounting Office). Problems with Evaluating the Cost Effectiveness of Professional Standards Review Organizations. HRD-79–52. Washington, D.C.: General Accounting Office, July 1979.

  • GAO. Medicare: Improving Quality of Care Assessment and Assurance. PEMD-88-10. Washington, D.C.: General Accounting Office, May 1988a.

  • GAO. Medicare PROs: Extreme Variation in Organizational Structure and Activities. PEMD-89-7FS. Washington, D.C.: General Accounting Office, November 1988b.

  • Goran, M.J., Roberts, J.S., Kellogg, M.A., et al. The PSRO Hospital Review System. Medical Care 13:1–33 (Supplement), 1975. [PubMed: 1168294]

  • Harrington, D.C. Ambulatory Medical Care Data: 20 The San Joaquin Foundation Peer Review System. Medical Care 11:185–189 (Supplement), 1973. [PubMed: 4571099]

  • HCFA (Health Care Financing Administration). Professional Standards Review Organization 1979 Program Evaluation. Health Care Financing Research Report. HCFA Pub. No. 03041. Baltimore, Md.: Department of Health and Human Services, May, 1980.

  • HCFA. 1988–1990 PRO Scope of Work. Baltimore, Md.: Health Care Financing Administration, Department of Health and Human Services, 1988.

  • HCFA. Request for proposal for the SuperPRO contract, Attachment II. Baltimore, Md.: Health Care Financing Administration, Department of Health and Human Services, 1989a.

  • HCFA. Technical Notes. Peer Review Organization Data Summary dated May 1989. Baltimore, Md.: Office of Peer Review, Health Standards and Quality Bureau, Health Care Financing Administration, 1989b.

  • HCFA. Utilization and Quality Control Peer Review Organizations Second Scope of Work. Executive Data Summary. Report through March 1989. Report dated 7/5/89. Baltimore, Md.: Health Care Financing Administration, 1989c.

  • HCFA. Peer Review Organization Data Summary. May 1989 (includes June 1989 data). Baltimore, Md.: Office of Peer Review, Health Standards and Quality Bureau, Health Care Financing Administration, 1989d.

  • Jost, T. Administrative Law Issues Involving the Medicare Utilization and Quality Control Peer Review Organization (PRO) Programs: Analysis and Recommendations. Report to the Administrative Conference of the United States. Washington, D.C.: Administrative Conference, November 8, 1988 (reprinted in Ohio State Law Journal 50(1), 1989).

  • Kane, R.A., Kane, R.L., Kleffel, D., et al. The PSRO and the Nursing Home: Vol. I, An Assessment of PSRO Long-term Care Review. R-2459/1-HCFA. Santa Monica, Calif.: The Rand Corporation, August 1979.

  • Lohr, K.N. Peer Review Organizations: Quality Assurance in Medicare. P-7125. Santa Monica, Calif.: The Rand Corporation, July 1985.

  • Lohr, K.N., Brook, R.H., and Kaufman, M.A. Quality of Care in the New Mexico Medicaid Program (1971–1975): The Effect of the New Mexico Experimental Medical Care Review Organization on the Use of Antibiotics for Common Infectious Diseases. Medical Care 18:1–128 (January Supplement), 1980. [PubMed: 6986518]

  • Lohr, K.N., Winkler, J.D. and Brook, R.H. Peer Review and Technology Assessment in Medicine. R-2820-OTA. Santa Monica, Calif.: The Rand Corporation, August 1981.

  • McIlrath, S. Receding Tide of Physician Sanctions by Medicare PROs Triggers Debate. AMA News 3:57, April 21, 1989.

  • Morford, T.G., Director, Health Standards and Quality Bureau, HCFA. Testimony before the U.S. House of Representatives' Committee on Government Operations, Subcommittee on Human Resources and Intergovernmental Relations, April 4, 1989a.

  • Morford, T.G. UpDate. Federal Efforts to Improve Peer Review Organizations. Health Affairs 8(2):175–178, Summer 1989b. [PubMed: 2744693]

  • OIG (Office of Inspector General, Department of Health and Human Services). The Utilization and Quality Control Peer Review Organization (PRO) Program Quality Review Activities. Washington, D.C.: Office of Inspector General, Office of Analysis and Inspections, August 1988a.

  • OIG. The Utilization and Quality Control Peer Review Organization (PRO) Program Sanction Activities. Washington, D.C.: Office of Inspector General, Office of Analysis and Inspections, October 1988b.

  • OIG. The Utilization and Quality Control Peer Review Organization (PRO) Program: An Exploration of Program Effectiveness. Washington, D.C.: Office of Inspector General, Office of Analysis and Inspections, January 1989.

  • O'Kane, M.E. PRO Review of Medicare Health Maintenance Organizations and Competitive Medical Plans. Paper prepared for the Institute of Medicine Study to Design a Strategy for Quality Review and Assurance in Medicare, May 1989.

  • OTA (Office of Technology Assessment). The Quality of Medical Care: Information for Consumers. OTA-H-386. Washington, D.C.: U.S. Government Printing Office, 1988.

  • PPRC (Physician Payment Review Commission). Annual Report to Congress, 1989. Washington, D.C.: Physician Payment Review Commission, April 1989.

  • ProPAC (Prospective Payment Assessment Commission). Medicare Prospective Payment and the American Health Care System. Report to the Congress. June 1989. Washington, D.C.: Prospective Payment Assessment Commission, 1989.

  • Vibbert, S., ed. Watchdog Criticizes HHS IG Over PRO Monetary Sanctions. Medical Utilization Review 17(7):1, April 4, 1989a.

  • Vibbert, S., ed. PROs Denied 2 Percent of 1986–1988 Medicare Cases. Medical Utilization Review 17(9):1, May 2, 1989b.

  • Vibbert, S., ed. Regulatory Activity. Legal Panel Backs PRO Reform Package. Medical Utilization Review 17(13):5–6, June 27, 1989c.

  • Vibbert, S., ed. PRO Sanctions Fight Moves to Senate. Medical Utilization Review 17(15):1–2, August 8, 1989d.

  • Vibbert, S., ed. HSQB Softpedals Controversial Pilot. Medical Utilization Review 17(18):3–4, September 21, 1989e.

  • Vibbert, S., ed. Hospitals Gain Class Action Status in Federal Suit over PRO Photocopying. Medical Utilization Review 17(21):1, November 2, 1989f.

1. For more complete discussions of approaches to quality measurement and assurance other than those mounted by the Medicare program, see Chapter 9 and Volume II, Chapter 6. Chapter 5 and Volume II, Chapter 7, discuss Medicare Conditions of Participation more fully. Volume II, Chapter 8 provides a more complete description of the PRO program; later sections of this chapter rely heavily on lengthy excerpts and tables from that volume.

2. Profiling can also be used to identify patterns of problems with the quality of care other than those related specifically to use of services, such as failures on generic screens or unexpected patient deaths. This is one application of profiling found in the present PRO program.

3. The HCFA evaluation cautioned strenuously against drawing inferences from these data, which are per discharge, about costs per review, because the PSRO program compiled no comprehensive information on the number of reviews that were conducted by hospitals or by PSROs.

4. Waiver of liability meant that unless a hospital “knew or could reasonably have been expected to know” that the care it was providing was unnecessary, the costs of that care would still be reimbursed and the hospital was not financially liable. Only if the hospital's waiver was revoked would it become financially at risk for days of care or services provided to a beneficiary, but revocation was rarely, if ever, accomplished because the necessary regulations were not promulgated.

5. The PRO for Washington State also reviews Alaska and Idaho. The following PROs review in two states: West Virginia for Delaware; Maryland for the District of Columbia; Hawaii for Guam/American Samoa; Indiana for Kentucky; Rhode Island for Maine; Iowa for Nebraska; New Hampshire for Vermont; and Montana for Wyoming.

6. HCFA put PROs into one of four categories. Two of those categories (28 states in all) were to be awarded full three-year contracts; the remaining categories (26 states and territories) were to be awarded either six- or twelve-month extensions of their existing two-year contracts. As of summer 1989, four PRO contracts had not been awarded.

7. The 11 conditions are cholecystectomy, major joint replacement, coronary artery bypass graft, percutaneous transluminal coronary angioplasty, laminectomy, complex peripheral revascularization, hysterectomy, bunionectomy, inguinal hernia repair, prostatectomy, and pacemaker insertion. A PRO can also select a procedure not on this list if it can document why it should be subjected to 100 percent pre-admission review in the state.

8. Comments to study site visitors from PRO officials indicated that many Severity Level I cases ultimately turn out not to be quality problems as defined, because they are related to poor documentation. The “quality problem” is not confirmed when the target physician or provider provides sufficient additional information.

9. HCFA prescribes the format and wording of these letters (apart from the specifics of the case at hand). Presumably for legal reasons, they are very formal in tone and must contain the following information: (1) the Medicare obligations involved; (2) description of the activity resulting in the violation; (3) the authority and responsibility of the PRO to report violations of obligations; (4) a suggested method and time period for correcting the problem (at the discretion of the PRO); (5) an invitation to submit additional information or discuss the problem with the PRO within 20 days of the notice; and (6) a summary of the information used by the PRO in reaching a determination.

10. As part of a case involving the American Medical Association (AMA) (A.M.A. v. Bowen) settled three years ago, the OIG committed itself to seek a regulatory alternative to the practice of newspaper notices, one that would permit sanctioned physicians to inform their Medicare patients personally that Medicare would no longer pay for the physicians' services. The OIG drafted a proposed regulation that would require a physician to notify all his or her patients, not just those covered by Medicare. No regulations had been issued, however, as of mid-1989.

11. The preamble to certain PRO regulations notes that quality problems that affect Medicare patients usually affect all patients, particularly in the context of acute care. Consistently throughout this study, respondents at site visits confirmed this observation. Especially in hospitals and prepaid systems, our respondents noted that most quality problems tended to be with “systems” that cut across units and patient age groups. Moreover, facilities and groups with well-established internal quality assurance systems deliberately did not single out “the elderly” or “Medicare patients” for specific quality assurance attention (except insofar as they needed to meet PRO demands for records and similar requirements), believing that a more efficient and ultimately more successful approach to quality improvement would involve the entire institution, its entire staff, and its entire patient census.

12. Regulations classify “confidential information” as follows: (a) information that explicitly or implicitly identifies an individual patient, practitioner, or reviewer; (b) sanction reports and recommendations; (c) quality review studies that identify patients, practitioners, or institutions; and/or (d) PRO deliberations. “Implicitly identifies” means that the data are sufficiently unique or the numbers so small that identification of an individual patient, practitioner, or reviewer would be easy.

13. Because PRO budgets are tightly tied to the number of expected reviews, their individual budgets vary widely. For instance, of the PROs awarded full three-year contracts for the third SOW, the California PRO was awarded nearly $82,838,000—the largest in the country and a record for the PRO program (Vibbert, 1989a)—and the PRO for Wyoming was awarded $1,210,000. Other PROs were awarded extensions of existing contracts for six or 12 months. Of these, the largest award, for 12 months, was to Texas (just over $16.1 million) and the smallest award, also for 12 months, was to American Samoa and Guam ($24,120) (HCFA, unpublished data made available to the study, April 1989).

14. Material for this section is based in part on a paper prepared for this study, “PRO Review of Medicare Health Maintenance Organizations and Competitive Medical Plans,” by Margaret E.O'Kane, Director of Quality Assurance, Group Health Association, Washington, D.C., May 1989 (O'Kane, 1989). The history of PRO review efforts for HMOs and CMPs is recounted in more detail in Volume II, Chapter 8.

15. As of December 1989, Quality Quest is the QRO only for Missouri. The Illinois PRO (Crescent Counties Medical Foundation) is the QRO for Illinois. The QRO contract for Kansas had not yet been awarded.

16. HCFA defined de novo a set of “areas of focus” by which HMO/CMP internal programs would be evaluated, rather than using existing models developed by the National Committee on Quality Assurance (an HMO industry group), the Joint Commission, and similar groups. The HCFA areas were: whether the plan reviews individual cases of patient care; whether it includes physician review of medical records; whether physicians make final decisions on quality issues; whether review includes all settings; whether the plan uses reasonable sampling methods to select cases for review; and whether the plan has been operating long enough for it to be able to demonstrate actual performance. The industry widely regarded these as rather old-fashioned and lacking in an understanding of what HMO and CMP QA plans actually do.

17. For plans on limited review, the total number of cases selected for the random validation subsample plus the total number selected for the remaining reasons for review cannot exceed the number that would have been reviewed under the basic plan.

18. By contrast, a “simple” case is one in which services being reviewed were provided in only one setting and during only one admission (if inpatient)—for instance, those in the 3-percent inpatient sample.

19. Volume II, Chapter 6 discusses several PROs' remedial education efforts.

20. The 12 PROs involved in the educational component of the project, known as Medical Assessment Program (MAP) pilots, are located in the following states: Arizona, Arkansas, Colorado, Connecticut, Illinois, North Carolina, Ohio, Pennsylvania, Texas, Utah, Virginia, and Washington. It is intended that all PROs receive data, technical training in small area analysis methods, and necessary computer software.

21. Data provided by HCFA reviewer of draft report. Also see GAO (1988a).

22. Recall that the screens are (1) adequacy of discharge planning; (2) medical stability of patient at discharge; (3) deaths; (4) nosocomial infections; (5) unscheduled return to surgery; and (6) trauma suffered in the hospital.

23. Take, for instance, the OIG calculations on data from 12 PROs for Screen 1 on adequacy of discharge planning. According to their data, PRO D failed 0.2 percent of 24,382 cases screened and confirmed problems in 100 percent of failures, whereas PRO F failed 0.6 percent of 80,624 cases and confirmed problems in only 40.5 percent. Both, however, detected confirmed problems in 0.2 percent of cases reviewed (see OIG, 1988b).
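The arithmetic behind this comparison can be checked directly: the confirmed-problem rate per case screened is the screen failure rate multiplied by the confirmation rate. The sketch below is an illustrative calculation using the footnote's figures; the function name is ours, not the OIG's.

```python
# Illustrative check of the OIG screen statistics cited in footnote 23.
# Confirmed-problem rate = (failures / cases screened) x (confirmed / failures).
def confirmed_problem_rate(cases_screened, failure_rate, confirmation_rate):
    """Confirmed problems as a fraction of all cases screened."""
    failures = cases_screened * failure_rate
    confirmed = failures * confirmation_rate
    return confirmed / cases_screened

# PRO D: 0.2 percent of 24,382 cases failed; 100 percent of failures confirmed.
pro_d = confirmed_problem_rate(24_382, 0.002, 1.000)
# PRO F: 0.6 percent of 80,624 cases failed; 40.5 percent of failures confirmed.
pro_f = confirmed_problem_rate(80_624, 0.006, 0.405)

print(f"PRO D confirmed-problem rate: {pro_d:.4%}")  # prints 0.2000%
print(f"PRO F confirmed-problem rate: {pro_f:.4%}")  # prints 0.2430%
```

Both rates round to the 0.2 percent figure the OIG reported, even though the two PROs reached it by very different combinations of failure and confirmation rates.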

24. For instance, the Peer Review Organization of Washington reported during a site visit that some of its specially developed screens identify more failures and/or confirmed problems than do the HCFA screens.

25. Countering these moves are recommendations from the Commerce Committee of the U.S. House of Representatives to expand the time during which a physician can claim to be willing and able to change poor practice patterns, extend special appeal rights to most physicians (instead of just those in rural areas), and cap the monetary fines at just $2,500 (Vibbert, 1989d). As of mid-1989, the question of whether the PRO sanctioning capability would be strengthened or weakened was still open.

26. However, the course of one suit (Amer. Hosp. Assn. v. Bowen) seems to have left HCFA with considerable latitude, because the final court of appeals ruling held that the contracting process, issuance of the SOWs, and Manual transmittals were all covered by exceptions to the APA.

27. The issue of reimbursing hospitals for costs incurred in photocopying medical records has been especially contentious since 1985. A recent court order requires that HCFA reimburse hospitals retroactively for costs incurred in photocopying; the American Hospital Association argues in favor of a reimbursement at a level of $0.12-per-page (Vibbert, 1989f).

28. Multi-state HMOs visited during this study differed in their views on this issue. At least one felt very strongly that it wished to deal with only a single PRO because it was experiencing considerable, unexplainable variation in review from the different PROs in the different states in which it operated. By contrast, at least one other plan found PRO review sufficiently benign that differences across PROs were either not noticeable or not a problem.
