Introduction

Patient desire for information about their physicians has helped fuel the increasing availability of online ratings of individual physicians. Physician rating websites fall into two types that differ in how their ratings are sourced. One type includes websites run by private companies, such as Healthgrades.com, that report crowd-sourced data collected from online users (“independent websites”).1,2 The other type includes websites run by health systems that report data collected from patients with recent office visits or hospitalizations as part of internal patient experience surveys (“health system websites”).3

Independent websites traditionally include both numerical ratings and free-text narrative comments. These websites are attractive because they gather and present real-time data in a format that many Internet users have come to expect. However, they are currently limited: few physicians are reviewed, and the number of ratings per physician is small.4 In addition, the content of the ratings and comments may be flawed, in that the patients who choose to post reviews may not be representative of a given physician’s patient population.

In response to the limitations of independent websites, a few health systems have begun to post their internal patient experience survey results at the physician level to provide more information to patients. Numerical ratings and free-text narrative comments have been collected from standardized health system patient experience surveys for years as part of internal quality improvement programs.5,6 Numerical ratings from these surveys have long been available online, but only at the hospital or physician group practice level. Health systems are just beginning to post these numerical ratings at the individual physician level, and some are also posting narrative comments about individual physicians obtained from these surveys.7 These data have the advantage of a standardized survey [the Consumer Assessment of Healthcare Providers and Systems (CAHPS®)] and systematic sampling, by phone, mail, and email, of patients who interacted with the physician, allowing them to better represent the patient population.8 However, the source of these data may be less clear to patients searching for physician information online, and, as with independent websites, the sample sizes may be insufficient for reliable reporting at the individual physician level.

The trend toward implementing physician rating websites raises important questions about how they are used, how numerical ratings and narrative comments are perceived, and the potential consequences of such websites from the perspectives of both patients and physicians. Previous research has focused only on independent websites: approximately one-fourth of patients reported using them,1 and physicians reported concerns about the representativeness and impact of the data on these websites.9

We surveyed patients and physicians within a large accountable care organization to examine their perceptions of both independent and health system websites, including the reported use of, trust in, and potential consequences of both numerical ratings and narrative comments.

Methods

Study Design

We conducted two cross-sectional surveys in August 2015: a web-based survey for physicians and a mailed survey for patients. Our surveys assessed 1) the use of physician rating websites, 2) the perceived accuracy of (physicians) or trust in (patients) the numerical rating scores and narrative comments, 3) support for making the numerical rating scores and narrative comments obtained from health system patient experience surveys publicly available, and 4) the perceived impact of publishing numerical rating scores and narrative comments from health system patient experience surveys. This study was deemed exempt from review by the Partners Human Research Committee.

Settings and Participants

Surveyed physicians worked within a large healthcare organization that included two academic hospitals, three community hospitals, and affiliated ambulatory clinics. We emailed surveys to all 1936 physicians identified using health system administrative databases as practicing at four of the five hospitals or affiliated ambulatory clinics (one academic hospital and its affiliated ambulatory clinics opted out of the physician survey due to competing demands). We excluded residents, fellows, and physicians who had not provided patient care in the previous 6 months. We mailed surveys to a random sample of 1500 patients who were over 30 years old (300 per hospital and affiliated ambulatory clinics) and who had at least one hospitalization or ambulatory clinic visit during May 2015. We excluded adults under 30 years of age, as these younger, healthier patients have fewer health care visits and may be less likely to visit physician rating websites. Neither physicians nor patients received an incentive for survey completion.

Survey Development and Implementation

We developed and pilot tested the instruments via one-on-one cognitive interviewing10 with 10 patients and 10 physicians. We administered the physician survey using the web-based REDCap data management and survey tool,11 delivering three email reminders to non-responders at weekly intervals and achieving a 43% response rate (828/1936). Physician response rates did not differ significantly across hospitals (p > 0.05). Our final sample size was 808 after excluding 20 physicians who reported not having provided any patient care in the past 6 months. We administered the patient survey by paper mailing, with one follow-up mailing to non-responders, achieving a 34% response rate (494/1461, after excluding 39 patients whose mailing addresses were no longer valid or who were deceased). We were unable to link patients to hospitals to determine patient response by institution.
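As an illustration of the response-rate comparison described above, the sketch below runs a chi-square test of independence across hospitals. It is not the authors’ analysis code (the study analyses were conducted in SAS Enterprise Guide), and the respondent counts are hypothetical placeholders rather than study data.

```python
# Illustrative sketch only: chi-square test of whether physician response
# rates differed across the four participating hospitals.
# The counts below are hypothetical placeholders, not study data.
from scipy.stats import chi2_contingency

# Rows = hospitals; columns = [responded, did not respond] (hypothetical counts)
counts = [
    [210, 280],
    [205, 270],
    [220, 290],
    [193, 268],
]

chi2, p_value, dof, _expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```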

Outcomes and Measurements

Use of Physician Rating Websites

Physicians reported (yes or no) whether they had ever visited a physician rating website to find reviews about themselves. Patients reported (yes or no) whether they had searched for reviews about physicians online.

Accuracy of or Trust in the Numerical Rating Scores and Narrative Comments

Using a five-point scale (“strongly disagree”, “somewhat disagree”, “neutral”, “somewhat agree”, “strongly agree”), physicians who had seen numerical ratings and/or open-ended comments about themselves on independent websites or from health system surveys reported their level of agreement with the statements “The numerical ratings [or open-ended comments] about me on public websites accurately reflect the quality of care that I provide” (regarding independent websites such as Healthgrades.com) and “The numerical ratings [or open-ended comments] from patient experience surveys accurately reflect the quality of care that I provide” (regarding health system patient experience data).

Using the same five-point scale, patients reported their level of agreement with the statements “I trust the reviews that I read online about doctors” (regarding independent websites) and “I would trust the reviews about doctors from standardized patient surveys if they were posted on the hospital website where the doctor works” (regarding health system patient experience data).

Making Physician Ratings Available to the Public

Using the same five-point scale from “strongly disagree” to “strongly agree”, physicians reported their level of support for making health system survey data available at three levels: among staff in their own clinical practice, among staff across their entire health care organization, and open to the public on their health system’s website. Patients reported their level of support for making health system survey data available online using the same five-point scale.

Perceived Impact of Publishing Numerical Ratings and Narrative Comments About Physicians

Using a five-point scale (“very negative effect”, “somewhat negative effect”, “neutral”, “somewhat positive effect”, “very positive effect”), physicians reported the perceived effect of their hospital publishing numerical ratings and narrative comments from patient experience surveys online with regard to 1) the physician–patient relationship, 2) patient-reported experiences of care, 3) over-utilization of health care, and 4) physician job stress. Patients used a five-point scale (“strongly disagree”, “somewhat disagree”, “neutral”, “somewhat agree”, “strongly agree”) to respond to the statements “I would be less open with my anonymous written comments [numerical ratings] about the care I received from my doctor if my reviews were going to be publicly available on the Internet (even though my name would not be listed).”

Statistical Analysis

We fit two separate multivariable logistic regression models with the dependent variable defined as ever visiting a physician rating website and the independent variables comprising either physician or patient characteristics collected via the surveys. The physician model included age as a continuous variable and, as categorical variables, physician sex, presence of ambulatory clinic time, and specialty (“primary care”, “medical specialty”, “surgery”, and “obstetrics/gynecology” relative to “other”). The patient model included age as a continuous variable and, as categorical variables, patient sex, race (black, Asian, Hispanic, and other, relative to white), presence of college-level education, health status (“very good”, “good”, “fair”, and “poor” relative to “excellent”), and access to the Internet on most days of the week. We used generalized estimating equations to adjust standard errors for clustering of physicians by hospital. We calculated 95% confidence intervals for the descriptive survey data. All analyses were conducted using SAS Enterprise Guide version 4.3 (SAS Institute Inc., Cary, NC). Unanswered questions were excluded from the analyses.
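For readers unfamiliar with this approach, the sketch below shows one way the physician model could be specified: a logistic regression of ever visiting a rating website on physician characteristics, with generalized estimating equations accounting for clustering by hospital. The study analyses were run in SAS Enterprise Guide; this Python/statsmodels version is only an illustrative equivalent, and the variable names (visited_site, age, sex, ambulatory_time, specialty, hospital) are hypothetical placeholders for the survey items described above.

```python
# Illustrative sketch (not the authors' SAS code): GEE logistic regression of
# ever visiting a physician rating website on physician characteristics,
# with standard errors adjusted for clustering of physicians within hospitals.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analysis file with one row per physician respondent
df = pd.read_csv("physician_survey.csv")

model = smf.gee(
    "visited_site ~ age + C(sex) + C(ambulatory_time)"
    " + C(specialty, Treatment(reference='other'))",
    groups="hospital",                        # cluster physicians by hospital
    data=df,
    family=sm.families.Binomial(),            # logistic model for a yes/no outcome
    cov_struct=sm.cov_struct.Exchangeable(),  # working within-hospital correlation
)
result = model.fit()

print(result.summary())
print(np.exp(result.params))  # coefficients exponentiated to odds ratios
```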

Results

Respondent Characteristics

Among physician respondents, the average age was 49 years, 28% reported having only inpatient clinical time, 41% reported having only ambulatory clinic time, and the mean number of clinic sessions per week was four (Table 1). The most common specialties were primary care (27%) and medical specialties (26%).

Table 1 Physician Respondent Characteristics

Among patient respondents, the average age was 66 years, and the majority were female (63%) and white (87%), had attended college (79%), and reported access to the Internet on most days of the week (85%, Table 2). Less than one-third (28%) of patients reported having previously completed a hospital patient experience survey.

Table 2 Patient Respondent Characteristics

Use of Physician Rating Websites

Both physicians (53%) and patients (39%) reported having visited a physician rating website at least once. In the physician logistic regression model, characteristics associated with greater odds of visiting a website included younger age (OR 1.03 per decreasing year, 95% CI 1.02–1.03), having ambulatory clinical time (OR 2.1, 95% CI 1.4–3.2), and practicing in a surgical specialty (OR 1.4, 95% CI 1.2–1.7) or obstetrics/gynecology (OR 1.1, 95% CI 1.0–1.3). Physician sex was not associated with visiting a website.

Patient characteristics associated with greater odds of visiting a website included younger age (OR 1.04 per decreasing year, 95% CI 1.02–1.06), female sex (OR 1.9, 95% CI 1.2–3.0), college education or above (OR 2.6, 95% CI 1.2–5.3), and regular Internet access (OR 7.0, 95% CI 2.0–24.1). Patient race and health status were not associated with visiting a website.

Accuracy of or Trust in Physician Rating Websites

Physicians more frequently reported that they “somewhat” or “strongly” agreed with the accuracy of numerical data (53%, 95% CI 46–60%) and narrative comments (62%, 95% CI 55–68%) obtained from health system patient experience surveys compared to numerical data (36%, 95% CI 31–41%) and narrative comments (36%, 95% CI 30–42%) on independent websites (Table 3). Patients more frequently reported that they “somewhat” or “strongly” agreed with trusting the accuracy of data obtained from independent websites (57%, 95% CI 49–64%) compared to data from health system patient experience surveys (45%, 95% CI 41–50%, Table 3).

Table 3 Physician and Patient Views on Accuracy of or Trust in Physician Rating Website Content

Making Physician Ratings Available to the Public

Physician support for sharing both numerical ratings and narrative comments from health system patient experience surveys declined as the venue became more public, with only 21% “strongly” or “somewhat” supporting public sharing of narrative comments (Fig. 1). In contrast, one-half of patients “strongly” or “somewhat” supported making numerical ratings and narrative comments from health system patient experience surveys available to the public.

Figure 1 Physician and patient support for sharing physician rating data. Proportions of physicians and patients indicating “strong” or “somewhat strong” support for sharing data collected via health system patient experience surveys.

Perceived Impact of Publishing Numerical Rating Scores and Narrative Comments

The majority of physicians (78%) reported that making numerical ratings and narrative comments from health system patient experience surveys publicly available would have a “somewhat” or “very” negative effect on physician job stress. Smaller proportions of physicians perceived a similarly negative effect on the physician–patient relationship (46%), health care overuse (34%), and patient-reported experience of care (33%, Table 4).

Table 4 Physician Perceived Impact of Health System Physician Rating Websites

Slightly more than one-fourth of patients reported that the publication of their narrative comments (29%) or numerical ratings (27%) from health system patient experience surveys would cause them to be less open about their feedback.

Discussion

In a survey of physician and patient views on physician rating websites, we found that physicians and patients differed both in their support for making ratings of physicians publicly available and in their perceptions of the accuracy of the information depending on its source. Physicians were less supportive than patients of sharing data publicly, and more commonly endorsed the accuracy of data from health system patient experience surveys than of data from independent websites. Neither physicians nor patients expressed a difference in their support for public sharing based on whether the data included numerical ratings or narrative comments. This latter finding is particularly important as health system leaders actively debate whether to publish narrative comments online.

Patient use of physician rating websites appears to have grown from one-fourth in 2012 to more than one-third of patients in our survey.1 We found that younger age, female sex, having a college education, and reporting regular Internet access were associated with higher odds of visiting physician rating websites, consistent with previous research.12 This suggests that as health system leaders and independent websites make ratings of physicians publicly available, they should consider strategies to make these websites more accessible to older, poorer, and less educated populations.

We found that over one-half of physicians reported having visited a website to look up reviews about themselves. Physicians practicing in the ambulatory setting, as opposed to inpatient care, had higher odds of visiting a rating website. This likely reflects the fact that patients have more choice in selecting ambulatory physicians than inpatient physicians; hence, ambulatory physicians may be more interested in how they are portrayed online.

Our survey further expands the literature by assessing physicians’ and patients’ opinions of physician rating websites according to the source of the information. Physicians largely endorsed the accuracy of data obtained from health system patient experience surveys over independent websites, likely owing to a longer historical experience with such data, as well as the methodological advantages of standardized sampling frames, larger sample sizes, and standardized instruments. Patients, however, more commonly reported trusting the data on independent websites than data from health system patient experience surveys. It is possible that patient views reflect greater familiarity with other crowd-sourced review platforms such as Yelp.com and Amazon.com. In addition, patients may lack trust in health system websites because of concerns about bias, as health systems are publishing reviews of their own physicians. Health systems seeking to publish patient experience survey data will need to work to earn their patients’ trust in what is likely a new and complicated data source for them.

Interestingly, we did not identify notable differences in either physician or patient support for numerical ratings compared with narrative comments. Health system leaders should consider including both types of reviews on websites in the future, since both are equally supported and the narrative comments are likely to provide additional context for patients.

Publishing health system patient experience data publicly is not without significant challenges. Physicians were less supportive than patients of sharing data publicly, perhaps because over three-fourths of physicians felt that posting these data would increase job stress. That stress may be compounded by the fact that individual physicians may receive only a small number of reviews and may therefore worry that a non-representative sample of reviews will be published. Physician burnout is a real problem that threatens patient safety, contributes to physician turnover, and poses other challenges to the delivery of high-quality care.13 As health systems seek to make patient experience data publicly available, careful attention must be paid to establishing resources to support physicians through change management. This should include allowing physicians to become comfortable with the likely positive nature of these data before they are published,14 as well as giving physicians an ongoing role in how the data are released and displayed: responding to negative comments, advocating for removal of inappropriate comments, and agreeing on a minimum number of reviews per physician required before reviews are published online. Over one-third of physicians also expressed concern about over-utilization of health care, likely worrying that physicians will accede to patient requests for unnecessary or marginally necessary tests to avoid poor ratings. Health system leaders should consider monitoring the impact of public ratings of physicians on over-utilization of care.

Patients also expressed important concerns related to the public availability of physician ratings from health system patient experience surveys. Over one-fourth of patients reported that publishing patient experience data publicly would affect their ability to give open feedback. Survey developers and policymakers need to consider how this might affect the results of pay-for-performance programs and other initiatives that rely on historical trends and relative comparisons between health systems that may or may not publicly share ratings of physicians.

Despite the challenges highlighted by our survey, making patient experience data publicly available has the potential to improve quality and engage patients as better-informed consumers. Patients clearly desire this information, with over one-half of patients in our survey supporting the online availability of health system patient experience data. One-fourth of physicians in our study felt that sharing ratings of physicians online would improve the physician–patient relationship, and one-third anticipated improvements in patient experiences of care measures. Such improvement has been the anecdotal experience of other health systems that have published physician-level numerical ratings and narrative comments online.3

Our findings have limitations. Our survey was conducted within a single accountable care organization in Massachusetts, and the findings may therefore not be generalizable to other systems. In addition, our survey response rates were less than 50% for both physicians and patients. However, we did capture the viewpoints of a large number of both physicians and patients. While we conducted extensive pre-survey cognitive testing of our instruments, the questions represented theoretical constructs, as many of the patients and physicians had not been exposed to the public release of patient experience survey ratings or narrative comments. Specifically, our findings regarding patients’ trust in health system data, and regarding the impact that physicians and patients anticipated from publishing health system data online, were hypothetical in nature. The accountable care organization that served as our study site has not posted the results of routinely collected patient experience data, and it is possible that physicians’ and patients’ views would differ with such exposure. Finally, our study focused on patients over 30 years old and therefore may not represent the views of those younger than 30 years.

Conclusions

Our study adds to the understanding of how physicians and patients perceive independent and health system physician rating websites, including the reported use of, trust in, and potential consequences of both numerical ratings and narrative comments. Physicians and patients hold discordant views: physicians expressed more trust in data on health system websites, while patients expressed more trust in data on independent websites. Their views on whether such data should be shared publicly also differ, with more patients than physicians supporting making health system patient experience data publicly available. Our study underscores the importance of monitoring the impact of independent and health system physician rating websites on both physicians and patients.