Physician Rating Scales Do Not Accurately Rate Physicians


Review Article 
Matthew B. Burn, MD; David M. Lintner, MD; Pedro E. Cosculluela, MD; Kevin E. Varner, MD; Shari R. Liberman, MD; Patrick C. McCulloch, MD; Joshua D. Harris, MD
Orthopedics.
Abstract
The purpose of this study was to determine the proportion of questions used by online physician rating scales to directly rate physicians themselves. A systematic review was performed of online, patient-reported physician rating scales. Fourteen websites were identified containing patient-reported physician rating scales, with the most common questions pertaining to office staff courtesy, wait time, overall rating (entered, not calculated), trust/confidence in physician, and time spent with patient. Overall, 28% directly rated the physician, 48% rated both the physician and the office, and 24% rated the office alone. There is great variation in the questions used, and most fail to directly rate physicians themselves. [Orthopedics. 201x; xx(x):xx–xx.]
In the current health care environment, with an emphasis on the “value of care,” there has been a transition from a fee-for-service model (rewarding quantity of care) to a value-based model (rewarding quality of care). Although cost (direct and indirect) can be easily calculated, measurement of the quality of care is complicated and multifactorial. Quality of care is determined by (1) the reviewer (ie, patient, provider, hospital, or payer) and (2) the method of measurement (ie, subjective patient-reported scales, objective outcome measures, readmission rates, complications or mortality, and so forth). Subjective patient-reported physician rating scales are increasingly being used for this purpose; however, most have been administered as physical documents, with the results available only to the hospital and providers (ie, the Clinician and Group Consumer Assessment of Healthcare Providers and Systems, developed by the Agency for Healthcare Research and Quality and administered by Press Ganey Associates). 1,2 Release of these scales to the public (ie, physician rating websites) will theoretically empower patients to make better decisions about their health care when selecting a provider.
Initially, physician rating scales were incorporated into general review websites alongside all other commercial products. In 2004, RateMDs was developed specifically for patients to rate physicians; it was quickly followed by other physician rating websites, including Healthgrades and Vitals. However, in medicine, unlike in the food and travel industries, patients' best interests often do not align with their personal opinions or expectations. If satisfaction ratings are used to measure clinical success and determine reimbursement, patients may either receive unnecessary interventions or tests or not receive necessary but undesirable testing or care. 3 Two common examples are treating viral upper respiratory illnesses (ie, the common cold) with antibiotics and prescribing narcotic pain medications, both of which increase patient satisfaction but also increase antibiotic resistance and narcotic addiction, respectively. 4–8 Studies have shown that although the most satisfied patients frequently spend the most on health care and prescription drugs, they also have higher hospitalization and mortality rates. 3–5
In addition to these ethical dilemmas, the currently available “physician” rating scales (both conventional and online) ask broad questions aimed at evaluating the entire patient “experience,” including elements outside the direct control of the physician (ie, wait times, office décor, staff friendliness) rather than just the physician's own interactions, leading to confusion among consumers. Although prior studies have reported the components evaluated by physician rating scales, they have not comprehensively evaluated the proportion of these scales that rate the physician directly. The authors performed a systematic review to determine (1) the components of care evaluated by currently available physician rating scales and (2) which of these factors are under the direct control of, or directly grade, the physician. The authors hypothesized that less than 50% of the factors used to rate physicians are under their direct control or directly grade them.
Websites
Identification
This systematic review was registered with the International Prospective Register of Systematic Reviews. It was conducted and reported using all applicable components of the protocol described by Preferred Reporting Items for Systematic Reviews and Meta-Analyses. 9 The search was performed by 2 of the authors (M.B.B., J.D.H.) using the Google search engine for the period between October 12, 2015, and November 12, 2015. Three sets of search terms were used: (1) “Doctor grade” OR “Doctor score” OR “Doctor rating” OR “Doctor review”; (2) “Provider survey questions” OR “Patient experience scorecard public”; and (3) “Physician” OR “Doctor” AND “Grade” OR “Rate.” Each search term, including the quotation marks (ie, “Doctor”) and modifiers (ie, AND and OR), was entered into the Google search engine exactly as listed above. Similar to previously published studies, the first 100 websites found using each of these 3 search terms were reviewed (a total of 300 websites) (Figure). 10
Figure:
Preferred Reporting Items for Systematic Reviews and Meta-Analyses flowchart illustrating application of exclusion criteria.
Selection Criteria
The 2 authors who performed the search reviewed all 300 websites. During the primary analysis, websites were excluded if they did not contain a physician rating scale completed by patients (Figure). Websites with (1) online news articles or blogs not discussing health care–related topics, (2) online news articles or blogs discussing health care–related topics, (3) commercial sales, (4) discussion of television doctor shows, (5) journal articles not discussing health care–related topics, and (6) journal articles discussing health care–related topics but not including physician rating scales were excluded. During the secondary analysis, the remaining websites were reviewed in depth. Websites were excluded from final analysis if they (1) were duplicates (ie, found using more than 1 of the 3 sets of search terms), (2) only contained links to other already included rating websites, (3) only graded hospitals (not individual physicians), or (4) only reported Medicare readmission codes rather than subjective patient grading. Two “physical document” grading scales (the National Health Service survey and the Clinician and Group Consumer Assessment of Healthcare Providers and Systems) were found during the initial search and, although they do not have publicly accessible online scores, were included because they contained patient-reported physician rating scales.
Data Extraction
Each website selected for final review was independently assessed by the same 2 authors without blinding to website identifiers, such as website name, company, and affiliations. Data collected fell into 3 categories: (1) website demographics and characteristics, (2) available physician demographics, and (3) components of the physician rating scale. Website demographics and features for each physician rating website were reviewed (Table 1). Online publicly available physician demographics were reviewed for each of the included physician rating websites (Table 2). The components of each rating scale were reviewed to count and categorize the questions patients were asked about their physician and their physician's office. The authors categorized these questions into 3 groups: (1) those that rated the physician directly, (2) those that rated both the physician and the office, and (3) those that rated the office directly (office staff, parking, location, cost, equipment, and so forth) (Table 3). Descriptive statistics were calculated for each physician rating scale.
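The categorization and descriptive statistics described above reduce to tagging each question with one of the 3 groups and computing category proportions. A minimal illustrative sketch follows; the questions and their category labels are hypothetical examples, not the study's actual data:

```python
from collections import Counter

# Hypothetical questions from a single rating scale, tagged with the study's
# three categories: rates the physician directly, rates both the physician
# and the office ("both"), or rates the office alone.
questions = {
    "Trust/confidence in physician": "physician",
    "Bedside manner": "physician",
    "Time spent with patient": "both",
    "Overall rating (patient-entered)": "both",
    "Courtesy of office staff": "office",
    "Wait time / punctuality": "office",
}

counts = Counter(questions.values())
total = len(questions)

# Percentage of questions falling into each category.
proportions = {category: round(100 * n / total) for category, n in counts.items()}
print(proportions)
```

Aggregating the same tally over the questions of all 14 scales is, in essence, how the study's overall physician/both/office split was obtained.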
Table 1:
Access, Characteristics and Demographics, and Searchability of the 14 Websites
Table 2:
Physician Information Available on the 14 Websites
Table 3:
Summary of the Components (Questions/Comments) Used by the Online Patient-Reported Physician Rating Scales
Results
With the use of these 3 sets of search terms and the Google search engine, 300 websites were identified. A total of 263 of these websites were excluded because they did not contain a patient-reported physician rating scale (Figure). An additional 23 websites were excluded because they were duplicates (n=6), linked to another already included website (n=5), contained rating scales of hospitals rather than individual physicians (n=10), or contained ratings based on Medicare data rather than patient-reported scales (n=2). Thus, 14 websites were available for final analysis 11–24 (Table 4), containing a mean of 9 (SD, 8; range, 1–33) checkbox questions and 1 (SD, 1; range, 0–2) comment box. Demographic information for each of the 14 physician rating websites, including (1) whether the website was free to access, (2) whether patient reviews could be linked to social media, (3) the methods used by the website to verify patients, (4) the parameters patients can use to search for physicians, and (5) the demographics collected about the patients themselves, is listed in Table 1. Most of the websites did not require verification that an appointment with the physician had occurred (71%). Publicly available demographic information for physicians is presented in Table 2. Most of the websites included each physician's office address (100%) and phone number (75%) and contained a link to practice websites (67%). Years of experience (33%), procedures performed (33%), and conditions treated (33%) received less emphasis.
Two websites (Table 3) were online versions of conventional physical document rating scales: (1) the National Health Service patient survey, 19 which had a total of 62 questions, with 33 questions being about the physician or office (24% grading the physician directly, and 36% grading both the physician and the office); and (2) the Clinician and Group Consumer Assessment of Healthcare Providers and Systems, 12 which had a total of 31 questions, with 18 questions being about the physician or office (46% grading the physician directly, and 31% grading both the physician and the office). Twelve websites (Table 3) represented free online patient-reported physician rating scales. These ranged in length from 1 question 15,23 (patient-reported, rather than calculated, “overall rating”) to 11 questions. The 3 websites with the greatest number of questions were RealSelf 21 (11 questions); Angie's List 11 (11 questions); and Healthgrades 17 (9 questions). The 3 websites with the highest percentage of questions directly grading the physician were IWantGreatCare 18 (67%); RateMDs 20 (50%); and Healthgrades 17 (44%). The top 3 websites by percentage of questions grading either the physician alone or the physician and office together (excluding DrScore and Yelp, which had only 1 patient-reported, rather than calculated, overall rating question) were (1) IWantGreatCare 18 (100%); (2) Zocdoc 24 (100%); and (3) Vitals 22 (88%).
Table 4:
Websites That Contained Patient-Entered Physician Rating Scales
The most common questions used by these scales were versions of (1) courtesy of office staff (71%); (2) wait time, promptness, or punctuality (64%); (3) overall rating (entered, not calculated) (57%); (4) trust/confidence in physician/knowledge (50%); (5) time spent with patient (43%); and (6) recommend to family/friend (43%). Although 2 questionnaires (14%) included a patient-reported “treatment success” question, none (0%) included patient-reported “surgeon skill” questions or reported outcome scores to measure success. All (100%) of the online-only rating scales included an open-ended comment section for patients to complete.
Discussion
The authors reviewed online physician rating scales to (1) determine the components (ie, questions) used by these rating scales and (2) calculate the proportion of these under the direct control of, and therefore actually rating, the physician. Although there is much variation in the questions used by physician rating websites, they tend to follow several themes: office staff friendliness/courtesy (found in 71% of rating scales); wait times (64%); trust/confidence in the physician (50%); communication skills (bedside manner, 36%; listens/answers questions, 36%; and communication, 29%); and whether the patient would recommend the physician (43%). These findings were similar to those of prior studies. 10,25–27 Reimann and Strech, 10 who reviewed English and German physician rating websites, found that the most common questions were related to wait times (100% of rating scales), trust/confidence in the physician (86%), and office staff friendliness/courtesy (81%). The study by de Groot et al 28 found that patients deemed wait time and physician expertise to be the most important factors in selecting a physician or surgeon. Although these studies analyzed the components found on physician rating websites, no prior studies have discussed the proportion that directly rate the physician in question. In fact, this study found that most of the questions are out of the physician's direct control. This had been mentioned but not measured in prior studies, which stated that reviews focused more on office décor than on whether the physician delivered good health care. 29–32 Of the 14 rating scales reviewed, only 4 had more questions focused directly on the physician than on the combined physician/office and office-alone categories. Overall, an average of 28% of questions directly rated the physician (confirming the authors' hypothesis of less than 50%), whereas 48% rated both the physician and the office and 24% rated the office alone.
Despite the explosion of physician rating websites, little has been published regarding the quality and applicability of these scales. RateMDs, one of the first physician rating websites, had 500,000 available physician ratings in its first 4 years online (2004 to 2008). 33 In 2008, Healthgrades and Vitals entered the market and rapidly grew in popularity. By 2010, 24% to 47% of American adults reported reading physician reviews online prior to choosing a physician. In 2016, as a sign of the increased acceptance of physician rating websites, UnitedHealthcare, one of the largest health insurance providers, 34 linked Healthgrades ratings to physician profiles on its website to “reinforce quality of care.” 35 Although studies have shown that online rating scales correlate well with conventional paper rating scales, correlations between patient satisfaction and objective outcome scores have varied. 36 Some studies have shown that patient satisfaction correlates with higher medical costs, more hospital admissions, and a higher mortality rate, without evidence of a correlation with measurable improvement in objective outcomes, 37–43 whereas other studies have shown a correlation with improved outcomes. 44 Most physicians agree that patient satisfaction should be part of the way physicians are evaluated, but it should not be treated as synonymous with overall physician performance or overall quality of care. 10,25 Beyond patient satisfaction being an incomplete measure of overall clinical care, the methodology used by most physician rating websites is lacking; in fact, most patient satisfaction surveys do not include even half of the components required to measure patient satisfaction. 10,25 Orthopedic surgery is one of the most frequently searched specialties, but it is reported to have the lowest overall online patient satisfaction ratings. 45,46 Bakhsh and Mesfin, 26 specifically examining online ratings of orthopedic surgeons, found that “surgeon knowledge” and “bedside manner” correlated most strongly with physicians' overall ratings, consistent with prior studies showing poor communication and bedside manner to be the most common reasons why patients seek a second opinion after seeing an orthopedic surgeon. 47
In striving to be broad and generalizable, physician rating websites are full of information of questionable value, which could be improved by specialty-specific rating scales. Although the focus, goals, and day-to-day work (including the level of patient interaction) of hospitalists, family practice physicians, emergency medicine physicians, dermatologists, radiologists, and surgeons vary considerably, physician rating websites attempt to evaluate all of these with a single all-inclusive rating scale. In a family practice, clinical success and outcomes may be measured by vital signs and laboratory values (ie, body mass index, blood pressure, glucose, hemoglobin A1c), and patient education can be the main driver in the achievement of these goals (ie, weight loss). In contrast, surgeons use patient education to foster informed decision-making, with clinical success and outcomes being primarily determined by patient selection and technical or surgical skills that are not measured by patient satisfaction scores. Currently, there are no freely or easily accessible databases containing reports of individual physicians' technical expertise or objective outcome measurements (eg, imaging [radiography], functional measures [strength, range of motion, dexterity], and time to return to sports, work, or even activities of daily living). 10 Public reporting of physician performance (including both patient satisfaction and objective outcome scores) should improve the quality of health care by allowing patients to make educated choices regarding physicians and allowing physicians to monitor their own practices for areas of improvement. 48 However, if flawed or inconsistent methodology is used, the inaccurate ratings could mislead or confuse both patients and physicians. 48
Recently, websites have emerged (eg, ProPublica Surgeon Scorecard) that claim to be a publicly available surgeon “objective outcome measure.” Instead of a clinical outcome measure, they represent administrative data relying on an “adjusted complication rate” calculated from Medicare readmission codes that have a latency of 1 to 2 years and include a limited number of procedures (currently 8), only the first 30 days postoperatively, and many complications unrelated to the index surgical procedure. They fail to include the immediate postoperative period (during the index admission), during which 67% of postoperative complications occur, or any complications more than 30 days after surgery. 48–52 Penalizing surgeons for complications unrelated to the surgery itself (usually due to medical comorbidities) may encourage “cherry picking” of healthier patients for elective procedures. 49
Most physician rating websites, including all but 1 reviewed, are “open chain,” meaning that, in the interest of patient privacy, anyone with Internet access (including individual physicians, their office staff, or reputation-based businesses) can post 1 or more reviews without verification of an actual consultation with the physician, thus permitting unscrupulous physicians to manipulate their own scores. 10,30,53 Additionally, some physician rating websites offer physicians upgraded memberships that allow them to remove negative ratings. 22 Both the National Health Service survey and the Clinician and Group Consumer Assessment of Healthcare Providers and Systems survey are truly “closed chain,” being physical documents mailed to a patient after a confirmed visit with a physician. Zocdoc is the most closed chain of all: reviews can be posted only by patients who book an appointment through its website, and only physicians who pay to be listed appear on it. However, after making an appointment, patients can leave a physician rating after the appointment date regardless of whether they actually attended the appointment. Often, information listed on physician rating websites is “inaccurate or outdated” because it is derived directly from other databases (ie, other physician rating websites, medical board websites). 53 Studies have shown that only 3% to 28% of physicians listed on physician rating websites have 1 or more ratings available, which leads to skewing and volatility of rating scales that may not be representative of the actual patient experience. 25,27,29,30,31,47,48,53–56 Physicians fear that there is a tendency for reviews to be posted only by outliers with terrible experiences; however, prior studies have shown that most (>85%) reviews posted on physician rating websites are positive. 4,10,25,29–31,54,57 Given all of these factors, it is not surprising that the public is skeptical regarding the accuracy of reviews. A poll conducted by C.S. Mott Children's Hospital indicated that although 30% of parents review online physician ratings, two-thirds (69%) believe that some of the reviews are fake, 64% believe there are not enough reviews to make a good decision, and only 11% have ever rated a physician online. 48,58 Most patients report that they rely heavily on recommendations from other physicians and from family and friends. 38 Trehan and Daluiski 45 suggested that some of these inconsistencies could be improved via frequent internal monitoring by physician rating websites to eliminate outlying reviews (either positive or negative) and to make a physician's ratings public only when a specific number of reviews have been received.
This study had limitations. First, the search results are dynamic; searches performed today may yield different results. Second, selection bias was introduced by reviewing only the first 100 search results. However, this method replicated the process patients use when evaluating potential physicians; the cutoff of 100 results was based on a prior study, and it is unlikely that most patients would review results beyond this point. 10 Third, the search was limited to English-language websites, which adds further selection bias. However, Reimann and Strech 10 found similar questions on English and German physician rating websites.
Conclusion
Current physician rating websites have the benefit of being freely and widely available, but they often contain unreliable data that dilute physician ratings with those of the office facility and staff. Although patient satisfaction has a role in evaluating the quality of care, it is but 1 piece of the picture and should be viewed along with objective outcome measures. More work is needed to adequately delineate which questions would optimally measure patient satisfaction with direct physician care and to incorporate measures of diagnostic and/or technical expertise and objective outcomes.
References
LaFon H. Baron Funds comments on Press Ganey Holdings Inc. GuruFocus. http://www.gurufocus.com/news/354300/baron-funds-comments-on-press-ganey-holdings-inc . Accessed December 30, 2015.
Hasse J. Buy Press Ganey? BMO Capital says it's time. Benzinga. http://www.benzinga.com/analyst-ratings/analyst-color/15/06/5595316/buy-press-ganey-bmo-capital-says-its-time . Accessed January 16, 2016.
Junewicz A, Youngner SJ. Patient-satisfaction surveys on a scale of 0 to 10: improving health care, or leading it astray? Hastings Cent Rep. 2015; 45(3):43–51. doi:10.1002/hast.453 [CrossRef]
Falkenberg K. Why rating your doctor is bad for your health. Forbes. http://www.forbes.com/sites/kaifalkenberg/2013/01/02/why-rating-your-doctor-is-bad-for-your-health/-7ddd2b6b2f15 . Accessed January 1, 2016.
Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012; 172(5):405–411. doi:10.1001/archinternmed.2011.1662 [CrossRef]
Cowan P. Press Ganey scores and patient satisfaction in the emergency department (ED): the patient perspective. Pain Med. 2013; 14(7):969. doi:10.1111/pme.12170_3 [CrossRef]
Ashworth M, White P, Jongsma H, Schofield P, Armstrong D. Antibiotic prescribing and patient satisfaction in primary care in England: cross-sectional analysis of national patient survey data and prescribing data. Br J Gen Pract. 2016; 66(642):e40–e46. doi:10.3399/bjgp15X688105 [CrossRef]
Lembke A. Why doctors prescribe opioids to known opioid abusers. N Engl J Med. 2012; 367(17):1580–1581. doi:10.1056/NEJMp1208498 [CrossRef]
Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg. 2010; 8(5):336–341. doi:10.1016/j.ijsu.2010.02.007 [CrossRef]
Reimann S, Strech D. The representation of patient experience and satisfaction in physician rating sites: a criteria-based analysis of English- and German-language sites. BMC Health Serv Res. 2010; 10:332. doi:10.1186/1472-6963-10-332 [CrossRef]
Angie's List. https://www.angieslist.com . Accessed November 1, 2015.
Agency for Healthcare Research and Quality. CAHPS Clinician & Group Survey. http://www.ahrq.gov/cahps/surveys-guidance/cg/instructions/index.html . Accessed November 1, 2015.
DoctorBase. https://doctorbase.com/search . Accessed November 1, 2015.
DoctorScorecard. http://www.doctorscore-card.com . Accessed November 1, 2015.
DrScore. Winston-Salem, NC: Medical Quality Enhancement Corp; 2005. http://www.drscore.com . Accessed November 1, 2015.
EyeDoctorReview. http://eyedoctorreview.com . Accessed November 1, 2015.
Healthgrades. Denver, CO: Vestar Capital Partners; 1998. http://www.healthgrades.com . Accessed November 1, 2015.
iWantGreatCare. United Kingdom: Dr. Neil Bacon; 2008. https://www.iwantgreatcare.org . Accessed November 1, 2015.
GP Patient Survey. United Kingdom: National Health Service; 2005. https://gp-patient.co.uk . Accessed November 1, 2015.
RateMDs. San Jose, CA: VerticalScope Inc; 2004. https://www.ratemds.com . Accessed November 1, 2015.
RealSelf. https://www.realself.com . Accessed November 1, 2015.
Vitals. Lyndhurst, NJ: MDx Medical, Inc; 2007. http://www.vitals.com . Accessed November 1, 2015.
Yelp. http://www.yelp.com . Accessed November 1, 2015.
Zocdoc. https://www.zocdoc.com . Accessed November 1, 2015.
Kadry B, Chu LF, Kadry B, Gammas D, Macario A. Analysis of 4999 online physician ratings indicates that most patients give physicians a favorable rating. J Med Internet Res. 2011; 13(4):e95. doi:10.2196/jmir.1960 [CrossRef]
Bakhsh W, Mesfin A. Online ratings of orthopedic surgeons: analysis of 2185 reviews. Am J Orthop (Belle Mead NJ). 2014; 43(8):359–363.
Emmert M, Sander U, Pisch F. Eight questions about physician-rating websites: a systematic review. J Med Internet Res. 2013; 15(2):e24. doi:10.2196/jmir.2360 [CrossRef]
de Groot IB, Otten W, Dijs-Elsinga J, et al. Choosing between hospitals: the influence of the experiences of other patients. Med Decis Making. 2012; 32(6):764–778. doi:10.1177/0272989X12443416 [CrossRef]
Shute N. Online grades for doctors get an incomplete. http://www.npr.org/sections/health-shots/2013/01/04/168626218/grades-for-doctors-get-an-incomplete . Accessed January 1, 2016.
Ellimoottil C, Hart A, Greco K, Quek ML, Farooq A. Online reviews of 500 urologists. J Urol. 2013; 189(6):2269–2273. doi:10.1016/j.juro.2012.12.013 [CrossRef]
Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients' online ratings of their physicians over a 5-year period. J Med Internet Res. 2012; 14(1):e38. doi:10.2196/jmir.2003 [CrossRef]
Manary MP, Boulding W, Staelin R, Glickman SW. The patient experience and health outcomes. N Engl J Med. 2013; 368(3):201–203. doi:10.1056/NEJMp1211775 [CrossRef]
Ostrov BF. You can rate toasters, cars, and now doctors. Boston Globe. July 7, 2008. http://business.angieslist.com/Visitor/News/PressDetail.aspx?i=52 . Accessed April 1, 2016.
Heilbrunn E. Top health insurance companies. U.S. News & World Report. November 5, 2014. http://health.usnews.com/health-news/health-insurance/articles/2013/12/16/top-health-insurance-companies . Accessed March 4, 2016.
Lowes R. Big insurer's website displays physician ‘Healthgrades.’ http://www.medscape.com/viewarticle/862829 . Accessed April 1, 2016.
Greaves F, Pape UJ, King D, et al. Associations between Internet-based patient ratings and conventional surveys of patient experience in the English NHS: an observational study. BMJ Qual Saf. 2012; 21(7):600–605. doi:10.1136/bmjqs-2012-000906 [CrossRef]
Ketelaar NA, Faber MJ, Flottorp S, Rygh LH, Deane KH, Eccles MP. Public release of performance data in changing the behaviour of healthcare consumers, professionals or organisations. Cochrane Database Syst Rev. 2011; (11):CD004538.
Yahanda AT, Lafaro KJ, Spolverato G, Pawlik TM. A systematic review of the factors that patients use to choose their surgeon. World J Surg. 2016; 40(1):45–55. doi:10.1007/s00268-015-3246-7 [CrossRef]
Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013; 3(1):e001570. doi:10.1136/bmjopen-2012-001570 [CrossRef]
Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011; 17(1):41–48.
Gray BM, Vandergrift JL, Gao GG, McCullough JS, Lipner RS. Website ratings of physicians and their quality of care. JAMA Intern Med. 2015; 175(2):291–293. doi:10.1001/jamainternmed.2014.6291 [CrossRef]
Rao M, Clarke A, Sanderson C, Hammersley R. Patients' own assessments of quality of primary care compared with objective records based measures of technical quality of care: cross sectional study. BMJ. 2006; 333(7557):19. doi:10.1136/bmj.38874.499167.7C [CrossRef]
Chang JT, Hays RD, Shekelle PG, et al. Patients' global ratings of their health care are not associated with the technical quality of their care. Ann Intern Med. 2006; 144(9):665–672. doi:10.7326/0003-4819-144-9-200605020-00010 [CrossRef]
Glickman SW, Boulding W, Manary M, et al. Patient satisfaction and its relationship with clinical quality and inpatient mortality in acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2010; 3(2):188–195. doi:10.1161/CIRCOUTCOMES.109.900597 [CrossRef]
Trehan SK, Daluiski A. Online patient ratings: why they matter and what they mean. J Hand Surg Am. 2016; 41(2):316–319. doi:10.1016/j.jhsa.2015.04.018 [CrossRef]
Pfeffer GB. Raising the bar for online physician review sites. Am J Orthop (Belle Mead NJ). 2015; 44(1):11–12.
van Dalen I, Groothoff J, Stewart R, Spreeuwenberg P, Groenewegen P, van Horn J. Motives for seeking a second opinion in orthopaedic surgery. J Health Serv Res Policy. 2001; 6(4):195–201. doi:10.1258/1355819011927486 [CrossRef]
Friedberg MW, Pronovost PJ, Shahian DM, et al. A methodological critique of the ProPublica surgeon scorecard. RAND Corporation. http://www.rand.org/pubs/perspectives/PE170.html . Accessed January 7, 2016.
Rosenbaum L. Scoring no goal: further adventures in transparency. N Engl J Med. 2015; 373(15):1385–1388. doi:10.1056/NEJMp1510094 [CrossRef]
Dreyfuss JH. Secret data on surgeons made public. MDalert. August 5, 2015. http://www.mdalert.com/article/secret-data-on-surgeons-made-public . Accessed December 28, 2015.
Bastian H. What's the score on surgeon scorecards? MedPage Today. October 2, 2015. http://www.medpagetoday.com/Surgery/GeneralSurgery/53888 . Accessed December 28, 2015.
Pierce O, Allen M. Assessing surgeon-level risk of patient harm during elective surgery for public reporting. ProPublica. August 4, 2015. https://static.propublica.org/projects/patient-safety/methodology/surgeon-level-risk-methodology.pdf . Accessed February 3, 2016.
Ellimoottil C, Leichtle SW, Wright CJ, et al. Online physician reviews: the good, the bad, and the ugly. Bulletin of the American College of Surgeons. September 1, 2013. http://bulletin.facs.org/2013/09/online-physician-reviews . Accessed January 7, 2016.
Lagu T, Hannon NS, Rothberg MB, Lindenauer PK. Patients' evaluations of health care providers in the era of social networking: an analysis of physician-rating websites. J Gen Intern Med. 2010; 25(9):942–946. doi:10.1007/s11606-010-1383-0 [CrossRef]
Keckley PH. 2011 Survey of Health Care Consumers in the United States: Key Findings, Strategic Implications. Washington, DC: Deloitte Center for Health Solutions; 2011.
Mostaghimi A, Crotty BH, Landon BE. The availability and nature of physician information on the internet. J Gen Intern Med. 2010; 25(11):1152–1156. doi:10.1007/s11606-010-1425-7 [CrossRef]
López A, Detz A, Ratanawongsa N, Sarkar U. What patients say about their doctors online: a qualitative content analysis. J Gen Intern Med. 2012; 27(6):685–692. doi:10.1007/s11606-011-1958-4 [CrossRef]
C.S. Mott Children's Hospital. National poll on children's health: many parents wary of online ratings for doctors. http://mottnpch.org/sites/default/files/documents/032116_doctorratings_1.pdf . Accessed April 1, 2016.