Practice Management Software UX: Where Interface Debt Is Costing You


Dennis Lenard

Mar 2026

Physicians spend 5.8 hours with EHRs for every 8 hours of patient care. Practice management software has accumulated years of feature additions without addressing the structural interface problems underneath. A benchmarking update.

This article draws on Creative Navy's project work in medtech UX, spanning practice management software, surgical equipment, ventilators, blood pumps, infusion systems, and patient monitoring devices, including Class II and Class III regulated products. Our work in this sector covers clinical environments including the ICU and operating theatre, designing for surgeons, nurses, and biomedical engineers. Dennis Lenard, who leads this work at Creative Navy, is the author of User Interface Design For Medical Devices And Software, the practitioner reference on UX design for medical devices and software. Our approach integrates IEC 62366 usability engineering requirements and FDA Human Factors guidance as structural inputs to the design process, not post-hoc compliance activities.

Physicians in the United States now spend 5.8 hours interacting with an EHR for every 8 hours of scheduled patient care. That figure comes from deidentified event log data across 200,081 unique physicians at 396 organisations, published by the AMA in September 2024. The financial cost of the burnout this generates runs to an estimated $5.6 billion annually in 2023 dollars (Mayo Clinic Proceedings, April 2024).

Neither figure appeared without warning. The interfaces responsible for them have been reviewed, criticised, and continuously updated for years without resolving the structural problems underneath.

This article updates a benchmarking analysis originally conducted in 2021 and 2022 across practice management systems and electronic health records. Five years on, the question is not which products have the best feature lists. The question is whether the structural interface problems identified then have been addressed, or whether organisations are now paying to maintain the same dysfunction with a newer visual skin and an AI documentation layer on top.

This analysis is written for product directors and senior PMs who suspect the interface is costing them more than their support metrics are capturing. The products reviewed include Cliniko, DrChrono, Greenway Health, eClinicalWorks, AdvancedMD, and athenahealth, alongside new entrants that have gained meaningful market presence since the original benchmarks were conducted.

What the numbers say about EHR burden

Key statistics, as of Q1 2026

  • 5.8 hours of EHR time per 8 hours of scheduled patient care, averaged across ambulatory specialties; primary care averages 7.3 hours (AMA, September 2024)
  • $5.6 billion: estimated annual cost of physician burnout to the US healthcare system, in 2023 dollars (Mayo Clinic Proceedings, April 2024)
  • $80,000: average annual revenue decrease per burned-out physician (KLAS Arch Collaborative, 2024)
  • 28.4 minutes: increase in EHR time per 8-hour clinic session between 2019 and 2023, a 7.8% rise (Annals of Family Medicine, January 2024)
  • 22.5% of physicians spent more than 8 hours on EHR tasks outside normal working hours in 2024, up from 20.9% in 2023 (AMA, August 2025)
  • 17%: wrong-field data entry rate observed in settings where deep navigation hierarchies doubled the clicks required to reach documentation targets (PMC scoping review, 2025)
  • 41% higher odds of burnout and 21% higher odds of inpatient mortality in surgical patients in hospitals with poorer EHR usability, across 343 hospitals and over 1.28 million patients (PMC, 2023)

These are not perception figures. The EHR time data is drawn from event logs, not self-report. The wrong-field entry rate comes from direct observation. The implication for product directors is that the cost of interface dysfunction has been measured and attributed. The conversation about whether to invest in interface quality is, at this point, a conversation about whether to acknowledge what the research already shows.

The evidence on what that dysfunction costs sets up the question this benchmarking is designed to answer: which products have moved, and in what direction.

The Clinical Coherence Audit

The evaluation framework applied to each product in this update consists of three questions:

  1. Does the navigation model reflect clinical workflow logic, or does it reflect software module logic?
  2. Does the information architecture serve the busiest, most time-pressured user role?
  3. Have the structural problems identified in the 2021 and 2022 benchmarks been addressed, or have features been layered on top of them?

We refer to this three-part evaluation as the Clinical Coherence Audit. A product can pass all three questions with a limited feature set. A product can fail all three while adding an AI scribe tool, redesigning its dashboard aesthetics, and releasing 185 new features in a single quarter. The distinction matters because most product development in this category over the past three years has concentrated on features rather than coherence.
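As an illustration only (the audit is qualitative, and the names and pass/fail values below are hypothetical encodings of this article's verdicts, not a vendor-published scoring system), the three questions can be sketched as a simple checklist structure:

```python
from dataclasses import dataclass

@dataclass
class CoherenceAudit:
    """Illustrative encoding of the three-question Clinical Coherence Audit.

    Each field is True when the product passes that question. Feature
    volume is deliberately absent: it is not a factor in the score.
    """
    workflow_navigation: bool  # Q1: navigation reflects clinical workflow, not software modules
    serves_busiest_role: bool  # Q2: information architecture serves the most time-pressured role
    structural_fixes: bool     # Q3: structural problems addressed, not layered over with features

    def passes(self) -> bool:
        # A pass requires all three; adding an AI layer changes nothing here.
        return all((self.workflow_navigation,
                    self.serves_busiest_role,
                    self.structural_fixes))

# Verdicts from this benchmarking, encoded for illustration:
athenahealth = CoherenceAudit(True, True, True)
eclinicalworks = CoherenceAudit(False, False, False)

print(athenahealth.passes())    # True
print(eclinicalworks.passes())  # False
```

The point the structure makes explicit is that the score has no field for feature count: a product releasing 185 features in a quarter and a product releasing none are scored on identical terms.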

Each of the following reviews applies the Clinical Coherence Audit alongside a verdict on whether structural change has occurred since the previous benchmarks.

Cliniko and Halaxy: Five Years On

Cliniko's navigation structure, sidebar layout, and patient file organisation remain consistent with the 2021 benchmark. As of late 2025, Cliniko serves approximately 65,000 professionals. In 2024 the platform integrated Heidi AI, an AI scribe product, into its ecosystem. The core interface is untouched. Third-party assessments describe it as complex to navigate without extensive onboarding, and the patient search function has been characterised as too basic for busy clinics, a finding consistent with what the original analysis documented.

Against the Clinical Coherence Audit: the navigation model still reflects software module logic (appointments, patients, messages, financials, reports as primary navigation categories); the information architecture remains weighted toward first-time-user simplicity at the expense of receptionist efficiency under load; structural problems remain unaddressed.

The Heidi AI integration is instructive as a pattern. It adds real value at the documentation layer without touching the navigation problem beneath it. Clinicians who cannot locate the allergies section quickly are not helped by a better note-taking tool.

Halaxy's current interface status cannot be established from available evidence. No substantive changelog, redesign announcement, or independent benchmarking covering its current state was found in this research pass. Conclusions about Halaxy are suspended pending primary research access.

DrChrono and Greenway: Changed but Not Fixed

DrChrono has been acquired by EverHealth and issued substantive updates in late 2025. November and December 2025 release notes document a redesigned Labs section, new role-specific dashboards for front office, billing staff, and providers, and SNOMED-based data standardisation. The product now markets itself as AI-powered with embedded ambient documentation.

Against the Clinical Coherence Audit: role-specific dashboards are a meaningful structural improvement. The top-bar-only navigation criticised in the 2022 benchmark appears partially addressed through these panels, though a full navigation restructure has not been confirmed in public documentation. Feature accumulation continues alongside structural change, which means the improvement is real but incomplete.

Greenway Health received the Most Improved Physician Practice Solution 2026 recognition from KLAS Research, a meaningful market signal. The platform has migrated to AWS and added Greenway Clinical Assist, an AI documentation tool. A current user review nonetheless describes the medication input workflow as more time-consuming than it needs to be, noting that the specific task sequence requires more steps than the information density justifies. Structural improvement is real; task-level friction persists where it was not directly targeted.

Both products have moved. The question for their product teams is whether the architectural changes driving that movement are sufficient to pull the Clinical Coherence Audit score above the threshold where users stop building compensations.

eClinicalWorks and AdvancedMD: The Holdouts

eClinicalWorks is the most consequential example in this benchmarking. It serves a substantial portion of the US ambulatory care market. Its structural interface problems are well-documented and unchanged.

A January 2026 Capterra review from a Nursing Supervisor describes navigation as challenging and certain workflows as not intuitive. A 2025 user review describes an interface that looks like it belongs in 1999. One Capterra account identifies a specific structural failure in precise terms: a navigation element labelled "Allergies" does not navigate to allergies.

The user must locate the information through a different pathway requiring an additional click and a loading delay. This is not a visual design problem. It is a gap between what the interface label promises and what the system behaviour delivers.

The vendor is aware. A Capterra Canada review states: "eClinicalworks is known for the amount of clicks it takes to complete a task. This is a bit much, but they are aware." That awareness, without structural response, is the operational definition of sense decay: the gap between what the product recognises as a problem and what it has done about it keeps widening with each release cycle that adds features rather than addressing the navigation model.

Against the Clinical Coherence Audit: eClinicalWorks fails all three questions in 2026 on the same grounds it failed them in 2022. The AI layers added (PRISMA health information search, Scribe automated documentation) have not changed that assessment.

AdvancedMD continues to be listed in current EHR comparisons as feature-rich for practice management and revenue cycle management. No major interface overhaul or navigation restructure appears in public changelogs or independent reviews. Its structural position against the Clinical Coherence Audit is unchanged.

Athenahealth: What Improvement Looks Like

Athenahealth is the strongest example of structural improvement in this benchmarking. The Fall 2025 release contained 185 new features, approximately 40% of which originated from the Voice of the Customer programme, a notable commitment to closing the loop between user behaviour and product decisions. A revamped interface from 2024 stated click reduction and navigation streamlining as explicit design goals. Athenahealth received Best in KLAS 2025 for Overall Independent Physician Practice Suite for the second consecutive year.

The 5-stage patient visit model (Check-in, Intake, Exam, Sign-off, Checkout) established in the 2022 benchmark remains the organisational framework. That is the key point: the framework is grounded in clinical workflow logic, not in software module categories. The tooling around each stage has been substantially updated without abandoning the workflow-oriented architecture that makes the interface cohere.

Against the Clinical Coherence Audit: athenahealth passes all three questions. The improvement is not attributable to increased feature volume; eClinicalWorks matches or exceeds athenahealth on that metric. The improvement is attributable to sustained commitment to restructuring around clinical workflow. The competitive signal this creates is direct. Athenahealth's market position has strengthened as competitors have held their structural architecture constant while accumulating features.

That pattern, improvement through architectural restructuring rather than feature accumulation, is what the comparison table below is designed to make visible.

New Entrants Worth Noting

Several products not present in the original benchmarks have gained meaningful market presence and qualify for inclusion in the competitive landscape.

Pabau targets medi-aesthetics and allied health with an integrated platform covering patient portal, injection plotting, and marketing automation. Its information architecture does not need to serve the full range of clinical roles simultaneously. That constraint, respected rather than worked around, produces coherence that general-purpose platforms struggle to achieve at scale. Pabau's growth is a signal that a defined niche with structural coherence outcompetes a broad feature set with structural fragmentation.

Jane App has achieved significant UK adoption in allied health, particularly for multi-practitioner practices, with usability positioned as a primary differentiator over Cliniko. SimplePractice dominates the mental health and therapy segment on similar grounds. Both products demonstrate that niche constraint produces interface discipline that general-purpose products have not consistently managed.

Tebra (formerly Kareo) publishes physician burnout research alongside its product. That positioning choice signals awareness that the EHR burden problem is a competitive consideration, not simply a user satisfaction metric.

Heidi AI represents an emerging category: workflow-layer AI tools that integrate with existing practice management systems rather than replacing them. The Cliniko integration is the most visible example. These tools clarify where structural problems are concentrated: documentation pathways, search, and role-specific data retrieval. They are worth monitoring precisely because they reveal the shape of what the interface fails to provide.

What the patterns reveal

The comparison across products yields a consistent finding: products that have improved have done so by changing information architecture. Products that have stagnated have added features without touching architecture. The AI documentation layer has been added across both groups without differentiating their Clinical Coherence Audit scores.

Product | Core UX status (Q1 2026) | Navigation model | Structural change since 2022 | AI layer added
Cliniko | Largely unchanged | Module sidebar | No | Integration only (Heidi AI)
DrChrono | Materially changed | Top bar plus role dashboards | Partial | Embedded (ambient docs)
Greenway (Intergy) | Materially changed | Mixed | Partial | Embedded (Clinical Assist)
eClinicalWorks | Unchanged | Modal-heavy module logic | No | Embedded (Scribe, PRISMA)
AdvancedMD | Unchanged | Top nav plus tabs | No | Not confirmed
Athenahealth | Materially changed | 5-stage clinical workflow | Yes | Embedded (extensive)
Pabau | New entrant | Purpose-built integrated | N/A | Yes

The AI layer column is informative in isolation. Every major product in this category now offers AI-assisted documentation. None of those AI layers have changed the underlying navigation logic or information hierarchy of the product they run inside. The products where AI has been embedded in a structurally sound foundation (athenahealth) are outperforming those where AI has been embedded in a structurally fragmented one (eClinicalWorks).

For the specific electronic health record design patterns that separate coherent patient file design from fragmented, the pattern benchmarking from our lab covers the information hierarchy decisions that determine whether a clinical record aids decision-making or impedes it.

A structurally coherent practice management interface organises screens around the decisions clinicians make at each stage of a patient visit, not around software modules. It keeps critical patient information visible without requiring navigation, limits modal dialogs for common tasks, and gives receptionists persistent search without forcing abandonment of their current context. Athenahealth's 5-stage visit framework is the closest model in current widespread use to this standard.
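To make the two navigation models concrete, here is a hypothetical sketch (category and task names are invented for illustration, not any vendor's actual configuration) that counts the navigation hops a routine visit incurs under module-organised versus visit-stage-organised architecture:

```python
def navigation_hops(task_to_category):
    """Count navigation hops for a clinical task sequence.

    `task_to_category` maps each step of the task, in order, to the
    navigation category that holds it; a hop is charged whenever
    consecutive steps live in different categories. Purely illustrative
    accounting, not a validated workload metric.
    """
    hops = 0
    current = None
    for step, category in task_to_category.items():
        if category != current:
            hops += 1
            current = category
    return hops

# Under module logic, the steps of one visit scatter across modules...
module_task = {"book": "Appointments", "chart": "Patients",
               "bill": "Financials", "follow-up": "Messages"}

# ...while under visit-stage logic (the athenahealth-style 5-stage model),
# consecutive steps tend to stay inside the same stage.
stage_task = {"book": "Check-in", "chart": "Exam",
              "bill": "Checkout", "follow-up": "Checkout"}

print(navigation_hops(module_task))  # 4: every step is a context switch
print(navigation_hops(stage_task))   # 3: billing and follow-up share a stage
```

The accounting is deliberately crude, but it captures the structural claim: module-organised navigation turns every step of a visit into a context switch, while workflow-organised navigation lets adjacent steps share a screen context.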

AI tools don't fix interface architecture

The counterargument to investing in interface restructuring runs as follows: AI documentation tools are reducing the documentation burden directly, which means the interface's structural problems matter less. If an AI scribe handles charting, does it matter how many clicks it takes to reach the allergies section?

It matters, for two reasons.

First, AI documentation tools address one node in the clinical workflow: note-taking. They do not address patient search disambiguation under phone load, medication reconciliation navigation, appointment scheduling under time pressure, or the task-switching overhead that the 2025 PMC scoping review identified as a direct driver of wrong-field entry errors. The documentation burden is real and worth reducing. It is not the only burden.

Second, the mortality finding in hospitals with poor EHR usability (21% higher odds for surgical patients, OR=1.21, across 343 hospitals) was measured in a context where AI scribe tools were not present as a mitigating factor. That odds ratio reflects the structural interface failure operating at a level below what AI documentation assistance addresses. AI tools reduce the time cost of one task. They do not change the cognitive overhead of navigating a modal-heavy, module-organised system under clinical load.

The pattern that causes AI-powered products to stall in adoption is not usually a model quality problem. It is an interaction layer problem: the gap between what the AI can do and whether the interface lets users reach that capability reliably. In practice management software, that gap is structural. Adding an AI layer to a structurally broken interface does not close it.

The competitive position that AI documentation tools create is real. They reduce one source of friction. They do not change the Clinical Coherence Audit score of the product they run inside.

What this means for product directors

The clearest diagnostic for interface debt is the workaround count. When clinicians paste notes from Word into EHR fields, build departmental conventions that bypass the intended workflow, or maintain personal shortcuts for frequently needed screens, the interface has failed them. A 2025 PMC scoping review observed a 17% wrong-field entry rate in systems where deep navigation hierarchies doubled the clicks required to reach documentation targets. If your support logs contain recurring queries about locating standard information, the navigation model needs structural review, not a help article or a new tooltip.

Across clinical and veterinary practice management engagements, the finding that consistently surprises product teams is that the most diagnostic signal for interface failure is not the error log. It is what is taped to the monitor. In field research conducted across 35 clinics, handwritten checklists appeared repeatedly next to screens that theoretically contained the same information. When users have built their own version of the interface on paper and positioned it adjacent to the display, the architecture has not been supplemented. It has been replaced.

Addressing this requires observational research in live clinical environments, not analytics. Usage data shows which screens are visited most frequently. It does not show what users do before, during, and between those visits to compensate for what the interface fails to provide. The workaround that routes around a broken navigation path will not appear in any click-path report.

The principles that follow from this analysis:

  1. Audit navigation logic against clinical workflow, not against software module categories. If primary navigation labels match backend system modules rather than clinical task stages, the architecture needs review before the next feature cycle.
  2. Measure wrong-field entry rate as a structural metric, not a training issue. When users consistently navigate to the wrong field for a common task, the interface label has failed them. Training will not change that.
  3. Treat AI documentation layer additions as a separate product decision from interface restructuring. They solve different problems and should not substitute for each other in the roadmap.
  4. Evaluate new entrants against the Clinical Coherence Audit, not against feature parity. Purpose-built products for defined niches consistently outperform legacy generalist platforms on coherence at lower feature counts.

Limits of this analysis

This benchmarking relied on public changelogs, user review platforms (Capterra, G2, KLAS Research summaries), and product documentation collected in March 2026. Primary research sessions with current users of each platform were not conducted for this update. User review data on platforms such as Capterra skews toward dissatisfied users, which may overrepresent frustration relative to satisfaction.

Halaxy and Medclinic current states could not be established from available evidence and are excluded from comparative conclusions. The new entrants (Pabau, Jane App, SimplePractice, Tebra, Heidi AI) have not been benchmarked against the Clinical Coherence Audit with the same depth as the legacy platforms, and the verdicts on them should be treated as directional rather than definitive.

The wrong-field data entry rate (17%) and mortality odds ratio (OR=1.21) findings come from specific study populations and should not be applied as universal benchmarks without reference to the original research conditions.

What this analysis cannot settle: whether structural interface reform is achievable within the full range of constraints facing legacy healthcare platforms. Regulatory documentation mandates, HL7 and FHIR data standard requirements, and ICD-10 billing code integration impose real architectural pressures. Athenahealth's improvement within those same constraints suggests structural reform is achievable. EClinicalWorks' stagnation over the same period, in a product whose click-count problem the vendor acknowledges, suggests it requires an organisational decision that goes beyond technical capability. That distinction is worth sitting with rather than resolving cleanly.

Conclusion

The EHR burden is not a feature gap. Physicians spending 5.8 hours with software for every 8 hours with patients are not missing a calendar feature or a reporting dashboard. They are working inside systems whose information architecture was organised around software logic rather than clinical workflow, and which have accumulated five years of additional features without addressing that foundational mismatch.

The products that have improved in this period have done so by restructuring workflow alignment, not by expanding feature counts. Athenahealth's consecutive Best in KLAS recognitions and Pabau's niche-specific growth both reflect the same underlying dynamic: the market is beginning to reward structural coherence. The competitive gap for organisations willing to treat interface architecture as infrastructure rather than as a surface concern to be addressed through feature releases remains very large.

The handwritten list next to the monitor is the most reliable indicator that the gap is open in your product.

Frequently asked questions

What does the KLAS Arch Collaborative 2024 research show about EHR usability and physician revenue?

The KLAS Arch Collaborative 2024 burnout report, drawing on data from 20,229 physicians and 32,782 nurses collected between January 2022 and August 2023, found that a burned-out physician is associated with an average annual revenue decrease of $80,000 per provider. Burnout in this research is directly associated with EHR burden. The figure represents a measurable financial consequence of interface dysfunction, not a qualitative measure of user satisfaction. For a practice with ten physicians, that exposure runs to $800,000 annually before replacement costs are factored in.

How does eClinicalWorks compare to athenahealth on structural interface change since 2022?

EClinicalWorks has not made substantive structural changes to its navigation model since the 2022 benchmark. Its modal-heavy interaction model and module-based navigation remain unchanged; user reviews as of 2025 and 2026 describe click volume and navigation confusion in terms nearly identical to the original analysis. The vendor acknowledges the click-count problem but has not resolved it. Athenahealth has revamped its interface with click reduction as a stated design goal, restructured tooling around its 5-stage visit workflow, and received Best in KLAS for two consecutive years. The Clinical Coherence Audit gap between them has widened since 2022.

What is the Clinical Coherence Audit and how is it applied?

The Clinical Coherence Audit is a three-question evaluative framework: does the navigation model reflect clinical workflow logic or software module logic; does the information architecture serve the busiest, most time-pressured user role; and have structural problems been addressed or have features been layered on top of them. It is applied by mapping a product's primary navigation categories and modal interaction patterns against documented clinical task sequences, assessing whether role-specific information needs are served without cross-module navigation. Feature volume is not a factor in the score.

Why do practice management systems with more features tend to have worse usability?

Feature accumulation in practice management systems typically follows a module-by-module release pattern that adds navigation destinations without restructuring the underlying information architecture. Each release makes the navigation problem worse by adding one more category to a sidebar or top bar that already exceeds working memory capacity for infrequent tasks. Products built for a defined clinical niche avoid this pattern because scope constraint forces architectural discipline. SimplePractice in mental health and Pabau in medi-aesthetics consistently demonstrate this dynamic at lower feature counts than legacy generalist platforms.

What do the AMA EHR time studies show about primary care physicians specifically?

Primary care physicians averaged 7.3 hours of EHR time per 8 hours of scheduled patient care in the AMA event log data published in September 2024, compared to the 5.8-hour average across all ambulatory specialties. A separate Annals of Family Medicine study tracking 141 academic primary care physicians from 2019 to 2023 found that orders time increased 58.9% and inbox time increased 24.4% over the period. The cross-specialty average meaningfully understates the burden on the specialty most reliant on efficient search and documentation navigation.

How has DrChrono changed since the EverHealth acquisition?

DrChrono has introduced role-specific dashboards for front office, billing staff, and providers; redesigned the Labs section for navigation of requisitions and results; and added SNOMED-based data standardisation and structured health assessment tools, documented in November and December 2025 release notes. Embedded ambient documentation has been added. The top-bar-only navigation criticised in the 2022 benchmark appears partially addressed through role-specific panels, though a full navigation restructure has not been confirmed. Whether the architectural change resolves the clinical coherence issues identified in 2022 would require primary research with current users to confirm.

References

American Medical Association. (2024, September). Physician time spent on EHR: Data from 200,000 physicians. AMA. https://www.ama-assn.org/practice-management/sustainability/doctors-productivity-and-ehr-time-burden

Arndt, B. G., Beasley, J. W., Selby, L. V., & Susman, J. L. (2024). More tethered to the EHR: Physician documentation time trends in academic primary care practices. Annals of Family Medicine, 22(1), 14-19. https://doi.org/10.1370/afm.3074

KLAS Research. (2024). Understanding and addressing trends in physician and nurse burnout 2024: Arch Collaborative report. KLAS Research. https://klasresearch.com/report/arch-collaborative-ehr-experience-report-physician-and-nurse-burnout-2024/2342

Linzer, M., Sinsky, C. A., & Poplau, S. (2024). Predicting primary care physician burnout from electronic health record use measures. Mayo Clinic Proceedings, 99(4), 633-643. https://doi.org/10.1016/j.mayocp.2023.09.014

Alobayli, F. (2023). Electronic health record usability, burnout, and patient safety culture among hospital health care professionals: Systematic review. JMIR Human Factors, 10, e43301. https://doi.org/10.2196/43301

Vawdrey, D. K., Wilcox, L. G., Collins, S. A., Feiner, S. K., Mamykina, L., Stein, D. M., & Bakken, S. (2025). Usability challenges in electronic health records: Impact on documentation burden and clinical workflow. PubMed Central. https://pmc.ncbi.nlm.nih.gov

American Medical Association. (2025, August). National physician survey: EHR time outside normal hours 2024. AMA. https://www.ama-assn.org/practice-management/physician-health/new-ama-research-highlights-pandemic-s-lasting-impact-physician

KLAS Research. (2026). Best in KLAS 2026: Most improved physician practice solution. KLAS Research. https://klasresearch.com

Athenahealth. (2025, October). Fall 2025 release highlights. Athenahealth. https://www.athenahealth.com/knowledge-hub/practice-management/fall-2025-release-highlights

