Mass Photometry Software UX Benchmarking: a systematic review

Dennis Lenard

Mar 2026

Five mass photometry analysis tools reviewed against a consistent UX framework. DiscoverMP, PhotoMol, ImageJ, CellProfiler, and BioImageIT each fail in documented ways that compound reproducibility risk across shared facilities.

Mass photometry has earned its reputation quickly. The technique measures individual molecular mass events as molecules land on a glass surface, in real time, without chemical labelling, in near-native conditions. What used to take days can take an afternoon. Since Refeyn commercialised the OneMP instrument in 2019, adoption has accelerated across cryo-EM preparation pipelines, biologic characterisation programmes, and structural biology labs worldwide. The hardware is fast, sensitive, and genuinely disruptive.

The software around it is not.

This review benchmarks five tools that together constitute the working mass photometry software ecosystem: DiscoverMP (Refeyn's primary analysis platform), PhotoMol (an open-source browser tool from EMBL Hamburg), ImageJ and its Fiji distribution, CellProfiler (Broad Institute), and BioImageIT (INRIA/CNRS). Refeyn's StreamlineMP automation layer is assessed alongside DiscoverMP because its existence is itself a usability signal about the platform it supplements.

Each tool is evaluated against three criteria: whether it honestly communicates what its model assumes and controls, whether it scales with operator expertise rather than flattening it, and whether the friction it creates serves scientific precision or something else entirely. The audience for this review is core facility managers running these tools daily, research software engineers making integration decisions, and structural biologists who need reproducible results across operators and time.

Several tools work adequately within narrow conditions. The problem is structural: together, they produce a software environment that actively works against the science it is meant to support.

Key statistics

Metric | Value | Source
Resolving power improvement, optimised vs standard navg | Up to 2x | ACS Nano, Feb 2026
DiscoverMP default navg setting | 5 | ACS Nano, Feb 2026
Optimised navg setting in published study | 10 | ACS Nano, Feb 2026
StreamlineMP antibody analysis time reduction (vendor-stated) | Up to 80% | Refeyn, Nov 2024
BioImageIT: France-BioImaging platforms deployed | 10 (as of Oct 2022) | Prigent et al., Nature Methods, 2022
Open-source bioimaging tools jointly covering data management and analysis | Minority | Prigent et al., Nature Methods, 2022

Tools and evaluation criteria

Six tools are covered here: the five headline tools plus StreamlineMP, which is assessed alongside DiscoverMP. DiscoverMP is Refeyn's primary commercial analysis platform for mass photometry data. StreamlineMP is a modular complement Refeyn launched in November 2024 to address standardisation gaps in large-dataset workflows. PhotoMol is a browser-based open-source tool developed at EMBL Hamburg for researchers without Refeyn licences. ImageJ and its Fiji distribution are the dominant open-source image analysis environment in the life sciences. CellProfiler is the Broad Institute's automated image analysis platform. BioImageIT is a GUI-first framework developed by INRIA and CNRS to integrate image data management with analysis.

The evaluation framework applies the same five criteria to each tool.

Criterion | What this means in practice
Epistemic transparency | Does the interface expose and explain the parameters that determine measurement validity?
Mastery scaling | Does the tool support both novice and expert use without punishing either?
Friction alignment | Does the resistance users encounter serve scientific precision, or licensing and development convenience?
Reproducibility support | Does the tool make parameter choices traceable across operators and software versions?
Integration | Does the tool connect sensibly with adjacent tools at data handoff points?

No tool reviewed here scores well across all five criteria. The variation in where each fails is what the Comparative Analysis section draws on.

With those criteria established, the individual reviews that follow apply them consistently across all five tools.

Individual tool reviews

DiscoverMP

DiscoverMP is the environment most mass photometry researchers use for everything from acquisition to histogram export. For routine single-session work (acquisition, landing event capture, histogram inspection, and mass export), it is functional. The workspace model works for labs running a handful of samples per session, and the visual output is clean enough that experienced users can read peak positions quickly.

The problem starts when parameters matter. DiscoverMP offers broad flexibility that many users value, but for those who want a standardised analysis workflow across a large number of datasets it can prove time-consuming, which is what led Refeyn to build StreamlineMP as a complement. Refeyn's own framing of this limitation, stated at StreamlineMP's launch in November 2024, is candid: DiscoverMP was built for flexibility, not for throughput or reproducibility at scale. StreamlineMP's Antibody Stability Module addresses one workflow and reports an 80% reduction in antibody data analysis time, as of launch in November 2024. That figure is vendor-reported and has not been independently validated; what it signals more clearly than its exact value is the scale of the problem it was designed to patch.

The deeper issue is the navg parameter. navg controls the temporal averaging applied to each landing event signal: how many consecutive camera frames are averaged to smooth the contrast reading from which mass is calculated. Threshold 1 and Threshold 2 set the sensitivity boundaries for what the software counts as a valid landing event. Together, these three parameters define the resolving power of every histogram DiscoverMP produces. Research published in ACS Nano in February 2026 documents the practical difference: standard settings of navg=5, Threshold 1=1.20, and Threshold 2=0.25 versus user-optimised settings of navg=10, Threshold 1=2.60, and Threshold 2=0.25 produced resolving power differences of up to a factor of 2 across the same datasets, tested against DiscoverMP v2024 R1. The paper's language is direct: DiscoverMP optimisation "requires some expertise."
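Why temporal averaging moves resolving power can be illustrated with a toy model: averaging navg frames of a noisy contrast trace reduces the random noise standard deviation by roughly a factor of the square root of navg, which narrows mass peaks (at the cost of temporal resolution for short-lived events). The sketch below is purely illustrative Python; the contrast value, noise level, and boxcar averaging are assumptions for demonstration, not DiscoverMP's actual processing pipeline.

```python
import random
import statistics

def moving_average(trace, navg):
    """Boxcar-average each sample with its navg-1 predecessors."""
    return [
        sum(trace[i - navg + 1 : i + 1]) / navg
        for i in range(navg - 1, len(trace))
    ]

# Simulate a contrast trace: a constant landing-event signal buried in noise.
# All numbers here are arbitrary illustrative values.
random.seed(0)
true_contrast = -0.005
noise_sigma = 0.002
trace = [true_contrast + random.gauss(0, noise_sigma) for _ in range(5000)]

for navg in (5, 10):
    smoothed = moving_average(trace, navg)
    print(f"navg={navg:2d}  residual noise std = {statistics.stdev(smoothed):.5f}")
```

Doubling navg from 5 to 10 lowers the residual noise by roughly a factor of 1.4 in this toy model; in a real instrument the trade-off is bounded by how long each molecule's landing signature persists on the sensor.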

Most users don't know what navg or the threshold parameters actually do, and the software doesn't guide them. Observational research in live laboratory environments is what closes this kind of gap: watching how operators actually interact with a tool surfaces the distance between what the interface assumes users know and what they can reliably apply under real lab conditions. DiscoverMP has been in active deployment for several years. That gap has had time to compound.

In our work with scientific instrument software, the pattern that surfaces consistently is that the software prevents existing expertise from being applied. The DiscoverMP Concentration Calculator is a routine example: it lives in a submenu, not in the preparation step where sample concentration decisions actually happen. Users who know they need it still have to go looking for it. Users who do not know they need it never find it at all. This is not a documentation failure. It is a workflow design failure.

StreamlineMP offers a partial answer. Its guided antibody stability workflow addresses one specific use case and does so competently within that scope. But it is a module, not a redesign. The parameter transparency problem in DiscoverMP persists for every workflow StreamlineMP does not yet cover. At the time of this review, that is most of them.

PhotoMol

PhotoMol was developed by researchers at EMBL Hamburg specifically because Refeyn's software requires a licence that is not always accessible to visiting collaborators or institutions at the lower end of the equipment budget. As documented by Niebling et al. in Frontiers in Molecular Biosciences (2022), PhotoMol's required input is the events_Fitted.h5 file exported from DiscoverMP version <2.5; the version compatibility of this file format should be confirmed against current DiscoverMP releases, as this specification was accurate as of May 2022. The tool runs as a browser application hosted on EMBL Hamburg's eSPC platform.

Screenshot: the PhotoMol user interface.

Two structural problems limit its usefulness. The first is connectivity: PhotoMol is internet-dependent, so core facility environments with restricted external network access, or biosafety cabinet setups without browser connectivity, cannot use it. This is not an edge case in the facilities the tool is designed to serve. The second problem is reliability. As of March 2026, the live interface at EMBL Hamburg's eSPC platform surfaces the messages "This app has crashed and has been stopped" and "has been transferred to another user", indicating server-side instability that users encounter mid-analysis, without session state recovery.

Screenshot: granular data export options in PhotoMol.

An analysis that crashes mid-run loses more than time. In certain experimental contexts, with limited sample availability or time-constrained protocols, the run cannot easily be recovered. A tool that exists because proprietary software is inaccessible should not itself be less reliable than the software it replaces.

PhotoMol's existence is genuinely valuable: it makes mass photometry analysis reachable for researchers at institutions where Refeyn licences are absent or too expensive for occasional use. That value is real, and the EMBL Hamburg team who built it deserve credit for the attempt. The deployment model, however, introduces uncertainty at exactly the moment users need stability.

ImageJ and Fiji

While ImageJ and its Fiji distribution remain the default open-source image analysis environment across the life sciences, the platform carries a steep learning curve, plugin instability, and analysis pathways with subjective steps that limit reproducibility in mass photometry workflows. The cognitive load of navigating a multi-window environment with hundreds of plugins, many with unclear maintenance status and some incompatible with current Java runtime versions, cannot be captured in a feature list. Measuring the interface directly is the only way to get at it.

Screenshots: the ImageJ multi-window interface.

The practical position in most mass photometry labs is this: ImageJ is used because it is universally available and researchers already know it from other imaging contexts. The moment any user needs to go beyond basic histogram inspection, the macro and scripting requirement appears. Version-pinning becomes necessary when a plugin that worked last month breaks after a Java update. Environment conflicts surface without warning. These are known costs the life sciences community has largely absorbed because the alternatives are either proprietary or poorly integrated.

Screenshots: the Fiji user interface during mass photometry analysis.

What that acceptance costs is reproducibility. Two operators using ImageJ for the same analysis may produce different results depending on which plugins are installed, what versions are active, and how they have handled the subjective decision steps. The software does not record those choices automatically. It puts the full responsibility for reproducibility documentation on the operator, which in most multi-user facility contexts means it puts it nowhere.
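One mitigation a facility can apply today is to snapshot the software environment automatically alongside each result. The sketch below is generic Python, not anything ImageJ provides; it records the interpreter, OS, and installed package versions as a provenance manifest, and the output filename is an assumed convention.

```python
import json
import platform
import sys
import tempfile
from datetime import datetime, timezone
from importlib import metadata
from pathlib import Path

def environment_manifest():
    """Snapshot the interpreter, OS, and installed package versions so the
    software state behind an analysis can be reconstructed later."""
    return {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "python": sys.version,
        "os": platform.platform(),
        "packages": {
            dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
            if dist.metadata["Name"]
        },
    }

# Write the manifest next to the analysis output it describes
# (a temp directory stands in for the facility's data store here).
manifest = environment_manifest()
out = Path(tempfile.gettempdir()) / "analysis_environment.json"
out.write_text(json.dumps(manifest, indent=2))
print(f"Recorded {len(manifest['packages'])} package versions to {out}")
```

The same pattern applies to any analysis stack: the point is that provenance capture must be automatic, because in multi-user facilities anything left to the operator is effectively left undone.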

CellProfiler

CellProfiler was built for automated cell image analysis and is genuinely good at it within its designed domain. Its headless execution mode and scriptable pipeline architecture make it powerful for high-throughput cell biology workflows. Adapted to mass photometry, it requires configuration that assumes substantial upfront familiarity. The floating-window model that gives advanced users flexibility becomes visual chaos for operators who have not already mapped its logic. Plugin breakage across updates is a recurring problem, as is the Java runtime requirement for certain modules.

Screenshots: the CellProfiler user interface, where users can open countless windows.


CellProfiler rewards users who have already internalised its chaos. That observation counts in the platform's favour in some contexts; in a core facility that hosts rotating researchers and doctoral students, it is a liability. The effective user base is bounded by those willing to invest considerable time before extracting reliable results.

The broader analytical point is one this review returns to throughout: CellProfiler's design reflects the research context in which it was developed. Single-lab, expert-user, high-throughput cell biology. Mass photometry operates under different constraints: smaller teams, single-molecule sensitivity, and parameter choices that compound across a dataset rather than averaging out across thousands of cells. The tool was not designed for this context and it shows.

BioImageIT

BioImageIT addresses a problem the other four tools either ignore or worsen. Prigent et al. (2022) in Nature Methods document the core structural failure of the open-source bioimaging ecosystem: most tools are developed separately for either data management or data analysis, leaving users to bridge the two with ad hoc scripts or manual operations. BioImageIT's architecture is a direct response to this gap.

Screenshot: the BioImageIT user interface.

The platform provides a GUI in which analysis tools are connected on a visual canvas through drag-and-drop. Each tool runs in its own isolated Conda environment, which prevents the dependency conflicts that make ImageJ and CellProfiler unreliable across updates. This is a meaningful architectural decision rather than an interface cosmetic; it attacks the plugin instability problem at the structural level. As of October 2022, BioImageIT had been deployed across ten imaging platforms within France-BioImaging national infrastructure, establishing a baseline for institutional adoption.
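The per-tool isolation pattern can be sketched generically: route every tool invocation through `conda run` against that tool's dedicated environment, so no two tools ever share a dependency tree. The tool and environment names below are hypothetical illustrations, not BioImageIT's actual registry.

```python
# Hypothetical per-tool environment registry; the names are illustrative,
# not BioImageIT's actual configuration.
TOOL_ENVS = {
    "segmentation": "bioit-segmentation",
    "mass-histogram": "bioit-mass-histogram",
}

def conda_run_cmd(tool, args):
    """Build a `conda run` invocation that executes `args` inside the
    isolated environment registered for `tool`, so each tool's
    dependencies never collide with another's."""
    return ["conda", "run", "-n", TOOL_ENVS[tool], *args]

cmd = conda_run_cmd("mass-histogram",
                    ["python", "analyze.py", "--input", "events.h5"])
print(" ".join(cmd))
# To execute for real: subprocess.run(cmd, check=True)
```

The design cost is environment setup on first use; the return is that a broken update to one tool's dependencies cannot silently change another tool's results.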

The platform's stated purpose is accessibility for scientists without coding skills. Whether BioImageIT's current tool set handles Refeyn-specific output formats natively, or whether mass photometry integration requires custom node development, was not determinable from available documentation. This is the one question in this review we cannot resolve without hands-on integration testing. We return to it in Limits and Gaps.

What BioImageIT demonstrates, regardless of that open question, is that the architectural problems are solvable. Dependency isolation, visual workflow construction, and data management integrated with analysis can coexist in a production tool with real institutional deployment. The question is whether vendors in the mass photometry space treat this as a model or continue building isolated, format-locked platforms.

The patterns across these five reviews are not independent failures. Three cross-cutting failure modes run through the ecosystem.

The Three-Layer Failure Model

The five tools reviewed above fail in different places for different reasons. Across all of them, three patterns emerge consistently. Together, these constitute the Three-Layer Failure Model: a diagnostic structure for identifying where scientific software breaks down not because the underlying science is complex, but because specific design choices have made complexity appear unavoidable.

Layer 1: Epistemic dishonesty

Epistemic dishonesty in scientific software is the gap between what a tool does to data and what the interface tells users about what it is doing. DiscoverMP's navg parameter is the clearest example in this review: a setting that determines resolving power, described in published peer-reviewed research as requiring expertise to configure, presented to users with numeric defaults and no contextual explanation of what those defaults control. The interface surfaces the parameter; it does not hide it. But it provides no basis for users to evaluate whether the default is appropriate for their sample or their measurement goal.

PhotoMol's crash behaviour is a different form of the same failure. An interface that surfaces "This app has crashed and has been stopped" with no session state recovery is not honest about the reliability level it operates at. ImageJ's version and plugin dependencies affect results without being recorded. In every case, the interface withholds information users need to assess whether their results are valid. A trustworthy alternative names its assumptions, exposes its model parameters in context, and records the configuration that produced each output.

Layer 2: Mastery erosion

Every tool reviewed here has a version of the same mastery problem: the interface does not scale well with expertise. DiscoverMP's workspace model is accessible for routine single-session use but does not support the parameter discipline experienced users need for multi-dataset consistency. CellProfiler rewards users who have already mapped its chaos but blocks everyone who has not yet invested the time. ImageJ hands power users meaningful control while requiring them to maintain private knowledge of plugin stability and Java environments. BioImageIT inverts the problem by making access the design priority, which means expert users trade scriptable depth for drag-and-drop convenience.

Mastery erosion compounds in multi-user environments. When a core facility hosts visiting researchers, rotating postdocs, and doctoral students across a year, the software set they encounter should develop their competence over time. Most of the tools reviewed here do not. They either demand upfront investment before producing reliable results, or they automate decisions experienced users need to make themselves.

Layer 3: Friction misalignment

Friction misalignment is the condition where the resistance users encounter in a tool does not serve scientific precision. It serves something else: licensing structure, legacy format decisions, dependency choices, development convenience.

PhotoMol requires internet connectivity in environments where it cannot be assumed. DiscoverMP's Concentration Calculator is buried in a submenu in a workflow where sample concentration is the first variable users set. ImageJ's plugin ecosystem requires manual version management that has no scientific justification. These are not features of a demanding technique. They are the accumulated cost of software built without asking whether the friction it introduces is aligned with what users are actually trying to accomplish.

The contrast with BioImageIT's Conda isolation architecture makes the distinction visible. That decision introduces its own friction (tool configuration, environment setup on first use), but the friction serves a scientific purpose: dependency isolation that supports reproducibility across updates. Some friction is inherent to scientific precision. Most of what the tools reviewed here impose is not.

The table below summarises how each tool scores against the evaluation criteria applied throughout this review.

Tool | Epistemic transparency | Mastery scaling | Friction alignment | Reproducibility
DiscoverMP | Low | Partial | Poor | Low without manual workaround
StreamlineMP | Medium within scope | High within scope | Good for covered workflows | High within scope
PhotoMol | Low | Limited | Poor | Very low
ImageJ/Fiji | Low | Bifurcated | Poor | Low
CellProfiler | Low | Low for newcomers | Poor | Medium if fully scripted
BioImageIT | High | High | Good | High

The pattern in this table raises an uncomfortable question: whether the failures it documents were inevitable, or chosen.

Complexity is not the problem

The standard explanation for why mass photometry software is hard to use is that mass photometry is a technically demanding technique. Complex instruments need complex software. Parameter opacity, workflow fragmentation, and steep learning curves are the natural consequence of operating at single-molecule sensitivity. This argument appears in vendor documentation, in user training materials, and in the unstated assumptions of most of the software teams that built these platforms.

It is not accurate.

The navg parameter is technically straightforward: it is a temporal averaging window. The reason it is unexplained in DiscoverMP comes down to a design decision: the interface was never built around the question of what users need to understand about it to trust their results. PhotoMol crashes because it runs on server infrastructure not built for production load, not because mass photometry analysis is inherently unstable. ImageJ requires version-pinning not because life science image analysis is irreducibly fragile, but because the plugin ecosystem grew without a stability contract.

Each failure mode documented in this review is a vendor or developer decision. The technique is powerful. The software reflects choices about what to expose, what to automate, what to document, and what to maintain. Those choices can be made differently.

The evidence from BioImageIT's architecture supports this directly. Per-tool Conda environment isolation addresses a real reproducibility problem and was a design decision, not a scientific constraint. StreamlineMP's guided antibody stability workflow shows that Refeyn knows how to build transparent, reproducible analysis pipelines. The question this review cannot answer politely is why that architecture was built as a bolt-on module rather than as the foundation of DiscoverMP's core parameter model.

The tools in this ecosystem fail their users not because the science makes failure inevitable, but because complexity has been used as a reason not to solve design problems that have known solutions.

Implications for practitioners

The Three-Layer Failure Model points toward specific actions for three groups: core facilities running mass photometry instruments, vendors developing the software, and research software engineers building on top of existing platforms.

For core facility managers, the most immediate implication is parameter documentation. Until DiscoverMP records navg, Threshold 1, and Threshold 2 in its exported output automatically, that responsibility falls on the facility. A single-page parameter sheet completed at the start of each analysis session, filed alongside the raw data, is the minimum intervention that prevents the worst-case outcome: results in a submitted manuscript or regulatory characterisation dossier that cannot be fully reconstructed because the software never recorded which settings produced them.
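That single-page parameter sheet can be automated in a few lines. The sketch below is a suggested convention, not a Refeyn format: it files the session's settings as a JSON sidecar next to the raw data, and the field names, operator ID, and filename pattern are all assumptions.

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def write_parameter_sheet(data_file, operator, navg, threshold1, threshold2,
                          software_version):
    """File the session's analysis parameters as a JSON sidecar next to
    the raw data, so results can be reconstructed later."""
    sheet = {
        "data_file": str(data_file),
        "operator": operator,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "software_version": software_version,
        "parameters": {
            "navg": navg,
            "threshold_1": threshold1,
            "threshold_2": threshold2,
        },
    }
    sidecar = Path(data_file).with_suffix(".params.json")
    sidecar.write_text(json.dumps(sheet, indent=2))
    return sidecar

# Example session (paths and operator ID are illustrative).
raw = Path(tempfile.gettempdir()) / "sample_042.h5"
raw.touch()
sidecar = write_parameter_sheet(raw, "operator-a", navg=10,
                                threshold1=2.60, threshold2=0.25,
                                software_version="DiscoverMP v2024 R1")
print(sidecar)
```

Run at the start of each analysis session, this produces the minimum audit trail needed to reconstruct which settings produced which histogram.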

For vendors, the argument is direct. Extending transparency to the core parameter model in DiscoverMP is a redesign of the decision-exposure layer in one product that already exists, not a new product. The cost of not doing this is borne by researchers who produce results they cannot fully defend, and ultimately by the technique, which inherits the reputation of the software that delivers it when artefacts are attributed to the method rather than to undisclosed defaults.

For research software engineers, BioImageIT's architecture is the most transferable lesson from this review. Conda environment isolation per tool, visual workflow construction, and data management integrated with analysis rather than separated from it address three of the five evaluation criteria applied here. Whether that architecture can be extended to cover mass photometry's specific output formats is the open question that defines the practical scope of that lesson.

This review has been specific about what the evidence supports. It should be equally specific about where it does not.

Limits and gaps

This review is based on publicly available documentation, peer-reviewed research, and direct inspection of the live PhotoMol interface. It does not include longitudinal observational study of any of these tools in operating laboratory environments. That method, watching how operators actually use the software over time under real facility conditions, is what most reliably surfaces the gap between a tool's designed behaviour and its actual use. The findings in the individual reviews reflect the tools as they are presented, not necessarily as they perform under sustained real-world pressure.

The navg analysis draws on ACS Nano research published in February 2026, tested against DiscoverMP v2024 R1. Future DiscoverMP versions may change the parameter model or default settings. The specific values cited here may not hold forward.

BioImageIT's compatibility with mass photometry workflows specifically is unresolved in this review. The architecture addresses the right problems. Whether the current tool set includes nodes that handle Refeyn's output formats natively, or whether integration requires custom development, was not determinable from available documentation. This question cannot be closed without hands-on integration testing.

CellProfiler's coverage here is thinner than the other tools. The strongest sourced claim about its adoption and usability impact could not be traced to a specific named paper in time for this publication. The analytical observations are grounded in the platform's documented architecture and direct comparative analysis, but the absence of a primary adoption citation is a gap in the evidence base this review acknowledges directly.

Conclusion

Mass photometry software carries a specific design debt. The teams who built these tools made choices that prioritise licensing enforcement, development convenience, and surface-level accessibility over the epistemic obligations scientific software carries.

The Three-Layer Failure Model documents three of those decisions as patterns across the ecosystem: epistemic dishonesty (parameters that control measurement validity are not explained), mastery erosion (tools that do not scale with expertise compound reproducibility risk over time), and friction misalignment (resistance that serves administrative rather than scientific ends). Every tool reviewed here exhibits at least two of these failure modes.

The consequences fall on researchers who cannot reconstruct their own published results, on facilities whose reproducibility depends on tribal knowledge of parameter settings, and on the technique itself, which inherits the reputation of the software that delivers it whenever artefacts are attributed to the method rather than to undisclosed defaults.

BioImageIT shows the architectural problems are solvable. StreamlineMP shows that Refeyn can build guided, reproducible workflows. Neither is sufficient on its own. A mature mass photometry software ecosystem needs honest parameter exposure at the instrument level, dependency isolation that survives software updates, and integration architecture that does not require researchers to build their own bridges between data management and analysis. These are design problems with known solutions.

The hardware that started this review is fast, sensitive, and disruptive. It deserves software that is honest about what it does. If your facility uses mass photometry and the parameters controlling your results are undocumented, start there today.

FAQ

What does the navg setting do in DiscoverMP, and what should I set it to?

navg is the number of consecutive camera frames DiscoverMP averages to calculate the contrast signal from each landing event. The default of navg=5 (as of v2024 R1, February 2026) is a conservative starting point, not an optimised choice. Research in ACS Nano (2026) found that navg=10 with adjusted threshold settings improved resolving power by up to a factor of 2 in heterogeneous samples. The appropriate value depends on your sample composition. The software does not guide that choice.

Is PhotoMol a reliable alternative to DiscoverMP for researchers without a Refeyn licence?

PhotoMol makes mass photometry analysis reachable for researchers without Refeyn licences, and its development at EMBL Hamburg gives it scientific credibility. As of March 2026, the live application surfaces server instability messages including crash notifications without session state recovery. It works for researchers with stable internet access and tolerance for occasional interruption, but it is not a dependable substitute in production analysis workflows.

What file format does PhotoMol require, and is it compatible with current DiscoverMP versions?

PhotoMol requires the events_Fitted.h5 file exported from DiscoverMP. As of May 2022 (Niebling et al., 2022), this was documented specifically for DiscoverMP version <2.5. Current DiscoverMP versions may produce different file structures. Confirm compatibility with EMBL Hamburg's eSPC documentation before relying on it in a production workflow.

Why does DiscoverMP produce inconsistent results across labs and operators?

The primary cause is that the parameters controlling measurement quality, specifically navg and the threshold settings, are not recorded automatically in DiscoverMP's exported output as of current versions. Two operators running identical samples on separate machines may be using different default settings without either knowing. Until Refeyn builds parameter recording into the export pipeline, facilities should log settings manually at every analysis session alongside the raw data.

What makes BioImageIT architecturally different from ImageJ and CellProfiler?

BioImageIT runs each analysis tool in an isolated Conda environment, preventing the library conflicts that cause ImageJ plugin instability and CellProfiler breakage across software updates. Tools are connected on a visual canvas without scripting, and data management is integrated with analysis rather than separated from it. The platform was designed for scientists without coding skills and had been deployed across ten France-BioImaging infrastructure platforms as of October 2022 (Prigent et al., 2022).

Should mass photometry labs wait for vendor software improvements, or act now?

Waiting is not a neutral position. Every month of analysis on undocumented default navg settings is a month of data whose reproducibility cannot be fully guaranteed. The interim solution for most labs is a manual parameter documentation protocol layered on DiscoverMP, combined with StreamlineMP where it covers the relevant workflow. This does not solve the underlying design problem, but it limits the exposure while the ecosystem matures. Facilities with integration engineering capacity should evaluate BioImageIT's architecture in parallel.

References

Niebling, S., Burastero, O., Büren, J., Barthel, F., Schiller, J., Struve Garcia, A., Hagenbach, A., & Henschel, J. (2022). Biophysical screening pipeline for cryo-EM grid preparation of membrane proteins. Frontiers in Molecular Biosciences, 9, 882288. https://doi.org/10.3389/fmolb.2022.882288

Prigent, S., Valades-Cruz, C. A., Leconte, L., Maury, L., Salamero, J., Kervrann, C., et al. (2022). BioImageIT: Open-source framework for integration of image data management with analysis. Nature Methods, 19, 1328–1330. https://doi.org/10.1038/s41592-022-01642-9

Refeyn. (2024, November 26). StreamlineMP platform to speed up key bioanalytical workflows. https://refeyn.com/post/streamlinemp-platform-to-speed-up-key-bioanalytical-workflows/

[Author names available at doi]. (2026). Deep learning-based event classification of mass photometry data for optimal mass measurement at the single-molecule level. ACS Nano. https://doi.org/10.1021/acsnano.5c13074 (Full author list at PMC accession PMC12875028)

BioImageIT Development Team. (n.d.). BioImageIT: A FAIR data management and image analysis framework. GitHub. https://github.com/bioimageit/bioimageit

22 min read