A stricter keratitis benchmark,
built for multi-center growth.
K-ERA starts from a leakage-aware single-center study: 101 patients, 258 visits, and 658 white-light slit-lamp images.
The current result is modest, but the broader literature suggests CNN-family models improve as datasets scale. The platform is built to test that hypothesis across hospitals.
Bacterial vs fungal differentiation
Early diagnosis is difficult.
AI has shown promise.
External validation is still the bottleneck.
Infectious keratitis remains one of the leading causes of preventable corneal blindness worldwide. Bacterial and fungal keratitis present with overlapping clinical features, making early differential diagnosis challenging even for experienced clinicians.
The founding K-ERA benchmark was intentionally strict: white-light images only, patient-disjoint 5-fold splitting, visit-level prediction, and leakage-aware controls. Under that setup, performance remained modest, which is exactly why larger and more heterogeneous cohorts matter.
That is the gap K-ERA is trying to close: turn routine clinical cases into a shared validation and training pathway, while keeping raw images and identifiers local.
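To make the benchmark's strictness concrete, here is a minimal sketch of patient-disjoint 5-fold splitting with visit-level prediction, using scikit-learn's GroupKFold over synthetic IDs. Everything in it is an illustrative assumption: the index layout, the variable names, and the mean-probability pooling rule are stand-ins, not the K-ERA code.

```python
# A minimal sketch of the benchmark's splitting discipline. All names
# (patient_ids, visit_ids) and the mean-probability aggregation rule are
# illustrative assumptions, not the published K-ERA pipeline.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_images = 658                                     # 658 white-light images
patient_ids = rng.integers(0, 101, size=n_images)  # 101 patients
visit_ids = patient_ids * 3 + rng.integers(0, 3, size=n_images)  # visits nest in patients
labels = rng.integers(0, 2, size=n_images)         # 0 = bacterial, 1 = fungal

gkf = GroupKFold(n_splits=5)
X = np.zeros((n_images, 1))                        # placeholder features
for fold, (train_idx, test_idx) in enumerate(gkf.split(X, labels, groups=patient_ids)):
    # Patient-disjoint guarantee: no patient contributes images to both
    # sides, which is what blocks identity leakage across folds.
    assert not set(patient_ids[train_idx]) & set(patient_ids[test_idx])

    # ... train an image-level CNN on train_idx here ...
    image_probs = rng.random(len(test_idx))        # stand-in for CNN outputs

    # Visit-level prediction: pool image probabilities within each visit.
    by_visit = {}
    for v, p in zip(visit_ids[test_idx], image_probs):
        by_visit.setdefault(v, []).append(p)
    visit_pred = {v: float(np.mean(ps)) for v, ps in by_visit.items()}
    print(f"fold {fold}: {len(test_idx)} test images -> {len(visit_pred)} test visits")
```

Grouping folds by patient rather than by image is what keeps repeat visits of the same eye out of the test set, the leakage mode this benchmark was built to control.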
Instead of over-claiming one hospital's model,
build the validation network first.
K-ERA turns routine care into a reviewable research workflow. A single-center benchmark establishes the starting point; federated site rounds create the path toward broader validation, larger cohorts, and stronger CNN performance at scale.
Designed for clinicians.
Structured enough for real research.
A research dataset grows through routine care, and this page draws a clear line between what is already benchmarked, what the app already supports, and what the federated extension is still validating.
Three rails.
One honest boundary between evidence and roadmap.
Today: a founding-site benchmark.
Next: a multi-center validation network.
The current public evidence starts at Jeju National University Hospital. The purpose of the network is to make the next step explicit: more sites, more heterogeneity, and cleaner external validation.
Broader ophthalmic AI literature suggests CNN-based models usually improve as datasets expand. K-ERA is designed to test that expectation prospectively, rather than assume it from one internal study.
Hospital B → local training → Δ weights → Central review
Hospital C → local training → Δ weights → Central review
FedAvg aggregation → redistributed to approved nodes
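For readers unfamiliar with the aggregation arrow, the sketch below shows one bare FedAvg round: a sample-weighted mean of per-site weight deltas applied to the global model, in the spirit of McMahan et al. Site names, layer shapes, and sample counts are made-up placeholders, and the central-review gate in the diagram is reduced to a comment.

```python
# Bare FedAvg round over Δ weights: a sketch, not the K-ERA service.
# Shapes, site names, and sample counts are illustrative placeholders.
import numpy as np

def fedavg_round(global_w, site_deltas, site_n):
    """Apply the sample-weighted mean of per-site Δ weights to the global model."""
    total = sum(site_n)
    updated = []
    for layer, g in enumerate(global_w):
        # Weighted mean of this layer's deltas across sites.
        delta = sum(n * d[layer] for n, d in zip(site_n, site_deltas)) / total
        updated.append(g + delta)
    return updated

# One toy round with a single 2x2 layer.
global_w = [np.zeros((2, 2))]
delta_hospital_b = [np.full((2, 2), 0.10)]   # Hospital B's local Δ
delta_hospital_c = [np.full((2, 2), -0.02)]  # Hospital C's local Δ

# (Central review would inspect the deltas here before aggregation.)
new_w = fedavg_round(global_w,
                     [delta_hospital_b, delta_hospital_c],
                     site_n=[258, 120])
print(new_w[0])  # pulled toward Hospital B, the larger contributor
```

Weighting by local sample count lets small sites participate without dominating the update; after aggregation, the new weights go only to approved nodes, matching the last line of the flow above.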
Contact us to join →
Build the infrastructure first,
then earn the larger model.
The founding white-light benchmark is still modest. That is precisely the point. K-ERA is not presenting a finished answer; it is building the workflow, review logic, and site infrastructure required to produce stronger external evidence.
If the broader CNN scaling pattern holds in this disease area as well, larger multi-center cohorts should improve robustness. The platform is designed so that claim can be tested transparently, under real governance constraints.
Not every benchmark begins
with a large grant.
Sometimes it begins with
one patient.
one visit.
one image.