Surgical education is often described as a modern discipline, yet many of its core mechanics still resemble an older logic: apprenticeship, subjective supervision, and informal, non-personalized debriefing. Multiple surgical education reviews argue that training remains rooted in models developed over a century ago, with assessment still heavily dependent on expert impressions rather than reproducible performance evidence.
Subjectivity surrounds surgical activity, through training, day-to-day clinical work, benchmarking, and even patients’ perceptions. When there is no reliable method for objective assessment or predictive evaluation of skill sets, decisions can drift toward hierarchy, politics, or personal dynamics rather than measurable performance. Without quantifiable datasets and objective metrics, we also lose the ability to match an individual surgeon’s or team’s strengths to a specific patient and portfolio scenario. As an analogy, consider aviation, elite sport, or complex engineering: no pilot, athlete, or team is ‘best’ in every situation. Performance is situational, and strengths vary by scenario.
Zooming out, however, history shows a clear trend: each major evolution in surgical training increased structure, but only gradually increased objectivity and personalisation. A brief historical arc illustrates why we are now approaching an inflection point.
A compressed history of training and assessment
- Ancient civilisations: Surgery was embedded in holistic medicine that prized diagnostics; assessment focused on whether the healer achieved an endpoint rather than on how technique performed. (Objectivity: minimal; technique mattered less and was rarely quantified.)
- Medieval era: After separating from medicine, surgery became a guild-taught craft learned by apprenticeship, driven by practice and necessity. Barber-surgeons performed procedures, acquiring anatomy through repetition, in a business-driven ‘manufacturer’ model rather than an academic pathway. (Objectivity: informal; limited comparison.)
- Classical apprenticeship (driven by ‘see one, do one, teach one’): Learning by observation and graduated responsibility dominated for generations, efficient for exposure but weak for reproducible measurement. (Objectivity: supervision-based; little standardised evidence; minimal individualisation.)
- Halstedian residency: Formalised training structure and volume expectations advanced education, but assessment largely remained faculty impression plus case counts. (Objectivity: still low; structure improved, but measurement did not.)
- Mayo model: A non-pyramidal residency ethos and integrated academic-clinical pathway strengthened merit-based progression and institutionalised mentorship. (Objectivity: stronger merit logic; assessment still rested on centralised judgement.)
- Simulation era (inspired by aviation safety): Box trainers and VR introduced repeatability and the first scalable performance comparisons, even if systems were costly and often procedure-segmented. (Objectivity: with repeatability, measurable tasks appear; personalisation still limited.)
- Scales and structured tools (OSATS/GRS, NTS tools): Rating scales improved standardisation and construct validity, but remained observational and therefore instructor-dependent. (Objectivity: the need for measurables was recognised and available objectivity implemented, but the result remained ‘standardised subjectivity’.)
- Structured debriefing: Debriefing strengthened personalised learning by translating events into corrective actions, yet often without dense, quantitative evidence linking technique to predicted outcomes. (Objectivity: insight increases; objective measurement still sparse.)
- Competency-based training: Competency frameworks formalised progression gates, but many rely on checklists, rating tools, and fragmented data streams rather than integrated, high-resolution performance evidence. (Objectivity: higher accountability, but limited data integration.)
Full individualisation and platform-level integration of data are the next step: at the level of surgical skill, surgical personality, team interplay, and patient-specific anatomy and risk profile.
We might now be at an inflection point. Two practical realities are colliding:
- Data capture has become trivial, and the resulting data streams are interconnectable.
- Technology can now produce functional and predictive models.
This is why the next evolution cannot be ‘another scale’; it must be transformational. The direction is toward integrated platforms that connect intraoperative recording, performance features (visual, kinematic, haptic where available), end-result morphology, functional modelling, and structured debriefing, and apply diverse models together: algorithms, finite element analyses, pathophysiological predictions, and AI tools. In such systems, feedback can become partially automated and benchmarking continuous and individual, so training can become case- and team-specific rather than generic. In minimally invasive and robotic surgery, an interconnected platform can integrate these signals, opening wide opportunities for data analytics and AI: intra-procedural monitoring, risk management, and progressive automation. The next level is preoperative case planning matched to objectively profiled surgeon and team skillsets, carried into the procedure through continuous monitoring that highlights risk-intensive workflow segments and supports alignment between case complexity and demonstrated performance capability.
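To make continuous, individual benchmarking concrete, the sketch below shows one minimal way per-case performance features from different signal streams could be standardised and aggregated into a per-surgeon profile. All feature names, values, and the averaging scheme are hypothetical illustrations, not a description of any existing platform or validated metric set.

```python
from statistics import mean, pstdev

# Hypothetical per-case feature records drawn from different streams
# (kinematic path length, visual idle-time ratio, end-result leak index).
cases = [
    {"surgeon": "A", "path_length_mm": 410, "idle_ratio": 0.18, "leak_index": 0.02},
    {"surgeon": "A", "path_length_mm": 395, "idle_ratio": 0.15, "leak_index": 0.01},
    {"surgeon": "B", "path_length_mm": 520, "idle_ratio": 0.27, "leak_index": 0.05},
    {"surgeon": "B", "path_length_mm": 505, "idle_ratio": 0.25, "leak_index": 0.04},
]

def zscores(values):
    """Standardise one feature across the cohort (population std dev)."""
    mu, sd = mean(values), pstdev(values)
    return [(v - mu) / sd if sd else 0.0 for v in values]

def profile(cases, features):
    """Average per-case z-scores per surgeon; here, lower means
    closer to cohort-best, since all features are 'lower is better'."""
    cols = {f: zscores([c[f] for c in cases]) for f in features}
    per_surgeon = {}
    for i, case in enumerate(cases):
        score = mean(cols[f][i] for f in features)
        per_surgeon.setdefault(case["surgeon"], []).append(score)
    return {s: mean(scores) for s, scores in per_surgeon.items()}

bench = profile(cases, ["path_length_mm", "idle_ratio", "leak_index"])
```

In a real platform each feature would carry validated weights and risk adjustment; the point of the sketch is only that once signals are captured in a common structure, cohort benchmarking reduces to straightforward, repeatable computation.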
Within this landscape, YourAnastomosis represents one example of a results-focused approach: replicating end results from lifelike simulated procedures into detailed morphological outputs and calculated functional indicators. Although there is evidence that it optimises learning curves, it is intended not as a standalone endpoint but as a connector to additional data streams: tracking, real-time visual analysis, and haptic data sources.
With that framing, the historical pessimism about ‘slow change’, driven largely by systemic, personal, and institutional centralism, becomes qualified hope: the missing components (data, compute, integration, and validation culture) are increasingly available; the responsibility now is to connect them into systems that improve training and patient safety without waiting another generation.