Questioning What We Seek, Evaluating Risk, and Accepting Proxies Amidst Complexity
- Alexander Morgan

- Jul 17
- 6 min read

"The curious paradox is that when I accept myself just as I am, then I can change"
- Carl Rogers, PhD
Your Introduction to an Ongoing Internal Debate
In the field of human performance, encompassing strength and conditioning and sport science, practitioners often carry the responsibility of navigating the intricate relationship between data, performance, and human individuality. Whether working with athletes or tactical professionals, our tools, assessments, and interpretations often shape the very nature of what we value and how we train or practice. Yet, with the rapid evolution of technology and performance models, it’s crucial to pause and ask: what are we really seeking? Are we chasing numbers for their own sake? Are we unintentionally flattening the uniqueness of high performers into oversimplified outputs and descriptors? And how do we blend complexity with actionable insights?
This post explores three key themes: questioning the adaptations and behaviors we target, evaluating risk while collecting performance data, and accepting proxy measurements when confronting the complexities of sport performance and tactical readiness. Together, these reflections aim to prompt a more grounded, honest, and curious approach to human performance practice.
Should we question what we seek?
In the pursuit of performance, it’s easy to become enamored with specific markers—bigger jumps, faster sprints, a higher VO₂ max. These outputs are tangible, measurable, and comparable. But sometimes, our reliance on data and protocols can cause us to overlook what makes individuals unique. We risk seeking adaptation for adaptation’s sake—maximizing traits that are easily tested rather than optimizing those that are contextually valuable.
As human performance practitioners, do we recognize that performance is rarely linear or formulaic? Two athletes may hit near-identical metrics in a countermovement jump (CMJ) but perform very differently on the field. Likewise, two tactical professionals might have similar isometric mid-thigh pull (IMTP) values but differ greatly in decision-making under cumulative stress or situational awareness. This distinction matters.
The danger lies in becoming overly reductionist. When we push for standardized behaviors or adaptations without appreciating the individual, we can unknowingly strip away the very elements that contribute to high-level performance—intuition, creativity, resilience, and even unquantifiable expressions of skill. For example, an athlete’s movement strategy might seem inefficient on force plates, but could be integral to their sport-specific success. Do we “correct” it, or do we learn from it?
There are countless examples in elite-level sport: NHL players who set records yet go against the grain of lifestyle recommendations, UFC fighters who look like linemen yet move like gymnasts, MLB players with unorthodox deliveries, and MLS players with abnormal swing-phase patterns. Similarly, in tactical populations, the ability to tolerate and make sense of chaos is often more critical than hitting a certain wattage on a bike test. If we overemphasize physiological data and underemphasize the human element, do we risk engineering athletes and tactical professionals who are practically underprepared? Or worse, do we risk changing what got these individuals to where they are?
The job, then, is seemingly not simply to engineer adaptation, but to understand context, respect individuality, and question whether the traits we are targeting are truly aligned with long-term performance and readiness. True high performance isn’t about producing clones; it’s about maximizing potential in complex, uncertain environments.
Do we need to evaluate risk? Can reflecting on data be useful?
A persistent dialogue in human performance surrounds the balance between data collection and data application. The criticism of collecting data for data’s sake is not unfounded; we’ve all seen dashboards overflow with information that never gets used. However, the pendulum shouldn’t swing too far the other way. Collecting data without burden can often be a strategic advantage—especially if done passively, systematically, and with long-term curiosity in mind.
Take, for example, jump profiling. A practitioner may focus on RSI-mod (modified Reactive Strength Index) and concentric impulse during CMJs as their primary decision-making variables. But that doesn’t mean eccentric braking force, time to takeoff, or center-of-mass displacement are irrelevant. These secondary metrics, passively collected and stored, can become invaluable when trends shift or when new questions arise. The same examples could be built out for GPS/GNSS, LPS, and HR monitoring, as well as additional measurement tools.
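As a loose illustration of this “act on a few, store them all” pattern, here is a minimal Python sketch. The metric names, example values, and file path are hypothetical placeholders, not any force-plate vendor’s actual export format:

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical CMJ metrics per trial; the names are illustrative only.
PRIMARY = ("rsi_mod", "concentric_impulse")            # drive today's decisions
SECONDARY = ("ecc_braking_force", "time_to_takeoff",   # passively stored for
             "com_displacement")                       # retrospective questions

LOG = Path("cmj_log.csv")  # assumed long-term storage location

def log_trial(athlete_id: str, metrics: dict) -> dict:
    """Append every metric to long-term storage; return only the primary ones."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(("date", "athlete_id", *PRIMARY, *SECONDARY))
        writer.writerow((date.today().isoformat(), athlete_id,
                         *(metrics.get(k) for k in PRIMARY + SECONDARY)))
    # Only the primary metrics surface in the daily report.
    return {k: metrics[k] for k in PRIMARY if k in metrics}

# Every metric is logged, but only RSI-mod and concentric impulse come back.
print(log_trial("athlete_07", {
    "rsi_mod": 0.52, "concentric_impulse": 245.0,
    "ecc_braking_force": 1830.0, "time_to_takeoff": 0.78,
    "com_displacement": 0.41,
}))
```

The design choice is the point: the decision-making surface stays small, while the stored record stays wide enough to answer questions you haven’t asked yet.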
Evaluating risk is a necessary stepping stone when making decisions: not only physiological or performance risk, but also the risk of missing something important. Evaluating the potential negative impact of your support is crucial for ensuring there are no unintended consequences. This evaluation involves a systematic approach that considers various aspects, including effectiveness, efficacy, stakeholder interests, resource allocation, and broader impacts. The primary goal of a risk assessment is to provide a comprehensive understanding of the potential risks, enabling us to implement strategies to mitigate or manage them effectively.
Critically, do you ever determine the cost of data collection, both in terms of time and cumulative load on the athlete or tactical professional? If collection is burdensome, then every assessment must justify its inclusion. But in low-friction systems, background data allows for reflection, retrospective analysis, and hypothesis generation.
Perhaps you notice that an athlete’s concentric impulse trends mirror their 10-meter fly times. This may inform a deeper investigation into rate of force development, sprint-specific training, and the influence on “Repeat High Intensity Efforts”. Maybe you begin to see correlations between relative peak force in the IMTP and a tactical unit’s performance in load carriage tasks alongside a subjective reduction in non-specific low back pain reporting. These insights don’t emerge if we only look at the metric or assessment we believe to be the most important today.
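Connecting those dots retrospectively can start as simply as correlating two stored series. Here is a minimal sketch, assuming paired testing-day observations already exist for one athlete; all values are hypothetical, and statistics.correlation requires Python 3.10+:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical paired observations across six testing days:
# CMJ concentric impulse (N·s) and 10 m fly time (s).
concentric_impulse = [238.0, 242.5, 246.1, 251.3, 255.8, 259.2]
fly_10m_time = [1.09, 1.08, 1.07, 1.05, 1.04, 1.03]

# Pearson r; a strong negative value motivates a deeper look at rate of
# force development and sprint-specific training, not a claim that one
# metric "predicts" the other.
r = correlation(concentric_impulse, fly_10m_time)
print(f"Pearson r = {r:.2f}")
```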
This “data without urgency” approach supports a learning mindset. It’s not about overwhelming ourselves with every metric on every test. It’s about maintaining optionality—being able to go back, investigate, and connect the dots when it matters. Because ultimately, risk evaluation is about both action and inaction: knowing when to intervene and knowing when to stand by.
Why is accepting proxy measurements amidst complexity so difficult?
Sport is not science. Tactical performance is not a lab protocol. No matter how advanced our models or tools become, we must acknowledge that what we do as human performance practitioners is inherently a step removed from the chaotic reality of many sports and tactical environments. And that’s okay—as long as we recognize and respect that distance.
Strength and conditioning is not the sport. Force plates don’t measure firefights. Sleep tracking doesn’t reveal psychological resilience on the court. These tools offer compartmentalized proxies—valuable glimpses into physiological capacity, neuromuscular readiness, or recovery status. But they are not always the full picture.
Accepting proxies means being honest about the limitations of our tools while still valuing their contribution. A vertical jump is not an agility drill, but it might offer clues about neuromuscular fatigue or motor unit recruitment. HRV is not a performance score, but it may reflect trends embedded within individual stress management patterns. A barbell doesn’t prepare an athlete for chaos, but it might provide the foundational strength that supports adaptive movement.
Importantly, if we evaluate risk and remain evidence-guided—not evidence-obsessed—we can use these proxies with purpose. This means resisting the urge to claim that a single test “predicts” performance. It means focusing on patterns, relationships, and contexts—not isolated values. The old adage comes to mind: ice cream sales and shark attacks rise and fall together, yet neither causes the other.
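To make that adage concrete, a short simulation (entirely fabricated numbers, for illustration only) shows how a hidden confounder such as temperature can manufacture a convincing correlation between two causally unrelated series:

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(13)

# A hidden confounder (daily temperature) drives both series;
# neither one causes the other.
temperature = [random.uniform(10, 35) for _ in range(365)]
ice_cream_sales = [20 * t + random.gauss(0, 60) for t in temperature]
shark_attacks = [0.05 * t + random.gauss(0, 0.2) for t in temperature]

# The two series correlate strongly with each other anyway.
r = correlation(ice_cream_sales, shark_attacks)
print(f"Pearson r (sales vs. attacks) = {r:.2f}")
```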
It’s easy to become disillusioned by complexity. But if we accept that we are only seeing part of the picture and align our proxies with real-world outcomes, contributing meaningfully to performance, injury reduction, and long-term readiness is attainable.
I could be wrong; fishing with a wide net can yield more fish. The counter, however, is that a meticulous surgeon knows the exact tool needed to make the incision. Do complexity, chaos, and small margins call for precision?
Stay Curious, but Keep Your Job
Daily we manage the tension between measurable inputs and immeasurable outcomes. We design systems, track variables, and seek improvements. But we must also question what we seek, evaluate risk in our methods, and accept that much of what matters is complex and only partially visible.
In questioning what we seek, we safeguard the individuality of those we support. We avoid reductionism and remain attuned to context and uniqueness. In evaluating risk, we learn to balance action with observation—embracing data collection not just for immediate decisions but for long-term insight. And in accepting proxies, we stay grounded—acknowledging the limits of our tools while still leveraging their utility to make better-informed choices.
What transcends all of this is curiosity. Curiosity to explore what matters. Curiosity to reconsider our assumptions. Curiosity to adapt and evolve alongside the environments and people we support. Human performance is not about controlling every variable. It’s about understanding which variables matter, to whom, and why—then crafting an approach. This shouldn’t paralyze us, but rather focus our efforts.
-
Disclaimer
Area 13 Training Systems, more specifically The Learning Ground, provides content for informational and educational purposes only, which may contain copyrighted material. Although credit is always attempted to be given, such content may not be specifically authorized by the copyright owner. A13 and TLG believe this constitutes fair use, as no copyright infringement is intended. A13 and TLG encourage the exchange of said content to provide those interested with accessible research and educational information, so long as credit is appropriately given. Furthermore, A13 and TLG assume no responsibility for any statements made or materials used by guest authors/presenters, which may not always represent our opinion. We also do not endorse any products or services that may be mentioned.

