The Veritas Method®

Chapter Eleven

Measuring Partnership and Impact

How to know it is working and share the evidence credibly.

Measure capability, not activity

Measurement is an act of attention. What you choose to count determines what you see, and what you see determines what you improve. When organisations and individuals begin evaluating their AI partnerships, they reach almost instinctively for the wrong instruments. They count prompts. They count hours saved. They count tasks automated and features used. These numbers are easy to generate, satisfying to report, and almost entirely useless as indicators of partnership quality. They measure activity.

The Veritas Method® requires something more demanding: a measurement discipline that assesses the quality of the engagement itself, not merely its volume. A practitioner can interact with AI a hundred times a day and learn almost nothing. They can automate a hundred tasks and think no more rigorously than they did before. Activity metrics track efficiency. What The Veritas Method® is designed to develop is capability. These are not the same thing, and measuring one while hoping to improve the other is a category error.

The right question is not how much am I using AI. The right question is how well am I thinking with AI.

Three dimensions of partnership quality

Partnership quality has three distinct dimensions, each of which can be honestly assessed by a practitioner willing to look clearly at their own practice.

Decision quality is the first and most consequential. Are the decisions made in genuine partnership with AI better than the decisions made before? Not faster; better. More thoroughly considered. More rigorously stress-tested. Drawing on a wider range of relevant perspectives. A practitioner genuinely applying The Veritas Method® should be able to point to specific decisions that were qualitatively different because of the partnership; not just faster, but more robustly reasoned.

Thinking quality is the second dimension, and in some ways the more revealing one. Is the practitioner’s own reasoning becoming more rigorous over time? Not the outputs of AI-assisted thinking, but the practitioner’s capacity to identify assumptions, catch confirmation bias early, construct strong counterarguments, and ask better questions. This is what genuine partnership develops. AI becomes not just a tool for producing answers but a discipline for improving the quality of the questions.

Capability growth is the third dimension, and the one with the longest time horizon. Am I genuinely more capable, at the end of six months of serious practice, than I was at the start? Not more productive in the narrow sense of completing more tasks; more capable in the broader sense of being able to think, decide, and do things that were previously beyond my reach. This is the question that matters most, and the one most rarely asked.

The Partnership Quality Self-Assessment

The detailed scoring section now lives in Appendix A, where the assessment is easier to complete and revisit over time.

To determine your current level, take the five-question multiple-choice questionnaire in Appendix A: Partnership Quality Self-Assessment. It is designed to give you a clear snapshot of where your practice sits today, and what to strengthen next.

A note on the current evidence base. The outcomes reported in this book and across the Athena Veritas programme are real, documented, and shared with participant consent. They are not controlled trials, and they do not include failure cases. The method works most reliably for motivated individuals with consistent access to capable AI tools and the time and willingness to apply the system with discipline. It works less well for people in acute crisis, those with severely limited access to technology, or those unwilling to engage with genuine intent. As the programme grows, so will the body of evidence. What exists now is sufficient to demonstrate that the method works. It is not yet sufficient to define precisely the full range of conditions under which it works best. That is honest, and it matters.

Capability compounds

The ultimate indicator of whether The Veritas Method® is genuinely being applied is not a score. It is the honest answer to one question, asked six months in and then again a year in: what am I capable of now that I was not capable of before?

That question looks at the accumulated effect of consistent practice, not the output of any single session. Output metrics, such as revenue generated, time saved, and decisions documented, are real and valid. But they are lagging indicators. They tell you what the partnership has already produced. To improve the partnership itself, you need to attend to the leading indicators: the quality of the engagement as it happens, session by session, over the long arc of a genuine practice.

The case studies in the chapters that follow illustrate what consistent application of The Veritas Method® produces in practice. Not projections. Documented outcomes, gathered with participant consent before public launch.

