Validating with potential consumers is one of the key steps when developing a technology or a product, to ensure market fit. In a fast-paced environment such as Virtual Reality and 3D technologies, validation becomes a challenge because, in some cases, the technology does not yet exist. This was our case in the XReco project.
The Need For Validation
The “Lean Startup” methodology by Eric Ries suggests testing unfinished versions of your product before launching. In his book, he refers to them as MVPs, that is, Minimum Viable Products. The premise is that they should be as simple and as cheap as possible. By having potential consumers play with an unfinished version, innovators can learn from the consumers’ experience and validate every aspect of the product: for instance, whether the value proposition is actually helpful, whether it has been accurately translated into the product design, or whether users express needs that the product does not yet cover. By skipping user validation, you risk spending resources on a product that may not fit the market. Even if innovators are convinced that a certain technology will be welcomed by the market, every aspect should be treated as a hypothesis to be tested.

Figure 1: The Lean Startup book by Eric Ries.
The Starting Point Of The Validation
We at XReco adopted this approach. When writing the proposal for the EU, there was already a clear value proposition coming from the experienced partners participating in the project: a platform to democratise access to the creation of extended reality experiences and 3D development. The EU granting us funding could be seen as a first validation that there is, in fact, interest in making the proposal a reality.
Teams got to work on the first MVP. In parallel, the business and go-to-market team put together several workshops called “Joint Business Clinics”, with participants from different industries. Their aim was to understand the participants’ current needs, pains, gains, and overall view of the state of extended reality within their sector. After pitching our platform to them, we had the chance to hear their perception of the value proposition and the different ways in which professionals would use it. If you want to participate in the next ones, follow this link.

Figure 2: Prototyping before arriving at the final version.
Challenges In Designing A Validation Process
We gathered valuable insights during the Business Clinics, which served as input for the design of the different parts of the platform and for the overall design of our first MVPs ahead of the hands-on validation. The goal of the validation was to measure and maximise two KPIs: improvement in user satisfaction and improvement in the professional user’s workflow.

Figure 3: Sample image for validation analyses.
To find out whether our tool marked an improvement over the state-of-the-art tools available prior to the project, we would have to find a benchmark. This benchmark should cover all the different technologies and value propositions that our platform offers. This was a challenge because part of our platform’s innovative value is that users get a single portal to the services they already use separately, enhancing their workflow. A technology like this does not exist. This is a frequent challenge in innovation: it is difficult to find a one-size-fits-all benchmark when your value proposition is innovative and you have no similar competitor.

Another aspect we had to address in designing the validation process was the five different user types the platform targets. For example, some users are interested in creation, while others simply want to download content. This user type differentiation is an important characteristic of the validation because the skills of each user type vary significantly. Not all users are able to use the same tools, depending on their background and expertise. The validation process therefore needed to take these aspects into account to be rigorous and useful.
The Final Validation Procedure: A/B Testing
Considering this, we decided to split the testing across the three value propositions that our platform offers:
- orchestrator asset search and license management
- 3D reconstruction
- accessible authoring tools
As a benchmark, we would use the versions of the technologies that partners had already developed before the project, which offered similar functionalities. The final versions of these tools are what will be integrated into the platform, with enhanced performance and capabilities compared to the versions from the beginning of the project.
For the testing methodology we settled on A/B testing. This way we would compare tool A against tool B. Tool A would be the version of the tool prior to the beginning of the project, the so-called benchmark. Tool B would be the current version of the tool. Since we didn’t only want to measure a quantitative improvement, but also, following the Lean Startup methodology, find out what works and what doesn’t, we would run two B tests, called “B1” and “B2”. The B1 tests would happen at the midpoint of the project, 18 months in. The B2 tests would happen by the end of the project. This way, the B1 insights would serve to improve the platform and attain better B2 results.
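To make the comparison concrete, below is a minimal sketch of how ratings from the A, B1, and B2 rounds could be compared. The scores and the improvement helper are hypothetical illustrations, not the project’s actual analysis code.

```python
# Minimal sketch: comparing mean scores across the A, B1 and B2 test rounds.
# All data below is hypothetical; it only illustrates the A/B/B methodology.
from statistics import mean, stdev

def improvement(baseline, variant):
    """Relative improvement (%) of the variant's mean over the baseline's."""
    return (mean(variant) - mean(baseline)) / mean(baseline) * 100

# Hypothetical 1-5 satisfaction ratings for the same questionnaire item.
scores_a  = [3, 2, 4, 3, 3, 2]   # tool A: benchmark, pre-project version
scores_b1 = [4, 3, 4, 4, 3, 4]   # tool B1: mid-project version (month 18)
scores_b2 = [5, 4, 4, 5, 4, 5]   # tool B2: final version

print(f"A  mean: {mean(scores_a):.2f} (sd {stdev(scores_a):.2f})")
print(f"B1 mean: {mean(scores_b1):.2f}, {improvement(scores_a, scores_b1):+.1f}% vs A")
print(f"B2 mean: {mean(scores_b2):.2f}, {improvement(scores_a, scores_b2):+.1f}% vs A")
```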
Part of the challenge of using the A/B testing methodology was ensuring its rigour throughout the tests. In theory, the questions asked of the prospective users should be the same for both the A and B tests. In the case of the XReco platform, however, improvements between versions include new features. Some features were planned from the beginning. Others came from the learnings of the Joint Business Clinics. And others resulted from the developments of the fast-paced industry that we are a part of. This is why the tests had to be slightly adapted to account for these new features and context, while preserving the rigour of the process.

Figure 4: A/B testing compares two options.
Further challenges in addressing our market while maintaining the rigour of the tests required additional adjustments, in this case to account for differences in user expertise. The benchmark for some tests consisted of using a tool that is widespread in certain industries. If a user was not part of such an industry, however, or did not hold a technical role within it, they could not perform the test. We therefore provided an option for these user profiles to flag themselves, so they could skip the sections of the test they were not skilled enough to complete. Measuring the improvement for users who were already capable of using advanced tools is as important to us as accounting for users with limited expertise but significant interest in our value proposition.
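As an illustration of how such skipped sections could be handled when aggregating results, here is a hypothetical sketch; the response structure, field names, and data are invented for the example and are not the project’s actual tooling.

```python
# Hypothetical sketch: averaging per-section scores while honouring the
# sections a respondent flagged as skipped. All names and data are invented.
from statistics import mean

responses = [
    {"user": "u1", "profile": "creator",    "skipped": set(),
     "scores": {"search": 4, "reconstruction": 5, "authoring": 4}},
    {"user": "u2", "profile": "downloader", "skipped": {"reconstruction"},
     "scores": {"search": 3, "authoring": 4}},
]

def section_mean(section):
    """Average a section's scores over the users who did not skip it."""
    values = [r["scores"][section] for r in responses if section not in r["skipped"]]
    return mean(values) if values else None

for section in ("search", "reconstruction", "authoring"):
    print(f"{section}: {section_mean(section)}")
```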
Additionally, the A/B testing questions were complemented by a standard SUS questionnaire as well as by qualitative questions, to achieve a well-rounded evaluation of user perception.
The SUS (System Usability Scale) questionnaire items (a scoring sketch follows the list):
- I think that I would like to use this system frequently.
- I found the system unnecessarily complex.
- I thought the system was easy to use.
- I think that I would need the support of a technical person to be able to use this system.
- I found the various functions in this system were well integrated.
- I thought there was too much inconsistency in this system.
- I would imagine that most people would learn to use this system very quickly.
- I found the system very cumbersome to use.
- I felt very confident using the system.
- I needed to learn a lot of things before I could get going with this system.
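SUS responses are given on a 1-5 Likert scale and combined into a single 0-100 score using the standard SUS scoring rule. The sketch below implements that rule; the sample responses are hypothetical.

```python
# Standard SUS scoring (0-100). Only the sample responses are hypothetical.

def sus_score(responses):
    """Compute the SUS score from ten 1-5 Likert responses, in item order.

    Odd-numbered (positively worded) items contribute (rating - 1);
    even-numbered (negatively worded) items contribute (5 - rating).
    The summed contributions are scaled by 2.5 to the 0-100 range.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Hypothetical participant with mostly positive answers.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # -> 82.5
```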
Conclusion
The validation of the XReco platform focused on two KPIs: improving user satisfaction and enhancing professional workflows. Defining a benchmark proved difficult, since the platform combines services in one portal without a direct equivalent. The diversity of five user types with different skills also required an adaptable yet rigorous process.
Validation was structured through A/B testing across three core value areas: asset search and license management, 3D reconstruction, and authoring tools. Earlier partner technologies served as benchmarks, while mid-project (B1) and final (B2) tests enabled iteration and refinement. Tests were complemented by the System Usability Scale (SUS) and qualitative feedback, ensuring a comprehensive view.
Results highlighted measurable improvements, yielded insights for further development, and confirmed the platform’s innovative value for both expert and less technical users.
The results of the latest testing are still being evaluated. They will be presented and published in October 2025 at the conclusion of the XReco Project.
About Visyon
Visyon is a Barcelona-based company, part of Grup Mediapro, that creates extended reality experiences.
Follow XReco!