Our products create unbiased, labeled, synthetic datasets ideal for training top-performing Computer Vision AI models.

Defense and drone CV engineers face a persistent issue: field-collected data falls short of what robust models require, leaving gaps in edge cases such as rare weather or occluded targets. No amount of flights or ground tests delivers the volume, diversity, or labels needed for mission-ready detection. Synthetic data addresses this directly by generating precise, scalable datasets that […]

Computer vision engineers, researchers, and AI practitioners are building models for use cases such as autonomous systems, surveillance, and industrial inspection, aiming for near-perfect accuracy in real-world deployment. They must contend with rare scenarios such as occlusions, low light, or unusual angles, which cause model failures despite strong benchmark performance. These edge cases demand data that’s often […]

Biot, 19 January, 2025 – AI Verse, the leader in synthetic data generation for computer vision applications, announces a €5 million funding round to accelerate the development and commercialization of its proprietary technology. The round is led by Supernova Invest through Crédit Agricole Innovations et Territoires (CAIT), Amundi Avenir Innovation 3 (AAI4), and Creazur, bringing […]

We are proud to announce a recognition by French President Emmanuel Macron during his keynote address at the Adopt AI Summit in Paris. President Macron highlighted AI Verse’s strategic partnership with STARK, marking a significant endorsement of the company’s contribution to advancing Europe’s AI capabilities and technological sovereignty. This presidential recognition emphasizes AI Verse’s alignment with […]

We’re proud to announce a partnership between AI Verse and STARK. AI Verse, a French deep tech company and European leader in synthetic data generation for training artificial intelligence models, announces a strategic partnership with STARK, a German defence company that develops multi-domain unmanned systems. This collaboration aims to provide STARK with sovereign synthetic image datasets to train […]

Computer vision and synthetic data are reshaping how defense organizations see, understand, and act in complex environments. These technologies are moving from supportive tools to essential layers in modern defense infrastructure. Here’s where their impact is already being felt—and what’s next. 1. Situational Awareness Gets Smarter Defense systems now merge live visuals from drones, vehicles, […]
You can build a synthetic dataset for a fraction of the cost of a real-world image dataset. A 3D scene and a fully labeled image matching your use case are produced in seconds. Easily extend your dataset to match each new edge case throughout your development cycle.
Even when it is possible, collecting real-world images is in most cases a daunting task, and privacy issues may further complicate the process. Procedural generation of synthetic datasets is a game changer: you create your own images in a few clicks and avoid any privacy issues.
Winner:
Synthetic Datasets!
Research Summary
To evaluate how efficiently synthetic datasets train a model, we conducted a series of benchmarks comparing training runs on synthetic images against training runs on real-world images (the COCO dataset). To date, results have been established for two models (YOLOv5 and Mask R-CNN) across three detection tasks of increasing difficulty (sofa, bed, and potted plant). These tests were conducted with 1,000 assets in our database.
Procedure
Real-world training datasets were extracted from MS COCO for each class of interest: 3,682 images containing the label “bed”, 4,618 containing the label “couch”, and 4,624 containing the label “potted plant”.
For each test, we used our procedural engine to generate a synthetic dataset: 63k synthetic images for “bed” detection, 72k for “couch”, and 99k for “potted plant”.
We also used ImageNet for pre-training models in several experiments.
Validation datasets were constructed for each class of interest from Open Images: 199 images containing the label “bed”, 799 images for the label “couch”, and 1,533 images for the label “plant”.
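As a rough sketch of the extraction step above, here is how images containing a given class can be pulled out of a COCO-format annotation file. The helper function and the tiny inline sample are illustrative only; the actual extraction used the official MS COCO annotations and tooling.

```python
# Sketch: select the images in a COCO-format annotation structure that
# contain a given category (e.g. "bed"). The sample data below is a
# minimal illustration, not the datasets used in the benchmarks.

def image_ids_with_category(coco, category_name):
    """Return the set of image ids whose annotations include category_name."""
    cat_ids = {c["id"] for c in coco["categories"] if c["name"] == category_name}
    return {a["image_id"] for a in coco["annotations"] if a["category_id"] in cat_ids}

# Minimal COCO-style structure for illustration.
sample = {
    "categories": [{"id": 1, "name": "bed"}, {"id": 2, "name": "couch"}],
    "annotations": [
        {"image_id": 10, "category_id": 1},
        {"image_id": 11, "category_id": 2},
        {"image_id": 12, "category_id": 1},
    ],
}

print(sorted(image_ids_with_category(sample, "bed")))  # [10, 12]
```

The same filter, run per class of interest, yields the per-label image counts reported above.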
Conclusions
The domain gap between training sets and validation sets or live images is not exclusive to synthetic datasets. It is a general issue that also exists between one set of real images and another.
In fact, synthetic images are generally more efficient than real images for training models. This might seem counterintuitive, because synthetic images are less realistic than real images.
However, because of the domain gap, image realism is not the key factor in training a model. The variance and distribution of the scene parameters are the crucial factors for obtaining a model that generalizes well.
Variance and distribution of parameters are not easily controllable with real images.
Models may be successfully pre-trained on synthetic images and fine-tuned on real images, or the other way round, depending on the task and the model.
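The point about controlling variance and distribution can be made concrete with a small sketch: in a procedural engine, every scene parameter is drawn from an explicitly chosen distribution, so coverage of the parameter space is prescribed rather than inherited from whatever happened to be photographed. The parameter names and ranges below are hypothetical, not those of AI Verse's actual engine.

```python
import random

# Sketch: each scene parameter is sampled from a distribution chosen by
# the dataset designer, so variance and coverage are controlled by design.
# Names and ranges are illustrative assumptions, not the real engine.

def sample_scene_params(rng):
    return {
        "light_intensity": rng.uniform(50.0, 2000.0),  # lux; deliberately wide range
        "camera_elevation": rng.uniform(-10.0, 80.0),  # degrees above the horizon
        "object_occlusion": rng.betavariate(2, 5),     # in [0, 1], skewed toward light occlusion
        "asset_id": rng.randrange(1000),               # pick one of the 1,000 assets
    }

rng = random.Random(42)
params = [sample_scene_params(rng) for _ in range(10_000)]
# With real images, these marginal distributions are fixed by what was
# captured in the field; here, each one is directly prescribed.
```

Changing a distribution (say, widening the occlusion range to cover a new edge case) and regenerating is all it takes to extend the dataset, which is not possible with an already-collected real-image set.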
[Benchmark result charts: Beds; Plants & Couches]