
Leverage light field technology and create visual digital twins in minutes!

Under terms like Neural Radiance Fields (NeRFs), Light Fields, Neural Graphics Primitives, and Integral Photography, a new family of algorithms and technologies is revolutionizing graphics.
Automated photograph capture and deep-learning optimization enable the creation of visual digital twins in minutes, with no manual modelling of products. This gives designers, marketers, and visual merchandisers the power of “3D” without the painstaking process of creating it.

Why SmartPixels’ neural graphics primitives (NGP) for product visuals?

Produce high-quality, photorealistic 3D objects and scenes in minutes, at a pace that scales to your entire catalogue.

Photorealistic renderings

NeRF uses a neural network to learn the relationship between the appearance of a scene and the 3D structure of the objects within it, allowing it to generate highly realistic “3D images” from a sequence of ordinary 2D photographs.
Photorealistic 3D rendering of a suit
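To make this concrete, here is a minimal, simplified sketch of the idea in Python (using PyTorch): a small neural network answers “what colour and how much matter is at this 3D point, seen from this direction?”, and a pixel is rendered by compositing those answers along the camera ray. The network size, sampling scheme, and names are illustrative only; production systems such as Instant-NGP use far more sophisticated encodings.

```python
# Conceptual sketch of the NeRF idea (not SmartPixels' production code).
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Maps a 3D point and a viewing direction to a colour and a density."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),   # input: (x, y, z) + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # output: (r, g, b, density)
        )

    def forward(self, points, directions):
        out = self.net(torch.cat([points, directions], dim=-1))
        rgb = torch.sigmoid(out[..., :3])      # colour in [0, 1]
        sigma = torch.relu(out[..., 3])        # non-negative density
        return rgb, sigma

def render_ray(field, origin, direction, near=0.0, far=2.0, n_samples=64):
    """Render one pixel by alpha-compositing samples along its camera ray."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction               # sample positions on the ray
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = field(points, dirs)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)                 # opacity of each sample
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    weights = alpha * trans                                  # visible contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)               # final pixel colour

# Training (not shown) minimises the difference between pixels rendered this way
# and the captured photographs, so the field learns the scene's 3D structure.
field = TinyRadianceField()
pixel = render_ray(field, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```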

Highly efficient and scalable

NeRF can be trained on large photo datasets and processes that information quickly and accurately. Because training and rendering run in parallel on the GPU, a photorealistic model is produced far faster than with traditional offline rendering pipelines.

Interactive product experiences

Just like 3D renderings, light-field rendering enables engaging, immersive, and innovative 360° product experiences (3D for e-commerce, AR features).

Create engaging product visuals with Neural Radiance Fields (NeRF)

Our end-to-end system captures and renders realistic 3D views of any object or scene. It comprises light-field capture gear, a cloud reconstruction service, and rendering services.

Step 1: 

Light Field capturing gear

SmartPixels has developed several robotic photography machines, based on turntables, motorized cameras, and controllable lighting and backgrounds. Once an object or product is placed at the center of the machine, it is photographed thousands of times, from every angle.
The next step is detouring (cutting out) the photos. As the lighting and background are accurately controlled, this process is fully automated using regular computer vision algorithms, or deep learning-based object recognition and detouring for more complex cases (transparency, high shininess).
This step, shared between our traditional computer graphics pipeline and our neural pipeline, is highly optimized.
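As an illustration of the automated detouring step under the simplest assumption of a uniformly lit, solid-colour backdrop, the sketch below (Python with OpenCV, hypothetical file names) keys out the background and writes a cut-out image with an alpha mask. The deep-learning path used for transparent or very shiny products is not shown.

```python
# Illustrative sketch only: background keying when the backdrop colour is controlled.
import cv2
import numpy as np

def detour(image_path, out_path, bg_hsv_low=(35, 40, 40), bg_hsv_high=(85, 255, 255)):
    """Cut out the product by keying the known backdrop colour (here: green)."""
    img = cv2.imread(image_path)                       # BGR photo from the rig
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    background = cv2.inRange(hsv, np.array(bg_hsv_low), np.array(bg_hsv_high))
    mask = cv2.bitwise_not(background)                 # product = everything that is not backdrop

    # Clean up small holes and speckles left by sensor noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Write an RGBA image whose alpha channel is the product mask.
    rgba = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
    rgba[:, :, 3] = mask
    cv2.imwrite(out_path, rgba)

# Hypothetical file names, for illustration.
detour("shot_0001.png", "shot_0001_masked.png")
```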

Step 2: 

NVIDIA Instant NeRF - Optimization and Training

In 2022, NVIDIA introduced a highly optimized NeRF-like model for learning and rendering computational light fields: Instant-NGP. As a member of the NVIDIA Inception program for start-ups, SmartPixels has signed a licensing agreement for this technology.
Instant-NGP allows us to train the neural primitive in minutes and render it at 60 frames per second, leveraging the latest advances in NVIDIA GPUs for optimal performance. However, Instant-NGP is a research-oriented project and does not provide a full 3D production pipeline for at-scale content delivery.
Neural Radiance Fields: NVIDIA & SmartPixels
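A minimal training sketch is shown below, using the Python bindings (pyngp) distributed with the public instant-ngp repository. Method names can vary between versions, the dataset path is hypothetical, and this is not SmartPixels' production wrapper; it assumes the detoured photos and their camera poses have already been exported to a transforms.json dataset.

```python
# Hedged sketch based on the public instant-ngp Python bindings; API may differ by version.
import pyngp as ngp  # built from https://github.com/NVlabs/instant-ngp

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_training_data("captures/suit")    # hypothetical folder containing transforms.json
testbed.shall_train = True

# A few thousand optimisation steps are typically enough for a crisp model.
while testbed.training_step < 5000:
    testbed.frame()

# Persist the trained neural graphics primitive for later rendering.
testbed.save_snapshot("captures/suit.msgpack", False)
```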

Step 3: 

Neural Rendering setup and editability

SmartPixels has leveraged Unity to create a cloud rendering and experience platform tailored for luxury fashion. To benefit from years of existing development and integration into the fashion ecosystem, we have added the ability to render an NVIDIA NGP inside the Unity engine.
By integrating light fields inside Unity, we can create hybrid renderings that mix 3D backgrounds with light-field products, or vice versa.
The key features of this integration are:
• Fully automatic capture of the light field (as described above), with optimization and training via NVIDIA Instant-NGP.
• One-click import into the Unity editor.
• Occlusion: the light field hides objects behind it and is hidden by the foreground (see the sketch after this list).
• Shadows: the light field casts shadows.
• Transparency.
• Reflections: the light field casts reflections.
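The sketch below illustrates only the principle behind the occlusion feature: depth-based compositing between the light-field render and the rest of the 3D scene. It is a conceptual Python example, not the actual Unity/C# integration.

```python
# Conceptual depth-based compositing: each pixel keeps whichever surface is closer
# to the camera, so the light field both hides and is hidden by other geometry.
import numpy as np

def composite(lf_rgb, lf_depth, scene_rgb, scene_depth):
    """Per-pixel depth test between the light-field render and the 3D scene."""
    lf_wins = (lf_depth < scene_depth)[..., None]   # True where the product is in front
    return np.where(lf_wins, lf_rgb, scene_rgb)

# Tiny 2x2 example: the product occludes the scene in the left column only.
lf_rgb = np.full((2, 2, 3), 0.8)
scene_rgb = np.full((2, 2, 3), 0.2)
lf_depth = np.array([[1.0, 3.0], [1.0, 3.0]])
scene_depth = np.full((2, 2), 2.0)
print(composite(lf_rgb, lf_depth, scene_rgb, scene_depth))
```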
Neural Radiance Fields for e-commerce catalogues and AR experiences

For your e-commerce and Augmented Reality experiences, get lifelike 3D models of the highest quality, within a few minutes and for all your inventory.

1) Improve the visual appeal and interactivity of your e-commerce catalogue

Light fields give you the “omnidirectional viewpoint” (free camera motion) of “3D”. It is integral photography: the photo of all photos; you can spin the captured object, zoom, and pan.
In the context of e-commerce, light field rendering can be used to create more realistic and interactive product images for online catalogues. For example, a retailer could use light field rendering to capture detailed 3D images of a product from multiple angles, allowing customers to view the product from different perspectives and see subtle details that may not be visible in a traditional 2D image.
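As a simplified illustration of the “photo of all photos” idea, the sketch below (Python, hypothetical folder and file names) implements a basic 360° turntable viewer that shows the captured frame nearest to the angle a shopper drags to. A true light-field or NGP renderer goes further and synthesises genuinely new viewpoints instead of picking stored ones.

```python
# Simplified 360° spin viewer over captured turntable frames (illustrative only).
from pathlib import Path

def load_turntable(folder="captures/suit/turntable", n_frames=72):
    """Map azimuth (degrees) -> image path for frames shot every 360/n_frames degrees."""
    step = 360 / n_frames
    return {i * step: Path(folder) / f"frame_{i:03d}.png" for i in range(n_frames)}

def frame_for_angle(frames, azimuth_deg):
    """Pick the captured frame nearest to the requested viewing angle."""
    azimuth_deg %= 360
    def circular_distance(a):
        return min(abs(a - azimuth_deg), 360 - abs(a - azimuth_deg))
    return frames[min(frames, key=circular_distance)]

frames = load_turntable()
print(frame_for_angle(frames, 97.5))   # nearest shot, here the 95-degree frame_019.png
```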

2) Enhance the realism and interactivity of your AR experiences

Light field rendering can be used to create more realistic and immersive Augmented Reality experiences. For example, light field rendering can be used to display virtual objects in AR with realistic lighting and reflections, making them look like they are part of the real world. Light field rendering can also allow users to interact with virtual objects in a more natural way, such as by changing the focus of an image to see different parts of an object more clearly.

People also ask about...

When using a computer to render a light field, you can spin the captured object, zoom, and pan: “everything feels like 3D”, so it is easy to compare (or confuse) it with traditional 3D graphics.
Let’s look at a few similarities and differences between “traditional computer-generated images” and “neural graphics primitives”:
 
• Arbitrary viewpoint and camera (look around in space freely): yes for both.
• Compose with other objects, add a background, compute transparency, occlusion, shadows, reflections: yes for both (advanced for neural graphics primitives!).
• The shape, material attributes, textures, and light positions must be modelled individually and manually by graphics artists using tools like 3D scanners and material scanners: yes for traditional graphics; no for neural graphics primitives, which are fully automatic.
• The shape of the object can be edited: yes for traditional graphics; no for neural graphics primitives.
• The colors and materials can be edited: yes for traditional graphics; no for neural graphics primitives.
• The lighting can be edited: yes for traditional graphics; only partially, with some limitations, for neural graphics primitives.
• The “asset” is usable in 3D worlds such as video games, the metaverse, CAD, etc.: yes, increasingly, for traditional graphics; not yet for neural graphics primitives.
In short, by comparison with 3D computer graphics, light fields give you the “omnidirectional viewpoint” (free camera motion) of “3D” but not the editability. It is integral photography, the photo of all photos, but it is still just a photo: easy to capture, hard to change.
Neural graphics primitives or light fields should be leveraged when the project meets these conditions:

• A physical sample of the product exists (common in fashion).
• Throughout the lifetime of the visual digital twin, the product will not need to be edited: the product is a permanent collection item. The brand can capture it once and use it everywhere.
• The ability to present the product in free camera motion (manipulate and rotate ‘in 3D’) is desirable (for instance, a moving image on social media has a 7x click rate compared to the static product pack shot).
• The same product needs to be presented in a variety of backgrounds, contexts, resolutions, and angles: for instance, in visual merchandising, or when displaying products with different image constraints (online marketplaces with different background or aspect ratio requirements).
• The visual asset has a very long lifetime (archives, art & couture pieces).
In all these instances, SmartPixels is deploying light-field-based imaging and rendering solutions.

Neural Radiance Fields (NeRF) is a machine learning algorithm that can be used to render 3D scenes with realistic lighting and reflections.

Some of the benefits of using NeRF for lifelike 3D rendering include:
• Increased realism: NeRF uses a neural network to learn the relationship between the appearance of a scene and the 3D structure of the objects within it, allowing it to generate highly realistic 3D images from a sequence of 2D images.
• High efficiency: NeRF can be trained on large datasets, allowing it to process large amounts of information quickly and accurately. This can make 3D rendering more efficient, especially for complex or large scenes.
• Improved speed: NeRF processes data in parallel on the GPU, making it much faster than traditional offline rendering techniques. This can be particularly useful for real-time rendering applications.
• Enhanced flexibility: NeRF can be trained to adapt to different input data, allowing it to be used in a wide range of 3D rendering applications.
• Reduced complexity: NeRF automates certain aspects of 3D rendering, such as lighting and shading, reducing the complexity of the process and allowing artists to focus on other aspects of the scene.

Try our Light Field solution on your product catalogue!

The ultimate guide to Light-Fields

Want to take a deep dive into the advent of modern GPUs and deep-learning methods?
Download our e-book on how light fields are revolutionizing product visuals.
In this e-book you will discover:
– How to leverage the power of 3D without the painstaking process of creating it.
– How a new set of algorithms and technologies is revolutionizing graphics.
– What the processes of automated photograph capture and deep-learning optimization involve.

Complete this form to download our guide to Light-Fields.