Implementing a 3D Model Search Service on the AWS Cloud

Nanthan Rasiah
5 min read · Jun 29, 2022


3D models are extensively used across many sectors, including computer games, movies, engineering, retail, and advertising. There are many tools on the market for producing 3D models, but tools for visually searching a 3D model database to find similar models are rare, because building a good 3D model search tool is challenging. It requires complex computing and an AI/ML framework to create model descriptors and extract feature vectors, a database to persist and index large volumes of shape data, and near-real-time pattern matching on large datasets.

AWS, a leading cloud provider, offers on-demand compute, storage, machine learning services, and more, which makes it possible to implement 3D visual search with relative ease.

In this post, let’s look at a real-world business problem in the 3D model market and see how to implement a solution on the AWS cloud.

Let’s start with a hypothetical business problem. Engineering design company X has a large number of 3D models stored in a legacy datastore, and it wants to spin up a new business selling those models online. The company wants to provide a visual search service where customers present a photo, a hand sketch, or a 3D model object, find the matching 3D models, and can then easily select and buy the model they are after.

So company X has a large number of 3D models in a legacy database. The first step is to download the models to cloud storage (preferably Amazon S3), extract shape and feature data from them, and then index that data to group similar models together and enable efficient searches.

The following diagram illustrates the architecture for shape and feature data generation and indexing.

Here are the steps you need to take to implement the solution.

  1. Configure AWS Batch, a serverless batch computing platform, to run a service that connects to the legacy database and downloads the 3D model files into an S3 bucket. You can schedule it to run nightly.
  2. Implement an AWS Lambda function that processes the downloaded 3D models in the S3 bucket and generates shape data using a shape representation algorithm. Store the generated shape data in Amazon DynamoDB. You can configure this Lambda function to be triggered by the S3 bucket’s put event.
  3. Implement another AWS Lambda function to create several snapshots of each 3D model from different angles and store them as images in an S3 bucket.
  4. Extract features from the generated images using a convolutional neural network (CNN) model that is pre-trained on the well-known ImageNet dataset, or a model you trained and deployed using Amazon SageMaker, a fully managed machine learning platform that lets you create, train, and deploy machine learning models quickly in the AWS cloud. Using this model, you can extract image textures, geometric data, and metadata, and store them in Amazon DynamoDB.
  5. Create another Lambda function to enrich the shape data generated in step 2 with the feature data extracted in step 4. The resulting shape data is a set of floating-point numbers. The next step is to group similar shapes together.
  6. Using an AWS Lambda function, build a reference k-NN index on Amazon OpenSearch Service, a fully managed service that makes it easy to deploy, secure, and run OpenSearch cost-effectively at scale. Amazon OpenSearch Service offers k-nearest neighbours (k-NN) search, which lets you store shape data as vectors and group similar shapes by Euclidean distance or cosine similarity using the k-NN algorithm.
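To make step 6 concrete, here is a minimal Python sketch, using the opensearch-py client, of how the k-NN index and a model document could be defined. The index name, the field names `shape_vector` and `model_s3_uri`, and the vector dimension are illustrative assumptions, not part of the architecture above; adapt them to your own pipeline.

```python
# Sketch of k-NN indexing on Amazon OpenSearch Service (step 6).
# Index/field names and the vector dimension are illustrative assumptions.

def build_knn_index_body(dimension):
    """Index settings and mapping that enable k-NN search on a vector field."""
    return {
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                # The enriched shape descriptor, stored as a k-NN vector.
                "shape_vector": {"type": "knn_vector", "dimension": dimension},
                # Where the original 3D model file lives in S3.
                "model_s3_uri": {"type": "keyword"},
            }
        },
    }


def build_model_doc(shape_vector, model_s3_uri):
    """Document holding one model's enriched shape descriptor and S3 URI."""
    return {"shape_vector": shape_vector, "model_s3_uri": model_s3_uri}


def index_model(client, index_name, model_id, shape_vector, model_s3_uri):
    """Create the index if needed and store one model document.

    `client` is an opensearch-py OpenSearch client pointed at your domain.
    """
    if not client.indices.exists(index=index_name):
        client.indices.create(index=index_name,
                              body=build_knn_index_body(len(shape_vector)))
    client.index(index=index_name, id=model_id,
                 body=build_model_doc(shape_vector, model_s3_uri))
```

In the Lambda function, `index_model` would be called once per enriched shape descriptor read from DynamoDB.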

Now you have generated shape descriptors enriched with features and indexed them for k-nearest neighbour (k-NN) search. Next, you present a 3D model, or 2D views of a model (you can use a tool to sketch the front, top, and side views), as a query to an application that finds similar models in the indexed data in Amazon OpenSearch Service.

The following diagram describes the architecture of the real-time 3D model search that finds similar models in the model repository.

  1. Using a web app hosted in S3, upload a 3D model object (if you have one), or draw the top, front, and side views of the model with a sketcher app and upload the views as images. The more views you provide from different angles, the more accurate the results.
  2. The uploaded image is sent to AWS Lambda via Amazon API Gateway.
  3. The Lambda function generates a shape descriptor for the uploaded model or images and then calls an Amazon SageMaker real-time endpoint to extract feature data.
  4. The Lambda function enriches the shape descriptor with the feature data.
  5. The Lambda function sends a k-nearest-neighbour query to the index in Amazon OpenSearch Service, which returns a list of k similar models along with the Amazon S3 URIs of those models.
  6. The Lambda function generates pre-signed Amazon S3 URLs and returns them to the client web application to visualise the similar models.
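Steps 5 and 6 of this query flow can be sketched as follows. The index and field names (`3d-models`, `shape_vector`, `model_s3_uri`) are the same illustrative assumptions as in the indexing stage, and the pre-signed URL is produced with boto3’s standard `generate_presigned_url` call.

```python
# Query-side sketch: k-NN search on OpenSearch, then pre-signed S3 URLs.
# Index/field names are illustrative assumptions.

def build_knn_query(query_vector, k=10):
    """OpenSearch k-NN query body: find the k models whose shape vectors
    are nearest to the query descriptor."""
    return {
        "size": k,
        "query": {"knn": {"shape_vector": {"vector": query_vector, "k": k}}},
    }


def find_similar_models(opensearch_client, query_vector, k=10,
                        index_name="3d-models"):
    """Run the k-NN query and return (model S3 URI, score) pairs."""
    response = opensearch_client.search(index=index_name,
                                        body=build_knn_query(query_vector, k))
    return [(hit["_source"]["model_s3_uri"], hit["_score"])
            for hit in response["hits"]["hits"]]


def presign(s3_uri, expires_in=3600):
    """Turn an s3://bucket/key URI into a time-limited pre-signed URL
    the web client can use to fetch and visualise the model."""
    import boto3  # imported here so the pure helpers above need no AWS SDK
    bucket, key = s3_uri.removeprefix("s3://").split("/", 1)
    return boto3.client("s3").generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_in)
```

The Lambda handler would call `find_similar_models` with the enriched descriptor from step 4, then map `presign` over the returned URIs before responding through API Gateway.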

The purpose of this post is to explain the architecture and the high-level implementation details of a 3D model search service built with AWS services. The FAQ section below provides more details.


  1. What is a 3D shape descriptor?
    A 3D shape descriptor is a set of numbers representing points on a 3D model’s surfaces, capturing the geometric essence of the object. It is a compact representation of the 3D object, and the descriptors form a vector space with a meaningful distance metric.
  2. How do you generate a 3D shape descriptor?
    There are many algorithms available for generating 3D shape descriptors. They produce a set of 2D view data by rendering the 3D model from different angles; more views yield more accuracy. Popular algorithms include the Light Field Descriptor (LFD) and the Multi-View Convolutional Neural Network (MVCNN).
  3. What is a pre-trained CNN model?
    A pre-trained model is a model created and trained by someone else to solve a problem similar to the one at hand. In our case, we can use the pre-trained ResNet-50 convolutional neural network, which is trained on more than a million images from the ImageNet database. ResNet-50 is available as a built-in algorithm in SageMaker.
  4. What is SageMaker?
    Amazon SageMaker is a fully managed machine learning service for building and training machine learning models quickly and easily, and then deploying them directly into a production-ready hosted environment.
  5. Amazon Elasticsearch Service vs Amazon OpenSearch Service.
    Amazon Elasticsearch Service is now called Amazon OpenSearch Service, which offers the latest versions of OpenSearch, with visualisation capabilities powered by OpenSearch Dashboards and Kibana. It lets you easily ingest, secure, search, aggregate, view, and analyse large volumes of data.
  6. What is k-NN for Amazon OpenSearch Service?
    It lets you search for points in a vector space and find the “k nearest neighbours” of a query point by Euclidean distance or cosine similarity.
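To make FAQs 1 and 6 concrete, here is a tiny self-contained sketch of what “finding the k nearest neighbours of a shape descriptor” means, using plain Python for Euclidean distance and cosine similarity. The descriptors below are made-up toy vectors, far shorter than real ones, and the model ids are purely illustrative.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptors (lower = more similar)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """Cosine similarity between two descriptors (higher = more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def k_nearest(query, descriptors, k=2):
    """Brute-force k-NN: ids of the k descriptors closest to `query`.
    OpenSearch's k-NN index does the same ranking, but at scale."""
    ranked = sorted(descriptors.items(),
                    key=lambda item: euclidean(query, item[1]))
    return [model_id for model_id, _ in ranked[:k]]

# Toy shape descriptors keyed by model id (real descriptors have hundreds
# or thousands of dimensions).
descriptors = {
    "chair-01": [0.9, 0.1, 0.0],
    "chair-02": [0.8, 0.2, 0.1],
    "engine-07": [0.1, 0.9, 0.7],
}
```

A query descriptor close to the two chairs, such as `[0.88, 0.12, 0.02]`, would rank both chair models ahead of the engine.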



Nanthan Rasiah

AWS APN Ambassador | Solutions Architect | AWS Certified Pro | GCP Certified Pro | Azure Certified Expert | AWS Certified Security & Machine Learning Specialty