For Project One, I was interested in the 3D reconstruction of Scanning Electron Microscopy (SEM) images for better surface analysis. For my scans, I used the Phenom ProX G6 Desktop SEM in the basement of the Nature Lab at RISD. SEM scans are widely used in biology and materials engineering for surface analysis. SEM data comes in the form of digital images that users snap of the specimen they are documenting. While these 2D images are helpful (and fun to look at!), they are fairly limited for analyzing heights and distances between surface features. My hypothesis going into the project was that reconstructing SEM images in 3D and letting users view the models in VR/AR would allow for better surface understanding and collaboration.
If you're planning a similar investigation, it's important to account for some lead time before you can start scanning. The specimen typically has to be dried for one to two weeks, depending on the material. For example, the hydrogel was ready in a week since it's a little hydrophobic unless soaked in water, while a specimen like a leaf or a bug would take longer because its biological material needs to dry. I used the Research Stereoscope to capture images of the sample while it was still wet.
There are a few ways to reconstruct 2D SEM data in 3D: single-view, multi-view, and hybrid approaches. Of the photogrammetry software suggested (123D Catch (Autodesk), PhotoScan (Agisoft), ReCap 360 (Autodesk)), ReCap requires at least 20 images and was tedious to operate, and both PhotoScan and 123D Catch appear to be discontinued. These programs weren't originally designed for SEM; they rely on EXIF (exchangeable image file format) metadata of the kind ordinary cameras embed, which SEM exports generally don't carry.
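If you want to check this for your own scans, the snippet below is a minimal sketch using Pillow (not part of my original workflow, and the file name is hypothetical) that prints whatever EXIF tags an image carries. It's a quick way to see why camera-oriented photogrammetry tools have little to work with in an SEM export.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical filename; substitute an image exported from your SEM.
img = Image.open("hydrogel_18.tif")

exif = img.getexif()
if not exif:
    print("No EXIF metadata found - camera-based photogrammetry tools "
          "won't have focal length or sensor info to work with.")
else:
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```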
In terms of constructing the model, I considered comparing the fidelity of models made in MountainsSEM and Rhino. MountainsSEM requires a license; it is free for the first 72 hours, and the trial can be extended to 30 days through another application. MountainsSEM is a software package dedicated to SEM reconstruction, and it lets users apply different types of imaging analysis (4-quadrant, stereophotogrammetry, reflectometry) to capture surface topography and measure distances and angles between the model's features. Rhino is a 3D modelling program well suited to product and architectural design. Displacement in Rhino lets users manipulate a texture's black and white points to create a displacement mesh on an object's surface. Unfortunately I was unable to get a license for MountainsSEM, so I used Rhino, and I plan to interview Georgia Rhodes about her experience using different photogrammetry software.
Making displacements in Rhino involves manipulating black and white points: the black point controls how far the dark areas of the texture are displaced, while the white point controls the displacement of the light areas. It's important to convert whatever image you plan to displace to greyscale in Photoshop so the program can control the points more accurately.
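If you'd rather script the conversion than do it in Photoshop, a minimal sketch with Pillow (the file names are hypothetical) does the same prep:

```python
from PIL import Image

# Hypothetical filename; any SEM export works.
img = Image.open("hydrogel_18.tif")

# Convert to single-channel greyscale so the black/white point settings
# in Rhino's displacement panel map cleanly to height values.
grey = img.convert("L")
grey.save("hydrogel_18_grey.png")
```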
I was using a computer with the following specs:
Lenovo Slim Pro 7
AMD Ryzen 7 7735HS with Radeon Graphics
16 GB of RAM
8 GB Graphics Card
I had difficulty working in the advanced settings panel, as it was a lot for my system to handle. If you have a computer with better specs, it would be worth adjusting the refine steps and refine sensitivity in the advanced settings; this may produce a cleaner mesh that's easier to export. If your mesh is too complex, the geometry may break when you upload the model to ShapesXR. To resolve this you can upload the model as a .glb, which simplifies it for the web. The downside is a potential drop in model quality, so it's important to make the model detailed enough in Rhino that it still reads well after simplification.
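If you want to inspect a mesh before uploading, the sketch below uses the trimesh library (my own suggestion, not part of the original Rhino-to-ShapesXR workflow; file names are hypothetical) to load an exported mesh, report how heavy it is, and save it as the .glb that ShapesXR accepts.

```python
import trimesh

# Hypothetical filename: a mesh exported from Rhino (e.g. OBJ or STL).
mesh = trimesh.load("wet_hydrogel.obj", force="mesh")

# A quick sense of how complex the upload will be.
print(f"faces: {len(mesh.faces)}, watertight: {mesh.is_watertight}")

# Export to the glTF binary format for the web / ShapesXR.
mesh.export("wet_hydrogel.glb")
```

Depending on your trimesh version, quadric decimation is also available if you need to cut the face count further before export.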
Here are the .glb models used for the project. Below are the displacement settings in Rhino for each model.
Hydrogel 18
Initial Quality: very high
Black point: 0.94
White point: 1.02
Hydrogel 5
Initial Quality: very high
Black point: 0.98
White point: 1.01
Wet Hydrogel
Initial Quality: very high
Black point: 0.59
White point: 1.00
After making the wet hydrogel model, I used the Smooth command to simplify the mesh.
Based on my experience with ShapesXR, I decided to use the System Usability Scale (SUS), a standardized questionnaire for assessing the perceived usability of a system or product. The questionnaire has 10 statements, alternating between positively worded items on odd numbers and negatively worded items on even numbers, each rated on a 5-point Likert scale. Considering that I have experience with Figma and 3D modelling, I anticipated that my rating would be fairly positive. However, due to my difficulty prototyping and uploading models into ShapesXR, my overall score (collected through a Figma questionnaire) was 45, which is 23 points below the commonly cited average of 68. The software was great for importing simple models and organizing pre-created models in space, but it falls short once things get more complex. Something to keep in mind when prototyping in ShapesXR is that it's easy to break connections between frames. Below is a demonstration of what I mean.
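For reference, SUS scoring converts each response to a 0-4 contribution (odd items: rating minus 1; even items: 5 minus rating) and multiplies the sum by 2.5. Here's a short sketch of that arithmetic; the example ratings are made up, not my actual responses.

```python
def sus_score(ratings):
    """Compute a System Usability Scale score from ten 1-5 ratings.

    Odd-numbered items are positively worded: contribution = rating - 1.
    Even-numbered items are negatively worded: contribution = 5 - rating.
    The summed contributions are scaled by 2.5 to give a 0-100 score.
    """
    assert len(ratings) == 10
    total = 0
    for i, rating in enumerate(ratings, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Made-up example ratings, not my actual questionnaire responses.
print(sus_score([3, 3, 2, 4, 3, 3, 2, 3, 3, 3]))
```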
The following are images of the spatial setup in VR and how users are meant to interact with the buttons.
For future projects, I suggest adding a key so that the buttons are easier to understand.
To evaluate my project, I held an in-class activity where I asked my classmates to team up and analyze the reconstructed SEM models in ShapesXR. The following is the data and a summary of the responses.
Overall, the 3D reconstructions were preferred to the 2D SEM images and the images from the Research Stereoscope. For future projects, it may be helpful to choose a specimen that can be viewed in its entirety under the SEM, like a diatom, so that the models are less flat. Reconstruction in VR also provides a unique opportunity to alter lighting and color conditions. It would be interesting to study which lighting, color, and material surface best help users understand surface topography.