With Style2Fab, creators can quickly personalize 3D-printable object models, including assistive devices, without compromising their functionality.
As 3D printers have become more affordable and widely accessible, the community of novice makers creating their own objects has grown significantly. These amateur artisans often rely on free, open-source repositories of 3D models created by other users, which they download and produce on their own 3D printers.
However, incorporating customized design elements into these models presents a significant challenge for many makers. This process typically demands the use of complex and costly computer-aided design (CAD) software, particularly when the original model isn’t readily available online. Additionally, even if a user manages to add personalized elements to an object, ensuring that these customizations don’t compromise the object’s functionality requires a higher level of expertise in the field, something that many novice makers may not possess.
MIT researchers have developed an AI-driven tool called Style2Fab, designed to assist makers in customizing 3D models without compromising the functionality of the fabricated objects. With Style2Fab, users can personalize 3D models using natural language prompts to describe their desired design modifications, and then they can 3D print the objects.
Style2Fab employs deep-learning algorithms to automatically divide the model into aesthetic and functional segments, simplifying the design process. This tool aims to empower novice designers and make 3D printing more accessible. It can also find applications in the field of medical making, where customization of assistive devices, considering both aesthetics and functionality, is important. For instance, a user could customize the appearance of a medical device like a thumb splint to match their clothing while keeping the device’s functionality intact. The tool is user-friendly and can benefit the growing DIY assistive technology community.
The research paper introducing Style2Fab was authored by Faraz Faruqi, a computer science graduate student, along with advisors Stefanie Mueller and Megan Hofmann, and other members of the research group. The findings will be presented at the ACM Symposium on User Interface Software and Technology.
Emphasizing the importance of functionality
Online repositories like Thingiverse serve as platforms where individuals can upload open-source digital design files for objects, which can then be downloaded and produced using 3D printers.
To develop their AI-driven tool, Faraz Faruqi and his collaborators initially delved into these vast repositories to study the objects available. Their aim was to gain insights into the various functionalities present in different 3D models, providing them with a foundation for segmenting models into functional and aesthetic components using AI.
They recognized that the purpose of a 3D model is highly context-dependent. For example, a vase could be designed to sit flat on a table or be intended for hanging from the ceiling with a string. Therefore, the determination of which part of an object is functional couldn’t solely rely on AI; human input was required.
Consequently, they defined two key functionalities: external functionality, which encompasses the parts of the model that interact with the external environment, and internal functionality, which includes the parts of the model that need to fit together after being fabricated.
For a stylization tool to be effective, it needed to maintain the geometry of both externally and internally functional segments while allowing customization of nonfunctional, aesthetic segments.
To achieve this, Style2Fab uses machine learning to analyze the model’s topology and identify patterns of geometry changes, such as curves or angles where two surfaces meet. Based on this analysis, the system divides the model into segments.
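As a rough illustration only, and not the authors’ implementation, segmentation of this kind can be approximated by cutting a triangle mesh along sharp dihedral angles between adjacent faces. The sketch below assumes the trimesh and networkx libraries, an illustrative angle threshold, and a hypothetical input file.

```python
# Illustrative sketch (not Style2Fab's code): split a triangle mesh into
# candidate segments by cutting along sharp creases between adjacent faces.
import numpy as np
import networkx as nx
import trimesh

def segment_by_curvature(mesh: trimesh.Trimesh, angle_threshold=np.radians(35)):
    # Keep adjacency edges whose dihedral angle is below the threshold;
    # sharper creases become segment boundaries.
    smooth = mesh.face_adjacency[mesh.face_adjacency_angles < angle_threshold]
    graph = nx.Graph()
    graph.add_nodes_from(range(len(mesh.faces)))
    graph.add_edges_from(smooth)
    # Each connected component of faces is one candidate segment.
    return [np.array(sorted(component)) for component in nx.connected_components(graph)]

mesh = trimesh.load("planter.stl")  # hypothetical input model
segments = segment_by_curvature(mesh)
print(f"found {len(segments)} candidate segments")
```

In a pipeline like the one described, the threshold and the number of resulting segments would be tuned or merged further before anything is shown to the user.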
Then, Style2Fab compares these segments to a dataset the researchers compiled, which contains 294 models of 3D objects. Each model in the dataset has its segments annotated with functional or aesthetic labels. If a segment closely resembles one of these annotated pieces, it is labeled as functional. However, classifying segments based solely on geometry is a challenging task due to the wide variations in shared models. Therefore, the system initially provides recommendations to the user, who can easily adjust the classification of any segment, marking it as either aesthetic or functional.
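Conceptually, this matching step behaves like a nearest-neighbor lookup over segment descriptors. The sketch below is a simplified, hypothetical stand-in for that comparison, using made-up descriptor vectors and labels rather than the researchers’ actual dataset or features.

```python
# Illustrative sketch: label each segment "functional" or "aesthetic" by
# nearest-neighbor lookup against pre-annotated reference descriptors.
import numpy as np

def classify_segments(segment_descriptors, reference_descriptors, reference_labels):
    labels = []
    for descriptor in segment_descriptors:
        # Euclidean distance to every annotated reference segment.
        distances = np.linalg.norm(reference_descriptors - descriptor, axis=1)
        labels.append(reference_labels[int(np.argmin(distances))])
    return labels  # these are only recommendations; the user can flip any label

# Hypothetical data: 3 query segments against 1,000 annotated reference descriptors.
segments = np.random.rand(3, 64)
references = np.random.rand(1000, 64)
reference_labels = np.random.choice(["functional", "aesthetic"], size=1000)
print(classify_segments(segments, references, reference_labels))
```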
Human involvement in the process
After the user approves the segmentation, they provide a natural language prompt describing their desired design elements, like requesting a “rough, multicolor Chinoiserie planter” or a phone case “in the style of Moroccan art.” An AI system called Text2Mesh then attempts to generate a 3D model that fulfills the user’s specifications.
Text2Mesh manages the aesthetic aspects of the model in Style2Fab, which involves adding texture, color, or adjusting the shape to closely match the user’s preferences. However, it doesn’t modify the functional components of the model.
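One way to picture this constraint is to let a stylization step move only the vertices that belong to aesthetic segments, then restore the original coordinates everywhere else. The sketch below is purely conceptual: `stylize_vertices` is a placeholder, not Text2Mesh’s real interface, and the toy mesh exists only to make the example runnable.

```python
# Illustrative sketch: apply text-driven stylization only to vertices in
# aesthetic segments, leaving functional geometry untouched.
import numpy as np

def stylize_vertices(vertices, faces, prompt):
    # Placeholder for a Text2Mesh-style optimizer that would displace and
    # color vertices to match the prompt; here we just add small noise.
    return vertices + 0.01 * np.random.randn(*vertices.shape)

def stylize_model(vertices, faces, segment_labels, face_segments, prompt):
    aesthetic_faces = [i for i, seg in enumerate(face_segments)
                       if segment_labels[seg] == "aesthetic"]
    editable = np.unique(faces[aesthetic_faces])   # vertex indices free to move
    frozen = np.ones(len(vertices), dtype=bool)
    frozen[editable] = False

    styled = stylize_vertices(vertices, faces, prompt)
    styled[frozen] = vertices[frozen]              # restore functional geometry
    return styled

# Toy example: 4 vertices, 2 faces; face 0 is functional, face 1 aesthetic.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 1, 3]])
print(stylize_model(verts, faces, {0: "functional", 1: "aesthetic"}, [0, 1],
                    "in the style of Moroccan art"))
```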
All of these components are integrated into the backend of a user interface that automates the segmentation and styling of a model based on user input and a few clicks.
The researchers conducted a study involving makers with varying levels of experience in 3D modeling and discovered that Style2Fab was beneficial in different ways based on the maker’s expertise. Novice users could easily understand and utilize the interface for stylizing designs, offering a low-entry point for experimentation.
For experienced users, Style2Fab accelerated their workflow and provided more precise control over stylization through advanced options.
Looking ahead, Faruqi and his team aim to expand Style2Fab to grant users fine-grained control over physical properties in addition to geometry. For instance, altering an object’s shape could affect its load-bearing capacity and structural integrity during fabrication. Furthermore, they plan to enhance Style2Fab to enable users to create their own custom 3D models from scratch within the system. The researchers are also collaborating with Google on a follow-up project.
This research received support from the MIT-Google Program for Computing Innovation and utilized facilities provided by the MIT Center for Bits and Atoms.