Systematic synthesis of design prompts for large language models in conceptual design
Conceptual design can be modeled as a proposition-making process in which designers make logical propositions to communicate and construct intangible concepts. Not only can LLMs interpret designers’ propositions, but they can also generate artificial ones. Moreover, common methods used in conceptual design (e.g. interviews, surveys, and analysis of user-generated content) rely on interpreting various ‘design languages’ for informed decision-making, and these methods can be effectively mastered by LLMs. Beyond generation tasks, LLMs can handle a variety of other tasks prevalent in conceptual design, including classification, detection, sentiment analysis, and evaluation.
Design prompts are classified into five categories: input, output, mechanism, control, and black box. All the prompt classes can accommodate textual prompts, and most classes can also accommodate visual prompts (e.g. images).
Input prompts serve to aid LLMs in interpreting designers’ intents and design operations. Firstly, task prompts guide LLMs in interpreting a specific design task with respect to design objectives and evaluation criteria. Secondly, persona prompts enable LLMs to understand various personas, such as those of designers (e.g. prompting LLMs to understand a designer’s background), customers, relevant stakeholders, and even the LLMs themselves (e.g. prompting LLMs to assume roles such as ‘industrial designer’ or ‘lead user’). Thirdly, context prompts enable LLMs to interpret relevant design contexts, including product usage environment, design process context (e.g. prompting LLMs to understand the current stage in a design process), competition context (e.g. prompting LLMs to understand relevant information about market segmentation, targeting, and positioning), and social context. Lastly, preference prompts enable LLMs to interpret specific preferences of designers or customers (e.g. prompting LLMs to prefer ‘sustainable design’ or ‘mechatronic systems’).
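The four input-prompt types above can be composed into a single structured prompt before it is sent to an LLM. The sketch below is a hypothetical illustration, not part of the paper's method; the `InputPrompts` class and all example strings are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class InputPrompts:
    """Hypothetical container for the four input-prompt types:
    task, persona, context, and preference."""
    task: str
    persona: str
    context: str
    preference: str

    def compose(self) -> str:
        # Concatenate the four prompt types into one labeled input prompt.
        return "\n".join([
            f"Task: {self.task}",
            f"Persona: {self.persona}",
            f"Context: {self.context}",
            f"Preference: {self.preference}",
        ])

prompts = InputPrompts(
    task="Generate three concepts for a portable water purifier and evaluate them against cost and weight.",
    persona="Assume the role of an industrial designer experienced in outdoor equipment.",
    context="Early concept-generation stage; the target market is backpackers in emerging economies.",
    preference="Prefer sustainable design and mechatronic systems.",
)
print(prompts.compose())
```

The composed string would then be passed to an LLM; keeping the four types as separate fields makes it easy to vary one class of input prompt while holding the others fixed.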
Lastly, LLMs can be prompted to visualize responses in design-specific forms, such as functional hierarchies, Morphological Charts, and design sketches.
A design-specific LLM agent, namely the Design Prompt Assistant, was developed and made searchable among the customized versions of ChatGPT.
To overcome the inherent complexities of DPS, three structured synthesis patterns are prescribed: (1) top-down synthesis; (2) bottom-up synthesis; (3) coevolutionary synthesis. In top-down synthesis, designers begin with general design prompts and progressively decompose them into more concrete ones. Adjacent prompts belong to the same class and differ only in their abstraction levels. Only after the LLM yields satisfactory responses do designers switch to a different class of design prompts. This pattern is analogous to the decomposition process in engineering design and to Least-to-Most Prompting in prompt engineering. In bottom-up synthesis, designers begin with multiple concrete prompts and then prompt the LLM to synthesize its responses to address a more general prompt; again, designers switch to a different prompt class only after the LLM yields satisfactory responses. In coevolutionary synthesis, designers frequently alternate between prompts of various classes across different abstraction levels.
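Top-down synthesis can be sketched as a recursive refinement loop: a general prompt is decomposed into more concrete sub-prompts of the same class until the LLM's responses are judged satisfactory. The code below is a minimal illustration under stated assumptions; the function names, the mock LLM, and the toy decomposition and acceptance rules are all hypothetical stand-ins, not the paper's implementation.

```python
def top_down_synthesis(llm, prompt, decompose, satisfactory, max_depth=3):
    """Top-down synthesis within one prompt class: query the LLM with a
    general prompt and, while the response is unsatisfactory, decompose
    the prompt into more concrete sub-prompts and recurse on each."""
    response = llm(prompt)
    if satisfactory(response) or max_depth == 0:
        return [(prompt, response)]
    results = []
    for sub_prompt in decompose(prompt):
        results.extend(
            top_down_synthesis(llm, sub_prompt, decompose, satisfactory, max_depth - 1)
        )
    return results

# Toy stand-ins: a real setup would call an LLM API and have the
# designer (or an evaluator prompt) judge the responses.
mock_llm = lambda p: f"response to: {p}"
decompose = lambda p: [f"{p} / sub-function A", f"{p} / sub-function B"]
satisfactory = lambda r: "sub-function" in r  # accept only concrete prompts

results = top_down_synthesis(
    mock_llm, "Design a portable water purifier", decompose, satisfactory
)
for prompt, response in results:
    print(prompt, "->", response)
```

Bottom-up synthesis would invert this loop (collect responses to concrete prompts, then prompt the LLM to generalize over them), and coevolutionary synthesis would interleave prompt classes rather than exhausting one class before switching.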
Meanwhile, structured DPS must be grounded in relevant design theory and methodology (DTM). According to the CIRP classification, DTMs relevant to creativity-based design include abductive reasoning, emergent synthesis, and intuitive approaches. Intuitive approaches align more closely with randomized DPS, whereas abductive reasoning and emergent synthesis are better suited to structured DPS. DPS represents a particular type of emergent synthesis.