ChatGPT for Mechanical Engineers: A Practical Playbook
This guide walks you through basic and advanced prompting techniques, with specific applications for design reviews, technical documentation, and team collaboration.
Mechanical engineers are increasingly using AI assistants like ChatGPT to speed up design workflows. We see this firsthand at CoLab: our team is largely made up of mechanical engineers who do this work every day, and we work alongside top companies that lead in design review.
The best uses of ChatGPT we have seen for mechanical engineers are:
- handling repetitive writing
- brainstorming ideas
- helping with communication
This prompt guide for mechanical engineers will walk you through:
- Basic ChatGPT prompting
- Advanced prompting techniques
- Using ChatGPT in design review
- Using ChatGPT in technical documentation
- ChatGPT in communication
By the end, we hope you have a solid sense of how and when to use ChatGPT as a mechanical engineer (and when not to!).
Basic Prompting Strategies for Engineers
Good results start with good prompts. Basic prompts are direct and clear. Think of it like asking a colleague a question.
Quick start: three rules
- Give context. Application, materials, loads, constraints, environment.
- State the deliverable. “Bullet list,” “table,” or “JSON.”
- Bound the scope. “Limit to top five risks,” “focus on passive cooling.”
Basic example
Goal: shortlist materials for a hot gearbox part.
Prompt: “List the key material properties to consider when selecting a polymer for a gearbox component operating at 150 °C. Return a bullet list.”
Advanced Prompting Strategies
Advanced prompts give ChatGPT a role, more context, or specific output instructions. These help you get structured, tailored answers.
Role-Playing
“Act as a senior mechanical engineer reviewing a pressure vessel design. List any safety or manufacturing concerns.”
Context-Rich Prompts
“Given this system: a cantilever beam made of 7075-T6 aluminum, 1 m long, with a 100 kg load at the end, suggest design improvements.”
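A quick hand calculation is the fastest way to check whatever improvements ChatGPT suggests. Here is a minimal Python sketch for the beam above; the 50 × 30 mm rectangular section is an assumption (the prompt gives no cross-section), and the 7075-T6 properties are typical handbook values.

```python
# Quick hand-check of the cantilever numbers before trusting any AI suggestion.
# Assumptions (not in the prompt): rectangular 50 x 30 mm section; E and yield
# strength are typical handbook values for 7075-T6.
g = 9.81                      # gravitational acceleration, m/s^2
F = 100 * g                   # tip load, N
L = 1.0                       # beam length, m
b, h = 0.050, 0.030           # section width and height, m (assumed)
E = 71.7e9                    # Young's modulus, Pa (typical 7075-T6)
yield_strength = 503e6        # yield strength, Pa (typical 7075-T6)

I = b * h**3 / 12                     # second moment of area, m^4
sigma_max = F * L * (h / 2) / I       # bending stress at the fixed end, Pa
deflection = F * L**3 / (3 * E * I)   # tip deflection, m

print(f"Max bending stress: {sigma_max/1e6:.0f} MPa "
      f"(safety factor {yield_strength/sigma_max:.1f})")
print(f"Tip deflection: {deflection*1000:.1f} mm")
```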
Structured Output
“List all steps for conducting a thermal stress analysis in bullet points.”
Prompt Iteration
Start broad, then refine:
- “List cooling options for a 500W electric motor.”
- “Now focus only on passive cooling options for small enclosures.”
Complex Multi-Factor Prompt
“Act as a design panel of three experts (materials, structures, manufacturing). Evaluate the following drone frame design for weight, strength, and manufacturability. Present findings per expert.”
Checklist Integration
“Here is our bolt joint checklist: [list]. Review the following bolted connection and highlight any concerns.”
Advanced Example Prompt:
“Act as an expert CAD reviewer. Based on the design summary below, create a review report listing issues by severity (critical, moderate, minor). Use this format:
[Severity] [Issue] – [Recommendation]”
Prompts for Design Reviews
Engineers can use ChatGPT to prepare for or document design reviews. Common uses include:
Brainstorm Issues
“I have a bracket that supports a motor via two M8 bolts. What failure modes should I consider?”
Checklists and Standards
“Review this bolted joint design for common mistakes: bolt spacing is 2D, preload assumed 70% of yield, threads engaged 1.5D.”
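Before sending that prompt, it helps to know what the preload figure actually implies. A minimal sketch, assuming a class 8.8 bolt (the prompt does not state the class) and a typical nut factor of 0.2:

```python
# Sanity-check the "preload = 70% of yield" assumption for an M8 bolt.
# Assumptions (not in the prompt): class 8.8 properties and a nut factor
# K of 0.2 for a lightly lubricated joint.
d = 0.008                 # nominal diameter, m
A_t = 36.6e-6             # tensile stress area of M8, m^2 (ISO 898-1)
yield_strength = 640e6    # Pa, class 8.8 (assumed)
preload_fraction = 0.70
K = 0.2                   # nut factor (assumed)

F_preload = preload_fraction * yield_strength * A_t   # target preload, N
T = K * F_preload * d                                  # tightening torque, N*m

print(f"Target preload: {F_preload/1000:.1f} kN")
print(f"Approximate tightening torque: {T:.0f} N*m")
```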
Design Change Summaries
“Summarize differences between Version A and Version B of this linkage design and the rationale behind each change.”
Report Drafting
“Given this feedback list: 1. Factor of safety too low, 2. Material too heavy, 3. Missing fillet at stress point, draft a paragraph summary for a design review report.”
Design Review Prompt Example:
“I designed a bracket of 6061-T6 aluminum, 150×50×10 mm, attached by two M8 bolts. Act as a design reviewer and identify possible concerns.”
ChatGPT might suggest: stress at bolt holes, fatigue resistance of 6061-T6, fillet radius concerns, and fastener strength.
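If ChatGPT flags stress at the bolt holes, a quick bearing-stress check tells you how much margin you actually have. A minimal sketch; the 2 kN service load is hypothetical, since the prompt does not give one:

```python
# Follow-up check on "stress at bolt holes": bearing stress on the bracket.
# The service load below is hypothetical -- the prompt does not specify one.
F = 2000.0               # total shear load carried by the joint, N (hypothetical)
n_bolts = 2
d = 0.008                # M8 nominal diameter, m
t = 0.010                # bracket thickness, m
yield_strength = 276e6   # Pa, typical 6061-T6

bearing_stress = F / (n_bolts * d * t)   # load spread over projected bolt-hole area
print(f"Bearing stress: {bearing_stress/1e6:.1f} MPa "
      f"(vs. ~{yield_strength/1e6:.0f} MPa yield)")
```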
Prompts for Technical Documentation
Engineers can use ChatGPT to reduce time spent drafting reports, specs, and manuals.
Report Summaries
“Here are the test results: Max temp = 85°C, acceptable. Vibration within limits. Draft a short paragraph summary.”
Clarify Language
“Rewrite this technical explanation in plain English for a client who isn’t an engineer.”
Boilerplate Content
“Generate a generic ‘Methodology’ section describing how FEA was conducted on a structural frame.”
Table/Comparison Requests
“Compare material A (7075-T6) and material B (304SS) in a table of strength, corrosion resistance, and machinability.”
Documentation Prompt Example:
“Write a paragraph for a design report explaining why we chose aluminum over steel for the chassis, considering weight and corrosion.”
Prompts for Communication
Save time writing clear, professional messages and updates:
Emails
“Write a professional email to a supplier explaining that we need stronger bolts due to an updated load requirement.”
Team Updates
“Draft an update for the team: thermal testing is complete, results are OK, one minor cooling issue will be fixed in next rev.”
Meeting Prep
“Here’s a list of meeting notes. Summarize into meeting minutes with key action items.”
Jargon Translation
“Explain fatigue failure in layman’s terms for a client email.”
Role-Play Prep
“I’m presenting a design to execs. Pose 3 questions they might ask about cost, risk, or schedule, and write sample answers.”
Communication Prompt Example:
“Compose a status update: vibration testing is complete, performance was better than expected, and the next step is to validate at high temperature. Keep tone positive.”
Guardrails
- Validate numbers. Ask for method and equations; verify the math yourself.
- Be explicit about units. Put units in your prompt and in the requested output (a unit-aware check is sketched after this list).
- Avoid overreach. Don’t treat suggestions as sign‑off.
- Privacy/IP. Don’t paste sensitive CAD or pricing. Redact or abstract details.
- Determinism. If repeatability matters, ask for fixed structure (tables/JSON) and reuse the same prompt.
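Here is the unit-aware check referenced above: a minimal sketch using the third-party pint package (pip install pint) to re-run a number with explicit units. The 100 kg mass and 150 mm moment arm are illustrative values, not from any prompt in this guide.

```python
# Keep units explicit when re-checking a number ChatGPT gives you.
# Uses the third-party 'pint' package; all values below are illustrative.
import pint

ureg = pint.UnitRegistry()
Q_ = ureg.Quantity

load = Q_(100, "kg") * Q_(9.81, "m/s^2")   # weight of the supported mass
arm = Q_(150, "mm")                        # moment arm (assumed)

moment = (load * arm).to("N*m")            # pint converts mm to m for you
print(f"Load: {load.to('N'):~P}")
print(f"Moment at the bolt pattern: {moment:~P}")
```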
Five ready‑to‑run workflows
1) 15‑minute design‑review prep
# Role and Objective
- Act as a design reviewer to evaluate the provided engineering context and identify potential concerns.
# Instructions
- Begin with a concise checklist (3-7 bullets) of the evaluation approach you will take; keep items conceptual.
- Analyze the component based on the given context: [component], [material], [key dimensions], [loads/BCs], [environment].
- Identify and list up to 10 issues, classifying each by Severity (Critical, Moderate, Minor) and by Category (strength, fatigue, stiffness, manufacturability, assembly, service).
- Prioritize issues first by Severity (Critical > Moderate > Minor), then by overall impact.
- If fewer than 10 issues are found, list only those identified.
- After compiling the table, review the output for accuracy and completeness; if issues are ambiguous, refine or clarify as needed.
## Output Format
- Return a single Markdown table containing the issues.
- Table schema:
- **Severity**: enum ("Critical", "Moderate", "Minor")
- **Category**: one of strength, fatigue, stiffness, manufacturability, assembly, service
- **Issue**: description
- **Recommendation**: actionable suggestion
- Example:
| Severity | Category | Issue | Recommendation |
|------------|--------------------|--------------------------------|-----------------------------------|
| Critical | strength | Stress exceeds material limit. | Increase section thickness. |
| Moderate | manufacturability | Feature difficult to machine. | Simplify feature geometry. |
# Stop Conditions
- Task is complete when the prioritized, categorized table of issues is returned in the specified format.
2) Delta summary between two revisions
# Role and Objective
The assistant evaluates two provided revision summaries (Rev A and Rev B) to generate a structured comparison and risk analysis.
Begin with a concise checklist (3-7 bullets) of what you will do; keep items conceptual, not implementation-level.
# Instructions
- When given valid revision summaries, compare Rev A and Rev B to identify all differences, explaining the rationale for each.
- Present a Markdown table cataloging identified issues, with columns for severity, category, description, and priority.
- Conclude with a delta summary describing all detected changes, rationales, risks, and validation steps.
- After generating the table and the summary, briefly check that all detected differences and risks have been addressed appropriately. If any are missing or unclear, self-correct and update your output before responding.
- In cases of missing or malformed revision summaries, output: 'Input Error: Revision summaries missing or not in correct format' and do not attempt to generate a delta summary.
## Output Breakdown
### 1. Issues Table
- Display as a Markdown table with these columns:
- Severity (e.g., 'High', 'Medium', 'Low')
- Category (e.g., 'Strength', 'Manufacturability')
- Description (detailed issue explanation)
- Priority (integer, ascending; 1 = highest)
| Severity | Category | Description | Priority |
|----------|--------------|----------------------------------------|----------|
| High     | Strength          | Example strength issue description     | 1        |
| Medium   | Manufacturability | Example manufacturability issue        | 2        |
### 2. Delta Summary
- If revision summaries are valid, produce:
- *Differences* – bullet list stating each change with a sub-bullet for rationale.
- *Risks Introduced* – three-item numbered list of potential risks resulting from the changes.
- *Validation Steps* – three-item numbered list of recommended steps to address risks.
- Follow this structure:
**Differences:**
- [Change 1 description]
- Rationale: [Reason for the change]
**Risks Introduced:**
1. [First risk]
2. [Second risk]
3. [Third risk]
**Validation Steps:**
1. [First step]
2. [Second step]
3. [Third step]
# Stop Conditions
- Task is considered complete when both the issues table and—when applicable—the delta summary are provided in the specified formats.
# Output Format
- Use Markdown for tables and lists.
- Ensure the output follows the structure outlined above.
- If required data is missing or malformed, return only the input error message.
3) Test‑report digest
# Purpose
Provide clear, actionable instructions for summarizing and tabulating test report results for inclusion in documentation or further review.
# Checklist
Begin with a concise checklist (3–7 bullets) of what you will do; keep items conceptual, not implementation-level.
# Instructions
- When given a set of test result bullet points (e.g., "Max temp 85°C OK; vibration within limits"), write a concise 5–7 sentence summary of the test outcomes suitable for report inclusion.
- After the summary, generate a table listing Pass/Fail outcomes for each requirement ID following this schema: **Requirement** | **Result** | **Notes**.
## Sub-categories
- **Handling Missing or Ambiguous IDs**: If any requirement ID is missing, duplicated, or unparseable from input, create a separate entry in the table for it, set Result to 'Fail,' and specify the issue in Notes.
- **Ordering Rules**: Present table rows in the sequence in which requirements are introduced in the input. If order is ambiguous, sort ascending alphanumerically by Requirement ID.
# Context
- Relevant for summarizing and structuring engineering or QA test results.
- Inputs: Test result statements and, if applicable, revision (delta) summaries.
- Out of Scope: General QA procedures without supplied test results.
# Reasoning and Validation Steps
- Think step-by-step: parse input, extract requirement IDs and outcomes, generate summary, then output table per schema and ordering rules.
- After creating the summary and table, validate that the schema, order, and handling of ambiguous/missing/duplicate IDs match requirements. Self-correct if any discrepancies are found.
- Test against example input if available. Ensure delta summary is included when revisions are present.
# Output Format
- Show summary after heading "Test Outcome Summary:" as plaintext, not in a code block.
- Place Markdown-formatted table immediately after the summary.
- Table columns: Requirement (string), Result ('Pass' or 'Fail'), Notes (may be empty or explanatory).
# Verbosity
- Keep summaries concise and clear.
- Use Markdown formatting only for tables; do not encapsulate summaries in code blocks.
# Stop Conditions
- Consider complete when all of the following are provided in the specified formats:
1. Test-report digest (summary) when test results exist.
2. Table of Pass/Fail outcomes as specified.
3. Prioritized delta summary if revision summaries are provided.
4) Supplier request for change (RFC)
Begin with a concise checklist (3-7 bullets) of what you will do; keep items conceptual, not implementation-level. Compose a professional and concise email to [supplier] regarding the need to increase bolt strength due to a higher load requirement, using the details provided:
- Current specification: M8 class [X]
- Proposed specification: M8 class [Y]
- Reason: Load increased to [value]
- Deadline: Needed by [date]
- Request: Ask for lead time and cost difference for the new specification
All placeholders ([supplier], [X], [Y], [value], [date]) must be provided before generating the email. If any of these required fields are missing, respond with a JSON error message clearly identifying which fields are incomplete.
After generating the output, validate that all placeholders are filled and that the required output format is followed. If errors are detected, self-correct or respond with the appropriate error message.
## Output Format
If any required fields are missing, output only a JSON object in the following format:
{
"error": "Missing required fields: [field1], [field2], ..."
}
If all fields are provided, output the email body as plain text in the following standard format:
Subject: Request for Updated Bolt Specification and Pricing
Dear [supplier],
We need to increase the strength of the bolts due to an increased load requirement. Our current specification is M8 class [X], and we propose upgrading to M8 class [Y] because the load has increased to [value]. We need the updated specification by [date].
Please provide the lead time and any cost difference for the revised specification.
Thank you,
[Your Name]
5) Executive brief + Q&A
Begin with a concise checklist (3-7 bullets) of what you will do; keep items conceptual, not implementation-level. Write a one-paragraph status update summarizing the result of the vibration test, noting that it performed better than expected, and stating that the next step is high-temperature validation.
After the status, generate exactly 3 likely executive questions, each with a concise answer. The questions must focus individually on: cost, risk, and schedule respectively.
Output the result as a JSON object with these fields:
- "status": string — The status summary paragraph.
- "executive_questions": array — Exactly 3 objects, each with:
- "question": string
- "answer": string
If a suitable question or answer cannot be generated, use an empty string for the affected field.
# Output Format
{
"status": string,
"executive_questions": [
{"question": string, "answer": string},
{"question": string, "answer": string},
{"question": string, "answer": string}
]
}
After generating the output, perform a brief self-validation: check that the JSON object structure is correct, that exactly three executive questions are included and each one uniquely addresses cost, risk, or schedule. If validation fails, revise as needed before final output.
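If you reuse this workflow often, the JSON output is easy to machine-check as well. Here is a minimal sketch of that validation step in Python; the sample reply is only a placeholder shaped like the expected output.

```python
# Minimal schema check for the JSON that workflow 5 asks ChatGPT to return.
# Paste the model's raw reply into `reply` before running.
import json

def validate_brief(raw: str) -> dict:
    """Parse the reply and confirm it matches the expected structure."""
    data = json.loads(raw)  # raises an error if the reply is not valid JSON
    assert isinstance(data.get("status"), str), "missing 'status' string"
    questions = data.get("executive_questions")
    assert isinstance(questions, list) and len(questions) == 3, \
        "expected exactly 3 executive questions"
    for q in questions:
        assert isinstance(q.get("question"), str), "question must be a string"
        assert isinstance(q.get("answer"), str), "answer must be a string"
    return data

# Placeholder reply shaped like the expected output
reply = """{
  "status": "Vibration testing complete; results better than expected.",
  "executive_questions": [
    {"question": "Cost impact?", "answer": "No change to budget."},
    {"question": "Remaining risk?", "answer": "High-temperature behavior unverified."},
    {"question": "Schedule?", "answer": "High-temp validation starts next week."}
  ]
}"""

print(validate_brief(reply)["status"])
```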
Structured output patterns (copy/paste)
Risk matrix (Markdown table)
Return a table with: Severity | Mode | Cause | Detection | Mitigation.
Limit to 8 rows. Severity scale: Critical/Moderate/Minor.
Checklist review
Use this checklist [paste]. Evaluate the design summary against it.
List Nonconformances first, then Observations. End with 3 recommended actions.
Comparison table
Compare [Material A] vs [Material B] for: yield strength, corrosion resistance, machinability, cost (qualitative).
Output a Markdown table. Add a two-sentence summary of tradeoffs.
Domain‑specific prompt snippets
Gears
What factors drive stress in a steel spur gear under load, and how can I reduce it?
Bullet list. Separate design vs manufacturing levers.
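To sanity-check whatever ChatGPT says about gear stress, a rough Lewis bending-stress estimate goes a long way. A minimal sketch with hypothetical values; look up the real Lewis form factor for your tooth count and pressure angle.

```python
# Rough Lewis bending-stress estimate for a spur gear tooth.
# All values below are hypothetical placeholders, not from any prompt here.
F_t = 500.0      # tangential tooth load, N (hypothetical)
m = 2.0e-3       # module, m (hypothetical)
b = 20.0e-3      # face width, m (hypothetical)
Y = 0.32         # Lewis form factor (approx., ~20 teeth at 20 deg pressure angle)

sigma = F_t / (b * m * Y)    # tooth-root bending stress, Pa
print(f"Lewis bending stress: {sigma/1e6:.0f} MPa")
```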
Cooling
List cooling options for a 500 W motor in a small enclosure.
Now focus only on passive options. Limit to top five with pros/cons.
Bolted joints
Given: spacing 2D, preload 70% of yield, 1.5D thread engagement.
Identify likely mistakes and risks. Return: Issue | Why it matters | Fix.
Bracket load case
Bracket 6061‑T6, 150×50×10 mm, two M8 bolts. Identify concerns.
Focus on bolt-hole stress, fatigue, fillet radii, fastener strength.
Return short bullets with references to what to check next.
Do / Don’t
Do
- Paste real context (loads, materials, environment).
- Ask for the format you need (table, bullets, JSON).
- Iterate: broad → narrow → final.
- Use checklists and severity tags for reviews.
- Verify math and units.
Don’t
- Treat outputs as sign‑off.
- Paste confidential drawings or pricing.
- Ask for everything at once—scope your ask.
- Accept vague answers—push for structure.
Quick reference
- Design brainstorm: “Suggest failure modes for a cantilevered aluminum bracket supporting 100 kg. Bullet list. Top 7 only.”
- CAD review support: “Act as a reviewer. Based on this design summary, list issues by severity (Critical/Moderate/Minor) with recommendations.”
- Report drafting: “Write a paragraph summarizing these FEA results for a formal report: [paste bullets].”
- Email drafting: “Email to supplier: We need stronger bolts due to increased load. Keep it concise and professional.”
- Meeting notes: “Turn these bullets into minutes with decisions and action items.”
- Jargon simplification: “Explain bearing preload in simple terms for a non‑engineer.”
- Checklist review: “Here is our design checklist: [items]. Review this design summary against it and list nonconformances first.”
- Exec role‑prep: “I’m presenting a prototype to execs. What questions might they ask about cost, risk, and schedule, and how should I answer?”