How to Invent (Start Smart → Investigate → Iterate → Compute)

Use a human‑in‑the‑loop workflow with GPT (and your chosen tool stack) to generate, triage, and de‑risk inventions—while keeping a full prompt chain and human‑contribution log to support inventorship and patentability.

Start from a System, Domain, or Technology. Ask GPT to expose liabilities & whitespace, then generate/critique invention angles. Jump to code to produce in-silico evidence, commit artifacts, and loop.

Use your preferred stack—Claude Desktop + ToolUniverse or Kepler AI—to run a transparent idea‑to‑evidence loop. Keep PROMPTS.md, HUMAN_CONTRIBUTION.csv, DATA_SOURCES.csv, and CLAIM_CHART.csv updated as you go.

In‑silico only. No wet‑lab, fabrication, or hazardous build instructions.

Keep it private until we file.

Payments only if we close a license or sale (20% of total deal revenue until $1,000,000; most submissions won't result in a deal).

1) Recommended tech stacks (pick one)

Option A — Claude Desktop + ToolUniverse (MCP)

What it is: A desktop setup that links Claude to engineering/science tools via Model Context Protocol, so the model can plan, query tools, and document steps.

Why it's useful: Structured plans, tool discovery, and transparent tool calls for provenance.

Where to start: Claude Desktop + ToolUniverse setup guide.

Option B — Kepler AI

What it is: An enterprise agent platform for automating computational workflows with an audit trail, security, and reproducibility; suitable for curated pipelines and multi‑step analysis.

Why it's useful: Secure, end‑to‑end runs; change‑controlled pipelines; shareable, reproducible artifacts for your submission.

Where to start: Kepler AI product & docs.

We don't endorse vendors. Use what fits your environment and security posture.

Choose a Starting Point → Investigate → Iterate → Compute

Pick one starting point: System (e.g., a RAN/core stack you want to improve), Domain (e.g., grid operations, multi‑robot facilities, satellite ground segment), or Technology (e.g., chip packaging flow, OpenFOAM topology optimization, IoT OTA/attestation).

Loop (repeat until strong):

  1. Investigate (GPT): map physics/control/mechanism, liabilities, and whitespace

• Ask for: recent advances (last 3 years), field/benchmark failures and why, blocking physics/controls/materials/manufacturing issues, state of practice/standards context, key patent holders/themes, and unmet needs that can be addressed with computational de‑risking.

  2. Hypothesize (GPT): generate 5–10 concrete invention angles targeted at those gaps

• Examples: energy‑aware RAN controller; grid‑forming inverter control; chiplet assembly planning method; thermal topology for heat exchangers; OTA/attestation pipeline; multi‑robot arbitration protocol; satellite contact scheduler; materials coating/interlayer with modeled uplift.

  3. Critique (GPT): for each angle, demand kill-tests, draft claim language, and a 1-week evidence plan.

  4. Compute (Code): jump into an AI code editor (Cursor, VS Code + Codeium/Copilot, or Colab) and run in-silico analyses from the plan; push artifacts to your private repo/zip.

  5. Refine (GPT + Code): feed results back; iterate until one angle is clearly superior (evidence + claimability).

GPT prompt patterns (starting scaffolds—adapt & iterate)

Important: These are starting scaffolds, not copy-paste templates. You must adapt them to your specific problem, add your own insights, and iterate extensively.


The AI excels at: proposing multiple ideas (30–60+ in a single thread), investigating state-of-the-art, scoring concepts, and conducting patent searches when you force it to examine all relevant keywords and prior art.


Your contribution matters: Start with a specific problem, guide the research direction, critique outputs, make decisions, and document what you conceived vs. what the AI suggested. By cycling through dozens of ideas in the same conversation, you'll land on strong candidates by the end.

Phase 1: Start with a Problem & Research State-of-the-Art

Always begin with a concrete problem. Have the AI deeply research the current state: recent science/engineering, field failures, blocking physics/controls/materials, regulatory/standards hurdles, and competitive landscape.

Example starting prompt (adapt this):
"I want to improve [SYSTEM/DOMAIN/TECHNOLOGY]. First, research the current state-of-the-art:

What are the recent scientific/engineering developments (last 3 years)?

What field deployments or benchmarks have failed and why?

What are the blocking physics/controls/materials/manufacturing issues (stability, thermal, RF, routing, standards constraints)?

What's the state of practice/standards and their limitations?

Who are the key patent holders and what themes do they cover?

What unmet needs remain that could be addressed with computational de-risking?"

Your role: Provide the specific system/domain/technology. Review the AI's research and add your own domain knowledge. Identify gaps in the research and ask follow-up questions.

Phase 2: Generate 30–60 Ideas (Cycle in Same Thread)

Don't stop at 5–10 ideas. Push the AI to generate 30–60+ concepts in the same conversation, covering diverse angles. The best ideas often emerge after cycling through many mediocre ones.

Example starting prompt (adapt this):
"Generate 30 distinct invention concepts for [SYSTEM/DOMAIN/TECHNOLOGY], targeting the gaps from your research and spanning these angles:

  • Deployment/device innovations (edge/cloud split, packaging, thermal, antenna, route of deployment)

  • Materials/structures (dopants, interlayers, surface treatments, composites, metamaterials)

  • Sensor‑guided methods (decision thresholds; selection/control/maintenance)

  • Combinations with synergy (coordinated controllers/waveforms/schedulers/policies)

  • Design flows & orchestration (EDA/CFD/grid/planning tool‑calling agents)

  • Repurposing (new use contexts with standards alignment)

For each concept, provide: (1) one-line claim angle, (2) novelty hook, (3) fastest kill-test, (4) feasibility (what we can compute now)."

Your role: After the first batch, ask for 20 more. Then 10 more. Push for diversity. Identify patterns. Combine elements from different ideas. This is where your creative contribution shines.

Phase 3: Score, Investigate, & Force Patent Search

AI is excellent at scoring and investigating when you give it clear criteria. Force it to do comprehensive patent searches using all relevant keywords, not just obvious ones.

Example scoring & investigation prompt (adapt this):

"Score all 30+ ideas on a 0–5 scale for: Novelty/Patentability (30%), Performance Edge (25%), Commercial Upside (20%), Safety/Reliability (15%), Standards/Regulatory Feasibility (10%). Show a ranked table.


For the top 10, conduct a comprehensive patent search:

  • Search all relevant keywords: physics/controls terms, materials classes, device/packaging/flow terms, indication/use‑context names, route/deployment terms

  • Identify key assignees and their patent families

  • Map claim scope of blocking patents

  • Find whitespace where our concept is novel

  • Draft a preliminary independent claim that avoids prior art"

Your role: Review the scores and challenge them. Add keywords the AI missed. Verify patent search results. Decide which ideas to pursue based on your judgment, not just the AI's scores.
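
If it helps to sanity-check the AI's arithmetic, here is a minimal sketch of the weighted rubric above. The idea names and raw 0–5 scores are placeholders; only the weights (30/25/20/15/10) come from the prompt itself.

```python
# Minimal sketch of the weighted scoring rubric from the prompt above.
# Idea names and raw 0-5 scores are placeholders; only the weights
# (30/25/20/15/10) come from the prompt.
import pandas as pd

WEIGHTS = {
    "novelty": 0.30,      # Novelty/Patentability
    "performance": 0.25,  # Performance Edge
    "commercial": 0.20,   # Commercial Upside
    "safety": 0.15,       # Safety/Reliability
    "standards": 0.10,    # Standards/Regulatory Feasibility
}

ideas = pd.DataFrame([
    {"idea": "energy-aware RAN controller", "novelty": 4, "performance": 4,
     "commercial": 3, "safety": 4, "standards": 3},
    {"idea": "chiplet assembly planner", "novelty": 3, "performance": 4,
     "commercial": 4, "safety": 3, "standards": 4},
])

# Weighted sum across the five criteria, then rank.
ideas["score"] = sum(ideas[col] * w for col, w in WEIGHTS.items())
print(ideas.sort_values("score", ascending=False).to_string(index=False))
```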

Phase 4: Iterate & Refine (Your Contribution is Key)

Take the top candidates and iterate. Combine elements, fix weaknesses, add your domain expertise. This is where human inventorship is established.

Example refinement prompt (adapt this):

"For the top 3 concepts, create 5 variants each that:

  • Address the weakest scoring dimension

  • Combine elements from other high-scoring ideas

  • Add a sensor/telemetry decision rule if missing

  • Strengthen the claim differentiation from prior art

  • Improve manufacturability, standards alignment, or deployment pathway

For each variant, provide: refined claim, delta from original, new score, 1-week compute plan."

Your role: This is YOUR invention. You decide which variants to pursue, what elements to combine, and what makes sense scientifically/technically. Document your decisions in PROMPTS.md and HUMAN_CONTRIBUTION.csv.

Phase 5: Generate Compute Plan & Handoff to Code

Once you've selected the winning concept(s), have the AI generate a detailed computational plan that you'll execute in your code editor.

Example compute handoff prompt (adapt this):

"For [SELECTED CONCEPT], create a detailed computational validation plan:

  • Data sources needed (e.g., network traces, grid feeders, CAD/PDKs, meshes/BCs, flight logs, RF captures, materials datasets)

  • Specific analyses to run (ns‑3/srsRAN sims, RIC sandbox tests, OpenROAD/OpenLane flows, CFD runs, PyBaMM battery sims, grid power‑flow/forecasting, ROS/Gazebo trials)

  • Success criteria for each analysis

  • Python libraries/tools required

  • Shell commands to execute

  • Reproducibility requirements (seeds, environment, versions)

Output as /tasks.md with checkboxes and estimated time per task."

Your role: Review the plan, adjust based on your available resources and expertise, then execute in Cursor/VS Code/Colab. Feed results back to GPT for interpretation and next steps.
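
A minimal sketch of generating that /tasks.md checklist from Python, so the plan lives in the repo from the start. Task names and time estimates are placeholders; substitute the AI's actual plan.

```python
# Sketch: write the /tasks.md checklist the compute-handoff prompt asks for.
# Task names and time estimates below are placeholders; adapt to your plan.
from pathlib import Path

tasks = [
    ("Fetch and validate input datasets", "2h"),
    ("Run baseline simulation with fixed seeds", "4h"),
    ("Run candidate-concept simulation with the same seeds", "4h"),
    ("Compare metrics vs. baseline with confidence intervals", "2h"),
    ("Commit artifacts to /results and update DATA_SOURCES.csv", "1h"),
]

lines = ["# Computational validation plan", ""]
lines += [f"- [ ] {name} (~{est})" for name, est in tasks]
Path("tasks.md").write_text("\n".join(lines) + "\n")
```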

Remember: The goal is to cycle through 30–60+ ideas in a single conversation thread. The AI will get better at understanding your domain and preferences as the conversation progresses. By the end, you'll have 2–3 strong candidates with clear differentiation, computational validation plans, and documented human contribution.

Document everything: Keep PROMPTS.md (full conversation), HUMAN_CONTRIBUTION.csv (your decisions and insights), and DATA_SOURCES.csv (all data used). This establishes inventorship and reproducibility.

Step into Code (Cursor / VS Code / Colab)

Editors

Cursor, VS Code (+ Codeium/Copilot), or Google Colab.

Create repo skeleton

/analysis
/data_raw
/data_proc
/notebooks
/models
/results
README.md
environment.yml
requirements.txt
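
A convenience sketch that creates this skeleton in one pass, also touching the tracking files named earlier in this guide (PROMPTS.md, HUMAN_CONTRIBUTION.csv, DATA_SOURCES.csv, CLAIM_CHART.csv):

```python
# Sketch: create the repo skeleton above, plus the tracking files this
# guide asks you to keep. Safe to re-run; existing files are untouched.
from pathlib import Path

DIRS = ["analysis", "data_raw", "data_proc", "notebooks", "models", "results"]
FILES = ["README.md", "environment.yml", "requirements.txt", "PROMPTS.md",
         "HUMAN_CONTRIBUTION.csv", "DATA_SOURCES.csv", "CLAIM_CHART.csv"]

root = Path(".")
for d in DIRS:
    (root / d).mkdir(exist_ok=True)
for f in FILES:
    (root / f).touch(exist_ok=True)
```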

Baseline Python stack (free & common)

Core: numpy, pandas, scipy, scikit‑learn, matplotlib/plotly, numba, joblib, pydantic.

Optimization: cvxpy, pyomo, nevergrad, pymoo, OR‑Tools.

Telecom/RAN: ns‑3 (Python helpers), srsRAN (scripts), RIC sandbox APIs, scapy.

IoT/Edge: paho‑mqtt, protobuf, cryptography (sign/attest), locust (traffic).

Semiconductors/EDA: OpenROAD/OpenLane (TCL/Make wrappers), KLayout/Python, Verilator, yosys.

CFD/Thermal: OpenFOAM (+ PyFoam), SU2 (pySU2), fenics/dolfinx for PDEs.

Batteries/Energy Storage: PyBaMM; basic electrochemical modeling helpers.

Electric Power/Grid: powsybl (bindings/REST), grid2op, pandapower, prophet/statsmodels for forecasting.

Robotics/Facilities: ROS 2 (rclpy), Open‑RMF APIs, Gazebo/ignition msgs, MoveIt Python.

SDR/RF/Physics: GNU Radio (Python blocks), liquid‑dsp bindings, numpy‑fft, xarray.

Reproducibility: conda/mamba, environment.yml, Dockerfiles, deterministic seeds; PROMPTS.md.
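
A minimal sketch of the deterministic-seed and environment-capture habits above; the seed value and output path are placeholders you should commit alongside results.

```python
# Sketch: fix seeds and record the environment next to your results so
# every run can be reproduced. Seed value and output path are placeholders.
import json
import platform
import random
import sys
from pathlib import Path

import numpy as np

SEED = 20240101  # fixed, committed seed so runs are repeatable
random.seed(SEED)
np.random.seed(SEED)                 # legacy NumPy global state
rng = np.random.default_rng(SEED)    # preferred Generator for new code

Path("results").mkdir(exist_ok=True)
env = {
    "python": sys.version,
    "platform": platform.platform(),
    "numpy": np.__version__,
    "seed": SEED,
}
with open("results/run_env.json", "w") as f:
    json.dump(env, f, indent=2)
```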

First compute passes (typical)

Telecom/RAN: simulate xApp/rApp policies (handover, energy, slicing) in ns‑3/srsRAN; report latency/throughput/energy vs. baselines with CIs.

IoT/Edge: OTA/attestation pipeline under constrained airtime; reliability + airtime budget; security proofs.

Semiconductors/EDA: OpenROAD/OpenLane flows; tabulate PPA/DRC/yield proxies; convergence steps saved.

CFD/Thermal: mesh convergence + UA/ΔP/η metrics; manufacturability filters; sensitivity to BCs.

Batteries: PyBaMM‑based fast‑charge/formation profiles; SOH/SOC error bounds; thermal constraints.

Grid: OpenSTEF‑style forecasting + control; MAE/MAPE and stability metrics under contingencies; DER orchestration.

Robotics/Facilities: RMF scheduler trials; no‑deadlock proofs, task latency distributions, safety envelope.

SDR/RF: BER/BLER vs. SNR with adaptive waveforms/beamforming; synchronization robustness.
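
Most of these passes end the same way: a metric versus a baseline, with confidence intervals. A minimal sketch of that comparison using a percentile bootstrap; the latency arrays are synthetic placeholders standing in for real simulator output.

```python
# Sketch: candidate vs. baseline with a percentile-bootstrap CI on the
# mean difference. The latency arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(20.0, 2.0, size=500)   # e.g., per-run latency (ms)
candidate = rng.normal(18.5, 2.0, size=500)

def bootstrap_ci(a, b, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for mean(a) - mean(b)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each arm with replacement, compare means.
        diffs[i] = (rng.choice(a, a.size).mean()
                    - rng.choice(b, b.size).mean())
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return diffs.mean(), lo, hi

mean_diff, lo, hi = bootstrap_ci(candidate, baseline)
print(f"latency delta: {mean_diff:.2f} ms, 95% CI [{lo:.2f}, {hi:.2f}]")
```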

All artifacts belong in the repo, together with DATA_INVENTORY.csv.

Evidence we expect in your submission

  • Quantified improvement tied to your claim: e.g., "non-inferior latency with ≥30% energy reduction," or "PPA closure X% faster at same DRC cleanliness," or "UA/ΔP uplift at fixed volume/weight," or "grid reliability index improved under disturbances," with confidence intervals.

  • Combination/coordination analysis: provide ablation studies, baselines, and statistical tests showing greater‑than‑additive uplift, not just stacking components (see the sketch after this list).

  • Modeling package: physics/control assumptions, stability bounds, edge cases; for materials, phase/transport rationale and safety windows.

  • Analog/variant strategy: for materials/structures, show sensitivity to dopants/layers/geometry; for controllers, parameter bands and guardrails.

  • Sensor/telemetry rule: explicit decision logic (thresholds, inclusions/exclusions), feasibility of measurement, and expected uplift.

  • Reproducibility: exact commands, seeds, env file, and a /results summary with metrics & plots.
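
For the combination/coordination item, a minimal sketch of a greater-than-additive (synergy) check: bootstrap the term uplift(A+B) − uplift(A) − uplift(B) and see whether its CI excludes zero. The per-run uplift arrays are synthetic placeholders for your own ablation outputs.

```python
# Sketch: greater-than-additive (synergy) check for a combination claim.
# Per-run uplift arrays vs. a shared baseline are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
uplift_a = rng.normal(0.08, 0.02, 300)    # component A alone
uplift_b = rng.normal(0.05, 0.02, 300)    # component B alone
uplift_ab = rng.normal(0.18, 0.03, 300)   # A and B coordinated

# Bootstrap the synergy term: uplift(A+B) - (uplift(A) + uplift(B)).
n_boot = 10_000
synergy = np.empty(n_boot)
for i in range(n_boot):
    synergy[i] = (rng.choice(uplift_ab, uplift_ab.size).mean()
                  - rng.choice(uplift_a, uplift_a.size).mean()
                  - rng.choice(uplift_b, uplift_b.size).mean())

lo, hi = np.quantile(synergy, [0.025, 0.975])
print(f"synergy estimate: {synergy.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# A CI excluding zero supports "greater-than-additive," not mere stacking.
```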

"What's lucrative right now" (how to tilt for licensing)

"What's lucrative right now" (how to tilt for licensing)

"What's lucrative right now" (how to tilt for licensing)

  • Deployment & setting shifts that move computation cloud → edge, or embed into devices/packaging/thermal routes with clear operator value.


  • Reliability/efficiency "biobetter" analogs for systems (controllers/designs that extend life, cut energy, or stabilize performance).


  • Materials/structures (coatings/interlayers/dopants) that unlock reliability, safety, or efficiency and fit existing manufacturing.


  • Repurposing with telemetry selection (clear decision rules) reducing deployment risk.


  • Coordinated systems with quantified synergy and robustness under disturbances.


  • Processes & flows that solve real problems (yield, cost, thermal, EMI, OTA reliability) and are manufacturable.

  • Deployment & setting shifts that move computation cloud → edge, or embed into devices/packaging/thermal routes with clear operator value.


  • Reliability/efficiency "biobetter" analogs for systems (controllers/designs that extend life, cut energy, or stabilize performance).


  • Materials/structures (coatings/interlayers/dopants) that unlock reliability, safety, or efficiency and fit existing manufacturing.


  • Repurposing with telemetry selection (clear decision rules) reducing deployment risk.


  • Coordinated systems with quantified synergy and robustness under disturbances.


  • Processes & flows that solve real problems (yield, cost, thermal, EMI, OTA reliability) and are manufacturable.

Human contribution (inventorship) — what to log

  • Who conceived the core idea and each claim element.

  • What the AI suggested vs. what you decided (keep PROMPTS.md + edits).

  • Why a human's contribution is significant to the claimed subject matter (aligns with current USPTO guidance on AI‑assisted inventions).

  • IDs/timestamps (issue/PR IDs, commit hashes) linking to files/figures.
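
A minimal sketch of appending to HUMAN_CONTRIBUTION.csv as decisions happen. The column schema here is an assumption, not a mandated format; keep whatever fields your log already uses, as long as the who/what/why/when above is captured.

```python
# Sketch: append one row to HUMAN_CONTRIBUTION.csv per human decision.
# The column schema is an assumption, not a mandated format.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("HUMAN_CONTRIBUTION.csv")
FIELDS = ["timestamp", "claim_element", "ai_suggested",
          "human_decision", "rationale", "artifact_ref"]

def log_contribution(claim_element, ai_suggested, human_decision,
                     rationale, artifact_ref=""):
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "claim_element": claim_element,
            "ai_suggested": ai_suggested,
            "human_decision": human_decision,
            "rationale": rationale,
            "artifact_ref": artifact_ref,  # e.g., commit hash or PR ID
        })

# Hypothetical example entry:
log_contribution(
    claim_element="telemetry decision threshold",
    ai_suggested="fixed 5% threshold",
    human_decision="adaptive threshold tied to measured noise floor",
    rationale="fixed threshold failed my review of edge cases",
    artifact_ref="commit abc1234",  # placeholder
)
```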
