

The myth that stalls progress
The myth says AI hallucinates. The reality is that uncontrolled inputs and unclear rules invite junk answers. Give a junior analyst a box of unlabeled binders and a megaphone, and the results will be messy. Train that analyst, control the binders, and require citations, and the results tighten fast.
Governance profiles, not wild searches
AI for product intelligence should never roam the open web by default. Governance profiles define which data tiers each role can use, how strictly the model must cite, and when it should refuse to answer. Sales needs succinct, on‑label claims. Product management needs change tracking. Sustainability needs audit trails.
The four‑layer truth stack
Here is a practical layering that keeps answers anchored:
- Tier 1, manufacturer‑verified truth. PLM records, official spec sheets, verified bills of materials, signed utility data, past LCAs and EPD PDFs stored in your own repository.
- Tier 2, vendor‑maintained reference sets. Authoritative chemistry and materials libraries, electricity grid factors from recognized bodies, transport emissions factors from accepted inventories.
- Tier 3, curated whitelist. Approved online sources that are semi‑trusted for context. Think program operator FAQs or standard bodies, reviewed quarterly.
- Tier 4, open internet. Only when explicitly allowed, and always with a refusal policy if confidence or citations fall short.
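The four tiers above can be encoded as a small source registry so that every answer can be ranked by trust. This is a minimal sketch in Python; the class and field names are illustrative, not part of any particular product:

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    """Lower number = higher trust. Names here are illustrative labels."""
    MANUFACTURER_VERIFIED = 1   # PLM records, spec sheets, signed utility data
    VENDOR_REFERENCE = 2        # chemistry libraries, grid and transport factors
    CURATED_WHITELIST = 3       # approved online sources, reviewed quarterly
    OPEN_INTERNET = 4           # only when explicitly allowed

@dataclass(frozen=True)
class Source:
    name: str
    tier: Tier
    url: str = ""

def most_trusted(sources):
    """Return the highest-trust source available (lowest tier number)."""
    return min(sources, key=lambda s: s.tier)

sources = [
    Source("program operator FAQ", Tier.CURATED_WHITELIST),
    Source("verified bill of materials", Tier.MANUFACTURER_VERIFIED),
]
print(most_trusted(sources).name)  # verified bill of materials
```

Because the tiers are ordered integers, "prefer Tier 1 over Tier 3" becomes a plain `min()` over sources rather than a special case.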
Role‑based control that maps to risk
Different teams carry different risk. Sales might be locked to Tiers 1 and 2, with strict quote‑ready phrasing and zero extrapolation. Sustainability can open Tier 3 for context, but must attach the governing PCR and program operator link. Engineering can toggle Tier 4 for early research, while clearly marking anything non‑authoritative.
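The role mappings above could live in a small configuration object that the answering system consults before it touches any source. A minimal sketch, assuming hypothetical profile fields; real deployments would carry more policy detail:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceProfile:
    role: str
    allowed_tiers: frozenset       # data tiers this role may draw from
    citations_required: bool = True
    allow_extrapolation: bool = False

# Illustrative profiles matching the roles described in the text.
PROFILES = {
    "sales": GovernanceProfile("sales", frozenset({1, 2})),
    "sustainability": GovernanceProfile("sustainability", frozenset({1, 2, 3})),
    "engineering": GovernanceProfile("engineering", frozenset({1, 2, 3, 4}),
                                     allow_extrapolation=True),
}

def may_use(profile: GovernanceProfile, tier: int) -> bool:
    """Check whether a role is allowed to cite a source at this tier."""
    return tier in profile.allowed_tiers

assert not may_use(PROFILES["sales"], 3)      # sales locked to Tiers 1 and 2
assert may_use(PROFILES["engineering"], 4)    # engineering may toggle Tier 4
```

Keeping the policy in data rather than scattered through prompts makes it reviewable: a compliance lead can read the table without reading code.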
Guardrails that actually prevent hallucination
A few rules do the heavy lifting. Require source citations for every numeric output and every claim about environmental performance. Enforce refusal when the answer would rely on Tier 4 without corroboration in Tiers 1 or 2. Log prompts, sources, and versions so you can retrace a statement during a verification review. Flag when a cited PCR has been revised so the next update uses the correct rulebook.
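The refusal rule in the paragraph above is mechanical enough to sketch in code: refuse any claim with no citation, and refuse any claim that rests only on Tier 4. The claim shape and field names are assumptions for illustration:

```python
def check_answer(claims):
    """
    Apply two guardrails to a drafted answer.
    claims: list of dicts like {"text": ..., "source_tiers": [1, 3, ...]},
    where lower tier numbers mean more trusted sources.
    Returns ("ok", None) or ("refuse", reason).
    """
    for claim in claims:
        tiers = claim.get("source_tiers", [])
        if not tiers:
            return ("refuse", f"no citation for: {claim['text']}")
        # Best (lowest) tier must be 1 or 2 when Tier 4 is in the mix.
        if min(tiers) >= 4:
            return ("refuse",
                    f"Tier 4 only, no Tier 1/2 corroboration: {claim['text']}")
    return ("ok", None)

status, reason = check_answer([{"text": "GWP 12 kg CO2e", "source_tiers": [4]}])
print(status)  # refuse
```

In practice the same loop would also write the prompt, sources, and versions to an audit log so a verification review can retrace each statement.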
Why this matters for EPDs and LCAs
Demand for transparent, low‑carbon products is rising because buildings and construction account for about 37 percent of global energy and process‑related CO2 emissions, which puts material choices under a brighter light (GlobalABC, 2024). In Europe, CSRD brings roughly 50,000 companies into mandatory sustainability reporting, which increases the need for traceable product data and documented methods (European Commission, 2024). That pressure shows up on spec sheets, in procurement portals, and in pre‑bid questionnaires.
Humans stay in the loop by design
Governance profiles do not replace expertise. They elevate it. A sustainability lead approves the evidence and guards language against overreach. An LCA practitioner picks the PCR, checks background datasets, and signs off on assumptions. AI prepares, fetches, validates ranges, and redlines discrepancies so experts focus on judgement, not copy‑paste.
Proof that AI can outgrind, not overreach
We do not ask AI to settle scientific debates. We ask it to scrape, normalize, reconcile, and cross‑reference volumes of structured data that humans find dull. It never misses a row, never gets tired, and never forgets the last unit conversion. When it is unsure, it should say so plainly, then request more Tier 1 evidence. We don't reward guesswork.
A simple setup that scales
Start with your product truth. Inventory Tier 1 sources and assign owners. Approve Tier 2 references that match your categories and regions. Create a short whitelist for Tier 3. Write role profiles with default refusal rules and citation requirements. Pilot on one product family, track every generated claim with links, then expand.
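The last step, tracking every generated claim with links, can start as something as simple as an append-only ledger. A minimal sketch, assuming hypothetical field names; any claims database would do the same job:

```python
import json
import datetime

def record_claim(ledger_path, claim_text, source_urls, pcr_version):
    """
    Append a generated claim and its evidence links to a JSON-lines ledger,
    so a later verification review can retrace what was said and why.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "claim": claim_text,
        "sources": source_urls,       # links to Tier 1/2 evidence
        "pcr_version": pcr_version,   # the governing rulebook at time of writing
    }
    with open(ledger_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Recording the PCR version alongside each claim is what makes the "flag revised PCRs" guardrail possible later: the next update can diff the ledger against the current rulebook.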
Closing the loop
The myth was never that AI sees things that are not there. The myth is that we must accept that behavior. Manufacturers that lock in governance profiles and a layered truth stack get faster EPD prep, cleaner LCAs, and fewer sleepless nights. Let AI carry the load of data work, while people carry the responsibility for what is said, where it came from, and why it stands up in a review.


