On 10 May 2026, the European Data Protection Board (EDPB) brought into effect supplementary GDPR guidelines requiring all B2B standalone websites targeting the EU market, including the official websites of Chinese suppliers, to disclose the geographic origin of the training data behind generative AI content. This directly affects the compliance of Chinese foreign trade companies' websites and constitutes a substantive compliance checkpoint, especially for export-oriented enterprises that rely on standalone websites to conduct B2B business; it warrants close attention from the cross-border trade, digital marketing, SaaS, and supply chain compliance sectors.
The guidelines apply to all B2B websites providing goods or services to businesses within the EU, including standalone websites operated by entities registered in China. The specific requirement: the privacy policy page, or a separately established AI statement page, must clearly and legibly state the source countries of the training data on which AI-generated content relies (for example, the United States, India, or China) and the corresponding data collection time range (for example, "January 2021 to June 2024"). The requirement is not retroactive, but any AI function launched or updated on or after the effective date must comply immediately.
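The guidelines describe the substance of the disclosure (source countries plus a collection time range) but, as reported, do not prescribe a fixed format. As a minimal sketch, assuming a site keeps one entry per AI feature, the required fields could be modeled and rendered like this; the class and field names are illustrative, not mandated:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIDisclosure:
    """One disclosure entry for a generative-AI feature on the site.

    Field names are illustrative; the guidelines require the substance
    (source countries, collection period), not a particular schema.
    """
    feature: str                 # e.g. "product description generator"
    source_countries: list[str]  # countries of origin of the training data
    collected_from: date         # start of the data collection period
    collected_to: date           # end of the data collection period

    def render(self) -> str:
        """Produce the human-readable line for the AI statement page."""
        countries = ", ".join(self.source_countries)
        return (
            f"{self.feature}: trained on data originating from {countries}, "
            f"collected {self.collected_from:%B %Y} to {self.collected_to:%B %Y}."
        )

entry = AIDisclosure(
    feature="Product description generator",
    source_countries=["United States", "India", "China"],
    collected_from=date(2021, 1, 1),
    collected_to=date(2024, 6, 30),
)
print(entry.render())
```

The rendered string is the kind of clearly readable statement the requirement describes: a named feature, its source countries, and a "January 2021 to June 2024" style period.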
Chinese B2B export enterprises (such as manufacturers of industrial equipment, electronic components, and customized machinery) generally reach EU buyers through self-built standalone websites. If their sites use AI-generated content such as product descriptions, technical parameter summaries, or multilingual customer service responses without disclosing the geographic origin of the training data, they may be flagged as compliance risks by regulators in member states such as Germany and France, which in turn can affect buyers' due diligence and order approval processes.
Manufacturing enterprises doing contract manufacturing (OEM/ODM) for overseas brands also fall within scope if their websites display AI-assisted content such as production-line video scripts, ESG report summaries, or compliance certification descriptions. The impact shows up mainly in buyers' compliance audit procedures: some leading EU buyers have already added GDPR AI disclosure items to the appendix of their Supplier Code of Conduct, and falling short can trigger a warning in contract performance evaluations.
Third-party service providers offering cross-border logistics, compliance certification, and multilingual website-building services primarily serve small and medium-sized foreign trade enterprises. This provision indirectly raises their service delivery standards: for example, website-building service providers need to verify whether information on the source of clients’ AI content training data is traceable; compliance consulting firms need to incorporate geographic origin fields into GDPR self-assessment checklist templates; and if translation service providers use AI-assisted post-editing (PEMT), they also need to clarify the territorial attribution of the underlying model’s training data.
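For compliance consultancies, incorporating the new field into a self-assessment checklist template is mechanical. A tiny illustrative sketch, assuming a flat list of checklist items (the item wording here is hypothetical, not taken from any official template):

```python
# A hypothetical base GDPR self-assessment checklist; item wording is
# illustrative only, not drawn from an official EDPB template.
BASE_CHECKLIST = [
    "Privacy policy published and current",
    "Lawful basis documented for each processing activity",
]

# New items covering the geographic-origin disclosure obligation.
AI_ORIGIN_ITEMS = [
    "Training-data source countries disclosed for each AI feature",
    "Data collection time range disclosed for each AI feature",
    "Supporting supplier documentation on file",
]

def extended_checklist() -> list[str]:
    """Append the AI-origin disclosure items to the base checklist."""
    return BASE_CHECKLIST + AI_ORIGIN_ITEMS

for item in extended_checklist():
    print(f"[ ] {item}")
```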
The current EDPB guidelines do not clearly define how the "geographic origin of training data" is to be determined (for example: the model developer's place of registration, the location of the data annotation team, or the country where the original data-scraping servers are located). Enterprises should follow the practical guidance to be released on the EDPB official website from the second quarter of 2026 onward, and avoid drafting disclosures based on non-authoritative interpretations.
There is no need to uniformly overhaul all AI applications. What deserves more attention at present are modules with high user exposure and influence on commercial decision-making, such as: AI-generated product recommendation copy on the homepage, AI-generated technical comparison tables on product pages, and AI-driven real-time response script libraries on inquiry pages. AI tools used internally in the backend (such as inventory forecasting models) are not within the scope of this disclosure requirement.
If an enterprise procures third-party AI SaaS services (such as copy generation and image synthesis tools), it must retrieve the contractual clauses regarding the source of training data in the service agreement and request written explanations from the supplier. Some overseas AI service providers have not proactively disclosed the geographic distribution of their model training data, and enterprises need to list this as one of the preconditions for supplier compliance onboarding.
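Listing training-data disclosure as a precondition for supplier onboarding can be made operational with a simple gate. This sketch assumes three hypothetical fields a vendor must evidence before approval; the field names are illustrative, not prescribed by the guidelines:

```python
# Hypothetical fields a vendor must evidence before compliance
# onboarding; names are illustrative, not prescribed by the EDPB.
REQUIRED_FIELDS = ("source_countries", "collection_period", "written_confirmation")

def onboarding_gaps(vendor_doc: dict) -> set[str]:
    """Return the required disclosure fields the vendor has not yet provided."""
    return {field for field in REQUIRED_FIELDS if not vendor_doc.get(field)}

# Example: a vendor that has disclosed source countries but supplied
# neither a collection period nor a written confirmation.
gaps = onboarding_gaps({"source_countries": ["United States"]})
print(sorted(gaps))
```

An empty result would mean the precondition is satisfied; anything else lists what to request from the supplier in writing.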
From an analytical perspective, the regulatory focus is on whether the disclosed information can be proven true and verifiable, not simply on whether a field exists on the page. At present, enterprises are best advised to proceed as follows: take an internal inventory of AI content usage; for each module, record the AI tool, version number, supplier name, and any obtainable documentation on the regions of the training data; and compile these into a basic traceability file to support subsequent page updates.
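The inventory step above can be sketched as a small traceability record; the field names are assumptions for illustration, not a mandated format:

```python
import json

def inventory_record(module: str, tool: str, version: str,
                     supplier: str, region_docs: list[str]) -> dict:
    """One row of the internal AI-usage inventory.

    `region_docs` lists whatever documentation on training-data regions
    could be obtained from the supplier; an empty list flags the module
    as not yet traceable.
    """
    return {
        "module": module,            # where AI content appears on the site
        "tool": tool,                # AI tool used by that module
        "version": version,
        "supplier": supplier,
        "region_docs": region_docs,
        "traceable": bool(region_docs),
    }

inventory = [
    inventory_record("homepage product copy", "CopyGen", "3.2",
                     "Acme AI", ["model_card_2024.pdf"]),
    inventory_record("inquiry-page chatbot", "ChatTool", "1.0",
                     "Beta AI", []),
]

# Persist the traceability file for later page updates and audits.
print(json.dumps(inventory, indent=2))
```

Modules with `traceable` set to false are exactly the ones where the supplier must be chased for documentation before any page disclosure is drafted.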
Notably, this provision is not a revision of the GDPR legal text; it is a further interpretive extension by the EDPB of the transparency principle under the current Articles 5, 13, and 14, i.e. a refinement at the enforcement level. It reads more like a structural signal: the EU is gradually extending AI governance from the stage of "algorithm accountability" to the stage of "data supply chain transparency." There have been no penalty cases specifically targeting this provision so far, but the data protection authority of the German state of Bavaria and the French National Commission on Informatics and Liberty (CNIL) have each listed "AI disclosure compliance for B2B websites" among their 2026 enforcement priorities. What the industry should watch is whether member-state regulators will link this disclosure item with high-risk system determinations under the EU Artificial Intelligence Act (AI Act), and whether buyers will adopt it as a new dimension in supplier ESG ratings.

Conclusion: this provision marks the extension of the EU's AI regulatory focus from the end-user side (B2C) to the commercial transaction side (B2B). Its significance lies not in immediate penalty pressure but in formally incorporating the geographic attributes of data into the compliance infrastructure of cross-border digital trade. For now it is best understood as a normalized requirement to be included in the annual compliance checklist, not a sudden crisis event. Enterprises should prepare along the principles of "verifiable, traceable, scenario-based," avoiding overreaction while not overlooking the long-term institutionalization trend.
Information source note:
Main source: “Guidelines 02/2026 on GDPR Transparency Obligations for Generative AI in B2B Contexts” published on the official website of the European Data Protection Board (EDPB), effective on 10 May 2026.
Items pending continued observation: whether the EDPB will issue binding implementation rules on the definition of “geographic origin”; and whether regulatory authorities in member states such as Germany and France will introduce supporting enforcement rules or typical penalty cases.