The Royal Institution of Chartered Surveyors' Responsible Use of AI Standard: What It Means for Firms

The surveying profession is entering a pivotal moment in its relationship with technology. Artificial intelligence has moved rapidly from a peripheral tool to something embedded across valuation, inspection, reporting, and data analysis. In response, the Royal Institution of Chartered Surveyors has introduced its first global professional standard on the responsible use of AI, designed to ensure innovation does not outpace professional accountability.
Published in September 2025, the Responsible Use of Artificial Intelligence in Surveying Practice professional standard becomes effective on 9 March 2026, at which point compliance will be mandatory for RICS members and regulated firms worldwide.
This date is significant because it marks the transition from guidance to enforceable professional expectations. From March 2026 onward, firms will need to demonstrate that their governance, processes, and professional judgement align with the requirements set out in the standard, rather than treating AI as simply another IT tool.

Why the Standard Has Been Introduced

AI is now used across the built environment to support data analysis, automate reporting, generate insights, and even inform professional opinion. While these tools bring clear efficiency and analytical benefits, they also introduce risks such as bias, inaccurate outputs, lack of transparency, and data misuse. The new RICS standard aims to balance these opportunities with safeguards that protect clients, public trust, and the reputation of the profession.
Importantly, the standard reinforces a core principle: AI can support professional work but does not replace professional responsibility. Surveyors remain fully accountable for outputs, regardless of the level of automation involved.

Key Requirements and Expectations

The new framework introduces a structured approach to how AI should be governed, procured, and used within surveying practice. It applies where AI has a material impact on service delivery, meaning firms must actively assess whether a system influences advice, analysis, or reporting outcomes.
One of the central expectations is governance. Firms must implement clear policies covering data use, system oversight, and risk management, including maintaining risk registers that document potential issues such as bias, erroneous outputs, and data retention concerns.
Professional judgement is another cornerstone. Surveyors are required to assess the reliability of AI outputs and document their reasoning where those outputs influence professional advice. This ensures the decision-making process remains transparent and defensible.
Transparency with clients is also mandatory. Firms must inform clients when AI is being used in delivering services and explain how it affects the work, reinforcing trust and allowing clients to understand the role of technology in professional advice.
In addition, there is a clear emphasis on competence. Members and staff must have a baseline understanding of AI systems, including their limitations, risks, and potential failure modes, supported by training and ongoing professional development.

What Firms Must Do to Adhere

To comply from March 2026, firms will need to embed AI governance into their operational frameworks rather than treating it as an informal or ad-hoc process. This typically involves creating or updating internal policies, documenting due diligence on AI providers, and ensuring staff understand both the capabilities and limitations of the tools they use.
They will also need clear audit trails. Decisions to rely on AI outputs, the reasons for doing so, and any limitations identified should be recorded in writing, particularly where outputs influence advice or valuations.
Risk management will need to be demonstrable. Maintaining registers of AI-related risks, mitigation plans, and monitoring processes will form part of showing compliance with the professional standard.
Finally, client communication processes may need to be updated, ensuring disclosures about AI use are built into terms of engagement, reports, or service documentation.

Practices That May Already Be in Breach

Although the standard is not enforceable until March 2026, many firms may already be operating in ways that would fall short of the new expectations.
A common example is the informal use of generative AI to summarise documents or draft sections of reports without documented review or validation. Where such outputs materially influence professional advice, failing to record the decision to use AI or to assess its reliability would conflict with the new requirements.
Another potential issue is a lack of transparency. If firms are using automated analytics or AI-driven tools in valuations or inspections without informing clients, this would not align with the new obligation to disclose AI use in service delivery.
Similarly, relying on third-party AI platforms without formal due diligence, risk assessment, or data governance processes may also fall short once the standard takes effect. The requirement to understand data risks and system limitations means passive or unexamined adoption will no longer be sufficient.
Finally, insufficient staff knowledge could present compliance risks. If teams are using AI tools without training or understanding of limitations, firms may struggle to demonstrate the competence and oversight expected under the new framework.

The Wider Impact on the Profession

The introduction of the Responsible Use of AI standard represents more than a compliance exercise. It signals a shift in how professional judgement is defined in an increasingly digital environment. By setting clear expectations around governance, transparency, and accountability, RICS is effectively positioning AI as a tool that must operate within the same ethical and professional boundaries as any other surveying methodology.
For many firms, the transition will involve formalising processes they have already begun developing. For others, it will require a more fundamental review of how technology is embedded into workflows. Either way, the direction is clear: AI will continue to play a growing role in surveying, but only within a framework that preserves trust, independence, and professional responsibility.