An Approach to Ethical AI in the AEC/O industry
Discover how AI is transforming the AEC/O sector - unlocking innovation while navigating challenges with ethical and responsible solutions.
Author
Charlie Sheridan
Chief Data and AI Officer
There’s no denying that the AEC/O sector is on the verge of a massive digital transformation, driven primarily by advancements in artificial intelligence (AI). From automating manual processes and generating designs to enabling predictive maintenance and optimizing energy usage, AI is actively reshaping the industry to bring about a more efficient and innovative future.
However, despite AI’s revolutionary potential, its implementation in the AEC/O industry comes with an equally unprecedented set of challenges. More specifically, between addressing emerging risks in safety and data security and complying with evolving regulations, the ethical and responsible use of these technologies is paramount to ensuring a successful and sustainable transformation.
Identifying the Challenges & Risks
In a certain sense, the use of AI in the AEC/O industry is a classic example of how the potential for high rewards often corresponds with an elevated level of risk. Put simply, the more influence we allow these technologies to have over the construction and operation of buildings, the more careful we need to be in ensuring positive outcomes and preventing and/or responding to failures and complications.
We believe it's important to consider a few key risk factors that AEC/O professionals face when integrating AI-powered technologies.
Data Privacy and Security
AI’s ability to maximize creativity and optimize design generation is bound to result in a significant increase in innovation. Protecting the new proprietary models and intellectual property (IP) created with AI is therefore both increasingly important and complex, particularly as policymakers grapple with the implications of AI for IP ownership and infringement. Moreover, in most AEC/O use cases, the utilization of AI necessarily involves the collection, analysis, and management of data covering the machinery in use and virtually all other aspects of the building lifecycle. Safeguarding such sensitive data is vital: cybersecurity attacks and data breaches are on the rise across the digital ecosystem, from identity theft targeting individual workers to the financial extortion of entire organizations through the seizure of sensitive project data and critical operational systems. Protecting project data and operational systems from such breaches and ransomware attacks is therefore a critical priority. That is why, at Nemetschek, we champion the integration of advanced security measures and advocate for industry-wide cooperation to address these pressing challenges.
Safety
Safety is also a major concern when it comes to the integration of AI into construction and operational processes. After all, construction sites are inherently complex and high-risk environments for workers even without AI, and ensuring structural integrity is imperative for protecting the welfare of building occupants. At Nemetschek, we emphasize the synergy of AI and human expertise. We advocate for rigorous testing, validation, and real-time oversight to ensure AI systems contribute to safer, more reliable outcomes. AI should empower professionals rather than replace their critical judgment.
Business Integrity and Compliance
Overlapping with the above two risks are the issues of business integrity and regulatory compliance. Whether related to safety or data privacy and security, the misuse of AI systems and AI-generated content can result in financial losses as well as costly legal challenges and irreparable damage to a company’s reputation. At Nemetschek, we advocate for a proactive approach. This includes implementing robust frameworks for safe AI integration, while maintaining a keen eye on increasingly complex and constantly evolving policies and regulatory requirements.
Navigating Evolving Regulations
One of the reasons integrating AI in the AEC/O space is challenging is that technological advancements have largely outpaced the introduction of new regulations. Policymakers worldwide recognize the need for requirements and oversight but differ in their speed of implementation.
The European Union leads globally with its AI Act, the first comprehensive, enforceable AI-focused regulation. It emphasizes a “human-centric” approach to ethical AI use and sustainability. This aligns with our own Nemetschek view on this technology – for us, it’s key to be “AI-driven, yet human-centric”. The EU has proactively assessed AI’s societal and industrial impact since 2017, thereby setting the international tone for responsible innovation.
In the U.S., policymakers have been a bit slower to respond to the advancing capabilities of AI. However, President Biden’s October 30, 2023 Executive Order establishes a framework for the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Focused on the specific areas of risk management, innovation, public trust, and international collaboration, this marks a significant step toward formalized guidelines.
Across Asia, regulatory efforts and guidelines have been much more diverse. Japan, for example, has taken a notably cautious stance on AI, following in the footsteps of the EU by prioritizing ethical use and what it defines as the Social Principles of Human-Centric AI. South Korea, on the other hand, appears much more ambitious in its approach to AI innovation, with its National Strategy for Artificial Intelligence detailing intentions to aggressively increase investments in AI infrastructure and education, but without abandoning ethical considerations and the protection of personal data.
Elsewhere, countries including China, Singapore, and India are also working to strike their own balance between fostering innovation and mitigating potential risks.
While the EU is ahead, most nations remain in the early stages of AI regulation. It’s critical for the AEC/O industry to proactively adopt ethical, responsible AI practices, leveraging all relevant existing and emerging frameworks to shape future compliance initiatives.
Nemetschek's Vision & Commitment to Ethical AI
At Nemetschek, our vision for AI in the AEC/O sector has always been rooted in the need to implement these powerful technologies in a responsible and ethical fashion. This is why every iteration of the Nemetschek AI layer across our brands and products, as well as all other AI-based features and functionalities embedded in our products, is guided by six principles: (1) transparency and explainability, (2) privacy and data security, (3) robustness and reliability, (4) accountability and governance, (5) sustainability and society, and (6) a commitment to being stewards of change, helping the next generation of workers thrive in this new era of digital transformation.
Going forward, we hope that our continued dedication and comprehensive approach to ethical AI development and deployment will set the standard for others in our field, and that the AEC/O industry’s innovative future comes not at the expense of our environments and society at large, but to their benefit.