Cyber risks for architects in a world of AI
RIBA's business partner Mitigo discuss the potential impact of Artificial Intelligence (AI) from a cybercrime perspective, and provide some tips on how architects can mitigate the risk that AI presents.
AI is a big topic. Many professional service firms are already using AI or exploring its potential to revolutionise the way they deliver their services. Foster + Partners, for one, are working on a ChatGPT-like programme that will assist their engineering and design teams.
Arup was reported recently as saying it sees AI and machine learning as fundamental to the future of the firm. It is already using AI technology to assess the stress resilience of buildings by interpreting images from drone-mounted cameras that survey a structure’s exterior.
Clearly, the speed and scope of AI offer extraordinary opportunities for designers, and it will not be long before architectural practices across the board are leaning on it heavily. But it’s not all good news for the profession.
Cybercriminals are also interested in the benefits of AI and how it can make their activities more profitable. We’ve written before about how easily cybercriminals can exploit laxness in a firm's email systems to steal data or incapacitate their IT infrastructure before making ransom demands. Here, we discuss the potential impact of AI from a cybercrime perspective and provide some tips on how to mitigate the risk AI presents.
There are three main aspects to consider.
1. Local unauthorised use of AI tools
Staff members may already be using ChatGPT and other AI tools to make their work more effective. In our cybersecurity assessments, we often see a significant footprint of AI tools being used locally on employees' computers. This may not be fully visible to the business and those responsible for security.
The issues here are:
- downloading of applications that aren’t subject to the appropriate level of due diligence
- uploading business information and data into hosted AI engines where control is lost
- loss of effectiveness of existing controls, e.g. antivirus may not be configured to monitor these new processes
Takeaway actions:
- start with a policy that defines legitimate use and make sure it is published and understood
- create a process to assess and approve/decline existing use cases
- ensure local admin rights and antivirus (AV) settings prevent the unauthorised download and installation of applications on devices
- toughen browser and AV settings to flag the use of AI websites or websites with low trust scores (a simple monitoring sketch follows this list)
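As a simple illustration of the last two actions, the sketch below scans an exported proxy or DNS log for visits to well-known AI services that have not been approved. The log format, file names and domain lists here are assumptions for illustration only; a real implementation would use your own logging output and your own register of approved tools.

```python
# Minimal sketch, assuming a plain-text log with one "user,hostname" pair per
# line and illustrative domain lists; adapt both to your own environment.
import csv
from pathlib import Path

APPROVED_AI_DOMAINS = {"copilot.microsoft.com"}           # assumed example
KNOWN_AI_DOMAINS = {"chat.openai.com", "chatgpt.com",
                    "gemini.google.com", "claude.ai"}      # assumed examples

def flag_unapproved_ai_use(log_path: str) -> list[tuple[str, str]]:
    """Return (user, hostname) pairs where an unapproved AI site was visited."""
    flagged = []
    with Path(log_path).open(newline="") as handle:
        for row in csv.reader(handle):
            if len(row) < 2:
                continue  # skip malformed lines
            user, hostname = row[0].strip(), row[1].strip().lower()
            if hostname in KNOWN_AI_DOMAINS and hostname not in APPROVED_AI_DOMAINS:
                flagged.append((user, hostname))
    return flagged

if __name__ == "__main__":
    for user, host in flag_unapproved_ai_use("proxy_log.csv"):
        print(f"Review with {user}: unapproved AI service {host}")
```

Output like this is a starting point for the approve/decline process above, not a disciplinary tool: the aim is to discover existing use cases and bring them under policy.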
2. Poor development and implementation of AI
When AI is developed and implemented, the focus will naturally be on the benefit it can bring to the business, such as reducing costs or increasing efficiency. As a result, security can be overlooked at the design stage, which in turn can lead to vulnerabilities.
The issues here are:
- the development process will require you to experiment with different services and providers. This carries an inherent risk, as cybercriminals will move fast to insert malicious code into such services (this is already happening)
- you are introducing a new supplier and processes into your supply chain and these need to be controlled
- the attack surface of your organisation has changed and potentially grown, so you need to design appropriate controls and security around it
Takeaway actions:
- a separate environment should be created for the development and experimentation process to reduce the risk of a malicious actor connecting to your business-as-usual network
- a due diligence process should be designed and carried out on new suppliers
- existing policy needs to be updated to include the new technology and processes (for example, how software patches are identified and applied; a simple verification sketch follows this list)
- your control framework needs to be updated: what controls, monitoring and alerts need to be created to secure the new business process?
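One practical way to support supplier due diligence and patch control is to check that anything downloaded for an AI project matches a manifest of artefacts you have already vetted. The sketch below assumes a simple text manifest of SHA-256 hashes and a download folder; the file names and format are illustrative, not a prescribed standard.

```python
# Minimal sketch, assuming a manifest of approved artefacts in the form
# "<sha256>  <filename>" (one per line); paths and names are illustrative.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_against_manifest(manifest_path: str, download_dir: str) -> list[str]:
    """Return files in download_dir that are missing from, or do not match,
    the approved manifest - candidates to block before deployment."""
    approved = {}
    for line in Path(manifest_path).read_text().splitlines():
        parts = line.split(maxsplit=1)
        if len(parts) != 2:
            continue  # skip blank or malformed lines
        expected_hash, name = parts
        approved[name.strip()] = expected_hash.lower()

    problems = []
    for artefact in Path(download_dir).iterdir():
        if not artefact.is_file():
            continue
        expected = approved.get(artefact.name)
        if expected is None or sha256_of(artefact) != expected:
            problems.append(artefact.name)
    return problems

if __name__ == "__main__":
    for name in verify_against_manifest("approved_manifest.txt", "downloads"):
        print(f"Do not deploy: {name} is not on the approved manifest")
```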
3. Increased sophistication of cyberattacks powered by AI
The adoption of AI by cybercriminals to launch attacks and exploit vulnerabilities is arguably the biggest threat to a business. This includes an enhanced ability to get around cybersecurity training and control measures.
The issues here are:
- spotting flaws in emails and websites has long been a protection against cybercrime, but AI will make deception far more sophisticated. Social engineering can be taken to a new level, with multiple coordinated approaches used to entrap a victim
- impersonation is often a key part of attacks. Imagine deepfaked images and voices, and think about what criminals could do with them
- the speed of development will increase. Every time a control stops a piece of malicious code, AI will give criminals the ability to analyse the block instantly and write a workaround
Takeaway actions:
- simulated attacks on staff need to be more frequent and mimic the new approaches
- authentication and conditional access need to be improved to make the stealing of credentials even more difficult for criminals
- layers of defence will be essential. If a human gets duped, ensure there is sufficient control and alerting to stop the progression of an attack (a simple alerting sketch follows this list)
- assessment and assurance will become increasingly important. Frequent assessment by experts will be required to keep you hardened against the increasing sophistication and scale of attacks
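As one example of the layered-defence point, the sketch below scans a sign-in log and flags a successful login that follows a burst of failures, which is worth investigating even if a user was eventually tricked into handing over credentials. The JSON-lines log format, field names and threshold are assumptions for illustration; in practice this alerting would sit inside your identity platform or monitoring tooling.

```python
# Minimal sketch, assuming a JSON-lines sign-in export with "user", "result"
# and "country" fields; the field names and threshold are illustrative only.
import json
from collections import defaultdict
from pathlib import Path

FAILURE_THRESHOLD = 5  # assumed: this many failures before a success looks suspicious

def suspicious_sign_ins(log_path: str) -> list[str]:
    """Flag users whose successful sign-in follows a burst of failures -
    one extra layer of alerting behind training and MFA."""
    failures = defaultdict(int)
    alerts = []
    for line in Path(log_path).read_text().splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        user, result = event.get("user"), event.get("result")
        if result == "failure":
            failures[user] += 1
        elif result == "success":
            if failures[user] >= FAILURE_THRESHOLD:
                alerts.append(f"{user}: success after {failures[user]} failures "
                              f"from {event.get('country', 'unknown')}")
            failures[user] = 0
    return alerts

if __name__ == "__main__":
    for alert in suspicious_sign_ins("sign_ins.jsonl"):
        print("Investigate:", alert)
```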
AI is an astonishing technological development and one which looks set to utterly transform multiple aspects of our lives. However, the fact that technology leaders are warning of the dangers AI poses should be a clear signal to professional services firms that AI’s implementation must be handled with extreme care. The advice given here will help in that process.
To request cyber security help, even for just a suspected breach, have your RIBA Member number ready and call +44 (0)20 8191 1048 or email riba@mitigogroup.com.