A deep-dive into deepfakes

Deepfake technology - the leveraging of Artificial Intelligence (AI) to generate highly convincing but fabricated images, audio, and videos - is swiftly becoming a major concern in the cyber security field. As cyber criminals harness this technology to exploit weaknesses, it is imperative for architectural practices to comprehend its consequences and bolster their defences.

What are deepfakes?

Deepfakes utilise AI algorithms to create realistic imitations of people’s appearances and voices. This advanced technology can produce convincing images, videos, and audio recordings, making it increasingly difficult to distinguish genuine content from fake.

Originally developed for entertainment and artistic uses, deepfakes have quickly been appropriated by cyber criminals for nefarious purposes. Deepfakes have been utilised to impersonate clients or colleagues, leading to unauthorised actions, such as financial transfers or the release of sensitive data. This fabricated content can threaten an architectural practice’s reputation by presenting compromising situations, ultimately damaging trust and credibility.

Cyber security implications of deepfakes

The use of deepfake technology by cyber criminals raises several concerning implications:

  1. Increased attack sophistication: AI enables a higher level of sophistication in cyber attacks. Deepfakes can be used to create realistic phishing emails, voice messages, or video calls, making it harder for individuals to identify scams.
  2. Advanced social engineering: Deepfakes take social engineering to a new level. Cyber criminals can impersonate senior executives or trusted colleagues to deceive employees into divulging sensitive information or authorising large financial transactions. For instance, an employee at a multinational firm was tricked into transferring $25 million to fraudsters who used deepfake technology to impersonate the company’s CFO in a video call.
  3. Circumventing security measures: Deepfakes can bypass traditional security controls. For example, AI-generated voice deepfakes can fool voice recognition systems, and AI-manipulated images can deceive facial recognition software.
  4. Rapid technological advancement: The speed at which AI technology evolves means that deepfakes will become even more convincing and harder to detect. Cyber criminals can continually improve their methods, making it essential for businesses to stay ahead of these developments.

Real-world impacts of deepfakes

Examples that highlight the tangible impact of deepfake technology:

  • In the financial sector, deepfake incidents surged by 700% in 2023. Criminals are using AI to imitate vocal patterns, successfully issuing fraudulent instructions over the phone.
  • The legal sector has been targeted, with the Solicitors Regulation Authority (SRA) warning lawyers about the risks of using video calls for client identification due to the threat of deepfakes.
  • The CEO of a leading advertising firm narrowly avoided falling victim to a deepfake scam. Cyber criminals used a fake WhatsApp account, voice cloning, and doctored YouTube footage to create a convincing virtual meeting. Thanks to the vigilance of the firm’s staff, the attack was unsuccessful.
  • Popular culture has not been spared either, with manipulated videos of celebrities like Taylor Swift being used to spread misinformation. These videos are widely shared on social media, illustrating the challenges in moderating such content.

Strengthening defences against deepfakes

To mitigate the deepfake threat, architectural practices must adopt a multi-faceted approach:

  1. Staff training: Train staff to remain vigilant so they can recognise suspected attacks and respond appropriately.
  2. Frequent simulated attacks: Test your training by conducting regular simulated attacks that mimic techniques used by cyber criminals. This helps in identifying vulnerabilities and improving response strategies.
  3. Enhanced authentication: Implement stronger authentication measures, such as multi-factor authentication and conditional access, to reduce the risk of unauthorised access using stolen credentials.
  4. Layered defence strategy: Establish multiple layers of protection. If one control is breached, ensure that there are additional safeguarding measures and alerting mechanisms to prevent further progression of an attack.
  5. Assessment and assurance: Regularly assess and audit security measures to ensure their effectiveness. Engage independent experts to provide an unbiased evaluation of your security posture.

Deepfake technology poses a significant challenge in cyber security. However, by understanding the implications and taking proactive measures, practices can better safeguard themselves against the sophisticated threats posed by deepfakes. Remaining informed and vigilant, combined with strong security practices, will be crucial in protecting against this evolving threat landscape.

RIBA has partnered with Mitigo to offer technical security services. Mitigo are offering a free no-obligation consultation for members. For more information contact Mitigo on 0161 883 3507, email riba@mitigogroup.com or fill out their contact form.
