The future of security in an AI-driven world
Artificial Intelligence (AI) has entered the mainstream. Everyone, from students to Chief Executive Officers, is using Generative Pre-trained Transformers (GPTs) to create content with greater efficiency, accuracy, and scale. Even titans of industry like IBM are betting big on Large Language Models, confidently predicting that AI will become a personal and professional “fixture” by 2034.
However, as with any rapid technological advancement, there are plenty of unknowns. Policymakers and Silicon Valley have done little to consider the pitfalls of unchecked AI expansion, and as a result, the risks and their potential impact remain poorly understood. Even so, the influence AI will have on certain aspects of security is apparent, and in many cases, already taking shape.
Surveillance
Until recently, the practicality of surveillance hinged mostly on the ability of people to observe, process, and respond to camera feeds. AI has changed this completely. Organizations can now surveil key areas without constant human oversight. Smart systems can capture and analyze footage simultaneously, and real-time processing means AI tools can assess risks on the spot and alert first responders almost instantaneously. This suggests a future where robbing banks or vandalizing businesses is nearly impossible to pull off, especially in cities like London, where older-generation Closed-Circuit Television has already helped reduce crime.
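To make that capture-analyze-alert loop concrete, here is a minimal sketch of how such a system might run. The risk_score and notify_responders functions are hypothetical placeholders, not any vendor’s real API; an actual deployment would plug in a trained vision model and a proper dispatch system.

```python
# A minimal sketch of an AI surveillance loop: capture, analyze, alert.
# risk_score() and notify_responders() are hypothetical stand-ins.
import time

import cv2  # pip install opencv-python


def risk_score(frame) -> float:
    """Placeholder: a real system would run a trained vision model here."""
    return 0.0


def notify_responders(score: float) -> None:
    """Placeholder: a real system would page security or first responders."""
    print(f"ALERT: risk score {score:.2f} exceeded threshold")


ALERT_THRESHOLD = 0.9

cap = cv2.VideoCapture(0)  # the default camera feed
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break                          # feed ended or camera error
        score = risk_score(frame)          # analyze each frame as it arrives
        if score >= ALERT_THRESHOLD:       # assess risk on the spot
            notify_responders(score)       # alert almost instantaneously
        time.sleep(0.03)                   # pace the loop at roughly 30 fps
finally:
    cap.release()
```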
Privacy
On one side of the surveillance coin, there’s improved security. On the other is a major risk to privacy. Civil rights organizations like the American Civil Liberties Union paint the picture of a bleak future where uneasiness follows every pedestrian as they go about their daily lives. They compare it to the feeling a driver has whenever a police cruiser is behind them, regardless of whether they’re actually breaking any traffic laws. That feeling would be ever-present, always looming in the background.
Online, YouTube and other platforms are chipping away at user anonymity by requiring selfies or driver’s license photos before users can view certain video content. AI tools process these images and confirm the user’s identity. But this comes at a cost: the biometrics are then stored indefinitely on the company’s servers, making it much easier to track a person’s digital footprint.
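For illustration, here is a toy sketch of that verification flow. The face_embedding function is a hypothetical stand-in seeded from the image bytes; real platforms use proprietary face-recognition models. The point is only the shape of the flow, and where the biometric data ends up.

```python
# A toy sketch of selfie-based identity verification.
# face_embedding() is a hypothetical stand-in for a real recognition model.
import numpy as np


def face_embedding(image_bytes: bytes) -> np.ndarray:
    """Placeholder: derive a deterministic fake embedding from the bytes."""
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    return rng.standard_normal(128)


def verify(selfie: bytes, id_photo: bytes, threshold: float = 0.8) -> bool:
    """Declare a match when the two embeddings point the same way."""
    a, b = face_embedding(selfie), face_embedding(id_photo)
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold


# Identical inputs match; different ones do not (toy behavior only).
print(verify(b"same-image", b"same-image"))  # True
print(verify(b"selfie", b"id-photo"))        # almost certainly False

# The privacy cost described above: in production, both embeddings
# (biometric data) can remain on the company's servers indefinitely.
```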
Intelligence
In the national security space, AI has carved out an interesting role for itself. Given the classified nature of most intelligence-gathering efforts beyond open-source research, GPTs are limited in their application. Where they have undisputed utility, however, is in processing data after a human has already collected it.
The U.S. Department of Homeland Security suggests that AI already outpaces people in data processing, pattern recognition, and accurately performing repetitive tasks without degradation in quality over time. Machine learning amplifies the technology’s value here: AI tools can evolve to perform tasks without constant prompting or reprogramming. In other words, AI’s analytical prowess is constantly improving, and as it improves, it requires less oversight.
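As a rough illustration of that pattern-recognition edge, the sketch below uses scikit-learn’s IsolationForest, a generic anomaly detector chosen here purely as an example (actual intelligence tooling is not public), to flag outlying records in collected data without any hand-coded rules.

```python
# A generic pattern-recognition sketch: flag unusual records in collected data.
# IsolationForest stands in for whatever models analysts actually deploy.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
routine = rng.normal(loc=0.0, scale=1.0, size=(500, 4))  # routine activity
unusual = rng.normal(loc=5.0, scale=1.0, size=(5, 4))    # outlying activity
records = np.vstack([routine, unusual])

# The model learns what "normal" looks like from the data itself;
# no one hand-codes the rules, and it can be refit as new data arrives.
model = IsolationForest(contamination=0.01, random_state=0).fit(records)
flags = model.predict(records)  # -1 marks records worth a human's review

print(f"{(flags == -1).sum()} of {len(records)} records flagged for review")
```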
Terrorism
This is where things get sticky. Eco-terrorism is almost certain to increase as AI becomes more prevalent, because data centers carry a massive carbon footprint and consume resources, including water, at an incredible rate. Several data centers in Texas burned through nearly 500 million gallons of water in just two years, during an intense statewide drought. Tomorrow’s AI data centers are expected to require even more, with one study suggesting a single facility could demand as much as five million gallons per day. At that rate, one facility would consume over 1.8 billion gallons a year, more than three times what those Texas centers used in two years combined.
With declining public trust in large corporations, it’s easy to see a scenario where desperate members of impacted communities turn to extreme measures of retaliation.
Cybercrime
Just as AI helps cyber defenders keep their organizations and the public safe, it also arms criminals with an increased capacity for chaos. Cybercrime is already a major problem online, and its prevalence is only expected to grow in the coming years.
Currently, the biggest concern for law enforcement is AI deepfakes – fabricated images, audio, and video so convincing that they can dupe unsuspecting victims into donating money, clicking malicious links, or handing over funds under the influence of social engineering. In 2024, the extent to which this problem could impact the corporate world became apparent when scammers used deepfaked video of senior executives to trick a finance employee in Hong Kong into transferring nearly $25 million of his employer’s money. As deepfakes become harder to spot, cases like this are almost guaranteed to surge.
Conclusion
AI isn’t going anywhere. All its perks and problems are here to stay. What needs to change is our relationship with it. We can’t treat AI as an advanced search engine or just some cool new app for our phones. We must accept that it’s a paradigm changer, and one that requires us to adapt our behaviors in ways that optimize its best features, while also keeping us safe from its dangers.
If you’d like to learn more about how to defend your organization from AI security risks, consider registering for Chameleon’s upcoming Security Executive Forum in London, where business leaders and policymakers will converge to learn best practices and field-tested strategies for staying safe in these uncertain times.