Companies specializing in big data analysis and digital forensics increasingly rely on ChatGPT and other artificial intelligence (AI) systems to monitor social media users. Social Links, for example, presented a "sentiment analysis" tool at the Milipol homeland-security trade fair in Paris. It can gauge the mood of users on social networks such as X (formerly Twitter) and Facebook and highlight frequently discussed topics, reports the US magazine Forbes. The aim is to identify emerging protest movements early and put law enforcement authorities on alert.
Keywords and hashtags are sent to ChatGPT
Russian businessman Andrey Kulikov is one of the founders of Social Links. Founded in 2017, the company has around 600 customers in Europe and the USA alone; it is headquartered in Amsterdam and now also has a branch in New York. In a report in late 2022, Meta described the company as a spyware vendor and blocked 3,700 Facebook and Instagram accounts that the firm allegedly used to repeatedly spy on the US company's two platforms. Social Links rejects this characterization, as well as allegations of ties to Russian intelligence services.
In a demo shown in Paris, according to Forbes, a user instructed the Social Links tool to retrieve social media posts relevant to their area of interest. The user can then analyze these posts through the program's interface, with the data saved locally on their computer. Social Links analyst Bruno Alonso used the software to assess online reactions to the controversial agreement that kept Spanish Prime Minister Pedro Sánchez in power with concessions to the Catalan independence movement. The tool searched for X posts containing keywords and hashtags such as "amnesty" and automatically passed them to ChatGPT.
“The possibilities are endless”
The bot then classified the posts’ sentiment as positive, negative or neutral and displayed the results in an interactive graph, the report said. The tool is also capable of quickly summarizing online discussions on other platforms, such as Facebook, and identifying interesting topics. According to the presentation, investigators could also use built-in biometric facial recognition capabilities to identify people who have allegedly made negative comments about an issue. Alonso emphasized: “The possibilities are truly endless.” Jay Stanley of the civil rights organization American Civil Liberties Union (ACLU) warns, however, that such AI agents enable an unprecedented automated form of surveillance of individuals and groups that goes far beyond human capabilities.
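The pipeline described above (filter posts by keyword or hashtag, have an LLM label each one as positive, negative, or neutral, then tally the results) can be sketched in a few lines. This is a minimal illustration, not Social Links' actual code: the `classify_sentiment` function below is a hypothetical keyword-based stand-in for the real ChatGPT API call, and all post texts and word lists are invented for the example.

```python
from collections import Counter

# Hypothetical stand-in for the ChatGPT call the tool reportedly makes.
# A real pipeline would send each post to an LLM API with a prompt such as
# "Classify the sentiment of this post as positive, negative, or neutral."
NEGATIVE_WORDS = {"shameful", "betrayal", "disgrace"}
POSITIVE_WORDS = {"historic", "welcome", "great"}

def classify_sentiment(post: str) -> str:
    """Toy classifier: label a post by matching sentiment-laden words."""
    words = set(post.lower().split())
    if words & NEGATIVE_WORDS:
        return "negative"
    if words & POSITIVE_WORDS:
        return "positive"
    return "neutral"

def sentiment_report(posts: list[str], keywords: list[str]) -> Counter:
    """Keep posts matching any keyword/hashtag, classify each, tally labels."""
    matched = [p for p in posts
               if any(k.lower() in p.lower() for k in keywords)]
    return Counter(classify_sentiment(p) for p in matched)

# Invented example posts; only the first three match the keyword filter.
posts = [
    "The amnesty deal is a shameful betrayal",
    "A historic amnesty agreement, welcome news",
    "#amnesty debate continues in parliament",
    "Weather is nice today",
]
print(sentiment_report(posts, ["amnesty"]))
# Counter({'negative': 1, 'positive': 1, 'neutral': 1})
```

The tallied counts are what a tool like the one demonstrated would then render as an interactive graph; the hard part in practice is the prompt design and the reliability of the LLM's labels, not the bookkeeping shown here.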
ChatGPT’s maker, OpenAI, declined to comment. Its terms of use effectively prohibit “activities that violate the privacy of individuals”, including “tracking or surveillance” without consent. “We strictly adhere to OpenAI guidelines,” a Social Links spokesperson assured Forbes, adding that ultimately the system is only used for text analysis and content summarization. Andy Martin of the Israeli forensics firm Cellebrite said at Milipol that large language models like GPT could be very useful for all kinds of law enforcement work. The spectrum ranges from searching recorded calls for anomalies in a person’s “pattern of life” to technologically supported interviews, where AI can feed investigators additional information during an interrogation. But he conceded that AI is always biased.