GPT-Based AI Chatbot for Spies: Did Microsoft Develop One for US Intelligence?
BY Royal-RAIS Editorial Team | PUBLISHED: May, 2024
A TechTimes article reports that Microsoft has developed a new GPT-4-based AI chatbot designed specifically for US intelligence agencies. Unlike the standard ChatGPT service, the new model runs on an air-gapped network without an internet connection, providing a more secure environment for analyzing classified information. There are, however, concerns that the AI could generate false information if not carefully managed. The article also highlights the growing use of AI in government and military applications, with President Joe Biden allocating funds for AI development across federal agencies. Overall, the focus is on AI technology tailored to the specific needs of the intelligence and defense sectors.
Reference: Richard, I. (2024, May 7). GPT-Based AI Chatbot for Spies: Did Microsoft Develop One for US Intelligence? Tech Times. Retrieved from https://www.techtimes.com/articles/304406/20240507/gpt-based-ai-chatbot-spies-microsoft-made-one-intelligence.htm
Microsoft expands availability of its AI-powered cybersecurity assistant
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Microsoft has announced plans to expand the availability of its artificial intelligence-powered cybersecurity tool, Security Copilot, starting April 1. Initially soft-launched last year, Security Copilot is designed to assist cybersecurity professionals by allowing them to run queries through a simple prompt box. This tool facilitates various tasks such as summarizing incidents, analyzing vulnerabilities, and sharing information with colleagues. Currently, around 300 customers are using Security Copilot, as revealed at a Microsoft event in San Francisco. In a move to make the tool more accessible, Microsoft intends to implement a 'pay-as-you-go' pricing model, moving away from a subscription basis. This strategy aims to lower entry barriers, as explained by Microsoft Corporate Vice President Vasu Jakkal.
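To make that interaction model concrete, here is a minimal sketch of the prompt-box workflow the article describes: an analyst submits a natural-language question and receives a generated summary back. The endpoint URL, request schema, and response field below are hypothetical placeholders for illustration, not Microsoft's actual Security Copilot API.

```python
import json
import urllib.request

# Hypothetical endpoint for illustration only -- NOT Microsoft's real
# Security Copilot API. The sketch mirrors the workflow described in the
# article: type a security question into a prompt box, get a summary back.
ASSISTANT_URL = "https://security-assistant.example.com/query"

def ask_assistant(prompt: str, api_key: str) -> str:
    """Send a natural-language security question; return the reply text."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        ASSISTANT_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["answer"]  # assumed response field

# Example prompts mirroring the tasks named in the article.
if __name__ == "__main__":
    for prompt in (
        "Summarize incident 4821 for a colleague handover.",
        "Analyze the vulnerabilities behind last night's alerts.",
    ):
        print(ask_assistant(prompt, api_key="<your-key>"))
```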
The introduction of Security Copilot is part of Microsoft's broader efforts to leverage artificial intelligence for enhancing cybersecurity measures. The company's decision to adopt a usage-based pricing model reflects a commitment to making advanced security tools available to a wider audience. As cybersecurity threats continue to evolve, tools like Security Copilot represent a significant step forward in providing the necessary resources for professionals to effectively protect against and respond to cyber threats. The technology behind Security Copilot and Microsoft's strategic approach to its deployment highlight the potential of AI in transforming the cybersecurity landscape, offering innovative solutions to complex challenges faced by the industry.
Reference: G., P., & Dastin, J. (2024, March 13). Microsoft expands availability of its AI-powered cybersecurity assistant. Reuters. Retrieved from https://www.reuters.com/technology/microsoft-expands-availability-its-ai-powered-cybersecurity-assistant-2024-03-13/
Generative AI CSAM is CSAM
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
The National Center for Missing & Exploited Children (NCMEC) has raised significant concerns over the emergence of Generative AI (GAI) technologies and their implications for child safety. In 2023, the NCMEC’s CyberTipline received 4,700 reports related to Child Sexual Abuse Material (CSAM) that involved GAI. This new form of CSAM includes computer-generated images of children engaged in explicit acts, created using GAI platforms. Furthermore, GAI has been used to produce deepfake sexually explicit images and videos by manipulating photographs of real children. The NCMEC has also encountered instances where such illicit material was used for extortion purposes, adding to the distressing impact of GAI CSAM on children and their families.
The legal and ethical frameworks surrounding GAI technologies and their capacity to generate CSAM are currently under scrutiny. The NCMEC advocates for the urgent need to update federal and state laws to unequivocally categorize GAI CSAM as illegal, ensuring that victims have civil remedies for protection against further harm. There is a pressing call for legislative and regulatory actions to prevent GAI technology from being trained on or generating child sexual exploitation content. This includes mandates for GAI platforms to actively detect, report, and remove attempts at creating such content, holding them accountable for misuse. The NCMEC emphasizes the critical role of education, legal measures, and technology design in safeguarding children from the harms of GAI CSAM, urging a collective effort from technology creators, legislators, and child protection professionals to prioritize child safety amidst the rapid advancement of GAI technologies.
Reference: National Center for Missing & Exploited Children. (2024, March 11). Generative AI CSAM is CSAM. NCMEC Blog. Retrieved from https://www.missingkids.org/blog/2024/generative-ai-csam-is-csam
China's lidar technology faces intensified scrutiny in U.S.
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Amid growing concerns over national security, the United States is intensifying its scrutiny of Chinese autonomous driving technology, with particular focus on Hesai Technology, a leading Chinese lidar manufacturer. The U.S. government's investigation into connected cars, announced in late February, aims to assess the potential national security risks posed by Chinese technological advancements in the automotive sector. These concerns are fueled by fears that Chinese vehicles equipped with light detection and ranging (lidar) sensors, potentially subsidized by the state, could dominate the U.S. market and collect sensitive data. Following these developments, Washington lobbyists have terminated their contracts with Hesai. The company's lidar sensors, which map environments in 3D and have military applications, have placed Hesai at the center of U.S. national security discussions.
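For background on why lidar data is treated as sensitive, the sketch below shows the generic geometry by which a sensor turns each laser return into a point in a 3D map. This is textbook spherical-to-Cartesian conversion, not anything specific to Hesai's hardware, and the sample values are invented.

```python
import math

def lidar_return_to_xyz(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one lidar return (range plus beam angles) into Cartesian
    coordinates in meters. Millions of such points per second form the
    3D environment maps the article refers to."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# Example: a return at 50 m range, 30 degrees azimuth, 2 degrees elevation.
print(lidar_return_to_xyz(50.0, 30.0, 2.0))
```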
Hesai Technology, which controlled a substantial portion of the global lidar market, has encountered challenges as it navigates the heightened U.S. scrutiny. Despite its initial success and Nasdaq listing in February 2023, Hesai's shares have suffered following its inclusion on the Pentagon's list of "Chinese Military Companies" operating in the U.S. The company has protested this designation, emphasizing its civilian use of lidar technologies and denying any military affiliations. The unfolding situation highlights the broader context of U.S.-China technological competition, where U.S. policymakers are striving to balance innovation with security. The debate over lidar technology underscores the complex interplay between economic interests, technological advancements, and national security considerations in the evolving landscape of U.S.-China relations.
Reference: Moriyasu, K., & Wong, E. (2024, March 12). China's lidar technology faces intensified scrutiny in U.S. Nikkei Asia. Retrieved from https://asia.nikkei.com/Business/Technology/China-s-lidar-technology-faces-intensified-scrutiny-in-U.S
DOD’s generative AI task force looks for ‘blind spots’
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
The Department of Defense (DOD) is intensively exploring the potential and limitations of artificial intelligence (AI) technologies through its Task Force Lima initiative, with a focus on generative AI tools for military operations. Launched in August 2023, the task force aims to responsibly harness AI's power, assessing more than 180 use cases across the Pentagon to determine these technologies' capabilities and boundaries. Navy Captain Manuel Xavier Lugo, the mission commander, emphasized the importance of hands-on experience with AI for understanding its limitations and ensuring user comfort. The task force explores a variety of applications, mainly automating routine tasks to enhance DOD personnel efficiency, while also examining the potential risks and "blind spots" associated with these technologies, such as the reliability of AI-generated email summarizations.
Although the task force is scheduled to operate for only 18 months, its broader mission includes formulating policy recommendations and architectural guidance for the DOD's future use of AI. Roughly 60% of the explored use cases involve chatbot technologies intended to make project initiation and management more effective. By balancing the innovative potential of generative AI with a cautious evaluation of its drawbacks, Task Force Lima seeks to pave the way for a more informed, secure, and efficient adoption of AI within the military, ensuring that the technology serves as a reliable background tool without compromising essential information or tasks.
Reference: Graham, E. (2024, February 23). DOD’s generative AI task force looks for ‘blind spots’. Nextgov. Retrieved from https://www.nextgov.com/artificial-intelligence/2024/02/dods-generative-ai-task-force-looks-blind-spots/394404/
Cambridge Festival 2024: AI and elections, deepfakes, the metaverse and more...
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
The University of Cambridge is set to explore the profound impact of Artificial Intelligence (AI) on society during the Cambridge Festival 2024. This event will delve into critical questions about AI's influence on elections, the challenges posed by deepfakes, and the anticipated emergence of the metaverse. With an array of discussions led by experts, the festival aims to address the potential of AI to disrupt democratic processes, its role in spreading disinformation, and its broader societal implications. Among the speakers are Dr. Ella McPherson, who emphasizes the need to examine the hype around AI and its potential to undermine faith in elections, and Dr. Jonnie Penn, who expresses hope that AI will not drastically alter the democratic landscape immediately but cautions against the misuse of these technologies by tech companies.
The festival will also feature interactive workshops and presentations on various AI and technology-related topics. For instance, a workshop on deepfakes and AI-generated media seeks to enhance teenagers' literacy in navigating online misinformation, highlighting the importance of critical thinking and ethical considerations in the digital age. Additionally, discussions will extend to the metaverse, with researchers debating its potential to revolutionize human experience through "phygital" interactions, and the necessity for making these virtual spaces secure and inclusive. By bringing together technology experts, academics, and the public, the Cambridge Festival 2024 aims to foster a comprehensive understanding of AI's impact and explore ways to navigate its challenges and opportunities.
Reference: Bevan, S. (2024, March 5). Cambridge Festival 2024: AI and elections, deepfakes, the metaverse and more... University of Cambridge. Retrieved from https://www.cam.ac.uk/stories/cambridge-festival-2024-ai-technology
What AI-generated images of Trump surrounded by Black voters mean for this election
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
An AI-generated image of former President Donald Trump surrounded by Black voters, created by conservative radio host Mark Kaye, has stirred controversy and discussion about AI-fueled misinformation in politics. Kaye, who shared the image with his 1 million Facebook followers, said he used artificial intelligence to craft the image as part of storytelling, suggesting that the onus is on viewers to discern the authenticity of such content. The Trump campaign has distanced itself from these AI-generated images, emphasizing that it has no control over third-party content. The incident underscores the complex dynamics of race, voter outreach, and the ethical considerations surrounding AI in political campaigns.
The broader implications of AI-generated images and misinformation are significant, as they potentially influence voter perceptions and contribute to a zero-trust society where distinguishing truth from falsehood becomes increasingly difficult. Experts warn about the long-term effects of deepfake technology, emphasizing the need for critical media literacy among voters. The situation highlights a pivotal challenge for democracy and the integrity of elections, underscoring the urgent need for responsible AI use and the development of strategies to mitigate the impact of AI-generated misinformation in political discourse.
Reference: Bunn, C. (2024, March 8). What AI-generated images of Trump surrounded by Black voters mean for this election. NBC News. Retrieved from https://www.nbcnews.com/news/us-news/trump-ai-deep-fake-image-black-voters-2024-election-rcna141949
INTERPOL Financial Fraud assessment: A global threat boosted by technology
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
The INTERPOL Global Financial Fraud Assessment has revealed a concerning rise in global financial fraud, significantly enabled by technologies such as Artificial Intelligence (AI), large language models, and cryptocurrencies. Organized crime groups are exploiting these technologies to target victims worldwide more efficiently, employing methods like phishing, ransomware-as-a-service, and 'pig butchering' scams, a combination of romance and investment fraud conducted through cryptocurrencies. The assessment underscores the evolving landscape of financial fraud and highlights the role of human trafficking, with trafficking victims forced to staff the call centers that perpetrate these scams. INTERPOL Secretary General Jürgen Stock emphasizes the epidemic growth of financial fraud, stressing the urgent need for global collaboration and action to address these threats, including improving information sharing across borders and sectors and enhancing law enforcement training and capacity building.
The report, launched at the Financial Fraud Summit in London and intended for law enforcement use, identifies investment fraud, advance payment fraud, romance fraud, and business email compromise as the most prevalent global trends. It points to the necessity of strengthening data collection and analysis to develop more informed counter-strategies. INTERPOL's I-GRIP initiative, a stop-payment mechanism launched in 2022, has already intercepted over USD 500 million in criminal proceeds from cyber-enabled fraud. Regional trends vary: business email compromise and pig butchering fraud are notable in Africa, impersonation and romance frauds are prevalent in the Americas, pig butchering schemes are expanding in Asia, and online investment frauds and phishing attacks are rising in Europe. The report calls for building multi-stakeholder, public-private partnerships to recover funds lost to financial fraud and to bridge crucial information gaps.
Reference: INTERPOL. (2024, March 11). INTERPOL Financial Fraud assessment: A global threat boosted by technology. Retrieved from https://www.interpol.int/News-and-Events/News/2024/INTERPOL-Financial-Fraud-assessment-A-global-threat-boosted-by-technology
Vishing, smishing, and phishing attacks skyrocket 1,265% post-ChatGPT
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Following the launch of ChatGPT in November 2022, vishing (voice phishing), smishing (SMS phishing), and phishing attacks have risen dramatically, by a reported 1,265%, according to a survey by Enea. This surge in AI-powered fraud has left 76% of enterprises admitting they lack sufficient protection against voice and messaging fraud. Although 61% of enterprises report significant losses from mobile fraud, more than three-quarters have not invested in defenses against SMS spam or voice scams. Reliance on communication service providers (CSPs) for security is high: 51% of enterprises expect their telecom operators to shield them from voice and mobile messaging fraud, underscoring the critical role of CSPs in their telecom purchasing decisions.
The survey further reveals that only 59% of CSPs have implemented a messaging firewall, and a mere 51% have a signaling firewall in place, highlighting a substantial gap in defense against evolving threats. Security leaders among CSPs, identified by their superior capabilities and focus on security, are shown to be more successful in detecting and mitigating security breaches. They also view security as a revenue-generating opportunity, contrasting with less prepared CSPs. The insights from John Hughes, SVP and Head of Network Security at Enea, emphasize the urgent need for enhanced network security measures to combat the evolving threat landscape, particularly with the rise of AI-powered techniques in cybercrime. This situation calls for CSPs to address challenges such as skill shortages, budget constraints, and organizational complexity to prioritize and improve security measures effectively.
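To illustrate what even a basic messaging firewall does, here is a toy rule-based smishing filter. The keyword patterns and blocking threshold are invented for the example; production firewalls rely on much richer signals such as sender reputation, traffic anomalies, and machine-learned classifiers, and this is in no way Enea's detection logic.

```python
import re

# Toy heuristics for the example only -- real messaging firewalls use far
# richer signals (sender reputation, volume anomalies, ML classifiers).
SUSPICIOUS_PATTERNS = [
    re.compile(r"https?://\S+", re.IGNORECASE),                      # embedded link
    re.compile(r"\b(urgent|verify|suspended|prize)\b", re.IGNORECASE),
    re.compile(r"\b(gift ?card|crypto|wallet)\b", re.IGNORECASE),
]

def smishing_score(message: str) -> int:
    """Count how many suspicious patterns the SMS text matches."""
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if pattern.search(message))

def should_block(message: str, threshold: int = 2) -> bool:
    """Block when at least `threshold` heuristics fire (assumed cutoff)."""
    return smishing_score(message) >= threshold

# A classic smishing lure trips both the link rule and the urgency rule.
print(should_block("URGENT: account suspended, verify at http://short.example/x"))  # True
print(should_block("Running late, see you at noon"))  # False
```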
Reference: Help Net Security. (2024, February 29). Vishing, smishing, and phishing attacks skyrocket 1,265% post-ChatGPT. Help Net Security. Retrieved from https://www.helpnetsecurity.com/2024/02/29/mobile-fraud-losses/
AI Worm Developed by Researchers Spreads Automatically Between AI Agents
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Researchers have developed a generative AI worm named Morris II that can spread autonomously between AI systems, marking a significant evolution in cyber threats. Led by Ben Nassi of Cornell Tech, the team demonstrated the worm's ability to infiltrate generative AI email assistants built on major models such as OpenAI's ChatGPT and Google's Gemini. The worm relies on adversarial self-replicating prompts: malicious inputs that cause a model to reproduce the prompt in its own output, so that any downstream AI agent processing that output is infected in turn. By exploiting the interconnectedness and autonomy of AI ecosystems, the worm can extract data, disseminate spam, and breach security measures. The development echoes the disruptive impact of the original Morris worm on the internet in 1988 and highlights the vulnerability of generative AI systems to attacks that not only spread from system to system but can also steal data or deploy malware, posing significant risks as reliance on these technologies grows.
In response to the threats posed by the AI worm, cybersecurity experts advocate for traditional security measures coupled with vigilant application design and human oversight in AI operations to mitigate these risks. Monitoring for unusual activity, such as repetitive prompts within AI systems, is recommended for early detection of potential threats. The research underscores the urgency for the AI development community to prioritize security in the design and deployment of AI ecosystems to safeguard against the exploitation of generative AI systems. By raising awareness and implementing robust security measures, the potential for unauthorized activities by AI agents can be significantly reduced, ensuring the safe and responsible use of generative AI technologies.
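The monitoring advice above can be sketched in a few lines: flag a prompt that keeps reappearing across an agent's recent requests, since self-replication means the same malicious text shows up again and again. The window size, threshold, and normalization below are illustrative assumptions, not the detection method from the Morris II research.

```python
from collections import Counter, deque

class PromptRepetitionMonitor:
    """Flag prompts that recur across recent AI-agent requests -- a crude
    early-warning signal for the self-replicating behavior described above.
    Window and threshold values are illustrative assumptions."""

    def __init__(self, window: int = 100, threshold: int = 3):
        self.recent = deque(maxlen=window)  # last N normalized prompts
        self.counts = Counter()
        self.threshold = threshold

    def observe(self, prompt: str) -> bool:
        """Record a prompt; return True if it looks suspiciously repetitive."""
        key = " ".join(prompt.lower().split())   # normalize case/whitespace
        if len(self.recent) == self.recent.maxlen:
            self.counts[self.recent[0]] -= 1     # deque evicts this on append
        self.recent.append(key)
        self.counts[key] += 1
        return self.counts[key] >= self.threshold

# Example: the third identical prompt inside the window raises an alert.
monitor = PromptRepetitionMonitor()
for message in ["hi team", "copy this prompt onward", "copy this prompt onward",
                "copy this prompt onward"]:
    if monitor.observe(message):
        print(f"ALERT: repeated prompt detected: {message!r}")
```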
Reference: Divya. (n.d.). AI Worm Developed by Researchers Spreads Automatically Between AI Agents. GBhackers on Security. Retrieved from https://gbhackers.com/created-ai-worm/