GPT-Based AI Chatbot for Spies: Did Microsoft Develop One for US Intelligence?
AI and elections, deepfakes, the metaverse and more...
BY Royal-RAIS Editorial Team | PUBLISHED: May, 2024
The article from TechTimes discusses the development of a new GPT-4-based AI chatbot by Microsoft specifically designed for use by US intelligence agencies. Unlike the standard ChatGPT AI chatbot, this new model operates without an internet connection, providing a more secure environment for analyzing classified information. However, there are concerns about the potential for the AI to generate false information if not carefully managed. The article also highlights the increasing use of AI in government and military applications, with President Joe Biden allocating funds for AI development across federal agencies. Overall, the focus is on the development of AI technology tailored for the specific needs of intelligence and defense sectors.
Reference: Richard, I. (2024, May 7). GPT-Based AI Chatbot for Spies: Did Microsoft Develop One for US Intelligence? Tech Times. https://www.techtimes.com/articles/304406/20240507/gpt-based-ai-chatbot-spies-microsoft-made-one-intelligence.htm
Microsoft expands availability of its AI-powered cybersecurity assistant
AI and elections, deepfakes, the metaverse and more...
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Microsoft has announced plans to expand the availability of its artificial intelligence-powered cybersecurity tool, Security Copilot, starting April 1. Initially soft-launched last year, Security Copilot is designed to assist cybersecurity professionals by allowing them to run queries through a simple prompt box. This tool facilitates various tasks such as summarizing incidents, analyzing vulnerabilities, and sharing information with colleagues. Currently, around 300 customers are using Security Copilot, as revealed at a Microsoft event in San Francisco. In a move to make the tool more accessible, Microsoft intends to implement a 'pay-as-you-go' pricing model, moving away from a subscription basis. This strategy aims to lower entry barriers, as explained by Microsoft Corporate Vice President Vasu Jakkal.
The introduction of Security Copilot is part of Microsoft's broader efforts to leverage artificial intelligence for enhancing cybersecurity measures. The company's decision to adopt a usage-based pricing model reflects a commitment to making advanced security tools available to a wider audience. As cybersecurity threats continue to evolve, tools like Security Copilot represent a significant step forward in providing the necessary resources for professionals to effectively protect against and respond to cyber threats. The technology behind Security Copilot and Microsoft's strategic approach to its deployment highlight the potential of AI in transforming the cybersecurity landscape, offering innovative solutions to complex challenges faced by the industry.
Reference: G, Priyanka., & Dastin, Jeffrey. (2024, March 13). Microsoft expands availability of its AI-powered cybersecurity assistant. Reuters. Retrieved from https://www.reuters.com/technology/microsoft-expands-availability-its-ai-powered-cybersecurity-assistant-2024-03-13/
Generative AI CSAM is CSAM
AI and elections, deepfakes, the metaverse and more...
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
The National Center for Missing & Exploited Children (NCMEC) has raised significant concerns over the emergence of Generative AI (GAI) technologies and their implications for child safety. In 2023, the NCMEC’s CyberTipline received 4,700 reports related to Child Sexual Abuse Material (CSAM) that involved GAI. This new form of CSAM includes computer-generated images of children engaged in explicit acts, created using GAI platforms. Furthermore, GAI has been used to produce deepfake sexually explicit images and videos by manipulating photographs of real children. The NCMEC has also encountered instances where such illicit material was used for extortion purposes, adding to the distressing impact of GAI CSAM on children and their families.
The legal and ethical frameworks surrounding GAI technologies and their capacity to generate CSAM are currently under scrutiny. The NCMEC advocates for the urgent need to update federal and state laws to unequivocally categorize GAI CSAM as illegal, ensuring that victims have civil remedies for protection against further harm. There is a pressing call for legislative and regulatory actions to prevent GAI technology from being trained on or generating child sexual exploitation content. This includes mandates for GAI platforms to actively detect, report, and remove attempts at creating such content, holding them accountable for misuse. The NCMEC emphasizes the critical role of education, legal measures, and technology design in safeguarding children from the harms of GAI CSAM, urging a collective effort from technology creators, legislators, and child protection professionals to prioritize child safety amidst the rapid advancement of GAI technologies.
Reference: National Center for Missing & Exploited Children. (2024, March 11). Generative AI CSAM is CSAM. NCMEC Blog. Retrieved from https://www.missingkids.org/blog/2024/generative-ai-csam-is-csam
China's lidar technology faces intensified scrutiny in U.S.
AI and elections, deepfakes, the metaverse and more...
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Amid growing concerns over national security and technological advancements, the United States is intensifying its scrutiny of Chinese autonomous driving technology, particularly focusing on Hesai Technology, a leading Chinese lidar manufacturer. The U.S. government's investigation into connected cars, announced late last month, aims to assess the potential national security risks posed by Chinese technological advancements in the automotive sector. These concerns are fueled by fears that Chinese vehicles equipped with light detection and ranging (lidar) sensors, potentially subsidized by the state, could dominate the U.S. market and collect sensitive data. Hesai Technology, known for its significant role in the lidar market, has faced termination of contracts with Washington lobbyists following these developments. The company's lidar sensors, crucial for mapping environments in 3D and having military applications, have placed Hesai at the center of U.S. national security discussions.
Hesai Technology, which controlled a substantial portion of the global lidar market, has encountered challenges as it navigates the heightened U.S. scrutiny. Despite its initial success and Nasdaq listing in February 2023, Hesai's shares have suffered following its inclusion on the Pentagon's list of "Chinese Military Companies" operating in the U.S. The company has protested this designation, emphasizing its civilian use of lidar technologies and denying any military affiliations. The unfolding situation highlights the broader context of U.S.-China technological competition, where U.S. policymakers are striving to balance innovation with security. The debate over lidar technology underscores the complex interplay between economic interests, technological advancements, and national security considerations in the evolving landscape of U.S.-China relations.
Reference: Moriyasu, K., & Wong, E. (2024, March 12). China's lidar technology faces intensified scrutiny in U.S. Nikkei Asia. Retrieved from https://asia.nikkei.com/Business/Technology/China-s-lidar-technology-faces-intensified-scrutiny-in-U.S
DOD’s generative AI task force looks for ‘blind spots’
AI and elections, deepfakes, the metaverse and more...
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
The Department of Defense (DOD) is intensively exploring the potential and limitations of artificial intelligence (AI) technologies through its Task Force Lima initiative, with a focus on generative AI tools for military operations. Launched in August, the task force aims to responsibly harness AI's power, assessing over 180 use cases across the Pentagon to determine these technologies' capabilities and boundaries. Navy Captain Manuel Xavier Lugo, the mission commander, emphasized the importance of hands-on experience with AI for understanding its limitations and ensuring user comfort. The task force explores a variety of applications, mainly automating routine tasks to enhance DOD personnel efficiency, while also examining the potential risks and "blind spots" associated with these technologies, such as the reliability of AI-generated email summarizations.
Despite the task force's scheduled 18-month operational period, its broader mission includes formulating policy recommendations and architectural guidance for the DOD's future use of AI. Roughly 60% of the explored use cases involve the development of chatbot technologies to facilitate more effective project initiation and management. By balancing the innovative potential of generative AI with a cautious evaluation of its drawbacks, Task Force Lima seeks to pave the way for a more informed, secure, and efficient adoption of AI within the military, ensuring that the technology serves as a reliable background tool without compromising essential information or tasks.
Reference: Graham, E. (2024, February 23). DOD’s generative AI task force looks for ‘blind spots’. Nextgov. Retrieved from https://www.nextgov.com/artificial-intelligence/2024/02/dods-generative-ai-task-force-looks-blind-spots/394404/
Cambridge Festival 2024: AI and elections, deepfakes, the metaverse and more...
AI and elections, deepfakes, the metaverse and more...
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
The University of Cambridge is set to explore the profound impact of Artificial Intelligence (AI) on society during the Cambridge Festival 2024. This event will delve into critical questions about AI's influence on elections, the challenges posed by deepfakes, and the anticipated emergence of the metaverse. With an array of discussions led by experts, the festival aims to address the potential of AI to disrupt democratic processes, its role in spreading disinformation, and its broader societal implications. Among the speakers are Dr. Ella McPherson, who emphasizes the need to examine the hype around AI and its potential to undermine faith in elections, and Dr. Jonnie Penn, who expresses hope that AI will not drastically alter the democratic landscape immediately but cautions against the misuse of these technologies by tech companies.
The festival will also feature interactive workshops and presentations on various AI and technology-related topics. For instance, a workshop on deepfakes and AI-generated media seeks to enhance teenagers' literacy in navigating online misinformation, highlighting the importance of critical thinking and ethical considerations in the digital age. Additionally, discussions will extend to the metaverse, with researchers debating its potential to revolutionize human experience through "phygital" interactions, and the necessity for making these virtual spaces secure and inclusive. By bringing together technology experts, academics, and the public, the Cambridge Festival 2024 aims to foster a comprehensive understanding of AI's impact and explore ways to navigate its challenges and opportunities.
Reference: Bevan, S. (2024, March 5). AI and elections, deepfakes, the metaverse and more... University of Cambridge. Retrieved from https://www.cam.ac.uk/stories/cambridge-festival-2024-ai-technology?fbclid=IwAR07349wvoIdN5J-PYz1XZz9OaCtsLOnP1t-lMYRMxSpT4KH5-Qms9nfx3g_aem_AZD1Oya4AX0g_hy5QcY18qAf-cbMVaBA93AAJLfxLz_hShwZtE1R9LzkzFTXT85eDY0
What AI-generated images of Trump surrounded by Black voters mean for this election
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
An AI-generated image of former President Donald Trump surrounded by Black voters created by conservative radio host Mark Kaye has stirred controversy and discussions about AI-fueled misinformation in politics. Kaye, who shared the image with his 1 million Facebook followers, claimed he used artificial intelligence to craft the image as part of storytelling, suggesting that the onus is on viewers to discern the authenticity of such content. The Trump campaign has distanced itself from these AI-generated images, emphasizing that they have no control over third-party content. This incident underscores the complex dynamics of race, voter outreach, and the ethical considerations surrounding AI in political campaigns.
The broader implications of AI-generated images and misinformation are significant, as they potentially influence voter perceptions and contribute to a zero-trust society where distinguishing truth from falsehood becomes increasingly difficult. Experts warn about the long-term effects of deepfake technology, emphasizing the need for critical media literacy among voters. The situation highlights a pivotal challenge for democracy and the integrity of elections, underscoring the urgent need for responsible AI use and the development of strategies to mitigate the impact of AI-generated misinformation in political discourse.
Reference: Bunn, C. (2024, March 8). What AI-generated images of Trump surrounded by Black voters mean for this election. NBC News. Retrieved from https://www.nbcnews.com/news/us-news/trump-ai-deep-fake-image-black-voters-2024-election-rcna141949
INTERPOL Financial Fraud assessment: A global threat boosted by technology
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
The INTERPOL Global Financial Fraud Assessment has revealed a concerning rise in global financial fraud, significantly enhanced by advancements in technology such as Artificial Intelligence (AI), large language models, and cryptocurrencies. Organized crime groups are exploiting these technologies to target victims worldwide more efficiently, employing methods like phishing, ransomware-as-a-service, and ‘pig-butchering’ scams—a combination of romance and investment fraud using cryptocurrencies. This assessment underscores the evolving landscape of financial fraud, highlighting the role of human trafficking in forced criminality within call centers to perpetrate these scams. INTERPOL Secretary General Jürgen Stock emphasizes the epidemic growth of financial fraud, stressing the urgent need for global collaboration and action to address these threats, including improving information sharing across borders and sectors and enhancing law enforcement training and capacity building.
The report, launched at the Financial Fraud Summit in London and intended for law enforcement use, identifies investment fraud, advance payment fraud, romance fraud, and business email compromise as the most prevalent global trends. It points out the necessity of strengthening data collection and analysis to develop more informed counter-strategies. INTERPOL’s I-GRIP initiative, a stop-payment mechanism launched in 2022, has already intercepted over USD 500 million in criminal proceeds from cyber-enabled fraud. Regional trends vary, with Business Email Compromise and pig butchering fraud notable in Africa, various types of impersonation and romance frauds prevalent in the Americas, pig butchering schemes expanding in Asia, and online investment frauds and phishing attacks rising in Europe. The report calls for building multi-stakeholder, Public-Private Partnerships to recover funds lost to financial fraud and bridge crucial information gaps.
Reference: INTERPOL. (2024, March 11). INTERPOL Financial Fraud assessment: A global threat boosted by technology. Retrieved from https://www.interpol.int/News-and-Events/News/2024/INTERPOL-Financial-Fraud-assessment-A-global-threat-boosted-by-technology
Vishing, smishing, and phishing attacks skyrocket 1,265% post-ChatGPT
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Following the launch of ChatGPT in November 2022, there has been a dramatic increase in vishing (voice phishing), smishing (SMS phishing), and phishing attacks, with a reported rise of 1,265%, according to a survey by Enea. This surge in AI-powered fraud attacks has left 76% of enterprises admitting to a lack of sufficient protection against voice and messaging fraud. Despite significant losses from mobile fraud reported by 61% of enterprises, a startling majority over three-quarters have not invested in defenses against SMS spam or voice scams. The reliance on communication service providers (CSPs) for security is high, with 51% of enterprises expecting their telecom operators to shield them from voice and mobile messaging fraud, underscoring the critical role of CSPs in their telecom purchasing decisions.
The survey further reveals that only 59% of CSPs have implemented a messaging firewall, and a mere 51% have a signaling firewall in place, highlighting a substantial gap in defense against evolving threats. Security leaders among CSPs, identified by their superior capabilities and focus on security, are shown to be more successful in detecting and mitigating security breaches. They also view security as a revenue-generating opportunity, contrasting with less prepared CSPs. The insights from John Hughes, SVP and Head of Network Security at Enea, emphasize the urgent need for enhanced network security measures to combat the evolving threat landscape, particularly with the rise of AI-powered techniques in cybercrime. This situation calls for CSPs to address challenges such as skill shortages, budget constraints, and organizational complexity to prioritize and improve security measures effectively.
Reference: Help Net Security. (2024, February 29). Vishing, smishing, and phishing attacks skyrocket 1,265% post-ChatGPT. Help Net Security. Retrieved from https://www.helpnetsecurity.com/2024/02/29/mobile-fraud-losses/?web_view=true
AI Worm Developed by Researchers Spreads Automatically Between AI Agents
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Researchers have developed a groundbreaking generative AI worm named Morris II, capable of autonomously spreading between AI systems, marking a significant evolution in cyber threats. Led by Ben Nassi from Cornell Tech, the team demonstrated the worm's capability to infiltrate generative AI email assistants, notably affecting major AI models such as OpenAI's ChatGPT and Google's Gemini. This worm operates by exploiting the interconnectedness and autonomy of AI ecosystems, allowing it to extract data, disseminate spam, and breach security measures effectively. This development suggests a shift in cybersecurity landscapes, mirroring the disruptive impact of the original Morris worm on the internet in 1988. The Morris II worm highlights the vulnerability of generative AI systems to cyberattacks, capable of not only spreading from one system to another but also stealing data or deploying malware, posing significant risks to the reliance on these technologies.
In response to the threats posed by the AI worm, cybersecurity experts advocate for traditional security measures coupled with vigilant application design and human oversight in AI operations to mitigate these risks. Monitoring for unusual activity, such as repetitive prompts within AI systems, is recommended for early detection of potential threats. The research underscores the urgency for the AI development community to prioritize security in the design and deployment of AI ecosystems to safeguard against the exploitation of generative AI systems. By raising awareness and implementing robust security measures, the potential for unauthorized activities by AI agents can be significantly reduced, ensuring the safe and responsible use of generative AI technologies.
Reference: Divya. (n.d.). AI Worm Developed by Researchers Spreads Automatically Between AI Agents. GBhackers on Security. Retrieved from https://gbhackers-com.cdn.ampproject.org/c/s/gbhackers.com/created-ai-worm/amp/
Computer Science Professor Earns $98K Grant to Improve Speech Recognition Technology
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Assistant Professor Shengzhi Zhang, from the Metropolitan College Department of Computer Science at Boston University, has received a grant of $98,197 from Cisco to enhance speech recognition technology. The funding supports his project titled “Rethinking Adversarial Attacks Against Speech Recognition Systems,” aimed at addressing the vulnerabilities in speech recognition systems caused by adversarial examples. These examples mislead the systems by exploiting their machine learning algorithms, resulting in misidentification and errors. Dr. Zhang's research focuses on AI security, particularly on safeguarding AI-driven speech recognition systems—like those used in Amazon Echo, Google Assistant, Apple’s Siri, and Microsoft Cortana—by designing defenses against these flaws.
Dr. Zhang’s project hypothesizes that the key to improving speech recognition lies in better understanding and integrating human hearing aspects, specifically phonetic frequencies (formants), which are crucial for distinguishing phonemes, the sounds that make up words. By identifying the imperceptible features in adversarial examples that cause misrecognition, the project aims to mitigate vulnerabilities that attackers can exploit, such as embedding harmful commands in seemingly harmless music. This interdisciplinary approach, blending computer science with computational linguistics, exemplifies the integrated nature of problem-solving in the field and represents a significant step toward more secure and reliable speech recognition technologies.
Reference: Boston University Metropolitan College. (2024). Computer Science Professor Earns $98K Grant to Improve Speech Recognition Technology. Retrieved from https://www.bu.edu/met/news/computer-science-professor-earns-98k-grant-to-improve-speech-recognition-technology/
European AI Office
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
The European AI Office, established within the European Commission, serves as the central hub for AI expertise across the EU, playing a crucial role in the implementation of the AI Act, particularly for general-purpose AI. It aims to foster the development and use of trustworthy AI, enhance international cooperation, and protect against AI-related risks. The office is instrumental in ensuring AI safety and trustworthiness across the 27 Member States by providing legal certainty to businesses and upholding the health, safety, and fundamental rights of individuals. This initiative is part of the EU's broader strategy to maintain a single governance system for AI, which includes evaluating AI models, enforcing regulations, and promoting an innovative ecosystem for AI development.
In addition to supporting the AI Act, the European AI Office is committed to advancing the development of trustworthy AI by facilitating the exchange of best practices, providing access to AI sandboxes for real-world testing, and encouraging international collaboration on AI governance. The office works closely with Member States, the scientific community, industry, and civil society to ensure a comprehensive and inclusive approach to AI policy and practice. The launch of the GenAI4EU initiative aims to support startups and SMEs in creating AI that aligns with EU values and regulations, highlighting the EU's commitment to leading in the ethical development and application of AI technologies.
Reference: European Commission. (2024, March 8). European AI Office. Digital Strategy. https://digital-strategy.ec.europa.eu/en/policies/ai-office
A Beverly Hills middle school is investigating students sharing AI-made nude photos of classmates
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
Students at a middle school in Beverly Hills, California, used artificial intelligence technology to create fake nude photos of their classmates, according to school administrators. Now, the community is grappling with the fallout.
School officials at Beverly Vista Middle School were made aware of the “AI-generated nude photos” of students last week, the district superintendent said in a letter to parents. The superintendent told NBC News the photos included students’ faces superimposed onto nude bodies. The district did not share how it determined the photos were produced with artificial intelligence.
Reference: Tenbarge and Kreutz (2023, February 27). A Beverly Hills middle school is investigating students sharing AI-made nude photos of classmates. NBC News. https://www.nbcnews.com/tech/misinformation/beverly-vista-hills-middle-school-ai-images-deepfakes-rcna140775
Microsoft partners with Mistral in second AI deal beyond OpenAI
BY Royal-RAIS Editorial Team | PUBLISHED: March, 2024
In January 2023, Microsoft announced a significant extension of its partnership with OpenAI, involving a multiyear, multibillion dollar investment. This move aims to bolster OpenAI's AI research and democratize AI technology, with Microsoft becoming the exclusive cloud provider for OpenAI. The collaboration is set to enhance Microsoft's Azure services with advanced AI capabilities, including the deployment of supercomputing systems to support OpenAI's research efforts and the integration of OpenAI's AI models into Microsoft's consumer and enterprise products.
Reference: Warren, T. (2023, January 23). Microsoft extends OpenAI partnership in a ‘multibillion dollar investment’. The Verge. https://www.theverge.com/2024/2/26/24083510/microsoft-mistral-partnership-deal-azure-ai
The terrifying rise of 'voice cloning' scams: Hackers use AI to replicate your voice before placing fake calls to friends or family pleading for money
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
Voice cloning, an emerging form of deepfake technology utilizing artificial intelligence (AI) to simulate individuals' voices, has raised significant concerns due to its potential for misuse in scams and fraudulent activities. As reported by Shivali Best and Jordan Saward for Mailonline on February 21, 2024, this technology has advanced to the point where it requires only a brief audio sample to accurately replicate a person's voice, making famous figures and ordinary individuals alike vulnerable to identity theft and financial fraud. For instance, a CEO was deceived into transferring $243,000 due to a convincing fake phone call. The accessibility and affordability of voice cloning tools have evolved over the years, making it possible for individuals with minimal technical experience to execute scams that are increasingly difficult to distinguish from genuine communications.
To explore the effectiveness and dangers of voice cloning, the reporters allowed a professional hacker to clone the author's voice, revealing alarmingly realistic results. This demonstration, which involved using a five-minute audio clip to train an AI tool, showcased the ease with which malicious actors could generate fake messages in someone's voice, complete with natural-sounding inflections and pauses. The implications of this technology are particularly troubling for personal and corporate security, as it can be used to create highly convincing fraudulent communications that manipulate victims into urgent actions, such as transferring money or disclosing sensitive information. Despite certain detectable signs of cloned voices, such as unnatural pauses or background noise, the rapid improvement of AI means these indicators may become less apparent over time. The experts recommend vigilance, questioning unexpected requests for urgent action, and considering the establishment of a 'safe word' with close contacts as measures to counteract the risks associated with voice cloning scams.
Reference:
Best, S., & Saward, J. (2024, February 21). The terrifying rise of 'voice cloning' scams: Hackers use AI to replicate your voice before placing fake calls to friends or family pleading for money. Mailonline. Retrieved from https://www.dailymail.co.uk/sciencetech/article-13088183/amp/hacker-clone-VOICE-fake-audio.html
AlgorithmWatch and AI Forensics among the first organizations to request platform data under the DSA
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
With the introduction of the EU's Digital Services Act (DSA), a significant shift in the digital landscape aims to enhance online rights protection for citizens, marking February 15, 2024, as a pivotal day for its impactful provisions. Among these, the establishment of a Digital Services Coordinator (DSC) in every EU Member State and the opening of new pathways for researchers to access platform data stand out. This legal framework provides a beacon of hope for users like a Dutch teenager who lost her Instagram account and its 20,000 followers overnight due to malicious reporting. The DSA's complaint-handling mechanisms and the DSC's role in facilitating out-of-court settlements promise a more robust defense against such injustices, potentially offering a direct and independent means for users to safeguard their online rights.
However, the practical implementation of the DSA raises concerns about the readiness and efficacy of Digital Services Coordinators and the real accessibility of platform data for research purposes. Organizations like AlgorithmWatch and AI Forensics voice apprehensions regarding potential delays in DSCs becoming operational, as seen in Germany's expected legal appointment of a DSC by April, which could hinder timely action against election misinformation and other systemic risks. The DSA's provision for researchers to access internal platform data through DSCs introduces an unprecedented opportunity for scrutiny but is met with hurdles like application vetting and possible platform objections. As AlgorithmWatch and AI Forensics navigate these new regulations in their investigation into misinformation on Bing, the unfolding scenario underscores the DSA's ambitious yet challenging path towards democratizing data access and fortifying online protections in an election-critical year.
Reference:
AlgorithmWatch. (2024, February 15). AlgorithmWatch and AI Forensics among the first organizations to request platform data under the DSA. Retrieved from https://algorithmwatch.org/en/dsa-platform-data-request-2024/
Criminals Who Misuse AI Face Stiffer Sentences in DOJ Crackdown
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
The U.S. Department of Justice (DOJ) is intensifying its efforts to combat the misuse of artificial intelligence (AI) by directing federal prosecutors to pursue harsher penalties against criminals leveraging AI to further their illegal activities. This directive was announced by Deputy Attorney General Lisa Monaco during a speech at the University of Oxford in the UK on February 14, 2024. Monaco emphasized the DOJ's commitment to using existing statutes to address the challenges posed by rapidly evolving generative technology, with a particular focus on safeguarding election security, preventing discrimination, curbing price fixing, and protecting against identity theft. She acknowledged AI's potential to aid law enforcement in detecting crimes but also highlighted its dangers, referring to AI as “the sharpest blade yet” in technology's double-edged legacy.
In light of the upcoming 2024 election, Monaco expressed specific concern over AI's capacity to undermine U.S. electoral integrity through the proliferation of online harassment, disinformation, and the creation of deepfakes. The DOJ is also examining its own use of AI to ensure that its deployment does not infringe upon public rights or introduce safety risks. Monaco revealed that the DOJ has already begun utilizing AI for various purposes, including tracing opioid sources, managing FBI tip triage, and processing vast amounts of electronic evidence in significant legal cases, such as those stemming from the January 6 Capitol attack. This move by the DOJ reflects a broader strategy to both harness AI's benefits for justice and mitigate its potential for harm.
Reference:
Penn, B. (2024, February 14). Criminals Who Misuse AI Face Stiffer Sentences in DOJ Crackdown. Bloomberg Law. Retrieved from https://news.bloomberglaw.com/us-law-week/us-to-seek-stiffer-sentences-when-ai-facilitates-crimes
Microsoft says US rivals are beginning to use generative AI in offensive cyber operations
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
Microsoft has reported that U.S. adversaries, predominantly Iran and North Korea, along with Russia and China to a lesser extent, are beginning to utilize generative artificial intelligence (AI) to conduct or organize offensive cyber operations. This announcement was made on February 14, 2024, by the tech giant in partnership with OpenAI, highlighting their efforts in detecting and disrupting the malicious use of their AI technologies by these actors. Microsoft emphasized that the detected techniques were in early stages and not particularly innovative, but stressed the importance of public exposure as these adversaries leverage large-language models to enhance their network breach capabilities and influence operations. The use of machine learning for defense by cybersecurity firms has been a longstanding practice, mainly for detecting anomalous behavior in networks. However, the advent of large-language models like OpenAI’s ChatGPT has significantly intensified the technological cat-and-mouse game between cybercriminals and cybersecurity defenses.
Microsoft, having invested billions in OpenAI, coincided the announcement with a report on generative AI’s expected impact on malicious social engineering, including the creation of more sophisticated deepfakes and voice cloning. This development poses a significant threat to democracy, especially in a year with over 50 countries conducting elections, by potentially magnifying disinformation. Microsoft detailed instances of misuse by groups such as North Korea’s Kimsuky, Iran’s Revolutionary Guard, Russia’s GRU unit Fancy Bear, and Chinese cyberespionage groups Aquatic Panda and Maverick Panda, showcasing the varied malicious applications of generative AI from spear-phishing to studying evasion techniques in compromised networks. Despite the current limitations of generative AI in enhancing malicious cybersecurity tasks, experts predict that its role in cyber warfare will only grow, underscoring the urgency for the development of AI with security in mind.
Reference:
Bajak, F. (2024, February 14). Microsoft says US rivals are beginning to use generative AI in offensive cyber operations. AP News. Retrieved from https://apnews.com/article/microsoft-generative-ai-offensive-cyber-operations-3482b8467c81830012a9283fd6b5f529
Penn Engineering Announces First Ivy League Undergraduate Degree in Artificial Intelligence
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
The University of Pennsylvania School of Engineering and Applied Science is pioneering the academic landscape by introducing the first Bachelor of Science in Engineering (B.S.E.) degree in Artificial Intelligence (AI) among Ivy League universities, as reported by Holly Wojcik on February 13, 2024. This groundbreaking undergraduate program is positioned to meet the burgeoning demand for AI engineers who are adept at applying AI principles in a responsible and ethical manner across various sectors including health, energy, transportation, and national security. The program has been made possible through the generosity of Raj and Neera Singh, whose vision and philanthropy aim to empower students to develop AI tools that can significantly benefit society. The curriculum is designed to equip students with advanced knowledge in machine learning, computing algorithms, data analytics, and robotics, preparing them for the forefront of AI innovation and application.
The AI degree program at Penn Engineering, set to commence in fall 2024, emphasizes the creation of a society where AI serves as a fundamental force for good, under the leadership of George J. Pappas, a 2024 National Academy of Engineering inductee. The curriculum, developed to prepare students for future jobs in new or revolutionized fields, will be delivered by world-renowned faculty in the newly established Amy Gutmann Hall, a hub for data science. This initiative reflects Penn Engineering's commitment to advancing the development of AI through education, research, and innovation, addressing fundamental questions about the nature of intelligence and learning, and aligning AI with societal values to build trustworthy systems. The program represents a significant stride towards nurturing the next generation of engineers to leverage AI technology for positive societal impact.
Reference:
Wojcik, H. (2024, February 13). Penn Engineering Announces First Ivy League Undergraduate Degree in Artificial Intelligence. Penn Engineering Blog. Retrieved from https://blog.seas.upenn.edu/penn-engineering-announces-first-ivy-league-undergraduate-degree-in-artificial-intelligence/
Unicorn alert! Homeland Security posts rare ad for all-remote AI Corps
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
The Department of Homeland Security (DHS) is making a significant move to bolster its artificial intelligence (AI) capabilities by launching a hiring sprint for 50 technology experts who will work fully remotely from anywhere in the United States. This initiative, as reported by Molly Weisner on February 7, 2024, is part of DHS’s Office of the Chief Information Officer's efforts to address the urgent need for AI expertise within the government. The positions, which offer competitive salaries under the General Schedule's upper echelons, were introduced following a call from Secretary Alejandro Mayorkas and Chief Artificial Intelligence Officer Eric Hysen. This new DHS AI Corps is modeled after the U.S. Digital Service and aims to leverage AI for various critical missions, including countering fentanyl trafficking, combating child sexual exploitation, improving immigration services, securing travel, fortifying critical infrastructure, and enhancing cybersecurity.
The establishment of the DHS AI Corps comes amidst a government-wide push to embrace AI technology, with recent hearings on Capitol Hill focusing on agencies' AI utilization and needs. The Biden administration has emphasized AI education and deployment through an executive order and subsequent guidance, highlighting the technology's potential benefits and challenges. DHS’s decision to offer fully remote positions is a notable deviation from the trend of agencies calling workers back to offices post-pandemic, reflecting a flexible approach to attracting top talent in a competitive field. The initiative also includes using shared certificates to streamline the hiring process across multiple agencies, particularly for the critical skill gap in the Information Technology Management series (2210). This approach underscores the federal government's urgent and innovative efforts to recruit and retain AI specialists to address pressing national security and public welfare issues.
Reference:
Weisner, M. (2024, February 7). Unicorn alert! Homeland Security posts rare ad for all-remote AI Corps. Federal Times. Retrieved from https://www.federaltimes.com/newsletters/2024/02/07/unicorn-alert-homeland-security-posts-rare-ad-for-all-remote-ai-corps/
From Our Lab to the Pitch: Advancements in Soccer Analytics through AI
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
In this article, Keisuke Fujii, an associate professor at Nagoya University in Japan, explores the integration of machine learning in soccer analytics to enhance the understanding and evaluation of team sports. Fujii's research focuses on analyzing group behaviors in sports through the application of machine learning techniques, particularly in soccer, where both "event data" (like passes and shots) and "tracking data" (detailing player positions) are crucial for comprehensive analysis. The research emphasizes two main themes: trajectory prediction of players and reinforcement learning. By employing deep learning models for predicting player trajectories, Fujii and his team have been able to quantify the impact of players' movements on creating scoring opportunities, thereby evaluating off-ball actions and team defense positioning effectively. Additionally, the application of reinforcement learning models aims to model players as agents seeking rewards, further enhancing the understanding of strategic actions and decision-making in soccer.
Fujii's work represents a significant advancement in the field of sports analytics by proposing methods to evaluate nearly all players on the field, including those without the ball, a task that was previously challenging due to computational limitations and the complexity of modeling tactical movements. The article also discusses ongoing projects and future directions, such as the use of "Inverse Reinforcement Learning" and "Game Theory" to delve deeper into player decision-making processes. Despite challenges in data acquisition, Fujii's lab collaborates with the University of Tsukuba’s soccer team and works on publicly available soccer tracking datasets to make sophisticated analyses accessible to a broader audience. Through these innovative methodologies, the research aims to improve the quality of coaching and performance analysis across all levels of soccer, contributing significantly to the field of sports analytics.
Reference:
Fujii, K. (2024, February 7). From Our Lab to the Pitch: Advancements in Soccer Analytics through AI. Medium. Retrieved from https://medium.com/@keisuke198619/from-our-lab-to-the-pitch-advancements-in-soccer-analytics-through-ai-e16e65b55937
The AI Deepfakes Problem Is Going to Get Unstoppably Worse
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
The article by Maxwell Zeff, published on February 9, 2024, delves into the escalating problem of AI-generated deepfakes and the ineffectiveness of current measures to combat them. Despite recent actions like the Federal Communications Commission (FCC) outlawing deepfake robocalls and OpenAI, along with Google, introducing watermarks to label AI-generated images, these efforts are criticized for their limited impact. The article highlights a case where deepfakes duped a finance worker in Hong Kong out of $25 million, underscoring the sophisticated ability of deepfakes to blur the lines between reality and fabrication. Experts argue for a more comprehensive approach to deepfake detection, emphasizing the need for implementation at various points, including the source, transmission, and destination, to effectively combat this issue.
As the technology behind deepfakes rapidly advances, the detection methods lag, posing a significant challenge in curbing the spread and impact of these digital forgeries. The article points out the necessity for major platforms and service providers, such as Meta and X, to integrate deepfake detection into their services. However, the current state of deepfake detection technology, which is not yet fully accurate or widespread, complicates efforts to identify and mitigate the spread of deepfakes. This gap between generative AI capabilities and detection technologies indicates that the problem of deepfakes is expected to worsen, making them a potent tool for misinformation. The article concludes with a call for urgent improvements in detection technologies and regulatory measures to address the deepfake crisis.
Reference:
Zeff, M. (2024, February 9). The AI Deepfakes Problem Is Going to Get Unstoppably Worse. Gizmodo. Retrieved from https://gizmodo.com/youll-be-fooled-by-an-ai-deepfake-this-year-1851240169
CERN’s new robot detects radiation leaks in complex experiment areas
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
Scientists at CERN have introduced a novel robot, dubbed the CERNquadbot, designed to detect potential radiation leaks within the complex and challenging terrains of the research center's experimental areas. This innovative, dog-like robot has recently passed its radiation protection tests in the North Area, CERN's most extensive experimental zone, showcasing its ability to navigate terrains that are difficult for both humans and traditional wheeled robots. The CERNquadbot's development is part of an initiative to enhance safety protocols by monitoring environmental conditions and detecting potential hazards such as water, fire, or other leaks in experiment caverns, like those used for the ALICE detector focused on heavy-ion physics.
CERN's history of employing robots for maintenance, repair, and inspection tasks is extensive, including devices like the CERNbot, the CRANEbot, and the Train Inspection Monorail (TIM). However, these existing robotic solutions are limited in their ability to access the more cluttered or complex areas within the experiment caverns. The introduction of the CERNquadbot represents a significant advancement in CERN's capability to gather critical data in these hard-to-reach areas, marking a new era in the inspection and monitoring of high-value and high-risk laboratory equipment. This development not only enhances operational efficiency but also contributes to ensuring the safety and integrity of CERN's experimental environments.
Reference:
Lykiardopoulou, I. (2024, February 6). CERN’s new robot detects radiation leaks in complex experiment areas. The Next Web. Retrieved from https://thenextweb.com/news/cerns-robot-detects-radiation-leaks-complex-experiment-areas
Using AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
The article discusses the critical and challenging role of artificial intelligence (AI) in monitoring the internet for terrorist content, emphasizing its inevitability and the complexities involved. With millions of social media posts, photos, and videos being uploaded every minute, manual inspection of all online material for harmful or illegal content, such as terrorism and violence promotion, is impractical. Consequently, automated tools, including AI, have become essential in this endeavor. The article highlights the development of these tools, driven by new laws and regulations like the EU’s terrorist content online regulation, which requires the prompt removal of terrorist content from platforms. It outlines two primary types of tools used for identifying terrorist content: behavior and content-based tools, including matching and classification approaches, each with its own set of challenges, such as the creation of perceptual hashes and the need for extensive datasets for AI training.
Despite the advances in AI technology for content moderation, the article underscores the indispensable role of human moderators in maintaining databases, assessing flagged content, and handling appeals. However, it also points out the demanding nature of this work and the often inadequate working conditions of moderators, suggesting the development of minimum standards for their employment, including mental health support. Moreover, the reliance on third-party content moderation solutions, which may not be subject to the same level of oversight as tech platforms and might over-depend on automated tools, is cautioned against. The piece concludes by advocating for collaborative initiatives between governments, the private sector, and international organizations to develop and share resources, like the EU-funded Tech Against Terrorism Europe project and Meta’s Hasher-Matcher-Actioner tool, to effectively address the challenge of online terror content.
Reference:
Macdonald, S., Mattheis, A. A., & Wells, D. (2024, February 7). Using AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls. The Conversation. Retrieved from https://theconversation.com/using-ai-to-monitor-the-internet-for-terror-content-is-inescapable-but-also-fraught-with-pitfalls-222408
A company lost $25 million after an employee was tricked by deepfakes of his coworkers on a video call: police
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
In an alarming incident of cybercrime, a company in Hong Kong was defrauded of $25 million due to an employee being deceived by deepfake technology. According to the report, the employee from the finance department was contacted by someone impersonating the company’s UK-based Chief Financial Officer (CFO) and was instructed to participate in a video call. This call included what appeared to be the CFO and other colleagues, all of which were fabricated using deepfake technology. Following the instructions received during this call, the employee made fifteen transfers totaling HK$200 million ($25.6 million) to various bank accounts in Hong Kong. The fraud was only discovered a week later when the employee reached out to the company's headquarters, prompting an investigation by Hong Kong police, who have yet to make any arrests.
The incident has raised significant concerns about the security risks posed by deepfake technology. Deepfakes, which are hyper-realistic digital forgeries of people in videos or audio recordings, have been used in various malicious ways, including creating false representations of public figures. The Hong Kong police reported that the scammers had generated the deepfakes using publicly available video and audio footage of the company's employees. This event underscores the growing global concern over deepfakes, leading to calls for more stringent legislation to combat their misuse, including a proposal in the United States to make the non-consensual sharing of deepfake pornography illegal.
Reference:
Tan, H. (2024, February 5). A company lost $25 million after an employee was tricked by deepfakes of his coworkers on a video call: police. Business Insider. Retrieved from https://www-businessinsider-com.cdn.ampproject.org/c/s/www.businessinsider.com/deepfake-coworkers-video-call-company-loses-millions-employee-ai-2024-2?amp
New Senate bill aims to kill proposed SEC rule on AI conflicts of interest
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
In the article, Senators Ted Cruz (R-Texas) and Bill Hagerty (R-Tenn.) are reported to have introduced a bill aimed at preventing the implementation of a proposed Securities and Exchange Commission (SEC) regulation concerning the use of artificial intelligence (AI) in the financial sector. This proposed rule, unveiled in July prior to the article's publication, mandates financial firms to identify and mitigate conflicts of interest that could arise from their deployment of AI technologies. Specifically, it requires these firms to ensure that their AI tools do not favor their own products over their clients' interests, aiming to prioritize the welfare of clients above the firms' financial gains.
The move by Cruz and Hagerty highlights the growing tension between legislative actions and regulatory initiatives in the realm of AI and its application in finance. The SEC's draft rule is designed to address potential biases and conflicts of interest, underscoring the regulatory body's focus on ethical AI usage that safeguards consumer interests. However, the opposition from the senators suggests a brewing conflict over the extent and nature of AI regulation, signaling potential future disputes between Congress and the White House as they navigate the establishment of guidelines for AI utilization across various sectors.
Reference:
Wilkins, E. (2024, February 6). New Senate bill aims to kill proposed SEC rule on AI conflicts of interest. CNBC. Retrieved from https://www-cnbc-com.cdn.ampproject.org/c/s/www.cnbc.com/amp/2024/02/06/new-senate-bill-aims-to-kill-proposed-sec-rule-on-ai-conflicts-of-interest-.html
Introducing Google’s Secure AI Framework
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
Google has introduced the Secure AI Framework (SAIF), aiming to establish industry security standards for AI technology development and deployment. This framework focuses on incorporating security best practices and addressing AI-specific risks, such as data poisoning and malicious inputs, to ensure AI systems are secure by default. The approach also emphasizes collaboration and community building to advance AI security, including partnerships and sharing best practices.
Reference: Google. (2023, June 8). Introducing Google’s Secure AI Framework. The Keyword. Retrieved from https://blog.google/technology/safety-security/introducing-googles-secure-ai-framework/
AI vision and autonomous lifeboats could be the future of sea rescue
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
The article highlights the innovative efforts of Zelim, a startup founded by sailor Sam Mayall, to revolutionize sea rescue operations through technology. Established in 2017, Zelim aims to make search and rescue missions safer and more efficient by utilizing AI-powered person-overboard detection technology and autonomous lifeboats. The company, based out of Edinburgh, has partnered with Ocean Winds for a trial at a floating wind farm off the coast of Portugal to test its AI detection system named ZOE. This system is capable of identifying and tracking people, vessels, and other objects in the ocean with high accuracy, even under challenging conditions like stormy seas. ZOE's ability to detect individuals overboard and trigger immediate alerts promises to significantly reduce the time taken to initiate rescue operations, thereby saving lives.
In addition to the AI detection technology, Zelim is developing an unmanned lifeboat, equipped with the ZOE system, capable of executing autonomous rescues. This lifeboat can rescue up to nine people simultaneously and features a unique conveyor belt system, named Swift, for rapidly bringing individuals on board without human assistance. The Swift system, which can also be installed on manned lifeboats and is being tested with entities like the Milford Haven Port Authority, exemplifies Zelim's commitment to improving safety in offshore environments. With the rise of offshore wind farms and the inherent risks of working at sea, Zelim's innovations could represent a significant leap forward in maritime safety and efficiency.
Reference: Geschwindt, S. (2024, February 2). AI vision and autonomous lifeboats could be the future of sea rescue. The Next Web. Retrieved from https://thenextweb.com/news/ai-autonomous-lifeboats-offshore-search-and-rescue
Deepfakes exploiting Taylor Swift images exemplify a scourge with little oversight
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
The article reports on the growing concern over deepfake technology, particularly highlighting the recent spread of fabricated pornographic images of pop icon Taylor Swift across social media platforms like X (formerly Twitter) and Telegram. Despite efforts to remove these images, they accumulated millions of views, underscoring the rapid advancement and dissemination of AI-generated deepfakes and the challenge policymakers face in keeping pace with technology. The manipulation of Swift's image exemplifies the broader issue of deepfakes, which are increasingly being used to create nonconsensual pornographic content, predominantly targeting women and, alarmingly, often underage girls. This phenomenon raises significant privacy and ethical concerns, as noted by law professor Danielle Citron, who emphasizes the coercive and identity-theft nature of deepfake porn.
The article also touches on the broader implications of deepfakes beyond nonconsensual pornography, including their potential use in political disinformation campaigns and unapproved celebrity endorsements. Despite some state-level legislation against nonconsensual deepfake pornography, federal action in the U.S. has been limited, with several bills still pending in Congress. The piece mentions that some tech companies, recognizing the dangers of AI-generated content, have begun to implement their own measures, such as requiring political ads made with AI to carry labels. However, these efforts are seen as insufficient in the face of the rapidly evolving technology and its capacity for misuse, highlighting the need for more robust regulatory and legal frameworks to address the complex challenges posed by deepfakes.
Reference: Chappell, B. (2024, January 26). Deepfakes exploiting Taylor Swift images exemplify a scourge with little oversight. NPR. Retrieved from https://www.npr.org/2024/01/26/1227091070/deepfakes-taylor-swift-images-regulation
Expect ‘AI versus AI’ cyber activity between US and adversaries, Pentagon official says
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
The article discusses the anticipated shift in cyber warfare towards "AI versus AI" conflict, as stated by Jude Sunderbruch, the director of the Defense Department's Cyber Crime Center (DC3). This evolution signifies a future where artificial intelligence systems will be utilized by adversaries to execute cyberattacks against the United States. Sunderbruch, speaking at the DefenseScoop's Google Defense Forum, emphasized that the U.S. and its allies need to innovate and leverage existing AI technologies to stay ahead of competitors like China. The incorporation of AI and machine learning into cybersecurity strategies is seen as crucial for enhancing both offensive and defensive capabilities, including threat and vulnerability analysis and system testing.
Furthermore, the Pentagon's updated cybersecurity strategy, released in September, outlines a more aggressive stance in cyber operations, particularly against China and Russia. It highlights the application of automated and AI-driven capabilities to bolster U.S. cyber defenses. This strategy was partly influenced by observations from Russia’s invasion of Ukraine in 2022, demonstrating the significant role of cyber capabilities in large-scale conventional conflicts. DC3's role in sharing information on cyber threats and its advanced forensics capabilities, which can recover data from damaged or destroyed hardware, is also noted as a critical component in the U.S.'s cybersecurity framework.
Reference: DiMolfetta, D. (2024, January 25). Expect ‘AI versus AI’ cyber activity between US and adversaries, Pentagon official says. Nextgov. Retrieved from https://www.nextgov.com/cybersecurity/2024/01/expect-ai-versus-ai-cyber-activity-between-us-and-adversaries-pentagon-official-says/393613/
Could AI 'trading bots' transform the world of investing?
BY Royal-RAIS Editorial Team | PUBLISHED: February, 2024
This article explores the emerging trend of AI "trading bots" in the investment world, which promises to manage investments with the potential for lucrative returns. However, the effectiveness and reliability of these AI systems are under scrutiny. The hype surrounding AI's capabilities has led approximately one-third of investors in a 2023 US survey to express willingness to entrust their investment decisions entirely to trading bots. Despite this enthusiasm, experts, including John Allan of the UK's Investment Association, urge caution. Allan emphasizes the importance of proven long-term success before fully relying on AI for investment decisions, highlighting the indispensable role human investment professionals continue to play. The article delves into the inherent limitations of AI, including its inability to predict unforeseen market events and the risk of deteriorating decision quality if initially fed with poor data or programming.
The challenges extend beyond the technical to include ethical concerns, such as bias and data security. Instances of AI failures, like Amazon's gender-biased recruitment tool, underscore the complexities of AI applications. Experts like Prof. Sandra Wachter of Oxford University raise alarms over AI's susceptibility to inaccuracies, bias, and security vulnerabilities. Despite these risks, the fascination with AI-driven investment persists, attributed by business psychologist Stuart Duff to a perception of machines as more objective and reliable than humans. However, Duff warns that AI tools may inherit their developers' biases and lack the intuitive experience to navigate unforeseen crises effectively, highlighting the nuanced debate over AI's role in future investment strategies.
Reference: Bloom, J. (2024, February 1). Could AI 'trading bots' transform the world of investing? BBC News. Retrieved from https://www.bbc.com/news/business-68092814
How scammers are using your Snapchat and TikTok posts in their AI schemes
BY Royal-RAIS Editorial Team | PUBLISHED: January 29, 2024
The article, written by Jon Michael Raasch for Fox News, discusses the emerging trend of AI scams that exploit social media content from platforms like Snapchat, TikTok, and Facebook. Criminals are using AI tools to create synthetic voices and images for ransom schemes, feeding data from these social media sites into AI programs. Morgan Wright, a security adviser and chief security adviser for SentinelOne, explains that social media has become a tool for reconnaissance by criminal groups, including those involved in human trafficking. The scammers use real-time location posts on social media to target victims when they are separated from their families. In recent incidents in Arizona, criminals used AI-generated voices of family members to conduct ransom calls, creating a believable and urgent scenario for the victims.
One of the victims, Jennifer DeStefano, recounted her experience of receiving a call with an AI-generated voice resembling her daughter's, demanding ransom and threatening harm. The article highlights the speed and believability of these AI-powered scams, giving criminals a significant advantage. The use of AI in such scams is becoming more common as the technology becomes more accessible and user-friendly. Wright and DeStefano both emphasize the need for caution and awareness about the potential misuse of AI in crimes, suggesting that boundaries and discussions around the ethical use of AI are necessary.
Reference:
Raasch, J. M. (2023, April 23). How scammers are using your Snapchat and TikTok posts in their AI schemes. Fox News. Retrieved from https://www.foxnews.com/tech/scammers-are-using-your-snapchat-and-tiktok-posts-in-their-ai-schemes
What AI Means For Networking Infrastructure In 2024
BY Royal-RAIS Editorial Team | PUBLISHED: January 28, 2024
The article by R. Scott Raynovich, published on Forbes, delves into the significant impact of Artificial Intelligence (AI) on networking infrastructure in 2024, particularly in cloud and communications. AI's influence is mainly seen in how infrastructure is developed to support AI-enabled applications, such as large language models (LLMs) and Generative AI (GenAI). These AI applications demand extremely low latency and high bandwidth, leading to a shift towards more distributed and accelerated platforms. Networking, therefore, becomes a critical element in the AI stack, as highlighted by industry leaders like Cisco and HPE, particularly in HPE’s acquisition of Juniper Networks. AI necessitates a robust infrastructure encompassing chips, specialized networking cards, and high-performance computing systems, significantly raising the networking profile in the AI cloud world.
The article also explores the dual aspect of AI's impact on networking: 'Networking for AI' and 'AI for Networking.' Building AI-optimized infrastructure is not trivial, requiring substantial investments and sophisticated engineering. Companies like Nvidia and Arista Networks have drawn significant investor interest for their roles in AI networking. Nvidia, for instance, offers a complete infrastructure stack for AI, including software, chips, and networking elements. The article also discusses the ongoing debate between InfiniBand and Ethernet for AI systems, with emerging Ethernet-based solutions as an alternative. Additionally, the role of AI in observability and automation of infrastructure is highlighted, with AI-driven tools being essential for efficient management of IT systems. The article concludes that AI's influence on networking and infrastructure is a key theme for 2024, with significant future implications for networking and infrastructure deployments.
Reference:
Raynovich, R. S. (2024, January 23). What AI Means For Networking Infrastructure In 2024. Forbes. Retrieved from https://www.forbes.com/sites/rscottraynovich/2024/01/23/what-ai-means-for-networking-infrastructure-in-2024/amp/
Expect ‘AI versus AI’ cyber activity between US and adversaries, Pentagon official says
BY Royal-RAIS Editorial Team | PUBLISHED: January 28, 2024
The article discusses the evolving landscape of cyber warfare, focusing on the increasing likelihood of "AI versus AI" conflicts as predicted by Jude Sunderbruch, the Director of the Defense Department’s Cyber Crime Center (DC3). Sunderbruch, speaking at the DefenseScoop’s Google Defense Forum, emphasized that the United States and its allies are just at the beginning of understanding and utilizing artificial intelligence (AI) in cyber operations. He pointed out the critical need for creative strategies to stay ahead of adversaries like China in this domain. AI and machine learning technologies are expected to significantly enhance the capabilities of both novice and nation-state hackers, enabling more sophisticated social engineering attacks and the development of advanced hacking tools. Sunderbruch also highlighted the potential of AI systems in threat and vulnerability analysis and system testing.
The article also sheds light on the U.S. Defense Department’s updated cybersecurity strategy, which was released in September and takes a more offensive approach towards cyber operations. This strategy specifically identifies China and Russia as the top cyberspace adversaries and emphasizes targeting cybercriminals and other groups that threaten U.S. interests. The strategy was influenced by Russia’s invasion of Ukraine, demonstrating how cyber capabilities can be used in large-scale conventional conflicts. Additionally, the article mentions an implementation plan released by a civilian agency for a U.S. cybersecurity strategy, which assigns various federal agencies tasks to enhance the nation’s cyber readiness. DC3's role as a cybersecurity analysis center with advanced forensics capabilities is also discussed, highlighting its ability to collaborate with intelligence community partners and share information about cyber threats.
Reference:
DiMolfetta, D. (2024, January 25). Expect ‘AI versus AI’ cyber activity between US and adversaries, Pentagon official says. Nextgov. Retrieved from https://www.nextgov.com/cybersecurity/2024/01/expect-ai-versus-ai-cyber-activity-between-us-and-adversaries-pentagon-official-says/393613/
OpenAI quietly removes ban on military use of its AI tools
BY Royal-RAIS Editorial Team | PUBLISHED: January 28, 2024
OpenAI has recently revised its policy to allow the military use of its artificial intelligence tools, including ChatGPT. This update marks a significant change from its previous stance, which explicitly banned the use of its models for activities with a high risk of physical harm, such as weapons development and military applications. The policy change was not publicly announced but was noted in an update on OpenAI's policies page. This shift in policy aligns with OpenAI's new collaboration with the U.S. Department of Defense, focusing on the development of AI tools, particularly in the area of open-source cybersecurity.
The involvement of OpenAI with the Department of Defense was confirmed by Anna Makanju, OpenAI's VP of Global Affairs, during an interview at the World Economic Forum. This collaboration was discussed alongside CEO Sam Altman, indicating OpenAI's strategic move towards engaging with defense-related AI applications. While OpenAI’s policies still prohibit the use of its services to harm individuals, the relaxation of its stance on military applications suggests a broader interpretation of the role its AI technologies can play in national security and defense sectors.
Reference:
Field, H. (2024, January 16). OpenAI quietly removes ban on military use of its AI tools. CNBC. Retrieved from https://www-cnbc-com.cdn.ampproject.org/c/s/www.cnbc.com/amp/2024/01/16/openai-quietly-removes-ban-on-military-use-of-its-ai-tools.html
Democratizing the future of AI R&D: NSF to launch National AI Research Resource pilot
BY Royal-RAIS Editorial Team | PUBLISHED: January 28, 2024
The National Science Foundation (NSF), in collaboration with various federal agencies and private organizations, announced the launch of the National Artificial Intelligence Research Resource (NAIRR) pilot program. This initiative aims to democratize and enhance AI research and development by providing U.S.-based researchers and educators with access to advanced computing resources, datasets, models, and software. The NAIRR pilot is a significant step towards creating a comprehensive research infrastructure that facilitates responsible AI discovery and innovation. It represents a collective effort among governmental, academic, and private sectors, signifying the urgency in advancing AI technology in the United States. NSF Director Sethuraman Panchanathan and Arati Prabhakar, Assistant to the President for Science and Technology, underscored the program's role in maintaining the nation's global competitiveness in AI and its alignment with President Biden’s goals for responsible AI advancement.
The NAIRR pilot program is structured into four focus areas: NAIRR Open, NAIRR Secure, NAIRR Software, and NAIRR Classroom, each addressing different aspects of AI research and education. The program targets areas such as healthcare, environmental sustainability, and infrastructure, and aims to foster cross-sector partnerships, particularly in industry collaboration. This initiative is part of the broader goal set by Executive Order 14110, and the pilot is a crucial step in testing the concept and guiding future investments. The NAIRR pilot portal (nairrpilot.org) is established for researchers to access and apply for these resources, with a broader call for proposals expected in spring 2024. The article highlights the enthusiasm and support from numerous partners in the private sector and academia, indicating a strong commitment to advancing AI research and education in the U.S.
Reference:
National Science Foundation. (2024, January 24). Democratizing the future of AI R&D: NSF to launch National AI Research Resource pilot. NSF News. Retrieved from https://new.nsf.gov/news/democratizing-future-ai-rd-nsf-launch-national-ai
Computers make mistakes and AI will make things worse — the law must recognize that
BY Royal-RAIS Editorial Team | PUBLISHED: January 28, 2024
The editorial in Nature addresses the legal implications of relying on computer systems and artificial intelligence (AI) in decision-making, particularly highlighting the UK Post Office scandal involving the Horizon accounting system developed by Fujitsu. This system falsely accused Post Office workers of financial discrepancies, leading to severe consequences including wrongful prosecutions and personal tragedies. The core issue was the legal assumption in England and Wales that computer systems are infallible, making it challenging to contest computer-generated evidence. The article emphasizes the need for legal reform, especially as AI technology is increasingly integrated into various sectors. The presumption of computer reliability is outdated, and current laws need to adapt to the complexities of AI and the potential for errors.
The article proposes solutions to ensure justice and accountability in the use of AI and computer systems. It suggests mandatory disclosure of relevant data and code in legal cases, as well as transparency in information-security standards, system audits, and error management. This approach was effective in a group claim against the Post Office, where IT specialists aided in revealing the system's flaws. The article reflects on the 1980s, when computer evidence required proof of reliability, and calls for a similar approach in the AI era. This will ensure that AI and computer systems are not blindly trusted in legal contexts, preventing injustices like those seen in the Post Office scandal.
Reference:
Nature Editorial. (2024, January 23). Computers make mistakes and AI will make things worse — the law must recognize that. Nature, 625, 631. https://doi.org/10.1038/d41586-024-00168-8
Deepfakes exploiting Taylor Swift images exemplify a scourge with little oversight
BY Royal-RAIS Editorial Team | PUBLISHED: January 28, 2024
The article discusses the growing concern over deepfake technology, especially its use in creating non-consensual pornographic content. Recently, deepfake images purportedly of Taylor Swift gained significant attention online, underscoring the ease of creating and spreading such content. While some deepfake applications are marketed for entertainment, the majority are used for creating explicit videos, primarily targeting women. Legal and privacy experts, like Danielle Citron, highlight the dangers these deepfakes pose to privacy and identity. These technological advances in deepfake creation are outpacing regulatory efforts, with laws at both federal and state levels still in development.
The article also touches on the wider implications of deepfake technology, including its use in political misinformation and online scams. Examples include fake videos of celebrities like Jennifer Aniston and Tom Hanks used in deceptive advertising. The challenges posed by deepfakes are multifaceted, affecting areas from personal privacy to political disinformation. Current efforts to regulate these technologies vary, with some states implementing laws against nonconsensual deepfake pornography, while broader federal regulations are still being considered. The article emphasizes the urgent need for legal and ethical frameworks to address the rapidly evolving landscape of deepfake technology.
Reference:
Chappell, B. (2024, January 26). Deepfakes exploiting Taylor Swift images exemplify a scourge with little oversight. NPR. Retrieved from https://www.npr.org/2024/01/26/1227091070/deepfakes-taylor-swift-images-regulation
Advances in Facial Recognition Technology Have Outpaced Laws, Regulations; New Report Recommends Federal Government Take Action on Privacy, Equity, and Civil Liberties Concerns
BY Royal-RAIS Editorial Team | PUBLISHED: January 24, 2024
A recent report from the National Academies of Sciences, Engineering, and Medicine calls attention to the rapid advancement of facial recognition technology and its outpacing of current laws and regulations. The report underscores the need for urgent government action to address privacy, equity, and civil liberties concerns associated with the technology. It acknowledges the usefulness of facial recognition in various identity verification and identification applications but highlights significant issues related to its broader use. The technology, powered by deep neural network-based machine learning, has improved in accuracy and speed, but lacks comprehensive regulatory oversight in the U.S. This gap in regulation raises concerns about its impact on privacy, civil liberties, and human rights. The report suggests that the federal government should consider federal legislation and an executive order, alongside attention from courts, the private sector, civil society organizations, and others working with facial recognition technology, to guide its responsible development and deployment.
The report identifies two primary areas of concern: potential harms from misuse or problematic use of the technology, and issues arising from errors or limitations in the technology itself, such as differing false positive or negative rates across demographic groups. It notes that many facial recognition systems in the U.S. are trained on imbalanced datasets, leading to higher false positive rates for racial minorities and reinforcing patterns of scrutiny in marginalized communities. To address these challenges, the report recommends various measures, including federal legislation to limit potential harms and protect against misuse, training and certification for operators, standards for image quality and accuracy, and a risk management framework to mitigate the risks of facial recognition applications. The study, sponsored by the U.S. Department of Homeland Security and the Federal Bureau of Investigation, underscores the need for a balanced approach that recognizes the technology's value while proactively addressing its potential negative impacts.
Reference:
National Academies of Sciences, Engineering, and Medicine. (2024, January 17). Advances in facial recognition technology have outpaced laws, regulations; New report recommends federal government take action on privacy, equity, and civil liberties concerns. Retrieved from https://www.nationalacademies.org/news/2024/01/advances-in-facial-recognition-technology-have-outpaced-laws-regulations-new-report-recommends-federal-government-take-action-on-privacy-equity-and-civil-liberties-concerns
Global ransomware threat expected to rise with AI, NCSC warns
BY Royal-RAIS Editorial Team | PUBLISHED: January 24, 2024
The National Cyber Security Centre (NCSC), part of the UK's GCHQ, has issued a warning in a new report about the increasing threat of ransomware due to advancements in Artificial Intelligence (AI). The report highlights how AI is already being used in malicious cyber activities and predicts a significant rise in the volume and impact of cyber attacks, including ransomware, over the next two years. AI's role in cyber attacks is described as evolutionary, enhancing existing threats without transforming the risk landscape. It facilitates access and information-gathering for unskilled threat actors, thereby increasing the efficiency and targeting capabilities of ransomware attacks. The UK government has responded by investing £2.6 billion in its Cyber Security Strategy, focusing on improving national resilience and promoting secure AI system development.
The NCSC report also touches upon the 'ransomware-as-a-service' model, mirroring the growing commoditization of AI-enabled capabilities. The National Crime Agency (NCA) notes that despite advancements in AI, ransomware remains a prominent cyber crime method due to its lucrative nature. The NCA and NCSC emphasize the importance of adopting AI technology responsibly while mitigating its risks. They recommend following NCSC's ransomware and cyber security guidelines to strengthen defenses against these evolving threats. The upcoming CYBERUK 2024 event will further discuss securing future technologies and preparing for emerging threats.
Reference:
National Cyber Security Centre. (2024, January 24). Global ransomware threat expected to rise with AI, NCSC warns. Retrieved January 24, 2024, from https://www.ncsc.gov.uk/news/global-ransomware-threat-expected-to-rise-with-ai?utm_source=substack&utm_medium=email
Fake Joe Biden robocall tells New Hampshire Democrats not to vote Tuesday
BY Royal-RAIS Editorial Team | PUBLISHED: January 22, 2024
The NBC News article reports on a fake robocall impersonating President Joe Biden, which told New Hampshire Democrats not to vote in the presidential primary. The call, appearing to be a digitally manipulated imitation of Biden's voice, discouraged voting by suggesting it would aid Republican efforts to elect Donald Trump. The New Hampshire attorney general's office is investigating this as an "unlawful attempt" at voter suppression. The incident has raised concerns about the use of artificial intelligence in political campaigns and its potential impact on election integrity.
Reference:
Seitz-Wald, A., & Memoli, M. (2024, January 22). Fake Joe Biden robocall tells New Hampshire Democrats not to vote Tuesday. NBC News. Retrieved from https://www.nbcnews.com/politics/2024-election/fake-joe-biden-robocall-tells-new-hampshire-democrats-not-vote-tuesday-rcna134984
AI Powered Cybercrime Will Explode in 2024: CrowdStrike Executive
BY Royal-RAIS Editorial Team | PUBLISHED: January 22, 2024
The article from Decrypt discusses the significant rise in AI-powered cybercrime anticipated in 2024, as stated by CrowdStrike's Shawn Henry. It highlights the increasing use of AI in cyberattacks, including sophisticated deepfakes and misinformation, particularly in the context of the upcoming elections in various countries. The decentralization of U.S. voting systems is seen as a safeguard against widespread election interference, but concerns remain about AI's ability to empower less technically skilled cybercriminals. Global efforts to regulate AI misuse and the challenges of discerning authentic information online are also mentioned.
Reference:
Nelson, J. (2024, January 3). AI Powered Cybercrime Will Explode in 2024: CrowdStrike Executive. Decrypt. Retrieved from https://decrypt-co.cdn.ampproject.org/c/s/decrypt.co/211596/ai-powered-cybercrime-will-explode-in-2024-crowdstrike-executive?amp=1
FBI, DHS Warn U.S. Firms of Cyber Threats from Chinese Drones
BY Royal-RAIS Editorial Team | PUBLISHED: January 22, 2024
The Flying Magazine article discusses a report by the FBI and CISA, which warns U.S. firms about cybersecurity threats from Chinese drones, notably those from DJI. The report, "Cybersecurity Guidance: Chinese-Manufactured UAS," doesn't hold legal weight but suggests safeguards for critical infrastructure against potential unauthorized data access by the Chinese government. The concern is that China's legal framework allows it to control data collected by Chinese firms, posing risks to U.S. national security. The article also touches on the U.S. response, including legislative actions to curb the use of Chinese drones in critical sectors.
Reference:
Daleo, J. (2024, January 18). FBI, DHS Warn U.S. Firms of Cyber Threats from Chinese Drones. Flying Magazine. Retrieved from https://www.flyingmag.com/fbi-dhs-warn-u-s-firms-of-cyber-threats-from-chinese-drones/
Capturing AI’s potential needs a ‘two-way street’ between the feds and states, cities
BY Royal-RAIS Editorial Team | PUBLISHED: January 22, 2024
The article discusses the evolving role of artificial intelligence (AI) in government, emphasizing the need for collaboration between federal, state, and local governments. Deirdre Mulligan, principal deputy U.S. chief technology officer, highlighted this during the U.S. Conference of Mayors’ winter meeting, advocating for a "two-way street" approach in AI regulation and usage. The federal government is encouraged to learn from local AI policies, such as those in Boston and San Jose, while local governments can utilize federal guidelines like those from the Office of Management and Budget. 2023 was notable for AI advancements in various cities and states, with policies and executive orders focusing on AI's responsible use. San Jose, in particular, has been proactive, using AI for municipal services like pothole detection and real-time translation of city communications, and has established a dedicated AI unit for risk assessment and guidance on technology application.
The article also stresses the importance of balancing regulation and innovation in AI deployment. Mayors like San Jose’s Matt Mahan and Seattle’s Bruce Harrell called for collaboration in sharing AI best practices among governments, with Mahan promoting the Government AI Coalition. Beverly Hills Mayor Julian Gold pointed out that AI's current state is mainly driven by private enterprises, and there's a need for governments to keep pace without stifling innovation. The Biden administration's stance, as articulated by Mulligan, is not to "run fast and break things" but to collaborate effectively to address significant challenges. The article highlights the complexity of integrating AI into government operations and the need for careful, collaborative approaches to harness its full potential.
Reference:
Manatt, Phelps & Phillips, LLP. (2024, January 10). New York Announces Two Major Artificial Intelligence Initiatives. JD Supra. Retrieved from https://www.route-fifty.com/emerging-tech/2024/01/capturing-ais-potential-needs-two-way-street-between-feds-and-states-cities/393436/
New York Announces Two Major Artificial Intelligence Initiatives
BY Royal-RAIS Editorial Team | PUBLISHED: January 21, 2024
In January 2024, New York Governor Kathy Hochul, along with the New York Office of Information Technology Services (NY ITS), announced two significant artificial intelligence (AI) initiatives. The first is the establishment of the “Empire AI” consortium, formed as part of the Governor's 2024 State of the State proposal. This consortium, which includes prestigious institutions such as Columbia University, Cornell University, and New York University, aims to create a technological hub in upstate New York. It will focus on harnessing AI for the public interest and is backed by substantial funding, including $400 million from various sources. The second initiative is a new state IT policy, NYS-P24-001, mandating strict guidelines for the use of AI by state entities. This policy emphasizes human oversight, fairness, transparency, risk assessment, privacy, security, and intellectual property protection. It requires entities to conduct an AI system inventory within 180 days and mandates risk assessments in line with the National Institute of Standards and Technology's framework.
These initiatives have far-reaching implications. The “Empire AI” consortium is expected to attract innovation and high-paying AI jobs to New York. The new IT policy, with its immediate effect, requires organizations to quickly establish comprehensive AI compliance programs. This policy not only affects state agencies but also extends to local governments, contractors, and third parties working with state AI systems. Particularly in healthcare, there's an expectation of compliance with these guidelines, underscoring the growing concern about AI's potential bias in decision-making processes. Organizations are advised to form Data Governance Committees and review their AI usage to align with the NY ITS policy, ensuring a responsible and ethical implementation of AI technologies.
Reference:
Manatt, Phelps & Phillips, LLP. (2024, January 10). New York Announces Two Major Artificial Intelligence Initiatives. JD Supra. Retrieved from https://www.jdsupra.com/legalnews/new-york-announces-two-major-artificial-7059880/
NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems
BY Royal-RAIS Editorial Team | PUBLISHED: January 20, 2024
The National Institute of Standards and Technology (NIST) has released a new publication, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)", as part of its broader initiative to develop trustworthy artificial intelligence (AI). This document, a collaborative effort among government, academia, and industry, aims to assist AI developers and users in understanding and mitigating the variety of attacks that AI systems might face. The report highlights that AI systems, integrated into various aspects of modern society from autonomous vehicles to chatbots, are vulnerable to manipulative attacks due to their reliance on large datasets for training. These datasets, often sourced from public interactions and websites, can be corrupted by bad actors, leading to undesirable AI behavior. NIST's publication categorizes major types of attacks into evasion, poisoning, privacy, and abuse attacks, each with specific characteristics and objectives. Despite current mitigation strategies, the report acknowledges the inherent limitations and challenges in fully securing AI systems against these threats.
NIST computer scientist Apostol Vassilev and other authors emphasize the need for continuous development of better defenses against adversarial attacks on AI. The publication serves as a warning against over-reliance on AI systems, which, despite significant advancements, remain susceptible to spectacular failures due to unresolved theoretical issues in securing AI algorithms. The report's breakdown of attack subcategories and mitigation approaches is a step towards raising awareness among developers and organizations about the vulnerabilities of AI and machine learning technologies. The authors stress the importance of recognizing the limitations of existing defenses and the dangers of complacency in assuming complete security.
Reference:
National Institute of Standards and Technology. (2024, January 4). NIST identifies types of cyberattacks that manipulate behavior of AI systems. Retrieved from https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems
What’s in the AI Literacy Act, and how will it impact you?
BY Royal-RAIS Editorial Team | PUBLISHED: January 10, 2024
The article discusses the introduction of the bipartisan Artificial Intelligence (AI) Literacy Act by Rep. Lisa Blunt Rochester and Rep. Larry Bucshon, which aims to make AI literacy an essential component of digital literacy. The act proposes classifying AI as a necessary part of digital literacy and ensuring that AI literacy is accessible through public schools, colleges, universities, and libraries. The legislation focuses on several key areas, including skills development, competition to maintain technological competitiveness, hands-on education through grants, fostering digital equity, and accountability through annual reports. It recognizes the importance of engaging minority and rural communities and addresses the underrepresentation of certain demographic groups in AI-related fields. While the legislation primarily targets students and educators, it also emphasizes the significance of AI literacy in workforce development and its potential impact on bridging the digital divide.
Reference:
Quinn, H., & Rao, S. (2024, January 5). What’s in the AI Literacy Act, and how will it impact you? Technical.ly Civic News. https://technical.ly/civic-news/ai-literacy-act-explained-digital-literacy/
AI Advances Risk Facilitating Cyber Crime, Top US Officials Say
BY Royal-RAIS Editorial Team | PUBLISHED: January 10, 2024
The article discusses the growing concern among top U.S. law enforcement and intelligence officials regarding the potential risks associated with advances in artificial intelligence (AI). These officials believe that AI technology may facilitate cybercrimes such as hacking, scamming, and money laundering by reducing the technical expertise required to carry out such illicit activities. Rob Joyce, the director of cybersecurity at the National Security Agency, expressed his concern that less-skilled individuals are increasingly using AI to guide hacking operations, making them more effective and dangerous. Furthermore, the FBI has observed a rise in cyber intrusions due to AI's ability to lower technical barriers for cybercriminals. Additionally, the article highlights how AI can be used to generate convincing messages and "deepfake" images, potentially enabling criminals to open accounts at scale, thus undermining established financial controls and identity verification systems.
Reference:
Cohen, L. (2024, January 9). AI Advances Risk Facilitating Cyber Crime, Top US Officials Say. U.S. News & World Report. https://www.usnews.com/news/top-news/articles/2024-01-09/ai-advances-risk-facilitating-cyber-crime-top-us-officials-say
Microsoft announces AI key on Windows 11 PCs
BY Royal-RAIS Editorial Team | PUBLISHED: January 5, 2024
Microsoft has introduced a significant update to its keyboards by incorporating an artificial intelligence (AI) key for Windows 11 PCs. This new key provides users with access to Copilot, a powerful AI tool developed by Microsoft in collaboration with OpenAI. Copilot assists users with various tasks like searching, email composition, and image creation. Microsoft's integration of AI into products like Microsoft 365 and Bing search in 2023 demonstrates their commitment to enhancing the user experience. The introduction of this AI key represents a transformative moment in keyboard technology and is expected to simplify and amplify the overall user experience. These innovative keyboards will be featured on new products starting from February, with Microsoft showcasing them at the upcoming CES tech event.
Reference:
Rahman-Jones, I. (2024, January 4). Microsoft announces AI key on Windows 11 PCs. BBC News. https://www.bbc.com/news/technology-67881373
Microsoft announces AI key on Windows 11 PCs
BY Royal-RAIS Editorial Team | PUBLISHED: January 5, 2024
Microsoft has introduced a significant update to its keyboards by incorporating an artificial intelligence (AI) key for Windows 11 PCs. This new key provides users with access to Copilot, a powerful AI tool developed by Microsoft in collaboration with OpenAI. Copilot assists users with various tasks like searching, email composition, and image creation. Microsoft's integration of AI into products like Microsoft 365 and Bing search in 2023 demonstrates their commitment to enhancing the user experience. The introduction of this AI key represents a transformative moment in keyboard technology and is expected to simplify and amplify the overall user experience. These innovative keyboards will be featured on new products starting from February, with Microsoft showcasing them at the upcoming CES tech event.
Reference:
Rahman-Jones, I. (2024, January 4). Microsoft announces AI key on Windows 11 PCs. BBC News. https://www.bbc.com/news/technology-67881373
Drones in the air, cops downstairs: Inside the NYPD’s Times Square massive security detail on New Year’s Eve
BY Royal-RAIS Editorial Team | PUBLISHED: January 2, 2024
In preparation for New Year's Eve in Times Square, the New York Police Department (NYPD) deployed an extensive security operation, which included the use of drones for surveillance. This marked the first year that the NYPD utilized drone technology to monitor the safety of the more than 1 million revelers attending the event. The drones were equipped with high-quality cameras, allowing real-time video streaming to police headquarters and individual officers' smartphones. While the technology was considered a game changer, it was emphasized that the drones did not employ any artificial intelligence or facial recognition software. Instead, they provided a visual overview to detect anomalies or suspicious activities during the celebration.
In addition to drone surveillance, the NYPD also deployed a substantial number of transit police officers to ensure the safety of subway passengers. Chief of Transit Michael Kemper briefed approximately 300 officers on their roles, emphasizing the need to remain vigilant for anything suspicious. He highlighted the large network of video surveillance cameras within the subway system, with many cameras monitored in real-time by officers. The goal was to keep both New Yorkers and tourists safe throughout the New Year's Eve celebration in Times Square.
Reference:
Moses, D. (2023, December 31). Drones in the air, cops downstairs: Inside the NYPD’s Times Square massive security detail on New Year’s Eve. amNewYork. https://www.amny.com/news/nypd-times-square-security-new-year-eve/
Cybersecurity guru Mikko Hyppönen’s 5 most fearsome AI threats for 2024
BY Royal-RAIS Editorial Team | PUBLISHED: January 2, 2024
In an article published on January 1, 2024, cybersecurity expert Mikko Hyppönen discusses the five most significant AI-related cybersecurity threats for the year. Hyppönen, Chief Research Officer at WithSecure, believes that the AI revolution will have a substantial impact on the cybersecurity landscape. His top concerns for 2024 include the proliferation of deepfakes, with a 3,000% increase in deepfake fraud attempts in 2023. While deepfakes have not yet caused large-scale financial scams, their refinement and accessibility pose a growing threat. To mitigate this risk, Hyppönen suggests implementing safe words in video calls to verify sensitive information requests. Additionally, he highlights the rise of deep scams, which involve massive-scale scams driven by automation, potentially leading to a surge in phishing and other fraudulent activities. Hyppönen emphasizes the need for vigilance in the face of AI-enabled cyber threats and calls for proactive measures to address these challenges.
Reference:
Macaulay, T. (2024, January 1). Cybersecurity guru Mikko Hyppönen’s 5 most fearsome AI threats for 2024. The Next Web. https://thenextweb.com/news/mikko-hypponen-5-biggest-ai-cybersecurity-threats-2024
Sorry AI, only humans can invent things, UK supreme court rules
BY Royal-RAIS Editorial Team | PUBLISHED: January 2, 2024
In a landmark ruling, the UK supreme court has determined that artificial intelligence (AI) cannot be recognized as the inventor of a new idea or product in patent applications. The case originated in 2018 when Stephen Thaler, the founder of Imagination Engines, sought patents naming his AI machine DABUS as the inventor of a food container designed for robots and a flashing emergency warning light. Both the European Patent Office (EPO) and the United Kingdom Intellectual Property Office (UKIPO) rejected the application, asserting that an inventor must be a natural person. This ruling raises significant questions about the role of AI in intellectual property rights, as Thaler and the Artificial Inventor Project argue that designating AI systems as inventors would encourage businesses to invest in AI development and confidently patent the results. While the UK government's aspirations to establish itself as an AI superpower may require legislative intervention to address the issue, the decision underscores the need for further discussion on the recognition of AI's creative contributions.
Reference:
Geschwindt, S. (2023, December 21). Sorry AI, only humans can invent things, UK supreme court rules. The Next Web. https://thenextweb.com/news/ai-cant-invent-things-uk-supreme-court-rules?utm_source=linkedin&utm_medium=share&utm_campaign=article-share-button
Researchers Create Chatbot that Can Jailbreak Other Chatbots
BY Royal-RAIS Editorial Team | PUBLISHED: December 30, 2023
Researchers from Nanyang Technological University (NTU) in Singapore have developed an AI chatbot known as "Masterkey" designed to jailbreak other chatbots, including models like ChatGPT and Google Bard. These large language models (LLMs) are known for their ability to generate human-like responses but also have the potential to generate malicious content, misinformation, and other harmful outputs. The Masterkey bot uses reverse engineering techniques to understand how LLMs defend themselves against malicious queries and employs simple tactics like adding spaces between characters to confuse keyword scanners. By training an LLM with these techniques, the NTU team successfully compromised ChatGPT and Bard, highlighting the limitations of current AI security approaches. While the study has not yet undergone peer review, the researchers have alerted OpenAI and Google to the jailbreaking technique, and the AI could potentially be used to improve LLM security.
Reference:
Whitwam, R. (2023, December 28). Researchers Create Chatbot that Can Jailbreak Other Chatbots. ExtremeTech. https://www.extremetech.com/extreme/researchers-create-chatbot-that-can-jailbreak-other-chatbots
New AI model can predict human lifespan, researchers say. They want to make sure it's used for good
BY Royal-RAIS Editorial Team | PUBLISHED: December 25, 2023
Researchers have developed an artificial intelligence tool called "life2vec," powered by transformer models similar to those used in large language models like ChatGPT, to predict various aspects of a person's life based on sequences of life events, such as health history, education, job, and income. This AI tool was trained on a dataset comprising information from 6 million people in Denmark, provided by the Danish government. It has demonstrated impressive accuracy in predicting future events, including individual lifespans. However, the researchers emphasize that it should not be used for predictions on real individuals but rather as a foundation for further research and understanding societal trends. They highlight the importance of ethics in AI and the need for a human-centered approach to AI development to ensure that it aligns with societal values and regulations. The researchers view this model as a tool to better understand societal patterns and challenges and to promote transparency in AI development.
Reference:
Mello-Klein, C. (2023, December 24). New AI model can predict human lifespan, researchers say. They want to make sure it's used for good. Phys.org. https://phys-org.cdn.ampproject.org/c/s/phys.org/news/2023-12-ai-human-lifespan-good.amp
Google reportedly targeted people with 'dark skin' to improve facial recognition
BY Royal-RAIS Editorial Team | PUBLISHED: December 25, 2023
The article sheds light on Google's controversial attempt to improve its facial recognition algorithms by collecting data from individuals with darker skin tones, raising ethical concerns about data harvesting practices. Google employed subcontracted workers who targeted people with "darker skin tones" and individuals likely to be enticed by $5 gift cards, including homeless individuals and college students. These workers employed deceptive tactics to persuade subjects to agree to face scans, such as mischaracterizing the scan as a "selfie game" or "survey" and pressuring them to sign consent forms without proper disclosure. The project faced harsh condemnation from digital civil rights and racial justice advocates, highlighting concerns about algorithmic bias, data privacy, and the potential misuse of facial recognition technology.
Google defended the project, emphasizing that the face scan data collection aimed to improve the fairness and accuracy of the "face unlock feature" on its Pixel 4 phone. However, critics argued that the goal should not be to enhance the accuracy of an invasive technology that disproportionately misidentifies black and brown faces. They called for government regulation and legislative bans on facial recognition technology, citing the dangers it poses, particularly to marginalized communities.
Reference:
Wong, J. C. (2019, October 3). Google reportedly targeted people with 'dark skin' to improve facial recognition. The Guardian. https://www.theguardian.com/technology/2019/oct/03/google-data-harvesting-facial-recognition-people-of-color
Australia needs to face up to the dangers of facial recognition technology
BY Royal-RAIS Editorial Team | PUBLISHED: December 25, 2023
The article emphasizes the need for Australia to confront the risks associated with facial recognition technology and take immediate actions to suspend its use while establishing a regulatory framework. Over the years, Australia has witnessed a proliferation of surveillance cameras and facial recognition technology in public spaces, often without public consent. Both government agencies and private corporations have deployed these systems extensively, raising concerns about privacy and accountability. The technology's flaws, including frequent false positives and racial biases, have been highlighted, making it imperative to reevaluate its usage.
Prominent technology companies like Google, Microsoft, IBM, and even Amazon have recognized the risks and limitations of facial recognition technology and have withdrawn their systems or placed restrictions on their use. Social justice and human rights groups, along with these tech giants, are urging governments to ban facial recognition surveillance technology and establish regulations to protect individuals' privacy and prevent further systemic injustice. While some cities in Australia have taken steps to block the technology's use, there remains a lack of federal safeguards and accountability mechanisms. The article underscores the urgency of following the lead of cities both within Australia and abroad to halt facial recognition surveillance and engage in a transparent process to develop a regulatory framework that safeguards privacy and prevents abuse.
Reference:
Paris, D. (2020, August 7). Australia needs to face up to the dangers of facial recognition technology. The Guardian. https://www.theguardian.com/commentisfree/2020/aug/07/australia-needs-to-face-up-to-the-dangers-of-facial-recognition-technology
Microsoft limits access to facial recognition tool in AI ethics overhaul
BY Royal-RAIS Editorial Team | PUBLISHED: December 25, 2023
Microsoft is undergoing a significant overhaul of its artificial intelligence (AI) ethics policies, with a focus on responsible AI practices. As part of this initiative, the company has announced that it will no longer allow organizations to use its technology for activities such as inferring emotions, gender, or age using facial recognition technology. Microsoft's "responsible AI standard" is designed to prioritize people and their objectives in system design decisions. The company is implementing these high-level principles to bring about real changes in practice, including adjustments to existing features and the withdrawal of certain capabilities.
One notable example is Microsoft's Azure Face service, which offers facial recognition functionality used by companies like Uber for identity verification. Going forward, any company interested in using this service's facial recognition features will need to actively apply for access, even if they have already integrated it into their products, and demonstrate alignment with Microsoft's AI ethics standards, ensuring that these features benefit end users and society. Additionally, Microsoft is phasing out facial analysis technology that claims to infer emotional states and attributes like gender or age, due to concerns surrounding privacy, the lack of consensus on emotion definitions, and the inability to generalize the linkage between facial expression and emotional states across different use cases. Despite these restrictions, Microsoft will continue to use emotion recognition technology internally, particularly in accessibility tools such as Seeing AI, which assists users with visual impairments by verbally describing their surroundings.
Reference:
Hern, A. (2022, June 22). Microsoft limits access to facial recognition tool in AI ethics overhaul. The Guardian. https://www.theguardian.com/technology/2022/jun/22/microsoft-limits-access-to-facial-recognition-tool-in-ai-ethics-overhaul
Police to be able to run face recognition searches on 50m driving licence holders
BY Royal-RAIS Editorial Team | PUBLISHED: December 25, 2023
In a recent development, the UK government has quietly introduced a law change that grants police the authority to conduct facial recognition searches on a database comprising images of 50 million British driving license holders. This change, embedded within a new criminal justice bill, empowers law enforcement to identify individuals through their driving license records based on images collected from CCTV footage or shared on social media. Privacy advocates argue that this move effectively places every driver in the country under perpetual police surveillance, raising concerns about the erosion of privacy rights and the potential for discrimination, especially in cases where facial recognition technology has been shown to disproportionately affect black and Asian faces. While the government claims that the law is intended to clarify and regulate the use of driver data, critics emphasize the lack of transparency and public consultation surrounding these new powers, calling for more safeguards and oversight.
Reference:
Boffey, D. (2023, December 20). Police to be able to run face recognition searches on 50m driving licence holders. The Guardian. https://www.theguardian.com/technology/2023/dec/20/police-to-be-able-to-run-face-recognition-searches-on-50m-driving-licence-holders
‘Are you kidding, carjacking?’ – The problem with facial recognition in policing
BY Royal-RAIS Editorial Team | PUBLISHED: December 23, 2023
In an article titled "TechScape: 'Are you kidding, carjacking?' – The problem with facial recognition in policing," published on August 15, 2023, by The Guardian, the case of Porcha Woodruff, a pregnant Black woman falsely arrested on charges of carjacking and robbery due to a false identification by facial recognition technology, is highlighted. Woodruff's image was incorrectly matched to video footage of a woman involved in the carjacking, leading to her arrest. This incident is part of a series of false arrests related to facial recognition, with all six known cases involving Black individuals. Privacy experts and advocates have raised concerns about the technology's inability to accurately identify people of color and the associated privacy violations. Despite these issues, law enforcement agencies in the US and around the world continue to use facial recognition technology.
Woodruff's case has sparked renewed calls in the US for a complete ban on police and law enforcement use of facial recognition. While the Detroit police department implemented new limitations on facial recognition use after the lawsuit, activists argue that a complete ban is the only policy that can prevent false facial recognition arrests. Critics also emphasize that even if the technology were accurate, it would still pose privacy concerns and the potential for a vast surveillance network, infringing on individuals' rights in public spaces. The article underscores the need for more comprehensive regulations and the importance of addressing the biases inherent in facial recognition systems.
Reference:
Bhuiyan, J. (2023, August 15). TechScape: ‘Are you kidding, carjacking?’ – The problem with facial recognition in policing. The Guardian. https://www.theguardian.com/newsletters/2023/aug/15/techscape-facial-recognition-software-detroit-porcha-woodruff-black-people-ai
AI consciousness: scientists say we urgently need answers
BY Royal-RAIS Editorial Team | PUBLISHED: December 23, 2023
In an article titled "Rite Aid Facial Recognition Misidentified Black, Latino, and Asian People as 'Likely' Shoplifters," published on December 20, 2023, by The Guardian, it is reported that Rite Aid, a US drugstore chain, used facial recognition technology to identify shoppers previously deemed as potential shoplifters without their consent. The Federal Trade Commission (FTC) settled with Rite Aid over allegations that the system frequently misidentified individuals, particularly women and people from Black, Latino, or Asian backgrounds. The technology was employed in hundreds of Rite Aid stores from October 2012 to July 2020, alerting employees when individuals on its watchlist entered the store. This led to increased surveillance, banning customers from making purchases, and even publicly accusing them of previous crimes. As part of the settlement, Rite Aid is prohibited from using facial recognition technology in its stores for the next five years.
Reference:
Bhuiyan, J., & agencies. (2023, December 20). Rite Aid facial recognition misidentified Black, Latino and Asian people as ‘likely’ shoplifters. The Guardian. https://amp.theguardian.com/technology/2023/dec/20/rite-aid-shoplifting-facial-recognition-ftc-settlement
AI consciousness: scientists say we urgently need answers
BY Royal-RAIS Editorial Team | PUBLISHED: December 23, 2023
In the article titled "AI Consciousness: Scientists Say We Urgently Need Answers," published in Nature on December 21, 2023, the Association for Mathematical Consciousness Science (AMCS) highlights the growing concern regarding the potential consciousness of artificial intelligence (AI) systems. Researchers and experts in the field of consciousness science express uncertainty about whether AI systems could develop consciousness and emphasize the need for scientific investigations into the boundaries between conscious and unconscious AI. They argue that understanding AI consciousness is essential due to ethical, legal, and safety implications. For instance, questions arise about whether conscious AI systems should be granted rights, held accountable for wrongdoing, and whether they could experience suffering. The AMCS calls for increased funding to support research on AI consciousness, as well as the development of scientifically validated methods to assess consciousness in machines, to address these pressing concerns.
Reference:
Lenharo, M. (2023). AI consciousness: Scientists say we urgently need answers. Nature. https://doi.org/10.1038/d41586-023-04047-6
Police using drones to patrol malls during holiday shopping season
BY Royal-RAIS Editorial Team | PUBLISHED: December 19, 2023
The Sweetwater Police Department in South Florida has adopted high-tech security measures for the holiday shopping season, with a focus on enhancing safety at the busy Dolphin Mall. As part of their security strategy, the department has deployed advanced drones equipped with high-resolution cameras capable of tracking individuals and vehicles deemed suspicious. These drones can lock onto subjects and follow them until officers on the ground can intervene, ensuring swift responses to potential threats. Moreover, police cruisers are now equipped with license plate readers that can work in coordination with the drones to identify suspects, while K-9 units, motorcycle officers, and additional uniformed and undercover officers will be present to provide a heightened security presence. The measures are in response to concerns about car thefts and robberies during the holiday shopping season, with the department aiming to make it one of the safest holiday periods ever at Dolphin Mall (Hush, 2023).
Reference:
Hush, C. (2023, November 13). Sweetwater Police go high-tech with security during busy holiday shopping season. NBC 6 South Florida. https://www.nbcmiami.com/news/local/sweetwater-police-go-high-tech-with-security-during-busy-holiday-shopping-season/3158436/
Government report for the first time identifies AI as potential risk to financial stability
BY Royal-RAIS Editorial Team | PUBLISHED: December 19, 2023
In a recent report by Escobedo (2023), the Financial Stability Oversight Council (FSOC), comprising prominent figures like Treasury Secretary Janet Yellen and Federal Reserve Chair Jerome Powell, has identified artificial intelligence (AI) as a potential risk to the nation's financial stability. This marks the first acknowledgment of AI's impact on the financial sector's stability. The report highlights the operational risks associated with AI systems, including their reliance on large datasets and third-party vendors, which introduce concerns related to data controls, privacy, and cybersecurity. While AI has been increasingly utilized in the financial sector for pattern recognition and analysis, FSOC members, such as SEC Chair Gary Gensler, have cautioned about challenges in explainability, bias, and accuracy. Gensler also noted the potential misuse of AI by malicious actors to deceive markets, citing an incident where an AI-generated image falsely depicting an explosion near the Pentagon caused market disruption.
The report emphasizes the importance of managing the adoption of AI in the financial industry responsibly. Treasury Secretary Janet Yellen stressed the need to support responsible innovation while adhering to existing risk management principles and rules. The report also delves into other vulnerabilities in the financial system, such as poor risk management in banking institutions, high concentrations of commercial real estate loans, cybersecurity risks, climate-related financial risks, and the volatility of digital assets like cryptocurrencies. It recommends actions to address these concerns, including enhanced collaboration between government agencies and private firms to mitigate cyber risks and the development of frameworks for assessing climate risk. Additionally, the FSOC recommends congressional regulation of stablecoins and other crypto assets due to their price volatility. The report serves as a comprehensive overview of the multifaceted risks AI and other factors pose to the financial stability of the United States.
Reference:
Escobedo, R. (2023, December 15). Government report for the first time identifies AI as potential risk to financial stability. CBS News. https://www-cbsnews-com.cdn.ampproject.org/c/s/www.cbsnews.com/amp/news/regulators-identify-ai-risk-financial-stability/
AI helps detecting plastic in oceans
BY Royal-RAIS Editorial Team | PUBLISHED: December 19, 2023
The article titled "AI Helps Detecting Plastic in Oceans" reports on a collaborative research effort by EPFL and Wageningen University, resulting in the development of a more accurate artificial intelligence model for identifying floating plastics in satellite images. As plastic pollution in oceans continues to rise due to improper disposal and recycling of plastic waste, accurately detecting and removing plastics from marine environments becomes crucial. The research team used Sentinel-2 satellite images provided by the European Space Agency to develop their AI model, which estimates the likelihood of marine debris in the images. This enhanced AI model not only improves the precision of identifying plastic litter but also remains effective under challenging conditions, such as cloud cover and atmospheric haze. The model's accuracy in detecting plastics is vital for tracking and cleaning up marine debris, especially after rain and flood events that wash plastics into open waters.
Reference:
EPFL. (2023, November 30). AI Helps Detecting Plastic in Oceans. Wevolver. https://www.wevolver.com/article/ai-helps-detecting-plastic-in-oceans?fbclid=IwAR04a5fxP_3PWE45s2_UiYFSxkjOKnhz1sdT1cb5vUvFGKm6h8liS7OiKTU_aem_AbGUK9K6UqCYMNCtOCRr6UoS7S5diK4ODvcvwutC7Lg0oFqHs5C5ksOXoZGVFnIWJI4
Agricultural robots can help improve biodiversity
BY Royal-RAIS Editorial Team | PUBLISHED: December 19, 2023
The article titled "Agricultural Robots Can Help Improve Biodiversity" discusses the potential of small, autonomous robots to transform agriculture and enhance biodiversity. In Denmark, where a significant portion of land is dedicated to agriculture, monoculture practices have been prevalent, adversely affecting biodiversity. However, the article highlights a project called SAVA (Safe Autonomous Vehicles for Agriculture) led by Lazaros Nalpantidis from DTU Electro. The project aims to develop safe agricultural robots capable of working autonomously in smaller areas, allowing for the cultivation of multiple crop varieties on the same field. This shift away from monoculture practices can create more diverse habitats, reduce the need for pesticides, and ultimately contribute to improved biodiversity. While agricultural robots are still in development, they hold the potential to revolutionize farming practices and mitigate the negative impacts of monoculture on the environment.
Reference:
Bugge Moller, S. (2023, December 6). Agricultural Robots Can Help Improve Biodiversity. Wevolver. https://www.wevolver.com/article/agricultural-robots-can-help-improve-biodiversity?fbclid=IwAR17XQmtuOnLUd3N-TmxUNauHW6hN-DZXQYCdA5Ez1UMyaKYB1FEOTVJd1g_aem_AbGE24NDvLXKcQWAwrv4LWvUJUKgNqhAZqySY0epLtTLhvE10v8GKwT_CFhwImnTe2c
Who owns AI created content? The surprising answer and what to do about it
BY Royal-RAIS Editorial Team | PUBLISHED: December 19, 2023
The article titled "Who owns AI created content? The surprising answer and what to do about it" discusses the legal challenges surrounding AI-generated content and its ownership. It highlights the complexities arising from attempts to protect AI-generated material and protect human-generated works from being used in AI. The article notes that these challenges may hinder users of generative AI tools from protecting their creations and could potentially make them liable for infringement based on materials used to train the AI. The article explores a case where an artist faced issues with copyright registration for a graphic novel created using generative AI, leading to questions about the originality of AI-generated content. It also discusses potential steps to mitigate these risks when utilizing AI technology, emphasizing the need for smart governance and policies to ensure responsible use.
Reference:
Morriss, W. (2023, December 14). Who owns AI created content? The surprising answer and what to do about it. Westlaw Today. https://www.reuters.com/legal/legalindustry/who-owns-ai-created-content-surprising-answer-what-do-about-it-2023-12-14/
AI is powering a revolution in policing, at the Olympics and beyond
BY Royal-RAIS Editorial Team | PUBLISHED: December 19, 2023
In the city of Nice, France, AI-driven surveillance has transformed it into one of the most monitored cities in the country. With approximately 4,200 cameras deployed in public spaces, equipped with advanced features like thermal imaging and sensors, these cameras are integrated with AI technology in a command center. This AI can not only detect minor infractions, such as illegal parking, but also identify potentially suspicious activities, like unauthorized access to school buildings. Additionally, the city has experimented with highly accurate facial recognition software and real-time algorithms to monitor vehicle and pedestrian movements. The implementation of AI in policing is expanding globally, with the United States and the United Kingdom using AI for crime prevention, albeit facing privacy and ethical concerns. In preparation for the 2024 Olympics, France is embracing AI for security, but this approach encounters challenges in a region that champions AI regulation and digital privacy protections.
Reference:
Faiola, A. (2023, December 18). AI is powering a revolution in policing, at the Olympics and beyond. The Washington Post. https://www.washingtonpost.com/world/2023/12/18/ai-france-olympics-security/
‘Nudify’ Apps That Use AI to ‘Undress’ Women in Photos Are Soaring in Popularity
BY Royal-RAIS Editorial Team | PUBLISHED: December 19, 2023
The social network analysis company Graphika found that apps and websites that use artificial intelligence to undress women in photos are soaring in popularity. Many of these undressing, or “nudify,” services use popular social networks for marketing. For example, since the beginning of this year, the number of links advertising undressing apps increased more than 2,400% on social media, including on X and Reddit. The services use AI to recreate an image so that the person is nude. Many of the services only work on women. These apps are part of a worrying trend of non-consensual pornography being developed and distributed because of advances in artificial intelligence — a type of fabricated media known as deepfake pornography. The widespread use of this technology encounters significant legal and ethical challenges, primarily stemming from the fact that the images are frequently sourced from social media platforms and disseminated without the subject's consent, oversight, or awareness. The surge in its adoption aligns with the introduction of various open-source diffusion models, indicating artificial intelligence's ability to generate remarkably superior images compared to those produced just a few years ago, as highlighted by Graphika. Given that these models are freely accessible due to their open-source nature, the conclusion drawn is that merely pausing AI developments is insufficient; a comprehensive cessation is imperative.
Reference:
Murphy., M & Bloomberg (2023). 'Nudify'Apps that use AI to 'Undress' women in photos are soaring in popularity. Retrieved from https://time.com/6344068/nudify-apps-undress-photos-women-artificial-intelligence/
Extracting Training Data from ChatGPT
BY Royal-RAIS Editorial Team | PUBLISHED: December 16, 2023
In their article titled "Extracting Training Data from ChatGPT," Milad Nasr, Nicholas Carlini, and their team discuss a significant security vulnerability in ChatGPT, a widely-used language model. The authors demonstrate that it is possible to extract large amounts of ChatGPT's training data by employing a specific attack, highlighting the model's tendency to memorize and regurgitate training examples. This vulnerability poses potential privacy and security concerns, as sensitive information from the model's training data can be inadvertently disclosed. The authors emphasize the importance of testing not only aligned models but also base models, as alignment alone may not guarantee the absence of vulnerabilities. They argue that understanding and addressing these vulnerabilities in machine learning models are crucial steps toward ensuring their safety and reliability.
Reference:
Nasr, M., Carlini, N., Hayase, J., Jagielski, M., Cooper, A. F., Ippolito, D., Choquette-Choo, C. A., Wallace, E., Tramèr, F., & Lee, K. (2023). Extracting Training Data from ChatGPT. Royal Robotics and AI Security Newsletter. Retrieved from https://royal-robotics-ai.com/newsletters/extracting-training-data-from-chatgpt
Using AI In Cybersecurity: Exploring The Advantages And Risks
BY Royal-RAIS Editorial Team | PUBLISHED: December 14, 2023
The Forbes article explores the pivotal role of artificial intelligence (AI) in the realm of cybersecurity. It underscores how AI is instrumental in the fight against cybercrime, primarily by monitoring and analyzing behavior patterns, predicting potential outcomes of unusual activities, taking preventive measures, and continually enhancing its capabilities through machine learning. The piece underscores the numerous advantages of AI in cybersecurity, such as its capacity to minimize false alarms, automate repetitive tasks, and mitigate the scarcity of cybersecurity experts. However, it also acknowledges the challenges AI faces, including susceptibility to deception and the potential for human errors, while maintaining that the benefits of AI in cybersecurity outweigh the associated risks.
Reference:
Skwarczek, B. (2023, September 18). Using AI in Cybersecurity: Exploring the Advantages and Risks. Forbes Technology Council. https://www.forbes.com/sites/forbestechcouncil/2023/09/18/using-ai-in-cybersecurity-exploring-the-advantages-and-risks/?sh=1d1c08c229c7
AI being used for hacking and misinformation
BY Royal-RAIS Editorial Team | PUBLISHED: December 14, 2023
Sami Khoury, who leads the Canadian Centre for Cyber Security, has expressed concerns about AI being employed in cyberattacks. Specifically, AI is being used for activities like:
Crafting Convincing Phishing Emails: AI is employed to create phishing emails that appear highly convincing and are designed to trick recipients into revealing sensitive information or taking harmful actions.
Developing Malicious Code: Cybercriminals are using AI to develop malicious code, potentially for exploiting vulnerabilities in computer systems and carrying out cyberattacks.
Spreading Misinformation and Disinformation: AI is being used to generate and disseminate false or misleading information online, contributing to the spread of fake news and disinformation campaigns.
While the article does not provide specific examples or evidence of these activities, it underscores the urgent need for cybersecurity experts to address the risks associated with AI. Of particular concern are large language models (LLMs) like OpenAI's ChatGPT, which can generate realistic-sounding text and pose new challenges for identifying AI-generated content in cyber threats. The rapid advancement of AI technology presents a challenge, as its malicious potential evolves quickly, making it difficult for cybersecurity experts to stay ahead of emerging threats.
Reference:
Satter, R. (2023, July 20). Exclusive: AI being used for hacking and misinformation, top Canadian cyber official says. Reuters. https://www.reuters.com/technology/ai-being-used-hacking-misinfo-top-canadian-cyber-official-says-2023-07-20/
Google launches Gemini, the AI model it hopes will take down GPT-4
BY Royal-RAIS Editorial Team | PUBLISHED: December 13, 2023
Google has launched its latest large language model, Gemini, marking a significant step in the company's AI journey. Gemini comes in various versions, including Gemini Nano for Android devices, Gemini Pro for AI services, and Gemini Ultra for data centers and enterprise applications. Google plans to integrate Gemini into various products, including search, ad services, and more. The model's advantage lies in its ability to understand and interact with video and audio, making it a versatile AI system. Google claims that Gemini outperforms OpenAI's GPT-4 in 30 out of 32 benchmarks. The model is faster, cheaper to run, and represents a substantial improvement in AI capabilities.
Reference:
Pierce, D. (2023, December 6). Google Launches Gemini, the AI Model It Hopes Will Take Down GPT-4. The Verge. https://www.theverge.com/2023/12/6/23990466/google-gemini-llm-ai-model
Japanese tech giant Rakuten plans to launch proprietary AI model
BY Royal-RAIS Editorial Team | PUBLISHED: December 12, 2023
Japanese tech giant Rakuten is preparing to launch its proprietary artificial intelligence language model (LLM) in the near future, according to CEO Hiroshi "Mickey" Mikitani. The company boasts a diverse range of businesses, including banking, e-commerce, and telecommunications, which provides it with unique datasets to train its LLM. Rakuten intends to initially use the AI model internally to enhance operational efficiency and marketing by 20%. Eventually, the company plans to offer the model to third-party businesses for their use, similar to Amazon or Microsoft. While no specific timeline for the launch has been provided, an announcement related to the LLM is expected within the next couple of months. Japanese firms like Rakuten are striving to catch up with their U.S. and Chinese counterparts in the development of large language models.
Reference:
Kharpal, A. (2023, December 11). Rakuten plans to launch its own AI model within next 2 months: CEO. CNBC. https://www-cnbc-com.cdn.ampproject.org/c/s/www.cnbc.com/amp/2023/12/11/rakuten-plans-to-launch-its-own-ai-model-within-next-2-months-ceo.html
Meta Launches New Initiative To Establish AI Safety Regulations
BY Royal-RAIS Editorial Team | PUBLISHED: December 12, 2023
Meta has introduced a new initiative called "Purple Llama" aimed at establishing cybersecurity standards for large language models (LLMs) and generative AI tools. The project's goal is to create industry-wide benchmarks for LLM cybersecurity safety evaluations, in alignment with industry guidance and standards. This initiative responds to the White House's AI safety directive, which emphasizes the need to ensure AI systems' security and protect against AI-based manipulation. The Purple Llama project consists of two main components: "CyberSec Eval," which focuses on cybersecurity safety benchmarks, and "Llama Guard," a framework to safeguard against risky AI outputs. Meta is partnering with the AI Alliance, including Microsoft, AWS, Nvidia, and Google Cloud, to advance AI safety measures and rules, promoting industry collaboration in addressing AI-related risks.
Reference:
Hutchinson, A. (2023, December 7). Meta Launches New Initiative To Establish AI Safety Regulations. Social Media Today. https://www.socialmediatoday.com/news/meta-launches-new-initiative-establish-ai-safety-regulations/701918/
US appeals court proposes lawyers certify review of AI use in filings
BY Royal-RAIS Editorial Team | PUBLISHED: December 11, 2023
The 5th U.S. Circuit Court of Appeals in New Orleans has proposed a rule requiring lawyers to certify that they did not solely rely on artificial intelligence (AI) programs to draft legal briefs or that any AI-generated text was reviewed by humans for accuracy. This marks the first such proposed rule by any of the 13 federal appeals courts in the United States, aiming to regulate the use of generative AI tools, including OpenAI's ChatGPT, by attorneys appearing before the court. The proposed rule applies to lawyers and litigants, and failure to accurately certify compliance may result in sanctions and having filings stricken. The 5th Circuit is seeking public comments on this proposal until January 4, 2024.
Reference:
Raymond, N. (2023, November 22). US Appeals Court Proposes Lawyers Certify Review of AI Use in Filings. Reuters. https://www.reuters.com/legal/transactional/us-appeals-court-proposes-lawyers-certify-review-ai-use-filings-2023-11-22/
Toyota's AI Basketball Robot CUE5
BY Royal-RAIS Editorial Team | PUBLISHED: December 11, 2023
The article delves into a fascinating collaboration between Chris "Lethal Shooter" Matthews, a renowned basketball shooting coach, and Cue6, an advanced AI basketball-shooting robot developed by Toyota. In a meeting of human expertise and cutting-edge technology, Lethal Shooter teamed up with Cue6 in Japan to explore the intersection of basketball and artificial intelligence. Cue6, previously known for setting a Guinness World Record for consecutive free throws by a humanoid robot, displayed its remarkable shooting accuracy, thanks to its ability to analyze trajectories using multiple cameras and self-calibrate for precise shots.
Both Lethal Shooter and Cue6, masters of the art of shooting, embarked on various shooting challenges, including three-point shots from different angles and even using a modified 13-inch rim. The developer of Cue6, in an interview, discussed the robot's evolution and the challenges faced during its development. Despite its precision and accuracy, Cue6 revealed the limitations of robots in terms of flexibility and adaptability when compared to human basketball players. The future goals for Cue6 include enhancing its capabilities, such as making it capable of running and learning from human players.
In summary, the collaboration between Lethal Shooter and Cue6 sheds light on the intriguing blend of technology and sports, showcasing the incredible potential and challenges of AI in replicating and augmenting human skills, particularly in the realm of basketball.
Reference:
Hunter, R. (2023, July 18). Lethal Shooter and a Robot Shooter: Technology Meets Basketball. Red Bull. https://www.redbull.com/us-en/lethal-shooter-robot-cue-basketball
Europe reaches a deal on the world’s first comprehensive AI rules
BY Royal-RAIS Editorial Team | PUBLISHED: December 11, 2023
The European Union (EU) has successfully reached an agreement on the world's first comprehensive set of rules for artificial intelligence (AI). This landmark development allows for legal oversight of AI technology, which has the potential to significantly impact daily life and has raised concerns about potential risks to humanity.
Negotiators from the European Parliament and the EU's 27 member countries worked through contentious issues, including generative AI and the use of face recognition surveillance by law enforcement. The result is a tentative political agreement for the Artificial Intelligence Act, representing a significant step forward in regulating AI technology.
While this agreement marks a political victory for the EU, some civil society groups have expressed reservations, believing that the deal does not go far enough in safeguarding individuals from potential harm caused by AI systems. Technical details are expected to be ironed out in the coming weeks.
The European Parliament is scheduled to vote on the AI Act early next year, with the deal considered a formality. Once enacted, the law could impose substantial financial penalties for violations, with fines of up to 35 million euros ($38 million) or 7% of a company's global turnover.
Generative AI systems, such as OpenAI's ChatGPT, have been a focal point of concern, as they have demonstrated the ability to produce human-like content, raising issues related to jobs, privacy, copyright protection, and more.
This EU initiative is expected to set an example for other governments considering AI regulation. It may also influence AI companies to extend similar obligations outside the EU, as it is more efficient to maintain consistent standards across different markets.
The AI Act initially aimed to address AI functions based on their risk levels but was expanded to include foundation models that underpin general-purpose AI services like ChatGPT and Google's Bard chatbot. These models are trained on extensive data from the internet, giving them the ability to generate new content.
Notably, AI-powered face recognition surveillance systems were a contentious issue, with negotiations leading to compromises. While there is a ban on public use of face scanning and biometric identification systems, exemptions were made to allow law enforcement to use them in cases of serious crimes.
Despite the agreement, some rights groups are concerned about exemptions and loopholes in the AI Act, including a lack of protection for AI systems used in migration and border control and the option for developers to opt out of classifying their systems as high risk.
Reference:
Chan, K. (2023, December 8). Europe reaches a deal on the world’s first comprehensive AI rules. AP News. Retrieved from https://apnews.com/article/ai-act-europe-regulation-59466a4d8fd3597b04542ef25831322c
Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier
BY Royal-RAIS Editorial Team | PUBLISHED: December 11, 2023
Renowned security researcher Bruce Schneier has expressed concerns in an editorial for Slate about the potential for AI models to usher in a new era of mass spying. He warns that AI technology may enable companies and governments to automate the analysis and summarization of large volumes of conversation data, which could significantly lower the barriers to conducting spying activities that currently rely on human labor.
Schneier highlights that electronic surveillance has already transformed the modern era, with our digital footprints constantly tracked and analyzed for commercial purposes. However, he distinguishes between monitoring for commercial reasons and traditional spying activities, emphasizing that AI's capabilities could take economically inspired monitoring to a whole new level. Generative AI systems are becoming increasingly proficient at summarizing lengthy conversations and sifting through massive datasets, making spying not only more accessible but also more comprehensive. This shift toward AI-powered spying introduces the potential for analyzing the intent and context of interactions, moving from traditional digital surveillance to interpreting thoughts and discussions, which could have profound implications for personal privacy, corporate strategies, and governmental information gathering.
Reference:
Edwards, B. (2023, December 5). Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier. Ars Technica. Retrieved from https://arstechnica.com/information-technology/2023/12/due-to-ai-we-are-about-to-enter-the-era-of-mass-spying-says-bruce-schneier/
Taking flight against crime: Montgomery Co. police drone pilot program aides in arrests
BY Royal-RAIS Editorial Team | PUBLISHED: December 11, 2023
The Montgomery County Police Department in Maryland has introduced an innovative "Drone as First Responder" program, marking a significant milestone in the utilization of drone technology for law enforcement purposes. This pioneering initiative establishes the Montgomery County Police Department as the first major city police department to deploy drones as first responders in their efforts to combat crime. The program initially gained attention in a September report by ABC 7News and has since garnered increased interest, reflecting the growing recognition of drones' potential in enhancing public safety.
The primary goal of the "Drone as First Responder" program is to leverage drones for crime prevention and rapid response. These drones serve as agile units, swiftly deploying to incident scenes, emergencies, or areas requiring attention. This rapid deployment capability empowers law enforcement to proactively address criminal activities, assist in suspect apprehension, and enhance overall situational awareness. While the article does not delve into the technical specifics of the drones used, it underscores the forward-thinking approach of the Montgomery County Police Department in embracing drone technology to augment their public safety efforts, setting a precedent for the adoption of innovative tools in law enforcement.
Reference:
Gonzalez, J. (2023, December 1). Taking flight against crime: Montgomery Co. police drone pilot program aides in arrests. Retrieved from https://wjla.com/news/local/montgomery-county-maryland-police-drone-pilot-program-aides-in-arrests-crime-apprehension-unmanned-aircraft-officer-shortage
Fake Trump arrest photos: How to spot an AI-generated image
BY Royal-RAIS Editorial Team | PUBLISHED: December 5, 2023
The article discusses the proliferation of AI-generated fake images depicting Donald Trump's arrest on social media. These images are hyper-realistic but often contain subtle inconsistencies that give them away as fakes. For instance, discrepancies in body proportions, unnatural skin tones, and blurred features are telltale signs. The article advises readers to verify the authenticity of such images by checking reliable news sources and considering the motives of those sharing them. The experts cited in the article express concerns about the rapid evolution of synthetic content, making it increasingly challenging to distinguish between real and fake images, especially when they involve lesser-known individuals.
Reference:
Devlin, K., & Cheetham, J. (2023, March 24). Fake Trump arrest photos: How to spot an AI-generated image. BBC News. https://www.bbc.com/news/world-us-canada-65069316
Why cheap drones pose a significant chemical terrorism threat
BY Royal-RAIS Editorial Team | PUBLISHED: December 3, 2023
In this article by Zachary Kallenborn, published on November 21, 2023, the author discusses the growing threat posed by cheap drones in the context of chemical terrorism. The article highlights several key points:
Recent Arrest: The article begins by mentioning the arrest of Mohammad Al-Bared in the UK, who had designed a 3D-printed drone intended for delivering chemical weapons or explosives. This incident underscores the potential danger of drones in the hands of terrorists.
History of Drone Use: The author traces the history of terrorist organizations' interest in using drones, dating back to the Japanese doomsday cult Aum Shinrikyo's experiments in the early 1990s. Since then, drone technology has significantly improved and become more accessible.
Drones as Chemical Weapons Delivery Systems: Cheap drones can be adapted to deliver chemical weapons over crowded areas, posing a significant threat. Commercial agricultural drones, designed for spraying pesticides, are particularly well-suited for this purpose.
Use of Fentanyl Derivatives: While acquiring traditional chemical weapons may be challenging for terrorists, the article suggests that fentanyl derivatives could provide a more accessible alternative due to their availability on the black market.
Vulnerability of Chemical Facilities: The article highlights the vulnerability of chemical facilities to drone attacks. A terrorist could use a drone to drop a bomb on a chemical storage tank, potentially causing a catastrophic release of toxic chemicals.
Legislative and Security Measures: The author calls for legislative action to address this threat, including reauthorizing the Chemical Facility Anti-Terrorism Standards program and updating security standards for chemical facilities. The article also suggests requiring certification for the purchase of certain agricultural drones.
International Cooperation: The global community is urged to consider export controls and monitoring of high-capacity agricultural drones to prevent their misuse in chemical terrorism.
In conclusion, the article emphasizes the need for proactive measures to mitigate the risks associated with drones being used for chemical terrorism, as well as the importance of international cooperation in addressing this growing threat.
Reference:
Kallenborn, Z. (2023, November 21). Why Cheap Drones Pose a Significant Chemical Terrorism Threat. Bulletin of the Atomic Scientists. https://thebulletin.org/2023/11/why-cheap-drones-pose-a-significant-chemical-terrorism-threat/
What Can Copilot’s Earliest Users Teach Us About Generative AI at Work?
BY Royal-RAIS Editorial Team | PUBLISHED: December 3, 2023
The article delves into the impact of Copilot for Microsoft 365, a generative AI tool, on the modern workplace. Introduced eight months ago, Copilot was designed to reduce digital debt and enhance productivity, enabling individuals to focus on tasks that require uniquely human skills.
Key findings from the research include:
Productivity Gains: Copilot users reported substantial productivity gains, with 70% stating they were more productive, and 68% mentioning an improvement in the quality of their work.
Speed and Efficiency: Users were found to be 29% faster in tasks such as searching, writing, and summarizing when using Copilot. They could catch up on missed meetings nearly four times faster.
Email Management: Copilot helped users process emails more efficiently, with 64% saying it reduced the time spent on email-related tasks.
Creativity Enhancement: Many users found that Copilot jump-started their creative process, with 57% feeling more creative when using it. It also helped users generate ideas while writing.
Time Savings: On average, users reported daily time savings of 14 minutes, or 1.2 hours per week, by using Copilot.
Positive User Feedback: A significant majority of users (77%) expressed that they wouldn't want to give up Copilot, emphasizing its value in their work.
Beyond individual productivity gains, the article also explores the broader organizational impact of Copilot. It highlights its potential to enhance various functions within an organization, including sales, customer service, and cybersecurity.
For instance, in sales, Copilot helped salespeople save an average of 90 minutes per week and improve productivity, while in customer service, it led to a 12% reduction in the time spent resolving cases.
The article recognizes the urgent need for AI tools like Copilot to bridge the skills gap in cybersecurity, where there are 3 million unfilled positions worldwide. In a study, Copilot was shown to make security analysts 44% more accurate and 26% faster in their tasks.
Furthermore, the article emphasizes the importance of building a daily habit with Copilot and thinking like a manager to fully leverage its potential. It encourages users to reinvest their reclaimed time wisely to make a more significant impact on their organizations.
In conclusion, the article suggests that Copilot has the potential to fundamentally transform how people work by significantly improving productivity, creativity, and efficiency across various roles and functions within organizations. It highlights the need for users to embrace Copilot as a valuable assistant in their daily tasks.
Reference:
Microsoft. (2023, November 15). What Can Copilot’s Earliest Users Teach Us About Generative AI at Work? WorkLab. https://www.microsoft.com/en-us/worklab/work-trend-index/copilots-earliest-users-teach-us-about-generative-ai-at-work?ocid=FY24_soc_omc_br_li_CoPilotEarlyUser
AI in finance: Is it as good as it gets?
BY Royal-RAIS Editorial Team | PUBLISHED: December 3, 2023
The article discusses the impact of artificial intelligence (AI) on the finance industry, highlighting its potential and challenges. Despite the industry's cautious approach due to regulatory concerns, AI is already being used for automation and machine learning in financial institutions. The article features insights from John Duigenan, IBM's Global Leader in the Financial Services Industry, who emphasizes the significant role AI can play in enhancing employee efficiency, customer service, and business processes.
Duigenan also discusses the role of AI in dealing with the uncertainties brought by events like the pandemic and the importance of using existing technologies in new ways. The article touches on AI's role in fraud detection, with an emphasis on managing false positives. It concludes by emphasizing the growing importance of AI in transforming banking services into platform-based ecosystems, running on hybrid cloud technologies.
Reference:
Raj, A. (2023, November 15). AI in Finance: Is It as Good as It Gets? Tech Wire Asia. https://techwireasia.com/2023/11/ai-in-finance-is-it-as-good-as-it-gets/
Exploring AI for Law Enforcement: Insight from an emerging tech expert
BY Royal-RAIS Editorial Team | PUBLISHED: December 2, 2023
In this interview between Undersheriff Chris Hsiung and technology expert Frank Chen, they discuss the role of emerging technology and artificial intelligence (AI) in law enforcement. Key points from the interview include:
Benefits of Technology in Law Enforcement:
Technology can make police departments more efficient and effective in achieving their goals, including preventing and responding to crimes, maintaining public order and safety, building community relationships, and managing personnel.
Factors for Consideration:
Law enforcement executives should define their goals and success metrics when adopting new technology.
They should assess their risk tolerance and consider the experiences of other departments when adopting new technology.
Success with technology involves a combination of people, processes, and technology, not just the acquisition of the latest tools.
Silicon Valley Investors' Criteria:
Investors look for technologies with large market potential, rapid growth, strong business models, and ambitious founders who are resilient and adaptable.
Understanding Artificial Intelligence (AI):
AI is a field of computer science that focuses on creating intelligent agents capable of tasks like reasoning, learning, and autonomous decision-making.
AI has seen significant advancements, especially in recent years, due to increased data availability and computing power.
AI in Policing:
AI technologies like image recognition are already helping police departments solve crimes, such as identifying vehicles and suspects.
Predictive analytics, social media monitoring, and DNA analysis are also being used to enhance law enforcement efforts.
Generative AI:
Generative AI creates new content, including text, images, music, and video, by learning from existing data.
Chatbots and creator tools are examples of generative AI products.
This technology can assist officers with document-related tasks and potentially revolutionize how data in criminal cases is analyzed.
Cloud Computing:
Cloud computing is becoming increasingly popular due to its scalability, ease of use, and security features.
It allows organizations to access computing resources on-demand without the need for extensive on-premises infrastructure.
Future of AI in Policing:
AI is expected to play a significant role in policing, with personalized AI coaches for officers, augmented reality for patrol officers, AI assistants for detectives, and tools to assist police leadership and public information officers.
Overall, the interview highlights the potential for AI and emerging technologies to transform law enforcement by improving efficiency, data analysis, and communication while emphasizing the importance of careful consideration and thoughtful implementation.
Reference:
Hsiung, C., & Chen, F. (Year, Month Day). Exploring AI for Law Enforcement. Police Chief Magazine. URL: [insert URL]
Not RoboCop, but a new robot is patrolling New York's Times Square subway station
BY Royal-RAIS Editorial Team | PUBLISHED: December 2, 2023
A new fully autonomous outdoor security robot called the Knightscope 5 (K5) has started patrolling New York City's Times Square subway station during a two-month trial period. The K5 robot, developed by a tech company in Mountain View, California, is equipped with multiple cameras, including a 360-degree video capture capability and an infrared thermal camera. It can record video for emergency or crime situations but does not record audio or use facial recognition. The robot, standing at 5 feet 2 inches and weighing 400 pounds, has a top speed of 3 mph and requires periodic breaks for recharging. During the trial, it will be accompanied by a police officer to introduce the public to its functions. The city is leasing the K5 robot for approximately $9 per hour.
Reference:
Snider, M. (2023, September 23). Not RoboCop, but a new robot is patrolling New York's Times Square subway station. USA TODAY. https://www.usatoday.com/story/news/nation/2023/09/23/new-york-city-police-robot/70945317007/
Artificial Intelligence and Robotics for Law Enforcement
BY Royal-RAIS Editorial Team | PUBLISHED: December 1, 2023
The article discusses a report titled "Artificial Intelligence and Robotics for Law Enforcement," jointly published by the United Nations Interregional Crime and Justice Research Institute's (UNICRI) Centre for Artificial Intelligence (AI) and Robotics and the Innovation Centre of the International Criminal Police Organization (INTERPOL). The report summarizes key findings, challenges, and recommendations presented during the first INTERPOL - UNICRI Global Meeting on the Opportunities and Risks of Artificial Intelligence and Robotics for Law Enforcement, held in Singapore in July 2018. It examines the current and potential contributions of AI and robotics in policing, highlighting use cases already in practice. The report also addresses new threats and crimes related to the malicious use of AI and robotics, emphasizing the need for law enforcement to stay updated and ensure ethical, human rights-compliant, fair, accountable, transparent, and explainable use of these technologies. The article mentions upcoming discussions and meetings on the responsible integration of AI, particularly machine learning, in law enforcement.
Reference:
United Nations Interregional Crime and Justice Research Institute (UNICRI). (n.d.). Artificial Intelligence and Robotics for Law Enforcement. https://unicri.it/artificial-intelligence-and-robotics-law-enforcement
An Artificial Intelligence Strategy for NATO
BY Royal-RAIS Editorial Team | PUBLISHED: December 1, 2023
The article discusses NATO's adoption of an Artificial Intelligence (AI) Strategy in October 2021 and outlines its main features and objectives. It highlights the transformative impact of AI on international security, affecting military capabilities and hybrid threats. The strategy emphasizes the need for cooperation to mitigate security risks and leverage AI's potential. It underlines the importance of interoperability, responsible AI use, and common responses to AI-related security challenges. The article also details the Principles of Responsible Use for AI, emphasizing lawful, accountable, transparent, reliable, governable, and bias-mitigating AI applications. It discusses how these principles should be integrated into AI development and highlights NATO's role in operationalizing them. The article underscores the importance of ethical considerations, best practices, and standards in AI adoption, while also promoting collaboration with civilian innovators. It concludes by emphasizing the need for coordination across NATO and coherence with other emerging technologies.
Reference:
Stanley-Lockman, Z., & Christie, E. H. (2021, October 25). An Artificial Intelligence Strategy for NATO. NATO Review. https://www.nato.int/docu/review/articles/2021/10/25/an-artificial-intelligence-strategy-for-nato/index.html
A.I.-controlled killer drones become reality
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 29, 2023
The development and deployment of autonomous drones equipped with artificial intelligence (AI) by nations like the United States, China, and others are nearing reality, potentially reshaping warfare. These AI-controlled drones can make life-and-death decisions without human intervention. This advancement has raised significant concerns among various governments, prompting proposals at the United Nations to establish legally binding rules on lethal autonomous weapons. Despite the UN platform facilitating discussions, there's skepticism about achieving substantive new legally binding restrictions.
The debate over AI's risks has been highlighted recently due to disputes over the control of OpenAI and discussions between China and the United States about AI's role in nuclear weapon deployment. The urgency of this issue is compounded by AI advancements and the extensive use of drones in conflicts like Ukraine and the Middle East. Current drones generally require human operators for lethal missions, but evolving software will soon enable more autonomous target selection.
Pentagon officials have expressed intentions to deploy autonomous weapons extensively, with plans to introduce thousands of autonomous systems in the near future. The introduction of AI in weapons systems could transform warfighting by removing the direct human moral decision-making in taking lives. Critics argue that AI weapons may act unpredictably, increase the likelihood of lethal force use, and escalate conflicts more rapidly.
Arms control advocates and some national delegations propose various limitations, including a global ban on lethal autonomous weapons targeting humans and requirements for meaningful human control. However, a recent agreement in Geneva, under the influence of Russia and other major powers, extended the study of this topic until the end of 2025, indicating potential delays in implementing restrictions.
Reference:
Modern Diplomacy. (2023, November 26). A.I.-controlled killer drones become reality. Retrieved from https://moderndiplomacy.eu/2023/11/26/a-i-controlled-killer-drones-become-reality/
Minimizing Harms and Maximizing the Potential of Generative AI
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 28, 2023
Elham Tabassi of NIST discusses the potential and challenges of generative artificial intelligence (AI) technologies like ChatGPT, Bing Chat, and Bard. While these tools offer benefits such as trip planning or recipe suggestions, concerns about societal implications like job insecurity and misinformation are significant. NIST is working on a voluntary AI Risk Management Framework to guide technology companies in understanding the broader impacts of their AI products. This socio-technical approach aims to balance AI's benefits with potential negative consequences on individuals and society.
NIST's framework, developed with input from various stakeholders, focuses on trustworthy AI, aiming to minimize negative societal impacts. The agency is developing benchmarks and guidelines for testing AI technologies' trustworthiness, including areas like language model testing, cyber incident reporting, and authenticity verification for online content. These efforts seek to establish industry norms and best practices for AI development and deployment, ensuring AI technologies are beneficial and minimize potential harms.
Reference:
Tabassi, E. (2023, November 20). Minimizing Harms and Maximizing the Potential of Generative AI. NIST. Retrieved from https://www.nist.gov/blogs/taking-measure/minimizing-harms-and-maximizing-potential-generative-ai
AI bots are helping 911 dispatchers with their workload
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 28, 2023
AI technology is being implemented in several 911 dispatch centers across the United States to manage non-emergency calls. This initiative aims to alleviate the burden on dispatchers, especially during high-demand situations like storms, where reports of non-critical incidents can overwhelm the system. AI systems currently focus on non-emergency calls, allowing human dispatchers to concentrate on urgent matters. This adoption of AI in emergency call centers is partly due to staffing shortages and the need to address the mental health challenges faced by emergency responders.
The integration of AI, however, is met with concerns about potential biases and errors in the systems, which might result in overprescribing police responses or other mistakes. Presently, less than a dozen localities in seven states are utilizing or experimenting with AI in their 911 centers. Companies like Amazon Web Services and Carbyne are providing AI solutions for call centers. The technology has shown promise in reducing call volumes and improving efficiency, but the broader adoption of AI in public safety is cautious due to potential service disruptions and the need for more significant infrastructure improvements in the emergency response system.
Reference:
Stateline. (n.d.). AI bots are helping 911 dispatchers with their workload. Retrieved from https://chat.openai.com/c/1cd7ae25-2250-4a61-b23f-3a1262499a0c
China uses AI headbands to monitor students' concentration
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 26, 2023
In China, schools are employing AI-powered headbands to monitor students' concentration levels. These headbands change colors based on the wearer's focus: a red light indicates high concentration, while a blue light signifies distraction. The device tracks various student activities, such as the frequency of yawning, phone usage, and even their location within the school premises. This data is not only sent to the students' parents but is also believed to be shared with the government, raising concerns about privacy and surveillance.
Reference:
East Coast Radio. (2023, May 15). China uses AI headbands to monitor students' concentration. Retrieved from https://www.ecr.co.za/shows/carolofori/china-uses-ai-headbands-monitor-students-concentration/
Artificial Intelligence (Regulation) Bill [HL]
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 26, 2023
The Artificial Intelligence (Regulation) Bill [HL], introduced by Lord Holmes of Richmond, aims to establish comprehensive regulation for artificial intelligence (AI) in the United Kingdom. Key provisions of the bill include:
Creation of the AI Authority: The bill mandates the Secretary of State to establish the AI Authority, responsible for ensuring AI regulation across various sectors, analyzing regulatory gaps, coordinating legislative reviews, monitoring regulatory frameworks, and promoting international regulatory interoperability.
Regulatory Principles: The bill outlines principles for AI regulation focusing on safety, security, transparency, fairness, accountability, contestability, and redress. It emphasizes the need for AI and its applications to comply with equalities legislation, be inclusive, avoid discrimination, and cater to diverse groups.
Regulatory Sandboxes: The bill proposes the establishment of regulatory sandboxes to facilitate the testing of innovative AI applications in the market.
AI Responsible Officers: Businesses developing, deploying, or using AI must designate an AI officer responsible for ensuring ethical and non-discriminatory use of AI.
Transparency and IP Obligations: The bill requires those involved in AI training to record and assure the ethical use of third-party data and intellectual property. It also mandates clear customer warnings, consent mechanisms, and independent audits.
Public Engagement: The AI Authority is required to engage the public on AI's opportunities and risks and consult on effective public engagement frameworks.
Interpretation and Definitions: The bill defines key terms such as 'artificial intelligence', 'generative AI', and differentiates between AI developers and deployers.
Regulations and Enforcement: The bill sets out the procedures for making regulations, including the creation of offences and penalties, and requires parliamentary approval for significant regulatory changes.
Extent, Commencement, and Short Title: The bill applies to England, Wales, Scotland, and Northern Ireland and is set to commence upon passing, with the title 'Artificial Intelligence (Regulation) Act 2024'.
Reference:
United Kingdom Parliament. (2023). Artificial Intelligence (Regulation) Bill [HL]. HL Bill 11 58/4. Retrieved from chrome-extension://efaidnbmnnnibpcajpcglclefindmkaj/https://bills.parliament.uk/publications/53068/documents/4030
In a New York Minute: Rise of ‘AI experts’
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 26, 2023
The article "In a New York Minute: Rise of ‘AI experts’" from Ethically Aligned AI, observed at conferences a trend of individuals rebranding themselves as AI or generative AI experts, often sharing inaccurate or misleading information. The author, with experience transitioning from community radio to AI ethics, emphasizes that while it is acceptable to switch industries or be new to a field, true expertise is not solely defined by academic credentials. It involves a combination of education, professional experience, community involvement, and a genuine contribution to the field. Real experts acknowledge their limits and do not oversell their knowledge. The author, who focuses on ethics and responsible AI, highlights the value of diversity and the inclusion of non-technical perspectives in AI discussions. However, they caution against individuals who chase trends and superficially claim expertise, likening them to a chatbot that spreads misinformed opinions.
Reference:
Ethically Aligned AI. (n.d.). In a New York Minute: Rise of ‘AI experts’. Retrieved from https://www.ethicallyalignedai.com/post/in-a-new-york-minute-rise-of-ai-experts.
Courts Consider the Coming Risk of Deepfake Evidence
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 24, 2023
The article, written by Jule Pattison-Gordon and published on September 14, 2023, discusses the increasing concern within the judicial system regarding the potential risks posed by deepfake evidence in courtrooms. During the Court Technology Conference in Phoenix, experts highlighted the challenges deepfakes present, including the possibility of false evidence affecting cases and undermining public trust in the legal system. Deepfakes could lead to wrongful harm or the denial of justice. The difficulty in distinguishing deepfakes from real evidence raises concerns, especially as these technologies become more sophisticated. Courts currently rely on parties to authenticate evidence, but the rise of self-represented litigants may necessitate a shift in this approach. Solutions such as holding parties liable for manipulating evidence, hiring experts, or adopting deepfake detection systems are being considered. Additionally, efforts are underway to develop technologies for deepfake detection, like a system being piloted by Oakland University researchers. The article also mentions suggestions for mitigating deepfake risks, including digital watermarking by AI companies and metadata stamps on genuine media. Despite these concerns, generative AI has potential positive applications in legal contexts, such as aiding in legal research and administrative tasks.
Reference:
Pattison-Gordon, J. (2023, September 14). Courts consider the coming risk of deepfake evidence. Government Technology. Retrieved from https://www.govtech.com/products/courts-consider-the-coming-risks-of-deepfake-evidence
Cruise, GM’s robotaxi service, suspends all driverless operations nationwide
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 24, 2023
Cruise, the autonomous vehicle division of General Motors, has announced the suspension of its driverless operations across the United States. This decision comes shortly after the California Department of Motor Vehicles revoked Cruise's license due to safety concerns, particularly in San Francisco where Cruise had recently begun passenger transport. The suspension follows several incidents, including a pedestrian accident involving a Cruise robotaxi and complaints about its autonomous vehicles' behavior around pedestrians. The National Highway Traffic Safety Administration is also investigating Cruise for potential pedestrian safety issues. Despite this setback, Cruise aims to reflect on its operations and regain public trust. Meanwhile, General Motors has faced a financial impact, given its high revenue expectations from Cruise.
Reference:
Grantham-Philips, W. (2023, October 27). Cruise, GM’s robotaxi service, suspends all driverless operations nationwide. AP News. Retrieved from https://apnews.com/article/cruise-robotaxi-suspends-operations-gm-73f27ef959afe1e201e61f0fd31802d5
AI is about to completely change how you use computers (and upend the software industry)
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 23, 2023
Bill Gates, co-chair of the Bill & Melinda Gates Foundation, discusses the imminent transformative impact of Artificial Intelligence (AI) on software use and the software industry in his article published on November 12, 2023. Gates emphasizes that current software requires specific applications for different tasks, but with the advancement of AI, users will soon interact with their devices in everyday language to perform various tasks. He foresees the rise of AI agents, which respond to natural language and can execute multiple tasks based on user knowledge, marking a significant shift in computing since the transition from command-line to graphical user interfaces.
These AI agents, according to Gates, will act as personal assistants, offering nuanced and personalized assistance across applications and improving over time by learning user behavior and preferences. He envisions their impact across health care, education, productivity, entertainment, and shopping, democratizing services that are currently expensive or inaccessible to many. Gates acknowledges the technical and ethical challenges in realizing these sophisticated AI agents, including privacy concerns and the need for new data structures and interaction protocols. He also highlights the broader societal implications, urging thoughtful consideration and legislative action to address these emerging issues.
Reference:
Gates, B. (2023, November 12). AI is about to completely change how you use computers (and upend the software industry). LinkedIn. Retrieved from https://www.linkedin.com/pulse/ai-completely-change-how-you-use-computers-upend-software-bill-gates-brvsc/.
Thune, Klobuchar release bipartisan AI bill
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 20, 2023
Sens. John Thune (R-S.D.) and Amy Klobuchar (D-Minn.) introduced the "Artificial Intelligence Research, Innovation, and Accountability Act of 2023," a bipartisan bill aimed at establishing transparency and accountability standards for artificial intelligence (AI) tools. This legislation focuses on creating clear definitions for AI-related terms, especially those considered "critical-impact" and "high-impact," and sets requirements for AI systems.
Key features of the bill include a mandate for critical-impact AI organizations to self-certify compliance with established standards. The Commerce Department is tasked with developing a five-year plan for testing and certifying critical-impact AI, with regular updates required. Additionally, the National Institute of Standards and Technology (NIST) is directed to create standards for online content authenticity and provide recommendations for technical, risk-based guardrails on high-impact AI systems. Companies using high-impact AI systems must submit transparency reports to the Commerce Department.
Co-sponsored by Sens. Roger Wicker (R-Miss.), John Hickenlooper (D-Colo.), Shelley Moore Capito (R-W.Va.), and Ben Ray Luján (D-N.M.), the bill also clarifies the distinction between "developer" and "deployer" of AI systems. Thune emphasized AI's potential benefits across various industries and the need for consumer protection, innovation-friendly environments, and limited government intervention. Klobuchar highlighted the importance of safeguards in high-risk AI applications and improving transparency.
This bill follows the increasing popularity of generative AI technologies, like OpenAI's ChatGPT, and is part of ongoing legislative efforts to address AI-related risks and benefits, including Senate Majority Leader Chuck Schumer’s (D-N.Y.) AI Insight Forums.
Reference:
Klar, R. (2023, November 15). Thune, Klobuchar release bipartisan AI bill. The Hill. Retrieved from https://thehill.com/policy/technology/4311592-thune-klobuchar-release-bipartisan-ai-bill/
Walmart chases higher profits powered by warehouse robots and automated claws
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 20, 2023
Walmart is significantly increasing its use of automation within its supply chain to enhance productivity and profitability. As reported by Melissa Repko for CNBC on April 11, 2023, the retail giant revealed its strategy at an investor event, showcasing its first automated distribution center in Brooksville, Florida. This facility, spanning approximately 1.4 million square feet, uses advanced technology like automated claws and robots for inventory management and distribution.
Walmart plans to implement the same automation technology, developed by Symbotic, across all its 42 regional distribution centers. The company anticipates that by the end of January, about a third of its stores will receive supplies from these automated facilities. This move is part of Walmart's broader strategy to boost profits, with CEO Doug McMillon projecting revenue growth of about 4% annually over the next few years. Walmart expects profit growth to outpace sales, driven by automation and expansion in higher-margin businesses like advertising and delivery services.
While Walmart's workforce is set to remain roughly the same size, job roles will evolve, with a potential decrease in warehouse manual labor but an increase in online order delivery personnel. Walmart has not disclosed the total investment in these automation projects. However, Chief Financial Officer John David Rainey indicated that capital expenditures would be slightly higher than last year, focusing mainly on e-commerce, supply chain, and store investments.
Reference:
Repko, M. (2023, April 11). Walmart chases higher profits powered by warehouse robots and automated claws. CNBC. Retrieved from https://www.cnbc.com/2023/04/11/walmart-warehouse-automation-powers-higher-profits.html
Is Argentina the First A.I. Election?
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 18, 2023
The Argentine presidential election has become a notable example of how artificial intelligence (AI) is being utilized in political campaigns. Candidates Sergio Massa and Javier Milei are leveraging AI to create promotional images and videos, as well as to craft attacks against each other. An article by Jack Nicas and Lucía Cholakian Herrera, published in The New York Times on November 15, 2023, provides an in-depth look at this phenomenon.
Massa’s campaign has employed AI to enhance his image, depicting him in various heroic and strong roles, akin to figures in popular culture and historical references. In contrast, Milei’s campaign has used AI to portray Massa negatively, aligning him with unfavorable characters and scenarios. These AI-generated creations have become a significant part of their campaign strategies, reaching millions of viewers.
The utilization of AI in Argentina’s election illustrates the growing influence and potential risks of this technology in democratic processes. AI's ability to quickly produce convincing content raises concerns about disinformation and the blurring of lines between reality and fabrication in political discourse. The extensive use of AI in Argentina's election serves as a warning about the technology's potential impact on future elections globally.
Reference:
Nicas, J., & Cholakian Herrera, L. (2023, November 15). Is Argentina the First A.I. Election? The New York Times. Retrieved from https://www.nytimes.com/2023/11/15/world/americas/argentina-election-ai-milei-massa.html
Russian drone attack hits Ukraine infrastructure, causes power outage
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 18, 2023
Olena Harmash reports for Reuters on November 18, 2023, that Russia conducted a significant drone attack on Ukraine overnight, targeting infrastructure facilities and causing widespread power outages. The attack affected over 400 towns and villages across southern, southeastern, and northern regions of Ukraine. Ukrainian air defenses successfully intercepted 29 out of 38 Iranian-made Shahed drones launched from Russian territory. The assault, which lasted from 8 p.m. Friday to 4 a.m. Saturday, led to electricity disruptions in 416 settlements in the Odesa and Zaporizhzhia regions due to damaged networks.
This attack follows a pattern from the previous winter, where Russia's missile and drone strikes left millions in Ukraine without essential services. Ukrainian officials, anticipating colder temperatures, have warned of potential renewed attacks on infrastructure. Volodymyr Kudrytskiy, head of Ukrenergo, Ukraine's power grid operator, emphasized the need for vigilance and preparation.
Additional impacts of the attack included damage to an oil refinery in the Odesa region, an administrative building, and one civilian injury. In the Chernihiv region, near the border with Russia and Belarus, two infrastructure buildings were also damaged, and six settlements experienced power outages. Kyiv, the capital, was targeted in this attack, but all drones were shot down before reaching the city.
Reference:
Harmash, O. (2023, November 18). Russian drone attack hits Ukraine infrastructure, causes power outage. Reuters. Retrieved from https://www.reuters.com/world/europe/russia-launches-major-drone-attack-ukraine-infrastructure-hit-2023-11-18/
UK cybersecurity center says 'deepfakes' and other AI tools pose a threat to the next election
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 18, 2023
The UK’s National Cyber Security Center (NCSC), a division of GCHQ, has raised concerns about the potential threat posed by artificial intelligence (AI) to the country's next national election. In its annual review, the NCSC reported an increase in cyberattacks targeting critical national infrastructure such as power, water, and internet networks, largely attributed to state-aligned actors. These actors, particularly those sympathetic to Russia’s actions in Ukraine, are ideologically motivated and pose a significant threat to UK interests.
The NCSC has identified Russian-language criminals launching ransomware attacks on British firms, along with China state-affiliated cyber actors using their skills for strategic objectives that threaten UK security. The rise of China as a tech superpower is deemed an "epoch-defining challenge" for UK security, with concerns that China could dominate cyberspace if the UK doesn't enhance its resilience and capabilities.
Regarding election security, while the UK’s traditional paper-based voting system is less susceptible to direct hacking, the NCSC warned of the risks posed by advanced AI technologies, including deepfake videos and hyper-realistic bots. These tools could be used to spread disinformation during election campaigns, complicating efforts to maintain the integrity of the electoral process. The UK is expected to hold a national election by January 2025.
Reference:
The Associated Press. (2023, November 14). UK cybersecurity center says 'deepfakes' and other AI tools pose a threat to the next election. ABC News. Retrieved from https://abcnews.go.com/Technology/wireStory/uk-cybersecurity-center-deepfakes-ai-tools-pose-threat-104870340
AI-Powered Task Forces Tackle Online Child Exploitation
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 15, 2023
Nikki Davidson's article, published on November 14, 2023, highlights the critical role of artificial intelligence (AI) in combating online child exploitation. In response to a surge in cybercrimes targeting children and teens, public-private task forces comprising local, state, and federal agencies are utilizing AI and cloud storage to identify and stop online predators.
In 2022, the National Center for Missing and Exploited Children reported over 32 million cases of suspected child sexual exploitation, marking a significant increase from previous years. A primary concern is financial sextortion, where criminals coerce minors into sending explicit images and then extort them for more material or money.
Operation Light Shine, a nonprofit, plays a pivotal role in equipping regional Interagency Child Exploitation and Persons Trafficking (INTERCEPT) task forces. These teams use advanced technology and training to handle the vast amounts of digital evidence encountered in investigations.
Jim Cole, a former Homeland Security Investigations agent and current Chief of Law Enforcement Enterprise and Technology for Operation Light Shine, emphasizes the challenge of managing digital evidence, especially from social media platforms. The use of Pathfinder Labs technology and MongoDB's Atlas document database enables INTERCEPT teams to efficiently analyze massive datasets, leveraging AI to prioritize critical information and natural language processing to detect potential exploitation.
This combination of AI and cloud storage has been effective in aiding investigations and securing convictions, illustrating the significant impact of these technologies in public safety and child protection efforts.
Reference:
Davidson, N. (2023, November 14). AI-Powered Task Forces Tackle Online Child Exploitation. Government Technology. Retrieved from https://www.govtech.com/public-safety/ai-powered-task-forces-tackle-online-child-exploitation?_amp=true
CISA's Roadmap for AI
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 14, 2023
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has developed a comprehensive Roadmap for Artificial Intelligence (AI), aligning with the national AI strategy and adhering to the guidelines of Executive Order 14110, which emphasizes the safe and secure development and use of AI. This Roadmap is structured into five key lines of effort:
Responsibly Use AI to Support Mission: CISA plans to utilize AI-enabled tools to enhance cyber defense and support critical infrastructure missions, ensuring the use is responsible, ethical, and legal.
Assure AI Systems: The agency aims to promote secure AI software development and implementation across various stakeholders, including federal, state, and local governments, and the private sector, by developing best practices and guidance.
Protect Critical Infrastructure from Malicious Use of AI: CISA will work with other government agencies and industry partners to assess and mitigate AI threats to national critical infrastructure.
Collaborate and Communicate on Key AI Efforts: This involves contributing to domestic and international policy discussions on AI, advancing global AI security best practices, and supporting a comprehensive approach to AI policy within the Department of Homeland Security (DHS).
Expand AI Expertise in the Workforce: CISA is committed to educating its workforce on AI and recruiting individuals with expertise in AI. The agency emphasizes the importance of understanding the legal, ethical, and policy aspects of AI, in addition to technical knowledge.
The roadmap underscores CISA's role in ensuring AI systems are protected from cybersecurity threats and discouraging their malicious use, particularly in relation to critical infrastructure.
Reference:
Cybersecurity and Infrastructure Security Agency. (n.d.). Roadmap for Artificial Intelligence. CISA. Retrieved from https://www.cisa.gov/resources-tools/resources/roadmap-ai
Robotics’ Role in Public Safety
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 14, 2023
The article explores the evolving role of robotics in public safety, emphasizing how advancements in robotics technology, particularly the robot named 'Spot' by Boston Dynamics, are transforming the field. Traditional robots, which typically use tracks or wheels and have limited maneuverability and capabilities, are being surpassed by more advanced models like Spot. Spot's unique features include exceptional mobility, ability to navigate challenging terrain, and sophisticated sensory payloads.
Spot is highlighted for its capabilities in various public safety scenarios, such as investigating suspicious packages, handling chemical or radiological hazards, and responding to barricaded subjects. Its design allows it to operate in environments unsafe for humans, providing critical information and assessments from a distance. Spot's agility enables it to move through unstructured terrains, climb stairs, and avoid obstacles, making it highly effective in emergency situations.
Notably, Spot's advanced features include a 360-degree camera (Spot CAM+IR) for enhanced situational awareness, a manipulative arm with a camera, and the ability to carry various sensing payloads. These capabilities allow Spot to perform detailed thermal and visual investigations and communicate in two-way interactions. The article underscores Spot's role in improving safety procedures, reducing risks, and enhancing decision-making in public safety operations.
Reference:
Boston Dynamics. (n.d.). Robots' Role in Public Safety. Boston Dynamics. Retrieved from https://bostondynamics.com/resources/whitepaper/
Fraudsters Cloned Company Director’s Voice In $35 Million Heist, Police Find
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 14, 2023
In an article by Thomas Brewster, a major heist involving artificial intelligence (AI) and voice cloning technology is detailed. Cybercriminals used AI to clone the voice of a company director in the United Arab Emirates (U.A.E.) to authorize fraudulent transfers totaling $35 million. The fraud involved a branch manager of a Japanese company in Hong Kong, who received a call from what he believed was the voice of his company's director. This director supposedly instructed him to coordinate money transfers for a company acquisition with a lawyer named Martin Zelner, as confirmed by emails.
The U.A.E. is leading the investigation into this elaborate scheme, which involved at least 17 individuals and saw the stolen funds distributed to global bank accounts. This case is one of the few known instances where fraudsters have successfully used deep voice technology for financial crime. It underscores the growing concerns over AI's role in creating deep fake images and voices for cybercrime. The article also highlights the increasing sophistication of AI voice technologies and the challenges they pose to cybersecurity.
Reference:
Brewster, T. (2021, October 14). Huge bank fraud uses deep fake voice tech to steal millions. Forbes. Retrieved from https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=51f3ce097559
America’s secret asset against AI workforce takeover
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 12, 2023
Gary Officer's article, published on October 10, 2023, discusses the dual challenge facing America’s workforce: the rise of artificial intelligence (AI) and an aging workforce. As AI continues to reshape various job sectors, there is a growing concern over job displacement. However, Officer argues that older workers represent a crucial asset in this evolving landscape.
Despite facing age discrimination, older workers offer valuable experience, skills, and wisdom that AI cannot replicate. These include soft skills like leadership, communication, adaptability, and problem-solving abilities honed over years. In an AI-dominated age, businesses still require human-centric skills for tasks like relationship-building and mentorship, areas where older workers excel.
The article highlights that while AI may reorganize jobs, it can't replace the human skills older workers bring. Their contribution is not only beneficial for workplace dynamics but also positively impacts a company's innovation and profitability. Officer concludes that embracing the experience and skills of older workers is key to navigating the challenges of AI integration in the workforce and maintaining economic growth.
Reference:
Officer, G. (2023, October 10). America’s secret asset against AI workforce takeover. Fox News. Retrieved from https://www.foxnews.com/opinion/americas-secret-asset-against-ai-workforce-takeover
Psychiatrist used AI to create child porn, sentenced to 40 years in prison
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 12, 2023
David Tatum, a child psychiatrist from Charlotte, North Carolina, has been sentenced to 40 years in prison for creating child pornography using artificial intelligence (AI) and for secretly recording his 15-year-old cousin. Tatum, 41, used AI to modify pictures of his ex-girlfriends with sexually explicit images of minors obtained online. He altered images from school events to make them sexually explicit and viewed over 1,000 files of child pornography.
Tatum's criminal activities spanned from 2016 to 2021, during which he also recorded his cousin and other family members without their consent. He was convicted in May on counts of producing, transporting, and possessing child pornography, and will undergo 30 years of supervised release post-imprisonment. Tatum is also required to pay special assessments and register with the Sex Offender Registry Board after his release.
Reference:
Dorgan, M. (2023, November 11). Psychiatrist used AI to create child porn, sentenced to 40 years in prison. Fox News. Retrieved from https://www.foxnews.com/us/psychiatrist-used-ai-create-child-porn-sentenced-40-years-prison
Is ChatGPT writing your code? Watch out for malware
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 12, 2023
Lou Steinberg's article, published on November 6, 2023, addresses the potential risks associated with using generative AI tools like ChatGPT in software development. While these AI chatbots enhance productivity by assisting in code generation, language translation, and test case writing, they also pose a significant security risk. The primary concern is that these AI tools learn to code from vast amounts of open-source software, which may contain design errors, bugs, and malware.
Given the high volume of open-source contributions – with GitHub reporting over 400 million in 2022 alone – there's a considerable chance that AI models trained on these repositories may inherit and propagate these issues. When developers, especially those with limited expertise, use code generated by AI, they might not possess the skills to identify hidden vulnerabilities or malware in the code.
The article advises that companies must rigorously inspect and scan AI-generated code. It recommends using static behavioral scans and software composition analysis (SCA) rather than relying on traditional malware signature matching. Steinberg cautions against using the same AI for generating and testing high-risk code, likening it to "asking a fox to check the henhouse for foxes." He acknowledges the benefits of coding with generative AI but emphasizes the importance of the principle "Trust, but verify" to ensure code security.
Reference:
Steinberg, L. (2023, November 6). Is ChatGPT writing your code? Watch out for malware. InfoWorld. Retrieved from https://www.infoworld.com/article/3709191/is-chatgpt-writing-your-code-watch-out-for-malware.amp.html
Glossary - Artificial Intelligence, Council of Europe
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 12, 2023
The article from the Council of Europe provides a comprehensive glossary of key terms related to Artificial Intelligence (AI) and data science. It defines an array of concepts essential to understanding the AI landscape.
Key terms include:
Algorithm: A set of rules for solving problems or performing tasks, often used in automated processes like machine learning.
Artificial Intelligence (AI): Sciences and techniques aimed at replicating human cognitive abilities through machines, with a distinction between "strong" AI (fully autonomous) and "weak" or "moderate" AI (highly specialized).
Big Data: Large, diverse datasets from various sources.
Chatbot: AI-driven conversational agents used in various sectors.
Database: Storage systems for data that can be reprocessed to produce information.
Datamining: The analysis of large volumes of data to discover patterns and correlations.
Data Science: A field combining several disciplines to extract knowledge from diverse data sets.
Deep Learning: A subset of machine learning based on neural networks.
Machine Learning: A method of building mathematical models to make predictions or decisions based on data.
Metadata: Data that defines or contextualizes other data.
Neural Network: Algorithmic systems inspired by biological neurons, used in various applications like robotics and automated translation.
Open Data: Publicly available databases for non-commercial use under certain licenses.
Personal Data: Information related to an identified or identifiable individual.
Profiling: Processing personal data to evaluate certain aspects of a person's life.
Pseudonymisation: Processing personal data so it cannot be attributed to a specific individual without additional information.
These definitions are crucial for understanding the current state and potential applications of AI, addressing legal, ethical, and practical aspects.
Reference:
Council of Europe. (n.d.). Glossary. Artificial Intelligence - Council of Europe. Retrieved November 9, 2023, from https://www.coe.int/en/web/artificial-intelligence/glossary
Four Principles of Explainable Artificial Intelligence
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 11, 2023
The National Institute of Standards and Technology (NIST) has outlined four fundamental principles for explainable Artificial Intelligence (AI) systems. These principles were developed through engagement with the AI community and are part of NIST's broader effort to establish trust in AI technologies. The principles are:
Explanation: AI systems should deliver evidence or reasons for their outputs and processes.
Meaningful: Explanations provided by AI systems must be understandable to their intended users.
Explanation Accuracy: The explanations should accurately reflect the AI system's processes and the reasons behind its outputs.
Knowledge Limits: AI systems should operate within their designed conditions and confidence levels.
These principles recognize the importance of both process-based and outcome-based explanations and the varying needs of different AI users. The type and appropriateness of an explanation can depend on various factors, including regulatory requirements and user interactions. This work on explainable AI is part of NIST's broader AI portfolio, which focuses on creating trustworthy AI through standards, evaluation, validation, and verification. While acknowledging that explainability is just one aspect of AI trustworthiness, this framework serves as a roadmap for future AI measurements and evaluations.
Reference:
Phillips, P. J., Hahn, C. A., Fontana, P. C., Broniatowski, D. A., & Przybocki, M. A. (2020). Four principles of explainable artificial intelligence. Gaithersburg, Maryland, 18.
Offensive and Defensive AI: Let's Chat(GPT) About It
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 10, 2023
The article discusses the dual nature of ChatGPT as both a productivity tool and a potential security risk. ChatGPT, a rapidly growing generative AI chatbot, is highly capable in content creation, coding, education, and customer support, but it also poses security risks. Threat actors can exploit ChatGPT for activities like data exfiltration, misinformation, cyberattacks, and phishing. Conversely, defenders can use it to identify vulnerabilities and bolster security.
Key offensive uses of ChatGPT include finding and exploiting vulnerabilities, writing phishing emails, and identifying confidential files. In defense, ChatGPT can assist in learning new security terms and technologies, summarizing security reports, deciphering attacker code, predicting attack paths, researching threat actors, identifying code vulnerabilities, and reviewing log activities for suspicious activities.
The article, however, cautions about certain considerations when using ChatGPT. These include issues of copyright, data retention, privacy, bias, and accuracy. It notes that ChatGPT might not distinguish AI-generated content, which could have future implications in security. The article concludes by emphasizing the need to educate people on the responsible use of these tools.
Reference:
The Hacker News. (2023, November 7). Offensive and Defensive AI: Let's Chat(GPT) About It. The Hacker News. Retrieved from https://thehackernews.com/2023/11/offensive-and-defensive-ai-lets-chatgpt.html
OECD updates definition of Artificial Intelligence ‘to inform EU’s AI Act’
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 10, 2023
The Organization for Economic Co-operation and Development (OECD) updated its definition of Artificial Intelligence (AI) on November 8, 2023. This revised definition is more comprehensive and aligns with recent technological advances. It describes AI systems as machine-based systems that interpret inputs to generate various outputs, including predictions and decisions, with different levels of autonomy and adaptiveness post-deployment. This update aims for global consistency in AI definitions, enhances technical precision, and addresses the evolving nature of AI, such as content creation and learning capabilities. The new definition is expected to influence the European Union’s AI Act, a regulation focusing on AI's potential risks, which is in its final negotiation stages. However, as of now, no EU documents have been updated to reflect this change.
Reference:
Bertuzzi, L. (2023, November 9). OECD updates definition of Artificial Intelligence ‘to inform EU’s AI Act’. Euractiv. Retrieved from https://www.euractiv.com/section/artificial-intelligence/news/oecd-updates-definition-of-artificial-intelligence-to-inform-eus-ai-act/
US DOD's New AI and Data Strategy
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 7, 2023
The Pentagon unveiled a new AI and data strategy on November 3, 2023, prioritizing a structured approach to integrating advanced AI, with an emphasis on responsible use and ethical considerations. The framework, known as the "DOD AI Hierarchy of Needs," focuses on data integrity, governance, and responsible AI, updating the department's previous strategies from 2018 and 2020. Despite some commercial AI technologies not fully meeting the DOD's ethical standards, Deputy Secretary Hicks noted the beneficial potential of AI in numerous defense operations, overseen by Task Force Lima. This strategy supports the CJADC2 initiative for improved military communication and interoperability and aligns with the administration's goals for trustworthy AI development.
Reference:
Graham, E. (2023, November 3). DOD's New AI and Data Strategy Aims to Scale Adoption of New Technologies. Nextgov. Retrieved from https://www.nextgov.com/defense/2023/11/dods-new-ai-and-data-strategy-aims-scale-adoption-new-technologies/391777/
Europol Report Criminal Use of Deepfake Technology
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 4, 2023
Europol's Innovation Lab released a comprehensive report on April 28, 2022, analyzing the criminal applications of deepfake technology. The report covers the essence of deepfakes, their underlying technologies, particularly the use of generative adversarial networks (GANs), and the multifaceted impacts on crime and law enforcement. It highlights deepfakes' role in activities such as harassment, fraud, pornography, and disinformation campaigns, raising alarm over its contribution to crime as a service (CaaS).
The advancement of deepfakes poses unique challenges for law enforcement, requiring more sophisticated methods for assessment and detection. It stresses the urgency for regulatory adaptation, suggesting that policies, laws, and law enforcement practices must evolve to manage the threats posed by this technology. The report calls for concerted efforts by policymakers, online service providers, and law enforcement to enhance their detection capabilities and to establish robust legal frameworks to govern the use and abuse of deepfake technology.
Reference:
Riehle, C. (2022). Europol Report Criminal Use of Deepfake Technology. Europol's Innovation Lab. Retrieved from https://eucrim.eu/news/europol-report-criminal-use-of-deepfake-technology/
Incident of the AI-generated fake nudes sparks police investigation at New Jersey high school
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 4, 2023
In October, an unsettling incident at Westfield High School in New Jersey came to light when male students were found using AI to create and distribute fake nude photos of their female classmates. The school administration, upon discovery, believed the images were deleted and no longer shared among students. The full extent of the students affected and the disciplinary actions taken are undisclosed due to confidentiality policies.
The legality of the students' actions is uncertain as there is no federal law directly prohibiting the creation of such images. Nevertheless, President Biden recently signed an executive order calling for legal protections against AI-generated sexual abuse material and non-consensual intimate imagery. While some states like Virginia and California have laws against the distribution of fake pornography, New Jersey is now considering similar legislation.
New Jersey State Senator Jon Bramnick is reviewing existing laws and may propose new ones to criminalize the creation and sharing of AI-generated nude images. The current situation is being investigated by state police, with local authorities urging victims to come forward.
The impact of the incident has left many female students uncomfortable and concerned about their future privacy and reputation. This case underscores the potential harms of AI technology and highlights the urgent need for responsible usage and legal frameworks to protect individuals from such digital abuse.
Reference:
Belanger, A. (2023, November 2). Teen boys use AI to make fake nudes of classmates, sparking police probe. Ars Technica. Retrieved from https://arstechnica.com/tech-policy/2023/11/deepfake-nudes-of-high-schoolers-spark-police-probe-in-nj/
AI-generated nude images of girls at NJ high school trigger police probe: ‘I am terrified’
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 3, 2023
A disturbing incident at Westfield High School in New Jersey has caught the attention of parents, administrators, and law enforcement. Male students reportedly created and shared AI-generated pornographic images of female classmates, causing considerable uproar. Some of these images were allegedly pointed out to the victims by school staff. The local police department is investigating the matter, though the specific AI tool used remains unidentified. Concerned parents, including Dorota Mani, have expressed fear over the potential long-term impact of these 'deepfakes' on their children's futures. The prevalence of pornographic deepfakes, which often misuse celebrities' likenesses, has been highlighted by visual threat intelligence company Sensity, noting that more than 90% of such content is of a pornographic nature.
The issue is compounded by the lack of effective safeguards on the internet to prevent the generation and dissemination of such content, despite actions by companies like Snap to ban and report these images. The principal of Westfield High School, Mary Asfendis, acknowledged the seriousness of the situation and the need for educating students about the responsible use of technology. The broader context includes recent viral deepfake images of public figures and a new executive order by President Biden to regulate AI development, particularly to prevent the generation of child sexual abuse material and non-consensual intimate imagery. States such as Virginia, California, Minnesota, and New York have legislated against the distribution of AI-generated porn or permitted civil lawsuits by victims.
Reference:
Chang, A. (2023, November 2). AI-generated nudes of girls at NJ high school trigger police probe. New York Post. https://nypost.com/2023/11/02/news/ai-generated-nudes-of-girls-at-nj-high-school-trigger-police-probe/
UK Government's AI Safety Summit: Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 2, 2023
The United Kingdom has successfully led a historic assembly of 28 countries, including major AI nations such as the US, the EU, and China, to sign the Bletchley Declaration on the safe and responsible development of frontier AI technologies. This world-first agreement, forged at Bletchley Park, aims to establish a shared understanding of both the opportunities and challenges presented by advanced AI systems.
Key highlights of the declaration include:
An agreed upon urgency to comprehend and collaboratively manage potential AI risks.
A commitment to international cooperation for the benefit of the global community.
An acknowledgment of substantial risks stemming from AI, such as cybersecurity, biotechnology, and disinformation dangers.
The initiation of a global effort to ensure the responsible development of AI.
The launch of the world’s first AI Safety Institute in the UK to spearhead research and manage risks while leveraging AI's benefits.
Plans for ongoing international dialogues, with the Republic of Korea set to co-host a virtual summit and France to host the next in-person summit.
UK Prime Minister Rishi Sunak praised the agreement as a landmark achievement, emphasizing the shared global responsibility to navigate AI risks. Technology Secretary Michelle Donelan and Foreign Secretary James Cleverly highlighted the imperative of international collaboration to ensure AI's safe development. French and Korean spokespersons expressed commitment to the continued international effort.
The summit was marked by a virtual address from His Majesty The King, who underscored the necessity of making AI a force for good through international coordination.
Reference:
Prime Minister's Office. (2023, November 1). Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration. GOV.UK. https://www.gov.uk/government/news/countries-agree-to-safe-and-responsible-development-of-frontier-ai-in-landmark-bletchley-declaration
President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
BY Royal-RAIS Editorial Team | PUBLISHED: NOVEMBER 2, 2023
Purpose of the Order:
The Executive Order aims to position the United States as a leader in AI, harnessing its benefits and mitigating its risks.
It seeks to establish new standards for AI safety and security, protect privacy, advance equity and civil rights, promote innovation and competition, and reinforce American leadership globally.
Key Directives of the Order:
AI Safety and Security Standards:
AI developers must report safety test results to the U.S. government, especially when AI poses a national security risk.
The National Institute of Standards and Technology (NIST) will create rigorous standards for AI, including extensive red-team testing for safety.
The Department of Homeland Security will establish an AI Safety and Security Board.
New standards will also address AI's potential to contribute to biological threats and cyber vulnerabilities.
Privacy Protections:
Calls for bipartisan data privacy legislation to protect Americans' privacy, particularly from AI risks.
Encourages the development of privacy-preserving AI techniques.
Federal agencies to strengthen privacy guidance and evaluate the use of commercially available information.
Equity and Civil Rights:
Guidance to prevent AI algorithms from exacerbating discrimination in housing, healthcare, and justice.
Coordination to address algorithmic discrimination and develop best practices for AI's fair use in the criminal justice system.
Consumer, Patient, and Student Safeguards:
Directives to ensure AI benefits consumers without causing harm.
Initiatives to use AI responsibly in healthcare and education.
Support for Workers:
Development of principles to maximize AI's benefits for workers while mitigating harms such as job displacement.
Studies on AI's impact on labor and strategies to support affected workers.
Promoting Innovation and Competition:
Support for AI research through national resources and funding.
Measures to maintain a competitive AI ecosystem and facilitate the entry of skilled immigrants in AI fields.
Global Leadership:
Collaboration with other nations on AI safety and benefits.
International engagements to establish frameworks for AI development and use.
Government Use of AI:
Guidance for responsible AI use in government, including standards for procurement and deployment.
Initiatives to recruit AI professionals into the federal workforce and train current employees.
Implementation:
The actions are designed to complement international efforts, like the G-7 Hiroshima Process and discussions at the United Nations.
The Administration will continue to work with Congress on legislation for responsible AI innovation.
Reference:
The White House. (2023, October 30). Fact sheet: President Biden issues Executive Order on safe, secure, and trustworthy artificial intelligence. The White House. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/