How Can AI Be Used Safely? Expert Researchers Weigh In

An important focus of AI research is improving AI systems' factuality and trustworthiness. Even though significant progress has been made in these areas, some AI experts are pessimistic that the issues will be solved in the near future.

That is one of the main findings of a new report from the Association for the Advancement of Artificial Intelligence (AAAI), which includes insights from experts at various academic institutions (e.g., MIT, Harvard, and the University of Oxford) and tech giants (e.g., Microsoft and IBM). The goal of the study was to define the current trends and the research challenges involved in making AI more capable and reliable so the technology can be used safely, wrote AAAI President Francesca Rossi. The report covers 17 topics related to AI research, culled by a group of 24 "very diverse" and experienced AI researchers, along with 475 respondents from the AAAI community, she noted. Here are highlights from the report.

Improving an AI system's trustworthiness and factuality

An AI system is considered factual if it doesn't output false statements, and its trustworthiness can be improved by including criteria "such as human understandability, robustness, and the incorporation of human values," the report's authors stated. Other criteria to consider are fine-tuning and verifying machine outputs, and replacing complex models with simple, understandable ones.

Making AI more ethical and safer

As AI becomes more popular, greater responsibility for AI systems is required, according to the report. For example, emerging threats such as AI-driven cybercrime and autonomous weapons require immediate attention, along with the ethical implications of new AI techniques.

Among the most pressing ethical challenges, respondents' top concerns were:

Misinformation (75%)
Privacy (58.75%)
Responsibility (49.38%)

This indicates that more transparency, accountability, and explainability are needed in AI systems, and that ethical and safety concerns should be addressed through interdisciplinary collaboration, continuous oversight, and clearer assignment of responsibility. Respondents also cited political and structural barriers, "with concerns that meaningful progress may be hindered by governance and ideological divides."

Evaluating AI using various factors

The researchers make the case that AI systems introduce "unique evaluation challenges." Current evaluation approaches focus on benchmark testing, but they said more attention needs to be paid to usability, transparency, and adherence to ethical guidelines.

Implementing AI agents introduces challenges

AI agents have evolved from autonomous problem-solvers to AI frameworks that enhance adaptability, scalability, and cooperation. Yet the researchers found that the introduction of agentic AI, while providing flexible decision-making, has brought challenges around efficiency and complexity. The report's authors state that integrating AI with generative models "requires balancing adaptability, transparency, and computational feasibility in multi-agent environments."

More aspects of AI research

Some of the other AI research topics covered in the AAAI report include sustainability, artificial general intelligence, social good, hardware, and geopolitical aspects. source

How Can AI Be Used Safely? Expert Researchers Weigh In Read More »

House GOP Infighting Delays Push To Repeal 2 CFPB Rules

By Jon Hill (April 1, 2025, 5:04 PM EDT) — Plans for the U.S. House to vote on overturning two Biden-era Consumer Financial Protection Bureau rules were scuttled Tuesday by an unrelated fight among Republicans about whether to allow proxy voting for lawmakers with infant children….

Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas and jurisdictions. A Law360 subscription includes features such as:

Daily newsletters
Expert analysis
Mobile app
Advanced search
Judge information
Real-time alerts
450K+ searchable archived articles
And more!

Experience Law360 today with a free 7-day trial. source

House GOP Infighting Delays Push To Repeal 2 CFPB Rules Read More »

Adobe Beats Class Action Over Alleged Competitive Threats

By Katryna Perera (March 28, 2025, 8:48 PM EDT) — A New York federal judge has tossed a securities class action against Adobe Inc. alleging that the software company and its top brass misled shareholders about the competitive threat Adobe's products faced from a user experience design tool developed by another company, saying the investors have failed to plead any actionable misstatements or knowledge of wrongdoing…. source

Adobe Beats Class Action Over Alleged Competitive Threats Read More »

OpenAI Secures $40B in Historic Funding Round — But There’s a $10B Catch

OpenAI, the company behind ChatGPT, has just completed the largest private tech funding round in history, raising $40 billion at a $300 billion valuation. The deal, led by Japan's SoftBank with backing from Microsoft and other investors, solidifies OpenAI as one of the world's most valuable private companies, trailing only SpaceX and rivaling TikTok's parent company, ByteDance.

A historic deal in tech funding

The scale of this funding round is unprecedented. Before OpenAI, the largest private tech deal was Ant Group's $14 billion raise in 2018. This new funding more than doubles that record and highlights surging investor enthusiasm for artificial intelligence. SoftBank is leading the charge with a $30 billion commitment, with the rest coming from Microsoft, Coatue, Altimeter, and Thrive. OpenAI has positioned itself as the leader in generative AI, with its flagship product, ChatGPT, now boasting 500 million weekly users, up from 400 million just last month. That rapid adoption has made the company a prime target for investors looking to stake a claim in the AI boom.

Where will the money go?

OpenAI says the fresh capital will help push the boundaries of AI research, expand its computing infrastructure, and accelerate the development of artificial general intelligence (AGI). A significant portion, reportedly around $18 billion, is earmarked for OpenAI's Stargate project, a $500 billion initiative with SoftBank and Oracle to build next-generation AI data centers. However, there's a catch: the deal includes a clause requiring OpenAI to transition into a fully for-profit company by the end of 2025. If it fails to do so, the funding could be slashed by as much as $10 billion. This restructuring plan has drawn scrutiny, with concerns about the company's unique nonprofit-to-capped-profit hybrid model and potential regulatory challenges.

Why are investors betting so big?

This massive funding round comes amid a surge in AI adoption across industries. Since ChatGPT launched in late 2022, OpenAI has become a household name, influencing how businesses and individuals use AI. CEO Sam Altman reflected on its rapid rise in a post on X: "The [ChatGPT] launch 26 months ago was one of the craziest viral moments I'd ever seen, and we added one million users in five days. We added one million users in the last hour."

The company's revenue is also skyrocketing. OpenAI expects to generate $12.7 billion in 2025, a massive leap from $3.7 billion last year. Despite its rapid growth, OpenAI is still a cash-hungry operation. Insiders say profitability is still a long way off; estimates suggest OpenAI may not be cash-flow positive until 2029, when it expects to generate $125 billion in revenue. Altman also hinted at the company's next big move: an open-weight language model with advanced reasoning capabilities, set to launch in the coming months.

The race to AI supremacy

Despite its dominance in the AI space, OpenAI faces stiff competition from rivals like Google's DeepMind, Amazon, Perplexity, and Anthropic. The AI industry is expected to generate more than $1 trillion in revenue within the next decade, and every major tech player is racing to lead the charge. With this record-breaking investment, OpenAI is well positioned to shape the future of AI. But whether it can navigate regulatory hurdles, restructuring challenges, and increasing competition remains to be seen. source

OpenAI Secures $40B in Historic Funding Round — But There’s a $10B Catch Read More »

Davis Polk, Skadden Lead Stablecoin Issuer Circle's IPO Filing

By Tom Zanki (April 2, 2025, 3:42 PM EDT) — Venture-backed stablecoin issuer Circle Internet Group Inc. is moving forward with its long-awaited initial public offering amid expectations of favorable regulatory policies for crypto firms, represented by Davis Polk & Wardwell LLP and underwriters' counsel Skadden Arps Slate Meagher & Flom LLP…. source

Davis Polk, Skadden Lead Stablecoin Issuer Circle's IPO Filing Read More »

The tool integration problem that’s holding back enterprise AI (and how CoTools solves it)

Researchers from Soochow University in China have introduced Chain-of-Tools (CoTools), a novel framework designed to enhance how large language models (LLMs) use external tools. CoTools aims to provide a more efficient and flexible approach than existing methods, allowing LLMs to leverage vast toolsets directly within their reasoning process, including tools they haven't explicitly been trained on. For enterprises looking to build sophisticated AI agents, this capability could unlock more powerful and adaptable applications without the typical drawbacks of current tool integration techniques.

While modern LLMs excel at text generation, understanding, and even complex reasoning, many tasks require them to interact with external resources such as databases or applications. Equipping LLMs with external tools — essentially APIs or functions they can call — is crucial for extending their capabilities into practical, real-world applications. However, current methods for enabling tool use face significant trade-offs.

One common approach involves fine-tuning the LLM on examples of tool usage. While this can make the model proficient at calling the specific tools seen during training, it often restricts the model to only those tools. Furthermore, the fine-tuning process itself can sometimes degrade the LLM's general reasoning abilities, such as Chain-of-Thought (CoT) reasoning, diminishing the core strengths of the foundation model.

The alternative approach relies on in-context learning (ICL), where the LLM is given descriptions of available tools and examples of how to use them directly within the prompt. This method offers flexibility, allowing the model to use tools it hasn't seen before. However, constructing these complex prompts can be cumbersome, and the model's efficiency drops significantly as the number of available tools grows, making ICL impractical for scenarios with large, dynamic toolsets. As the researchers note in the paper introducing Chain-of-Tools, an LLM agent "should be capable of efficiently managing a large amount of tools and fully utilizing unseen ones during the CoT reasoning, as many new tools may emerge daily in real-world application scenarios."

CoTools offers a compelling alternative by combining aspects of fine-tuning and semantic understanding while crucially keeping the core LLM "frozen," meaning its original weights and powerful reasoning capabilities remain untouched. Instead of fine-tuning the entire model, CoTools trains lightweight, specialized modules that work alongside the LLM during its generation process. "The core idea of CoTools is to leverage the semantic representation capabilities of frozen foundation models for determining where to call tools and which tools to call," the researchers write. In essence, CoTools taps into the rich understanding embedded in the LLM's internal representations, often called "hidden states," which are computed as the model processes text and generates response tokens.

CoTools architecture. Credit: arXiv

The CoTools framework comprises three main components that operate sequentially during the LLM's reasoning process:

Tool Judge: As the LLM generates its response token by token, the Tool Judge analyzes the hidden state associated with the potential next token and decides whether calling a tool is appropriate at that specific point in the reasoning chain.

Tool Retriever: If the Judge determines a tool is needed, the Retriever chooses the most suitable tool for the task. The Retriever is trained to create an embedding of the query and compare it with embeddings of the available tools, allowing it to efficiently select the most semantically relevant tool from the pool, including "unseen" tools (i.e., tools not in the training data for the CoTools modules).

Tool Calling: Once the best tool is selected, CoTools uses an ICL prompt that demonstrates filling in the tool's parameters based on the context. This targeted use of ICL avoids the inefficiency of stuffing thousands of demonstrations into the prompt for the initial tool selection. Once the selected tool is executed, its result is inserted back into the LLM's response generation.

By separating decision-making (Judge) and selection (Retriever), both based on semantic understanding, from parameter filling (Calling via focused ICL), CoTools achieves efficiency even with massive toolsets while preserving the LLM's core abilities and allowing flexible use of new tools. However, since CoTools requires access to the model's hidden states, it can only be applied to open-weight models such as Llama and Mistral, not closed models such as GPT-4o and Claude.

Example of CoTools in action. Credit: arXiv

The researchers evaluated CoTools in two distinct application scenarios: numerical reasoning using arithmetic tools, and knowledge-based question answering (KBQA), which requires retrieval from knowledge bases. On arithmetic benchmarks like GSM8K-XL (basic operations) and FuncQA (more complex functions), CoTools applied to LLaMA2-7B achieved performance comparable to ChatGPT on GSM8K-XL and slightly outperformed or matched another tool-learning method, ToolkenGPT, on the FuncQA variants. The results highlighted that CoTools effectively enhances the capabilities of the underlying foundation model.

For the KBQA tasks, tested on the KAMEL dataset and a newly constructed SimpleToolQuestions (STQuestions) dataset featuring a very large tool pool (1,836 tools, 837 of them unseen in the test set), CoTools demonstrated superior tool selection accuracy. It particularly excelled in scenarios with massive tool numbers and when dealing with unseen tools, leveraging tools' descriptive information for effective retrieval where methods relying solely on trained tool representations faltered. The experiments also indicated that CoTools maintained strong performance despite lower-quality training data.

Implications for the enterprise

Chain-of-Tools presents a promising direction for building more practical and powerful LLM-powered agents in the enterprise, especially as new standards such as the Model Context Protocol (MCP) make it easier for developers to integrate external tools and resources into their applications. Enterprises could potentially deploy agents that adapt to new internal or external APIs and functions with minimal retraining overhead. The framework's reliance on semantic understanding via hidden states allows for nuanced and accurate tool selection, which could lead to more reliable AI assistants in tasks that require interaction with diverse information sources and systems. "CoTools explores the way to equip LLMs with massive new tools in a simple…
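The Judge, Retriever, and Calling stages described above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the real Tool Judge and Tool Retriever are trained modules operating on the frozen LLM's hidden states, whereas here a simple keyword trigger and a bag-of-words similarity stand in for them, and all names and tools below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    fn: Callable

def embed(text: str) -> set:
    # Toy "embedding": a bag of lowercase words. The real retriever uses
    # learned vector embeddings of queries and tool descriptions.
    return set(text.lower().split())

def similarity(a: set, b: set) -> float:
    # Jaccard overlap as a stand-in for cosine similarity.
    return len(a & b) / max(len(a | b), 1)

class ToolJudge:
    """Decides, at each generation step, whether a tool call is needed.
    CoTools trains this on hidden states; here a '=' suffix triggers it."""
    def needs_tool(self, partial_answer: str) -> bool:
        return partial_answer.rstrip().endswith("=")

class ToolRetriever:
    """Picks the most semantically relevant tool, including unseen ones,
    by comparing the query embedding to each tool-description embedding."""
    def __init__(self, tools):
        self.tools = tools
    def select(self, query: str) -> Tool:
        q = embed(query)
        return max(self.tools, key=lambda t: similarity(q, embed(t.description)))

# A tool pool; new tools can be appended without retraining anything.
tools = [
    Tool("add", "add two numbers sum plus", lambda a, b: a + b),
    Tool("mul", "multiply two numbers product times", lambda a, b: a * b),
]

judge, retriever = ToolJudge(), ToolRetriever(tools)
query = "what is the sum when you add 2 plus 3"
partial = "2 + 3 ="          # the LLM's partial chain-of-thought output

if judge.needs_tool(partial):
    tool = retriever.select(query)        # semantic selection, not prompt stuffing
    result = tool.fn(2, 3)                # parameter filling would use a focused ICL prompt
    partial += f" {result}"               # result feeds back into generation

print(partial)  # 2 + 3 = 5
```

Note how appending a new Tool to the pool makes it immediately selectable by description, which is the property that lets CoTools handle unseen tools.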

The tool integration problem that’s holding back enterprise AI (and how CoTools solves it) Read More »

Apple Patches Critical Vulnerabilities in iOS 15 and 16

On Monday, Apple issued critical security updates that retroactively address three actively exploited zero-day vulnerabilities affecting legacy versions of its operating systems.

CVE-2025-24200

The first vulnerability, designated CVE-2025-24200, was patched in iOS 16.7.11, iPadOS 16.7.11, iOS 15.8.4, and iPadOS 15.8.4. CVE-2025-24200 allows a physical attacker to disable USB Restricted Mode on an Apple device. This is a security feature designed to block unauthorised data access through the USB port when the iPhone or iPad has been locked for over an hour. Apple said CVE-2025-24200 "may have been exploited in an extremely sophisticated attack against specific targeted individuals," hinting at potential involvement from state-sponsored actors aiming to surveil high-value targets such as government officials, journalists, or senior business executives. Although initially patched on February 10 in iOS 18.3.1, iPadOS 18.3.1, and iPadOS 17.7.5, the vulnerability remained unresolved in older operating systems until now.

CVE-2025-24201

The second flaw, CVE-2025-24201, was also patched in iOS 16.7.11, iPadOS 16.7.11, iOS 15.8.4, and iPadOS 15.8.4. This flaw is in WebKit, the browser engine Safari uses to render web pages. It allows malicious code running inside the Web Content sandbox — an isolated environment intended to contain browser-based threats — to escape and compromise broader system components. CVE-2025-24201 was first mitigated in iOS 17.2 in late 2023, followed by a supplemental patch in iOS 18.3.2, macOS Sequoia 15.3.2, visionOS 2.3.2, and Safari 18.3.1. The flaw has now been retrospectively addressed in iOS and iPadOS 15 and 16.

CVE-2025-24085

CVE-2025-24085, the third vulnerability, was patched in iPadOS 17.7.6, macOS Sonoma 14.7.5, and macOS Ventura 13.7.5. The use-after-free vulnerability is in Apple's Core Media, the framework responsible for handling media processing tasks such as audio and video playback in apps. It allows attackers to seize control of deallocated memory and repurpose it to execute privileged malicious code. Originally patched in January with iOS 18.3, iPadOS 18.3, macOS Sequoia 15.3, watchOS 11.3, visionOS 2.3, and tvOS 18.3, the fix has now been backported to older systems.

Other vulnerabilities were patched in iOS 18.4

Alongside new Apple Intelligence features and emojis, iOS 18.4 — released on Tuesday — delivers fixes for additional vulnerabilities, including:

CVE-2025-30456: A flaw in the DiskArbitration framework that allowed apps to escalate their privileges to root.
CVE-2025-24097: A flaw in AirDrop that allowed unauthorised apps to access file metadata, such as creation dates or user details.
CVE-2025-31182: A flaw in the libxpc framework that let apps delete arbitrary files on the device.
CVE-2025-30429, CVE-2025-24178, CVE-2025-24173: Flaws that allowed apps to break out of the sandbox in Calendar, libxpc, and Power Services, respectively.
CVE-2025-30467: A flaw in Safari that could allow malicious websites to spoof the address bar.

Apple users are strongly urged to update their devices immediately to guard against exploitation of these now-publicised vulnerabilities. While most users will receive automatic update prompts, manual updates can be performed via Settings, General, and then Software Update. source

Apple Patches Critical Vulnerabilities in iOS 15 and 16 Read More »

Ameritas chief AI officer on creating the future AI workforce

Maryfran Johnson 0:00
Hello. Good afternoon and welcome to CIO Leadership Live. I'm your host, Maryfran Johnson, the CEO of Maryfran Johnson Media and the former editor in chief of CIO magazine. Since November 2017, this video and audio podcast has been produced by the editors of CIO.com and the digital media division of Foundry, which is an IDG company. Our growing library of past interviews, all of them openly available on both CIO.com and CIO's YouTube channel, includes more than 150 chief information, technology, and digital officers from midsized to large companies across every industry. Joining that esteemed lineup of CIOs today is a longtime friend of the family who has been interviewed in CIO magazine and on CIO.com a number of times over the years: Richard Wiedenbeck, the chief AI officer at Ameritas, based in Lincoln, Nebraska. Ameritas is a mutual-based financial services company with annual revenues of $3.4 billion. It serves some 6 million customers, many of them in the small to midsized business space, with a broad array of life, annuities, retirement, disability, dental, and vision insurance plans. Rich has worked in business and senior technology roles for more than three decades across multiple industries, including defense, manufacturing, consulting, and software. He joined Ameritas in 2010 as vice president of IT, moving up into the CIO's chair in 2013. In 2020, he was inducted into our CIO Hall of Fame, which every year honors an elite group of outstanding business technology leaders. Then last year, in January of 2024, Rich joined yet another elite group of leaders who hold the newly minted and still relatively rare title of chief AI officer. According to our 2025 State of the CIO survey, only 14% of midsized to large companies have CAIOs, and another 21% of companies are actively looking to hire one. The responsibilities of this emerging C-suite role, which are being covered by a lot of our CIO.com reporters these days, range from setting a company's overall AI strategy and overseeing how and where the AI tech is being used, to developing an AI-skilled workforce, to establishing new enterprise governance that integrates with existing corporate cultures. It is no small task, as you're going to hear about during this conversation with Rich, and there are some really great expectations around this role. So we have a lot to talk about here. Welcome, Rich. Thanks for joining me today.

Richard Wiedenbeck 3:05
Thank you, Maryfran. Always a pleasure to be chatting with you.

Maryfran Johnson 3:09
Alright, let's start out with the broader business picture: how Ameritas has been doing during these last few challenging years, and the role that IT has been playing in the business success you have been having.

Richard Wiedenbeck 3:28
Yeah, absolutely. Ameritas, I always like to say, if you look at our growth, it has been really good relative to the industry, right? Even with that broad range of diversified products, we still get classified in the life and annuity space, or the life insurance space. If you look at that subpart of the insurance industry, it's been growing at about one to 3% a year, and we've been growing at about seven to nine, so we're clearly outgrowing our industry, which is a good sign. But by the same token, if you look at our expense structure, it seems to be holding pace with our top line, right? Top line growing and expense structure growing at the same rate. So the challenge is to bend that cost curve. Around 2020, we took a look at that and said, hey, we really need to modernize. A lot of the standard stories: we need to modernize our systems, we need to really look at how we're getting things done, we need to look at the interactions and the digital advancements we're having, and then we need to look at this cost-curve-bending thing. So we took on an enterprise-wide transformation project we call Pepi. Everybody gives it a name, everybody gets an acronym, you know.

Maryfran Johnson 4:55
Everybody loves a good title on a program, right?

Richard Wiedenbeck 4:58
Always. And so we started that journey, and we're obviously four years into it. Like any transformation journey, you're going to say some things went well, some things didn't go as planned, and some things we wish we could go back and do a little differently, but I think, all in all, we've made meaningful progress on that journey. Then we started to see the AI frame come in, and we didn't want to lose sight of it. We didn't think it was something to wrap into that project; it was something to really start paying attention to a little bit differently. But I think a lot of firms are on that broad transformative journey, whether you call it age of the customer or digital; all of those are pieces of the puzzle, and Ameritas, just like other firms in our industry, has been actively pushing to make progress on it. Not just as a "hey, here's our portfolio, let's prioritize" exercise; we actually chose to drive our investment levels up for a period of time to really try to make meaningful progress. And now we're kind of coming on the tail end of that, saying we'll still continue to make investments, but where are we pushing those priorities, and how do we bring this…

Ameritas chief AI officer on creating the future AI workforce Read More »

All You Need to Know about Edtech

Technology is revolutionizing numerous aspects of our daily lives, and the landscape of education is no exception. In modern education, technology is reshaping the way we teach, learn, and interact within educational environments. Its significance lies in its potential to enhance learning experiences, streamline administrative tasks, and prepare students for a technologically driven world.

The role of edtech in modern education

Edtech, short for educational technology, refers to the use of digital tools and resources to enhance the learning experience. Edtech encompasses a wide range of opportunities, from using electronic gadgets (e.g., laptops and tablets) in a conventional classroom setting for note-taking to making online courses accessible. Understanding and embracing edtech has become a necessity for educators, students, and institutions alike. According to Research and Markets' EdTech and Smart Classrooms Global Market Report 2025, the global edtech and smart classrooms market is forecast to grow from $185.78 billion in 2024 to $214.73 billion in 2025. Leveraging edtech to its fullest requires a thorough grasp of its concepts and key terminologies.

Exploring core edtech concepts

Sharpen your edtech knowledge with TechRepublic Premium's quick glossary of important terms and concepts. For example, this in-depth resource breaks down the definition of adaptive learning, which refers to technology that tailors the content and pace of instruction based on individual learner needs. This strategy acknowledges that each student has a different learning style and that a single, universal approach to teaching may not work for all learners. It also describes asynchronous learning, defined as learning that occurs at different times and places, allowing students to access materials and complete assignments at their own pace. As technology progresses and more individuals gain access to online education platforms, this method is becoming more popular.

The glossary also explains collaborative learning, an educational strategy that encourages learners to work together on course projects, assignments, or activities, frequently supported by digital tools and platforms that foster cooperation and teamwork. Moreover, the resource delves into immersive learning technologies, a term for technologies that offer immersive educational experiences by imitating real-world surroundings or adding digital features to the actual world; this includes virtual reality and augmented reality. Other key terms featured in the resource include curriculum management system, distance education, flipped classroom, learning management system, and virtual classroom.

Education is a valuable asset that can't be taken away. For those aspiring to expand their understanding of edtech, this 13-page quick glossary is available for just $19 at TechRepublic Premium. source
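The adaptive learning idea described above, adjusting the pace and difficulty of instruction to the individual learner, can be illustrated with a minimal sketch. The 1-to-5 difficulty scale, the accuracy thresholds, and the function name below are illustrative assumptions, not drawn from the glossary or any real product.

```python
def next_difficulty(current: int, recent_correct: list[bool],
                    low: float = 0.5, high: float = 0.85) -> int:
    """Raise difficulty when the learner is cruising, lower it when they
    are struggling, and stay put otherwise, within a 1-5 scale."""
    if not recent_correct:
        return current                      # no data yet: keep the level
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy >= high:
        return min(current + 1, 5)          # mastered: step up
    if accuracy < low:
        return max(current - 1, 1)          # struggling: step down
    return current                          # on track: hold steady

# A learner who got all 5 recent items right moves from level 3 to 4;
# one who missed 2 of 3 drops from level 3 to 2.
print(next_difficulty(3, [True] * 5))           # 4
print(next_difficulty(3, [False, False, True])) # 2
```

Real adaptive-learning systems layer far richer models (item response theory, knowledge tracing) on top of this basic feedback loop, but the loop itself, measure, compare to thresholds, adjust, is the core of the concept.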

All You Need to Know about Edtech Read More »

AI culture war: Hidden bias in training models may push political propaganda

DeepSeek was developed in Hangzhou despite US export controls on the high-performance chips commonly used to design and test AI models, proving how quickly advanced AI models can emerge despite roadblocks, adds Adnan Masood, chief AI architect at digital transformation company UST. With a lower cost of entry, it's now easier for organizations to create powerful AIs with cultural and political biases built in. "On the ground, it means entire populations can unknowingly consume narratives shaped by a foreign policy machine," Masood says. "By the time policy executives realize it, the narratives may already be embedded in the public psyche."

Technology as propaganda

While few people have talked about AI models as tools for propaganda, it shouldn't come as a big surprise, Moogimane adds. After all, many technologies, including television, the internet, and social media, became avenues for pushing political and cultural agendas as they reached the mass market. source

AI culture war: Hidden bias in training models may push political propaganda Read More »