Bill To Restrict Kids' Social Media Use Heads To Full Senate

By Allison Grande (February 5, 2025, 11:07 PM EST) — The U.S. Senate Commerce Committee on Wednesday easily advanced legislation that would ban kids under 13 from accessing social media and prevent providers from feeding personalized content to users under 17, although the measure faces opposition from advocacy groups that say the proposal would unconstitutionally restrict free speech. source

Bill To Restrict Kids' Social Media Use Heads To Full Senate Read More »

‘Sorry, I didn’t get that’: AI misunderstands some people’s words more than others

The idea of a humanlike artificial intelligence assistant that you can speak with has been alive in many people’s imaginations since the release of “Her,” Spike Jonze’s 2013 film about a man who falls in love with a Siri-like AI named Samantha. Over the course of the film, the protagonist grapples with the ways in which Samantha, real as she may seem, is not and never will be human.

Twelve years on, this is no longer the stuff of science fiction. Generative AI tools like ChatGPT and digital assistants like Apple’s Siri and Amazon’s Alexa help people get driving directions, make grocery lists, and plenty else. But just like Samantha, automatic speech recognition systems still cannot do everything that a human listener can.

You have probably had the frustrating experience of calling your bank or utility company and needing to repeat yourself so that the digital customer service bot on the other end of the line can understand you. Maybe you’ve dictated a note on your phone, only to spend time editing garbled words. Linguistics and computer science researchers have shown that these systems work worse for some people than for others. They tend to make more errors if you have a non-native or a regional accent, are Black, speak in African American Vernacular English, code-switch, are a woman, are old, are very young, or have a speech impediment.

Tin ear

Unlike you or me, automatic speech recognition systems are not what researchers call “sympathetic listeners.” Instead of trying to understand you by taking in other useful clues like intonation or facial gestures, they simply give up. Or they take a probabilistic guess, a move that can sometimes result in an error. As companies and public agencies increasingly adopt automatic speech recognition tools in order to cut costs, people have little choice but to interact with them.
But the more that these systems come into use in critical fields, ranging from emergency first response and health care to education and law enforcement, the more likely there will be grave consequences when they fail to recognize what people say.

Imagine sometime in the near future you’ve been hurt in a car crash. You dial 911 to call for help, but instead of being connected to a human dispatcher, you get a bot that’s designed to weed out nonemergency calls. It takes you several rounds to be understood, wasting time and raising your anxiety level at the worst moment.

What causes this kind of error to occur? Some of the inequalities that result from these systems are baked into the reams of linguistic data that developers use to build large language models. Developers train artificial intelligence systems to understand and mimic human language by feeding them vast quantities of text and audio files containing real human speech. But whose speech are they feeding them? If a system scores high accuracy rates when speaking with affluent white Americans in their mid-30s, it is reasonable to guess that it was trained using plenty of audio recordings of people who fit this profile.

With rigorous data collection from a diverse range of sources, AI developers could reduce these errors. But building AI systems that can understand the infinite variations in human speech arising from things like gender, age, race, first vs. second language, socioeconomic status, ability and plenty else requires significant resources and time.

‘Proper’ English

For people who do not speak English – which is to say, most people around the world – the challenges are even greater. Most of the world’s largest generative AI systems were built in English, and they work far better in English than in any other language.
On paper, AI has lots of civic potential for translation and for increasing people’s access to information in different languages, but for now, most languages have a smaller digital footprint, making it difficult for them to power large language models. Even within languages well served by large language models, like English and Spanish, your experience varies depending on which dialect of the language you speak.

Right now, most speech recognition systems and generative AI chatbots reflect the linguistic biases of the datasets they are trained on. They echo prescriptive, sometimes prejudiced notions of “correctness” in speech. In fact, AI has been shown to “flatten” linguistic diversity. There are now AI startup companies that offer to erase the accents of their users, on the assumption that their primary clientele would be customer service providers with call centers in countries like India or the Philippines. The offering perpetuates the notion that some accents are less valid than others.

Human connection

AI will presumably get better at processing language, accounting for variables like accents, code-switching and the like. In the U.S., public services are obligated under federal law to guarantee equitable access to services regardless of what language a person speaks. But it is not clear whether that alone will be enough incentive for the tech industry to move toward eliminating linguistic inequities.

Many people might prefer to talk to a real person when asking questions about a bill or medical issue, or at least to have the ability to opt out of interacting with automated systems when seeking key services. That is not to say that miscommunication never happens in interpersonal communication, but when you speak to a real person, they are primed to be a sympathetic listener. With AI, at least for now, it either works or it doesn’t. If the system can process what you say, you are good to go. If it cannot, the onus is on you to make yourself understood. source
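The disparities described above are usually quantified with word error rate (WER): the word-level edit distance between a human reference transcript and the system’s output, divided by the reference length, then averaged per demographic group. A minimal sketch of that measurement; the group labels and transcripts below are invented for illustration:

```python
# Sketch of how per-group ASR accuracy gaps are measured with word error rate (WER).
# The demographic labels and transcripts are invented for illustration.

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words
    # (substitutions, insertions, and deletions each cost 1).
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + sub)
    return d[len(r)][len(h)] / len(r)

def wer_by_group(samples):
    """samples: (group, reference, asr_output) triples -> per-group WER."""
    totals = {}
    for group, ref, hyp in samples:
        errors, words = totals.get(group, (0.0, 0))
        n = len(ref.split())
        totals[group] = (errors + wer(ref, hyp) * n, words + n)
    return {g: errors / words for g, (errors, words) in totals.items()}

rates = wer_by_group([
    ("group A", "turn off the kitchen lights", "turn off the kitchen lights"),
    ("group B", "turn off the kitchen lights", "turn of the kitten lights"),
])
print(rates)  # group B's transcripts come back with more errors
```

Studies like those cited above compute exactly this kind of per-group average over large test sets and report the gap between the best- and worst-served groups.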

‘Sorry, I didn’t get that’: AI misunderstands some people’s words more than others Read More »

French AI startup Mistral launches Le Chat mobile app for iPhone, Android — can it take enterprise eyes off DeepSeek?

While the AI market has in recent days seemed to collapse around DeepSeek and OpenAI, there are of course many other teams of brilliant engineers fielding large language models (LLMs) that are worth a look as a user or enterprise seeking to leverage the latest and greatest. Take Mistral AI, the French startup that made headlines even before it launched with a record-setting seed funding round for Europe, and which has quietly been training and releasing a mix of open source and proprietary models for consumers and enterprises. Even as the rise of new reasoning models and agents has dominated the AI landscape recently, Mistral is still positioning itself as a viable alternative to OpenAI’s signature chatbot ChatGPT and DeepSeek’s hit mobile app, especially for those concerned with data privacy and security.

Today, Mistral finally launched its own free, mobile version of its chatbot Le Chat for iOS and Android, as well as a new Enterprise tier for private infrastructure and a Pro plan at $14.99 per month. The move suggests that Mistral AI is making a concerted push to convince companies there are worthwhile alternatives to DeepSeek and OpenAI. Mistral’s Le Chat offers business leaders an AI tool that integrates with enterprise environments, operates with high-speed performance, and, importantly for some customers, does not send user data to China, unlike DeepSeek.

Mistral targets both consumers and enterprises with savvy new releases

Mistral AI’s latest rollout comes at a time when enterprises are increasingly evaluating AI partners based on data privacy, security, and deployment flexibility.
The launch of Le Chat’s Enterprise tier, which allows businesses to deploy the assistant on private infrastructure, SaaS, or a virtual private cloud (VPC), suggests that Mistral is targeting the same corporate users who may have previously considered OpenAI’s GPT-4 or Anthropic’s Claude but want more control over their data and models.

Mistral’s strategy mirrors a recent move by DeepSeek, a Chinese AI company that released DeepSeek-R1, a powerful reasoning model that offers capabilities and performance similar to OpenAI’s “o” series of models (o1, o1-mini, and o3-mini out now, with full o3 to follow soon) but at a fraction of the cost (30 times less expensive for enterprise users than OpenAI o1). However, DeepSeek’s expansion has been met in the West with privacy and security concerns related to China’s data retention and censorship laws. Some analysts have raised questions about whether AI models developed by Chinese firms could be subject to Beijing’s data access regulations, prompting enterprises to proceed cautiously when integrating such systems.

For companies concerned about where their AI models process and store data, Le Chat’s non-Chinese infrastructure could be a key selling point. Unlike DeepSeek, which operates under a Chinese legal framework, Mistral AI is a European company based in France, subject to EU data privacy laws (GDPR) rather than China’s Cybersecurity Law or Personal Information Protection Law (PIPL).

Mistral AI is betting that Le Chat’s performance advantages will also help it stand out. The mobile app is powered by the company’s latest low-latency AI models, which, according to Mistral, enable “Flash Answers” — a feature that generates responses at speeds of up to 1,000 words per second. Beyond speed, Le Chat differentiates itself by integrating real-time web search and sourcing from journalistic and social media platforms, allowing for fact-grounded responses rather than relying solely on pre-trained knowledge.
This makes Le Chat a potential alternative for businesses that require more up-to-date, evidence-based AI insights rather than static model training data. For enterprises, Le Chat also includes:

• Code Interpreter: allows in-place execution of scripts, scientific computations, and data visualization.
• OCR and document processing: industry-grade optical character recognition (OCR) for PDFs, spreadsheets, and even complex or low-quality images.
• Image generation: powered by Black Forest Labs’ Flux Ultra, enabling photorealistic content creation.

Undercutting OpenAI and Anthropic on price

Mistral AI is also taking a different approach to pricing compared to competitors. While OpenAI charges $20 per month for ChatGPT Plus and Anthropic’s Claude has varying pricing based on token limits, Le Chat’s Pro plan starts at $14.99 per month. Additionally, most features—including the latest models, document uploads, and even image generation—are free, with limits only kicking in for power users. For businesses looking at team-wide adoption, Le Chat Team provides priority support, unified billing, and integration credits, while Enterprise deployments allow companies to use their own custom AI models tailored to their organization’s needs.

Quick hands-on comparison

I downloaded and tested Mistral’s Le Chat iOS app on my iPhone briefly while writing and editing this piece, and compared some of my prompts to my default AI assistant, OpenAI’s ChatGPT powered by GPT-4o. Le Chat was typically noticeably faster in its outputs than ChatGPT, but its image generation via Black Forest Labs’ Flux Ultra model was surprisingly less faithful to my prompts than ChatGPT’s built-in connection to OpenAI’s DALL-E 3 image model, which is now five months old and hardly state-of-the-art anymore.
Also, OpenAI’s web search connectivity provided a richer diversity of sources than Le Chat, which defaulted to AFP, the French (yet English-language) news agency and wire service that Mistral partnered with back in January 2025. See some of my comparisons of Le Chat and ChatGPT below.

AI competition continues to intensify

Le Chat’s launch underscores a broader industry shift: while OpenAI and Anthropic remain dominant players, enterprises are actively evaluating alternative AI providers that offer better pricing, more flexible deployment options, and clearer data privacy guarantees. With DeepSeek facing scrutiny over its Chinese data links and OpenAI dealing with ongoing enterprise adoption challenges, Mistral AI’s European positioning, fast performance, and competitive pricing could make it an increasingly attractive choice for businesses looking to integrate AI assistants into their workflows. For companies weighing their AI options, the latest iteration of Le Chat is a signal that viable non-U.S., non-Chinese AI alternatives are beginning to emerge.

French AI startup Mistral launches Le Chat mobile app for iPhone, Android — can it take enterprise eyes off DeepSeek? Read More »

CIOs: Catalysts to promote sustainability through collaboration

Sustainability initiatives are largely driven by data, and CIOs are uniquely positioned to ensure their organizations have the right data to drive the collaborative decision-making practices that influence sustainability. In recent years, IT groups have helped a wide range of organizations improve sustainability initiatives by creating infrastructure that allows them to collect and analyze related data. Apparel maker Vuori uses its data to drive sustainability goals around waste reduction, its carbon footprint, and managing raw materials. And Choice Hotels has used data derived from utility providers to identify sustainability issues like leaking swimming pools. By leading tech-driven efforts to measure sustainability KPIs, and even helping determine which ones to measure, CIOs bring critical information to collaborative sustainability efforts. Plus, collecting and presenting data helps CIOs and other decision-makers understand the outcomes of particular business decisions, spot inefficiencies that might otherwise go overlooked, and identify opportunities for improvement. With these resources, CIOs can often initiate and drive the conversation surrounding sustainability. source

CIOs: Catalysts to promote sustainability through collaboration Read More »

Ransomware Payments Decreased by 35% in 2024

Ransomware payments took an unexpected plunge in 2024, dropping 35% to approximately $813.55 million, even though payouts had surpassed $1 billion for the first time in 2023. The decline was largely driven by a series of successful law enforcement takedowns and improved cyber hygiene, which enabled more victims to refuse payment, according to blockchain platform Chainalysis.

The drop came as a surprise, considering the upward trend seen earlier in the year. In fact, ransomware actors extorted 2.38% more in the first half of 2024 compared to the same period in 2023, suggesting that payments would continue to rise. However, this momentum was short-lived, as payment activity plummeted by approximately 34.9% in the second half of the year. According to Chainalysis, Akira was the only one of the top 10 most prolific ransomware groups from the first half of 2024 to have increased its efforts in the second half. Additionally, as the year progressed, fewer exceptionally large payouts were made compared to the record-breaking $75 million payment to Dark Angels in early 2024.

Incident response data also showed that the gap between the amounts demanded by criminals and the amounts paid by victims increased to 53% in the second half of the year. Chainalysis analysts attributed this to improved resiliency among organisations, which allowed them to explore recovery options, such as using a decryption tool or restoring from backups, rather than paying the ransoms.

SEE: How Can Businesses Defend Themselves Against Common Cyberthreats?

Despite the overall decline in ransomware payments, the number of new data leak sites doubled in 2024, according to Recorded Future. However, the Chainalysis team noted that many organisations had their data listed multiple times, and ransomware groups often claimed to have compromised multinational corporations when, in reality, they had only breached a single branch.
Hackers may also exaggerate or misrepresent the extent of a victim’s compromised data, sometimes even reposting the results of old attacks. This tactic is often used to stay relevant or appear active after a law enforcement takedown.

LockBit and ALPHV have left a notable gap

The notorious ransomware group LockBit, responsible for the most common type of ransomware deployed globally in 2023, was targeted in a law enforcement takedown, dubbed Operation Cronos, in February 2024. The U.K. National Crime Agency’s Cyber Division, the FBI, and international partners cut off its website, which had been operating as a major ransomware-as-a-service storefront. While LockBit resumed operations at a different Dark Web address a few days later, payments to the group decreased by 79% in the second half of the year, according to Chainalysis. Research from Malwarebytes also found that while LockBit conducted more individual attacks, the proportion of ransomware incidents it claimed responsibility for fell from 26% to 20%.

SEE: Cybersecurity News Round-Up 2024: 10 Biggest Stories That Dominated the Year

ALPHV, the second-most prolific ransomware group in 2023, also left a vacancy after a poorly executed cyber attack against Change Healthcare in February. The group failed to pay an affiliate their share of the $22 million ransom, prompting the affiliate to expose them. In response, ALPHV staged a fake law enforcement takedown and ceased operations.

Decline in mixer use and rise in personal wallets signal law enforcement impact

Beyond the decline in payouts, Chainalysis identified additional evidence that the law enforcement takedowns of 2024 were successful. The use of mixing services — tools that obscure the origin of illicit cryptocurrency by blending it with other funds — by ransomware actors declined in 2024.
Chainalysis linked this trend to the sanctions and law enforcement crackdowns on mixers such as ChipMixer, Tornado Cash, and Sinbad. In their place, ransomware actors are using cross-chain bridges, which transfer cryptocurrency between different blockchains, to facilitate their off-ramping. Furthermore, “substantial volumes” of criminal funds are now being held in personal wallets, suggesting that threat actors are abstaining from cashing out. “We attribute this largely to increased caution and uncertainty amid what is probably perceived as law enforcement’s unpredictable and decisive actions targeting individuals and services participating in or facilitating ransomware laundering, resulting in insecurity among threat actors about where they can safely put their funds,” the Chainalysis team said.

Ransomware attackers are upping their game in response

Chainalysis warned that ransomware groups continue to adapt despite law enforcement disruptions, with “new ransomware strains emerging from leaked or purchased code” to evade detection. The report also highlighted that attacks have become faster, with negotiations now beginning within hours of data exfiltration.

SEE: Microsoft: Ransomware Attacks Growing More Dangerous, Complex

However, authorities are now catching on to the evolving tactics and are considering more drastic countermeasures. Last month, the U.K. government announced it may ban ransomware payments to make critical industries “unattractive targets for criminals.” source

Ransomware Payments Decreased by 35% in 2024 Read More »

EU AI Act: First Requirements Become Legally Binding

As of Feb. 2, 2025, the first few requirements of the E.U.’s AI Act are legally binding. Businesses operating in the region that do not abide by these requirements risk a fine of up to 7% of their global annual turnover. Certain AI use cases are now prohibited, including using AI to manipulate behaviour in ways that cause harm, for example to teenagers. However, Kirsten Rulf, co-author of the E.U. AI Act and a partner at BCG, said that these apply to “very few” companies.

Other examples of now-prohibited AI practices include:

• AI “social scoring” that causes unjust or disproportionate harm.
• Risk assessment for predicting criminal behaviour based solely on profiling.
• Unauthorised real-time remote biometric identification by law enforcement in public spaces.

“For example, banks and other financial institutions using AI must carefully ensure that their creditworthiness assessments do not fall in the category of social scoring,” Rulf said. Read the complete list of prohibited practices via the E.U.’s AI Act.

In addition, the Act now requires that staff at companies that either provide or use AI systems have “a sufficient level of AI literacy.” This will be achieved either through internal training or by hiring staff with the appropriate skillset. “Business leaders must ensure their workforce is AI-literate at a functional level and equipped with preliminary AI training to foster an AI-driven culture,” Rulf said in a statement.

SEE: TechRepublic Premium’s AI Quick Glossary

The next milestone for the AI Act will come at the end of April, when the European Commission will likely publish the final Code of Practice for General Purpose AI Models, according to Rulf. The code will become effective in August, as will the powers of member state supervisory authorities for enforcing the Act.
“Between now and then, businesses must demand sufficient information from AI model providers to deploy AI responsibly and work collaboratively with providers, policymakers, and regulators to ensure pragmatic implementation,” Rulf advised.

AI Act is not stifling innovation but allows it to scale, according to its co-author

While many have criticised the AI Act, as well as the strict approach the E.U. takes to regulating tech companies in general, Rulf said during a BCG roundtable for the press that this first phase of the legislation marks the “start of a new era in AI scaling.” “(The Act) brings the guardrails and quality and risk management framework into place that it needs to scale up,” she said. “It’s not stifling innovation… it’s enabling the scaling of AI innovations that we all want to see.” She added that AI inherently comes with risks, and that scaling it up without managing those risks erodes the efficiency benefits and endangers the reputation of the business. “The AI Act provides you with a really good blueprint of how to tackle these risks, of how to tackle these quality issues, before they occur,” she said.

According to BCG, 57% of European companies cite uncertainty surrounding AI regulations as an obstacle. Rulf acknowledged that the current definition of AI that falls under the AI Act “cannot be operationalized easily” because it is so broad, and was written that way to be consistent with international guidelines. “The difference in how you interpret that AI definition for a bank is the difference between 100 models falling under that regulation, and 1,000 models plus falling under that regulation,” she said. “That, of course, makes a huge difference both for capacity costs, bureaucracy, scrutiny, but also can even policy makers keep up with all of that?”

Rulf stressed that it is important that businesses engage with the E.U. AI Office while the standards for the AI Act that are yet to be phased in are still being drawn up.
This means that policymakers can develop them to be as practical as possible.

SEE: What is the EU’s AI Office? New Body Formed to Oversee the Rollout of General Purpose Models and AI Act

“As a regulator and policy maker, you don’t hear these voices,” she said. “You cannot deregulate if you don’t know where the big problems and stepping stones are… I can only encourage everyone to really be as blunt as possible and as industry-specific as possible.”

Regardless of the criticism, Rulf said the AI Act has “evolved into a global standard” and has been copied both in Asia and in certain U.S. states. This means many companies may not find compliance too taxing if they have already adopted a responsible AI program to satisfy other regulations.

SEE: EU AI Act: Australian IT Pros Need to Prepare for AI Regulation

More than 100 organisations, including Amazon, Google, Microsoft, and OpenAI, have already signed the E.U. AI Pact and volunteered to start implementing the Act’s requirements ahead of the legal deadlines. source

EU AI Act: First Requirements Become Legally Binding Read More »

How AI can drive business transformation

Strategic time savings

It is possible to squander the time savings AI produces. For example, let’s say you deploy an AI solution that allows your staff to do things more quickly and efficiently. If you save everyone 30 minutes a day, that doesn’t necessarily mean much to the company. It means everyone gets an extra coffee break or a longer lunch. It is only when you are strategic and pursue organizational redesign that concentrates and maximizes that time that you get real value.

Now instead imagine that you have 10,000 people handling HR issues, customer service, technology support, and managing the business. If you can save each of them one hour a day by deploying AI solutions, then you have freed up roughly 1,200 heads. Here is where companies can make strategic decisions to maximize these time savings. Instead of spreading the time savings so everyone gets a longer coffee break, can you consolidate them so entire teams or departments are freed up? When you concentrate the time savings, you open up the possibility for transformation.

Time to transform

It is easy to see the time saved by employees and entire business units just in terms of cost savings. If AI can do the work, that is work you no longer have to pay someone to do. But that approach would be shortsighted. source
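The arithmetic behind the "freed up heads" figure above is worth making explicit. A quick sketch, where the headcount and hours saved come from the article and the eight-hour workday is an assumption, which is why the result lands near, rather than exactly on, the article's rough figure:

```python
# Back-of-the-envelope math for the scenario above. The headcount and the hour
# saved per person come from the article; the eight-hour workday is an assumption.
employees = 10_000
hours_saved_per_person_per_day = 1
workday_hours = 8

total_hours_saved = employees * hours_saved_per_person_per_day  # 10,000 hours/day
fte_freed = total_hours_saved / workday_hours                   # full-time equivalents
print(fte_freed)  # 1250.0 -- in the same ballpark as the article's ~1,200 heads
```

The point of the exercise is the order of magnitude: saving everyone an hour is equivalent to the capacity of entire departments, but only if the savings are consolidated rather than dispersed.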

How AI can drive business transformation Read More »

FCC To Launch Spectrum Sale, Eyes More C-Band Use

By Christopher Cole (February 5, 2025, 7:46 PM EST) — The FCC’s new Republican chief said Wednesday the agency will kick off rules for a new spectrum sale authorized by Congress and consider a plan to eventually open more midband airwaves in the C-band for private sector use. source

FCC To Launch Spectrum Sale, Eyes More C-Band Use Read More »

Highlights From The Forrester Wave™: Content Platforms, Q1 2025

The Forrester Wave™: Content Platforms, Q1 2025, is now live! We looked at 12 key vendors and evaluated them on 24 criteria. Four Leaders emerged, followed by five Strong Performers and three Contenders. To learn more about these vendors and how they serve their target markets, Forrester clients can view the full report.

Get Ready For A More Intelligent Approach To Content Management

The enterprise content management market has undergone a significant transformation and today is exemplified by AI-enabled cloud content platforms. Generic document management doesn’t cut it. Technical leaders and the business roles they support want flexible, extensible platforms on which to design and deploy a range of content-rich apps. Content platforms are a foundational component of an overall digital workplace, integrating with key productivity suites and essential enterprise applications.

A Fast Pace Of Innovation Marked This Year’s Evaluation

Our Wave methodology ensures that we do a deep dive into the evaluated content platforms, looking at extensive written questionnaire responses from vendors, sitting through demos and briefings, and talking directly to their customers. I found that:

The pace of innovation over the last two years has been unprecedented — driven by AI. Generative AI (genAI) is transforming how we create, consume, and govern content. Vendors are making substantial investments in genAI, with capabilities evolving quickly as large language models and agentic AI continue to develop. This rapid iteration means that businesses can expect continuous improvement, including evolving pricing models to put more AI capabilities into users’ hands.

Automation opportunities abound. From simple document approvals to complex processes that integrate with other enterprise applications, the range of automation capabilities is expanding. Intelligent data extraction helps fuel high-volume, document-centric workflows and automates metadata identification and tagging.
Document generation capabilities, increasingly assisted by AI and integrations, continue to be an area of investment for vendors.

Packaged apps and solution templates can fast-track adoption and productivity. Many vendors have mature vertical strategies and offer packaged apps designed for industry-specific use cases. Look for predefined templates or solution accelerators to tailor your deployment to meet specific business needs — often with minimal custom development.

Key Considerations For Buyers

Not all vendors are ideal for every content management use case. When building a shortlist for content platforms, keep in mind that:

Vertical expertise matters, and not everybody has it. Look at the vendors that understand the nuances of your industry and that invest in the solutions, professional services, and partner ecosystem to meet your requirements. Vendors focusing on key verticals will invest in meeting industry-specific compliance obligations, obtain certifications, and help clients meet their regulatory requirements.

Pricing models are simplifying — but becoming opaque. While a handful of vendors in this evaluation do publish the pricing for their most common subscription bundles, most don’t. Most vendors price their cloud content platforms on per-user/per-month models with a choice of a few subscription tiers, such as basic, intermediate, or advanced capabilities. Vendors that still offer self-hosted or on-premises deployment options may have additional pricing and licensing models tied to API calls, storage volumes, or other usage- or application-based parameters. Know your use cases and your requirements when you reach out to vendors that don’t publish pricing/licensing models.

Context matters when trying to get value out of genAI. Bring AI to your content rather than bringing your content to AI. Using the embedded genAI capabilities from your content platform vendor can provide an unfair advantage.
Most evaluated vendors use a retrieval-augmented generation architecture that shields your corporate content from public AI models and also respects existing access controls and permission structures. Exposing a genAI assistant in a folder, a workspace, or a search result interface can ground the AI in that specific context — i.e., a customer case, a project folder, or a list of incoming claims — to return specific answers and relevant citation links to source documents.

Interested in learning more? I’ll be hosting a webinar for clients on Thursday, March 6, to go over the key findings from this Wave. Join us by registering here! Forrester clients are encouraged to set up a guidance session or inquiry to talk about our key findings and learn more about this market. source
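The permission-aware retrieval pattern described above (retrieve only content the requesting user may read, then ground the model's answer in those snippets) can be sketched in a few lines. Everything here, from the toy keyword-overlap retriever to the document fields and user names, is invented for illustration; a real platform would use vector search and its own access-control model.

```python
# Sketch of permission-aware retrieval-augmented generation (RAG). All names and
# the keyword-overlap "retriever" are illustrative stand-ins, not any vendor's API.

def retrieve(query, documents, user, k=2):
    """Return the top-k documents the user may read, ranked by word overlap."""
    readable = [d for d in documents if user in d["readers"]]  # enforce ACLs first
    terms = set(query.lower().split())
    ranked = sorted(readable,
                    key=lambda d: len(terms & set(d["text"].lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents, user):
    """Assemble a grounded prompt: retrieved snippets plus the user's question."""
    hits = retrieve(query, documents, user)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in hits)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

docs = [
    {"id": "claim-17", "text": "incoming claim for water damage at the depot",
     "readers": {"ana"}},
    {"id": "proj-2", "text": "project folder notes for the rollout plan",
     "readers": {"ana", "bo"}},
]

# "bo" cannot read claim-17, so it never reaches the model's context.
print(build_prompt("status of the water damage claim", docs, "bo"))
```

The key design point is that the access-control filter runs before retrieval, so restricted content never enters the model's context; the prompt that results would then be sent to the platform's LLM.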

Highlights From The Forrester Wave™: Content Platforms, Q1 2025

Australia Divided In DeepSeek Response

Australian authorities disagree over how the country should respond to the runaway success of the Chinese AI app DeepSeek. While some industry groups call for rapid action to support national AI innovation, the science minister urges caution.

The Tech Council of Australia, an industry body that counts Microsoft, Atlassian, Google, and IBM among its members, warned that the government should “act now or risk Australia falling behind in AI development and adoption.” In a statement about the Australian government’s national AI capability plan, the TCA said, “DeepSeek’s reported breakthrough shows that the AI landscape is highly competitive and rapidly evolving.”

DeepSeek recently launched an AI chat app featuring a “reasoning” model comparable to OpenAI’s o1. The app quickly surged to the top of Apple’s App Store, causing a stir among American AI companies. Its debut rattled financial markets: NVIDIA and Microsoft stocks took a hit as investor confidence in U.S. AI makers dipped.

The Council emphasised its support for the national AI plan announced by the government in December but argued the country “cannot wait” until 2025 for it to be finalised. It recommended key priorities such as AI education, infrastructure investment, pro-innovation regulations, international collaboration, and research support. In November, research from the industry group found that increasing total tech investment from 3.7% to 4.6% of the country’s GDP could contribute AUD $39 billion in productivity gains by 2035. “Realising these benefits will require the right policy settings and coordination with industry to ensure Australia is a competitive place to make and deliver technology products,” the council stated.

The Australian Strategic Policy Institute, a prominent think tank, echoed the Council’s sentiment.
It said that Australia “cannot continue the current approach of responding to each new tech development” and should instead focus on building its own sovereign AI capabilities. Like the Tech Council, the institute emphasised the need for a national strategy to secure AI’s role in defence, national security, and economic stability.

Security concerns surrounding DeepSeek have also emerged. Researchers have found the app is vulnerable to attacks and can be jailbroken, allowing it to bypass its built-in safeguards. CyberCX, a leading Australian cybersecurity firm, has called for a ban on DeepSeek in Australia, citing risks to data privacy and national security. “We assess it is almost certain that DeepSeek, the models and apps it creates, and the user data it collects, is subject to direction and control by the Chinese government,” CyberCX said in a statement.

Federal Industry and Science Minister Ed Husic has also taken a cautious stance since DeepSeek’s debut. Rather than pushing for rapid innovation to compete with China, he raised concerns that the app’s remarkable capabilities may have come at the cost of proper “data and privacy management.” “The Chinese are very good at developing products that work very well. That market is accustomed to their approaches on data and privacy,” Husic told ABC via AFP. “The minute you export it to markets where consumers have different expectations around privacy and data management, the question is whether those products will be embraced in the same way.”

Newly appointed Chief Scientist Tony Haymet, however, expressed a more optimistic outlook. Speaking at a press conference, Haymet described DeepSeek’s success as a demonstration of “how disruptive technology can be and how quickly things can happen.” He said: “I view AI as a great opportunity.
I think it’s a great export opportunity for Australia because AI needs electricity and most of the world is demanding that we deliver AI with renewable electricity, and Australia is perfectly set up for that. No matter which way we decide to deliver that electricity, we can do it.”
