Customer Success Plays A Crucial Role In Revenue Process Transformation

The Forrester Opportunity Lifecycle is a framework for transforming revenue processes to maximize value for B2B customers. As my colleague Amy Hawthorne explains, top revenue transformers:

- Share signals and create a common view of the customer throughout the postsale, so you can achieve a true understanding of what makes customers succeed, as well as when they are deviating from or toward that goal.
- Adopt opportunities and retention/buying groups. Managing an opportunity (not just leads, marketing-qualified leads, or customer-success-qualified leads) is what vendors should do to support a customer’s journey from signed contract to loyal advocate. Retention groups are the postsale counterpart to the buying group — and they can become buying groups when expansion opportunities arise.
- Align marketing, sales, and customer success around shared, customer-centered goals and metrics. All frontline teams must play a coordinated, highly collaborative role in engaging customers, ensuring that customers achieve expected results faster and in more measurable, meaningful ways.

Customer Success’s Important Role

Customer success (CS) is uniquely positioned to help customers adopt offerings and achieve those meaningful results. Customer success managers enjoy close contact with customers, which puts them in touch with the signals and information that better inform customer understanding. They play a key role in all four postsale stages of the Forrester Opportunity Lifecycle framework, providing the management needed to guide customers to succeed. And when aligned with marketing and sales, they ensure that the business promises made during the presale and postsale stages turn into the results that customers expect — and more.

Four Steps Move Customers From Delivery To Activation

In a recently published report (subscription required), Forrester outlines the role of customer success in the opportunity lifecycle.
Whether or not you have a distinct customer success function — or teams dedicated to account management, onboarding, training, retention, or value engineering — anyone responsible for supporting customers along their journey will help make your growth more predictable and increase customer longevity when they:

- Deliver: Set the stage for the customer’s success. Leading CS teams put goals and processes in place to formalize the transition from sales to postsale and make the customer experience more consistent. They document vital information about customer accounts, make it easily accessible across frontline teams, and get customers to step up to ensure their own success.
- Develop: Ensure practical and meaningful offering adoption. To scale operations and generate measurable value, CS teams need to make sure that customers have, at minimum, a digital destination that gets them off to a fast start. This hub also becomes the place to connect with other customers, form a community, and elevate best practices across the customer ecosystem.
- Confirm: Help customers see that they’ve achieved a reasonable ROI. Top CS teams show they create value for customers when they help conquer the measurement obstacles presented by revenue process transformation, crystallize for customers the link between using their offerings and making measurable progress, and show the rest of the company that customers are really getting the results they want.
- Activate: Expand the relationship to reinforce loyalty. To turn happy customers into raving fans, CS teams help create the community interactions that customers crave, show advocate customers appreciation through personally relevant experiences, and invite them to show off their achievements or report results that make leaders care.

Does Your Postsale Strategy Set Your Customers Up For Success?

Ensuring that customers get the value they want requires dedicated postsale resources.
It’s time for customer success to gain equal footing with marketing and sales along the journey to transformed revenue processes. To learn more, join us at Forrester’s B2B Summit North America from March 31–April 3 in Phoenix, where you can attend our workshop or sessions on customer success. Forrester Decisions clients: You can access this report and related ones, or reach out to your account manager to schedule an inquiry or guidance session with an analyst if you want to explore this topic further.


Apple’s $500 Billion AI Investment to Create 20,000 Tech Jobs

Apple on Monday announced a $500 billion plan to bolster its artificial intelligence ambitions that will add 20,000 research and development jobs in the US over the next four years. The plan includes the expansion of data center facilities in Michigan, Texas, California, Arizona, Nevada, Iowa, Oregon, North Carolina, and Washington. The company, with the help of Taiwan’s Foxconn, will build a 250,000-square-foot facility in Houston, Texas, to manufacture AI servers to support Apple Intelligence.

US President Donald Trump sought to claim the announcement as a boost to his administration, which in recent days saw falling approval ratings after a whirlwind start to his second term that included thousands of federal government firings. Trump met with Apple CEO Tim Cook last week and in social media posts touted Monday’s announcement as a vote of confidence in his administration. During an event with state governors in Washington, D.C., last week, Trump said Apple’s investment was proof that his tariff efforts are paying off. Apple manufactures many of its products in China and faces new 10% tariffs on those goods. “[Apple] stopped two plants in Mexico that were starting construction,” Trump said. “They just stopped them — they’re going to build them here instead, because they don’t want to pay the tariffs. Tariffs are amazing.”

Despite Trump’s assertions, Apple did not state whether the proposed tariffs factored into its plans. It’s also unclear what “plants” Trump was referring to, as Apple has not announced specific plans to build in Mexico. Reports say Foxconn, which produces iPhones for Apple in China and India, is planning to build a factory in Mexico in partnership with Nvidia. In 2021, during the Biden administration, Apple made a $430 billion commitment to creating 20,000 new jobs across the country over five years.
But its plan to build a new campus in Research Triangle Park in North Carolina was paused in 2024. And during the first Trump administration, Apple announced a $350 billion, five-year spending plan. Apple has not publicly disclosed how much of those previous commitments were fulfilled. Cook said Apple is committed to boosting domestic manufacturing. “We are bullish on the future of American innovation and we’re proud to build on our long-standing US investments with this $500 billion commitment to our country’s future,” Cook said in a statement. He said the company would double its Advanced Manufacturing Fund, which invests in training for high-skilled manufacturing.

Apple said the 20,000 jobs will add to the 2.9 million jobs the company already supports throughout the country through direct employment, work with US-based suppliers and manufacturers, and developer jobs. The new positions will focus on research and development, software development, silicon engineering, and AI and machine learning advancements. “This is a welcome sign as Apple steps up to design its manufacturing infrastructure for the intelligent age,” Boston University Questrom School of Business professor emeritus Venkat Venkatraman wrote in a post on LinkedIn. “Could this help Apple get into a broader set of digital products? Possibly. It also signals a major geographical realignment of its global footprint (and political realities)!”

Plan Details

The announced Texas manufacturing facility is slated to open in 2026 and will produce AI servers previously manufactured outside of the US. The company’s US Advanced Manufacturing Fund, which was created in 2017 to spur high-skilled manufacturing training and support innovation, will increase from $5 billion to $10 billion. The expanded effort includes a multibillion-dollar commitment from Apple to produce advanced silicon in TSMC’s Arizona plant.
In Detroit, the company will launch the Apple Manufacturing Academy to offer free in-person and online courses teaching project management, manufacturing process optimization, and other smart manufacturing techniques.


How test-time scaling unlocks hidden reasoning abilities in small language models (and allows them to outperform LLMs)

Very small language models (SLMs) can outperform leading large language models (LLMs) in reasoning tasks, according to a new study by Shanghai AI Laboratory. The authors show that with the right tools and test-time scaling techniques, an SLM with 1 billion parameters can outperform a 405B LLM on complicated math benchmarks. The ability to deploy SLMs in complex reasoning tasks can be very useful as enterprises look for new ways to use these models in different environments and applications.

Test-time scaling explained

Test-time scaling (TTS) is the process of giving LLMs extra compute cycles during inference to improve their performance on various tasks. Leading reasoning models, such as OpenAI o1 and DeepSeek-R1, use “internal TTS,” which means they are trained to “think” slowly by generating a long string of chain-of-thought (CoT) tokens. An alternative approach is “external TTS,” where model performance is enhanced with (as the name implies) outside help. External TTS is suitable for repurposing existing models for reasoning tasks without further fine-tuning them. An external TTS setup is usually composed of a “policy model,” which is the main LLM generating the answer, and a process reward model (PRM) that evaluates the policy model’s answers. These two components are coupled together through a sampling or search method.

The easiest setup is “best-of-N,” where the policy model generates multiple answers and the PRM selects one or more of the best answers to compose the final response. More advanced external TTS methods use search. In “beam search,” the model breaks the answer down into multiple steps. For each step, it samples multiple answers and runs them through the PRM. It then chooses one or more suitable candidates and generates the next step of the answer.
And in “diverse verifier tree search” (DVTS), the model generates several branches of answers to create a more diverse set of candidate responses before synthesizing them into a final answer.

[Figure: Different test-time scaling methods (source: arXiv)]

What is the right scaling strategy?

Choosing the right TTS strategy depends on multiple factors. The study authors carried out a systematic investigation of how different policy models and PRMs affect the efficiency of TTS methods. Their findings show that efficiency is largely dependent on the policy and PRM models. For example, for small policy models, search-based methods outperform best-of-N. However, for large policy models, best-of-N is more effective because the models have better reasoning capabilities and don’t need a reward model to verify every step of their reasoning.

Their findings also show that the right TTS strategy depends on the difficulty of the problem. For example, for small policy models with fewer than 7B parameters, best-of-N works better for easy problems, while beam search works better for harder problems. For policy models that have between 7B and 32B parameters, diverse tree search performs well for easy and medium problems, and beam search works best for hard problems. But for large policy models (72B parameters and more), best-of-N is the optimal method for all difficulty levels.

Why small models can beat large models

[Figure: SLMs outperform large models at MATH and AIME-24 (source: arXiv)]

Based on these findings, developers can create compute-optimal TTS strategies that take into account the policy model, PRM, and problem difficulty to make the best use of the compute budget for solving reasoning problems. For example, the researchers found that a Llama-3.2-3B model with the compute-optimal TTS strategy outperforms Llama-3.1-405B on MATH-500 and AIME24, two complicated math benchmarks. This shows that an SLM can outperform a model that is 135X larger when using the compute-optimal TTS strategy.
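The best-of-N loop described above is simple to sketch. The snippet below is an illustrative stub, not the paper’s implementation: `policy_model` and `prm_score` are hypothetical stand-ins for what would, in a real external-TTS setup, be LLM and reward-model inference calls.

```python
import random

def policy_model(question: str, seed: int) -> str:
    """Hypothetical policy model: generates one candidate answer.
    In a real external-TTS setup this would be an LLM inference call."""
    rng = random.Random(seed)
    return f"candidate-{rng.randint(0, 9)}"

def prm_score(question: str, answer: str) -> float:
    """Hypothetical process reward model: scores a candidate.
    A real PRM scores each reasoning step; this toy version just
    rewards the trailing digit so the example stays deterministic."""
    return float(answer[-1])

def best_of_n(question: str, n: int = 8) -> str:
    """Best-of-N: sample N answers from the policy model and let the
    PRM pick the highest-scoring one as the final response."""
    candidates = [policy_model(question, seed=i) for i in range(n)]
    return max(candidates, key=lambda a: prm_score(question, a))

print(best_of_n("What is 17 * 24?"))
```

Beam search and DVTS replace the single `max` over complete answers with a step-by-step loop: at each step the PRM prunes partial answers and only the surviving candidates are extended further.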
In other experiments, they found that a Qwen2.5 model with 500 million parameters can outperform GPT-4o with the right compute-optimal TTS strategy. Using the same strategy, the 1.5B distilled version of DeepSeek-R1 outperformed o1-preview and o1-mini on MATH-500 and AIME24. When accounting for both training and inference compute budgets, the findings show that with compute-optimal scaling strategies, SLMs can outperform larger models while using 100-1,000X fewer FLOPs.

The researchers’ results show that compute-optimal TTS significantly enhances the reasoning capabilities of language models. However, as the policy model grows larger, the improvement from TTS gradually decreases. “This suggests that the effectiveness of TTS is directly related to the reasoning ability of the policy model,” the researchers write. “Specifically, for models with weak reasoning abilities, scaling test-time compute leads to a substantial improvement, whereas for models with strong reasoning abilities, the gain is limited.”

The study validates that SLMs can perform better than larger models when applying compute-optimal test-time scaling methods. While this study focuses on math benchmarks, the researchers plan to expand their work to other reasoning tasks such as coding and chemistry.


High Court Finds FCC's E-Rate Subject To False Claims Act

By Christopher Cole (February 21, 2025, 10:36 AM EST) — The U.S. Supreme Court ruled unanimously Friday that telecoms participating in the federal E-Rate program supporting school and library connectivity can be sued for excess payouts under the False Claims Act because the subsidy’s funds are provided through the U.S. Treasury….


Medical training’s AI leap: How agentic RAG, open-weight LLMs and real-time case insights are shaping a new generation of doctors at NYU Langone

Patient data records can be convoluted and sometimes incomplete, meaning doctors don’t always have all the information they need readily available. Added to this is the fact that medical professionals can’t possibly keep up with the barrage of case studies, research papers, trials, and other cutting-edge developments coming out of the industry. New York City-based NYU Langone Health has come up with a novel approach to tackle these challenges for the next generation of doctors.

The academic medical center — which comprises NYU Grossman School of Medicine and NYU Grossman Long Island School of Medicine, as well as six inpatient hospitals and 375 outpatient locations — has developed a large language model (LLM) that serves as a respected research companion and medical advisor. Every night, the model processes electronic health records (EHR), matching them with relevant research, diagnosis tips, and essential background information that it then delivers in concise, tailored emails to residents the following morning. This is an elemental part of NYU Langone’s pioneering approach to medical schooling — what it calls “precision medical education” — which uses AI and data to provide highly customized student journeys.

“This concept of ‘precision in everything’ is needed in healthcare,” Marc Triola, associate dean for educational informatics and director of the Institute for Innovations in Medical Education at NYU Langone Health, told VentureBeat.
“Clearly the evidence is emerging that AI can overcome many of the cognitive biases, errors, waste and inefficiencies in the healthcare system, that it can improve diagnostic decision-making.”

How NYU Langone is using Llama to enhance patient care

NYU Langone is using an open-weight model built on the latest version of Llama-3.1-8B-instruct and the open-source Chroma vector database for retrieval-augmented generation (RAG). But it’s not just accessing documents — the model goes beyond RAG, actively employing search and other tools to discover the latest research documents. Each night, the model connects to the facility’s EHR database and pulls out medical data for patients seen at Langone the previous day. It then searches for basic background information on diagnoses and medical conditions. Using a Python API, the model also performs a search of related medical literature in PubMed, which has “millions and millions of papers,” Triola explained. The LLM sifts through reviews, deep-dive papers, and clinical trials, selecting a couple of the seemingly most relevant and “puts it all together in a nice email.”

Early the following morning, medical students and internal medicine, neurosurgery, and radiation oncology residents receive a personalized email with detailed patient summaries. For instance, if a patient with congestive heart failure had been in for a checkup the previous day, the email will provide a refresher on the basic pathophysiology of heart conditions and information about the latest treatments. It also offers self-study questions and AI-curated medical literature. Further, it may give pointers about steps the residents could take next, or actions or details they may have overlooked. “We’ve gotten great feedback from students, from residents and from the faculty about how this is frictionlessly keeping them up to date, how they’re incorporating this in the way they make choices about a patient’s plan of care,” said Triola.
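The nightly flow Triola describes — pull yesterday’s encounters, retrieve related literature, assemble a morning email — can be sketched roughly as below. Every function name and data shape here is a stub invented for illustration; the real system queries NYU Langone’s EHR database and the PubMed API, and uses a Llama-based model to draft the text.

```python
def fetch_yesterdays_encounters():
    """Stub for the nightly EHR pull (hypothetical record shape)."""
    return [{"resident": "r.lee", "diagnosis": "congestive heart failure"}]

def search_literature(diagnosis, top_k=2):
    """Stub for the retrieval step; the real pipeline searches PubMed
    and a Chroma vector store for reviews and clinical trials."""
    corpus = {
        "congestive heart failure": [
            "Review: basic pathophysiology of heart failure",
            "Trial: latest treatments for congestive heart failure",
        ],
    }
    return corpus.get(diagnosis, [])[:top_k]

def compose_email(record):
    """Assemble one resident's morning digest (an LLM drafts the real one)."""
    papers = search_literature(record["diagnosis"])
    lines = [
        f"Patient seen yesterday: {record['diagnosis']}",
        "Suggested reading:",
    ] + [f"- {p}" for p in papers]
    return {"to": record["resident"], "body": "\n".join(lines)}

emails = [compose_email(r) for r in fetch_yesterdays_encounters()]
print(emails[0]["body"])
```

The key design point from the article is the batch cadence: the retrieval runs overnight against the previous day’s encounters, so the digest lands in inboxes before morning rounds.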
A key success metric for him personally came when a system outage halted the emails for a few days — and faculty members and students complained they weren’t receiving the morning nudges they had come to rely on. “Because we’re sending these emails right before our doctors start rounds — which is among the craziest and busiest times of the day for them — and for them to notice that they weren’t getting these emails and miss them as a part of their thinking was awesome,” he said.

Transforming the industry with precision medical education

This sophisticated AI retrieval system is fundamental to NYU Langone’s precision medical education model, which Triola explained is based on “higher density, frictionless” digital data, AI, and strong algorithms. The institution has collected vast amounts of data over the past decade about students — their performance, the environments they’re taking care of patients in, the EHR notes they’re writing, the clinical decisions they’re making, and the way they reason through patient interactions and care. Further, NYU Langone has a vast catalog of all the resources available to medical students, whether those be videos, self-study or exam questions, or online learning modules.

The success of the project is also thanks to the medical facility’s streamlined architecture: It boasts centralized IT, a single data warehouse on the healthcare side, and a single data warehouse for education, allowing Langone to marry its various data resources.
Chief medical information officer Paul Testa noted that great AI/ML systems aren’t possible without great data, but “it’s not the easiest thing to do if you’re sitting on unwarehoused data in silos across your system.” The medical system may be large, but it operates as “one patient, one record, one standard.”

Gen AI allowing NYU Langone to move away from ‘one-size-fits-all’ education

As Triola put it, the main question his team has been looking to address is: “How do they link the diagnosis, the context of the individual student and all of these learning materials?” “All of a sudden we’ve got this great key to unlock that: generative AI,” he said.

This has enabled the school to move away from the “one-size-fits-all” model that has been the norm, whether students intended to become, for example, a neurosurgeon or a psychiatrist — vastly different disciplines that require unique approaches. It’s important that students get tailored education throughout their schooling, as well as “educational nudges” that adapt to their needs, he said. But you can’t just tell faculty to “spend more time with each individual student” — that’s humanly impossible.

“Our students have been hungry for this, because they recognize that this is a high-velocity period of change in medicine and generative AI,” said Triola. “It absolutely will change…what it means to be a


Surging European defence stocks signal ‘huge potential’ for military tech startups

Shares in European aerospace and defence companies soared to record highs this week, elevating expectations for the continent’s military tech startups. Britain’s BAE Systems leapt by 9% on Monday, while Germany’s Rheinmetall jumped by 14%. Stocks in Sweden’s Saab, Italy’s Leonardo, and France’s Thales also boomed. By the day’s end, the Stoxx Europe aerospace and defence index had hit an all-time peak. Military tech firms have also been surging.

Kate Leaman, chief market analyst at online broker AvaTrade, said these companies have “huge potential” for growth — particularly those with AI-driven solutions. “We’re already seeing a shake-up in the defence sector, with AI-focused players like Palantir outperforming more traditional defence giants,” Leaman told TNW. “This suggests that cutting-edge, tech-centric firms could possibly capture a sizeable share of the market.”

European defence tech startups have also grabbed investors’ attention. In 2024, they attracted a record $5bn in VC funding — a 24% increase over the previous year. The momentum has raised expectations of future public listings. “Many defence tech startups haven’t gone public yet, but with the market heating up and investor interest growing, there’s a strong possibility we’ll see more IPOs in the near future,” Leaman said. “That could open the door to fresh investment opportunities and raise the profile of these emerging companies.”

The push for defence tech

The spending spree comes amid mounting concerns about Europe’s military sovereignty. Leaders across the continent have been shaken by the Russia-Ukraine war and tensions with the Trump administration. Ukraine’s President, Volodymyr Zelensky, has called for the creation of an “army of Europe”.
His French counterpart, Emmanuel Macron, has urged his allies to “wake up” and spend more on defence. European Commission President Ursula von der Leyen wants to trigger an emergency clause exempting military expenditures from the fiscal restraints on EU countries. A growing share of their budgets is going to military tech — and startups are beginning to cash in.

According to a new report from McKinsey, investment in European defence tech startups increased by over 500% between 2021 and 2024 compared to the previous three years. The report added, however, that the sector remains about five years behind the US’s in terms of maturity. A major factor in this gap is the struggle to secure late-stage funding — a common problem for European startups across industries.

Nonetheless, the rise of defence tech is set to continue. “Military spending is rapidly moving away from traditional hardware toward software, drones, and robotic solutions,” Leaman said. “As a result, defence tech companies specialising in these areas may enjoy increasing demand for their products and services.” Defence tech is a key theme at this year’s Assembly, the invite-only policy track of TNW Conference. The event takes place on June 19 and 20 — a week before the NATO Summit arrives in Amsterdam.


Anthropic’s Claude 3.7 Sonnet takes aim at OpenAI and DeepSeek in AI’s next big battle

Anthropic just fired a warning shot at OpenAI, DeepSeek, and the entire AI industry with the launch of Claude 3.7 Sonnet, a model that gives users unprecedented control over how much time an AI spends “thinking” before generating a response. The release, alongside the debut of Claude Code, a command-line AI coding agent, signals Anthropic’s aggressive push into the enterprise AI market — a push that could reshape how businesses build software and automate work.

The stakes couldn’t be higher. Last month, DeepSeek stunned the tech world with an AI model that matched the capabilities of U.S. systems at a fraction of the cost, sending Nvidia’s stock down 17% and raising alarms about America’s AI leadership. Now Anthropic is betting that precise control over AI reasoning — not just raw speed or cost savings — will give it an edge.

[Image: Claude 3.7 Sonnet introduces a ‘thinking mode’ toggle, allowing users to optimize the AI’s response time based on task complexity. (Credit: Anthropic)]

“We just believe that reasoning is a core part and core component of an AI, rather than a separate thing that you have to pay separately to access,” said Dianne Penn, who leads product management for research at Anthropic, in an interview with VentureBeat. “Just like humans, the AI should handle both quick responses and complex thinking. For a simple question like ‘what time is it?’, it should answer instantly. But for complex tasks — like planning a two-week Italy trip while accommodating gluten-free dietary needs — it needs more extensive processing time.” “We don’t see reasoning, planning and self-correction as separate capabilities,” she added.
“So this is essentially our way of expressing that philosophical difference…Ideally, the model itself should recognize when a problem requires more intensive thinking and adjust, rather than requiring users to explicitly select different reasoning modes.”

[Image: A comparison of AI models shows Claude 3.7 Sonnet’s performance across various tasks, with notable gains in extended thinking capabilities compared to its predecessor. (Credit: Anthropic)]

The benchmark data backs up Anthropic’s ambitious vision. In extended thinking mode, Claude 3.7 Sonnet achieves 78.2% accuracy on graduate-level reasoning tasks, challenging OpenAI’s latest models and outperforming DeepSeek-R1. But the more revealing metrics come from real-world applications. The model scores 81.2% on retail-focused tool use and shows marked improvements in instruction following (93.2%) — areas where competitors have either struggled or haven’t published results. While DeepSeek and OpenAI lead in traditional math benchmarks, Claude 3.7’s unified approach demonstrates that a single model can effectively switch between quick responses and deep analysis, potentially eliminating the need for businesses to maintain separate AI systems for different types of tasks.

How Anthropic’s hybrid AI could reshape enterprise computing

The timing of the release is crucial. DeepSeek’s emergence last month sent shockwaves through Silicon Valley, demonstrating that sophisticated AI reasoning could be achieved with far less computing power than previously thought. This challenged fundamental assumptions about AI development costs and infrastructure requirements. When DeepSeek published its results, Nvidia’s stock dropped 17% in a single day, with investors suddenly questioning whether expensive chips were truly essential for advanced AI. For businesses, the stakes couldn’t be higher. Companies are spending millions integrating AI into their operations, betting on which approach will dominate.
Anthropic’s hybrid model offers a compelling middle path: the ability to fine-tune AI performance based on the task at hand, from instant customer service responses to complex financial analysis. The system maintains Anthropic’s previous pricing of $3 per million input tokens and $15 per million output tokens, even with the added reasoning features.

“Our customers are trying to achieve outcomes for their customers,” explained Michael Gerstenhaber, Anthropic’s head of platform. “Using the same model and prompting the same model in different ways allows somebody like Thomson Reuters to do legal research, allows our coding partners like Cursor or GitHub to be able to develop applications and meet those goals.” Anthropic’s hybrid approach represents both a technical evolution and a strategic gambit. While OpenAI maintains separate models for different capabilities and DeepSeek focuses on cost efficiency, Anthropic is pursuing unified systems that can handle both routine tasks and complex reasoning. It’s a philosophy that could reshape how businesses deploy AI and eliminate the need to juggle multiple specialized models.

Meet Claude Code: AI’s new developer assistant

Anthropic today also unveiled Claude Code, a command-line tool that allows developers to delegate complex engineering tasks directly to AI. The system requires human approval before committing code changes, reflecting growing industry focus on responsible AI development.

[Image: Claude Code’s terminal interface, part of Anthropic’s new developer tools suite, emphasizes simplicity and direct interaction. (Credit: Anthropic)]

“You actually still have to accept the changes Claude makes. You are a reviewer with hands on [the] wheel,” Penn noted.
“There is essentially a sort of checklist that you have to essentially accept for the model to take certain actions.” The announcements come amid intense competition in AI development. Stanford researchers recently created an open-source reasoning model for under $50, while Microsoft just integrated OpenAI’s o3-mini model into Azure. DeepSeek’s success has also spurred new approaches to AI development, with some companies exploring model distillation techniques that could further reduce costs.

[Image: The command-line interface of Claude Code allows developers to delegate complex engineering tasks while maintaining human oversight. (Credit: Anthropic)]

From Pokémon to enterprise: Testing AI’s new intelligence

Penn illustrated the dramatic progress in AI capabilities with an unexpected example: “We’ve been asking different versions of Claude to play Pokémon…This version has made it all the way to Vermilion City, captured multiple Pokémon, and even grinds to level up. It has the right Pokémon to battle against rivals.” “I think you’ll see us continue to innovate and push on the quality of reasoning, push towards things like dynamic reasoning,” Penn explained. “We have always thought of


Apple Breaks Silence on UK Probe, Removes Data Protection Tool From UK Users

In response to a U.K. government inquiry about access to data sequestered on Apple devices, Cupertino has removed access to the Advanced Data Protection encryption feature from U.K.-held devices. “We have never built a backdoor or master key to any of our products or services and we never will,” an anonymous Apple representative wrote in a statement emailed to TechRepublic.

UK wants law enforcement to be able to access data on individual devices, sources claim

In early February, the Home Office invoked the Investigatory Powers Act of 2016 to request a way to access the encrypted data held under Apple’s Advanced Data Protection. The Washington Post broke the news based on anonymous sources, saying the information was discussed in secret. The Investigatory Powers Act gives law enforcement and intelligence personnel provisions for harvesting data. The U.K. government has not issued a statement confirming or denying the situation. Stating that the government has invoked the act is itself a criminal offense. According to the BBC, the government would have to follow a legal process to access such data, and would likely use it to target individuals already under investigation rather than wide swaths of the population.

Advanced Data Protection is Apple’s most rigorous privacy measure

Data stored under Apple’s Advanced Data Protection offers the highest level of protection the company provides, keeping information hidden even from Apple itself. Users have to sign up for Advanced Data Protection as an extra step on top of Apple’s default security measures. According to The Washington Post’s initial article, “most” Apple device users don’t sign up for Advanced Data Protection. If a U.K. user has not already signed up for Advanced Data Protection, they will not be able to as of February 21, Apple said.
Instead, those users will see a message: “Apple can no longer offer Advanced Data Protection (ADP) in the United Kingdom to new users.” Apple said existing users will need to disable the feature manually to continue using iCloud. More guidance for those users is forthcoming. Other end-to-end encrypted applications and services from Apple, such as iCloud Keychain, Health, iMessage, and FaceTime, will not change.
