Is Google Rigging Search? EU's Preliminary Findings Are In

Image: Guillaume Périgois/Unsplash

The European Commission claims that Alphabet, Google’s parent company, has breached the Digital Markets Act (DMA). The specific allegation is that Google is self-preferencing on Search and the Play Store. The DMA applies to “gatekeeper” organisations that have a major economic impact in the EU (at least €7.5 billion in annual EU revenue in each of the last three fiscal years) and have more than 45 million monthly active users in the EU, or more than 10,000 yearly active business users, for at least three fiscal years.

SEE: Advocacy Groups Criticise European Commission for Weak Regulation of Apple, Google

Google’s under scrutiny for Search and Play Store practices

The European Commission has published preliminary findings on Alphabet and how it could be preventing competition. The concerns relate to two issues: self-preferencing in Google Search and “steering rules” in Google Play; these issues were examined as part of a non-compliance investigation opened in March 2024.

The DMA bans self-preferencing, which is when a dominant platform favours its own products or services over those of competitors. The Commission believes the way Alphabet presents Google Search results may steer customers toward Google services, such as Shopping, Flights, or Hotels.

Secondly, the Commission argues that the Play Store, Google’s mobile app marketplace, prevents app developers from directing consumers to alternative purchasing channels, such as their own websites or third-party app stores. This limits their ability to offer better deals outside of Google’s platform.

Google has made a series of changes in the last year to comply with the DMA, such as temporarily removing some Search widgets and rejigging the layout of Search results, but the Commission has determined that these steps are insufficient.

What are possible consequences of the EU’s ruling?
Note that these findings are preliminary, and Alphabet has the opportunity to respond in writing; however, if they are confirmed, the Commission will adopt a non-compliance decision, as it has now done with Apple, which could lead to fines or other penalties. Fines for non-compliance with the DMA can be up to 10% of the company’s total worldwide turnover, rising to 20% in cases of repeated infringement. In a blog post, Google’s senior director for competition, Oliver Bethell, said the changes the Commission wants will “hurt European businesses and consumers, hinder innovation, weaken security, and degrade product quality.” source

Is Google Rigging Search? EU's Preliminary Findings Are In Read More »

4 Key Payments Trends For White Collar Attys

By Laurel Loomis Rimon, Gina Shabana and Julianna St. Onge (March 20, 2025, 3:46 PM EDT) — Over the last two decades, the payments ecosystem has grown in size and complexity. In January, the new administration announced a new working group that is aimed at broadly expanding the role of digital currency in the American economy…. Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas and jurisdictions. A Law360 subscription includes features such as daily newsletters, expert analysis, a mobile app, advanced search, judge information, real-time alerts, 450K+ searchable archived articles and more. Experience Law360 today with a free 7-day trial. source

4 Key Payments Trends For White Collar Attys Read More »

Less is more: UC Berkeley and Google unlock LLM potential through simple sampling

A new paper by researchers from Google Research and the University of California, Berkeley, demonstrates that a surprisingly simple test-time scaling approach can boost the reasoning abilities of large language models (LLMs). The key? Scaling up sampling-based search, a technique that relies on generating multiple responses and using the model itself to verify them.

The core finding is that even a minimalist implementation of sampling-based search, using random sampling and self-verification, can elevate the reasoning performance of models like Gemini 1.5 Pro beyond that of o1-Preview on popular benchmarks. The findings can have important implications for enterprise applications and challenge the assumption that highly specialized training or complex architectures are always necessary for achieving top-tier performance.

The limits of current test-time compute scaling

The current popular method for test-time scaling in LLMs is to train the model through reinforcement learning to generate longer responses with chain-of-thought (CoT) traces. This approach is used in models such as OpenAI o1 and DeepSeek-R1. While beneficial, these methods usually require substantial investment in the training phase.

Another test-time scaling method is “self-consistency,” where the model generates multiple responses to the query and chooses the answer that appears most often. Self-consistency reaches its limits when handling complex problems, as in these cases the most repeated answer is not necessarily the correct one.

Sampling-based search offers a simpler and highly scalable alternative: let the model generate multiple responses and select the best one through a verification mechanism.
Sampling-based search can complement other test-time compute scaling strategies and, as the researchers write in their paper, “it also has the unique advantage of being embarrassingly parallel and allowing for arbitrarily scaling: simply sample more responses.” More importantly, sampling-based search can be applied to any LLM, including those that have not been explicitly trained for reasoning.

How sampling-based search works

The researchers focus on a minimalist implementation of sampling-based search, using a language model to both generate candidate responses and verify them. This is a “self-verification” process, where the model assesses its own outputs without relying on external ground-truth answers or symbolic verification systems.

Image: Sampling-based search (Credit: VentureBeat)

The algorithm works in a few simple steps:

1. The algorithm begins by generating a set of candidate solutions to the given problem using a language model. This is done by giving the model the same prompt multiple times and using a non-zero temperature setting to create a diverse set of responses.

2. Each candidate response undergoes a verification process in which the LLM is prompted multiple times to determine whether the response is correct. The verification outcomes are then averaged to create a final verification score for the response.

3. The algorithm selects the highest-scoring response as the final answer. If multiple candidates are within close range of each other, the LLM is prompted to compare them pairwise and choose the best one. The response that wins the most pairwise comparisons is chosen as the final answer.

The researchers considered two key axes for test-time scaling:

Sampling: The number of responses the model generates for each input problem.

Verification: The number of verification scores computed for each generated solution.

How sampling-based search compares to other techniques

The study revealed that reasoning performance continues to improve with sampling-based search, even when test-time compute is scaled far beyond the point where self-consistency saturates. At a sufficient scale, this minimalist implementation significantly boosts accuracy on reasoning benchmarks like AIME and MATH. For example, Gemini 1.5 Pro’s performance surpassed that of o1-Preview, which has been explicitly trained on reasoning problems, and Gemini 1.5 Flash surpassed Gemini 1.5 Pro.

“This not only highlights the importance of sampling-based search for scaling capability, but also suggests the utility of sampling-based search as a simple baseline on which to compare other test-time compute scaling strategies and measure genuine improvements in models’ search capabilities,” the researchers write.

It is worth noting that while the results of sampling-based search are impressive, the costs can also become prohibitive. For example, with 200 samples and 50 verification steps per sample, a query from AIME will generate around 130 million tokens, which costs $650 with Gemini 1.5 Pro. However, this is a very minimalistic approach to sampling-based search, and it is compatible with optimization techniques proposed in other studies. With smarter sampling and verification methods, the inference costs can be reduced considerably by using smaller models and generating fewer tokens. For example, by using Gemini 1.5 Flash to perform the verification, the costs drop to $12 per question.

Effective self-verification strategies

There is an ongoing debate on whether LLMs can verify their own answers. The researchers identified two key strategies for improving self-verification using test-time compute:

Directly comparing response candidates: Disagreements between candidate solutions strongly indicate potential errors.
By providing the verifier with multiple responses to compare, the model can better identify mistakes and hallucinations, addressing a core weakness of LLMs. The researchers describe this as an instance of “implicit scaling.”

Task-specific rewriting: The researchers propose that the optimal output style of an LLM depends on the task. Chain-of-thought is effective for solving reasoning tasks, but responses are easier to verify when written in a more formal, mathematically conventional style. Verifiers can rewrite candidate responses into a more structured format (e.g., theorem-lemma-proof) before evaluation.

“We anticipate model self-verification capabilities to rapidly improve in the short term, as models learn to leverage the principles of implicit scaling and output style suitability, and drive improved scaling rates for sampling-based search,” the researchers write.

Implications for real-world applications

The study demonstrates that a relatively simple technique can achieve impressive results, potentially reducing the need for complex and costly model architectures or training regimes. It is also a scalable technique, enabling enterprises to increase performance by allocating more compute resources to sampling and verification. It also enables developers to push frontier language models beyond their limitations on complex tasks. “Given that it complements other test-time
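The generate-verify-select loop the article describes is simple enough to sketch in a few lines. The following Python sketch is illustrative only: the function names are assumptions, the vote-averaging follows the description above, and the pairwise tie-breaking from step 3 is omitted for brevity. In practice, `generate` and `verify` would each be an LLM call (the generator sampled at non-zero temperature).

```python
def sampling_based_search(prompt, generate, verify, n_samples=8, n_verifications=4):
    """Minimal sampling-based search with self-verification.

    generate(prompt) -> one candidate answer (in practice, an LLM sampled
        at non-zero temperature so repeated calls give diverse responses).
    verify(prompt, candidate) -> True/False correctness judgment (in
        practice, the same LLM prompted to check its own output).
    Returns the best candidate and its verification score.
    """
    # Step 1: sample a diverse set of candidate answers.
    candidates = [generate(prompt) for _ in range(n_samples)]

    # Step 2: score each candidate by averaging repeated verification votes.
    scores = []
    for cand in candidates:
        votes = [verify(prompt, cand) for _ in range(n_verifications)]
        scores.append(sum(votes) / n_verifications)

    # Step 3: select the highest-scoring candidate as the final answer.
    # (The paper adds pairwise comparison for near-ties; omitted here.)
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores[best]
```

Note that cost scales as n_samples × n_verifications LLM calls per query, which is how the 200-sample, 50-verification configuration cited above reaches roughly 130 million tokens (about $650 at the Gemini 1.5 Pro rates quoted) for a single AIME question.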

Less is more: UC Berkeley and Google unlock LLM potential through simple sampling Read More »

Google Acquires Startup Wiz for $32B to ‘Turbocharge Improved Cloud Security’

Image: Wiz

Google has announced it is acquiring cybersecurity startup Wiz for $32 billion. The acquisition is parent company Alphabet’s largest to date, more than doubling its previous record-breaking $12.5 billion purchase of Motorola Mobility in 2012.

The company appears to have pursued this deal aggressively due to the growing demand for secure cloud services. The surge in generative AI has prompted tech companies to rush for cloud infrastructure, while major security incidents, such as last year’s CrowdStrike outage, have heightened concerns. Wiz’s software incorporates AI-powered security features that identify critical risks in cloud infrastructure, allowing developers to remediate them before they become an issue. If Wiz’s products are integrated, Google Cloud could gain a significant advantage in a market where it has historically lagged behind Amazon Web Services and Microsoft Azure.

In Google’s announcement about the acquisition, it said Wiz will provide its customers with improved and lower-cost security for multiple cloud and code environments. Despite the acquisition, Wiz’s products will continue to work and be available across all major clouds, including Amazon Web Services, Microsoft Azure, and Oracle Cloud platforms.

In a press release about the acquisition, Google Cloud CEO Thomas Kurian stated: “Google Cloud and Wiz share a joint vision to make cybersecurity more accessible and simpler to use for organizations of any size and industry.” And Alphabet and Google CEO Sundar Pichai noted: “Together, Google Cloud and Wiz will turbocharge improved cloud security and the ability to use multiple clouds.”

SEE: CrowdStrike vs Wiz: Which Offers Better Cloud Security and Value?
Wiz’s rejection of Alphabet’s previous offer

When Wiz declined Alphabet’s last offer of $23 billion in July 2024, the startup cited concerns over antitrust scrutiny and disagreements on whether it would operate as an independent division or be fully integrated into Google Cloud, The Wall Street Journal reported at the time. After the deal collapsed, Wiz CEO Assaf Rappaport told employees the company would pursue an initial public offering, believing it could achieve a higher valuation as a publicly traded entity (the company was valued at $12 billion by investors in May 2024). Nevertheless, Rappaport has clearly re-engaged with potential buyers since.

Regulatory challenges and Alphabet’s antitrust battles

Google said the deal is subject to customary closing conditions, including regulatory approvals. Alphabet’s previous bid faced obstacles due to antitrust regulations imposed by the Biden administration, such as the Executive Order on Competition, which mandates strict scrutiny of mergers, particularly in the tech sector. Although there was speculation that U.S. President Donald Trump might roll back certain regulations to favor innovation, his administration has instead introduced tariffs that could increase costs for tech companies. This shift in policy has made investors cautious about major acquisitions.

SEE: Trump’s Import Tariffs: How They’ll Shake Prices, Jobs, and Trade

Meanwhile, Google is currently facing two major antitrust lawsuits in the U.S. Last year, the Department of Justice demanded Google divest its Chrome browser, arguing it has been leveraging the platform to funnel users to its search engine, maintaining dominance in online search. The company is now awaiting a remedies trial. A verdict is also pending on whether Google illegally monopolised the digital advertising market through its ad technology business, which has also received legal scrutiny in the U.K. and EU. In August 2024, a U.S. federal judge also ruled that Google holds a monopoly on general search services and text ads and has broken antitrust laws.

For more specifics about the acquisition, Alphabet’s webcast, in which Sundar Pichai, Thomas Kurian, Wiz CEO Assaf Rappaport, and Alphabet and Google CFO Anat Ashkenazi discuss the transaction, will be available to watch for the next two weeks. source

Google Acquires Startup Wiz for $32B to ‘Turbocharge Improved Cloud Security’ Read More »

DOJ move against Chrome renews calls for Google to sell Android

Renewed calls for Google to sell Chrome have reignited demands for the company to also divest Android. An executive at Murena, a French startup building privacy-first smartphones, said today that breaking up the businesses is the only way to end Google’s “cycle of domination”.

The appeal follows a Friday court filing from the US Department of Justice (DOJ). The filing reaffirmed a proposal for Google to divest its Chrome browser and sell it to a competitor, in a bid to break up the tech giant’s alleged search engine monopoly. “Through its sheer size and unrestricted power, Google has robbed consumers and businesses of a fundamental promise owed to the public — their right to choose among competing services,” the DOJ said in the filing.

The accusation echoes common complaints about Chrome’s dominance. In February, Chrome made up two-thirds of the global browser market. Next came Safari (17.99%), Edge (5.33%), Firefox (2.62%), and Samsung Internet (2.3%). Opera, Europe’s largest homegrown browser, made up just 2.09%.

The DOJ’s plan aims to level the playing field. But Rik Viergever, Murena’s COO, believes the new proposals alone aren’t enough. “As a data privacy advocate, I welcome the DOJ’s decision forcing Google to sell Chrome, however this should only be the start,” he said. “I want to see Google sell the Android operating system.”

The government has left this possibility open, but is first calling for Google to change Android’s business practices. If these measures fail to curb Google’s market dominance, the DOJ may push for divestment of the operating system. Viergever wants the courts to do more to ease Google’s “stranglehold” on consumers and competitors.
“Google is only able to offer Android free of charge to users because it profits off them in so many other ways and markets,” he said. “This makes it almost impossible for other providers in the operating system market to compete and so the cycle of domination continues.” Viergever’s stance aligns with Murena’s mission. The company’s main products are “deGoogled” smartphones billed as privacy-centric disruptors to the Apple-Google mobile duopoly. The devices use /e/OS, a privacy-oriented, open-source alternative to Android. Murena built the software to escape the shackles of Google’s operating system. Viergever argues that selling Chrome would lead to better products. “It’s time to open the market up to innovation and competition so users can benefit from a competitive industry in which businesses compete with products that benefit consumers, rather than a big company like Google holding all the power,” he said. source

DOJ move against Chrome renews calls for Google to sell Android Read More »

The Central Issues Facing Fed. Circ. In Patent Damages Case

By Thomas Wimbiscus and Christian Hallerud (March 20, 2025, 5:36 PM EDT) — The admissibility of damages expert opinions in patent cases is before the U.S. Court of Appeals for the Federal Circuit in EcoFactor v. Google.[1]… source

The Central Issues Facing Fed. Circ. In Patent Damages Case Read More »

Align Product Management And Portfolio Marketing To Create Three Growth Trajectories

Barriers to growth seem to be continuously on the rise for B2B companies. Higher-than-ever customer expectations, rapidly advancing technologies that are harder and harder to keep up with, and a seemingly constant influx of new competitors all make markets more crowded and challenging than ever.

Portfolio Marketers And Product Managers Should Align To Ignite Growth

Portfolio marketers and product managers both want to ensure the success of their offerings and grow revenue. But the two functions often operate separately, focused on their own efforts to drive growth. Portfolio marketers are often focused on go-to-market strategies for the existing book of business, while product managers are working on how to add more capabilities to existing products. These two functions, however, can work together to provide a unified approach to growth that identifies the most attractive market opportunities while determining the best product strategies to capitalize on them.

Portfolio Marketers Should Be In Search Of New Audiences

Portfolio marketers should always be assessing the best growth opportunities by looking at existing and new buyers and markets:

Existing buyers and markets. Opportunities exist to increase retention rates and usage, extend the offering to more users, or upsell buyers with better features or premium capabilities.

New buyers. Opportunities to expand to new targets, such as new buying centers, new buying groups, and/or new buyer personas, exist to increase cross-sell and penetration within accounts.

New markets. Evaluating new markets to target could include new industries, geographic regions, companies of different sizes, and even different market categories to play in.

Product Managers Should Look For Innovation And Expansion Opportunities

Product managers are always looking for opportunities to improve their products and need to determine how best to invest in offerings to create competitive advantage and leadership:

On par. Offerings need to be continually improved with new features, better performance, improved user experience, and seamless integrations, as well as ongoing regulatory and security compliance, to keep up with market demands and competitive moves.

Competitive distinction. To create an advantage and drive faster growth, offerings can create a new and better way of solving an existing problem, making it better, easier, or more economical for customers.

Sustainable advantage. To achieve a sustainable advantage, oftentimes a new innovation that solves new, unmet, or emerging needs allows an organization to gain a first-mover advantage in a changing or newly developing category.

Marrying these two spectrums of market and product opportunities creates a matrix where it becomes easier to see how different strategies — innovate, expand, and adapt — might be used to create competitive distinction and sustainable advantage for each type of market opportunity.

To find out more about the innovate, expand, and adapt strategies, join me and my colleague, Lisa Singer, at B2B Summit North America in Phoenix, March 31–April 3, 2025. Or if you’re a client and would like to book a discussion with an analyst, please reach out to Beth Caplow or Lisa Singer. source

Align Product Management And Portfolio Marketing To Create Three Growth Trajectories Read More »

Hesai Says DOD's View On 'Chinese Military Co.' Too Broad

By Ali Sullivan (March 20, 2025, 9:13 PM EDT) — The legal team representing a Shanghai-based manufacturer of lidar products urged a D.C. federal judge to remove the company from the U.S. Department of Defense’s list of “Chinese military companies,” saying the department’s definition of the term is so expansive it could apply to almost any company in China… source

Hesai Says DOD's View On 'Chinese Military Co.' Too Broad Read More »

'Careless People' Author Can Testify In Meta Addiction MDL

By Dorothy Atkins (March 20, 2025, 10:55 PM EDT) — Meta Platforms Inc. on Thursday failed to block the deposition of the former executive behind the tell-all memoir “Careless People,” with a California magistrate judge giving plaintiffs the green light to depose her in multidistrict litigation over social media platforms’ allegedly addictive designs… source

'Careless People' Author Can Testify In Meta Addiction MDL Read More »

The open-source AI debate: Why selective transparency poses a serious risk

As tech giants declare their AI releases open — and even put the word in their names — the once insider term “open source” has burst into the modern zeitgeist. During this precarious time, in which one company’s misstep could set back the public’s comfort with AI by a decade or more, the concepts of openness and transparency are being wielded haphazardly, and sometimes dishonestly, to breed trust.

At the same time, with the new White House administration taking a more hands-off approach to tech regulation, the battle lines have been drawn — pitting innovation against regulation and predicting dire consequences if the “wrong” side prevails. There is, however, a third way that has been tested and proven through other waves of technological change. Grounded in the principles of openness and transparency, true open source collaboration unlocks faster rates of innovation even as it empowers the industry to develop technology that is unbiased, ethical and beneficial to society.

Understanding the power of true open source collaboration

Put simply, open-source software features freely available source code that can be viewed, modified, dissected, adopted and shared for commercial and noncommercial purposes — and historically, it has been monumental in breeding innovation. Open-source offerings like Linux, Apache, MySQL and PHP, for example, unleashed the internet as we know it.

Now, by democratizing access to AI models, data, parameters and open-source AI tools, the community can once again unleash faster innovation instead of continually reinventing the wheel — which is why a recent IBM study of 2,400 IT decision-makers revealed a growing interest in using open-source AI tools to drive ROI.
While faster development and innovation were at the top of the list when it came to determining ROI in AI, the research also confirmed that embracing open solutions may correlate with greater financial viability. Instead of short-term gains that favor fewer companies, open-source AI invites the creation of more diverse and tailored applications across industries and domains that might not otherwise have the resources for proprietary models.

Perhaps as importantly, the transparency of open source allows for independent scrutiny and auditing of AI systems’ behaviors and ethics — and when we leverage the existing interest and drive of the masses, they will find the problems and mistakes, as they did with the LAION 5B dataset fiasco. In that case, the crowd rooted out more than 1,000 URLs containing verified child sexual abuse material hidden in the data that fuels generative AI models like Stable Diffusion and Midjourney — which produce images from text and image prompts and are foundational in many online video-generating tools and apps.

While this finding caused an uproar, if that dataset had been closed, as with OpenAI’s Sora or Google’s Gemini, the consequences could have been far worse. It’s hard to imagine the backlash that would ensue if AI’s most exciting video creation tools started churning out disturbing content. Thankfully, the open nature of the LAION 5B dataset empowered the community to motivate its creators to partner with industry watchdogs to find a fix and release RE-LAION 5B — which exemplifies why the transparency of true open-source AI not only benefits users, but also the industry and creators who are working to build trust with consumers and the general public.

The danger of open sourcery in AI

While source code alone is relatively easy to share, AI systems are far more complicated than software.
They rely on system source code, as well as the model parameters, dataset, hyperparameters, training source code, random number generation and software frameworks — and each of these components must work in concert for an AI system to work properly. Amid concerns around safety in AI, it has become commonplace to state that a release is open or open source. For this to be accurate, however, innovators must share all the pieces of the puzzle so that other players can fully understand, analyze and assess the AI system’s properties to ultimately reproduce, modify and extend its capabilities.

Meta, for example, touted Llama 3.1 405B as “the first frontier-level open-source AI model,” but only publicly shared the system’s pre-trained parameters, or weights, and a bit of software. While this allows users to download and use the model at will, key components like the source code and dataset remain closed — which becomes more troubling in the wake of the announcement that Meta will inject AI bot profiles into the ether even as it stops vetting content for accuracy.

To be fair, what is being shared certainly contributes to the community. Open-weight models offer flexibility, accessibility, innovation and a level of transparency. DeepSeek’s decision to open source its weights, release its technical reports for R1 and make it free to use, for example, has enabled the AI community to study and verify its methodology and weave it into their work.

It is misleading, however, to call an AI system open source when no one can actually look at, experiment with and understand each piece of the puzzle that went into creating it. This misdirection does more than threaten public trust. Instead of empowering everyone in the community to collaborate, build and advance upon models like Llama X, it forces innovators using such AI systems to blindly trust the components that are not shared.
Embracing the challenge before us

As self-driving cars take to the streets in major cities and AI systems assist surgeons in the operating room, we are only at the beginning of letting this technology take the proverbial wheel. The promise is immense, as is the potential for error — which is why we need new measures of what it means to be trustworthy in the world of AI. Even as Anka Reuel and colleagues at Stanford University recently attempted to set up a new framework for the AI benchmarks used to assess how well models perform, for example, the review practice the industry and the

The open-source AI debate: Why selective transparency poses a serious risk Read More »