FTC's Holyoak Wants 'Predictable' Regulatory Space For AI

Law360, Washington, D.C. (April 22, 2025, 11:36 PM EDT) — The Federal Trade Commission won’t stop policing fraud and deception powered by artificial intelligence, but flexibility is needed to avoid “misguided enforcement actions or excessive regulation” that could stifle innovation and competition in the emerging field, Commissioner Melissa Holyoak said Tuesday.

In a keynote address at the International Association of Privacy Professionals’ Global Privacy Summit in Washington, D.C., the Republican commissioner laid out three of her top data privacy priorities: the regulation of AI technology, boosting protections for children and teens online, and strengthening enforcement against companies that sell, transfer or disclose Americans’ geolocation information and other sensitive data to foreign adversaries.

“The commission is committed to protecting consumers’ privacy and security interests while promoting competition and innovation,” Holyoak said in her remarks, which she stressed expressed her own views.
“We’ll do that by enforcing the laws we have and not by stretching our legal authorities, and we’ll continue to do it by taking a flexible, risk-based approach to privacy enforcement that balances potential privacy harms, consumer expectations, legal obligations, business needs and competition.” When it comes to the rapid development of AI and other digital technologies that are fueled by vast quantities of consumer data, Holyoak said that the commission — which is currently being steered by three Republicans, following the abrupt firings of the agency’s two Democrats last month — would continue to “aggressively root out AI-powered frauds and scams and stop companies from making false or unsubstantiated representations that harm consumers.” However, Holyoak also urged caution, saying that she saw the fast pace of these new technological developments as presenting “opportunities” rather than challenges for policymakers, enforcers and compliance professionals to forge a path forward that protects consumers while still allowing for “innovation to flourish.” “With artificial intelligence, the commission should create a predictable regulatory and enforcement environment that promotes innovation and development of new technologies,” Holyoak said in her speech, her first on data privacy issues since Republican Chairman Andrew Ferguson took over three months ago.
“Under the leadership of Chairman Ferguson, the commission will promote AI growth and innovation, not hammer it with misguided enforcement actions or excessive regulation.” In order to strike an appropriate balance, Holyoak repeatedly stressed the importance of the FTC studying “this nascent industry and how privacy enforcement and regulations may impact its development.” The commissioner noted that she supported the use of the agency’s Section 6(b) authority to issue a report on AI partnerships and investments in the waning days of the prior Democratic administration “because it advanced our knowledge of some of the commercial dynamics shaping AI’s evolution” and that she saw additional opportunities moving forward to further strengthen the agency’s understanding of AI, including taking a closer look at “how regulatory and enforcement efforts in privacy may impact a firm’s ability to access and train data, and importantly, how they impact the firm’s ability to compete.”  As an example, Holyoak explained that while requiring consent for using certain types of data in some instances may “level the competitive playing field by requiring the same level of privacy protections across the board,” establishing a mandate for affirmative consent from users for data collection or use “may actually favor dominant players, because users are more familiar with big firms, and thus may have more trust in how those firms will collect or use their data.” The commissioner also encouraged privacy professionals to respond to recent requests for information issued by the FTC and U.S. 
Department of Justice to help the agencies identify potentially anticompetitive regulations at the state and federal level, noting that there have been more than 500 AI-related bills introduced in the states and that this public input “will help us understand the different regulatory burdens for firms and whether those burdens create barriers to new entrants and competition.” Holyoak, a mother of four and former solicitor general of Utah, also drew from her personal experience in stressing the ongoing importance of ensuring that children are protected online and that Americans’ sensitive information isn’t ending up in the hands of foreign adversaries.

She urged the commission to continue to use “every tool that Congress has given” it to protect underage internet users, including its authorities under the Children’s Online Privacy Protection Act and its power to police both deception and unfair practices that are “grounded in sound economic theories of harm and reliable empirical research” under Section 5 of the FTC Act.

Additionally, the commission should be careful not to overlook the practice of foreign adversaries buying Americans’ sensitive information in bulk from data brokers, according to Holyoak, who suggested that “there may be opportunities” in the future for the commission to partner with the DOJ as it enforces its recently enacted rules to prevent China, Russia, Iran and other foreign entities from exploiting Americans’ sensitive personal data through commercial transactions. “Precise geolocation data is particularly sensitive and can reveal our religious beliefs, our political affiliations and even medical conditions and treatment,” Holyoak said. “This information can be exploited and poses significant, and frankly unacceptable, risks to our national and economic security.” –Editing by Jay Jackson Jr.


Broadcasters Oppose FCC Adding New Local Notice Regs

By Christopher Cole (April 25, 2025, 6:29 PM EDT) — Broadcasters said they don’t like the idea of new local notice requirements for some types of new stations as part of a Federal Communications Commission plan to otherwise cut down on rules covering the industry that it believes are no longer needed….


Bankers Push FCC For Caller ID To Combat Fraud

By Sydney Price (April 24, 2025, 8:48 PM EDT) — The American Bankers Association has urged the Federal Communications Commission to move forward on a plan to reduce bank-impersonating phone calls by ensuring certain voice service providers implement a new caller identification authentication process within two years….


Agents are here — but can you see what they're doing?

“The agent is able to make its own choices about where to send it in the decision tree,” says Moldovan. In some cases, the final agent in the chain might send it back up the tree for additional review. “It allows humans to go through a much more manageable set of signals and interpretations,” he adds. To keep the systems from going off the rails, several controls are in place. First, OpenAI itself provides a set of controls, including a moderation API. Then, the system is extremely limited in what information comes in and what it can do with it. Finally, all decisions go to humans for review. “We’re risk managers, not boundary pushers,” Moldovan says. “We use this system to properly identify a set of content that needs human review, and all final moderation decisions are human. We believe content moderation, especially on a platform like ours, requires a level of nuance we’re not yet ready to cede to robots.”
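The layered pattern Moldovan describes, automated scoring first with a mandatory human-review queue for anything flagged, can be sketched in a few lines. This is a minimal illustration only: the function names (`check_moderation`, `route`), the 0.5 threshold, and the toy scorer are assumptions for the example, not the platform's actual API.

```python
def check_moderation(text: str, score_fn) -> dict:
    """Run an automated scoring pass and decide whether to escalate."""
    score = score_fn(text)  # e.g., a moderation model's risk score in [0, 1]
    return {"text": text, "score": score, "needs_human_review": score >= 0.5}

def route(items, score_fn):
    """Partition content: flagged items go to a human review queue."""
    queue, auto_ok = [], []
    for text in items:
        result = check_moderation(text, score_fn)
        (queue if result["needs_human_review"] else auto_ok).append(result)
    # Per the article, no automated decision is final: everything in
    # `queue` ends with a human moderator.
    return queue, auto_ok

# Toy scorer: flags anything containing a blocked keyword.
toy_score = lambda t: 0.9 if "scam" in t else 0.1
queue, auto_ok = route(["hello world", "free scam offer"], toy_score)
```

The key design choice mirrored here is that automation only narrows the set of signals humans must look at; it never replaces the human decision.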


New method lets DeepSeek and other models answer ‘sensitive’ questions

It is tough to remove bias, and in some cases, outright censorship, in large language models (LLMs). One such model, DeepSeek from China, alarmed politicians and some business leaders about its potential danger to national security.  A select committee at the U.S. Congress recently released a report calling DeepSeek “a profound threat to our nation’s security,” and detailed policy recommendations.  While there are ways to bypass bias through Reinforcement Learning from Human Feedback (RLHF) and fine-tuning, the enterprise risk management startup CTGT claims to have an alternative approach. CTGT has developed a method that it says removes 100% of the bias and censorship baked into some language models. In a paper, Cyril Gorlla and Trevor Tuttle of CTGT said that their framework “directly locates and modifies the internal features responsible for censorship.” “This approach is not only computationally efficient but also allows fine-grained control over model behavior, ensuring that uncensored responses are delivered without compromising the model’s overall capabilities and factual accuracy,” the paper said.  While the method was developed explicitly with DeepSeek-R1-Distill-Llama-70B in mind, the same process can be used on other models.  “We have tested CTGT with other open weights models such as Llama and found it to be just as effective,” Gorlla told VentureBeat in an email. “Our technology operates at the foundational neural network level, meaning it applies to all deep learning models. We’re working with a leading foundation model lab to ensure their new models are trustworthy and safe from the core.”

How it works

The researchers said their method identifies features with a high likelihood of being associated with unwanted behaviors.
“The key idea is that within a large language model, there exist latent variables (neurons or directions in the hidden state) that correspond to concepts like ‘censorship trigger’ or ‘toxic sentiment’. If we can find those variables, we can directly manipulate them,” Gorlla and Tuttle wrote.  CTGT said there are three key steps: feature identification; feature isolation and characterization; and dynamic feature modification.  The researchers craft a series of prompts that could trigger one of those “toxic sentiments.” For example, they may ask for more information about Tiananmen Square or request tips to bypass firewalls. Based on the responses, they establish a pattern and find the vectors where the model decides to censor information.  Once these are identified, the researchers can isolate that feature and figure out which part of the unwanted behavior it controls. Behavior may include responding more cautiously or refusing to respond altogether. Once they understand which behavior a feature controls, the researchers can “integrate a mechanism into the model’s inference pipeline” that adjusts how much the feature’s behavior is activated.

Making the model answer more prompts

CTGT said its experiments, using 100 sensitive queries, showed that the base DeepSeek-R1-Distill-Llama-70B model answered only 32% of the controversial prompts it was fed. But the modified version responded to 96% of the prompts. The remaining 4%, CTGT explained, were extremely explicit content.  The company said that while the method allows users to toggle how much baked-in bias and safety features work, it still believes the model will not turn “into a reckless generator,” especially if only unnecessary censorship is removed.  Its method also does not sacrifice the accuracy or performance of the model.  “This is fundamentally different from traditional fine-tuning as we are not optimizing model weights or feeding it new example responses.
This has two major advantages: changes take effect immediately for the very next token generation, as opposed to hours or days of retraining; and reversibility and adaptivity, since no weights are permanently changed, the model can be switched between different behaviors by toggling the feature adjustment on or off, or even adjusted to varying degrees for different contexts,” the paper said.

Model safety and security

The congressional report on DeepSeek recommended that the U.S. “take swift action to expand export controls, improve export control enforcement, and address risks from Chinese artificial intelligence models.”  Once the U.S. government began questioning DeepSeek’s potential threat to national security, researchers and AI companies sought ways to make it, and other models, “safe.” What is or isn’t “safe,” or biased or censored, can sometimes be difficult to judge, but developing methods that allow users to figure out how to toggle controls to make the model work for them could prove very useful.  Gorlla said enterprises “need to be able to trust their models are aligned with their policies,” which is why methods like the one he helped develop would be critical for businesses.  “CTGT enables companies to deploy AI that adapts to their use cases without having to spend millions of dollars fine-tuning models for each use case. This is particularly important in high-risk applications like security, finance, and healthcare, where the potential harms that can come from AI malfunctioning are severe,” he said.
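The inference-time feature steering the paper describes can be illustrated with a toy NumPy sketch: estimate a direction in activation space that separates "refusal" responses from normal ones, then subtract its projection from a hidden state at inference time. The difference-of-means heuristic, the synthetic activations, and all names here are assumptions for illustration, not CTGT's published implementation.

```python
import numpy as np

def find_feature_direction(refusal_acts, normal_acts):
    """Difference-of-means direction separating refusal from normal activations."""
    direction = refusal_acts.mean(axis=0) - normal_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def steer(hidden, direction, strength=1.0):
    """Remove `strength` times the feature's projection from a hidden state.
    Reversible: strength=0.0 leaves the hidden state untouched."""
    return hidden - strength * np.dot(hidden, direction) * direction

# Synthetic activations: "refusal" states carry an offset along one axis.
rng = np.random.default_rng(0)
refusal = rng.normal(size=(16, 8)) + np.array([3.0] + [0.0] * 7)
normal = rng.normal(size=(16, 8))

d = find_feature_direction(refusal, normal)
h = refusal[0]
steered = steer(h, d, strength=1.0)  # projection on the feature is now ~0
```

Because only the inference pipeline is touched, setting `strength=0.0` restores the original behavior, which mirrors the reversibility the paper emphasizes: no weights change.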


Latham-Led LLR Clinches 7th Fund With $2.45B Committed

By Jade Martinez-Pogue (April 24, 2025, 11:57 AM EDT) — Latham & Watkins LLP-advised LLR Partners on Thursday announced that it wrapped its seventh private equity fund with $2.45 billion in tow….


Zencoder buys Machinet to challenge GitHub Copilot as AI coding assistant consolidation accelerates

Zencoder announced today the acquisition of Machinet, a developer of context-aware AI coding assistants with more than 100,000 downloads in the JetBrains ecosystem. The acquisition strengthens Zencoder’s position in the competitive AI coding assistant landscape and expands its reach among Java developers and other users of JetBrains’ popular development environments. The deal represents a strategic expansion for Zencoder, which emerged from stealth mode just six months ago but has quickly established itself as a serious competitor to GitHub Copilot and other AI coding tools. “At this point, there are three strong coordination products in the market that are production grade: it’s us, Cursor, and Windsurf. For smaller companies, it’s becoming harder and harder to compete,” said Andrew Filev, CEO and founder of Zencoder, in an exclusive interview with VentureBeat about the acquisition. “Our technical staff includes more than 50 engineers. For some startups, it’s very hard to keep that pace.”

The great AI coding assistant shakeout: Why small players can’t compete

This acquisition comes at a pivotal moment in the AI coding assistant market. Just last week, reports emerged that OpenAI is in discussions to acquire Windsurf, another AI coding assistant, for approximately $3 billion. While Filev maintains the timing is coincidental, he acknowledges that it reflects broader market dynamics. “I think there’s going to be more to it, and I’m looking forward to it,” Filev said. “It’s a huge product surface. You have to support multiple IDEs, you have to integrate with multiple DevOps tools, you have to support different parts of software life cycle.
There are 70-plus, 100-plus programming languages… There’s so much work there that it’s very, very hard for the smaller companies that only have like sub-10 engineers to compete in the long term.”

How Zencoder’s JetBrains strategy outflanks Microsoft-dependent rivals

One of the key strategic values of acquiring Machinet is its strong presence in the JetBrains ecosystem, which is particularly popular among Java developers and enterprise backend teams. “JetBrains audiences are millions of engineers. They’re one of the leading providers for certain programming languages and technologies. They’re particularly well known in the Java world, which is a big chunk of enterprise backend,” Filev explained. This gives Zencoder an advantage over competitors like Cursor and Windsurf, which are built as forks of Visual Studio Code and may face increasing constraints due to Microsoft’s tightening of licensing restrictions. “Both Cursor and Windsurf are what’s called forks of Visual Studio, and Microsoft recently started tightening their licensing restrictions,” Filev noted. “The support that VS Code has for certain languages is better than the support that Cursor and Windsurf can offer, specifically for C Sharp, C++.” By contrast, Zencoder works with Microsoft’s native platforms on VS Code and also integrates directly with JetBrains IDEs, giving it more flexibility across development environments.

Beyond hype: How Zencoder’s benchmark victories translate to real developer value

Zencoder differentiates itself from competitors through what it calls “Repo Grokking” technology, which analyzes entire code repositories to provide AI models with better context, and an error-corrected inference pipeline that aims to reduce code errors.
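Repository-wide context retrieval of the kind "Repo Grokking" suggests can be approximated, at its simplest, by indexing every file and ranking files by relevance to the task prompt before handing context to the model. The toy below uses plain lexical overlap; Zencoder's actual technique is not public, and every file name and function here is hypothetical.

```python
def build_index(repo: dict) -> dict:
    """Map each file path to the set of lowercase words it contains."""
    return {path: set(text.lower().split()) for path, text in repo.items()}

def top_context(index: dict, prompt: str, k: int = 2) -> list:
    """Return the k file paths with the most word overlap with the prompt."""
    words = set(prompt.lower().split())
    ranked = sorted(index, key=lambda p: len(index[p] & words), reverse=True)
    return ranked[:k]

# Hypothetical three-file repository.
repo = {
    "billing.py": "def charge_invoice customer invoice total",
    "auth.py": "def login user password session token",
    "README.md": "project overview and setup",
}
index = build_index(repo)
context = top_context(index, "fix the invoice total charge bug", k=1)
```

A production system would use syntax-aware parsing and embeddings rather than word overlap, but the pipeline shape (index the whole repo, rank, then prompt with the winners) is the same.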
The company claims impressive performance on industry benchmarks, with Filev highlighting results from March that showed Zencoder outperforming competitors: “On SWE-Bench Multimodal, the best result was around 13%, and we have been able to easily do 27% which we submitted, so we doubled the next best result. We later resubmitted even higher results of 31%,” Filev said. He also noted performance on OpenAI’s benchmark: “On the SWE-Lancer ‘diamond’ subset, OpenAI’s best result that they published was in the high 20s. Our result was in the low 30s, so we beat OpenAI on that benchmark by 20%.” These benchmarks matter because they measure an AI’s ability to solve real-world coding problems, not just generate syntactically correct but functionally flawed code.

Multi-agent architecture: Zencoder’s answer to code quality and security concerns

A significant concern among developers regarding AI coding tools is whether they produce secure, high-quality code. Zencoder’s approach, according to Filev, is to build on established software engineering best practices rather than reinventing them. “I think when we design AI systems, we definitely should borrow from the wisdom of human systems. The software engineering industry was rapidly developing for the last 40 years,” Filev explained. “Sometimes you don’t have to reinvent the wheel. Sometimes the best approach is to take whatever best practices and tools are in the market and leverage them.” This philosophy manifests in Zencoder’s agentic approach, where AI acts as an orchestrator that uses various tools, similar to how human developers use multiple tools in their workflows. “We enable AI to use all of those tools,” said Filev. “We’re building a truly multi-agentic platform.
In our previous release, we not only shipped coding agents, like some of our competitors, but we also shipped unit testing agents, and you’re going to see more agents from us in that multi-agent interaction platform.”

Coffee mode and the future: When AI does the work while developers take a break

One of Zencoder’s most talked-about features is its recently launched “Coffee Mode,” which allows developers to set the AI to work on tasks like writing unit tests while they take a break. “You can literally hit that button and go grab a coffee, and the agent will do that work by itself,” Filev told VentureBeat in a previous interview. “As we like to say in the company, you can watch forever the waterfall, the fire burning, and the agent working in coffee mode.” This approach reflects Zencoder’s vision of AI as a developer’s companion rather than a replacement. “We’re not trying to substitute humans,” Filev emphasized. “We’re trying to progressively and rapidly make them 10x more productive. The more powerful the AI technology is, the more powerful is the human that uses it.” As part of the acquisition, Machinet will transfer its domain and marketplace presence to Zencoder. Current Machinet customers will receive guidance on


This AI already writes 20% of Salesforce’s code. Here’s why developers aren’t worried

When Anthropic CEO Dario Amodei declared that AI would write 90% of code within six months, the coding world braced for mass extinction. But inside Salesforce, a different reality has already taken shape. “About 20% of all APEX code written in the last 30 days came from Agentforce,” Jayesh Govindarajan, Senior Vice President of Salesforce AI, told me during a recent interview. His team tracks not just code generated, but code actually deployed into production. The numbers reveal an acceleration that’s impossible to ignore: 35,000 active monthly users, 10 million lines of accepted code, and internal tools saving 30,000 developer hours every month. Yet Salesforce’s developers aren’t disappearing. They’re evolving. “The vast majority of development — at least what I call the first draft of code — will be written by AI,” Govindarajan acknowledged. “But what developers do with that first draft has fundamentally changed.”

From lines of code to strategic control: How developers are becoming technology pilots

Software engineering has always blended creativity with tedium. Now AI handles the latter, pushing developers toward the former. “You move from a purely technical role to a more strategic one,” Govindarajan explained. “Not just ‘I have something to build, so I’ll build it,’ but ‘What should we build? What does the customer actually want?’” This shift mirrors other technological disruptions. When calculators replaced manual computation, mathematicians didn’t vanish — they tackled more complex problems. When digital cameras killed darkrooms, photography expanded rather than contracted. Salesforce believes code works the same way. As AI slashes the cost of software creation, developers gain what they’ve always lacked: time. “If creating a working prototype once took weeks, now it takes hours,” Govindarajan said.
“Instead of showing customers a document describing what you might build, you simply hand them working software. Then you iterate based on their reaction.”

‘Vibe coding’ is here: Why software engineers are now orchestrating AI rather than typing every command

Coders have begun adopting what’s called “vibe coding” — a term coined by OpenAI co-founder Andrej Karpathy. The practice involves giving AI high-level directions rather than precise instructions, then refining what it produces.

There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper… — Andrej Karpathy (@karpathy) February 2, 2025

“You just give it a sort of high-level direction and let the AI use its creativity to generate a first draft,” Govindarajan said. “It won’t work exactly as you want, but it gives you something to play with. You refine parts of it by saying, ‘This looks good, do more of this,’ or ‘Those buttons are janky, I don’t need them.’” He compares the process to musical collaboration: “The AI sets the rhythm while the developer fine-tunes the melody.” While AI excels at generating straightforward business applications, Govindarajan admits it has limits. “Are you going to build the next-generation database with vibe coding? Unlikely. But could you build a really cool UI that makes database calls and creates a fantastic business application? Absolutely.”

The new quality imperative: Why testing strategies must evolve as AI generates more production code

AI doesn’t just write code differently — it requires different quality control. Salesforce developed its Agentforce Testing Center after discovering that machine-generated code demanded new verification approaches. “These are stochastic systems,” Govindarajan explained.
“Even with very high accuracy, scenarios exist where they might fail. Maybe it fails at step 3, or step 4, or step 17 out of 17 steps it’s performing. Without proper testing tools, you won’t know.” The non-deterministic nature of AI outputs means developers must become experts at boundary testing and guardrail setting. They need to know not just how to write code, but how to evaluate it.

Beyond code generation: How AI is compressing the entire software development lifecycle

The transformation extends beyond initial coding to encompass the full software lifecycle. “In the build phase, tools understand existing code and extend it intelligently, which accelerates everything,” Govindarajan said. “Then comes testing—generating regression tests, creating test cases for new code—all of which AI can handle.” This comprehensive automation creates what Govindarajan calls “a significantly tighter loop” between idea and implementation. The faster developers can test and refine, the more ambitious they can become.

Algorithmic thinking still matters: Why computer science fundamentals remain essential in the AI era

Govindarajan frequently fields anxious questions about software engineering’s future. “I get asked constantly whether people should still study computer science,” he said. “The answer is absolutely yes, because algorithmic thinking remains essential. Breaking down big problems into manageable pieces, understanding what software can solve which problems, modeling user needs—these skills become more valuable, not less.” What changes is how these skills manifest. Instead of typing out each solution character by character, developers guide AI tools toward optimal outcomes. The human provides judgment; the machine provides speed. “You still need good intuition to give the right instructions and evaluate the output,” Govindarajan emphasized.
“It takes genuine taste to look at what AI produces and recognize what works and what doesn’t.”

Strategic elevation: How developers are becoming business partners rather than technical implementers

As coding itself becomes commoditized, developer roles connect more directly to business strategy. “Developers are taking supervisory roles, guiding agents doing work on their behalf,” Govindarajan explained. “But they remain responsible for what gets deployed. The buck still stops with them.” This elevation places developers closer to decision-makers and further from implementation details—a promotion rather than an elimination. Salesforce supports this transition with tools designed for each stage: Agentforce for Developers handles code generation, Agent Builder enables customization, and Agentforce Testing Center ensures reliability. Together, they form a platform for developers to grow into these expanded roles. The company’s vision presents a stark contrast


Inside Salesforce’s Agentforce: AI agents, digital labor and the Agentic Maturity Model

Overview

Join host Keith Shaw in this episode of Demo as he sits down with Shibani Ahuja, SVP of Enterprise IT Strategy at Salesforce; and Mike Jortberg, Global Sales Director at Slalom, to explore Agentforce — Salesforce’s innovative AI-powered agent platform — and the groundbreaking Agentic Maturity Model. Discover how enterprises can harness autonomous agents to drive digital labor, boost productivity, and transform operations across sales, service, HR, IT, and more.

🚀 What you’ll see:
* Live demos of AI-driven agents in action
* Real-world use cases in sales, insurance, and hospitality
* Automation that compresses 30-minute tasks down to 2 minutes
* Integration of AI, CRM, and enterprise data
* Sentiment analysis and multi-agent orchestration
* Insights from Salesforce + Slalom on the future of autonomous enterprise systems

📍 Featuring: Shibani Ahuja, SVP at Salesforce; and Mike Jortberg, Global Sales Director at Slalom
📅 Learn more and get hands-on: Attend an AgentForce World Tour event near you – visit salesforce.com for dates & locations.

This episode is sponsored by Salesforce & Slalom.
