Judge Weighs Impact Of Top Court Ruling On DOE Grant Cap

By Brian Dowling (April 28, 2025, 6:38 PM EDT) — A federal judge hearing a challenge to a Department of Energy grant cap on Monday expressed concerns about the case’s potential overlap with a U.S. Supreme Court ruling that cast doubt on a bid to revive federal teacher training grants…

Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas and jurisdictions. A Law360 subscription includes daily newsletters, expert analysis, a mobile app, advanced search, judge information, real-time alerts, 450K+ searchable archived articles, and more. Experience Law360 today with a free 7-day trial.


Featured CX Executive: Brad Smith, Head Of CX & Insights At Securian Financial

We’re four months into 2025, and there has been no shortage of volatility to challenge even the most seasoned CX executives. To stay on course, we recommend keeping a ruthless focus on customers while also balancing disciplined spending, change leadership, and risk management. But what has this looked like, in practical terms, for CX leaders? I recently asked Brad Smith, who leads enterprise insights and CX at Securian Financial, to share what’s been top of mind for him in 2025. Below, Brad describes Securian’s CX transformation journey, including the evolution of its CX org structure, headwinds and tailwinds created by AI, and work underway to establish links between CX, EX, and business outcomes. Brad, thanks so much for sharing your insights!

How Brad Is Leading Securian’s CX Efforts During Turbulent Times

As a CX leader, what headwinds and tailwinds are you facing as you enter 2025?

You probably won’t be surprised to hear that Securian Financial is navigating the complexities of integrating AI into its CX strategies, and I’d call AI both a headwind and a tailwind. When thinking about how and where AI can help Securian, the company needs to answer three main questions: What tasks can be trusted to AI, and which ones should be kept within human control? How can we overcome challenges related to compliance, data management, technology adoption, and even expense management? How should we evaluate AI as a productivity tool for customers and associates while considering the cultural change required? Securian is a 145-year-old company built on trust and relationships, so fully leveraging the capabilities of AI will need to be done with extreme care, and it will require a huge cultural change to be successful.

What are your top CX priorities for 2025? How have you established the link between CX initiatives and broader organizational objectives?

In select areas of the business, we have found a direct correlation between the customer experience and repeat business and revenue. In other areas, we are taking a leap of faith for now that there is a correlation until we have a larger dataset to confidently conduct the analysis. Philosophically, the leadership team agrees that providing enhanced experiences to our customers will result in a win-win for Securian and our customers. We continue to reinforce customer-centric behaviors to demonstrate our belief that every role in the company impacts our customers in some way. Valuing empathy and creating an emotional connection with customers is crucial to the long-term sustainability of a customer-focused business model. Additionally, we are evolving our CX measurement capabilities to more broadly leverage AI. Having quicker access to deeper insights will transform our measurement approach, especially if we can push those insights to the correct decision-makers without a lot of heavy lifting. Currently, we are testing a migration from static dashboards to query-driven insights and recommendations created by the user.

Many organizations are just getting started on their CX transformations. Where is Securian on its CX journey?

Securian has been on its CX transformation journey for nearly five years. We started slowly by targeting one business line that was a willing partner. Initially, we focused on CX measurement and formed a team to map out a priority journey aligned with the CX measurement work. By leveraging SWAT teams to tackle top pain points, we improved overall customer experience scores, shortened cycle times, and reduced defects and rework. Building on these wins, we gradually expanded our focus while simultaneously building our expertise. Since our CX maturity still varies across business units, we are currently focusing on bringing the less mature areas up to speed with our leading areas.

In 2025, we will continue the journey to better understand the impact that the Securian associate experience has on our customers and channel partners. We’ve spent the last few years understanding and improving the associate experience, and now we will take it a step further to understand the linkage between employee experience and customer experience.

Describe your CX org structure. How did you land on this structure? What makes it effective today, and how do you see it evolving in the future?

We began with an advisory model, where a centralized team of CX professionals advised the business on CX strategy, conducted measurements, and shared insights. Initially, business units were responsible for prioritizing and improving experiences with their existing staff. After achieving significant activation wins, we transitioned the CX strategy roles into the business units. This allows them to better develop a vision for the customer experience within their respective areas. Currently, the design of specific experiences is handled by shared services such as enterprise technology and operations, with experience improvement as a formal goal. As we evolve toward a matrixed model, business units will leverage insights to prioritize improvement areas in a more formal, collaborative manner. For example, a digital product owner leads a cross-functional pod to enhance the product to better meet customer needs. Pod members represent CX measurement, Adobe Analytics, data analytics, and UX design. Although we have considered implementing formal journey owners, we continue to be effective using the matrix model. Each stage of our CX evolution has been successful due to strong communication, clear roles and responsibilities, and a one-team mindset.

What has made you personally successful as a CX leader? What resources would you recommend to other CX leaders?

Traits that have served me well in CX include curiosity, a collaborative mindset, and strong change management skills. Being a certified Lean Six Sigma Black Belt has also been beneficial, as it has given me a strong process focus. I have also learned a lot from partners like the folks at Heart of the Customer, as well as experts at Forrester, Qualtrics, and Medallia. Most importantly, I have learned from the highly skilled associates on the Securian team. They bring diverse experiences to the table that enrich our CX offering and thought leadership. Additionally, the ability to identify and collaborate with a willing business partner cannot be stressed enough. Working…


IT leaders see big business potential in small AI models

“That’s 100% accurate,” says Patrick Buell, chief innovation officer at Hakkoda, an IBM company. “Tuned, open-source small language models run behind firewalls solve many of the security, governance, and cost concerns.”

Tom Richer, a former CIO and founder of Intelagen, a Google Partner that develops and deploys specialized vertical AI solutions, says the Gartner report aligns with what he is seeing in the field. “General-purpose LLMs have their place, but for specific business problems, smaller, fine-tuned models deliver better results with greater efficiency, especially in regulated industries,” Richer says. “The main driver towards SLMs is the hallucination risk of LLMs. The tendency of general-purpose LLMs to generate inaccurate or nonsensical information, especially when dealing with specific or nuanced business contexts, is a significant barrier.”


Agri Stats Gets Say In DOJ's Poultry Worker Wage Fixing Case

By Matthew Perlman (April 30, 2025, 8:04 PM EDT) — A Maryland federal court allowed Agri Stats Inc. to intervene Wednesday in the U.S. Department of Justice’s case accusing Wayne-Sanderson Farms and George’s Inc. of suppressing wages, after the government said the poultry companies need to stop using the agricultural data firm…


Senate Bill Would Make FCC List Foreign Foes' Telecom Stakes

By Christopher Cole (April 30, 2025, 9:19 PM EDT) — The U.S. Senate will consider a bipartisan bill to direct the Federal Communications Commission to publish a list of foreign adversaries’ ownership stakes in regulated companies…


DeepSeek’s success shows why motivation is key to AI innovation

January 2025 shook the AI landscape. The seemingly unstoppable OpenAI and the powerful American tech giants were shocked by what we can certainly call an underdog in the area of large language models (LLMs). DeepSeek, a Chinese firm not on anyone’s radar, suddenly challenged OpenAI. It is not that DeepSeek-R1 was better than the top models from the American giants; it was slightly behind on the benchmarks, but it suddenly made everyone think about efficiency in terms of hardware and energy usage.

Given the unavailability of the best high-end hardware, it seems that DeepSeek was motivated to innovate in the area of efficiency, which was a lesser concern for larger players. OpenAI has claimed it has evidence suggesting DeepSeek may have used its model for training, but there is no concrete proof to support this. So, whether that is true, or OpenAI is simply trying to appease its investors, is a topic of debate. However, DeepSeek has published its work, and people have verified that the results are reproducible, at least at a much smaller scale.

But how could DeepSeek attain such cost savings while American companies could not? The short answer is simple: They had more motivation. The long answer requires a little bit more of a technical explanation.

DeepSeek used KV-cache optimization

One important cost saving for GPU memory was optimization of the key-value (KV) cache used in every attention layer in an LLM. LLMs are made up of transformer blocks, each of which comprises an attention layer followed by a regular, vanilla feed-forward network. The feed-forward network conceptually models arbitrary relationships, but in practice, it is difficult for it to always determine patterns in the data. The attention layer solves this problem for language modeling.
The model processes text using tokens, but for simplicity, we will refer to them as words. In an LLM, each word gets assigned a vector in a high dimension (say, a thousand dimensions). Conceptually, each dimension represents a concept, like being hot or cold, being green, being soft, or being a noun. A word’s vector representation is its meaning, with a value along each dimension.

However, our language allows other words to modify the meaning of each word. For example, an apple has a meaning. But we can have a green apple as a modified version. A more extreme example of modification would be that an apple in an iPhone context differs from an apple in a meadow context. How do we let our system modify the vector meaning of a word based on another word? This is where attention comes in.

The attention model assigns two other vectors to each word: a key and a query. The query represents the qualities of a word’s meaning that can be modified, and the key represents the types of modification it can provide to other words. For example, the word ‘green’ can provide information about color and green-ness, so the key of ‘green’ will have a high value on the ‘green-ness’ dimension. On the other hand, the word ‘apple’ can be green or not, so the query vector of ‘apple’ would also have a high value for the green-ness dimension. If we take the dot product of the key of ‘green’ with the query of ‘apple,’ the product should be relatively large compared to the dot product of the key of ‘table’ and the query of ‘apple.’ The attention layer then adds a small fraction of the value of the word ‘green’ to the value of the word ‘apple.’ This way, the value of the word ‘apple’ is modified to be a little greener.

When the LLM generates text, it does so one word after another. When it generates a word, all the previously generated words become part of its context. However, the keys and values of those words are already computed.
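The green-apple example above can be made concrete with a toy calculation. Everything here is illustrative: the vectors are tiny, hand-picked, three-dimensional stand-ins for what a real model learns, but the mechanics (query-key dot products, softmax weights, value mixing) match the description.

```python
import numpy as np

# Toy attention step. Dimensions (invented for illustration):
# [green-ness, fruit-ness, furniture-ness].
value = {
    "apple": np.array([0.1, 1.0, 0.0]),
    "green": np.array([1.0, 0.0, 0.0]),
    "table": np.array([0.0, 0.0, 1.0]),
}
# Keys: what modification each context word can offer.
key = {
    "green": np.array([0.9, 0.0, 0.0]),  # offers green-ness
    "table": np.array([0.0, 0.0, 0.8]),  # offers furniture-ness
}
# Query of "apple": which qualities of its meaning are open to change.
query_apple = np.array([0.8, 0.2, 0.0])

# Dot products: how strongly each context word should modify "apple".
score_green = query_apple @ key["green"]  # large
score_table = query_apple @ key["table"]  # near zero

# Softmax turns scores into mixing weights; a small fraction of each
# context word's value is then added to "apple"'s value.
scores = np.array([score_green, score_table])
weights = np.exp(scores) / np.exp(scores).sum()
apple_updated = value["apple"] + 0.5 * (
    weights[0] * value["green"] + weights[1] * value["table"]
)

assert score_green > score_table
assert apple_updated[0] > value["apple"][0]  # "apple" got a little greener
```

Because ‘green’ scores far higher than ‘table’ against the query of ‘apple,’ most of the mixed-in value comes from ‘green,’ nudging the green-ness dimension of ‘apple’ upward.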
When another word is added to the context, its value needs to be updated based on its query and the keys and values of all the previous words. That is why all those keys and values are stored in GPU memory: this is the KV cache.

DeepSeek determined that the key and the value of a word are related. The meaning of the word ‘green’ and its ability to affect green-ness are obviously very closely related. So, it is possible to compress both into a single (and maybe smaller) vector and decompress it very easily while processing. DeepSeek found that this does not noticeably affect performance on benchmarks, but it saves a lot of GPU memory.

DeepSeek applied MoE

The nature of a neural network is that the entire network needs to be evaluated (or computed) for every query. However, not all of this is useful computation. Knowledge of the world sits in the weights, or parameters, of a network. Knowledge about the Eiffel Tower is not used to answer questions about the history of South American tribes. Knowing that an apple is a fruit is not useful while answering questions about the general theory of relativity. However, when the network is computed, all parts of the network are processed regardless. This incurs huge computation costs during text generation that should ideally be avoided.

This is where the idea of the mixture-of-experts (MoE) comes in. In an MoE model, the neural network is divided into multiple smaller networks called experts. Note that an expert’s subject matter is not explicitly defined; the network figures it out during training. A gating network assigns a relevance score to each expert for a given query, and only the experts with the highest matching scores are activated. This provides huge cost savings in computation. Note that some questions need expertise in multiple areas to be answered properly, and the performance of such queries will be degraded. However, because the areas of expertise are figured out from the data, the number of such questions is minimized.
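The routing idea behind MoE can be sketched in a few lines. This is a minimal, generic illustration of top-k expert gating, not DeepSeek’s actual architecture: each "expert" is reduced to a single weight matrix, and the gate scores come from a plain dot product.

```python
import numpy as np

# Minimal mixture-of-experts routing sketch (illustrative only).
rng = np.random.default_rng(0)
d, n_experts, k = 8, 4, 2

# Each expert stands in for a small feed-forward network.
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate = rng.normal(size=(n_experts, d))  # gating / router weights

def moe_forward(x):
    scores = gate @ x                # relevance score per expert
    top = np.argsort(scores)[-k:]    # indices of the top-k experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    # Only the selected experts are evaluated; the rest are skipped,
    # which is where the computational savings come from.
    out = sum(w * (experts[i] @ x) for w, i in zip(weights, top))
    return out, top

x = rng.normal(size=d)
y, active = moe_forward(x)
print(f"activated {len(active)} of {n_experts} experts")
```

With k = 2 of 4 experts active, roughly half the expert computation is skipped for this query; in large production models the savings are far bigger because experts vastly outnumber the few activated per token.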
The importance of reinforcement learning

An LLM is taught to think…


FCC Eyes New Power Limits For NGSO Satellites

By Christopher Cole (April 28, 2025, 7:33 PM EDT) — The Federal Communications Commission on Monday floated new power limits for nongeostationary orbit satellites in a move the feds say could boost the availability of broadband service beamed from space, and that was requested by SpaceX…


Structify raises $4.1M seed to turn unstructured web data into enterprise-ready datasets

A Brooklyn-based startup is taking aim at one of the most notorious pain points in the world of artificial intelligence and data analytics: the painstaking process of data preparation. Structify emerged from stealth mode today, announcing its public launch alongside $4.1 million in seed funding led by Bain Capital Ventures, with participation from 8VC, Integral Ventures and strategic angel investors. The company’s platform uses a proprietary visual language model called DoRa to automate the gathering, cleaning, and structuring of data — a process that typically consumes up to 80% of data scientists’ time, according to industry surveys.

“The volume of information available today has absolutely exploded,” said Ronak Gandhi, co-founder of Structify, in an exclusive interview with VentureBeat. “We’ve hit a major inflection point in data availability, which is both a blessing and a curse. While we have unprecedented access to information, it remains largely inaccessible because it’s so difficult to convert into the right format for making meaningful business decisions.”

Structify’s approach reflects a growing industry-wide focus on solving what data experts call “the data preparation bottleneck.” Gartner research indicates that inadequate data preparation remains one of the primary obstacles to successful AI implementation, with four of five businesses lacking the data foundations necessary to fully capitalize on generative AI.

How AI-powered data transformation is unlocking hidden business intelligence at scale

At its core, Structify allows users to create custom datasets by specifying the data schema, selecting sources, and deploying AI agents to extract that data. The platform can handle everything from SEC filings and LinkedIn profiles to news articles and specialized industry documents.
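The schema-plus-sources workflow described above can be pictured with a small sketch. To be clear, Structify’s actual API is not shown in the article, so every name and field below is invented for illustration only.

```python
# Hypothetical illustration of a "define schema, pick sources, deploy
# agents" dataset specification. None of these names come from
# Structify's real product; they are invented to make the idea concrete.
dataset_spec = {
    "schema": {
        "company":      "string",
        "ceo":          "string",
        "headquarters": "string",
        "revenue_usd":  "number",
    },
    "sources": ["sec_filings", "news_articles", "company_websites"],
    "agents": 3,  # parallel extraction agents to deploy
}

def validate_spec(spec):
    """Sanity checks a platform like this would plausibly run before
    dispatching extraction agents."""
    allowed_types = {"string", "number", "boolean", "date"}
    assert spec["schema"], "schema must define at least one field"
    assert all(t in allowed_types for t in spec["schema"].values())
    assert spec["sources"], "at least one source is required"
    return True

assert validate_spec(dataset_spec)
```

The point of the sketch is the division of labor: the user declares *what* the dataset should look like and *where* to look, and the extraction agents are left to figure out *how* to fill it in.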
What sets Structify apart, according to Gandhi, is its in-house model DoRa, which navigates the web like a human would. “It’s super high-quality. It navigates and interacts with stuff just like a person would,” Gandhi explained. “So we’re talking about human quality — that’s the first and foremost center of the principles behind DoRa. It reads the internet the way a human would.”

This approach allows Structify to support a free tier, which Gandhi believes will help democratize access to structured data. “The way in which you think about data now is, it’s this really precious object,” Gandhi said. “This really precious thing that you spend so much time finagling and getting and wrestling around, and when you have it, you’re like, ‘Oh, if someone was to delete it, I would cry.’” Structify’s vision is to “commoditize data” — making it something that can be easily recreated if lost.

From finance to construction: How businesses are deploying custom datasets to solve industry-specific challenges

The company has already seen adoption across multiple sectors. Finance teams use it to extract information from pitch decks, construction companies turn complex geotechnical documents into readable tables, and sales teams gather real-time organizational charts for their accounts. Slater Stich, partner at Bain Capital Ventures, highlighted this versatility in the funding announcement: “Every company I’ve ever worked with has a handful of data sources that are both extremely important and a huge pain to work with, whether that’s figures buried in PDFs, scattered across hundreds of web pages, hidden behind an enterprise SOAP API, etc.”

The diversity of Structify’s early customer base reflects the universal nature of data preparation challenges. According to TechTarget research, data preparation typically involves a series of labor-intensive steps: collection, discovery, profiling, cleansing, structuring, transformation, and validation — all before any actual analysis can begin.
Why human expertise remains crucial for AI accuracy: Inside Structify’s ‘quadruple verification’ system

A key differentiator for Structify is its “quadruple verification” process, which combines AI with human oversight. This approach addresses a critical concern in AI development: ensuring accuracy. “Whenever a user sees something that’s suspicious, or we identify some data as potentially suspicious, we can send it to an expert in that specific use case,” Gandhi explained. “That expert can act in the same way as [DoRa], navigate to the right piece of information, extract it, save it, and then verify if it’s right.”

This process not only corrects the data but also creates training examples that improve the model’s performance over time, especially in specialized domains like construction or pharmaceutical research. “Those things are so messy,” Gandhi noted. “I never thought in my life I would have a strong understanding of geology. But there we are, and that, I think, is a huge strength – being able to learn from these experts and put it directly into DoRa.”

As data extraction tools become more powerful, privacy concerns inevitably arise. Structify has implemented safeguards to address these issues. “We don’t do any authentication, anything that requires a login, anything that requires you to go behind some sense of information – our agent doesn’t do that because that’s a privacy concern,” Gandhi said. The company also prioritizes transparency by providing direct sourcing information: “If you’re interested in learning more about a particular piece of information, you go directly to that content and see it, as opposed to kind of legacy providers where it’s this black box.”

Structify enters a competitive landscape that includes both established players and other startups addressing various aspects of the data preparation challenge.
Companies like Alteryx, Informatica, Microsoft, and Tableau all offer data preparation capabilities, while several specialists have been acquired in recent years. What differentiates Structify, according to CEO Alex Reichenbach, is its combination of speed and accuracy. A recent LinkedIn post by Reichenbach claimed the company had sped up its agent “10x while cutting cost ~16x” through model optimization and infrastructure improvements.

The company’s launch comes amid growing interest in AI-powered data automation. According to a TechTarget report, automating data preparation “is frequently cited as one of the major investment areas for data and analytics teams,” with augmented data preparation capabilities becoming increasingly important.

How frustrating data preparation experiences inspired two friends to revolutionize the industry

For Gandhi, Structify addresses problems he faced firsthand in previous roles.


How MCP can revolutionize the way DevOps teams use AI

As for security, MCP agents are subject to all of the risks that come with any type of LLM-based technology. They have the potential to leak sensitive data, because any resources that are available to an MCP server could become exposed to a third-party AI model. A potential solution is to avoid third-party models by hosting models locally (or on a server located behind a firewall) instead, but not all models support this approach, and it adds to MCP setup challenges.

MCP servers could also potentially carry out actions that you don’t want them to perform, like deleting critical resources. To control for this risk, it’s important to apply a least-privilege approach to MCP server design and management by ensuring that servers can only access the minimum resources necessary to support a target use case. The capabilities of MCP servers are limited to the level of security access available to users, so by restricting user privileges, admins can restrict MCP security risks.

MCP and the future of AI in DevOps

To be sure, MCP is not perfect. But it constitutes a huge leap forward in terms of how DevOps teams can leverage AI. It’s also a technology that’s here and now, and that DevOps engineers can start using today. Going forward, it’s likely that MCP will become as integral to DevOps as technologies like CI/CD.
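The least-privilege idea described above can be sketched generically. The wrapper below is a hypothetical illustration, not part of any real MCP SDK: an explicit allowlist gates which tools a model-driven agent may invoke, so destructive operations stay unreachable even if the model requests them.

```python
# Hypothetical least-privilege tool gating for an MCP-style server.
# Tool names and handlers are invented for illustration.

ALLOWED_TOOLS = {"read_logs", "list_deployments", "get_metrics"}

def dispatch(tool_name, handler_registry, **kwargs):
    """Route a model-requested tool call, refusing anything that is
    not on the explicit allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not permitted")
    return handler_registry[tool_name](**kwargs)

handlers = {
    "read_logs": lambda service: f"logs for {service}",
    # Registered but never reachable through dispatch():
    "delete_deployment": lambda name: f"deleted {name}",
}

print(dispatch("read_logs", handlers, service="api"))  # logs for api
try:
    dispatch("delete_deployment", handlers, name="prod")
except PermissionError as err:
    print(err)  # tool 'delete_deployment' is not permitted
```

The same principle applies at every layer: whatever credentials or resources the server process itself holds define the ceiling, and the allowlist keeps the agent well below it.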


Akin Atty Returns To FCC To Lead Wireline Bureau

By Nadia Dreid (April 30, 2025, 6:34 PM EDT) — After three years in private practice, an Akin Gump Strauss Hauer & Feld LLP attorney has been welcomed back to the Federal Communications Commission as the newest head of the commission’s Wireline Competition Bureau…
