Solar panels made from moon dust could power future lunar colonies

Future lunar bases could run on solar panels forged from molten moon dust, turning the Moon’s surface into an energy source, thanks to a new research breakthrough.

Scientists at the University of Potsdam have engineered so-called “moonglass” solar cells, made by melting artificial moon dust, or “regolith,” and then combining it with a layer of perovskite crystal to create a working solar panel. The device could be lighter, cheaper, and more radiation-resistant than the panels already used in space, the researchers said. Their results were published in the journal Device this week.

Today, solar panels power satellites, space stations, and Mars and lunar rovers. All of these arrays are currently built on Earth and launched into space. But as humanity pushes for a permanent lunar presence, the need for solar power is set to skyrocket — and so will the cost of getting panels there.

Felix Lang, lead author of the paper, said that while the silicon-based solar cells used in space now are “amazing” — reaching efficiencies of 30% to 40% — they are very expensive. They are also heavy, because they use glass or a thick foil as a cover. “It’s hard to justify lifting all these cells into space,” he said.

Harnessing the Moon’s own regolith could be a game-changer. By creating moonglass directly on the lunar surface and pairing it with a thin layer of perovskite crystals brought from Earth, the researchers found they could slash launch mass by 99%.

Building solar panels on the Moon

An artist’s impression of future solar cell fabrication on the Moon. Credit: Sercan Özen

Once the materials are collected, turning them into solar panels on the Moon would require “minimal equipment,” according to the researchers, because the panels can be made with raw regolith that doesn’t need to be pre-processed.
The team says it has already achieved promising results by using a large curved mirror to focus sunlight into a beam hot enough to melt regolith into moonglass. Since moonglass is made from raw regolith, it’s milky-white instead of transparent, limiting its light-harvesting potential. The best prototypes from the Potsdam team reached about 12% efficiency — roughly half that of conventional perovskite cells. But simulations suggest they could eventually match the efficiency of conventional perovskite cells.

Nicholas Bennett at the University of Technology Sydney told New Scientist that this is the first successful use of moonglass in a functioning solar cell. The real challenge now, he says, is producing large quantities of the material outside the lab.

Moonglass panels are the latest in a string of high-tech bids to lay the foundations for a permanent human presence on the Moon. Other planned projects include using moon dust to 3D-print a lunar base, building oxygen extraction systems from regolith, and even building space mirrors that melt the Moon’s ice into drinking water.


9 principles to improve IT supplier relationship management

8. Ensure resource continuity

One of the most common issues with IT supplier relationships and other partnerships is a lack of resource continuity, particularly when the key individuals involved change. After all, with IT’s high turnover rate, personnel changes are inevitable in any supplier partnership. Provisions that ensure resource continuity, particularly regarding key personnel, will keep work flowing smoothly.

“People will inevitably change over the course of your relationship with a supplier,” says George Nellist, director and CIO at Ascend Agency. “But if you have the right framework in place, these changes don’t have to disrupt your processes and outcomes, particularly when both sides are involved in coordinating the change. The right onboarding for new personnel and ensuring the partnership continues to get the same level of priority will keep everyone aligned and focused so your output doesn’t take a step back.”

9. Incorporate an onboarding plan

As an extension of ensuring resource continuity, businesses must also account for relationships with IT suppliers as part of their onboarding process for new hires. All team members and stakeholders involved with the supplier relationship should be onboarded to the partnership, with materials and guidance adapted to their roles. A consistent yet adaptable approach to onboarding that highlights the vision, guiding principles, required skills, desired outcomes, and mindset associated with the partnership will keep things running smoothly, even as new hires come aboard.

Better relationship management, better IT outcomes

CIOs who improve their ability to manage relationships with IT suppliers can drastically improve outcomes for their business. With stronger relationship management, you can ensure full alignment between your organization and its IT suppliers, enhance productivity, and drive meaningful progress toward your IT goals.
A framework that strengthens the relationship between both parties will create the necessary win-wins for lasting relationship success.


C/side protects websites from third-party script attacks, enhances browser security

00:00 Hi everybody, welcome to DEMO, the show where companies come in and showcase their latest products and services. Today, I’m joined by Simon Wijckmans. He is the CEO and founder of c/side. Welcome to the show, Simon.

00:10 Thanks for having me.

00:11 You’re coming all the way from the U.K., so I appreciate you making the trip. Tell us a little bit about c/side and what you’re going to be showing us today.

00:18 So c/side stands for “client side” — we basically do client-side security. It’s a logical next step: we’ve long worked to protect our infrastructure, buying firewalls and all that kind of stuff, and now we also protect open-source dependencies on registries like Node Package Manager. All kinds of attacks are looking for less-monitored places to execute — and of course, the browser is one of those. So that’s what we do.

00:40 When we ask who this is designed for — I’m pretty sure you just said “attacks” and “security.” So this is aimed at security professionals within companies, correct?

00:49 Yeah, mostly. I’d say companies in the e-commerce space or those that generate significant revenue through their websites — whether by accepting credit card information or running ads. In line with that, PCI DSS now requires client-side security, since the majority of credit card theft these days happens via client-side attacks.

01:06 What problem are you solving here? Why should companies care about this issue? When we talked before the show, you mentioned things I wasn’t even aware of — especially regarding third-party scripts.

01:17 Correct. The thing is, we don’t actually know what’s happening in a user’s browser. When you build a website, you add all kinds of dependencies, many of which make fetch requests when loaded in the browser — and you don’t see any of that happening. So when we talk about client-side attacks, it could be anything: crypto mining, stealing credit card information or login credentials, capturing sensitive information from input forms — you name it.
Anything you can do in a browser for legitimate reasons, you can also do for illegitimate reasons.

01:46 So what would companies be doing if they didn’t have c/side? Would they only find out about an attack after the fact, once they’re already in trouble? Or is there another way to monitor for this?

02:01 There are a couple of open-source options to help limit risk — like using Content Security Policies or being very selective about the client-side fetches you allow. But even then, there’s often a significant gap. What we see is that, when a client-side security incident occurs, it can take days, weeks, even months before it’s discovered. For example, in the case of credit card theft, session tokens might be stolen, bucketed, and resold on the dark web — making it very hard to trace the origin. Many companies don’t even know they’ve suffered a client-side attack. The Polyfill incident last June was a great example — a script on over 500,000 websites was potentially doing things we still don’t fully understand.

02:46 Wow. That’s intense. All right, show me the cool demo you’ve got.

02:51 Sure. We built a website for a fictitious company called Beverage Ltd. It’s a drinks company, and like most websites, it asks for your email to sign up for a newsletter. You can also buy products on it. If you go to the cart, you can enter your credit card info. We’ve added some analytics tools — if you inspect the site’s <head> tag, you’ll see it’s built on Webflow. We’ve also added PostHog, Google Analytics, ServerCell Analytics, Clicky, PartnerStack, and Intercom — the common support chat widget. Now, even if I don’t submit a form, anything I type is being keylogged. I’ll refresh the page and show you. Here are things I typed yesterday — ”help, help, help.” A script on this site is actively stealing that information and sending it elsewhere. In fact, there’s even a crypto miner running on this website — it’s mining crypto in users’ browsers.
Definitely a site with major client-side issues. Now I’m going to implement c/side. There are multiple ways to do this depending on the framework. This site is built on Webflow, but if you’re using React, Next.js, or another modern framework, we recommend our NPM package — it provides the highest level of security. I’ll paste the c/side script into the Webflow settings and publish. After a few seconds, the site will update. Now, if we reload the page, you’ll see the scripts are rerouted through c/side. For example, domains now go through proxy.cipher.dev, one of our testing URLs. These scripts now flow through us, so we can inspect and block malicious activity. You’ll notice my laptop fan has stopped spinning — it’s no longer crypto mining, because that’s now blocked.

Let’s go into the c/side dashboard. Here, you can see your site, the scripts that were blocked, and those flagged for review. The browser is a bit of a wild west — people do things with JavaScript that aren’t necessarily malicious, but are unconventional. So we have three paths: block the script, flag it for review, or allow it if we know it’s safe.

Now let me show you a blocked script — the crypto miner. This one’s heavily obfuscated to avoid detection. It uses eval, spikes CPU usage, and was flagged by our AI classifier. We parse the code and run it through an LLM to determine intent. As you can see, this script was blocked by c/side, triggered by our obfuscation and script rules. Here are other scripts running on the site — like analytics tools — that didn’t raise concerns. But during onboarding, customers are often surprised by how many scripts are running. That’s because a client-side script often loads more scripts, which load even more — creating a noisy and complex environment.

As I mentioned, PCI DSS version 4 now requires monitoring of client-side scripts on credit card payment pages. We built a compliance portal to support that. PCI DSS asks you to justify why each script is on the page.
Here’s a list of all scripts on the payment page—like Intercom, JSDelivr—and you can provide justifications. For PostHog, for example, I can write one manually or click “Get AI Review Suggestion.”
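The Content Security Policy mitigation mentioned in the interview can be sketched in a few lines of server code. This is an illustrative, stdlib-only example, not c/side’s product, and the allowlisted CDN domain is a made-up placeholder: the browser itself refuses to run scripts from, or send data to, any origin not on the allowlist.

```python
# Hedged sketch: serve a page with a Content-Security-Policy header, one of
# the open-source mitigations mentioned in the interview. The analytics CDN
# domain below is a hypothetical placeholder, not a real recommendation.
from http.server import BaseHTTPRequestHandler

# Scripts may load only from our own origin plus one allowlisted CDN, and
# fetch/XHR may talk only to our own origin, so an injected keylogger has
# nowhere to send stolen keystrokes.
CSP_POLICY = (
    "default-src 'self'; "
    "script-src 'self' https://cdn.analytics.example; "
    "connect-src 'self'"
)

class CSPHandler(BaseHTTPRequestHandler):
    """Tiny handler that attaches the CSP header to every page it serves."""

    def do_GET(self):
        body = b"<html><head></head><body>Beverage Ltd.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Security-Policy", CSP_POLICY)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To try it locally:
# from http.server import HTTPServer
# HTTPServer(("localhost", 8000), CSPHandler).serve_forever()
```

As the interview notes, a CSP is coarse: it blocks whole origins but cannot tell whether an already-allowlisted script later starts behaving maliciously, which is the gap that proxy-based monitoring aims to close.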


Translytical Databases Are Fueling Modern AI Apps

The growing demand for real-time data to power AI applications is compelling businesses to reevaluate their traditional data architectures. Legacy systems typically rely on separate platforms for transactional and analytical processing, leading to inefficiencies and delayed insights. Translytical databases are emerging as a critical solution, seamlessly integrating transactional and analytical workloads into a single, unified platform that enables enterprises to support modern AI-driven applications such as conversational AI, chatbots for customer service, and real-time personalization. The continuous, consistent, real-time data from translytical databases drives the performance and accuracy of AI applications.

Translytical Benefits Go Beyond Real-Time Data

The rapid adoption of translytical databases is driven mainly by their ability to support broader AI use cases. As organizations increasingly seek to harness the full potential of AI, the need for such platforms will only grow. Several key advantages make translytical databases essential for powering these advanced AI-driven use cases:

Real-time data for contextual accuracy. AI agents, large language models (LLMs), and retrieval-augmented generation (RAG) systems thrive on vast amounts of data, and their value is maximized when the data is current. Translytical databases provide access to real-time data, ensuring that AI systems receive the up-to-date context needed to generate accurate responses. This is critical in applications such as customer service chatbots, which require account or order information, and financial analysis tools, which need real-time market data and customer portfolios.

Optimized data integration for AI. RAG systems frequently need to pull vast amounts of contextual data from multiple sources to improve content accuracy. Translytical databases streamline this by offering a unified platform that combines both transactional and analytical data.
This integrated data view enables generative AI models, AI agents, and LLMs to generate more accurate responses. Additionally, many translytical databases now incorporate vector capabilities, enhancing data retrieval for RAG applications by quickly identifying similar data.

Centralized data governance to protect sensitive data. With growing concerns over data privacy and security in AI, translytical databases offer robust governance features that control data access and ensure compliance with regulatory standards. By consolidating transactional and analytical data into a single platform, these databases enable organizations to maintain stringent data security measures, protecting sensitive information and fostering trust.

Seize The Translytical Advantage Now

Translytical databases are transforming the way businesses process and analyze data. As organizations strive to harness the full potential of AI, these databases have become crucial for success. To guide enterprises through this evolving landscape, Forrester published The Forrester Wave™: Translytical Data Platforms, Q4 2024, which evaluates the top 15 vendors in the translytical database market. This comprehensive analysis highlights the leading providers, offering valuable insights that can help you select the most suitable one. If your organization still uses separate systems for transactional and analytical workloads, now is the time to transition to a translytical database. This shift will help reduce issues with AI applications, such as hallucinations, by ensuring that your data is consistent, reliable, and accessible in real time. For more insights, book time with me via an inquiry or guidance session.
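The vector capability described above can be illustrated with a minimal similarity search. This is a toy sketch with hand-written embeddings and hypothetical document IDs, not any vendor’s API: a real translytical platform runs the same ranking inside the database, next to the live transactional rows.

```python
# Minimal sketch of the vector-retrieval step a translytical database can
# perform for RAG: rank stored rows by cosine similarity to a query
# embedding. The embeddings here are toy numbers, not real model output.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical rows: (document id, embedding vector).
rows = [
    ("order-status-faq", [0.9, 0.1, 0.0]),
    ("refund-policy",    [0.2, 0.8, 0.1]),
    ("market-summary",   [0.0, 0.2, 0.9]),
]

def top_k(query_vec, k=2):
    """Return the k rows most similar to the query embedding."""
    return sorted(rows, key=lambda r: cosine(query_vec, r[1]), reverse=True)[:k]

# A query whose embedding is close to the order-status document should
# rank that document first, giving the chatbot fresh, relevant context.
results = top_k([1.0, 0.1, 0.0])
```

The point of doing this inside a translytical platform, rather than in a separate vector store, is that the retrieved rows are the same up-to-the-moment transactional records the rest of the business runs on.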


Square Review: Features, Pricing, and Pros & Cons

Square’s fast facts

Our rating: 4.5 out of 5
Starting price: $0/month, plus processing fees
Key features:
- Get started for free, and make the most of in-person selling
- Choose from a wide variety of industry-specific POS tools
- Advanced support add-ons to help your team scale up
- Maximize value with Square Banking

Figure A: Square Logo (Source: Square)

Square is a major player in the payment processing industry. But is it the right payment solution for you? In my years of reviewing point-of-sale systems, I’ve become familiar with what different business types expect of payment software. Square is an interesting product that definitely stands out. This guide provides some insight into Square as a POS and payment provider and what it has to offer. I will be going over what the brand promises, what it delivers, and how it stacks up against some of its competitors.

Square’s pricing

For me, one of Square’s best features is its small-business-friendly pricing: you can sign up for free, claim a card reader for free, and use Square’s payment processing without paying a monthly subscription. All you’ll be on the hook for are the processing fees. If you only sell in person and can be served effectively by a magstripe reader, you could get started for $0 and pay only 2.6% + $0.15 per transaction.
Here are some specifics:

Monthly subscription tiers
- Free: No monthly subscription fee, just processing fees; covers core POS, online store and checkout, invoice, virtual terminal, gift card, and customer directory functionality
- Plus: Starting at $29/month plus processing fees
- Premium: Custom monthly pricing, plus processing fees

Processing fees
- In-person: 2.6% + $0.15 per transaction
- Online: 2.9% + $0.30 per transaction
- Manual entry: 3.5% + $0.15 per transaction
- Invoices: 3.3% + $0.30 to 3.5% + $0.15 per transaction

Popular add-ons
- Advanced invoicing: $30/month
- Payroll: Starting at $35/month + $6/payee/month
- Email marketing: Starting at $15/month
- Text marketing: Starting at $10/month plus messaging rates
- Loyalty programs: Starting at $45/month
- Advanced access: Starting at $35/location/month
- Afterpay: 6% + $0.30 per transaction

Payment hardware
- Square Magstripe reader: First is free; additional readers $10 each
- Square Contactless and Chip reader: $49-59
- Square Terminal: $299, or $27 per month for 12 months

Other Square POS hardware comes with a built-in card reader, as opposed to just connecting to a separate payment terminal:
- Square Stand: An iPad or tablet docking station ($149, or $14 per month for 12 months)
- Square Kiosk: Another iPad or tablet docking station, with a self-checkout feature ($149, or $14 per month for 12 months)
- Square Register: An all-in-one POS with chip, contactless, and magstripe readers built into a separate customer-facing screen ($799, or $39 per month for 24 months)

SEE: 7 Best Small Business POS Systems

Square’s key features

Square has several core value propositions that both the brand and its satisfied customers point to as reasons to sign up.

Get started for free

Figure B: Square Reader in Action (Source: Square)

I appreciate that Square lets customers start accepting card payments at zero upfront cost. In-person processing fees are also competitive, so if that’s all you’re looking for, Square is one of the most affordable options on the market.
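The per-transaction fees listed above all take the form of a percentage rate plus a fixed fee, so comparing channels on a given sale amount is simple arithmetic. A small sketch (the function name is mine; the rates come from the review):

```python
# Toy helper showing how Square's published per-transaction fees
# (rate + fixed fee) apply to a sale amount. Rates are from the review;
# this is illustrative arithmetic, not an official Square calculator.
def square_fee(amount, rate, fixed):
    """Fee in dollars for a single transaction of `amount` dollars."""
    return round(amount * rate + fixed, 2)

# Fees on a $100 sale, by channel:
in_person = square_fee(100.00, 0.026, 0.15)  # 2.6% + $0.15
online    = square_fee(100.00, 0.029, 0.30)  # 2.9% + $0.30
manual    = square_fee(100.00, 0.035, 0.15)  # 3.5% + $0.15
```

On a $100 sale this works out to $2.75 in person, $3.20 online, and $3.65 for manual entry, which is why the review calls card-present processing the most competitive part of Square’s pricing.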
Even its upgraded POS hardware and Square readers are comparatively inexpensive, though many of the software features required to make the most of those devices need a paid subscription. On the free tier, you can also:
- Send invoices and receive payment for them
- Offer digital gift cards to customers
- Create a business website with a digital storefront to accept payments online
- Use a virtual terminal to accept card-not-present transactions

While the fees for anything but card-present transactions are a bit high, it’s hard not to see the value in having access to these services at no additional cost.

Industry-specific POS tools

Figure C: Square Self-Serve Kiosk (Source: Square)

Finding a platform that covers both broad market needs and specific use-case requirements simultaneously can be a chore. Square manages it, though, with at least three key verticals:
- Restaurants
- Retail
- Appointments

Each of these verticals has a dedicated version of the POS platform, with features and tools tailored directly to what it needs the software to do. And the fact that there are free versions of this POS software makes Square stand out from its competitors. A lot of the enhanced features require upgrading to a paid subscription. That said, for a lot of small businesses, just finding the functionality they need at all is a pretty big deal, so adding $30-ish a month to the asking price isn’t too bad in that context, in my opinion.

Add-ons and upgrades

Figure D: Square Full Register Setup (Source: Square)

From my initial assessment of Square’s offerings, it’s clear that the provider serves two major demographics: businesses looking to quickly and inexpensively get started with payment processing, and everyone else. That makes sense, considering how Square got its start as the pioneer in mobile card payments. What it means in practical terms, though, is that a large swathe of what makes Square’s platform so capable is reserved for those who sign up to pay for it.
There are a dozen or so paid add-ons, and you can upgrade nearly everything you can get for free for increased functionality. Now, most features are low-cost, at least individually, and their modular nature means that you can pick and choose, so you don’t pay more than is justified by your usage. But unless you’re big enough to justify custom pricing, going for a “kitchen sink” approach may be more expensive than it’s worth. Still, the flexibility to add on things like marketing tools and access management as needed is nothing to shake a stick at, especially since you can upgrade or downgrade as needed without penalty.

Here are a few of the most popular add-ons that Square offers:
- Payroll
- Email marketing
- Loyalty program functionality
- Afterpay

SEE: 5 Best Retail POS Systems

Banking and beyond

Figure E: Square Account Dashboard Example (Source: Square)

It doesn’t get talked about as much as


Boost Agile team performance with scalable and flexible workflows with Lucid Software

Imagine one of your Agile teams is six months deep into a strategic digital initiative when they realize the solution isn’t working. Maybe it isn’t meeting customer requirements, or there is insufficient visibility into the project to pinpoint where things are going off track. That’s not an uncommon scenario, according to The Agile Advantage study, in which 42% of Agile practitioners cited “unclear project requirements or scope changes” as top reasons for rework. A lack of visibility and clarity can cause teams to fall back on Waterfall approaches rather than engaging with the Agile methods your organization has in place. Not only that, but a lack of visibility into the successful practices of other teams makes it difficult to replicate those practices across your Agile groups.

Lucid Software’s new Agile Accelerator is designed to help organizations scale Agile practices by sharing standardized yet flexible ways of working. It surfaces qualitative insights about team confidence and health, and helps teams respond to change with agility through data-driven scenario planning. Product, engineering, and portfolio management leaders can use the Agile Accelerator to speed up and scale Agile practices, product road-mapping, and big-room planning.

Why acceleration and agility go hand in hand

The business depends on innovation for competitive advantage. That puts pressure on development operations (devops) and engineering teams to speed up processes like writing code, development, and deployment. However, accelerating innovation isn’t just about increased productivity and faster processes. For example, if your teams are more aligned and can foresee roadblocks, dependencies, and issues earlier, then they can make strategic decisions and take swift action.

Make Agile teams even more agile

Lucid Software helps organizations better scale Agile practices to improve team performance, as well as increase visibility for teams and leaders, with the Agile Accelerator.
Its capabilities include:

Scalability for best practices and resources: Many organizations struggle to ensure that all teams have access to the latest and most successful processes and templates, especially as they grow to 50, 100, 200, or more teams. Having that many groups also makes it difficult for Scrum Masters or Agile coaches to gain visibility into progress and help influence collaboration. With the Agile Accelerator, team leaders can automatically create team hubs and blueprints—which are sets of templates—to quickly share proven ways of working. Blueprints speed up work by giving teams a starting point for their sprint, big-room planning, or new product discovery. Teams can also customize templates as they see fit, making it easier to follow processes without feeling restricted to doing things a certain way.

Collect and surface qualitative insights: Agile leaders often have quantitative datapoints to meet, such as a percentage rate for project completion. Yet what is the team’s confidence level that they’ll actually get there—and on time? When team leaders deploy a blueprint with the Agile Accelerator, they can include confidence determinants, sentiment check-ins, and potential project risks and blockers. These datapoints are then integrated into an insights dashboard, allowing leaders and their teams to review and drill into the most critical information to answer questions and deepen collaboration.

Make better decisions with scenario planning: Agile leaders need to quickly make data-driven decisions on staffing and scope adjustments while ensuring accurate planning and avoiding potential errors. With the Agile Accelerator, team leaders can safely test data-backed scenarios and visualize their impact before making permanent changes. For example, by pausing the data sync between Lucid and Jira or ADO, they can work through a scenario without changing data in their system of record.
They can also see how changes in scope and assignments affect the scenario by adding a team’s capacity to a dynamic table.

Take the next step to true agility

By enhancing visibility, surfacing critical insights, and enabling data-driven decision-making, the Agile Accelerator empowers teams to scale Agile practices effectively, ensuring alignment, adaptability, and transparency. Take the fast track to transformation. Learn more about the Lucid Software Agile Accelerator here.


Law Firm Executive Orders Create A Legal Ethics Minefield

By Joshua Robbins and Sherry Haus (April 1, 2025, 3:55 PM EDT) — Over the past few weeks, the White House has issued a series of unprecedented executive orders and memoranda that target both specific law firms associated with President Donald Trump’s opponents, as well as the legal profession more broadly….


DeepSeek jolts AI industry: Why AI’s next leap may not come from more data, but more compute at inference

The AI landscape continues to evolve at a rapid pace, with recent developments challenging established paradigms. Early in 2025, Chinese AI lab DeepSeek unveiled a new model that sent shockwaves through the AI industry and resulted in a 17% drop in Nvidia’s stock, along with drops in other stocks tied to AI data center demand. This market reaction was widely reported to stem from DeepSeek’s apparent ability to deliver high-performance models at a fraction of the cost of its U.S. rivals, sparking discussion about the implications for AI data centers.

To contextualize DeepSeek’s disruption, we think it’s useful to consider a broader shift in the AI landscape being driven by the scarcity of additional training data. Because the major AI labs have already trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. As a result, model providers are looking to “test-time compute” (TTC), where reasoning models (such as OpenAI’s “o” series of models) “think” before responding to a question at inference time, as an alternative method to improve overall model performance. The current thinking is that TTC may exhibit scaling-law improvements similar to those that once propelled pre-training, potentially enabling the next wave of transformative AI advancements.

These developments indicate two significant shifts: First, labs operating on smaller (reported) budgets are now capable of releasing state-of-the-art models. Second, TTC is emerging as the next potential driver of AI progress. Below, we unpack both of these trends and their potential implications for the competitive landscape and the broader AI market.
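One concrete flavor of test-time compute is self-consistency sampling: draw several reasoning paths from a model and keep the majority answer, trading extra inference compute for accuracy. The sketch below is purely illustrative and is not DeepSeek’s or OpenAI’s actual method; the `sample_answer` stub stands in for a real, temperature-sampled model call.

```python
# Illustrative self-consistency sketch: spend more compute at inference by
# sampling several answers and majority-voting. `sample_answer` is a toy
# stand-in for an LLM call; a real system would sample chains of thought
# with temperature > 0.
from collections import Counter
import random

def sample_answer(question, seed):
    # Toy stub: "the model" answers correctly about 70% of the time.
    random.seed(seed)
    return "42" if random.random() < 0.7 else "41"

def self_consistency(question, n_samples=9):
    """Sample n reasoning paths and return the majority answer.

    More samples = more test-time compute = (typically) higher accuracy,
    which is the scaling knob the TTC discussion above refers to.
    """
    answers = [sample_answer(question, seed) for seed in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

answer = self_consistency("What is 6 * 7?")
```

The key point for the hardware discussion that follows: each extra sampled path is an inference-time workload, so accuracy gains from this recipe show up as demand for inference capacity rather than for larger training clusters.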
Implications for the AI industry

We believe that the shift towards TTC and the increased competition among reasoning models may have a number of implications for the wider AI landscape across hardware, cloud platforms, foundation models, and enterprise software.

1. Hardware (GPUs, dedicated chips, and compute infrastructure)

From massive training clusters to on-demand “test-time” spikes: In our view, the shift towards TTC may have implications for the type of hardware resources that AI companies require and how they are managed. Rather than investing in increasingly larger GPU clusters dedicated to training workloads, AI companies may instead increase their investment in inference capabilities to support growing TTC needs. While AI companies will likely still require large numbers of GPUs to handle inference workloads, the differences between training workloads and inference workloads may affect how those chips are configured and used. Specifically, since inference workloads tend to be more dynamic (and “spikey”), capacity planning may become more complex than it is for batch-oriented training workloads.

Rise of inference-optimized hardware: We believe that the shift in focus towards TTC is likely to increase opportunities for alternative AI hardware that specializes in low-latency inference-time compute. For example, we may see more demand for GPU alternatives such as application-specific integrated circuits (ASICs) for inference. As access to TTC becomes more important than training capacity, the dominance of general-purpose GPUs, which are used for both training and inference, may decline. This shift could benefit specialized inference chip providers.

2. Cloud platforms: Hyperscalers (AWS, Azure, GCP) and cloud compute

Quality of service (QoS) becomes a key differentiator: One issue preventing AI adoption in the enterprise, in addition to concerns around model accuracy, is the unreliability of inference APIs.
Problems associated with unreliable inference APIs include fluctuating response times, rate limiting, and difficulty handling concurrent requests or adapting to API endpoint changes. Increased TTC may further exacerbate these problems. In these circumstances, a cloud provider able to offer models with QoS assurances that address these challenges would, in our view, have a significant advantage.

Increased cloud spend despite efficiency gains: Rather than reducing demand for AI hardware, it is possible that more efficient approaches to large language model (LLM) training and inference may follow the Jevons Paradox, a historical observation in which improved efficiency drives higher overall consumption. In this case, efficient inference models may encourage more AI developers to leverage reasoning models, which, in turn, increases demand for compute. We believe that recent model advances may lead to increased demand for cloud AI compute for both model inference and smaller, specialized model training.

3. Foundation model providers (OpenAI, Anthropic, Cohere, DeepSeek, Mistral)

Impact on pre-trained models: If new players like DeepSeek can compete with frontier AI labs at a fraction of the reported costs, proprietary pre-trained models may become less defensible as a moat. We can also expect further innovations in TTC for transformer models and, as DeepSeek has demonstrated, those innovations can come from sources outside the more established AI labs.

4. Enterprise AI adoption and SaaS (application layer)

Security and privacy concerns: Given DeepSeek’s origins in China, there is likely to be ongoing scrutiny of the firm’s products from a security and privacy perspective. In particular, the firm’s China-based API and chatbot offerings are unlikely to be widely used by enterprise AI customers in the U.S., Canada, or other Western countries. Many companies are reportedly moving to block the use of DeepSeek’s website and applications.
We expect that DeepSeek’s models will face scrutiny even when they are hosted by third parties in U.S. and other Western data centers, which may limit enterprise adoption of the models. Researchers are already pointing to examples of security concerns around jailbreaking, bias and harmful content generation. Given the consumer attention DeepSeek has drawn, we may see experimentation and evaluation of its models in the enterprise, but given these concerns it is unlikely that enterprise buyers will move away from incumbents.

Vertical specialization gains traction: In the past, vertical applications built on foundation models have focused mainly on creating workflows designed for specific business needs. Techniques such as retrieval-augmented generation (RAG), model routing, function calling and guardrails have played an important role in adapting generalized models for these specialized use cases. While these strategies have led to notable successes, there has been persistent concern that significant

DeepSeek jolts AI industry: Why AI’s next leap may not come from more data, but more compute at inference

Most Americans Say They Are Tuned In to News About the Trump Administration

Far fewer are hearing about the administration’s relationship with the media than was the case early in Trump’s first term

President Donald Trump speaks with reporters on the White House South Lawn on March 21, 2025. (Demetrius Freeman/The Washington Post via Getty Images)

How we did this: Pew Research Center conducted this study to track how Americans are paying attention to news about the new Trump administration. For many years we have asked U.S. adults for their views and habits related to news about elections, presidential administrations and policy developments. This study builds on work we did leading up to the 2024 election, and on studies in both 2017 and 2021 during the early stages of the Trump and Biden administrations.

To do this, we surveyed 5,123 adults from Feb. 24 to March 2, 2025. Everyone who took part in this survey is a member of the Center’s American Trends Panel (ATP), a group of people recruited through national, random sampling of residential addresses who have agreed to take surveys regularly. This kind of recruitment gives nearly all U.S. adults a chance of selection. Interviews were conducted either online or by telephone with a live interviewer. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other factors. Read more about the ATP’s methodology. Here are the questions used for this report, along with responses, and its methodology.

The second Trump administration has started with a rapid succession of executive orders and policy changes, including tariffs, cuts to government agencies and more. Americans are paying attention, but Democrats and Republicans give different reasons for why they are tuning in, according to a new Pew Research Center survey conducted in late February and early March.

As the president and his allies move to reshape the federal government and U.S. foreign policy, about seven-in-ten U.S.
adults say they have been following news about the actions and initiatives of the Trump administration very (31%) or fairly (40%) closely. That’s about the same share who said they were following news about the presidential election last September (69%), and slightly higher than the percentage who said they were following news about the actions and initiatives of the new Biden administration in 2021 (66%). There is also a gap between now and the early days of the Biden presidency when it comes to the share who are following administration news very closely (31% vs. 22%).

Both partisan coalitions are paying attention to the actions and initiatives of the administration at similar rates. This is different from the first months of the Biden administration in 2021, when Republicans and Republican-leaning independents were less likely than Democrats and Democratic leaners to say they were following the Biden administration’s actions very or fairly closely (60% of Republicans vs. 75% of Democrats). Now, 74% of Republicans and 71% of Democrats say they are following the Trump administration’s actions at least fairly closely.

Four-in-ten Americans say they’re now paying more attention to political news than they were before Trump took office, while just 10% say they are paying less attention. Democrats are slightly more likely than Republicans to say they’re paying more attention (44% vs. 37%) and are also more likely to say they’re paying less attention (15% vs. 5%) than before the inauguration. Republicans, meanwhile, are more inclined to say their attention has been steady.

Reasons Americans follow – or don’t follow – news about the Trump administration

The survey asked the 71% of Americans who say they are following news about the Trump administration very or fairly closely why they are doing this.
Respondents were given a list of five possible reasons why they might be following what Trump is doing, and indicated whether each was a major reason, minor reason or not a reason at all. The most common reasons are concern and relevance. About two-thirds of U.S. adults in this group (66%) say “I’m concerned about what the administration is doing” is a major reason they are following its actions. And roughly six-in-ten (62%) say its relevance to their life is a major reason.

Smaller shares cite three other potential factors as major reasons they follow news about the Trump administration: because it’s hard to avoid (43%), because they like what the administration is doing (36%) or because they find it entertaining or interesting (25%).

Among the smaller share of Americans who aren’t closely following news about the administration, the most common reasons for tuning out are fatigue and a lack of interest in politics generally. About half say a major reason for this is that they’re worn out by the amount of news (49%) or that they don’t typically follow political news (48%). Roughly a third say they are tuned out because they don’t like what the administration is doing (34%). Fewer say a major reason is that news about the Trump administration is not relevant to their life (15%) or that they trust the administration to make good decisions (13%).

Reasons by party

Among U.S. adults who are closely following news about the Trump administration’s actions and initiatives, identical shares of Democrats and Republicans (62% each) say that its personal relevance to their life is a major reason they are doing so. But on other reasons, there are substantial gaps between the two parties.
Democrats are much more likely to say concern about what the administration is doing is a major factor in why they follow news about it (88%), though nearly half of Republicans (45%) also cite concern about the administration’s activities as a major reason for paying attention. At the same time, most Republicans (64%) say a major reason they follow this news is that they like what the administration is doing, compared with just 8% of Democrats. Democrats are more likely to say they’re keeping up with this news because it’s hard to avoid, while more Republicans than Democrats say it’s because the news is entertaining. Republicans respond


Beyond human identities: Cybersecurity's blind spot in the age of AI agents

As AI continues to evolve and mature, organizations are beginning to deploy AI agents, which behave very differently from other forms of AI. Unlike generative or traditional AI, which acts in response to a human prompt or request, AI agents independently perform complex tasks that require multi-step strategies. To accomplish their goals, agents must collect data from myriad sources and interact with internal and external systems. Machine identities already far outnumber human ones in enterprise networks, and machine identity management becomes very complex, very quickly.

Unfortunately, many of the permissions given to AI agents are far too broad. If agents are compromised, attackers can use them to move laterally across the network, escalate their privileges to steal data, deploy malware and hijack critical internal systems. When employees find they can’t do their jobs because their permissions are too narrow, they complain, and it gets fixed. Machines, on the other hand, don’t complain. They just break, which creates issues that IT must investigate. Every IT department is overtaxed, so administrators are likely to err on the side of giving an AI agent overly broad privileges. This may make managing AI agents easier in the short term, but it increases the long-term security risk.

Let’s say IT has deployed an AI agent that acts as a chatbot to help sales representatives quickly find information about prospects and customers. This agent will need access to CRM data, but an admin might mistakenly give it broad read-write access to many enterprise databases. “With these privileges, if bad actors compromise the agent, they could delete records, drop entire databases, take over applications and execute a serious data breach,” says Phil Calvin, chief product officer at Delinea.

The ease of spinning up AI agents creates other issues, primarily shadow AI and agent sprawl.
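The CRM chatbot scenario above can be sketched with a toy permission model. All names here (`Grant`, `AgentIdentity`, the data source labels) are illustrative assumptions, not any real product's API; the point is simply how a read-write grant across many databases enlarges the blast radius of a compromised agent, while a read-only, single-source grant shrinks it.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Grant:
    """A single permission: which data source, and whether writes are allowed."""
    source: str
    can_write: bool = False  # default to read-only


@dataclass
class AgentIdentity:
    name: str
    grants: tuple[Grant, ...] = ()

    def blast_radius(self) -> set[str]:
        """Sources an attacker could modify if this agent were compromised."""
        return {g.source for g in self.grants if g.can_write}


# Overly broad: read-write across many enterprise databases.
broad_bot = AgentIdentity("sales-chatbot", (
    Grant("crm", can_write=True),
    Grant("billing", can_write=True),
    Grant("hr", can_write=True),
))

# Least privilege: the chatbot only needs to *read* CRM records.
scoped_bot = AgentIdentity("sales-chatbot", (Grant("crm"),))

print(sorted(broad_bot.blast_radius()))   # ['billing', 'crm', 'hr']
print(sorted(scoped_bot.blast_radius()))  # []
```

Both agents answer the same sales questions, but only the broad grant lets an attacker delete records or drop databases.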
It has become possible, even simple, for non-technical employees to download an agent from open-source sites, spin it up and connect it to data sources, all without any input or awareness from IT.

To manage AI agent identities properly, IT first needs to discover every agent in the environment, a process that should be automated and continuous so IT becomes aware of new agents as they appear. Next, IT needs a unified view of all machine identities and their permissions for efficient management. Agent permissions should default to read-only; agents that need the ability to create, update or delete data should each be handled individually and with great care. Finally, adhere to the principle of least privilege. If an agent is deployed to give employees easier access to information in the knowledge bases, there is no reason for it to have read access to customer information in the CRM. Restrict access to only the data sources the agent needs to accomplish its tasks.

Delinea has built a cloud-native identity security platform that runs at global scale to continuously discover, provision and govern all machine and human identities, including AI agents. IT gains a coherent, comprehensive view of all identities, even those not under IT’s direct control, via a single pane of glass. “As an industry, we tend to overcomplicate identity management for our customers,” Calvin said. “At its most basic, an AI agent is just an account, and you need to understand the account sprawl and permissions. We give the customer an easy-to-comprehend view into all of that, which exponentially simplifies management.”

Learn more about how Delinea can help your organization gain control over and reduce the risk posed by AI agents.
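The management steps described above (a unified inventory of agents, read-only defaults, least privilege) can be sketched as a simple audit pass over a discovered inventory. This is a minimal illustration under assumed conventions; the inventory shape and flag messages are invented for the example and do not reflect any vendor's interface.

```python
def audit_agents(inventory: list[dict]) -> list[str]:
    """Flag agents whose permissions exceed least privilege.

    Each inventory entry is assumed to look like:
      {"name": ...,
       "needs": {source names the agent's task actually requires},
       "granted": {source: "read" or "write", ...}}
    """
    findings = []
    for agent in inventory:
        for source, mode in agent["granted"].items():
            if source not in agent["needs"]:
                # Violates least privilege: access to a source the task never uses.
                findings.append(f"{agent['name']}: unneeded access to {source}")
            elif mode == "write":
                # Write access should be an explicit, per-agent exception.
                findings.append(f"{agent['name']}: write access to {source}")
    return findings


inventory = [
    {"name": "kb-helper", "needs": {"knowledge_base"},
     "granted": {"knowledge_base": "read", "crm": "read"}},
    {"name": "sales-chatbot", "needs": {"crm"},
     "granted": {"crm": "write"}},
]

for finding in audit_agents(inventory):
    print(finding)
# kb-helper: unneeded access to crm
# sales-chatbot: write access to crm
```

Run continuously against an automatically discovered inventory, a pass like this surfaces both sprawl (grants no task needs) and over-privilege (writes where reads suffice) as agents appear.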
