TechRepublic

April 2025 TIOBE Index: Kotlin, Ruby & Swift Drop in Popularity

Popularity consolidated in the top 20 programming languages in April, according to TIOBE Software CEO Paul Jansen. C overtook Java for the number three spot between March and April.

Key takeaways from the April TIOBE Index rankings:

- SQL continues to fall, as it has over the last few months, dropping to the number 10 position.
- C++ held on to the number two spot, showing a significant year-over-year increase in the proprietary points system.
- Python’s popularity dipped slightly month-over-month, but it still holds a significant lead over the second-place language, C++.

Kotlin and Swift decline when they’re no longer the best picks for mobile

Kotlin, Ruby, and Swift are all “likely to go out of fashion” after reliably holding on to spots in the top 20, Jansen said. Why? For Kotlin and Swift, the answer comes down to new programming languages entering and competing within their niches.

“Kotlin and Swift have the same reason why they are declining,” Jansen said. “They are both mainly used for one particular mobile platform, Android and iOS, respectively, whereas there are other sufficiently good languages and frameworks to develop cross platform nowadays.”

Swift can technically be used for Android development as well, but it may present unnecessary barriers in the process. Meanwhile, Ruby is a general-purpose language once considered a rival to Python; it competed for space with Perl, too. Now, Python has pulled well ahead of both Ruby and Perl (which sits at 19th on the TIOBE Programming Community Index), and Ruby garners less interest.

Interest consolidates in the most popular programming languages

Now may not be the time to try to create a new programming language or to learn one of the lesser-known options. Along with ranking individual programming languages, Jansen also keeps an eye on the entire market. In April, he noted the top 20 languages account for 82.56% of the total market, compared to a usual share of about 75%. The industry is in what he called “a consolidation phase.”

“This means that the market is a bit defensive, preferring proven technology to trying out new technologies,” Jansen said. source
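As a quick illustration of the consolidation figure above, the sketch below sums a set of per-language ratings and reports the share held by the top 20. The ratings are invented sample data, not actual TIOBE values; only the 82.56%-versus-roughly-75% comparison comes from the article.

```python
# Hypothetical sketch: compute the market share held by the top 20 languages
# from a ratings table. The numbers below are invented sample data, not the
# actual TIOBE index values.

sample_ratings = {
    "Python": 23.1, "C++": 10.3, "C": 9.9, "Java": 9.5, "C#": 4.1,
    "JavaScript": 3.5, "Go": 3.1, "Visual Basic": 2.9, "Delphi": 2.7,
    "SQL": 2.5, "Fortran": 2.0, "PHP": 1.9, "R": 1.8, "Rust": 1.7,
    "MATLAB": 1.5, "Assembly": 1.4, "COBOL": 1.2, "Ruby": 1.1,
    "Perl": 1.0, "Swift": 0.9, "Kotlin": 0.8, "Scratch": 0.7,
    "Lua": 0.6, "Dart": 0.5, "Scala": 0.4,
}

def top_n_share(ratings: dict[str, float], n: int = 20) -> float:
    """Share of the total rating held by the n highest-rated languages."""
    ranked = sorted(ratings.values(), reverse=True)
    return 100 * sum(ranked[:n]) / sum(ranked)

print(f"Top 20 share: {top_n_share(sample_ratings):.2f}%")
```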

April 2025 TIOBE Index: Kotlin, Ruby & Swift Drop in Popularity Read More »

Shark Tank's Mr. Wonderful is Building the World's Largest AI Data Center in Canada

Photo from Data Center World 2025 in Washington, D.C. Image: Drew Robb/TechnologyAdvice

Kevin O’Leary — better known as “Mr. Wonderful” from ABC’s “Shark Tank” — made a surprise appearance at Data Center World 2025 in Washington, D.C. What’s a venture capitalist doing at a major IT event? He’s building the world’s largest AI factories, and he’s ready to talk about it.

The project, called Wonder Valley, is a massive off-grid AI data center under construction in Alberta’s Municipal District of Greenview in Canada. Purpose-built for AI workloads, the facility will span 6,000 acres and boast a staggering 7.5 gigawatts of power capacity. The initial 1.5 GW phase is expected to be completed in the 2027-2028 timeframe at a cost of $2 billion; the remainder will be added gradually over the following years.

“Data centers are today’s gold rush,” said O’Leary during his keynote. “AI is in high demand, and the strongest market is in companies of 5 to 500 employees.”

Bypassing lengthy regulatory and grid interconnect delays

O’Leary explained how difficult it can be to build and power a new data center. The regulatory environment is challenging, with many permitting and approval hurdles to overcome, and utilities can take half a decade or more to deliver power to new customers. He gave an example of a data center asking for 250 MW now and 250 MW more in two years. The response: 25 MW was all they could get — but not for another three years.

“If you want to attract investors, you have to get a project operational within 24 months,” said O’Leary. “We had to figure out how to pull this off when there was no power available on the grid.”

The solution: find sources of stranded power, specifically large quantities of natural gas that may not have easy access to the market. He found the best sources to be, in that order: Alberta, North Dakota, West Virginia, and Virginia. “Alberta, Canada, is the motherlode in North America with about 10 times the natural gas of all others,” he said.

Creating a data center with sustainability and citizens in mind

Wonder Valley is big on sustainability and community value. As well as being the world’s biggest data center, the campus will be surrounded by nature trails and wilderness areas. It can operate entirely off the grid, but the owners plan to provide power to the local community. The region has an abundance of everything required: land, natural gas, fiber, people, infrastructure, a local polytechnic, hospitals, and more.

“The capital cost of an AI data center is so high that you have to build big,” said O’Leary. “We are looking for other sites with similar characteristics to Alberta and where the government is keen to help.”

The “Shark Tank” celebrity complained to the Data Center World audience about the poor sales job the industry has done in recent years. New data center projects frequently provoke intense local antagonism; he recommended a change in approach. Instead of taking power from an already constrained grid and driving up electricity rates in a community, go in with a plan that includes adding power to the area — and have enough power available to give some of it to the locals so their electricity rates don’t rise.
“Arrive in town to build more power for them at low cost, to provide lots of construction jobs for locals, to set up training of technicians who will man the facility, to eliminate flaring of natural gas to lower emissions, and to bring in tax revenue by buying stranded natural gas assets,” said O’Leary. “Everything comes from the availability of power in abundance. Stranded natural gas is inexpensive, clean, and we can even put the carbon underground rather than emitting it into the atmosphere.” source
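To put the phase-one figures in rough perspective, the sketch below divides the quoted $2 billion cost by the 1.5 GW of capacity that phase is slated to deliver. This is a naive back-of-the-envelope ratio assuming the $2 billion covers the entire phase; the article does not break the figure down further.

```python
# Back-of-the-envelope check on Wonder Valley's phase-one figures as quoted
# in the article ($2 billion for 1.5 GW). Assumes the cost covers the whole
# phase; no other cost components are broken out in the source.

PHASE_ONE_COST_USD = 2_000_000_000
PHASE_ONE_CAPACITY_MW = 1_500

cost_per_mw = PHASE_ONE_COST_USD / PHASE_ONE_CAPACITY_MW
print(f"Implied cost: ~${cost_per_mw / 1e6:.2f}M per MW of capacity")  # ~$1.33M per MW
```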

Shark Tank's Mr. Wonderful is Building the World's Largest AI Data Center in Canada Read More »

How AI is Revolutionizing Data Center Power and Cooling

Vlad Galabov, Omdia’s research director for digital infrastructure, spoke during Data Center World 2025’s analyst day. Image: Courtesy of Data Center World

AI will drive more than 50% of global data center capacity and more than 70% of revenue opportunity, according to Omdia’s Research Director for Digital Infrastructure Vlad Galabov, who said massive productivity gains across industries driven by AI will fuel this growth. Speaking during Data Center World 2025’s analyst day, Galabov made a number of other predictions about the industry:

- NVIDIA and hyperscalers’ 1 MW-per-rack ambitions probably won’t materialize for another couple of years, until engineering innovation catches up to power and cooling demands.
- By 2030, over 35 GW of data center power is expected to be self-generated, making off-grid and behind-the-meter solutions no longer optional for those looking to build new data centers, as many utilities struggle to deliver the necessary power.
- Data center annual capital expenditure (CAPEX) is expected to reach $1 trillion globally by 2030, up from less than $500 billion at the end of 2024. The strongest area for CAPEX is physical infrastructure, such as power and cooling, where spending is increasing at a rate of 18% per year.

“As compute densities and rack densities climb, the investment in physical infrastructure accelerates,” Galabov said. “We expect a consolidation of server count where a small number of scaled-up systems are preferred to a scaled-out server strategy. The cost per byte/compute cycle is also decreasing.”

Data center power capacity explodes

Galabov highlighted the explosion AI has caused in data center power needs. When the AI wave began in late 2023, the installed capacity of power in data centers worldwide was less than 150 GW. But with 120 kW rack designs on the immediate horizon, and 600 kW racks only about two years away, he forecasts nearly 400 GW of cumulative data center capacity by 2030. With new data center capacity additions approaching 50 GW per year by the end of the decade, it won’t be long before half a terawatt becomes the norm.

But not everyone will survive the wild west of the AI and data center market. Many startup data center campus developments and neoclouds will fail to build a long-term business model, as some lack the expertise and business savvy to survive. Don’t focus on a single provider, Galabov cautioned, as some are likely to fail.

AI drives liquid cooling innovation

Omdia’s Principal Analyst Shen Wang laid out the cooling repercussions of the AI wave. Air cooling hit its limit around 2022, he said. The consensus is that it can deliver up to 80 W/cm², with a few suppliers claiming they can take air cooling higher. Beyond that range, single-phase direct-to-chip (DtC) cooling — in which water or another fluid is piped to cold plates that sit directly on top of computer chips to remove heat — is needed. Single-phase DtC can go as high as 140 W/cm².

“Single-phase DtC is the best way to cool chips right now,” Wang said. “By 2026, the threshold for single-phase DtC will be exceeded by the latest racks.” That’s when two-phase liquid cooling should begin to see a ramp-up in adoption rates. Two-phase cooling runs fluids at higher temperatures to the chip, causing them to turn to vapor as part of the cooling process, thereby increasing cooling efficiency.
“Advanced chips in the 600 watt and above range are seeing the heaviest adoption of liquid cooling,” Wang said. “By 2028, 80% of chips in that category will utilize liquid cooling, up from 50% today.” source
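To make the cooling thresholds concrete, here is a minimal sketch that maps a chip’s heat flux to the tiers described above (air up to roughly 80 W/cm², single-phase direct-to-chip up to roughly 140 W/cm², two-phase beyond that). The die-area figure in the example is an assumption for illustration, not a value from the article.

```python
# Illustrative sketch: choose a cooling approach from a chip's heat flux,
# using the thresholds cited by Omdia's Shen Wang. The example die area
# below is a made-up value, not from the article.

AIR_LIMIT_W_PER_CM2 = 80
SINGLE_PHASE_DTC_LIMIT_W_PER_CM2 = 140

def cooling_tier(chip_power_w: float, die_area_cm2: float) -> str:
    """Return the cooling approach implied by the thresholds above."""
    heat_flux = chip_power_w / die_area_cm2  # W/cm^2
    if heat_flux <= AIR_LIMIT_W_PER_CM2:
        return "air cooling"
    if heat_flux <= SINGLE_PHASE_DTC_LIMIT_W_PER_CM2:
        return "single-phase direct-to-chip liquid cooling"
    return "two-phase liquid cooling"

# Example: a hypothetical 700 W accelerator on a 6 cm^2 die (~117 W/cm^2)
print(cooling_tier(700, 6.0))  # -> single-phase direct-to-chip liquid cooling
```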

How AI is Revolutionizing Data Center Power and Cooling Read More »

Stanford’s 2025 AI Index Reveals an Industry at a Crossroads

Vanessa Parley, director of research at Stanford’s Institute for Human-Centered AI, speaks in a video about the 2025 AI Index. Image: Stanford HAI

The AI industry is undergoing a complex and transitional period, according to Stanford University’s 2025 AI Index, published by the Institute for Human-Centered AI. While AI continues to transform the tech sector, public sentiment remains mixed, underscoring the rapidly shifting nature of the field. Below are key takeaways from Stanford’s latest findings on the current state of artificial intelligence — both generative and non-generative.

Investment in AI increases

Investment in AI is growing. Private investors poured $109.1 billion into AI in the US. Globally, private investors contributed $33.9 billion to generative AI specifically. The share of businesses reporting that they use AI grew from 55% in 2023 to 78% in 2024.

Most notable AI models in 2024 were produced in the US; China and Europe follow. While China produced 15 notable models compared with 40 from the US, China’s models nearly match America’s in quality. Plus, China produces more AI-related patents and publications. The Middle East, Latin America, and Southeast Asia have also produced notable AI launches.

Most advanced AI are ‘reasoning’ models

Frontier models today typically use “complex reasoning,” an increasingly competitive part of the field. Stanford pointed out that reasoning is still a challenge: frontier AI still struggles with complex reasoning benchmarks and logic tasks. Although companies often refer to human-level intelligence, pattern-recognition tasks that are simple for humans still elude the most advanced AI.

SEE: Meta-hallucinations: Anthropic’s Claude 3.7 Sonnet and DeepSeek-R1 don’t always accurately reveal how they arrived at an answer in their explanations of their reasoning.

AI benchmark scores improve

Stanford said benchmark scores are steadily improving, with tests like MMMU now considered standard and AI systems scoring high. Video generation has improved, with AI videos now able to be longer, more realistic, and more consistent moment-to-moment.

FDA approvals of AI-enabled medical devices increase

In 2023, the FDA approved a growing number of medical devices that include AI: 223, compared to 15 in 2015 (these devices don’t necessarily include generative AI). Automated vehicles like Waymo’s growing fleet show AI is becoming more and more integrated into daily life.

Responsible AI risks need to be addressed more

Generally accepted definitions of how to use AI responsibly have been slow to emerge, Stanford pointed out. “Among companies, a gap persists between recognizing RAI [responsible AI] risks and taking meaningful action,” the researchers wrote. However, global organizations have released frameworks to address this.

SEE: How to Keep AI Trustworthy From TechRepublic Premium

Consumers worry about AI’s drawbacks compared to benefits

Consumer sentiment does not always match business sentiment. Significant proportions of respondents to the study in Canada (40%), the US (39%), and the Netherlands (36%) said that AI would prove more harmful than beneficial. Elsewhere, the public is more on board – see the share of people who believe AI has more benefits than drawbacks in China (83%), Indonesia (80%), and Thailand (77%). Confidence that AI companies will protect users’ data fell from 50% in 2023 to 47% in 2024 globally.

Barriers to AI decrease, though environmental impact is still a concern

As with any technology, people gradually learn how to produce it more quickly and with greater efficiency. Looking at Stanford’s data, costs to run the hardware declined by 30% annually, while energy efficiency improved by 40% per year. “Together, these trends are rapidly lowering the barriers to advanced AI,” the researchers wrote.

Improved energy efficiency does not necessarily mean lower overall energy use, however. Power consumption has increased beyond the capacity of efficiency gains to make up for it, meaning carbon emissions from frontier models continue to rise. source
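As a rough illustration of how those two trends compound, the sketch below applies a 30% annual hardware-cost decline and a 40% annual energy-efficiency gain over several years. The function and variable names are ours; only the two annual rates come from the report as summarized above.

```python
# Quick arithmetic check of the report's trend figures (illustrative only):
# hardware costs falling ~30% per year and energy efficiency improving ~40%
# per year. Compounding over a few years shows how quickly the barrier to
# running advanced models drops.

COST_DECLINE_PER_YEAR = 0.30       # 30% annual decline in hardware cost
EFFICIENCY_GAIN_PER_YEAR = 0.40    # 40% annual energy-efficiency gain

def relative_cost(years: int) -> float:
    """Hardware cost relative to today after `years` of 30% annual declines."""
    return (1 - COST_DECLINE_PER_YEAR) ** years

def relative_energy(years: int) -> float:
    """Energy needed per unit of work relative to today after `years`."""
    return 1 / (1 + EFFICIENCY_GAIN_PER_YEAR) ** years

for y in (1, 3, 5):
    print(f"after {y} year(s): cost x{relative_cost(y):.2f}, energy x{relative_energy(y):.2f}")
# After 5 years: roughly x0.17 the cost and x0.19 the energy of today.
```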

Stanford’s 2025 AI Index Reveals an Industry at a Crossroads Read More »

'No AI Agents are Allowed.' EU Bans Use of AI Assistants in Virtual Meetings

Image: Guillaume Périgois/Unsplash

The EU is banning the use of AI-powered virtual assistants during online meetings. Such assistants are often used to transcribe, take notes, or even record visuals and audio during a video conference. In a presentation from the European Commission delivered to European Digital Innovation Hubs earlier this month, a note on the “Online Meeting Etiquette” slide states, “No AI Agents are allowed.”

AI agents are tools that can perform complex, multi-step tasks autonomously, often by interacting with applications such as video conferencing software. For example, Salesforce uses AI agents to call sales leads.

The Commission confirmed this presentation was the first time this rule had been imposed but declined to explain why when questioned by Politico. There is no specific EU legislation that covers AI agents, but the AI models that power them will need to abide by the strict and controversial rules of the AI Act.

AI agents raise security concerns

While AI notetakers and other agent types are not inherently a security threat, according to a 2025 report from global AI experts, security risks stem from the user being unaware of what their AI agents are doing, the agents’ ability to operate outside of the user’s control, and potential AI-to-AI interactions. These factors make AI agents less predictable than standard models.

SEE: How Can AI Be Used Safely? Researchers From Harvard, MIT, IBM & Microsoft Weigh In

Tech companies do have to be cautious when promoting products that can accomplish an increasing amount without the user’s awareness. One of the biggest cautionary tales is that of Microsoft Recall, an AI tool that allowed users to control their PC or search through files using natural language. The convenience came at a cost: Recall captured screenshots of active windows every few seconds and saved them as a timeline, raising concerns about privacy and data usage and leading to significant launch delays. Microsoft has since released a series of agents specifically designed to tackle cyber threats.

AI agents are growing in prevalence

This hasn’t stopped the AI players from handing over more control to their models. Anthropic added a Computer Use feature to its Claude Sonnet chatbot in October 2024, which gave it the ability to navigate desktop apps, move cursors, click buttons, and type text. Its deep research function, announced this week, also responds to prompts “agentically,” as does Microsoft’s equivalent. Last month, OpenAI expanded its text-to-speech and speech-to-text tools to agentic models, indicating their growing relevancy. In January 2025, OpenAI announced Operator, an agentic tool that runs in-browser to autonomously perform actions such as ordering groceries or booking tours.

SEE: EU Invests €1.3 Billion to Boost AI Adoption & Improve ‘Digital Competencies’

Anthropic and OpenAI are even working together to improve agent technology, with the latter adding support for the former’s Model Context Protocol, an open-source standard for connecting AI apps, including agents, to data repositories. Anthropic has also joined forces with Databricks to help large corporate clients build their own agents. TechRepublic predicted at the end of 2024 that the use of AI agents will surge this year.
OpenAI CEO Sam Altman echoed this in a January blog post, saying “we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, according to Gartner. A fifth of online store interactions and at least 15% of day-to-day work decisions will be conducted by agents by that year. source

'No AI Agents are Allowed.' EU Bans Use of AI Assistants in Virtual Meetings Read More »

NVIDIA's Vision For AI Factories – 'Major Trend in the Data Center World'

NVIDIA’s Wade Vinson during his keynote at Data Center World 2025. Image: Drew Robb/TechnologyAdvice

NVIDIA kicked off the Data Center World 2025 event this week in Washington, D.C., with a bold vision for the future of AI infrastructure. In his keynote, Wade Vinson, NVIDIA’s chief data center engineer, introduced the concept of AI-scale data centers: massive, energy-efficient facilities that would meet the soaring demand of accelerated computing. NVIDIA envisions sprawling “AI factories” powered by Blackwell GPUs and DGX SuperPODs, supported by advanced cooling and power systems from Vertiv and Schneider Electric.

“There is no doubt that AI factories are a major trend in the data center world,” said Vinson.

Completing phase one of an AI factory in Texas

Vinson pointed to the Lancium Clean Campus that Crusoe Energy Systems is building near Abilene, Texas. As he explained:

- The first phase of this AI factory is largely complete: 200 MW in two buildings.
- The second phase will expand it to 1.2 GW and should be completed by the middle of 2026, comprising six additional buildings and bringing the facility to four million square feet.
- The design includes direct-to-chip liquid cooling, rear-door heat exchangers, and air cooling.
- Ten gas turbines will be deployed to provide on-site power.
- Additionally, each building will operate up to 50,000 NVIDIA GB200 NVL72 GPUs on a single integrated network fabric, advancing the frontier of data center design and scale for AI training and inference workloads.

Vinson said some AI factories will leverage on-site power, while others will take advantage of sites where power is already available. He pointed to old mills, manufacturing sites, and retail facilities that are already plugged into the grid. For example, an old mall in San Francisco can be converted to an AI factory in months, rather than the many years required to complete new-build construction and obtain utility interconnects and permits. Such sites often have large roofs that can be used for solar power arrays.

Reconfiguring existing data centers into AI factories

How about existing data centers? Aging structures may struggle to accommodate NVIDIA gear and AI applications, but Vinson believes many colocation facilities (colos) are in a good position to be transitioned into AI factories. “Any colo built in the last 10 years has enough power and cooling to become an AI factory,” he said. “AI factories should be looked upon as a revenue opportunity rather than an expense.”

He estimates that AI could boost business and personal productivity by 10% or more, adding $100 trillion to the global economy. “It represents a bigger productivity shift than happened due to the wave of electrification around the world that started about 100 years ago,” said Vinson.

Planning is key to AI factory success

Vinson cautioned those interested in building or running their own AI factories about the importance of planning. It’s important to consider the various factors involved, and modeling is vital. He touted NVIDIA’s Omniverse simulation tool as one way to plan an AI factory correctly: it uses digital twin technology to enable comprehensive modeling of data center infrastructure and design optimization. Failing to model in advance and simulate many possible scenarios can lead to inefficiencies in areas such as energy consumption and can extend construction timelines. “Simulations empower data centers to enhance operational efficiency through holistic energy management,” said Vinson.
SEE: Data Centres Can Cut Energy Use By Up To 30% With Just About 30 Lines of Code

For example, many data center veterans may find it challenging to shift from traditional concepts of racks, aisles, and servers to GPU gear surrounded by liquid cooling and with adequate power and power distribution equipment. AI factory designs will have far more power and cooling gear inside than server racks; therefore, layouts will be radically different. After all, the amount of heat generated by GPU-powered SuperPODs is more than that generated by typical data centers.

“Expect significant consolidation of racks,” said Vinson. “Eight old racks might well become one future rack with GPUs inside. It is essential to develop a simplified power and cooling configuration for the racks inside AI factories, as these will be quite different from what most data centers are used to.” source
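The "eight old racks become one" point can be sanity-checked with a simple power-density ratio. In the sketch below, the 120 kW figure echoes the near-term rack designs mentioned in this week’s Data Center World coverage, while the legacy per-rack figure of about 15 kW is an assumed typical value, not a number from the article.

```python
# Rough illustration of rack consolidation: how many legacy racks' worth of
# power one high-density AI rack absorbs. LEGACY_RACK_KW is an assumption;
# AI_RACK_KW echoes the 120 kW rack designs cited at the event.

LEGACY_RACK_KW = 15   # assumed typical air-cooled enterprise rack
AI_RACK_KW = 120      # near-term GPU rack design cited in the coverage

def racks_consolidated(ai_rack_kw: float = AI_RACK_KW,
                       legacy_rack_kw: float = LEGACY_RACK_KW) -> float:
    """Legacy racks of power absorbed by a single AI rack."""
    return ai_rack_kw / legacy_rack_kw

print(f"~{racks_consolidated():.0f} legacy racks of power per AI rack")  # ~8
```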

NVIDIA's Vision For AI Factories – 'Major Trend in the Data Center World' Read More »

CISA’s Reversal Extends Support for CVE Database, Averting Possible National Security Problem

Image: CROCOTHERY/Adobe Stock

The nonprofit organization MITRE, which maintains the Common Vulnerabilities and Exposures (CVE) database, said on April 15 that US government funding for its operations would expire without renewal; however, in a last-minute reversal announced the morning of April 16, CISA said it has extended support for the database. At the same time, CVE Board members have founded the CVE Foundation, a nonprofit not affiliated with the US federal government, to maintain the CVE program.

The CVE program, which has been in place since 1999, is an essential way to report and track vulnerabilities. Many other cybersecurity resources, such as Microsoft’s Patch Tuesday updates and reports, refer to CVE numbers to identify flaws and fixes. Organizations called CVE Numbering Authorities are associated with MITRE and authorized to assign CVE numbers.

“CVE underpins a huge chunk of vulnerability management, incident response, and critical infrastructure protection efforts,” wrote Casey Ellis, founder of crowdsourced cybersecurity hub Bugcrowd, in an email to TechRepublic. “A sudden interruption in services has the very real potential to bubble up into a national security problem in short order.”

Funds were expected to run out on MITRE without renewal

A letter sent to CVE board members began circulating on social media on Tuesday. “Current contracting pathway for MITRE to develop, operate, and modernize CVE and several other related programs, such as CWE, will expire,” said the letter from Yosry Barsoum, vice president and director of the Center for Securing the Homeland, a division of MITRE. CWE is the Common Weakness Enumeration, the list of hardware and software weaknesses. “The government continues to make considerable efforts to continue MITRE’s role in support of the program,” Barsoum wrote. MITRE is traditionally funded by the Department of Homeland Security.

MITRE did not respond to TechRepublic’s questions about the cause of the expiration or what cybersecurity professionals can expect next. The organization has not specified whether the cut in funding is related to the widespread cull by the Department of Government Efficiency (DOGE).

CVE Foundation has been laying the groundwork for a new system for the past year

Prior to CISA’s announcement, an independent foundation said it was prepared to step in to continue the CVE program. The CVE Foundation is a nonprofit dedicated to maintaining the CVE submission program and database.

“While we had hoped this day would not come, we have been preparing for this possibility,” wrote an anonymous CVE Foundation representative in a press release on Wednesday. “In response, a coalition of longtime, active CVE Board members have spent the past year developing a strategy to transition CVE to a dedicated, non-profit foundation.”

The CVE Foundation plans to detail its structure, timeline, and opportunities for involvement in the future. With CISA extending funding, the foundation may not be needed yet – although it may be reassuring to know its services and backups are available. source
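Since the story centers on CVE identifiers, here is a minimal sketch of how a tool might validate and split the standard CVE-YYYY-NNNN format (a year plus a sequence number of four or more digits) before looking a record up. The helper name and example usage are ours, not part of the CVE program’s tooling.

```python
import re

# Illustrative sketch: validate and parse CVE identifiers of the form
# CVE-YYYY-NNNN, where the sequence part is four or more digits. The helper
# name and examples are hypothetical; only the ID format is standard.

CVE_ID_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve_id(cve_id: str) -> tuple[int, int] | None:
    """Return (year, sequence number) for a well-formed CVE ID, else None."""
    match = CVE_ID_PATTERN.match(cve_id.strip())
    if not match:
        return None
    return int(match.group(1)), int(match.group(2))

print(parse_cve_id("CVE-2024-3094"))  # (2024, 3094)
print(parse_cve_id("not-a-cve"))      # None
```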

CISA’s Reversal Extends Support for CVE Database, Averting Possible National Security Problem Read More »

Network Security at the Edge for AI-ready Enterprise

Modern enterprises are adopting AI applications, particularly generative AI (GenAI), at a rapid rate. This adds new network security challenges to already complex enterprise workloads spanning data centers, campuses, cloud, branches, and remote user locations. Network data is being reshaped by the rapid adoption of AI products. By 2026, it is estimated that over 80% of businesses will have adopted generative AI APIs or apps, yet a recent McKinsey study suggests that less than 50% are ready to manage the associated cybersecurity risks. Shadow AI usage among employees, with no oversight from IT, is also on the rise, further exposing organizations to cyber attacks.

AI model and application developers are building inherent security mechanisms within these applications, and IT teams are tightening their security posture within the data center. But threat actors scan for vulnerabilities in common entry points that include users, devices, and applications at the edge or in the cloud. VeloCloud, a division of Broadcom, has developed an AI-based architecture to address the needs of an AI-ready enterprise.

Why Modern Network Architecture Demands AI Security Solutions

With about 47% of organizations citing adversarial capabilities enabled by GenAI as their top cybersecurity concern, risks of data loss and compliance violations are rising in multi-cloud and edge environments. It doesn’t stop there. Security teams are also inundated with an overwhelming array of security alerts, inconsistent controls, fragmented governance, and visibility gaps as organizations expand their technological footprint across diverse platforms, creating blind spots that sophisticated attackers readily exploit. A 2025 Tenable Cloud AI Risk Report reveals that 70% of AI workloads in cloud environments have unremediated vulnerabilities that leave data exposed.

Unfortunately, many organizations still rely on conventional security solutions to address these risks, and traditional approaches may not be adequate. Traffic generated by AI applications tends to be distributed and latency-sensitive, so deploying all security tools at the data center may deliver a secure but sub-optimal experience. It is imperative to enforce security on the optimal path between users and the application, or between model consumers and the models.

How VeloCloud Solutions Improve Security for AI-Ready Enterprise

Enterprises adopting AI-driven applications require networks that can dynamically adapt to evolving workloads while providing security enforcement on an optimal path outside the data center. VeloCloud addresses these challenges with VeloRAIN, an AI-powered networking architecture designed to enhance security, performance, and scalability for distributed AI workloads.

VeloCloud SASE is built on the VeloRAIN architecture, offering modular components that include VeloCloud SD-WAN for secure campus and branch connectivity, VeloCloud SD-Access for ZTNA-based remote user access, and Symantec SSE for VeloCloud for security enforcement in the cloud.

VeloCloud Dynamic Multipath Optimization™ (DMPO) technology is being enhanced with AI to analyze network conditions in real time and select the best paths for traffic in a way that ensures reliability across multiple networks. Complementing this is Dynamic Application-Based Slicing (DABS), designed to enhance performance by prioritizing critical applications and allocating bandwidth accordingly. Together, these technologies maintain optimal Quality of Experience (QoE) by adapting to network fluctuations and application demands, even in complex, multi-cloud environments. This AI-driven approach enables real-time application identification and policy enforcement, ensuring that AI workloads receive the necessary prioritization and protection.

Features that Set VeloRAIN Apart

Unlike traditional SD-WAN solutions that rely on static policies, VeloRAIN dynamically adjusts network resources based on AI-driven traffic patterns to mitigate performance bottlenecks and reduce attack surfaces. Below are four key ways VeloRAIN can benefit your organization.

AI-Driven Threat Protection

VeloCloud SASE uses AI to gather, analyze, detect, and act on evolving threats. By processing billions of threat signals from various sources, including endpoints, emails, and internet traffic, it enables proactive defense against zero-day attacks and evolving cyber threats. Powered by the Symantec Global Threat Intelligence Network, the solution enables enterprises to address their security and compliance needs as the threat landscape changes.

Path Optimized Security

VeloCloud SASE offers customers the flexibility to configure security policies centrally and enforce them at the branch or in the cloud. Branch enforcement is made possible by the native integration of enhanced firewall services on the VeloCloud SD-WAN appliance. Cloud enforcement takes advantage of VeloCloud’s global network of SASE points of presence, optimally located closer to public cloud and SaaS application vendors.

Securing Data in Motion

When users access applications, data is transferred between branch, campus, remote locations, cloud, and the data center. This data must be protected, and any loss of data to threat actors prevented. VeloCloud SASE allows only authorized users to access AI applications and encrypts any data that is exchanged. Any attempts to exfiltrate that data are monitored and blocked.

Optimized Performance for AI Applications

AI workloads demand high bandwidth and low-latency connectivity. VeloRAIN-based solutions continuously analyze network conditions and adapt application traffic in real time to maintain optimal performance. This ensures that AI models, including interconnected AI agents, receive consistent network quality without disruption. The platform also integrates AI-driven telemetry to predict and allocate bandwidth efficiently, as well as prevent congestion and ensure seamless application performance.

Conclusion

As enterprises embrace AI-driven applications across distributed environments, robust network security becomes paramount. VeloCloud, a Broadcom division, harnesses the power of VeloRAIN, its AI-enhanced architecture, to deliver cutting-edge security, seamless performance, and scalability that outpace conventional solutions. Tailored to protect data and AI models at the edge and in the cloud, VeloCloud empowers organizations to mitigate risks while ensuring an exceptional user experience. Visit VeloCloud today to learn how the solution can enhance your enterprise’s security and resilience. source
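To illustrate the general idea behind dynamic multipath optimization described above, the sketch below scores candidate WAN paths on measured latency, loss, and jitter and steers traffic to the best one. This is a generic toy model, not VeloCloud’s DMPO implementation; the metrics, weights, and path names are all assumptions.

```python
# Toy illustration of multipath selection: score each WAN path on latency,
# loss, and jitter, then pick the lowest-scoring (best) path. Weights and
# sample measurements are invented for illustration only.

from dataclasses import dataclass

@dataclass
class PathMetrics:
    name: str
    latency_ms: float
    loss_pct: float
    jitter_ms: float

def path_score(p: PathMetrics) -> float:
    """Lower is better: weighted blend of latency, loss, and jitter."""
    return p.latency_ms + 100 * p.loss_pct + 2 * p.jitter_ms

def best_path(paths: list[PathMetrics]) -> PathMetrics:
    return min(paths, key=path_score)

paths = [
    PathMetrics("mpls", latency_ms=35, loss_pct=0.1, jitter_ms=2),
    PathMetrics("broadband", latency_ms=25, loss_pct=0.8, jitter_ms=6),
    PathMetrics("lte", latency_ms=60, loss_pct=0.3, jitter_ms=10),
]
print(best_path(paths).name)  # "mpls" under these example measurements
```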

Network Security at the Edge for AI-ready Enterprise Read More »

Meta’s US Antitrust Trial: What You’ve Missed So Far

Meta’s Mark Zuckerberg. Image: Meta

Meta was summoned to Washington to defend its acquisitions of Instagram and WhatsApp in a trial brought by the US Federal Trade Commission, which alleges the deals were part of a monopolistic strategy. The FTC wants Meta to divest these platforms to create a more level playing field in the social app market.

Meta, which was still known as Facebook at the time, bought photo-sharing app Instagram in 2012 and messaging platform WhatsApp in 2014. It argues that the acquisitions fueled the apps’ growth and that there is little evidence they would have evolved into viable competitors on their own.

On Wednesday, Meta CEO Mark Zuckerberg maintained that he never intended to stifle competition through acquisition, according to live reporting from The Verge. “Was the intent to stop offering or stop making Instagram good? Absolutely not,” he said. His hope had been to scale the app’s user base tenfold; by 2018, he had grown it a hundredfold. Zuckerberg and the Meta team emphasised that the company has always faced — and continues to face — rivals while building Facebook, Instagram, and WhatsApp, including platforms like TikTok and Google Plus.

Day 1: Monday, April 14

The most significant points discussed on the first day centred around TikTok. While the FTC wants to prove that Meta has monopolised the market of social apps that “connect friends and family,” it does not include TikTok in that market. Meta argues that the Chinese video-sharing app should be seen as a viable competitor that holds comparable market value, according to The Verge. For instance, when TikTok was banned in the US for one day in January 2025, Facebook and Instagram usage spiked by 20% and 17%, respectively. If Zuckerberg can prove that the FTC’s market definition is too narrow, Meta could win the case.

The court also heard that, in February 2012, Zuckerberg considered acquiring Instagram and then leaving it largely unchanged, to avoid creating “a hole in the market for someone else to fill.” Nevertheless, per The Verge, the CEO said he never took this route.

Day 2: Tuesday, April 15

Zuckerberg was asked to explain why, in a February 2012 exchange, he agreed with CFO David Ebersman’s suggestion that acquiring Instagram could help “neutralize a potential competitor,” according to The Verge. On the stand, he said that buying a company will inherently result in a competitor being taken off the market. He also admitted that he could have built a new app to compete with Instagram, but “whether it would have succeeded or not … is a matter of speculation,” according to the BBC.

In an email sent before Instagram’s acquisition, Zuckerberg said that Meta was “so far behind” in the photo-sharing space and that the prospect of falling behind was “really scary,” per Mashable. His company did start building a competing product, Facebook Camera, having in 2011 been more focused on Instagram’s camera technology than its social potential. Zuckerberg then realised that his app would not catch up with Instagram, so he scrapped it and pursued acquisition.

In court, the CEO admitted he had been “worried” about other messaging apps like WeChat “broadly competing with (Meta)” before it acquired WhatsApp. The statement was in response to messages from January 2013, in which Zuckerberg suggested “block(ing) WeChat, Kakao and Line ads” as they “are trying to build social networks and replace us,” per The Verge.

The second day of the trial illuminated several of Zuckerberg’s ideas to expand his company that never came to fruition. One was buying Snapchat, now Snap, for a proposed $6 billion. The Facebook founder was particularly concerned when Snapchat released Stories, saying in internal messages from 2014 that it was “now more of a competitor for Instagram and News Feed than it ever was for messaging.” He also considered creating a Facebook feed that shows only ads, deleting all users’ Facebook Friends to regain the platform’s “cultural relevance,” and spinning out Instagram into its own company. The latter idea anticipated the regulatory scrutiny his company is currently under; in a 2018 email, he admitted that “most companies actually perform better after they’ve been split up.”

Day 3: Wednesday, April 16

Zuckerberg said that Facebook’s “growth slowed down dramatically” when TikTok became popular, reiterating how the social media platform is not the only dominant player in the market, according to The Verge. He also said he didn’t consider acquiring TikTok’s precursor, Musical.ly, as he didn’t want to deal with “any connection that they had to China.” ByteDance subsequently acquired it and became a major competitor, the CEO said.

Zuckerberg acknowledged that competition is now coming from YouTube, too, as “richer forms of media” like video have become more attractive to digital creators, per CNN. However, the platform is not considered a competitor in the market the FTC has defined.

The FTC argued that Facebook gained disproportionate influence from “network effects,” as its large, sustained user base encourages new users to join and existing users to stay, since much of their social circle is already on the platform. However, Zuckerberg argued that network effects aren’t solely beneficial. He explained that users may eventually see their feeds dominated by content from people they no longer care about, making the platform obsolete. This is why he considered resetting everyone’s Friends lists.

Regarding WhatsApp, Zuckerberg said that the motivation behind its acquisition was never to hinder its growth and prevent it from challenging Meta’s dominance, because he knew the founders had no plans to do so. After getting to know them, he found they “looked down” on adding features that could make the app more competitive, and he eventually had to persuade them to implement those changes.

Ex-Meta COO Sheryl Sandberg came to the stand near the end of the session and said she was shaken when Google Plus launched in 2011, noting how it was “almost an exact replica” of Facebook. source

Meta’s US Antitrust Trial: What You’ve Missed So Far Read More »

US Blocks NVIDIA Chip Sales to China – Company Projects $5.5B Hit

NVIDIA says new US restrictions on chip exports to China could cost the company $5.5 billion, as Washington tightens licensing rules for advanced AI hardware sales. The chipmaker said the charge is related to inventory, purchase commitments, and related reserves.

“The (US government) indicated that the license requirement addresses the risk that the covered products may be used in, or diverted to, a supercomputer in China,” the chipmaker wrote in a filing to the US Securities and Exchange Commission. It added that the license requirement would be in place “for the indefinite future.”

The US government said the license rule aims to prevent Chinese supercomputers from using NVIDIA’s advanced chips, citing national security risks. The US is keen to maintain its lead in the chip market by blocking China’s access to NVIDIA’s state-of-the-art hardware, which is important for running advanced AI models. In addition to economic motivations, the country has also raised concerns about China developing AI for military purposes.

The filing comes just after NVIDIA announced it would be expanding operations in the US with TSMC, winning praise from the White House. It is also a technology partner in the Stargate Project, a joint venture that will contribute $500 billion over four years to AI infrastructure in the States.

Another chapter in the years-long battle between the US and China for chip supremacy

The license requirement marks the first formal restriction on chip exports under the Trump administration, but it is merely the latest US measure in a years-long tussle with China. In 2022, the US applied its first set of export controls on the sale of semiconductors to China and separately banned NVIDIA from selling its most advanced chips to Chinese companies. In response, NVIDIA developed the China-specific A800 and H20 chips, which complied with the new controls and allowed it to keep serving customers in the country. China, for its part, enforced export controls on gallium- and germanium-related items that are essential to chip production.

SEE: China Investigates NVIDIA for Allegedly Breaking Monopoly Law

In the second half of 2024, the Biden administration imposed two more sets of export restrictions on semiconductors, closing some of the loopholes NVIDIA had exploited and expanding the list of banned technologies. China then swiftly banned the sale of germanium and gallium to the US, closing loopholes from its 2023 export controls. On January 13, 2025, President Joe Biden proposed additional chip export restrictions, not only on China but on a wide range of countries, with compliance not required until May. NVIDIA was vocally opposed to this and, just this week, a number of Republican senators sent a letter to Commerce Secretary Howard Lutnick urging him to withdraw the “overly restrictive” rule before it comes into effect.

License requirements will erode NVIDIA’s business

NVIDIA was informed on April 9 that sales to China of its H20 chips, and of any chip matching them in memory or interconnect bandwidth, will be subject to the new licensing requirements. “This kills NVIDIA’s access to a key market, and they will lose traction in the country,” Patrick Moorhead, a tech analyst with Moor Insights & Strategy, told The New York Times. He added that Chinese companies will buy from local rival Huawei instead. NVIDIA’s China revenue has been steadily declining due to earlier US export rules.
Sales from China made up 26% of its total revenue in 2022, dropping to 17% in 2024, according to MarketWatch. Bernstein Research analyst Stacy Rasgon estimated that the share would be about 13% by the end of 2025, even before the latest license restrictions take effect. Chinese companies, including Tencent, Alibaba, and TikTok parent company ByteDance, had been ordering an increasing number of H20 chips to support the deployment of DeepSeek’s AI models, which gained popularity for their cost-effective performance, according to Reuters. A spokesman for the Commerce Department told the NYT that licensing requirements will also be applied to sales of Advanced Micro Devices’ MI308 chip to China to “safeguard (the US’s) national and economic security.” source

US Blocks NVIDIA Chip Sales to China – Company Projects $5.5B Hit Read More »