Ethically trained AI startup Pleias releases new small reasoning models optimized for RAG with built-in citations

French AI startup Pleias made waves late last year with the launch of its ethically trained Pleias 1.0 family of small language models — among the first and only to date to be built entirely by scraping “open” data, that is, data explicitly labeled as public domain, open source, or unlicensed and not copyrighted.

Now the company has announced the release of two open source small-scale reasoning models designed specifically for retrieval-augmented generation (RAG), citation synthesis, and structured multilingual output.

The launch includes two core models — Pleias-RAG-350M and Pleias-RAG-1B — each also available in CPU-optimized GGUF format, making a total of four deployment-ready variants. All are based on Pleias 1.0 and can be used independently or in conjunction with other LLMs an organization may already have deployed or plan to deploy. All appear to be available under the permissive Apache 2.0 open source license, meaning organizations are free to take, modify, and deploy them for commercial use cases.

RAG, as you’ll recall, is the widely used technique by which enterprises and organizations hook an AI large language model (LLM) — such as OpenAI’s GPT-4o, Google’s Gemini 2.5 Flash, Anthropic’s Claude 3.7 Sonnet, or Cohere’s Command A, or open source alternatives like Llama 4 and DeepSeek V3 — to external knowledge bases, such as enterprise documents and cloud storage. This is often necessary for enterprises that want to build chatbots and other AI applications that reference their internal policies or product catalogs. (The alternative, prompting a long-context LLM with all the necessary information, may not suit enterprise use cases where security and per-token transmission costs are concerns.)

The Pleias-RAG model family is the latest effort to bridge the gap between accuracy and efficiency in small language models. These models are aimed at enterprises, developers, and researchers looking for cost-effective alternatives to large-scale language models without compromising traceability, multilingual capabilities, or structured reasoning workflows.

The target userbase is actually Pleias’s home continent of Europe, as co-founder Alexander Doria told VentureBeat via direct message on the social network X:

“A primary motivation has been the difficulty of scaling RAG applications in Europe. Most private organization have little GPUs (it may have changed but not long ago less than 2% of all [Nvidia] H100 [GPUs] were in Europe). And yet simultaneously there are strong incentive to self-host for regulated reasons, including GDPR.

“SLMs have progressed significantly over the past year, yet they are too often conceived as ‘mini-chatbots’ and we have observed a significant drop of performance in non-English languages, both in terms of source understanding and quality of text generation. So we have been satisfied to hit most of our objectives: An actual alternative to 7-8b models for RAG even on CPU and other constrained infras. Fully verifiable models coming with citation support. Preservation of European language performance.”

Of course, the models being open source under the Apache 2.0 license means anyone could take and use them freely anywhere in the world.
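The CPU-first deployment profile Doria describes is concrete enough to sketch. Below is a minimal, hypothetical example of running one of the CPU-optimized GGUF variants locally with the open source llama-cpp-python bindings; the model filename, prompt layout, and citation instruction are illustrative assumptions rather than Pleias's documented format, so consult the model card for the exact input structure the models were trained on.

```python
# A minimal sketch of CPU-only RAG inference with one of the GGUF variants,
# using the open source llama-cpp-python bindings. The model filename, prompt
# layout, and citation instruction are illustrative assumptions; check the
# Pleias model card for the exact input format.
from llama_cpp import Llama

llm = Llama(
    model_path="pleias-rag-350m.gguf",  # hypothetical local filename
    n_ctx=4096,     # enough context for the query plus retrieved sources
    n_threads=8,    # plain CPU inference; no GPU required
)

# Retrieved passages would normally come from your vector store or search index.
sources = [
    "Source 1: GDPR requires organizations to document how personal data is processed...",
    "Source 2: Self-hosting keeps regulated data inside the organization's own infrastructure...",
]

prompt = (
    "Query: Why do regulated European organizations self-host RAG systems?\n\n"
    + "\n\n".join(sources)
    + "\n\nAnswer using literal citations from the sources above:"
)

result = llm(prompt, max_tokens=512, temperature=0.0)
print(result["choices"][0]["text"])
```

On the kind of 8GB-RAM machine Pleias benchmarks against, a setup like this runs entirely on CPU threads, which is exactly the constrained-infrastructure deployment the models target.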
Focused on grounding, citations, and facts

A key feature of the new Pleias-RAG models is their native support for source citation with literal quotes, fully integrated into the model’s inference process. Unlike post-hoc citation methods or external chunking pipelines, the Pleias-RAG models generate citations directly, using a syntax inspired by Wikipedia’s reference format. This approach allows for shorter, more readable citation snippets while maintaining verifiability.

Citation grounding plays a functional role in regulated settings. For sectors like healthcare, legal, and finance — where decision-making must be documented and traceable — these built-in references offer a direct path to auditability. Pleias positions this design choice as an ethical imperative, aligning with increasing regulatory demands for explainable AI.

Proto-agentic?

Pleias-RAG models are described as “proto-agentic”: they can autonomously assess whether a query is understandable, determine whether it is trivial or complex, and decide whether to answer, reformulate, or refuse based on source adequacy. Their structured output includes language detection, query and source analysis reports, and a reasoned answer.

Despite their relatively small size (Pleias-RAG-350M has just 350 million parameters), the models exhibit behavior traditionally associated with larger, agentic systems. According to Pleias, these capabilities stem from a specialized mid-training pipeline that blends synthetic data generation with iterative reasoning prompts.

Pleias-RAG-350M is explicitly designed for constrained environments. It performs well on standard CPUs, including mobile-class infrastructure. According to internal benchmarks, the unquantized GGUF version produces complete reasoning outputs in roughly 20 seconds on 8GB RAM setups. Its small footprint places it in a niche with very few competitors, such as Qwen-0.5 and SmolLM, but with a much stronger emphasis on structured source synthesis.

Competitive performance across tasks and languages

In benchmark evaluations, Pleias-RAG-350M and Pleias-RAG-1B outperform most open-weight models under 4 billion parameters — and even larger models such as Llama-3.1-8B and Qwen-2.5-7B — on tasks such as HotPotQA, 2WikiMultiHopQA, and MuSiQue. These multi-hop RAG benchmarks test a model’s ability to reason across multiple documents and identify distractors — common requirements in enterprise-grade knowledge systems.

The models’ strength extends to multilingual scenarios. On translated benchmark sets across French, German, Spanish, and Italian, the Pleias models show negligible degradation in performance. This sets them apart from other SLMs, which typically lose 10–35% of their performance when handling non-English queries. The multilingual support stems from careful tokenizer design and synthetic adversarial training that includes language-switching exercises. The models not only detect the language of a user query but aim to respond in the same language — an important feature for global deployments.

In addition, Doria highlighted how the models could be used to augment the performance of other models an enterprise may already be using: “We envision the models to be used in orchestration setting, especially since their compute cost is low. A very interesting results on the evaluation side: even the 350m model turned out to be good on entirely different answers than the answers [Meta] Llama and [Alibaba] Qwen were performing at. So there’s a real

Ethically trained AI startup Pleias releases new small reasoning models optimized for RAG with built-in citations Read More »

Don’t Call It A Comeback: Stay Ready For Ransomware

So far, 2025 is filled with … distractions for security leaders. Between scrambling to secure their organizations’ AI initiatives, staying on top of critical vulnerabilities (and the organizations delivering the CVE process), perpetually communicating and training to guard against human-element breaches, and navigating yet another period of uncertainty and volatility, it’s tempting to take a “set and forget” approach to common attack scenarios such as ransomware.

Ransomware Is Not Going Away

Ransomware attack volume often dips when law enforcement activity or geopolitical tensions interfere with gang operations. For example, law enforcement actions in 2023 and 2024 disrupted some of the more notorious ransomware gangs, like LockBit and ALPHV/Blackcat, and their supporting infrastructure. In September 2024, German authorities seized 47 cryptocurrency exchanges used by various ransomware gangs for laundering illicit funds, disrupting a core component of the ransomware financial infrastructure. In February of this year, blockchain analytics firm Chainalysis reported a 35% year-over-year decrease in ransomware payments, with less than half of recorded incidents resulting in victim payments.

And yet, despite these bright spots, the number of ransomware victims appearing on data leak sites in 2024 rose to 5,243, a 15% increase over 2023 according to the Travelers Q4 2024 Cyber Threat Report, with new gangs and innovative tactics springing up faster than authorities and security leaders can thwart them. According to Forrester’s Security Survey, 2024, 25% of CISOs cite preventing and protecting against ransomware as a top strategic priority for their organization. To do this, security leaders, their teams, and their IR services firms must continue to prioritize ransomware readiness. That’s where our newly published decision tool comes in.

As a follow-up to our report The Ransomware Survival Guide, The Forrester Ransomware Readiness And Response Guide, a downloadable Excel-based tool, will help you and your team:

- Understand the controls in place to prepare for, respond to, and recover from attacks.
- Identify and close gaps that could worsen the impact of a ransomware attack.
- Prioritize tactical steps to bolster organizational resilience against ransomware.

Read The Full Report Here: Prioritize Your Ransomware Readiness And Response Efforts

Recommended actions in the decision tool are aligned with the incident response stages included in the NIST SP 800-61 Computer Security Incident Handling Guide and the SANS Incident Handler’s Handbook, as well as Forrester’s Security Tools and Services Mapping, Zero Trust, and Information Security Maturity models. Avoid getting knocked out by ransomware by regularly reviewing and refining the people, processes, tech, and services required for optimal readiness.

Forrester clients can:

- Complete the Forrester Ransomware Readiness And Response Guide to assess your current state.
- Align ransomware response strategies and priorities with Forrester’s recommended actions across the incident response lifecycle.
- Schedule an inquiry or guidance session with us to discuss your ransomware preparedness plan.

Don’t Call It A Comeback: Stay Ready For Ransomware Read More »

The new AI calculus: Google’s 80% cost edge vs. OpenAI’s ecosystem

The relentless pace of generative AI innovation shows no signs of slowing. In just the past couple of weeks, OpenAI dropped its powerful o3 and o4-mini reasoning models alongside the GPT-4.1 series, while Google countered with Gemini 2.5 Flash, rapidly iterating on its flagship Gemini 2.5 Pro released shortly before. For enterprise technical leaders navigating this dizzying landscape, choosing the right AI platform requires looking far beyond rapidly shifting model benchmarks.

While model-versus-model benchmarks grab headlines, the decision for technical leaders goes far deeper. Choosing an AI platform is a commitment to an ecosystem, impacting everything from core compute costs and agent development strategy to model reliability and enterprise integration.

But perhaps the starkest differentiator, bubbling beneath the surface but with profound long-term implications, lies in the economics of the hardware powering these AI giants. Google wields a massive cost advantage thanks to its custom silicon, potentially running its AI workloads at a fraction of the cost OpenAI incurs relying on Nvidia’s market-dominant (and high-margin) GPUs.

This analysis delves beyond the benchmarks to compare the Google and OpenAI/Microsoft AI ecosystems across the critical factors enterprises must consider today: the significant disparity in compute economics, diverging strategies for building AI agents, the crucial trade-offs in model capabilities and reliability, and the realities of enterprise fit and distribution. The analysis builds upon an in-depth video discussion exploring these systemic shifts between myself and AI developer Sam Witteveen earlier this week.

1. Compute economics: Google’s TPU “secret weapon” vs. OpenAI’s Nvidia tax

The most significant, yet often under-discussed, advantage Google holds is its “secret weapon”: its decade-long investment in custom Tensor Processing Units (TPUs). OpenAI and the broader market rely heavily on Nvidia’s powerful but expensive GPUs (like the H100 and A100). Google, on the other hand, designs and deploys its own TPUs, like the recently unveiled Ironwood generation, for its core AI workloads, including training and serving Gemini models.

Why does this matter? It makes a huge cost difference. Nvidia GPUs command staggering gross margins, estimated by analysts to be in the 80% range for data center chips like the H100 and upcoming B100 GPUs. This means OpenAI (via Microsoft Azure) pays a hefty premium — the “Nvidia tax” — for its compute power. Google, by manufacturing TPUs in-house, effectively bypasses this markup. While manufacturing a GPU might cost Nvidia $3,000-$5,000, hyperscalers like Microsoft (supplying OpenAI) pay $20,000-$35,000+ per unit in volume, according to reports.

Industry conversations and analysis suggest that Google may be obtaining its AI compute power at roughly 20% of the cost incurred by those purchasing high-end Nvidia GPUs. While the exact numbers are internal, the implication is a 4x-6x cost-efficiency advantage per unit of compute for Google at the hardware level. This structural advantage is reflected in API pricing. Comparing the flagship models, OpenAI’s o3 is roughly 8 times more expensive for input tokens and 4 times more expensive for output tokens than Google’s Gemini 2.5 Pro (for standard context lengths). This cost differential isn’t academic; it has profound strategic implications.
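To make those ratios concrete, here is a quick back-of-the-envelope comparison in Python. The per-million-token prices are illustrative assumptions chosen to be consistent with the roughly 8x input and 4x output gap described above, not figures quoted in this article; substitute current list prices before drawing any conclusions.

```python
# Back-of-the-envelope check of the API price gap described above. Prices are
# illustrative assumptions (USD per 1M tokens, standard context), not quoted
# figures; substitute current list prices from each provider.
O3_IN, O3_OUT = 10.00, 40.00
GEMINI_IN, GEMINI_OUT = 1.25, 10.00

def request_cost(price_in: float, price_out: float,
                 tokens_in: int, tokens_out: int) -> float:
    """Cost in USD of one request at the given per-million-token prices."""
    return price_in * tokens_in / 1e6 + price_out * tokens_out / 1e6

# A typical RAG-style request: large retrieved context, modest answer.
print(f"o3:             ${request_cost(O3_IN, O3_OUT, 50_000, 2_000):.4f}")
print(f"Gemini 2.5 Pro: ${request_cost(GEMINI_IN, GEMINI_OUT, 50_000, 2_000):.4f}")
print(f"input ratio:  {O3_IN / GEMINI_IN:.0f}x")   # ~8x
print(f"output ratio: {O3_OUT / GEMINI_OUT:.0f}x") # ~4x
```

At enterprise volumes, per-request differences of this size compound directly into the TCO gap the next section discusses.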
Google can likely sustain lower prices and offer better “intelligence per dollar,” giving enterprises more predictable long-term total cost of ownership (TCO) — and that’s exactly what it is doing right now in practice. OpenAI’s costs, meanwhile, are intrinsically tied to Nvidia’s pricing power and the terms of its Azure deal. Indeed, compute costs represent an estimated 55-60% of OpenAI’s total $9B operating expenses in 2024, according to some reports, and are projected to exceed 80% in 2025 as the company scales. While OpenAI’s projected revenue growth is astronomical — potentially hitting $125 billion by 2029, according to reported internal forecasts — managing this compute spend remains a critical challenge, driving its pursuit of custom silicon.

2. Agent frameworks: Google’s open ecosystem approach vs. OpenAI’s integrated one

Beyond hardware, the two giants are pursuing divergent strategies for building and deploying the AI agents poised to automate enterprise workflows. Google is making a clear push for interoperability and a more open ecosystem. At Cloud Next two weeks ago, it unveiled the Agent-to-Agent (A2A) protocol, designed to allow agents built on different platforms to communicate, alongside its Agent Development Kit (ADK) and the Agentspace hub for discovering and managing agents. A2A adoption faces hurdles — key players like Anthropic haven’t signed on (VentureBeat reached out to Anthropic about this, but Anthropic declined to comment) — and some developers debate its necessity alongside Anthropic’s existing Model Context Protocol (MCP). Still, Google’s intent is clear: to foster a multi-vendor agent marketplace, potentially hosted within its Agent Garden or via a rumored Agent App Store.

OpenAI, conversely, appears focused on creating powerful, tool-using agents tightly integrated within its own stack. The new o3 model exemplifies this, capable of making hundreds of tool calls within a single reasoning chain. Developers leverage the Responses API and Agents SDK, along with tools like the new Codex CLI, to build sophisticated agents that operate within the OpenAI/Azure trust boundary. While frameworks like Microsoft’s AutoGen offer some flexibility, OpenAI’s core strategy seems less about cross-platform communication and more about maximizing agent capabilities vertically within its controlled environment.

The enterprise takeaway: Companies prioritizing flexibility and the ability to mix and match agents from various vendors (e.g., plugging a Salesforce agent into Vertex AI) may find Google’s open approach appealing. Those deeply invested in the Azure/Microsoft ecosystem, or preferring a more vertically managed, high-performance agent stack, might lean toward OpenAI.

3. Model capabilities: parity, performance, and pain points

The relentless release cycle means model leadership is fleeting. While OpenAI’s o3 currently edges out Gemini 2.5 Pro on some coding benchmarks like SWE-Bench Verified and Aider, Gemini 2.5 Pro matches or leads on others like GPQA and AIME. Gemini 2.5 Pro is also the overall leader on the large language model (LLM) Arena Leaderboard. For many enterprise use cases, however, the models have reached rough parity in core capabilities. The real difference lies in

The new AI calculus: Google’s 80% cost edge vs. OpenAI’s ecosystem Read More »

Samsung Presses For New Trial After $192M EDTX Verdict

By Andrew Karpan (April 25, 2025, 10:43 PM EDT) — Samsung is asking a Texas federal court for a new trial in its latest bid to escape a $192 million jury verdict owed to a small Silicon Valley outfit that asserted a handful of wireless charger patents against the tech giant….

Samsung Presses For New Trial After $192M EDTX Verdict Read More »

4. Economic ratings and concerns

Overall, Americans continue to rate economic conditions negatively, with just 23% calling them excellent or good. And their expectations for the economy a year from now have grown more pessimistic since February. Prices of food and consumer goods, housing, and energy remain top economic concerns. These continue to rank higher on the public’s list of economic concerns than the availability of jobs or the state of the stock market.

Note: This survey was conducted after Trump’s April 2 announcement of sweeping new tariffs on nearly all U.S. trading partners, which triggered several days of volatility in U.S. and global stock markets. The survey was in the field on April 9, when Trump paused tariffs on most countries but levied higher rates on China. Opinions about the economy and economic concerns were largely unchanged throughout the April 7-13 field period.

Americans gloomy about current economic conditions; a growing share says the economy will be worse a year from now

Just 23% of Americans rate national economic conditions as excellent or good, while 42% say they are only fair and 34% rate the economy as poor. The share who rate the economy positively is similar to the share who did so in February (24%). Positive evaluations of economic conditions have been below 30% for the past four years.

Partisans’ views of the economy have changed with Trump in office

Today, 36% of Republicans and Republican-leaning independents rate the economy as excellent or good, compared with 11% of Democrats and Democratic leaners. In February, these views were roughly reversed, with 30% of Democrats and 18% of Republicans rating the economy positively. Republicans’ ratings of the economy today are roughly on par with where they were in March 2017, early in Trump’s first term, when 37% rated the economy positively. At that time, Americans’ overall views of the economy were considerably more positive than they are today (41% then vs. 23% now), and 44% of Democrats had a positive view.

While overall economic views have remained relatively steady since February, the public’s expectations for the economy a year from now have soured somewhat. 45% of Americans say they expect economic conditions to be worse a year from now, up from 37% in February. Another 36% of the public expect the economy will be better in a year, while 19% say it will be about the same as today.

Republicans remain more optimistic than Democrats about the economy’s future. But since February, positive expectations have declined among both parties:

Among Republicans
- 65% expect the economy to be better a year from now, down from 73% in February.
- 15% expect the economy will be worse a year from now, up from 9% in February.

Among Democrats
- 74% say the economy will be worse a year from now, up 10 percentage points since February.
- 17% expect the economy to be about the same as it is now, and only 8% think the economy will be better a year from now.

Prices of housing, food and consumer goods remain the public’s top economic concerns

Prices for food and consumer goods, housing, and energy continue to be Americans’ leading economic concerns. However, the shares saying they are very concerned about these have declined since last year, mostly driven by changes among Republicans. Two-thirds of Americans are very concerned about the price of food and consumer goods. In September, 74% said they were very concerned about these prices.

There also have been declines in the shares citing housing costs and the price of gasoline and energy as major economic concerns. Currently, 61% say they are very concerned about housing prices, down from 69% in September. Fewer than half of adults (46%) say they are very concerned about the price of gasoline and energy, down somewhat since January 2024 (51%). The share of adults who are very concerned about the availability of jobs is essentially unchanged over this period (41% today vs. 40% in September).

Concerns about the state of the stock market are higher than they were last year, though they still rank lower than other items. Today, 36% of Americans say they are very concerned about how the stock market is doing, up 12 points from September.

Republicans’ concerns about several economic factors have decreased over this period, while Democrats’ concerns have increased.

Price of food and consumer goods: 57% of Republicans are very concerned about food and consumer prices, down from 85% in September. By contrast, 78% of Democrats are very concerned about these prices, compared with 64% in September.

Housing costs: Today, about half of Republicans (51%) say they are very concerned about the cost of housing, down sharply from 72% in September. Democrats’ concerns about housing costs have increased somewhat since September (71% today vs. 66% then).

Stock market: The share of Democrats who are very concerned about how the stock market is doing has nearly tripled since September, from 17% to 49%. There has been far less change among Republicans. Currently, 24% are very concerned about the stock market, down from 31% in September.

There are similar patterns in concerns about the price of gasoline and energy and concerns about the job market.

4. Economic ratings and concerns Read More »

Surgical Center CIO Builds an IT Department

Since 2001, Regent Surgical Health has developed and managed surgery center partnerships between hospitals and physicians. The firm, based in Franklin, Tennessee, works to improve and evolve the ambulatory surgical center (ASC) model.

Rusty Strange, Regent’s CIO, is used to facing challenges in a field where lives are at stake. He joined Regent after a 17-year stint at ambulatory surgery center operations firm Amsurg, where he served as vice president of IT infrastructure and operations. In an online interview, Strange discusses the challenge he faced in building an entire IT department.

What is the biggest challenge you ever faced?

The biggest challenge I faced when I came to Regent was building an IT department from the ground up. As background, I was the first IT employee. At the time, we had no centralized IT structure — each ambulatory surgical center operated with fragmented, non-standard systems managed by local staff or unvetted third parties. There was no cohesive strategy for clinical applications, data management, cybersecurity, or operational support.

What caused the problem?

The issue arose from rapid growth. The company was acquired, transforming into a high-growth organization overnight. Multiple ASCs were added to our portfolio over a short period, but we lacked the infrastructure for sustainable success. There was no dedicated IT budget, no standardized software or hardware, and no staff trained to handle the increasing complexity of healthcare technology. This left us vulnerable to inefficiencies and security risks, and short on the data needed to inform important decisions.

How did you resolve the problem?

I started by conducting a full assessment of existing systems across all locations to identify gaps and risks. I developed a multi-year plan to address foundational needs and capabilities, secured buy-in for an initial budget to hire our first functional-area leaders, and partnered with a few firms that could provide the additional people resources to execute on multiple fronts. We standardized hardware and software, implementing cloud-based systems and a scalable network architecture. We also established policies for cybersecurity, business continuity, and staff training, while gradually scaling the team and outsourcing specialized tasks like penetration testing to additional trusted partners.

What would have happened if the problem wasn’t swiftly resolved?

Without a stable IT department, the company would have been unable to grow effectively. Important data would have been at risk and underutilized, potentially leading to violations and missed insights. Operational inefficiencies, like mismatched scheduling systems or billing errors, would have eroded profitability and frustrated surgeons and patients alike. Over time, our reputation as a first-class ASC management partner would have suffered, potentially stalling further growth or even losing existing centers to competitors.

How long did it take to resolve the problem?

It took about 18 months to establish a fully operational IT department. The first six months were spent laying the foundation: hiring the core team, standardizing systems, and addressing immediate risks. The next year focused on refining processes, expanding the team, and rolling out core capabilities. It was a phased approach, but we hit key milestones early to stabilize operations and gain organizational buy-in and trust.

Who supported you during this challenge?

The entire leadership team was a critical ally, trusting the vision and advocating for the investments needed to achieve it. My initial hires were integral; they were able to adopt an entrepreneurial mindset, often setting direction while also being responsible for tactical execution. Our ASC administrators also stepped up, providing insights into their workflows and championing the changes with their staff. External partners helped accelerate implementation once we had the resources and process to engage them properly.

Did anyone let you down?

Not everyone was the right fit, and not everyone in the organization was ready for the accelerated pace of change, but those were not personal failures, just circumstances that provided learning opportunities for me and others in the company.

What advice do you have for other leaders?

Start with a clear vision and get fellow executives’ buy-in early — without it, you’re facing a steep uphill climb. Prioritize quick wins, like fixing the most glaring risks and user pain points, to build momentum and credibility. Hire a small, versatile team you can trust — quality beats quantity when you’re starting out. Be patient but persistent; building something from scratch takes time, but cutting corners will haunt you later. Communicate constantly — stakeholders need to understand why the change matters. Lastly, build a “team first” mindset so that individuals know they are supported and can go to others to brainstorm or for assistance.

Is there anything else you would like to add?

This experience reinforced the critical role technology plays in ASCs, where efficiency and patient safety are non-negotiable. It also taught me that resilience isn’t just about systems — it’s about people. It’s proof that even the toughest challenges can transform an organization if you tackle them head-on with the right team and strategy.

Surgical Center CIO Builds an IT Department Read More »

Future-proofing your mainframe: three takeaways from the frontlines of innovation

The mainframe is no longer a system to modernize away from, but a critical foundation to build the future on. Over the past few months, the Rocket Software team has participated in some of the biggest tech events, and this message has been heard loud and clear. At every gathering of CIOs, IT leaders, and technologists there has been a shared sense of urgency — and opportunity — centered around one idea: future-proofing the mainframe. For anyone preparing a mainframe for the future, these events highlighted three key conversations tied to the modernization journey.

1. AI can’t work without the right data — and that data lives on the mainframe

At the recent Gartner Data & Analytics (D&A) Summit in Orlando, one of the hottest topics was how to operationalize AI in a way that delivers business value. Yet, as organizations race to build AI models, many overlook a crucial asset: the mainframe. Mainframes still run critical operations for industries like banking, healthcare, retail, and transportation. They house decades of rich transactional data — precisely the kind of high-value, high-integrity data that AI models need to produce reliable insights.

But tapping into that data can be challenging. Silos, outdated tools, and integration gaps often stand in the way. To future-proof the mainframe, organizations need to ensure frictionless access to mainframe data, not only for reporting and compliance but also to feed modern data pipelines and AI workflows (one small piece of that work is sketched in the example below). As we heard in Orlando, the message is clear: your AI is only as good as the data you feed it — and much of that data still lives on the mainframe.

2. The talent gap is real — but so is the opportunity

As we look to the future of the mainframe, an adjacent conversation took place at SHARE Washington D.C.: the accelerating retirement of seasoned mainframe professionals. This isn’t just a staffing issue — it’s a knowledge management crisis. Years of expertise in COBOL, JCL, and system administration can’t be replaced overnight. But this transition also opens the door to empowering the next generation with modern tools, languages, and interfaces that abstract away complexity while preserving the power of the platform.

We heard inspiring stories of companies creating mentorship programs, investing in mainframe bootcamps, and using DevOps tools to help new developers ramp up faster. These efforts don’t just bridge the skills gap — they create a culture of innovation around the mainframe. To future-proof your workforce, you need to rethink not only who works on the mainframe but also how they work on it.

3. Modernization isn’t a one-time event — it’s a continuous strategy

Modernization was a major throughline at both events, but with a clear shift in tone. It’s no longer about “ripping and replacing” mainframes with cloud-native architectures. Instead, organizations are looking for hybrid approaches that modernize without disruption. That means taking a measured, iterative approach to modernization and finding the infrastructure that works best for the unique needs of your enterprise. The most successful organizations are the ones that view modernization as a strategic, ongoing capability, not a one-time project. As one CIO put it at SHARE: “The goal isn’t to move off the mainframe. The goal is to move forward — with it.”

Final thoughts: the mainframe as a modern platform

The future of enterprise IT will be powered by AI, automation, and hybrid cloud. But none of these trends can succeed without a solid data foundation, a skilled workforce, and a modernization strategy that respects what already works. Learn more about modernization strategies at the Rocket Software Insights Hub.
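As a concrete illustration of the data-access point in takeaway 1, here is a minimal, hypothetical Python sketch of one common step in feeding mainframe data to a modern pipeline: decoding fixed-width EBCDIC records into Python structures. The record layout, file name, and code page are illustrative assumptions, not a Rocket Software product or API.

```python
# Hypothetical layout: 40-byte records with three fields, encoded in EBCDIC.
RECORD_LEN = 40
FIELDS = [("account", 0, 10), ("name", 10, 30), ("balance_cents", 30, 40)]

def parse_record(raw: bytes) -> dict:
    """Decode one fixed-width EBCDIC record into a Python dict."""
    text = raw.decode("cp037")  # cp037 is the US/Canada EBCDIC code page
    row = {name: text[start:end].strip() for name, start, end in FIELDS}
    # Assumes an unsigned zoned-decimal (display) numeric field, which
    # decodes to plain digit characters under cp037.
    row["balance_cents"] = int(row["balance_cents"])
    return row

with open("transactions.dat", "rb") as f:  # hypothetical dataset export
    while chunk := f.read(RECORD_LEN):
        if len(chunk) < RECORD_LEN:
            break  # ignore a trailing partial record
        print(parse_record(chunk))
```

Real pipelines layer change-data-capture, packed-decimal handling, and governance on top of this, but the code page and fixed-width parsing shown here are where most mainframe-to-AI data work begins.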

Future-proofing your mainframe: three takeaways from the frontlines of innovation Read More »

IDC’s 2025 Smart Cities North America Awards Showcases Passion and Partnership

I’ve been hosting the IDC Smart Cities Awards since 2018, and this year was the first time that I got choked up multiple times during the awards ceremony. And I wasn’t the only one! IDC’s Smart City North America awards were hosted at Smart Cities Connect, held in beautiful San Antonio. As the award winners receive their awards, each team has a chance to say a few words about their initiative, and this year their messages were powerful, personal, and showed the passion for their work.

The award recipient from the City of Charlotte, Jamar Davis, whose project “Access Charlotte” focused on housing as a means to drive access to broadband, was emotional as he talked about his connection to the places he was serving. Amy Atchley from the City of Austin said, when describing their project with Austin Energy, “Smart cities is for dreamers.” She didn’t mean smart cities are hypothetical; she meant this is a group that is dreaming big and, slowly but surely, realizing those dreams. These were just two of the speakers who moved me and the rest of the attendees.

We can see the practical application of technology to big ideas and big challenges in the agenda from Smart Cities Connect, where we host the awards. From digital transformation and urban operations to community engagement, cities and their tech partners came together to demonstrate how their use of technology has matured in service to the public. Our award winners and finalists are a microcosm of this. Here are just a few examples of the innovative projects that made this year’s IDC Smart City North America winners and finalists so impressive, along with three key takeaways from the winners.

Deliver on What Your Community Needs and Wants

The city of Phoenix, AZ — the hottest city in the US — discovered that its residents wanted access to chilled drinking water. With ideas from the community, the city developed a custom-designed water station that features two drinking spouts at ADA-approved heights, a bottle-filling station, an internal chiller, and smart meters for reporting live water usage data via a central dashboard. The initiative is tackling hydration and heat concerns by establishing a network of modern, chilled fountains that enhance resilience to extreme heat, reduce plastic waste, and support access to essential services like work and healthcare.

The City of Chandler, AZ developed an Instant Language Assistant (ILA) that tested real-time translation tools to improve resident communication across 250 languages. Deployed as custom devices at city service counters and events, the ILA supported over 560 face-to-face interactions and enabled communication through headsets, keyboards (including Braille), and ASL support. Success stories included a hearing-impaired resident renewing a passport and immediate language support in libraries and housing services. Following the pilot’s success, city leadership approved funding for 60 ILA units over three years, making Chandler a regional leader in inclusive, tech-driven public service.

Do the Hard Work to Take Partnerships to a New Level to Achieve Scaled Results

The City of San Antonio worked with two utilities, and its initiative showcases the power of inter-utility collaboration: maximizing shared infrastructure to improve service reliability, empowering residents with real-time insights, and enabling more efficient operations. San Antonio Water System (SAWS) modernized its 600,000-endpoint water network by replacing manual meter reading with advanced metering infrastructure (AMI). Faced with labor challenges, rising costs, and billing delays, SAWS partnered with CPS Energy to use its existing industrial IoT (IIoT) network, avoiding the cost of building a new system. A pilot using 2,500 ultrasonic meters showed near-perfect read accuracy, real-time usage data, and early leak detection, improving billing, customer engagement, and conservation. The shared network supports future smart city uses, setting an example of how cross-utility collaboration can increase ROI, operational efficiency, and resident satisfaction.

South Bend, IN launched an innovative grant program to expand the city’s Real-Time Crime Center (RTCC) by partnering with local organizations to enhance security infrastructure. Run by SBPD and the Department of Innovation & Technology, the program offers up to $4,000 for eligible investments like cameras and software by local businesses. In return, participants integrate their security systems into the RTCC via the FususCORE device. Since launch, 39 organizations have joined, adding 171 camera views — a 51% increase in RTCC coverage. Benefits include improved incident response, deterrence, and stronger community-police ties. The program prioritizes privacy and transparency and has already helped SBPD address 13 safety incidents.

Embed Resilience and Sustainability in Project Design

The Cincinnati/Northern Kentucky International Airport modernized its main garage by addressing poor lighting, high energy use, and inefficient space utilization. The outdated sodium vapor lighting was costly and limited future upgrades like EV charging. The project introduced LED lighting, IoT sensors, and a data platform to improve navigation, safety, and energy efficiency. Goals included enhancing passenger experience, expanding infrastructure, reducing costs, and achieving ROI within 3.5 years. The system enabled real-time parking guidance, supported scalable innovation, and created new revenue opportunities through better space management and pricing strategies. Overall, the project marked a transformative shift in airport parking operations through smart, sustainable technology.

Generation Park, a 4,300-acre master-planned community in Northeast Houston, is a public-private partnership between McCord Development and the Generation Park Management District. Facing high water bills and unaccounted water loss due to aging infrastructure and a lack of monitoring tools, McCord built MizuWatch, a digital-twin IoT water monitoring platform, using the AWS Garnet Framework based on FIWARE open standards. MizuWatch enables real-time water usage analytics, leak detection, and system transparency. It helped reduce billing and improve efficiency by identifying leaks and enabling proactive collaboration with water operators. The Garnet Framework also prevents vendor lock-in and will serve as the data foundation for future smart city initiatives in Generation Park.

One project embodied all of these: Austin Energy’s EVs for Schools initiative, which provides an educational living lab that is scaling country-wide. Austin Energy is supporting Austin’s goals to be net zero emissions by 2040, which requires 40% of all vehicle miles traveled

IDC’s 2025 Smart Cities North America Awards Showcases Passion and Partnership Read More »

「手·望 家庭送暖服務」(“Hand · Hope” Family Warmth Service) promotes caring for society starting in youth: Leos bring warmth to grassroots families

(Photo) Group photo at the opening ceremony of the 「手·望 家庭送暖服務」(“Hand · Hope” Family Warmth Service); at the center, wearing sunglasses, is organizing chairman and District Leo Committee chairman Dr. 莊紹嘉.

Lions Clubs International China Hong Kong & Macau District 303 (國際獅子總會中國港澳303區) has long been committed to training young people to become future pillars of society. In June 2023 it established a Leo district division so that secondary school students aged 12 to 18 can join a Leo club and serve the community. Recently, dozens of local secondary school and international school students from different backgrounds visited grassroots families together. Besides bringing the families warmth, the students also learned more about a range of social problems, helping the young Leos settle on their future direction early.

The event was jointly organized by the District Leo Council and the Asia Academic Charity Foundation (亞洲學術慈善基金會), with the 太和青少年綜合服務中心 (Tai Wo Integrated Youth Service Centre) under the Evangelical Lutheran Church of Hong Kong (基督教香港信義會) as co-organizer of the “Hand · Hope” Family Warmth Service. The name is drawn from the service’s two key components: 「手」 (“hand”) stands for the STEM handicraft workshop, and 「望」 (“visit”) stands for visits to grassroots families living in subdivided flats. Through this warmth-delivery service, the organizers hope grassroots families will know that a group of passionate young people in society is working hard to look after them and bring them hope.

In his welcome speech at the opening ceremony, organizing chairman and Leo Committee chairman Dr. 莊紹嘉 said: “In this service, secondary school students from different backgrounds can interact closely with grassroots families living in subdivided flats, rooftop structures, or squatter huts and learn about their needs. It is a rare and precious opportunity.” Besides attending the opening ceremony, Dr. 莊紹嘉 and District 303 Governor Mr. 沈顏 led by example, joining the Leos in visiting residents, including single-parent families and families newly arrived in Hong Kong.

In his opening address, Mr. 沈顏 added: “I thank the Asia Academic Charity Foundation and the ELCHK Tai Wo Integrated Youth Service Centre for organizing this deeply meaningful event, which brings joy to grassroots children and gives young people the opportunity to go into the community and observe it first-hand, further demonstrating a social atmosphere of inclusion and growing together.”

While the visiting teams were out, another group of Leos ran a voice-controlled car workshop at the centre, teaching nearly 30 grassroots children one-on-one to build voice-controlled cars by hand. Grassroots children have few opportunities to encounter new things; guided by their older “brothers and sisters,” they assembled voice-controlled remote-control cars from scratch, and the centre hall rang with laughter all afternoon.

(Photo) Organizing chairman 莊紹嘉 and Governor 沈顏 visit grassroots families with Leos from international schools.

To empower young people to take on future leadership roles, the Asia Academic Charity Foundation (AACF) regularly organizes community services that bring young people into contact with different disadvantaged groups. Sharing the same vision, the foundation has partnered with the Lions Clubs before: last year it co-organized visits to subdivided flats in Sham Shui Po with the North District Leo Club (Canadian International School), visiting nearly 50 elderly subdivided-flat households in the depths of winter.

Leo 沈子晴 said the visits opened her eyes: “On this visit, my companions and I witnessed the real traces of daily life in four subdivided-flat households. Every family has its own difficulties, but in our frank exchanges I felt their positive attitude toward life, and I came away with a deeper understanding of society’s needs.”

「手·望 家庭送暖服務」(“Hand · Hope” Family Warmth Service) promotes caring for society starting in youth: Leos bring warmth to grassroots families Read More »

Key Digital Asset Issues Require Antitrust Vigilance

By Luke Taeschler, Sarah Gilbert and Jared Levine (April 22, 2025, 6:01 PM EDT) — Only three months into his second term, President Donald Trump has taken several steps to advance the growth of the digital assets industry….

Key Digital Asset Issues Require Antitrust Vigilance Read More »