Why CIOs Fail — and How They Can Avoid It

It’s never been harder to be a chief information officer. You have the demands of major digital-transformation projects that far too often fail to fully deliver on their promise. You have the give and take between user convenience and IT security in an era when, thanks to ransomware, breaches have never been more costly. You have talent gaps and budget limitations. And you have unremitting requests from business units amid the emergence of generative AI, which has had the effect of releasing squirrels at a dog show.

So it’s no surprise that, while infamously short CIO tenures seem to be marginally longer than they were a few years back, their departures are often someone else’s idea. How can a CIO avoid that fate?

Don’t try to be a technical wizard.

The CIO job is mostly about communicating. You don’t make it to the C-suite without proven technical skills, and that background remains indispensable. But the CIO’s job is to deeply understand the business’s goals and then guide the selection, implementation, and acceptance of the technological solutions that best help the organization achieve those goals.

The business environment is in constant flux, and technologies evolve quickly. Knowing the business requires constant dialogue with C-suite peers as well as business-unit leaders. That means taking the initiative to reach out to and drive strategic conversations with leaders across the organization to deeply understand what their functions do, what they hope to do, how they’re using technology, and how all that contributes (or may one day contribute) to the organization’s overall strategic goals.

However, to grasp the technological state of the art, CIOs must rely on the deep dives of trusted IT architects and other specialists. Only then can CIOs serve as trusted intermediaries between business and technology experts.
So, regardless of one’s background, a CIO’s communication skills and political savvy are vastly more prized than their technical knowledge.

Also, a CIO’s technical upbringing can color a worldview in unproductive ways. A CIO who came up through data-center management and infrastructure may be prone to invest in performance past the point of economic return. One who grew up in development may pour more money into custom solutions and user experience than pays off. Staying laser-focused on the company’s strategy and business goals while understanding — and communicating at a conceptual level — how evolving technologies can meet those goals lets CIOs grow beyond their own backgrounds. That’s good for the company, and for the CIO.

Focus on strategy.

That takes ruthless prioritization. Marketing wants a new automation platform. Finance and operations want a new security app. Product wants custom development for an R&D project. Business development wants IT due diligence for a prospective acquisition. Sales wants a new lead-generation system. Operations wants a new messaging app.

Each may be a good idea in isolation. But approving them all would overwhelm the IT group even if one could budget for it all. Yet, so often, the CIO says “yes,” “yes,” and “yes.” That’s overpromising, a guaranteed path to underdelivering, disappointing stakeholders, and throwing the CIO’s competence into question.

A focus on strategy is crucial here. What is technology’s role in the business? Unless you’re a Spotify or a Netflix, technology is not what the business does but rather an enabler of what the business does. For example, at a financial advisory firm, finding new customers to advise is the lifeblood, so it makes sense to invest in and support state-of-the-art analytics and lead-generation capabilities for the sales team and to hold off on that new messaging app for operations.
Say “no,” then explain the strategic business reasons why.

Vivid explanation must accompany ruthless prioritization. This takes us back to the importance of communication. Failing to deliver on too many “yeses” can doom a CIO. But saying “no” (or, with good ideas that rank as lower priorities for the time being, “not yet”) will disappoint, too. That can sour a business unit or administrative function’s relationship with IT. At its worst, it can lead to rogue installations that bring security risks and maintenance nightmares.

The way a CIO avoids this is, yet again, by evangelizing the IT organization’s alignment with the company’s overall strategic goals. That means being firm and factual about where a rejected or waitlisted project sits on the long roster of prospective projects — and why the ones above it are more important to the business’s success.

It may mean describing the need to engage external partners or bring in outside resources. It certainly means explaining that each new system or API represents a long-term commitment of money and attention. And it could even mean reminding people that trying to deliver for everyone runs the real risk of delivering for no one.

Failure to deliver due to impaired strategic vision, compounded by poor communication, is bad for the business and everyone involved. By constantly communicating, ruthlessly prioritizing, and focusing on projects that make the most strategic sense for the business, CIOs can make the right moves for their companies and help ensure that, when they do depart, they do so on their own terms.


How the wow factor drives innovation at Northeast Grocery

To drive democratization, we follow ECTERS (educate, coach, train the trainer, empower, reinforce, and support), which helps nurture and embed internal AI talent. For ChatGPT Enterprise, for example, we use town halls to educate employees about AI use cases, coach through an early-adopter AI Champions group, and provide power users with advanced training. For empowerment, we’ve introduced prompt engineering guides and access to an AI knowledge hub, and we reinforce training through AI forums about high-value use cases. We also provide support through dedicated AI phone-a-friend peer communities and office hours.

What is the role of the CIO in our age of AI? A key part is to educate. We’re creating a new environment that empowers the business to leverage data better. These business discussions are much better if everyone understands how AI works, what’s possible, and how to apply a functional domain lens to a problem set in order to create solutions they never thought were possible, and in a relatively short period of time.

Another part of the role is to drive momentum. Your new job is to create, support, and nurture that innovation wheel so that as new AI tools come onto the market, you can rotate the wheel and keep the momentum going. With AI, we can now deliver the wow factor, which increases momentum and shows the power of the wheel to the entire enterprise. In IT, we’re no longer ticket-takers; we’re momentum creators.

A third key aspect is being facilitators of the future. By democratizing AI innovation, IT can now share its responsibility to anticipate future customer behavior with the entire organization by bringing new tools and information to the business user, and then training them to leverage that power. This takes a different way of thinking than a traditional CIO role.


Meta’s answer to DeepSeek is here: Llama 4 launches with long context Scout and Maverick models, and 2T parameter Behemoth on the way!

The entire AI landscape shifted back in January 2025 after a then-little-known Chinese AI startup, DeepSeek (a subsidiary of the Hong Kong-based quantitative analysis firm High-Flyer Capital Management), publicly launched its powerful open-source language reasoning model DeepSeek R1, besting the performance of U.S. tech giants such as Meta. As DeepSeek usage spread rapidly among researchers and enterprises, Meta was reportedly sent into panic mode upon learning that this new R1 model had been trained for a fraction of the cost of many other leading models, as little as several million dollars — what it pays some of its own AI team leaders — yet still achieved top performance in the open-source category.

Meta’s whole generative AI strategy up to that point had been predicated on releasing best-in-class open-source models under its brand name “Llama” for researchers and companies to build upon freely (at least, if they had fewer than 700 million monthly users, at which point they are supposed to contact Meta for special paid licensing terms). Yet DeepSeek R1’s astonishingly good performance on a far smaller budget had allegedly shaken the company leadership and forced some kind of reckoning, with the last version of Llama, 3.3, having been released just a month prior in December 2024 yet already looking outdated.

Now we know the fruits of that reckoning: today, Meta founder and CEO Mark Zuckerberg took to his Instagram account to announce a new Llama 4 series of models, with two of them — the 400-billion-parameter Llama 4 Maverick and the 109-billion-parameter Llama 4 Scout — available today for developers to download and begin using or fine-tuning on llama.com and the AI code-sharing community Hugging Face.
A massive 2-trillion-parameter Llama 4 Behemoth is also being previewed today, though Meta’s blog post on the releases said it was still being trained and gave no indication of when it might be released. (Recall that parameters refer to the settings that govern a model’s behavior; more parameters generally mean a more powerful, more complex model.)

One headline feature of these models is that they are all multimodal: trained on, and therefore capable of receiving and generating, text, video, and imagery (audio was not mentioned). Another is that they have incredibly long context windows — 1 million tokens for Llama 4 Maverick and 10 million for Llama 4 Scout — equivalent to about 1,500 and 15,000 pages of text, respectively, all of which the model can handle in a single input/output interaction. That means a user could theoretically upload or paste up to 7,500 pages’ worth of text and receive that much in return from Llama 4 Scout, which would be handy for information-dense fields such as medicine, science, engineering, mathematics, and literature.

Here’s what else we’ve learned about this release so far:

All-in on mixture-of-experts

All three models use the “mixture-of-experts” (MoE) architecture popularized in earlier model releases from OpenAI and Mistral, which essentially combines multiple smaller models (“experts”) specialized in different tasks, subjects, and media formats into a unified, larger model. Each Llama 4 release is therefore a mixture of experts (128 of them in Maverick’s case), and more efficient to run because only the expert needed for a particular task, plus a “shared” expert, handles each token, instead of the entire model having to run for every one. As the Llama 4 blog post notes: As a result, while all parameters are stored in memory, only a subset of the total parameters are activated while serving these models.
This improves inference efficiency by lowering model serving costs and latency — Llama 4 Maverick can be run on a single [Nvidia] H100 DGX host for easy deployment, or with distributed inference for maximum efficiency.

Both Scout and Maverick are available to the public for self-hosting, while no hosted API or pricing tiers have been announced for official Meta infrastructure. Instead, Meta focuses on distribution through open download and integration with Meta AI in WhatsApp, Messenger, Instagram, and the web. Meta estimates the inference cost for Llama 4 Maverick at $0.19 to $0.49 per 1 million tokens (using a 3:1 blend of input and output). That makes it substantially cheaper than proprietary models such as GPT-4o, which is estimated to cost $4.38 per million tokens, based on community benchmarks. Indeed, shortly after this post was published, I received word that cloud AI inference provider Groq has enabled Llama 4 Scout and Maverick at the following prices:

Llama 4 Scout: $0.11 per million input tokens and $0.34 per million output tokens, at a blended rate of $0.13
Llama 4 Maverick: $0.50 per million input tokens and $0.77 per million output tokens, at a blended rate of $0.53

All three Llama 4 models — especially Maverick and Behemoth — are explicitly designed for reasoning, coding, and step-by-step problem solving, though they don’t appear to exhibit the chains of thought of dedicated reasoning models such as the OpenAI “o” series or DeepSeek R1. Instead, they seem designed to compete more directly with “classical,” non-reasoning LLMs and multimodal models such as OpenAI’s GPT-4o and DeepSeek’s V3 — with the exception of Llama 4 Behemoth, which does appear to threaten DeepSeek R1 (more on this below).

In addition, for Llama 4, Meta built custom post-training pipelines focused on enhancing reasoning, such as:

Removing over 50% of “easy” prompts during supervised fine-tuning.
Adopting a continuous reinforcement learning loop with progressively harder prompts.
Using pass@k evaluation and curriculum sampling to strengthen performance in math, logic, and coding.
Implementing MetaP, a new technique that lets engineers tune hyperparameters (like per-layer learning rates) on one model and apply them to other model sizes and token types while preserving the intended model behavior.

MetaP is of particular interest as it could be used going forward to set hyperparameters on one model and then derive many other types of models from it, increasing training efficiency. As my
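The per-token expert routing described earlier (only the needed expert plus a "shared" expert runs for each token) can be sketched minimally. This is a hypothetical toy with made-up gate logic and stand-in experts, not Meta's implementation; it only illustrates the compute/memory asymmetry of MoE:

```python
# Toy mixture-of-experts routing: per token, run only the top-scoring
# routed expert plus a "shared" expert, instead of the whole model.
# Hypothetical sketch for illustration; not Meta's Llama 4 code.

def make_expert(scale):
    # Stand-in for an expert sub-network: here, just scaling the input.
    return lambda x: scale * x

experts = [make_expert(s) for s in (1.0, 2.0, 3.0, 4.0)]
shared_expert = make_expert(0.5)

def gate(token):
    # Stand-in router: in a real MoE this is a learned softmax over
    # expert logits; here we derive an index from the token value.
    return token % len(experts)

def moe_forward(token):
    # Only one routed expert plus the shared expert is activated,
    # even though all experts remain "stored in memory."
    routed = experts[gate(token)]
    return routed(token) + shared_expert(token)

# Token 5 routes to expert index 1 (scale 2.0): 2.0*5 + 0.5*5 = 12.5
print(moe_forward(5))
```

In a real deployment the gate is learned and may combine the top-k experts, but the economics are the same as the blog post describes: every expert's parameters sit in memory, yet only a small subset does work per token.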


Accountant Shortage: How to Protect Yourself This Tax Season

As the April 15, 2025, tax filing deadline approaches, you may find yourself joining the growing crowd of US taxpayers scrambling to find a qualified tax accountant. With a dwindling IRS workforce, trying to do taxes alone isn’t an attractive alternative, especially if you hit any snags in tax prep.

Here’s the backstory for the shortage and a few practical places to start your search to secure a tax pro in 2025.

Why can’t you find a tax preparer?

There’s a growing shortage of tax professionals in the US. Many certified public accountants (CPAs) have retired over the past few years, and fewer young professionals are entering the field. Accounting programs have seen declining enrollment, and the high stress and long hours during tax season have pushed some out of the profession. These trends are evidenced by the following statistics:

Decline in accounting professionals: Over the past five years, the US has seen a reduction of approximately 340,000 accountants, indicating a substantial decrease in the workforce.
Aging workforce: Currently, about 75% of CPAs are part of the Baby Boomer generation and are approaching retirement age, highlighting an emerging gap in experienced professionals.

Seasonal surge in demand: Tax professionals are frequently scheduled out several weeks in advance. This makes it very difficult for taxpayers to get on a tax preparer’s schedule if they don’t start looking until March or later.

More complex tax situations: Tax returns have become increasingly complicated. Between gig work, cryptocurrency transactions, and remote work arrangements, more people are seeking professional help to navigate the maze of deductions, credits, and compliance rules.

How to increase your chances of landing a tax pro

1. Contact your state’s CPA society. Every US state (and most territories) supports an independent CPA professional society. Many have directories or referral services that aren’t widely advertised. Some even have “Find a CPA” tools geared toward matching by specialty, which can be especially useful if you need both financial statement preparation and tax preparation services.

2. Reach out to university accounting programs. Some accounting departments have student-run tax clinics (especially around tax season) where advanced students help with returns for free or at a low cost under faculty supervision. Contact the business school at your local university to determine if that institution has a tax prep program. Note that these programs will likely cater to simpler returns, and there may be income or other demographic limitations for participation. If you land a preparer through a program like this, starting a professional relationship with someone new to the profession may set you up for long-term service and reduce your chances of repeating this search year over year.

3. Link up via LinkedIn. If you’re on LinkedIn, search for CPAs or tax preparers in your area or industry.
Look for professionals posting about tax topics, as this often signals their engagement in the community and potential openness to engaging new clients. Consider posting a status asking for referrals — your network may have good recommendations.

4. Ask your other service providers. In addition to leveraging your LinkedIn connections, tap into your existing network of professional service providers. Inquire with financial advisors, mortgage brokers, lawyers, and more for tax preparer recommendations. These professionals frequently collaborate with CPAs and can point you to someone trustworthy. Some professionals, like financial advisors, might also have accounting designations or experience providing tax preparation services. While they may not advertise those, they may be open to offering them for a bundled fee in conjunction with the other services they provide.

5. Tune into podcasts, blogs, and social media. YouTube channels, podcasts, and tax-focused TikToks are other ways to identify individuals interested in growing their brand. Listening to someone’s content gives you a sense of their style and may reveal their niche. Tax content creators may be open to taking clients, offering consultations, or — at the very least — offering recommendations. Tip: When going with someone you don’t know, be sure to confirm their credentials to ensure that they’re a credible authority, as a social media platform alone doesn’t guarantee expertise.

6. Check out online marketplaces. Some tax preparers offer their services on platforms like Upwork and Fiverr. You may also find a tax preparer using financial services networks like XY Planning Network. If you have doubts about using a preparer from one of these online marketplaces, here are some things to consider when hiring a CPA from any source that does not come with a personal recommendation.

Verify credentials. Ensure the CPA or enrolled agent (EA) is licensed and in good standing.
You can verify this information in some states by entering the state name and “professional license verification” into a search engine and then looking up the preparer’s name. Not all tax preparers must be CPAs or EAs, but the lack of a designation makes it more difficult to verify their skill level.

Meet face-to-face. Ask for an in-person meeting or a video conference call. Without a recommendation, the more you know about the preparer, the better equipped you’ll be to make the best selection.

Inquire about the security of information. Be wary of how your preparer asks for your data to be submitted. Tax preparers have a duty to protect client information and shouldn’t request that sensitive information be transferred through non-secure methods.

When looking for a tax preparer, you can do a few


Dave Meyer, Chief AI Officer at Reveleer: Compliance Isn’t Enough for Healthcare AI

In the healthcare industry, compliance falls short as an AI strategy. Chief AI officers, CIOs, and CISOs need to prioritize responsible AI usage to minimize potential data breaches that could lead not only to fines and litigation, but also to reputational damage.

“It’s really a trust factor,” says Dave Meyer, chief data and AI officer at value-based care platform Reveleer. “[Protected health information, or] PHI is paramount in healthcare, so we have to treat it responsibly. No one in our organization, including data scientists, has access to anything they don’t need to access. Data access needs to be strictly governed.”

Transparency is also critical because it reduces the risk of relying on what could be a hallucination.

“When we give AI results, or when we go through our data models, we support it with monitoring, evaluation, assessment, and treatment (MEAT). So, for example, not only did we find the term ‘diabetes’ in a patient’s chart, there’s also an explanation of why we suggested this particular ICD [International Classification of Diseases] code,” says Meyer. “That way, when AI provides suggestions, the human still decides whether the suggestion is valid or invalid. We’re not trying to [replace] humans. We’re trying to make their job easier and more accurate.”

AI as a Problem-Solving Tool

While the ability to quickly identify health conditions and find correlations is powerful, it’s considerably less helpful if users must then manually wade through volumes of information, which could run to several hundred pages or more, to locate the references. Instead, AI can surface the references quickly, such as by identifying on what pages of a document, or pages within a set of documents, those references can be found.

That sort of use case opens the door to GenAI. However, as in many other industry sectors, GenAI tends to be misunderstood.
People who lack a foundational understanding of AI tend to believe that GenAI is the latest and greatest version of a single technology called “AI,” rather than one AI technique among many.

“I think people view GenAI as a panacea, and it is not a panacea, especially in the healthcare industry, where you cannot just have a black box that says, ‘Here’s the answer, but we’re not going to tell you how we got there,’” says Meyer. “We’re using it for evidence extraction from the chart, which we can then double-check for hallucinations. We take that evidence and run it through our models.”

However, Reveleer also uses other AI techniques, such as rules, to pull evidence.

“A lot of people think they can upload a chart and then ask GenAI for the answer. It will give you an answer that looks okay on the surface, but those are not production-level, customer-trustworthy answers in the percentile of accuracy that [is necessary] in the healthcare industry,” says Meyer. “Healthcare is a high-stakes industry where you’re trying to drive patient outcomes, and I don’t think that GenAI can be trusted on its own to provide that answer.”

Some of Healthcare’s Challenges and How to Address Them

One of healthcare’s biggest challenges is failing to understand that the accuracy of a prediction can, and often does, vary with the use case. Since healthcare organizations need highly sensitive patient information to provide diagnoses and treatment, the confidence level matters greatly.

“Trust is a big factor, so being given a suggestion that is 70% accurate isn’t good enough. The stakes are too high. You have to balance the sensitivity and security of the data with who has access to it,” says Meyer.

Of course, trust must be earned by a vendor, particularly when patient records are involved. In Reveleer’s case, customer trust in its AI capabilities has been earned in stair-step fashion over time.
Specifically, the company began by automatically routing patient charts; later, NLP techniques were added so patient information could be surfaced faster and validated. Now its AI provides automatic pointers to where critical information can be located.

“One of the biggest challenges is getting the data in an organized format that is usable,” says Meyer. “In order to build any AI model, you need a large quantity of data, and you need to govern that data appropriately. Managing your data is really the foundation of everything before you start building models. You also need to make sure that you know how to handle the data well.”

In addition to getting the foundational elements right, it’s important to choose the right tool for the right job.

“Data science is still a good method for solving a lot of these problems. Everybody’s trying to jump to GenAI as the solution. Don’t force that if you’re getting good results from data science,” says Meyer. “The same is true for rules-based systems. For example, if you see the words ‘blood pressure’ and the reading next to it says 120 over 80, you don’t need a GenAI model to pull that out for you. And if the data is in a structured format, you can pull it out without any AI.”

However, don’t overlook the need for a human in the loop when it comes to AI.

“In the healthcare industry, machines need to be partnered with humans, because healthcare is too high-stakes for a lack of human oversight. One suggestion may have a better than 90% confidence score while another has only a 50% confidence score,” says Meyer. “AI can help you cut through the noise and surface the good stuff quickly, but it’s always going to need the human element. We’re not trying to replace humans; we’re just trying to make them more efficient.”
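Meyer’s blood-pressure example illustrates the rules-based tier: a structured reading like “120 over 80” needs no model at all. A minimal, hypothetical sketch of such a rule (the pattern and function names are invented for illustration):

```python
import re

# Hypothetical rules-based extractor for blood-pressure readings;
# illustrates the point that structured values need no GenAI model.
BP_PATTERN = re.compile(
    r"blood pressure\D{0,20}?(\d{2,3})\s*(?:/|over)\s*(\d{2,3})",
    re.IGNORECASE,
)

def extract_bp(text):
    """Return (systolic, diastolic) if a reading is found, else None."""
    m = BP_PATTERN.search(text)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))

print(extract_bp("Patient's blood pressure: 120 over 80, pulse 72."))
# -> (120, 80)
```

A production chart-processing pipeline would layer rules like this under statistical models and GenAI, reserving each technique for the cases it handles best.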


Taming the cost of AI: Is FinOps the answer?

As artificial intelligence (AI) services, particularly generative AI (genAI), become increasingly integral to modern enterprises, establishing a robust financial operations (FinOps) strategy is essential. AI services demand substantial compute resources, such as CPUs, GPUs, and memory, and cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud offer a wide range of AI services, including genAI features. When using these services, it is imperative to monitor consumption, because AI workloads can quickly become a significant cost for an organization.

GenAI in particular has revolutionized various industries, enhancing capabilities and driving innovation, but the financial complexity of these advanced technologies necessitates a robust FinOps strategy to ensure cost efficiency and sustainability. Establishing a governance model and cost management strategy for AI services therefore plays a vital role in the overall AI strategy. FinOps provides the structure to achieve cost transparency, cost management, and cost optimization, ensuring that AI services are not only effective but also economically sustainable. This article delves into developing FinOps solutions tailored for AI services, highlighting the unique considerations and strategic approaches they require.
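Cost transparency, the first of those FinOps goals, starts with attributing usage to owners. As a minimal, hypothetical sketch (the service names, teams, and per-token prices below are invented for illustration, and real data would come from cloud billing exports), tagged usage records can be rolled up into a per-team cost report:

```python
from collections import defaultdict

# Hypothetical per-1K-token prices, for illustration only.
PRICE_PER_1K_TOKENS = {"genai-chat": 0.03, "genai-embed": 0.0004}

# Usage records as (team, service, tokens); in practice these would
# come from cloud billing exports or API usage logs tagged by owner.
usage = [
    ("marketing", "genai-chat", 2_000_000),
    ("support", "genai-chat", 500_000),
    ("support", "genai-embed", 10_000_000),
]

def cost_by_team(records):
    # Roll usage up into per-team dollar totals: the core of a
    # chargeback/showback report that makes AI spend visible.
    totals = defaultdict(float)
    for team, service, tokens in records:
        totals[team] += (tokens / 1000) * PRICE_PER_1K_TOKENS[service]
    return dict(totals)

# marketing: 2000 * 0.03 = 60.0; support: 500*0.03 + 10000*0.0004 = 19.0
print(cost_by_team(usage))
```

Once spend is attributed this way, the cost-management and cost-optimization steps (budgets, alerts, right-sizing) have concrete numbers to act on.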


Uplimit raises stakes in corporate learning with suite of AI agents that can train thousands of employees simultaneously

Uplimit unveiled a suite of AI-powered learning agents today designed to help companies rapidly upskill employees while dramatically reducing administrative burdens traditionally associated with corporate training. The San Francisco-based company announced three sets of purpose-built AI agents that promise to change how enterprises approach learning and development: skill-building agents, program management agents, and teaching assistant agents. The technology aims to address the growing skills gap as AI advances faster than most workforces can adapt. “There is an unprecedented need for continuous learning—at a scale and speed traditional systems were never built to handle,” said Julia Stiglitz, CEO and co-founder of Uplimit, in an interview with VentureBeat. “The companies best positioned to thrive aren’t choosing between AI and their people—they’re investing in both.” How Uplimit’s AI agents transform traditional corporate training models Stiglitz, whose background includes teaching with Teach for America, running Google Apps for Education, and being an early employee at Coursera, founded Uplimit during the pandemic. She saw a disconnect between engaging classroom experiences and the often static nature of first-generation online learning platforms. “I started to think, well, maybe there’s a way that we can sort of get both, like both the scale that you would get from a Coursera, that type of experience, but with the engagement that you would get from having a one-on-one tutor,” Stiglitz explained. The company’s new AI agents tackle what Uplimit identifies as critical pain points in corporate learning. The skill-building agents facilitate practice-based learning through AI role-plays and personalized feedback.
Program management agents analyze learner progress, automatically identifying struggling participants and sending personalized interventions. Teaching assistants provide 24/7 support, answering questions and facilitating discussions. What distinguishes Uplimit’s approach is its focus on active learning rather than passive content consumption. Traditional corporate e-learning typically relies on videos and quizzes, with completion rates averaging a dismal 3-6 percent. In contrast, Uplimit’s customers report completion rates exceeding 90 percent.

Enterprise customers report dramatic efficiency gains and completion rates

“Industry standard for an asynchronous course is like three to 6%. That’s what you see from a Coursera,” said Stiglitz. “Databricks has 94% completion rates. They traditionally had to cap those programs at about 20 people, because that’s the amount of people that an instructor can manage. Now the cohort that’s running this week has about 1,000 learners.”

Early customers report striking efficiency gains. Procore Technologies estimates creating courses through Uplimit is 95% faster than traditional methods, while Databricks has reduced instructor time by over 75%. Another unnamed large technology company compressed what would have been a three-year leadership training rollout into just one year.

The timing of Uplimit’s launch aligns with growing concerns about AI’s impact on employment. A McKinsey report cited in Uplimit’s announcement estimates 400 million jobs could be eliminated by 2030. This reality creates urgency for effective upskilling solutions. For employees concerned about AI replacing their jobs, Stiglitz offers pragmatic advice: “The best advice would be figuring out how you can use AI yourself to augment your own skills.
Across many professions, we’re sort of seeing how AI can make people significantly more productive.”

AI-powered learning addresses fear and misconceptions about technology

Josh Bersin, a respected industry analyst and CEO of The Josh Bersin Company, characterized Uplimit’s approach as representing the future of corporate learning. “Despite many innovations, corporate learning has stagnated over the last decade. Today, thanks to the power of AI, we are ready for a revolution in this massive industry,” Bersin said in a statement sent to VentureBeat.

The company has addressed potential privacy concerns by building enterprise-grade security features. “We have SOC 2 compliance. It’s siloed. We’re not training our models on any of their data,” Stiglitz emphasized. “We have the sort of enterprise-level security and privacy features that you would expect working with Fortune 500 companies.”

Interestingly, Uplimit has found that AI training itself represents a significant opportunity. Kraft Heinz, for example, used Uplimit to create AI upskilling programs that addressed fear and misconceptions about the technology. “There was a lot of fear at Kraft Heinz associated with AI, and a lot of misconceptions around what it could do,” Stiglitz noted. “They built the program that made AI much more accessible. What they were really excited about was that they would be able to experience AI through the learning experience while they were learning about AI.”

The future of learning: Connecting skills development to business outcomes

While many aspects of learning can be automated, Stiglitz believes certain elements will remain distinctly human. “Peer-to-peer interaction, where people are sharing their experiences and ideas is still really valuable,” she said.
“Learning from somebody else who’s going through the same experience as you, having this sort of emotional support associated with that, and that’s particularly important for leadership and management courses.” Looking ahead, Stiglitz envisions AI enabling a tighter connection between learning and measurable business outcomes. “If you think about what learning is, it’s really about enabling human performance,” she said. “The reason why it’s gotten sort of fragmented or disassociated from the actual objective is it’s been hard to sort of measure those connections.” Backed by prominent investors including Salesforce Ventures, Greylock Ventures, and the co-founders of OpenAI and DeepMind, Uplimit appears well-positioned in a corporate learning market ripe for transformation. As companies face the dual challenge of integrating AI while ensuring their workforce can adapt, Uplimit’s approach suggests that AI itself may offer the most viable solution to the very disruption it creates.

Uplimit raises stakes in corporate learning with suite of AI agents that can train thousands of employees simultaneously Read More »

AM Radio Bill Clears Bar For Senate OK, Backers Say

By Courtney Bublé (April 4, 2025, 6:50 PM EDT) — A bipartisan bill to keep AM radio capabilities in cars has cleared the filibuster hurdle….

AM Radio Bill Clears Bar For Senate OK, Backers Say Read More »

IoT, IIoT, IoMT, and OT – Welcome to acronym mania. What does it all mean?

Across IT, acronyms come with the territory. Whether they’re classic ones (ENIAC, Electronic Numerical Integrator and Computer), just a tad more modern (VAX, Virtual Address eXtension), network-based (TCP/IP, Transmission Control Protocol/Internet Protocol; XNS, Xerox Network Systems), or cybersecurity-related (NGAV, next-gen antivirus; DLP, data loss prevention; IDS, intrusion detection system), the acronyms and the process of keeping up with them are endless. It doesn’t help that many IT vendors create new acronyms in an effort to stand out and make themselves feel special. In the world of autonomous endpoints, we are dealing with five primary acronyms. Here is some guidance and perspective to clarify their meanings. IoT: internet of things This is the broadest category, as there are a myriad of devices and technologies within it, both at home and as part of a business. Device types range from smart assistants, doorbell cameras, and fitness trackers to printers, security door locks, and warehouse label scanners. What ties these devices together is that they are designed to communicate and exchange internet data, with ‘I’ being the key letter in the acronym. IoT devices, such as sensors and actuators, are integrated into or attached to machines or assets and connected to the internet via a Wi-Fi connection or through cellular networks. The devices use cloud platforms to send and receive data to make informed decisions about the connected assets. IIoT: industrial internet of things A subset of the IoT category, these devices, as the name implies, are made for heavy work and are often larger than simple sensors or scanners. IIoT devices are usually focused on improving industrial processes, including predictive maintenance, asset tracking, quality monitoring, process optimization, supply chain visibility, and building management.
The industrial aspect isn’t restricted to monitoring; it can also incorporate devices such as electric vehicle chargers or building management systems. The first ‘I’ is the differentiator in the acronym. OT: operational technology As the name implies, OT encompasses the hardware and software that controls the physical operation of industrial devices. This is where you find devices for manufacturing, energy production and transmission, water treatment, and factory equipment. Connectivity is regularly restricted to private networks, but in recent years, OT has started to have external/internet connections. The focus is on the ‘O.’ To make matters worse, under OT, you also have industrial control systems (ICS), supervisory control and data acquisition (SCADA), distributed control systems (DCS), and programmable logic controllers (PLC). There seems to be no end to OT-based acronyms. IoMT: internet of medical things As the ‘M’ implies, this subset of IoT revolves around devices used within the healthcare industry. These could be devices in a hospital, such as infusion pumps or smart medication dispensers, or devices outside the hospital, like blood pressure monitors, CPAP machines, and pacemakers. As with IIoT, some devices, such as MRI or X-ray machines, could be considered operational technology, but it is generally accepted that IoMT (the ‘M’ for medical being the distinction) incorporates both IoT and OT. M2M: machine to machine This entails technology that enables machines to interact via wireless or wired communication channels without human intervention. Devices connect and interact with each other to exchange information and perform actions without requiring an internet connection. M2M technology is often integrated into security, track and trace, automation, manufacturing, and facility management processes. IoT technology differs from M2M communication because IoT extends interactions to include cloud-based networks.
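The five categories above can be summarized as a small lookup table. The sketch below is purely illustrative: the `DeviceCategory` enum, the example device names, and the `categorize` helper are hypothetical, not drawn from any standard or vendor taxonomy.

```python
from enum import Enum

class DeviceCategory(Enum):
    IOT = "internet of things"               # internet-connected, cloud-backed devices
    IIOT = "industrial internet of things"   # IoT subset focused on industrial processes
    OT = "operational technology"            # controls physical operation of equipment
    IOMT = "internet of medical things"      # healthcare subset spanning IoT and OT
    M2M = "machine to machine"               # direct device-to-device, no internet required

# Hypothetical example devices mapped per the definitions in the text.
EXAMPLES = {
    "doorbell camera": DeviceCategory.IOT,
    "warehouse label scanner": DeviceCategory.IOT,
    "EV charger": DeviceCategory.IIOT,
    "SCADA controller": DeviceCategory.OT,
    "infusion pump": DeviceCategory.IOMT,
    "wired factory sensor pair": DeviceCategory.M2M,
}

def categorize(device: str) -> DeviceCategory:
    """Look up a device's category; unknown devices default to plain IoT,
    since IoT is described as the broadest category."""
    return EXAMPLES.get(device, DeviceCategory.IOT)
```

Defaulting unknown devices to IoT mirrors the article's framing of IoT as the umbrella category from which the others are carved out.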
Please note: We recognize that there are many other relevant IoT-related acronyms, which we will explore in an upcoming IoT report. A simplified version that reduces these distinctions to just IoT and OT would be: IoT devices are those that you run inside your business. If these devices go offline, you may have some challenges, but your business can still function. OT devices are those that run your business. If these devices go offline, you’re not doing business. Like all analogies, this one has exceptions. For instance, if your medical business relies on performing MRI scans and the MRI machine is offline, you can’t do business. A hospital can treat patients without IoT infusion pumps or Bluetooth pulse oximeter sensors, but it won’t be easy. And would you really want to run your industrial manufacturing tools without IoT noxious gas sensors? For a little more distinction, see the image below. Device protection is important with IoT and OT, but the purpose is different. For IoT devices, the goal is to protect the data. For OT, the goal is maintaining operational safety. Because of this, the approaches to security for these technologies have historically been different. Until recently, many enterprises completely walled off their OT devices into their own air-gapped network, developing extensive human-action security policies to control the flow of data in and out of the network to ensure that these devices stayed operational and weren’t exposed to internal or external threats. Conversely, IoT devices were often interspersed throughout the enterprise with other endpoints. In more secure environments, network traffic to and from these devices is logically segmented and controlled to protect them against internet-based threats. Security in IoT and OT environments is currently changing. The walls between the OT devices and the rest of the network are becoming porous.
Business leaders are still highly concerned about OT security, but the need for connectivity to IT and internet resources is growing. For IoT, simple segmentation is no longer sufficient because of the mounting threats. This is leading business and security leaders to deploy solutions to improve device security. New acronyms will continue to emerge (such as the confusing CPS, cyber-physical systems) as IoT and OT security solutions expand. I’m still dreading hearing about the first IoTDR solution. Vendors in this space need to stop throwing out word salad in an attempt to seem relevant and should stick with established acronyms. If you’d like to get assistance in understanding the complexities of managing and securing IoT and OT devices, please schedule an inquiry or guidance session.

IoT, IIoT, IoMT, and OT – Welcome to acronym mania. What does it all mean? Read More »

DDoS Attacks Now Key Weapons in Geopolitical Conflicts, NETSCOUT Warns

Image: EV_Korobov/Adobe Stock Cyberattacks aren’t just about stealing data anymore — they’ve evolved into a key weapon in geopolitical fights, crippling vital infrastructure and shaking public trust in governments. A new report by NETSCOUT reveals that hackers are increasingly using Distributed Denial of Service (DDoS) attacks to disrupt elections, protests, and policy debates, turning digital sabotage into a tool of modern warfare. The company’s Second Half 2024 DDoS Threat Intelligence Report sheds light on how cybercriminals and hacktivist groups have turned DDoS attacks into a dominant form of cyberwarfare, strategically targeting critical systems during periods of national instability. In addition, NETSCOUT revealed that nearly nine million DDoS attacks were recorded in just the second half of 2024 — a 12.7% increase from the first half. Regions such as Latin America and Asia Pacific were among the hardest hit, experiencing approximately 30% and 20% increases, respectively. DDoS attacks surge during political crises According to NETSCOUT, politically motivated DDoS attacks skyrocketed in 2024, with some countries seeing spikes of over 2,800% during major conflicts. Israel faced a 2,844% surge in attacks during hostage rescues and political tensions. Georgia saw a 1,489% jump as lawmakers debated a controversial “Russia Bill.” Mexico experienced a 218% rise in attacks during its national elections. The U.K. had a 152% spike when the Labour Party returned to Parliament. “DDoS has emerged as the go-to tool for cyberwarfare,” said Richard Hummel, NETSCOUT’s threat intelligence director. A pro-Russian hacking group, NoName057(16), was behind many of these strikes, repeatedly hitting government services in the U.K., Belgium, and Spain. AI and botnets make attacks deadlier Hackers are now using artificial intelligence to supercharge their assaults.
Most DDoS-for-hire services now use AI to bypass security checks like CAPTCHA, lowering the barrier to entry and increasing attack success rates. Meanwhile, powerful botnets — networks of hijacked devices — are being weaponized to overwhelm servers. Despite coordinated crackdowns like Operation PowerOFF, law enforcement agencies continue to struggle with long-term takedown effectiveness: new attack platforms quickly replace the ones taken down. “Attackers adapt and reconstitute their networks, with no significant decline in global attack volume,” the report noted. Why DDoS attacks are so dangerous now DDoS attacks don’t just crash websites — they can paralyze essential public services like banks, hospitals, power grids, and emergency response systems. By striking during moments of political turmoil, threat actors amplify national chaos and undermine government credibility. What’s being done to mitigate DDoS attacks? Governments and companies are scrambling to strengthen defenses, but NETSCOUT warned that many organizations are still unprepared. The firm urges businesses running critical services to adopt real-time threat monitoring and better response plans.
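To make "real-time threat monitoring" concrete, here is a minimal sketch of one of its simplest building blocks: a sliding-window counter that flags a traffic source whose request rate exceeds a threshold. The class name, threshold, and window size are invented for illustration; a real DDoS defense involves far more than per-source rate counting.

```python
from collections import deque

class RateMonitor:
    """Flag a traffic source whose request rate exceeds a threshold
    within a sliding time window. Illustrative toy, not a real defense."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps: dict[str, deque] = {}  # per-source request times

    def record(self, source: str, now: float) -> bool:
        """Record one request at time `now`; return True if the source
        has exceeded the allowed rate inside the window."""
        q = self.timestamps.setdefault(source, deque())
        q.append(now)
        # Drop events that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests

monitor = RateMonitor(max_requests=100, window_seconds=1.0)
# Simulate a burst of 150 requests from one IP within ~0.15 seconds.
flagged = any(monitor.record("198.51.100.7", t / 1000) for t in range(150))
# flagged is True: the burst far exceeds 100 requests per second.
```

In practice this kind of counter is one signal among many; production systems also correlate traffic shape, geography, and protocol anomalies, which is why NETSCOUT pairs monitoring with response planning.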

DDoS Attacks Now Key Weapons in Geopolitical Conflicts, NETSCOUT Warns Read More »