6 reasons so many IT orgs fail to exceed expectations today

As Swartz says: “If you’re underspending compared to others, then it should not be a surprise the organization is behind and that IT suffers.” Moreover, he says in such scenarios CIOs usually must commit a high percentage of their limited resources to keep-the-lights-on costs, leaving little to spend on innovation and transformation that can really dazzle their colleagues and deliver big wins for their business.

There’s no easy fix for that, but having the CIO reporting to the CEO, not the CFO, can get IT more focused on driving business objectives and land it the money required to deliver, Swartz says. CIOs can exceed expectations even if they’re not able to change reporting structure, he adds, by “finding ways to make ‘less’ [resources] work more effectively” and putting any savings to those tech-driven business projects that will deliver the most benefits. “I would call that an exceed,” Swartz says.

6. Misplaced accountability

Confusion about accountability — that is, who is really accountable for what results — is another obstacle for CIOs and IT teams as they aim high, according to Swartz. “The question of what keeps CIOs from exceeding expectations assumes that everyone knows what the CIO is accountable for, and I have seen, in fact, that the answer to that question varies,” he says.

Too often, CIOs are held accountable for failures not of their making. CIOs in organizations where business teams turn problems over to IT to fix, where there’s no joint ownership, often won’t have the authority needed to effectively find solutions and drive change. At the same time, the IT department is often still held accountable for the delivery when it inevitably falls short. Worse still, Swartz says, is when the business gets credit in the cases where the project succeeds or exceeds expectations — even if IT drove the positive results.

Joshi sees similar issues. “CIOs aren’t appreciated for what they do, and many aren’t recognized until something goes wrong,” he says. Joshi says the solution involves better alignment between IT and business teams on objectives and priorities, more collaboration, and better change management practices. “The challenge lies on both sides: the way business behaves and runs, and the way the technology organization runs,” Joshi adds.

Swartz recommends the use of agile development principles, DevOps teams, and a product mindset — all of which, when properly implemented, require business-IT partnerships and joint accountability. He and others say those steps go a long way in helping IT successfully work on tech-driven business initiatives that stand out and get IT the credit it deserves for doing so. source

6 reasons so many IT orgs fail to exceed expectations today Read More »

Calif. Privacy Agency Takes Regulatory Aim At 6th Data Broker

By Allison Grande (February 20, 2025, 10:52 PM EST) — The California Privacy Protection Agency continued to keep the heat on data brokers Thursday, announcing that it’s pursuing a monetary penalty against a Florida-based company that allegedly failed to comply with the registration requirements of a groundbreaking state data deletion law. … source

Calif. Privacy Agency Takes Regulatory Aim At 6th Data Broker Read More »

Voltron Data just partnered with Accenture to solve one of AI’s biggest headaches

As artificial intelligence drives unprecedented demand for data processing, Mountain View startup Voltron Data is offering a solution to one of AI’s least discussed but most critical challenges: moving and transforming massive datasets quickly enough to keep up.

Voltron Data, which announced a strategic partnership with Accenture today, has developed a GPU-accelerated analytics engine that could help enterprises overcome the data preparation bottleneck hampering AI initiatives. The company’s core product, Theseus, enables organizations to process petabyte-scale data using graphics processing units (GPUs) instead of traditional computer processors (CPUs).

“Everyone’s focused on the flashy new stuff that you can touch and feel, but it’s that dataset foundation underneath that is going to be key,” said Michael Abbott, who leads Accenture’s banking and capital markets practice, in an exclusive interview with VentureBeat. “To make AI work, you’ve got to move data around at a speed and pace you just never had to before.”

Building for the AI tsunami: Why traditional data processing won’t cut it

The partnership comes as companies rushing to adopt generative AI are discovering their existing data infrastructure isn’t equipped to handle the volume and velocity of data required. This challenge is expected to intensify as AI agents become more prevalent in enterprise operations. “Agents will probably write more SQL queries than humans did in a very short time horizon,” said Rodrigo Aramburu, Voltron Data’s field CTO and cofounder. “If CIOs and CTOs are already saying they spend way too much on data analytics and cloud infrastructure, and the demand is about to step function higher, then we need a step function down in the cost of running those queries.”

Unlike traditional database vendors that have retrofitted GPU support onto existing systems, Voltron Data built its engine from the ground up for GPU acceleration. “What most companies have done when they’ve tried to do GPU acceleration is they’ll shoehorn GPUs onto an existing system,” Aramburu told VentureBeat. “By building from the ground up…we’re able to get 10x, 20x, 100x depending on the performance profile of a particular workload.”

From 1,400 servers to 14: Early adopters see dramatic results

The company positions Theseus as complementary to established platforms like Snowflake and Databricks, leveraging the Apache Arrow framework for efficient data movement. “It’s really an accelerator to all those databases, rather than competition,” Abbott said. “It’s still using the same SQL that was written to get the same answer, but get there a lot faster and quicker in a parallel fashion.”

Early adoption has focused on data-intensive industries like financial services, where use cases include fraud detection, risk modeling and regulatory compliance. One large retailer reduced its server count from 1,400 CPU machines to just 14 GPU servers after implementing Theseus, according to Aramburu. Since launching at Nvidia’s GTC conference last March, Voltron Data has secured about 14 enterprise customers, including two large government agencies. The company plans to release a “test drive” version that will allow potential customers to experiment with GPU-accelerated queries on terabyte-scale datasets.
Turning the GPU shortage into an opportunity

The current GPU shortage sparked by AI demand has been both challenging and beneficial for Voltron Data. While new deployments face hardware constraints, many enterprises possess underutilized GPU infrastructure originally purchased for AI workloads: assets that could be repurposed for data processing during idle periods. “We actually saw it as a boon in that there’s just so many GPUs out in the market that previously weren’t there,” Aramburu noted, adding that Theseus can run effectively on older GPU generations that might otherwise be deprecated.

The technology could be particularly valuable for banks dealing with what Abbott calls “trapped data” — information locked in formats like PDFs and documents that could be valuable for AI training but is difficult to access and process at scale. “You’ve seen some of the data that Voltron would show you is potentially 90% more effective and efficient to move data using this technology than standard CPUs,” Abbott said. “That’s the power.”

As enterprises grapple with the data demands of AI, solutions that can accelerate data processing and reduce infrastructure costs are likely to become increasingly critical. The partnership with Accenture could help Voltron Data reach more organizations facing these challenges, while giving Accenture’s clients access to technology that could significantly improve their AI initiatives’ performance and efficiency. source
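The core claim here is that the same query runs far faster when the engine is built for GPUs rather than retrofitted onto CPUs. As a rough, hedged illustration only (Theseus itself is proprietary and not shown), the sketch below runs one aggregation with pandas on the CPU and with the open-source RAPIDS cuDF library on a GPU; the file name and column names are hypothetical placeholders.

```python
# A minimal sketch (not Theseus) of the CPU-vs-GPU analytics idea, using the
# open-source RAPIDS cuDF library. File and column names are hypothetical.
import pandas as pd   # CPU dataframe engine
import cudf           # GPU dataframe engine (requires an NVIDIA GPU + RAPIDS install)

# CPU path: pandas reads and aggregates entirely on the CPU.
cpu_df = pd.read_parquet("transactions.parquet")
cpu_out = cpu_df.groupby("merchant_id")["amount"].sum()

# GPU path: the same query, but the data lives in GPU memory and the
# group-by/aggregation kernels execute on the GPU.
gpu_df = cudf.read_parquet("transactions.parquet")
gpu_out = gpu_df.groupby("merchant_id")["amount"].sum()

# The results are equivalent; only the execution hardware differs.
print(cpu_out.head())
print(gpu_out.head())
```

The point is not the specific library but the architectural choice the article describes: keeping the data in GPU memory and running the query kernels there, rather than bolting GPUs onto a CPU-centric engine.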

Voltron Data just partnered with Accenture to solve one of AI’s biggest headaches Read More »

IT infrastructure complexity hindering cyber resilience

The complexity of IT and security infrastructure was highlighted as the greatest obstacle to achieving cyber resilience, according to new research from Zscaler, Unlock the Resilience Factor. Forty-three percent of 1,700 IT and security leaders worldwide ranked the challenge as a major barrier to an improved ability to recover from serious cyber events, nine percentage points above the second-placed issue: legacy security and IT issues. The survey results underscore the pressing need for organizations to rethink their approach and shift towards resilience by design.

Resilience red flag

Despite the obstacles, nearly half of IT leaders (49%) believe their infrastructure is highly resilient, and a further substantial portion (43%) consider it somewhat resilient. However, this perception of resilience must be backed up by robust, tested strategies that can withstand real-world threats. One major gap in the findings is that four in ten respondents admitted their organization had not reviewed its cyber resilience strategy in the last six months. Given the rapid evolution of cyber threats and continuous changes in corporate IT environments, failing to update and test resilience plans can leave businesses exposed when attacks or major outages occur.

The importance of integrating cyber resilience into a broader organizational resilience strategy cannot be overstated. With cybersecurity now fundamental to business operations, it must be considered alongside financial, operational, and reputational risk planning to ensure continuity in the face of disruptions.

Expectation of disruption

Limited investment in cyber resilience remains a challenge, despite rising security budgets overall: nearly half (49%) of U.S.-based IT leaders believe their budget for cyber resilience is inadequate, and respondents in India (67%) expressed the greatest concern. A lack of budget cannot be put down to a lack of evidence of need. Over the past six months, 45% of respondents worldwide said their organization experienced a cyber incident, with the highest rates reported in Sweden (71%) and Germany (53%).

Leaders also expect to face adversity in the near future, with 60% anticipating a significant cybersecurity failure within the next six months, which reflects the sheer volume of cyber attacks as well as a growing recognition that cloud services are not immune to disruptions and outages. Expectations vary by region, ranging from 68% in Sweden to 33% in France and the UK & Ireland, but the overall consensus is clear: resilience is no longer optional, but essential.

Resilience by design: A path forward

Improving an organization’s ability to rebound after an incident starts with moving to a modern zero trust architecture, which achieves several key outcomes. First and most importantly, it removes IT and cybersecurity complexity, the key impediment to enhancing cyber resilience. Eliminating traditional security dependencies such as firewalls and VPNs not only reduces the organization’s attack surface, but also streamlines operations, cuts infrastructure costs, and improves IT agility. Zero trust allows security teams to focus on strategic initiatives rather than maintaining outdated security controls.

The second big win is that attackers cannot move laterally should an endpoint be compromised. Users are verified and given the lowest privileges necessary each time they access a corporate resource, meaning ransomware and other data-stealing threats are far less of a concern.
The potential for a cloud outage due to natural or human-made disruptions, including cyber attacks and sabotage, persists, and cloud service purchasing decisions are often driven by feature sets rather than resilience. A nuanced approach is needed: while a four-hour outage of an internal HR platform may be tolerable, the same disruption to core communication systems could be catastrophic. Due to the criticality of its services, Zscaler prioritizes security and reliability in its development strategy. Through building and owning its cloud infrastructure, Zscaler maintains complete control over its core offerings, meaning no single data center outage can disrupt customer operations.

Identify shortcomings through testing

By designing for scale and automation, Zscaler provides tools that help businesses minimize downtime. Many of its 7,500 customers experience 100% uptime because they fully leverage the resilience and reliability best practices, integrations, and automation tools that Zscaler offers. Further, customers can host their own private failover cloud instances should the Zero Trust Exchange become unreachable, allowing for continued access and policy enforcement even if Zscaler experiences an outage.

Regardless of safeguards, regular disaster recovery exercises — conducted twice yearly — should define roles, responsibilities, and communication protocols to prepare teams for potential crises. Exercises identify shortcomings that can be addressed ahead of a real incident. Organizations must move beyond a reactive mindset. By embedding resilience into their cybersecurity DNA — through Zero Trust, vendor scrutiny, and continuous testing — businesses can safeguard operations against inevitable disruptions. To learn more, visit us here. source
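The per-request verification model described above (verify the user and device on every access, then grant only the minimum privilege needed for that one resource) can be sketched generically. This is a hedged illustration of the zero trust idea, not Zscaler's implementation; the roles, resources, and policy rules are hypothetical.

```python
# Minimal, generic sketch of a zero-trust policy decision point: every request is
# evaluated on its own, and access is denied unless an explicit rule allows it.
# Roles, resources, and rules are hypothetical examples, not any vendor's API.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str           # e.g. "engineer", "hr"
    device_trusted: bool
    resource: str       # e.g. "source-repo", "hr-platform"
    action: str         # e.g. "read", "write"

# Explicit allow-list of (role, resource, action). Anything not listed is denied.
POLICY = {
    ("engineer", "source-repo", "read"),
    ("engineer", "source-repo", "write"),
    ("hr", "hr-platform", "read"),
}

def authorize(req: Request) -> bool:
    """Deny by default; allow only trusted devices with an explicit matching rule."""
    if not req.device_trusted:  # posture is re-checked on every single request
        return False
    return (req.role, req.resource, req.action) in POLICY

# Each access is evaluated independently -- there are no standing, network-wide privileges.
print(authorize(Request("alice", "engineer", True, "source-repo", "write")))  # True
print(authorize(Request("alice", "engineer", True, "hr-platform", "read")))   # False
```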

IT infrastructure complexity hindering cyber resilience Read More »

DeepSeek’s R1 and OpenAI’s Deep Research just redefined AI — RAG, distillation, and custom models will never be the same

Things are moving quickly in AI — and if you’re not keeping up, you’re falling behind. Two recent developments are reshaping the landscape for developers and enterprises alike: DeepSeek’s R1 model release and OpenAI’s new Deep Research product. Together, they’re redefining the cost and accessibility of powerful reasoning models, which has been well reported on. Less talked about, however, is how they’ll push companies to use techniques like distillation, supervised fine-tuning (SFT), reinforcement learning (RL) and retrieval-augmented generation (RAG) to build smarter, more specialized AI applications.

After the initial excitement around the amazing achievements of DeepSeek begins to settle, developers and enterprise decision-makers need to consider what it means for them. From pricing and performance to hallucination risks and the importance of clean data, here’s what these breakthroughs mean for anyone building AI today.

Cheaper, transparent, industry-leading reasoning models – but through distillation

The headline with DeepSeek-R1 is simple: It delivers an industry-leading reasoning model at a fraction of the cost of OpenAI’s o1. Specifically, it’s about 30 times cheaper to run, and unlike many closed models, DeepSeek offers full transparency around its reasoning steps. For developers, this means you can now build highly customized AI models without breaking the bank — whether through distillation, fine-tuning or simple RAG implementations.

Distillation, in particular, is emerging as a powerful tool. By using DeepSeek-R1 as a “teacher model,” companies can create smaller, task-specific models that inherit R1’s superior reasoning capabilities. These smaller models, in fact, are the future for most enterprise companies. The full R1 reasoning model can be too much for what companies need — thinking too much, and not taking the decisive action companies need for their specific domain applications. “One of the things that no one is really talking about, certainly in the mainstream media, is that, actually, reasoning models are not working that well for things like agents,” said Sam Witteveen, a machine learning (ML) developer who works on AI agents that are increasingly orchestrating enterprise applications.

As part of its release, DeepSeek distilled its own reasoning capabilities onto a number of smaller models, including open-source models from Meta’s Llama family and Alibaba’s Qwen family, as described in its paper. It’s these smaller models that can then be optimized for specific tasks. This trend toward smaller, faster models to serve custom-built needs will accelerate: Eventually there will be armies of them. “We are starting to move into a world now where people are using multiple models. They’re not just using one model all the time,” said Witteveen. And this includes the low-cost, smaller closed-source models from Google and OpenAI as well. “That means models like Gemini Flash, GPT-4o Mini, and these really cheap models actually work really well for 80% of use cases.”

If you work in an obscure domain, and have resources: Use SFT…

After the distilling step, enterprise companies have a few options to make sure the model is ready for their specific application.
If you’re a company in a very specific domain, where details are not on the web or in books — which large language models (LLMs) typically train on — you can inject it with your own domain-specific data sets with SFT. One example would be the ship container-building industry, where specifications, protocols and regulations are not widely available. DeepSeek showed that you can do this well with “thousands” of question-answer data sets. For an example of how others can put this into practice, IBM engineer Chris Hay demonstrated how he fine-tuned a small model using his own math-specific datasets to achieve lightning-fast responses — outperforming OpenAI’s o1 on the same tasks. (View the hands-on video here.)

…and a little RL

Additionally, companies wanting to train a model with additional alignment to specific preferences — for example, making a customer support chatbot sound empathetic while being concise — will want to do some RL. This is also good if a company wants its chatbot to adapt its tone and recommendations based on user feedback. As every model gets good at everything, “personality” is going to be increasingly big, Wharton AI professor Ethan Mollick said on X.

These SFT and RL steps can be tricky for companies to implement well, however. Feed the model with data from one specific domain area, or tune it to act a certain way, and it suddenly becomes useless for doing tasks outside of that domain or style.

For most companies, RAG will be good enough

For most companies, however, RAG is the easiest and safest path forward. RAG is a relatively straightforward process that allows organizations to ground their models with proprietary data contained in their own databases — ensuring outputs are accurate and domain-specific. Here, an LLM feeds a user’s prompt into vector and graph databases to search for information relevant to that prompt. RAG processes have gotten very good at finding only the most relevant content. This approach also helps counteract some of the hallucination issues associated with DeepSeek, which currently hallucinates 14% of the time compared to 8% for OpenAI’s o3 model, according to a study done by Vectara, a vendor that helps companies with the RAG process.

The combination of distilled models plus RAG is where the magic will come for most companies. It has become so incredibly easy to do, even for those with limited data science or coding expertise. I personally downloaded the DeepSeek distilled 1.5B Qwen model, the smallest one, so that it could fit nicely on my MacBook Air. I then loaded up some PDFs of job applicant resumes into a vector database, then asked the model to look over the applicants to tell me which ones were qualified to work at VentureBeat. (In all, this took me 74 lines of code, which I basically borrowed from others doing the same.) I loved that the DeepSeek distilled model showed its thinking process behind why or why not it recommended each applicant —
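A minimal sketch of the kind of local setup described above (a small distilled model plus a vector store over a few PDFs) might look roughly like the following. It is an assumption-laden illustration, not the author's actual 74 lines: it presumes Ollama is running locally with a distilled DeepSeek model pulled, and it uses the chromadb and pypdf packages; the file names and the model tag are placeholders.

```python
# Rough sketch of a local RAG loop over resume PDFs, assuming Ollama is running
# locally with a distilled DeepSeek model pulled, plus the chromadb and pypdf
# packages installed. File names and the model tag are illustrative placeholders.
import chromadb
import ollama
from pypdf import PdfReader

MODEL = "deepseek-r1:1.5b"  # assumed local tag for the distilled 1.5B Qwen model

# 1. Extract text from each resume PDF.
resumes = {}
for path in ["resume_a.pdf", "resume_b.pdf", "resume_c.pdf"]:
    reader = PdfReader(path)
    resumes[path] = "\n".join(page.extract_text() or "" for page in reader.pages)

# 2. Index the resumes in an in-memory vector store (Chroma embeds them by default).
client = chromadb.Client()
collection = client.create_collection("resumes")
collection.add(ids=list(resumes.keys()), documents=list(resumes.values()))

# 3. Retrieve the resumes most relevant to the question.
question = "Which applicants are qualified to work as a technology reporter?"
hits = collection.query(query_texts=[question], n_results=3)
context = "\n\n---\n\n".join(hits["documents"][0])

# 4. Ask the local reasoning model, grounding it in the retrieved resumes.
response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user",
               "content": f"Using only these resumes:\n{context}\n\nQuestion: {question}"}],
)
print(response["message"]["content"])  # includes the model's visible reasoning
```

In practice you would chunk long documents and tune retrieval, but the shape of the loop is the same: extract, index, retrieve, then generate.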

DeepSeek’s R1 and OpenAI’s Deep Research just redefined AI — RAG, distillation, and custom models will never be the same Read More »

A Simple Definition Of “Platform”

“The only simplicity for which I would give a straw is that which is on the other side of the complex.” — Oliver Wendell Holmes Jr.*

At Forrester, we’ve periodically debated the meaning of the word “platform,” and it’s been challenging. Common ground has eluded those who cover ecosystems such as Amazon and Salesforce versus those covering platform engineering. Recently, we’ve been discussing this common definition of platform: “a product that supports the creation and/or delivery of other products.” The following diagram illustrates this concept:

The acid test for a unified definition of “platform”: What can we say that would be true both of the Amazon retail ecosystem as well as Amazon Web Services? Well, what do a new Amazon storefront and a new AWS account have in common? Both of them are going to require a lot more investment by their owners to deliver any value. An empty Amazon storefront? You need to figure out your product mix, supply chain, pricing, marketing, etc. Amazon gives you a lot of help, but you have much work ahead of you in configuring the platform for value. An AWS account? Empty EC2 virtual machines or Lambda functions? Not doing anyone much good until you install and run software and surround those workloads with a lot of additional capabilities.

So we can say that platforms, in general, require further investment, and the result of such investment is typically value-generating capability. It’s also well established that platforms are products (see Team Topologies and other sources). Therefore, in a world pivoting to the product model, it seems reasonable to simply say that the platform is a product that is creating, or supporting the delivery of, other products.

We also see platforms as either “infrastructure” or “business.” Sometimes a given vendor provides both — Salesforce with Force.com as an infrastructure platform (a platform as a service, in the classic definition), Agentforce for CRM, etc. Note that both require serious investment to get going (and this is not a criticism of Salesforce; it’s just a general observation that you’re not going to have a functioning CRM capability without investing substantial setup effort). The boundary here is simple: Infrastructure is business-agnostic (in general, it could work in various industry verticals), while a business platform embeds business-meaningful semantics in the form of APIs, data, or services. Customer relationship management, supply chain, pricing, payments, sales funnels — these are all business-specific concepts, and if that’s what’s on offer, you have a business platform. (Some nuance in the above diagram: Business platforms may support constructed apps or be directly configured for consumer access, but in either case, it’s effort, and for me, it’s “application” by definition if the end consumer is interacting directly.)

Finally, I can already feel the eyebrows raising at the inclusion of “application.” I’ll be talking more about this as we update Forrester’s Four-Lifecycle Model, but for now, I’ll just say: If platforms are “products,” then we need a specific label for products that are not platforms (data geeks will recognize the subtyping problem). And with due respect to Team Topologies, I have not seen the term “stream-aligned” get traction in portfolio management. Conversely, the term “application” is here to stay and has a reasonably consistent industry meaning, at least in the discussions I have with IT leaders — more on this later.
Finally, this model is part of the Forrester Platform Engineering Capability Model, just released last week. I’ll be doing another blog on the core of that work. Also, be sure to check out Embrace Platform Org Structures To Break Down Silos And Deliver Scale, also just out this month, which I coauthored with Manuel Geitz! *Wikiquote notes: “Often quoted as ‘I wouldn’t give a fig for the simplicity on this side of complexity; I would give my right arm for the simplicity on the far side of complexity’ and attributed to Oliver Wendell Holmes, Sr.” source

A Simple Definition Of “Platform” Read More »

FTC's Holyoak Has Her Eyes On DeepSeek

By Bryan Koenig (February 21, 2025, 10:24 PM EST) — Federal Trade Commission member Melissa Holyoak suggested Friday that DeepSeek, the Chinese artificial intelligence startup whose rise has roiled AI markets, could have competed unfairly if it really trained its model using ChatGPT in violation of OpenAI’s policies, as has been suggested…. source

FTC's Holyoak Has Her Eyes On DeepSeek Read More »

Is a Small Language Model Better Than an LLM for You?

While it’s tempting to brush aside seemingly minimal AI model token costs, that’s only one line item in the total cost of ownership (TCO) calculation. Still, managing model costs is the right place to start in getting control over the end sum. Choosing the right-sized model for a given task is imperative as the first step. But it’s also important to remember that when it comes to AI models, bigger is not always better and smaller is not always smarter.

“Small language models (SLMs) and large language models (LLMs) are both AI-based models, but they serve different purposes,” says Atalia Horenshtien, head of the data and AI practice in North America at Customertimes, a digital consultancy firm. “SLMs are compact models, efficient, and tailored for specific tasks and domains. LLMs are massive models, require significant resources, shine in more complex scenarios and fit general and versatile cases,” Horenshtien adds.

While it makes sense in terms of performance to choose the right size model for the job, there are some who would argue model size isn’t much of a cost argument, even though large models cost more than smaller ones. “Focusing on the price of using an LLM seems a bit misguided. If it is for internal use within a company, the cost usually is less than 1% of what you pay your employees. OpenAI, for example, charges $60 per month for an Enterprise GPT license for an employee if you sign up for a few hundred. Most white-collar employees are paid more than 100x that, and even more as fully loaded costs,” says Kaj van de Loo, CPTO, CTO, and chief innovation officer at UserTesting.

Instead, this argument goes, the cost should be viewed in a different light. “Do you think using an LLM will make the employee more than 1% more productive? I do, in every case I have come across. It [focusing on the price] is like trying to make a business case for using email or video conferencing. It is not worth the time,” van de Loo adds.

Size Matters but Maybe Not as You Expect

On the surface, arguing about model sizes seems a bit like splitting hairs. After all, a small language model is still typically large. An SLM is generally defined as having fewer than 10 billion parameters. But that leaves a lot of leeway too, so sometimes an SLM can have only a few thousand parameters, although most people will define an SLM as having between 1 billion and 10 billion parameters. As a matter of reference, medium language models (MLMs) are generally defined as having between 10 billion and 100 billion parameters, while large language models have more than 100 billion parameters. Sometimes MLMs are lumped into the LLM category too, because what’s a few extra billion parameters, really? Suffice it to say, they’re all big, with some being bigger than others.

In case you’re wondering, parameters are internal variables or learning control settings. They enable models to learn, but adding more of them adds more complexity too. “Borrowing from hardware terminology, an LLM is like a system’s general-purpose CPU, while SLMs often resemble ASICs — application-specific chips optimized for specific tasks,” says Professor Eran Yahav, an associate professor in the computer science department at the Technion – Israel Institute of Technology and a distinguished expert in AI and software development.
Yahav has a research background in static program analysis, program synthesis, and program verification from his roles at IBM Research and Technion. Currently, he is CTO and co-founder of Tabnine, an AI-coding assistant for software developers.

To reduce issues and level up the advantages of both large and small models, many companies do not choose one size over the other. “In practice, systems leverage both: SLMs excel in cost, latency, and accuracy for specific tasks, while LLMs ensure versatility and adaptability,” adds Yahav.

As a general rule, the main differences between model sizes pertain to performance, use cases, and resource consumption levels. But creative use of any sized model can easily smudge the line between them. “SLMs are faster and cheaper, making them appealing for specific, well-defined use cases. They can, however, be fine-tuned to outperform LLMs and used to build an agentic workflow, which brings together several different ‘agents’ — each of which is a model — to accomplish a task. Each model has a narrow task, but collectively they can outperform an LLM,” explains Mark Lawyer, RWS‘ president of regulated industries and linguistic AI.

There’s a caveat in defining SLMs versus LLMs in terms of task-specific performance, too. “The distinction between large and small models isn’t clearly defined yet,” says Roman Eloshvili, founder and CEO of XData Group, a B2B software development company that exclusively serves banks. “You could say that many SLMs from major players are essentially simplified versions of LLMs, just less powerful due to having fewer parameters. And they are not always designed exclusively for narrow tasks, either.”

The ongoing evolution of generative AI is also muddying the issue. “Advancements in generative AI have been so rapid that models classified as SLMs today were considered LLMs just a year ago. Interestingly, many modern LLMs leverage a mixture-of-experts architecture, where smaller specialized language models handle specific tasks or domains. This means that behind the scenes SLMs often play a critical role in powering the functionality of LLMs,” says Rogers Jeffrey Leo John, co-founder and CTO of DataChat, a no-code, generative AI platform for instant analytics.

In for a Penny, in for a Pound

SLMs are the clear favorite when the bottom line is the top consideration. They are also the only choice when a small form factor comes into play. “Since the SLMs are smaller, their inference cycle is faster. They also require less compute, and they’re likely your only option if you need to run the model on an
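As a rough, hypothetical illustration of the right-sizing argument in this article, a simple router can send narrow, well-defined requests to a cheap small model and reserve the large model for open-ended work; the model names and per-token prices below are invented for the example, not real vendor pricing.

```python
# Hypothetical illustration of right-sizing: route narrow, well-defined tasks to a
# small model and escalate everything else to a large model. Names and per-1M-token
# prices are invented for the example.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    price_per_million_tokens: float  # USD, hypothetical

SMALL = Model("slm-3b", 0.10)
LARGE = Model("llm-frontier", 10.00)

# Tasks the small model is known (from offline evaluation) to handle well.
SLM_TASKS = {"classify_ticket", "extract_fields", "summarize_short"}

def pick_model(task: str) -> Model:
    """Prefer the small model for narrow tasks; fall back to the large one."""
    return SMALL if task in SLM_TASKS else LARGE

def estimated_cost(task: str, tokens: int) -> float:
    model = pick_model(task)
    return tokens / 1_000_000 * model.price_per_million_tokens

# 10M tokens of ticket classification costs about $1 on the small model,
# versus about $100 if everything were sent to the large model.
print(pick_model("classify_ticket").name, estimated_cost("classify_ticket", 10_000_000))
print(pick_model("draft_rfp_response").name, estimated_cost("draft_rfp_response", 10_000_000))
```

The interesting work is in deciding which tasks belong on the small-model list, which is where offline evaluation against the specific use case comes in.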

Is a Small Language Model Better Than an LLM for You? Read More »

The Ultimate Guide to Benefits

What Are Employee Benefits

Employee benefits are non-wage compensations you provide to your employees alongside their regular salaries or wages. These perks are designed to support their well-being, boost job satisfaction, and position your company as a desirable workplace. Benefits can be mandatory, as required by law, or voluntary, offered by you to gain a competitive edge in attracting and retaining talent. Employees may pay a small portion of the cost, with you as their employer covering the rest.

The four main categories of employee benefits, typically offered on a monthly basis, are:

Insurance plans, such as life or health insurance
Retirement plans, such as 401(k) plans
Additional compensation plans, such as bonuses
Time off policies, such as paid vacation or sick days

Most benefits are subject to income tax withholding and employment taxes because the IRS considers them part of the employee’s gross income for services rendered. When factoring in mandatory and extra benefits, they can account for up to 9% of your total compensation costs per employee, according to the Bureau of Labor Statistics.

Why Are Employee Benefits Important

Employee benefits contribute to both your employees’ well-being and your company’s success. For example, offering generous employee benefits packages can help you manage a positive employee lifecycle by attracting and recruiting top talent, improving employee health and job satisfaction, and retaining top talent. These benefits enhance your company’s competitive edge and productivity. Let’s take a look at each of these advantages.

Attracting Top Talent

Up to 34% of employees see benefits as the second biggest motivator to look for another job. Clearly, employers who offer competitive benefits packages are more likely to draw in jobseekers, lending to their overall competitive edge in their industries.

Enhancing Employee Satisfaction

In a 2022 study conducted by LIMRA, 63% of employees said their benefits packages contribute to their decision to stay with a company. This means the benefits you offer your employees directly correlate with whether they are satisfied enough with their jobs. In addition, the number of benefits you offer may contribute to higher on-the-job satisfaction. In the same study, two-thirds of employees offered six or more benefits said they were satisfied, compared to only three out of 10 of those offered one to three benefits.

Improving Employee Health

Health insurance is mandatory under the Affordable Care Act (ACA) for companies with more than 50 full-time or full-time-equivalent employees. It also benefits businesses by ensuring healthier, more productive teams. Employees not worrying about a sick child or other dependent at home are often more focused at work. Many health plans come with preventative care that can hinder the development of serious personal or familial illnesses.
These plans may reduce underproductivity and minimize excessive time off, saving your company from financial and productivity loss.

Strengthening Employee Retention

Employees who can plan their retirements with your company are incentivized to stay long-term. You can encourage this kind of loyalty by offering pension and 401(k) retirement plans, among others. A pension plan could offer employees a retirement income, while a 401(k) plan offers an employer contribution to an employee retirement savings plan.

Employee Benefits Types

Employee benefits are divided into four categories. Below is an overview of each, followed by a table listing benefit options for each category.

Insurance Benefits

Insurance benefits may include health, dental, vision, life, and disability insurance. But to stand out in a competitive job market, you can also include these additional benefits:

Accidental death and dismemberment (AD&D) policies: These plans are often add-ons to health or life insurance policies. They cover expenses related to the policyholders’ accidental deaths or dismemberments, such as if the holder loses a limb, vision, hearing, or speech in an accident.

Short-term disability policies: These policies help employees keep afloat if they experience a sudden but transitory disability, such as a non-work illness, injury, or other medical condition.

Flexible spending accounts (FSAs): These accounts are often part of healthcare plans and allow employees to set aside part of their pre-tax salary for healthcare expenses and co-pays, self-care expenses, and even child care.

Long-term care insurance: This policy pays employees who need long-term care. It can cover assistance with everyday tasks like bathing, dressing, and eating; adult day care services; transportation; or a place in an assisted living or nursing home.

Retirement Plans

For companies of a certain size, retirement plans are part of the legally mandated offering. However, even smaller companies that are not required to provide them often do so to drive employee satisfaction and retention. Some examples of retirement plans you can offer as employee benefits include:

401(k) plans: Employees contribute a portion of each paycheck to save for retirement. You may also match these contributions, provide a partial matching program, or offer profit sharing. These 401(k) plans often come with employer tax benefits.

SIMPLE IRA plans: Similar to 401(k) plans, these plans are usually offered by smaller employers. Employees can contribute funds from their paychecks, while your company can agree to match their contributions. Funds are contributed on a pre-tax basis, and employers also enjoy tax benefits from offering SIMPLE IRA plans.

Employee stock ownership plans (ESOPs): These plans award employees ownership of the company in the form of stock at retirement. In doing so, they incentivize active employees to work toward the company’s profitability and stay with the company until retirement.

Additional Compensation Plans

While employee benefits are often interchangeably referred to as “fringe benefits,” the latter are offered outside of the company’s standard or legally mandated benefits package. Even though some of these benefits are not provided in monetary form,

The Ultimate Guide to Benefits Read More »

Qualcomm, Intel, and Others Form Ambient IoT Coalition

Organizations including Qualcomm and Wiliot have announced the formation of the Ambient IoT Alliance, a multi-standard ecosystem of ambient IoT manufacturers, suppliers, integrators, operators, users, and customers.

Ambient IoT is an ecosystem for devices that draw energy from ambient radio waves, light, motion, heat, or any other widely available, pervasive source. Bluetooth, 5G Advanced, and 802.11bp could help support this class of devices, which offers high scalability and, potentially, lower costs than non-ambient versions, the alliance said. The term ambient IoT could apply to a wide variety of “battery-less things,” such as sensors for location, temperature, and humidity. In its press release, the Ambient IoT Alliance said the ecosystem is not meant to replace any other standardization activities; instead, the group will promote ambient IoT and contribute documentation, support, and use cases to standardization efforts where appropriate.

Who are the founding members of the Ambient IoT Alliance?

Founding members of the Ambient IoT Alliance include:

Atmosic
Infineon Technologies AG
Intel
PepsiCo
Qualcomm
VusionGroup
Wiliot

The group’s founders hope businesses, telcos, technology vendors, and standards bodies will join to speed up the formal, global standardization of ambient IoT already in progress among the IEEE (Wi-Fi), Bluetooth SIG, and 3GPP (5G Advanced).

What will the Ambient IoT Alliance enable?

The Alliance’s vision is to connect the wireless radios in mobile devices and appliances to more IoT-networked devices. The Ambient IoT Alliance sees this method being used in “supply chains, retail channels, and healthcare delivery services.” The method is also a chance to gather the large sets of data artificial intelligence can sort through. Ambient IoT could realize the ROI potential of AI and revolutionize supply chains or customer experiences, the group proposed.

“Today, artificial intelligence is running out of reliable data to train on,” said Steve Statler, spokesperson for the Ambient IoT Alliance. “This is a danger – the technology can end up eating itself by training itself on the output from AI. The data coming from the ambient IoT offers orders of magnitude more quantity and also an increasing quality, because it’s coming from real items.”

Ambient IoT potentially offers options for energy reduction

As batteryless devices, ambient IoT components offer a way to reduce energy use and, at best, minimize maintenance time. “Ambient IoT is well aligned with Infineon’s strategic focus on IoT and Energy leadership and our motto of driving digitalization and decarbonization,” said Dr. Kamesh Medapalli, senior vice president, Connected Secure Systems, Infineon Technologies.

Specifically, proponents of ambient IoT say it can “rearchitect” the way wireless tracking devices use energy. “Ambient IoT is the key to sustainable IoT adoption,” said Atmosic co-founder David Su. “It allows us to rearchitect wireless tracking solutions to either use very little power or harvested energy, so everything can remain connected continuously and companies can operate at maximum efficiency.”

“Eliminating batteries and eliminating cabling brings down the cost, and because this is a new generation of IoT, Ambient IoT tags can cost as little as ten cents with the technology being produced in 2025,” said Statler. source

Qualcomm, Intel, and Others Form Ambient IoT Coalition Read More »