CIO

Buyer’s remorse: What to do when an IT purchase isn’t a great fit

The vendor might offer you an “out” on the contract, with the proviso that you pay a cancellation penalty; or there might be other products or services the vendor offers that would be useful, creating a possible “trade-in” scenario in which the original fee can be applied to the purchase of the new asset. This can be a win-win for the vendor and your company: the vendor doesn’t lose a sale or a client’s goodwill, and you get a solution that works without incurring further loss.

Sell, repurpose, donate

There are cases where companies have been able to sell or trade unwanted assets to other companies or to third-party vendors. There is usually some monetary loss from the first investment, but the seller can recover a substantial amount of the original price paid. This works well for older hardware (e.g., disk drives, processors) that third-party equipment suppliers are eager to buy, re-certify, and sell.

Another route, as we took with the marketing system we purchased and regretted, is to seek ways to repurpose the asset. Hardware offers the most likely chance for a repurpose win. In our case, with the server redeployed for financial reporting, only the software ended up as a loss.

Unwanted or aging assets can also be donated to nonprofit groups, and you can use the donation as a charitable write-off. This isn’t a great way to “monetize” a failed asset, but it is a way to make something of your buyer’s remorse, including helping a charity of your choosing to improve its technology stack.


Your enterprise business needs an AI policy. Here’s how to build it out

Scalability is essential — not just across geographies, but across roles and technologies. The risks introduced by a generative AI content tool differ from those posed by predictive analytics in finance or compliance. Your policy should reflect that nuance, flexing where needed while maintaining core principles.

And while such principles — transparency, accountability, fairness — should remain steady, your policy must be built for movement. AI evolves rapidly, as do the legal and operational risks accompanying it. Governance must include mechanisms for iteration, feedback, and change so that your policy is resilient over time.

Equip people with awareness, and they will use AI responsibly

Even the most thoughtful AI policy will fail if the people it’s meant to guide don’t know they’re using AI in the first place. Today, AI is baked into everyday tools — sales enablement platforms, writing assistants, chatbots, CRM plug-ins — but many employees don’t recognize it. That lack of awareness is more than a training issue. It’s a risk exposure.


The 10 fastest growing US tech hubs for IT talent

Percent higher than national median: 106%

Austin

Austin ranked in the top quartile for cost of living and second for tech wage premium. In addition to tech, the top industries driving tech hiring include professional, scientific, and technical services; public sector; and finance and insurance. Net tech employment in Austin is projected to grow 4.4% and currently makes up just over 13% of the overall workforce, with an economic impact of $51.2 billion in 2024. Some of the top tech companies in Austin include Apple, Tesla, Google, Dell, Amazon, Samsung, AlertMedia, BAE Systems, and General Motors.

Median tech wage: $118,888


New Report Reveals Just 10% of Employees Drive 73% of Cyber Risk

Living Security, the global leader in Human Risk Management (HRM), today released the 2025 State of Human Cyber Risk Report, an independent study conducted by leading research firm Cyentia Institute. The report provides an unprecedented look at behavioral risk inside organizations and reveals how strategic HRM programs can reduce that risk 60% faster than traditional methods. Drawing on behavioral data from more than 100 enterprises and hundreds of millions of user events, the study offers a first-of-its-kind, data-driven map of where cyber risk actually lives in the workforce and how leading organizations are shrinking it.

The report confirms a long-suspected but rarely proven reality: a small fraction of employees (just 10%) are responsible for 73% of risky behavior. According to the findings, it’s clear that protecting the enterprise in 2025 means managing people, not just systems.

“Security teams have always known the human factor plays a critical role in breaches, but they’ve lacked the visibility to act on it,” said Ashley Rose, CEO and Co-founder of Living Security. “Until now, most insights have relied on anecdotal evidence or narrow indicators like phishing clicks. This report changes that by providing hard data that shows exactly where risk lives, and what actually works to reduce it.”

Key Findings from the Report:

- Human risk is concentrated, not widespread: Just 10% of employees are responsible for nearly three-quarters (73%) of all risky behavior.
- Visibility is alarmingly low: Organizations relying solely on security awareness training (SAT) have visibility into only 12% of risky behavior, compared to 5X that for mature HRM programs.
- Risk is often misidentified: Contrary to popular belief, remote and part-time workers are less risky than their in-office peers.
- HRM works: Companies using Living Security’s Unify platform cut their risky user population by 50% and reduced high-risk behavior duration by 60%.
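The “10% of users drive 73% of risk” finding is a concentration measurement: rank users by risky-event count and ask what share of events the top slice accounts for. A minimal sketch of that calculation, using invented per-user counts (the data and numbers below are illustrative, not from the report):

```python
# Hypothetical sketch: measure how concentrated risky behavior is across users.
# The event counts below are invented for illustration.
def risk_concentration(events_per_user, top_fraction=0.10):
    """Share of risky events attributable to the top `top_fraction` of users."""
    ranked = sorted(events_per_user, reverse=True)          # riskiest users first
    top_n = max(1, int(len(ranked) * top_fraction))          # size of the top slice
    return sum(ranked[:top_n]) / sum(ranked)

# Ten users with heavily skewed risky-event counts (total = 100 events)
counts = [40, 33, 5, 5, 4, 4, 3, 2, 2, 2]
share = risk_concentration(counts)  # top 10% here is just the single riskiest user
print(f"Top 10% of users account for {share:.0%} of risky events")  # 40%
```

The same ranking is what lets an HRM program target interventions at a small population instead of blanket training for everyone.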
From Awareness to Action: Making Human Risk Measurable

Unlike traditional reports that focus on external threats or compliance audits, the 2025 State of Human Cyber Risk Report centers on internal risk behaviors and how they change with the right interventions. The report includes:

- A detailed breakdown of what constitutes human risk across behaviors, events, and attributes
- Analysis of how risk is distributed across roles, industries, and access levels
- Persona-based insights using behavioral alignment models
- Proof that HRM initiatives, especially behavior-triggered action plans, dramatically reduce organizational risk exposure

A Call to Cybersecurity Leaders

With budgets tightening and threats evolving, the stakes are clear: cybersecurity can no longer rely on awareness alone. Leaders must prioritize behavioral visibility, targeted action, and ROI-driven results.

“Cybersecurity is no longer just about technology, it’s about behavior,” said Rose. “If we don’t understand who our riskiest users are, why they’re at risk, and how to help them improve, we’ll continue chasing symptoms instead of solving the root problem.”

Looking Ahead

These findings come at a time when AI agents and digital co-workers are entering the enterprise and the attack surface is evolving fast. As pioneers in Human Risk Management, Living Security sees this evolution clearly: the future of cyber resilience isn’t just about managing human risk, it’s about managing behavioral risk, wherever it originates. This report not only celebrates measurable progress on the human side, but also signals what comes next: a future where enterprises govern both humans and agents through shared visibility, standards, and accountability.

About the Report

The 2025 State of Human Cyber Risk Report was produced in partnership with the Cyentia Institute using anonymized data from Living Security’s Unify platform over the last several years.
It reflects hundreds of millions of real-world user events and decisions, collected and analyzed to provide a clear picture of how human risk shows up and how it can be reduced. The full report is available for download at: https://www.livingsecurity.com/2025-human-risk-report-key-cybersecurity-insights. For a deeper look at the findings, join a live webinar with Cyentia researchers and Living Security CEO Ashley Rose on July 23 at 3PM ET / 12PM PT.

About Cyentia Institute

The Cyentia Institute is a renowned research firm committed to providing high-quality, data-driven insights to help organizations enhance cybersecurity and effectively manage information risks. Through collaborations with leading industry and government entities, Cyentia continually advances cybersecurity knowledge and practice.

About Living Security

Living Security is the global leader in Human Risk Management (HRM), providing a risk-informed approach that meets organizations where they are — whether that’s starting with AI-based phishing simulations, intelligent behavior-based training, or implementing a full HRM strategy that correlates behavior, identity, and threat data streams. Living Security’s Unify platform delivers 3X more visibility into human risk than traditional, compliance-based training platforms by eliminating siloed data and integrating across the security ecosystem. The platform pinpoints the 8–12% of users who pose the greatest risk and automates targeted interventions in real time, reducing exposure to human risk by over 90%. Powered by AI, human analysis, and industry-wide threat telemetry, Unify transforms fragmented signals into intelligent, adaptive defense.

Named a Global Leader in Human Risk Management by Forrester and trusted by enterprises like Unilever, Mastercard, Merck, and Abbott Labs, Living Security helps security teams move from awareness to action, driving measurable behavior change and proving impact at every stage of the journey.

For more information, visit livingsecurity.com or follow Living Security on LinkedIn.

Contact
Living Security Press
[email protected]


7 things you need to know about AI and the data center

Gulyani says that many organizations are starting to address this by deploying high-capacity, low-latency, lossless data center fabrics tailored to AI. Nokia, he says, has worked with hyperscaler nScale and cloud provider CoreWeave on next-generation interconnect solutions, including 800G IP and optical networking. “Now is the time for the telecoms industry to rethink network design — prioritizing scalability, flexibility, and automation — to prevent them from becoming a bottleneck in AI strategies,” Gulyani says.

3. Cloud and hybrid storage are key parts of the puzzle

As AI workloads evolve, even organizations committed to on-premises infrastructure are leaning into public cloud and hybrid storage strategies. Anant Adya, executive vice president and head of Americas Delivery at Infosys, says that successful AI data center modernization efforts entail “shifting workloads to the public cloud, and adopting hybrid storage. These moves boosted agility and cut energy use and cost.”

This blend of on-premises and cloud-based compute isn’t just about performance — it’s also about access. For organizations without massive infrastructure budgets, a hybrid approach can be the difference between riding the AI wave and being left behind. Classroom365’s Friend has worked with customers to navigate these limitations. “A lot of the schools we help, especially in under-funded councils, don’t have the means to rebuild kit or hire local AI brains,” he says. “But they’re not excluded.”


Moving AI workloads off the cloud? A hefty data center retrofit awaits

“If you have a very specific use case, and you want to fold AI into some of your processes, and you need a GPU or two and a server to do that, then that’s perfectly acceptable,” he says. “What we’re seeing, kind of universally, is that most of the enterprises want to migrate to these autonomous agents and agentic AI, where you do need a lot of compute capacity.”

Racks of brand-new GPUs, even without new power and cooling infrastructure, can be costly, and Schneider Electric often advises cost-conscious clients to look at previous-generation GPUs to save money. GPU and other AI-related technology is advancing so rapidly, however, that it’s hard to know when to put down stakes.

“We’re kind of in a situation where five years ago, we were talking about a data center lasting 30 years and going through three refreshes, maybe four,” Carlini says. “Now, because it is changing so much and requiring more and more power and cooling, you can’t overbuild and then grow into it like you used to.”


The cross-functional communication required to deal with short certificate lifecycles

“They can spend their time advising internal customers,” he adds. “They can go to their web server team, their F5 team, or their ATM team and say, we planned for this change a year and a half ago. You don’t have to do anything; we just want to let you know.”

Knock-on effects

Even organizations already managing and renewing TLS certificates at scale need to pay attention, says Chris Swan, a senior engineer at Atsign. He describes the changes as relatively straightforward for the systems he manages, but not entirely anxiety-free, since a shorter renewal window — just 17 days by 2029 — has other implications. Monthly renewals may impact patching and restart schedules.

“While IIS can take a cert renewal on the fly and you don’t need to restart services, for some applications like Tomcat, you have to restart services,” says Jeff Hagen, PKI and IAM security architect at Hyland. “We currently schedule that with our patch window, but we’re likely going to need to do something different because if maintenance windows are monthly, that might be cutting it tight.”
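Short lifetimes turn renewal into a scheduling problem: teams need to renew well before expiry so a failed renewal still leaves time to retry inside the window. A minimal sketch of that arithmetic, assuming a 47-day certificate lifetime and a renew-with-a-third-of-life-remaining policy (both are illustrative assumptions, not figures from the article):

```python
# Hypothetical sketch: plan renewals for short-lived TLS certificates.
# The 47-day lifetime and one-third-of-lifetime safety buffer are
# illustrative assumptions, not prescriptions from the article.
from datetime import date, timedelta

def next_renewal(issued: date, lifetime_days: int, buffer_fraction: float = 1 / 3) -> date:
    """Renewal date that leaves `buffer_fraction` of the lifetime as retry margin."""
    buffer_days = int(lifetime_days * buffer_fraction)      # margin before expiry
    return issued + timedelta(days=lifetime_days - buffer_days)

issued = date(2029, 3, 1)                                   # cert expires 2029-04-17
renew_by = next_renewal(issued, lifetime_days=47)
print(renew_by)  # 2029-04-02: renew 15 days before the 47-day cert expires
```

With monthly maintenance windows, a renewal date like this can easily land between windows, which is exactly the tension Hagen describes for services that need a restart to pick up a new certificate.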


31% of employees are ‘sabotaging’ your gen AI strategy

“Executives sometimes look to spin layoffs — it’s a rationalization — as, ‘We are not doing this because the company is in trouble. No, we are doing [the layoffs] because AI is making us so efficient that we don’t need as many people anymore.’ Instead of admitting that they over-hired, they prefer to say, ‘We are using AI as mature and tech-savvy leaders,’” he says.

A data analyst overseeing AI integration at an $80 billion retail chain — who asked that his name and employer not be stated — said he has directly seen acts of AI pushback. Although “outright sabotage is rare, I’ve observed more subtle forms of pushback, such as teams underutilizing AI features, reverting to manual processes, or selectively ignoring AI-generated recommendations without clear justification. In some cases, it’s rooted in fear: Employees worry that increased automation will reduce their role or make their expertise less valued,” the data analyst says.

But “what appears to be resistance is actually a cry for inclusion in the change process. People want to understand how AI supports their work, not just that it’s being imposed on them.”


Autonomous AI agents = Autonomous security risk

Upping the pressure?

The risks from AI agents will grow super fast because AI itself changes so fast. Rather than one human stealing employee credentials — or 50 machines orchestrated by a hacker — there will be 50,000 AI agents. They’ll move fast, learn, and pivot — far faster than humans. Any semblance of control we think we have over data and systems will be fiction.

Back to basics

As such, the next enterprise security frontier isn’t only defending against human threats — it’s also about securing the exploding universe of autonomous AI agents. CEOs and CIOs need to double down on the basics — ideally before AI agents are deployed. Needed steps include risk assessment around:

- Employees. At least 15% of employees “routinely access” generative AI platforms on their corporate devices, a Verizon survey shows. This greatly increases the risk of data leaks and open doors. Figure out what’s going on in your company, who’s using what, and what guardrails are needed.
- Agent permissions. If you’ve already got AI agents deployed, what do they have permission to do and what data can they access? AI agents often rely on credentials or API tokens initially provisioned with overly broad permissions for simplicity or operational speed. Over time, these broad permissions create significant security risk, as agents perform tasks and access critical resources far beyond their actual business requirements.
- Data. What data is being uploaded and where? How strong is your data governance, meaning: do you know where the data came from, and when, how, and by whom it was changed? Agentic AI will exploit weak data governance like never before because AI’s ability to explore data is unprecedented.
- Vendors. How are vendors using and securing AI agents? Where are they in your supply chain? Look for AI agents to have job functions, like ordering parts when supplies get low. You want vendors to profile agents so they’ll be more likely to spot abnormal behaviors. For instance, if the parts agent asks for supplier payment information, that’s a red flag. Press vendors for audits and metrics that show results.

In short, securing AI agents is like securing any employee: demand full visibility into human and non-human identities, the ability to track interactions by AI agents back to their origin, and the ability to spot abnormal behaviors and detect unauthorized, anomalous, or risky actions by agents across cloud, SaaS, and hybrid infrastructures. By continuously auditing agent permissions, privileges, and interactions, companies will better enforce policies that minimize risk exposure.
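One concrete way to audit agent permissions is to compare each agent’s granted scopes against the scopes its documented job function actually needs, and flag the excess for revocation. A minimal sketch of that check; the agent name, scope strings, and required-scope map below are all invented for illustration:

```python
# Hypothetical sketch: flag AI-agent credentials whose granted scopes exceed
# the agent's documented job function. All names and scopes are invented.
REQUIRED_SCOPES = {
    # A parts-ordering agent only needs to read inventory and place orders.
    "parts-ordering-agent": {"inventory:read", "orders:write"},
}

def excess_scopes(agent: str, granted: set) -> set:
    """Return scopes granted beyond the agent's documented requirements."""
    return granted - REQUIRED_SCOPES.get(agent, set())

# An over-provisioned token, as described in the article's "agent permissions" step
granted = {"inventory:read", "orders:write", "payments:read", "users:admin"}
extra = excess_scopes("parts-ordering-agent", granted)
print(sorted(extra))  # ['payments:read', 'users:admin'] -> candidates to revoke
```

Run periodically against an inventory of agent credentials, a check like this catches the drift the article warns about, where tokens provisioned broadly “for simplicity” outlive their justification.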
