InformationWeek

How AI Can Speed Disaster Recovery

Disaster recovery technologies are designed to prevent or minimize the data loss and business disruption resulting from unexpected catastrophic events. This includes everything from hardware failures and local power outages to cyberattacks, natural disasters, civil emergencies, and criminal or military assaults. As AI continues to transform and enhance a seemingly endless array of tasks and functions, it should come as no surprise that the technology has caught the attention of disaster recovery professionals.

Preparation and Response

Joseph Ours, AI strategy director at Centric Consulting, says AI can assist disaster recovery in two essential areas: preparation and response. "In many respects, speeding disaster recovery means planning and preparing," he observes in an email interview. Ours notes that a growing number of government agencies and insurance companies are already routinely performing these tasks with AI assistance. "They use predictive and classification models to analyze historical data and environmental factors to determine potential risk."

AI-enabled resiliency planning provides speed and precision that traditional methods lack, says Stephen DeAngelis, president of Enterra Solutions, an AI-enabled transformation and intelligent enterprise planning platform provider. "AI's ability to process large volumes of data quickly allows it to detect anomalies and potential risks earlier," he explains in an online interview. Unlike conventional disaster recovery plans, AI-powered solutions are adaptive, updating in real time as conditions change. "This means companies can pivot their strategies almost immediately, reducing the time needed to return to normal operations and ensuring minimal disruption to the supply chain."

Automatic Detection

In businesses, AI-enhanced disaster recovery automatically detects anomalies, such as ransomware-corrupted data, allowing technicians to skip over unusable files and focus on clean, viable backups, says Stefan Voss, a vice president at data protection and security firm N-able. "This eliminates the time-intensive, manual review process that's standard in conventional recovery methods."

AI can also improve boot detection accuracy, ensuring that machines will bounce back successfully after recovery, Voss says in an email interview. "Well-trained AI models can significantly reduce false positives or negatives, enhancing technician confidence in the reliability and efficiency of the restored systems," he explains. "With AI-driven accuracy, organizations can recover systems faster, with fewer errors, and minimize downtime."
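As an illustration of the kind of backup anomaly detection Voss describes, one widely used signal is byte entropy: ransomware-encrypted files look statistically random, while typical documents do not. The sketch below is generic, not N-able's method; the directory path and threshold are assumptions, and real products combine entropy with many other signals (compressed formats also score high).

```python
import math
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; ~8.0 means near-random content."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def flag_suspect_backups(backup_dir: str, threshold: float = 7.9):
    """Yield backup files whose sampled bytes are near-random, a common
    signature of ransomware-encrypted data in an otherwise normal backup."""
    for path in Path(backup_dir).rglob("*"):
        if path.is_file():
            with path.open("rb") as f:
                sample = f.read(65536)  # first 64 KB is enough to score
            if shannon_entropy(sample) >= threshold:
                yield path

if __name__ == "__main__":
    for suspect in flag_suspect_backups("/backups/latest"):  # hypothetical path
        print(f"Skip for restore, review manually: {suspect}")
```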
AI solutions rely on access to high-quality data to generate accurate predictions. "When data is siloed or incomplete, models are likely to produce less reliable results," DeAngelis warns. To ensure success, he advises businesses to establish robust data management practices before implementing AI solutions. "Today, we're seeing innovators develop sophisticated techniques, such as advanced data modeling, to bridge critical data gaps and enhance AI accuracy."

Getting Started

An important first step toward using AI in disaster recovery is conducting a comprehensive assessment of current supply chain vulnerabilities. "Identify critical points of failure and gather historical data on past disruptions," DeAngelis suggests. Next, collaborate with an AI partner to build predictive models that simulate various disaster scenarios, such as geopolitical risks or extreme weather events. Focus on implementing AI tools that integrate seamlessly with existing systems, allowing for smooth data flows and real-time updates. "A phased approach is ideal, beginning with pilot projects and scaling up as the organization gains familiarity with the technology."

Voss says the next step should be identifying any existing challenges in the disaster recovery process. "For example, if your main goal is increasing recovery testing accuracy, look for AI tools designed to improve boot detection and guarantee reliable system restoration," he suggests. "On the other hand, if the goal is precisely detecting backup anomalies, focus on AI solutions that specialize in identifying compromised or corrupted data quickly and accurately."

After clearly defining the issue at hand, seek out the AI solution that will meet your needs, Voss advises. "Always start with your pain points and let AI provide the answer, not the other way around."

Challenges

AI disaster recovery can offer significant advantages, yet it also comes with several serious drawbacks. High development and integration costs can be a barrier, especially for small businesses, Voss says. "The skills shortage in AI expertise makes it difficult for organizations to develop or maintain AI-driven systems."

Remember, too, that even with well-trained models, AI is far from infallible. False positives or negatives can occur, potentially complicating recovery efforts, Voss warns. "Additionally, an over-reliance on AI can reduce human oversight, making it imperative to strike a balance between automation and manual processes."

Perhaps the biggest drawback is that some disasters arrive as unpredictable black swan-type events. "In this case, AI is neither a benefit nor contributor to the failure to respond because, by their very nature, humans would struggle to respond adequately as well," Ours says.

A Competitive Edge

A proactive investment in AI not only mitigates risk but can turn challenges into competitive advantages, DeAngelis says. He notes that by being prepared to adapt quickly when disruptions occur, enterprises can maintain continuity and even capture market share from less-prepared competitors. "As we've seen from recent events, such as the US port strike, hurricane-related supply chain impacts, and the ongoing pressures of inflation, businesses that leverage AI to build resilience are better positioned to thrive in uncertain environments."


7 Private Cloud Trends to Watch in 2025

There are better and worse ways to approach private cloud, which some companies are learning the hard way. While it's tempting to repatriate some things from public cloud to private cloud, it's better to do it with applied cloud learning versus a traditional infrastructure mindset.

"I'm seeing people increasingly wanting to find additional efficiency on premises. If I was to pick a word for 2025, it would be 'optimization.' Everyone's under a lot of pressure. They're trying to bring in new compute capabilities to their data center, including GPUs to support AI and more storage to support the data activities related to AI and [analytics]," says Hillery Hunter, CTO and GM of innovation at IBM and an IBM Fellow. "[P]rivate cloud is often used as a vehicle to reset the efficiency of an environment."

However, the on-premises environment may include many IT silos supporting different lines of business and equipment purchased for specific projects. When there's not a consistent control plane, the aggregate utilization of all the systems is lower than it needs to be because only certain applications or workloads run on configured environments. The goal now is to optimize that for better efficiency.

"[A] private cloud environment that is virtualized and offers container support can be used as a migration destination, still on premises, but then you have more people sharing a more consistent set of resources," says Hunter. "You're having people develop to a common set of templates in terms of the kind of system configuration. And while it takes work to get to that kind of environment, it can have huge payoffs in terms of the security [because] the configurations are much more consistent, the compliance overheads are lower, and the speed to get new capacity added to the environment [is greater]."

Following are some more private cloud trends in 2025.

1. Repatriating workloads

A lot of organizations are repatriating workloads to private cloud from public cloud, but Rick Clark, global head of cloud advisory at digital transformation solutions company UST, warns they aren't giving it much forethought, repeating the mistake many made when first migrating to public clouds. As a result, they're not getting the ROI they hope for.

"We haven't still figured out what is appropriate for workloads. I'm seeing companies wanting to move back the percentage of their workload to reduce cost without really understanding what the value is, so they're devaluing what they're doing," says Clark. "If they'd given more forethought into what they were taking to the cloud and what to be bringing back, they'd be in a better place. [T]hey don't really understand what they're moving back and they're comparing apples and oranges."

A key factor is understanding the business value and being able to communicate that in business terms. All too often, organizations are randomly choosing what to put in private cloud as opposed to thinking critically about what workloads are where and why. In the worst cases, the organization has lost the operational skill to manage and operate things in its own data center, but it hasn't considered this issue.

2. Hybrid environments will become even more popular

Trevor Horwitz, CISO and founder at cybersecurity, consulting, and compliance services provider TrustNet, believes private cloud strategies will evolve as companies seek more control over data security, regulatory compliance, and operational flexibility.

"I expect to see more organizations embracing hybrid and multi-cloud environments and integrating private clouds with public cloud resources to keep data flexible yet secure," says Horwitz in an email interview. "This shift is driven by the need for resilience and vendor flexibility, and zero-trust frameworks make this possible by securing data across multiple environments. As the regulatory landscape tightens with laws like GDPR and CCPA, private clouds will become essential for companies handling sensitive data to ensure compliance and control over data sovereignty."

3. Real-time monitoring and machine learning

Roy Benesh, chief technology officer and co-founder of eSIMple, an eSIM offering, believes private cloud will continue to be in high demand, especially in sectors like healthcare and finance that have stringent data protection regulations.

"I think businesses will depend more on real-time monitoring and machine learning to strengthen data protection as they use private clouds to satisfy security requirements," says Benesh in an email interview. "In my experience, private clouds can have drawbacks, too, such as high upfront expenditures and the requirement for knowledgeable administration. This can be particularly difficult for smaller businesses to handle."
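As a concrete illustration of the real-time, ML-assisted monitoring Benesh describes, the sketch below trains an anomaly detector on normal access metrics and flags outliers for review. It is a generic example using scikit-learn's IsolationForest, not a tool any of the interviewees endorse; the feature set, sample values, and contamination rate are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features from private-cloud access logs:
# [requests_per_minute, bytes_downloaded_mb, distinct_resources_touched]
normal_sessions = np.array([
    [12, 4.2, 3], [9, 3.1, 2], [15, 6.0, 4], [11, 3.8, 3],
    [8, 2.5, 2], [14, 5.5, 4], [10, 3.3, 3], [13, 4.9, 3],
])

# Fit on known-good traffic; contamination is the expected outlier share.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_sessions)

# Score new sessions as they stream in; -1 means anomalous.
incoming = np.array([
    [12, 4.0, 3],       # looks like normal use
    [400, 950.0, 180],  # bulk-download pattern worth a closer look
])
for session, label in zip(incoming, detector.predict(incoming)):
    if label == -1:
        print(f"ALERT: unusual access pattern {session.tolist()}")
```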
4. AI and automation

Artificial intelligence and automation are also set to play a crucial role in private cloud management. They enable businesses to handle growing complexity by automating resource optimization, enhancing threat detection, and managing costs.

"The ongoing talent shortage in cybersecurity makes [AI and automation] especially valuable. By reducing manual workloads, AI allows companies to do more with fewer resources," says Horwitz. "My advice is to prioritize adaptability. Be prepared to shift your strategy as business needs evolve, especially as technology advances. Mastering the private cloud is about building an agile, secure, and sustainable infrastructure, meeting today's demands while preparing for what's next."

5. Multilayer cybersecurity

Security affects all aspects of a cloud journey, including the calculus of when and where to use private cloud environments. One significant challenge is making sure that all layers of the stack have detection and response capability.

"You have to protect each layer separately: network, cloud, host, server, and application. They're not 'defense in depth.' Each component (NDR, CDR, EDR, SDR, and ADR) protects against a different set of threats," says Jeff Williams, founder and CTO at runtime application security company Contrast Security. "The biggest code-to-cloud technology gap is the lack of application detection and response and application security monitoring (ASM) to create visibility and protection for their biggest asset: the application estate. In the last year, this area saw 100% growth in attack traffic."


AI-Driven Quality Assurance: Why Everyone Gets It Wrong

Artificial intelligence is already a big deal, but not everyone is using it effectively. Many clients ask us how we've integrated AI into our QA process, but creating a real, usable approach wasn't as easy as it seemed. Today, I want to share how we approached AI in quality assurance and the lessons we learned along the way.

The AI Hype and Reality

Two years ago, ChatGPT exploded onto the scene. People rushed to learn about generative AI, large language models, and machine learning. Initially, the focus was on AI replacing jobs, but over time, these discussions faded, leaving behind a flood of AI-powered products claiming breakthroughs across every industry.

For software development, the main questions were:

How can AI benefit our daily processes?

Will AI replace QA engineers?

What new opportunities can AI bring?

Starting the AI Investigation

At our company, we received an inquiry from sales asking about AI tools we were using. Our response? Well, we were using ChatGPT and GitHub Copilot in some cases, but nothing specifically for QA. So, we set out to explore how AI could genuinely enhance our QA practices.

What we found was that AI could increase productivity, save time, and provide additional quality gates, if implemented correctly. We were eager to explore these benefits.

Categorizing the AI Tools

Over the next few months, we analyzed numerous AI tools, categorizing them into three main groups:

Existing tools with AI features: Many products had added AI features just to ride the hype wave. While some were good, the AI was often just a marketing gimmick, providing basic functionality like test data generation or spell-checking.

AI-based products built from scratch: These products aimed to be more intelligent but were often rough around the edges. Their user interfaces were lacking, and many ideas didn't work as expected. However, we saw potential for the future.

False advertising: These were products promising flawless, bug-free applications, usually requiring credit card information upfront. We quickly dismissed these as obvious scams.

What We Learned

Despite our thorough search, we didn't find any AI tools ready for large-scale commercial use in QA. Some tools had promising features, like auto-generating tests or recommending test plans, but they were either incomplete or posed security risks by requiring excessive access to source code.

Yet, we identified realistic uses of AI. By focusing on general-use AI models like ChatGPT and GitHub Copilot, we realized that while QA-specific tools weren't quite there yet, we could still leverage AI in our process. To make the most of it, we surveyed our 400 QA engineers about their use of AI in their daily work.

About half were already using AI, primarily for:

Assisting with test automation

Automating routine tasks

Developing a New Approach

We then created an in-house course on generative AI tailored for QA engineers. This empowered them to use AI for tasks like test case generation, documentation, and automating repetitive tasks. As engineers learned, they discovered even more ways to optimize workflows with AI.

How profitable is it? Our measurements showed that AI reduced the time spent on test case generation and documentation by 20%. For coding engineers, AI enabled them to generate multiple test frameworks in a fraction of the time it would have taken manually, speeding up the process. Tasks that used to take weeks could now be done in a day.
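For a sense of what prompt-driven test case generation looks like in practice, here is a minimal sketch of the workflow described above: generate a draft, then have an engineer review it. It assumes the official OpenAI Python client and a hypothetical model choice; it illustrates the general pattern, not the specific tooling we built.

```python
from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REQUIREMENT = """
Users can reset their password via an emailed link.
The link expires after 30 minutes and can be used only once.
"""

prompt = (
    "You are a QA engineer. Write concise test cases (title, steps, "
    "expected result) for this requirement, including negative and "
    f"edge cases:\n{REQUIREMENT}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any capable model works
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)
# The draft is a starting point only: an engineer still reviews it for
# missed checks and irrelevant cases before it enters the test suite.
```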
The Downsides

Despite its benefits, AI isn't perfect. It isn't smart enough to replace jobs, especially for junior engineers. AI may generate test cases, but it often overlooks important checks, or it suggests irrelevant ones. It requires constant oversight and fact-checking.

Why Many Companies Get It Wrong

The biggest mistake companies make is jumping into AI without understanding its limitations. Many fall for the hype and end up using AI tools that don't work well, only to face frustration. The truth is that AI is a valuable assistive tool, but it needs to be used thoughtfully and alongside human oversight.

Key takeaways from our journey with AI in QA:

AI is not a magic bullet. It provides incremental improvements but won't radically transform your processes overnight.

Implementing AI takes effort. It needs to be tailored to your needs, and blindly following trends won't get you far.

AI can assist, but it can't replace human oversight. It's ineffective for junior engineers, who may not be able to discern when AI is wrong.

Dedicated AI testing tools still need improvement. The market isn't yet ready for specialized AI tools in QA that offer real value.

AI is exciting and transforming many industries, but in QA, it remains an assistive tool rather than a game-changer. We at NIX are embracing it, but we're not throwing out the rulebook just yet.


Who Should Lead the AI Conversation in the C-Suite?

Before an enterprise can set any strategy in motion for AI, leadership at the top must decide what the plan of action will be. The question is, who should guide the conversation? The CEO, the overall executive leader? A more tech-oriented executive, such as the CTO or CIO? How much say should other, operations-driven divisions have?

The pressure is on to realize effective uses of AI, especially as rivals race to find a competitive edge with the technology. That, combined with potential differences of opinion within the C-suite, calls for clarity and cohesion in leadership.

This episode of DOS Won't Hunt brought together Adam Caplan, president of digital business and AI for Altimetrik; Bradon Rogers, chief customer officer for Island; Max Chan, CIO for Avnet; Ben Waber, PhD, visiting scientist at MIT; and Cliff Jurkiewicz, vice president of global strategy for Phenom. They discussed how C-suite leadership tends to regard AI, key considerations when exploring how to leverage new technology, and whether managers who are not part of the C-suite should be part of the conversation.

Listen to the full episode here.


What Could Less Regulation Mean for AI?

President-elect Trump has been vocal about plans to repeal the AI executive order signed by President Biden. A second Trump administration could mean a lot of change for oversight in the AI space, but what exactly that change will look like remains uncertain.

"I think the question is then what incoming President Trump puts in its place," says Doug Calidas, senior vice president of government affairs for Americans for Responsible Innovation (ARI), a nonprofit focused on policy advocacy for emerging technologies. "The second question is the extent to which the actions the Biden administration and the federal agencies have already taken pursuant to the Biden executive order [will stand]. What happens to those?"

InformationWeek spoke to Calidas and three other leaders tuned into the AI sector to cast an eye to the future and consider what a hands-off approach to regulation could mean for companies in this booming technology space.

A Move to Deregulation?

Experts anticipate a more relaxed approach to AI regulation from the Trump administration.

"Obviously, one of Trump's biggest supporters is Elon Musk, who owns an AI company. And so that, coupled with the statement that Trump is interested in pulling back the AI executive order, suggests that we're heading into a space of deregulation," says Betsy Cooper, founding director at Aspen Tech Policy Hub, a policy incubator focused on tech policy entrepreneurs.

Billionaire Musk, along with entrepreneur Vivek Ramaswamy, is set to lead Trump's Department of Government Efficiency (DOGE), which is expected to lead the charge on significantly cutting back regulation. While conflict-of-interest questions swirl around his appointment, it seems likely that Musk's voice will be heard in this administration.

"He famously came out in support of California SB 1047, which would require testing and reporting for the cutting-edge systems and impose liability for truly catastrophic events, and I think he's going to push for that at the federal level," says Calidas. "That's not to take away from his view that he wants to cut regulations generally."

While we can look to Trump and Musk's comments to get an idea of what this administration's approach to AI regulation could be, there are mixed messages to decipher.

Andrew Ferguson, Trump's selection to lead the US Federal Trade Commission (FTC), raises questions. He aims to regulate big tech while remaining hands-off when it comes to AI, Reuters reports.

"Of course, big tech is AI tech these days. So, Google, Amazon, all these companies are working on AI as a key element of their business," Cooper points out. "So, I think now we're seeing mixed messages. On the one hand, moving towards deregulation of AI, but if you're regulating big tech … then it's not entirely clear which way this is going to go."

More Innovation?

Innovation and the ability to compete in the AI space are two big factors in the argument for less regulation. But repealing the AI executive order alone is unlikely to be a major catalyst for innovation.

"The idea that even if some of those requirements were to go away you would unleash innovation, I don't think really makes any sense at all. There's really very little regulation to be cut in the AI space," says Calidas.
If the Trump administration does take that hands-off approach, opting not to introduce AI regulation, companies may move faster when it comes to developing and releasing products.

"Ultimately, mid-market to large enterprises, their innovation is being chilled if they feel like there's maybe undefined regulatory risk or a very large regulatory burden that's looming," says Casey Bleeker, CEO and cofounder of SurePath AI, a GenAI security firm.

Does more innovation mean more power to compete with other countries, like China? Bleeker argues regulation is not the biggest influence. "If the actual political objective was to be competitive with China … nothing's more important than having access to silicon and GPU resources for that. It's probably not the regulatory framework," he says.

Giving the US a lead in the global AI market could also be a question of research and resources. Most research institutions do not have the resources of large, commercial entities, which can use those resources to attract more talent.

"[If] we're trying to increase our competitiveness and velocity and innovation, putting funding behind … research institutions and education institutions and open-source projects, that's actually another way to advocate or accelerate," says Bleeker.

Safety Concerns?

Safety has been one of the biggest reasons that supporters of AI regulation cite. If the Trump administration chooses not to address AI safety at a federal level, what could we expect?

"You may see companies making decisions to release products more quickly if AI safety is deprioritized," says Cooper.

That doesn't necessarily mean AI companies can ignore safety completely. Existing consumer protections address some issues, such as discrimination. "You're not allowed to use discriminatory aspects when you make consumer-impacting decisions. That doesn't change if it's a manual process or if it's AI, or if you've intentionally done it or by accident," says Bleeker. "[There] are all still civil liabilities and criminal liabilities that are in the existing frameworks."

Beyond regulatory compliance, companies developing, selling, and using AI tools have their reputations at stake. If their products or use of AI harms customers, they stand to lose business. In some cases, reputation may not be as big of a concern. "A lot of smaller developers who don't have a reputation to protect probably won't care as much and will release models that may well be based on biased data and have outcomes that are undesirable," says Calidas.

It is unclear what the new administration could mean for the AI Safety Institute, a part of the National Institute of Standards and Technology (NIST), but Cooper considers it a key player to watch. "Hopefully that institute will continue to be able to do important


Why Smarter AI is People-Led

Ethics. Responsibility. Governance. Trust. All are big concerns for business leaders right now when it comes to AI. In a recent survey, 76% of CIOs said their organizations do not have an AI-ready corporate policy on operational or ethical use. Businesses also see this gap as a barrier to defining their AI vision, due to concerns about regulatory and ethical risk.

Attitudes toward ethics and responsibility when planning IT and technology investments have changed. A decade ago, they were more of an afterthought. Today, in the AI age, they are a board-level topic. But responsible AI isn't easy to achieve. Let's explore why.

Why Is Responsible and Ethical AI So Tough?

The very nature of the technology is a challenge. Generative AI relies on its training data, architecture, and AI engine to produce unique results. If the system is not designed carefully and monitored continuously, you could run into bias. For example, financial data might reflect problematic gender pay gaps, or historical data might bring outdated cultural norms into outputs. You need a solution for that.

Other challenges include:

Legacy governance mechanisms that businesses will need to rework in some capacity.

Models that had varying degrees of preparedness even before the generative AI boom.

Skill gaps. AI is a very new field, and companies are concerned about having the right expertise to deliver it quickly and effectively.

Getting people on board. People generally fear AI, which will slow adoption.

Lack of uniform regulations. There are regulatory gaps in LLM safety, content provenance, and risk management, which AI companies are working together to fill.

Why Being People-Led Is the Answer

Lenovo and NVIDIA have created an AI readiness framework with four pillars in a very intentional order: Security, People, Technology, and Process. "Security" comes first, to make sure you can prevent harm, bias, and unintended or improper use of AI. Then there's "People," to ensure good change management and that properly trained personnel are involved throughout the AI journey. "Technology" and "Process" come later because, without robust security and onboard people, you won't release the full value of the AI technology anyway. In short, you should be people-led instead of technology-led. Here are some ways to do that:

#1: Ensure Explainability with Constant Human Feedback

Explainability is key. You must monitor how and why AI is providing each output and, critically, ensure it stays on track. Explainability usually comes in two forms:

1) White-box solutions: Semantic AI, for example, where you can map the logic, training data, inputs, and prompts. As you test the system, you can understand where outputs come from and refine from there.

2) Black-box solutions: Closed or open-source systems like ChatGPT. These are less transparent and explainable, so it gets trickier. You always need humans to judge the inputs, infer how reasonable the outputs are, then refine from there.

Either way, you need humans to constantly monitor the LLM. It needs to be stress-tested. The model may drift, gain bias, or get different responses wrong. It's going to learn depending on who's using it and what data is put into it; you must consider that in your monitoring framework. The best large-scale generative models will likely hit 80-85% accuracy in benchmark tests. Human feedback is instrumental in bridging that remaining 15%.

#2: Base Governance on Transparency and Alignment

Companies need a level of governance where transparency is everything. It ensures people are always accountable for their actions. For example, let's say you're worried about IP protection. Put a broker in place where, if you use a third-party LLM, you must go through a certain gateway that tracks the prompts and responses. That means people will think twice about what they're sending. Why? Because transparency is everywhere, placing the onus on individuals to self-regulate and do the right thing.
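A minimal sketch of such a gateway follows, assuming an in-house function sitting between users and a third-party LLM API. The endpoint URL, response schema, and keyword guardrail are illustrative stand-ins, not a specific product.

```python
import json
import time
import urllib.request

AUDIT_LOG = "llm_audit.jsonl"                    # append-only, reviewable log
LLM_API = "https://api.example-llm.com/v1/chat"  # hypothetical upstream URL

def call_llm_via_gateway(user_id: str, prompt: str) -> str:
    """Forward a prompt to a third-party LLM while recording who sent what
    and what came back, so every exchange is auditable."""
    # Crude IP guardrail; real policies use classifiers, not one keyword.
    if "confidential" in prompt.lower():
        raise ValueError("Prompt blocked: possible confidential material")

    req = urllib.request.Request(
        LLM_API,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["text"]  # assumed response schema

    with open(AUDIT_LOG, "a") as log:  # transparency: nothing bypasses the log
        log.write(json.dumps({
            "ts": time.time(),
            "user": user_id,
            "prompt": prompt,
            "response": answer,
        }) + "\n")
    return answer
```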
Another essential part is top-to-bottom alignment. Make sure AI initiatives fit the outcomes users, teams, and customers expect. This will help keep everything ethical and responsible, while reducing the risk of skills gaps, as the resources you have will match the overall business strategy. I'm not saying this is easy. Does corporate strategy always equal what people are doing? Many organizations found that tough even before AI. But it should be a priority here.

#3: Get People on Board With a "Show Me" Model

When convincing teams to adopt AI, use a "show me" model. Demonstrate clearly how it works and the immediate benefits. How will it make their lives easier and more effective? Here's an example: say you've got an NVIDIA NIM inference microservice that can accelerate the sales pipeline by 30%. Lead with that to make the benefit immediately clear. If you don't, people are more likely to distrust or simply not use the AI solution.

People Are Everything

You need people at the center of your AI adoption strategy because, after all, they'll be the ones actually using it. A robust governance framework for AI is essential to ensure the safe and responsible deployment of emerging solutions. This will be critical as responsible AI impacts both the AI industry as a whole and industries such as industrial digitalization, retail, and financial services.


In Global Contest for Tech Talent, US Skills Draw Top Pay

After several years of economic uncertainty and layoffs, salaries paid to US tech talent are once again some of the world's most competitive. And in at least one significant US jobs category, sales and marketing, there is now pay equity between women and men.

Those are among the findings of an analysis our company conducted of more than 150,000 anonymized employment contracts in more than 100 countries for software engineers, product designers, and sales and marketing professionals. That includes three countries, the US, Canada, and the UK, which tend to be the most competitive with one another for top talent.

This good news about the state of American tech talent, innovation, and competitiveness comes at a time when that standing has been a source of public concern.

Much of the higher compensation for tech workers is presumably driven by the widely acknowledged skills gap, particularly in AI. For US software engineers, for example, median compensation had dropped below $100,000 during the big waves of tech layoffs in 2022 and 2023. But by the end of the second quarter of 2024, the most recent period in our analysis, it had rebounded to $122,000, perhaps driven in part by the soaring demand for AI skills.

The US compensation level was second only to Canada, whose much smaller population has fewer tech workers for employers to compete for.

Overall, our survey indicates that when it comes to one of the factors that really matter to global talent, compensation, US tech workers are in high demand. And whether it's companies based in the US or global employers offering remote contracts to Americans, the global business world is willing to pay what it costs to attract and retain that talent.

Here's a deeper dive into the data:

Software Skills Are in High Demand

For the people with the skills for in-demand tasks like writing code or developing AI models and algorithms, the US jobs market has some idiosyncrasies. One is the much higher potential portion of compensation that comes from stock or equity grants.

In positions where equity is part of the package, the median US compensation for a software-and-data engineer is $151,000 a year, the highest anywhere in the world, assuming the typically four-year vesting program pays out. That translates to an additional 35% a year in compensation, beyond salary. Of the countries we looked at, only Germany comes close, with a combined $135,000 in annual pay and equity.
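To make the arithmetic behind those figures concrete, here is a small worked example, assuming "an additional 35% beyond salary" means annualized equity divided by base salary. The implied base is a back-calculation from the two published numbers, not a figure from the survey itself.

```python
total_comp = 151_000   # median US software-and-data comp including equity
equity_premium = 0.35  # equity adds roughly 35% on top of base salary

base_salary = total_comp / (1 + equity_premium)  # ~$111,900
annual_equity = total_comp - base_salary         # ~$39,100 per year
four_year_grant = 4 * annual_equity              # ~$156,600 vesting over 4 years

print(f"Implied base salary:  ${base_salary:,.0f}")
print(f"Annualized equity:    ${annual_equity:,.0f}")
print(f"Implied 4-year grant: ${four_year_grant:,.0f}")
```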
Unfortunately, another characteristic of the US labor market for software engineers and data scientists is a stark gender gap. Women represent only 10.3% of workers in this category, roughly in line with the UK and Germany. And that disparity translates into a compensation gap. The median US compensation for men in software and data is $155,000, compared to $120,000 for women. Similar pay gaps are found in all the other countries we surveyed.

Tech Product Development and Design

This line of work also has a gender gap, although a slightly narrower one. For jobs that might involve software development and design, or overseeing such activities, women hold 41% of the positions. And women in those roles have median compensation of $128,000. While a bit closer to the male median of $150,000, it's still a sizable gap. The same pattern is evident in other countries, although typically at lower pay scales.

Tech's Silver Lining for Gender Parity

Tech sales and marketing is one area where, in the US at least, there is full pay parity between men and women: median compensation of $100,000 for both. That's second to the top figure in the UK. But there, the gender disparity is still sizable: $105,000 for men, compared to $92,000 for women. Canada shows a comparable gender gap, at $84,000 for men but only $77,000 for women.

Why women, who hold 42% of jobs in tech sales and marketing in the US, have been able to achieve pay parity deserves further study. One factor might be that sales performance is easy to quantify: the more a person sells, the better that person is rewarded. But why this parity doesn't translate to other countries (maybe there's a cultural component?) would be worth researching.

The Takeaway on Tech Take-Home Pay

Our findings lead to several steps that employers can take to remain competitive and retain the best talent:

Recognize the need for competitive compensation.

If inflation is a factor, ensure your pay scales include bi-annual adjustments or regular cost-of-living increases.

Offer equity, which, especially in tech, is widely sought by employees and can ensure longer-term loyalty.

Given the all-too-common gender gap in compensation, position your organization to attract female talent by closing that gap.

For the global business world, the survey indicates that the US has bounced back as a top competitive market for tech talent. And for companies everywhere, the value proposition is clear: The relatively high cost of skilled US tech workers is well worth the price.


Get Going With GitOps

Although most of the software development lifecycle is now automated, infrastructure continues to be a largely manual process requiring specialized teams. Yet with infrastructure demands rapidly growing, more organizations now look toward automation for help.

GitOps uses Git project repositories as the single source of truth for managing application configuration and deployment information, says Elliot Peele, senior manager of software development at analytics software provider SAS. "By using declarative specifications stored in a Git repository, it ensures that the desired state of the system is always maintained and continuously reconciled," he explains in an email interview.

Mike Rose, data and analytics director at technology research and advisory firm ISG, notes that a GitOps framework ensures that the entire system, including infrastructure, applications, and configurations, is described in a consistent manner within Git, allowing for consistent, repeatable, and auditable changes across environments. "It enhances transparency and traceability and significantly reduces the risk of configuration drift between the desired state and the actual state of the infrastructure," he states via email.

Peele adds that the approach not only enables continuous integration and deployment, but also provides version management and rollback capabilities, which are crucial for maintaining consistency and reliability in infrastructure management.
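The heart of GitOps is that reconciliation loop: an agent continuously compares the state declared in Git with what is actually running and corrects any drift. The sketch below is a deliberately simplified illustration of the pattern, not any particular GitOps tool (Argo CD and Flux are the usual production choices); the state functions are assumed stand-ins for real Git and cluster APIs.

```python
# Simplified GitOps reconciliation loop. The two "state" functions fake
# what a real agent would read from the config repo and the cluster.

def desired_state_from_git() -> dict:
    """Parse declarative manifests from the config repository (faked here)."""
    return {
        "web":    {"replicas": 3, "image": "shop:v1.2"},
        "worker": {"replicas": 2, "image": "jobs:v0.9"},
    }

def actual_state_from_cluster() -> dict:
    """Query the runtime environment for what is actually deployed (faked)."""
    return {
        "web":    {"replicas": 2, "image": "shop:v1.1"},  # drifted from Git
        "legacy": {"replicas": 1, "image": "old:v0.1"},   # not declared in Git
    }

def apply(name: str, spec: dict) -> None:
    """Stand-in for pushing the declared spec to the cluster."""
    print(f"apply {name}: {spec}")

def reconcile() -> None:
    desired = desired_state_from_git()
    actual = actual_state_from_cluster()
    for name, spec in desired.items():
        if actual.get(name) != spec:         # drift detected
            apply(name, spec)                # converge toward Git
    for name in set(actual) - set(desired):  # running but not declared
        print(f"prune {name}: not in Git, removing")

if __name__ == "__main__":
    reconcile()  # real agents run this continuously (e.g., once a minute)
```

Rollback falls out of the same design: reverting a Git commit changes the desired state, and the next reconcile pass converges the infrastructure back to it.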
GitOps in Action

GitOps implementations have a significant impact on infrastructure automation by providing a standardized, repeatable process for managing infrastructure as code, Rose says. The approach allows faster, more reliable deployments and simplifies the maintenance of infrastructure consistency across diverse environments, from development to production. "By treating infrastructure configurations as versioned artifacts in Git, GitOps brings the same level of control and automation to infrastructure that developers have enjoyed with application code."

Rose states that GitOps reduces manual errors, allows increased deployment frequency, and generally improves overall system reliability. "Probably one of the most valuable but intangible benefits of GitOps is its ability to foster closer collaboration between development and operations teams, as both groups work from the same set of Git repositories to manage application code and infrastructure configurations," he says. "This alignment will accelerate the feedback loop between development and operations."

GitOps will have a significant impact on infrastructure automation, Peele predicts. "By providing consistency, version control, continuous deployment, reduced configuration drift, and enhanced security and compliance, GitOps is a game changer in software development and deployment practices," he says. "It enables peer review for configuration changes and allows developers without prior operations experience to control their application's deployment."

Multiple Benefits

GitOps' primary benefit is its ability to enable peer review for configuration changes, Peele says. "It fosters collaboration and improves the quality of application deployment." He adds that it also empowers developers, even those without prior operations experience, to control application deployment, making the process more efficient and streamlined.

Another benefit is GitOps' ability to allow teams to push minimum viable changes more easily, thanks to faster and more frequent deployments, says Siri Varma Vegiraju, a Microsoft software engineer. "Using this strategy allows teams to deploy multiple times a day and quickly revert changes if issues arise," he explains via email. "This high deployment velocity accelerates releases, allowing teams to deliver business impact quicker."

Since infrastructure state is defined in code and stored in Git, static analysis can be performed to detect security misconfigurations, Vegiraju says. "This approach helps enhance the overall security posture by identifying and addressing potential vulnerabilities early."

Rose reports that ISG research shows that an environment using GitOps, along with complementary AIOps improvements, can see a productivity efficiency gain of at least 30% over a two-year time horizon.

Top Adopters

GitOps is most likely to be adopted by enterprises that focus on automation and consistency, Peele says. "The peer review nature of GitOps lends itself to companies that are focused on compliance, requiring multiple reviews of any application configuration or deployment changes."

Enterprises with cloud-native environments, and those heavily invested in DevOps practices, are also likely to adopt GitOps, Rose says. "This includes any organization prioritizing rapid, reliable software delivery and infrastructure management," he notes. Such enterprises often have a high rate of change in their infrastructure and applications, making the version control and automation aspects of GitOps particularly valuable.

Enterprises undergoing digital transformation or moving toward microservices architectures are also prime candidates for GitOps adoption, Rose says. He notes that GitOps' "single source of truth" aligns well with container orchestration platforms, such as Kubernetes, making it especially attractive for organizations using such technologies.

Possible Pitfalls

While GitOps offers numerous benefits, many new adopters face obstacles. "The significant challenge is the steep learning curve for teams unfamiliar with Git or DevOps concepts," Rose says. "This can lead to initial productivity slowdowns and may require a substantial investment in training and upskilling."

GitOps requires a deep understanding of the organization's current IT infrastructure and applications, as well as advanced knowledge of Git, Peele warns. "This can be daunting for teams that are new to these concepts."

Small organizations with simpler infrastructures may find GitOps adds unnecessary overhead, since the complexity of managing a GitOps pipeline may outweigh the benefits, Vegiraju says.

Looking Forward

An important emerging trend is the increasing intersection of GitOps and AIOps. "This convergence is leveraging AI and machine learning to enhance automation, predict issues, and optimize infrastructure management within the GitOps framework," Rose says. He notes that AI algorithms can analyze Git commit patterns to predict potential conflicts or issues before they occur, or to optimize deployment strategies based on historical performance data.


IT’s New Frontier: Protecting the Company from Brand Bashing

In November 2022, retailer Balenciaga launched an ad showing children holding teddy bears wearing what looked like bondage gear. The ad enraged social media users, and Balenciaga lost 100,000 Instagram followers and saw a decline in sales.

Initially, Balenciaga denied responsibility and even filed a lawsuit against its production company, but that didn't quell the backlash. So the company changed course, issuing an apology and announcing that it would use new content validation techniques to prevent a similar incident from occurring again.

Balenciaga is one of many companies that have faced a brand crisis on social media. Companies including Kellogg's, Delta Air Lines, United Airlines, Dove, and KFC have all faced such crises.

When brand-damaging incidents on social media occur, those who deal with them include executive management, marketing, and even the board. But since social media is an online technology, does that mean IT has a role to play as well? The answer is unclear in many companies. Often, IT isn't part of the frontline response group, but that doesn't mean your IT team shouldn't be involved.

How IT Should Get Involved

Mitigating a social media brand attack falls under the category of disaster recovery, which means there should be a step-by-step sequence of responsive actions documented in a DR plan. In addition, there is the question of risk management and avoidance. If a risk policy is defined and documented, preemptive steps can be taken that reduce the chances of a brand attack being levied. IT has a role in both scenarios.

Risk Management

Vetting software and vendors. When marketing launches e-commerce and informational websites, it also enlists outside firms to monitor internet activity concerning the company's online assets, and to report on any unusual or potentially damaging online activities. The goal is to preempt incidents like brand damage, and the monitoring software does this by "listening" for potentially damaging posts and then reporting them; a minimal sketch of this pattern appears below.

HR departments also use third-party software for internet monitoring. They use it to check the social media activities and posts of potential job hires and employees.

In both cases, IT can help vet the vendors of these services before marketing or HR enters into contractual agreements. This can be a value-add because technology vendor vetting is not a well-developed practice in either marketing or HR, and it is possible that they may contract with vendors that cannot meet their goals, or that fall short of corporate security, privacy, and governance requirements.

Validate content. As a best practice, IT can encourage marketing to secure content validation software that can vet internally developed messaging before the company publishes it online.
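Here is a minimal illustration of that social listening pattern: scan incoming posts for brand mentions, score them crudely, and surface the hot ones for human review. Commercial tools use far richer NLP models and platform APIs; the brand name, keyword lists, and threshold below are invented for illustration.

```python
import re

# Hypothetical brand and crisis-signal vocabularies, not from any vendor.
BRAND_TERMS = {"examplebrand", "#examplebrand"}
CRISIS_TERMS = {"boycott", "scandal", "lawsuit", "outrage", "unacceptable"}

def crisis_score(post: str) -> int:
    """Rough severity: count crisis terms in posts that mention the brand."""
    words = set(re.findall(r"[#\w]+", post.lower()))
    if not words & BRAND_TERMS:
        return 0                      # not about us; ignore
    return len(words & CRISIS_TERMS)

posts = [
    "Loving my new #examplebrand sneakers!",
    "Time to boycott examplebrand after this scandal. Unacceptable.",
]
for post in posts:
    if (score := crisis_score(post)) >= 1:
        print(f"[severity {score}] route to comms team: {post}")
```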
Employee message monitoring. The monitoring and surveillance of employee messaging and internet activities while employees are at work is a common and accepted corporate practice today. This right to monitor employee communication and internet activity extends to remote employees who are not in a corporate office. Should there be IT involvement in this seemingly personnel-focused matter?

Yes, because in many cases, it is IT that is called upon to select and administer the communications monitoring software and to issue monthly activity reports to user departments and management. Even if IT doesn't do this, it's still in IT's best interest to stay involved. That's because of IT's significant role in corporate governance, and the necessity of weighing policy against employees' personal privacy rights.

In more than one case, it was IT that first asked whether employees had been informed upfront that their communications and internet activities would be monitored by the company, and whether there was a written policy to that effect that employees were required to acknowledge and sign as a condition of employment.

Attack Response and Mitigation

Security breaches. It's possible for a bad actor to pass malware into an e-commerce website through a message to the site. Or they could post a fake website that fraudulently resembles the company's real one. In both cases, IT should be involved with the security and monitoring of corporate online assets to ensure that the assets are free from cyberattacks and fakes. If unusual activities are detected by IT monitoring and management software, they should be promptly reported to management, marketing, and other important stakeholders. If a security breach occurs, the DR response should be swift. Threat mitigation and elimination procedures should be written into the corporate DR plan.

Failover. When corporate e-commerce sites are taken over, or when they are being pummeled by cyberattacks that disable the sites' functions, a failover plan to an alternate e-commerce site should be executed. It should work the same way a physical retail store fails over to a generator when local power service fails. A smooth failover allows the e-commerce site to keep working, and it reduces the number of social media posts complaining about the company, the site, or the brand. Failover is an IT operation, and IT should take the lead by crafting the technical processes of the failover, testing them, and making sure that they work.

Summary

Social media crisis management is everybody's business, but all too often, IT gets overlooked. Yet, because social media is an online activity that involves technology, it is almost guaranteed that IT will be called upon to get involved when a brand attack occurs. Consequently, it's in CIOs' best interests to stay ahead of the issue by assuming an active role in brand protection and defense.

"Brand protection is more than 'protection' and acts as a source of sustainable competitive advantage," according to De La Rue, a banknote printing firm. "It is a multifaceted approach that requires ongoing diligence and adaptability in the face of evolving threats."


UK Launches Antitrust Investigations Targeting Big Tech

The United Kingdom's antitrust watchdog on Tuesday said it would launch investigations into three areas of digital activity as the new Digital Markets, Competition and Consumers (DMCC) Act comes into full force, giving it regulatory power over major players like Apple and Google.

While the Competition and Markets Authority (CMA) did not specify which companies would be investigated, under its strict strategic market status (SMS) designation, only the largest tech firms would be considered. The announcement says two investigations will begin immediately and a third will begin in about six months. Once a company receives SMS designation, the CMA will have the authority to curtail practices like using access to customer data to gain unfair advantage, make it easier for consumers to switch providers, and more.

A CMA report released in November called out Apple and Google's dominant market positions in mobile as potential investigation targets, saying a revenue-sharing agreement between the two companies stifled competition.

"We are committed to implementing the regime in a way that is predictable and proportionate, moving at pace whilst respecting fair process," CMA Chief Executive Sarah Cardell said in a statement. "… The process for designing any interventions will also be participative and transparent, with the aim of keeping innovation-led markets open and bringing firms on the journey with us."

The investigations will be complete within the statutory limit of nine months, the CMA said. Significant fines, up to 10% of a company's global turnover, could be levied if the CMA finds a breach of consumer protection law.

US tech firms are facing increasing scrutiny from regulators abroad. The EU's Digital Markets Act is also taking aim at antitrust concerns, with probes targeting Apple, Google, and Amazon launched in March. Possible fines could also reach 10% of those companies' global turnover, potentially costing them billions of dollars.

CMA's Impact on M&A, and US Firm Focus

In a blog post, lawyers with Morgan Lewis said the UK's new rules could have a profound impact on potential mergers and acquisitions for the world's leading tech companies. "The UK CMA has gained a reputation in recent years as an aggressive and impactful antitrust enforcer," lawyers Joshua Goodman, Omar Shah, R. Ryan Hoak, and Jack Ashfield wrote. "The UK DMCC only bolsters its powers, and as such it is more relevant [than] ever to consider the role of and approach to the UK [CMA] as part of the global merger review process."

Critics of newly granted antitrust regulation powers like the DMCC's say the rules discriminate against US firms and jeopardize investment and cooperation between countries. "Given that the United Kingdom's digital sector accounted for nearly 1.9 million jobs in 2022 and contributed over [$174 billion] to the UK economy in 2020, the UK government should tread carefully," writes Meredith Broadbent, a senior adviser at the Center for Strategic & International Studies, in a post.

InformationWeek has reached out to Google and Apple for comment and will update with any response.
