InformationWeek

What Tech Workers Should Know About Federal Job Cuts and Legal Pushback

The Trump administration and its Department of Government Efficiency (DOGE) are firing and laying off thousands of government employees across multiple agencies. In 2024, the federal government employed approximately 116,000 IT workers, and that isn’t counting contractors and military and post office employees, Computerworld reports. These legions of federal tech workers are in the same boat as all federal employees, afloat on a sea of chaos and uncertainty.

Several lawsuits have been filed in the wake of the job reductions, but litigation is not a fast-moving process. InformationWeek spoke to three attorneys about the job cuts, legal action, and what could lie ahead for federal workers.

The Terminations

The total number of federal employees impacted thus far is not clear. Approximately 75,000 employees accepted the “deferred resignation” offer, referred to as Fork in the Road, to leave their jobs, according to The Hill. But the program has been paused following a ruling by a federal judge.

Probationary employees, people who typically have been in their roles for less than a year, have been a significant target of layoffs. AP News reports that there were 220,000 federal employees who had been working in their roles for less than a year as of March 2024.

“I don’t think we’re getting any clear or transparent data about the segments of the government that are being most impacted,” Areva Martin, civil rights attorney and managing partner and founder of law firm Martin & Martin, tells InformationWeek.

The workforce reductions are wide-ranging, and the Consumer Financial Protection Bureau (CFPB) is essentially shuttered. Jobs are being cut at the Department of Agriculture, Department of Education, Department of Energy, Department of Health and Human Services, Department of Homeland Security, Department of the Interior, Department of Veterans Affairs, Environmental Protection Agency, Office of Personnel Management, and the list goes on. The Cybersecurity and Infrastructure Security Agency (CISA), a significant repository of technical talent, is also facing cuts.

“Maybe a few weeks ago, we all thought that there [were] categories [of] employees that would be protected — like IT workers, like Department of Defense employees, employees that are essential to our national security like the nuclear safety employees — that were terminated,” says Liz Newman, member and litigation director at The Jeffrey Law Group, which focuses on federal sector employment disputes.

The Lawsuits

A flurry of lawsuits was swift to follow the firings and layoffs ordered by the White House and DOGE.

Several employees who received high marks on recent performance reviews were among those caught up in the mass firings, Reuters reports.

“When you’re letting people go and you’re citing things like their performance and their fit, but at the same time you’re letting large groups go indiscriminately without surely looking at their performance and fit, I think that’s opening up this administration to some legal liability,” says Newman.

Indeed, the Trump administration faces class actions, representing thousands of people, for the way it is handling the firing of probationary employees. Alden Law Group and legal services nonprofit Democracy Forward are representing civil servants across nine agencies, with plans to cover others, in a complaint filed with the Office of Special Counsel (OSC).
The complaint could go before the Merit Systems Protection Board (MSPB), a government agency that aims “to protect Federal merit systems against partisan political and other prohibited personnel practices,” according to the MSPB website. Complicating matters, the Trump administration is attempting to fire Special Counsel Hampton Dellinger, the head of the OSC, Federal News Network reports.

While that drama unfolds, other pushback is underway. Several labor groups representing federal employees are suing the Trump administration, arguing that the Office of Personnel Management (OPM) does not have the authority to order the mass firings that occurred, Reuters reports. The National Treasury Employees Union (NTEU) represents more than 1,000 frontline employees, and it is suing the administration to challenge the closure of the CFPB.

As DOGE takes an axe to government agency jobs in the name of saving money and improving efficiency, alarm bells around its access to sensitive data have been clanging. Several lawsuits are underway on that front.

“They’ve been given unfettered access in some cases to the most private and sensitive information of not only government employees but of US citizens … I’ve been tracking lawsuits filed about violations of the Privacy Act of 1974,” says Martin.

How successful could legal pushback be?

“I think some of the employees, particularly those employees who again are governed by labor contracts [and] those employees who are civil service employees, they’re going to be met with greater success because their due process rights have been violated, and there are clear contractual terms that define how they can be terminated,” says Martin.

The outcome of these lawsuits, and of the more likely to come, is far from decided, and it could take years for some cases to reach their conclusion.

“Some of these lawsuits may go past Trump’s four years in office. But many of them, I suspect, will be resolved during his term in office,” says Martin. She anticipates that some of these cases may make their way to the Supreme Court.

An Uncertain Future

Thousands of federal workers are facing an unclear future: those who accepted the Fork in the Road offer, those who have been terminated, and those who were fired and then asked to come back. The possibility of more job cuts still looms; these frenetic firings took place in the very early days of the Trump administration.

“We hear a lot of sadness from them [federal employees], even more so than the fear of not getting paid is the disappointment in how this has all played out,” says Newman.

As we see cases progress through the legal system, there are questions about action the current administration may take to make it easier to


AI Is Improving Medical Monitoring and Follow-Up

Ensuring continuity of care in a clinic or hospital is a nightmare of complexity. Coordinating test results, imaging, medication, and monitoring of vital signs has proven challenging to an industry reliant on ponderous technologies and deficient staffing. When patients are dealing with unfolding health crises and chronic conditions or recovering from procedures at home, managing their care becomes even more complex.

Doctors may miss important findings that can impact patients’ prognosis and treatment — leaving those patients without necessary information on how to make healthcare decisions. Some 97% of available data may go unreviewed, per the World Economic Forum. And electronic health records (EHRs) are messy and riddled with errors. Following up with patients to ensure that they are receiving proper treatment based on the 3% of data that is reviewed constitutes a significant burden on providers.

Even when patients are stable and their cases have received thorough review, they may find that obtaining insights on how to best manage their situations is next to impossible, placing multiple phone calls to overloaded call centers only to spend hours on hold, poring over pages of inscrutable instructions, and attempting to interpret their own results using unreliable home tests and monitors.

Artificial intelligence technologies have shown promise in managing some of the worst inefficiencies in patient follow-up and monitoring. From automated scheduling and chatbots that answer simple questions to review of imaging and test results, a range of AI technologies promise to streamline unwieldy processes for both patients and providers.

These innovations promise to both free up valuable time and increase the likelihood that effective care is delivered. AI chart reviews may detect anomalies that require follow-up, and AI review of images may detect early signs of conditions that escape human review.

But, as with other AI technologies, keeping humans in the loop to ensure that algorithmic errors do not result in damage remains challenging. When is a chatbot not enough? And when it isn’t, can a patient actually talk to their provider?

InformationWeek delves into the potential of AI-managed medical monitoring and follow-up, with insights from Angela Adams, CEO of AI imaging follow-up company Inflo Health, and Hamed Akbari, an assistant professor in the Department of Bioengineering at Santa Clara University who works on AI and medical imaging.

Administrative AI

Anyone who has gone through the healthcare system — so, basically everyone — knows how hideous the administrative procedures can be. It’s bad enough trying to schedule a primary care appointment with some clinics. But what about patients who are in recovery from surgery or suffering from debilitating chronic conditions?

AI solutions may smooth out these processes for both the patient and the clinic. AI-assisted platforms offer efficient means of scheduling appointments, refilling prescriptions, and getting answers to simple questions about treatment. Patients can simply respond to a text message or fill out a form indicating their needs. Some 60% of respondents to a 2022 survey preferred intuitive, app-like services from their providers.
Patients may be more inclined to respond to texts or emails generated by AI programs because they can do so on their own time rather than taking a call at an inconvenient moment. They are thus able to provide useful feedback unrelated to their immediate needs — on how they rate their experience with a provider, for example — when they might otherwise not be willing to do so.

In the case of anomalous responses — a complication or a dosage problem — a staff member can then follow up with a call or message to address the issue personally. Missed appointments can be flagged, indicating the need for follow-up and also coordinating openings that might be used by other patients who might otherwise need to wait.

More than 70% of patients prefer self-scheduling, according to an Experian report. And up to 40% of calls to clinics relate to scheduling. Reduced call volumes can lead to enormous cost savings and free up time for dealing with more exigent issues that require attention and analysis by live medical professionals.

Medication Follow-Up and Adherence

Adherence to medication regimens is essential for many health conditions, both in the wake of acute health events and over time for chronic conditions. AI programs can both monitor whether patients are taking their medication as prescribed and urge them to do so with programmed notifications. Feedback gathered by these programs can indicate the reasons for non-adherence and help practitioners to devise means of addressing those problems.

Adherence to diabetes management regimens is complicated by lifestyle, socioeconomic status, severity of disease, and unique personality factors, for example. AI programs that take these factors into account may assist practitioners and patients in refining protocols so that they are both realistic and effective.

A study that used a smartphone app to remind stroke victims to take their medication and then followed up with blood tests to ensure that they had done so found significant increases in adherence to the drug protocol, resulting in better health outcomes.

AI programs can also use patient data to devise optimal dosing for drugs. Therapeutic drug monitoring has historically been a challenge given the differing reactions of patients to drugs, both alone and in combination, according to their unique physiology. They can even correlate dosing to the effects of the drugs — a significant advance for conditions in which treatments themselves can have deleterious effects. Chemotherapy drugs, for example, can thus be optimized to maximize effectiveness and minimize side effects.

Monitoring of Chronic Conditions

Using AI to monitor the vital signs of patients suffering from chronic conditions may help to detect anomalies — and indicate adjustments that will stabilize them. Keeping tabs on key indicators of health such as blood pressure, blood sugar, and respiration in a regular fashion can establish a baseline and flag fluctuations that require follow-up treatment using both personal
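As a rough illustration of the baseline-and-flag monitoring described in this excerpt, here is a minimal sketch that builds a per-metric baseline from recent readings and flags values that drift too far from it. The metric names, thresholds, and readings are hypothetical examples, not any vendor's actual product logic.

# Minimal sketch of baseline-and-flag monitoring for chronic-condition vitals.
# All metric names, thresholds, and readings are hypothetical examples.
from statistics import mean, stdev

def build_baseline(history):
    """Compute a mean/standard-deviation baseline per metric from past readings."""
    return {metric: (mean(values), stdev(values)) for metric, values in history.items()}

def flag_anomalies(reading, baseline, z_threshold=2.5):
    """Return metrics whose latest value drifts beyond z_threshold standard deviations."""
    flags = []
    for metric, value in reading.items():
        mu, sigma = baseline[metric]
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            flags.append((metric, value))
    return flags

# Hypothetical recent history and a new reading for one patient.
history = {
    "systolic_bp": [128, 131, 127, 130, 129, 132],
    "blood_glucose": [105, 110, 98, 102, 107, 104],
    "respiration_rate": [16, 15, 17, 16, 16, 15],
}
baseline = build_baseline(history)
new_reading = {"systolic_bp": 162, "blood_glucose": 101, "respiration_rate": 16}

for metric, value in flag_anomalies(new_reading, baseline):
    print(f"Flag for clinician follow-up: {metric} = {value}")

In this sketch the elevated blood pressure reading is flagged for follow-up while the in-range values are not; a production system would, of course, use clinically validated thresholds rather than a simple z-score.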


Key Ways to Measure AI Project ROI

Businesses of all types and sizes are launching AI projects, fearing that failing to embrace the powerful new technology will place them at a competitive disadvantage. Yet in their haste to jump on the AI bandwagon, many enterprises fail to consider one critical point: Will the project meet its expected efficiency or profitability goal?

Enterprises should consider several criteria to assess the ROI of individual AI projects, including alignment with strategic business goals, potential cost savings, revenue generation, and improvements in operational efficiencies, says Munir Hafez, senior vice president and CIO with credit monitoring firm TransUnion, in an email interview.

Besides relying on the standard criteria used for typical software projects — such as scalability, technology sustainability, and talent — AI projects must also account for the costs associated with maintaining accuracy and handling model drift over time, says Narendra Narukulla, vice president of quant analytics at JPMorganChase.

In an online interview, Narukulla points to the example of a retailer deploying a forecasting model designed to predict sales for a specific clothing brand. “After three months, the retailer notices that sales haven’t increased and has launched a new sub-brand targeting Gen Z customers instead of millennials,” he says. To improve the AI model’s performance, an extra variable could be added to account for the new generation of customers purchasing at the store.

Effective Approaches

Assessing an AI project’s ROI should start by ensuring that the initiative aligns with core business objectives. “Whether the goal is operational efficiency, enhanced customer engagement, or new revenue streams, the project must clearly tie into the organization’s strategic priorities,” says Beena Ammanath, head of technology trust and ethics at business advisory firm Deloitte, in an online interview.

David Lindenbaum, head of Accenture Federal Services’ GenAI center of excellence, recommends starting with a business assessment to identify and understand the AI project’s end user as well as the initiative’s desired effect. “This will help refocus from a pure technical implementation into business impact,” he says via email. Lindenbaum also advises continued AI project evaluation, focusing on a custom test case that will allow developers to accurately measure success and quantitatively understand how well the system is operating at any given time.

Ammanath believes that a comprehensive cost-benefit analysis is also essential, balancing tangible outcomes such as increased productivity with intangible ones, like improved customer satisfaction or brand perception. “Scalability and sustainability should be central considerations to ensure that AI initiatives deliver long-term value and can grow with organizational needs,” she says. “Additionally, a robust risk management framework is vital to address challenges related to data quality, privacy, and ethical concerns, ensuring that projects are both resilient and adaptable.”

Metrics Matter

Potential project ROI can be measured with metrics, including projected cost savings, expected revenue increases, hours of productivity saved, and anticipated improvements in key performance indicators (KPIs) such as customer satisfaction scores, Hafez says.
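To make those metric categories concrete, here is a minimal sketch of a projected-ROI calculation along the lines Hafez describes; every figure, including the blended hourly rate, is a hypothetical assumption for illustration rather than a number from the article.

# Minimal projected-ROI sketch for an AI project.
# Every figure below is a hypothetical assumption for illustration only.

annual_cost_savings = 250_000       # e.g., reduced manual processing costs
annual_revenue_increase = 400_000   # e.g., incremental sales attributed to the model
productivity_hours_saved = 5_000    # staff hours freed per year
blended_hourly_rate = 60            # assumed loaded cost per staff hour

annual_project_cost = 450_000       # licenses, infrastructure, model maintenance, staff

total_annual_benefit = (
    annual_cost_savings
    + annual_revenue_increase
    + productivity_hours_saved * blended_hourly_rate
)

roi = (total_annual_benefit - annual_project_cost) / annual_project_cost

print(f"Projected annual benefit: ${total_annual_benefit:,.0f}")
print(f"Projected ROI: {roi:.0%}")

Softer KPIs such as customer satisfaction do not drop neatly into a formula like this, which is one reason the experts quoted here pair quantitative projections with qualitative measures.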
Additionally, metrics such as time-to-market for new products or services, as well as any expected reduction in bugs or vulnerabilities revealed by a tool such as Amazon Q Developer, can provide insights into an AI project’s potential benefits.

Leaders need to look past the technology to determine how investing in generative AI aligns with their overall strategy, Ammanath says. She notes that the metrics required to measure AI project ROI vary, depending on the implementation stage. For example, to measure potential ROI, organizations should evaluate projected efficiency gains, estimated revenue growth, and strategic benefits, like improved customer loyalty or reduced downtime. “These forward-looking metrics offer insights into the initiative’s promise and help leaders determine if they align with the business goals.” Additionally, for current ROI, leaders should consider using metrics that look at realized outcomes, such as actual cost savings, revenue increases tied directly to AI initiatives, and improvements in key performance indicators like customer satisfaction or throughput.

Pulling the Plug

If an AI project consistently fails to meet expectations, terminate it in a calculated manner, Hafez recommends. “Document the lessons learned and the reasons for failure, reallocate resources to more promising initiatives, and leverage the knowledge gained to improve future projects.”

Once a decision has been made to end a project, yet prior to officially announcing the venture’s termination, Narukulla advises identifying alternative projects or roles for the now-idled AI team talent. “In light of the ongoing shortage of skilled professionals, ensuring a smooth transition for the team to new initiatives should be a priority,” he says.

Narukulla adds that capturing key learnings from the terminated project should be a priority. “A thorough post-mortem analysis should be conducted to assess which strategies were successful, which aspects fell short, and what improvements can be made for future endeavors.”

Narukulla believes that thoroughly documenting post-mortem insights can be invaluable for future reference. “By the time a similar issue arises, new models and additional data sources may offer innovative solutions,” he explains. At that point, the project may be revived in a new and useful form.

Parting Thoughts

Establishing a strong governance framework for all ongoing AI projects is essential, Hafez says. “Further, a strong partnership with legal, compliance, and privacy teams can enhance success, particularly in regulated industries.” He also suggests collaborating with external partners. “Leveraging their expertise can provide valuable insights and accelerate the AI journey.”

When implemented and scaled properly, AI is far more than a technological tool; it’s a strategic enabler of innovation and competitive advantage, Ammanath says. However, long-term success requires more than sophisticated algorithms — it demands cultural transformation, emphasizing human collaboration, agility, and ethical foresight, she warns. “Organizations that thrive with AI establish clear governance frameworks, align business and technical teams, and prioritize long-term value creation over short-term gains.”

As AI continues to advance and evolve, IT leaders have an unprecedented opportunity to align investments with enterprise-wide goals, Ammanath says. “By approaching AI as a


How Enterprise Leaders Can Shape AI’s Future in 2025 and Beyond

Once confined to narrow applications, artificial intelligence is now mainstream. It’s driving innovations that are reshaping industries, transforming workflows, and challenging long-standing norms.

In 2024, generative AI tools became regular fixtures in workplaces, doubling their adoption rates compared to the previous year, according to McKinsey. This surge in adoption highlights AI’s transformative potential. At the same time, it underscores the urgency for businesses to fully grasp the opportunities and significant responsibilities that accompany this shift.

AI’s applications are astonishingly broad, from personalized healthcare diagnostics and real-time financial forecasting to bolstering cybersecurity defenses and driving workforce automation. These advancements promise substantial efficiency gains and insight, yet they also come with profound risks. For enterprise IT managers, who often spearhead these initiatives, the stakes have never been more significant or more complex.

The years ahead likely will be defined by how adeptly businesses can navigate this duality. The immense promise of transformative AI innovation is counterbalanced by the equally critical need to mitigate risks through robust data validation, human-in-the-loop systems, and proactive ethical safeguards. As we head into 2025, these three themes will drive the future of AI.

Human-Machine Interaction Will Grow

The promise of AI lies not in replacing human oversight but in enhancing it. The increased adoption of AI means it increasingly will integrate into workflows where human judgment remains essential, particularly in high-stakes sectors such as healthcare and finance.

In healthcare, AI is revolutionizing diagnostics and treatment planning. Systems can process vast amounts of medical data, highlighting potential issues and providing insights that save lives. Yet the final decision often rests with clinicians, whose expertise is essential to interpreting and acting on AI-generated recommendations. This collaborative approach safeguards against over-reliance on technology and ensures ethical considerations remain central.

Similarly, in financial services, AI aids in risk assessment and fraud detection. While these tools offer unparalleled efficiency, they require human oversight to account for nuances and contextual factors that algorithms may miss. This balance between automation and human input is critical to building trust and achieving sustainable outcomes.

Deploying AI responsibly requires enterprise IT managers to prioritize systems that maintain this collaborative framework. Setting the stage for responsible use requires implementing mechanisms for continuous oversight, designing workflows that incorporate checks and balances, and ensuring transparency in how AI tools arrive at their outputs.

AI Accuracy Is Even More Important

Accurate AI systems are critical in fields where errors can have far-reaching consequences. For example, a health misdiagnosis resulting from faulty AI predictions could endanger patients. In finance, an erroneous risk assessment could cost organizations millions.

One key challenge is ensuring that the data feeding these systems is reliable and relevant. AI models, no matter how advanced, are only as good as the data they are trained on.
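As one small illustration of the kind of data validation protocol this article goes on to recommend, the sketch below runs basic completeness, range, and freshness checks on records before they would reach a model. The field names, acceptable ranges, and sample records are hypothetical assumptions, not a prescribed standard.

# Minimal data-validation sketch: basic checks before data feeds an AI model.
# Field names, ranges, and sample records are hypothetical examples.
from datetime import datetime, timedelta

REQUIRED_FIELDS = {"account_id", "transaction_amount", "timestamp"}
AMOUNT_RANGE = (0.0, 1_000_000.0)   # assumed plausible range for this dataset
MAX_AGE = timedelta(days=365)       # reject records considered stale

def validate_record(record, now=None):
    """Return a list of validation problems for one record (empty list = clean)."""
    now = now or datetime.now()
    problems = []

    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
        return problems

    amount = record["transaction_amount"]
    if not (AMOUNT_RANGE[0] <= amount <= AMOUNT_RANGE[1]):
        problems.append(f"amount {amount} outside expected range {AMOUNT_RANGE}")

    if now - record["timestamp"] > MAX_AGE:
        problems.append("record is older than the allowed training window")

    return problems

# Hypothetical records: one clean, one with an out-of-range amount.
records = [
    {"account_id": "A1", "transaction_amount": 120.50, "timestamp": datetime.now()},
    {"account_id": "A2", "transaction_amount": -50.00, "timestamp": datetime.now()},
]

for rec in records:
    issues = validate_record(rec)
    status = "OK" if not issues else "REJECTED (" + "; ".join(issues) + ")"
    print(rec["account_id"], status)

Checks like these catch only the most mechanical problems; bias audits and human review, as the article notes, address failures that simple rules cannot.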
Inaccurate or biased data can lead to flawed predictions, misaligned recommendations, and even ethical lapses. For instance, financial models trained on outdated or incomplete datasets may expose organizations to unforeseen risks, while medical AI could misinterpret diagnostic data.

But capitalizing on what AI has to offer requires more than just accurate, clean data. The selection of the right model for a given task plays a crucial role in maintaining accuracy. Over-reliance on generic or poorly matched models can undermine trust and effectiveness. Enterprises should tailor AI tools to specific datasets and applications, integrating domain-specific expertise to ensure optimal performance.

Enterprise IT managers must adopt proactive measures like rigorous data validation protocols, routinely auditing AI systems for biases, and incorporating human review as a safeguard against errors. With these best practices, organizations can elevate the accuracy and reliability of their AI deployments, paving the way for more informed and ethical decision-making.

Regulatory Focus Will Be Narrow

As AI continues to evolve, its growing influence has prompted an urgent need for thoughtful regulation and governance. With the incoming administration prioritizing a smaller government footprint, regulatory frameworks will likely focus only on high-stakes applications where AI poses significant risks to safety, privacy, and economic stability, such as autonomous vehicles or financial fraud detection.

Regulatory attention could intensify in sectors like healthcare and finance as governments and industries strive to mitigate potential harm. Failures in these areas could endanger lives and livelihoods and erode trust in the technology itself.

Cybersecurity is another area where governance will take center stage. The Department of Homeland Security recently unveiled guidance on how to use AI in critical infrastructure, which has become a target for exploitation. Regulatory measures may require organizations to demonstrate robust safeguards against vulnerabilities, including adversarial attacks and data breaches.

However, regulation alone is not enough. Enterprises must also foster a culture of accountability and ethical responsibility. This involves setting internal standards that go beyond compliance, such as prioritizing fairness, reducing bias, and ensuring that AI systems are designed with end users in mind.

Enterprise IT managers hold the keys to striking this balance by implementing transparent practices and fostering trust. By acting thoughtfully now, organizations can harness AI to drive innovation while addressing its inherent risks, ensuring it becomes a cornerstone of progress for years to come.


Quick Study: The IT Hiring/Talent Challenge

So, you told a friend that you need to hire more IT folks. The friend replied, “Hah, good luck!”

Circumstances have dealt IT leaders a challenging hand over the past few years. From the Great Resignation to executive demands for digital transformation, and onward to corporate fascination with artificial intelligence, hiring and keeping IT talent requires new strategies.

There was no single cause of today’s hiring challenges, and there’s no single, easy answer short of hitting the lottery and retiring. However, contributors to InformationWeek have shared their experiences and advice to IT leaders on ways to staff up and skill up, all while staying under budget and keeping the IT operational lights on.

In this guide to today’s IT hiring and talent challenges, we have compiled a collection of advice and news articles focused on finding, hiring, and retaining IT talent. We hope it helps you succeed this year.

A World of Change

Help Wanted: IT Hiring Trends in 2025
IT’s role is becoming more strategic. Increasingly, it is expected to drive business value as organizations focus on digital transformation.

IT Security Hiring Must Adapt to Skills Shortages
Diverse recruitment strategies, expanded training, and incentivized development programs can all help organizations narrow the skills gap in an era of rapidly evolving threat landscapes.

Top IT Skills and Certifications in 2025
In 2025, top IT certifications in cloud security and data will offer high salaries as businesses prioritize multi-cloud and AI.

How To Be Competitive in a Tight IT Employment Market
A slumping economy, emerging technologies, and over-hiring have led to a tight IT jobs market. Yet positions are still abundant for individuals possessing the right skills and attitude.

The Soft Side of IT: How Non-Technical Skills Shape Career Success
Here’s why soft skills matter in IT careers and how to effectively highlight them on a resume. Show that you are a good human.

Salary Report: IT in Choppy Economic Seas and Roaring Winds of Change
Last year brought a sustained adrenaline rush for IT. Everything changed. Some of it with a whimper and some of it with a bang. Through it all IT pros held steady, but is it enough to sail safely through the end of 2024?

Quick Study: The Future of Work Is Here
The workplace of the future isn’t off in the future. It’s been here for a few years — even pre-pandemic.

10 Unexpected, Under the Radar Predictions for 2025
From looming energy shortages and forced AI confessions to the rising ranks of AI-faked employees and a glimmer of a new cyber-iron curtain, here’s what’s happening that may require you to change your company’s course.

Finding Talent

AI Speeds IT Team Hiring
Can AI help your organization find top IT job candidates quickly and easily? A growing number of hiring experts are convinced it can.

Skills-Based Hiring in IT: How to Do it Right
By focusing directly on skills instead of more subjective criteria, IT leaders can build highly capable teams. Here’s what you need to know to get started.

The Evolution of IT Job Interviews: Preparing for Skills-Based Hiring
The traditional tech job interview process is undergoing a significant shift as companies increasingly focus on skills-based hiring and move away from the traditional emphasis on academic degrees.

IT Careers: Does Skills-Based Hiring Really Work?
More organizations are moving toward skills-based hiring and getting mixed results. Here’s how to avoid some of the pitfalls.
Jumping the IT Talent Gap: Cyber, Cloud, and Software Devs
Businesses must first determine where their IT skill sets need bolstering and then develop an upskilling strategy or focus on strategic new hires.

Top Career Paths for New IT Candidates
More organizations are moving from roles-based staffing to skills-based staffing. In IT, flexibility is key.

Why IT Leaders Should Hire Veterans for Cybersecurity Roles
Maintaining cybersecurity requires the effort of a team. Veterans are uniquely skilled to operate in this role and bring strengths that meet key industry needs.

How to Find a Qualified IT Intern Among Candidates
IT organizations offering intern programs often find themselves swamped with applicants. Here’s how to find the most knowledgeable and prepared candidates.

The Search for Solid Hires Between AI Screening and GenAI Resumes
Do AI-generated job applications gum up the recruitment process for hiring managers by filling inboxes with dubiously written CVs?

3 Things You Should Look for When Hiring New Graduates
Each year, entry-level applicants in IT look a little different. Here’s what you need to be looking for as the class of 2023 infiltrates the workforce.

Why a College Degree is No Longer Necessary for IT Success
Who needs student debt? A growing number of employers are hiring IT pros with little or no college experience.

Recruiting Talent

In Global Contest for Tech Talent, US Skills Draw Top Pay
After several years of economic uncertainty and layoffs, US talent is once again attracting good pay in the global competition for tech skills. But gender disparity continues in many job categories.

Hiring Hi-Tech Talent by Kickin’ It Old School
Using elements of a traditional approach to recruiting IT professionals can attract and grow the modern workforce, but it’s the soft skills shown during an interview that make a big difference.

The Impact of AI Skills on Hiring and Career Advancement
Demand is high for professionals with knowledge of AI, but do such talents really get implemented on the job?

How to Channel a ‘World’s Fair’ Culture to Engage IT Talent
Even the most well-funded and innovative companies will fail if they lack one thing: a diverse, united team. A CEO shares his experience and advice.

Bridging IT Skills Gap in the Age of Digital Transformation
Innovations in automation, cloud computing, big data analytics, and AI have not only changed the way businesses operate but have intensified the demand for specialized skills.

5 Traits To Look for When Hiring Business and IT Innovators
Hiring resilient and forward-thinking employees is the cornerstone to innovation. If you’re looking to hire a “trailblazer,” here are five traits to seek, as


AI Upskilling: How to Train Your Employees to Be Better Prompt Engineers

Generative AI’s use has exploded across industries, helping people to write, code, brainstorm, and more. While the interface couldn’t be simpler — just type some text in the box — mastery of it involves continued use and constant iteration.

GenAI is considered a game-changer, which is why enterprises want to scale it. While users have various resources available, like OpenAI and Gemini, proprietary LLMs, and GenAI embedded in applications, companies want to ensure that employees are not compromising sensitive data.

GenAI’s unprecedented rate of adoption has inspired many individuals to seek training on their own, often online at sites such as Coursera, EdX, and Udemy, but employers shouldn’t depend on that. Given the strategic nature of the technology, companies should invest in training for their employees.

A Fast Track To Improving Prompt Engineering Efficacy

Andreas Welsch, founder and chief AI strategist at boutique AI strategy consultancy Intelligence Briefing, advocates starting with a “Community of Multipliers” — early tech adopters who are eager to learn about the latest technology and how to make it useful. These multipliers can teach others in their departments, helping leadership scale the training. Next, he suggests piloting training formats in one business area, gathering feedback, and iterating on the concept and delivery. Then, roll it out to the entire organization to maximize utility and impact.

“Despite ChatGPT being available for two years, generative AI tools are still a new type of application for most business users,” says Welsch. “Prompt engineering training should inspire learners to think and dream big.”

He also believes different kinds of learning environments benefit different types of users. For example, cohort-based online sessions have proven successful for introductory levels of AI literacy, while executive training expands the scope from basic prompting to GenAI products.

Advanced training is best conducted in a workshop because the content requires more context and interaction, and the value comes from networking with others and having access to an expert trainer. Advanced training goes deeper into the fundamentals, including LLMs, retrieval-augmented generation, vector databases, and security risks, for example.

“Function-specific, tailored workshops and trainings can provide an additional level of relevance to learners when the content and examples are put into the audience’s context, for example, using GenAI in marketing,” says Welsch. “Prompting is an important skill to learn at this early stage of GenAI maturity.”

Digital agency Create & Grow initiated its prompt engineering training with a focus on the basics of generative AI and its applications. Recognizing the diverse skill levels within its team, the company implemented stratified training sessions, beginning with foundational concepts for novices and advancing to complex techniques for experienced members.

“This approach ensures that each team member receives the appropriate level of training, maximizing learning efficiency and application effectiveness,” says Georgi Todorov, founder and CEO of Create & Grow, in an email interview. “Our AI specialists, in collaboration with the HR department, lead the training initiatives.
This dual leadership ensures that the technical depth of AI is well-integrated with our overarching employee training programs, aligning with broader company goals and individual development plans.”

The company’s training covers:
- The basics of AI and language models
- Principles of prompt design and response analysis
- Use cases specific to its industry and client requirements
- Ethical considerations and best practices in AI usage
- Educational resources, including online courses, in-person workshops, and peer-led sessions, and use of resources from leading AI platforms and collaborations with AI experts that keep training up-to-date and relevant

To gauge individuals’ level of prompt engineering mastery, Create & Grow conducts regular assessments and chooses practical projects that reflect real-world scenarios. These assessments help the company tailor ongoing training and provide targeted support where needed.

“It’s crucial to foster a culture of continuous learning and curiosity. Encouraging team members to experiment with AI tools and share their findings helps demystify the technology and integrate it more deeply into everyday workflows,” says Todorov. “Our commitment to developing prompt engineering expertise is not just about staying competitive; it’s about harnessing the full potential of AI to innovate and improve our client offerings.”

A Different Take

Kelwin Fernandes, cofounder and CEO at AI strategy consulting firm NILG.AI, says good prompts are not ambiguous. “A quick way to improve prompts is to ask the AI model if there’s any ambiguity in the prompt. Then, adjust it accordingly,” says Fernandes in an email interview.

His company defined a basic six-part template for efficient prompting (illustrated in the sketch at the end of this excerpt) that covers:
- The role the AI should play (e.g., summarizing, drafting, etc.)
- The human role or position the AI should imitate
- A description of the task, being specific and removing any ambiguity
- A negative prompt stating what the AI cannot do (e.g., don’t answer if you’re unsure)
- Any context you have that the AI doesn’t know (e.g., information about the company)
- The specific task details the AI should solve at this time

“[W]e do sharing sessions and role plays where team members bring their prompts, with examples that worked and examples that didn’t, and we brainstorm how to improve them,” says Fernandes.

At video production company Bonfire Labs, prompt training includes a communal think tank on Google Chat, making knowledge accessible to all. The company also holds staff meetings in which different departments learn foundational skills, such as prompt structure or tool identification.

“This ensures we are constantly cross-skilling and upskilling our people to stay ahead of the game. Our head of emerging technologies also plays an integral role in training and any creative process that requires AI, further improving our best practices,” says Jim Bartel, partner and managing director at Bonfire Labs, in an email interview. “We have found that the best people to spearhead prompt training are those who are already masters at what they do, such as our designers and VFX artists. Their expertise in refinement and attention to detail is
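As a rough illustration of the six-part template Fernandes describes, here is a minimal sketch that assembles a prompt from those parts. The helper function, example values, and wording are hypothetical, not NILG.AI's actual tooling.

# Minimal sketch: assembling a prompt from the six-part template described above.
# The function and the example values are hypothetical illustrations.

def build_prompt(ai_role, persona, task, negative, context, details):
    """Compose a prompt string from the six template parts."""
    return "\n".join([
        f"Role: {ai_role}",
        f"Act as: {persona}",
        f"Task: {task}",
        f"Do not: {negative}",
        f"Context: {context}",
        f"Specifics: {details}",
    ])

prompt = build_prompt(
    ai_role="Summarize the document provided below.",
    persona="A senior marketing analyst reviewing campaign results.",
    task="Produce a five-bullet summary of the campaign report, one finding per bullet.",
    negative="Do not answer if you are unsure; do not invent figures not in the report.",
    context="The company sells B2B analytics software; the audience is the CMO.",
    details="Focus on Q3 conversion rates and the two underperforming channels.",
)
print(prompt)

Keeping the parts in a fixed order like this makes it easier to spot, during the kind of sharing sessions Fernandes mentions, which part of a failing prompt was ambiguous or missing.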


Digital Mindset: The Secret to Bottom-Up GenAI Productivity

As organizations look to increase business performance through generative AI, traditional methods for increasing adoption of new technologies are unlikely to be effective for several reasons.

First, unlike most enterprise systems, which are designed to automate specific tasks, GenAI tools are general purpose. While standard use cases can be developed and shared, sustainable productivity gains will result from employees innovating and finding novel ways to use GenAI tools in real time as conditions change.

Second, many GenAI tools are enabled rather than implemented, thus bypassing the user engagement opportunities a formal implementation project affords. For example, many organizations are using GenAI for text generation in word processors and notetaking in video conference software. No implementation project was needed to make this leap; the new functionality was simply activated.

Third, GenAI tools are probabilistic rather than deterministic. Having employees attend structured training makes sense for a deterministic system, one that will always generate predictable outputs from a given set of inputs. Conversely, GenAI tools rely on statistical methods and have inherent variability in their outputs. Enter the same prompt in your favorite large language model (LLM) twice and you will get two different responses.

The final key difference between prior technologies and GenAI is the level of technical knowledge required. Unlike previous technologies, many GenAI tools are designed to be low code or no code. Users tell the technology what to do via natural language processing or simple graphic interfaces. Because there is no need to translate desired functions into computer code, employees can innovate automations independently, breaking the reliance on IT and specialized coding skills.

Culture at the Core of GenAI Adoption

The challenge for business leaders will be to increase the type of GenAI adoption that continually taps new pools of business value through independent, real-time use case innovation on pace with changing business demands. This will require an important cultural component that I call “digital mindset.”

Digital mindset entails a functional understanding of data and systems, enabling innovation in daily work activities across multiple domains. Digital mindset is a productivity accelerant, insufficient by itself, and most impactful when paired with domain expertise and other soft skills, like problem-solving and communications.

Leaders Can Drive Bottom-Up GenAI Adoption

Cultural changes require a strong leadership push to be successful. There are several practical steps leaders can take to begin building or reinforcing digital mindset and driving value-add GenAI adoption:

- Role model the behavior. Leaders should be embodiments of digital mindset, role modeling the desired behaviors and consistently walking the walk. To do this, leaders should gain hands-on experience using GenAI tools.
- Create the right conditions. Encouragement for employees to use GenAI must be matched with a positive user experience, especially for first-time users. Leaders should establish an infrastructure that makes GenAI both safe and easy to use.
- Communicate clearly and transparently.
GenAI adoption should be enhanced through a multi-pronged communication plan, with messaging that evolves over time and, at a minimum, accomplishes a few critical objectives: provides clear guidance, demystifies the organization’s approach to GenAI, builds excitement, sets expectations, and celebrates specific examples of success.
- Embrace the culture shift. For organizations that are resistant or lagging, leaders need to use cultural interventions to treat the root causes — the underlying employee beliefs and values — rather than the symptoms. Limiting beliefs like “AI is going to replace me” or “I need to wait for training before I can start” must be overcome to build momentum toward sustained success.

Effective cultural interventions create positive changes in employee attitudes that drive new behaviors that generate artifacts that create business value. Because the change unfolds through these layers sequentially, it’s important to have benchmarks for each layer that help indicate a strong culture (“digital mindset”) versus a weak one (“analog mindset”). Some examples of good and bad at each layer include:

Layer 1: Culture — Beliefs and Values
- Digital mindset examples – Technology can make my role more valuable; using new technologies will create skills that transfer to other systems; using new technology is a way to learn
- Analog mindset examples – Technology will replace my job; by the time I learn this new technology, it will change again; I need to wait for training before I start

Layer 2: Attitudes
- Digital mindset examples – Enthusiastic view of technology
- Analog mindset examples – Cynical view of technology

Layer 3: Behaviors
- Digital mindset examples – Seek out resources and training; experiment with new technologies on daily tasks; spread knowledge to colleagues
- Analog mindset examples – Disparage and resist new technology; subvert implementation efforts; encourage complexity to reduce automation potential

Layer 4: Artifacts — Outcomes that Deliver Business Value
- Digital mindset examples – Process innovation; productivity gains; analytics enablement
- Analog mindset examples – Manual processes; unreliable data; stale skillsets

Measuring Progress

Levels of GenAI adoption can be measured across a continuum ranging from “resistant” to “champion adoption,” with several steps in between.

GenAI Adoption Levels (Worst to Best)
0 Resistant – Actively resists or avoids using GenAI tools, either due to fear, mistrust, or a perception that they threaten job security.
1 Forced adoption – Engages minimally with GenAI, using only the basic features necessary to meet mandatory requirements or appease supervisors.
2 Cautious adoption – Begins to explore GenAI’s capabilities beyond the bare minimum, often through limited, low-stakes experimentation.
3 Enthusiastic adoption – Shows genuine interest in integrating GenAI tools into their workflow, actively participating in use cases provided by supervisors or team leaders.
4 Creative adoption – Develops novel use cases for GenAI independently, often designing solutions tailored to specific departmental needs or even contributing to larger strategic goals.
5 Champion adoption – Fully embraces GenAI as a core part of their work and actively promotes its use across departmental boundaries. Champions are adept at identifying new opportunities for GenAI, both operationally and strategically,


How CIOs Can Prepare for Generative AI in Network Operations

AI networking has been a hot topic over the past few years and is a subset of AIOps. Generative AI (GenAI), which is part of AI networking, has taken this hype to a new level with the potential to transform network operations. However, with its conversational interface and ongoing learning capabilities, GenAI will likely be met with both favor and distrust.

But what can enterprises really gain by using GenAI as part of network operations? CIOs must be aware of new GenAI capabilities for network operations, business case considerations, and ways to build trust to minimize adoption risk.

GenAI promises great potential to enable improvements to long-standing traditional networking operations practices across Day 0, Day 1, and Day 2. With GenAI, network operations can accelerate initial configurations, improve the ability to change vendors, drive more efficient troubleshooting, and simplify documentation access.

Day 0

For Day 0, for example, an engineer could use an iterative process and ask the GenAI network tool via a natural language interface to design a leaf-spine network to support 400 physical servers using Vendor X. Additional information like SLA requirements (such as availability and throughput) can also be included via natural language to deliver the desired performance level and a design that includes cost implications.

Another example is in the area of capacity planning, as new users, applications, and architectures are adopted, making network planning more complicated. GenAI can be used to help size network infrastructure and optimize costs based on the number and types of applications hosted on-premises, in the cloud, and at end-user locations (in the office, at home, or other locations).

Day 1

The GenAI network tool can then help generate, validate, and optimize all the required Day 1 configurations based on desired criteria (for example, by price or performance). It may not be 100% accurate, which is why it may require an iterative process to refine GenAI tool outputs to accelerate and optimize network setup. Even if it requires several iterations, the use of GenAI would represent a substantial improvement over current rigid processes and tools, reducing time and errors by up to 25%. We envision that this will be leveraged in all networking domains (WAN, data center, cloud, and campus) to assist in the design and setup of networks.

Day 2

AI networking enhances Day 2 network operational support by correlating multiple data inputs, identifying problems faster, yielding quicker resolution and, where applicable, spotting problems proactively before a user is aware. GenAI will bring additional capabilities, including a conversational interface and the ability to learn over time. It can also enhance the user experience with specific outputs such as text, audio, video, or graphics.

For example, to help isolate problems, CIOs can ask GenAI to build a dynamic graphic of networking performance issues over time based on packet loss, latency, and jitter. It can also focus on specific questions such as “Is the CEO having network performance issues?”

GenAI can create detailed configurations and troubleshooting procedures based on natural language inputs without explicit templates.
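As a concrete illustration of the kind of iterative, natural-language request described above (the Day 0 leaf-spine design example), here is a minimal sketch of how a team might compose such a request from structured requirements. The prompt wording, SLA figures, and vendor placeholder are hypothetical, and the resulting text would be submitted to whatever GenAI networking tool the team actually uses.

# Minimal sketch: composing a Day 0 design request for a GenAI network tool.
# Server count, SLA targets, and the vendor placeholder are hypothetical examples.

def day0_design_prompt(servers, vendor, availability, throughput_gbps, notes=""):
    """Build a natural-language design request from structured requirements."""
    prompt = (
        f"Design a leaf-spine data center network to support {servers} physical servers "
        f"using {vendor} hardware. Target availability of {availability} and "
        f"east-west throughput of {throughput_gbps} Gbps per rack. "
        "List the proposed switch models, port counts, oversubscription ratio, "
        "and an estimated cost range."
    )
    if notes:
        prompt += f" Additional constraints: {notes}"
    return prompt

# First iteration of the request; later iterations would refine constraints
# based on the tool's previous answer (for example, tightening the cost ceiling).
print(day0_design_prompt(
    servers=400,
    vendor="Vendor X",
    availability="99.99%",
    throughput_gbps=800,
    notes="Prefer a two-tier design; budget ceiling of $1.5M.",
))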
GenAI tools can drive network operational support time savings of up to 25% when compared with the status quo by driving efficiencies that can’t reasonably be achieved by scaling manual resources. It removes manual processes to identify issues more quickly, resulting in faster problem resolution.

Calculate the Value Before Investing

CIOs must ask pertinent questions to gain a complete understanding of the inherent value of GenAI networking, its use cases, and common tools. A key facet in the process of GenAI adoption involves building the business case and calculating the value to the organization.

Asking pertinent questions can offer more insights while creating a business case to determine the value of GenAI functionality. Specifically, determine if aligning network operations with GenAI can help build scale, control or reduce costs, drive resource efficiency, foster agility to keep up with the digital business, and deliver a better end-user experience.

Prove the Concept First

In addition to the immaturity of GenAI networking functionality and the need to quantify the value, another key limitation that needs to be overcome to achieve wider adoption by network operations is a lack of trust. Network teams have been burned many times by vendor claims of automation or a single pane of glass to solve existing issues. This, in part, is the reason why network operations teams have been slow to adopt network automation and are skeptical about GenAI. On top of this, GenAI networking tools may yield inconsistent responses, which introduces risk and fosters mistrust.

However, network operations teams need to include GenAI functionality in their RFPs/RFIs to determine the scope, value, and capabilities of the solutions in the market as they mature.

Running a proof of concept (POC) is key for network operations personnel to determine the accuracy of the GenAI solution, alongside its maturity, level of trust, and degree of comfort. This is really more about quantifying the accuracy of the GenAI networking solution across a wide range of scenarios. Even in production, we expect network operations personnel to have to validate some or many GenAI outputs, but baselining the capability gives context to the accuracy and the level of unsupervised trust (if any) that should be given.

When running the POC, begin by testing in a lab environment before moving to a real-life production environment. Test the solution over several weeks and months to stress it as much as possible. Have multiple personnel leverage the tool to capture multiple opinions and perspectives. Validate the GenAI networking tool outputs for accuracy by testing against alternative sources. Measure the time to perform tasks with the GenAI networking tool and with the previous/current method. In short, the goal is to compare the process efficiency and accuracy of the current approach versus the intended GenAI approach. As part of this POC, both the level of trust and value (business case) can be determined to help inform a sourcing decision and simplify
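As a rough illustration of the POC measurement step described above (timing tasks and validating outputs with both methods), here is a minimal sketch that compares trial results. All trial figures are hypothetical examples, not benchmarks from the article.

# Minimal sketch: comparing POC trial results for a GenAI networking tool
# against the current manual method. All trial numbers are hypothetical.

genai_trials = {
    "minutes_per_task": [14, 11, 16, 12, 13],   # time to complete each test task
    "correct_outputs": 4,                        # outputs validated as accurate
    "total_outputs": 5,
}
manual_trials = {
    "minutes_per_task": [19, 22, 18, 21, 20],
    "correct_outputs": 5,
    "total_outputs": 5,
}

def avg(values):
    return sum(values) / len(values)

genai_time = avg(genai_trials["minutes_per_task"])
manual_time = avg(manual_trials["minutes_per_task"])
time_savings = (manual_time - genai_time) / manual_time

genai_accuracy = genai_trials["correct_outputs"] / genai_trials["total_outputs"]
manual_accuracy = manual_trials["correct_outputs"] / manual_trials["total_outputs"]

print(f"Average time savings with the GenAI tool: {time_savings:.0%}")
print(f"GenAI output accuracy: {genai_accuracy:.0%} vs. manual: {manual_accuracy:.0%}")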


The Cost of AI: Navigating Demand vs Supply for AI Strategy

In the midst of the second week of InformationWeek’s series on the Cost of AI, attention turns to better understanding some of the current limits on AI resources and how they can affect enterprises’ plans for the technology. So far, the series has covered many facets of what it takes to deliver AI, and the following video features interviews on issues of supply and demand when it comes to the technology, the people needed to drive it, and other resources required to support it.

Many organizations want to explore ways they can use AI, pursuing incredible ideas they believe could elevate their operations. The problem is that they might not be able to, because the resources they need are not always available. That can mean not having access to the most popular tech, a shortage of AI gurus, or simply not enough energy to support their ambitions. This does not necessarily mean they must give up on AI.

Liz Fong-Jones, field CTO for Honeycomb; Brandon Lucia, CEO and co-founder of Efficient Computer; Simeon Bochev, CEO and co-founder of Compute Exchange; and Chaitanya Upadhyay, chief product officer for Aarki, discuss ways companies can adopt grounded strategies to navigate supply and demand for AI.


Is AI Driving Demand for Rare Earth Elements and Other Materials?

Artificial intelligence is changing the world in innumerable ways. But it’s not all chatbots and eerily realistic images. This technology, for all its surreal qualities, has a basis in the material world. The materials that power its capabilities range across the periodic table — from easily accessible elements such as silicon and phosphorus to rare earth elements (REEs), derived from complex purification processes.

Rare earth elements are a series of 15 elements, atomic numbers 57 through 71 on the periodic table, called the lanthanide series, along with two other elements (21 and 39) with similar properties. They are divided into light and heavy categories. Heavy rare earth elements, which have higher atomic numbers, are less common.

The light rare earths are lanthanum, cerium, praseodymium, neodymium, promethium, samarium, europium, and gadolinium. The heavy rare earths are yttrium, terbium, dysprosium, holmium, erbium, thulium, ytterbium, and lutetium. Scandium falls outside the two categories.

These metals are not actually rare — they just exist in low concentrations and are difficult to extract. They are crucial components of the semiconductors that provide the computing power that drives AI. They possess uniquely powerful magnetic qualities and are excellent at conducting electricity and resisting heat.

These qualities make them excellent for graphics processing units (GPUs), application-specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs). REEs are also crucial to the sustainable energy production that supposedly offsets the drain on the power grid by AI — notably wind turbines.

The market for these metals is expected to reach $10 billion in the next two years.

If recent headlines are to be believed, some of these materials are becoming increasingly scarce due to supply chain issues. China has throttled the export of REEs and other critical materials. It produces some 70% of the global supply and processes around 90% of REEs.

Whether that is a genuine concern is debated. It has certainly resulted in trade tensions between China and the West. But other countries, including the United States, are attempting to ramp up production, and prospects in the deep sea may offer additional sources.

InformationWeek investigates, with insights from David Hammond, principal mineral economist at Hammond International Group, and Ramon Barua, CEO of rare earths supplier Aclara Resources.

Which Elements Are Required to Power AI?

Semiconductors comprise some 300 materials — with REEs and other critical minerals among them. Among the most crucial components are cerium, europium, gadolinium, lanthanum, neodymium, praseodymium, scandium, terbium, and yttrium, as well as the critical minerals gallium and germanium.

Some REEs are used in the manufacturing process, and others are integrated into the chips themselves — used to dope other materials to alter their conductive properties. The performance of gallium nitride and indium phosphide is enhanced by doping with europium and yttrium, for example. And layers of oxides formed from gadolinium, lanthanum, and lutetium have improved logic and memory performance.

The proportions of the materials used in semiconductors are largely trade secrets — and thus the demand for specific REEs and other critical minerals for semiconductors is difficult to determine.
But they are likely not the major driver of extraction of these elements.

“The usage of rare earths in semiconductors is really a minor aspect of all rare earth demand,” Hammond claims. “I don’t believe it will ever be a major demand driver for rare earths. Less than 10%, probably 5%.”

Dysprosium, neodymium, praseodymium, and terbium are essential components of the magnets used in wind turbines — which comprise a portion of the sustainable energy used to supposedly offset AI’s energy drain. Hammond thinks that demand for these REEs, also used in generators and solar panels, will be the major driver for extraction and consumption of REEs. Whether that demand will compete with demand from the semiconductor industry remains unknown.

“The need for these other applications is probably going to create that marginal supply that is going to be used by semiconductors,” Barua predicts.

Additional elements, such as gallium and germanium, and compounds such as high-purity aluminum (HPA) are also essential. Common elements including silicon and copper play key roles as well. Demand for copper is expected to grow significantly — by up to a million metric tons in the next five years.

Many of these elements, though crucial, are only required in small quantities. “Last year, the US required 19 metric tons of gallium,” Hammond says. “That’s basically 19 pickup trucks of gallium. The panic was so vastly exaggerated to be almost in the realm of stupidity.”

How Available Are These Elements?

China has a monopoly on REEs, both in terms of extraction and processing. It produced more than 240,000 metric tons in 2023. But REEs are also found elsewhere — in the US, Australia, India, Myanmar, Russia, and Vietnam. They are relatively common and usually found together, in varying levels of abundance.

China only holds around 40% of the world’s reserves of these minerals. China was not always the primary producer — prior to the 1980s, the US was dominant. But China’s more lax environmental regulations proved advantageous, and by the late 1990s it had the upper hand in terms of availability and processing technology.

While China currently has a stranglehold on supply and processing, other countries are investigating how to leverage their own reserves of REEs. The US and Australia still manage to extract substantial amounts of these minerals. The processing technology required to turn these elements into usable materials is perhaps the most pressing issue — countries that extract REEs usually send them to China for refinement.

“The big issue for rare earths isn’t so much finding them. It’s processing them,” Hammond observes. “It requires a challenging chemical process to extract the individual components.”

“The companies producing rare earths are pretty sticky about talking about it — for competitive reasons. But also, nobody really knows what the demand is going to be. Nobody
