Information Week

Implementing an IT-User Exchange Program

Like foreign student exchange programs, a regular exchange program between the IT team and end-user departments, in which an IT business analyst spends six weeks in an end-user area doing end-user work and a person from the end-user area spends six weeks in IT, can build bench strength and collaborative relationships between IT analysts and business users. Yet many who have tried this idea have come away with mixed results. What are the pitfalls, and is there a way to run an employee exchange program that delivers quality outcomes for everyone?

First, Why Do It?

Cross-disciplinary team building and the development of empathy and understanding of the business and IT across departments are the driving forces behind user-IT employee exchanges. You can't teach practical company business acumen to IT staff with textbooks and college courses. IT needs boots-on-the-ground experience in user departments, where business analysts directly experience the day-to-day process problems and pain points that users do.

End users who take a tour of duty in IT have a chance to see the "other side," where IT must plan carefully how to integrate and secure software even as users complain that application deployments are taking too long.

On paper, virtually no one in user-department or IT management thinks that employee exchange is a bad idea. So, why haven't these exchanges been widely embraced?

Pitfalls

There are several reasons why employee exchanges between users and IT have faltered:

1. The time commitment

Whether you're in IT or end-user management, exchanging an employee who is fully trained in your department for another employee who will be, at best, a trainee is not an easy sacrifice to make. There are projects and daily work to accomplish. Can your department afford an employee exchange that could compromise productivity when you might already be running lean?

2. Lack of management commitment

The user-IT employee exchange starts out strong, with both user and IT management highly enthusiastic about the idea. Then an unexpected priority comes up on either the user or IT side, and the affected manager says, "I'm sorry. I'm going to have to pull back my employee from the exchange because we have this important project to get out."

I've seen this scenario happen. Employees get pulled out of the exchange program, and their managers try in good faith to reengage them once the crisis has been resolved, but the continuity of the exchange has been interrupted and much of the initial effort is lost.

3. Failure to set attainable goals

Often, users and IT will agree to an employee exchange with a loose goal of immersing employees in different departments so they can gain a better understanding of the company. The employees, and those they work with in their new departments, aren't really sure what they should be focusing on. When the exchange period ends, no one is exactly sure what knowledge has been gained, and they can't explain it to upper management, either.

4. Lack of follow-up

Did the employees in the exchange come back with value-added knowledge that is aiding them in the new projects they are doing? Most managers I speak with who have done these exchanges tell me that they're not sure.
One way to be sure is to check in with employees after they complete exchanges to see what they've learned and how they're applying that new knowledge to their work. For example, if an IT employee goes to accounting to learn about risk management and works six weeks with the risk group, does the employee come back with new knowledge that helps them develop more insightful analytics reports for that group?

5. Lack of practical know-how

Lack of know-how in running employee exchanges goes hand in hand with the failure to set attainable goals or to follow up. The managers who are best in these areas tend to have backgrounds in teaching and education, but not everybody does.

When you exchange employees for purposes of knowledge transfer and growth of business understanding, setting goals, staying with the process, and following up are fundamental to execution. Unfortunately, many managers who try exchanges lack skills in these areas.

6. Employee transfer requests

Many managers fear that the employees they send to other departments might like the work so well that they request a permanent transfer! This is a major fear.

Doing an Employee Exchange

Given the pitfalls, it's small wonder that employee exchange programs aren't aggressively pursued, but that doesn't mean they don't work. Where do they work?

1. Companies that want to improve their employee retention

Several years ago, a major appliance manufacturer offered an internal program in which employees could sign up for projects outside of their regular business areas and get time to work on them. Other companies have followed suit. This "outside of the department" work unlocked employee creativity and career growth opportunities. It improved employee morale, which in turn reduced employee churn. In 2024, overall employee churn at US companies was 20%, or one in five employees. With a tight job market, companies want to reduce churn, and expanding employee work experiences and knowledge is one way to do it.

2. Organizations that require cross-training

The military is a prime example. Recruits are trained in a variety of different functional areas to determine where they best excel.

3. Not-for-profit entities

Credit unions and other not-for-profit entities have historically been great proving grounds for employee exchange programs because of their people orientation. Upper and middle managers are genuinely committed to the idea of employee growth through cross-training. The not-for-profit culture also promotes resource sharing, so managers are less resistant to the idea that they could lose a valuable employee to another department because the employee likes working there.

4. When clear objectives are set, and follow-up is done

An employee exchange requires clear objectives to succeed at an optimal level. For example, you don't send an IT staffer over to accounting simply to learn the clerical processes of closing the month-end financials and reporting them to management. If it's taking finance three days to do the month-end close, you send an IT employee over to learn the process and its obstacles, and to determine why the close is taking three days instead of one. The hope is that the employee returns to


Top IT Insights 2025: Navigating the Future of Tech

The future of enterprise technology is taking shape rapidly, and the start of 2025 is only accelerating the impact it will have on all businesses, no matter the industry. IT and business leaders face incredible opportunities alongside complex challenges that will reshape everything from daily operations to client value.

As technology redefines how we work, organizations are in a race to innovate to tap into new possibilities and drive sustainable growth, or risk falling behind. To succeed in this dynamic landscape, leaders must focus on the following trends shaping enterprise technology.

Natural Language: Default for AI-Human Interaction

Over the past several years, AI has become a major focus for businesses and in everyday life. We went from the onset of ChatGPT to learning how to engineer prompts, and are now coming to terms with the privacy and governance considerations needed to use the technology safely.

AI will continue to become more intuitive and accessible, transitioning from basic web interfaces to seamless, natural interactions. AI will integrate into all aspects of our lives, enabling us to communicate with machines as naturally as we do with humans. Further, we will see AI integrated into devices like phones and smart home systems, responding to voice commands, gestures and predictive cues.

This marks a reversal of the current approach to AI: a shift from teaching humans to prompt AI toward teaching AI to understand humans. Today, prompt engineering plays a central role in making AI systems deliver optimal results. However, prompt engineering will become obsolete as natural language processing improves and AI becomes more intuitive.

To stay ahead of this shift, organizations should identify key processes unique to their business models that could benefit from natural language automation. Expanding AI use beyond traditional interfaces — incorporating voice, gesture and predictive features — will be crucial to staying ahead. Leaders should prioritize intuitive user experiences that make AI tools easier for all users to navigate. Finally, as AI's capabilities grow, ensure that your infrastructure and security measures are robust enough to handle the demands of natural language processing at scale.

Small Language Models and Edge Computing

Due to connectivity, privacy and security concerns, not all AI applications can rely on large language models (LLMs). Small language models paired with edge computing can process data closer to the source — on local servers, laptops and mobile devices — reducing LLM token usage, improving latency and addressing privacy challenges.

This hybrid approach enables organizations to process sensitive data locally, resulting in faster, more secure AI applications. Organizations can achieve more reliable AI-driven insights by using localized models that rely on curated data while optimizing resource use and managing operational costs. It mainly benefits organizations operating in regulated environments or those handling confidential information. Edge deployment also helps organizations control their AI operations, reducing reliance on external cloud providers.

Organizations can adopt this hybrid approach by evaluating where moving computing power to the edge can improve data confidentiality, security and cost-efficiency. For example, consider areas where sensitive or regulated data can be processed locally, minimizing the need to transmit information over less secure channels. Partnering with edge computing providers will allow organizations to expand their AI capabilities while keeping sensitive operations closer to home.
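As a rough, hypothetical illustration of this hybrid pattern, the sketch below routes prompts that appear to contain regulated identifiers to a locally hosted small model and sends everything else to a cloud LLM. The model classes, the regex patterns, and the routing rule are invented for the example; a production system would use real model clients and a far more rigorous data classifier.

```python
import re
from dataclasses import dataclass

# Hypothetical stand-ins for a locally hosted small model and a cloud LLM API.
class LocalSmallModel:
    def generate(self, prompt: str) -> str:
        return f"[local-slm] summary of: {prompt[:40]}..."

class CloudLLMClient:
    def generate(self, prompt: str) -> str:
        return f"[cloud-llm] detailed answer for: {prompt[:40]}..."

# Naive check for regulated identifiers (e.g., medical record or Social Security numbers).
REGULATED_PATTERNS = [r"\bMRN-\d+\b", r"\bSSN[:\s]*\d{3}-\d{2}-\d{4}\b"]

def contains_regulated_data(prompt: str) -> bool:
    return any(re.search(p, prompt) for p in REGULATED_PATTERNS)

@dataclass
class HybridRouter:
    local_model: LocalSmallModel
    cloud_model: CloudLLMClient

    def answer(self, prompt: str) -> str:
        # Keep sensitive data on the edge; use the cloud model for everything else.
        if contains_regulated_data(prompt):
            return self.local_model.generate(prompt)
        return self.cloud_model.generate(prompt)

if __name__ == "__main__":
    router = HybridRouter(LocalSmallModel(), CloudLLMClient())
    print(router.answer("Summarize chart notes for MRN-10042 before discharge."))
    print(router.answer("Draft a product description for our new blender."))
```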
Energy-Efficient AI as a Competitive Advantage

As AI is incorporated into everyday life, it drives the construction of energy-intensive data centers, straining global power grids and raising environmental concerns. Consequently, the focus is now on developing energy-efficient models to balance innovation with sustainability.

Practicing sustainable AI can be a differentiator in two ways: optimizing the use of energy resources in the training and operation of AI, and applying AI to energy-intensive processes and applications.

Business leaders should consider implementing efficiency techniques such as model pruning, quantization, and knowledge distillation to reduce computational complexity and resource usage. Additionally, the focus should be on reusing datasets and optimizing data storage to avoid redundant data processing and reduce energy consumption. Partnering with cloud providers and hardware manufacturers who prioritize energy-efficient AI solutions is another step toward sustainability.

Entry-Level Workers and an AI Workforce

One temptation with generative AI technology is to assume it can do the work of entry-level workers. However, even with these advances, these workers remain essential to the future of business. While AI can automate many repetitive tasks, entry-level employees often possess a deeper understanding of generative AI tools and how to integrate them effectively into workflows. These workers are digital natives who are highly adaptable, innovative, and capable of handling AI technologies, making them valuable contributors to an AI-enabled workforce.

To leverage the potential of an AI-enabled workforce, organizations should prioritize hiring entry-level talent who bring valuable digital-native skills and a deep understanding of generative AI tools. Retaining these workers is key to nurturing future leaders who can harness AI's capabilities for long-term success. Additionally, organizations should provide targeted training programs to help experienced employees adapt to AI advancements and integrate new technologies into their roles.

In an era of rapid technological advancement, businesses need to stay nimble while planning ahead. The trends shaping enterprise technology offer both challenges and opportunities. Taking action today will position organizations for sustained innovation and competitive advantage in 2025 and beyond.


Why AI Model Management Is So Important

Many organizations have learned that AI models need to be monitored, fine-tuned, and eventually retired. This is as true of large language models (LLMs) as it is of other AI models, but the pace of generative AI innovation has been so fast that some organizations are not yet managing their models as they should.

Senthil Padmanabhan, VP of platform and infrastructure at global commerce company eBay, says enterprises are wise to establish a centralized gateway and a unified portal for all model management tasks, as his company has done. eBay essentially created an internal version of Hugging Face, implemented as a centralized system.

"Our AI platform serves as a common gateway for all AI-related API calls, encompassing inference, fine-tuning, and post-training tasks. It supports a blend of closed models (acting as a proxy), open models (hosted in-house), and foundational models built entirely from the ground up," says Padmanabhan in an email interview. "Enterprises should keep in mind four essential functionalities when approaching model management: dataset preparation, model training, model deployment and inferencing, and a continuous evaluation pipeline. By consolidating these functionalities, we've achieved consistency and efficiency in our model management processes."

Previously, the lack of a unified system led to fragmented efforts and operational chaos.

Rather than building the platform first during its initial exploration of GenAI, the company focused on identifying impactful use cases.

"As the technology matured and generative AI applications expanded across various domains, the need for a centralized system became apparent," says Padmanabhan. "Today, the AI platform is instrumental in managing the complexity of AI model development and deployment at scale."

Phoenix Children's Hospital has been managing machine learning models for some time because predictive models can drift.

"We've had a model that predicts malnutrition in patients [and] a no-show model predicting when people are not going to show up [for appointments]," says David Higginson, executive vice president and chief innovation officer at Phoenix Children's Hospital. "Especially the no-show model changes over time, so you have to be very, very conscious about, is this model still any good? Is it still predicting correctly? We've had to build a little bit of a governance process around that over the years before large language models, but I will tell you, like with large language models, it is a learning [experience], because different models are used for different use cases."

Meanwhile, LLM providers, including OpenAI and Google, are rapidly adding new models and turning off old ones, which means that something Phoenix Children's Hospital built a year ago might suddenly disappear from Azure.

"It's not only that the technical part of it is just keeping up with what's being added and what's being removed. There's also the bigger question of the large language models. If you're using it for ambient listening and you've been through a vetting process, and everybody's been using a certain model, and then tomorrow, there's a better model, people will want to use it," says Higginson. "We're finding there are a lot of questions, [such as], is this actually a better model for my use case? What's the expense of this model? Have we tested it?"
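The kind of governance Higginson describes for models like the no-show predictor can be pictured with a small, hypothetical sketch: periodically score recent predictions against actual outcomes and flag the model for review when accuracy slips too far below its deployment-time baseline. The numbers, threshold, and scenario below are illustrative only, not Phoenix Children's actual process.

```python
from dataclasses import dataclass

@dataclass
class DriftMonitor:
    """Minimal accuracy-based drift check for a deployed predictive model."""
    baseline_accuracy: float      # accuracy measured when the model was deployed
    tolerance: float = 0.05       # how much degradation is acceptable before flagging

    def check(self, predictions: list[int], actuals: list[int]) -> dict:
        correct = sum(p == a for p, a in zip(predictions, actuals))
        current = correct / len(actuals)
        drifted = current < self.baseline_accuracy - self.tolerance
        return {"current_accuracy": round(current, 3), "drifted": drifted}

if __name__ == "__main__":
    # Hypothetical week of no-show predictions vs. what actually happened.
    monitor = DriftMonitor(baseline_accuracy=0.82)
    preds   = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
    actuals = [0, 0, 1, 1, 0, 0, 1, 1, 0, 1]
    print(monitor.check(preds, actuals))   # flags the model for review if accuracy slipped
```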
How to Approach Model Management

eBay's Padmanabhan says any approach to model management will intrinsically establish a lifecycle, as with any other complex system. eBay already follows a structured lifecycle, encompassing stages from dataset preparation to evaluation.

"To complete the cycle, we also include model depreciation, where newer models replace existing ones, and older models are systematically phased out," says Padmanabhan. "This process follows semantic versioning to maintain clarity and consistency during transitions. Without such a lifecycle approach, managing models effectively becomes increasingly challenging as systems grow in complexity."

eBay's approach is iterative, shaped by constant feedback from developers, product use cases and the rapidly evolving AI landscape. This iterative process has allowed eBay to make steady progress.

"With each iteration of the AI platform, we locked in a step of value, which gave us momentum for the next step. By repeating this process relentlessly, we've been able to adapt to surprises — whether they were new constraints or emerging opportunities — while continuing to make progress," says eBay's Padmanabhan. "While this approach may not be the most efficient or optimized path to building an AI platform, it has proven highly effective for us. We accepted that some effort might be wasted, but we'll do it in a safe way that continuously unlocks more value."

To start, he recommends setting up a common gateway for all model API calls.

"This gateway helps you keep track of all the different use cases for AI models and gives you insights into traffic patterns, which are super useful for operations and SRE teams to ensure everything runs smoothly," says Padmanabhan. "It's also a big win for your InfoSec and compliance teams. With a centralized gateway, you can apply policies in one place and easily block any bad patterns, making security and compliance much simpler. After that, one can use the traffic data from the gateway to build a unified portal. This portal will let you manage a model's entire lifecycle, from deployment to phasing it out, making the whole process more organized and efficient as you scale."

Phoenix Children's Hospital's Higginson says it's wise to keep an eye on the industry because it's changing so fast.

"When a new model comes out, we try to think about it in terms of solving a problem, but we've stopped chasing the [latest] model as GPT-4 does most of what we need. I think what we've learned over time is don't chase the new model because we're not quite sure what it is or you're limited on how much you can use it in a day," says Higginson. "Now, we're focusing more on models that


AI’s Next Frontier Is Applications: How to Stay Ahead

Every technological revolution follows a pattern: an installation phase typified by eruption and frenzy, followed by a more stabilized period of deployment that sees steady growth to maturity. It's a concept studied and introduced by researcher and consultant Carlota Perez as early as 2002.

At the start of these technological revolutions, the focus is on building the infrastructure. It's the phase where early adopters reap huge rewards by developing the tools and platforms for future innovation. In the AI era, dominant players like OpenAI, Anthropic and Google laid the groundwork by creating powerful large language models (LLMs) and multimodal systems.

But AI's initial Big Bang is almost over. It's now entering a new phase. As the cost of AI infrastructure falls and access to these tools becomes more widespread, the competitive advantage will shift from owning infrastructure to applying new tech in novel ways.

This is not just theoretical. It's a real-life pattern I lived as co-founder of Vungle, a mobile advertising platform that emerged within the mobile app economy. In 2011, when we started Vungle, mobile app development was still in its nascent stage. It was anyone's game. We saw that current advertising models hadn't yet adapted to mobile-native experiences. So, we addressed the pain point through high-quality video ads designed specifically for mobile games and apps. By the time mobile advertising became ubiquitous, Vungle was already well-positioned, which ultimately led to our $780 million acquisition.

If — as philosophy and Perez's model suggest — history repeats itself, the next events in AI will play out in the pattern of the 2010 mobile app revolution. When Apple launched the App Store, early adopters like Instagram, Uber and WhatsApp saw an opportunity to rethink entire business models around mobile-first user behavior. They were among the first to recognize how smartphones could change user interaction, distribution and monetization. They were also the biggest winners of the mobile app boom.

What to Expect Given AI's Growth

Just as having a mobile app is now table stakes for most businesses, AI-powered features will soon be expected rather than optional. Integrating AI for efficiency will be commonplace, not a differentiator. The real winners will be those applying AI in ways that make entirely new experiences possible. This is why the application layer of AI will create the most long-term value.

And just as most mobile companies that saw massive success weren't infrastructure providers but companies that leveraged mobile effectively, the most successful AI companies won't be building new AI models. They'll be using AI to solve critical problems in industries like healthcare, finance and enterprise SaaS.

What IT Leaders Need to Do (and Fast!)

So, how can organizations stay ahead in this next phase of the AI revolution?

Start approaching AI as an enabler. AI's time as a mere feature or tool to automate existing processes is done. You should go from "How can AI automate our tasks?" to "How can AI drive new business models?" A report from McKinsey said that corporate AI use cases could yield long-term added productivity gains as high as $4.4 trillion. The same report shares three questions for leaders navigating this AI-centered future:

- Is your AI strategy ambitious enough?
- What does a successful AI adoption look like for your organization?
- What skills define an AI-native workforce?

Prioritize AI-native products. Businesses should adopt — or better yet, pioneer — AI-native solutions that fundamentally redefine user experiences and decision-making. Take Boardy, for example. As early as we are into AI's deployment era, it has already found a niche (professional networking) to disrupt (by using AI to facilitate smarter, more personalized introductions), automating what was once an entirely manual process.

Invest in talent with AI-first thinking. Now's the time to launch AI upskilling initiatives, such as AI certification programs or company-led AI boot camps. Hiring efforts should seek out AI-native product leaders, engineers and executives who will design products that incorporate AI from inception, rather than retrofitting it into legacy systems.

We are at a defining moment in the AI revolution. Access to foundational models is becoming democratized, and the real opportunity is shifting to how AI is applied in the real world. During the mobile app boom, our company succeeded because we saw how mobile-first thinking separated winners from also-rans in the early days of the App Store. The same will be true for AI. Companies that iterate and operate with AI's unique capabilities will emerge as the dominant players of the next decade.

There's no longer any doubt whether AI will change industries — it already is. The real question now: Who will be the AI-native companies that define this new era?


How AI is Transforming the Music Industry

The music industry is always evolving. Artists, trends, labels, and media platforms emerge and depart with startling regularity. Yet performers, recording firms, concert promoters, and other industry players may now be facing their biggest transformation challenge yet — artificial intelligence.

Even at this relatively early stage, there's no area of the business that's unaffected, says Daniel Abowd, president of music publishing company The Royalty Network. "On the creation side, AI-powered tools are being used to enhance and synthesize performance, editing, production, post-production, and post-release content," he explains in an email interview. "On the consumption side, AI is powering listener and playlisting algorithms and other tools that deliver listeners to content."

There's already been an incredible number of AI-supported use cases, says Andrew Sanchez, co-founder of Udio, which offers a generative AI model that produces music based on simple text prompts. He observes, via email, that The Beatles' "Now and Then," which was restored with the help of AI, was recently nominated for two Grammys in the Record of the Year and Best Rock Performance categories.

There's always been a distance between music creators and listeners, Sanchez states. He notes that AI is helping to reduce that gap by allowing a more direct dialogue between artists and their fans. "When artists release music that fans can then remix, extend, distort, or otherwise interact with through AI, it opens up an entirely new revenue stream for artists and means of engagement."

GenAI, in particular, opens a new way to explore musical creativity, inviting people who might otherwise never engage with music, says Mike Clem, CEO of musical equipment retailer Sweetwater. "It takes patience and grit to learn an instrument, and AI lowers the bar on the talent required to sound good," he explains in an online interview. As a result, there's now a new wave of music makers experimenting with AI, who then learn to play a "real" instrument.

AI-generated music tools are also helping artists accelerate their creative processes, allowing them to generate hits that match the pace of pop culture innovation, Sanchez says. He notes that comedian Willonius Hatcher, known as King Willonius, used Udio to create an AI-assisted song called "BBL Drizzy."

"The song made waves in pop culture when Metro Boomin sampled it," Sanchez says, "marking the first time an AI-generated song was sampled by a major producer."

A Generational Transformation

Unlike their predecessors, many modern musicians have no desire to appear live on stage or even record an album, Clem says. He believes there's now a transition from 'musicians' to 'creators,' fueled in part by AI. "It's about creating content that connects with their audiences to build and grow their following," he explains.

Music has evolved throughout history, thanks to artists who aren't afraid to push the status quo, Sanchez says. "The transformation in AI is really being led by artists who understand how AI-generated music tools can enhance their creative processes."

Some industry observers view AI as a potential replacement for human artists. But Sanchez disagrees. "In reality, we believe that human creativity will never be cut out of the process," he says.
"The songs that rise to the top have the confluence of the creative spark and the understanding of what people actually want to listen to."

Both Sides Now

AI-powered tools can enhance, empower, and inspire human creativity, Abowd says. They can simplify many creative tasks, such as editing out breaths from a vocal track. With consent, AI technology can also enhance or simulate the vocal sound of a singer who's no longer able to perform as they did years ago, as well as inspire songwriters with a foundational sound concept they can build upon.

On the downside, there's the possible existential threat posed by AI models that use unlicensed human-authored music to create new works that will compete in the same marketplace, potentially at a lower price point, Abowd says. "Reasonable people can disagree on the magnitude of that threat, but it's certainly a conversation on the tip of many people's tongues."

A Golden Opportunity

Sanchez believes that blending AI with art presents a golden opportunity to create a powerful, transformative creativity technology that will open new revenue options for artists. Fans will benefit, too. "It's clear from recent music tour successes … that consumers are interested in immersive experiences that put them at the helm of the storyline."

There's something very innately human and beautiful about expressing yourself musically, Clem observes. "AI may displace some commercial music production — for example, in commercials and video game soundtracks — but we're in no danger of computers replacing our desire to express ourselves creatively, or our desire to experience live music and all its attached emotions and nostalgia," he notes. "There's something about music that resonates in our souls in ways that we cannot explain."


Should CIOs Lead User Education Initiatives?

In November 2024, McKinsey's Alex Panas (global leader of industries) and Axel Karlsson (global leader of practices and growth platforms) wrote:

"The tech opportunities for today's organizations are alluring. Businesses are racing to capitalize on the proliferation of technologies like generative AI, and with more data at their fingertips than ever, the potential to transform the business through tech seems vast. But companies looking to make digital hay need to play their cards right, otherwise they risk falling into the same traps that befuddled business leaders of yore faced with earlier digital disruptions."

Panas and Karlsson cited digital missteps like not having a clear vision for a digital project or overestimating a project's ultimate economic return to the company. But there are two other "ground floor" caveats that are also requisite for digital project success: The new technology must be seamlessly integrated into company business processes, and the users must be trained to use it successfully.

The goal is total digital assimilation into the business. That digital assimilation is hard to attain if the business processes that use the technology don't work right, or if employees get confused by the new technology. At this point, the project sputters and the blame game starts, often with the burden placed on IT.

Why is this? Isn't it the job of HR or user departments to train employees and to redesign business processes so the business flows can work with new digital technology? And isn't it IT's job to stick to technical tasks, like developing, integrating, testing and deploying new digital technologies so that users can use them?

That's the general idea in theory, but all you have to do is walk up to a bank teller or a hardware store clerk who's struggling to put your transaction through to see otherwise. As they struggle, they will tell you, "It's the system."

How to Deal with the 'It's the System' Problem

I still find CIOs today who will consider a digital project complete and successful if delivered within budget and timeline. They wash their hands of it and don't consider it their responsibility if users later struggle with the system.

Or, maybe the new system renders an internal business process painful or unwieldy. Unfortunately, taking a position like this can cost a career!

Digital transformation expert Eric Kimberling talks about why CIOs get fired and says that CIOs can "become captivated by the technology itself, focusing on its bells and whistles and cool features," while ignoring "the organizational and human dynamics of a transformation."

He goes on to say, "CIOs sometimes assume that if technology works well from a technical perspective, it will automatically work for the business … However, this assumption may or may not hold true. The best CIOs I have worked with are actually those who possess limited technological knowledge but possess a deep understanding of operations and the business they work for. They recognize the value and importance of the human and organizational aspects of change."

CEOs and boards see this, too. That's why they expect their CIOs to be as strategically and operationally on top of the business as they are on the technology.
It's also incumbent on CIOs to assume more active roles in the human and business sides of digital project deployments if they want to avoid the "it's the system" blame syndrome.

The CIO Role in User Education

User education and business process design aren't the forte of most CIOs, nor of IT staff for that matter. How can CIOs and IT engage more substantially in digital projects to ensure that systems work well in business workflows and that knowledge transfer to employees has occurred?

Digital assimilation should be the goal of the CIO and the project team. If a digital system is to be assimilated into the business fabric of the company, it must meld well with business processes and be intuitively simple for workers to use and understand. Seamless business workflows and optimal ease of use should be ground-level goals of the user-IT project team, and it is the CIO who should push this idea. It is not enough to proclaim a project complete and successful just because it meets the timeline and comes in under budget.

Project tasks should reflect business process and ease-of-use goals. If a business process needs to be redesigned to accommodate new digital technology, tasks should be assigned for developing the workflow, doing the business workflow walkthrough, documenting it, testing it for all routine operations and foreseeable exceptions, and debugging it until it runs cleanly. If this sounds a bit like the design, develop, test-and-deploy sequence of traditional IT application development, that's because it is. Developing, testing and revising business process flows and usability should have equal billing with getting the software done.

New business processes using digital technology should be pilot tested. Before new software is deployed, it's tested in a system environment that emulates the environment the software will run with in production. The same should be done with new business processes that incorporate digital technology. The new tech and business process should be run in a pilot environment that emulates the "live" business environment where users will be operating. This is the only way you can really see the business issues and fix them for a smooth project cutover.

The CIO should collaborate with other C-levels. Launching new business processes and tech, and ensuring that employees have the skills to use them, is everybody's business. However, it's especially the business of the user-area executive and the CIO, who should be co-sponsoring the project and energizing their teams. When both parties and their staffs are aligned with the on-the-ground strategy of making sure the tech works, and that users know how to use that tech, they'll not only


3 Tech Deep Dives that CIOs Must Absolutely Make

When I was a junior programmer/analyst on my first IT job, I was working with a programmer-mentor named Bob who was teaching me to code subroutines. The day's conversation got around to the CIO, and Bob unexpectedly said, "That guy's nothing more than a pencil pusher. He doesn't have a clue about what we're doing!"

Bob's words stuck with me, especially after I became a CIO. I kept thinking about the side conversations that happen in cubicles. I determined that although it wasn't my business as a CIO to code, I would make it my business to stay atop technology details so I could actively interact with my technical staff members in a value-added way. I decided to also learn how to communicate about technology at a plain-English "top" level with other executives and board members.

Staying on top of technology at a detailed level isn't easy for CIOs, who have a broad range of responsibilities to fulfill. Meanwhile, it's crucial to be able to articulate complicated tech in plain English to superiors who lack a tech background, when your own strength might be in science and engineering but not in public speaking.

Nevertheless, it's absolutely essential for CIOs to do both, or they risk losing the respect of their superiors and their staff.

Here are three tech deep dives that CIOs must make in 2025 so they can meet the technology expectations of their superiors and staffs:

Security

Security worries corporate boards. It's a key IT responsibility, and as cyberattacks grow more sophisticated, preventing them is becoming more than just monitoring the perimeter of the network and conducting security audits. Using traditional security analysts who are generalized in their knowledge also might not suffice.

Enter technologies like network and system observability, which can probe beyond monitoring, drilling down to security threat root causes and interpretations of events based upon the relationships between data points and access points. You'll have to break down the concept of observability, and possibly the evolution of new tech roles in security, for the board and executives who will be asked to fund them.

On the IT staff side, implementing observability will be a topic of technical discussion. There may also be a need to discuss new security roles and positions. For instance, in sensitive industries like finance, law enforcement, healthcare or aerospace, you may need a cyberthreat hunter who seeks out malware that may be dormant and embedded in systems, only waiting to be activated. Or it may be time for a security forensics specialist who can get to the bottom of a breach to identify the perpetrator. These are positions that are more specialized than security analyst. You may have to develop the skillsets for cyberhunting or forensics internally or seek them outside. Adding these roles could force a realignment of duties on the IT security staff, and it will be important for you to work closely with your staff.

Generative and Agentive AI

Companies are flocking to invest in AI, with boards and CEOs wanting to know about it and the data science and IT departments wanting direction on it.

Generative AI is the most common AI used, but how many boards know what GenAI is and how it works? Meanwhile, agentive AI, in which AI not only makes decisions but acts upon them, is coming into view.

Both forms of AI can dramatically impact business strategies, customer relationships, business processes and employee headcount.

CEOs and boards need to know about these forms of AI, what they are capable of doing, where the risks are, and what the impact could be. They will come to the CIO for information. They don't need to know about every nut and bolt, but they do need enough working knowledge so they can understand the technology at a conceptual business level.

On the IT and data science staff side, generative AI engines must operate on quality data from a variety of external and internal feeds that must be vetted. In some cases, ETL (extract-transform-load) software must be used to clean and normalize the data. The technical approach to doing this needs to be discussed and implemented. It is a plus for everyone if the CIO partakes in some of these meetings.
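That data-vetting step can be made concrete with a small, hypothetical sketch of an extract-transform-load pass: pull records from two feeds, drop rows that fail validation, normalize the fields, and load only the curated result into whatever store feeds the generative AI engine. The feeds, field names, and rules below are invented for illustration; real pipelines would use dedicated ETL tooling.

```python
import csv
import io

# Two hypothetical feeds: one internal export, one external partner file.
INTERNAL_FEED = "id,customer,amount\n1, Acme Corp ,100.0\n2,Beta LLC,not_a_number\n"
EXTERNAL_FEED = "id,customer,amount\n3,GAMMA INC,250.5\n"

def extract(raw: str) -> list[dict]:
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])                 # vet numeric fields
        except ValueError:
            continue                                      # drop rows that fail validation
        clean.append({
            "id": int(row["id"]),
            "customer": row["customer"].strip().title(),  # normalize names
            "amount": round(amount, 2),
        })
    return clean

def load(rows: list[dict], store: list) -> None:
    store.extend(rows)   # stand-in for a warehouse or vector store

if __name__ == "__main__":
    curated: list[dict] = []
    for feed in (INTERNAL_FEED, EXTERNAL_FEED):
        load(transform(extract(feed)), curated)
    print(curated)   # only vetted, normalized rows reach the AI pipeline
```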
With agentive AI, there should be discussions about technology readiness and ethical guardrails as to just how much autonomous work AI should be allowed to perform on its own.

For all AI, security and refresh cycles for data need to be defined and executed, and the algorithms operating on the data must be trialed and tuned.

Collectively, these activities require project approval and budget allotments, so it is in the staff's and CIO's best interests that they be discussed technically, so that the nature of the work and its challenges and opportunities are clearly understood by all.

NaaS

We've heard of IaaS (infrastructure as a service), SaaS (software as a service) and PaaS (platform as a service), and now there is NaaS (network as a service). What they have in common is that they are all cloud services. The intent is to shift IT functions to the cloud so you have less direct responsibility for managing them in-house.

Boards and C-level executives are attracted to cloud services because they perceive the cloud as being less expensive, easier to manage, and a way to avoid investing in technology that will be obsolete three years later. But now there is NaaS, which most of them haven't heard about.

Just what is NaaS (network outsourcing), and what does it do for the company? They will ask the CIO to explain it.

On the IT side, if you're discussing NaaS, there are decisions to be made as to how much (if any) of the network you're willing to outsource. Also, if you do outsource, what will the impact be on cost, management, security, bandwidth, application integration, and service levels? The discussion can get into the weeds of the technology, and the CIO should be prepared to go there.

The Quandary for


AI Hallucinations Can Prove Costly

Large language models (LLMs) and generative AI are fundamentally changing the way businesses operate — and how they manage and use information. They're ushering in efficiency gains and qualitative improvements that would have been unimaginable only a few years ago.

But all this progress comes with a caveat. Generative AI models sometimes hallucinate. They fabricate facts, deliver inaccurate assertions and misrepresent reality. The resulting errors can lead to flawed assessments, poor decision-making, automation errors and ill will among partners, customers and employees.

"Large language models are fundamentally pattern recognition and pattern generation engines," points out Van L. Baker, research vice president at Gartner. "They have zero understanding of the content they produce."

Adds Mark Blankenship, director of risk at Willis A&E: "Nobody is going to establish guardrails for you. It's critical that humans verify content from an AI system. A lack of oversight can lead to breakdowns with real-world repercussions."

False Promises

Already, 92% of Fortune 500 companies use ChatGPT. As GenAI tools become embedded across business operations — from chatbots and research tools to content generation engines — the risks associated with the technology multiply.

"There are several reasons why hallucinations occur, including mathematical errors, outdated knowledge or training data, and an inability for models to reason symbolically," explains Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania. For instance, a model might treat satirical content as factual or misinterpret a word that can have different contexts.

Regardless of the root cause, AI hallucinations can lead to financial harm, legal problems, regulatory sanctions, and damage to trust and reputation that ripples out to partners and customers.

In 2023, a New York City lawyer using ChatGPT filed a lawsuit that contained egregious errors, including fabricated legal citations and cases. The judge later sanctioned the attorney and imposed a $5,000 fine. In 2024, Air Canada lost a lawsuit when it failed to honor the price its chatbot quoted to a customer. The case resulted in minor damages and bad publicity.

At the center of the problem is the fact that LLMs and GenAI models are autoregressive, meaning they arrange words and pixels logically with no inherent understanding of what they are creating. "AI hallucinations, most associated with GenAI, differ from traditional software bugs and human errors because they generate false yet plausible information rather than failing in predictable ways," says Jenn Kosar, US AI assurance leader at PwC.

The problem can be especially glaring in widely used public models like ChatGPT, Gemini and Copilot. "The largest models have been trained on publicly available text from the Internet," Baker says. As a result, some of the information ingested into the model is incorrect or biased. "The errors become numeric arrays that represent words in the vector database, and the model pulls words that seem to make sense in the specific context."

Internal LLM models are at risk of hallucinations as well. "AI-generated errors in trading models or risk assessments can lead to misinterpretation of market trends, inaccurate predictions, inefficient resource allocation or failing to account for rare but impactful events," Kosar explains. These errors can disrupt inventory forecasting and demand planning by producing unrealistic predictions, misinterpreting trends, or generating false supply constraints, she notes.

Smarter AI

Although there's no simple fix for AI hallucinations, experts say that business and IT leaders can take steps to keep the risks in check. "The way to avoid problems is to implement safeguards surrounding things like model validation, real-time monitoring, human oversight and stress testing for anomalies," Kosar says.

Training models with only relevant and accurate data is crucial. In some cases, it's wise to plug in only domain-specific data and construct a more specialized GenAI system, Kosar says. Sometimes a small language model (SLM) can pay dividends. For example, "AI that's fine-tuned with tax policies and company data will handle a wide range of tax-related questions on your organization more accurately," she explains.

Identifying vulnerable situations is also paramount. This includes areas where AI is more likely to trigger problems or fail outright. Kosar suggests reviewing and analyzing processes and workflows that intersect with AI. For instance, "A customer service chatbot might deliver incorrect answers if someone asks about technical details of a product that was not part of its training data. Recognizing these weak spots helps prevent hallucinations," she says.

Specific guardrails are also essential, Baker says. This includes establishing rules and limitations for AI systems and conducting audits using AI-augmented testing tools. It also centers on fact-checking and failsafe mechanisms such as retrieval augmented generation (RAG), which combs the Internet or trusted databases for additional information. Including humans in the loop and providing citations that verify the accuracy of a statement or claim can also help.
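Retrieval augmented generation is easier to see in miniature. The hypothetical sketch below retrieves the most relevant passage from a small trusted store, hands it to a stubbed generator that is meant to answer only from that context, and returns the citation alongside the answer so a human can verify the claim. The document store, the keyword-overlap retrieval, and the generate_with_context stub are stand-ins; a real system would use embeddings, a vector index, and an actual LLM call.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source: str
    text: str

# A tiny stand-in for a trusted internal knowledge base.
TRUSTED_STORE = [
    Passage("refund-policy.md", "Customers may request a refund within 30 days of purchase."),
    Passage("shipping-faq.md", "Standard shipping takes 5 to 7 business days."),
]

def retrieve(query: str) -> Passage:
    # Crude keyword overlap; production systems use embeddings and a vector index.
    def score(p: Passage) -> int:
        return len(set(query.lower().split()) & set(p.text.lower().split()))
    return max(TRUSTED_STORE, key=score)

def generate_with_context(question: str, context: Passage) -> str:
    # Stub for an LLM call instructed to answer only from the retrieved context.
    return f"Based on {context.source}: {context.text}"

def answer(question: str) -> dict:
    passage = retrieve(question)
    return {
        "answer": generate_with_context(question, passage),
        "citation": passage.source,   # the citation lets a human verify the claim
    }

if __name__ == "__main__":
    print(answer("How many days do customers have to request a refund?"))
```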
Finally, users must understand the limits of AI, and an organization must set expectations accordingly. "Teaching people how to refine their prompts can help them get better results and avoid some hallucination risks," Kosar explains. In addition, she suggests that organizations include feedback tools so that users can flag mistakes and unusual AI responses. This information can help teams improve an AI model as well as the delivery mechanism, such as a chatbot.

Truth and Consequences

Equally important is tracking the rapidly evolving LLM and GenAI spaces and understanding performance results across different models. At present, nearly two dozen major LLMs exist, including ChatGPT, Gemini, Copilot, LLaMA, Claude, Mistral, Grok, and DeepSeek. Hundreds of smaller niche programs have also flooded the app marketplace. Regardless of the approach an organization takes, "In early stages of adoption, greater human oversight may make sense while teams are upskilling and understanding risks," Kosar says.

Fortunately, organizations are becoming savvier about how and where they use AI, and many are constructing more robust frameworks that reduce the frequency and severity of hallucinations. At the same time, vendor software and open-source projects are maturing. Concludes


How to Turn Developer Team Friction Into a Positive Force

Teams occasionally generate a certain amount of internal friction, and development staffs are no exception. Yet, when managed properly, team friction can actually be turned into a motivating force.

Developer team friction can become a positive driving force when it encourages diverse perspectives, promotes critical thinking, fosters innovation, and improves communication skills, observes JB McGinnis, a principal with Deloitte Consulting. "Constructive disagreements can lead to more robust solutions, continuous improvement, and stronger team cohesion," he explains in an email interview. "By tapping into and exploring this friction positively, teams can enhance performance and drive innovation."

Friction can be a fantastic driver for positive change, states Andy Miears, a director with technology research and advisory firm ISG. "When members of a development team are at odds with each other, it often indicates some degree of inefficiency, lack of work product quality, a poor working environment, or unclear roles and responsibilities," he says via email. "Using friction as a compelling way to identify, prioritize, and address pain points is a healthy behavior for any high-performing team."

Multiple Benefits

Developer team friction, while often seen as a negative trait, can actually become a positive force under certain conditions, McGinnis says. "Friction can enhance problem-solving abilities by highlighting weaknesses in current processes or solutions," he explains. "It prompts the team to address these issues, thereby improving their overall problem-solving skills."

Team friction often occurs when a developer passionately advocates a new approach or solution. That's generally a good thing, notes Stew Beck, director of engineering at work product management solutions provider iManage. "When team members have conflicting ideas, you naturally end up with some friction — it's something you want to have on every team," he says via email. If team members aren't advocating their own ideas, there's a risk they're not fully engaged in the problem. "Without friction, teams could be missing out on a way to make the product better."

Allowing team friction in a controlled and safe way helps everyone. "Team members can challenge ideas, ways of accomplishing a task, encourage better results, and hold each other accountable to shared objectives, standards and processes," Miears says.

Team seniority and status shouldn't matter. "The best ideas don't always come from the most senior person in the room," Beck observes. Yet failing to encourage open discussions, regardless of rank, risks overlooking something important that could cost the team, and the entire enterprise, later.

Channeling Friction

To channel friction into positive results, the team leader should encourage balanced, constructive feedback. "Additionally, the leader should commit to creating an environment that's open to a wide set of opinions, where teammates are encouraged to share their thoughts," McGinnis advises.

The team leader should schedule regular meetings with their development team to identify what's currently working and, more importantly, what may be failing. "In a mature Agile development framework, retrospectives should take place at the end of every sprint," Miears recommends. Larger retrospectives, meanwhile, should be scheduled at the end of releases or program increments.
"These sessions should be used to create new, better, or more efficient value for users, stakeholders and the overall team."

Maintaining Control

Team leaders should set clear expectations and goals for all members. "These objectives should be defined for both the team as a whole and for individual members," McGinnis says. Leading by example is also critical. "As a leader, you are a reflection of your team, so demonstrating the handling of conflicts with a professional demeanor, while showing empathy, goes a long way."

Friction can easily spiral out of control when retrospectives and feedback focus on individuals instead of addressing issues and problems jointly as a team. "Staying solution-oriented and helping each other achieve collective success for the sake of the team should always be the No. 1 priority," Miears says. "Make it a safe space."

As a leader, it's important to empower every team member to speak up, Beck advises. Each team member has a different and unique perspective. "For instance, you could have one brilliant engineer who rarely speaks up, but when they do it's important that people listen," he says. "At other times, you may have an outspoken member on your team who will speak on every issue and argue for their point, regardless of the situation." Staying in tune with these differences and quirks helps to foster a healthy discussion environment.

Parting Thought

Team building is a great way to ensure a safe team when friction arises, Miears says. "Celebrate successes and individual accomplishments together," he recommends. "Do the work to build a safe and inclusive culture in which the team can thrive."
