InformationWeek

How Cloud-Focused Upskilling Drives Business Growth

The cloud skills gap is one of the most significant challenges facing the tech industry today. IT leaders are grappling with a shortage of cloud-competent professionals, which is slowing innovation and preventing organizations from fully leveraging cloud technologies. To remain competitive in a rapidly evolving landscape, IT leaders must address this gap by prioritizing upskilling initiatives that equip their teams with the expertise needed to meet cloud demands.

Recent data shows that roughly 90% of tech workers under the age of 25 are considering new career opportunities, underscoring the need for skilled cloud professionals. Without strategic intervention, organizations risk missing out on the talent and innovation required to thrive in today's cloud-driven world.

The Case for Cloud-Focused Upskilling

By providing both new and current employees with the skills to take advantage of emerging IT practices, upskilling can help bridge the cloud skills gap. Importantly, prioritizing upskilling helps IT leaders meet cloud demands while supporting the growth of the next generation of tech workers.

Despite the clear need, fewer than 20% of company leaders report progress on upskilling initiatives. Many struggle due to limited resources, insufficient buy-in from leadership, and low employee motivation. Combined with the rapid evolution of cloud technologies, this creates a bottleneck in cloud deployments, making it even more difficult to meet demand.

The solution lies in making upskilling a higher priority. IT leaders must advocate for the value of upskilling, not just as a means of closing the skills gap, but as a driver of future growth and innovation.
Champion Upskilling to Strengthen Your Organization

Building a successful upskilling program takes more than providing training; it requires a cultural shift within the organization that embraces continuous learning and development. Here's how to get started.

1. Educate your organization on upskilling benefits

To gain internal support, it's crucial to highlight the long-term benefits of upskilling to employees and leadership alike. Upskilling can lead to improved efficiency, stronger data security, and more seamless cloud integration, advantages that can keep your organization ahead of the curve in the competitive tech landscape. By showcasing how upskilling empowers teams to leverage cloud technology more effectively, you can make a compelling case for prioritizing training initiatives.

2. Secure C-suite buy-in for cloud upskilling programs

The key to any successful initiative is leadership buy-in. Demonstrating the potential return on investment of upskilling initiatives can help secure the support you need from the C-suite. Focus on how upskilling can lower operational costs, boost productivity, and create a more agile workforce capable of driving innovation. Citing success stories from organizations that have benefited from upskilling can also provide valuable examples of its impact.

3. Focus on Gen Z talent to drive cloud innovation

Gen Z professionals bring a unique advantage to cloud upskilling initiatives: they are the first generation to have grown up with digital technologies, and they are naturally more adaptable to new tools and platforms. Engaging younger workers through upskilling programs not only helps close the cloud skills gap but also ensures your organization is nurturing the next generation of tech leaders.

4. Develop a supportive work culture for upskilling

Gen Z professionals are drawn to organizations that prioritize growth and development. Offering clear pathways for cloud-related upskilling not only enhances employee engagement but also ensures your organization retains the talent needed to stay competitive in a cloud-first world.

Embrace a Cloud-First Future With Upskilling

Closing the cloud skills gap is no small task, but with a clear focus on upskilling, IT leaders can equip their teams with the expertise needed to thrive in a cloud-first world. By prioritizing training initiatives and fostering a supportive work environment, organizations can bridge the gap, drive innovation, and ensure sustainable growth in the ever-evolving tech landscape.


The Next Generation Will Be the Driving Force Behind AI Regulation

The wide-scale introduction of artificial intelligence sent shockwaves through every industry as it disrupted the way we live, work, and even learn. In education specifically, it has caused traditional educators to experience a "Gutenberg printing press shock," as many of their skills have essentially become obsolete overnight. AI's quick rise has raised fears of risks such as plagiarism and lessened student engagement, causing many learning institutions to restrict, or in some cases even ban, the technology from classrooms.

While I acknowledge and understand the potential risks associated with AI, I believe there is far more opportunity for the good of humanity than harm. If harnessed properly and responsibly, this groundbreaking technology has the potential to support and augment students' learning exponentially, much as the printed book, the calculator, and the computer did for previous generations. So the question is not if we should harness AI, but how.

It's clear the technology needs guardrails. Many groups, from government officials and business leaders to celebrities like Tom Hanks, have joined the debate on AI regulation. Yet world leaders have been slow to act, and efforts have been restricted to national and regional spheres.

Why the reluctance and the emphasis on local perspectives? Even during the peak of the Cold War, opposing factions aimed for international consensus, especially on ethical norms or "red lines" related to nuclear weapon usage. Some theorize that this hesitancy toward AI regulation stems largely from leaders' insufficient grasp of the technology and its ramifications. Why not engage the generation that seamlessly integrates AI into its daily routines? Undoubtedly, they not only have viewpoints on the matter but can also provide a more expansive and insightful perspective on the ethics of the technology.
A proactive group of international students aged 13 to 18 from Institut auf dem Rosenberg decided to take the initiative and developed a 13-point charter to govern AI, calling for world leaders to promptly regulate AI development and usage through an international treaty and regulatory agency. A selection of the students' proposed guardrails, offered as a seed for global accord, includes:

Control input and output. All organizations, whether private or state, engaged in designing, engineering, and/or distributing AI products shall be held unequivocally accountable for the information generated by AI systems. These organizations must establish specialized departments combining human oversight and automated, machine-learning-based technologies to guarantee the responsible use of AI. An external, impartial global agency shall oversee and ensure strict adherence to proper AI usage, conferring AI-Safe-Use approval badges exclusively upon organizations that diligently comply with AI standards.

Transparent tracing of sources. Complete transparency in acknowledging the entities responsible for AI processes is imperative. All AI-processed information must be transparently traceable to its origins, specifically attributing it to the entities conducting the information processing using AI. Users shall enjoy unrestricted access to all original input data employed by AI systems. Violations of source-tracing obligations will be met with resolute legal enforcement.

Regulation of deepfakes. Mandatory watermarks or detectable patterns are recommended for all deepfake or artificially created content, alongside increased investment in deepfake detection technologies. Unethical deepfake actions, including defamation and identity theft, must be unequivocally prosecutable offenses. AI systems shall maintain accessible interaction histories, with AI software manufacturers legally accountable for verifying the origin of disseminated information.

Prevention of monopolies and duopolies. In the pursuit of equitable AI development and access, signatory parties pledge to actively champion diversity and counteract monopolies, duopolies, or oligopolies within the AI creative sphere. This commitment aims to foster innovation, fairness, and global collaboration.

Support for cultural and academic endeavors. AI programs must be designed exclusively to support cultural and academic creators, refraining from autonomous generation of cultural and academic content.

These excerpts are just a glimpse into the thorough work of our students. The question of ethics in AI has the potential to bring together a polarized world for the greater good of all mankind; it is an opportunity we should give the next generation. For a detailed insight into the Rosenberg AI Charter and this significant project, please visit here.


Jumping the IT Talent Gap: Cyber, Cloud, and Software Devs

While hype over artificial intelligence may be spurring organizations to hire professionals with matching skills to maintain a competitive edge, many businesses have more fundamental IT talent gaps.

An April survey of 1,400 executives and IT professionals found skills gaps throughout cybersecurity, cloud, and software development, along with interest in skill development for these areas in 2025.

In fact, understanding tech skills gaps is something many organizations struggle with: just a third of executives surveyed said they completely understand their organization's skills gaps, and 68% of technologists say that business leaders aren't aware of their IT skills gaps.

Chris Herbert, chief content officer at Pluralsight, says that to combat this lack of knowledge, business leaders need a data-driven approach to uncovering skills gaps.

"This can be in the form of tech skills assessments, which can benchmark where technologists fall on a sliding scale of expertise in a given tech skill," he says.

He adds that it can be useful to survey tech teams internally on areas where they feel they need to deepen their skills. Creating a culture of learning always starts at the executive level, he says.

"Business leaders need to be vigilant about the areas where their tech teams are falling behind and set up systems and initiatives that will help enable direct managers to assess their team's skills on a consistent basis," Herbert says.

Anant Adya, executive vice president and head of Americas delivery at Infosys, says that businesses are moving away from hiring or training workers based on expertise in a single technology and toward cultivating talent proficiency across many disciplines.

"Building diverse talent pipelines and offering opportunities to build both hard technical skills and soft communication skills are effective strategies," he says.
Adya adds there is great value in investing in "data readiness" and fostering a culture of responsible experimentation as part of upskilling.

Continued Demand for IT Pros

According to CompTIA's State of the Tech Workforce 2024 report, tech occupation employment over the next decade is expected to grow at about twice the rate of overall employment across the economy. Projected growth rates for several tech occupations are well above the national rate, most notably for data scientists, data analysts, and cybersecurity analysts and engineers.

Tim Herbert, chief research officer at CompTIA, explains that AI is "undoubtedly" the wildcard factor on the minds of many employers and workers.

"While some of the AI hype has moderated, there continues to be plenty of experimentation and anticipation for what comes next," he says via email.

According to CompTIA analysis of Lightcast data, AI job postings have accounted for 10-12% of all tech job postings in recent months.

"Every industry is hiring technology professionals," he says. "There really aren't high-tech or low-tech industries anymore."

He lists infrastructure, software, cybersecurity, and data as the four big buckets, with help desk and support in another category.

"Increasingly, the problems become less technology problems and more related to the industry — compliance and privacy, for example," he says. "You have to know the drivers and priorities within the industry."

He notes that there are also more and more jobs that require technology professionals to interact with other teams and to understand more about the business and the problems it is trying to solve.

Power of Programmatic Upskilling

The Pluralsight survey indicated that upskilling employees is proving more cost-effective and timelier than hiring new talent.
While hiring can cost over $23,000 and take up to 10 weeks, upskilling costs around $5,000 per employee and can be implemented faster.

Despite these advantages, time constraints remain a major barrier to successful upskilling programs, as organizations have reported over the past three years.

Pluralsight's Herbert says it is crucial to make upskilling within the organization "programmatic," which involves mapping technologists' skill-building journeys to business needs.

"This is where developing a culture of learning comes in," he says. "If upskilling is integrated as a core business competency, tech teams will have the nimbleness needed to switch focus from one skill area to the next as business needs ebb and flow."

Steve Watt, CIO at Hyland, says that CIOs should also provide robust upskilling opportunities to show IT professionals they're dedicated to their growth and career interests.

"The job market for IT professionals is being swayed by the demands of AI, but security and cloud professionals have invaluable skills that are still very much in need," he explains.

By offering opportunities for IT professionals to sharpen their core skills in their roles, companies strengthen their own business while showing talent they're committed to their long-term success and interests.

Adya says that training must balance skills required for basic infrastructure with those needed for swiftly emerging technologies.

"For cloud in particular, programs should be self-paced, in collaboration with academic institutions, specialized for local hiring, and grounded in digital reskilling," he says.

Programs should additionally incorporate hands-on experiences and input from cross-functional teams.

"Companies should additionally create incentives, necessary infrastructure, and support to properly add employees to the process," he says.
Top IT Talent Desires Flexibility

Watt says remote-work flexibility continues to be a key differentiator as well.

"Because IT and security are ubiquitous across every industry and skills transfer almost regardless of vertical markets, these workers have a lot of options when picking a company or industry to work in," he says.

Being inflexible, especially with IT and security staff when it comes to remote work, can significantly hinder the ability to attract and retain top talent.

"I recently spoke with another CIO who commented that after a rollout of a mandatory three-day-in-office policy, they lost 20% of their IT staff in about four months," he says. "They rolled that policy back specifically for IT very quickly."


Digital Resilience: Merging IT Growth with Environmental Responsibility

Building Resilient IT Infrastructures for Sustainable Digital Growth

The white paper "Digital Resilience and IT Growth" by Chatsworth Products explores the critical aspects of building resilient IT infrastructures in the face of rapid digital transformation. It delves into strategies for enhancing data center reliability, scalability, and efficiency to support growing digital demands. The paper emphasizes the importance of integrating advanced technologies and best practices to mitigate risks, improve operational continuity, and ensure robust performance. By adopting these strategies, organizations can effectively manage IT growth, protect vital data, and maintain a competitive edge in an increasingly digital landscape. Download the white paper to learn more.

Offered free by Chatsworth Products, Inc.


Gartner Keynote Bites into AI ‘Sandwich’

ORLANDO, Fla. — During their opening keynote Monday at the Gartner IT Symposium/Xpo 2024, analysts Mary Mesaglio and Hung LeHong described the key ingredients of a successful AI stack and how businesses and organizations should pace themselves.

"Because of the relentless innovation happening in the tech vendor race, CIOs feel like they are always living the hype, while the reality of their AI outcomes race — how tough it is to get value — makes it feel like they are also in the trough," Mesaglio said.

The conference was expected to attract more than 8,000 CIOs and senior IT leaders. With an arms race underway to adopt AI and GenAI strategies, the analysts tried to add clarity for business leaders who may have varying degrees of need. They also talked about the different races going on to adopt GenAI: it's important, they said, to separate the vendor race from an organization's own race to implement AI technologies.

CIOs are bearing most of the burden of rapid AI innovation expectations: a Gartner survey showed 57% of CIOs were tasked with creating an AI strategy. "And even with all this GenAI fatigue of the last year, you're still under pressure from the CEOs to execute," Mesaglio said. That pressure can cause leaders to lose sight of the AI needs of their specific business and desired outcomes.

AI-Steady vs. AI-Accelerated

LeHong added: "However, CIOs can set the pace in their outcomes race. If you have modest AI ambitions, in an industry that isn't being remastered by AI yet, you can afford to go at a more measured pace. This is an AI-steady pace. For those organizations with bigger AI ambitions, or in an industry that's being reinvented by AI, the pace will be faster. This is an AI-accelerated pace."

No matter which path a business chooses, the goal should be the same: delivering value and outcomes, LeHong said.
But generating business value has been difficult for many businesses. A 2024 Gartner survey of over 5,000 digital workers in the US, UK, India, Australia, and China found employees saved an average of 3.6 hours per week by using GenAI. While those savings can help cut costs, the gains vary from business to business.

"Here's the real challenge with AI productivity," said LeHong. "Productivity gains from GenAI are not equally distributed. Gains vary by employee, not just because of their personal interest and levels of adoption, but according to complexity of job and level of experience."

Hung LeHong (photo by Shane Snider)

Building an AI 'Sandwich'

The analysts shared a visualization of a successful AI strategy, or "stack," that looked like a sandwich, with structured and unstructured data and all the types of AI in use making up the top and bottom slices of bread. The middle of the sandwich consists of an organization's trust, risk, and security management (TRiSM) technologies.

"As CIO, your job is to design a tech sandwich that can handle the messiness of AI, but still keeps you open to new opportunities," said Mesaglio. "AI-steady organizations (10 AI initiatives or fewer) will govern their tech sandwiches using human teams and committees. AI-accelerated organizations will add TRiSM technologies — a set of technologies designed to create trust, monitor risk, and manage security for safe AI at scale."

Being Mindful of AI's Human Impact

The analysts noted that employees' feelings about AI can range from positive to negative, with some employees feeling threatened or resentful. Those negative feelings can affect work performance. In a Gartner survey, only 20% of CIOs said they are being proactive about protecting employees' well-being from the potential negative impacts of GenAI.
"Most enterprises aren't curious enough about how AI makes their employees feel. This matters because AI can lead to all sorts of unintended behavioral outcomes," said Mesaglio. "The critical point is that if you use change management to manage this, be intentional about who owns which behavioral outcomes. Organizations must manage behavioral outcomes with the same rigor as technology and business outcomes."


Nvidia’s Jensen Huang on Leadership, ‘Tokenization,’ and GenAI Workforce Impact

ORLANDO, Fla. — Wearing his trademark black leather jacket, Nvidia CEO Jensen Huang on Tuesday delivered a highly anticipated keynote at Gartner's IT Symposium/Xpo, where he talked about a range of leadership topics.

Nvidia has experienced meteoric success with its graphics processing units (GPUs). Once thought of mainly as processors for graphics-intensive workloads such as video games, the high-performance units turned out to also be efficient engines for large language models (LLMs). The near-overnight success of OpenAI's ChatGPT after its launch two years ago created an arms race among companies to build GenAI platforms. Nvidia has profited handsomely from that race, which launched it to the top of the world's most valuable companies. So CIOs were eager to hear from Huang about finding similar success.

Hundreds of attendees lined up more than an hour before the doors opened for Huang's keynote. Huang sat for an interview with Daryl Plummer, a Gartner analyst and vice president. "Nvidia showed us a different path: from graphics chips to data centers to large-scale generative AI, they released computing power that took AI from a game to the world-changing phenomenon it is today," Plummer said before Huang came onto the stage.

Fielding a question from Plummer about his personal style, which consists of the same publicly worn all-black attire, and whether that simplicity leaves room for his leadership vision, Huang said his leadership has more to do with leaning into the future than with style. "When you see something impactful, something surprising and unexpected, you've got to ask yourself, 'What does this mean and what's the impact long term?' … Now the next part is that if you deeply believe something, are you going to do something about it? The best technique is to get started."

Living in the Future and 'Tokenization'

Huang said CIOs should embrace a future-forward mentality that allows them to navigate a quickly changing technology landscape. "It's easier to live in the future than it is to live in the past," he said to applause. "Living in the past is more painful." Future thinking is "hopes and it's dreams, it's belief… the question is, once you manifest that future in your mind, are you going to go do something about it?"

Nvidia certainly did something about it. The company's quick transformation into a critical supplier of AI-enabling processors has paid off. In its most recent financial report, the company reported $30 billion in revenue in the second quarter of 2024, marking a 15% increase from the previous quarter and a 122% increase year-over-year.

He talked about how the industry has changed rapidly, from one focused on hardware and software to one focused on invisible "tokens" that translate visual and linguistic data into usable commodities. "This industry never existed before, and this industry is going to have factories — these buildings with computers inside — and these computers are incredibly good at transforming the raw material, which is data, into this new invisible thing that is monetized by millions of tokens per hour… floating point numbers that could be reconstituted into language, reconstituted into images and videos."

Eventually, Huang said, "we'll tokenize robotic articulation, we'll be able to tokenize proteins and chemicals… What we are witnessing… this is the beginning of a new industrial revolution."

Digital Workers vs. Human Workers

While many have cited concerns that the rapid development of artificial intelligence could replace a large portion of the workforce, Huang offers a more optimistic vision.
Digital workers, working alongside human workers, will increase productivity and create more opportunities for everyone, he said. Agentic AI will have human employees interacting with a digital workforce that is not necessarily there to replace them, but to enhance productivity and growth for the whole company.

"And all of these digital employees … I'm prompting them in the same way I'm prompting biological employees. They're going to find each other, they're going to work together as teams, and we're going to give them issues they can accomplish together."

He added, "We need to create more AI jobs first so we can create more human jobs… If you created more AI jobs right now, you will be a more productive company. You would generate more earnings, which will allow you to hire more people."
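Huang's "tokens" are the same units language models already operate on: text is mapped to integer IDs and can be reconstituted losslessly. The sketch below is a toy byte-level illustration of that round trip, an assumption-laden simplification rather than Nvidia's or any production tokenizer (which use learned vocabularies such as byte-pair encoding):

```python
# Toy byte-level "tokenizer": text -> integer token IDs -> text.
# Production models use learned subword vocabularies, but the
# lossless round trip shown here is the same basic idea.

def tokenize(text: str) -> list[int]:
    """Map text to a sequence of integer token IDs (here: raw UTF-8 bytes)."""
    return list(text.encode("utf-8"))

def detokenize(tokens: list[int]) -> str:
    """Reconstitute the original text from its token IDs."""
    return bytes(tokens).decode("utf-8")

ids = tokenize("new industrial revolution")
assert detokenize(ids) == "new industrial revolution"  # lossless round trip
print(ids[:5])  # → [110, 101, 119, 32, 105]
```

The "reconstituted into images and videos" part works the same way in principle: pixels or audio samples become token sequences, and a model that predicts tokens can generate any medium that has been tokenized.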


3 Ways the CTO Can Fortify the Organization in the Age of GenAI

Few technologies have captured the public imagination quite like generative AI. It seems that with every passing day, new AI-based chatbots, extensions, and apps are released to eager users around the world.

According to a recent Gartner survey of IT leaders, 55% of organizations are either piloting or in production with generative AI. That's an impressive metric by any measure, not least considering that the phrase "generative AI" was barely part of our collective lexicon just 12 months ago.

However, despite this technology's promise to accelerate workforce productivity and efficiency, it has also left a minefield of potential risks and liabilities in its wake. An August survey by BlackBerry found that 75% of organizations worldwide were considering or implementing bans on ChatGPT and other generative AI applications in the workplace, with the vast majority of those (67%) citing the risk to data security and privacy.

Such data security issues arise because user input and interactions are the fuel that public AI platforms rely on for continuous learning and improvement. Consequently, if a user shares confidential company data with a chatbot (think: product roadmaps or customer information), that information can become integrated into its training model, which the chatbot might then reveal to subsequent users. This challenge isn't limited to public AI platforms: even a company's internal LLM trained on its own proprietary datasets might inadvertently make sensitive information accessible to employees who are not authorized to view it.
To better evaluate and mitigate these risks, most enterprises that have begun to test the generative AI waters have primarily leaned on two senior roles for implementation: the CISO, who is ultimately responsible for securing the company's sensitive data, and the general counsel, who oversees the organization's governance, risk, and compliance function. However, as organizations begin to train AI models on their own data, they would be remiss not to include another essential role in their strategic deliberations: the CTO.

Data Security and the CTO

While the role of the CTO varies widely depending on the organization, almost every CTO is responsible for building the technology stack and defining the policies that dictate how that infrastructure is best utilized. This gives the CTO a unique vantage point from which to assess how AI initiatives might best align with strategic objectives.

Their strategic insights become all the more important as organizations that are hesitant to go all-in on public AI projects instead opt to develop their own AI models trained on their own data. Indeed, one of the major announcements at OpenAI's recent DevDay conference was Custom Models, a tailored version of its flagship ChatGPT service that can be trained specifically on a company's proprietary datasets. Other LLM providers are likely to follow suit, given the pervasive uncertainty around data security.

However, choosing to develop internally does not mean you have thwarted all AI risks. Consider one of the most valuable crown jewels of today's digital enterprise: source code. As organizations increasingly integrate generative AI into their operations, they face new and complex risks related to source code management.
In the process of training these AI models, organizations often use customer data as part of the training sets and store it in source code repositories.

This intermingling of sensitive customer data with source code presents a number of challenges. Whereas customer data is typically managed within secured databases, with generative AI models this sensitive information can become embedded in the model's algorithms and outputs. The AI model itself becomes a repository of sensitive data, blurring the traditional boundaries between data storage and application logic. With less-defined boundaries, sensitive data can quickly sprawl across multiple devices and platforms within the organization, significantly increasing the risk of it being inadvertently compromised by external parties or, in some cases, by malicious insiders.

So, how do you take something as technical and abstract as an AI model and tame it into something suitable for users, all without putting your most sensitive data at risk?

3 Ways the CTO Can Help Strike the Balance

Every enterprise CTO understands the principle of trade-offs: if a business unit owner demands faster performance for a particular application, resources or budget might need to be diverted from other initiatives. Given their top-down view of the IT environment and how it interacts with third-party cloud services, the CTO is uniquely positioned to define an AI strategy that keeps data security top of mind. Consider the following three ways the CTO can collaborate with other key stakeholders to strike the right balance:

1. Educate before you eradicate. Given the many security and regulatory risks of exposing data via generative AI, it's only natural that many organizations reflexively ban its usage in the short term.
However, such a myopic mindset can hinder innovation in the long run. The CTO can help ensure that the organization's acceptable use policy clearly outlines appropriate and inappropriate uses of generative AI technologies, detailing the specific scenarios in which generative AI can be utilized while emphasizing data security and compliance standards.

2. Isolate and secure source code repositories. The moment intellectual property is introduced to an AI model, the task of filtering it out becomes exponentially more difficult. It's the CTO's responsibility to ensure that access to source code repositories is tightly controlled and monitored. This includes establishing roles and permissions to limit who can access, modify, or distribute the code. By enforcing strict access controls, the CTO can minimize the risk of unauthorized access or leaks of sensitive data, as well as establish processes that require code to be reviewed and approved before being merged.
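One lightweight way to act on the "isolate and secure" advice is to keep customer data out of repositories in the first place, before it can intermingle with source code. The sketch below is a minimal, hypothetical pre-commit-style scanner; the regex patterns and the overall design are illustrative assumptions, not a vetted security tool, and real deployments typically rely on dedicated secret-scanning products with far richer rule sets:

```python
import re
import sys

# Illustrative patterns for data that should never land in a source repo.
# Real secret scanners ship far more robust, vetted rule sets.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def scan_text(text: str) -> list[str]:
    """Return the names of all patterns that match anywhere in `text`."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

def main(paths: list[str]) -> int:
    """Scan files; return non-zero (blocking the commit) if anything matches."""
    failed = False
    for path in paths:
        with open(path, encoding="utf-8", errors="replace") as fh:
            for hit in scan_text(fh.read()):
                print(f"{path}: possible {hit}")
                failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Wired into a pre-commit hook or CI step, a check like this rejects commits that appear to mix customer data with code. It complements, rather than replaces, the repository access controls and mandatory review-before-merge processes described above.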


State of ITSM in Financial Services

An InformationWeek Report | Sponsored by TeamDynamix

Data from InformationWeek's State of ITSM in Financial Services report shows a wide range of maturity in how ITSM teams are dealing with the unique challenges of supporting technology stacks in today's financial vertical. While application portfolios grow and tickets mount, ITSM teams remain fairly lean. But they're not necessarily running efficiently, as they're forced to cope with legacy ITSM platforms, a low level of automation, and inefficient project management capabilities.

Key findings:

- 40% of FS ITSM teams support 100 or more applications
- 13% of these ITSM teams service 400 or more applications
- 58% of FS firms manage more than 500 tickets per month
- 40% of FS IT teams struggle with low ITSM maturity
- 43% of FS IT service desks identify manual processing as the top issue

Download the report to see how you compare.

Offered free by TeamDynamix
