
Trump’s semiconductor tariffs threaten CIO budgets with up to 80% cost surge

His analysis accounted for inventory buffers, indicating that the pricing surge would come not as a single spike but in two to three waves, and that it would not hit all vendors at the same time. Manish Rawat, semiconductor analyst at TechInsights, provided a more graduated timeline: enterprise hardware prices rising 15-25% within 6-18 months as vendor stockpiles diminish, then escalating to cumulative increases of 30-40% for systems using advanced Asian-manufactured chips. “Pricing may become tiered based on component origin,” Rawat warned, creating new complexity for enterprise procurement teams.

Vendor landscape reshuffles competitive positioning

The exemption framework creates clear winners and losers among enterprise suppliers, potentially reshuffling competitive relationships that have dominated the industry for decades. Companies with established US manufacturing — including NVIDIA, Intel, Micron, and Apple — gain significant pricing advantages over competitors dependent on Asian production.
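To see what those ranges could mean for a budget, here is a minimal, illustrative sketch in Python of a tiered cost model; the spend figures are hypothetical, and the uplift percentages are simply taken from the ranges quoted above, not from any vendor price list.

```python
# Illustrative only: rough model of tariff-driven price tiers for a hardware refresh.
# Spend figures are hypothetical; uplift rates echo the 15-40% ranges quoted above.

BASELINE_SPEND = {
    "us_manufactured": 2_000_000,       # vendors with established US fabs
    "asian_advanced_chips": 1_500_000,  # systems using advanced Asian-made chips
    "mixed_origin": 1_000_000,
}

ASSUMED_UPLIFT = {
    "us_manufactured": 0.05,
    "asian_advanced_chips": 0.35,  # midpoint of the 30-40% cumulative range
    "mixed_origin": 0.20,          # midpoint of the 15-25% near-term range
}

def projected_spend(baseline: dict, uplift: dict) -> dict:
    """Apply each tier's assumed uplift and return projected spend per tier."""
    return {tier: amount * (1 + uplift[tier]) for tier, amount in baseline.items()}

if __name__ == "__main__":
    projected = projected_spend(BASELINE_SPEND, ASSUMED_UPLIFT)
    for tier, amount in projected.items():
        print(f"{tier:>22}: {BASELINE_SPEND[tier]:>12,.0f} -> {amount:>12,.0f}")
    total_uplift = sum(projected.values()) / sum(BASELINE_SPEND.values()) - 1
    print(f"Blended increase: {total_uplift:.1%}")
```

Swapping in your own spend mix and tier assumptions turns this into a quick sensitivity check rather than a forecast.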


Ushering in a new era of mainframe modernization

In an era defined by AI-driven innovation, real-time insights, and digital transformation, mainframes continue to serve as the foundation for the world’s most critical systems. They are used by 71% of Fortune 500 companies, handle 68% of global production IT workloads, and process 90% of all credit card transactions. Without them, the digital world simply would not function. That’s why IT modernization doesn’t necessarily mean abandoning the mainframe.

The recent launch of the IBM z17 marks a significant milestone in the journey to unlock the mainframe’s full potential. More than just an infrastructure upgrade, z17 represents a strategic leap forward in enabling hybrid cloud integration, embedded AI, advanced security, and intelligent automation, thanks to its new on-chip AI acceleration capabilities. When paired with the right software, organizations that lean in are positioned to fully harness these advancements for real-time inferencing at scale. These changes reflect the latest priorities of today’s largest enterprises and most critical industries. We’ll explore those implications next.

The rise of embedded AI in core systems

One of the most talked-about advancements in the IBM z17 is its ability to run AI directly on the hardware, thanks to the new Telum II processor and the introduction of the Spyre accelerator cards. The Telum chip provides AI inferencing, and the Spyre cards add support for generative AI and LLMs right on the hardware, closer to the data. By embedding AI at the hardware level, z17 enables real-time analysis of transactional and unstructured data. This is a game-changer for industries like financial services, healthcare, and logistics, where decisions must be made instantly and securely.

The broader takeaway for the enterprise is clear: the next era of innovation will be defined by how well organizations can integrate intelligence into operational workflows. Mainframe systems, traditionally viewed as static, are now capable of participating in this real-time, AI-driven ecosystem—shifting perceptions and expanding use cases in the process.

Evolving expectations for security and resiliency

The z17 also addresses an urgent reality in enterprise IT: the threat landscape is growing in complexity. Security breaches, ransomware attacks, and regulatory scrutiny are no longer hypothetical risks—they are everyday challenges for IT leaders. With enhanced encryption capabilities, built-in resiliency features, and expanded data recovery options, the z17 is designed to meet the moment. From an industry perspective, this raises the bar for what “secure infrastructure” truly means. Platforms like z17 are leading the shift toward more proactive, integrated approaches to cyber resilience, setting a new standard for enterprise systems going forward. When paired with software and procedures designed for proactive monitoring and rapid recovery, these systems become both safer and more resilient than ever before, a crucial step toward mitigating the ever-evolving risk posed by bad actors.

Bridging the data divide in hybrid environments

One of the key challenges in modern IT environments is integrating data across siloed systems. Mainframe data, despite being some of the most valuable in the enterprise, often remains underutilized due to accessibility barriers.
With a z17 foundation, software data solutions can more easily bridge critical systems, offering unprecedented data accessibility and observability. For CIOs, this is an opportunity to break down historical silos and make real-time mainframe data available across cloud and distributed environments without compromising performance or governance. As data becomes more central to competitive advantage, the ability to bridge existing and modern platforms will be a defining capability for future-ready organizations.

Mainframes are now more accessible than ever

For many industries, mainframes continue to deliver unmatched performance, reliability, and security for mission-critical workloads—capabilities that modern enterprises rely on to drive digital transformation. Far from being outdated, mainframes are evolving through integration with emerging technologies like AI, automation, and hybrid cloud, enabling organizations to modernize without disruption. With decades of trusted data and business logic already embedded in these systems, mainframes provide a resilient foundation for innovation, ensuring that enterprises can meet today’s demands while preparing for tomorrow’s challenges. In today’s hybrid world – where workloads span cloud-native applications and core systems – mainframes like the IBM z17 offer a connective layer that brings consistency, performance, and security to complex IT landscapes.

Proudly partnered with IBM, Rocket Software is committed to helping organizations leverage these advancements and modernize without disruption. To learn more about how we’re doing that, visit here.
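To make the embedded-AI idea above concrete, here is a generic, hypothetical sketch of what “inference in the transaction path” can look like: a payment is scored by a model running on the same system that processes it, instead of calling out to a remote scoring service. This is not IBM’s z17, Telum, or Spyre API; the model logic and thresholds are stand-ins for illustration only.

```python
# Hypothetical sketch of in-path inference: the fraud score is computed on the
# same system that authorizes the transaction, so no network hop is needed
# inside the authorization window. Not an IBM z17/Telum/Spyre API.
from dataclasses import dataclass

@dataclass
class Transaction:
    account_id: str
    amount: float
    merchant_category: int
    country_code: int

def score_locally(txn: Transaction) -> float:
    """Stand-in for a model served next to the data (0.0 = safe, 1.0 = risky)."""
    risk = 0.0
    if txn.amount > 5_000:
        risk += 0.4
    if txn.country_code not in (840, 276):  # toy rule: outside US/Germany
        risk += 0.3
    return min(risk, 1.0)

def authorize(txn: Transaction, threshold: float = 0.6) -> bool:
    """Approve or hold the transaction within the same code path that books it."""
    return score_locally(txn) < threshold

if __name__ == "__main__":
    txn = Transaction("acct-42", 7_250.00, merchant_category=5732, country_code=250)
    print("approved" if authorize(txn) else "held for review")
```

The point of the pattern is latency and data movement, not the scoring logic itself: because the model runs where the transaction lives, sensitive data never leaves the platform during the decision.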


How safe is your AI conversation? What CIOs must know about privacy risks

In a recent podcast appearance on This Past Weekend with Theo Von, Sam Altman, CEO of OpenAI, dropped a bombshell that’s reverberating across boardrooms and IT departments: conversations with ChatGPT lack the legal protections afforded to discussions with doctors, lawyers, or therapists. This revelation underscores a critical gap in privacy law and raises urgent questions about how organizations can responsibly integrate AI while safeguarding user data. For CIOs and C-suite leaders, Altman’s warning serves as a wake-up call to strike a balance between innovation and robust privacy, compliance, and governance frameworks. Here’s what business leaders need to focus on to stay compliant and ahead of the curve in this rapidly evolving AI landscape.

The privacy gap in AI conversations

Altman highlighted that users, particularly younger demographics, are increasingly turning to ChatGPT for sensitive advice, treating it as a substitute for a therapist or life coach. However, unlike professional consultations protected by legal privileges, these AI interactions are not confidential. In legal proceedings, OpenAI could be compelled to disclose user conversations, exposing deeply personal information. This issue is compounded by OpenAI’s data retention policies, which allow chats to be stored for up to 30 days (or longer for legal and security reasons), posing risks to user privacy in cases like the ongoing lawsuit with The New York Times.
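One practical control while this legal gap persists is to keep obviously sensitive identifiers out of prompts before they ever leave the organization. The sketch below is a minimal, illustrative pre-filter; the regex patterns are deliberately simplistic, and send_to_assistant is a placeholder for whatever LLM client an organization actually uses, not a real API.

```python
# Illustrative sketch: redact common PII patterns from a prompt before it is
# sent to any external AI service. The patterns are simplistic and the
# send_to_assistant() call is a placeholder, not a real client library.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def redact(text: str) -> str:
    """Replace recognizable personal identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def send_to_assistant(prompt: str) -> str:
    """Placeholder for the organization's actual, approved LLM client call."""
    raise NotImplementedError("wire up your governed LLM client here")

if __name__ == "__main__":
    raw = "My SSN is 123-45-6789 and my email is jane.doe@example.com."
    print(redact(raw))  # -> "My SSN is [SSN] and my email is [EMAIL]."
```

A filter like this does not create legal privilege, of course; it simply reduces what could be exposed if stored conversations are ever disclosed.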


Oracle SVP Thorsten Herrmann: Being late to the cloud has its advantages

Would it also be conceivable, for example, for AWS to set up its data center machines in the Oracle data centers and operate them there for its customers?

Not an unappealing thought, but that would be purely speculative. It would definitely not be decided in Germany; that is clearly a matter for our central engineering teams, and various aspects would have to be assessed, security among them. In general, there has been quite a cultural change at Oracle in recent years—a move towards openness. When I think back to how Larry Ellison used to berate AWS and Microsoft just a few years ago, and today they’re cooperating. And vice versa, of course. I can still remember from my Microsoft days when we tried to replace Oracle database environments with PostgreSQL. That wasn’t so easy. It shows that the level of customer satisfaction with our database is high, particularly in terms of stability and innovation, especially now in the 23ai environment. When I made my trips to get to know customers, there were certainly some discussions, but almost nobody confronted me with complaints about instability, technological shortcomings or the like. That’s why this combination is so important: now I can also use the database stack that I have learned to value as a customer in the cloud and, above all, in the cloud of my choice. In other words, real added value. Everyone welcomes that. However, I currently have the feeling that many providers are trying to draw up boundaries again and give preference to their own stack, along the lines of: “Dear customer, if you are already in my cloud and in my stack anyway, why not use my tools, my features and my services instead of something external?”

But regulation demands more openness from cloud providers.

Exactly. Regulation is one aspect, but modularity also plays an important role. We have always had this in the application environment. In the on-premises world, we heard users complain about monolithic blocks versus modular systems and best-of-breed. That issue doesn’t disappear once you’re in the cloud. Of course, every provider wants customers to use as much of its own stack as possible. But offering these possibilities via standard interfaces, standard data exchange and certain standard formats, and thereby supporting change, should be a given. We keep hearing from users who are now in the cloud and are finding that the whole thing doesn’t work so well economically if everything sits in one place. Here, too, we are to some extent the challenger in the market, with a completely different price-performance ratio, which is also due to the architecture. With us, you can consume a fully fledged cloud, i.e. its full functionality, in just four racks. With our competitors, you need many times that amount, and the smaller solutions are always a subset of the functions.

But you can’t get very far with four racks, can you?

Of course, four racks don’t have the same compute power and storage capacity as 100, but there is no lack of functionality. That’s why we can offer particularly attractive conditions and place this infrastructure and these platforms in market segments where people were previously asking: “Well, this is actually too expensive, and can we even afford it?” But that is basically the crucial question: do I focus on a technical migration to the cloud and simply move my IT from my data center to the cloud, which ultimately doesn’t bring any particular added value? Then I’m giving away all the opportunities for modernization and transformation. Hardly anyone does pure lift and shift these days. Many are moving into new application development with the cloud, or at least into a certain degree of cloudification, containerization and so on. Then there are perhaps a few topics that are no longer strategic in terms of the time horizon but will still be needed for the next two or three years. The effort there should be kept to a minimum, so you encapsulate them or keep them on-premises. Or, if you really want to empty the data center, you simply move them over to the cloud and continue to operate them there.

Good, but that still has a very technical focus. I would go one step further, in the direction of process and organizational modernization. You mentioned it: Oracle wants to focus on certain industries. Are you also getting into genuine business consulting that you offer yourselves, or is this done via partners?

In those areas where we have sufficient know-how ourselves—and I would first and foremost mention the healthcare sector with Oracle Health, formerly Cerner—we have built up a lot of capacity, as we have in retail and the hotel industry. Otherwise, we pursue the approach of strategic partnerships, i.e. addressing these topics together with the relevant consulting firms. I would not see it as an obvious goal to build up corresponding capacities ourselves. There are excellent industry-specific specialist consultancies and large consulting firms such as PwC, Deloitte, Accenture and so on, with whom we naturally talk intensively about partnerships and then also address certain industries together. That is why we have also organized ourselves internally by industry, in order to improve this connectivity.

Oracle cooperates with various providers of large language models (LLMs). Its own bots and AI agent technology then build on these. How do AI agents from different platforms understand each other, and how do they exchange information?

PwC is building interesting platforms for this. At the end of the day, you need the right platforms, because there are LLM developers, and there are also many customers who want to train their LLMs enriched with their enterprise data, and that requires powerful infrastructure. We operate the entire stack in the AI environment. We started integrating AI into our own applications very early on, both in the industry-specific applications and in the Fusion


Is the AI skills shortage a threat to IT leaders? | What IT Leaders Want, Ep. 10

Yeah, I think that’s what I would say. I think there are two separate issues here. One is, yeah, the level of scrutiny and stress on the CIO is probably at the highest level it’s ever been, which isn’t entirely a bad thing, by the way, but it is a very stressful role. It reminds me of, like, the CMO role of a few years ago, where the expectations are very high. You’re working in a very changeable market. Like, there’s going to be stress and pressure, but you can’t divorce that from the external environment, right? So it’s also a time of economic instability. It’s a time of geopolitical instability, and we’re living through an acceleration in an industrial revolution. So, like, you know, if the tenure is 3.3 years, what’s changed in that time? Probably everything, right, for a lot of organizations. So it’s unsurprising that the person at the beginning of it might not be the person that survives it, just for the fact that they were there at the beginning of it, right? Like, they’ve been inside it, so they can’t see what needs to happen externally. But I think what’s really interesting is, I do think we’re at a very specific point in time. And it’s one of those, you know, that idea of, like, people always underestimate how much is going to change in five years but overestimate what’s going to change in a year. I do wonder what tenure will look like for senior executives in IT in maybe three or four years, when maybe, like, there’s a bit more of a well-trodden path around these things, and it’s not just everything gets thrown up in the air and see where it lands.

Keith Shaw: Yeah.


Caylent: A strategic approach to generative AI adoption, from vision to value

Overview

Generative AI has the potential to redefine productivity, create novel applications, and reinvent the customer experience. But without a strategic approach, you could not only miss out on the promise of this powerful tool but also drain time, energy, and resources away from other mission-critical initiatives across your organization. To that end, Kristen Backeberg, Director of Global ISV Partner Marketing at AWS, and Val Henderson, President and CRO at Caylent, recently sat down to discuss perhaps the most important consideration around adoption: how to tailor your generative AI strategy to clear goals that can drive your organization forward.

Their conversation started, like so many around generative AI, with an overview of especially high-impact use cases. Both AWS and Caylent have helped dozens of organizations adopt generative AI, and Backeberg and Henderson understand that starting this journey can be daunting. The solution, according to both, is knowing which use cases will bring the most ROI. “[The first implementation] has to generate real ROI,” Henderson said. “Based on what we’re seeing, if it doesn’t, generative AI adoption loses steam and attention. It loses momentum. That’s not good for anybody, because we’re seeing such incredible innovation, and the speed of that innovation has never been faster.”

In addition, focusing on customer experience provides a clear north star for AI initiatives. By focusing less on buzzwords and more on clearly defining the system’s purpose, organizations can drive effective development and performance. “People are always going to want to understand the why,” Henderson added. “Why did we do this? Did we do it to check a box, or did we do it because it helps us move our vision, our desire to help our customers, to create a better experience moving forward?”

By building together on top of Amazon Bedrock, Caylent has been able to help 50+ customers across even the most stringently regulated sectors adopt generative AI solutions, proving that this approach is a powerful way for organizations to bring use cases to life. “This is where AWS feels good about trying to make sure that we’re continuing to think differently,” Backeberg said, “think bigger, think outside of the box, and bring together the pieces along the way that create some of the necessary guardrails while we develop this space.”
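For teams gauging what building on Amazon Bedrock involves, the entry point is typically a single SDK call. Below is a minimal sketch using boto3’s Bedrock Runtime Converse API; it assumes AWS credentials are configured and the account has been granted access to the referenced model, and the model ID and prompt are placeholders to swap for your own.

```python
# Minimal sketch: one request to a foundation model via Amazon Bedrock's
# Converse API. Assumes boto3 is installed, AWS credentials are configured,
# and model access has been enabled in the account; the model ID is a placeholder.
import boto3

def ask_bedrock(prompt: str,
                model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256},
    )
    # The reply comes back as a list of content blocks; take the first text block.
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(ask_bedrock("Name three guardrails to put in place before piloting generative AI."))
```

The real adoption work, as the discussion above suggests, sits around this call: picking the use case, measuring ROI, and putting the right guardrails in place.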


Navigating the honeymoon phase of a new product launch

What counts as an expert, however, varies based on your role. If your role is about setting the big picture and vision, then a broad understanding across different areas is more useful. But if you’re focused on a specific product area, then deeper, more specialized knowledge is often better. It helps you dig into the technical details, user journeys, and specific features.

Early in my career as a product manager at a log analytics software company, I was daunted by the lingo and intimidated by the breadth of knowledge those around me had on deep technical topics. I had a choice: I could go broad or deep. I chose to go deep and spent a lot of time learning the ins and outs of the product. That depth gave me the knowledge I needed to define product requirements with confidence, and it established credibility with both my peers and stakeholders. While I was not someone who could strategize broadly at that time, I was able to delve deeply and become a master of my domain, building trust with the engineering and management teams I worked with.

Finally, it’s important to give yourself grace during this learning process. Truly understanding a new product area, especially a complex one, takes time. There are rarely shortcuts to real mastery; it requires consistent effort, active involvement, and openness to learning from both wins and losses. Honesty about your knowledge, or lack thereof, can lead to two simultaneous outcomes: on one hand, it fosters trust; on the other, it might erode confidence in your preparedness. The aim is to achieve the former while avoiding the latter, a balance that lets you stay consistently transparent about your progress.
