CIO

What Oracle's $300B OpenAI deal means for enterprise cloud strategy

Sure, customers can (and do) complain about Oracle’s costs, licensing fees, and terms, but they shouldn’t complain about delivery, Kimball emphasized. They are “very measured, very execution focused, very driven” to deliver the same experience for every customer across every region. “They’ve been living in this space more than most companies have been around,” he said. “When they deliver a product out into market, they deliver a full-functioning product.” All told, when considering cloud environments, buyers have many factors to weigh: the cost of moving data, workloads, hybrid clouds, security, residency, locality, and overall trust in the provider. But whether it comes to OCI, AWS, Azure, or Google, they needn’t worry about support from a computational perspective, Kimball noted.


The Tile Shop’s CIO Christopher Davis on cloud migration, AI integration

So one of the things that I’ve been thinking about, and more than thinking about it, something I’ve always been attentive to, is really knowing your organization, knowing its culture, and finding a good fit for the team that you build and hire, or the partners that you engage in some cases. That’s a key factor in recruiting and retaining top talent. Just because someone is the smartest person in the room doesn’t necessarily mean they fit the culture and can operate, so that’s one of the things I really look for in order to build those strong teams.

The second thing I try to remind myself is that, at the end of the day, all of the work we do is transient. You might build something and it’s gone in three years. I’ve done that multiple times. So I try to remind my team that part of the objective is to meet the business needs, and that what you learn and the relationships you build are what’s going to be most critical. If you have those relationships, the team is probably going to have more fun, first of all, and second of all, they really value working as a team and feel success as they stay focused. Focus is something I talk a lot about with my team: making sure they don’t do everything, but instead do the right things. We talk constantly about priority. Is this really critical? So recruiting and retaining is really about having the right focus and the right priorities.

I was also taught long ago by a former CEO of Sleep Number about five steps of accountability and helping people feel accountable. If you hold them accountable, they’re successful. It starts with vision: you define a vision for the team. Like we said, cloud first was one of our vision points. Then you define the expectations and transfer the responsibility to the team, and our team has done amazing things because of that. It’s really been impactful. They’ve grown, they’ve learned. I think oftentimes people simply want to be challenged and to learn things.

So one of the things I think a lot about with retaining people is developing them and giving them those opportunities. I just had a conversation today, in fact, with a young man who has been growing in our service desk, and we talked about ways he can do new things. I’ve had some people get great opportunities and leave the organization recently. I said, well, how do you feel about doing this? And he said, that’d be awesome. Just getting that excitement and painting a picture of opportunity: hey, you can do this, why don’t we experiment with this? So I’m actually partnering him, probably with someone from marketing, to work with our sales team on some of the tools we talked about to help with sales conversion and help them visualize the solution. These AI tools really can do many things, and if we can experiment and learn and develop someone along the way, then those teams want to be here, they get excited about things, and that’s how you retain that talent. I look constantly for on-the-job training opportunities, like giving them the chance to get a certificate or get some training from a vendor tool they’re interested in learning about. There are so many ways they can learn, and that’s been one of my big areas of opportunity.

Maybe the last thing I’ll say is giving people your trust. What I mean by that is, if you do the five steps of accountability, transfer that responsibility, and then coach, mentor, and follow up, those things make people feel empowered so they can do it, and they step up to the challenge. Stepping up to the challenge means they’re committed to the organization, and that’s how you keep and build those teams.

Well, those are great insights.

Shane O’Neill


Why EGI will deliver the AI revolution that matters

EGI fundamentally transforms how enterprises operate, not through recycled pilots, but through systematic reinvention. It demands retiring legacy processes and roles while breaking down the silos that fragment critical workflows such as order-to-cash, procure-to-pay, and design-to-release. The foundation requires creating unified enterprise taxonomies, building interconnected knowledge graphs, and liberating decades of data trapped in PDFs, spreadsheets, and disparate systems. This data, originally designed for yesterday’s rule-based world, must be re-engineered and labeled for AI consumption so it can teach domain-specific LLMs your unique business context. When EGI takes hold, your organization deploys an army of intelligent agents exhibiting goal-driven autonomy, self-learning capabilities, and adaptive behavior in complex environments. These agents coordinate as specialists across domains, seamlessly transferring tasks and context, sharing knowledge, and solving interconnected problems. This is the EGI flywheel in action: intelligent, collaborative systems managing enterprise-scale complexity while continuously learning and improving.

[Figure 1: While AGI remains theoretical with an unknown timeline, EGI delivers measurable value in 12-24 months. Raman Mehta]

According to McKinsey’s 2024 Global Survey on AI, 65% of organizations are already regularly using generative AI, nearly double the share from just ten months earlier. The enterprise AI market tells the story: valued at $23.95 billion in 2024, it’s projected to reach $155.2 billion by 2030, growing at a 37.6% CAGR.
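To make the task-and-context handoff idea concrete, here is a minimal sketch of two specialist agents working on a shared context object. The class and field names are hypothetical illustrations, not from the article or any particular vendor’s agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class TaskContext:
    """Shared context that travels with a task between specialist agents."""
    order_id: str
    notes: list[str] = field(default_factory=list)

class ProcurementAgent:
    def handle(self, ctx: TaskContext) -> TaskContext:
        # Hypothetical domain logic: record what this specialist decided.
        ctx.notes.append("procurement: supplier selected, PO raised")
        return ctx

class FinanceAgent:
    def handle(self, ctx: TaskContext) -> TaskContext:
        # The next specialist sees everything earlier agents learned.
        ctx.notes.append("finance: invoice matched against PO")
        return ctx

# A simple orchestration loop: each agent works on the same shared context,
# so nothing is lost when the task crosses a domain boundary.
pipeline = [ProcurementAgent(), FinanceAgent()]
ctx = TaskContext(order_id="SO-1001")
for agent in pipeline:
    ctx = agent.handle(ctx)
print(ctx.notes)
```

Real agent platforms add planning, tool use, and learning loops on top, but the core pattern the article describes, specialists passing a task along with its accumulated context, looks roughly like this.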


Quantum computing is coming for your data. Here’s how to stay secure

This poses an existential risk for enterprises. Sensitive communications, transactions, intellectual property and even national security data protected by current standards could be exposed. Even more concerning, encrypted data intercepted today could be stored and decrypted years from now, once quantum computers are capable, through a ‘harvest now, decrypt later’ strategy.

The promise of post-quantum cryptography

Thankfully, the cybersecurity community is preparing. Researchers are developing post-quantum cryptographic (PQC) algorithms designed to withstand quantum attacks. These new methods rely on mathematical problems that remain hard even for quantum systems. In 2022, the U.S. National Institute of Standards and Technology (NIST) announced a first group of PQC algorithms for standardization, including lattice-based schemes like CRYSTALS-Kyber and CRYSTALS-Dilithium.
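As a rough illustration of what trying one of these NIST-selected schemes can look like, here is a minimal key-encapsulation sketch using the open-source liboqs-python bindings. Treat it as a sketch under assumptions: the package, algorithm identifier, and method names follow that library’s documented usage but can differ between versions, and a production rollout would involve hybrid schemes and formal review.

```python
# Minimal sketch of post-quantum key encapsulation (KEM) with liboqs-python.
# Assumes `pip install liboqs-python`; the algorithm name may vary by version
# (newer releases use the standardized name ML-KEM-768 for Kyber768).
import oqs

ALG = "Kyber768"  # CRYSTALS-Kyber parameter set selected by NIST in 2022

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()  # receiver publishes this

    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender derives a shared secret plus a ciphertext for the receiver.
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    # Receiver recovers the same shared secret from the ciphertext.
    shared_secret_receiver = receiver.decap_secret(ciphertext)

assert shared_secret_sender == shared_secret_receiver
```

The shared secret would then feed a conventional symmetric cipher, which is why migrating key exchange and signatures is the urgent part of PQC readiness.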


Workday unveils new agents, a new cloud, and a developer platform

Flex Credits

Workday’s new Flex Credits, a subscription-based consumption model for AI in Workday, also met with her approval. Customers receive an annual allotment of credits included in their Workday subscription, which may be applied to any Workday agents or platform innovations. They can purchase additional credits if need be, with no complicated tiers or hidden fees, the company said, and can monitor their spend on a dashboard. “While the consumption-based pricing model isn’t unique, Workday’s approach may provide procurement advantages compared to competitors’ traditional per-seat or per-feature models,” Brue said.

Workday Build

For developers, Workday announced Workday Build, a new platform that, it said, “gives customers and partners the power to create, share, and scale AI-powered solutions directly on Workday.” It will include Workday Flowise Agent Builder, a low-code tool acquired in August, as well as Workday Extend. And, said Mark Woollen, Workday’s global VP for partner innovation, “There’s a new AI developer set of products and tooling that’s designed to help customers build apps and agents and orchestrations of the Workday platform, including a genAI-powered developer copilot and agent gateway to connect AI agents with Workday’s Agent System of Record. Also included with Build is a whole expanded ecosystem of vetted, purpose-built solutions from Workday and our partners.”
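For readers unfamiliar with consumption-based pricing, the sketch below shows the general mechanics of a credit model of this kind: an annual allotment, optional top-ups, and a running spend figure that a dashboard would surface. It is purely illustrative, with hypothetical names and numbers, and is not Workday’s API or pricing:

```python
# Illustrative sketch of a generic consumption-credit model (not Workday's API).
from dataclasses import dataclass

@dataclass
class CreditLedger:
    annual_allotment: float   # credits included in the subscription
    purchased: float = 0.0    # optional top-ups
    consumed: float = 0.0     # running usage across agents and features

    @property
    def remaining(self) -> float:
        return self.annual_allotment + self.purchased - self.consumed

    def record_usage(self, feature: str, credits: float) -> None:
        # In a real platform this feeds the spend-monitoring dashboard.
        self.consumed += credits
        print(f"{feature}: used {credits} credits, {self.remaining} remaining")

ledger = CreditLedger(annual_allotment=1000)
ledger.record_usage("recruiting_agent", 12.5)
ledger.record_usage("expense_agent", 4.0)
if ledger.remaining < 100:
    print("Consider purchasing additional credits")
```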


IBM, AWS unite to scale trustworthy AI with seamless governance integration

Overview

In this episode of DEMO, host Keith Shaw sits down with Neil Leblanc (watsonx.governance Go-To-Market Lead, IBM) and Eduardo Fronza (Partner Solutions Architect, AWS) to showcase a powerful integration between IBM watsonx.governance and Amazon SageMaker AI. Together, they demonstrate how enterprises can automate AI governance, streamline collaboration across teams, and ensure compliance and trust throughout the AI model lifecycle. Whether you’re a data scientist, ML engineer, or a chief privacy or information officer, this solution empowers organizations to go beyond proof-of-concept and deploy AI at scale, securely and responsibly.

See how the platform:

* Enables seamless integration between AI governance and ML model development
* Automates risk assessments and regulatory compliance
* Aligns business, risk, and technical teams through collaborative workflows
* Uses SageMaker’s model registry to feed insights back into watsonx.governance
* Prevents unauthorized deployment of unapproved models

Available via the AWS Marketplace, this joint solution gives customers the flexibility to deploy governance as a fully managed SaaS or within their own Amazon environment. This episode is sponsored by IBM and AWS.
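The last two bullets above hinge on SageMaker’s model registry, which tracks a model package’s approval status, so a deployment pipeline can check that status before anything ships. The sketch below uses the standard boto3 SageMaker client; the model package group name and the gating logic are illustrative assumptions, and the watsonx.governance side of the integration is not shown:

```python
# Minimal sketch: gate deployment on model-registry approval status via boto3.
# The package group name is hypothetical; governance tooling (for example,
# watsonx.governance) would set the approval status, not this script.
import boto3

sm = boto3.client("sagemaker")

def latest_package_arn(group_name: str) -> str:
    # Most recently registered model package in the group.
    resp = sm.list_model_packages(
        ModelPackageGroupName=group_name,
        SortBy="CreationTime",
        SortOrder="Descending",
        MaxResults=1,
    )
    return resp["ModelPackageSummaryList"][0]["ModelPackageArn"]

def approved_for_deployment(package_arn: str) -> bool:
    details = sm.describe_model_package(ModelPackageName=package_arn)
    return details.get("ModelApprovalStatus") == "Approved"

arn = latest_package_arn("credit-risk-models")  # hypothetical group name
if approved_for_deployment(arn):
    print(f"{arn} is approved; proceed with deployment")
else:
    # Unapproved or rejected models never reach production.
    print(f"{arn} is not approved; blocking deployment")
```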


Good governance holds the key to successful AI innovation

Organizations often balk at governance as an obstacle to innovation. But in the fast-moving world of artificial intelligence (AI), a proper governance strategy is crucial to driving momentum, including building trust in the technology and delivering use cases at scale. Building trust in AI, in particular, is a major hurdle for AI adoption and successful business outcomes. Employees are concerned about AI’s impact on their jobs, and the risk management team worries about safe and accurate use of AI. At the same time, customers are hesitant about how their personal data is being leveraged. Robust governance strategies help address these trust issues while laying the groundwork for standardized processes and frameworks that support AI use at scale. Governance is also essential to compliance, an imperative for companies in highly regulated industries such as financial services and healthcare.

“Done right, governance isn’t putting on the brakes as it’s often preconceived,” says Camilla Austerberry, director at KPMG and co-lead of the Trusted AI capability, which helps organizations accelerate AI adoption and safe scaling through the implementation of effective governance and controls across the AI life cycle. “Governance can actually be a launchpad, clearing the path for faster, safer, and more scalable innovation.”

Best practices for robust AI governance

Despite governance’s role as a crucial AI enabler, most enterprises struggle with it, in part because of the fast-moving technology and regulatory climate as well as an out-of-sync organizational culture. According to Foundry’s AI Priorities Study 2025, governance, along with IT integration and security, ranks among the top hurdles for AI implementations, cited by 47% of the responding organizations. To be strategic about AI governance, experts recommend the following:

Focus on the basics. Because AI technologies and regulations are evolving so quickly, many organizations are overwhelmed by how to build a formal governance strategy. It’s important to create consensus on how AI strategy aligns with business strategy while establishing the proper structure and ownership of AI governance. “My advice is to be proportionate,” Austerberry says. “As the use of AI evolves, so will your governance, but you have to start somewhere. You don’t have to have it all baked in from the start.”

Include employees in the process. It’s important to give people easy access to the technology and encourage widespread use and experimentation. Companywide initiatives that gamify AI encourage adoption and promote feedback for AI governance frameworks. Establishing ambassador or champion programs is another way to engage employees by way of trusted peers, and an AI center of excellence can play a role in developing a foundational understanding of AI’s potential as well as the risks. “Programs that are successful within organizations go that extra mile of human touch,” says Steven Tiell, global head of AI Governance Advisory at SAS Institute. “The more stakeholders you include in that conversation early, the better.”

Emphasize governance’s relationship to compliance. Effective governance means less friction, especially when it comes to regulators and risk auditors slowing down AI implementation. Given the varied global regulatory climate, organizations should take a forward stance and think beyond compliance to establish governance with lasting legs. “You don’t want to have to change business strategy or markets when a government changes regulations or adds new ones,” says Tiell. “You want to be prepared for whatever comes your way.”

To learn more, watch this webinar.


Doomprompting: Endless tinkering with AI outputs can cripple IT results

“Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.”

Agents of doom

Observers see two versions of doomprompting, the first being an individual’s interactions with an LLM or another AI tool. This scenario can play out in a nonwork situation, but it can also happen during office hours, with an employee repeatedly tweaking the outputs on, for example, an AI-generated email, line of code, or research query. The second type of doomprompting is emerging as organizations adopt AI agents, says Jayesh Govindarajan, executive vice president of AI at Salesforce. In this scenario, an IT team continuously tweaks an agent to find minor improvements in its output.
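One way teams put Farmer’s “call it done” advice into practice is to agree on acceptance criteria and an iteration budget before anyone starts prompting. The sketch below is a hypothetical illustration of that discipline; generate() and meets_requirements() are placeholder stand-ins, not any vendor’s API:

```python
# Illustrative sketch: bound prompt iteration with an explicit "done" criterion
# agreed up front, instead of tweaking indefinitely. generate() and
# meets_requirements() are hypothetical stand-ins for real calls and checks.
MAX_ATTEMPTS = 3

def generate(prompt: str, attempt: int) -> str:
    # Stand-in for an LLM or agent call.
    return f"draft {attempt} for: {prompt}"

def meets_requirements(output: str) -> bool:
    # Stand-in for the acceptance criteria defined before prompting started.
    return "draft 2" in output

def bounded_prompting(prompt: str) -> str | None:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        output = generate(prompt, attempt)
        if meets_requirements(output):
            return output   # good enough: call it done
    return None             # budget exhausted: stop and step away

print(bounded_prompting("summarize the quarterly sales report"))
```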


Why AI upskilling fails, and how tech leaders are fixing it | What IT Leaders Want, Ep. 11

That’s a great question. I think it’s important to realize with technology that it’s constantly evolving. Upskilling isn’t a choice; it’s an imperative. Organizations must upskill, otherwise they’re getting left behind.

In terms of how Red Gate does that, one of the first principles we operate from is that we always try and hire curious folks, and that means people who have a thirst for learning. You might wonder how you find such people, and that is hard. One of the simple filter questions we use is just to ask people: what’s the last book they read, what’s the last technology they played with, what makes them excited? That can give you a great impression of whether someone has that curiosity and that mindset to learn and adapt.

Another principle we try and put in place is that before you introduce a technology, you really need to understand the why of that technology. You need to feel the problem the technology is trying to solve. For example, if you’re trying to learn Kubernetes, a container orchestration framework, and you haven’t felt the problem that Kubernetes solves, it’s going to feel like an overcomplicated solution to a problem you haven’t got. The way you create that space for people is not to run workshops treating things in the abstract, but to give people a chance to play with the technology and run into those problems themselves, so they can discover the solutions and learn to put them into practice.

Some of the ways we try and do that: at Red Gate, we have this thing called 10% time, where we set aside every Friday afternoon for people to embrace learning and development. That might be through lightning talks. It might be through trying to fix a particular customer issue in a new and novel way, or it might just be trying to get to grips with a new technology with a toy application, a Slack bot that orders lunch for the team every Friday, something akin to that.

And the final way I think is really important to upskill people is to expose expert thinking. That’s really key: seeing the decision-making process in action. One of the things we’ve put in place, and it’s taken a long time to get this actually showing value, is architecture decision records. When people make changes to software at Red Gate, we ask them to fill in a short description of why they’re doing it, the options they considered, and why they chose the path they chose. I think we put this in about five years ago. Now we’ve got a library of almost 500 architecture decisions that detail why we did something, and sometimes, a few years later, why we were wrong about that. And that’s brilliant. It’s an organizational repository of knowledge that new starters can look in to understand why decisions were made. They might be wrong. We’re still going to make wrong decisions. Everyone does, but at least you can see the thinking process underneath.

Valerie Potter
