CIO CIO

Solving multicloud networking challenges to scale businesses with AI

Artificial Intelligence (AI) and its demand for massive computational power have spurred the growth of cloud and edge computing. As AI pilot projects take off and scale, there has been an increasing demand for more flexible and high-performing infrastructure. This has led to the rise of multi-cloud networking as organisations attempt to tap into the best-in-class tools while allowing workloads to be more evenly distributed. The advantages of a multi-cloud architecture include the ability to seamlessly pull and process data across clouds, data centres, and edge environments, making it a strategic choice for businesses scaling their AI initiatives. However, while multi-cloud may sound like the best solution on paper, there are many hurdles in actual implementation. According to the 2024 Cloud Computing Study by Foundry, 48% of IT decision-makers cited cost as their biggest challenge for overall cloud adoption. This is not surprising given that AI workloads, particularly in training large models, can quickly inflate cloud spending. Additionally, network latency poses performance challenges, especially when AI applications rely on real-time data processing. These issues underscore the need for smarter multi-cloud strategies. Identifying and addressing the multi-cloud roadblocks to AI success IT teams need to ensure their networks can scale flexibly, reaching bandwidths of up to 100 Gbps to support the high-throughput demands of AI-driven applications. “Networks should have the capability to activate new connections in minutes to keep pace with dynamic workloads, enable rapid deployment of AI services, and ensure seamless performance across various cloud and edge environments.” Hon Kit Lam, Vice President, Hybrid Connectivity Services, Tata Communications While not impossible, these demands come with high egress fees from transferring data between multiple clouds. This is often accompanied by low bandwidth and limited or siloed visibility across clouds. The complexities involved in managing multiple cloud instances also mean there are no performance guarantees, despite the costs. Leveraging a robust multicloud networking solution To help address the challenges of multicloud adoption, Tata Communications introduced the IZO™ Multi Cloud Connect, enabling businesses to bridge their cloud connectivity gaps and alleviate the IT department’s burden.  By simplifying the multi-cloud experience with software-defined multi-cloud connectivity as-a-service, IZO™ Multi Cloud Connect offers instant cloud-to-cloud connectivity for a lower total cost of ownership, higher bandwidth, and on-demand bandwidth upgrades. This means customers can upgrade their bandwidth and pay for the extra cost only when required. Together with the reduction in egress cost, enterprises can save around 50% in their cloud connectivity charges. Establishing a secure connection from the data centre to the cloud is essential for modern workloads, especially those driven by AI and real-time analytics. With high-throughput bandwidth to handle large volumes of data, low latency to ensure responsive application performance, and on-demand provisioning for dynamic scaling, IZO™ Multi Cloud Connect can remove the bottlenecks and keep operations agile. It also delivers a fully managed networking solution that combines both the underlay (physical network infrastructure) and overlay (virtual network functions). This integrated approach allows organisations to abstract and automate complex routing, optimise traffic paths, and enforce consistent policies across multi-cloud and hybrid environments.  Enabling seamless interconnection between clouds As part of its transformation process, leading specialty chemical company Clariant SE was migrating from on-premises data centres to the cloud and needed a solution to connect its distributed cloud data centres over the internet. “It required the performance of MPLS VPN, and IZO™ Multi Cloud Connect came through,” explained Lam. “The solution, with virtual router capability, provides a predictable internet connection from Clariant SE’s more than 100 sites to business-critical applications in the cloud.” As AI adoption continues to empower businesses across industries, the need for scalable, secure, and reliable infrastructure has become a necessity. Organizations that want to remain competitive will have to overcome the barriers to these complex technologies and implement the right tools that are necessary for success. Tata Communications’ IZO™ Multi Cloud Connect is built to address the challenges that come with change, supporting an end-to-end multicloud adoption journey.  Speak with an expert to learn how IZO™ Multi Cloud Connect can support your multi-cloud infrastructure. source

Solving multicloud networking challenges to scale businesses with AI Read More »

The AI revolution isn't about technology

In contrast to AI-assisted development, with vibe coding, you let AI take the lead to generate all the code for your application. Even in case of errors, the recommended approach is to let the tool analyze errors and provide the fix. If it fails to do so, that’s when human intervention may be required. Though still in the early stages, where it shows significant promise in building content-based sites, internal tools and small-scale apps, tangible gains are visible with as much as 30% of Microsoft code being written by AI, to use just one prominent example in well-known software. Along with other AI promises, vibe coding centers the software development around human conversations on needs and intents for the application. This approach democratizes application development and aligns with the original promise of digital transformation: empowering people to focus on creativity and innovation rather than worrying about development and implementation.  Autonomous workflow: End-to-end business process execution Agentic AI refers to the next step toward intelligent automation (IA), where AI functions as an autonomous agent (digital worker) capable of reasoning, adapting, learning and making decisions on complex tasks to execute end-to-end flow. Whereas in vibe-coding, AI can autonomously generate, enhance and bug-fix the code, with agentic AI, systems exhibit greater autonomy in orchestrating an end-to-end workflow. Based on the design patterns like reflection, tool use, planning and multi-agent collaboration to generate responses, agentic AI can continuously learn from feedback and self-improve over time. source

The AI revolution isn't about technology Read More »

Synology’s Active Protect Manager reinvents enterprise backup with speed, simplicity

Keith Shaw: Hi everybody, welcome to DEMO, the show where companies come in and show us their latest products and platforms. Today, I’m joined by Cody Hall. He is a product manager at Synology. Welcome to the show, Cody. Cody Hall: Happy to be here. Keith: And you’re a hardware company, right? I’m excited—we get so many software companies in here. You guys have been in the storage space for a while. So, what are you going to be showing us today? Cody: I’m going to be showing you Active Protect Manager, a dedicated all-in-one enterprise backup and software solution. Keith: Who is this really designed for? Are you moving into the enterprise space, and is that why we’re seeing some of these new products? Cody: Yes, this system was specifically designed for enterprises looking to streamline the design, implementation, and management of their backup infrastructure—even at scale—while taking into consideration industry standards for ransomware protection, air gapping, and the like. Keith: Okay, and I’m assuming this is geared towards network or storage managers within a company. Are there other people or groups who would benefit from this? Cody: Oftentimes at an enterprise, there’s a whole team dedicated to managing backups at that scale. So this is really targeted at those teams and the pain points they experience managing other systems. Keith: Is everybody doing backup well these days, or is it still kind of a mess? Cody: Actually, not so well. We’ve done some workshops over the past couple of years, and whenever we asked the audience how many had tested and run their backups, we saw very few hands raised. It was a little concerning. So no, I don’t think everyone is doing backups perfectly across the board. Keith: Okay, so what problem is Active Protect Manager aiming to solve? Why should a company be interested in learning more about this product? Cody: The reason you’d want to learn more is that, since it’s a dedicated hardware and software solution, your backup teams won’t spend as much time speccing out the hardware and software. The system comes with preconfigured hardware and software, so you can essentially repeat deployments as you scale. That means your IT team uses less brainpower and can focus their attention elsewhere. Keith: So if a company didn’t have this, they’d be spending a lot more time on backup setup? Cody: Oh, definitely. With other backup vendors, you might get hardware-agnostic flexibility, but that creates a burden of choice. Your IT team would have to design and implement everything themselves, making expansion harder. This solution aims to alleviate those pains. Keith: All right, let’s jump into the demo. Show us what you’ve got. Cody: Wonderful. This is Active Protect Manager. This will be the environment we’ll be walking through. source

Synology’s Active Protect Manager reinvents enterprise backup with speed, simplicity Read More »

Is your GenAI adoption outpacing your ability to secure it?

Generative artificial intelligence (GenAI) continues to soar in Asia Pacific, with spending predicted to reach US$26 billion by 2027. This highlights organizations’ strong belief in GenAI’s potential to enhance product development and design workflows, automate processes, and generate content, ultimately creating better business outcomes. Central to this growth is the evolving capabilities of large language models (LLMs). These advances, such as the ability to process multiple data types like text and images, as well as provide richer context, can offer more accurate insights, improved support for different applications, and increased efficiency. Innovating at the cost of security risks But as with any technological advancements, GenAI comes with risks. Early adopters leveraging LLMs, such as the open-source model DeepSeek for its high performance in reasoning tasks, may be capturing business value ahead of their peers. However, Gartner cautioned that the swift adoption of GenAI technologies has outpaced the development of data governance and security measures. “At present, there is a growing number of LLM vulnerabilities that can potentially expose businesses to threat actors. Among the more prominent ones is the Grandma Attack, a social engineering attack in the form of prompt injection that jailbreaks the safeguards of LLMs with specific inputs. In this instance, threat actors craft prompts that mimic the persona of a harmless elderly relative to execute specific instructions,” explains Ker Yang Tong, ASEAN and India CTO of Fujitsu. “This allows them to bypass security controls to extract sensitive data and even manipulate critical infrastructure controls, resulting in severe financial and reputational damage.” Another risk involves bypassing content filters. Threat actors can generate factually incorrect or inappropriate information to spread disinformation or conduct phishing campaigns. These LLM-related risks are often made more complex by existing data silos within enterprises, and an increasingly fragmented cybersecurity landscape fraught with sophisticated cyberattacks. To highlight how the meteoric rise of LLMs raises critical safety concerns, Fujitsu used its LLM vulnerability scanner to conduct the most extensive security analysis of DeepSeek’s flagship model, DeepSeek-R1 7B, to date, surpassing others in scope and attack coverage. Through comprehensive testing and over 7,000 simulated attacks, the scanner revealed that DeepSeek-R1 is the worst performing LLM model against malware and phishing attacks, with a 100% attack success rate in bypassing the model’s safeguards. A proactive security approach to GenAI While 45% of CIOs surveyed by IDC emphasized security as their primary concern for GenAI initiatives, it’s concerning that only 22.4% of organizations felt adequately prepared for AI-ready trust and security. With so much at stake, enterprises must prioritize a security-first AI strategy, such that security and governance are implemented at the same pace as innovation. “In this new era of AI, CIOs must adopt a long-term vision for innovation. True AI isn’t a race to the finish line; it’s a strategic tool that should be leveraged via a security-first lens,” says Tong. “This will not only allow businesses to drive business continuity, but also avoid pitfalls that may erode public trust.” Tools such as the Fujitsu LLM vulnerability scanner help enterprises to adopt a proactive cybersecurity stance. Popular LLM models—from DeepSeek-R1 to Llama 3.1—were analyzed, and more than 7,700 attacks were conducted, spanning 25 distinct attack types. By leveraging a database that aggregates state-of-the-art information, including LLM attack scenarios and vulnerabilities published by academia and the AI security community, as well as Fujitsu’s proprietary techniques and the latest attack techniques, the scanner provided unprecedented visibility into an LLM’s attack surface. This empowers enterprises to take a risk-based approach to AI adoption, prioritizing security without stifling innovation. Understanding and mitigating the security risks of LLMs is central to reaping the full benefits of GenAI. As GenAI continues to evolve, a security-first approach that can provide comprehensive visibility into the threat landscape will be paramount to business success. Find out how Fujitsu can help maximize the business value of your GenAI strategy today. source

Is your GenAI adoption outpacing your ability to secure it? Read More »

Maximizing manufacturing excellence with AI and edge computing

In today’s hyper-competitive manufacturing landscape, operational efficiency, product quality, and agility are no longer aspirations, they are imperatives. The traditional production model, once characterized by manual quality control, reactive maintenance, and rigid processes, is rapidly giving way to intelligent, data-driven operations powered by artificial intelligence (AI). According to McKinsey’s 2024 Global Survey, 78% of organizations have adopted AI in at least one business function, up from 72% in early 2024 and 55% a year earlier. The demand for AI in manufacturing stems from a perfect storm of challenges: increasing product complexity, the pressure for mass customization, global supply chain disruptions, and the critical need for sustainable operations. From predictive maintenance that prevents costly downtime to computer vision systems that automate quality control, AI is enabling manufacturers to meet these challenges head-on. Deloitte’s 2023 Manufacturing Industry Outlook  highlighted that AI-driven quality control alone can reduce defect rates by up to 90%, saving millions in rework and recalls while ensuring consistent product excellence. The integration challenge: From innovation to operationalization Yet, while the promise of AI is compelling, realizing its full potential is not without hurdles. Many manufacturers struggle with the complexity of integrating AI into legacy systems, ensuring consistent deployment across multiple sites, and processing vast amounts of data without introducing latency. Moreover, centralized AI models, often running in the cloud, cannot always deliver the real-time responsiveness required on the production floor. This gap between AI innovation and operational execution has been a critical barrier to scaling AI across manufacturing operations. Bridging the gap between AI and edge computing The solution lies in an AI + Edge computing paradigm. By distributing AI capabilities to the edge—closer to where data is generated—manufacturers can achieve real-time insights and instant decision-making. For example, BMW has implemented edge-deployed AI to perform surface inspection on painted vehicles, detecting even microscopic blemishes that human inspectors might overlook. Similarly, GE Aviation uses AI at the edge to monitor jet engine component production, enabling predictive quality control and minimizing scrap rates. However, deploying AI at the edge and integrating it seamlessly across cloud and on-premises environments requires a robust, flexible, and open infrastructure. This is where Red Hat’s enterprise solutions play a pivotal role. Red Hat OpenShift AI provides a powerful platform for developing, training, and deploying AI models at scale. Manufacturers can build models in the cloud and seamlessly distribute them to edge environments or factory floors. This hybrid approach ensures that AI insights are available where they are most valuable—whether in centralized analytics hubs or embedded directly into production equipment. Complementing OpenShift AI, Red Hat Device Edge brings AI inferencing directly to the shop floor, enabling ultra-low-latency decision-making critical for tasks like real-time quality inspection and adaptive process control. Red Hat Ansible Automation Platform further streamlines operations by automating model deployment, updates, rollbacks and ensuring high-availability. This automation reduces operational overhead and accelerates time-to-value for AI investments. By combining AI’s predictive and analytical power with Red Hat’s open hybrid cloud and edge solutions, manufacturers can not only improve quality and efficiency but also build resilient, adaptive operations ready for the demands of Industry 4.0 and beyond. The future of manufacturing belongs to those who can harness data-driven intelligence at every level of their operations—and with Red Hat, that future is within reach. Connect with us today to learn how we can help you in your journey to smart manufacturing. source

Maximizing manufacturing excellence with AI and edge computing Read More »

Cofidis offers a lifetime of support through digitalization

Currently, the company uses various market technologies that cover several key aspects: storage, processing, analysis, security, and legal compliance, all of which are managed by a team dedicated to data governance and digital ethics, since, for Cofidis, it’s essential to guarantee good practices when consuming data. “An example is our semantic layer project, which consists of applying a logical abstraction between physical data and the tools that users use to perform analysis,” says Almeida. “The objective is to translate technical data into well-defined and understandable concepts at the business level, which also gives us greater control when managing access to information.”  Digital onboarding  One of Cofidis’s major recent initiatives has been its digital onboarding project, which aims to digitize the entire onboarding process for new and existing customers, creating a more immediate and frictionless experience. “We designed this project with Innolab, our innovation hub, using several tools such as KYC for identity validation, open banking for creditworthiness assessment, digital signatures, instant payments, and 100% automated management,” says Almeida. “We also focus on analyzing interactions with our customers, and thanks to speech analytics technology, we can analyze thousands of conversations per day, better understand our customers’ real needs, and, best of all, we’ve done it in a way that respects privacy and empowers our own teams.” To carry out this project, the company worked with several providers such as Tink, Logalty, IDnow, and Experian. And the integration and development were carried out in-house with the support of the Crédit Mutuel Alliance Fédérale group.  source

Cofidis offers a lifetime of support through digitalization Read More »

Navigating ransomware attacks while proactively managing cyber risks

Well, it’s, you know, I it’s one of those things that it’s an interesting trend that I’ve been saying, you know, if you think about, you know, four or five years ago, a lot of CSOs were not responsible for cloud security, for instance, or they may not have responsibility over identity. That could have been the CIOs organization. So more and more CISOs are having to accept more risk. And, you know, not to oversimplify it, but a CISOs job is to accept risk and reduce risk. So, you know, we’ve seen this explosion in asset classes that CISOs are now responsible for. Ot being a great example where it was, you know, the manufacturing plant manager that was responsible for the OT security environment, and not the CISO. So now the CISOs are having all this responsibility. I think that you know the first thing, and not to oversimplify it, is having a dashboard that has your all of your inventory, far too often. I think organizations go, I’m only going to focus on the critical assets for my organization. But there, that’s again, a miss, because if you look at the ransomware example that we used far too often, organizations didn’t know that they had, you know, a Citrix Server externally facing, misconfigured and had known vulnerabilities on it, and the attackers based. Went breach that and move laterally within the organization. So have that inventory, analyze that inventory to understand what misconfigurations, what risks. I like to call them toxic combinations, you know, this asset plus these cohort of users that have access that’s bad, something that you need to go and focus on. I also think that building a baseline of how you want to go through and communicate with the other teams. It’s one thing for the security team, they probably have a very good technical understanding. But how are you going to communicate with the operations team? How are you going to drive that efficiency? Because unfortunately, the operations team, they’re about uptime and availability and patching that goes against it, they’re going to have to take downtime. So and then ultimately, how do you drive and explain this to the organization? How do you report on this? How do you make show how effective it is having a business conversation? You know, trend that I’m seeing is more and more CISOs are reporting directly to the board. They’re reporting into the CEO in some cases, and we’re seeing CISOs actually join boards now, because cybersecurity is no longer an insurance policy, it’s a critical business process. So I think for the CISOs out there, I think that’s kind of the baseline fundamentals. It’s the other side of the coin of incident response is exposure management. It’s how do you do proactive security? How do you understand what your risk is. How do you mitigate that risk? And I think that’s the give and take. Estelle Quek Yeah. source

Navigating ransomware attacks while proactively managing cyber risks Read More »

SAP’s Rise rebrand conceals cost changes

SAP’s spokesperson acknowledged that there have been adjustments to the FUE for some functions, but said that, on the whole, the company had moved things into less expensive tiers, acknowledging only that “We did upgrade three authorization entries in our ruleset, which is uncommon. In most cases, we downgrade these authorizations, which provides access to a broader group of people. In this instance, these authorizations were previously incorrectly classified. However, this does not mean a direct increase or decrease in cost as these are narrow by feature access, and users have multiple authorizations.” In the face of such changes, Bickley said, “SAP customers must factor into their TCO active and ongoing monitoring of SAP’s license requirements and audit against their current environment in order to stay compliant.” On premises off the menu SAP has been discouraging the purchase of on-premises licenses, Bickley said, sometimes even telling customers that they’re no longer available, in its attempts to push customers onto S/4HANA in the cloud. source

SAP’s Rise rebrand conceals cost changes Read More »

Cloud in the age of AI: Six things to consider

Recently I spoke to a senior leader of a Europe-based global consumer goods business, and they said that they had always believed that fundamentally cloud would be more efficient than the alternatives. That in a globalized economy pooling resources on infrastructure would always work out to be cheaper in the end. And that they no longer believe that!  In part that is due to plain old capitalism: cloud vendors make money. So once customers are dependent on them, costs increase. If cloud was always more efficient, Fin Ops wouldn’t be a thing.   To be fair to cloud providers: the other issue is the increasing demands of their customers, focused on data, analytics, and AI.   source

Cloud in the age of AI: Six things to consider Read More »