Apple Passwords App Vulnerability Exposed Users for Months

Apple’s Passwords app, designed to enhance security for iOS users, ironically left them vulnerable to phishing attacks for nearly three months. Security researchers recently revealed that the flaw exposed sensitive information, raising concerns about cybersecurity risks — even with trusted software.

The vulnerability explained

Researchers at Mysk identified the flaw, which stemmed from the app’s use of unencrypted HTTP connections when retrieving website icons and opening password reset pages. This security lapse allowed attackers to intercept data and redirect users to malicious phishing sites.

Mysk’s team discovered that the Passwords app contacted over 130 websites using unprotected HTTP traffic. This made it possible for hackers on the same Wi-Fi network — such as in cafes, airports, or hotels — to manipulate the requests and trick users into visiting fraudulent websites designed to steal login credentials.

Apple’s response and fix

Upon discovering the vulnerability in September 2024, Mysk promptly reported the issue to Apple. The tech giant addressed the flaw in the iOS 18.2 update, released in December 2024, which moved the app to encrypted HTTPS connections. However, the vulnerability was only publicly disclosed in March 2025, underscoring the importance of timely updates and robust cybersecurity measures.

What users should keep in mind

To protect their data, iPhone users are strongly encouraged to update their devices to the latest version of iOS. Updating to iOS 18.2 or later ensures the Passwords app operates over encrypted connections, significantly reducing phishing risks. Additionally, users should remain vigilant when accessing public Wi-Fi networks and consider using a reputable VPN for added protection.

Key lessons for users and developers

The incident highlights the critical need for secure data transmission protocols, especially in applications managing sensitive information. While Apple resolved the issue once it was reported, the case serves as a reminder that even the most trusted software can have vulnerabilities. By keeping software up to date and adopting best security practices, users can better protect themselves against emerging threats in an increasingly digital world. source
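To make the flaw described above concrete, here is a minimal, hypothetical sketch in Python (not Apple’s actual code) of the defensive pattern the fix represents: upgrading icon fetches from HTTP to HTTPS so that traffic on a shared network cannot be read or rewritten. The URL and helper name are illustrative only.

```python
from urllib.parse import urlparse, urlunparse

import requests  # pip install requests


def fetch_site_icon(icon_url: str) -> bytes:
    """Fetch a website icon, upgrading plain HTTP to HTTPS first.

    Over unencrypted HTTP, anyone on the same Wi-Fi network (a cafe,
    airport, or hotel) can read or rewrite the response, including
    injecting a redirect to a phishing page. Forcing HTTPS and refusing
    to follow redirects closes that window.
    """
    parts = urlparse(icon_url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")  # never fetch over plaintext
    response = requests.get(
        urlunparse(parts),
        timeout=10,
        allow_redirects=False,  # don't silently follow injected redirects
    )
    response.raise_for_status()
    return response.content


# Illustrative call: the http:// URL is upgraded before any bytes are sent.
icon_bytes = fetch_site_icon("http://example.com/favicon.ico")
```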


5 ways for CIOs to deal with AI proliferation

Whether you’re in an SMB or a large enterprise, as a CIO you’ve likely been inundated with AI apps, tools, agents, platforms, and frameworks from all angles. This isn’t surprising given that gen AI investments alone are expected to grow some 60% over the next three years, according to the Boston Consulting Group, accounting, on average, for 7.6% of IT budgets by 2027.

Dan Priest, US chief AI officer at PwC, says AI proliferation is a reality that’s only going to accelerate, with 79% of CIOs planning to leverage gen AI to help transform their businesses. But only 40% feel fully prepared to manage and integrate these technologies, as PwC’s recent Pulse survey suggests. “Each team and team member will create new agents to perform tasks, autonomously and intelligently,” he says. “At the same time, people are experimenting. They’re using approved tools and exploring others too, increasing the risk of leaking data. CIOs will need to activate multi-layer solutions to manage the complexities coming their way.”

So as a CIO, how should you rein in the chaos and implement a suitable level of governance and control? Of course, you want to enable the entire workforce to innovate responsibly with AI and maximize productivity by utilizing these tools. But you also need to manage spend, reduce duplication of effort, ensure interoperability where necessary, promote standards and reuse, reduce risk, maintain security and privacy, and manage all the key attributes that instill trust in AI. source


Nvidia debuts Llama Nemotron open reasoning models in a bid to advance agentic AI

Nvidia is getting into the open source reasoning model market. At the Nvidia GTC event today, the AI giant made a series of hardware and software announcements. Buried amidst the big silicon announcements, the company announced a new set of open source Llama Nemotron reasoning models to help accelerate agentic AI workloads. The new models are an extension of the Nvidia Nemotron models first announced in January at the Consumer Electronics Show (CES).

The new Llama Nemotron reasoning models are in part a response to the dramatic rise of reasoning models in 2025. Nvidia (and its stock price) were rocked to the core earlier this year when DeepSeek R1 came out, offering the promise of an open source reasoning model and superior performance. The Llama Nemotron family is competitive with DeepSeek, offering business-ready AI reasoning models for advanced agents.

“Agents are autonomous software systems designed to reason, plan, act and critique their work,” Kari Briski, vice president of generative AI software product management at Nvidia, said during a GTC pre-briefing with press. “Just like humans, agents need to understand context to break down complex requests, understand the user’s intent, and adapt in real time.”

What’s inside Llama Nemotron for agentic AI

As the name implies, Llama Nemotron is based on Meta’s open source Llama models. With Llama as the foundation, Briski said, Nvidia algorithmically pruned the model to optimize compute requirements while maintaining accuracy. Nvidia also applied sophisticated post-training techniques using synthetic data. The training involved 360,000 H100 inference hours and 45,000 human annotation hours to enhance reasoning capabilities. All that training results in models with exceptional reasoning capabilities across key benchmarks for math, tool calling, instruction following, and conversational tasks, according to Nvidia.

The Llama Nemotron family has three different models

The family includes three models targeting different deployment scenarios:

Nemotron Nano: Optimized for edge and smaller deployments while maintaining high reasoning accuracy.
Nemotron Super: Balanced for optimal throughput and accuracy on single data center GPUs.
Nemotron Ultra: Designed for maximum “agentic accuracy” in multi-GPU data center environments.

For availability, Nano and Super are available now as NIM microservices and can be downloaded from AI.NVIDIA.com; Ultra is coming soon.

Hybrid reasoning helps to advance agentic AI workloads

One of the key features of Nvidia Llama Nemotron is the ability to toggle reasoning on or off, an emerging capability in the AI market. Anthropic’s Claude 3.7 has somewhat similar functionality, though that model is closed and proprietary. In the open source space, IBM Granite 3.2 also has a reasoning toggle, which IBM refers to as conditional reasoning. The promise of hybrid or conditional reasoning is that it allows systems to bypass computationally expensive reasoning steps for simple queries. In a demonstration, Nvidia showed how the model could engage complex reasoning when solving a combinatorial problem but switch to direct response mode for simple factual queries.
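Because NIM microservices expose an OpenAI-compatible API, toggling Nemotron’s reasoning mode from application code is reportedly done via a system prompt. The sketch below is a hedged illustration: the endpoint URL, model identifier, and the “detailed thinking on/off” prompt convention are assumptions that should be verified against current Nvidia documentation before use.

```python
from openai import OpenAI  # pip install openai; NIM endpoints speak the same protocol

# Assumed endpoint and credentials -- verify against your NIM deployment.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="YOUR_NVIDIA_API_KEY",
)


def ask(question: str, reasoning: bool) -> str:
    """Send one prompt, toggling Nemotron's reasoning mode via the system prompt."""
    mode = "detailed thinking on" if reasoning else "detailed thinking off"
    completion = client.chat.completions.create(
        model="nvidia/llama-3.3-nemotron-super-49b-v1",  # assumed model id
        messages=[
            {"role": "system", "content": mode},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content


# Cheap direct answer for a simple factual lookup ...
print(ask("What year was the transistor invented?", reasoning=False))
# ... full reasoning for a combinatorial problem, as in Nvidia's demo.
print(ask("How many ways can 8 rooks sit on a chessboard with none attacking?", reasoning=True))
```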
Nvidia Agent AI-Q blueprint provides an enterprise integration layer

Recognizing that models alone aren’t sufficient for enterprise deployment, Nvidia also announced the Agent AI-Q blueprint, an open-source framework for connecting AI agents to enterprise systems and data sources. “AI-Q is a new blueprint that enables agents to query multiple data types—text, images, video—and leverage external tools like web search and other agents,” Briski said. “For teams of connected agents, the blueprint provides observability and transparency into agent activity, allowing developers to improve the system over time.” The AI-Q blueprint is set to become available in April.

Why this matters for enterprise AI adoption

For enterprises considering advanced AI agent deployments, Nvidia’s announcements address several key challenges. The open nature of the Llama Nemotron models allows businesses to deploy reasoning-capable AI within their own infrastructure. That matters because it addresses the data sovereignty and privacy concerns that have limited adoption of cloud-only solutions. By packaging the new models as NIMs, Nvidia is also making it easier for organizations to deploy and manage them, whether on-premises or in the cloud.

The hybrid, conditional reasoning approach is also worth noting, as it gives organizations another option for this type of emerging capability. Hybrid reasoning allows enterprises to optimize for either thoroughness or speed, saving on latency and compute for simpler tasks while still enabling complex reasoning when needed. As enterprise AI moves beyond simple applications to more complex reasoning tasks, Nvidia’s combined offering of efficient reasoning models and integration frameworks positions companies to deploy more sophisticated AI agents that can handle multi-step logical problems while maintaining deployment flexibility and cost efficiency. source


Latest Microsoft and NVIDIA Collaboration is a 'Significant Leap Forward'

Microsoft and NVIDIA are deepening their collaboration to advance artificial intelligence, unveiling new technologies designed to enhance AI performance and scalability. Their latest efforts focus on integrating NVIDIA’s cutting-edge Blackwell architecture with Microsoft Azure, expanding AI capabilities for businesses and developers. From high-performance virtual machines to AI deployment tools, the partnership aims to accelerate innovation across industries, shaping the future of enterprise AI.

Integrating NVIDIA Blackwell with Azure AI

“Our partnership with Azure and the introduction of the NVIDIA Blackwell platform represent a significant leap forward. The NVIDIA GB200 NVL72, with its unparalleled performance and connectivity, tackles the most complex AI workloads, enabling businesses to innovate faster and more securely,” said Ian Buck, vice president of Hyperscale and HPC at NVIDIA.

Microsoft recently announced the launch of the Azure ND GB200 V6, a new virtual machine (VM) series that incorporates NVIDIA technology. The VM series includes NVIDIA Quantum InfiniBand networking and the NVIDIA GB200 NVL72, a liquid-cooled supercomputer built specifically for high-performance AI workloads. The new VM series joins the current line of Microsoft VMs that already use NVIDIA GPUs, specifically the H100 and H200. Microsoft also plans to release a line of VMs powered by NVIDIA’s Blackwell Ultra GPUs, which are expected to launch later this year.

Adding NVIDIA NIM to Azure AI Foundry

Microsoft and NVIDIA have also contributed to the advancement of agentic AI. As part of this effort, the two companies have introduced NVIDIA Inference Microservices, or NVIDIA NIM, within Azure AI Foundry. NVIDIA NIM consists of pre-packaged, optimized containers designed to streamline the deployment of generative AI tools and AI agents.

Epic, a major player in electronic health records, already plans to utilize NVIDIA NIM and Azure AI Foundry to their fullest extent. The company aims to enhance patient care, improve clinician efficiency, and perform AI-driven research into new medical breakthroughs and processes. Microsoft and NVIDIA are also working to optimize the performance of various language models for Azure AI Foundry, including the recently optimized Meta Llama models, which are now available to developers already using Azure AI Foundry.

Accelerating AI innovations across the board

In addition to these developments, Microsoft and NVIDIA announced plans to help accelerate AI innovations for other companies. The partners have made several recent additions to the Azure marketplace, including NVIDIA Omniverse, NVIDIA Isaac Sim virtual workstations, and Omniverse Kit App Streaming. These additions, primarily aimed at AI developers, support the creation of robotics simulations, digital twins, and other AI-driven applications. source
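For developers, a model made available through Azure AI Foundry is typically consumed through a standard chat-completions endpoint. The following is a minimal sketch using the azure-ai-inference Python package; the endpoint, key, and deployment name are placeholders invented for illustration, not values from the announcement.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a model deployed via Azure AI Foundry.
client = ChatCompletionsClient(
    endpoint="https://YOUR-RESOURCE.services.ai.azure.com/models",
    credential=AzureKeyCredential("YOUR_API_KEY"),
)

response = client.complete(
    model="Meta-Llama-3.1-405B-Instruct",  # assumed deployment name
    messages=[
        SystemMessage(content="You are a concise clinical-documentation assistant."),
        UserMessage(content="Summarize this visit note in two sentences: ..."),
    ],
)
print(response.choices[0].message.content)
```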


Celonis sues SAP for anti-competitive data access practices

Celonis accuses SAP of damaging its business

SAP has introduced new rules and restrictions with the goal of destroying Celonis’ business and thus harming SAP’s ERP customers, Celonis argues. Customers, Celonis contends, are more or less trapped in this system because switching ERP providers is generally associated with high effort and expense. SAP is ultimately hindering competition, Celonis says, to gain an advantage for its own process mining solution, which it gained through its acquisition of Signavio.

Celonis was launched in 2011. The following year, the Munich-based company participated in SAP’s Startup Focus program — the starting point of a long-term business relationship between SAP and Celonis. The startup, which ranked 13th on the Forbes Cloud 100 list in August 2024 with a valuation of $13 billion, closely integrated its process mining software with the SAP universe. This involved considerable costs, the lawsuit states. But SAP and its customers benefited: with the help of Celonis tools, it was possible to monitor, analyze, and ultimately optimize processes using data from SAP systems.

When SAP acquired German process mining provider Signavio in 2021, SAP said it aimed to pair Signavio’s integrated, cloud-native process suite with SAP’s Business Process Intelligence to help SAP customers adapt their business processes end-to-end. The strategy would incorporate business process analysis, design, and improvement, as well as process change management. source


AI Startup CoreWeave Launches Plans For $2.5B IPO

By Tom Zanki (March 20, 2025, 11:51 AM EDT) — Artificial intelligence-focused startup CoreWeave Inc. on Thursday set plans for an estimated $2.5 billion initial public offering, represented by Fenwick & West LLP and underwriters’ counsel Latham & Watkins LLP, likely launching the largest IPO of 2025…. source


How Agentic AI is Changing the Face of Marketing

Just when businesses in Asia/Pacific thought they were getting to grips with artificial intelligence-led disruptions in their industries, here comes something new called Agentic AI, with its ability to turn generative AI (GenAI) functionality and capabilities into actionable services. In practical terms, Agentic AI is more than just advanced chatbots; it is the next step in the evolution of AI, allowing AI agents to act with increased autonomy. A simple retail example: where previously the AI would recommend restaurants based on a user’s preferences, now it can book the restaurant and offer alternatives that match the user’s health and dietary needs (vegan, gluten-free, high protein, etc.).

From a marketing perspective, this progression in AI utilization has a drastic impact on how businesses promote their goods and services to potential customers. Previously, marketers would target their campaigns directly at customers, but now the shortlisting and decisions are made by the AI. How does the AI choose the “right” product/service? What changes do marketing teams need to make to incorporate Agentic AI?

Before we try to understand where we are going, let’s first come to grips with where we are and look at the current state of marketing in Asia and how marketers are currently using AI. Some of the key goals for marketing in the region today are:

Develop a more unified and enterprise-wide marketing strategy: Provide consistent marketing experiences across various touchpoints.
Personalization as a differentiator: Make marketing campaigns relevant to segments, micro-segments, or even individuals based on their personal preferences and characteristics.
Streamline processes through automation.

Asian businesses, particularly those in China, have been relying on AI in all of these areas, and some of the more common uses of AI are:

Automated content creation (including visual content such as videos and images) for campaigns.
Predictive analytics to improve overall campaign effectiveness and performance.
Data analysis and insights into customer behavior trends, gauging public sentiment, and achieving a better understanding of the customer journey.

More detailed insights into how Chinese firms are using AI in marketing can be found in IDC PeerScape: C2G Peer Insights to Augment Customer Intelligence Using Generative AI.

By leveraging AI, marketers can optimize their campaigns for targeted messaging to a wide range of customer segments across numerous online channels, while controlling distribution frequency to minimize advertising fatigue in a cost-effective way. Digital advertising, for example, remains the most time-consuming process for marketers. Actions such as amending and formatting communications for different digital mediums, as well as determining which segments to target, take a significant amount of time and prolong the time required to launch a campaign – all of which can now be done by GenAI and Agentic AI.

With a better understanding of how marketers in the region are using AI, let’s now look at Agentic AI and what the future holds for marketing.

Agentic AI Will Impact the Marketing Workforce Composition

Continued use of AI, especially in campaign cost optimization, will impact marketing workforce composition. Currently, marketers are using AI to take over mundane and repetitive tasks (e.g., formatting images across different social media platforms), which will eventually transition to AI taking over full-time marketing roles, allowing humans to focus on more strategic initiatives. In the IDC FutureScape: Worldwide Chief Marketing Officer 2025 Predictions — Asia/Pacific (Excluding Japan) Implications, IDC lists the most urgent trends that marketing leaders must pay attention to. One of our predictions on the impact of Agentic AI on workforces states that by 2028, 1 out of 5 marketing roles or functions will be held by an AI worker, shifting human expertise to driving strategy, creativity, and ethics, and to managing a blended human and AI workforce.

Increased Focus on AI Governance by Marketing, Not IT

With Agentic AI taking on more responsibilities, there will be a need to supervise it and ensure proper performance. This monitoring should be done by the marketing team themselves, who know when something has gone wrong, as opposed to IT, who monitor based on code alerts. The marketing team will therefore need to be trained on AI use: training makes them comfortable with AI, helps them understand how it can assist rather than replace them, and teaches them the processes and systems for fine-tuning AI performance or troubleshooting errors when they occur.

Marketing Workflows Will Change

The use of Agentic AI by consumers will force a change in the marketing mindset, creating new processes and areas of focus that will force businesses and marketers to rethink how they operate. As an example, let’s address the question raised earlier in this blog: how does the AI choose the “right” product/service?

When the Internet and search engines grew in popularity, search engine optimization was created to improve the quality and quantity of website traffic. Companies had to rethink how they set up their websites to ensure they were “visible” in rankings to the search engines. In addition, many paid to have their websites listed at the top of searches. IDC predicts that businesses in Asia will have to work with AI companies and start spending on Large Language Model (LLM) optimization in the same manner, so that businesses and their products and services are visible to Agentic AI systems. By 2029, companies will spend up to 3x more on LLM optimization than search optimization to influence GenAI systems and raise the priority and ranking of their brands.

The AI road ahead no doubt has more bumps and turns, and marketers must be willing to meet these changes and challenges head on. Here are a few things marketers can do to prepare for Agentic AI:

Build a portfolio of AI case studies and use cases to determine what works best for you. By matching thought leadership initiatives with AI-infused case studies, marketers will be able to develop campaigns that competitively differentiate their companies and products.
Work with the technology team to ensure marketing
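Returning to the question posed earlier (how does the AI choose the “right” product or service?), here is a deliberately simplified, hypothetical sketch of the selection step an agent might perform. All names and fields are invented for illustration; the point is that listings exposed with clean, structured attributes are the ones an agent can filter and rank, which is what LLM optimization is meant to ensure.

```python
from dataclasses import dataclass, field


@dataclass
class Restaurant:
    """Structured listing data -- the kind of machine-readable detail an
    agent needs before it can shortlist a business on a user's behalf."""
    name: str
    rating: float                     # 0.0 - 5.0
    tags: set[str] = field(default_factory=set)


def shortlist(candidates: list[Restaurant],
              required_tags: set[str],
              top_n: int = 3) -> list[Restaurant]:
    """Drop anything failing a hard dietary constraint, then rank the rest.

    A listing missing its tags is invisible to this filter -- the same way
    a business without structured, LLM-readable data is invisible to an agent.
    """
    eligible = [r for r in candidates if required_tags <= r.tags]
    return sorted(eligible, key=lambda r: r.rating, reverse=True)[:top_n]


options = [
    Restaurant("Green Bowl", 4.6, {"vegan", "gluten-free"}),
    Restaurant("Steak Palace", 4.8, {"high-protein"}),
    Restaurant("Leaf & Grain", 4.2, {"vegan", "gluten-free", "high-protein"}),
]

for pick in shortlist(options, required_tags={"vegan", "gluten-free"}):
    print(pick.name, pick.rating)
```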


4. What is the best age to retire?

On average across 18 mostly middle-income countries surveyed, the ideal age for retirement is 57.9. Nigerians suggest the oldest ideal age for retirement (62.7, on average). Respondents in Ghana and Kenya also suggest a relatively high age (60.6 in each). Colombians give the youngest ideal age, saying 52.1 is the best age to retire. The ideal age is similarly low in Turkey (52.7).

By comparison, the real retirement age – defined here as the age people become eligible for age-related pensions – varies widely across countries. And in most nations surveyed, people say the best age to retire is younger than the age they are first eligible to receive these benefits. For example, Mexico has one of the highest retirement ages of the countries surveyed: 65 years old. But Mexican adults say the best age for retirement is more than eight years earlier, placing it at 56.6.

There are also countries where the ideal retirement age is older than the actual threshold for receiving benefits. Nigerians, for instance, suggest the oldest ideal age for retirement in the survey, though eligibility for benefits starts at age 50 in Nigeria. People in Ghana, India, Indonesia, Kenya, Sri Lanka and Thailand also say it is best to retire after the age of eligibility. Refer to Appendix A for actual ages of eligibility for retirement benefits in each country.

The ideal retirement age also varies within countries. For example, though 31% of Turks say the best age to retire is between 50 and 54, a similar share thinks it’s between 55 and 59. Roughly one-in-five say it’s best to retire before 50, while 14% say the best age is between 60 and 64. The preferred retirement age similarly varies in Nigeria, where the best age to retire is placed at 62.7. While about a third say the ideal age is between 60 and 64 (36%), substantial shares think it’s best to retire between 65 and 69 (21%) and at 70 or older (22%). Views in Thailand are more uniform: a 59% majority say the best age to retire falls between 60 and 64, and roughly equal shares prefer older or younger age ranges.

Views by age and gender

In 11 of 18 countries, adults ages 50 and older prefer an older age for retirement than younger adults (those under 35) do. Views of the ideal retirement age also vary between men and women in some countries. In seven – Argentina, Bangladesh, Chile, Colombia, Ghana, Mexico and Tunisia – men suggest a higher age than women. The opposite is true in India, Kenya, the Philippines and South Africa, where women suggest a higher age than men. Notably, responses do not vary by education or income level in most countries surveyed. source


Prioritizing data integration to discover the untapped potential of data

Many companies today are struggling to manage their data, overwhelmed by data volume, velocity, and variety. On top of that, they are storing data in IT environments that are increasingly complex, including in the cloud and on mainframes, sometimes simultaneously, all while needing to ensure proper security and compliance.

All of this complexity creates a challenge: how do companies ensure their data landscape is ready for the future? Particularly when it comes to new and emerging opportunities with AI and analytics, an ill-equipped data environment could be leaving vast amounts of potential by the wayside, to say nothing of the risk of errors or negligence that result from limited visibility, which can affect compliance. The longer it takes for companies to address their data challenges, the worse these problems become, and the further behind they might fall in the push for future innovations. One of the most important pathways to better data management, and to maximizing its value, lies in discoverability.

Discovering data across a hybrid infrastructure

Harnessing the full potential of data in a hybrid environment starts with a thorough discovery process. Teams must first identify the information that’s crucial to the business and any associated regulatory requirements. Taking inventory of the data landscape helps when planning future modernizations or technology changes.

The next step is to map all dependencies between the company’s applications and data. This is especially useful for organizations that haven’t gone through an evaluation process like this in a long time. Dependency mapping can uncover where companies are generating incorrect, incomplete, or unnecessary data that only detracts from sound decision-making. It can also be helpful to conduct a root cause analysis to identify why data quality may be slipping in certain areas.

Once an organization has a better understanding of how data is produced and stored, an impact analysis sheds light on the operational effect of actions that can be taken to address weak points in data management. Before moving forward with any data migration or modernization project, executives should know what the business stands to gain as a result. This is also a good opportunity to build a data lineage capability if one doesn’t already exist.

Should the company make changes to improve data flows and infrastructure, there will be additional opportunities to optimize the data footprint. At this point, it’s easier to minimize data bloat, right-size infrastructure, and eliminate low-quality data that doesn’t add business value. This is also the right time to implement data monitoring to keep track of changes in data structure and data flows over time.

Of course, getting a handle on hybrid environments that span transactional, distributed, and cloud systems is far easier said than done. The steps described here can take months or even years to execute, depending on the data needs of the business in question. As a rough illustration, the dependency-mapping and impact-analysis steps can be prototyped with a simple graph, as sketched below.
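Here is a small, hypothetical sketch of that idea using the networkx graph library: applications point at the datasets they consume, which turns impact analysis and orphaned-data detection into simple queries. The system names are invented for illustration.

```python
import networkx as nx  # pip install networkx

# Edges point from a consumer (application/report) to the dataset it reads.
deps = nx.DiGraph()
deps.add_edge("billing_app", "customer_master")   # hypothetical systems
deps.add_edge("billing_app", "invoice_history")
deps.add_edge("quarterly_report", "invoice_history")
deps.add_node("legacy_extract_v1")                # dataset with no consumers

datasets = {"customer_master", "invoice_history", "legacy_extract_v1"}

# Impact analysis: every consumer that breaks if this dataset changes.
affected = {n for n in deps.nodes if "invoice_history" in nx.descendants(deps, n)}
print("Changing invoice_history affects:", affected)

# Candidates for elimination: datasets no application or report consumes.
orphans = [d for d in datasets if deps.in_degree(d) == 0]
print("Unconsumed datasets:", orphans)
```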
Invest in purpose-built data integration

Putting an emphasis on solutions that ease the data integration process can help uncover critical answers to many lingering data questions an organization might have. What’s needed is a purpose-built set of capabilities, with the backing of a trusted, knowledgeable partner. Solution sets like those provided in Rocket® DataEdge help accomplish exactly that. With solutions that provide greater visibility, an organization can quickly gain a comprehensive view of its data landscape and create a single source of truth for users that covers the company’s entire data footprint across distributed, cloud, and mainframe environments. The most sophisticated solutions enable all types of users to build compelling visualizations and reports. As a result, more people can understand and make decisions from higher-quality information.

Regardless of industry, businesses are contending with exponential growth of data in their IT systems. And as technologies like AI dominate, maximizing the value of that data without compromising security or falling victim to rising operational costs has become the mandate that will define success for the future. Learn more about how Rocket Software can help you make the most of your data and fuel innovation. source
