Modernize enterprise PC fleets to ensure AI ambitions deliver business success

Investing in the right kind of PCs helps businesses innovate, become more secure, and adapt to new ways of working. The right PCs also give enterprises a foundation from which to implement crucial AI-driven improvements that can boost productivity and growth. According to Michael Nordquist, Corporate Vice President of Client Product Marketing at AMD, IT leaders are currently weighing the merits of upgrading to AI PCs. “With Windows 10’s end of support approaching, IT decision-makers are caught between needing to invest and managing budget constraints,” he says. “Despite this, the buzz around AI is pushing IT leaders to consider how AI-enabled PCs can enhance productivity and manageability. AMD’s focus is on building AI PCs that both deliver classical reliability and offer a foundation for emerging AI functionalities.”

A new era of security and manageability

As enterprises upgrade to AI PCs, endpoint security will remain a priority. Nordquist, speaking in a CIO webcast, commented: “Since 2020, AMD has worked closely with Microsoft and OEM partners to better protect devices from firmware vulnerabilities and browser-based attacks. Today, we’re working together to embed future-ready AI capabilities like Microsoft Pluton into AI PCs for robust, hardware-based security.” The effective management of enterprise PC fleets is crucial for security and productivity in a globally distributed, hybrid workforce. In the age of AI PCs, management processes will become increasingly automated and self-service; where IT does need to step in, teams will be able to do so remotely. “Thanks to advanced technologies like Microsoft Autopilot, IT teams can deploy PCs directly to employees’ homes with pre-loaded OS and automated system builds,” says Nordquist. “When challenges arise, AI PC processors can step into the breach. Our AMD Ryzen™ PRO processors, for example, come with ‘autoband’ technology that quickly restores systems to a safe state in the event of system issues. Processors like these can ensure seamless provisioning, deployment, and maintenance processes, making the transition to AI PCs smooth and cost-effective.”

Powering the future of work

Once deployed, AI PCs will help enable new ways of working within enterprises. According to Nordquist, one of the most important developments with AI PCs is the addition of Neural Processing Units (NPUs), which are custom-built to run AI capabilities. Although NPUs have already been used for basic applications like video conferencing transcription, they promise to do much more. “Microsoft Copilot+ PCs are ushering in the next wave of AI experiences, leveraging large language models to create complex presentations and videos with just a few keystrokes. What started as novelty in software development—using AI to generate or validate code—has quickly become essential. If you’re not integrating AI, you’re falling behind,” adds Nordquist. For IT leaders considering a move to AI PCs, weighing the underlying technology will be critical. Nordquist concludes: “AMD has engineered our AMD Ryzen™ AI PRO systems to simplify IT challenges, integrating top-tier performance, security, manageability, and long battery life. With AMD Ryzen™ AI PRO processors, IT professionals can rest easy knowing that they are empowering workers with a tool that excels in every aspect necessary for modern computing. By empowering end users with high-performing PCs that also prioritize creative capabilities, we enable them to deliver the most value in their roles.” Watch the whole webcast below. source


Responsible and Secure AI: The Key to AI-Fueled Growth

As Asia/Pacific businesses accelerate their digital transformation journeys, artificial intelligence (AI) is becoming a core innovation enabler. From identity and access management (IAM) to risk-based trust frameworks, AI is reshaping the cybersecurity landscape. However, as AI adoption grows, so do concerns around security, trust, and compliance. According to IDC’s Asia/Pacific Security Study, 2024, 76.5% of enterprises in the region say they are not confident in their organization’s ability to detect and respond to AI-powered attacks. Most are concerned about AI-driven vulnerability scanning by attackers, the rapid exploitation of zero-day vulnerabilities, increasingly personalized and effective social engineering attacks that leverage AI, and AI-powered ransomware attacks with dynamic negotiation and extortion tactics. The threat from AI-driven attack vectors is greater in verticals that handle sensitive and confidential information, such as Banking and Financial Services (BFSI) and Healthcare, as well as in critical infrastructure sectors like energy, transportation, and telecommunications, where disruptions can have widespread consequences.

With cybersecurity emerging as a central theme across the region, AI-fueled business models must address key challenges:

How can organizations ensure AI systems are secure, transparent, and resilient?
How should regulatory frameworks evolve to accommodate AI-driven cybersecurity?
What steps can businesses take to balance AI innovation with trust?
How can enterprises implement a robust AI governance framework to manage security, compliance, and ethical risks effectively?

To navigate these challenges, enterprises must address three key areas that impact the secure and responsible deployment of AI:

1. Integration and Cost Barriers to AI Security Adoption

Despite its potential, AI-driven security automation struggles with integration issues and high costs.
According to IDC FutureScape: Worldwide Security and Trust 2025 Predictions – Asia/Pacific (Excluding Japan) (APJ) Implications, by 2027 only 25% of consumer-facing companies in the region will use AI-powered IAM for personalized, secure user experiences, held back by persistent difficulties with process integration and cost concerns. This creates a trust gap in AI authentication and identity protection, particularly in consumer-facing sectors like retail, banking, and e-commerce.

2. Regulatory Fragmentation Complicates Compliance

Asia/Pacific’s inconsistent AI regulations make compliance difficult. While Singapore and Australia lead in AI governance, India and ASEAN nations lag behind, creating inconsistencies in how businesses implement AI security solutions. China has implemented strict AI laws focused on security assessments and algorithmic transparency, while Japan follows a more flexible, self-regulatory approach emphasizing Responsible AI. One of the most critical shifts in cybersecurity will be the introduction of AI Bills of Materials (AI BoM). By 2028, 70% of data products will include a Data BoM detailing how data was collected and processed and how consent was obtained. This evidentiary trail will be essential for demonstrating compliance and ensuring AI systems do not operate as black boxes. At the same time, AI governance is becoming mandatory rather than exploratory. Some nations – such as Singapore, Australia, India, and Japan – have shown leadership by initiating AI governance frameworks early, setting the stage for responsible and secure AI adoption across the region. These countries are proactively developing policies and frameworks to ensure AI-driven technologies align with security, compliance, and ethical standards.

3. Unchecked GenAI Adoption Creates Security and Compliance Risks

The rapid expansion of GenAI poses major security and governance challenges for enterprises.
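The Data BoM idea above — an evidentiary trail of how data was collected, processed, and consented to — can be sketched as a simple record type. This is a minimal illustration only; the field names are assumptions, not any published BoM standard.

```python
# Illustrative sketch of a minimal "Data Bill of Materials" entry: a
# machine-readable record of provenance, consent, and processing history.
# Field names are hypothetical assumptions, not a formal specification.
from dataclasses import dataclass, field, asdict

@dataclass
class DataBoMEntry:
    dataset: str
    source: str                  # where the data was collected from
    collected_on: str            # ISO date of collection
    consent_basis: str           # e.g. "opt-in", "contract"
    processing_steps: list = field(default_factory=list)

bom = DataBoMEntry(
    dataset="customer_support_chats",
    source="in-app chat widget",
    collected_on="2024-11-03",
    consent_basis="opt-in",
    processing_steps=["PII redaction", "deduplication", "tokenization"],
)
# The record serializes to a plain dict, suitable for an audit log.
print(asdict(bom)["consent_basis"])  # prints: opt-in
```

A real deployment would attach an entry like this to every dataset that feeds a model, so auditors can trace how consent was obtained at each step.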
IDC predicts that in 2025, 20% of organizations in APJ will move from proof-of-concept (POC) to production in specific GenAI use cases without a comprehensive risk-based assessment of their trust capabilities, potentially creating a cybersecurity house-of-cards scenario. Key risks include data leaks, bias in AI models, and regulatory penalties as governments tighten AI security laws. Without proactive governance, enterprises risk non-compliance, reputational damage, and increased exposure to AI-driven threats. To mitigate these risks and build trust in AI-powered security, organizations must establish a robust governance framework that ensures transparency, compliance, and operational resilience. This is where IDC’s Unified AI Governance Model comes into play.

IDC’s Unified AI Governance Model

IDC’s Unified AI Governance Model is a strategic framework that balances innovation with risk management, ensuring AI deployment aligns with compliance, security, transparency, and ethical standards. It is built on four key pillars: transparency and explainability, security and resilience, compliance and privacy protection, and human-in-the-loop (HITL) governance. IDC defines AI governance as a system of laws, policies, frameworks, practices, and processes that enable organizations to manage AI risks while driving business value. Governance must be integrated into strategy rather than treated as a reactive measure. Without it, enterprises face operational inefficiencies, legal exposure, and reputational risks. The model also acknowledges external influences, such as regional regulations, ethical considerations, and societal expectations, which vary significantly across APJ markets. Ensuring that AI governance adapts to these external factors is critical for sustainable and trusted AI adoption.
IDC’s Unified AI Governance Model provides a structured approach to managing AI security and trust by addressing key questions such as:

Who is using what data, and where is it stored?
How is personally identifiable information (PII) protected through encryption or anonymization?
Are AI models being tested against risk controls and compliance requirements?
Is there a risk assessment framework for GenAI deployments?

Path Forward: Cybersecurity and AI Governance for Asia/Pacific Businesses

To foster a secure AI-driven future, businesses must take a proactive approach to cybersecurity and AI governance. Key steps include:

Embedding an AI Bill of Materials (BoM) in Cybersecurity Practices: Developing transparent AI security frameworks that document data provenance, consent mechanisms, and compliance checkpoints.

Investing in AI-Powered Identity and Access Management (IAM) with Risk-Based Authentication: Incorporating adaptive authentication, behavioral analytics, and risk scoring to strengthen trust in AI-driven security systems, instead of relying solely on AI-driven IAM.

Conducting Comprehensive Risk Assessments for GenAI Deployments: Establishing robust governance policies to prevent unintended risks when moving from GenAI POC to production.

Integrating Autonomous AI for IT Operations: By 2027, GenAI and analytics deployments for IT operations use cases will increase team productivity by 15%, generating $1.5 billion in economic and business value. Automated IT service desk responses, anomaly detection, and predictive resource capacity planning will be critical for AI-enabled security frameworks.

Collaborating with Regional Regulatory Bodies: Actively participating in shaping AI governance
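The risk-based authentication step described above can be sketched in a few lines: weighted risk signals are combined into a score, and the score drives an adaptive decision (allow, step up to MFA, or deny). All signal names, weights, and thresholds here are illustrative assumptions, not any vendor's actual scoring model.

```python
# Minimal sketch of risk-based (adaptive) authentication.
# Weights and thresholds are hypothetical assumptions for illustration.

def risk_score(signals: dict) -> float:
    """Combine boolean risk signals into a weighted 0-1 score."""
    weights = {
        "new_device": 0.35,        # login from an unrecognized device
        "unusual_location": 0.30,  # geolocation far from the user's norm
        "odd_hours": 0.15,         # activity outside usual working hours
        "failed_attempts": 0.20,   # recent failed login attempts
    }
    return sum(weights[name] for name, fired in signals.items() if fired)

def auth_decision(signals: dict) -> str:
    """Map the risk score to an adaptive-authentication action."""
    score = risk_score(signals)
    if score < 0.3:
        return "allow"         # low risk: password/SSO is enough
    if score < 0.6:
        return "step_up_mfa"   # medium risk: require a second factor
    return "deny"              # high risk: block and alert

print(auth_decision({"new_device": True, "unusual_location": False,
                     "odd_hours": False, "failed_attempts": False}))
# prints: step_up_mfa
```

In production, the hard-coded weights would be replaced by the behavioral-analytics model the article alludes to, but the decision flow stays the same.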


D-Wave Claims to Achieve ‘Quantum Supremacy’

According to a peer-reviewed paper published on March 12 in the journal Science, D-Wave claims to have performed a materials simulation that surpasses the capabilities of even the most advanced classical supercomputers. Specifically, D-Wave said its annealing quantum computer solved a difficult materials simulation problem that would take millions of years on the Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory. D-Wave states in its related press release that the achievement is “the world’s first and only demonstration of quantum computational supremacy on a useful problem.” However, some researchers have challenged this assertion, insisting that traditional computing methods may already achieve comparable results. Moreover, some experts take issue with the term “quantum supremacy,” advocating instead for alternatives like “quantum advantage” or “quantum utility.”

Simulation in approximately 20 minutes compared to a million years

According to D-Wave’s paper, its annealing quantum computer, the Advantage2 prototype, successfully simulated the properties of complex magnetic materials used in smartphones, medical devices, sensors, and motors. The company reported the simulation was completed in less than 20 minutes. Frontier, the most powerful supercomputer at Oak Ridge National Laboratory, would require close to a million years of nonstop computing to achieve the same results. Some physicists have argued that more optimized classical algorithms may significantly reduce this projected gap. D-Wave’s paper was based on research performed last year and did not take more recent classical computing methods into account, research scientist Miles Stoudenmire told The Wall Street Journal. Researcher Dries Sels at New York University said the same calculations can be performed on conventional computers using a field of mathematics called tensor networks.
An issue of quantum semantics

The marketing term “quantum supremacy” remains contentious in the scientific community. Many researchers have recently embraced alternative terms such as “quantum utility” or “quantum advantage” to describe breakthroughs with the next-gen technology. D-Wave insists its usage of the term “quantum supremacy” is accurate. “We’re solving an important problem, and it’s in a regime that is totally intractable for leading classical methods. That’s why we call it quantum supremacy,” Andrew King, a senior distinguished scientist with D-Wave, told The Wall Street Journal.

Quantum computing’s competitive landscape

Numerous companies, including Amazon, are currently developing their own quantum computers and associated chips. In 2024, Google introduced Willow, a quantum chip that succeeds its earlier Sycamore processor. More recently, the tech giant unveiled quantum-safe digital signatures for Google Cloud’s Key Management Service. Google first claimed quantum supremacy back in 2019: per Google’s announcement, Sycamore performed a task in 200 seconds that would have taken a supercomputer approximately 10,000 years to complete. Engineers have been working on quantum computing for decades. While much of the early work was strictly theoretical, we’re now starting to see the culmination of their efforts in systems like Advantage, Sycamore, and others. TechnologyAdvice staff writer Megan Crouse contributed to this article. source


GOP Rep Says Lawmakers Ready For FCC Subsidy Fix

By Christopher Cole (March 28, 2025, 8:12 PM EDT) — Congress will be prepared to reform the country’s telecom subsidy programs for low-income and rural consumers if the U.S. Supreme Court decides they must be overhauled, according to a key House Republican…. source


AI Backlash and Sabotage Inside Companies: ‘It’s Tearing Us Apart,’ Employees Say

A growing number of workers are pushing back against corporate AI strategies, with 31% of employees — and 41% of Gen Z — admitting to refusing to use AI tools or outputs, according to a new study. The desire to sabotage their company’s AI strategy stems from widespread fears of job displacement and dissatisfaction with company-provided AI tools, according to a survey by Writer that polled 1,600 C-suite executives and employees. Frustrations are so high that 35% are footing the bill themselves for the generative AI tools they prefer to use at work.

Internal tensions undermine AI adoption

The report also highlights power struggles, poor internal alignment, and friction between IT and business leaders over how GenAI should be deployed. About two out of three executives said GenAI adoption has created internal tension and divisiveness, with 42% warning that it is “tearing their company apart.” Despite optimism surrounding GenAI’s potential, 72% of C-suite respondents said their company has faced at least one major hurdle during adoption. Meanwhile, 71% reported that AI applications “are being created in silos,” disconnected from broader strategy and collaboration. Further, an overwhelming 95% of the C-suite admitted their company needs to improve its approach to AI integration.

Leaders and employees see AI progress differently

The survey revealed a sharp divide between how executives and employees perceive AI implementation. Only 45% of employees believe their company has been very successful with GenAI in the past year, compared with 75% of executives who believe the rollout has gone well. Still, momentum around GenAI continues to build. The report found 88% of employees and 97% of executives have personally benefitted from using GenAI, and both groups are applying it across a range of use cases.
“It’s not enthusiasm that’s stalling adoption,” observed May Habib, CEO and cofounder of Writer. “It’s the lack of a real strategy, the right tools to empower teams, and a partner that can actually make it work at scale.”

Employees driving solutions from within

Encouragingly, 77% of employees using AI are “AI champions” — individuals helping lead adoption efforts within their organizations. Nearly all (98%) AI champions have either contributed to developing AI tools at work or expressed a desire to do so. “The future of AI in the enterprise depends on leaders taking a collaborative and inclusive approach,” Writer’s chief strategy officer Kevin Chung told TechRepublic. “By nurturing these champions and fostering a culture of innovation, organizations can navigate the challenges and fully harness the transformative power of generative AI.” source


Oracle’s AI Agent Studio is free for Fusion Cloud customers

This means enterprises can be reasonably assured that their agents are appropriately vetted for security, privacy, and performance, which should give them more confidence in adopting agentic technologies, said Arnal Dayaratna, research vice president at IDC. Another advantage is that the Studio comes at no additional cost. Futurum’s Hinchcliffe said the pricing strategy is an aggressive play against rivals that charge for agents, such as Salesforce’s Agentforce, which can charge $2 per transaction. However, he pointed out that the actual value of the new offering will depend on how open-ended the agent orchestration is. “If Oracle’s approach remains tightly constrained to Fusion Applications, enterprises looking for broader AI autonomy and orchestration may still turn to AWS, Google, or Microsoft,” he said. source


We Must Allow Judges To Use Their Independent Judgment

By John Siffert (March 21, 2025, 4:04 PM EDT) — The current political divide has hijacked the conversation about the importance of an independent judiciary and whether judges should only call balls and strikes, as advocated by Chief Justice John Roberts at his confirmation hearing in 2005…. source


Want to Build AI Agents? Anthropic and Databricks Can Help

Generative AI companies Anthropic and Databricks have teamed up to sell AI tools for business, aiming to generate $100 million over five years, according to The Wall Street Journal. The partnership is an attempt to alleviate the “tremendous pressure” both companies are under to perform relative to their valuations as the AI bubble threatens to burst.

What will the Anthropic/Databricks partnership look like for business customers?

Anthropic and Databricks sales teams will promote and sell one another’s products, according to The Wall Street Journal. The companies will target large corporate customers who want to build their own AI agents: generative AI tools that chain together different tasks to arrive, seemingly autonomously, at the result the user expressed in natural language. For example, an AI agent asked to “order a pizza delivered to the office” might require access to the user’s mobile meal delivery app and place the order. “Databricks has built up that trust with 10,000 customers,” Kate Jensen, Anthropic’s head of sales and partnerships, told The Wall Street Journal. “Anthropic is still relatively new, but continuing to grow extremely quickly.” According to Databricks and Anthropic, customers have requested better integration between the two companies’ tools. Companies that use Databricks’ cloud data storage platform will be able to access Anthropic’s advanced generative AI, Claude, within it. The two companies already have a relationship in place, with mutual customers like Block (owner of payment platform Square) using both Databricks and Anthropic’s Claude behind the scenes on Block’s own AI agent. Coding is among the tasks Square employees use Claude on Databricks for.
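The task-chaining behavior described above can be sketched as a minimal agent loop: a planner picks the next tool call, executes it, folds the result back into state, and repeats until the goal is met. This is a hedged illustration only; the tool names are hypothetical, and the hard-coded planner stands in for the LLM that a real agent would use.

```python
# Minimal sketch of an AI-agent loop for the pizza-ordering example.
# The "planner" is a hard-coded stand-in for an LLM; the tools are stubs
# for real APIs. All names here are illustrative assumptions.

def find_restaurant(query):
    return "Mario's Pizza"  # stub: a real agent would call a delivery API

def place_order(restaurant, item, address):
    return f"Ordered {item} from {restaurant} to {address}"  # stub

TOOLS = {"find_restaurant": find_restaurant, "place_order": place_order}

def plan(goal, state):
    """Stand-in for the LLM planner: choose the next tool call, or stop."""
    if "restaurant" not in state:
        return ("find_restaurant", {"query": goal})
    if "confirmation" not in state:
        return ("place_order", {"restaurant": state["restaurant"],
                                "item": "pizza", "address": "the office"})
    return None  # goal satisfied

def run_agent(goal):
    """Chain tool calls until the planner decides the goal is met."""
    state = {}
    while (step := plan(goal, state)) is not None:
        tool, args = step
        result = TOOLS[tool](**args)
        key = "restaurant" if tool == "find_restaurant" else "confirmation"
        state[key] = result
    return state["confirmation"]

print(run_agent("order a pizza delivered to the office"))
# prints: Ordered pizza from Mario's Pizza to the office
```

The key design point is that each tool's output becomes input to the next planning step, which is what lets a natural-language goal drive a chain of actions.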
Blockers to generative AI adoption

Generative AI companies have struggled to generate revenue despite the buzz around investing in or using the technology; Anthropic and Databricks are betting on agentic AI being no different. Agentic AI still has a reputation for being inaccurate or inefficient; the Databricks research team is aiming for 95% accuracy in its AI agents, The Wall Street Journal said. Agentic AI is the current buzzword in AI for business, with OpenAI adding speech to AI agents and Microsoft developing agents for specific cybersecurity tasks. Generative AI still has a trustworthiness problem, and prompt writing is an art unto itself that can take time away from core business functions. source


New Australia CIO appointments

Congratulations to these ‘movers and shakers’ recently hired or promoted into a new chief information officer, senior IT, or board role in Australia.

John Granger joins Healthscope

Healthscope has appointed John Granger as Chief Information Officer. After an eight-year stint as CIO at Cleanaway Waste Management Limited, Granger brings deep technology leadership to Australia’s largest private hospital network.

Dan Chesterman appointed at Teachers Mutual Bank

Teachers Mutual Bank has appointed Dan Chesterman as Chief Information Officer. His background includes technology leadership roles at ASX, Commonwealth Bank’s CommSec & Private Bank, and Accenture, with extensive experience in financial technology consulting. source
