InformationWeek

What's New (And Worrisome) in Quantum Security?

A growing number of security experts are warning that quantum computing might soon break existing cryptographic systems, triggering a security crisis that could devastate businesses and governments worldwide. Researchers and other security experts are drawing a direct parallel between the pending quantum security danger and the notorious Y2K threat, only on a much larger scale. In essence, the most basic and widely used encryption mechanisms, such as factorization-based cryptography, are highly vulnerable to quantum computing’s processing power.

Avalanche

The major security issue is the ability of quantum computers to rapidly break cryptographic algorithms, which are used in multiple security architectures and products, says Doug Saylors, partner and cybersecurity lead with global technology research and advisory firm ISG. He notes in an email interview that modern asymmetric cryptography is easily broken by quantum computing; as a result, encryption that relies on it becomes worthless. “Imagine every private conversation, every strategic plan, every forecast or product under development, all out in the open for public consumption, from competitors to suppliers to partners,” Saylors states. The reputational damage alone, he believes, “could be bankruptcy-inducing.”

Cryptographers have demonstrated that quantum computers can break asymmetric encryption algorithms, such as RSA and ECC, which are widely used for secure communication and digital signatures, says Archana Ramamoorthy, senior director, regulated and trusted cloud, at Google Cloud. This vulnerability enables attacks such as “store now, decrypt later.” “As a result, the longevity of hardware firmware signatures generated by similar asymmetric encryption algorithms is also threatened,” she warns in an online interview.
“In contrast, symmetric cryptography appears less vulnerable to quantum attacks.”

Everybody Knows

Quantum security’s biggest challenge is identifying exactly when a solution will be needed, says Tom Patterson, quantum security global lead at business advisory firm Accenture, in an email interview. “Unlike Y2K, when we knew exactly when it would happen but we didn’t know what would happen, with Q-Day we know exactly what will happen and what to do about it, but we’re not sure if it’s needed in a day or a decade.” The challenge for IT and security leaders today, he adds, “is where to slot quantum security into their five-year plan, and how best to get started today.”

Responding to the threat quantum computing poses to current asymmetric algorithms, leading organizations, including the US National Institute of Standards and Technology (NIST), are now working with researchers worldwide to create and test cryptographic algorithms that can resist the power of quantum computers. “The aim is to standardize these quantum-resistant algorithms and complete a thorough cryptanalysis,” Ramamoorthy says.

According to Ramamoorthy, NIST has already endorsed three quantum-safe algorithms, based on extensive research and analysis by the global cryptographic community: FIPS 203, FIPS 204, and FIPS 205. “These algorithms address key exchange for secure communications and digital signatures used in various cryptographic operations,” she says, adding that NIST is also considering additional algorithms to further bolster the security of digital certificates. “This ongoing work is crucial to safeguarding the privacy and security of our digital lives and ensuring that our communications remain confidential and protected.”

The Future

The solution side of quantum security is advancing even faster than quantum computers themselves, Patterson observes.
“We now have the first of many new NIST encryption standards that aren’t susceptible to a quantum computing decryption attack, which is great progress and great news.” He adds that “crypto agility,” a data encryption practice that ensures a rapid response to a cryptographic threat, is gaining traction, helping enterprises actively adopt new NIST standards as they appear.

There are also advances in using quantum information science itself to defend against quantum computing attacks. Quantum key distribution (QKD), a secure communication method whose cryptographic protocol incorporates components of quantum mechanics, is seeing new research, development, and early deployments. When perfected, it will provide a way to exchange keys anywhere without fear of compromise, so the future looks far from hopeless.

Closing Time

Quantum security is a good-news story in that solutions already exist to mitigate the critical new risk, Patterson says. He believes that upgrading old and vulnerable encryption methods early will help enterprises save time and money while lowering current and future risks. “While there’s a cost to do the upgrade, running on the latest secure encryption is no more expensive than running old vulnerable encryption, so it’s good from a budgeting perspective as well.”

The light at the end of the tunnel is that quantum computing can itself be used to defend against quantum attacks, and researchers are already beginning to catalog presumed attack vectors and design countermeasures, Saylors says. “We’re still three to five years out from the potential for an attack, but quantum-based countermeasures could prevent the attack from spreading to other organizations.”


How IT Leaders Can Weather Geopolitical Unrest

Over the past couple of years, geopolitical tensions have been rising in various parts of the world. In the best case, the Trump administration may help deescalate tensions and violence. Alternatively, the US could find itself in a war that isn’t limited to foreign soil. As a result, CIOs, CTOs, and CISOs need to be prepared.

“Political instability causes risks to global supply chains, data security and operational resilience,” says Steve Tcherchian, CISO and chief product officer at security solution provider XYPRO. “Just in the past year, we’ve seen disruptions in logistics, fluctuating regulatory environments, and cyber threats escalating due to geopolitical tensions. This creates blind spots in business continuity planning, especially for organizations reliant on international vendors, partners or regional operations.”

For example, ensuring data compliance in multiple jurisdictions becomes exponentially more complex during politically volatile times.

“My biggest concerns are supply chain vulnerabilities, cybersecurity threats from state-sponsored threat actors targeting critical infrastructure, and navigating changes in trade restrictions, compliance, tariffs and more,” says Tcherchian. “These are headaches to manage in normal times, let alone during geopolitical tensions.”

His advice is to be proactive: understand the scope of assets, data, and infrastructure, and build operations that can adapt to unpredictable conditions. This includes having alternate suppliers, transport routes, and workforce contingencies.

“The intersection of cyber and political risks throws cybersecurity in the forefront. Don’t take cyber resilience lightly,” says Tcherchian.
“Implement a zero-trust security strategy, deploy real-time threat detection, maintain compliance with global security frameworks, [and] educate your teams.”

Change Management and Monte Carlo Simulations Are Wise

Bob Hutchins, an organizational psychologist and author of Our Digital Soul: Collective Anxiety, Media Trauma, and Path Toward Recovery, says geopolitical instability is a challenge that many of his clients face.

“[P]olitical instability has created significant unpredictability in supply chains, workforce stability and market access,” says Hutchins. “I’ve seen businesses freeze expansion plans, lose key talent due to geopolitical unrest and grapple with regulatory shifts that feel like moving targets. These disruptions don’t just slow growth — they can create a pervasive sense of unease that trickles down to employees, further compounding challenges.”

For example, one international company recently had to reconfigure its entire supply chain after new tariffs disrupted its primary import route. The ripple effect included increased costs, delayed production, and strained relationships with long-time partners.

“My biggest concern is the emotional toll instability takes on leadership teams and employees. Anxiety about the future can lead to decision paralysis or reactive strategies that lack long-term foresight,” says Hutchins. “This heightened tension can erode trust within organizations, making it harder to maintain cohesion during already difficult times. Another concern I’ve seen is the rise of ‘decision fatigue’ among leaders. Navigating constant upheaval drains energy and focus, which can lead to poor choices or a lack of innovation.”

Shock from the unexpected can cause organizational leaders to scramble and make snap decisions that make sense in the short term but backfire in the long term, much as the pandemic did to organizations.
“One of the most effective strategies I’ve seen is fostering adaptability. Businesses that treat change as a constant and prepare for multiple scenarios tend to fare better,” says Hutchins.

To help his clients prepare, Hutchins prioritizes scenario planning, thinking through the “what if” scenarios so leaders can anticipate and mitigate risks. He also underscores the need for clear, honest communication with employees about the challenges the organization is facing; having a plan in place helps build trust. Finally, he recommends investing in mental health so employees and leaders can manage stress and perform better, even under pressure.

“Start by focusing on what you can control. While you obviously can’t stabilize geopolitics, you can create stability within your organization by being transparent, flexible, and supportive,” says Hutchins. “Be proactive in scenario planning and ensure you have redundancies in place for critical operations. Leaders who listen and adapt based on their team’s feedback are better equipped to make thoughtful, forward-thinking decisions.”

XYPRO’s Tcherchian also stresses the need for extreme agility.

“Constant political instability may be our new reality,” says Tcherchian. “Foster a culture of resilience within your company, partners and vendors. Be agile and adaptable. Don’t treat instability as an obstacle, but rather as an opportunity to build a more adaptive, innovative and resilient organization.”
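The scenario planning Hutchins describes, and the Monte Carlo simulations named in the heading above, can be made concrete with a small simulation: sample many plausible years of disruption and read risk off the resulting cost distribution. A minimal sketch in Python; the probabilities and dollar figures are hypothetical placeholders, not data from the article:

```python
import random

def simulate_year(rng):
    """One simulated year of supply-chain cost overruns, in $M (all figures hypothetical)."""
    cost = 0.0
    if rng.random() < 0.30:   # assumed 30% chance a new tariff forces rerouting
        cost += 2.0           # assumed $2M rerouting cost
    if rng.random() < 0.15:   # assumed 15% chance a key supplier fails
        cost += 5.0           # assumed $5M to qualify an alternate supplier
    return cost

def monte_carlo(trials=10_000, seed=7):
    rng = random.Random(seed)  # fixed seed for a reproducible run
    outcomes = sorted(simulate_year(rng) for _ in range(trials))
    mean = sum(outcomes) / trials
    p95 = outcomes[int(trials * 0.95)]  # 95th-percentile "bad year"
    return mean, p95

mean, p95 = monte_carlo()
print(f"expected overrun ${mean:.2f}M, 95th-percentile year ${p95:.2f}M")
```

The point is less the numbers than the shape of the exercise: the mean tells leaders what to budget, while the 95th percentile tells them what the contingency plan must survive.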


Exploring the Positive Impacts of AI for Social Equity

Artificial intelligence has become a defining force of the 21st century, sparking debates about its role in shaping the future. While sometimes portrayed as a harbinger of dystopian automation, AI, when leveraged appropriately, can be a catalyst for profound, positive change.

AI’s ability to deliver a positive impact is not just a concept shared at tech shows or espoused by non-governmental organizations. The technology is already actively reshaping industries and addressing some of the world’s most pressing challenges.

As the global water crisis threatens nearly two billion people with absolute scarcity by 2025, AI is proving to be a key player in smart water management. By deploying advanced data-driven solutions, AI is optimizing how we manage water resources, identifying innovative approaches to desalination, reducing environmental impacts by minimizing overflows, and ensuring that water utilities achieve maximum returns on infrastructure investments by optimizing maintenance and operations for improved longevity.

In the telecommunications industry, AI is boosting network efficiency and informing how operators can expand access to underserved populations. For instance, one developing country leveraged AI to bring mobile network coverage to 95% of its population while saving $200 million in CapEx compared to a non-AI network planning approach.

This latter example shows how AI can be a vital contributor to bridging the digital divide. The scenario above, achieved on a national scale, expanded broadband to rural areas much like the United States is looking to improve broadband penetration through the BEAD program. This altruistic yet practical example demonstrates the power of AI to fuel economic development and enhance access to vital services like education and healthcare. And it’s not just theoretical; the results are already being felt.
This is the impact of AI at its best: transforming technological innovation into tangible societal progress.

Amid the rapid pace of AI innovation, many companies, governments, and researchers have focused on technical possibilities rather than the positive realities of deploying AI at scale.

AI holds immense potential to drive social equity and inclusion. Consider the water management scenario above. In regions facing severe water scarcity, AI has optimized resource management and reduced pollution, potentially saving millions of lives and improving the quality of life for vulnerable communities.

In the broadband example, AI has helped bring education, telehealth, and employment services to underserved populations, acting as a great equalizer for many communities.

Yet AI’s ability to benefit society depends on the humans using it. AI, on its own, is neither unethical nor capitalistic. The key to tapping AI’s power to generate positive impact lies in practitioners focusing on society’s biggest challenges, identifying how AI can play a role in solving them, and implementing a robust governance framework to carefully monitor the project and ensure it stays on an ethical, “greater good” track.

Having worked in AI and data science for a decade, we often encounter projects that we choose not to pursue. The power inherent in AI solutions compels us to look beyond the question of “Can we do this?” to a discussion of whether we should. AI can be deployed in many areas, and with great effect, so we prioritize projects that have a clear opportunity to benefit society.

The path forward demands a concerted effort from companies, particularly those with the resources and influence, to lead by example. It also requires AI partners who share the vision of using AI for initiatives that deliver real, positive impact.
In the end, the true measure of AI for social good won’t be what AI can do, but how it helps build a future where technology enhances and equalizes the human experience. The choices we make today, whether in deploying AI for water conservation or expanding digital access, will define AI’s trajectory in shaping that future.


How to Persuade an AI-Reluctant Board to Embrace Critical Change

As an IT leader, you’re no stranger to helping executives decipher and understand groundbreaking technology. The process usually takes persistence, careful abstraction, and a stockpile of success stories to make a persuasive business case. With luck, you eventually persuade the board of the value of your next significant IT initiative. But selling the board on AI implementation is another challenge altogether.

It’s not surprising that many boards are undecided about AI. A recent Deloitte study on AI governance found that board members rarely get involved with AI:

14% discuss AI at every meeting
25% discuss AI twice a year
16% discuss AI once a year
45% never discuss AI at all

Only 2% of respondents considered board members highly knowledgeable or experienced in AI. These circumstances present a serious hurdle as IT teams not only try to implement AI solutions but also strive to build the appropriate guardrails into the AI strategy.

Helping the board understand the power of black sky thinking can counteract some of their reservations about pursuing AI. Here’s what you need to know:

Black Sky Thinking Offers a New Approach to Innovation

Artificial intelligence is taking enterprises to a place where no man has gone before. Even though the market is starting to define AI norms, establish regulations, determine the technology’s shortcomings, and pinpoint when we need a human in the loop, we’re collectively flying through unfamiliar skies. As a result, IT leaders need to persuade the board of directors to embrace a more transformative way of solving problems. Enter black sky thinking.

The black sky thinking concept emerged during the 1960s space race and was popularized by author and futurist Rachel Armstrong at FutureFest in London in 2014, where she described the mentality necessary for humans to thrive on the cusp of unparalleled disruption.
In a follow-up essay, she explains the difference between blue sky thinking (where we are now) and black sky thinking this way:

Blue sky thinking is a “way of innovating by pushing at the limits of possibility in existing practices.”
Black sky thinking is more aspirational, “producing new kinds of future that enable us to move into uncharted realms with creative confidence.”

Rather than being constrained by current paradigms, boards and leaders need to envision the future they want and reverse engineer the steps necessary to reach the desired destination. It’s like planning for oceanic voyages or trips to the moon, but at a societal level.

You might be saying, “That’s great, but how does it apply to convincing the board to embrace AI use cases?” Before you can unlock the power of AI, you need board members to shift from blue sky to black sky thinking and embrace aspirational, limitless potential.

Leadership Is on Board with Black Sky Thinking: Now What?

Even when they’re on board with black sky thinking, most board members are going to focus on mitigating risk and maximizing profits for shareholders and the corporation. That’s a fine strategy if you’re trying to maintain stasis, but not if you’re attempting to break barriers and drive innovation. Your next goal is to convince the board that AI is an acceptable investment if they’re going to achieve their black-sky-driven goals.

Fortunately, you can increase the success of your petition by getting two key board members on your side: the CEO and general counsel.

The CEO is often an easier sell. KPMG surveys indicate 64% of CEOs treat AI as a top investment priority. Since your goals align, the CEO can be a co-champion, providing profiles on each board member and answering these key questions:

Which specific industry AI use cases will be the most persuasive?
Will AI examples from Fortune 500s carry the most weight?
Which biases will you need to combat in your argument?

When it comes to in-house counsel, you need to demonstrate a strong command of the legal and ethical implications of what you’re proposing. General counsel and CFOs, being naturally risk-averse, require you to come prepared with your:

Recognition of potential risks
Awareness of pending legal cases
Commitment to ethical implementation

With your CEO and general counsel as AI champions, your next step is to demonstrate the ROI the board needs to approve investment in AI. Showcasing results from programs that have already yielded measurable success can reduce barriers to an AI-forward mentality. For example, in healthcare, Kaiser Permanente has demonstrated how AI can save clinicians an hour of documentation daily, a powerful use case to highlight.

Ultimately, you’ll need to show them that the risk of doing nothing can be just as catastrophic as taking a big gamble on emerging technology. Tailored pitches to board members, both individually and collectively, can embolden them to step out of their comfort zones. This approach encourages the embrace of unconventional, or even unknown, solutions to complex challenges. When everyone embraces black sky thinking, no horizon is completely out of reach.
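A documentation-savings figure like Kaiser’s translates into ROI arithmetic a board can check for itself. A sketch with hypothetical staffing and cost assumptions; only the one-hour-per-day savings comes from such published accounts:

```python
# Hypothetical inputs -- only the hour-per-day savings mirrors the article's example.
clinicians = 500                # assumed headcount
workdays_per_year = 240         # assumed schedule
hourly_cost = 150.0             # assumed fully loaded cost, $/hour
ai_license_per_seat = 1_200.0   # assumed annual tool cost per clinician

hours_saved = clinicians * workdays_per_year * 1     # 1 hour/day per clinician
gross_value = hours_saved * hourly_cost              # dollar value of time recovered
tool_cost = clinicians * ai_license_per_seat         # annual spend on the AI tool
roi = (gross_value - tool_cost) / tool_cost          # return per dollar spent

print(f"{hours_saved:,} hours saved, worth ${gross_value:,.0f}, "
      f"against ${tool_cost:,.0f} in licenses (ROI {roi:.0%})")
```

Even with the assumptions halved, the ratio stays lopsided, which is the kind of sensitivity check a risk-averse CFO will want to see alongside the headline number.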


A New Reality for High Tech Companies: The As-a-Service Advantage

Global IT spending continues to rise, and enterprises are increasingly moving budgets away from hardware investments and toward services and software. This shift in spending directly influences the strategic, operational, and investment decisions of high-tech providers. To stay competitive, they must prioritize customer-centric strategies and align business goals with operations. To facilitate this, embracing as-a-service (AaS) models is vital to meet current demands and drive future growth. Yet most providers are not equipped to adequately address the demands associated with such an enterprise change.

The AaS Opportunity

Integration of AaS offerings will be crucial for companies’ reinvention strategies, and a well-executed AaS strategy benefits both tech providers and their customers. Recent Accenture research found that executives recognize the flexibility, stability, and growth opportunities that come with this shift. We found a shared optimism, with measurable confidence in generative AI’s (GenAI’s) potential to support business transformation. In fact, 97% of executives believe that GenAI can help their companies accelerate the shift toward models that focus on annual recurring revenue (ARR) and AaS offerings, and 85% think that AaS offerings will add to their revenue stream, but at the expense of their current products or services.

Worryingly, 75% agree that legacy technology hardware companies will no longer exist unless they begin acting more like software companies. That underscores the urgency for high-tech companies to reinvent themselves immediately, not plan for it somewhere down the line. The benefits are twofold: for the customer, this shift provides continued and superior value year over year; for providers, it has registered a positive impact on long-term revenue, customer retention, and overall customer lifetime value.
Addressing the Roadblocks to AaS Adoption

Despite the benefits of shifting to new models, which can bridge the gap between high-tech players and their customers, our findings point to a significant confidence split among respondents. Only 50% of executives believe they can meet their publicly stated ARR goals. Although high-tech companies have products and services ideally suited to a cloud-hosted, subscription-based AaS model that generates recurring revenue, many face internal challenges such as legacy systems and tech debt.

While there’s positivity around the opportunity AaS can bring, there’s also hesitation in the industry to adopt it, because many executives believe AaS models might cannibalize their existing offerings. They also believe the success of these models depends heavily on their sales force’s readiness to adopt new ways of selling. This outlook calls into question the preparedness of high-tech companies for such a transformation.

However, to maintain a competitive advantage, high-tech companies need to implement a customer-centric strategy. This is especially critical given that enterprise customers are increasingly redirecting their IT budgets to prioritize services and software, with a notable focus on software as a service (SaaS).

Embracing AaS to Navigate Customer Demand and Retention

The primary benefit of shifting to an AaS model is that it arms high-tech providers with the ability to address modern customer expectations, overcome the limitations of traditional product lifecycles, and build lasting, value-driven relationships.
Here are the key customer-centric strategies executives need to focus on to establish themselves as leaders in the AaS era:

Pivoting from transactional to relational customer engagement: With 98% of executives acknowledging that a company’s products and services define its customer relationships, products need to serve more than a single transaction in their lifecycle and should be part of an ongoing relationship with the customer base. Providers should therefore move from a product-focused to a subscription-based organization to create long-term revenue growth and higher customer retention.

Replacing legacy systems with modern IT: Modernizing IT infrastructure centers on creating a strong digital core consisting of cloud infrastructure, data, and AI. This will help companies stay ahead of competitors, expedite growth, and ensure operational security.

Shifting focus from product features to customer outcomes: Customer needs have evolved, and creating a dedicated customer success function will become critical for high-tech companies to enable AaS adoption. GenAI is an essential technology here, providing more detailed customer behavior analysis and helping identify new customer needs.

Recalibrating the sales force: Although most executives are confident in their sales force’s ability to shift from transaction-based to outcome-based compensation, training talent to accelerate adoption and preparing them to sell under the new model is critical to enabling AaS across the organization.

A rapidly changing digital landscape and evolving market dynamics require high-tech companies to become more agile. To that end, meeting their ARR goals will also require adopting an AaS model that prioritizes customer-centricity.
By leveraging these strategies, which rely on GenAI integration and Total Enterprise Reinvention, providers can make a decided effort to future-proof their companies and ensure sustainable growth.


Why Liberal Arts Grads Could Be the Best Programmers of the AI Era

In the world of programming, technical chops have always been the golden ticket. But over the years, some of the best programmers I’ve hired and worked with didn’t come from computer science backgrounds. They came from the humanities: music, philosophy, literature. These liberal arts grads brought a fresh perspective to programming, one that’s not always easy to find.

And as generative AI changes the game, this edge will only become more valuable. With AI handling the ABCs of programming, the line-by-line code writing, what’s left is the harder stuff: understanding problems deeply, communicating with stakeholders, and designing solutions that make sense in the real world.

Programming Isn’t Just About Code

Programming has never been purely about logic. Sure, you need what used to be called left-brain skill: the ability to translate technical specs into precise code. But a programmer’s real value comes when they push beyond that: recognizing patterns, solving complex problems, and seeing connections that others miss.

I first noticed this long ago. A talented colleague used to entertain a roomful of fellow IT workers by playing and singing Eric Clapton tunes. He was also a gifted coder, capable of recognizing patterns and solving problems in a different way.

Programming is a creative process, not unlike music. The notes matter, but so does knowing when to riff, how to structure, and how to build something that’s more than the sum of its parts. It’s no coincidence that the best developer I ever worked with, period, was a music major.

Liberal arts majors don’t come to work burdened with technical rigidity. They’ve spent their time dissecting ideas, making connections between concepts, and thinking critically. They’ve honed their writing and storytelling. Those skills are incredibly valuable, especially now.
GenAI Is Changing the Job

GenAI is fundamentally changing what it means to be a programmer. Tools like GitHub Copilot and Google’s Gemini can write code, debug simple issues, and automate many of the tasks that used to take up time. But AI doesn’t know how to ask the right questions, interpret user needs, or mold its output into something that makes sense in a broader context. That’s still a human job.

The role of the programmer is evolving, possibly splitting into two paths. There will always be a place for the hardcore programmer with a computer science background, someone who can make systems talk to one another. For others, call them citizen programmers, the work is no longer just about writing code line by line; it’s about knowing how to work with AI, guiding it, and knowing when and where human input is most needed.

This is where the liberal arts mindset comes in: being able to understand nuance, think critically about user experience, explain things simply, and piece together ideas in new ways.

Preparing for the AI Future

So, what should businesses do with this insight? First, it’s time to rethink talent and look for people who can adapt, think on their feet, and see the big picture. This outreach could start at the university level, with IT recruiters visiting leading liberal arts and music colleges in addition to the traditional technical schools on their lists.

We also need to recognize that the most valuable skills don’t always show up on a resume. How do you measure the ability to see a new solution that nobody else considered? Or the capacity to understand what a user is really asking for, even if they can’t quite articulate it? These are the skills that will matter most, even if they don’t fit neatly into a job description.

And once these new minds are hired, there’s a need to change how we approach development within our teams.
AI isn’t going to stop evolving, and neither can we. For the next few years, people will focus on learning how to use these new tools. But beyond that, it’ll be about figuring out how to create with them. And that’s going to require people who aren’t afraid to question how things have always been done.

All this change isn’t mere theory; it’s happening right now. Instead of looking for people who tick all the technical boxes, I’m looking for those who bring a creative mindset to the table. Hiring cannot be merely about pulling in more STEM graduates. It must be about building an environment where people with different backgrounds can work together to solve problems.

The future of tech work will be shaped by those who can use AI to amplify their creativity, their empathy, and their ability to solve tough problems. In my experience, that’s often the person with a background in the humanities.


China’s DeepSeek Dethrones ChatGPT as US Tech Stocks Plunge

DeepSeek, an underdog Chinese startup with a large language model boasting powerful performance at a fraction of competitors’ steep training costs, knocked OpenAI’s ChatGPT from its top position in the Apple App Store, a development that on Monday spooked investors enough to send US technology stocks plummeting.

DeepSeek claims its V3 large language model cost just $5.6 million to train, a fraction of ChatGPT’s reported training costs of more than $100 million. With performance comparable to OpenAI’s o1 model, a 95% cost cut may be especially attractive to cash-strapped companies looking to leverage generative AI (GenAI).

The development sparked a pre-market selloff for major AI players, including Nvidia, Microsoft, and Meta. Investors sold off around $1 trillion in tech stocks in pre-market trading alone, with the S&P 500 falling 2.3% and the Nasdaq dropping nearly 4% before the opening bell. Nvidia, the world’s leading supplier of AI chips, fell more than 11% in early trading. Chip companies Arm, Broadcom, and Micron Technology also suffered losses.

In a research note, Wedbush analyst Daniel Ives wrote: “Clearly tech stocks are under massive pressure led by Nvidia as Wall Street will view DeepSeek as a major perceived threat to US tech dominance and owning this AI revolution.”

Chirag Dekate, vice president and analyst at Gartner, thinks Wall Street may have overreacted to the DeepSeek news. In an interview with InformationWeek, Dekate says developments that reduce training costs will have an overall positive impact.

“It’s not just model innovation, it’s a system innovation,” Dekate says. “The DeepSeek innovations are real, and they matter … Lowering the cost structures is a net positive for the overall industry … DeepSeek enables a pathway to utilize resources more productively. Meta, Microsoft, Google, OpenAI, and other AI innovators can utilize those underlying capabilities even better.
That will likely define the future of GenAI.”

Why is DeepSeek a Potential Disrupter?

Businesses can take advantage of massive cost savings with DeepSeek’s application programming interface (API), which boasts pricing of $0.55 per million input tokens and $2.19 per million output tokens, a fraction of OpenAI’s API pricing of $15 per million input tokens and $60 per million output tokens. But those savings come at a price — experts say widespread adoption of a Chinese-made model could pose significant security risks. “From a security standpoint, you’re not going to want people putting data into servers that are hosted in China – same problem people had with TikTok,” says John Pettit, chief technology officer at IT consultancy Promevo. “You don’t know how data is being used and where it’s going to go. Even deploying it locally, you have to worry about supply chain injection.”

National security concerns in November prompted a bipartisan US congressional group to sound the alarm on China’s progress in AI. The US-China Economic and Security Review Commission called for a government-funded effort to quickly develop artificial general intelligence (AGI) before China. AGI, which promises language models that match or exceed human intelligence, could be harnessed as a powerful weapon and give the country that first develops the technology a huge geopolitical advantage. And DeepSeek CEO Liang Wenfeng has said that developing AGI is a top priority. “Our destination is AGI, which means we need to study new model structures to realize stronger model capability with limited resources,” Liang told Chinese publication ChinaTalk in a November interview. The US also alleges that the China-backed hacking group Volt Typhoon has worked to disrupt US critical infrastructure. 
“China remains the most active and persistent cyber threat to US government, private-sector and critical infrastructure efforts,” according to a blog post from the Cybersecurity & Infrastructure Security Agency (CISA), which warned of continuing state-sponsored security threats.

Despite the lower costs, Dekate says, enterprises are unlikely to rush into widespread use of DeepSeek because of potential legal liabilities. “Enterprises should always be careful about creating external facing products that are produced by open-source models,” Dekate says, noting that enterprise-grade AI models offer more guardrails, security, and higher-quality outputs. “There are going to be constraints [with open source models] that Gemini, OpenAI and other models do not have… you are going to get a more comprehensive answer on certain topics.”
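To put the quoted API prices in concrete terms, here is a minimal sketch comparing per-request costs. The per-million-token figures come from the article; the provider keys and the 2,000-input/500-output request size are illustrative assumptions, not anything from either vendor's documentation:

```python
# Hedged sketch: per-request cost under the per-million-token prices
# quoted in the article. Provider keys and request sizes are illustrative.
PRICING = {
    "deepseek": {"input": 0.55, "output": 2.19},     # USD per 1M tokens (quoted)
    "openai_o1": {"input": 15.00, "output": 60.00},  # USD per 1M tokens (quoted)
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request under the quoted pricing."""
    p = PRICING[provider]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A hypothetical workload: 2,000 input tokens and 500 output tokens per call.
deepseek = request_cost("deepseek", 2_000, 500)
openai = request_cost("openai_o1", 2_000, 500)
print(f"DeepSeek: ${deepseek:.6f}  OpenAI: ${openai:.6f}  "
      f"savings: {100 * (1 - deepseek / openai):.0f}%")
```

At those figures, the example request costs about $0.0022 on DeepSeek versus $0.06 on OpenAI’s o1 pricing — roughly the 95% cost gap described above.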


Why Every Employee Will Need to Use AI in 2025

Over the past year, we’ve seen organizations differ in their approaches to AI. Some have taken every opportunity to embed AI in their workflows; others have been more cautious, experimenting with limited proof-of-concept projects before committing to larger investments. But unlike past technology breakthroughs that were relevant only for specific employees, AI is a horizontal skill. Business leaders need to embrace this fact: Every single employee needs to become an AI employee. In 2025 and beyond, we will start to see the difference between companies that treat AI as a feature and those that view it as a transformation. Here’s how business and learning leaders should think about AI adoption throughout their organization.

Establishing an AI-Ready Skills Vision

For businesses to develop an AI-ready workforce, they need to establish a skills vision that sets out which employees require which level of competency. This vision shouldn’t be permanent; instead, it should evolve in response to technological advances and the needs of the business. There are two ways of structuring an AI skills vision. The first is simple: builders and users. A small portion — roughly 5% — of an organization’s workforce will require the expertise to build AI systems, products, evaluation tools, and language models. The remaining 95% simply need to know how to use AI to augment and accelerate their existing workflows.

For a more detailed framework, leaders can break down their workforce into four levels:

Center of excellence: Synonymous with “AI builders.” Think of data scientists, machine learning engineers, and software engineers. Their entire role is to design, build, and refine AI tools for internal or external clients.

“AI + X”: These are the subject matter experts whose roles can be reimagined with the addition of AI. 
Employees at this level could come from a wide range of backgrounds, from mechanical engineers to finance leaders. AI can help these employees build something truly meaningful in their specific area of expertise.

Fluency: At the fluency level, employees don’t need to know how to use AI tools or apply them to their own workflows. Instead, fluency is the level required for employees who interact with a technical counterpart. For example, a marketer selling a highly technical AI product needs a certain level of understanding to be able to accurately and effectively market that product.

Literacy: This is the basic level of AI skills needed for front-line workers and individual contributors. AI literacy could help these employees boost productivity depending on their role and responsibilities. But it’s equally important for these employees to be part of the broader cultural change. A company is in a better position to innovate when every employee has achieved a standard level of AI literacy.

Avoiding Dangerous Amateurs

For an organization to make the most of AI, it needs to know the precise skill levels of its employees and where they need to grow in the future. For example, a company’s solutions will only ever be as good as its best contributors. Organizations must do everything they can to maximize the abilities of their center-of-excellence employees, because they set the bar for the rest of the organization. At one software company, I saw leaders transfer an expert in clean coding to a team struggling with code quality; improvements were evident across the organization within weeks, demonstrating the contagious nature of expertise. But while experts should be placed at the forefront and driven to achieve more, organizations must be careful not to give the same opportunities to those who overstate their abilities. 
My friend and collaborator Fernando Lucini refers to these employees as “dangerous amateurs,” and they can slow down an organization’s progress with AI. As companies transition from prototyping to productizing an AI solution, they may realize that the experts they were counting on don’t have the skills needed to bring the product to market. Meanwhile, competitors with an accurate measure of employee skill levels will race ahead.

Create the Foundation for Innovation

For companies to innovate, they need to be able to adapt quickly to changing technologies and skills demands. In 2016, one of my most important tools was TensorFlow, a widely used machine learning framework. Less than a decade later, TensorFlow has evolved so much that I can no longer use it effectively without retraining and updating my skills. Highly technical skills perish quickly. Employees must establish a strong foundation in durable skills in order to master the perishable, cutting-edge technical skills. OpenAI built ChatGPT using innovative, breakthrough technologies. However, it could only create ChatGPT by drawing on foundations in durable skills like mathematics, statistics, coding, and English. AI-ready companies will need to embrace a T-shaped approach to skills development, combining a broad base of horizontal skills with a narrow set of deep, vertical skills. Innovation breaks through as a result of perishable skills but sustains as a result of durable skills.

Every company is becoming an AI company. Every employee will need to use AI. Those who don’t embrace the change will inevitably fall behind.


Should AI-Generated Content Include a Warning Label?

Like a tag that warns sweater owners not to wash their new purchase in hot water, a virtual label attached to AI content could alert viewers that what they’re looking at or listening to has been created or altered by AI. While appending a virtual identification label to AI-generated content may seem like a simple, logical solution to a serious problem, many experts believe the task is far more complex and challenging than it appears.

The answer isn’t clear-cut, says Marina Cozac, an assistant professor of marketing and business law at Villanova University’s School of Business. “Although labeling AI-generated content … seems like a logical approach, and experts often advocate for it, findings in the emerging literature on information-related labels are mixed,” she states in an email interview. Cozac adds that there’s a long history of using warning labels on products, such as cigarettes, to inform consumers about risks. “Labels can be effective in some cases, but they’re not always successful, and many unanswered questions remain about their impact.”

For generic AI-generated text, a warning label isn’t necessary, since such text usually serves functional purposes and doesn’t pose a novel risk of deception, says Iavor Bojinov, a professor at Harvard Business School, via an online interview. “However, hyper-realistic images and videos should include a message stating they were generated or edited by AI.” He believes transparency is crucial to avoid confusion or potential misuse, especially when the content closely resembles reality.

Real or Fake?

The purpose of a warning label on AI-generated content is to alert users that the information may not be authentic or reliable, Cozac says. 
“This can encourage users to critically evaluate the content and increase skepticism before accepting it as true, thereby reducing the likelihood of spreading potential misinformation.” The goal, she adds, should be to help mitigate the risks associated with AI-generated content and misinformation by disrupting automatic believability and the sharing of potentially false information.

The rise of deepfakes and other AI-generated media has made it increasingly difficult to distinguish between what’s real and what’s synthetic, which can erode trust, spread misinformation, and have harmful consequences for individuals and society, says Philip Moyer, CEO of video hosting firm Vimeo. “By labeling AI-generated content and disclosing the provenance of that content, we can help combat the spread of misinformation and work to maintain trust and transparency,” he observes via email. Moyer adds that labeling will also support content creators: “It will help them to maintain not only their creative abilities and their individual rights as a creator, but also their audience’s trust, distinguishing their work from content made with AI.”

Bojinov believes that besides providing transparency and trust, labels will provide a unique seal of approval. “On the flip side, I think the ‘human-made’ label will help drive a premium in writing and art in the same way that craft furniture or watches will say ‘hand-made’.”

Advisory or Mandatory?

“A label should be mandatory if the content portrays a real person saying or doing something they did not say or do originally, alters footage of a real event or location, or creates a lifelike scene that did not take place,” Moyer says. 
“However, the label wouldn’t be required for content that’s clearly unrealistic, animated, includes obvious special effects, or uses AI for only minor production assistance.”

Consumers need access to tools that help them identify what’s real versus artificially generated and that don’t depend on scammers doing the right thing, says Abhishek Karnik, director of threat research and response at security technology firm McAfee, via email. “Scammers may never abide by policy, but if most big players help implement and enforce such mechanisms it will help to build consumer awareness.” The format of labels indicating AI-generated content should be noticeable without being disruptive and may differ based on the content or the platform on which it appears, Karnik says. “Beyond disclaimers, watermarks and metadata can provide alternatives for verifying AI-generated content,” he notes. “Additionally, building tamper-proof solutions and long-term policies for enabling authentication, integrity, and nonrepudiation will be key.”

Final Thoughts

There are significant opportunities for future research on AI-generated content labels, Cozac says. She points out that while some progress has been made, more work remains to be done to understand how different label designs, contexts, and other characteristics affect their effectiveness. “This makes it an exciting and timely topic, with plenty of room for future research and new insights to help refine strategies for combating AI-generated content and misinformation.”
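As a concrete illustration of the tamper-evident metadata approach Karnik describes, here is a minimal sketch of a signed provenance label. Everything in it — the shared key, the claim fields, the label format — is a hypothetical illustration, not any real standard: production provenance schemes (such as C2PA Content Credentials) use public-key signatures so that verifiers don’t need the signing secret.

```python
# Hedged sketch: a tamper-evident "AI-generated" label whose MAC covers
# both the content bytes and the label's claims. Key, fields, and format
# are hypothetical; real systems use asymmetric signatures, not HMAC.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; a real system would manage keys securely

def label_content(content: bytes, generator: str) -> dict:
    """Build a provenance label whose MAC binds the claims to the content."""
    claims = {"ai_generated": True, "generator": generator}
    payload = content + json.dumps(claims, sort_keys=True).encode()
    claims["mac"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claims

def verify_label(content: bytes, label: dict) -> bool:
    """Recompute the MAC; any edit to the content or the claims invalidates it."""
    claims = {k: v for k, v in label.items() if k != "mac"}
    payload = content + json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["mac"])

image = b"\x89PNG...synthetic frame bytes..."  # stand-in for AI-generated media
label = label_content(image, "hypothetical-model-v1")
print(verify_label(image, label))                # True: content and label intact
print(verify_label(image + b"tampered", label))  # False: content was altered
```

Because the MAC covers both the content and the claims, altering either one invalidates the label — the integrity and authentication properties Karnik mentions. Nonrepudiation would additionally require asymmetric signatures, since anyone holding a shared HMAC key can forge labels.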
