Why Liberal Arts Grads Could Be the Best Programmers of the AI Era

In the world of programming, technical chops have always been the golden ticket. But over the years, some of the best programmers I’ve hired and worked with didn’t come from computer science backgrounds. They came from the humanities — music, philosophy, literature. These liberal arts grads brought a fresh perspective to programming, one that’s not always easy to find.

And as generative AI changes the game, this edge will only become more valuable. With AI handling the ABCs of programming — the line-by-line code writing — what’s left is the harder stuff: understanding problems deeply, communicating with stakeholders, and designing solutions that make sense in the real world.

Programming Isn’t Just About Code

Programming has never been purely about logic. Sure, you need what used to be called left-brain skill: the ability to translate technical specs into precise code. But a programmer’s real value comes when they push beyond that, recognizing patterns, solving complex problems, and seeing connections that others miss.

I first noticed this long ago. A talented colleague used to entertain a roomful of fellow IT workers by playing and singing Eric Clapton tunes. He was also a gifted coder, able to recognize patterns and solve problems in unconventional ways.

Programming is a creative process, not unlike music. The notes matter, but so does knowing when to riff, how to structure, and how to build something that’s more than the sum of its parts. It’s no coincidence that the best developer I ever worked with, period, was a music major.

Liberal arts majors don’t come to work burdened with technical rigidity. They’ve spent their time dissecting ideas, making connections between concepts, and thinking critically. They’ve honed their writing and storytelling. Those skills are incredibly valuable, especially now.
GenAI Is Changing the Job

GenAI is fundamentally changing what it means to be a programmer. Tools like GitHub Copilot and Google’s Gemini can write code, debug simple issues, and automate many of the tasks that used to take up time. But AI doesn’t know how to ask the right questions, interpret user needs, or mold its output into something that makes sense in a broader context. That’s still a human job.

The role of the programmer is evolving, possibly splitting into two paths. There will always be a place for the hardcore programmer with a computer science background, someone to make systems talk to one another. For others, call them citizen programmers, the work is no longer just about writing code line by line; it’s about knowing how to work with AI, guiding it, and knowing when and where human input is most needed.

This is where that liberal arts mindset comes in: being able to understand the nuances, think critically about user experience, explain things simply, and piece together ideas in new ways.

Preparing for the AI Future

So, what should businesses do with this insight? First, it’s time to rethink talent and look for people who can adapt, think on their feet, and see the big picture. This outreach could start at the university level, with IT recruiters visiting leading liberal arts and music colleges in addition to the traditional technical schools on their lists.

We also need to recognize that the most valuable skills don’t always show up on a resume. How do you measure the ability to see a solution that nobody else considered? Or the capacity to understand what a user is really asking for, even if they can’t quite articulate it? These are the skills that will matter most, even if they don’t fit neatly into a job description.

And once these new minds are hired, we need to change how we approach development within our teams.
AI isn’t going to stop evolving, and neither can we. For the next few years, people will focus on learning how to use these new tools. But beyond that, it’ll be about figuring out how to create with them. And that’s going to require people who aren’t afraid to question how things have always been done.

All this change isn’t mere theory; it’s happening right now. Instead of looking for people who tick all the technical boxes, I’m looking for those who bring a creative mindset to the table. Hiring cannot be merely about pulling in more STEM graduates. It must be about building an environment where people with different backgrounds can work together to solve problems.

The future of tech work will be shaped by those who can use AI to amplify their creativity, their empathy, and their ability to solve tough problems. In my experience, that’s often the person with a background in the humanities.


Technological And Environmental Risks Take The Top Two Spots In 2025 WEF Risk Report

In advance of its annual meeting in Davos, the World Economic Forum (WEF) released its 2025 Global Risks Report. The report, based on the WEF’s Global Risk Perception Survey 2024-2025, identifies geopolitical risks as a top immediate concern for global risk officers and the broader risk community. Additionally, technological risks will become more problematic over the next two years, and environmental risks will be a predominant risk factor in the next 10 years. Here is our take on the highlights from the WEF report and what it means for risk leaders:

Geopolitical conflict and its impact on global trade is the top concern today. WEF’s report highlights that today’s risk outlook is dominated by concerns about state-based armed conflict, which is causing havoc for global business and international trade. Since the invasion of Ukraine nearly three years ago, companies have faced the harsh realities of geopolitics’ impact on business. Our own data also ranks geopolitical and social instability in the top five factors driving increased enterprise risk. Despite ceasefires, new administrations, and policy changes, this geopolitical conflict is still expected to remain a top-three global risk by 2027. Risk leaders should strengthen their geopolitical risk intelligence and analysis to gauge the impact on the business.

Misinformation and disinformation tops the two-year outlook list — again. For the second year in a row, misinformation and disinformation occupies the number one slot on the medium-term outlook in the WEF Global Risks Report. Our own data (from Forrester’s Business Risk Survey, 2024, and The Top Systemic Risks, 2024 Forrester report) supports this and highlights risks surrounding data integrity, speed of innovation, and interconnectedness of global business systems — three of our top systemic risks for 2024. These risks accelerate and exacerbate the spread of misinformation and disinformation.
As companies and governments ramp up investment in AI and AI’s capabilities continue to rapidly evolve, expect the exploitation and weaponization of information to become more widespread, easier to execute, and more difficult to detect. Enterprise risk leaders should focus on how they detect misinformation about their organization and how they are set up to react quickly to potential critical misinformation events.

Cyber events round out the top five but are likely underestimated. The WEF report identifies cyber espionage and warfare as number five on the near-term (two-year) outlook; however, Forrester data highlights concern among enterprise risk decision-makers globally about the velocity of cyberattacks as the biggest driver of increased enterprise risk in 2024.

Climate risk, a dominant long-term concern, is agnostic to politics and policy. The mood music about environmental sustainability and climate risk may have shifted globally, but risk management leaders remain steadfast that environmental risks will be a dominant force and top business concern in the next 10 years. In fact, four of the top five long-term risks are environmentally related. Risk leaders we speak to are quietly getting on with executing their plans to reduce their environmental impact, as they understand it not only reduces risk but in many cases also contributes to cost reduction and better use of resources, as well as being good for the planet.

Leaders are not paying enough attention to technology risks and tech resilience. Forrester data shows that enterprise risk leaders see cyberattacks and reliance on technology as their biggest drivers of increased risk in 2024. However, one surprise from the WEF report this year is that technology risks are underrepresented in the medium- and long-term outlook.
Considering the scale of global technology outages in 2024, and the risk implications of generative AI and how to secure it, risk leaders should ensure that their technology risks are addressed and that their risk reporting to senior leaders covers this important area, safeguarding business investment, particularly in AI, in 2025.

Our upcoming report, “The State Of Enterprise Risk Management, 2025,” will dive further into the trends driving global enterprise risk in 2025 and will publish in the coming weeks. Forrester clients can book an inquiry or guidance session with any of us to discuss further.


Synthesia becomes UK's biggest GenAI firm with $2.1B valuation

Synthesia has claimed the crown of Britain’s biggest GenAI company after raising $180 million at a $2.1 billion valuation. The London-based business generates lifelike avatars for video content. Enterprises use the software to produce training content and corporate communications. The tech has made Synthesia a leader in the burgeoning synthetic media industry. According to the startup, over 60,000 businesses are customers — including more than 60% of the Fortune 100.

Investors have also shown growing interest. In 2023, Synthesia earned unicorn status after securing $90 million in Series C funding at a valuation of $1 billion. The latest cash injection marks another milestone: according to Dealroom, Synthesia is now the UK’s largest GenAI media company by valuation.

The startup wants the fresh funds to fuel a new phase of growth. At the core of the plans is Synthesia 2.0 — a product billed as the world’s first enterprise AI video platform. Development of the system is now underway.

Synthesia is also preparing for the next generation of synthetic media. Victor Riparbelli, the company’s CEO and co-founder, expects big breakthroughs from blending AI videos with reasoning systems, such as large language models. “We will unlock a new type of media that can think, narrate, and personalise content for us,” he told TNW via email.

“These new interfaces will be centred around intuitive, human communication that is much more effective than text. You could imagine an AI that connects to your Spotify and teaches you music theory based on your skill level and favourite artists.

“At work, we may interact with virtual guides that help us make buying decisions, coach us and teach us new skills like you would with a tutor in the real world.”

Naturally, Riparbelli also shared the funding news in an AI video.
Fully generated with Synthesia, the clip summarises the announcement in eight languages.

Synthesia’s position in Europe’s GenAI landscape

Announced today, Synthesia’s Series D round was led by VC giant NEA. Existing investors GV and MMC Ventures also participated, alongside new backers WiL, Atlassian Ventures, and PSP Growth. The landmark raise has renewed optimism about Europe’s AI landscape.

Yoram Wijngaarde, Dealroom’s founder and CEO, is bullish about developments across the continent. “Synthesia’s Series D signals that European AI is picking up where they left off in 2024,” he said. “AI startups accounted for over 25% of European venture capital last year, up from 15% just four years ago. In one of the most significant technological waves in decades, Synthesia stands out among the emerging AI unicorns reshaping the landscape from this side of the Atlantic.”

The funding was also welcomed by politicians in Synthesia’s home country, who announced new plans this week to “turbocharge AI.” Peter Kyle, the UK’s science and technology secretary, said the funding “showcases the confidence investors have in British tech” and highlights the “global leadership” of the country’s GenAI pioneers.

Riparbelli is also optimistic about the UK’s AI scene, pointing to its combination of talent, capital, and infrastructure, and praising Britain’s production of global leaders in the field. “There are many countries that want to become AI superpowers but few have a chance to actually succeed,” he said. “The UK is among the top three for sure, because it has a combination of talent, capital, and infrastructure. What’s also remarkable about the UK is that it produces global leaders, not just regional players.”


OpenAI Stargate is a $500B bet: America’s AI Manhattan Project or costly dead end?

In case you missed it amid the flurry of executive orders coming out of the White House in the days since President Trump returned to office for his second non-consecutive term this week, the single largest investment in AI infrastructure was announced yesterday afternoon. Known as “the Stargate Project,” it’s a $500 billion effort from OpenAI, SoftBank, Oracle and MGX to form a new venture that will build “new AI infrastructure for OpenAI in the United States,” and, as OpenAI put it in its announcement post on the social network X, to “support the re-industrialization of the United States… also provide a strategic capability to protect the national security of America and its allies.”

The end goal: to build artificial general intelligence (AGI) — AI that outperforms humans at most economically valuable work, which has been OpenAI’s goal from the start — and, ultimately, artificial superintelligence, or AI even smarter than humans can comprehend.

Flanked by Trump himself, OpenAI cofounder and CEO Sam Altman appeared at the White House alongside SoftBank CEO Masayoshi “Masa” Son and Oracle executive chairman Larry Ellison, saying: “I’m thrilled we get to do this in the United States of America. I think this will be the most important project of this era — and as Masa said, for AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here — we wouldn’t be able to do this without you, Mr. President.” Son called it “the beginning of our Golden Age.”

Several high-profile technology companies have partnered with the initiative to build and operate the infrastructure. Arm, Microsoft, Nvidia, Oracle and OpenAI are among the key partners contributing their expertise and resources to the effort.
Oracle, Nvidia and OpenAI, in particular, will collaborate closely on developing the computing systems essential for the project’s success.

While some see the Stargate Project as a transformative investment in the future of AI, critics argue that it is a costly overreach, unnecessary in light of the rapid rise of leaner, open-source reasoning AI models like China’s DeepSeek R1 — which was released earlier this week under a permissive MIT License (allowing it to be downloaded, fine-tuned or retrained, and used freely in commercial and noncommercial projects) and which matches or outperforms OpenAI’s own o1 reasoning models on key third-party benchmarks. The debate has become a lightning rod for competing visions of AI development and the geopolitical dynamics shaping the race for technological supremacy.

A transformational leap forward?

For many advocates, the Stargate Project represents an unparalleled commitment to innovation and national competitiveness, on par with prior eras of large infrastructure spending such as the U.S. highway system of the Eisenhower era (though that, of course, was built with public funds, not private money as in this case).

On X, AI commentator and former engineer David Shapiro said, “America just won geopolitics for the next 50 years with Project Stargate,” and likened the initiative to historic achievements like the Manhattan Project and NASA’s Apollo program. He argued that this level of investment in artificial intelligence is not only necessary but inevitable, given the stakes. Shapiro described the project as a strategic move to ensure that America maintains technological supremacy, framing the investment as critical to solving global problems, driving economic growth and securing national security. “When America decides something matters and backs it with this kind of money? It happens. Period,” he declared.
In terms of practical applications, advocates point to the Stargate Project’s promise of AI-enabled breakthroughs in areas like cancer research, personalized medicine, and pandemic prevention. Oracle’s Ellison has specifically highlighted the potential to develop new personalized mRNA-based vaccines and cancer treatments, revolutionizing healthcare.

A waste of (as yet un-procured) moneys?

Despite this optimism, critics are challenging the project on multiple fronts, from its financial feasibility to its strategic direction. Elon Musk, head of the Department of Government Efficiency (DOGE) under President Donald Trump’s second administration and an OpenAI cofounder, cast doubt on the project’s funding. Musk, who has since launched his own AI company, xAI, and its Grok language model family, posted on his social network, X: “They don’t actually have the money,” alleging that SoftBank — Stargate’s primary financial backer — has secured “well under $10B.”

In response, Altman replied this morning: “[I] genuinely respect your accomplishments and think you are the most inspiring entrepreneur of our time,” later writing that Musk was “wrong, as you surely know. want to come visit the first site already under way? this is great for the country. i realize what is great for the country isn’t always what’s optimal for your companies, but in your new role i hope you’ll mostly put [US flag emoji] first.”

Others have questioned the timing and strategic rationale behind the initiative. Tech entrepreneur and commentator Arnaud Bertrand took to X to contrast OpenAI’s infrastructure-heavy approach with the leaner, more decentralized strategy employed by China’s High-Flyer Capital Management, creators of the new, highest-performing open-source large language model (LLM), DeepSeek-R1, released earlier this week. Bertrand noted that DeepSeek has achieved performance parity with OpenAI’s latest models at just 3% of the cost, using far smaller GPU clusters and data centers.
He described the divergence as a collision of philosophies, with OpenAI betting on massive centralized infrastructure while DeepSeek pursues democratized, cost-efficient AI development. “A fundamental question remains,” Bertrand wrote on X. “What will OpenAI customers be paying for exactly if much cheaper DeepSeek matches their latest models’ performance? Having spent an indecent amount of money on data centers isn’t a customer benefit in and of itself.”

Bertrand further argued that OpenAI’s focus on infrastructure may represent outdated thinking. “This $500B bet on infrastructure may be OpenAI fighting the last war,” he warned, pointing to DeepSeek’s success as evidence that innovation and agility — not scale — are the key drivers of modern AI.


China’s DeepSeek Dethrones ChatGPT as US Tech Stocks Plunge

DeepSeek, an underdog Chinese startup with a large language model boasting powerful performance at a fraction of competitors’ steep training costs, knocked OpenAI’s ChatGPT from its top position in the Apple App Store — a development that on Monday spooked investors enough to send US technology stocks plummeting.

DeepSeek claims its V3 large language model cost just $5.6 million to train, a fraction of ChatGPT’s reported training costs of more than $100 million. With performance comparable to OpenAI’s o1 model, a roughly 95% cost cut may be especially attractive to cash-strapped companies looking to leverage generative AI (GenAI).

The development sparked a pre-market selloff for major AI players, including Nvidia, Microsoft, and Meta. Investors sold off around $1 trillion in tech stocks in pre-market trading alone, with the S&P falling 2.3% and the Nasdaq dropping by nearly 4% before the opening bell. Nvidia, the world’s leading supplier of AI chips, fell more than 11% in early trading. Chip companies Arm, Broadcom, and Micron Technology also suffered losses.

In a research note, Wedbush analyst Daniel Ives wrote: “Clearly tech stocks are under massive pressure led by Nvidia as Wall Street will view DeepSeek as a major perceived threat to US tech dominance and owning this AI revolution.”

Chirag Dekate, vice president and analyst at Gartner, thinks Wall Street may have overreacted to the DeepSeek news. In an interview with InformationWeek, Dekate says developments that reduce training costs will have an overall positive impact. “It’s not just model innovation, it’s a system innovation,” Dekate says. “The DeepSeek innovations are real, and they matter … Lowering the cost structures is a net positive for the overall industry … DeepSeek enables a pathway to utilize resources more productively. Meta, Microsoft, Google, OpenAI, and other AI innovators can utilize those underlying capabilities even better.
That will likely define the future of GenAI.”

Why is DeepSeek a Potential Disrupter?

Businesses can take advantage of massive cost savings with DeepSeek’s application programming interface (API), which costs $0.55 per million input tokens and $2.19 per million output tokens, a fraction of OpenAI’s API pricing of $15 per million input tokens and $60 per million output tokens. But those savings come at a price: experts say widespread adoption of a Chinese-made model could pose significant security risks.

“From a security standpoint, you’re not going to want people putting data into servers that are hosted in China – same problem people had with TikTok,” says John Pettit, chief technology officer at IT consultancy Promevo. “You don’t know how data is being used and where it’s going to go. Even deploying it locally, you have to worry about supply chain injection.”

National security concerns in November prompted a bipartisan US congressional group to sound the alarm on China’s progress in AI. The US-China Economic and Security Review Commission called for a government-funded effort to quickly develop artificial general intelligence (AGI) before China. AGI, which promises language models that match or better human intelligence, could be harnessed as a powerful weapon and give the country that first develops the technology a huge geopolitical advantage. And DeepSeek CEO Liang Wenfeng has stated that developing AGI is a top priority. “Our destination is AGI, which means we need to study new model structures to realize stronger model capability with limited resources,” Liang told ChinaTalk in a November interview.

The US also alleges that China-backed hacking group Volt Typhoon has sought to disrupt US critical infrastructure.
“China remains the most active and persistent cyber threat to US government, private-sector and critical infrastructure efforts,” according to a blog post from the Cybersecurity & Infrastructure Security Agency (CISA), which warned of continuing state-sponsored security threats.

Despite lower costs, Dekate says, enterprises will not likely rush into using DeepSeek widely because of potential legal liabilities. “Enterprises should always be careful about creating external-facing products that are produced by open-source models,” Dekate says, noting that enterprise-grade AI models offer more guardrails, security, and higher-quality outputs. “There are going to be constraints [with open-source models] that Gemini, OpenAI and other models do not have… you are going to get a more comprehensive answer on certain topics.”
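As a rough illustration of the API pricing gap cited above, the following sketch applies the per-million-token prices reported in this article to a hypothetical monthly workload (the workload figures are invented for illustration only, not taken from either vendor):

```python
# Rough cost comparison using the per-million-token API prices quoted above.
# The monthly workload figures below are hypothetical, for illustration only.

def monthly_cost(input_tokens, output_tokens, in_price, out_price):
    """Dollar cost of a workload, given prices in $ per million tokens."""
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

# Prices cited in the article ($ per million input tokens, $ per million output tokens)
DEEPSEEK = (0.55, 2.19)
OPENAI = (15.00, 60.00)

# Hypothetical workload: 200M input tokens, 50M output tokens per month
workload = (200_000_000, 50_000_000)

deepseek_cost = monthly_cost(*workload, *DEEPSEEK)
openai_cost = monthly_cost(*workload, *OPENAI)

print(f"DeepSeek: ${deepseek_cost:,.2f}")  # prints "DeepSeek: $219.50"
print(f"OpenAI:   ${openai_cost:,.2f}")    # prints "OpenAI:   $6,000.00"
print(f"Savings:  {1 - deepseek_cost / openai_cost:.1%}")  # prints "Savings:  96.3%"
```

At these quoted rates the gap exceeds even the 95% figure cited earlier for training costs, which is why the pricing has drawn so much attention from cost-conscious buyers.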


Cloud modernization: The critical step your migration may be missing

The public cloud turns 23 this year, and enterprise migration of on-premises workloads isn’t just continuing — it’s speeding up. According to the Foundry Cloud Computing Study 2024, 63% of enterprise CIOs were accelerating their cloud migrations, up from 57% in 2023.

When organizations migrate applications to the cloud, they expect to see significant benefits: increased scalability, stronger security and accelerated adoption of new technologies. But CIOs have significant, well-founded concerns around costs. Specifically, CIOs worry about controlling costs (51%) and how much costs will add up in the long term (49%), according to the Foundry study. Much of that concern may be rooted in the conventional wisdom that migrating to the cloud does not cut costs, which can be true if your efforts end with migration.

Certainly, no CIO would try to migrate a mainframe or a traditional monolithic application directly to the cloud. But a VM or a containerized application also isn’t going to provide the full benefits of the cloud without modernization. As AWS points out on the company’s blog, “On-premises applications aren’t commonly designed to take advantage of the capabilities that the cloud offers, such as elasticity, resiliency, automation, and such.”

What’s the solution? Modernization. “From the standpoint of the cloud, a containerized or virtualized application is usually still a monolith,” said Matt Leising, managing director of engineering at Solution Design Group (SDG). “You need to break that application down into its parts, because some parts are utilized more than others. Once you break it apart into a collection of services, with cloud capabilities, you can allocate fewer CPU and storage resources to those services that aren’t used often to bring those costs down.”

By rearchitecting applications so each component functions as a separate but connected service, organizations can avoid paying for unnecessary cloud resources.
In fact, if infrequently used services are deployed in a serverless cloud environment, the cloud will only allocate CPU to them when they need to run, which can significantly cut costs.

Cost savings are far from the only advantage of modernization, however. Modernization can also enable enterprises to cost-effectively scale applications as their businesses grow. As AWS writes, by refactoring an application, a coder’s aim is to “modify its architecture by taking full advantage of cloud-native features to improve agility, performance, and scalability. This is driven by strong business demand to scale, accelerate product and feature releases, and to reduce costs.”

“We had a customer that migrated a large application to the cloud without modernizing,” Leising said, “resulting in the need for an enormous amount of compute resources. To scale the application to their other regions, they would have had to buy the same amount of compute capacity for each region, which would have been far too expensive. We helped them modernize the application, enabling them to scale efficiently and cost-effectively.”

In another case, SDG partnered with Cargill to modernize an agricultural commodities trading platform for the cloud. The results are a good example of the benefits of modernization, including:

Cost reduction: Reduced administrative costs through improved infrastructure and processes.
Improved user experience: A modern, mobile-friendly interface increased usability and reduced errors for users.
Greater business agility: The AWS cloud-hybrid platform allows Cargill to rapidly build and deploy new capabilities as user needs evolve.
Scalable performance: The platform can handle high-volume, globally distributed transactions while maintaining flexibility and adaptability.

SDG is an employee-owned business and technology company with deep expertise in cloud migration, application modernization, and DevOps automation.
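The serverless point made earlier — that the cloud bills an infrequently used service only while it actually runs — can be sketched with a back-of-the-envelope comparison. All rates and workload numbers below are hypothetical placeholders, not actual cloud-provider pricing:

```python
# Back-of-the-envelope comparison of an always-on VM vs. a serverless function
# hosting an infrequently used service. All rates here are hypothetical
# placeholders, not real cloud-provider pricing.

HOURS_PER_MONTH = 730  # average hours in a month

def vm_monthly_cost(hourly_rate):
    """An always-on VM bills for every hour, busy or idle."""
    return hourly_rate * HOURS_PER_MONTH

def serverless_monthly_cost(invocations, avg_seconds, rate_per_second):
    """A serverless function bills only for actual execution time."""
    return invocations * avg_seconds * rate_per_second

# Hypothetical: a small VM at $0.05/hour vs. the same service invoked
# 10,000 times a month, 0.5 s per invocation, at $0.00002 per second
vm = vm_monthly_cost(0.05)                          # ~36.5
fn = serverless_monthly_cost(10_000, 0.5, 0.00002)  # ~0.1

print(f"Always-on VM: ${vm:.2f}/month")
print(f"Serverless:   ${fn:.2f}/month")
```

Under these assumed numbers the idle VM costs hundreds of times more than the pay-per-execution deployment, which is the cost dynamic the modernization argument above relies on.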


Algorithm Price-Fixing Ruling May Lower Antitrust Claims Bar

By Joshua Goodman, Minna Lo Naranjo and Geoffrey Holtz (January 23, 2025, 4:19 PM EST) — On Dec. 4, the U.S. District Court for the Western District of Washington in Duffy v. Yardi Systems Inc. denied the defendants’ motion to dismiss, paving the way for the case to move forward to discovery, and held that the plaintiffs’ allegations were sufficient to allege a per se unlawful antitrust conspiracy….


Why Every Employee Will Need to Use AI in 2025

Over the past year, we’ve seen organizations differ in their approaches to AI. Some have taken every opportunity to embed AI in their workflows; others have been more cautious, experimenting with limited proof-of-concept projects before committing to larger investments.

But unlike past technology breakthroughs that were only relevant for specific employees, AI is a horizontal skill. Business leaders need to embrace this fact: Every single employee needs to become an AI employee.

In 2025 and beyond, we will start to see the difference between companies that treat AI as a feature and those that view it as a transformation. Here’s how business and learning leaders should think about AI adoption throughout their organization.

Establishing an AI-Ready Skills Vision

For businesses to develop an AI-ready workforce, they need to establish a skills vision that sets out which employees require which level of competency. This vision shouldn’t be permanent; instead, it should evolve in response to technological advances and the needs of the business.

There are two ways of structuring an AI skills vision. The first is simple: builders and users. A small portion — roughly 5% — of an organization’s workforce will require the expertise to build AI systems, products, evaluation tools and language models. The remaining 95% simply need to know how to use AI to augment and accelerate their existing workflows.

For a more detailed framework, leaders can break down their workforce into four levels:

Center of excellence: Synonymous with “AI builders.” Think data scientists, machine learning engineers, and software engineers. Their entire role is to design, build, and refine AI tools for internal or external clients.

“AI + X”: These are the subject matter experts whose roles can be reimagined with the addition of AI.
Employees at this level could come from a wide range of backgrounds, from mechanical engineers to finance leaders. AI can help these employees build something truly meaningful in their specific area of expertise.

Fluency: At the fluency level, employees don’t need hands-on mastery of AI tools or to apply them to their own workflows. Instead, fluency is the level required for employees who interact with a technical counterpart. For example, a marketer selling a highly technical AI product needs a certain level of understanding to be able to accurately and effectively market that product.

Literacy: This is the basic level of AI skills needed for front-line workers and individual contributors. AI literacy could help these employees boost productivity, depending on their role and responsibilities. But it’s equally important for these employees to be part of the broader cultural change. A company is in a better position to innovate when every employee has achieved a standard level of AI literacy.

Avoiding Dangerous Amateurs

For an organization to make the most of AI, it needs to know the precise skill levels of its employees and where they need to grow in the future.

For example, a company’s solutions will only ever be as good as its best contributors. Organizations must do everything they can to maximize the abilities of their Center of Excellence employees, because they set the bar for the rest of the organization. At one software company, I saw leaders transfer an expert in clean coding to a team struggling with code quality; improvements were evident across the organization within weeks, demonstrating the contagious nature of expertise.

But while experts should be placed at the forefront and driven to achieve more, organizations must be careful not to give the same opportunities to those who overstate their abilities.
My friend and collaborator Fernando Lucini refers to these employees as “dangerous amateurs,” and they can slow an organization’s progress with AI. As companies transition from prototyping to productizing an AI solution, they may realize that the experts they were counting on don’t have the skills needed to bring the product to market. Meanwhile, competitors with an accurate measure of employee skill levels will race ahead.

Creating the Foundation for Innovation

For companies to innovate, they need to be able to adapt quickly to changing technologies and skill demands. In 2016, one of my most important tools was TensorFlow, a widely used machine learning framework. Less than a decade later, TensorFlow has evolved so much that I can no longer use it effectively without retraining and updating my skills. Highly technical skills perish quickly.

Employees must establish a strong foundation in durable skills in order to master the perishable, cutting-edge technical ones. OpenAI built ChatGPT using innovative, breakthrough technologies, but it could only do so by drawing on foundations in durable skills like mathematics, statistics, coding, and English. AI-ready companies will need to embrace a T-shaped approach to skills development, combining a broad base of horizontal skills with a narrow set of deep, vertical skills. Innovation breaks through as a result of perishable skills but sustains as a result of durable skills.

Every company is becoming an AI company. Every employee will need to use AI. Those who don’t embrace the change will inevitably fall behind.

Why Every Employee Will Need to Use AI in 2025

Tech leaders respond to the rapid rise of DeepSeek

If you hadn’t heard, there’s a new AI star in town: DeepSeek. The subsidiary of Chinese quantitative analysis (quant) firm High-Flyer Capital Management has sent shockwaves through Silicon Valley and the wider world with its release earlier this week of DeepSeek R1, a new open-source large reasoning model that matches OpenAI’s most powerful available model, o1, at a fraction of the cost, both for users and for the company that trained it.

The advent of DeepSeek R1 has already reshuffled a consistently topsy-turvy, fast-moving, intensely competitive market for new AI models. In previous months, OpenAI jockeyed with Anthropic and Google for the most powerful proprietary models available, while Meta Platforms often came in with “close enough” open-source rivals. The difference this time is that the company behind the hot model is based in China, the geopolitical “frenemy” of the U.S., whose tech sector was widely viewed, until this moment, as inferior to Silicon Valley’s.

As such, it has caused no shortage of hand-wringing and existentialism among U.S. and Western-bloc techies, who are suddenly doubting OpenAI and the general big-tech strategy of throwing ever more money and compute (graphics processing units, or GPUs, the powerful gaming chips typically used to train AI models) at the problem of inventing ever more powerful models. Yet some Western tech leaders have had a largely positive public response to DeepSeek’s rapid ascent.
Marc Andreessen, co-author of the pioneering Mosaic web browser, cofounder of the Netscape browser company, and current general partner at the famed venture capital firm Andreessen Horowitz (a16z), posted on X today: “Deepseek R1 is one of the most amazing and impressive breakthroughs I’ve ever seen — and as open source, a profound gift to the world [robot emoji, salute emoji].”

Yann LeCun, chief AI scientist for Meta’s Fundamental AI Research (FAIR) division, posted on his LinkedIn account: “To people who see the performance of DeepSeek and think: ‘China is surpassing the US in AI,’ you are reading this wrong. The correct reading is: ‘Open source models are surpassing proprietary ones.’ DeepSeek has profited from open research and open source (e.g. PyTorch and Llama from Meta). They came up with new ideas and built them on top of other people’s work. Because their work is published and open source, everyone can profit from it. That is the power of open research and open source.”

And even Mark “Zuck” Zuckerberg, Meta’s founder and CEO, seemed to seek to counter the rise of DeepSeek with his own post on Facebook, promising that a new version of Meta’s open-source AI model family, Llama, would be “the leading state of the art model” when it is released sometime this year. As he put it: “This will be a defining year for AI. In 2025, I expect Meta AI will be the leading assistant serving more than 1 billion people, Llama 4 will become the leading state of the art model, and we’ll build an AI engineer that will start contributing increasing amounts of code to our R&D efforts. To power this, Meta is building a 2GW+ datacenter that is so large it would cover a significant part of Manhattan. We’ll bring online ~1GW of compute in ’25 and we’ll end the year with more than 1.3 million GPUs. We’re planning to invest $60-65B in capex this year while also growing our AI teams significantly, and we have the capital to continue investing in the years ahead.
This is a massive effort, and over the coming years it will drive our core products and business, unlock historic innovation, and extend American technology leadership. Let’s go build!” He even shared a graphic showing the 2-gigawatt datacenter mentioned in his post overlaid on Manhattan.

Clearly, even as he espouses a commitment to open-source AI, Zuck is not convinced that DeepSeek’s approach of optimizing for efficiency while using far fewer GPUs than the major labs is the right one for Meta, or for the future of AI. But with U.S. companies raising and spending record sums on new AI infrastructure that many experts note depreciates rapidly (due to hardware, chip, and software advancements), the question remains which vision of the future will win out and become the dominant AI provider for the world. Or perhaps it will always be a multiplicity of models, each with a smaller market share. Stay tuned, because this competition is getting closer and fiercer than ever.


US GPU export limits could bring cold war to AI, data center markets

Eighteen countries, including the UK, Canada, Sweden, France, Germany, Japan, and South Korea, are exempted from the AI export caps. The Biden administration had previously banned the export of some powerful AI chips to China, Russia, and other adversaries in rules from 2022 and 2023. But other countries friendly to the US, including Mexico, Israel, India, and Saudi Arabia, would be subject to the quotas.

The export limits would take effect 120 days from the Jan. 13 order, and it’s unclear whether the incoming Trump administration will amend or rewrite the rule, although Trump has targeted China as a primary economic competitor of the US.

The cost of AI

In addition to cutting off most of the world from large AI chip purchases, the rule will force countries such as China and Russia to pump up their own AI capabilities, ultimately reducing US AI leadership, claims Aible’s Sengupta.
