Meta enters AI video wars with powerful Movie Gen set to hit Instagram in 2025

Meta founder and CEO Mark Zuckerberg, who built the company atop its hit social network Facebook, finished this week strong, posting a video of himself doing a leg press exercise on a machine at the gym on his personal Instagram (a social network Facebook acquired in 2012). Except, in the video, the leg press machine transforms into a neon cyberpunk version, an Ancient Roman version, and a gold flaming version as well.

As it turned out, Zuck was doing more than just exercising: he was using the video to announce Movie Gen, Meta’s new family of generative multimodal AI models that can make both video and audio from text prompts, and that allow users to customize their own videos, adding special effects, props, and costumes, and changing select elements simply through text guidance, as Zuck did in his video.

The models appear to be extremely powerful, allowing users to change only selected elements of a video clip rather than “re-roll” or regenerate the entire thing, similar to Pika’s spot editing on older models, yet with longer clip generation and sound built in. Meta’s tests, outlined in a technical paper on the model family released today, show that it outperforms the leading rivals in the space, including Runway Gen 3, Luma Dream Machine, OpenAI Sora, and Kling 1.5, on many audience ratings of attributes such as consistency and “naturalness” of motion.

Meta has positioned Movie Gen as a tool both for everyday users looking to enhance their digital storytelling and for professional video creators and editors, even Hollywood filmmakers. Movie Gen represents Meta’s latest step forward in generative AI technology, combining video and audio capabilities within a single system.

Specifically, Movie Gen consists of four models:

1. Movie Gen Video – a 30B-parameter text-to-video generation model
2. Movie Gen Audio – a 13B-parameter video-to-audio generation model
3. Personalized Movie Gen Video – a version of Movie Gen Video post-trained to generate personalized videos based on a person’s face
4. Movie Gen Edit – a model with a novel post-training procedure for precise video editing

These models enable the creation of realistic, personalized HD videos of up to 16 seconds at 16 FPS, along with 48kHz audio, and provide video editing capabilities. Designed to handle tasks ranging from personalized video creation to sophisticated video editing and high-quality audio generation, Movie Gen leverages powerful AI models to expand users’ creative options.

Key features of the Movie Gen suite include:

• Video Generation: With Movie Gen, users can produce high-definition (HD) videos by simply entering text prompts. These videos can be rendered at 1080p resolution and up to 16 seconds long, and are supported by a 30 billion-parameter transformer model. The AI’s ability to follow detailed prompts allows it to handle various aspects of video creation, including camera motion, object interactions, and environmental physics.

• Personalized Videos: Movie Gen offers a personalized video feature, where users can upload an image of themselves or others to be featured within AI-generated videos. The model can adapt to various prompts while maintaining the identity of the individual, making it useful for customized content creation.
• Precise Video Editing: The Movie Gen suite also includes advanced video editing capabilities that allow users to modify specific elements within a video. The editing model can make localized changes, like altering objects or colors, as well as global changes, such as background swaps, all based on simple text instructions.

• Audio Generation: In addition to video capabilities, Movie Gen incorporates a 13 billion-parameter audio generation model. This feature enables the generation of sound effects, ambient music, and synchronized audio that aligns seamlessly with visual content. Users can create Foley sounds (sound effects that reproduce real-life noises such as fabric rustling and footsteps echoing), instrumental music, and other audio elements up to 45 seconds long. Meta posted an example video with Foley sounds (turn the sound up to hear it).

Trained on billions of videos online

Movie Gen is the latest advancement in Meta’s ongoing AI research efforts. To train the models, Meta says it relied upon “internet scale image, video, and audio data,” specifically, 100 million videos and 1 billion images from which it “learns about the visual world by ‘watching’ videos,” according to the technical paper.

However, Meta did not specify in the paper whether the data was licensed or in the public domain, or whether it simply scraped it as many other AI model makers have — a practice that has led to criticism from artists and video creators such as YouTuber Marques Brownlee (MKBHD) — and, in the case of AI video model provider Runway, a class-action copyright infringement suit by creators (still moving through the courts). As such, one can expect Meta to face immediate criticism of its data sources.

The legal and ethical questions about the training aside, Meta is clearly positioning the Movie Gen creation process as novel, using a combination of typical diffusion model training (used commonly in video and audio AI) alongside large language model (LLM) training and a technique called “Flow Matching,” which relies on modeling changes in a dataset’s distribution over time. At each step, the model learns to predict the velocity at which samples should “move” toward the target distribution (a minimal sketch of this idea follows below). Flow Matching differs from standard diffusion-based models in key ways:

• Zero Terminal Signal-to-Noise Ratio (SNR): Unlike conventional diffusion models, which require specific noise schedules to maintain a zero terminal SNR, Flow Matching inherently ensures zero terminal SNR without additional adjustments. This provides robustness against the choice of noise schedules, contributing to more consistent and higher-quality video outputs.

• Efficiency in Training and Inference: Flow Matching is more efficient in both training and inference than diffusion models. It offers flexibility in the type of noise schedules used and shows improved performance across a range of model sizes. This approach has
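To make the velocity-prediction idea above concrete, here is a minimal, hypothetical sketch of a Flow Matching training step using a straight-line interpolation path, the common formulation in the research literature. It is illustrative only and not Meta’s actual training code; the model interface, tensor shapes, and unweighted loss are all assumptions.

```python
import torch

def flow_matching_loss(model, x1):
    """One illustrative Flow Matching step. x1: a batch of clean samples
    (e.g., encoded video latents); model(x_t, t) predicts a velocity field."""
    x0 = torch.randn_like(x1)                             # noise (source distribution)
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)),  # random time in [0, 1],
                   device=x1.device)                      # broadcastable over x1
    xt = (1.0 - t) * x0 + t * x1                          # point on the straight-line path
    v_target = x1 - x0                                    # constant velocity along that path
    v_pred = model(xt, t.flatten())                       # predicted velocity at (xt, t)
    return torch.mean((v_pred - v_target) ** 2)           # regress prediction onto target
```

Note that at t = 1 the interpolated sample is exactly the data point, with no residual noise, which is one way to read the “zero terminal SNR” property described in the first bullet above.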


From Discard to Demand: The Growing Popularity of Used Smartphones

State of the India Smartphone Market

India ships around 145-150 million new smartphones per year for the domestic market, ranking second globally after China in annual shipment volume. There are approximately 650 million smartphone users in India, or about 46% smartphone penetration. No other market of this size has such huge untapped potential, making India a very attractive market for all smartphone ecosystem participants, from brands to component makers.

India’s smartphone market grew modestly in 2021, coming out of a challenging 2020 (due to pandemic-led shutdowns). This growth was driven by the need for a better device for remote learning/work and increasing media consumption on the go. However, in 2022 and 2023 the market faced challenges because of the rising average selling price (ASP) of devices (growing at a CAGR of 38% from 2020 to 2023), improving device quality, and continuing income stress, especially in the mass consumer segment. This in turn has elongated the average smartphone replacement cycle in India from 24 months to almost 36 months currently, further restricting the growth of the new smartphone market.

Why Are Consumers Choosing Used Smartphones?

All of the above factors have contributed to the increasing popularity of used smartphones over the past few years. Even as the quality of smartphone hardware improves, rising device prices are keeping new smartphone models out of reach of the mass segment. The aspiration to own a good device without paying much makes used smartphones a very attractive choice for consumers wanting to upgrade, and even for first-time smartphone users.

Another important factor in the popularity of used smartphones is the rising preference for 5G smartphones. As of now, only approximately a third of the 650 million Indian smartphone users have a 5G smartphone; the rest are still using 4G phones. However, the price differential between 4G and 5G smartphones and the lack of wide availability of 5G models under INR 10K (US$125) restrict their ability to upgrade, forcing many consumers to opt for mid-priced used smartphones.

According to the latest IDC research (IDC Used Device Tracker), India ranks third globally in annual used smartphone volume, after China and the USA, and is one of the fastest-growing markets. In 2024, IDC forecasts that 20 million used smartphones will be traded in India, a YoY growth of 9.6%, outpacing the growth of new smartphone shipments (154 million units in 2024, up 5.5% YoY).

Apple and Xiaomi Are the Top Choices!

The “premiumisation” of India’s smartphone market, or more aptly the rising aspiration of Indian consumers to upgrade to a mid-premium or premium phone, is also contributing to the popularity of the used smartphone space. While Apple has seen healthy growth in new iPhone shipments in India in the past few years, it also leads the used smartphone space, capturing a quarter of the market as per the IDC Quarterly Used Device Tracker. Everyone in India wants to buy an iPhone because of its premium brand positioning and status-signaling value, but not everyone can afford one. The used phone market comes to the rescue of many such aspirational consumers, who go for previous-generation models like the iPhone 11, 12, and 13 series. Xiaomi led India’s new smartphone market for 20 straight quarters from 3Q17 to 3Q22. As a result, it has a huge user base, which is reflected in the used smartphone market as well. Xiaomi sits in second position, followed by Samsung.
These top three brands combined make up around two-thirds of the used smartphone market in India.

Who are the Market Players?

IDC’s used smartphone research tracks both second-hand and refurbished smartphones traded via organized refurbished-device players in the market; it excludes peer-to-peer sales. In India, several startups in this space, like Cashify, Budlii, Instacash, and Yaantra, have tried to organize this hitherto largely unorganized market. With their efforts around marketing and an omnichannel presence across both online and offline counters, these players have been able to build consumer confidence and trust in the quality of the used smartphones on their platforms. Cashify is one of the biggest platforms in this market, with over 200 stores in 100 cities, many in Tier 2 and 3 towns. Yaantra is owned by Indian e-commerce giant Flipkart and operates under the Flipkart Reset brand. It is mainly focused on its online portfolio; offline, the company has partnered with Airtel to be available in the telco’s stores in only two Indian cities for now (Delhi and Hyderabad).

From Here, the Only Way Is Up!

IDC forecasts the used smartphone market in India to grow at an 8% CAGR over the next five years, reaching 26.5 million units per annum in 2028. It is evident that the used smartphone market in India is gradually taking shape, with interesting channel play by key trading players making smartphones more affordable to a larger audience. This is also reassuring for the ever-discerning Indian consumer, who can explore buying a used smartphone without worrying about the quality of the device or spending too much. From an overall market perspective, growth in the used smartphone market can certainly be a factor in increasing smartphone adoption in India, create a parallel revenue stream for channel players, help vendors address e-waste concerns around discarded devices, and generate employment (skilled and unskilled). This market can certainly play a major role in achieving the goal of bringing a billion Indians into the smartphone fold in the next few years.
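As a quick consistency check on those forecasts, here is a minimal sketch of the CAGR arithmetic. The 2023 base volume is inferred from the stated 2024 figure and its 9.6% YoY growth; it is an assumption made for illustration, not a published IDC number.

```python
units_2024 = 20.0                          # million used units traded in 2024 (IDC forecast)
units_2023 = units_2024 / (1 + 0.096)      # infer the 2023 base from 9.6% YoY growth (~18.2M)

cagr = 0.08                                # forecast 8% CAGR over the next five years
units_2028 = units_2023 * (1 + cagr) ** 5  # ~26.8M, in line with the stated 26.5M for 2028

print(f"2023 base: {units_2023:.1f}M, 2028 projection: {units_2028:.1f}M")
```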


CVS Eyes A Splitting Heart Decision: Separating Its Core Business To Shock Future Growth

US healthcare industry giant CVS Health is considering a strategic breakup of its retail and insurance units. A potential split would mark a significant shift in the company’s “one-stop shop” strategy, which it has already invested billions to realize. Its vision to date has been to create a seamless healthcare experience for consumers and employers by integrating its retail pharmacy, health services, and insurance segments.

What’s Happened: Financial Woes Across A Complex Portfolio Cornered CVS

CVS is under pressure from investors to improve its financial performance. As CVS CEO Karen Lynch explained in the Q3 2024 earnings call, the company has developed a multiyear plan to generate as much as $2 billion in savings by “ … continuing to rationalize our business portfolio and accelerating the use of artificial intelligence and automation across the enterprise as we consolidate and integrate.” A WARN filing prompted the company to share that this also includes reducing its workforce by nearly 2,900 employees.

In recent months, challenges have mounted across the CVS portfolio — and also highlighted its strongest assets.

Health insurance carrier Aetna is ailing as more members resume using medical services. CVS’s 2018 acquisition of Aetna aimed to create a healthcare powerhouse but has since encountered significant integration challenges while facing scrutiny over the vertical integration of the portfolio. CVS has cut its 2024 earnings guidance three times to date due to escalating medical costs pressuring Aetna’s bottom line. One culprit: post-pandemic, Medicare Advantage beneficiaries have resumed using medical services and visiting the doctor. Former Aetna President Brian Kane is now gone, but costs from Medicare Advantage plans will continue to skyrocket due to utilization and newly included benefits that have become table stakes for seniors.

Pharmacy benefit manager (PBM) prosperity faces potential pitfalls. 2024 began with the loss of large, long-tenured clients, including employer Tyson Foods, and narrowed business with health insurer Blue Shield of California. Midyear, the FTC called PBMs manipulative middlemen and highlighted their role in spreading pharmacy deserts. In September, the FTC filed action against CVS Health’s PBM, Caremark Rx, alleging that the PBM and its competitors engaged in anticompetitive and unfair rebating practices. These methods reportedly artificially inflated the list prices of insulin drugs, restricted patient access to lower-priced options, and shifted the burden of high insulin costs onto vulnerable patients. The suit builds on industry concerns regarding concentration risk in the PBM market.

Retail stores provide a sturdy stronghold. CVS has over 9,000 physical locations in the US. Per Definitive Healthcare’s ClinicView, as of 2023, CVS also holds over 60% of the US retail clinic market. In Q3 2024, CVS’s retail clinics outperformed its other business segments, benefiting from competitors’ retreats, such as Walmart’s exit due to lack of profitability and Walgreens’ shift to specialty pharmacy expansion. CVS’s digital experience enhancements and broader in-store services, especially for chronic conditions and mental health, have sustained customer engagement and retention. Services like vaccinations continue to provide recurring one-time revenue boosts and remind customers of the convenient care options available in their local store.
What A Breakup Would Mean For Key Stakeholders

While CVS is distracted pondering its next moves, competitors in the health insurance and pharmacy space should position themselves to take market share now. If a breakup plays out, we may see greater focus within each of the (erstwhile) CVS business units. Unlocking financial gains through technological advances, however, will take time, and a breakup could lead to:

Health insurers picking up new populations. If Aetna becomes independent, expect some of its members to shop around for new insurers; after all, Aetna’s synergy with CVS was one of its key selling points. Competitors should highlight established adoption of emerging technologies such as generative AI, proof of efficiencies that reduce administrative burden for providers, and care advocacy services that drive member trust and appropriate utilization of healthcare services.

Retail pharmacies expanding services. If a breakup happens, expect retail competitors to try to poach CVS shoppers with expanded pharmacy services like home delivery and virtual consults. We expect retailers like Amazon and Walmart to make prescription transfers easy to execute and to market price transparency and better prescription drug pricing that benefits consumers.

Degradation in the consumer experience. Consumers have benefited from the vertical integration of the PBM and retail pharmacy. Dismantling this connection would lead to disjointed experiences and push employers with frustrated employees into the open arms of competitors that have preserved their integration, such as UnitedHealthcare or Cigna.

One CVS group could come out ahead in a breakup: CVS’s Caremark PBM. As all PBMs face regulatory scrutiny over concentration risk, a breakup could actually put CVS ahead of the curve. We will continue to watch as this potential strategic shift evolves. Forrester clients can schedule time with us — Arielle Trzcinski and Sucharita Kodali — to talk more about the future of health insurance and retail health.


Apple releases Depth Pro, an AI model that rewrites the rules of 3D vision

Apple’s AI research team has developed a new model that could significantly advance how machines perceive depth, potentially transforming industries ranging from augmented reality to autonomous vehicles. The system, called Depth Pro, is able to generate detailed 3D depth maps from single 2D images in a fraction of a second — without relying on the camera data traditionally needed to make such predictions.

The technology, detailed in a research paper titled “Depth Pro: Sharp Monocular Metric Depth in Less Than a Second,” is a major leap forward in the field of monocular depth estimation, a process that uses just one image to infer depth. This could have far-reaching applications across sectors where real-time spatial awareness is key. The model’s creators, led by Aleksei Bochkovskii and Vladlen Koltun, describe Depth Pro as one of the fastest and most accurate systems of its kind.

A comparison of depth maps from Apple’s Depth Pro, Marigold, Depth Anything v2, and Metric3D v2. Depth Pro excels in capturing fine details like fur and birdcage wires, producing sharp, high-resolution depth maps in just 0.3 seconds, outperforming other models in accuracy and detail. (credit: arxiv.org)

Monocular depth estimation has long been a challenging task, requiring either multiple images or metadata like focal lengths to accurately gauge depth. But Depth Pro bypasses these requirements, producing high-resolution depth maps in just 0.3 seconds on a standard GPU. The model can create 2.25-megapixel maps with exceptional sharpness, capturing even minute details like hair and vegetation that are often overlooked by other methods.

“These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction,” the researchers explain in their paper. This architecture allows the model to process both the overall context of an image and its finer details simultaneously — an enormous leap from slower, less precise models that came before it.

A comparison of depth maps from Apple’s Depth Pro, Depth Anything v2, Marigold, and Metric3D v2. Depth Pro excels in capturing fine details like the deer’s fur, windmill blades, and zebra’s stripes, delivering sharp, high-resolution depth maps in 0.3 seconds. (credit: arxiv.org)

Metric depth, zero-shot learning

What truly sets Depth Pro apart is its ability to estimate both relative and absolute depth, a capability called “metric depth.” This means that the model can provide real-world measurements, which is essential for applications like augmented reality (AR), where virtual objects need to be placed in precise locations within physical spaces.

And Depth Pro doesn’t require extensive training on domain-specific datasets to make accurate predictions — a feature known as “zero-shot learning.” This makes the model highly versatile. It can be applied to a wide range of images, without the need for the camera-specific data usually required in depth estimation models. “Depth Pro produces metric depth maps with absolute scale on arbitrary images ‘in the wild’ without requiring metadata such as camera intrinsics,” the authors explain. This flexibility opens up a world of possibilities, from enhancing AR experiences to improving autonomous vehicles’ ability to detect and navigate obstacles.
For those curious to experience Depth Pro firsthand, a live demo is available on the Hugging Face platform.

A comparison of depth estimation models across multiple datasets. Apple’s Depth Pro ranks highest overall with an average rank of 2.5, outperforming models like Depth Anything v2 and Metric3D in accuracy across diverse scenarios. (credit: arxiv.org)

Real-world applications: From e-commerce to autonomous vehicles

This versatility has significant implications for various industries. In e-commerce, for example, Depth Pro could allow consumers to see how furniture fits in their home by simply pointing their phone’s camera at the room. In the automotive industry, the ability to generate real-time, high-resolution depth maps from a single camera could improve how self-driving cars perceive their environment, boosting navigation and safety. “The method should ideally produce metric depth maps in this zero-shot regime to accurately reproduce object shapes, scene layouts, and absolute scales,” the researchers write, emphasizing the model’s potential to reduce the time and cost associated with training more conventional AI models.

Tackling the challenges of depth estimation

One of the toughest challenges in depth estimation is handling what are known as “flying pixels” — pixels that appear to float in mid-air due to errors in depth mapping. Depth Pro tackles this issue head-on, making it particularly effective for applications like 3D reconstruction and virtual environments, where accuracy is paramount. Additionally, Depth Pro excels in boundary tracing, outperforming previous models in sharply delineating objects and their edges. The researchers claim it surpasses other systems “by a multiplicative factor in boundary accuracy,” which is key for applications that require precise object segmentation, such as image matting and medical imaging.

Open-source and ready to scale

In a move that could accelerate its adoption, Apple has made Depth Pro open-source. The code, along with pre-trained model weights, is available on GitHub, allowing developers and researchers to experiment with and further refine the technology. The repository includes everything from the model’s architecture to pretrained checkpoints, making it easy for others to build on Apple’s work. The research team is also encouraging further exploration of Depth Pro’s potential in fields like robotics, manufacturing, and healthcare. “We release code and weights at https://github.com/apple/ml-depth-pro,” the authors write, signaling this as just the beginning for the model.

What’s next for AI depth perception

As artificial intelligence continues to push the boundaries of what’s possible, Depth Pro sets a new standard in speed and accuracy for monocular depth estimation. Its ability to generate high-quality, real-time depth maps from a single image could have wide-ranging effects across industries that rely on spatial awareness. In a world where AI is increasingly central to decision-making and product development, Depth Pro exemplifies how cutting-edge research can translate into practical, real-world solutions. Whether it’s improving how machines perceive their surroundings or enhancing consumer experiences, the potential uses for Depth Pro are broad and varied.
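Reflecting the open-source release described above, here is a minimal usage sketch in the style of the repository’s quick-start. The function and field names follow the README pattern as best recalled here; they should be verified against the repository before use.

```python
import depth_pro  # installed from https://github.com/apple/ml-depth-pro

# Load the pretrained network and its matching preprocessing transform.
model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load an RGB image; f_px is the focal length in pixels when EXIF provides it.
image, _, f_px = depth_pro.load_rgb("room.jpg")
image = transform(image)

# One forward pass yields metric depth (in meters) plus an estimated focal length.
prediction = model.infer(image, f_px=f_px)
depth_meters = prediction["depth"]
focal_length_px = prediction["focallength_px"]
```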
As the researchers conclude, “Depth Pro dramatically outperforms all prior work in sharp delineation of object boundaries, including fine structures such as hair, fur, and vegetation.” With its open-source release, Depth Pro could soon become integral to industries ranging from autonomous


Most Voters Say Harris Will Concede – and Trump Won’t – If Defeated in the Election

66% of voters say the threat of violence against political leaders and their families is a major problem in the country

Pew Research Center conducted this study to understand Americans’ views of the 2024 presidential election campaign. For this analysis, we surveyed 5,110 adults – including 4,025 registered voters – from Sept. 30 to Oct. 6, 2024. Everyone who took part in this survey is a member of the Center’s American Trends Panel (ATP), a group of people recruited through national, random sampling of residential addresses who have agreed to take surveys regularly. This kind of recruitment gives nearly all U.S. adults a chance of selection. Surveys were conducted either online or by telephone with a live interviewer. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other factors. Read more about the ATP’s methodology. Here are the questions used for this report, the topline and the survey methodology.

The presidential race between Vice President Kamala Harris and former President Donald Trump continues to be deadlocked among all registered voters. And with less than a month until the election, a growing share of voters (86%) say it’s not yet clear who will win on Nov. 5.

Looking beyond Election Day, Harris and Trump supporters are deeply divided over the importance of their candidate conceding defeat if they lose. Nearly twice as many Harris supporters (61%) as Trump supporters (32%) say that, if their candidate loses next month, it is very important for them to acknowledge the opposing candidate as the legitimate president.

There also is a sizable gap in expectations for how each candidate will handle a possible election defeat: 72% of voters overall say that if Harris loses – that is, if Trump wins enough votes cast by eligible voters in enough states – she will accept the results and acknowledge Trump’s victory. Virtually all Harris supporters (95%) and about half of Trump supporters (48%) expect Harris to concede. Just 24% say that if Trump loses he will concede, while 74% say he will not. About half of Trump supporters (46%) and only 4% of Harris supporters expect Trump to acknowledge Harris as the election winner.

The latest national survey by Pew Research Center, conducted among 5,110 adults (including 4,025 registered voters) from Sept. 30 to Oct. 6, 2024, finds that the race among all registered voters (not all of whom will vote) is little changed since early September, before the presidential and vice presidential debates held over the past month: 48% favor Harris or lean toward supporting her, while 47% back Trump or lean toward supporting him. Another 5% of voters support or lean toward a third-party candidate. Jump to Chapter 1 for more on voters’ preferences.

In further evidence of the tightness of the presidential contest, there are virtually no meaningful differences in the shares of Harris and Trump supporters who say they are certain to back their own candidate, are extremely motivated to vote, and feel it “really matters” who wins the election. These attitudes are closely associated with voting. However, an analysis of voters who expressed these views prior to the 2020 election and their actual turnout finds that small shares of voters who said they had thought a lot about the candidates, were highly motivated to vote, or saw high stakes in the election did not end up voting, while some voters who did not express these views did vote.
Views of Harris and Trump as president and prospects for change in the country

Voters’ expectations for a Harris or Trump presidency are deeply polarized. The same is true of opinions about whether each candidate would change the country, for better or worse. More voters express negative than positive views of both candidates as possible presidents. While 36% say Harris would be a good or great president, 18% say she would be average and 46% say she would be poor or terrible. More voters also think Trump would be a poor or terrible president (48%) than a good or great one (41%). Fewer expect Trump to be an average president (11%) than say that about Harris (18%).

Compared with Harris, Trump is viewed more widely both as a potentially great and a potentially terrible president: 22% of voters say he would be a great president, while 14% say this about Harris. Nearly four-in-ten (38%) say Trump would be a terrible president, while fewer (32%) view Harris that negatively.

More voters say Trump would change the way things work in Washington (89%) than say that about Harris (70%). However, more voters say both candidates would change Washington for the worse than for the better. Nearly half of voters (48%) say Trump would change Washington for the worse, while 41% say he would bring positive change to the nation’s capital and just 10% say he would not bring much change. By 41% to 29%, more voters say Harris would make things worse than better, while 30% say she would not change things much. Jump to Chapter 2 for more on expectations of potential Harris and Trump presidencies.

Other findings: Views of the campaign, concerns over political violence, clarity of candidates’ positions on issues, an absence of shared facts

Views of the 2024 campaign are mostly negative. Just 19% of voters say the campaign makes them feel proud of the country. That actually is higher than the share who said this in July (12%), with much of the change coming among Harris supporters. Majorities also say the campaign is too negative (71%) and not focused on important policy debates (62%). Still, more than twice as many voters describe the campaign as interesting (68%) as dull (30%).

Trump voters are more likely than Harris voters to say the threat of violence against politicians is a major problem. Overall, 66% of voters say the threat of violence against political leaders and their families is a major problem, while 30% say it is a minor problem. Just 4% think it is not


The Four Key Issues CrowdStrike Exposed: CIOs’ Next Steps

IDC’s Quick Take

The recent IT outage caused by silent updates pushed out by CrowdStrike to its Falcon agent exposes an issue that is at the heart of how the IT industry operates. It highlights the contrasting trust and attestation mechanisms that operating system vendors like Microsoft, Apple, and Red Hat use when allowing their ecosystems of independent software vendors (ISVs) direct access to certain parts of the operating system stack, especially software that can severely damage the system kernel. While this issue affected Windows devices – both network- and human-centric – managed by CrowdStrike, no iOS, macOS, or even Linux devices were affected. That is very telling, and it should compel vendors like Microsoft and Apple to take a long, hard look at what “openness” means in the wake of regulations like the EU’s Digital Markets Act (DMA). It should also compel the largely Windows-dependent customer base to redefine its long-term cyber-recovery strategy, including a shift to more modern operating system environments.

Event Highlights

On July 19, 2024, at 04:09 UTC, a sensor configuration update was released by CrowdStrike for Windows systems as part of the Falcon platform’s protection mechanisms. This update contained a logic error that led to a “blue screen of death” (BSOD) on affected systems. A remediation was implemented by 05:27 UTC on the same day. According to CrowdStrike, the impact of this event was specific to customers using the Falcon sensor for Windows, version 7.11 or higher. It needs to be pointed out that to make their endpoint protection products effective, vendors like CrowdStrike require access to system files. Any configuration issue with these files can lead to unpredictable behavior at best and leave the system in an unrecoverable state at worst.

The resulting outage caused disruptions to airlines, businesses, and emergency services and could be the largest IT outage in history. In time, we will know whether the scale and impact of the outage reach the level of the “NotPetya” cyberattack in 2017. At the time of writing, two days later, airlines – the biggest group of affected enterprises – were still reeling from the outage.

It is important to note that this incident was not caused by a cyberattack but rather by a routine update to configuration files, often referred to as “Channel Files.” In the context of the Falcon sensor, Channel Files are integral to the behavioral protection mechanisms that safeguard systems against cyber threats. These files are dynamically updated multiple times daily, and the Falcon sensor’s architecture, designed to incorporate these updates seamlessly, has been a foundational component. In Windows environments, Channel Files are typically located within the directory path C:\Windows\System32\drivers\CrowdStrike, identifiable by their “C-” prefix. Each file is uniquely numbered, serving as an identifier that aids in the management and deployment of updates. For instance, Channel File 291, denoted by the filename prefix “C-00000291-”, plays a crucial role in how Falcon assesses the execution of named pipes – a standard method for interprocess communication within Windows systems. The significance of Channel File 291 came to the forefront during an update aimed at neutralizing the threat posed by malicious named pipes associated with prevalent Command and Control (C2) frameworks. The update introduced a logic error, leading to a system crash.
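For illustration, here is a minimal, hypothetical sketch of how an administrator might enumerate Channel File 291 instances under the naming and path convention described above. The directory and the “C-00000291” prefix come from the article; the script itself is an invented example, not CrowdStrike tooling.

```python
from pathlib import Path
from datetime import datetime

# Default Channel File location on Windows hosts running the Falcon sensor.
channel_dir = Path(r"C:\Windows\System32\drivers\CrowdStrike")

# Channel Files carry a "C-" prefix plus a unique number; 291 governs how
# Falcon evaluates named-pipe execution on Windows.
for f in sorted(channel_dir.glob("C-00000291*.sys")):
    modified = datetime.fromtimestamp(f.stat().st_mtime)
    print(f"{f.name}  last modified {modified:%Y-%m-%d %H:%M}")
```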
IDC’s Point of View

For historical context, this is not the first time something like this has happened. In 2010, McAfee had an issue with a “DAT” file: DAT file version 5958 caused a reboot loop and loss of network access on Windows XP SP3 systems due to a false positive that misidentified the Windows binary “svchost.exe” as the virus “W32/Wecorl.a”. In 2017, Webroot released an update that misidentified Windows system files as malware and Facebook as a phishing site; this update quarantined essential files, leading to instability on numerous computers. In 2021, a mass internet outage was caused by a bad software update at Fastly. There have been many others.

This situation – which is not unique to CrowdStrike – exposes four key issues that are fundamental to the IT industry and its complex ecosystem of ISVs. First, it exposes the fact that by giving its ecosystem ISVs direct access to the system kernel, the operating system vendor essentially removes itself from the trust value chain; the trust value chain then includes only the ISV and its customers. Second, the process of silent updates, in which customers implicitly rely on the QA process employed by the ISV, leaves them inadequately prepared for drastic and timely intervention in the case of mass outages that leave systems in an unrecoverable state. Third, this situation is a wake-up call for the industry on what a system of checks and balances means and what kind of accountability operating system vendors, ISVs, and customers must accept to keep this kind of situation from repeating itself. And fourth, this situation indirectly exposes the fragility of the human-centric Windows stack, which, unlike modern network-centric Unix and Linux operating systems, cannot robustly handle exception errors and instead defaults to a state that requires manual recovery.

The first point exposes the contrasting approaches taken by leading operating system vendors. On one side are vendors like Apple that take a very prescriptive and closed approach to endpoint protection, making it almost impossible for any ecosystem ISV/provider like CrowdStrike to push out configuration changes that could catastrophically impact the operating system (e.g., iOS or macOS) kernel. Apple has been a fierce advocate of a “walled garden” approach, implementing stringent attestation mechanisms to ensure that no one – and we mean no one – gets to modify the system kernel without express approval from Apple. This has put Apple at odds with the European Commission and its hawkish regulatory push to open up operating systems under the premise of fair competition. On the other hand, Microsoft takes – or, more importantly, was forced to take – a more open approach, enabling at least a dozen ISVs to offer modern endpoint protection


How Corpay Used Compensation To Improve Lead Conversion

While Corpay had seen tremendous growth in inbound/digital sales results, the inbound leads team had a static conversion rate that wasn’t hitting revenue targets. Corpay went through a series of steps to address these issues. It identified key gaps that could be solved with technology: a lead routing and scoring tool was implemented to move leads with the highest likelihood to close to the front of the queue, a sales cadence tool was added to engage buyers with the highest probability of success, and a duplicate detection tool reduced the number of duplicates a seller had to sort through to find an active lead.

Yet despite these process and technology changes, performance hadn’t improved enough. Further analysis showed that compensation plans weren’t aligned to desired outcomes, meaning sellers used less efficient approaches because doing so led to better compensation. Corpay analyzed the inbound lead team’s performance and determined that sales compensation was the primary reason for its lack of improvement. It identified four issues:

Misaligned compensation plans. The mechanics built into Corpay’s legacy compensation plans weren’t aligned with revenue generation: there was a gap between what generates revenue and how sellers are paid.

The plan couldn’t adapt to market changes. Fuel cards, the sales team’s primary product offering, depend on macroeconomic factors that cause extreme swings in demand based on fuel prices.

Sellers didn’t have visibility into how they were performing. The measures in the plan were difficult to understand, and there was no reporting to show performance against quotas.

Organizational structure made it hard to create plans. The responsibilities of inside sellers weren’t aligned with what the teams could control, leading to complex, hard-to-understand plans that were misaligned with roles.

Corpay realized that if it wanted to achieve the benefits of its process changes, it needed to change its compensation plans. To that end, Corpay redesigned its compensation plan to align with desired business goals by:

Identifying the best measure. While revenue was the goal, the measure most closely aligned with revenue growth for the leads team was converted leads.

Moving to quarterly quotas. Attainment data showed annual quotas weren’t working because of the highly cyclical nature of the fuel card business. Transitioning to quarterly quotas allowed the company to account for market dynamics and set more accurate quotas.

Implementing new mechanics. Another challenge with the compensation plan was the lack of urgency to close sales at the end of a month or quarter. Legacy plan structures didn’t motivate sellers to push hard to maximize sales at the end of each month. Therefore, a unit-based/tiered method was introduced to incentivize maximizing each month’s revenue (illustrated in the sketch after this section).

Realigning focus to the best opportunities. The legacy plan incentivized sellers to continue chasing old leads despite low close rates compared to new leads. This measure was removed to focus sellers on the leads that gave them the best chance of hitting in-month goals.

The new compensation plan led to the lead conversion rate improving 40%–50%, resulting in millions of dollars in incremental revenue per year. Compensation isn’t the solution to every sales challenge, and it can become a problem if companies try to use it to solve the wrong problems. When applied in the right way, however, it has a significant impact.
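To make the unit-based/tiered idea concrete, here is a minimal, hypothetical sketch of a tiered payout calculation. The tier boundaries and rates are invented for illustration and are not Corpay’s actual plan.

```python
def tiered_payout(units_converted: int) -> float:
    """Hypothetical tiered commission: higher marginal payout per converted
    lead as monthly volume climbs, rewarding end-of-month pushes."""
    tiers = [                   # (units covered by this tier, payout per unit)
        (20, 50.0),             # first 20 conversions pay $50 each
        (20, 75.0),             # next 20 pay $75 each
        (float("inf"), 110.0),  # everything beyond 40 pays $110 each
    ]
    payout, remaining = 0.0, units_converted
    for width, rate in tiers:
        take = min(remaining, width)
        payout += take * rate
        remaining -= take
        if remaining <= 0:
            break
    return payout

print(tiered_payout(25))  # 20*50 + 5*75 = 1375.0
```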
Want to learn more about how we help clients address their compensation challenges? Reach out to Forrester.


Inflection helps fix RLHF uniformity with unique models for enterprise, agentic AI

A recent exchange on X (formerly Twitter) between Wharton professor Ethan Mollick and Andrej Karpathy, the former Director of AI at Tesla and co-founder of OpenAI, touches on something both fascinating and foundational: many of today’s top generative AI models — including those from OpenAI, Anthropic, and Google — exhibit a striking similarity in tone. This prompts the question: why are large language models (LLMs) converging not just in technical proficiency but also in personality?

The follow-up commentary pointed out a common feature that could be driving the trend of output convergence: Reinforcement Learning from Human Feedback (RLHF), a technique in which AI models are fine-tuned based on evaluations provided by human trainers.

Building on this discussion of RLHF’s role in output similarity, Inflection AI’s recent announcements of Inflection 3.0 and a commercial API may provide a promising direction for addressing these challenges. The company has introduced a novel approach to RLHF, aimed at making generative models not only consistent but also distinctively empathetic.

With an entry into the enterprise space, the creators of the Pi collection of models leverage RLHF in a more nuanced way, from deliberate efforts to improve the fine-tuning models to a proprietary platform that incorporates employee feedback to tailor gen AI outputs to organizational culture. The strategy aims to make Inflection AI’s models true cultural allies rather than just generic chatbots, providing enterprises with a more human and aligned AI system that stands out from the crowd.

Inflection AI wants your work chatbots to care

Against this backdrop of convergence, Inflection AI, the creator of the Pi model, is carving out a different path. With the recent launch of Inflection for Enterprise, Inflection AI aims to make emotional intelligence — dubbed “EQ” — a core feature for its enterprise customers.

The company says its unique approach to RLHF sets it apart. Instead of relying on anonymous data labeling, the company sought feedback from 26,000 school teachers and university professors to aid the fine-tuning process through a proprietary feedback platform. Furthermore, the platform enables enterprise customers to run reinforcement learning with employee feedback, allowing subsequent tuning of the model to the unique voice and style of the customer’s company.

Inflection AI’s approach promises that companies will “own” their intelligence, meaning an on-premises model fine-tuned with proprietary data that is securely managed on their own systems. This is a notable move away from the cloud-centric AI models many enterprises are familiar with — a setup Inflection believes will enhance security and foster greater alignment between AI outputs and the ways people use them at work.

What RLHF is and isn’t

RLHF has become the centerpiece of gen AI development, largely because it allows companies to shape responses to be more helpful, coherent, and less prone to dangerous errors. OpenAI’s use of RLHF was foundational to making tools like ChatGPT engaging and generally trustworthy for users. RLHF helps align model behavior with human expectations, making it more engaging and reducing undesirable outputs. However, RLHF is not without its drawbacks.
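Mechanically, the preference step at the heart of RLHF can be sketched as follows: a reward model is trained on pairwise human comparisons with a Bradley-Terry-style loss, and the language model is then optimized against that learned reward. This is a minimal, generic illustration, not any particular vendor’s pipeline; the reward_model interface is an assumption.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(reward_model, chosen_ids, rejected_ids):
    """Bradley-Terry preference loss: push the reward of the human-preferred
    response above the reward of the rejected one."""
    r_chosen = reward_model(chosen_ids)      # scalar reward per sequence
    r_rejected = reward_model(rejected_ids)  # same, for the dispreferred response
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

Because every training signal here is a pairwise human judgment, models tuned this way tend to drift toward whatever style raters consistently prefer, which is one proposed mechanism behind the tonal convergence described above.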
RLHF was quickly offered as a contributing cause of the convergence of model outputs, potentially leading to a loss of unique characteristics and making models increasingly similar. Alignment, it seems, offers consistency, but it also creates a challenge for differentiation. Previously, Karpathy himself pointed out some of the limitations inherent in RLHF. He likened it to a game of vibe checks and stressed that it does not provide an “actual reward” akin to competitive games like AlphaGo. Instead, RLHF optimizes for an emotional resonance that’s ultimately subjective and may miss the mark for practical or complex tasks.

From EQ to AQ

To mitigate some of these RLHF limitations, Inflection AI has embarked on a more nuanced training strategy. It has not only implemented improved RLHF but has also taken steps toward agentic AI capabilities, which it abbreviates as AQ (Action Quotient). As Inflection AI CEO Sean White described in a recent interview, the company’s enterprise ambitions involve enabling models not only to understand and empathize but also to take meaningful actions on behalf of users — ranging from sending follow-up emails to assisting in real-time problem-solving.

While Inflection AI’s approach is certainly innovative, there are potential shortfalls to consider. Its 8K-token context window used for inference is smaller than what many high-end models employ, and the performance of its newest models has not been benchmarked. Despite ambitious plans, Inflection AI’s models may not achieve the desired level of performance in real-world applications.

Nonetheless, the shift from EQ to AQ could mark a critical evolution in gen AI development, especially for enterprise clients looking to leverage automation for both cognitive and operational tasks. It’s not just about talking empathetically with customers or employees; Inflection AI hopes that Inflection 3.0 will also execute tasks that translate empathy into action. Inflection’s partnership with automation platforms like UiPath to provide this “agentic AI” further bolsters its strategy to stand out in an increasingly crowded market.

Navigating a post-Suleyman world

Inflection AI has undergone significant internal changes over the past year. The departure of CEO Mustafa Suleyman in Microsoft’s “acqui-hire,” along with a sizable portion of the team, cast doubt on the company’s trajectory. However, the appointment of White as CEO and a refreshed management team have set a new course for the organization. After an initial licensing agreement with the Redmond tech giant, Inflection AI’s model development was forked by the two companies. Microsoft continues to build on a version of the model focused on integration with its existing ecosystem, while Inflection AI has independently evolved Inflection 2.5 into today’s 3.0 version, distinct from Microsoft’s.

Pi’s… actually pretty popular

Inflection AI’s unique approach with Pi is gaining traction beyond the enterprise space, particularly among users on platforms like Reddit. The Pi community has been vocal about its experiences, sharing positive anecdotes and discussions regarding Pi’s thoughtful and empathetic responses. This grassroots popularity demonstrates that Inflection AI might


Sales Planning: Uncovering Blind Spots and Eliminating the “Swivel Chair Effect”

When selecting a restaurant, you’d likely ask friends for recommendations, do an online search of reliable and trusted sites, and check out the menu before making a reservation. It’s this data that allows you to sort through all the choices, make an informed decision, and have a memorable meal.

If only it were that easy when it comes to sales planning, prospecting, and account segmentation strategies. Traditionally, these tasks have relied on a certain amount of subjective opinion, historical CRM data that may be outdated, and independent third-party data that may — or may not — be reliable for your cut of the market. In an ever-evolving B2B market driven by outcomes, you don’t have time to waste on guesswork or unreliable data when you’re prioritizing sales efforts.

The Dizzying Effects of the “Swivel Chair Approach”

For many, the sales planning and market evaluation process involves gathering and reconciling historical data from internal and external sources. Without consistent and reliable data, you risk relying on inaccurate information — and that can lead to wasted effort, suboptimal sales resource allocation, missed opportunities, strategic misalignment, and low win rates. Bottom line? Sellers are not set up for success and, ultimately, your business pays the price.

The “swivel chair approach,” in which disparate data is gathered and then manually transferred between processes, can lead to inefficiencies and an incomplete — or incorrect — view of the opportunities that could have led to big revenue wins.

Inadequate data has very real consequences. Consider the customer service platform vendor forced to make assumptions about which segments to go after and which companies to target based only on its own limited historical data. Accurate information could uncover inherent blind spots in its planning and open new revenue streams for its business. Or consider a cybersecurity platform vendor that used a freelance consultant to build a propensity-to-buy model but now can’t update it with new data. The result? Wasted and inefficient resource allocation, missed opportunities, an inability to iterate efficiently on the original findings, and potential loss of revenue.

Sales and revenue operations teams consistently strive to streamline operations and improve accuracy through automation and better system integration. But don’t discount the critical importance of gaining access to data that is consistent, contextual, and reliable at a granular market level.

When Data Can’t Be Trusted, the Cost Can Be High

When it comes to data used to support planning and prospecting functions, trust can’t be overemphasized. Trusted data is informed, accurate, consistent, relevant, and vetted through the stages of the data value chain. It’s data that sales operations can reliably use to identify areas of growth, set goals and priorities, improve sales productivity, increase competitive advantage and, ultimately, increase win rates.

But gathering data and intel for sales planning can be challenging:

• Multiple, independent data sources make it difficult to build cohesive and consistent plans.
• Historical internal data often leaves out the current market dynamics and external factors needed to make key decisions.
• Competitive analyses may be faulty or incomplete.
• Force-fitting data to match growth projections leads to increased pressure and unrealistic expectations.
• Defining ideal customer profiles (ICPs) and actual opportunities can be too much of a guessing game.
Fortunately, there’s a better option. By leveraging reliable data and strategic intelligence, you can break through complexities, drive operational excellence, and enhance sales performance.

Transforming Sales Operations Through Trusted Data

Trusted data is mission-critical to formulating robust strategic plans and go-to-market (GTM) strategies. Trusted data is the foundation you need to optimize resource allocation, ensuring sales resources are invested where they will generate the highest returns. Planning that is strategically data-driven, and again built on trusted data, leads to improved sales outcomes and higher win rates.

To plan effectively, you need to define your addressable market, but all too often Total Addressable Market (TAM) calculations are too macro-level for actual sales planning and don’t reveal actual opportunity. This is why current, accurate, and trusted data is so important. To assess the situation more clearly, you need Serviceable Addressable Market (SAM) data based on your own ICP metrics, such as geography, technology markets, and targeted industries. And finally, you need Serviceable Obtainable Market (SOM) data, focusing on the market share you can realistically acquire, enabling you to set expectations and allocate resources where they have the best chance of reaping rewards (a simple worked example of this TAM-to-SAM-to-SOM funnel appears after this section).

Data-driven TAM, SAM, and SOM calculations ease the process of market analysis with insights derived from high-quality data. This supports resource allocation by ensuring sales efforts are invested where they will generate the highest returns and assures that sales goals reflect market realities and are directly tied to your ICP. Access to granular data can help assess the viability of your ICP, as well as give insight into previously untapped sales opportunities. This negates the limitations of relying on historical data and/or past experience alone. It identifies adjacent markets and potentially underserved or emerging markets within territories, which leads to increased competitive advantage and more productive sales outcomes.

In short, trusted data helps you address very real questions, including:

• Which potential adjacent markets should be targeted for new growth opportunities?
• How does our ICP definition translate to sales planning and account prioritization?
• Is my competitor vulnerable in specific areas of the market, like certain revenue bands, geographic regions, or vertical markets?
• Are my projections accurate? Will they help us reach our goals — or maximize our growth potential?

The IDC Velocity for Sales Advantage

IDC Velocity for Sales provides reliable data-driven TAM, SAM, and, most importantly, SOM projections aligned to your areas of growth focus. So you can streamline — and fine-tune — sales planning and prioritization with insights driven by the highest-quality data. Your sales goals will reflect accurate market realities and will be directly tied to your ICP. This, in turn, ensures sales resources and efforts are invested where they will generate the highest returns. You’ll be empowered with strategic insight that can’t be gleaned from one-off purchase-intent data sources alone. IDC Velocity provides
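To make the funnel concrete, here is a minimal, hypothetical sketch of how TAM narrows to SAM and then SOM. The filter criteria and all figures are invented for illustration; they are not IDC data.

```python
# Hypothetical market-sizing funnel (all figures invented for illustration).
tam = 500_000          # total addressable accounts worldwide

# SAM: apply ICP filters such as geography, technology market, and industry.
icp_filters = {"geography": 0.40, "tech_market": 0.50, "industry": 0.60}
sam = tam
for share in icp_filters.values():
    sam *= share       # each filter keeps only the matching share of accounts

# SOM: the slice of SAM you can realistically win given share and capacity.
obtainable_share = 0.10
som = sam * obtainable_share

print(f"TAM={tam:,.0f}  SAM={sam:,.0f}  SOM={som:,.0f}")
# TAM=500,000  SAM=60,000  SOM=6,000
```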


The modern workplace: Will remote tech workers tolerate being monitored?

Working remotely or working at the office? Choices for the modern workplace. (Photo: Tom Foremski)

The Omicron surge has forced businesses to again delay a date for a return to the office. And that means a delay to an inevitable showdown between workers and managers over remote or office-based work. To a degree, every business will have by now adapted to the reality of a hybrid workplace and the fact that some staff will remain home-based while others will come back to the office. Any business that cannot offer a hybrid workplace will face problems in recruitment during this worker shortage, and problems in developing in-house the skills of managing a modern workforce.

HOME MONITORS

For work-at-home advocates, the future looks rosy. With the current jobs boom, it looks certain that they’ll get what they want – either at their current employer or somewhere else. But will workers agree to allow their employer to monitor their home office activities? Is it something that can be refused or not? How is the home different from the office, where people can be seen to be working at their desks, engaged in meetings, and logging into their IT systems? Do remote workers have a right to refuse to be monitored?

Digital.com released a survey late last year that found widespread use of remote-worker monitoring software, especially in IT (77%) and advertising (83%). One in seven workers hadn’t been told about it. Working from home might not be such a wonderful thing when you consider that people worked harder – the survey reported a 10% boost in productivity after the software was installed.

REMOTE WORKER ANXIETY

Being away from the office can be very isolating and cause anxiety from being out of the informal communication loops. Further anxiety comes with jobs that aren’t hourly paid – how many hours are enough to prove your worth? You’ll be competing against the unknown productivity of your colleagues. You’ll feel pressured to go the extra distance, especially since 88% of employers said they had fired people based on their remote-work reports.

Work from home might even become the norm for some organizations because, if done right, they get a lot more productivity – and they can confidently outsource some of their operations for big savings. The home could easily become a dismal backwater for remote workers: always on, and always watched. I’d rather leave all that at the office, imho.
